A New Algorithm for Automatically Calculating Noise, Spatial Resolution, and Contrast Image Quality Metrics: Proof-of-Concept and Agreement With Subjective Scores in Phantom and Clinical Abdominal CT

Cécile R L P N Jeukens*, Maikel T H Brauer, Casper Mihl, Emmeline Laupman, Estelle C Nijssen, Joachim E Wildberger, Bibi Martens, Carola van Pul

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review


Abstract

OBJECTIVES: The aims of this study were to develop a proof-of-concept computer algorithm to automatically determine noise, spatial resolution, and contrast-related image quality (IQ) metrics in abdominal portal venous phase computed tomography (CT) imaging and to assess agreement between resulting objective IQ metrics and subjective radiologist IQ ratings.

MATERIALS AND METHODS: An algorithm was developed to calculate noise, spatial resolution, and contrast IQ parameters. The algorithm was subsequently used on 2 datasets of anthropomorphic phantom CT scans, acquired on 2 different scanners (n = 57 each), and on 1 dataset of patient abdominal CT scans (n = 510). These datasets include a range of high to low IQ: in the phantom dataset, this was achieved through varying scanner settings (tube voltage, tube current, reconstruction algorithm); in the patient dataset, lower IQ images were obtained by reconstructing 30 consecutive portal venous phase scans as if they had been acquired at lower mAs. Five noise, 1 spatial, and 13 contrast parameters were computed for the phantom datasets; for the patient dataset, 5 noise, 1 spatial, and 18 contrast parameters were computed. Subjective IQ rating was done using a 5-point Likert scale: 2 radiologists rated a single phantom dataset each, and another 2 radiologists rated the patient dataset in consensus. General agreement between IQ metrics and subjective IQ scores was assessed using Pearson correlation analysis. Likert scores were grouped into 2 categories, "insufficient" (scores 1-2) and "sufficient" (scores 3-5), and differences in computed IQ metrics between these categories were assessed using the Mann-Whitney U test.
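The agreement analysis described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it uses hypothetical simulated data for one computed noise metric and 5-point Likert scores, then applies the two tests named in the abstract (Pearson correlation for general agreement; Mann-Whitney U after grouping scores into "insufficient" 1-2 and "sufficient" 3-5).

```python
import numpy as np
from scipy.stats import pearsonr, mannwhitneyu

rng = np.random.default_rng(0)

# Hypothetical data: Likert IQ scores (1-5) and one computed noise metric
# for 100 scans; the metric is simulated to fall as rated IQ rises.
likert = rng.integers(1, 6, size=100)
noise_metric = 30.0 - 4.0 * likert + rng.normal(0.0, 2.0, size=100)

# General agreement: Pearson correlation between metric and subjective score.
r, p_corr = pearsonr(likert, noise_metric)
r_squared = r ** 2

# Group scores into "insufficient" (1-2) and "sufficient" (3-5) categories
# and compare the metric between the groups with the Mann-Whitney U test.
insufficient = noise_metric[likert <= 2]
sufficient = noise_metric[likert >= 3]
u_stat, p_mw = mannwhitneyu(insufficient, sufficient, alternative="two-sided")

print(f"R^2 = {r_squared:.2f}, Pearson p = {p_corr:.3g}")
print(f"Mann-Whitney U = {u_stat:.0f}, p = {p_mw:.3g}")
```

In the study, this pair of tests was repeated per computed metric (5 noise, 1 spatial resolution, and 13 or 18 contrast parameters per dataset).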

RESULTS: The algorithm was able to automatically calculate all IQ metrics for 100% of the included scans. Significant correlations with subjective radiologist ratings were found for 4 of 5 noise (R2 range = 0.55-0.70), 1 of 1 spatial resolution (R2 = 0.21 and 0.26), and 10 of 13 contrast (R2 range = 0.11-0.73) parameters in the phantom datasets and for 4 of 5 noise (R2 range = 0.019-0.096), 1 of 1 spatial resolution (R2 = 0.11), and 16 of 18 contrast (R2 range = 0.008-0.116) parameters in the patient dataset. Computed metrics that significantly differed between the "insufficient" and "sufficient" categories were 4 of 5 noise, 1 of 1 spatial resolution, and 9 and 10 of 13 contrast parameters for the phantom datasets, and 3 of 5 noise, 1 of 1 spatial resolution, and 10 of 18 contrast parameters for the patient dataset.

CONCLUSION: The developed algorithm was able to successfully calculate objective noise, spatial resolution, and contrast IQ metrics of both phantom and clinical abdominal CT scans. Furthermore, multiple calculated IQ metrics of all 3 categories were in agreement with subjective radiologist IQ ratings and significantly differed between "insufficient" and "sufficient" IQ scans. These results demonstrate the feasibility and potential of algorithm-determined objective IQ. Such an algorithm should be applicable to any scan and may help in optimization and quality control through automatic IQ assessment in daily clinical practice.

Original language: English
Pages (from-to): 649-655
Number of pages: 7
Journal: Investigative Radiology
Volume: 58
Issue number: 9
Early online date: 23 Jan 2023
DOIs
Publication status: Published - 1 Sept 2023
