Abstract
Objectives: Body composition assessment using CT images at the L3 level is increasingly applied in cancer research and has been shown to be strongly associated with long-term survival. Robust, high-throughput automated segmentation is key to assessing large patient cohorts and to supporting the implementation of body composition analysis in routine clinical practice. We trained and externally validated a deep learning neural network (DLNN) to automatically segment L3-CT images.

Methods: Expert-drawn segmentations of visceral and subcutaneous adipose tissue (VAT/SAT) and skeletal muscle (SM) on L3-CT images of 3187 patients undergoing abdominal surgery were used to train a DLNN. The external validation cohort comprised 2535 patients with abdominal cancer. DLNN performance was evaluated with the (geometric) Dice similarity (DS) and Lin's concordance correlation coefficient.

Results: There was strong concordance between automatic and manual segmentations, with median DS for SM, VAT, and SAT of 0.97 (IQR: 0.95-0.98), 0.98 (IQR: 0.95-0.98), and 0.95 (IQR: 0.92-0.97), respectively. Concordance correlations were excellent: SM 0.964 (0.959-0.968), VAT 0.998 (0.998-0.998), and SAT 0.992 (0.991-0.993). Bland-Altman metrics indicated only small and clinically insignificant systematic offsets: SM radiodensity 0.23 Hounsfield units (0.5%), SM 1.26 cm²·m⁻² (2.8%), VAT −1.02 cm²·m⁻² (1.7%), and SAT 3.24 cm²·m⁻² (4.6%).

Conclusion: A robustly performing and independently externally validated DLNN for automated body composition analysis was developed.

Advances in knowledge: This DLNN was successfully trained and externally validated on several large patient cohorts. The trained algorithm could facilitate large-scale population studies and the implementation of body composition analysis in clinical practice.
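As a rough illustration of the two agreement metrics named in the abstract (Dice similarity between segmentation masks and Lin's concordance correlation coefficient between paired area measurements), a minimal Python sketch is shown below. This is not the authors' code; the function names, inputs, and usage are hypothetical and only demonstrate the standard formulas.

```python
# Illustrative sketch (not the study's implementation) of the agreement
# metrics reported in the abstract: Dice similarity and Lin's CCC.
import numpy as np

def dice_similarity(auto_mask: np.ndarray, manual_mask: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    auto = auto_mask.astype(bool)
    manual = manual_mask.astype(bool)
    intersection = np.logical_and(auto, manual).sum()
    denominator = auto.sum() + manual.sum()
    return 2.0 * intersection / denominator if denominator > 0 else 1.0

def lins_ccc(x: np.ndarray, y: np.ndarray) -> float:
    """Lin's concordance correlation coefficient between paired measurements,
    e.g. automatic vs. manual tissue areas across patients."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    mean_x, mean_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()  # population (ddof=0) variances
    covariance = np.mean((x - mean_x) * (y - mean_y))
    return 2.0 * covariance / (var_x + var_y + (mean_x - mean_y) ** 2)

# Hypothetical usage: masks would come from the DLNN and the expert annotation.
# auto_sm, manual_sm = ...  # binary skeletal-muscle masks for one L3 slice
# print(dice_similarity(auto_sm, manual_sm))
```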
| Field | Detail |
|---|---|
| Original language | English |
| Pages (from-to) | 2015-2023 |
| Number of pages | 9 |
| Journal | British Journal of Radiology |
| Volume | 97 |
| Issue number | 1164 |
| Early online date | 16 Sept 2024 |
| DOIs | |
| Publication status | Published - 1 Dec 2024 |