TY - JOUR
T1 - Label-efficient transformer-based framework with self-supervised strategies for heterogeneous lung tumor segmentation
AU - Liu, Zhenbing
AU - Li, Weixing
AU - Cui, Yanfen
AU - Chen, Xin
AU - Pan, Xipeng
AU - Ye, Guanchao
AU - Wu, Guangyao
AU - Liao, Yongde
AU - Volmer, Leroy
AU - Wee, Leonard
AU - Dekker, Andre
AU - Han, Chu
AU - Liu, Zaiyi
AU - Shi, Zhenwei
PY - 2025/4/15
Y1 - 2025/4/15
N2 - Precise and automatic segmentation of lung tumors is crucial for computer-aided diagnosis and subsequent treatment planning. However, the heterogeneity of lung tumors, varying in size, shape, and location, combined with the low contrast between tumors and adjacent tissues, significantly complicates accurate segmentation. Furthermore, most supervised segmentation models face limitations due to the scarcity and lack of diversity in labeled training data. Although various self-supervised learning strategies have been developed for model pre-training with unlabeled data, their relative benefits for the downstream task of lung tumor segmentation on CT scans remain uncertain. To address these challenges, we introduce a robust and label-efficient Transformer-based framework with different self-supervised strategies for lung tumor segmentation. Our model training is conducted in two phases. During the pre-training phase, we pre-train the model on a large number of unlabeled CT scans, employing three different pre-training strategies and comparing their impacts on the downstream lung tumor segmentation task. In the fine-tuning phase, we utilize the encoders of the pre-trained models for label-efficient supervised fine-tuning. In addition, we design a surrounding samples-based contrastive learning (SSCL) module at the end of the encoder to enhance feature extraction, especially for tumors with indistinct boundaries. Our proposed methods are evaluated on test sets from seven different centers. When only a small amount of labeled data is available, compared to supervised models, Ours (SimMIM3D) demonstrates superior segmentation performance on three internal test sets, achieving Dice coefficients of 0.8419, 0.8346, and 0.8282, respectively. Additionally, it shows strong generalization on external test sets, with Dice coefficients of 0.7594, 0.7684, 0.6578, and 0.6621, respectively. Extensive experiments confirm the efficacy of our methodology, demonstrating significant improvements over recent state-of-the-art supervised segmentation methods in scenarios with limited labeled data. The source code is available at https://github.com/GDPHMediaLab/SSL-Seg.
AB - Precise and automatic segmentation of lung tumors is crucial for computer-aided diagnosis and subsequent treatment planning. However, the heterogeneity of lung tumors, varying in size, shape, and location, combined with the low contrast between tumors and adjacent tissues, significantly complicates accurate segmentation. Furthermore, most supervised segmentation models face limitations due to the scarcity and lack of diversity in labeled training data. Although various self-supervised learning strategies have been developed for model pre-training with unlabeled data, their relative benefits for the downstream task of lung tumor segmentation on CT scans remain uncertain. To address these challenges, we introduce a robust and label-efficient Transformer-based framework with different self-supervised strategies for lung tumor segmentation. Our model training is conducted in two phases. During the pre-training phase, we pre-train the model on a large number of unlabeled CT scans, employing three different pre-training strategies and comparing their impacts on the downstream lung tumor segmentation task. In the fine-tuning phase, we utilize the encoders of the pre-trained models for label-efficient supervised fine-tuning. In addition, we design a surrounding samples-based contrastive learning (SSCL) module at the end of the encoder to enhance feature extraction, especially for tumors with indistinct boundaries. Our proposed methods are evaluated on test sets from seven different centers. When only a small amount of labeled data is available, compared to supervised models, Ours (SimMIM3D) demonstrates superior segmentation performance on three internal test sets, achieving Dice coefficients of 0.8419, 0.8346, and 0.8282, respectively. Additionally, it shows strong generalization on external test sets, with Dice coefficients of 0.7594, 0.7684, 0.6578, and 0.6621, respectively. Extensive experiments confirm the efficacy of our methodology, demonstrating significant improvements over recent state-of-the-art supervised segmentation methods in scenarios with limited labeled data. The source code is available at https://github.com/GDPHMediaLab/SSL-Seg.
KW - Lung tumor segmentation
KW - Vision transformer
KW - Contrastive learning
KW - Masked image modeling
KW - Self-supervised learning
KW - NETWORK
U2 - 10.1016/j.eswa.2024.126364
DO - 10.1016/j.eswa.2024.126364
M3 - Article
SN - 0957-4174
VL - 269
JO - Expert Systems with Applications
JF - Expert Systems with Applications
M1 - 126364
ER -