Label-efficient transformer-based framework with self-supervised strategies for heterogeneous lung tumor segmentation

Zhenbing Liu*, Weixing Li*, Yanfen Cui, Xin Chen, Xipeng Pan, Guanchao Ye, Guangyao Wu, Yongde Liao, Leroy Volmer, Leonard Wee, Andre Dekker, Chu Han, Zaiyi Liu*, Zhenwei Shi*

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review


Abstract

Precise and automatic segmentation of lung tumors is crucial for computer-aided diagnosis and subsequent treatment planning. However, the heterogeneity of lung tumors, which vary in size, shape, and location, combined with the low contrast between tumors and adjacent tissues, significantly complicates accurate segmentation. Furthermore, most supervised segmentation models are limited by the scarcity and lack of diversity of labeled training data. Although various self-supervised learning strategies have been developed for model pre-training with unlabeled data, their relative benefits for the downstream task of lung tumor segmentation on CT scans remain uncertain. To address these challenges, we introduce a robust and label-efficient Transformer-based framework with different self-supervised strategies for lung tumor segmentation. Model training is conducted in two phases. In the pre-training phase, we pre-train the model on a large amount of unlabeled CT scans, employing three different pre-training strategies and comparing their impact on the downstream lung tumor segmentation task. In the fine-tuning phase, we utilize the encoders of the pre-trained models for label-efficient supervised fine-tuning. In addition, we design a surrounding samples-based contrastive learning (SSCL) module at the end of the encoder to enhance feature extraction, especially for tumors with indistinct boundaries. Our proposed methods are evaluated on test sets from seven different centers. When only a small amount of labeled data is available, Ours (SimMIM3D) demonstrates superior segmentation performance compared to supervised models on three internal test sets, achieving Dice coefficients of 0.8419, 0.8346, and 0.8282, respectively. It also shows strong generalization on external test sets, with Dice coefficients of 0.7594, 0.7684, 0.6578, and 0.6621, respectively.
Extensive experiments confirm the efficacy of our methodology, demonstrating significant improvements over recent state-of-the-art supervised segmentation methods in scenarios with limited labeled data. The source code is available at https://github.com/GDPHMediaLab/SSL-Seg.
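To make the SimMIM-style pre-training objective concrete, the following is a minimal sketch (not the authors' implementation; the function name, patch size, and mask ratio are illustrative assumptions) of masked image modeling on a 3D CT volume: patches are randomly masked on a coarse grid, and the L1 reconstruction loss is computed only over the masked voxels.

```python
import numpy as np

def simmim3d_masked_l1_loss(volume, reconstruction, patch=4, mask_ratio=0.6, seed=0):
    """Illustrative SimMIM-style objective for 3D volumes.

    A boolean mask is drawn over a coarse patch grid, upsampled to voxel
    resolution, and the mean absolute error is taken on masked voxels only.
    """
    rng = np.random.default_rng(seed)
    d, h, w = volume.shape
    gd, gh, gw = d // patch, h // patch, w // patch
    # Mask each patch independently with probability `mask_ratio`.
    grid = rng.random((gd, gh, gw)) < mask_ratio
    # Upsample the patch-grid mask to full voxel resolution.
    mask = grid.repeat(patch, axis=0).repeat(patch, axis=1).repeat(patch, axis=2)
    if not mask.any():
        return 0.0
    return float(np.abs(volume - reconstruction)[mask].mean())
```

In practice the reconstruction would come from a Transformer decoder over visible patches; here the function only shows how the loss is restricted to masked regions, which is what distinguishes masked image modeling from plain autoencoding.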
Original language: English
Article number: 126364
Number of pages: 12
Journal: Expert Systems with Applications
Volume: 269
DOIs
Publication status: Published - 15 Apr 2025

Keywords

  • Lung tumor segmentation
  • Vision transformer
  • Contrastive learning
  • Masked image modeling
  • Self-supervised learning
