A framework based on hidden Markov trees for multimodal PET/CT image co-segmentation

Houda Hanzouli-Ben Salah, Jerome Lapuyade-Lahorgue, Julien Bert, Didier Benoit, Philippe Lambin, Angela Van Baardwijk, Emmanuel Monfrini, Wojciech Pieczynski, Dimitris Visvikis, Mathieu Hatt*

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review


Abstract

Purpose: The purpose of this study was to investigate the use of a probabilistic quad-tree graph (hidden Markov tree, HMT) to provide fast computation, robustness, and an interpretational framework for multimodality image processing, and to evaluate this framework for single gross tumor volume (GTV) delineation from both positron emission tomography (PET) and computed tomography (CT) images.
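The quad-tree graph underlying an HMT links each node at a coarse scale to four children at the next finer scale. As a minimal illustrative sketch (not the authors' code), such a multi-resolution structure can be built from an image by repeatedly averaging 2x2 blocks, so that each parent node summarizes its four children:

```python
import numpy as np

def quadtree_pyramid(image, levels):
    """Build a multi-resolution pyramid in which each coarser pixel is the
    mean of its four finer-scale children, mirroring the parent/child links
    a quad-tree HMT is defined on. Illustrative sketch only; assumes the
    image sides are divisible by 2 at every level."""
    pyramid = [np.asarray(image, dtype=float)]
    for _ in range(levels - 1):
        img = pyramid[-1]
        h, w = img.shape
        # Average non-overlapping 2x2 blocks: one parent per four children.
        coarse = img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pyramid.append(coarse)
    # pyramid[0] holds the leaves (finest scale); pyramid[-1] the root scale.
    return pyramid
```

In an HMT, Bayesian inference then propagates class probabilities along these parent/child links rather than treating each voxel independently.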

Methods: We exploited joint statistical dependencies between hidden states to handle the data stack, using the multi-observation, multi-resolution structure of the HMT and Bayesian inference. This framework was applied to the segmentation of lung tumors in PET/CT datasets, taking the CT and PET image information into account simultaneously. PET and CT images were considered using either the original voxel intensities or after wavelet/contourlet enhancement. The Dice similarity coefficient (DSC), sensitivity (SE), and positive predictive value (PPV) were used to assess the performance of the proposed approach on one simulated and 15 clinical PET/CT datasets of non-small cell lung cancer (NSCLC) cases. The surrogate of truth was a statistical consensus (obtained with the Simultaneous Truth and Performance Level Estimation algorithm) of three manual delineations performed by experts on fused PET/CT images. The proposed framework was applied to PET-only, CT-only, and PET/CT datasets, and compared to standard and improved fuzzy c-means (FCM) multimodal implementations.
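The three evaluation metrics named above compare a binary segmentation against the consensus reference mask. A minimal sketch of how they are typically computed (a hypothetical helper, not the authors' implementation):

```python
import numpy as np

def overlap_metrics(seg, ref):
    """Dice similarity coefficient (DSC), sensitivity (SE), and positive
    predictive value (PPV) between a binary segmentation `seg` and a binary
    reference mask `ref` of the same shape."""
    seg = np.asarray(seg, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    tp = np.logical_and(seg, ref).sum()   # true-positive voxels
    dsc = 2.0 * tp / (seg.sum() + ref.sum())
    se = tp / ref.sum()                   # fraction of reference voxels recovered
    ppv = tp / seg.sum()                  # fraction of segmented voxels that are correct
    return dsc, se, ppv
```

DSC is the harmonic mean of SE and PPV, which is why it is reported as the headline overlap score in the results below.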

Results: A high agreement with the consensus of manual delineations was observed when using both PET and CT images. Contourlet-based HMT led to the best results, with a DSC of 0.92 +/- 0.11, compared to 0.89 +/- 0.13 and 0.90 +/- 0.12 for intensity-based and wavelet-based HMT, respectively. Considering PET or CT only in the HMT led to much lower accuracy. Standard and improved FCM led to comparatively lower accuracy than HMT, even in their multimodal implementations.

Conclusions: We evaluated the accuracy of the proposed HMT-based framework for PET/CT image segmentation. The proposed method reached good accuracy, especially with pre-processing in the contourlet domain.

Original language: English
Pages (from-to): 5835-5848
Number of pages: 14
Journal: Medical Physics
Volume: 44
Issue number: 11
DOIs
Publication status: Published - Nov 2017

Keywords

  • Bayesian inference
  • computed tomography (CT)
  • hidden Markov trees (HMT)
  • positron emission tomography (PET)
  • segmentation
  • wavelet and contourlet analysis
  • CELL LUNG-CANCER
  • TUMOR DELINEATION
  • F-18-FDG PET
  • CT IMAGES
  • TRACER UPTAKE
  • MODEL
  • RECONSTRUCTION
  • BRAIN
  • CLASSIFICATION
  • QUANTITATION
