Tehran University of Medical Sciences

Science Communicator Platform

Information Fusion for Fully Automated Segmentation of Head and Neck Tumors From PET and CT Images



Shiri I1 ; Amini M1 ; Yousefirizi F2 ; Vafaei Sadr A3, 4 ; Hajianfar G1 ; Salimi Y1 ; Mansouri Z1 ; Jenabi E5 ; Maghsudi M6 ; Mainta I1 ; Becker M7 ; Rahmim A2, 8 ; Zaidi H1, 9, 10, 11
Affiliations:
  1. Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
  2. Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC, Canada
  3. Institute of Pathology, RWTH Aachen University Hospital, Aachen, Germany
  4. Department of Public Health Sciences, College of Medicine, The Pennsylvania State University, Hershey, United States
  5. Research Center for Nuclear Medicine, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran
  6. Rajaie Cardiovascular Medical and Research Center, Iran University of Medical Sciences, Tehran, Iran
  7. Service of Radiology, Geneva University Hospital, Geneva, Switzerland
  8. Department of Radiology and Physics, University of British Columbia, Vancouver, Canada
  9. Geneva University Neurocenter, Geneva University, Geneva, Switzerland
  10. Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands
  11. Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark

Source: Medical Physics, Published: 2024


Abstract

Background: PET/CT images combine anatomic and metabolic data, providing complementary information that can improve performance on clinical tasks. However, PET image segmentation algorithms that exploit this multi-modal information are still lacking.

Purpose: This study assessed the performance of PET and CT image fusion for gross tumor volume (GTV) segmentation of head and neck cancers (HNCs) using conventional, deep learning (DL), and output-level voting-based fusions.

Methods: The study is based on a total of 328 histologically confirmed HNCs from six different centers. Images were automatically cropped to a 200 × 200 head and neck region box, and CT and PET images were normalized for further processing. Eighteen conventional image-level fusions were implemented. In addition, a modified U2-Net architecture was used as the baseline DL fusion model, with three different information fusions at the input, layer, and decision levels. Simultaneous truth and performance level estimation (STAPLE) and majority voting were employed to merge the different segmentation outputs (from PET and from the image-level and network-level fusions), that is, output-level information fusion (voting-based fusion). The networks were trained in a 2D manner with a batch size of 64. Twenty percent of the dataset, stratified by center (20% from each center), was reserved for final result reporting. Standard segmentation metrics and conventional PET metrics, such as standardized uptake value (SUV), were calculated.

Results: Among single modalities, PET performed reasonably, with a Dice score of 0.77 ± 0.09, whereas CT did not perform acceptably, reaching a Dice score of only 0.38 ± 0.22. Conventional fusion algorithms obtained Dice scores in the range [0.76–0.81], with guided-filter-based context enhancement (GFCE) at the low end, and anisotropic diffusion and Karhunen–Loeve transform fusion (ADF), multi-resolution singular value decomposition (MSVD), and multi-level image decomposition based on latent low-rank representation (MDLatLRR) at the high end. All DL fusion models achieved Dice scores of 0.80. Output-level voting-based models outperformed all other models, achieving superior results with a Dice score of 0.84 for Majority_ImgFus, Majority_All, and Majority_Fast. A mean error of almost zero was achieved for all fusions using SUVpeak, SUVmean, and SUVmedian.

Conclusion: PET/CT information fusion adds significant value to segmentation tasks, considerably outperforming PET-only and CT-only methods. Both conventional image-level and DL fusions achieve competitive results, while output-level voting-based fusion using majority voting of several algorithms yields statistically significant improvements in the segmentation of HNC. © 2023 The Authors. Medical Physics published by Wiley Periodicals LLC on behalf of American Association of Physicists in Medicine.
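The two output-level ingredients named in the abstract, the Dice score used for evaluation and majority voting used to fuse segmentation outputs, can be sketched as follows. This is an illustrative NumPy sketch, not the authors' implementation; the function names and the binary-mask representation are assumptions.

```python
import numpy as np

def dice_score(pred, truth):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

def majority_vote(masks):
    """Output-level fusion: a voxel is foreground if a strict
    majority of the candidate segmentations mark it foreground."""
    stack = np.stack([np.asarray(m, dtype=np.uint8) for m in masks])
    return (2 * stack.sum(axis=0) > len(masks)).astype(np.uint8)
```

For example, fusing three candidate masks keeps only voxels where at least two of the three agree; the fused mask is then scored against the ground truth with `dice_score`.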