A Deep Neural Network to Recover Missing Data in Small Animal PET Imaging: Comparison Between Sinogram- and Image-Domain Implementations



Amirrashedi M1, 2 ; Sarkar S1, 2 ; Ghadiri H1, 2 ; Ghafarian P3, 4 ; Zaidi H5, 6, 7, 8 ; Ay MR1, 2
Affiliations
  1. Tehran University of Medical Sciences, Department of Medical Physics and Biomedical Engineering, Tehran, Iran
  2. Research Center for Molecular and Cellular Imaging, Tehran University of Medical Sciences, Tehran, Iran
  3. Chronic Respiratory Diseases Research Center, National Research Institute of Tuberculosis and Lung Diseases (NRITLD), Shahid Beheshti University of Medical Sciences, Tehran, Iran
  4. PET/CT and Cyclotron Center, Masih Daneshvari Hospital, Shahid Beheshti University of Medical Sciences, Tehran, Iran
  5. Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, CH-1211, Switzerland
  6. Geneva University Neurocenter, Geneva University, Geneva, CH-1205, Switzerland
  7. Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, 9700 RB, Netherlands
  8. Department of Nuclear Medicine, University of Southern Denmark, 500 Odense, Denmark

Source: Proceedings - International Symposium on Biomedical Imaging. Published: 2021


Abstract

Missing regions in PET sinograms, and the severe image artifacts that result from them, remain a prominent problem not only in sparse-ring detector configurations but also in full-ring PET scanners when detectors fail. Empty bins in the projection domain, caused by inter-block gap regions or by failures in the detector blocks, may lead to unacceptable image distortions and inaccuracies in quantitative analysis. Deep neural networks have recently attracted enormous attention within the imaging community and are being deployed for various applications, including restoring impaired sinograms and removing the streaking artifacts generated by incomplete projection views. Despite promising results in sparse-view CT reconstruction, the utility of deep-learning-based methods for synthesizing artifact-free PET images in the sparse-crystal setting remains poorly explored. Herein, we investigated the feasibility of a modified U-Net for generating artifact-free PET scans in the presence of severe dead regions between adjacent detector blocks on a dedicated high-resolution preclinical PET scanner. The performance of the model was assessed in both the projection and image domains. Visual inspection and quantitative analysis suggest that the proposed method is well suited for application to partial-ring PET scanners. © 2021 IEEE.
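
The abstract describes a modified U-Net applied either to gap-corrupted sinograms or to reconstructed images. Since the paper's actual architecture, loss, and training protocol are not given on this page, the following PyTorch sketch is only illustrative of the general idea: a small U-Net-style encoder-decoder trained to fill zeroed detector-gap columns in a 2D sinogram. All layer sizes, the L1 loss, and the name GapFillUNet are assumptions, not the authors' implementation.

# Minimal sketch (assumed design): U-Net-style gap inpainting for a 2D PET sinogram.
# None of these choices are taken from the paper; they illustrate the technique only.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with batch norm and ReLU, as in a standard U-Net stage.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class GapFillUNet(nn.Module):
    """Hypothetical encoder-decoder with skip connections for sinogram (or image) input."""
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 32)
        self.enc2 = conv_block(32, 64)
        self.bottleneck = conv_block(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.out = nn.Conv2d(32, 1, 1)

    def forward(self, x):
        e1 = self.enc1(x)                                      # full resolution
        e2 = self.enc2(self.pool(e1))                          # 1/2 resolution
        b = self.bottleneck(self.pool(e2))                     # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))    # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))   # skip connection
        return self.out(d1)

if __name__ == "__main__":
    # Toy forward/backward pass on dummy data: zeroed columns stand in for a dead
    # detector-block gap, and the network is penalized against the complete sinogram.
    net = GapFillUNet()
    full = torch.rand(4, 1, 128, 128)      # stand-in for complete sinograms
    gappy = full.clone()
    gappy[:, :, :, 40:48] = 0.0            # simulate an inter-block gap region
    loss = nn.L1Loss()(net(gappy), full)
    loss.backward()
    print(float(loss))

The same network could, under the same assumptions, be trained in the image domain instead by feeding reconstructions of the gap-corrupted sinograms and targeting reconstructions of the complete ones, which is the comparison the title refers to.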