PARS-Net: A Novel Deep Learning Framework Using Parallel Residual Convolutional Neural Networks for Sparse-View CT Reconstruction



Authors: Khodajou-Chokami H (1); Hosseini SA (1); Ay MR (2,3)

Affiliations:
  1. Department of Energy Engineering, Sharif University of Technology, Tehran, Iran
  2. Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran
  3. Research Center for Molecular and Cellular Imaging, Tehran University of Medical Sciences, Tehran, Iran

Source: Journal of Instrumentation, Published: 2022


Abstract

Sparse-view computed tomography (CT) has recently been proposed as a promising way to speed up data acquisition and reduce the radiation dose delivered to patients. However, traditional reconstruction algorithms are time-consuming and suffer from image degradation when applied to sparse-view data. To address this problem, we propose a new deep learning (DL) framework that quickly produces high-quality CT images from sparsely sampled projections and is suitable for clinical use. The proposed model combines convolutional and residual neural networks in a parallel arrangement, named the parallel residual neural network (PARS-Net). In addition, PARS-Net benefits from a loss based on the geodesic distance, which effectively reflects image structures. Experiments were performed on a combination of two large-scale CT datasets of whole-body patient images, for sparse acquisitions of 120, 60, and 30 projection views. Our experimental results show that PARS-Net is 4-5 times faster than state-of-the-art DL-based models, with lower memory requirements, better performance on objective quality metrics, and improved visual quality. These results demonstrate the superiority of PARS-Net over the latest methods and the feasibility of using this model for high-quality CT image reconstruction from sparsely sampled projections. © 2022 IOP Publishing Ltd and Sissa Medialab.
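For illustration, the sketch below shows one way a parallel residual convolutional network of the kind described in the abstract could be organized in PyTorch: a plain convolutional branch and a residual branch process the same degraded reconstruction in parallel, and their outputs are fused into a corrected image. This is a minimal sketch, not the authors' implementation; the paper's exact layer configuration is not given here, so every channel count, kernel size, branch depth, and the concatenation-based fusion are assumptions, and the geodesic-distance loss is omitted.

```python
# Hypothetical sketch of a parallel residual CNN in the spirit of PARS-Net.
# All architectural choices below are illustrative assumptions.
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """A standard residual block: two 3x3 convolutions with a skip connection."""

    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(x + self.body(x))


class ParallelResidualNet(nn.Module):
    """Assumed parallel arrangement: a plain convolutional branch and a
    residual branch see the same features; a 1x1 convolution fuses them."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.head = nn.Conv2d(1, channels, kernel_size=3, padding=1)
        # Branch 1: plain stacked convolutions.
        self.conv_branch = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Branch 2: residual blocks.
        self.res_branch = nn.Sequential(
            ResidualBlock(channels), ResidualBlock(channels)
        )
        # Fuse both branches and map back to a single-channel CT image.
        self.fuse = nn.Conv2d(2 * channels, 1, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = self.head(x)
        merged = torch.cat([self.conv_branch(f), self.res_branch(f)], dim=1)
        # Global skip: the network predicts a correction to the degraded input.
        return x + self.fuse(merged)


if __name__ == "__main__":
    # Random data only checks shapes; in practice the input would be a
    # sparse-view reconstruction (e.g. from filtered back-projection).
    net = ParallelResidualNet()
    out = net(torch.randn(1, 1, 512, 512))
    print(out.shape)  # torch.Size([1, 1, 512, 512])
```

Under this reading, training would minimize a structure-aware loss (the paper uses one based on geodesic distance) between the network output and full-view reference images, so the network learns to remove the streak artifacts typical of sparse-view acquisitions.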