Isfahan University of Medical Sciences

Science Communicator Platform

CardSegNet: An Adaptive Hybrid CNN-Vision Transformer Model for Heart Region Segmentation in Cardiac MRI
Publisher: PubMed



Aghapanah H1 ; Rasti R2, 3 ; Kermani S1 ; Tabesh F4 ; Banaem HY5 ; Aliakbar HP6 ; Sanei H4 ; Segars WP2
Author Affiliations
  1. School of Advanced Technologies in Medicine, Isfahan University of Medical Sciences, Isfahan, Iran
  2. Department of Biomedical Engineering, Faculty of Engineering, University of Isfahan, Isfahan, Iran
  3. Department of Biomedical Engineering, Duke University, Durham, NC 27708, United States
  4. Cardiovascular Research Institute, Isfahan University of Medical Sciences, Isfahan, Iran
  5. Skull Base Research Center, Loghman Hakim Hospital, Shahid Beheshti University of Medical Sciences, Tehran, Iran
  6. Rajaie Cardiovascular Medical and Research Center, Iran University of Medical Sciences, Tehran, Iran

Source: Computerized Medical Imaging and Graphics, Published: 2024


Abstract

Cardiovascular MRI (CMRI) is a non-invasive imaging technique for assessing the structure and function of the blood circulatory system. Precise image segmentation is required to measure cardiac parameters and diagnose abnormalities from CMRI data, but anatomical heterogeneity and image variations make cardiac image segmentation a challenging task. Quantification of cardiac parameters requires high-performance segmentation of the left ventricle (LV), right ventricle (RV), and LV myocardium from the background. Manual segmentation of these regions is time-consuming and error-prone, so many semi- or fully automatic solutions have been proposed recently, among which deep learning-based methods have shown high performance in segmenting regions in CMRI data. In this study, a self-adaptive multi-attention (SMA) module is introduced to adaptively leverage multiple attention mechanisms for better segmentation. The SMA integrates convolution-based position and channel attention mechanisms with a patch-tokenization-based vision transformer (ViT) attention mechanism in a hybrid, end-to-end manner. The CNN- and ViT-based attentions capture short- and long-range dependencies, respectively, for more precise segmentation. The SMA module is applied within an encoder-decoder structure with a ResNet50 backbone, named CardSegNet. Furthermore, a deep supervision method with multiple loss functions is introduced into the CardSegNet optimizer to reduce overfitting and enhance the model's performance. The proposed model is validated on the ACDC2017 (n=100), M&Ms (n=321), and a local dataset (n=22) using 10-fold cross-validation, with promising segmentation results demonstrating that it outperforms its counterparts. © 2024 Elsevier Ltd
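The core idea of the SMA module, as described in the abstract, is to combine several attention branches and let the network adaptively weight them. A minimal NumPy sketch of that fusion idea is given below; the branch designs, the softmax gating, and all function names here are simplifying assumptions for illustration (the ViT attention branch is replaced by an identity pass-through), not the paper's actual implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def channel_attention(feat):
    """Weight each channel by its pooled activation (SE-style sketch).
    feat: array of shape (C, H, W)."""
    w = softmax(feat.mean(axis=(1, 2)))           # (C,)
    return feat * w[:, None, None]

def position_attention(feat):
    """Weight each spatial location by its mean activation across channels."""
    w = softmax(feat.mean(axis=0).ravel()).reshape(feat.shape[1:])  # (H, W)
    return feat * w[None, :, :]

def self_adaptive_fusion(feat, gate_logits):
    """Adaptively mix attention branches with learnable gate logits.
    In the real module the third branch would be ViT self-attention;
    here it is a plain identity for brevity."""
    gates = softmax(gate_logits)                  # one weight per branch
    branches = [channel_attention(feat), position_attention(feat), feat]
    return sum(g * b for g, b in zip(gates, branches))
```

With equal gate logits the three branches contribute equally; during training the gates would shift to emphasize whichever attention mechanism helps segmentation most for the data at hand.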