Deep-Learning Models for Ultrasound, Mammography, and MRI Fusion for Accurate Tumor Segmentation

Authors

  • Abdul Razak Mohamed Sikkander*, Professor, Department of Chemistry, GKM College of Engineering and Technology, Chennai-600063, India
  • Joel J. P. C. Rodrigues, Federal University of Piauí (UFPI), Teresina, PI, Brazil

Keywords

Deep Learning, Multimodal Imaging, Tumor Segmentation, Ultrasound, Mammography, MRI, Data Fusion

Abstract

Accurate tumor segmentation plays a pivotal role in computer-aided diagnosis (CAD) systems, facilitating early cancer detection and guiding treatment planning. However, single-modality medical imaging often suffers from noise, low contrast, and incomplete structural information, which hinder precise tumor delineation. This study addresses these challenges by proposing a multimodal deep-learning framework that integrates Ultrasound (US), Mammography (MG), and Magnetic Resonance Imaging (MRI) data to improve tumor segmentation accuracy. A hybrid convolutional neural network (CNN) architecture is designed, combining modality-specific encoders with an attention-based fusion mechanism. This design enables the model to learn complementary features from each modality while adaptively weighting each modality's contribution.
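For illustration, the sketch below shows one way such an architecture can be assembled in PyTorch: one small CNN encoder per modality, a softmax-gated spatial attention map that weights each modality's feature map, and a shared segmentation head. The layer widths, the gating formulation, and the assumption of co-registered single-channel 2D inputs are illustrative choices, not the authors' exact configuration.

    # Minimal sketch of the fusion idea (illustrative, not the paper's exact model):
    # one encoder per modality, softmax attention over modalities, shared head.
    import torch
    import torch.nn as nn

    class ModalityEncoder(nn.Module):
        """Small CNN encoder for one modality (US, MG, or an MRI slice)."""
        def __init__(self, in_ch=1, feat=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, feat, 3, padding=1), nn.ReLU(),
            )
        def forward(self, x):
            return self.net(x)

    class AttentionFusion(nn.Module):
        """Learns one spatial attention map per modality and fuses features."""
        def __init__(self, feat=64, n_mod=3):
            super().__init__()
            self.gate = nn.Conv2d(feat * n_mod, n_mod, kernel_size=1)
        def forward(self, feats):                     # feats: list of (B, feat, H, W)
            attn = torch.softmax(self.gate(torch.cat(feats, dim=1)), dim=1)
            return sum(attn[:, i:i + 1] * f for i, f in enumerate(feats))

    class FusionSegNet(nn.Module):
        def __init__(self, feat=64):
            super().__init__()
            self.enc_us = ModalityEncoder(feat=feat)
            self.enc_mg = ModalityEncoder(feat=feat)
            self.enc_mri = ModalityEncoder(feat=feat)
            self.fuse = AttentionFusion(feat=feat)
            self.head = nn.Conv2d(feat, 1, kernel_size=1)   # binary tumor-mask logits
        def forward(self, us, mg, mri):
            fused = self.fuse([self.enc_us(us), self.enc_mg(mg), self.enc_mri(mri)])
            return self.head(fused)

    model = FusionSegNet()
    us = mg = mri = torch.randn(2, 1, 128, 128)   # toy co-registered inputs
    print(model(us, mg, mri).shape)               # torch.Size([2, 1, 128, 128])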

The framework is evaluated on simulated multimodal datasets comprising US, MG, and MRI images with expert-annotated ground-truth tumor masks. Experimental results demonstrate that the proposed multimodal fusion model significantly outperforms unimodal and bimodal approaches across multiple segmentation metrics, including the Dice Similarity Coefficient (DSC), Intersection over Union (IoU), sensitivity, and specificity. Notably, the fusion model improves on all four metrics, demonstrating that the attention-guided fusion strategy captures and integrates modality-specific features effectively.
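For reference, all four reported metrics can be derived from the confusion counts of a binarized prediction against the ground-truth mask. The NumPy sketch below assumes binary masks and adds a small smoothing constant (an assumption) to avoid division by zero on empty masks.

    # Sketch of the four reported metrics from binary masks (eps is an
    # assumed smoothing constant to avoid 0/0 on empty masks).
    import numpy as np

    def segmentation_metrics(pred, gt, eps=1e-7):
        pred, gt = pred.astype(bool), gt.astype(bool)
        tp = np.logical_and(pred, gt).sum()        # true positives
        fp = np.logical_and(pred, ~gt).sum()       # false positives
        fn = np.logical_and(~pred, gt).sum()       # false negatives
        tn = np.logical_and(~pred, ~gt).sum()      # true negatives
        return {
            "DSC":         2 * tp / (2 * tp + fp + fn + eps),
            "IoU":         tp / (tp + fp + fn + eps),
            "sensitivity": tp / (tp + fn + eps),   # true-positive rate
            "specificity": tn / (tn + fp + eps),   # true-negative rate
        }

    pred = np.zeros((64, 64), dtype=int); pred[10:40, 10:40] = 1   # toy prediction
    gt   = np.zeros((64, 64), dtype=int); gt[15:45, 15:45] = 1     # toy ground truth
    print(segmentation_metrics(pred, gt))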

The results underscore the potential of multimodal deep-learning fusion to provide robust and clinically reliable tumor segmentation, offering a promising approach to overcoming the limitations of individual imaging modalities. By combining the complementary strengths of Ultrasound, Mammography, and MRI, the proposed framework enhances tumor boundary delineation, particularly in challenging cases involving low contrast or complex tumor morphology. This research demonstrates that multimodal fusion can significantly advance the accuracy and reliability of tumor segmentation in medical imaging, with important implications for clinical decision support systems and personalized treatment strategies.



Published

2026-01-27

Issue

Vol. 3 No. 1 (2026)

Section

Articles

How to Cite

[1] A. R. Mohamed Sikkander and J. J. P. C. Rodrigues, “Deep-Learning Models for Ultrasound, Mammography, and MRI Fusion for Accurate Tumor Segmentation,” WJAMS, vol. 3, no. 1, pp. 19–32, Jan. 2026. Accessed: Feb. 05, 2026. [Online]. Available: https://wasrpublication.com/index.php/wjams/article/view/218