Trustworthy and Transparent AI for Genomic Discovery

Authors

  • A. Mohamed Sikkander*, Department of Chemistry, GKM College of Engineering and Technology, Chennai 600063, Tamil Nadu, India
  • Manoharan Meena, Department of Chemistry, R.M.K. Engineering College, Kavaraipettai, Chennai, India
  • Hala S. Abuelmakarem, Department of Biomedical Engineering, College of Engineering, King Faisal University, Al-Ahsa 31982, Saudi Arabia

DOI:

https://doi.org/10.65336/WJMS.2025.21203

Keywords:

artificial intelligence, genomic data, interpretability, deep learning, feature attribution, explainable AI, variant prediction, regulatory genomics

Abstract

Artificial intelligence (AI), particularly deep learning, has demonstrated a remarkable ability to analyze genomic data, uncover patterns, predict disease associations, and infer regulatory mechanisms. Despite high predictive performance, these models often function as "black boxes," limiting interpretability and trust in the biological insights derived from them. Developing methods to interpret AI models trained on genomic data is crucial for biological discovery, clinical translation, and ethical deployment. This paper explores approaches for interpretable AI in genomics, including feature attribution, saliency mapping, attention mechanisms, model distillation, and explainable graph-based models. We demonstrate a framework combining convolutional neural networks (CNNs), transformer-based architectures, and gradient-based attribution methods to identify the genomic features that drive model predictions. We evaluate interpretability techniques on a hypothetical benchmarking dataset covering tasks such as predicting gene expression, regulatory element activity, and variant pathogenicity. Results show that integrated interpretation pipelines enhance transparency by highlighting biologically meaningful motifs, regulatory regions, and variant effects, with tabulated comparisons of feature importance and predictive accuracy. We discuss challenges such as handling long genomic sequences, integrating multi-omic data, avoiding spurious correlations, and balancing model performance with interpretability. Future perspectives include the development of standardized interpretability metrics, the integration of multi-scale genomic features, and the creation of interpretable AI platforms for clinical genomics. In conclusion, interpretable AI methods are essential to bridge predictive power and biological insight, enabling responsible use of AI for genomic research, precision medicine, and genome editing.
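To make the gradient-based attribution idea mentioned in the abstract concrete, the sketch below computes input-times-gradient attributions for a one-hot encoded DNA sequence. The model is a deliberately tiny logistic scorer with hypothetical weights favouring a crude "TATA" motif; it stands in for the paper's CNN/transformer framework only to illustrate the attribution mechanics, not to reproduce it.

```python
import math

BASES = "ACGT"

def one_hot(seq):
    """Encode a DNA string as a list of 4-element one-hot rows (A, C, G, T)."""
    rows = []
    for b in seq:
        row = [0.0] * 4
        row[BASES.index(b)] = 1.0
        rows.append(row)
    return rows

def model_score(x, w):
    """Toy differentiable model: sigmoid of a position-weight scan."""
    z = sum(xi * wi for xr, wr in zip(x, w) for xi, wi in zip(xr, wr))
    return 1.0 / (1.0 + math.exp(-z))

def input_x_gradient(x, w):
    """Attribution = input * d(score)/d(input).

    For score = sigmoid(sum(x * w)), the gradient w.r.t. each input
    entry is s * (1 - s) * w, so the attribution is nonzero only at
    the bases actually observed in the sequence.
    """
    s = model_score(x, w)
    g = s * (1.0 - s)
    return [[xi * g * wi for xi, wi in zip(xr, wr)]
            for xr, wr in zip(x, w)]

seq = "GCTATAGC"
x = one_hot(seq)

# Hypothetical weights rewarding a crude "TATA" motif at positions 2-5.
w = [[0.0] * 4 for _ in seq]
for pos, base in [(2, "T"), (3, "A"), (4, "T"), (5, "A")]:
    w[pos][BASES.index(base)] = 1.0

attr = input_x_gradient(x, w)
per_base = [sum(row) for row in attr]   # one importance score per position
print([round(v, 4) for v in per_base])
```

Positions inside the motif receive positive attribution while all other positions score exactly zero, which is the behaviour interpretation pipelines exploit when scanning attribution tracks for regulatory motifs.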


Published

2025-12-19

Section

Articles

How to Cite

Trustworthy and Transparent AI for Genomic Discovery. (2025). World Journal of Multidisciplinary Studies, 2(12), 39-45. https://doi.org/10.65336/WJMS.2025.21203