Multimodal Deep Learning for Alzheimer’s Disease Diagnosis: Integrating Neuroimaging and Genetic Data
DOI: https://doi.org/10.32996/jcsts.2025.7.2.29

Keywords: Deep Learning, Alzheimer’s Disease, Alzheimer's Disease Neuroimaging Initiative (ADNI), Genetic Data, Cognitively Normal (CN)

Abstract
Conventional diagnosis of Alzheimer’s disease (AD) has typically relied on data from a single modality, which inherently limits how fully the disease process can be understood. To address this, the current study presents a novel multimodal deep learning framework that integrates clinical assessments, genomic information, and imaging characteristics to enhance diagnosis and disease staging. The framework uses a Contrastive Stacked Denoising Autoencoder and 3D CNNs to represent genetic data (e.g., single nucleotide polymorphisms, or SNPs), clinical test scores, and MRI scans, and classifies individuals into three groups: AD, mild cognitive impairment (MCI), and cognitively normal (CN). For interpretability, the method identifies the most prominent features by clustering them and performing perturbation analysis. In experiments on data from the Alzheimer’s Disease Neuroimaging Initiative (ADNI), the proposed deep learning framework outperforms traditional machine learning methods, including support vector machines, random forests, and k-nearest neighbors, and the multimodal model outperforms the single-modality models across all metrics: accuracy, precision, recall, and F1 score. The features the model highlights, including hippocampal and amygdala measurements and the Rey Auditory Verbal Learning Test (RAVLT), are all widely known to be affected in AD, which supports the clinical relevance of the model.
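The perturbation analysis mentioned above can be illustrated with a minimal sketch: permute one feature at a time and measure the drop in accuracy. This is not the paper's implementation; it uses a simple nearest-centroid classifier on synthetic stand-ins for a hippocampal measure, an amygdala measure, an RAVLT score, and a pure-noise feature (all hypothetical data, not ADNI).

```python
import numpy as np

def nearest_centroid_fit(X, y):
    """Compute one centroid per class label."""
    classes = np.unique(y)
    centroids = np.stack([X[y == c].mean(axis=0) for c in classes])
    return classes, centroids

def nearest_centroid_predict(X, classes, centroids):
    """Assign each sample to the class with the nearest centroid."""
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return classes[d.argmin(axis=1)]

def perturbation_importance(X, y, classes, centroids, rng):
    """Importance of feature j = accuracy drop after shuffling column j."""
    base = (nearest_centroid_predict(X, classes, centroids) == y).mean()
    importances = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])  # break the feature-label link
        acc = (nearest_centroid_predict(Xp, classes, centroids) == y).mean()
        importances.append(base - acc)
    return np.array(importances)

rng = np.random.default_rng(0)
n = 300
# Synthetic labels: 0 = CN, 1 = MCI, 2 = AD (hypothetical, for illustration only)
y = rng.integers(0, 3, size=n)
# Columns: hippocampal measure, amygdala measure, RAVLT score, noise feature
X = rng.normal(size=(n, 4))
X[:, 0] -= 0.8 * y  # volume loss increases with disease stage
X[:, 2] -= 1.0 * y  # memory score decreases with disease stage

classes, centroids = nearest_centroid_fit(X, y)
imp = perturbation_importance(X, y, classes, centroids, rng)
print(imp)  # informative features show larger accuracy drops than noise
```

In this toy setup the stage-linked columns (the hippocampal stand-in and the RAVLT stand-in) should receive clearly higher importance scores than the noise column, mirroring how the paper's analysis surfaces hippocampal, amygdala, and RAVLT features as the most salient.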