Exploring Explainable AI for Predicting Hip Replacement Risk from EHRs
In the realm of healthcare, explainable artificial intelligence (AI) is a burgeoning field, offering the potential to demystify complex AI models. This is particularly pertinent in predicting medical outcomes, such as the risk of hip replacement within a certain timeframe. By utilizing electronic health records (EHRs), researchers aim to enhance the transparency and usability of AI models, ultimately improving patient care and clinical decision-making.
Methods and Analysis: Unveiling the AI Mechanisms
To achieve explainability in AI predictions, a pre-trained temporal graph-based convolutional neural network (TGCNN) model was employed. This model was designed to generate visual explanations using four distinct methods: the original gradient-weighted class activation mapping (Grad-CAM) applied to graphs; a modified Grad-CAM using absolute weights (referred to as Grad-CAM (abs)); sliding element-wise multiplication of feature maps with patient graph inputs (fm-act); and element-wise multiplication of the 3D convolutional filters/kernels with patient graph inputs (Edge-Act).
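To make the two Grad-CAM variants concrete, the sketch below shows the standard Grad-CAM computation (gradient-averaged channel weights, weighted sum of feature maps, ReLU) alongside the absolute-weight variant. This is a minimal illustration in NumPy, not the paper's implementation: the function name, the 2D `(filters, positions)` shapes, and the normalisation step are assumptions for clarity.

```python
import numpy as np

def grad_cam(feature_maps, gradients, use_abs=False):
    """Grad-CAM over a stack of feature maps.

    feature_maps: array of shape (K, N) -- K filters over N graph positions
    gradients:    array of shape (K, N) -- d(class score)/d(feature map)
    use_abs:      if True, use |weights| (the Grad-CAM (abs) variant)
    """
    # Channel weights: global average of each filter's gradients
    weights = gradients.mean(axis=1)                      # shape (K,)
    if use_abs:
        weights = np.abs(weights)                         # abs variant
    # Weighted sum of feature maps; ReLU keeps positive evidence only
    cam = np.maximum((weights[:, None] * feature_maps).sum(axis=0), 0.0)
    # Normalise to [0, 1] for visualisation
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam
```

Note how the abs variant can surface positions whose filters push *against* the predicted class: negative channel weights that standard Grad-CAM would cancel out contribute positively once their sign is dropped.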
These methods aim to visually elucidate the TGCNN model’s predictions regarding an individual’s likelihood of requiring a hip replacement within five years, based on clinical codes extracted from EHRs. The evaluation of these models encompassed qualitative human analysis studies, sensitivity quantification, edge detection bias assessment, and sparsity evaluation.
Results: Unraveling the Findings
Among the methods, Edge-Act emerged as the most effective in terms of graph sparsity and model sensitivity. Subgraph analysis revealed that prescriptions played a pivotal role in influencing the model’s predictions. Physicians reported that the visualizations were beneficial in elucidating model predictions. However, they acknowledged that the complexity of these visualizations posed challenges for clinical decision-making, particularly when dealing with extensive patient EHRs.
Conclusions: Advancing Towards Greater Explainability
The fm-act and Grad-CAM (abs) methods produced graphs lacking sparsity, posing potential interpretation challenges for patients with lengthy EHR histories. Conversely, the Edge-Act method demonstrated superior sparsity, rendering it potentially more interpretable for patients with long EHR trajectories. Through the application of four post hoc methods on the TGCNN model, the explainability of hip replacement risk predictions was enhanced. Nevertheless, further refinement of these methods could bolster their utility in clinical decision-making.
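The sparsity comparison above can be quantified with a simple metric: the fraction of attribution weights that are (near) zero, where a higher value indicates a more focused, and hence more readable, explanation graph. This is an illustrative sketch; the function name and the zero-threshold `eps` are assumptions, not the paper's exact evaluation protocol.

```python
import numpy as np

def explanation_sparsity(attributions, eps=1e-6):
    """Fraction of attribution weights with magnitude at most eps.

    A sparser explanation (value closer to 1) highlights fewer graph
    edges, which matters for patients with long EHR trajectories.
    """
    a = np.asarray(attributions, dtype=float)
    return float(np.mean(np.abs(a) <= eps))
```

Under this metric, a method like Edge-Act that zeroes out most edges would score higher than fm-act or Grad-CAM (abs), whose dense attribution maps leave little near zero.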
For more detailed insights, the comprehensive study can be accessed here.

