<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Interpretability on Rachid Youven Zeghlache</title><link>https://youvenz.github.io/tags/interpretability/</link><description>Recent content in Interpretability on Rachid Youven Zeghlache</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Thu, 01 Jan 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://youvenz.github.io/tags/interpretability/index.xml" rel="self" type="application/rss+xml"/><item><title>Explainable AI for Health</title><link>https://youvenz.github.io/research/explainable-ai-for-health/</link><pubDate>Thu, 01 Jan 2026 00:00:00 +0000</pubDate><guid>https://youvenz.github.io/research/explainable-ai-for-health/</guid><description>&lt;h2 id="overview"&gt;Overview&lt;/h2&gt;
&lt;p&gt;Deep learning models used in clinical settings must be not only accurate but also interpretable. Clinicians need to understand &lt;em&gt;why&lt;/em&gt; a model predicts high-risk progression before they can trust and act on that prediction. Explainable AI (XAI) provides tools to open the black box and reveal which input features drive each decision.&lt;/p&gt;
&lt;h2 id="techniques-i-apply"&gt;Techniques I apply&lt;/h2&gt;
&lt;h3 id="saliency-maps"&gt;Saliency maps&lt;/h3&gt;
&lt;p&gt;Gradient-based saliency methods such as Grad-CAM and Integrated Gradients (and attribution frameworks like SHAP applied to images) highlight which spatial regions of a retinal image or OCT scan most influenced the model&amp;rsquo;s prediction. A heatmap overlaid on a fundus image showing drusen, haemorrhages, or neovascularisation has direct clinical meaning.&lt;/p&gt;
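&lt;p&gt;To make this concrete, here is a minimal Grad-CAM sketch using PyTorch forward and backward hooks. It is an illustration under stated assumptions: the model, the choice of target layer, and the input tensor shape are hypothetical, not the exact pipeline used in this work.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;# Minimal Grad-CAM sketch for a CNN classifier (e.g. a torchvision ResNet).
# The model and target layer are assumptions for illustration, not the
# exact architecture used in this research.
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_layer, class_idx=None):
    """Return an (H, W) heatmap for one image tensor of shape (1, 3, H, W)."""
    activations, gradients = {}, {}

    def fwd_hook(module, inputs, output):
        activations["value"] = output

    def bwd_hook(module, grad_input, grad_output):
        gradients["value"] = grad_output[0]

    h1 = target_layer.register_forward_hook(fwd_hook)
    h2 = target_layer.register_full_backward_hook(bwd_hook)

    model.eval()
    logits = model(image)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()
    h1.remove()
    h2.remove()

    # Weight each feature map by its spatially averaged gradient, then ReLU.
    acts = activations["value"]    # (1, C, h, w)
    grads = gradients["value"]     # (1, C, h, w)
    weights = grads.mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))
    # Upsample to the input resolution and normalise to [0, 1] for overlay.
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear",
                        align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam[0, 0].detach()
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The resulting heatmap can then be alpha-blended over the fundus image or OCT slice, which is how overlays like those described above are typically produced.&lt;/p&gt;</description></item></channel></rss>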