<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Medical-Imaging on Rachid Youven Zeghlache</title><link>https://youvenz.github.io/tags/medical-imaging/</link><description>Recent content in Medical-Imaging on Rachid Youven Zeghlache</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Thu, 01 Jan 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://youvenz.github.io/tags/medical-imaging/index.xml" rel="self" type="application/rss+xml"/><item><title>Medical Image Analysis</title><link>https://youvenz.github.io/research/medical-image-analysis/</link><pubDate>Thu, 01 Jan 2026 00:00:00 +0000</pubDate><guid>https://youvenz.github.io/research/medical-image-analysis/</guid><description>&lt;h2 id="overview"&gt;Overview&lt;/h2&gt;
&lt;p&gt;Medical image analysis is the computational backbone of my research. Before building longitudinal or predictive models, we need robust methods for understanding what is in a single image: detecting lesions, quantifying biomarkers, and segmenting anatomical structures. In my work, the primary imaging modalities are ophthalmological.&lt;/p&gt;
&lt;h2 id="imaging-modalities"&gt;Imaging modalities&lt;/h2&gt;
&lt;h3 id="colour-fundus-photography"&gt;Colour Fundus Photography&lt;/h3&gt;
&lt;p&gt;Wide-field retinal photographs capture the optic disc, macula, vessels, and peripheral retina. Diabetic retinopathy (DR) grading, vessel segmentation, and optic disc detection are well-established tasks. I use fundus images as the primary modality in several longitudinal and progression-prediction pipelines.&lt;/p&gt;</description></item><item><title>Multi-modal Learning</title><link>https://youvenz.github.io/research/multi-modal-learning/</link><pubDate>Thu, 01 Jan 2026 00:00:00 +0000</pubDate><guid>https://youvenz.github.io/research/multi-modal-learning/</guid><description>&lt;h2 id="overview"&gt;Overview&lt;/h2&gt;
&lt;p&gt;Multi-modal learning addresses one of the fundamental challenges in medical AI: clinical decisions are rarely made from a single data source. A clinician diagnosing diabetic macular oedema consults fundus photographs, OCT B-scans, fluorescein angiography, and the patient&amp;rsquo;s longitudinal record simultaneously. My research develops deep learning architectures that can fuse these heterogeneous modalities into a coherent representation.&lt;/p&gt;
&lt;h2 id="key-research-directions"&gt;Key research directions&lt;/h2&gt;
&lt;h3 id="cross-modal-feature-alignment"&gt;Cross-modal feature alignment&lt;/h3&gt;
&lt;p&gt;Simple concatenation of modality-specific features often fails because different modalities live in incompatible representation spaces. I explore contrastive objectives and cross-attention mechanisms that align representations across modalities without requiring paired data at every follow-up visit.&lt;/p&gt;</description></item><item><title>MARIO AMD Progression Challenge</title><link>https://youvenz.github.io/projects/mario-challenge/</link><pubDate>Sun, 01 Sep 2024 00:00:00 +0000</pubDate><guid>https://youvenz.github.io/projects/mario-challenge/</guid><description>&lt;h2 id="overview"&gt;Overview&lt;/h2&gt;
&lt;p&gt;The &lt;strong&gt;MARIO AMD Progression Challenge&lt;/strong&gt; was held at MICCAI 2024 and focused on the automated assessment of &lt;strong&gt;Age-related Macular Degeneration (AMD)&lt;/strong&gt; progression using deep learning.&lt;/p&gt;
&lt;p&gt;We are pleased to announce the publication of our comprehensive analysis of the challenge results; the corresponding dataset is now publicly available to the research community.&lt;/p&gt;
&lt;h2 id="research-focus"&gt;Research Focus&lt;/h2&gt;
&lt;p&gt;This challenge addressed two key clinical questions:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Progression prediction&lt;/strong&gt;: Will a patient&amp;rsquo;s AMD progress over the next 12 months?&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Visual acuity change&lt;/strong&gt;: Will the patient&amp;rsquo;s visual acuity improve, stabilise, or worsen?&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="dataset"&gt;Dataset&lt;/h2&gt;
&lt;p&gt;The MARIO dataset provides:&lt;/p&gt;</description></item></channel></rss>