<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Fusion on Rachid Youven Zeghlache</title><link>https://youvenz.github.io/tags/fusion/</link><description>Recent content in Fusion on Rachid Youven Zeghlache</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Thu, 01 Jan 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://youvenz.github.io/tags/fusion/index.xml" rel="self" type="application/rss+xml"/><item><title>Multi-modal Learning</title><link>https://youvenz.github.io/research/multi-modal-learning/</link><pubDate>Thu, 01 Jan 2026 00:00:00 +0000</pubDate><guid>https://youvenz.github.io/research/multi-modal-learning/</guid><description>&lt;h2 id="overview"&gt;Overview&lt;/h2&gt;
&lt;p&gt;Multi-modal learning addresses one of the fundamental challenges in medical AI: clinical decisions are rarely made from a single data source. A clinician diagnosing diabetic macular oedema consults fundus photographs, OCT B-scans, fluorescein angiography, and the patient&amp;rsquo;s longitudinal record simultaneously. My research develops deep learning architectures that can fuse these heterogeneous modalities into a coherent representation.&lt;/p&gt;
&lt;h2 id="key-research-directions"&gt;Key research directions&lt;/h2&gt;
&lt;h3 id="cross-modal-feature-alignment"&gt;Cross-modal feature alignment&lt;/h3&gt;
&lt;p&gt;Standard concatenation of modality-specific features often fails because different modalities live in incompatible representation spaces. I explore contrastive objectives and cross-attention mechanisms that align representations across modalities without requiring paired data at every follow-up visit, as in the sketch below.&lt;/p&gt;
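&lt;p&gt;As a minimal sketch of how such alignment can be wired up, the PyTorch module below projects fundus and OCT token features into a shared space, fuses them with cross-attention, and pulls paired scans together with a symmetric InfoNCE-style contrastive loss. The module name, feature dimensions, and mean-pooling choice are illustrative assumptions, not the exact architecture used in this work.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Hedged sketch: contrastive alignment plus cross-attention fusion of two
# imaging modalities. Names and dimensions (fundus_dim, oct_dim, shared_dim)
# are illustrative assumptions, not the published architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossModalAligner(nn.Module):
    def __init__(self, fundus_dim=512, oct_dim=768, shared_dim=256, n_heads=4):
        super().__init__()
        # Project each modality into a shared embedding space.
        self.fundus_proj = nn.Linear(fundus_dim, shared_dim)
        self.oct_proj = nn.Linear(oct_dim, shared_dim)
        # Cross-attention: fundus tokens query OCT tokens.
        self.cross_attn = nn.MultiheadAttention(shared_dim, n_heads, batch_first=True)
        self.temperature = nn.Parameter(torch.tensor(0.07))

    def forward(self, fundus_tokens, oct_tokens):
        f = self.fundus_proj(fundus_tokens)   # (B, Nf, D)
        o = self.oct_proj(oct_tokens)         # (B, No, D)
        # Fused representation: fundus features attend to OCT features.
        fused, _ = self.cross_attn(query=f, key=o, value=o)
        return f, o, fused

    def contrastive_loss(self, f, o):
        # Pool tokens, L2-normalise, then apply a symmetric InfoNCE loss
        # so paired fundus/OCT embeddings land close together.
        f_emb = F.normalize(f.mean(dim=1), dim=-1)
        o_emb = F.normalize(o.mean(dim=1), dim=-1)
        logits = f_emb @ o_emb.t() / self.temperature
        targets = torch.arange(logits.size(0), device=logits.device)
        return 0.5 * (F.cross_entropy(logits, targets) +
                      F.cross_entropy(logits.t(), targets))
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In this kind of setup, the contrastive term can be computed only on visits where both modalities were acquired, while the cross-attention fusion still runs whenever both inputs are present, which is one way to cope with incomplete pairing across follow-up visits.&lt;/p&gt;</description></item></channel></rss>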