<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Masked-Autoencoder on Rachid Youven Zeghlache</title><link>https://youvenz.github.io/tags/masked-autoencoder/</link><description>Recent content in Masked-Autoencoder on Rachid Youven Zeghlache</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Thu, 01 Jan 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://youvenz.github.io/tags/masked-autoencoder/index.xml" rel="self" type="application/rss+xml"/><item><title>Self-Supervised Learning</title><link>https://youvenz.github.io/research/self-supervised-learning/</link><pubDate>Thu, 01 Jan 2026 00:00:00 +0000</pubDate><guid>https://youvenz.github.io/research/self-supervised-learning/</guid><description>&lt;h2 id="overview"&gt;Overview&lt;/h2&gt;
&lt;p&gt;Supervised deep learning requires large amounts of labelled data. In medical imaging, expert annotations are expensive, time-consuming, and often in short supply. Self-supervised learning (SSL) sidesteps this bottleneck by learning rich representations from &lt;em&gt;unlabelled&lt;/em&gt; data through pretext tasks, where the supervisory signal is derived from the data itself.&lt;/p&gt;
&lt;h2 id="methods"&gt;Methods&lt;/h2&gt;
&lt;h3 id="masked-autoencoders-mae"&gt;Masked Autoencoders (MAE)&lt;/h3&gt;
&lt;p&gt;Inspired by BERT's masked language modelling in NLP, MAE randomly masks a high proportion (75%) of image patches and trains an encoder-decoder pair to reconstruct the missing regions. Crucially, the encoder processes only the visible patches, so it learns spatially rich features without any labels and at a fraction of the compute. My L-MAE extends this to temporal sequences of medical images, masking across both space and time.&lt;/p&gt;</description></item></channel></rss>