March 5, 2026
Submit & Monitor HPC Jobs Using SLURM — Without Losing Your Data to Path Errors
You’ve just logged into your university’s HPC cluster for the first time. You write what you think is a perfect job script, submit it with sbatch, and 10 minutes later find a cryptic error in the log file: “file does not exist.” Your absolute path was wrong. Or you forgot to request GPU resources, and your job sat in the queue for 3 hours doing nothing.
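A minimal job script that guards against both failure modes might look like the sketch below. The partition name, input path, and script name are illustrative assumptions, not your cluster's actual values; check `sinfo` and your site docs for the real ones.

```shell
#!/bin/bash
#SBATCH --job-name=train-model
#SBATCH --partition=gpu          # assumed partition name; list real ones with `sinfo`
#SBATCH --gres=gpu:1             # request a GPU explicitly, or the job may never be scheduled onto one
#SBATCH --time=01:00:00
#SBATCH --output=%x-%j.out       # log file named <job-name>-<jobid>.out in the submit directory

# Fail fast if the input path is wrong, instead of 10 minutes into the run.
INPUT="$HOME/data/train.csv"     # hypothetical path; adjust to your own layout
if [ ! -f "$INPUT" ]; then
    echo "ERROR: input file not found: $INPUT" >&2
    exit 1
fi

python train.py --input "$INPUT"   # train.py is a placeholder for your own entry point
```

Submit with `sbatch job.sh` and monitor with `squeue -u $USER`; the explicit path check turns a cryptic mid-run failure into an immediate, readable error in the log.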
March 5, 2026
Transfer Code to HPC Clusters Using SSH, Singularity/Apptainer & SLURM
You’ve written working code on your laptop. Now you need to run it on your institution’s HPC cluster with 100+ GPUs, but you’re staring at a terminal with no idea how to get your files there, containerize your environment, or submit a job without breaking it.
This is the moment most researchers feel lost.
The HPC Stack in 90 Seconds
High-performance computing (HPC) clusters are shared systems of networked servers that pool large numbers of CPUs and GPUs among many users. To use them safely and reproducibly, you need three tools working together: