
ipy-slurm-exec

Use our custom Jupyter magic, ipy-slurm-exec, to execute individual Notebook cells as jobs on our Slurm cluster. This is particularly useful for offloading work to a GPU. https://github.com/WIMM-IT/ipy-slurm-exec

Quick start

Import ipy_slurm_exec

import ipy_slurm_exec
%load_ext ipy_slurm_exec

Set up data in the Notebook:

import numpy as np
data = np.arange(12).reshape(3, 4)
scale = 2.5

Use the %%slurm_exec cell magic to mark a cell to run as a Slurm cluster job:

%%slurm_exec
scaled = np.asarray(data) * scale
reduced = scaled.sum(axis=0)

Execution report:

Submitted Slurm job 1321 (folder: slurm_exec/20251223T1152-82a5c43a)
Job completed                                                                   
Imported: data, reduced, scale, scaled

Print the result in the Notebook:

print(reduced)
[30.  37.5 45.  52.5]
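The cell body is ordinary Python, so you can sanity-check it locally without the magic; running the same computation in plain NumPy in your Notebook gives the same result:

```python
import numpy as np

# Same computation as the %%slurm_exec cell above, run locally.
data = np.arange(12).reshape(3, 4)
scale = 2.5
scaled = np.asarray(data) * scale
reduced = scaled.sum(axis=0)
print(reduced)  # [30.  37.5 45.  52.5]
```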

slurm_exec arguments

Specify arguments with %%slurm_exec to manage variables and the Slurm job.

Managing variables

If you do not specify a list of variables to export, then all variables are exported to the Slurm job, and similarly all job variables are imported back after the job finishes. In a large notebook this can cause problems: a variable used in another part of your notebook may be overwritten on import, or many large variables that the job never uses may be exported, wasting memory. Use these arguments to control which variables cross the boundary.

-i, --inputs: list of variables to input into the Slurm job

e.g. %%slurm_exec -i data,scale ...
-o, --outputs: list of variables to output from the Slurm job into the Notebook

e.g. %%slurm_exec -o reduced ...
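The two options can be combined. Using the quick-start example, a cell that sends only data and scale to the job and brings back only reduced would look like:

```
%%slurm_exec -i data,scale -o reduced
scaled = np.asarray(data) * scale
reduced = scaled.sum(axis=0)
```

Here scaled is computed in the job but is not imported back into the Notebook.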

Slurm job parameters

Refer to our Slurm cluster documentation for these parameters; the values are passed through to Slurm unchanged. When a parameter is not set, the Slurm default applies, just as when submitting a job to Slurm directly.

--partition
--time
--ntasks
--cpus-per-task
--mem
--gpus

e.g. %%slurm_exec --partition=gpu --gpus=1 --time=00:10:00
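Because the values are passed straight through, these are the standard sbatch options, and the example above corresponds roughly to a batch submission like the following (with job.sh standing in for the script the tool generates on your behalf):

```
sbatch --partition=gpu --gpus=1 --time=00:10:00 job.sh
```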

Environment modules

Have the job load from our installed environment modules.

--modules: list of modules to load in Slurm job. Add prefix '+' to inherit modules loaded in Notebook.

Examples:

%%slurm_exec --modules=cuda  # load only cuda

%%slurm_exec --modules=+cuda  # load cuda and inherit modules loaded in Notebook - probably just python-cbrg

%%slurm_exec --modules=cellranger,spaceranger  # load these two modules

Related: loading modules in your Notebook.

Export/import fail

Variable export or import can fail when the target environment is missing a module or package that is present in the source environment. A common example is importing a variable that was created with CUDA in a Slurm GPU job: your Notebook environment very likely lacks CUDA.

If a variable you explicitly listed (with -i or -o) fails, then Notebook execution halts with an error. If an implicitly included variable fails, then a warning is printed and execution continues:

Skipped variables in Notebook:
  device_vec: 'cudaErrorInsufficientDriver: CUDA driver version is insufficient for CUDA runtime version'
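The halt-versus-warn policy can be pictured as follows. This is a hypothetical sketch, not the actual ipy_slurm_exec implementation (which also has to handle missing packages in the target environment, not just objects that fail to serialise); the helper name transfer and the use of pickle are assumptions for illustration:

```python
import pickle

def transfer(namespace, requested=None):
    """Hypothetical sketch: serialise variables for transfer between
    Notebook and Slurm job. Explicitly requested variables halt on
    failure; implicitly included ones produce a warning instead."""
    sent, skipped = {}, {}
    names = requested if requested is not None else list(namespace)
    for name in names:
        try:
            sent[name] = pickle.dumps(namespace[name])
        except Exception as exc:
            if requested is not None:
                # Variable was named with -i/-o: halt with an error.
                raise RuntimeError(f"Failed to transfer {name!r}: {exc}")
            # Variable was included implicitly: warn and continue.
            skipped[name] = str(exc)
    if skipped:
        print("Skipped variables in Notebook:")
        for name, reason in skipped.items():
            print(f"  {name}: {reason!r}")
    return sent, skipped

# A lambda cannot be pickled, so it is skipped with a warning;
# naming it explicitly (requested=["f"]) would raise instead.
sent, skipped = transfer({"n": 42, "f": lambda x: x})
```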

GPU example

Import ipy_slurm_exec

import ipy_slurm_exec
%load_ext ipy_slurm_exec

Set up data in the Notebook:

import torch
import numpy as np
seed = 123
vector = np.linspace(-2, 2, 256, dtype=np.float32)
torch.manual_seed(seed)

Code to run on a GPU:

%%slurm_exec -i seed,vector -o torch_result --partition=gpu --gpus=1 --mem=1G
torch.manual_seed(seed)
device = torch.device("cuda")
x = torch.from_numpy(vector).to(device)
y = torch.tanh(x @ x.T)
torch_result = y.sum().item()

Execution report:

Submitted Slurm job 1326 (folder: slurm_exec/20251223T1152-dc0f96dc)
Job completed                                                                   
Imported: torch_result
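Because torch_result is a plain Python float, importing it back needs neither CUDA nor torch in the Notebook. As a local sanity check (just an illustration, not part of the tool), the same reduction can be reproduced with NumPy alone; for a 1-D vector, x @ x.T is simply the dot product, and tanh saturates for this input:

```python
import numpy as np

# CPU reproduction of the GPU cell above. vector . vector is ~344,
# so tanh saturates and the result is exactly 1.0.
vector = np.linspace(-2, 2, 256, dtype=np.float32)
y = np.tanh(vector @ vector.T)
numpy_result = float(y.sum())
print(numpy_result)  # 1.0
```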