Hi everyone!
I am a sysadmin trying to help our users run BigStitcher on our HPC cluster. I don't necessarily know how BigStitcher works or what it does, and I am also not too familiar with Spark. I was hoping you could give us a few pointers on how to run this in distributed mode under Slurm.
Here is how I currently run it on a single node:
```bash
#!/bin/bash
#
#SBATCH --job-name=bs-test # give your job a name
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
#SBATCH --cpus-per-task=1
#SBATCH --ntasks=8
#SBATCH --time=00:30:00
#SBATCH --mem=16GB

module purge
module load bigstitcher-spark/20231220

affine-fusion -x /software/apps/bigstitcher-spark/20231220/example/test/dataset.xml \
    -o ./test-spark.n5 \
    -d '/ch488/s0' \
    --UINT8 \
    --minIntensity 1 \
    --maxIntensity 254 \
    --channelId 0
```
How can I tell affine-fusion to distribute across multiple compute nodes (once I request multiple nodes from Slurm)?
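For reference, my rough understanding is that distributing a Spark application across a Slurm allocation usually means starting a standalone Spark cluster inside the job and pointing the application at its master URL. The sketch below is what I imagine that could look like; the `SPARK_HOME` location, the `7077` port, and especially the idea that affine-fusion accepts a Spark master URL at all are assumptions on my part, not something I've verified against BigStitcher-Spark:

```bash
#!/bin/bash
#SBATCH --job-name=bs-multinode
#SBATCH --nodes=3
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=8
#SBATCH --time=00:30:00
#SBATCH --mem=16GB

module purge
module load bigstitcher-spark/20231220

# First node of the allocation hosts the Spark master
MASTER_HOST=$(scontrol show hostnames "$SLURM_JOB_NODELIST" | head -n1)

# Start the master here (the batch script runs on the first node),
# assuming the module sets SPARK_HOME
"$SPARK_HOME/sbin/start-master.sh"

# Start one Spark worker per allocated node, all registering with the master
srun --ntasks="$SLURM_JOB_NUM_NODES" --ntasks-per-node=1 \
    "$SPARK_HOME/sbin/start-worker.sh" "spark://${MASTER_HOST}:7077" &
sleep 15  # give workers time to register

# Hypothetical: if affine-fusion can be told which master to use,
# the same command as above would then point at spark://${MASTER_HOST}:7077
# (I don't know the actual flag or config mechanism it exposes)
```

Is something along these lines the intended way to run it, or does the tool handle cluster setup differently?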
Thanks!