Commit 973aaaa8 authored by Muck, Katrin

Merge branch 'multi-node-zen3' into 'main'

added zen3 example for multi node batch script

See merge request !1
parents 55807b92 c66b5861
#!/bin/bash
##############################################################################
# User Request:
# - run the same program 256 times on multiple nodes
# - for each program (aka job step) we will use 1 physical core
# - limit time to 5 minutes
#
# Provided Allocation:
# - exclusive access
# - 2 nodes
# - 2x128 physical cores / 2x256 logical cores
# - 2x500 GB memory in total
# - 128 tasks per node
# - 1 physical core bound to each task
# - 500 GB memory accessible for each task
# - memory access is not restricted by slurm and needs to be managed
# by the application
#
# VSC policy:
# - shared=0 -> exclusive node access
#
# Accounting:
# - 2x 128 = 256 core hours / hour
##############################################################################
#SBATCH --job-name="multi node; 2 nodes; 256 tasks; full node per task"
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=128
# implicit default: --cpus-per-task=1
#SBATCH --partition=zen3_0512
#SBATCH --qos=zen3_0512
#SBATCH --time=00:05:00
../util/print_job_info.sh
##
# Either use srun directly;
# pass --cpus-per-task= to srun if the default was changed
# see https://slurm.schedmd.com/archive/slurm-22.05.2/sbatch.html#OPT_cpus-per-task
srun ../util/print_task_info.sh
# OR
# use e.g. mpirun with n = number of tasks per node
#mpirun --map-by ppr:<n>:node ./my/mpi/application
##
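With the allocation above (`--ntasks-per-node=128`), the mpirun mapping placeholder `<n>` resolves to 128. A sketch of the filled-in call; `./my/mpi/application` remains a placeholder for the actual MPI binary:

```shell
# 128 tasks per node, matching --ntasks-per-node=128 in the header above;
# ./my/mpi/application is a placeholder for the real MPI executable
mpirun --map-by ppr:128:node ./my/mpi/application
```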
# Run jobs allocating more than one node
On VSC we currently allow only partial node allocations or full node allocations. So in order to run jobs using more than one node, there are the following options:
- Specify multiple full nodes and configure the tasks accordingly (see [multi-node-full-node.sh](multi-node-full-node.sh))
- Preferred (unless there is a reason not to): use array jobs with full node allocations
Note: in theory you can also configure the tasks in a full node allocation to use only part of the available resources; however, the full node is still counted for accounting.
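The preferred array-job variant could be sketched as follows, reusing the partition, QOS, and task layout from the script above. This is a sketch under assumptions: `--array=0-1` covers the same 2 nodes x 128 tasks = 256 program runs, and `./my_program` is a hypothetical binary that accepts the array index as an argument:

```shell
#!/bin/bash
# Sketch: run the same program 256 times via an array of full-node jobs.
# Assumes the zen3_0512 partition/qos from the example above;
# ./my_program is a hypothetical placeholder binary.
#SBATCH --job-name="array job; 1 full node per array task; 128 tasks"
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=128
#SBATCH --partition=zen3_0512
#SBATCH --qos=zen3_0512
#SBATCH --time=00:05:00
#SBATCH --array=0-1

# Each array task receives its own full node; srun launches
# 128 job steps (one per physical core) on that node.
srun ./my_program "${SLURM_ARRAY_TASK_ID}"
```

Each array element is scheduled independently, so the elements can start as soon as any single node is free, rather than waiting for two nodes at once.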