
Available Resources

| Name | Partition | CPU specs | GPU specs | RAM | Other notes |
| --- | --- | --- | --- | --- | --- |
| node-accel-1 | accel | AMD EPYC 9555 @ 3.0 GHz, 256 cores | 8 x NVIDIA H200 NVL | 3094.00 GB | scratch |
| node-gpu | accel | Intel Xeon Gold 6226R CPU @ 2.90 GHz, 64 cores | 2 x NVIDIA RTX A6000 | 250.00 GB | scratch |
| node-epyc-1 | power | AMD EPYC 9755 @ 2.70 GHz, 512 cores | - | 750.00 GB | scratch |
| node-epyc-2 | power | AMD EPYC 9755 @ 2.70 GHz, 512 cores | - | 750.00 GB | scratch |
| node-epyc-3 | power | AMD EPYC 9755 @ 2.70 GHz, 512 cores | - | 750.00 GB | scratch |
| node-epyc-4 | power | AMD EPYC 9755 @ 2.70 GHz, 512 cores | - | 750.00 GB | scratch |
| node1 | weakold | Intel Xeon CPU E5-2650 0 @ 2.00 GHz, 16 cores | - | 61.44 GB | |
| node2 | weakold | Intel Xeon CPU E5-2650 0 @ 2.00 GHz, 16 cores | - | 61.44 GB | |
| node3 | weakold | Intel Xeon CPU E5-2650 0 @ 2.00 GHz, 16 cores | - | 61.44 GB | |
| node4 | weakold | Intel Xeon CPU E5-2650 0 @ 2.00 GHz, 16 cores | - | 61.44 GB | |
| node5 | weakold | Intel Xeon CPU E5-2650 0 @ 2.00 GHz, 16 cores | - | 125.95 GB | |
| node6 | weakold | Intel Xeon CPU E5-2650 0 @ 2.00 GHz, 16 cores | - | 125.95 GB | |
| node7 | weakold | Intel Xeon CPU E5-2650 0 @ 2.00 GHz, 16 cores | - | 253.95 GB | |
| node8 | weakold | Intel Xeon CPU E5-2650 0 @ 2.00 GHz, 16 cores | - | 253.95 GB | |
| node10 | regular | Intel Xeon Gold 5220R CPU @ 2.20 GHz, 48 cores | - | 1156.34 GB | |
| node11 | regular | Intel Xeon Gold 5220R CPU @ 2.20 GHz, 48 cores | - | 378.88 GB | |
| virtnode1 | debug | Intel Xeon Gold 5218 CPU @ 2.30 GHz, 6 cores | - | 11.26 GB | Time limit 2:00:00 |
| virtnode2 | regular | Intel Xeon Gold 5218 CPU @ 2.30 GHz, 6 cores | - | 62.00 GB | |
| virtnode3 | regular | Intel Xeon Gold 5218 CPU @ 2.30 GHz, 6 cores | - | 11.26 GB | |
| virtnode4 | regular | Intel Xeon Gold 5218 CPU @ 2.30 GHz, 12 cores | - | 22.53 GB | |
| virtnode5 | regular | Intel Xeon Gold 5218 CPU @ 2.30 GHz, 12 cores | - | 22.53 GB | |
| virtnode6 | regular | Intel Xeon Gold 5318Y CPU @ 2.10 GHz, 42 cores | - | 61.44 GB | |
| virtnode7 | regular | Intel Xeon Gold 5318Y CPU @ 2.10 GHz, 42 cores | - | 61.44 GB | |
| virtnode8 | regular | Intel Xeon Gold 5318Y CPU @ 2.10 GHz, 92 cores | - | 350.21 GB | |
| virtnode9 | regular | Intel Xeon Gold 5318Y CPU @ 2.10 GHz, 46 cores | - | 245.76 GB | |

Important information

High-performance storage (scratch)

Scratch storage is intended to speed up computations when large amounts of data need to be read and written.

Key points to keep in mind about scratch disks:
  1. Available only on specific nodes (marked in Slurm with Feature=scratch).
  2. As with /home, each user has a personal scratch directory under /scratch.
  3. Scratch contents are not synchronized automatically; you must copy input data to scratch at the start of the job and copy results back at the end (see the example script below).
  4. Files that have not been used for a week are automatically deleted from scratch disks.
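A minimal sketch of a job script that stages data through scratch. The partition, the per-job subdirectory under /scratch/$USER, and the program and directory names (my_project, my_program, results) are placeholders for illustration, not cluster-specific values:

  #!/bin/bash
  #SBATCH --partition=power          # a partition whose nodes have scratch disks
  #SBATCH --constraint=scratch       # request a node with Feature=scratch
  #SBATCH --cpus-per-task=16
  #SBATCH --time=04:00:00

  # 1. Copy input data onto the fast local scratch disk at the start of the job.
  WORKDIR=/scratch/$USER/$SLURM_JOB_ID
  mkdir -p "$WORKDIR"
  cp -r /home/$USER/my_project "$WORKDIR/"

  # 2. Run the computation against the local copy.
  cd "$WORKDIR/my_project"
  ./my_program input/ results/

  # 3. Copy results back to /home before the job ends; files left on scratch
  #    are deleted automatically after a week of inactivity.
  cp -r results /home/$USER/my_project/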

Submitting GPU jobs

If your job requires GPU resources, you must specify the number of GPUs requested in your SBATCH script (and optionally the GPU type).

Parameters

-G, --gpus=[type:]<number>

Specify the total number of GPUs required for the job. An optional GPU type specification can be supplied.

Examples
1 GPU (any type):      --gpus=1
1 A6000 GPU:           --gpus=a6000:1
2 H200 GPUs:           --gpus=h200:2
Additional information
  • At the moment, reservations are only possible for a full GPU card.
  • GPUs cannot be split into smaller parts and shared with other users.
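
A minimal sketch of a GPU job script; only the --gpus line reflects the parameter described above, while the CPU count, runtime, and program name are illustrative placeholders:

  #!/bin/bash
  #SBATCH --partition=accel
  #SBATCH --gpus=a6000:1             # one A6000; use --gpus=h200:2 for two H200s, or --gpus=1 for any GPU
  #SBATCH --cpus-per-task=8
  #SBATCH --time=12:00:00

  # Placeholder application; replace with your own GPU program.
  ./my_gpu_program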

Partitions

Jobs can be submitted to five different partitions by setting the partition parameter in Slurm scripts: --partition=<name>

The debug partition is intended for short test jobs. It is designed to be readily available.

  • Default runtime: 2 hours
  • Maximum runtime: 2 hours

The regular partition is intended for typical compute jobs.

  • Default runtime: 24 hours
  • Maximum runtime: unlimited

The power partition is intended for high-performance CPU compute jobs.

  • Default runtime: 2 hours
  • Maximum runtime: 72 hours

High-performance storage is available on this partition.

The accel partition is intended for GPU-accelerated compute jobs.

  • Default runtime: 2 hours
  • Maximum runtime: 48 hours

GPU jobs must specify the GPU count (see Submitting GPU jobs).

High-performance storage is available on this partition.

The weakold partition consists of relatively old and lower-performance compute servers, intended for testing or low-priority jobs.

  • Default runtime: 24 hours
  • Maximum runtime: unlimited
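
For instance, a short test run could be submitted to the debug partition as sketched below; the resource values and program name are placeholders:

  #!/bin/bash
  #SBATCH --partition=debug
  #SBATCH --time=00:30:00            # must fit within the 2-hour limit of debug
  #SBATCH --cpus-per-task=4

  ./my_test_program

Submit the script with sbatch, e.g. sbatch test_job.sh.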

Archiving

Less frequently used files can be archived by moving them from /home/<username> to /archive/<username>.
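
For example (the directory name is illustrative):

  # Move a project that is no longer in active use from home to the archive.
  mv /home/$USER/finished_project /archive/$USER/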