Available Resources
Name | Partition | CPU Specs | RAM | Other Notes |
---|---|---|---|---|
node1 | weakold | Intel Xeon CPU E5-2650 0 @ 2.00 GHz, 16 cores | 61.44 GB | |
node2 | | | 61.44 GB | |
node3 | | | 61.44 GB | |
node4 | | | 61.44 GB | |
node5 | | | 125.95 GB | |
node6 | | | 125.95 GB | |
node7 | | | 253.95 GB | |
node8 | | | 253.95 GB | |
node10 | regular | Intel Xeon Gold 5220R CPU @ 2.20GHz, 48 cores | 1156.34 GB | |
node11 | | | 378.88 GB | |
node-gpu | gpu-jp | Intel Xeon Gold 6226R CPU @ 2.90GHz, 64 cores | 250 GB | 2× NVIDIA RTX A6000, NVIDIA Tesla K80 |
virtnode1 | debug | Intel Xeon Gold 5218 CPU @ 2.30GHz, 6 cores | 11.26 GB | |
virtnode2 | regular | | 62 GB | NVIDIA Tesla K80 |
virtnode3 | | | 11.26 GB | |
virtnode4 | | Intel Xeon Gold 5218 CPU @ 2.30GHz, 12 cores | 22.53 GB | |
virtnode5 | | | 22.53 GB | |
virtnode6 | | Intel Xeon Gold 5318Y CPU @ 2.10GHz, 42 cores | 61.44 GB | |
virtnode7 | | | 61.44 GB | |
virtnode8 | | Intel Xeon Gold 5318Y CPU @ 2.10GHz, 92 cores | 350.21 GB | |
virtnode9 | | Intel Xeon Gold 5318Y CPU @ 2.10GHz, 46 cores | 245.76 GB | |
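If you want to see the live node and partition layout rather than this static table, standard Slurm tooling can report it. A minimal sketch, assuming the usual Slurm client commands are available on the login node:

```bash
# Overview of partitions, their time limits, and node states
sinfo

# Per-node view: CPUs, memory, and state for every node
sinfo -N -l

# Full details for a single node, e.g. node-gpu
scontrol show node node-gpu
```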
Accessing the node-gpu node
Access to the node-gpu node is arranged separately; contact hpc@lu.lv to request it.
- Jobs can be submitted to four different partitions by specifying --partition=<name> in the Slurm batch script (see the example script after this list).
- The regular partition is intended for typical compute jobs. Its default runtime is 24 hours and there is no maximum time limit; request a different runtime with --time=dd-hh:mm:ss in your sbatch script.
- The debug partition is for short test jobs and is kept free of long-running jobs so that it is always available. The default and maximum runtime is two hours.
- The weakold partition consists of older, less powerful servers intended for testing or low-priority jobs. Its default runtime is 24 hours, with no maximum time limit.
- The cluster is not intended for long-term data storage! Please regularly delete data you no longer need and back up important data to other storage resources available to you.
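For reference, a minimal sbatch script that selects a partition and overrides the default runtime could look like the sketch below; the job name, resource requests, and program are placeholders to adapt to your own workload:

```bash
#!/bin/bash
#SBATCH --job-name=example_job       # placeholder name
#SBATCH --partition=regular          # regular, debug, weakold, or gpu-jp
#SBATCH --time=02-00:00:00           # dd-hh:mm:ss; overrides the 24-hour default
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4            # adjust to your workload
#SBATCH --mem=8G                     # adjust to your workload

# Replace with your actual program or srun command
srun ./my_program
```

Submit the script with sbatch job.sh (any file name works) and monitor it with squeue -u $USER.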