# Technical specifications of the High Performance Computing (HPC) environment on Fender
## Software

Key ingredients of the HPC environment on the Fender cluster:
- Linux OS: Rocky Linux 9.5
- Job scheduling: Slurm Workload Manager 23.02.7
- Module system: Lmod
- Deployment of (Bioinformatics) software: EasyBuild
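With Lmod as the module system, software deployed with EasyBuild is typically exposed as environment modules. A session might look like the sketch below; the module name and version string are illustrative examples, not a listing of what is actually installed on Fender:

```bash
# List all modules available on the cluster.
module avail

# Search for a specific tool (BWA is an example, not guaranteed to be installed).
module avail BWA

# Load a module into the current shell environment (example version string).
module load BWA/0.7.17-GCCcore-11.3.0

# Show what is currently loaded, then unload everything when done.
module list
module purge
```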
## Virtual Servers
- Jumphosts: corridor
- User Interfaces (UIs): fender
- Deploy Admin Interfaces (DAIs): fd-dai
- Sys Admin Interfaces (SAIs): fd-sai
- Compute Nodes: fd-node-a01, fd-node-a02, fd-node-a03, fd-node-a04, fd-node-a05, fd-node-a06, fd-node-a07, fd-node-a08
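Since *corridor* is a jumphost and *fender* the user interface, logins typically hop through the former to reach the latter. A minimal `~/.ssh/config` sketch is shown below; the `example.org` domain and the account name are placeholders, not Fender's real values:

```
# ~/.ssh/config fragment — domain and user name are placeholders.
Host corridor
    HostName corridor.example.org
    User youraccount

Host fender
    HostName fender.example.org
    User youraccount
    ProxyJump corridor
```

With this in place, `ssh fender` transparently tunnels through the jumphost.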
## Shared Storage

A Logical File System (LFS) is usually a piece of a larger Physical File System (PFS) that serves a specific need for a specific user group. When it is exported as a network file system, you could call it a share. In addition to the LFS-ses for home directories and for the centrally deployed software and reference data, the Fender HPC cluster has access to the following LFS-ses:
- Available prm LFS-ses: prm10
- Available tmp LFS-ses: tmp10
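A common workflow is to stage input data onto a tmp LFS before a job and copy results back to a prm LFS afterwards. The sketch below assumes this pattern; the `/groups/<group>/...` path layout is hypothetical, so check Fender's actual mount points before use:

```bash
# Hypothetical paths — verify the real mount points on Fender first.
PRM=/groups/mygroup/prm10
TMP=/groups/mygroup/tmp10

# Stage input data onto scratch, run the job there, then save results.
rsync -av "${PRM}/projects/run42/input/" "${TMP}/run42/input/"
# ... submit job working in ${TMP}/run42 ...
rsync -av "${TMP}/run42/results/" "${PRM}/projects/run42/results/"
```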
## Resources available to Slurm jobs

### regular partition
| Resource | Amount/value |
|---|---|
| Number of nodes | 8 |
| Cores/node | 14 |
| RAM/node (MB) | 55598 |
| Storage/node (MB) | 900 |
| Node features | tmp10 |
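The limits above translate directly into Slurm resource requests. A batch script header requesting (almost) a full node on this partition might look like the following sketch; the job name and time limit are illustrative:

```bash
#!/bin/bash
#SBATCH --job-name=example
#SBATCH --partition=regular
#SBATCH --nodes=1
#SBATCH --cpus-per-task=14      # at most 14 cores/node on this partition
#SBATCH --mem=55000             # stay below the 55598 MB RAM/node
#SBATCH --tmp=800               # local storage, max 900 MB/node
#SBATCH --constraint=tmp10      # node feature from the table above
#SBATCH --time=01:00:00

# your commands here
```

Requesting slightly less than the hard per-node maxima (e.g. `--mem=55000` rather than 55598) leaves headroom for the OS and avoids jobs being rejected or stuck pending.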
### user_interface partition
Resource | Amount/value |
---|---|
Number of nodes | 1 |
Cores/node | 1 |
RAM/node (MB) | 1024 |
Storage/node (MB) | 0 |
Node features | prm10,tmp10 |
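You can cross-check both partition tables against the live Slurm configuration with `sinfo`; the format string below uses standard specifiers (partition, node count, CPUs, memory, features):

```bash
# Show partition name, node count, CPUs/node, memory/node and node features.
sinfo --format="%P %D %c %m %f"
```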