# Technical specifications of the High Performance Computing (HPC) environment on Hyperchicken
## Software
Key ingredients of the HPC environment of the Hyperchicken cluster:

- Linux OS: Rocky Linux 9.4
- Job scheduling: Slurm Workload Manager 23.02.7
- Module system: Lmod
- Deployment of (Bioinformatics) software: EasyBuild
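
Software deployed with EasyBuild is made available as environment modules via Lmod and is loaded on demand. A minimal sketch of a typical session is shown below; the module name and version are placeholders and are not guaranteed to be deployed on Hyperchicken.

```bash
# List all modules that Lmod can find.
module avail

# Load a specific tool; this module name/version is only an example
# and may not exist on Hyperchicken.
module load SAMtools/1.19.2-GCCcore-12.3.0

# Show which modules are currently loaded in this shell.
module list
```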
## Virtual Servers
- Jumphosts: portal
- User Interfaces (UIs): hyperchicken
- Deploy Admin Interfaces (DAIs): hc-dai
- Sys Admin Interfaces (SAIs): hc-sai
- Compute Nodes: hc-node-a01
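
Logins typically pass through the jumphost to reach the user interface. The sketch below assumes OpenSSH with ProxyJump; the fully qualified domain names and the account name are placeholders, so substitute the values you were given.

```bash
# Hop via the portal jumphost to the hyperchicken user interface.
# Hostnames and account name are assumptions; replace them with your own.
ssh -J your_account@portal.example.org your_account@hyperchicken.example.org
```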
## Shared Storage
A Logical File System (LFS) is usually a part of a larger Physical File System (PFS) that serves a specific need for a specific user group. When it is exported as a network file system, you could call it a share. In addition to the LFS-ses for home directories and for centrally deployed software and reference data, the Hyperchicken HPC cluster has access to the following LFS-ses:
- Available prm LFS-ses: prm09
- Available tmp LFS-ses: tmp09
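
As an illustration of how these LFS-ses are typically used, the sketch below assumes a hypothetical group folder layout with tmp09 for data that is being processed and prm09 for results that must be kept; the paths and group name are assumptions and may differ on Hyperchicken.

```bash
# Hypothetical layout; the actual mount points may differ on Hyperchicken.
GROUP='your_group'                        # placeholder group name
ls "/groups/${GROUP}/tmp09/"              # tmp LFS: scratch-like space for running analyses
ls "/groups/${GROUP}/prm09/"              # prm LFS: space for results you want to keep
# Copy final results from tmp to prm once an analysis has finished.
cp -r "/groups/${GROUP}/tmp09/results/" "/groups/${GROUP}/prm09/"
```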
## Resources available to Slurm jobs

### regular partition

| Resource          | Amount/value |
|:------------------|:-------------|
| Number of nodes   | 1            |
| Cores/node        | 14           |
| RAM/node (MB)     | 55598        |
| Storage/node (MB) | 975          |
| Node features     | tmp09        |
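
A job in the regular partition can request at most the per-node amounts listed above. The batch script below is a minimal sketch under those limits; the job name, requested amounts, time limit, and analysis command are placeholders, not anything prescribed for Hyperchicken.

```bash
#!/bin/bash
#SBATCH --job-name=example_job       # placeholder job name
#SBATCH --partition=regular
#SBATCH --nodes=1                    # this partition has only 1 node
#SBATCH --cpus-per-task=4            # at most 14 cores/node
#SBATCH --mem=8G                     # at most 55598 MB RAM/node
#SBATCH --tmp=500M                   # at most 975 MB local storage/node
#SBATCH --constraint=tmp09           # node feature: access to the tmp09 LFS
#SBATCH --time=01:00:00              # placeholder wall-clock limit

# Replace with the actual analysis you want to run.
srun my_analysis_command
```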
### user_interface partition

| Resource          | Amount/value |
|:------------------|:-------------|
| Number of nodes   | 1            |
| Cores/node        | 1            |
| RAM/node (MB)     | 1024         |
| Storage/node (MB) | 0            |
| Node features     | prm09,tmp09  |
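
With a single core, 1024 MB RAM, and no local storage per node, this partition leaves room only for lightweight work, for example staging data between the prm09 and tmp09 LFS-ses rather than heavy computation. The command below is a minimal sketch; the source and destination paths are placeholders.

```bash
# Run a small data-staging job in the user_interface partition;
# both paths are placeholders.
srun --partition=user_interface --cpus-per-task=1 --mem=512M --time=00:30:00 \
     rsync -av /some/source/dir/ /some/destination/dir/
```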