Hardware Overview
HAICORE is integrated with HoreKa and consists of 16 individual servers called "nodes". All nodes are connected by a high-throughput, low-latency InfiniBand 4X HDR interconnect.
The operating system installed on every node is Red Hat Enterprise Linux (RHEL) 8.x. On top of this operating system, a set of (open source) software components such as Slurm has been installed. Some of these components are of special interest to end users and are briefly discussed here; others are mainly relevant to system administrators and are therefore not covered by this documentation.
Login Node
The login node is the only node directly accessible to end users. This node can be used for interactive logins, file management, software development, and interactive pre- and post-processing.
Compute Nodes
Most nodes are dedicated to computations. These nodes are not directly accessible to users; instead, calculations have to be submitted to a so-called batch system. The batch system manages all compute nodes and executes queued jobs according to their priority as soon as the required resources become available.
Administrative Service Nodes
Some nodes provide additional services such as resource management, external network connections, monitoring, and security. These nodes can only be accessed by system administrators.
HAICORE compute node hardware
|  | GPU4 nodes | GPU8 nodes |
|---|---|---|
| No. of nodes | 12 | 3 |
| CPUs | Intel Xeon Platinum 8368 | AMD EPYC 7742 ("Rome") |
| CPU sockets per node | 2 | 2 |
| CPU cores per node | 76 | 128 |
| CPU threads per node | 152 | 256 |
| Cache L1 | 64 KiB (per core) | 4 MiB (64 × 32 KiB each, L1I and L1D) |
| Cache L2 | 1 MB (per core) | 32 MiB (64 × 512 KiB) |
| Cache L3 | 57 MB (shared, per CPU) | 256 MiB (16 × 16 MiB) |
| Main memory | 512 GB | 1 TB |
| Accelerators | 4x NVIDIA A100-40 | 8x NVIDIA A100-40 |
| Memory per accelerator | 40 GB | 40 GB |
| Local disks | 960 GB NVMe SSD | 6x NVMe SSD |
| Interconnect | InfiniBand HDR | InfiniBand HDR |
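Which of these accelerators are visible inside a job can be checked programmatically through the CUDA runtime. The following is only a minimal sketch, not an official example from this documentation; it assumes a CUDA toolkit is available on the node, and the file name and build command are placeholders.

```c
/* gpuinfo.c: enumerate the GPUs visible to the current job.
 * Minimal sketch; assumes the CUDA toolkit is installed.
 * Hypothetical build command: nvcc -o gpuinfo gpuinfo.c
 */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaGetDeviceCount failed: %s\n",
                cudaGetErrorString(err));
        return 1;
    }

    printf("Visible GPUs: %d\n", count);
    for (int i = 0; i < count; ++i) {
        struct cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        /* On these nodes this should report NVIDIA A100 devices
         * with roughly 40 GB of memory each. */
        printf("GPU %d: %s, %.1f GiB\n", i, prop.name,
               prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    }
    return 0;
}
```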
Interconnect
An important component of HoreKa and HAICORE is the InfiniBand 4X HDR 200 GBit/s interconnect. All nodes are attached to this high-throughput, very low-latency (~1 microsecond) network. InfiniBand is ideal for communication-intensive applications, for example those that perform many collective MPI operations.
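To give a concrete picture of such a collective operation, here is a minimal sketch, assuming an MPI implementation (e.g. Open MPI) is available, the program is compiled with `mpicc`, and it is started across several nodes through the batch system; it is not taken from this documentation, and the file name is a placeholder.

```c
/* allreduce.c: minimal example of a collective MPI operation.
 * Sketch only; hypothetical build command: mpicc -o allreduce allreduce.c
 */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank contributes its own rank number; the collective
     * MPI_Allreduce sums the contributions across all ranks. */
    int local = rank;
    int global_sum = 0;
    MPI_Allreduce(&local, &global_sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("Sum of ranks 0..%d = %d\n", size - 1, global_sum);

    MPI_Finalize();
    return 0;
}
```

With many ranks spread over several nodes, the latency and bandwidth of the interconnect directly determine how quickly such a collective completes.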