Our most powerful hardware consists of 8 NVIDIA BasePODs, each with 8 H100 GPUs (80 GB VRAM each), which can be used for training AI models. The H100 pods communicate with each other via 400 Gb/s InfiniBand or 200 Gb/s Ethernet.
For inference, 5 NVIDIA pods with 8 A30 GPUs each are available, communicating via 25 Gb/s InfiniBand or 40 Gb/s Ethernet. In addition, we offer access to an NVIDIA Jetson AGX module, an ARM server, and a mid-range server, which are also relevant for SMEs.
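As a quick tally, the aggregate capacity implied by the figures above can be sketched as follows (pod and GPU counts are taken directly from this section; no other specifications are assumed):

```python
# Back-of-the-envelope capacity figures from the numbers stated above.

TRAINING_PODS = 8        # NVIDIA BasePODs for training
GPUS_PER_POD = 8         # H100 GPUs per pod
VRAM_PER_H100_GB = 80    # VRAM per H100 GPU, in GB

INFERENCE_PODS = 5       # pods for inference
A30_PER_POD = 8          # A30 GPUs per pod

training_gpus = TRAINING_PODS * GPUS_PER_POD          # 64 H100 GPUs
training_vram_gb = training_gpus * VRAM_PER_H100_GB   # 5120 GB VRAM in total
inference_gpus = INFERENCE_PODS * A30_PER_POD         # 40 A30 GPUs

print(f"Training: {training_gpus} H100 GPUs, {training_vram_gb} GB VRAM total")
print(f"Inference: {inference_gpus} A30 GPUs")
```

In total, this gives 64 H100 GPUs with just over 5 TB of combined VRAM for training, plus 40 A30 GPUs for inference.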