Original link: xAI’s Colossus supercomputer cluster uses 100,000 Nvidia Hopper GPUs — and it was all made possible using Nvidia’s Spectrum-X Ethernet networking platform / TechRadar.
Training large LLMs requires both massive computational power and fast connectivity between GPUs.