Infineon claims it is a full-on redesign of how power gets from the grid straight into GPU gullets without wasting juice along the way. Infineon’s toolkit spans silicon, silicon carbide and gallium nitride, giving it a solid handle on the materials needed to feed the power-hungry beasts training our future overlords.
Infineon Power & Sensor Systems division president Adam White said: “Infineon is driving innovation in artificial intelligence. The combination of Infineon's application and system knowledge in powering AI from grid to core, combined with Nvidia’s world-leading expertise in accelerated computing, paves the way for a new standard for power architecture in AI data centers to enable faster, more efficient and scalable AI infrastructure.”
The idea is to eliminate the spaghetti of multiple PSUs and convert power right at the GPU on the server board. That trick, paired with the 800V backbone, should squeeze more reliability and efficiency out of the system while dealing with the heat and bulk of AI clusters pushing 100,000 GPUs. Racks demanding more than one megawatt each are expected before 2030.
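To see why the voltage bump matters at megawatt-rack scale, here is a back-of-envelope sketch (our illustrative numbers, not Infineon's or Nvidia's): busbar current scales as I = P/V and resistive loss as I²R, so an 800V backbone moves the same power with far thinner copper and far less waste heat than a legacy low-voltage rack bus.

```python
# Hypothetical figures for illustration only: a ~1 MW rack (per the
# pre-2030 projection) fed over a nominal 1 milliohm distribution path.
RACK_POWER_W = 1_000_000
BUSBAR_RESISTANCE_OHM = 0.001  # assumed, purely illustrative

def distribution_loss(voltage_v: float) -> tuple[float, float]:
    """Return (bus current in amps, I^2*R loss in watts) at a given bus voltage."""
    current = RACK_POWER_W / voltage_v
    return current, current ** 2 * BUSBAR_RESISTANCE_OHM

# Legacy 54V rack bus vs the 800V HVDC backbone
for volts in (54, 800):
    amps, loss = distribution_loss(volts)
    print(f"{volts:>4} V bus: {amps:,.0f} A, {loss / 1000:,.1f} kW lost in the busbar")
```

On these assumed numbers the 54V bus hauls over 18,000 A and cooks off hundreds of kilowatts in the conductors, while 800V needs only 1,250 A and loses a couple of kilowatts.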
Nvidia system engineering vice president Gabriele Gorla said: “The new 800V HVDC system architecture delivers high reliability, energy-efficient power distribution across the data center. Through this innovative approach, Nvidia is able to optimize the energy consumption of our advanced AI infrastructure, which supports our commitment to sustainability while also delivering the performance and scalability required for the next generation of AI workloads.”
Infineon reckons its centralised HVDC gear will take up less space in server racks and cut down on wasteful conversion steps. That means fewer components, fewer points of failure and more room to cram in processing muscle.
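The "fewer conversion steps" claim comes down to simple multiplication: each AC/DC or DC/DC hop in a conventional chain eats a slice of the power, and stage efficiencies compound. A minimal sketch, with made-up stage efficiencies (not figures from Infineon or Nvidia):

```python
# Hypothetical stage efficiencies, for illustration only. Efficiencies
# multiply, so cutting conversion stages is where a centralised HVDC
# design claws back headroom over a longer conventional chain.
from math import prod

legacy_chain = [0.96, 0.98, 0.97, 0.98]  # e.g. UPS, rack PSU, busbar, board VRM
hvdc_chain = [0.985, 0.98]               # e.g. central rectifier, on-board converter

print(f"legacy end-to-end: {prod(legacy_chain):.1%}")
print(f"HVDC end-to-end:   {prod(hvdc_chain):.1%}")
```

With these assumed numbers the four-stage chain lands around 89% end-to-end while the two-stage HVDC path clears 96%, and every stage removed is also one fewer box to fail or cool.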
It’s still flogging its DC-DC multiphase solutions to hyperscalers and anyone not ready to bin AC just yet, but HVDC is the main show.