Speaking to the gathered throngs at Computex, CEO Rick Tsai said MediaTek was one of the earliest adopters of NVLink Fusion and a key custom ASIC designer.
For those who came in late, NVLink Fusion is Nvidia's answer to AI bloat and lets hyperscalers cobble together bespoke compute setups using Nvidia GPUs, Grace CPUs, co-packaged optics, and rack-scale gear, all tied together with its signature high-speed interconnect. This allows them to cram more AI muscle into less space, burn less time on deployment, and standardise the racks before the next model decides to need ten times the horsepower.
MediaTek reckons it’s got the IP and silicon chops to pull it off. The outfit claims “unmatched technology leadership,” dangling its ASIC skills and high-speed interconnect wizardry like a shiny lure. That, it says, lets it take on AI workloads such as model training and agentic inference with kit that scales up without dragging power and latency up with it.
Its silicon menu includes SerDes, optical and high-speed I/O, die-to-die chat lines and memory voodoo, the ingredients meant to make the hyperscaler AI cake rise without collapsing under its own weight.
The partnership isn’t limited to datacentres. MediaTek helped Nvidia cook up the Grace Blackwell-powered GB10 chip for its DGX Spark “personal AI supercomputer.”
MediaTek is hawking a “strategic engagement model” where datacentre clients can pick and mix their own deals, get something taped out by 2026 and avoid the usual vendor lock-in. It is leaning on its TSMC foundry ties and ecosystem partners like Cadence, Synopsys and high-bandwidth memory peddlers to keep the AI pipeline flowing.