
Micron beats SK hynix to HBM4 sampling

12 June 2025


Micron claims pole position in high-bandwidth memory race

US memory outfit Micron claims to have leapfrogged SK hynix in the HBM race by slipping samples of its 12-layer 36GB HBM4 to key clients.

The company claims it is the first to push out samples of HBM4, outmanoeuvring the South Korean giant that has dominated the high-bandwidth memory scene for a while. 

HBM4 uses a mature 1β DRAM node with a 12-layer stack and fancy memory built-in self-test (MBIST) features, supposedly making it ideal for next-gen AI kit. Micron's boffins are flogging it as a one-stop solution for AI hardware partners gasping for more bandwidth and less power drain.
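
For the curious, MBIST engines typically walk "march" patterns across the array to sniff out stuck-at and coupling faults. Below is a minimal software sketch of the classic MARCH C- algorithm to show the idea; it is purely illustrative, since Micron has not said which tests its HBM4 engine actually runs.

    # Minimal software sketch of a MARCH C- pass, the kind of pattern a
    # hardware MBIST engine walks across a DRAM array. Purely illustrative:
    # Micron has not disclosed which algorithms its HBM4 MBIST runs.

    def march_c_minus(mem):
        """Run MARCH C- over a list of bits; return True if the array is clean."""
        n = len(mem)
        up, down = range(n), range(n - 1, -1, -1)

        for addr in up:                      # ascending (w0): write 0 everywhere
            mem[addr] = 0
        for order, expect, write in ((up, 0, 1), (up, 1, 0),
                                     (down, 0, 1), (down, 1, 0)):
            for addr in order:               # (rX, wY): read, verify, flip
                if mem[addr] != expect:
                    return False             # stuck-at or coupling fault found
                mem[addr] = write
        for addr in down:                    # descending (r0): final read-back
            if mem[addr] != 0:
                return False
        return True

    print(march_c_minus([1] * 64))           # healthy array -> True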

According to Micron, the kit touts a 2048-bit interface and shifts data at more than 2.0TB/s per stack, which it claims is more than 60 per cent faster than the previous generation. That's a serious boost if you want to pump large language models full of data or run chain-of-thought AI wizardry without watching your hardware choke.
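
The arithmetic roughly checks out. Here is a quick back-of-envelope, assuming HBM3E's commonly cited figure of around 1.2TB/s per 1024-bit stack as the baseline, since the announcement doesn't spell one out:

    # Back-of-envelope check of Micron's bandwidth claim.
    # The 2048-bit interface and 2.0 TB/s figures come from Micron;
    # the HBM3E baseline (1024-bit, ~1.2 TB/s per stack) is a commonly
    # cited figure and an assumption here, not from the announcement.

    HBM4_BUS_BITS = 2048
    HBM4_BW_TBPS = 2.0           # TB/s per stack (Micron's claim)
    HBM3E_BW_TBPS = 1.2          # TB/s per stack (assumed baseline)

    # Implied per-pin data rate: total bits per second / number of pins.
    pin_rate_gbps = (HBM4_BW_TBPS * 1e12 * 8) / HBM4_BUS_BITS / 1e9
    print(f"Implied pin speed: {pin_rate_gbps:.1f} Gb/s per pin")   # ~7.8 Gb/s

    # Generational uplift versus the assumed HBM3E baseline.
    uplift = (HBM4_BW_TBPS / HBM3E_BW_TBPS - 1) * 100
    print(f"Uplift over HBM3E: {uplift:.0f}%")                      # ~67%, i.e. 'more than 60%'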

It bragged that its older HBM3E set an industry energy-efficiency bar and that HBM4 tightens the belt even further with a 20 per cent boost. For data centres, this could mean less power and more performance per watt.
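
Micron doesn't define how it measures efficiency; assuming the usual bandwidth-per-watt yardstick, a 20 per cent gain stacked on 60 per cent more bandwidth still implies a higher absolute draw per stack, as this quick sketch shows:

    # Sketch of what a 20% efficiency gain means, assuming efficiency is
    # measured as bandwidth per watt (the announcement does not define it).
    # Baseline figures are normalised; only the ratios matter.

    baseline_bw, baseline_power = 1.0, 1.0           # normalised HBM3E stack
    hbm4_bw = baseline_bw * 1.6                      # >60% more bandwidth
    hbm4_eff = (baseline_bw / baseline_power) * 1.2  # 20% better perf/watt

    hbm4_power = hbm4_bw / hbm4_eff
    print(f"Relative HBM4 stack power: {hbm4_power:.2f}x")  # ~1.33x

So per watt the memory is thriftier, but under those assumptions a full-tilt HBM4 stack can still draw about a third more power than its HBM3E predecessor.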

With everyone and their dog trying to wedge AI into anything that moves, Micron insists HBM4 will transform sectors like healthcare, finance, and transport.

Micron Technology's cloud memory business boss Raj Narasimhan said, "Micron's HBM4 is a testament to Micron's leadership in memory technology and products with superior performance, higher bandwidth and industry-leading energy efficiency. Building on the significant milestones achieved by HBM3E, we will continue to lead innovation with HBM4 and our robust portfolio of AI memory and storage solutions."

He added that the rollout matches up neatly with what its customers want for their next AI builds, suggesting there won't be much of a wait for volume production.

 
