HBM: Your memory solution for AI & HPC

As AI powered by GPUs transforms computing, conventional DDR memory can't keep up. The solution? High-bandwidth memory (HBM).

HBM is a memory chip technology that essentially shortens the information commute, using ultra-wide communication lanes. An HBM device contains vertically stacked memory chips, interconnected by microscopic wires known as through-silicon vias, or TSVs for short. HBM also delivers more bandwidth per watt than conventional DDR, and with a smaller footprint, the technology can save valuable data-center space. All this makes HBM ideal for workloads such as AI and machine learning, HPC, advanced graphics and data analytics.

The latest iteration, HBM3, was introduced in 2022, and it's now finding wide application in market-ready systems. Compared with the previous generation, HBM3 adds several enhancements:

- Higher bandwidth
- More memory capacity
- Improved power efficiency
- Reduced form factor

The AMD Instinct MI300A accelerator combines a CPU and GPU for running HPC/AI workloads, with HBM3 as its dedicated memory and a unified capacity of up to 128GB. Similarly, the AMD Instinct MI300X is a GPU-only accelerator designed for low-latency AI processing; it also uses HBM3 as its dedicated memory, but with a higher capacity of up to 192GB.
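To make the "ultra-wide lanes" point concrete, here is a minimal back-of-the-envelope sketch in Python of how peak memory bandwidth scales with interface width, per-pin data rate and stack count. The 1024-bit per-stack interface and 6.4 Gb/s pin rate are the nominal JEDEC HBM3 figures; the 8-stack device and its 5.2 Gb/s pin rate in the last example are illustrative assumptions, not official specifications for any particular product.

    # Back-of-the-envelope HBM peak-bandwidth math (illustrative figures only).

    def peak_bandwidth_gbs(bus_width_bits: int, pin_rate_gbps: float, stacks: int = 1) -> float:
        """Peak bandwidth in GB/s: (bus width in bits * Gb/s per pin) / 8 bits per byte, per stack."""
        return bus_width_bits * pin_rate_gbps / 8 * stacks

    # JEDEC HBM3 nominal: 1024-bit interface per stack at up to 6.4 Gb/s per pin.
    per_stack = peak_bandwidth_gbs(1024, 6.4)
    print(f"One HBM3 stack:     {per_stack:,.1f} GB/s")   # ~819.2 GB/s

    # A single 64-bit DDR5-6400 channel, for comparison (same per-pin rate, far fewer pins).
    ddr5 = peak_bandwidth_gbs(64, 6.4)
    print(f"One DDR5-6400 ch.:  {ddr5:,.1f} GB/s")        # ~51.2 GB/s

    # Hypothetical 8-stack device at an assumed 5.2 Gb/s pin rate
    # (stack count and pin rate are assumptions for illustration).
    device = peak_bandwidth_gbs(1024, 5.2, stacks=8)
    print(f"8-stack device:     {device / 1000:,.2f} TB/s")  # ~5.32 TB/s

Even at the same per-pin data rate, the stacked 1024-bit interface is what gives HBM its order-of-magnitude bandwidth advantage over a standard DDR channel.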
