Samsung Forms New HBM Team To Improve AI Chip Yield

The Korean electronics giant has set up a new HBM team within its memory division dedicated to improving production yields for its upcoming AI memory (HBM4) and its AI accelerator (Mach-1).
According to the Korea Economic Daily (KED), the new HBM team will handle the research, development, and sales of DRAM and NAND flash memory. Samsung took its first step on HBM back in January, making this the second team the company has dedicated to the technology.
The new HBM team will be led by Hwang Sang-joon, Executive Vice President and Chief of DRAM Product and Technology at Samsung. If KED's reports are accurate, the Korean giant aims to overtake SK Hynix, the current leader in the HBM industry.
Back in 2019, Samsung disbanded its HBM team on the mistaken belief that the market would not see significant growth. According to an earlier TrendForce press release, the three major HBM producers held the following market shares last year:
- SK Hynix with 46-49%
- Samsung with 46-49%
- Micron with 4-6%
In its bid for supremacy in the AI chip industry, Samsung is now adopting a "two-track" strategy, simultaneously producing two cutting-edge AI chips: HBM and Mach-1. Earlier this year, SK Hynix took the lead by getting its HBM3e memory validated by customers. Micron, meanwhile, appears close behind and expects to have its HBM3e products ready by the end of the first quarter.
This aligns with Nvidia's plan to launch its H200 products, which are expected to use HBM3e memory, by the end of the first half. Although Samsung is slightly behind in sample submissions, it is expected to complete HBM3e validation by the end of the first quarter, with shipments rolling out in the first half of this year. Beyond this, Samsung is also reportedly preparing to develop a next-generation accelerator, "Mach-2," tailored for AI inference.