AI semiconductor market shifts focus from HBM to low-power DRAM module Socamm2

A 256GB Socamm2 module shipped by Micron on March 3 [MICRON]

 
Competition in the AI semiconductor market is expanding beyond high bandwidth memory (HBM) to server low-power dynamic random-access memory (DRAM) modules known as Socamm2.
 
The race among the three major memory makers — Samsung Electronics, SK hynix and Micron — has now entered a new stage centered on 256GB, an ultrahigh-capacity benchmark.
 

What is Socamm?
 
If HBM functions as the “blood vessels” that carry data at ultrahigh speed to the GPU, which serves as the brain of AI, then Socamm serves as the “muscles” of AI servers by efficiently processing large volumes of data while using less power.
 
Socamm is a modular low-power DRAM product designed for servers. Each module contains four low-power DRAM chips. Compared to conventional server memory, Socamm offers more data transmission channels, which allows for higher speed and better power efficiency.
 
Unlike conventional server memory, Socamm modules can be detached and replaced. That feature is drawing attention from data center operators.
 
Instead of replacing an entire server when needing to switch or upgrade memory, operators can improve performance simply by swapping out memory modules.
 

Jensen Huang, the president and CEO of Nvidia, speaks during a special keynote speech at CES 2026 in Las Vegas on Jan. 6. [JOINT PRESS CORPS]

 
Micron’s decisive move
 
The U.S. chipmaker Micron shipped the world’s first customer samples of a 256GB Socamm2 module last Tuesday, according to the company. The capacity is about 33 percent larger than the 192GB Socamm2 products that Samsung Electronics and SK hynix have been positioning as their flagship offerings.
 
Higher capacity allows a larger volume of data needed for AI computing to be processed at once. That creates advantages in large-scale model inference and other complex workloads.
 
Micron’s latest move is closely linked to its effort to restore its technological standing after remaining in third place for an extended period. Micron was the first company to secure Nvidia’s approval as a supplier in the Socamm1 market, the predecessor to Socamm2. However, it lost its lead during the shift to the Socamm2 standard.
 

A Socamm chip developed by SK hynix [SK HYNIX]

 
“Our new product delivers the industry’s smallest form factor, the highest capacity and the lowest power consumption,” said Raj Narasimhan, a senior vice president at Micron. “It will accelerate the shift toward using larger memory capacity while consuming less power in data centers.”
 
Socamm2 may be Micron’s chance at getting back into the race. The global DRAM market share in the fourth quarter of last year stood at 36.6 percent for Samsung Electronics, 32.9 percent for SK hynix and 22.9 percent for Micron, according to the market research firm Omdia.
 

Samsung Electronics' booth at the Mobile World Congress 2026 in Barcelona, Spain, on March 3. [SAMSUNG ELECTRONICS]

 
Samsung and SK hynix in the lead
 
Korean companies currently hold the upper hand in the Socamm2 market. Samsung Electronics began mass production of a 192GB Socamm2 module, the first such move in the industry. The company has used that lead to reinforce its position across the supply chain.
 
Samsung Electronics applied its 10-nanometer-class fifth-generation process, known as 1b, securing stable yields and performance. 
 
“Samsung Electronics’ Socamm2 supply to Nvidia is estimated at 10 billion gigabits, which accounts for about 50 percent of Nvidia’s Socamm2 demand,” said Kim Dong-won, the head of research at KB Securities. “Samsung Electronics is expected to rank first in supply share.”
 
SK hynix is also maintaining its momentum.
 
“We will continue expanding the Socamm2 product lineup through a transition to the 10-nanometer-class sixth-generation process, known as 1c,” said Song Hyun-jong, the president of SK hynix, during a conference call in January this year.
 
The company’s strategy is to carry over its technological credibility in the HBM market to the Socamm sector.
 
Current estimates indicate that SK hynix has secured more orders than Micron. All three companies are expected to unveil Socamm2 memory products at Nvidia’s developer conference, GTC 2026, which will take place from March 16 to 19.
 

A visitor walks past the logo of SK hynix during the Korea Electronics Show 2025 at Coex in Gangnam District, southern Seoul, on Oct. 22, 2025. [AFP/YONHAP]

 
Qualcomm and AMD also eyeing Socamm2
 
A key turning point for wider market adoption is expected to come with Nvidia’s AI accelerator, Vera Rubin, scheduled for release in the second half of this year.
 
Nvidia has decided to place Socamm next to Vera, the CPU used in Vera Rubin. Qualcomm, which is strong in mobile chips, and AMD, a rival to Nvidia, are also reviewing whether to adopt Socamm.
 
The low-power DRAM market, including Socamm, is expected to grow at an average annual rate of 8.1 percent through 2033, reaching $25.8 billion, according to the market research firm Market Research Intellect.
 
“Right now, the three companies are on similar footing, but once large-scale supply begins, the winner will be determined by subtle differences in optimization, particularly yield and price competitiveness,” a semiconductor industry insider said.

This article was originally written in Korean and translated by a bilingual reporter with the help of generative AI tools. It was then edited by a native English-speaking editor. All AI-assisted translations are reviewed and refined by our newsroom.

BY KIM SU-MIN [[email protected]]