The Death of Memory Chips: Timing the Fall

Samsung Electronics crossed $1 trillion in market cap, SK Hynix is nearing $500 billion, and the KOSPI is up 30% YTD on the back of two stocks. The last time memory stocks looked this good was May 2018. How long will it last?

Two years ago, barely any equity investors could explain what high-bandwidth memory was.

Fast forward to today, and HBM has become the most uttered acronym in semiconductor research and the subject of more sell-side notes than any single component in the AI stack. It is also the reason Google reportedly fired its senior procurement team in Korea after they failed to secure supply that SK Hynix and Micron told them was “impossible” to guarantee.

Samsung Electronics crossed $1 trillion in market capitalisation in January. SK Hynix, a company that posted a $5.8 billion operating loss in 2023, delivered record operating profit of KRW 47.2 trillion in 2025, overtaking Samsung as the most profitable listed company in South Korea. 

Between them, these two companies now account for roughly a third of the KOSPI’s total market cap. South Korea’s benchmark index is up 30% year-to-date, making it one of the world’s best-performing major markets, and this rally is ultraconcentrated in memory.

But memory is a cyclical business, and not even AGI’s arrival will stop the boom-and-bust cycle in memory stocks. Memory has been cyclical for 49 years without a single exception.

In this edition of Impactfull Weekly, we study what made (and destroyed) the previous memory supercycle, examine what is new in today’s cycle, and try to time the end of this speculative run by looking at the fault lines that could crack.

To understand where this memory market is going, we have to look at previous cycles and learn what built them and what undid them.

But first, a refresher on the difference between DRAM, SRAM, and HBM:

Think of computer memory as a ladder with three key rungs, each trading speed for capacity and cost as you climb down.

At the top sits SRAM (Static RAM): the fastest and most expensive rung. Each bit is held by a circuit of six transistors that never needs refreshing, making it almost instant to access. But its size and cost per bit are so high that it only ever appears in tiny quantities: the L1, L2, and L3 caches sitting directly on your CPU die.

One rung down is DRAM (Dynamic RAM), the workhorse of main memory. Each bit is stored as a charge in a tiny capacitor, which is far more compact and cheaper to produce, enabling the gigabytes of RAM your system relies on. The catch is that capacitors leak: DRAM must be constantly refreshed, introducing latency and power overhead that makes it noticeably slower than SRAM.

At the bottom of the ladder, on a specialised branch, sits HBM (High Bandwidth Memory). It uses the same DRAM cell technology as the rung above, but reinvents the packaging: multiple DRAM dies are stacked vertically via Through-Silicon Vias (TSVs) and placed directly alongside the processor on the same package. The result: a single HBM3 stack can deliver over 800 GB/s of bandwidth, roughly ten times that of a standard DDR5 module.
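The ladder above can be sketched as a small comparison table. All figures below are illustrative ballpark values for latency, bandwidth, and relative cost per bit, not vendor specifications:

```python
# Rough sketch of the memory ladder: each rung trades speed for
# capacity and cost. Figures are ballpark orders of magnitude.
tiers = {
    #  name              (latency_ns, bandwidth_GBps, rel_cost_per_bit)
    "SRAM (L1 cache)": (1, 10_000, 100.0),   # on-die, no refresh needed
    "DRAM (DDR5)":     (80, 80, 1.0),        # main memory, needs refresh
    "HBM3 (stack)":    (100, 800, 3.0),      # stacked DRAM dies via TSVs
}

for name, (latency_ns, bw, cost) in tiers.items():
    print(f"{name:16s} ~{latency_ns:>4} ns  ~{bw:>6} GB/s  ~{cost:>5.1f}x cost/bit")

# HBM keeps DRAM's cell economics but buys ~10x the bandwidth of a
# DDR5 module by stacking dies right next to the processor.
ddr5_bw = tiers["DRAM (DDR5)"][1]
hbm3_bw = tiers["HBM3 (stack)"][1]
print(f"HBM3 vs DDR5 bandwidth: ~{hbm3_bw / ddr5_bw:.0f}x")
```

The key design point: HBM does not change what a DRAM cell is, only where it sits and how many channels feed it.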

Back to the previous memory supercycle.

Dissecting 2016-2018:

The last true DRAM supercycle from bottom to top ran from mid-2016 to the first half of 2018: roughly two years. 

If you remember, this is when mobile phones entered a new era of higher storage and higher performance with the Samsung Galaxy S7 and the Apple iPhone 7 Plus, while Asus and Vivo released 6GB and 8GB RAM phones for the first time.

This helped memory-heavy apps and features like TikTok, Instagram Live, Google Photos, Pokémon Go, and 4K HDR streaming on Netflix go mainstream, making the case for more memory across use cases.

Second, cloud data centres were also scaling rapidly as Amazon, Microsoft, and Google fought for infrastructure dominance. This was also when these cloud players introduced massive memory upgrades to lock in their enterprise customers who ran memory-heavy databases like SAP HANA.

Third, and critically, the memory oligopoly had just finished consolidating. After a decade or more of bankruptcies and mergers (Elpida, Qimonda, Inotera), Samsung, SK Hynix, and Micron came to control over 95% of global DRAM production and had developed an implicit discipline around supply management.

The result of these three forces colliding: DRAM contract prices roughly tripled over twenty months. Micron’s stock surged 600% from trough to peak. All three companies printed record margins, fuelled by memory expansion in consumer and enterprise use cases.


(this meme explains how 8GB RAM in 2006 was the cutting edge of computing, and in 2020 it was the entry-level memory capacity for modern computers & applications)

But every rollercoaster that goes up must also come down. What undid this superperformance of memory stocks?

What killed 2016’s memory run

Four triggers ended this party in mid-2018.

China’s antitrust probe: Beijing launched a formal investigation into the memory oligopoly in mid-2018, signalling that pricing power had a political ceiling. Samsung, SK Hynix, and Micron were accused of collusion. The investigations never produced formal penalties, but they forced the three companies to temper their pricing behaviour.

Smartphone saturation: Global handset shipments peaked in 2017 at 1.5 billion units and began declining for the first time. The largest single driver of DRAM demand growth had stalled. 

For example, the iPhone 7 Plus and the Galaxy S7 remained remarkably capable over time, giving buyers little reason to purchase another handset.

Double ordering unwound: During the memory shortage, hyperscalers and Original Equipment Manufacturers (OEMs) panic-bought everything they could get their hands on, building inventory buffers of 12 to 15 weeks against a normal 8 weeks. 

Once supply normalised, these buffers became dead stock. Real demand turned out to be lower than the order books suggested.

Capex overshoot: All three manufacturers had simultaneously ramped capital spending at the exact moment demand was cooling. Samsung invested KRW 33 trillion in 2017 alone, a record at the time. The new capacity arrived in 2019 into a market that didn’t need it.

Micron fell 56% from its May 2018 peak to December 2018. Notably, the stocks of these memory giants peaked two to three months before their earnings did. The market priced the top before the earnings caught up.

If you’ve been living under a rock: a little thing called ChatGPT, released in 2022 by OpenAI, has redrawn the maps of the technological world as we know it, while its competitor Claude by Anthropic has caused mayhem in public SaaS stocks by shipping a feature every week that kills another software niche.

Jokes apart, AI has become the biggest demand driver for memory, thanks to the unprecedented amounts of it that AI workloads require. We covered this in many previous essays listed below:

And the single biggest product category to benefit from this demand is HBM, which we covered above.

What’s changed

First, HBM is a new product category that did not exist in meaningful volumes during the 2016 to 2018 run. One gigabyte of HBM eats roughly three times the silicon wafer area of conventional DDR5, and manufacturing yields are significantly lower because a single defect in a 16-layer stack can render the entire unit worthless.

Only SK Hynix, Micron, and Samsung Electronics can make it, and it has become a bigger slice of the pie, generating over 30% of DRAM revenue.

Second, AI servers are absurdly memory-hungry. A single AI training server requires roughly 4.6 terabytes of total memory (1.6TB of HBM plus 3TB of DDR5), which is 6-8x the memory content of a traditional cloud server.
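The server memory figures above can be cross-checked with simple arithmetic. The traditional-server memory content below is not a sourced number; it is the value implied by the 6-8x multiple:

```python
# Back-of-envelope check of the AI server memory figures in the text.
hbm_tb = 1.6    # HBM per AI training server, per the text
ddr5_tb = 3.0   # DDR5 per AI training server, per the text
ai_server_tb = hbm_tb + ddr5_tb  # total: 4.6 TB

# Implied memory content of a traditional cloud server, derived
# from the quoted 6-8x multiple (assumption, not a sourced figure).
trad_server_tb_low = ai_server_tb / 8
trad_server_tb_high = ai_server_tb / 6

print(f"AI training server total memory: {ai_server_tb:.1f} TB")
print(f"Implied traditional server: "
      f"{trad_server_tb_low:.2f}-{trad_server_tb_high:.2f} TB")
```

In other words, one AI training server carries as much DRAM as roughly half a dozen conventional cloud servers, which is why the data centre share of DRAM output is climbing so fast.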

Data centres consumed approximately 50% of global DRAM output in 2025, up from 32% five years earlier, and that share is projected to exceed 60% by 2030.

We’re also witnessing a double whammy of demand: AI training GPU clusters need manufacturing and deployment at the same time as “normal” data centre servers need upgrading.

Third, the order books are locked in. SK Hynix, Samsung, and Micron have all confirmed their entire 2026 HBM and DRAM production is 100% sold out. Hyperscalers have committed $660 to $690 billion in capital spending for 2026, with roughly three-quarters directed at AI infrastructure.

In February, both Samsung and SK Hynix secured 100% quarter-over-quarter LPDDR5X price hikes from Apple for the iPhone 17 without negotiation. When Apple, the world’s most powerful buyer, accepts a doubling in memory costs without a fight, you know the suppliers are in charge.

These factors will likely extend the cycle beyond previous ones, but they cannot keep memory prices and stocks artificially high forever.

Cracks in the bull run

The current supply crunch won’t last beyond 2028.

(source: Korea Economic Daily)

Every memory cycle enters its value-destructive phase when every major player expands production to “meet demand” and not leave chips on the table. This is especially true in an oligopoly like this one, where SK Hynix, Samsung Electronics, and Micron control the market.

SK Hynix’s new M15X cleanroom in Cheongju enters pilot operations in May 2026. Micron raised capital spending to $20 billion for FY2026, with its Idaho fab beginning DRAM output in mid-2027. SK Hynix’s $15 billion Indiana packaging facility reaches mass production in H2 2028.

History has shown us that when all three oligopolists expand simultaneously, oversupply arrives two to three years later. But if we are to learn a lesson from 2018: pricing peaks before the capacity physically arrives, because markets discount the future. If the fabs come online in 2027 to 2028, the memory chip pricing peak falls somewhere in late 2026 to mid-2027.

Increasing concentration risk via consumer demand destruction.

When you keep increasing the price of memory chips that are critical for all consumer devices like tablets, smartphones, watches, wearables, even automobiles, there comes a breaking point where upgrading or replacing equipment is simply too expensive for consumers. 

Once that happens, you end up betting the race on a single horse: AI hyperscalers and frontier labs as your key customers.

The global smartphone market will shrink 12.9% in 2026, the largest annual decline ever recorded. HP disclosed that memory now accounts for 35% of PC build costs, up from 15 to 18% just one quarter earlier. Dell hiked PC prices 15 to 20% in December, Lenovo followed in January. Sony is reportedly considering pushing the PlayStation 6 back to 2028 or 2029 because of memory costs.

This matters because it narrows the revenue base. Relying on enterprise alone for profitability is a single point of failure. Even in 2016 to 2018, memory had two legs (consumer and enterprise). Today, one of those legs is being sawn off.

Panic buying is back.

There is mass hysteria and infighting among the hyperscalers on who gets to secure the most memory chips for their datacentre needs. It’s gotten so bad that Google fired senior procurement executives in Korea over HBM supply failures. Microsoft executives reportedly walked out of SK Hynix negotiations because they felt they were being price-gouged. 

Big tech is placing open-ended orders for deliveries two years out, accepting any volume at any price. This is what we mean by panic buying, and it mirrors the double-ordering phenomenon that triggered the 2018 bust.

Nvidia now has a path to circumvent the memory players.

As we covered in our previous essay on the AI Slowdown, Nvidia’s $20 billion acquisition of Groq gave it a solution to one of its Achilles heels: memory. Groq makes an SRAM-based inference chip that integrates 230MB of on-chip SRAM with 80 TB/s of internal bandwidth, roughly 24 times that of the Nvidia H100’s HBM.

Jensen announced at CES 2026 earlier this year that for certain workloads, this SRAM-based approach would be game-changing.

Groq’s chip is extremely efficient at “batch-size-1 inference” (e.g., a single user chatting with an AI agent in real time), consuming only 10% of the equivalent power consumption on a GPU. That is the exact use case that is exploding in demand right now: low-latency, real-time AI agents.
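The bandwidth and power claims above can be sanity-checked. The H100 HBM3 bandwidth of roughly 3.35 TB/s is its public spec; the 700 W board power used for the power comparison is an assumed H100-class figure:

```python
# Sanity check on the Groq vs H100 figures quoted in the text.
groq_sram_tbps = 80.0    # on-chip SRAM bandwidth, per the text
h100_hbm_tbps = 3.35     # H100 SXM HBM3 bandwidth (public spec)

ratio = groq_sram_tbps / h100_hbm_tbps
print(f"Groq SRAM vs H100 HBM bandwidth: ~{ratio:.0f}x")  # ~24x

# Power claim: batch-size-1 inference at ~10% of GPU power.
# 700 W is an assumed H100-class board power, not a quoted figure.
gpu_power_w = 700
groq_equiv_w = 0.10 * gpu_power_w
print(f"Implied power for the same job: ~{groq_equiv_w:.0f} W vs {gpu_power_w} W")
```

The 24x multiple in the text checks out against the public H100 spec, which is why on-chip SRAM is so attractive for latency-bound, single-user inference.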

Do you remember what we said about SRAM being the tiniest and most expensive piece of the memory puzzle? There’s a reason Jensen speaks so highly of integrating these chips into the next-generation Vera Rubin platform.

There’s a limit to how much the hyperscalers can spend.

Amazon, Alphabet, Microsoft, Meta, and Oracle plan to spend roughly 90% of their combined operating cash flow on capex in 2026, up from a ten-year average of 40%. 

The hyperscalers are pouring 22 Manhattan Projects’ worth of capital into AI infrastructure every year. For context, the Manhattan Project cost the US government nearly $30 billion in today’s terms.
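The Manhattan Project comparison lines up with the capex commitment quoted earlier; a quick cross-check:

```python
# Cross-check: 22 "Manhattan Projects" a year vs committed capex.
manhattan_cost_bn = 30                 # ~$30bn in today's terms, per the text
capex_low_bn, capex_high_bn = 660, 690  # 2026 hyperscaler capex range, per the text

equivalent_bn = 22 * manhattan_cost_bn
print(f"22 Manhattan Projects: ${equivalent_bn} bn")            # $660 bn
print(f"Committed 2026 capex:  ${capex_low_bn}-{capex_high_bn} bn")
# 22 x $30bn = $660bn, the low end of the committed range.
```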

The pricing peak for HBM3, HBM4, and DDR5 chips is bound to arrive in H2 2026 or Q1 2027. Stock prices will likely peak two to three quarters earlier, just as they did in 2018.

However, these numbers are closely watched and largely priced in by the market. The newest factor that could accelerate or delay the end of this memory supercycle is the state of the global supply chain.

A prolonged Hormuz closure would worsen the memory shortage in the short run, and even if the USA and Canada try to ship helium to South Korea and Japan with the negligible spare capacity they have, there is a limit to how much production can be sustained.

There is a high chance this aggravates the panic buying from hyperscalers and front-loads orders even further. Nevertheless, here are three signals you must watch to time the end of this memory supercycle:

  • When quarter-over-quarter contract price increases begin decelerating from their current 90%+ pace 
  • When supplier inventory weeks start rising from the current record low of 3.3 weeks toward 8 to 10 weeks
  • When memory stocks stop going up on record earnings. This was the tell in 2018 and it will be the tell again.
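The three tells above can be sketched as a simple checklist. The function name, thresholds, and inputs below are illustrative placeholders built from the figures in the text, not a trading model:

```python
# A minimal sketch of the three end-of-cycle tells as a checklist.
# Thresholds come from the figures quoted in the text; the function
# and its inputs are illustrative, not a sourced indicator.
def cycle_top_signals(qoq_price_growth_pct, inventory_weeks,
                      stocks_up_on_record_earnings):
    """Return which of the three tells have fired."""
    return {
        # Tell 1: contract price growth slips below the current ~90%+ pace
        "price_growth_decelerating": qoq_price_growth_pct < 90,
        # Tell 2: supplier inventory climbs off the record low of 3.3 weeks
        "inventory_rebuilding": inventory_weeks > 3.3,
        # Tell 3: stocks stop rallying on record earnings (the 2018 tell)
        "stocks_ignoring_good_news": not stocks_up_on_record_earnings,
    }

# As of writing: prices still rising 90%+, inventory at 3.3 weeks,
# stocks still rallying on earnings -> no tell has fired.
signals = cycle_top_signals(92, 3.3, True)
print(signals)
```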

As of today, none of these triggers has fired. But the current upcycle, at 2.5 years, is already past the historical two-year norm. The clock is ticking…

(our list of 15 companies that are most exposed to the incoming crash of the memory supercycle)

Link to the artifact here

To dig further into smaller companies that are most exposed to the incoming crash of the memory supercycle, create your own StockScreener like we did:

Bonus: ETFScreener

To find out more about ETFs available to index this memory supercycle crash, make your own ETF Screener like we did:

We are not calling the top today. But we are giving you all the pieces of the puzzle that make it crystal clear why the top is coming sooner than we think, and why the end of this memory supercycle might not pan out exactly like 2018’s.

The conditions that precede every memory top are now present: record earnings, record capex commitments, panic buying from customers, and a narrowing revenue base that increasingly depends on a single category of buyer.

We are shifting our exposure from “ride the cycle” to “survive the turn.” The rotation is clear: in 2025, South Korea, carried by SK Hynix and Samsung Electronics, was the top equity index, but the tide is turning, and non-OPEC+ dollar-aligned energy exporters like Brazil and Argentina are going to win, as we have argued in previous essays.

Every memory supercycle in the past 49 years has ended. The question has never been if, only when. And for the first time in this cycle, the answer is: soon.

Stay invested, cautiously.