Micron Gears Up For Its Potential Datacenter Memory Boom

If you don’t like gut-wrenching, hair-raising, white-knuckling boom-bust cycles, then do not go into the memory business. There is a reason that Intel was driven out of it back in the 1980s, and once again in recent years when it exited the 3D XPoint phase change memory business. The memory business is not for wimps, just like growing old isn’t.

But Micron Technology, which started out in 1978 with four people in the basement of a dental office in Boise, Idaho, with some ideas about how to make denser DRAM memory chips, is no wimp. It has hung in there through all of the turbulent years in the memory market, and has become a player in NOR and NAND flash as well as DRAM.

And now, it is in position to benefit from the AI wave, with Nvidia adopting its LPDDR5 memory for the “Grace” Arm server CPU and its HBM3E memory for the more recent variants of its “Hopper” H200 GPU accelerators. Micron is also a supplier of the double-pumped MCR-DIMM memory that will be available for Intel’s forthcoming “Granite Rapids” Xeon 6 P-core server processors. SK Hynix is delivering its own variant of MCR DDR5, and of course, SK Hynix is also the biggest supplier of the HBM stacked DRAM memory that is commonly used to boost the bandwidth on GPUs and other accelerators.
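For a rough sense of what that double pumping buys, here is a back-of-the-envelope bandwidth sketch in Python. The DDR5-4400 per-rank baseline and the 8,800 MT/s effective rate are illustrative assumptions on our part, not official Micron or Intel specs:

```python
# Back-of-the-envelope MCR-DIMM bandwidth math. Illustrative only:
# the DDR5-4400 per-rank baseline and the 8,800 MT/s effective rate
# are assumptions on our part, not official Micron or Intel specs.

BYTES_PER_TRANSFER = 8                   # a DDR5 channel moves 64 bits per beat

def channel_bandwidth_gbs(megatransfers_per_sec: float) -> float:
    """Peak bandwidth of one memory channel in GB/s."""
    return megatransfers_per_sec * 1e6 * BYTES_PER_TRANSFER / 1e9

rank_speed = 4_400                       # MT/s for each rank on the module
mcr_speed = 2 * rank_speed               # two ranks fetched in parallel, muxed out

print(f"Plain DDR5-4400 channel:  {channel_bandwidth_gbs(rank_speed):.1f} GB/s")
print(f"MCR DIMM at {mcr_speed} MT/s: {channel_bandwidth_gbs(mcr_speed):.1f} GB/s")
# Plain DDR5-4400 channel:  35.2 GB/s
# MCR DIMM at 8800 MT/s: 70.4 GB/s
```

The trick, in other words, is fetching from two ranks at once and multiplexing the results onto the channel, which roughly doubles effective bandwidth without faster DRAM cells.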

Micron has also cooked up CXL memory to extend memory capacity across the PCI-Express bus in servers in case that opportunity takes off, and Samsung and SK Hynix, among others, are playing here, too.

Still, Micron is only getting started in its next boom cycle, as is the rest of the DRAM and flash industry. The memory and flash industries took it on the chin pretty hard as both the PC and server markets went into recession in the wake of the coronavirus pandemic buying spree, and by the end of 2022 or so, people did not want new PCs and corporate datacenters had plenty of excess server capacity. They might have wanted AI servers crammed with expensive GPUs, but these were hard to come by and it is only their high price that has kept the entire server business out of a revenue recession.

The forecast that Micron made for the datacenter back in the second quarter of fiscal 2024, which ended in February, based on the prognostications of various market researchers and input from its server OEM and ODM customers, has not changed in the thirteen weeks of its third fiscal quarter that ended in May. That forecast is for server unit shipments to grow “mid to high single digits” in 2024, driven by modest growth for traditional servers and strong growth in AI servers. The latter does not help unit shipments much, but does radically increase revenues and server average selling prices since these AI machines cost anywhere from $250,000 to $400,000, depending on how they are configured.
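To see how a thin slice of AI iron can swing revenues and average selling prices, here is a toy mix model. Only the $250,000 to $400,000 AI server range comes from the forecast above; the unit counts and the $8,000 price for a plain vanilla server are hypothetical numbers we picked for illustration:

```python
# Toy model of how a thin slice of AI servers skews revenue and ASPs.
# The unit counts and the $8,000 vanilla server price are made up for
# illustration; the AI server price range is from the forecast above.

traditional_units = 1_000_000
ai_units = 50_000                        # about 5 percent of all units
traditional_asp = 8_000                  # hypothetical two-socket box price
ai_asp = 325_000                         # midpoint of the $250K to $400K range

trad_rev = traditional_units * traditional_asp
ai_rev = ai_units * ai_asp
total_units = traditional_units + ai_units
blended_asp = (trad_rev + ai_rev) / total_units

print(f"AI share of units:   {ai_units / total_units:6.1%}")        #   4.8%
print(f"AI share of revenue: {ai_rev / (trad_rev + ai_rev):6.1%}")  #  67.0%
print(f"Blended server ASP:  ${blended_asp:,.0f}")                  # $23,095
```

Under these made-up numbers, less than 5 percent of units drives two-thirds of revenue, which is the shape of the market Micron is describing.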

We suspect that Micron will be a much stronger barometer for the health of the datacenter than it has been in the past, and so we are initiating coverage of its financials along with the usual suspects we cover here at The Next Platform.

In the quarter ended in May, Micron’s revenues were up 81.5 percent to $6.81 billion, and it posted a net income of $332 million, or 4.9 percent of revenues. This quarter, Micron had a $377 million tax bill, compared to a $622 million one-time tax benefit in fiscal Q2 that artificially inflated its net income to $793 million, or 13.6 percent of revenues.
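For those playing along at home, the margin math on those two quarters works out like this. The fiscal Q2 revenue is backed out from the 13.6 percent figure, and stripping the one-time tax benefit to get an underlying margin is our own simplification:

```python
# Reproducing the margin math from the two quarters discussed above.
# The fiscal Q2 revenue is backed out from the 13.6 percent margin;
# stripping the one-time tax benefit is our own simplification.

q3_revenue = 6.81e9
q3_net_income = 332e6

q2_net_income = 793e6
q2_one_time_tax_benefit = 622e6
q2_revenue = q2_net_income / 0.136       # implies roughly $5.83 billion

print(f"Fiscal Q3 net margin:         {q3_net_income / q3_revenue:5.1%}")  #  4.9%
print(f"Fiscal Q2 margin, reported:   {q2_net_income / q2_revenue:5.1%}")  # 13.6%
q2_underlying = q2_net_income - q2_one_time_tax_benefit
print(f"Fiscal Q2 margin, ex-benefit: {q2_underlying / q2_revenue:5.1%}")  #  2.9%
```

Strip out the one-time tax benefit and fiscal Q2 looks like a roughly 2.9 percent net margin, which makes the 4.9 percent in fiscal Q3 a genuine improvement rather than a step backwards.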

It is hard to say what the “new normal” will be for income as share of revenue for Micron as the PC and server markets recover and it gets a piece of the generative AI action. But anything has to be better than the $7.07 billion in net losses against $20.27 billion in revenues that Micron stomached in the five quarters from the fall of 2022 through nearly the end of 2023. That was a pretty deep trough below the redline of profitability shown in the chart above.

Micron does not break out its datacenter products in its financial reports, but it does talk about them, and just as Nvidia did a decade ago, it might break them out separately when and if these datacenter products start taking off.

Sanjay Mehrotra, Micron’s president and chief executive officer since 2017, who co-founded flash maker SanDisk and spent nearly three decades there before that, said on a call with Wall Street analysts that Micron’s datacenter revenues rose by over 50 percent sequentially from fiscal Q2, and that the company increased its market share in HBM memory, datacenter SSDs, and high capacity server memory.

“Our mix of datacenter revenue is on track to reach record levels in fiscal 2024 and to grow significantly from there in fiscal 2025,” Mehrotra said on the call. “Robust AI-driven demand for datacenter products is causing tightness on our leading edge nodes. Consequently, we expect continued price increases throughout calendar 2024 despite only steady near term demand in PCs and smartphones. As we look ahead to 2025, demand for AI PCs and AI smartphones and continued growth of AI in the datacenter creates a favorable setup that gives us confidence that we can deliver a substantial revenue record in fiscal 2025, with significantly improved profitability underpinned by our ongoing portfolio shift to higher-margin products.”

Note: From here on out, if we mention a year, it is a calendar year. If we are talking about a fiscal year or fiscal quarter, we will say that. If we ran the universe, all years and quarters would be calendar years and quarters. . . .

Datacenter inventories are mostly normalized after the excess capacity acquired during the pandemic and immediately after it, and for datacenter products in particular, which are made on the most advanced chip etching nodes in the Micron fabs, customers are eager to pen supply agreements for 2025.

Datacenter DRAM revenues doubled year on year, but we do not know the actual figure for either quarter. Datacenter SSD revenues nearly doubled sequentially.

Micron started shipping its HBM3E memory in its second fiscal quarter, and generated over $100 million in sales in the third fiscal quarter from these products – and these products were profitable, according to Mehrotra. Micron expects to generate several hundred million dollars of revenues from HBM memory in fiscal 2024, which has only one more quarter left, and multiple billions of dollars in fiscal 2025, which will start in September 2024 and run through August 2025. Micron’s HBM supply is already sold out for 2024 and 2025, and pricing has been set for all of 2024 and most of 2025.

Over the long haul, Micron expects its share of the HBM market to be roughly the same as its share of the overall DRAM market, and it thinks it can get there sometime in 2025.

Interestingly, Micron is sampling its twelve-high HBM3E stacks and will ramp them into volume production in 2025, and it has HBM4 and HBM4E in the works for future products.

Further on the server front, Micron has 128 GB DDR5 memory modules based on single, monolithic dies rather than the stacked chips that have been commonplace for server memory for a long time. Micron expects high capacity server memory to generate several hundred million dollars in revenues in the second half of fiscal 2024. Existing servers released in 2022 and 2023 can make use of these modules, as can future servers. And Nvidia is buying a bunch of LPDDR5 memory for Grace CPUs, and others may follow suit for appropriate use cases where memory capacity needs are modest; every watt not used for memory is one that can be applied to an AI accelerator.
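A toy power budget shows why that last point matters. Every number below is a hypothetical placeholder, not a Micron or Nvidia spec; the point is simply that memory watts saved turn straight into accelerator watts:

```python
# Toy power budget behind "every watt not used for memory is one that
# can be applied to an AI accelerator." Every number below is a
# hypothetical placeholder, not a Micron or Nvidia spec.

node_power_budget_w = 1_000              # assumed power cap for one node
memory_gb = 480                          # assumed Grace-class LPDDR5 capacity

ddr5_w_per_gb = 0.40                     # assumed RDIMM power per GB
lpddr5_w_per_gb = 0.15                   # assumed LPDDR5 power per GB

saved_w = memory_gb * (ddr5_w_per_gb - lpddr5_w_per_gb)
print(f"Memory power saved per node: {saved_w:.0f} W")
print(f"Node budget freed for accelerators: {saved_w / node_power_budget_w:.1%}")
# Memory power saved per node: 120 W
# Node budget freed for accelerators: 12.0%
```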

The fun bit – pun intended – is that industry supply of DRAM and HBM will be below demand this year, according to Mehrotra, and part of the reason is that HBM production severely cannibalizes DRAM production. HBM3E consumes three times the wafers that standard DDR5 memory does to produce a given amount of capacity at a given memory process node. And the yields for HBM4 are expected to be even worse because of the complexity of the packaging and the higher performance required of the memory (which also has a knock-on effect that lowers yields of the final HBM package even further).
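The cannibalization arithmetic is easy to sketch. Only the 3:1 HBM3E-to-DDR5 wafer trade ratio comes from the text above; the wafer counts and bits per wafer are normalized units of our own choosing:

```python
# What the 3:1 wafer trade ratio does to total DRAM bit supply.
# Wafer counts and bits per wafer are normalized units we picked;
# only the 3X HBM3E-to-DDR5 wafer ratio comes from the text above.

total_wafers = 100.0
bits_per_ddr5_wafer = 1.0                # normalized bit output per wafer
hbm_wafer_penalty = 3.0                  # HBM3E needs 3X wafers per bit

for hbm_share in (0.0, 0.10, 0.20, 0.30):
    ddr5_bits = (1.0 - hbm_share) * total_wafers * bits_per_ddr5_wafer
    hbm_bits = hbm_share * total_wafers * bits_per_ddr5_wafer / hbm_wafer_penalty
    total_bits = ddr5_bits + hbm_bits
    print(f"{hbm_share:4.0%} of wafers to HBM -> "
          f"{total_bits:5.1f} bits of supply ({total_bits - 100:+.1f}%)")
# 0% -> 100.0 (+0.0%), 10% -> 93.3 (-6.7%)
# 20% -> 86.7 (-13.3%), 30% -> 80.0 (-20.0%)
```

Every wafer moved to HBM wipes out two-thirds of the bits it would have produced as DDR5, which is how strong HBM demand tightens the whole DRAM market at once.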

The trick is to make money – meaning profits – on more complex products. Micron has done this in the past, and it has as good a chance as the other memory makers to do it again. A lot depends on what Nvidia, AMD, Intel, and others are willing to pay for HBM3E, HBM4, and HBM4E memory in the coming years. Clearly, we need more of it.

The flash market is recovering among the hyperscalers and cloud builders, mainly because they need to build out fast and capacious storage for their AI datalakes. Paying $25,000 to $40,000 for a GPU only makes sense if your storage is fast enough to feed it. And if the networks linking storage to the GPUs are fast enough as well, now that we think of it.
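A crude way to put a dollar figure on that feeding problem, using the GPU price range above; the stall fractions are our hypotheticals:

```python
# Rough cost of starving a GPU of data, using the $25,000 to $40,000
# price range from the paragraph above. The stall fractions are
# hypothetical; the point is that idle GPU time is burned capex.

gpu_price = 32_500                       # midpoint of the $25K to $40K range
for stall in (0.10, 0.25, 0.50):
    wasted = stall * gpu_price
    print(f"GPU waiting on storage {stall:4.0%} of the time: "
          f"${wasted:,.0f} of its purchase price buys idle silicon")
# 10% -> $3,250; 25% -> $8,125; 50% -> $16,250
```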

Datacenter flash sales are hitting records, but we have no sense of the revenues. (We are working on building a model of Micron’s memory and flash storage business. Fear not.) Micron bragged that “bit shipments” of its 6500 series flash drives, which boast 30 TB of capacity and which are based on 232-layer TLC flash, tripled in the quarter, presumably sequentially since this is a new product.

Finally, Micron is pointing at the upper decks to say that it will have record revenues in fiscal 2025, but the company is not foolish enough to talk about record gross margins or record operating or net income. It will say that datacenter SSDs and HBM are adding to the margins, but it does not say by how much.


6 Comments

  1. I can’t speak for the industry as a whole, but the reason why the Old Hewlett Packard and some of its offspring and the companies that it influenced moved to a November 1 to January 31 financial quarter is because Bill Hewlett and Dave Packard did not want their managers putting pressure on employees to ship product by the end of December. Shifting this pressure to the end of January is kinder and allows employees to enjoy the holiday season to a greater degree.

    • When you put it that way, it makes sense. But it is still annoying that things do not line up. Perhaps you could just call it the end of the year on December 15 and let everyone go home.
