AMD rumored to use 'slow' 18Gbps GDDR6 in RDNA 4 — leaker says Team Red's next-generation GPUs won't use GDDR7

(Image credit: AMD)

AMD will use "slow" 18Gbps GDDR6 memory modules with its next-generation "RX 8000-series" RDNA 4 GPU lineup, according to a post from prolific leaker Kepler on X (Twitter). If true, this will be the fourth RDNA architecture in a row to use GDDR6 memory.

The first RDNA chips in the RX 5000-series used GDDR6 clocked at 12–14Gbps. RDNA 2 and the RX 6000-series GPUs bumped that to 14–18Gbps (note that only the RX 6600 uses 14Gbps memory, while the others are all 16–18Gbps). Most recently, RDNA 3 and the RX 7000-series GPUs have used 18–20Gbps GDDR6 memory. So if RDNA 4 tops out at 18Gbps chips, that would actually be a step backward from the fastest RDNA 3 cards.
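
To put rough numbers on that step backward: peak memory bandwidth is simply the per-pin data rate multiplied by the bus width. Here's a minimal back-of-the-envelope sketch; the 384-bit bus is the current RX 7900 XTX configuration, not a confirmed RDNA 4 spec.

```python
# Peak GDDR bandwidth: per-pin data rate (Gbps) times bus width (bits),
# divided by 8 to convert bits to bytes.
def peak_bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int) -> float:
    return data_rate_gbps * bus_width_bits / 8

print(peak_bandwidth_gb_s(20, 384))  # 960.0 GB/s -- 20Gbps GDDR6 (RX 7900 XTX today)
print(peak_bandwidth_gb_s(18, 384))  # 864.0 GB/s -- the rumored 18Gbps cap, a 10% drop
```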

As with all leaks, take this one with a big helping of salt. We find it hard to believe that AMD would limit its RDNA 4 graphics cards to 18Gbps maximum speeds. As noted above, AMD's current RX 7900 XTX and 7900 XT already use faster 20Gbps modules, and the RX 7800 XT runs its memory at 19.5Gbps. It's possible AMD would opt for slower modules if it simply doesn't need more bandwidth, but we strongly suspect that won't be the case on at least some of the GPUs.

There's legitimacy to Kepler's GDDR6 claims as a whole. AMD steered clear of faster GDDR6X solutions in favor of the more power-efficient (but slower) vanilla GDDR6 memory modules. AMD also pioneered using a large L3 cache dubbed Infinity Cache in its last two GPU architectures to make up for the lack of raw bandwidth improvements. That has generally worked well for AMD, and Nvidia followed suit with its Ada Lovelace RTX 40-series GPUs, which include larger L2 caches to improve the effective bandwidth of the GDDR6/GDDR6X solutions.

It could make sense for AMD to focus on enhancing its Infinity Cache design with RDNA 4, rather than moving to newer GDDR7 memory designs. There were rumors that AMD was investigating the potential for stacking an extra 16MB of L3 cache on each of its MCDs (Memory Cache Dies) with RDNA 3, and while that never happened with RX 7000-series parts, that doesn't mean it won't happen on RDNA 4. With potentially 192MB of Infinity Cache on a 384-bit interface with six MCDs, effective bandwidth might be high enough to compete with the upcoming Nvidia Blackwell RTX 50-series GPUs while sticking with GDDR6 yet again.
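
Neither RDNA 4 hit rates nor cache bandwidth figures are public, but a simplified first-order model shows why a larger Infinity Cache can compensate for slower VRAM: requests that hit the cache are served at cache speed, and only the misses touch GDDR6. The hit rates below are illustrative assumptions, and the ~3.5 TB/s cache figure is roughly what AMD quotes for RDNA 3's Infinity Cache.

```python
# Toy model of "effective bandwidth": a weighted blend of cache and VRAM
# bandwidth based on the cache hit rate. Illustrative numbers only.
def effective_bandwidth(hit_rate: float, cache_gb_s: float, vram_gb_s: float) -> float:
    return hit_rate * cache_gb_s + (1 - hit_rate) * vram_gb_s

vram = 18 * 384 / 8   # 864 GB/s: 18Gbps GDDR6 on a 384-bit bus (assumed config)
cache = 3500          # ~3.5 TB/s, in the ballpark of AMD's RDNA 3 Infinity Cache
for hit_rate in (0.4, 0.5, 0.6):
    print(f"{hit_rate:.0%} hit rate -> {effective_bandwidth(hit_rate, cache, vram):.0f} GB/s")
# 40% -> 1918 GB/s, 50% -> 2182 GB/s, 60% -> 2446 GB/s
```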

However, there are a lot of good reasons to switch to GDDR7 regardless of how much cache you have. GDDR7 is the industry's next-generation graphics memory architecture, and it could potentially double GDDR6's per-pin data rate (up to 40Gbps versus up to 20Gbps) while boasting a 50% better data transmission efficiency rating. The combination of increased bandwidth and efficiency looks tantalizing, not to mention the upcoming 24Gb (3GB) GDDR7 chips that would increase per-channel capacity by 50%.
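
The capacity math is straightforward: each GDDR chip occupies a 32-bit slice of the memory bus, so moving from 16Gb (2GB) to 24Gb (3GB) devices lifts total VRAM by 50% at any given bus width. A quick sketch, using a hypothetical 256-bit card as the example:

```python
# VRAM capacity from bus width and per-chip density: one GDDR chip per
# 32-bit slice of the bus, with chip density given in gigabits.
def board_capacity_gb(bus_width_bits: int, chip_gbit: int) -> float:
    chips = bus_width_bits // 32
    return chips * chip_gbit / 8  # gigabits -> gigabytes

print(board_capacity_gb(256, 16))  # 16.0 GB with today's 16Gb (2GB) chips
print(board_capacity_gb(256, 24))  # 24.0 GB with 24Gb (3GB) chips -- a 50% jump
```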

There aren't many RDNA 4 rumors yet; this is one of the first. There's also some rumbling that AMD will focus on the $600-and-under market with its next GPU architecture, similar to what we saw in the past when Polaris didn't even attempt to compete with Nvidia's top GPUs. Vega came out a year later and only reached as far as the GTX 1080, opting not to try for the GTX 1080 Ti. The HD 4000-series and HD 3000-series parts also opted for a "small die" approach, using dual GPUs and CrossFire for the fastest models. Could AMD be trying for the mainstream/value market approach again?

It will be interesting to see what AMD does. AMD's decision to opt for GDDR6 will depend on how much memory bandwidth it needs, and how much money it can save on PCB designs by using the existing memory standard. The beefier a GPU's memory sub-system is, the more complex the PCB design needs to be. Even if AMD does opt to support GDDR7 on the fastest RDNA 4 GPUs, it would make sense to stick with GDDR6 on lower tier parts.

If AMD does opt for a pure GDDR6 route while Nvidia goes for GDDR7, it may be a bit of a lopsided battle between the next-generation GPU architectures. We could see AMD RDNA 4, Nvidia Blackwell, and Intel Battlemage GPUs all launch this fall.

Aaron Klotz
Contributing Writer

Aaron Klotz is a contributing writer for Tom’s Hardware, covering news related to computer hardware such as CPUs and graphics cards.

  • Notton
    I think this rumor is based on a story back in February that only Nvidia is buying large quantities of GDDR7.
    AMD, on the other hand, has been ordering tons of GDDR6 20Gbps.

    IDK if anything has changed since then, but it seems kind of late to start ordering GDDR7 for a Q3 launch window.
    Reply
  • usertests
    If there's no performance increase, there's probably little need for a bandwidth increase. And new microarchitectures (such as RDNA4) could require less bandwidth for the same results.

    There's also the Infinity Cache to consider, which shrank in capacity in some cases between RDNA2 and RDNA3, but can be a lot faster. ~1800 GB/s "effective memory bandwidth" for the 6950 XT, 3500 GB/s for the 7900 XTX:

    https://en.wikipedia.org/wiki/RDNA_2#Desktop
    https://en.wikipedia.org/wiki/RDNA_3#Desktop
    Reply
  • ezst036
    It seems kind of clear that AMD isn't aiming for the ultra high end, so it might be legit. They have tied themselves to the "small die" strategy for a little too long. (See the Radeon 9700 section here for what I mean. Nvidia is seemingly always the one with a bigger die, and that simply means more shader units and so forth to work with.)

    GDDR7 would also make for a pretty good refresh 6 months after launch.
    Reply
  • vanadiel007
    It's clear they are betting on AI, so they do not really need the bandwidth. They will rely on AI-enhanced FSR and AI optimizations.
    It all makes sense.
    Reply
  • The Historical Fidelity
    I think AMD is smartly exploiting the prevailing opinions in the gaming community.

    Obviously people are forking out $2,000 for 4090s, so it’s not too high of a price, but social media is filled with people ranting about how “high end used to be $450” or “back in my day you could buy a Voodoo for a nickel” (tongue in cheek on the latter example).

    So, there will be 2 mindsets about RDNA4. 1: AMD can’t compete with Nvidia so they are pivoting to being a bargain bin supplier, or 2: AMD is listening to the market and customers.

    I think if they hit it out of the ballpark, mindset #1 would simply sound pedantic. By ballpark I mean AMD releases a 7900 XTX-grade card with 24GB, improved raster and improved ray tracing for $599, has drivers optimized before launch, and releases the competitive AI-enhanced FSR they have been teasing. If this happens, there’s not much to complain about; hardware “obsession-ists” (much more accurate vs enthusiasts, and yes I am calling myself obsessed) get their halo card and pay their “flex tax”, while AMD puts out cost-effective “every-man” cards and builds market share.

    We will see how it goes…
    Reply
  • Eximo
    Voodoo cards became cheap, but they launched high: $300 for a Voodoo 2, and you still had to have or buy a 2D card. With inflation that is $574. Same for the Voodoo 1 at $299, though later reduced to $199.

    I have my money on Battlemage, good or bad, just want to mess with it.
    Reply
  • DavidLejdar
    Steam Hardware Survey March 2024,
    58.45% have 1080p as primary display resolution, while 3.44% have 4K.

    Based on that, one could argue that there isn't really that much market demand for high-end GPUs, when mid-range GPUs can deliver plenty of FPS below 4K - and that this is where the market is at these days.

    I mean, at least for myself, I might upgrade my GPU in the near future (partially depending on whether AMD stock rebounds at least a little bit, harhar). But that will go hand in hand with upgrading to 4K and a VR headset. If it weren't for that, my older GPU would still get me enough FPS at 1440p even with new titles (albeit perhaps not running every setting always at ultra).
    Reply
  • Trake_17
    The Historical Fidelity said:
    I think AMD is smartly exploiting the prevailing opinions in the gaming community.

    Obviously people are forking out $2,000 for 4090s, so it’s not too high of a price, but social media is filled with people ranting about how “high end used to be $450” or “back in my day you could buy a Voodoo for a nickel” (tongue in cheek on the latter example).

    So, there will be 2 mindsets about RDNA4. 1: AMD can’t compete with Nvidia so they are pivoting to being a bargain bin supplier, or 2: AMD is listening to the market and customers.

    I think if they hit it out of the ballpark, mindset #1 would simply sound pedantic. By ballpark I mean AMD releases a 7900 XTX-grade card with 24GB, improved raster and improved ray tracing for $599, has drivers optimized before launch, and releases the competitive AI-enhanced FSR they have been teasing. If this happens, there’s not much to complain about; hardware “obsession-ists” (much more accurate vs enthusiasts, and yes I am calling myself obsessed) get their halo card and pay their “flex tax”, while AMD puts out cost-effective “every-man” cards and builds market share.

    We will see how it goes…
    "People" are spending $2k for a 4090? Steam surveys say less than 1% of gamers have a 4090, and just 6% of anyone who has bought a current Gen graphics card lately has one. Yeah, "people" are, just not that many. I don't think 4090 sales aren't dictating the market. You could argue the reputation bonus the crown earns is good marketing for all of the other cards, but there's still a substantial market for people unwilling to spend $2k on a card and plenty of room for AMD to compete on cost, which it will be better able to leverage with cheaper RAM, especially if the faster RAM isn't bottlenecking mid-range GPU performance. I mean this strategy, along with diversifying into graphics cards at all, is what helped it survive Intel and look how well that's going lately. Calling it bargain bin sounds pejorative, it's a very viable strategy full excellent products. Especially when people have to start rationing their power as the data centers start hogging the grid to feed their AI. No one will be interested in 6-700 watt GPUs for their computer. Ok maybe that's too far afield, but the rest is sound.
    Reply
  • Trake_17
    DavidLejdar said:
    Steam Hardware Survey March 2024,
    58.45% have 1080p as primary display resolution, while 3.44% have 4K.

    Based on that, one could argue that there isn't really that much market demand for high-end GPUs, when mid-range GPUs can deliver plenty of FPS below 4K - and that this is where the market is at these days.

    I mean, at least for myself, I might upgrade my GPU in the near future (partially depending on whether AMD stock rebounds at least a little bit, harhar). But that will go hand in hand with upgrading to 4K and a VR headset. If it weren't for that, my older GPU would still get me enough FPS at 1440p even with new titles (albeit perhaps not running every setting always at ultra).
    Stick with a 1440 ultrawide
    Reply
  • Amdlova
    My primary display is 2560x1080 75Hz, my secondary is 1600x1200 60Hz, and my projector is 1920x1080... just want a GPU that can play max settings at 150W max :) Let's see what RDNA 4 can push efficiently.
    Reply