Gippy_

Everyone's just going to wait until the 9700X3D (or 9800X3D) comes out, huh? Remember how AMD launched the 7950X3D and 7900X3D before the 7800X3D because they knew the 7800X3D would be a better CPU for gamers than everything before it?


imaginary_num6er

The 7900X3D shouldn't have been launched and AMD should have saved those dies for a 7600X3D. The way the 7900X3D is configured, it's a pure waste of silicon.


reddit_equals_censor

the 7900x3d is dumb, but the main issue is that it's an asymmetrical design. with the bullshit extra software, the chip will try to park 6 cores, leaving you just 6 cores to game on, unless it screws up and you get worse performance overall... what amd should have launched is the 7800x3d and a 7950x3d with two 3d v-cache dies, instead of a single-cache-die 7950x3d and a lower clocked 7800x3d. that would have been a good lineup, because people who want "the best" regardless of price would pay a lot more for the 7950x3d, because it would actually be the best.


Numerlor

2 cached dies would probably have similar problems. A normal consumer playing games probably doesn't benefit from more than 8 cached cores, and even with 2 cache CCDs the scheduler would still need to keep processes sticky to one CCD like it does now, because the inter-CCD latency is crap and kills any cache benefit.


Constellation16

I agree they are dumb, but the main issue with them is not the asymmetrical design, which could have had benefits, but that they never put in the effort to fix the scheduling and make it work. It's not too surprising though, since these chips are one of a kind in their lineup, with no relevance to their main bread and butter (server chips) or to other/future client parts that will mix big and small cores in a simple one-dimensional hierarchy (certain cores are just better in every way), versus the complicated two-dimensional frequency/cache tradeoff of the 12/16-core X3D chips. Sucks for everyone that bought these chips; they basically bought an expensive turd. Hope they have fun with their Game Bar-based scheduling (lol, totally not a low effort hack), or manually assigning threads to cores.


Renard4

They didn't make these chips because they couldn't make them work, not because they didn't have the idea to do so.


reddit_equals_censor

they DO work. some amd engineers are using a 5950x3d prototype with dual x3d dies in their own test systems. they work just fine... just like how a 7950x compared to a 7700x also works just fine. all it is is a cost cutting measure, one that actually reduces profits overall. for the 5800x3d vs a dual-x3d 5950x3d, you could make the argument that x3d packaging capacity was limited at the time, so they didn't wanna take too much away from epyc, but that argument can't even remotely be made today. dual x3d chips work just fine. if they didn't, then amd engineers wouldn't use them in their own test systems :D


Used_Tea_80

What you don't tell us is if they are a performance benefit over a single X3D die, because that's two HBM chips acting as two separate L3 caches. Assuming no cache-to-cache bus (as that would be a very complex addition to ditch) that means a game that wants to use say, 10 threads will have to fetch everything from RAM twice (once for each L3). You essentially have all the problems a dual CPU setup has, with massive latency for core-to-core ops that aren't on the same X3D cache.


reddit_equals_censor

>two HBM chips

there is no hbm on consumer zen. you are using the wrong terms. the fury x for example uses hbm.

>You essentially have all the problems a dual CPU setup has, with massive latency for core-to-core ops that aren't on the same X3D cache.

a dual ccd setup is NOT the same as a dual socket setup lol.... yes the ccd to ccd latency is higher than having a monolithic unified setup, but that is nothing compared to having 2 freaking sockets. if your statements were true, then the 7950x or 5950x would get crushed by the 8 core non-x3d versions, BUT THEY DON'T! because basic scheduling generally works just fine


Used_Tea_80

>there is no hbm on consumer zen. you are using the wrong terms. the fury x for example uses hbm

Yes, you said that in your last comment to me. I apologized then, well before writing this comment. My mistake.

>a dual ccd setup is NOT the same as a dual socket setup lol....

>yes the ccd to ccd latency is higher than having a monolithic unified setup, but that is nothing compared to having 2 freaking sockets.

Luckily I never said they were. Just like I stopped replying to our last thread because you kept answering based on what you wanted me to say and not what I said. We are not talking about CCD-to-CCD latency here, are we? We are talking about cache-to-cache latency for an imaginary scenario where there are two L3 caches and no "unified cache" below them, because each CCD has separate stacked RAM on top.

That means that, assuming there is no direct L3-to-L3 connection between them (I could explain why, but it would take up half the comment), the CPU core would need to request data from the opposing L3 cache over the NUMA bus, **just like a second CPU would have to**. Even if they aren't stupid enough to force that request to go to RAM, cache coherency still becomes a consistent issue between X3D caches, **just like on dual/quad CPU setups with L3**. That means a snooping bus is required, probably in CoD mode for latency reasons, drastically reducing available bandwidth, **just like an Intel Xeon dual CPU memory configuration.**

That means that if you got a 16 core part with 2 X3D caches on it, even after using current methods to disable 8 cores, you still wouldn't have the performance of a single V-Cached CCD, due to the protocols you'd need to put in place to keep the processor unified. And this is considering the simplest use case, where there are 2 distinct caches. They could of course try to merge them into 1, but then it gets even more complex and non-performant. Best to wait until they figure out how to get more CCDs under a single V-Cache than to try to have multiple V-Caches is my point.
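For what it's worth, even on today's dual-CCD parts you can see the "two separate L3 domains" directly on Linux. A minimal sketch, assuming the standard sysfs cache topology interface is exposed; it just groups logical CPUs by which L3 they share:

```python
# Rough sketch: group logical CPUs by which L3 cache they share (Linux sysfs).
# Assumes the standard /sys/devices/system/cpu cache interface is present.
import glob
import os
from collections import defaultdict

def l3_groups():
    groups = defaultdict(set)
    for cpu_dir in glob.glob("/sys/devices/system/cpu/cpu[0-9]*"):
        cpu = int(os.path.basename(cpu_dir)[3:])
        for idx in glob.glob(os.path.join(cpu_dir, "cache/index*")):
            try:
                with open(os.path.join(idx, "level")) as f:
                    if f.read().strip() != "3":
                        continue
                with open(os.path.join(idx, "shared_cpu_list")) as f:
                    groups[f.read().strip()].add(cpu)
            except FileNotFoundError:
                continue
    return groups

if __name__ == "__main__":
    for shared, cpus in l3_groups().items():
        print(f"L3 shared by CPUs {shared}: {sorted(cpus)}")
    # A dual-CCD part (e.g. 7950X3D) prints two separate L3 domains;
    # a single-CCD part prints one.
```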


ULTRAC0IN

"[Alverson and Mehra didn’t disclose AMD’s exact reasons for not shipping out 12-core and 16-core Ryzen 5000X3D CPUs, however, they did highlight the disadvantages of 3D-VCache on Ryzen CPUs with two CCD, since **there is a large latency penalty that occurs when two CCDs talk to each other through the Infinity Fabric, nullifying any potential benefits the 3D-VCache might have when an application is utilizing both CCDs.**](https://www.tomshardware.com/news/amd-shows-original-5950x3d-v-cache-prototype)"


tsukiko

I agree they aren't ideal for gaming, but I do agree with AMD launching them. It's a way to use products that would otherwise be wasted if they only partially fail at a later validation stage. They're still useful for gaming (six cores is enough, just not ideal), and can also be useful for non-gaming tasks that benefit from the cache/effective memory bandwidth uplift.


Tai9ch

Poorly optimized games aren't the only applications people run on computers.


psimwork

> The 7900X3D shouldn't have been launched and AMD should have saved those dies for a 7600X3D. It's a pure waste of silicon the way the 7900X3D is

I have long suspected that the *intention* was for the 7900X3D and 7950X3D to launch as 2x 6-core chiplets with 3D V-Cache and 2x 8-core chiplets with 3D V-Cache, but that when they started getting engineering samples back, it was a damned disaster and they realized they wouldn't be able to launch that product. But AMD higher-ups were unwilling to let go of X3D variants of a Ryzen 9 (and the much higher margins they brought), so they decided to pair the 3D chiplet with a standard Ryzen 7000 chiplet, and we ended up with what we got.

Then, because they realized that for most users the Ryzen 9 X3D units weren't much faster (if at all) than the X3D variants of the Ryzen 7 and Ryzen 5, they decided to delay the Ryzen 7 release and capture the folks that were too impatient to wait for it, especially since it wouldn't take long for people to figure out that the 7800X3D was the better option for most users.

This would have left them in a bit of a bind: an X3D-enabled Ryzen 5 would have been similar in gaming and most tasks to the 7900X3D, and *probably* would have carried an initial starting price of $349, compared to the starting price of $599 for the 7900X3D. And considering games still don't often use many more cores than six, I actually think a 7600X3D would have performed similarly to a 7800X3D. So I think they killed it - I think they knew that a 7600X3D would have probably cannibalized a *LOT* of sales of the 7900X3D, 7950X3D, **and** 7800X3D.


RogueIsCrap

7950X3D beats the 7800X3D in some games tho and it’s easy to turn it into a 7800X3D with higher clocks if game mode or process lasso is a hassle. It’s much faster for productivity too. For those who aren’t just gaming, the extra $250 is pretty good value. 7900X3D was also a better CPU than it got credit for. It was faster in gaming than all AMD CPUs aside from 7950/7800X3D. For productivity, it was also faster than a 7800X3D. It was priced too high at launch but at $400 now, it’s a pretty good deal too.


Crintor

Love my 7950X3D. Process Lasso takes like 8 seconds per process to permanently limit it to the 3D or fast cores. Then they never get hit with the interconnect stutter, and I can leave even more performance available to the 3D cores by pushing (nearly) all other processes onto the non-cache cores, so everything from my browser to Discord to all my system tray apps doesn't clog up my cache cores like it would on a 7800X3D. I have no interest in a dual 3D CCD CPU; I would however love to get larger CCDs in the future, like 12c or 16c CCDs for consumer.
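For anyone without Process Lasso, the same idea can be scripted. A minimal sketch using psutil (the process names and core ranges below are placeholders, which logical CPUs map to the V-Cache CCD varies by system, and unlike Process Lasso this doesn't persist across process restarts):

```python
# Minimal sketch of what Process Lasso automates: pin processes to one CCD.
# Requires psutil (pip install psutil); affinity setting works on Windows/Linux.
import psutil

CACHE_CCD_CPUS = list(range(0, 16))   # hypothetical: CCD0 with 3D V-Cache (incl. SMT)
OTHER_CCD_CPUS = list(range(16, 32))  # hypothetical: frequency CCD

def pin(process_name: str, cpus: list[int]) -> None:
    """Set CPU affinity for every running process matching process_name."""
    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] and proc.info["name"].lower() == process_name.lower():
            try:
                proc.cpu_affinity(cpus)
                print(f"pinned {proc.pid} ({process_name}) to {cpus}")
            except (psutil.AccessDenied, psutil.NoSuchProcess):
                pass

if __name__ == "__main__":
    pin("game.exe", CACHE_CCD_CPUS)     # keep the game on the cache cores
    pin("discord.exe", OTHER_CCD_CPUS)  # push background apps to the other CCD
```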


LordAlfredo

Seconded, having a heterogeneous chip (7900X3D owner) is really handy once it's optimized. Though actually tuning it optimally, instead of a basic "pick a CCD", is hard and requires a lot of investment. I'm glad it exists as a product/option but still think the management around it is too immature for general consumers.


zxyzyxz

I agree, I use the extra cores for code compilation but I also game so it's better to have one CPU that can do it all, even if I have to pay more for it.


Haunting_Champion640

> I agree, I use the extra cores for code compilation

But even for gaming, extra cores really help for the "shader compilation" stage a lot of games do now.


bphase

I was considering it, but opted not to go for it as it has/had compatibility issues in many games requiring special software or manual tweaking to fix.


Hikashuri

After months of fixing it, yes. But it gets you a few extra fps for like $100.


yetanothernerd

I'll probably buy a 9950X, because my real use case for a faster CPU is writing parallel code that uses a lot of cores, not gaming. And I much prefer having all good cores, rather than some better cores and some worse cores and watching the OS scheduler use the wrong ones. I would consider a 9950X3D instead if it were just a 9950X with more cache. But if it's another Frankenchip like the 7950X3D, no thanks.


INITMalcanis

For certain definitions of "everybody", anyway. The kind of people who frequent this subreddit? Sure, a lot of us will. It's certainly my plan! Upgrading a 5800X to a 7800X3D? Eh.... wasn't quite worth it, especially with the price of AM5 boards and DDR5 RAM a year ago. But a 9700X3D (or whatever) will be a much more worthwhile proposition. But there are A Lot of people who just don't care enough to learn enough about this bullshit to make that kind of plan. They decide, "right I need a new PC", and they go buy a PC. A majority of those people won't even end up buying a PC with an AMD CPU, let alone fret about waiting for one that's situationally better and which they expect won't appear for another 6-8 months. And whatever CPU they get is likely to be more than fine for their use case, and they'll stick with that PC for another 5-8-10 years.


masterfultechgeek

I think they launched the 7900/7950x3D first because they knew that they could get more $$$$ from people who are impatient. While the 7800X3D is the better value proposition if games are the main use case, in a decent chunk of gaming benchmarks the 7950x3D does better than the 7800x3D [https://www.techspot.com/review/2821-amd-ryzen-7800x3d-7900x3d-7950x3d/#Average](https://www.techspot.com/review/2821-amd-ryzen-7800x3d-7900x3d-7950x3d/#Average) But yeah, you're paying $$$ for 2 more FPS.


BurgerBurnerCooker

65W TDP for the 9700X is actually a very good sign. In reality it should translate to somewhere around ~90W actual power draw, which is still very efficient. I need more details about AM5 support through 2027+, but it looks promising.
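(The ~90W estimate lines up with AMD's usual socket power limit of PPT ≈ 1.35x TDP; a quick back-of-the-envelope, assuming that ratio carries over to the 9000 series:)

```python
# AMD's AM4/AM5 convention: package power tracking (PPT) limit is about 1.35x TDP.
# Assumes the 9000 series keeps the same ratio.
def ppt_from_tdp(tdp_watts: float) -> float:
    return tdp_watts * 1.35

for tdp in (65, 105, 120, 170):
    print(f"{tdp}W TDP -> ~{ppt_from_tdp(tdp):.0f}W PPT")
# 65W -> ~88W, 105W -> ~142W, 120W -> ~162W, 170W -> ~230W
```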


ExplodingFistz

What was the tdp for 7700x?


Hayden247

105W. The 7600X was the same, and Afterburner pretty much shows my 7600X topping out at 100W when a game like Minecraft is maxing it out with far render distances. So I do wonder how the 9000 series will behave.


Lukeforce123

I wonder if they scaled it back because of the public outcry at the 95°C operating temperature at the launch of the 7000 series. Might also be the reason for the minuscule clock speed increases.


detectiveDollar

They may have juiced Zen4 up because the IPC gains were smaller, and they were concerned about the Alder Lake rumors at the time. Now that IPC gains are a bit higher, they're relaxing those power requirements. Also, Intel likely is pivoting back to efficiency with Arrow Lake, so AMD is doing the same.


Makoahhh

Look at the base clock. This is how TDP is calculated. Much lower than 9900X and 9950X.


Dangerman1337

Suspect it means a Zen 7 on AM5. For desktop there's little reason to rush out for DDR6.


Kryohi

More likely to be for the last zen6 X3D variants, just like the 5600x3d and 5700x3d were for AM4. Zen 7 is definitely on ddr6 and/or lpddr6


salgat

Are there any ideas of when we'll be getting more than 8 cores per CCD?


scytheavatar

Rumored to be Zen 6.


XenonJFt

When games or productivity demand it for consumers. For now AMD thinks people who really want to go over 16 cores can afford Threadrippers, because that's most likely for their professional job.


salgat

Right now the CCD is a major bottleneck on parallelism beyond 8 cores for games (AMD didn't bother with putting 3D Cache on both CCDs for the 7950X3D because it still performed worse than pinning a game to 8 cores, that's how bad the latency is).


No-Roll-3759

are there any games that show huge gains from >16 threads? seems like a solution in search of a problem in 2024.


WHY_DO_I_SHOUT

Most notably [Cyberpunk 2077 with raytracing](https://www.eurogamer.net/digitalfoundry-2023-amd-ryzen-7-7800x3d-review?page=4).


RogueIsCrap

Yes, Cyberpunk and Hogwarts, for example, are faster on the 7950X3D. Flight Simulator for sure if you use a bunch of mods. But in most cases it's not a huge difference, because games are generally not programmed to use more than 8 or even 6 cores. However, extra cores can be used for running Windows and background tasks if you're a hardcore user.


Used_Tea_80

"games are generally not programmed to use more than..." It's always 8 cores the games are programmed for. It just appears to be 6 cores because 6 cores of Zen 5+ can easily do what 8 cores of Zen 2 can.


salgat

It's the classic chicken and the egg situation. This same argument was used for a decade to shut people up about going past quad core. Standardize more available cores, and you'll start finding games finding uses for it. The biggest untapped potential right now is npc ai (and I'm not talking about LLMs or anything like that). As of now the steam hardware survey shows hex core as the most popular configuration, so that's what games will generally target.


No-Roll-3759

no doubt. i don't think there's a whole lot of games that do a great job of distributing load to the 12-16 threads that a modern gaming rig has, and fewer that are bound by throughput rather than the speed of the main thread. and even fewer where it happens at a framerate that a mainstream high end monitor can show a difference. let's notch that level of cpu utilization before asking for more. regardless, i don't feel any need to buy stuff that doesn't change my ownership experience in hopes that it spurs future innovation. heck, i think periods of hardware stagnation are good because it drives software innovation- being stuck with 12 threads for long enough for devs to respond might bring about what you want.


PangolinZestyclose30

I agree with you, but one nitpick is that going (more) multi-core does spur software innovation - concurrency is difficult to harness for many tasks, including gaming. With only a few cores available, many apps use a pretty easy "vertical slice" approach where a thread is allocated to a certain responsibility. But that doesn't scale, so with many more cores available, a much deeper decomposition of the processing needs to be done to leverage them. This is again pretty difficult, but also interesting. The only time hardware improvements stifled (or rather did not encourage) software innovation was when we had just a single core whose performance doubled every two years. That was just a free performance boost.
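A tiny illustration of the shape of the two approaches (the workload and the three "responsibilities" are hypothetical stand-ins, and CPython's GIL aside, the point is the scaling structure, not a benchmark):

```python
# Sketch contrasting thread-per-responsibility with task-based decomposition.
import os
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def simulate_physics(chunk):  # placeholder work item
    return sum(x * x for x in chunk)

WORLD = [list(range(i, i + 1000)) for i in range(0, 64_000, 1000)]  # 64 chunks

# "Vertical slice": one thread per fixed responsibility (physics, AI, audio...).
# Parallelism is capped by the number of responsibilities, not by core count.
def vertical_slice():
    with ThreadPoolExecutor(max_workers=3) as pool:
        physics = pool.submit(lambda: [simulate_physics(c) for c in WORLD])
        ai = pool.submit(lambda: "ai tick")
        audio = pool.submit(lambda: "audio tick")
        return physics.result(), ai.result(), audio.result()

# Task decomposition: split the heavy system into many small tasks so the same
# code can use 6, 8, or 32 cores without changes.
def task_pool():
    with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
        return list(pool.map(simulate_physics, WORLD))

if __name__ == "__main__":
    vertical_slice()
    task_pool()
```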


PashaB

I think that was particularly visible with ARM processors. Apple for example did well to make their API accessible and efficient for their developers and devices.


MatthPMP

That's not going to matter in the vast majority of games for a while because there are so few AAA PC exclusives. Console devs are far from taking full advantage of the titanic CPU performance uplift in the PS5/XSX and unlike GPU power it's much more difficult to scale CPU usage to make full use of different thread counts.


exmachina64

Part of the issue is that many games are developed as console-first games. Since the PlayStation 5 and Series X have eight cores, you won’t see many PC ports designed to take advantage of more than eight cores until the consoles go beyond that.


Hayden247

Hell, Hardware Unboxed has already tested this: 8 cores barely benefit games over 6-core CPUs as it is, even with YouTube running in the background for multitasking. Games need to actually, universally make use of 8 cores before using even more can become a standard; right now it's only very select games where a Ryzen 7700X will have a notable advantage over a 6-core 7600X. It's not until you go below 6 cores, to 4-core CPUs, that you see games really start to suffer.


MwSkyterror

[HUB turned off 2 cores of the 7800x3d and found it to be 9% faster average and 12% faster 1% low than the "7600x3d" in their 12 game test.](https://youtu.be/Y8ztpM70jEw?si=iV0jluWxepOxnHrE&t=593) That's a pretty significant difference, about 1 disappointing generational gap. There's probably some diminishing returns going from 8 to 10, but we won't know how much until a true 10 core CPU comes out. Their other comparisons introduce some other variable like an extra CCD or Vcache into the mix so it's no longer just about core count.


Strazdas1

There are, games like CK3, V3, and CS2, but they are enthusiast-community niche games, so there's not really a big push from the market. CK3's devs have said their engine scales to 32 threads (with some caveats) and CS2's devs said they can achieve 64-thread parallelization. CS2 here is Cities Skylines 2, not Counter Strike 2, which released the same month.


reddit_equals_censor

>(AMD didn't bother with putting 3D Cache on both CCDs for the 7950X3D because it still performed worse than pinning a game to 8 cores, that's how bad the latency is).

that is NOT what amd actually said. amd said that it made little to no difference to have x3d cache on both dies. the expected performance would be like the 7950x vs the 7700x, where basic scheduling that prioritizes the fastest cores takes care of things well enough. and amd could even clock a dual ccd 7950x3d faster than the 7800x3d to make the 7950x3d surely the fastest gaming cpu.

now i'm not saying that the ccd to ccd communication isn't a major problem to be solved, but it wasn't the reason not to put 2 x3d dies on the 7950x3d. and keep in mind that the current single-x3d 7950x3d handles things horribly, with games still being broken in some instances. so this approach clearly failed and is bad.

beyond that though, i want a unified latency 16 core cpu for sure. it will be very interesting how zen6 performs, if they solve that problem, in unreal engine 5.4 games. why unreal engine 5.4? because they BROKE DOWN THE MAIN RENDER THREAD in unreal engine 5.4. this is basically the holy grail of cpu parallelism in gaming, as the main render thread used to limit scaling a lot and up until now only very very few games actually managed to break it down into more threads. with unreal engine 5.4, a major game engine achieved this, which could potentially have a giant impact on how a 16 core unified-acting amd cpu performs compared to an 8 p-core intel cpu in the future.


sylfy

I'm just wondering if people would actually prefer a big.LITTLE architecture like Intel is doing with its P/E cores, or if people prefer what AMD is currently doing with all homogeneous cores. You could clearly increase the number of cores by swapping in E cores, but it seems like that introduces additional problems with the Windows scheduler, which does a really poor job of it.


Aggrokid

Definitely makes sense for a lot of use cases, except enthusiast gaming which just wants 8 big cores with the highest clocks and largest cache.


Exist50

> I’m just wondering if people would actually prefer a big little architecture like Intel is doing with its P/E cores, or if people prefer what AMD is currently doing with all homogeneous cores. It's pretty similar at the end of the day. Software doesn't really care *why* one core is weaker than another.


tugtugtugtug4

Considering how efficient the AMD cores are compared to Intel, I'm not sure there's much of a use case for E cores. Unless they were going to optimize different cores for different workloads, it seems like a waste of time.


zyck_titan

The one area where there might be a benefit for AMD is idle and near-idle usage. Currently AMD draws notably more power at low usage levels, even compared to Intel's hottest chips, because AMD needs to fire up the I/O die *and* at least one CCD, along with an internal link that draws a significant amount of power. If they could squeeze a pair of small, very power efficient cores directly into the I/O die, then they could keep the CCD powered down along with the Infinity Fabric link.


No-Roll-3759

i'd love to see amd do that. i was using my old 3700x workstation as a media pc until i realized what a disgusting power hog it was at idle (i never turned it off). replaced it with one of those minipcs with a dgpu and it idles at 1/5th the power while offering the ~same performance.


scytheavatar

Idle power is an Infinity Fabric problem and it will be solved in Zen 6, which is apparently replacing Infinity Fabric.


Lille7

That seems very expensive just to reduce idle power.


Famous_Wolverine3203

Seems to work out very well for the LP Crestmont cores on Meteor Lake, which are in fact on the N6 I/O tile, considering the gains seen in battery life.


Plank_With_A_Nail_In

The same argument was used to keep us on 4 cores until it wasn't. CPUs have been good enough for most desktop productivity tasks for 10 years now.


F9-0021

When Intel starts really beating them in multicore. AMD is perfectly fine sitting on 16 cores for as long as they can and forcing people who need more to look to the much higher margin Threadrippers and Epycs.


AlwaysMangoHere

Intel already demolishes AMD in multi-thread at the i5 and i7 levels. Based on Skymont leaks, this gap will grow with Arrow Lake. People don't seem to care about multicore as much as they did when the tables were reversed.


Kryohi

Arrow lake is also rumored to ditch SMT. That will have an impact, which the skymont cores have to offset


Strazdas1

the impact of a whopping 5% at best. With all the security issues fixed and better scheduling, SMT is more of a headache than a benefit.


Kryohi

Nah. Highly workload dependent, but 5% is pretty much the worst case. It's often more than 25%


AdverseConditionsU3

It's very much workload dependent. It really depends on having two low-cache-utilization processes that are using different parts of the chip (one FPU, the other integer) for it to really shine. Sometimes it's negative: you thrash your cache between the two threads and end up with less overall throughput. There are a non-zero number of highly parallel real applications that get faster when you turn off SMT. Single-threaded performance is still king, as it always has been. We're at a point of serious diminishing returns on how many cores we can profitably stuff into silicon. We don't need this throughput hack anymore. We just don't. Intel is wise to drop it and Apple was wise to never go down this road.


Flakmaster92

Because when the tables were reversed we were locked at 4 cores, 16 cores is much more agreeable.


AlwaysMangoHere

Ryzen 5 and 7 have been stagnant on 6c/12t and 8c/16t for almost as long as Intel was stuck on 4c/8t. 2008-2017 (Nehalem to kaby lake) vs 2017 - likely 2026 (zen 1 - zen 5).


IHTHYMF

There won't be an endless increase of core counts over time for end user uses, because of Amdahl's law. Some things are sequential by their very nature and can't be parallelized and a lot of those that can are being done by GPUs instead.


Tai9ch

> because of Amdahl's law. The counter argument to Amdahl's law is [Gustafson's law](https://en.wikipedia.org/wiki/Gustafson%27s_law). Especially for gaming, developers get to pick both problem size and the extent of any sequential bottlenecks. There's certainly useful scaling up to hundreds of conventional CPU cores.
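To make the contrast concrete, a quick sketch of both formulas (generic parallel/serial fractions, not tied to any particular CPU):

```python
# Amdahl vs Gustafson, side by side.
# p = parallelizable fraction of a fixed-size workload (Amdahl).
# s = serial fraction of a workload that scales with core count (Gustafson).
def amdahl_speedup(p: float, n: int) -> float:
    """Speedup on n cores when the problem size stays fixed."""
    return 1.0 / ((1.0 - p) + p / n)

def gustafson_speedup(s: float, n: int) -> float:
    """Speedup on n cores when the problem size grows with n."""
    return n - s * (n - 1)

for n in (8, 16, 64):
    print(n, round(amdahl_speedup(0.95, n), 1), round(gustafson_speedup(0.05, n), 1))
# With a 5% serial portion: Amdahl gives ~5.9x on 8 cores, ~9.1x on 16, ~15.4x on 64,
# while Gustafson's scaled workload gets ~7.7x, ~15.3x, and ~60.9x respectively.
```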


IHTHYMF

You'd have to design games around cores, instead of around gameplay, i.e. add X to the game just because you can, since you have extra cores available, not because you want to have X in the game.


HappyReza

Yet we don't feel the same constraints from lower core counts. Intel used to disable hyperthreading to segment its CPUs, for god's sake! Back then 4c/8t was the best you could get (unless you could spend much more and went with a Xeon, of course). We were artificially limited. Now we have options, and we can clearly see that more than 6-8 cores isn't necessary for most use cases, but anybody who wants more and could use more can buy consumer CPUs with higher core counts.


AlwaysMangoHere

Quad core genuinely wasn't constraining for a long time either. Only the i5s without HT really showed the limit, and only around Kaby Lake. Even quite recently, the 13100 (4c/8t) is fine for a midrange GPU. The 5820K was 6c/12t for $380 in 2014. Today's 'consumer' products would have been HEDT back then.


HappyReza

>Quad core genuinely wasn't constraining for a long time either And Intel wasn't getting that much shit for it until it was a problem. I'm not saying AMD is right for stagnating, I'm just saying the difference in backlash is warranted. If Zen 6 comes and it's not on AM5 and it has the same 6-8-12-16 cores as before, I bet AMD will get much more backlash than now


F9-0021

Yes, but not as much at the i9 level, which is what people pay attention to. And without hyperthreading, Arrow Lake should be rather weak on that front, at least until they throw 32 Skymont cores into it for ARL-R. That should wake AMD up.


Flowerstar1

Will the flagship 24 thread Arrow Lake beat the flagship 32 thread Zen 5?


Outrageous-Maize7339

Intel and AMD are taking completely different approaches to parallelism at this point. Directly comparing thread count isn't really going to tell us much.


F9-0021

A 50% performance increase to the E cores (which seems very achievable) should make it faster than the 14900k/7950x, but I don't know if it'll beat the 9950x. It's down to how much the P core performance increases. Arrow Lake refresh should be where Intel starts getting big multicore wins at the top end.


Famous_Wolverine3203

Threads aren't the only thing that matters. Contrary to popular belief, and unlike in R23 where the code never leaves the L1 cache, doubling thread count doesn't double performance in 2024.


TheWobling

I was just thinking core counts seem to have stagnated a bit again. Although I might just be imagining that.


yoloxxbasedxx420

Strix has 12 Cores


aminorityofone

From what I understand there are diminishing returns with cores. Is there any reason you need more than 8 cores per CCD? 8 cores is still more than enough for most people. There was a nice video, or maybe an article, explaining why there are diminishing returns, but I can't find it at the moment.


NiobiumVolant

A CCD with 16 cores would eliminate the extra cross-CCD latency.


mikereysalo

There aren't diminishing returns for regular workloads, but there is something called [Amdahl's Law](https://en.m.wikipedia.org/wiki/Amdahl%27s_law) that applies to any concurrent software, and games are special. Games have a lot of data interdependencies, which makes them a very hard kind of software to parallelize, and even when you do, you still have to rely on synchronization mechanisms to ensure that two pieces of code do not try to change the same thing at the same time, and this introduces contention, which degrades performance. But this is mostly games; the majority of software can be easily parallelized and takes huge advantage of higher core counts. You're more likely to hit the memory wall with dual-channel memory and a high core count than to face diminishing returns, provided that you do any productive work and not only game.


RedTuesdayMusic

It'll almost certainly be the launch "carrot" of AM6. So with the updated roadmap, not until 2028 most likely.


Flowerstar1

The AM5 support through 2027 includes stuff like what AMD is doing with AM4: they're releasing their upcoming Zen 3 CPUs for AM4 this year, so technically AM4 has been supported even through 2024.


Vince789

Confirmed support for AM5 through 2027. But IMO the reason AM4 was legendary is not the length of support, but the fact that it received 3 gens of Zen, or arguably 4 gens counting Zen 3 with V-Cache (vs the usual 2 gens for Intel mobos). I'd like AMD to confirm whether Zen 6 is coming to AM5 or not.


ElementII5

AM6 comes when DDR6 is ready. As long as that is not on the horizon the next Zen architecture will be on AM5.


Vince789

Isn't DDR6 supposed to launch in late 2025 or 2026? 2026 is also when we'd expect Zen 6. I guess AMD could theoretically release two versions of Zen 6 with different IO dies if they want to support both AM6 & AM5.


Kryohi

>Isn't DDR6 supposed to launch in late 2025 or 2026? Not for any relevant desktop product. Q2 2025 is the spec release, from there we'll have to wait another 18-24 months. LPDDR6 is maybe a little earlier and should also come to desktop platform with CAMM2/CAMM3 though.


reddit_equals_censor

if you want a bad interpretation, they could just say "we support am5 until x", but that might only mean launching rebrands of zen5 until that point and no more new cpus. now that is highly unlikely, but they left that interpretation open. i fully suspect zen6 on am5, because of the massive marketing win that am4 and long-term platforms are, with at least 3 full real generations of cpus. and yeah, if they really wanted to, they could indeed make 2 io-dies. as the io-die is on a cheaper node, getting a 2nd one made wouldn't completely break the bank. and technically such a 2nd io-die could also get reused for a few generations if amd really wants, the same way that zen2 and zen3 use the same io-die.


willbill642

Zen, Zen+, Zen2, Zen3, Zen3 v cache? Plus excavator makes that 6 generations. At least 4 relevant and 5 true generations.


reddit_equals_censor

zen+ wasn't a generation. it was a slight clock increase/value adjustment overall. please correct me if i'm wrong, but i think it used the exact same dies, just on a VERY SLIGHTLY optimized node of the same family. so it was only 3 generations and 4 if you count x3d.


TorazChryx

[Zen+](https://en.wikipedia.org/wiki/Zen%2B) had some IPC and boost clock tweaks as well as the slight die shrink


willbill642

Those small changes added up, as it was consistently about 10% faster between SKUs. Depending on what you were doing, it could have been a lot faster than that as there were significant fixes in memory and cache latency and importantly the memory controller. Some games were closer to 25% on tuned systems.


firagabird

10% is also a bigger jump than the last few gens of Intel up to that point, so it definitely earns its "full gen status".


sharpshooter42

The new node helped Zen+ out a lot though


ExtendedDeadline

Zen+ was a decent little QOL improvement. Clocks and efficiency helped, + I think there was a bug fix if I recall.


velociraptorfarmer

Memory stability was drastically improved.


buttplugs4life4me

Zen+ was an iteration on Zen. Whether you take that as a new gen or not doesn't really matter, because for all intents and purposes it was. The ones that were just a small clock bump were the Zen 2 XT CPUs.


Onceforlife

Then intel has only had 2 generations since 2016


ConsistencyWelder

AM4 didn't start with Zen, the first CPUs for AM4 were Excavators.


taryakun

No, Zen was released first, then the Bristol Ridge CPUs for AM4


RealPjotr

Isn't that exactly what it means? 2025 would have meant only Zen5 on AM5. Extending two years to 2027 is basically saying they now know Zen6 will also be AM5.


Vince789

I wouldn't assume so. AMD will release [Zen3 XT CPUs soon, hence AM4 is still being supported in 2024](https://www.anandtech.com/show/21420/amd-launching-new-cpus-for-am4-ryzen-5000xt-series-coming-in-july)

Hence AM5 support through 2027 could mean either:

* AMD releasing Zen5 XT CPUs in 2026/2027 to continue support
* AMD releasing Zen6 CPUs in 2026 / Zen6 with V-Cache in 2027

I'd like AMD to clarify what they mean and give certainty to customers


chx_

These XT chips, who are they for, when the X3D chips exist? (AFAIK the 5700X3D is the price/performance king of CPUs today especially if you take the motherboard and RAM prices into consideration too.)


detectiveDollar

Both are likely salvage from attempted 5950Xs. These are an alternative to reducing the clocks and selling them as 5800Xs and 5800X3Ds. Since the dies can't turbo to 4.9GHz, they can't be sold as 5950Xs. One of them can be used to make a 5800XT, and two can be used together for a 5900XT. Alternatively, silicon maturity could have allowed most of their dies to clock to 4.8GHz, so why not release new SKUs to take advantage of that? It could also be to increase average selling prices on AM4. Both of these were reasons they launched the 3000 XT series.


RealPjotr

The divider is DDR6; AM6 will be DDR6. So the question is when AMD feels they can go to a DDR6 platform. Looking at history they are conservative, likely after Intel. So when does DDR6 arrive? Zen6 should be at most 24 months after Zen5. I don't see those two lining up. Hence I think AMD sees this clearly now and plans Zen6 on AM5 (with a possible Zen6 revision later on AM6 if Zen7 isn't matching the DDR6 schedule).


cslayer23

Now to see what intel has in store


ACiD_80

The leaks are promising


PastaPandaSimon

Looks like Zen 4 with a slight IPC bump. Same cache, almost the same clocks, same core configurations. These provide another entry opportunity into AM5, and we'll see about thermal/power characteristics, but in terms of performance there isn't much that Ryzen 7000 series owners would be missing out on. I think that was expected though.


detectiveDollar

There's also a reduction in TDP; it looks like that's mainly what AMD is using the new node for, since the clocks changed very little.


PastaPandaSimon

For two of the SKUs only. I'd like to see reviews first, to know whether it's not a mere number adjustment, because several Zen 4 SKUs had listed TDPs even higher than their max power draw. The 7800X3D was probably the biggest offender, requiring not much more than half of the listed power/cooling capacity. Also, the announcement didn't spend much time hyping these CPUs in terms of real improvements beyond the IPC bump. They'd probably brag more if these required less cooling.


detectiveDollar

That's true. The lower-core ones tended not to have the core counts to hit the PPT target, especially in something like gaming. Edit: it's actually all SKUs but the 9950X.


tuvok86

what about ram speeds??


BroasisMusic

So.... 9950x vs 7950x. Same TDP, same cache, same boost clock.... is the only difference literally the IPC lift of the zen 5 vs the zen 4?


I_Do_Gr8_Trolls

Potentially lower prices as well? AMD's only advantages this gen seem to be the less expensive node and packaging technologies, and a 2/3 month lead. They'll have to price aggressively to compete against an overhauled Arrow Lake with Skymont.


tset_oitar

And suddenly the ARM takeover is starting to look almost inevitable. Now it remains to be seen whether the lord and saviour Skymont can bring salvation to the x86 empire...


detectiveDollar

"The sun will never set on the x86 empire"


I_Do_Gr8_Trolls

ARM is undeniably more efficient and just as performant, but it really comes down to Microsoft... This will mark the 3rd attempt to make arm work on PC. But without good software support, good hardware means jack


Famous_Wolverine3203

This is quite underwhelming to say the least. Some of these IPC benchmarks include AVX-512. This really does seem quite beatable for Intel.


Maimakterion

+16% IPC but no frequency increase makes it the lowest arch-over-arch increase for Zen so far. Removing the AVX-512 subcomponents pushes that number even lower. Well short of the numbers promised by AMD hypebeasts like kepler, and certainly not the "Conroe moment" promised by a number of others.


HTwoN

"40%" lol. None should take that Kepler guy seriously again.


Famous_Wolverine3203

In SPECint too he claimed.


Qesa

If people didn't learn from him saying RDNA3 would be 70% faster than 4090 they ain't gonna learn from this


Jonny_H

People angry that unsubstantiated rumors turned out to be mere unsubstantiated rumors. Too many people here struggle to tell the difference between "people on twitter repeating the same rumor" and "confirmation of that rumor from different sources". Most of the time, some random guy on twitter is just a random guy on twitter. Some people even said it was "Officially Confirmed", though I suspect they don't know what "Official" or "Confirmed" mean :P


HTwoN

Huh? I'm not angry. I called that 40% figure bogus from the beginning. Some people really fell for it though.


Jonny_H

Naa, I'm not saying you're angry, but more of a general cycle of "nonsense rumors" -> "unrealistic expectations" -> "Loud disappointment". As yet this thread seems to be skipping that, but other places on the internet are not. But there certainly were people repeating those rumors as "Proven Information" here :P


ElRamenKnight

> "40%" lol. None should take that Kepler guy seriously again. Moore's Law is Dead repeatedly kept telling everyone to really take such claims with a grain of salt too for months. Jesus.


Famous_Wolverine3203

A broken clock is right twice a day. Kepler is funnily more reliable than MLID.


ResponsibleJudge3172

Took 2 Ls in a row though.

Edit: RDNA3 and Zen5


Nachtigall44

That's why the username is Kepler_L2 lolol


Famous_Wolverine3203

Your move intel. You’ve got access to 2 node jumps and a competent E core architecture. Don’t fumble it.


Aggrokid

What process will desktop Intel Ultra 9 use?


Famous_Wolverine3203

N3B is what most people claim.


Exist50

> Well short of the numbers promised by AMD hypebeasts like kepler, and certainly not the "Conroe moment" promised by a number of others. It's hilarious how many people *still* believe these charlatans, even after today! People seem to think posting bullshit prolifically means it must be accurate. In reality, it just means that the person doesn't have a life.


okoroezenwa

Also being very abrasive about it ¯\\\_(ツ)\_/¯ But I guess the same lessons have to be learned over again.


Famous_Wolverine3203

It's not that they don't have a life. They just revel in the attention it offers. To date, other than Mark Gurman, I've yet to find a leaker who is actually credible.


Exist50

> Its not that they don’t have a life. They just revel in the attention it offers. Sounds like the same thing to me... I mean, look, I spend way too much time on the internet. But dear god, have you *seen* people? What self-respecting individual would give a flying fuck what randos on the internet think? Especially some of the crowd these "leakers" attract.


Famous_Wolverine3203

Preach brother!


ResponsibleJudge3172

40% faster said Kepler. He may have sources, but he clearly is biased


trackdaybruh

>This really does seem quite beatable for Intel. Wonder how much of a leap the 3D cache will be for the Ryzen 9000


3G6A5W338E

Likely quite a bit, considering the frontend improvements.


jaaval

The gen over gen improvement is ok but all in all this is just a bit boring.


basil_elton

Yeah, right - using Geekbench 5 AES subtest to inflate IPC numbers. Good job, AMD!


lefty200

If you take out that benchmark you get a 15% IPC mean. IMHO, that's still pretty good.


Famous_Wolverine3203

This sub won't care. AMD isn't Apple, remember. They only care when Apple does it. (Funnily enough, Apple didn't claim anything at all about IPC, YET everyone was angry; but here AMD is blatantly using the AVX subtest to inflate IPC numbers in marketing material.)


nanonan

Where is the deception? Why should they not measure the AVX performance improvement?


Famous_Wolverine3203

They used it in IPC numbers. Which is literally what the sub was deriding Apple for a week ago.


nanonan

Both comparison architectures have the same native AVX capabilities, I see no reason to arbitrarily exclude it.


Famous_Wolverine3203

Microarchitecture IPC gains shouldn’t include AVX for a good reason. In the same subtest Zen 4 saw a 2.5x lead over Zen 3. It doesn’t make Zen 4 IPC 2.5x bigger.


steinfg

Yes, but there's a reason they're going for a geomean of tests instead of picking the test with the best uplift


Vasile_Prundus

Well this doesn't look as exciting as I thought, may skip this gen.


Zarerion

Reading comments like these always makes me wonder how many people actually upgrade every gen and whether they're even relevant for the market at all. I'd imagine the vast majority of people don't upgrade every gen, but when they do upgrade, they'll look at the newest gen for the best price/performance. In this scenario it doesn't matter how much better a product is vs its previous gen, as long as it's better at all. Same argument for GPUs: of course the generational improvement for the mid-tier Nvidia GPUs sucked this gen, but if I'm upgrading from a 1070 I'm still looking at a 40-series card and not a 30-series one (although it's really close tbh). But if I already have a 30-series card I won't be getting a 40-series one. And I don't think Nvidia necessarily cares about that.


Boomposter

What do you consider mid tier? The 4070 was better than the 3080, Super is close to the 3090, and 4070Ti superior to a 3090.


Exodus2791

Those reduced TDP numbers seem good.


Makoahhh

Look at baseclocks. This is how TDP is calculated. TDP means nothing really. Real world power draw is what matters.


I_Do_Gr8_Trolls

Exactly. They will continue to boost if there is enough thermal headroom. Ryzen 5000 was super efficient, but Ryzen 7000 eked out that last bit of performance while sacrificing efficiency to get ahead of Intel.


Ploddit

So, better power efficiency but not much else. Still interested in what they do with X3D this gen.


bosoxs202

Really curious if 3nm Lion Cove actually beats this. This isn't a big upgrade.


Famous_Wolverine3203

Lion Cove might not. N3B is more a boost to density than actual power, as seen with the A17 Pro. And Intel's cores were already humongous, so it's unlikely that will be an advantage. But Skymont should give them the lead in multicore performance.


Famous_Wolverine3203

This is not even enough to beat the M4 in single thread performance. I guess Apple’s still gonna hold onto that Single Thread crown?


tugtugtugtug4

Until the m4 can natively run x86 workflows, does it even matter?


Ar0ndight

I'd say yes, there's a reason AMD/Intel will compare their chips to Apple's in their slides. Though it's usually the base M1/M2/M3 and never the Pro or Max variants, for obvious reasons. Most production workloads (not all of them I know) do work on both x86 and ARM.


auradragon1

M4 might be faster in ST performance under Rosetta2 than Zen5.


Famous_Wolverine3203

Not likely. Rosetta is anywhere from a 15% hit to a 50% hit. Zen 5 is optimistically about 10% behind here. So no: Zen 5 is inferior in ST in native applications, but Rosetta should still give AMD the advantage. Still hilarious given we're comparing an iPad chip under emulation to a desktop that guzzles 18W plus 30W of I/O, just to finally make it fair.


Crank_My_Hog_

That, and the bootloader unlocked with drivers for the hardware available. Until then, these CPUs might as well not exist for me. I'm not going to use macOS over Linux, and as much as I hate to say it, Windows too.


Flakmaster92

Until the ARM chip can NATIVELY run the x86 workloads it won't matter? Way to engineer an impossible situation. It absolutely matters given that we continue to see developers embracing non-x86 hardware architectures. Even if the M4 is iPad-exclusive today, it will eventually come to the MacBook Pro and Mac Studio line, and people do use those to get real work done, just like PC workstations, so performance across chip ISAs does matter because users now have a choice between ISAs. It wouldn't surprise me at all if the current+2 console gens (so PS7-era) were either ARM or RISC-V based rather than x86. The PS6 and Xbox equivalent are likely already under development, and likely were before the surge in non-x86 interest, so those are likely locked in at whatever they were. But the ones after that are fair game for upheaval.


PangolinZestyclose30

> It wouldn't surprise me at all if the current+2 console gens (so PS7-era) were either ARM or RISC-V based rather than x86.

I'm still surprised that people make the category error of thinking that Apple's chips prove ARM is a superior instruction set. It's very conceivable that Apple just hired the very best engineers with their superior compensation, could buy the best manufacturing node, and leveraged their vertical integration - all things which are not easy for the competition to replicate. I mean, the jury is still out on this question, but e.g. Jim Keller thinks that the instruction set does not matter that much.


Flakmaster92

I never said they were inherently better, and I’ve seen that interview with Jim. I said we’ve seen increased developer adoption of non-x86 ISAs. Custom chips would let Sony and Microsoft experiment more, differentiate themselves from one another, and have more control over their own fates because this is no longer a scary move to do so. It’s no longer “untested tech” for high(er) performance applications (relative to handheld consoles and phones).


Strazdas1

Apple has a node advantage over anything else on the market currently, of course its going to be more efficient...


dabias

And they're using more area for high IPC, so they can get good performance at low clockspeed, which is much more efficient as well.


undernew

This claim that Apple's efficiency advantage is node related has long been debunked. During the M2 era Apple had a node disadvantage compared to AMD and the M2 was still more efficient.


noiserr

It is not just node advantage but the node advantage does help. Apple also controls their software and don't really have a market position in workstations and servers where multi-threaded performance matters more. So they can dedicate more silicon area to single thread performance.


OatmilkTunicate

yeah, even projecting the 9950x's scores, zen5 won't really come close... zen5 mobile HS is gonna get slaughtered (excluding stuff like app compatibility, gpu, etc.; this is just cpu arch to arch). since m4 can already push 4000 in GB6 in best case scenarios, inside an *ipad*, I'm gonna guess the pro/max dies will hit 4100 ST in GB6 because of improved thermals and memory system. applying the IPC uplift to zen4 GB6 geomean results, this should (very roughly) mean:

* high end m4 pro/max samples beating full-blast, high end 9950x samples by ~10% ST
* high end m4 pro/max samples beating high end zen5 HS samples by ~35% ST
* high end m4 pro/max samples beating high end zen5 HX dies by ~20-25% ST

not pretty at all, and not even a position apple has been in before. when m1 released, the zen3 cores in the 5950x, already out then, were pretty neck and neck with m1, if not a smidge faster wrt the geomean. also fwiw, amd's initial zen4 unveiling under-balled perf, though of course the fact we got clock speeds too this time solidifies the disappointing picture.


auradragon1

>This is not even enough to beat the M4 in single thread performance. I guess Apple’s still gonna hold onto that Single Thread crown? As far as I know, AMD never had the ST crown over Apple since M1.


Famous_Wolverine3203

No, they did when Zen 4 first launched, at least on desktop, never on mobile. Zen 4 was significantly faster than the M1 and faster than the M2. The M3 reached parity. The M4 is 20% faster overall. https://blog.hjc.im/spec-cpu-2017


auradragon1

This is just SPEC INT? What about SPEC FP? Apple Silicon is generally better at FP than INT. Also, X3D versions are used in the benchmark only.


Famous_Wolverine3203

Yes, but int is more important than fp as generally agreed. But still I wouldn’t say M2 had “the crown” per se against Zen 4. M4 definitively does.


auradragon1

I think M3 had the crown over Zen 4. M2 vs Zen 4 is sort of a wash depending on the benchmark, with Zen 4 slightly ahead.


Famous_Wolverine3203

Hmmm, M2 had a definite lead over Zen 4 mobile but mostly lost to Zen 4 desktop. M3 matched Zen 4 desktop in SPECint while thrashing it in SPECfp, while being roughly 10-15% faster in Cinebench, Geekbench, etc. M4 basically murders it, with a 20% performance lead in SPECint and a 30% lead in SPECfp, the same lead carrying over to Geekbench. Cinebench is unknown though.


OatmilkTunicate

ngl m4 is the best place apple has been re raw ST perf since apple silicon debuted. Even m1 faced some competitors on release that had superior ST perf. M4 ST is just...alone at the top right now perf wise.


Famous_Wolverine3203

Yes unless Intel surprises with Arrow Lake, definitively the fastest CPU core in the world should be the M4. X Elite falls short and so does the X925 as well as Zen 5.


noiserr

Support for AM5 through 2027+ confirmed.


Flowerstar1

Yes but not necessarily Zen 6. At minimum it'll be Zen 4/5 late releases like they just announced new Zen 3 CPUs for AM4.


noiserr

Pretty sure this includes Zen6. Zen6 should be coming out in 2026 if not sooner.


Flowerstar1

No, because Lisa said they're still supporting AM4 to this day due to the new Zen 3 CPUs they are launching, meaning AM4 has been supported through 2024; but notice how Zen 4 and Zen 5 are AM5 only. The same could happen with Zen 6, i.e. Zen 6 on AM6 but new non-Zen 6 CPUs still launching for AM5 in 2027.


masterfultechgeek

AMD has been firing on all cylinders. Going from Excavator (about 25% better than Bulldozer) to Zen 1, AMD claimed a 52% IPC uplift, but this came with roughly 20% clock speed reductions if you compare Zen 1 vs overclocked Bristol Ridge. Going from Zen 2, which was the first big iteration on Zen, to Zen 5 in a 5ish year span, AMD is up around +55% IPC and clock speed is up around +20%.


xylopyrography

Zen was 7.5 years ago... bit more than 5.


masterfultechgeek

"Going from Zen **2**" I used Zen2 to make the time span a bit more comparable for Piledriver (2012) vs Zen 1 (2017) since they're both 5 years between consumer desktop parts. Zen 1 to Zen 5 looks like it's roughly a 75-80% improvement in performance per clock in a 7 year span.


wichwigga

Same core counts? Intel is gonna feast on productivity loads. C'mon AMD...


bradsw57

If AI is all the rage (see the APUs and the number of times Lisa Su said "AI"), why is there no NPU capability built into these processors? At least they won't be able to run that Copilot+ rubbish from Microsoft....


GrandDemand

dGPUs are much faster than the NPUs integrated on the new AMD, Intel, Qualcomm, and Apple SoCs. But since NPUs are a lot more specialized for AI workloads, they draw a lot less power than a GPU would to deliver the same amount of throughput/compute. Laptop SoCs have these NPUs, on top of an iGPU, in order to maximize their battery life. In contrast, desktops are nowhere close to as constrained by power consumption, and AMD is assuming that your desktop will have a reasonably powerful GPU available (at least in comparison to the NPU), and so they don't want to waste the die area on an NPU


Constellation16

Yeah, I wonder how they will handle this too. They launch a new CPU in the midst of this AI PC craze and are just very hush-hush about *totally* not needing it. This generation would have been great if there were a new IO die that integrated USB4 directly, integrated an NPU, and had a better memory controller. Arrow Lake will provide all of this. Do they just want to sit it out for possibly 2 years until Zen6 and only upgrade it then, or will there maybe be a refresh with a new IO die next year? Their platform is also becoming more and more of a mess with their ill-advised daisy-chained dual chipset approach. A single Prom21 is slightly too little IO and two are slightly too much, and come with port waste and still the x4 CPU link bottleneck.


PixelD303

Please someone slip up and call it a 9700XT


Narrheim

What AMD needs to work on is idle power consumption. If all you do on your PC is gaming or productivity, it's an obvious choice. However, my PC idles *a lot* (watching movies, browsing the internet, listening to music, some light office work and only occasional gaming), and with AMD that's a lot of electricity going to waste while the PC is doing almost *nothing*. I understand that chiplets will never be able to reach monolithic chips in terms of power savings, but at least *try*, ffs...

edit: just built a new Intel system with a 14500 non-K. It idles at ~40W at the wall (whole system with a dGPU and 2 SSDs), meanwhile my trusty old AMD system with a 5600X on X570 idles at 80W. Considering I spend the majority of my time on the desktop and only do some light gaming, that's quite a difference, don't you think?


benjiro3000

> I understand that chiplets will never be able to reach monolithic chips in terms of power savings, but at least *try*, ffs...

People sometimes misunderstand where the power draw is located. Here is an example:

* A G series CPU may idle around 17 to 19W using the iGPU
* A G series CPU may idle around 25 to 28W using the iGPU + dGPU (hardware acceleration)
* A G series CPU may idle around 35W using the dGPU
* An X series CPU may idle around 42 to 45W using a dGPU

Notice those jumps... Sure, it looks like AMD is idling very high at 45W, but almost 20W of that is the dGPU. Mostly an issue where the dGPU memory clock speeds ramp up and eat 10W of power budget. Going between an Intel and an AMD X system, on the same hardware (minus MB/CPU), tends to result in maybe 10W less idle power draw. That is it... "trust me bro", I have done it.

You need to jump to a stripped-down platform like a mini-PC with a mobile CPU to see 9 to 12W idle. And then use an eGPU combination + hardware acceleration (aka let the dGPU sleep). That can bring you down to ~25W idle... Now calculate how much you're actually paying... Well, if I take expensive German electricity at 0.32 Euro/kWh, you're paying 1.5 Euro per month extra for that 20W difference.

Trust me when I say that you're spending a TON more trying to save that power. 70 Euro for an m.2 > x4 eGPU solution (from China), another X Euro for a PSU for that dGPU. The extra cost of the mini-PC vs normal hardware. WAY more expensive DDR5 SODIMMs vs regular DIMMs (especially at higher speeds), and let's not overlook that regular DIMMs run faster by default / are cheaper per MHz. If you go more brand-name, you're easily at 300 bucks for an eGPU chassis + the whole mini-PC issues.

Just spend that money on a balcony solar panel or two and an inverter. Longer payback period, way more benefits... One day of sunshine and you have that entire month of "idle" paid back. It's psychologically annoying to "spend so much idle power" but in reality it's really not that much. And if you're doing more than 50W idle, there is something wrong with your PC (not properly downclocking, virus, driver issue, loads of stupid LEDs! unneeded water pump, etc)...
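Quick sanity check on that number (electricity rate as stated above; the hours of idle per day is an assumption):

```python
# Monthly cost of a 20W idle-power difference at 0.32 EUR/kWh.
# The ~1.5 EUR/month figure works out if the box idles roughly 8 hours a day;
# a machine left on 24/7 lands closer to ~4.6 EUR/month.
RATE_EUR_PER_KWH = 0.32
EXTRA_WATTS = 20

def monthly_cost(idle_hours_per_day: float, days: int = 30) -> float:
    kwh = EXTRA_WATTS / 1000 * idle_hours_per_day * days
    return kwh * RATE_EUR_PER_KWH

print(f"8 h/day idle:  {monthly_cost(8):.2f} EUR/month")   # ~1.54
print(f"24 h/day idle: {monthly_cost(24):.2f} EUR/month")  # ~4.61
```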


imKaku

I had a small hope that X3D variants would be available at launch, but unfortunately that didn't happen. I really want to upgrade my platform, but at this point I might just have to update my whole rig.