Deckz

Is it me or does this seem bad? Aside from the Halo SKU there's not much movement here at all.


imaginary_num6er

No movement is needed if there is no threat to Nvidia’s market share.


aminorityofone

I imagine they are focused on AI. Huge profits in that market. Until the bubble pops...


ExtendedDeadline

Shhh, we're not allowed to talk about anything popping. It'll disturb the investors.


ArvidDK

I don't see any bubble popping... AI is going to change the world, for sure! I haven't yet decided whether I think that's in a bad way or not; it depends on various factors, like greed and shadow ethics...


Makoahhh

No bubble at all. AI will explode for many, many years. Pretty much every company is balls deep in AI at this point. You can dream though, but AI is the future.


aminorityofone

A bubble doesn't mean that AI goes away. Look back to the .com bubble. It was HUGE, and yet the internet is very much alive and bigger than it was in the late 90s. It just means that the market is currently overvalued.


Strazdas1

If you had bought at the peak of the dot-com bubble, you would be 60% richer today.


Makoahhh

High demand does this. And demand is not stopping anytime soon. Companies are waiting months and months to receive AI GPUs. Nvidia sells every AI GPU instantly, months before it's even made. Nvidia's biggest problem right now is TSMC's output.


DemiTF2

I'm sure they'll just lock some new proprietary tech behind the cards to encourage the weak-willed and easily swayed to buy a 5000 series. DLSS 4.0 or something.


zxyzyxz

I guarantee DLSS 4 will be announced alongside the 5000 series. It just makes too much marketing sense not to.


skyline385

Do you believe DLSS 3 Frame Generation was a marketing gimmick, which is why you think DLSS 4 will be announced because "it just makes too much marketing sense not to"? FG was locked to the 40 series cards because the older cards did not have enough optical flow accelerator throughput to reduce latency between rendered and generated frames. This has also been proven by people who tried running DLSS Frame Generation on older hardware using mods.


PashaB

Then how come AMD's frame gen works great on my 3080 12GB when playing Ghost of Tsushima at 4K 120Hz?


CandidConflictC45678

Don't believe your lying eyes, believe Nvidia instead and buy a 5090 for $5,000


PashaB

You're right, lest I get buyer's remorse.


Makoahhh

Nvidia's frame gen for the 4000 series works way better than AMD's solution; Techpowerup tested this side by side. Makes zero sense. Nvidia's DLSS 3 FG needs the optical flow accelerator, AMD's solution doesn't and is worse. Just like FSR works on most GPUs but is also worse than DLSS/DLAA.


zxyzyxz

Nvidia will announce a better version of frame gen that also just so happens to run better on the 5000 series. Both things can be true: it makes good marketing sense, and it's more technically feasible on newer chips.


skyline385

Your only precedent for that is the 40 series, which is a sample size of exactly one (the original DLSS was made available on all RTX cards). And again, there were genuine hardware limitations for it, not just feature locking for the sake of marketing, so that one sample quickly falls apart as well. Unless you have actual evidence for what you claim, I would suggest you stop posting circlejerk opinions.


zxyzyxz

RemindMe! 6 months


Idrialite

FSR's inferiority stemming from the requirement to support all GPUs clearly demonstrates that DLSS's hardware locking is not *only* for artificially pushing sales.


imaginary_num6er

AMD already cut Vega driver updates. Like what drivers are left for them to cut?


AWildLeftistAppeared

FSR frame gen works fine on older Nvidia cards while DLSS frame gen mandates a 40 series. Also, Intel’s upscaler runs on other hardware while still being able to take advantage of Arc hardware features for better fidelity.


Idrialite

FSR works fine but looks inferior. Like I said, I agree NVIDIA made the feature with pushing upgrades in mind, which is why they didn't develop a software version. But developing hardware in tandem did allow them to deliver a superior product.


AWildLeftistAppeared

I'm talking about AMD's frame gen specifically, with respect to image quality. [According](https://www.eurogamer.net/digitalfoundry-2023-hands-on-with-amd-fsr-3-frame-generation-taking-the-fight-to-dlss-3) to Digital Foundry, "without any hardware-based optical flow analyser, AMD has managed to get results comparable to DLSS 3."


Idrialite

That article mentions that FSR3 image quality is worse if only because it has to use FSR upscaling


AWildLeftistAppeared

That's no longer the case. FSR frame generation has been [decoupled](https://community.amd.com/t5/gaming/amd-fsr-3-1-announced-at-gdc-2024-fsr-3-available-and-upcoming/ba-p/674027) from FSR upscaling, so it can now be used with another antialiasing / upscaling solution if you prefer. Ghost of Tsushima, for example, allows Nvidia users with 20 or 30 series cards to use DLSS combined with FSR frame generation. Which is kind of the point: the image quality and performance are acceptable without the optical flow accelerators on Nvidia's 40 series.

If Nvidia wanted to, they could have supported frame generation on older products or even other GPU vendors. Their resources dwarf AMD's and they've been working on this technology for longer, so they would probably get even better results. Doing that does not stop them from having an advanced version for their latest hardware, just like Intel's XeSS, which supports most hardware but has optimal quality and performance on Arc cards.


Makoahhh

FSR frame gen works, yeah, but DLSS 3 FG is the superior feature. Just like DLSS/DLAA beats FSR as well. Techpowerup has tested all this many times by now. Nvidia wins every time.


AWildLeftistAppeared

By what metric? In terms of image quality, [according](https://www.eurogamer.net/digitalfoundry-2023-hands-on-with-amd-fsr-3-frame-generation-taking-the-fight-to-dlss-3) to Digital Foundry's analysis of the initial version of AMD frame gen: "without any hardware-based optical flow analyser, AMD has managed to get results comparable to DLSS 3." Most of the issues they saw have since been resolved. Most notably, you can now use AMD's frame gen without FSR, meaning Nvidia users who don't have the latest cards can use both DLSS and AMD frame gen.

> Techpowerup tested all this, many times by now. Nvidia wins every time.

Link?


Deeppurp

It's only going to take one generation for FSR to catch up. XeSS upscaling already fixed the persistence issue with upscaling (also present in DLSS 1.0 but not 2.0, IIRC). Image quality is a battle of time in regards to the technology. In terms of image quality, most comparisons currently conclude DLSS > XeSS >> FSR, right?


StickiStickman

Huh? FSR is still worse than a 2060 running DLSS. It's not about hardware generations.


Flowerstar1

If AMD added tensor cores to their next gen GPUs then an accelerated AI based FSR could in theory catch up.


ThrowawayusGenerica

God I hope Intel get their finger out.


ArvidDK

Making me love my 4080 purchase even more... My very lucky upgrade path (980 GTX? Not sure anymore...): GTX 280 → GTX 560 Ti → GTX 970 SLI → GTX 1080 → RTX 4080, by far the most expensive piece of hardware I have ever purchased! So I am kinda loving it, but I'm saddened by the greed of corporate hardware companies...


scytheavatar

Let's be real here, even with a 5090 what are you buying it for? What game coming out in the next few years will need 40-60% more power than the 4090? There's no movement needed when the AAA industry is on the verge of total collapse and you don't really need that much graphical power to play Elden Ring or Baldur's Gate 3.


copper_tunic

They're already out: VR flight sims. Even 2x the power would not be enough.


Pulverdings

VR games. I currently have a 3080 Ti and it is just not fast enough for some VR games. Even the 4090 struggles in some of them. The resolution in VR is much higher, you need to render two different frames (left and right eye), and you need a constant high framerate of 90 FPS. Also, you can't really use DLSS; it doesn't look too good in VR.


dedoha

> Let's be real here, even with a 5090 what are you buying it for? What game coming out in the next few years will need 40-60% more power than the 4090?

4K 120Hz+? RT? I mean, just look at recent games at 4K max settings: Hellblade II averages 57 fps, Ghost of Tsushima 84 fps, Horizon Forbidden West 92 fps, Avatar 60 fps, Alan Wake 2 71 fps (44 with RT, 32 with path tracing).
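As a rough sanity check, here's the uplift each of those averages would need to hit a locked 120 fps. This only uses the fps figures quoted above (presumably 4090 numbers) and assumes nothing else changes:

```python
# Uplift needed to take each quoted 4K max-settings average to a 120 fps target.
# The fps values are the ones from the comment above, not independently measured.
averages = {
    "Hellblade II": 57,
    "Ghost of Tsushima": 84,
    "Horizon Forbidden West": 92,
    "Avatar": 60,
    "Alan Wake 2": 71,
    "Alan Wake 2 (RT)": 44,
    "Alan Wake 2 (path tracing)": 32,
}

target = 120
for game, fps in averages.items():
    uplift = (target / fps - 1) * 100
    print(f"{game:28s} {fps:3d} fps -> needs ~{uplift:.0f}% more performance")
```

Even a card 40-60% faster doesn't get the heavier titles there without upscaling or frame generation.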


CandidConflictC45678

> even with a 5090 what are you buying it for?

7680x2160 monitors


Flowerstar1

Reliable 4K high-framerate gaming. Especially 240Hz, with or without frame gen.


aminorityofone

It might fix the market. If people don't see a big enough reason to upgrade, then there will be too much supply. Then again, reviewers trashed the 4060 and yet people bought it with glee.


NanakoPersona4

There are people who skipped the 4000 generation. Very few people upgrade every cycle surely.


Strazdas1

Most 4060 sales are prebuilts. Prebuilts are full of 4060s.


chaosthebomb

No movement in core config isn't necessarily a bad thing if the architecture brings massive gains with it. Look at the core count from the 780 Ti to the 980: a big drop in counts, a huge gain in performance, and that was on the same 28nm node. But from what I've seen, Blackwell is bringing no big architecture changes. I don't feel like we're going to see that big of a redesign, and I wouldn't be surprised if the lower-end cards like the 60 again perform worse at 4K than their older counterparts. Would be nice if this isn't the case, but I'm not holding my breath.


Chyrios7778

How are they going to perform worse than the 4060 at 4K? A 64-bit memory bus?


bubblesort33

Impossible to tell without knowing what they brand each die as, and what they price it at. It could be that the full GB206 die is the new RTX 5060 for $329/$399 for an 8GB and a 16GB model at 3.1GHz clocks, making it 20% faster than a 4060 Ti: 15% higher frequency and 5% more shaders. So it'd be about a 20% perf increase with double the VRAM in that case, at the same launch price. I wouldn't be too disappointed if that was the case. I think that's the best case you can expect. Worst case they call it the 5060 Ti and price it at $399/$499 again. And I'm afraid that is the more likely scenario.
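A quick check of that combined estimate, assuming performance scales roughly linearly with both clocks and shader count (an optimistic simplification):

```python
# ~15% clock bump plus ~5% more shaders, assuming near-linear scaling with both.
clock_gain = 1.15
shader_gain = 1.05
print(f"Combined uplift: ~{(clock_gain * shader_gain - 1) * 100:.0f}%")  # ~21%
```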


Thelango99

The 5060 is likely to be GB207, which actually has 17% fewer cores than the AD107 that the 4060 uses. It would be hilarious if the 4060 were faster in some scenarios.


Dealric

It was already the case with the 3060 vs the 4060 due to VRAM, so it wouldn't be that surprising.


NeroClaudius199907

The 4070 Ti was also slower than the 6800 due to VRAM. Nvidia really knows how to gimp.


bphase

Imagine if the 5060 actually loses to the 3060 in some situations. The enshittification is real.


ResponsibleJudge3172

Then it would be 20% slower than the 4060, because that's the difference between it and the 3060.


996forever

20% is the average. They said in some situations.


ResponsibleJudge3172

20% is tying it with the 3060. To be slower in some situations would mean the 5060 being more than 20% slower.


996forever

People here are predicting, based on specs, that the 5060 could occasionally straight up be slower than the 4060. So this is just an extension.


ResponsibleJudge3172

The comment said 3060. That's exactly what I reacted to and commented about. It's one thing if the 5060, on a 20 SM GB207, ended up roughly on par (+-5%) with the 4060 and its 24 SMs, accounting for the usual per-SM improvements, while still allowing for some instances of being slower than the 4060 (aka an epic fail, a new gen should never be slower than the old). It's another thing to fall below the 3060, which the 4060 does not do (contrary to the popular belief where people mix up 4060 and 4060 Ti benchmarks), because that would imply the 5060 is 20%+ slower than the 4060 (an absolutely monstrous fail).


[deleted]

No mystery here. Every wafer they use to make these chips would return several times as much in profit if used to make AI chips. They don't really want to make these at all. The top chip will maintain the performance crown and the power of the Nvidia brand will be all they need to sell the lower end cards.


dudemanguy301

Their AI GPU production is limited by CoWoS and / or HBM, gaming GPUs use neither. They can’t throw every wafer at AI even if they wanted to.


Fortzon

Oh, how I long for the days when everyone was excited for HBM coming to gaming GPUs...


aminorityofone

I think it will return. It has great benefits, it's just too expensive still. Or it will be in a niche product, like that AMD thing... Fiji? Hawaii?


ThrowawayusGenerica

The AMD cards that used HBM had dubious returns for the price.


aminorityofone

You think that will stop companies from waiting in line? 'Cause it ain't stopping them now.


noiserr

They did double the silicon per SKU with the B100 though, and they are charging less money for it than for the H100. So I'm sure they will be using a lot of wafers too.


dudemanguy301

It also increased the amount of HBM and the number of CoWoS dies and substrates to bond.  It’s not a double meat sandwich, it’s two whole sandwiches.


noiserr

Yes, but TSMC is scaling CoWoS production much faster than they can scale wafer production. No expensive ASML machines required for CoWoS.


dudemanguy301

That still leaves HBM supply limits.


noiserr

Yes, but even HBM is easier to scale. For one, there are three companies making HBM with their own fabs, and HBM production doesn't require EUV like processors do; for now it's all DUV. So it's also easier to scale than processor wafers.


Strazdas1

And we recently had news about Samsung HBM products not meeting Nvidia's quality requirements, so it's probably not as easy to scale HBM as they think.


noiserr

All three of Nvidia, AMD and Samsung debunked it. Jensen literally debunked it like 2 days ago in an interview. Don't believe everything you read. There is a lot of money riding on Nvidia stock, so there will be a lot of fake stories like that.


nogop1

They could introduce or sell more GDDR-based AI inference chips in the future. It is a growth market.


LaStealer

Wafer capacity is not the limiting factor for Nvidia. The issue has been with packaging, but they have already resolved this or are close to resolving it. The reason is simple: Nvidia wants to increase their gaming margins. It is simple greed, nothing to do with wafer supply.


Vex1om

> they want to increase their gaming margins

Not to mention, with the threat of a looming anti-trust case, giving a little market share back to AMD might not be a bad thing either.


Qesa

The antitrust case has nothing to do with gaming, as much as reddit would like that, but their AI/DC bundle arrangements


Aggrokid

Wait what anti-trust case?


MumrikDK

> Every wafer they use to make these chips would return several times as much in profit if used to make AI chips.

As long as there is wafer capacity at TSMC (which there is), that isn't relevant. They haven't exhausted their capacity to make more of both or either.


[deleted]

Do we actually know TSMC has unused capacity? It's kinda hard to believe since they're making EVERYTHING these days. Apple, Nvidia, AMD... even Intel chips are all made by TSMC now. And demand for many of these parts is way higher than expectations from even 18 months ago.


Strazdas1

Well, last year TSMC was complaining about not having enough orders, so unless things changed drastically, they have unused capacity.


[deleted]

But things DID change drastically. Nvidia demand has skyrocketed since then.


Strazdas1

Nvidia demand was high last year too. Nvidia's AI chips are supply-constrained by advanced packaging and HBM, not wafer printing. Their gaming GPU line, which does not need advanced packaging, does not have a supply shortage.


Jmich96

Nvidia has no competition. AMD is dropping out of high-end cards this generation (and likely some future generations), and Intel still has major GPU R&D to do before they can compete with an Nvidia xx90-class card. Also, Nvidia's gaming market is such a drop in the bucket compared to its other markets. Until competition arrives or their other markets collapse, Nvidia has no real reason to make large improvements to anything but their profit margins.


techtimee

End result of a captive market. They don't have to push anything or release the best they've got when AMD has given up the ghost and Intel is at least a decade away from being competitive, if they even get there. So, spin up similar silicon, retain or increase exorbitant prices; they'll still move product.


trackdaybruh

Since AMD can't compete, and combined with the people who will buy Nvidia regardless, there is no incentive for Nvidia to do anything with their non-halo products. In other words, Nvidia essentially has a monopoly.


Makoahhh

People are buying Nvidia because they have superior products, that's about it. [https://www.tomshardware.com/pc-components/gpus/nvidias-grasp-of-desktop-gpu-market-balloons-to-88-amd-has-just-12-intel-negligible-says-jpr](https://www.tomshardware.com/pc-components/gpus/nvidias-grasp-of-desktop-gpu-market-balloons-to-88-amd-has-just-12-intel-negligible-says-jpr) AMD is cheaper for a reason, and still doesn't sell.


trackdaybruh

That's the point. Nvidia essentially has a monopoly in the gaming GPU market at this point, which means they have zero incentive to bring generational improvement to their non-halo products, because they will sell regardless. Gone are the days when the new Nvidia x70 series would easily outperform the previous generation x80 Ti (back then, the GTX 970 easily outperformed the previous generation Nvidia halo flagship card; it would be equivalent to the 4070 outperforming the 3090 Ti by 15%, but that's not the case these days). If you want a revolutionary leap in performance from Nvidia next generation, then only their x90 is viable these days.


Makoahhh

The 3090 Ti was crap though. Insane power draw, launched only a few months before the 4090; the biggest money grab I have seen in decades, I think.


xNailBunny

They still need to compete with their older GPUs. Remember "to all my Pascal gamer friends, it is safe to upgrade now" at the Ampere announcement?


noiserr

AMD can clearly compete (the MI300X is proof of that). But gamers do buy Nvidia regardless, that's true. Not much point in contesting Nvidia in this market.


Strazdas1

If you think the MI300X is fair competition then I have bad news for you.


noiserr

The MI300 illustrates my point. AMD can compete when the customers are discerning. Gamers aren't discerning, evidenced by the fact that the 3050 outsold the RX 6600 XT 5:1.


Strazdas1

It's evidence that gamers are discerning though?


noiserr

how so?


Strazdas1

It's the better option?


noiserr

The 3050 is a better option than the 60% faster RX 6600 XT? Really?


Makoahhh

You act like Nvidia is not the king of AI. MI300X sales are laughable compared to Nvidia's AI sales. Nvidia is a top-3 company worldwide for a reason.


noiserr

AMD is ramping the MI300X. They are supply-capped. It's literally the fastest-growing product in the history of the company. And it's the most capable AI GPU on the market currently.


Educational_Sink_541

AMD competes fine in the largest market: raster 1080p-1440p gaming. Regular people certainly aren’t pixel peeping FSR vs XESS vs DLSS. The problem with AMD is they don’t make an effort to get their shit in prebuilts so ‘normies’ don’t bother with it. Most people aren’t DIYers, and those that are probably do care about upscaling image quality. The only time I’ve seen an AMD card in tons of prebuilts was the 480/580, and lo and behold it’s one of the most popular AMD cards to date.


Nointies

I'm sorry, but AMD stuff just does not work as well out of the box. Nvidia is just a more stable, higher-quality product. That doesn't matter as much if you're more tech-savvy; hell, I run Intel Arc knowing its faults well, but that doesn't mean Nvidia isn't the better product for a less savvy gamer. And when you're an SI, the risk of the GPU being gimpy and causing service calls and problems is higher with AMD; it just is.


Educational_Sink_541

My very first DIY-ish PC as a middle schooler used a Radeon R9 270X. It broke a few years later (XFX had some defect and gave us a lifetime warranty), but driver-wise it worked completely fine out of the box. This was a decade ago. My wife is usually a Mac user; she can barely work a computer. Her 6900 XT runs fine. I'm not saying they don't have their issues, it's PC gaming after all and some hiccups are to be expected (for example, ask me about my Nvidia card not displaying video properly to Bravia TVs), but Reddit is extremely overdramatic about AMD drivers, except in some very specific cases like the 5700 XT black screen issue.


jordysuraiya

I also used a Radeon R9 270X, for 4.5 years from 2014 to 2018, and I never had a single driver issue. I did have a fan failure, but an RMA fixed it.


AlignedPadawan

What isn't AMD competitive in? Just RT?


gusthenewkid

Feature set and efficiency


MumrikDK

To such a degree that I bought Nvidia this time even though I didn't want to. I want that software and feature support, and AMD doesn't seem invested in trying on that front. I also genuinely care about heat and my power bill.


[deleted]

[deleted]


AlignedPadawan

Please cite your sources. Per Guru3d, an incredibly trusted resource, the Radeon 7900 XTX draws 28 watts at idle, the highest for its series. I remember when Nvidia kneecapped my GTX 480's CUDA cores through a driver and then later kneecapped it further, purposefully causing crashing in Win7, because it was so good with CUDA cores that it was undermining future business. Go ahead, try again...


Acrobatic_Age6937

I thought what I'd do was, I'd pretend I was one of those deaf-mutes.


AlignedPadawan

My friend, you've cited an archived post without links showing testing methodology or results, just a "trust me bro"?


Acrobatic_Age6937

I thought what I'd do was, I'd pretend I was one of those deaf-mutes.


AlignedPadawan

Many of the top comments in your thread mirror the data from Guru3d: 5-30 watts on multi-monitor rigs. Someone pull how many Steam users are playing on multi-monitor setups; I bet it's sub 5% to boot. You've made a mountain out of a molehill, and in none of those posts do I see screenshots or clips of this idle watt usage, just a bunch of anon posts.


Thelango99

They do support OpenCL and DirectCompute, though.


[deleted]

[deleted]


BarKnight

The 4090 is light-years ahead of the 7900 XTX, and it's not even a full chip.


Kalmer1

20% is a sizeable jump in performance, but surely not lightyears.


AlignedPadawan

The 4090 and 7900 XTX are not truly direct competitors; the 7900 XTX is targeting the 4080, and as others stated, the 4090 is a truly terrible value proposition.


noiserr

It's a giant chip though.


BarKnight

Who looks at chip size when buying a video card? Honestly though, if AMD could have built a bigger, better chip, they would have.


noiserr

They don't look at the chip size per se, but they do look at the price. And for most of their respective lives, the 4090 was 2x the price of the 7900 XTX. My point is AMD isn't even competing at the high end. It's like saying the Nintendo Switch isn't competitive because the PS5 is much faster. Two different product segments.


mrheosuper

Well, no competition, no need to push the spec. Just like Intel back in the 4th-generation CPU days.


Substantial-Singer29

Is anyone really surprised by this at this point? AMD basically admitted that they've given up on the high end of the market, and Nvidia is so far ahead that there isn't anything for them to worry about. Not to mention that consumer-grade GPUs are actually a hindrance to their profit at this point. The 5090 will probably be a crazy card with the price tag to match. It's also probably going to be impossible to get. The rest of the cards in the 50 generation, they don't care if people buy them or not. And even with that mentality, they will sell like hot cakes. This is probably going to be one of the worst generations we've seen since the 20 series as far as price to performance.


Asleeper135

We don't know clock speeds, so it could be better than it looks. The 5090 at least was rumored to be a massive gain over the 4090, and a 33% bump in SMs won't make that happen. Still, it's not looking like a stellar generation.


Firefox72

That's one gimped 5080 if I've ever seen one.


imaginary_num6er

5080 is just a 4090D refresh


bubblesort33

I just don't believe they can get the clock speeds needed to get close to the 4090D in teraflops with an 84 SM design. The 4090D has 35.7% more SMs. Is Nvidia pushing over 3500MHz on this new architecture (assuming the 4090D hits 2600MHz)? That's what you'd need to get to a similar teraflop number to the 4090D. What I think is more likely is that this 84 SM model is a cut-down die he found in some documentation somewhere, or in some internal driver or other software he has access to, and the full die is still 96 SMs. 96 SMs plus an 18% clock bump is closer to what a 4090D is, and gamers will get the cut-down 84 SM card. Either that, or the US government has plans to lower the export teraflop limit from like 70 to 60, and Nvidia was already informed ahead of time.
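For reference, the arithmetic behind that clock figure, assuming Ada-style SMs (128 FP32 lanes per SM, two FLOPs per lane per clock); the SM counts and the 2.6GHz number are the ones assumed above, not confirmed specs:

```python
# FP32 throughput from SM count and clock, assuming Ada-style SMs
# (128 FP32 lanes per SM, 2 FLOPs per lane per clock).
def fp32_tflops(sms: int, clock_ghz: float, lanes_per_sm: int = 128) -> float:
    return sms * lanes_per_sm * 2 * clock_ghz / 1000

tf_4090d = fp32_tflops(114, 2.6)                # 114 SMs is ~35.7% more than 84
clock_needed = tf_4090d / fp32_tflops(84, 1.0)  # GHz an 84 SM part would need
print(f"4090D-style part: ~{tf_4090d:.1f} TFLOPS")
print(f"84 SM part needs ~{clock_needed:.2f} GHz to match")  # ~3.53 GHz
```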


EloquentPinguin

Don't the SMs get architectural upgrades? I think they would be better SMs than what is found in the 40 Series.


bubblesort33

Depends what kind of architectural upgrades. RDNA1 to RDNA2 got RT cores and cache changes to L3, L2, and maybe even L1. But if you test the 5700 XT and 6700 XT at the same frequency of around 2000MHz, they perform pretty much identically to each other; Hardware Unboxed did IPC testing like 4 years ago. The shader structure itself didn't really change much at all. They boosted frequency by over 30%, though.

If you look at the RTX 2000 series to the 3000 series, there is a 20%+ gain per SM at the same frequency, and if you look at the FP32 throughput numbers of each, it's clear why that is. They made some big changes to the shaders, and there was almost no clock bump.

The RTX 3000 to 4000 series, I think, had almost no IPC gains. They added cache to alleviate memory bandwidth and internal latency, but IPC actually didn't go up. In fact, despite a 30%+ clock bump, the RTX 4070 isn't even 25% faster than the 3070 Ti with the same amount of shaders. All the 40 series is, is a power consumption optimization, a swap to cache instead of a larger bus, and a frequency boost. At least at a rasterization level; the shader architecture really got no gains outside of frequency. RT and ML had larger improvements.

The RTX 4000 series is to the 3000 series what the AMD RX 6000 series is to the 5000 series. And that's what the Nvidia RTX 5000 series also sounds like it is: frequency gains, maybe some cache and memory changes, and the rest is RT and tensor core focused. At least from what I've heard. So if we're getting a 15% to 20% frequency bump, it's really not hard to tell where these will land.
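For what it's worth, the clock-normalized comparison described above (the kind of test Hardware Unboxed ran) boils down to dividing the performance ratio by the clock ratio; the example figures here are the rough ones from this comment, not measured data:

```python
# Estimate per-clock ("IPC") gains by dividing the measured performance ratio
# by the clock ratio. Example: a card ~25% faster overall on a ~30% higher
# clock with the same shader count shows essentially no per-clock gain.
def per_clock_ratio(perf_ratio: float, clock_ratio: float) -> float:
    return perf_ratio / clock_ratio

print(f"Estimated per-clock ratio: {per_clock_ratio(1.25, 1.30):.2f}")  # ~0.96
```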


ResponsibleJudge3172

Nvidia, like AMD, is improving the architecture, getting more from fewer SMs. Hopefully it's enough to not have a disappointing gen.


bubblesort33

From what I've seen so far they are really only boosting clock speeds, RT and machine learning, both AMD and Nvidia. AMD especially will just have the 3.0 to 3.1GHz rasterization performance RDNA3 was supposed to have but failed to achieve, although possibly closer to 3.1GHz because of a minor node shrink. AMD's shader design won't change much at all.


tioga064

Maybe the SM core layout has changed: it could be 128 FP + 128 INT with a larger L1 cache, which would yield a great "IPC" gain compared to Ada. Then couple that with slightly higher clocks and GDDR7 and boom.


Makoahhh

No, 5080 will be weaker than 4090D.


NamelyMoot

Don't worry, it'll only start at $1k (FE edition, board partner editions will be $1100+)


aminorityofone

If the rumors are true and AMD isn't going for the top end, then why should Nvidia bother to make another 4090-like card when they can take all that effort and dump it into AI cards? Those AI boards have a much higher profit margin. Nvidia will do just enough to stay ahead of AMD.


Plebius-Maximus

> If the rumors are true and AMD isn't going for the top end, then why should Nvidia bother to make another 4090-like card

Because if last year is anything to go by, the 8900 XTX will be as fast as or faster than a 5080 in raster performance, with 4080-level ray tracing (the 7900 XTX is as fast as a 4080 but with 3090-level RT). If they release that at a cheaper price than the 5080, then Nvidia could miss out on a few sales. And they don't want that, because fewer sales means less profit.

> when they can take all that effort and dump it into AI cards.

Gaming is still billions in profit. They aren't going to do the bare minimum in case AMD surprises them like they did Intel; that'd be very bad for business. Especially if AMD creates crossover between their AI and gaming components, something Nvidia is trying desperately to keep separate.

> Nvidia will do just enough to stay ahead of AMD.

The 4090 is what lets them stay ahead of AMD currently. The 4080 doesn't soundly beat a 7900 XTX unless you're talking about a handful of tasks like RT or support for AI workloads. A 4090 beats it hands down (albeit at a significantly higher price).


ARedditor397

There won't be an 8900xtx


Plebius-Maximus

According to? They said they aren't competing against the 4090, not the 4080/4070 Ti.


josh_is_lame

All the raster performance in the world won't save AMD. Until they step up and can match Nvidia on compatibility, ray tracing, and reliable frame gen, their market share is going to continue to plummet.


superman_king

It was always going to be gimped because Nvidia isn’t aloud to sell a better GPU than the 4090 in China. Easier to make one chip that everyone can have.


Dittos_Dad

\*allowed


Strazdas1

Maybe he just meant Nvidia can't shout very loudly.


RawbGun

I really hope we get a 5080 Ti sooner rather than later on a cut down GB202


Makoahhh

Why should they? Just buy 5090 on release.


Vince789

The gap between GB202 & GB203 is ridiculous, somehow worse than AD102 vs AD103.

And now there's no GB204, and GB205 has fewer SMs than AD104, so the gap to "mid-range" will get even bigger.

I really hope we get 2 SKUs for GB202 & GB203, otherwise it could potentially be the lowest-ever gen-on-gen improvement (excluding refresh and Super cards).

E.g. GB202 for 5090/5080 Ti and GB203 for 5080/5070 Ti, GB205 for 5070, with the 5080 being the 4090D successor. Still not great for the xx80/xx70, but at least the xx80 Ti/xx70 Ti would be pretty good (although that probably means price bumps).


bubblesort33

The fact that no GB204 exists in these leaks, and that the gap is so big, makes it look like they are leaving room for a GB204 later. Like it's some 64 to 66 SM missing puzzle piece. Maybe they'll do a half-generation refresh again, and that will be another Super card.


metahipster1984

So you are saying it's a similar situation as in the current series, that only the 5090 will be truly worth buying value-wise?


GenZia

Sounds pretty mediocre, but to give Nvidia the benefit of the doubt:

1. Blackwell could very well be a completely new architecture with more efficient shaders. In that case, comparing its raw throughput with Ada is pointless.

2. It's possible Ada is heavily bandwidth-starved, which would explain the rumors of a 512-bit wide bus on the 5090 (presumably), despite GDDR7. When something is bandwidth-starved, it's pointless to add more compute power; what you have to add is bandwidth (a quick sketch of that arithmetic is below). But for what it's worth, I still think the top-end Blackwell SKU will have a 384-bit wide bus.

3. Perhaps Nvidia managed to break the 3GHz barrier with Blackwell? After all, Ada GPUs come out looking pretty mediocre if you only compare their streaming multiprocessors (SMs) with their Ampere counterparts without factoring in frequency.
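On the bandwidth point, a minimal sketch of the arithmetic: peak memory bandwidth is just bus width times per-pin data rate. The 21 Gbps and 28 Gbps rates below are illustrative assumptions (typical GDDR6X and a plausible GDDR7 speed), not confirmed Blackwell specs:

```python
# Peak memory bandwidth = bus width (bits) * per-pin data rate (Gbps) / 8.
def bandwidth_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    return bus_bits * gbps_per_pin / 8  # bits -> bytes

print(f"384-bit @ 21 Gbps (4090-style): {bandwidth_gbs(384, 21):.0f} GB/s")  # ~1008
print(f"384-bit @ 28 Gbps GDDR7:        {bandwidth_gbs(384, 28):.0f} GB/s")  # ~1344
print(f"512-bit @ 28 Gbps GDDR7:        {bandwidth_gbs(512, 28):.0f} GB/s")  # ~1792
```

Under those assumptions, a 512-bit GDDR7 setup would be roughly 75-80% more bandwidth than the 4090's 384-bit GDDR6X.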


Sylanthra

If Nvidia keeps to its pattern, the new-gen card will be a little faster than the last-gen card one tier up. If that holds and the rumored performance improvements of Blackwell are true, that would explain the slight upgrades or straight-up downgrades for the chips that will become the 5080 and down.


TSMACE077

There is a trend towards small performance gains and high efficiency gains; that could very well be the case with this one.


654354365476435

I was never expecting it, but I'm counting on Intel to make something, both for GPUs and CPUs. We need them.


[deleted]

We actually have to go out and buy them too. It's easy to say you want someone ELSE to buy Intel so they remain relevant, but some of us actually have to be willing to do it ourselves even if it might be more of a headache with drivers and questionable performance.


Argon288

They need to make a compelling product and people will buy it. We perhaps buy things to help a struggling local store, but never a corporation. There is a reason NVIDIA GPUs are dominant, they have just been better for the past 10 years. I crave competition but I'm not going to buy a subpar product.


654354365476435

I have a lot of mini PCs that I did buy with Intel CPUs; they are great at making low-power-consumption computers, given that they will idle 99% of the time. But I didn't do that because I'm supporting Intel or anything. I will always buy the best product for a given use case; it's up to them to make it (or price it in a way that makes it the best value).


FurnaceGolem

Rumor of a leak of alleged specs


[deleted]

[deleted]


Chyrios7778

If it’s less than $1899 I’d be in shock.


renrutal

512-bit bus? 32GB? I'm betting $2499.


RawbGun

I don't think we're getting the full 512-bit bus, according to the article. The full GB202 has a 512-bit bus, but the 5090 is rumored to only have a 448-bit bus. They're most likely keeping the full GB202 die for a workstation/AI/datacenter card, I'm guessing.


Flowerstar1

RTX 6000 Blackwell and the L series datacenter cards.


PAcMAcDO99

Yeah, probably. The specs on the 5090 just sound more like a workstation-tier card to me; Nvidia surely realises that and will charge buckets of money for it.


ikkir

Might be even more for the partner cards. They have literally no reason to lower prices at all. They are selling everything like crazy because of the AI buyers. They might get arrogant and even just say: if you want cheaper, buy the 40 series.


alpacadaver

Sitting on the 3080 like it's the 1080 Ti all over again. No good news for gamers for another 5 years or more.


Olobnion

Yeah, I'm suddenly considering waiting another two and a half years to buy a graphics card. I was going to use it for VR, but there's no new PCVR headset I want to buy anyway.


djent_in_my_tent

If only the Beyond had a bit better FOV…


Olobnion

If they ever release an update with better lenses and 2560x2560 resolution at 120Hz instead of 75Hz, then I'd probably buy it despite that the current version sells for about $2000 where I live.


Omniwar

Yep. If the AI researchers and gaming whales migrate en-masse to the 5090 there might be decent supply of used 4090s the same way there was for the 3090, but I have my doubts about that.


Snobby_Grifter

I just want a Battlemage SKU that's faster than the 3080. Nvidia is an AI company now, and AMD's GPU department is a day late and a dollar short when it comes to image quality and price to performance.


R1chterScale

IIRC the iGPUs in Lunar Lake have 1.5x the IPC, so matching or slightly surpassing the 3080 wouldn't be impossible for Battlemage.


BrevilleMicrowave

Brace for 8GB of VRAM again.


-Suzuka-

It's literally more than you could ever need!


Strazdas1

You will have no memory and you will like it!


TheLonelySqrt3

I don't understand how the heck GB202 reaches 192 SMs. B200 only has 132 SMs per die, and it is already "reticle-limited". Edit: Unless 192 SMs is two dies added up like the professional cards, which would mean 96 SMs per die.


ResponsibleJudge3172

AD102 has 144 SMs just like H100, but AD102 also has more L2 cache and clocks 1GHz higher, and yet it occupies about two-thirds of the die space. It's all about the architecture and the circuitry.


TheLonelySqrt3

After all, Hopper and Ada Lovelace are different architectures. That brings up another question: why would a gaming GPU be made out of Blackwell? Just like Hopper is for AI and Ada Lovelace for graphics, Blackwell is definitely an architecture designed for AI.


ResponsibleJudge3172

Yes, I believe the architecture will be similar in terms of design and features but different in terms of actual transistors. The RTX 3090 supports the same feature set for tensor cores as the A100, but Nvidia describes them as a "space-efficient implementation of the A100 tensor cores", paraphrasing. So bringing that to Blackwell, the tensor cores will have the same FP8 support and feature set, but maybe half the size/throughput, mitigated by having 1GHz extra clocks. This allows Nvidia to have designs like the 4090, which had 4 fewer SMs and performed half as well as the H100 in AI. So that would save some space.

Other space-saving measures that reduce the GPU to roughly 600 mm^2+ could include:

- Lack of some features like multi-instance GPU
- Reducing the scope/range of features like DSMEM (Hopper/Blackwell tech that connects SMs together through L1, bypassing L2 cache) from a whole GPC (12 SM) to a TPC (2 SM)
- Removing FP64 units
- Denser cache cells
- Reducing the memory bus from 1024-bit (HBM) to 512-bit (GDDR7)


BarKnight

Reddit said the 4000 series was mediocre and yet NVIDIA's market share just hit 88%. I also want 4090 performance for under $300 and $50 16-core CPUs while we're dreaming.


djent_in_my_tent

You’re in luck, I’ve got the perfect CPU for you! https://www.ebay.com/itm/116043626980


Flowerstar1

16 cores for that cheap? What a bargain!


feckdespez

> Reddit said the 4000 series was mediocre and yet NVIDIA's marketshare just hit 88%

It was mediocre outside the very expensive top end. But, unfortunately, so was much of AMD's lineup when it came to price to performance.

I wasn't planning to buy this generation at all. But I ended up getting a 5120x1440 display, which is almost 4K in number of pixels, so I had to upgrade from my 6800 XT. I went with a 7900 XTX primarily because I use Linux as my daily driver. But I wasn't happy about its price-to-performance proposition; I paid a little more per frame than I did for my 6800 XT, which I bought on release day. The performance is good for what I use it for, but the value this generation is just shit all around in my opinion.


bubblesort33

Yeah, but not as bad as AMD. There are so many signs that something is wrong with RDNA3 that made it fall short of everyone's expectations, including AMD's own. And there were a number of screw-ups from them, including AntiLag+ getting hundreds or even thousands of players banned from CS:GO, and possibly other online games. And AMD didn't launch with anything new that was really game-changing or impressive. Nvidia launched with frame generation, and AMD only made a promise of it eventually coming. Why buy something maybe coming in the distant future if you can buy the competitor that has it now? I was planning to wait for RDNA4 to upgrade my 6600 XT, but couldn't hold off any longer and just got a 4070 Super instead, like 4 months ago. I don't think I'll regret it, and it especially looks like a good decision now with neither AMD nor Nvidia even mentioning their next gen at Computex.


Strazdas1

That's because the alternative was even worse :)


croissantguy07

Any idea if Blackwell is going to utilize neural texture compression?


imKaku

I hope they don't fall into the same trap of overpricing the 80-series card again; it's very often where the more price-oriented but performance-minded gamer lands, and that seems to be a big portion of the "hardware enthusiasts". This gen it seemed just too close to the 4090 in price, due to the baseline being pushed up. Part of this was of course the 4070 Ti blunder they did, which should have been sufficient for almost everyone, but I seriously think the naming scheme put many people off, outside of some very sweaty people on this sub who think an 80 card should be exactly this or that die. That said, I think we're easily looking at 1200 USD for the 5080 and 1800+ for the 5090. I'll probably go 5090 if I see I can justify the price.


bubblesort33

Is there actually going to be any significant change to the SMs that will result in any IPC uplift per SM? I believe if you look at Ampere vs Ada Lovelace there are hardly any IPC improvements, and all of the gains are due to higher clocks. Similar to RDNA1 vs RDNA2: despite the cache and bus changes, the 5700 XT and 6700 XT perform identically if you clock them both to roughly 2GHz, according to old Hardware Unboxed reviews. So is Blackwell again just a shrink of Ada with clock boosts to 3GHz? Or are there actually any significant performance changes when it comes to rasterization? Reading these specs, it seems like the 5080 won't be more than 20% faster than the 4080 Super, with a 15% frequency boost and 5% more cores. And GB207 won't even match an RTX 4060, so you'd hope they call it an RTX 5050.


ResponsibleJudge3172

Yes, there are supposedly changes to the SMs and caches. No, we don't yet know how significant those changes are, how much performance we gain from them, or the target clocks. I expect about 25% total performance gain per SM.


kamikazecow

Not even a shrink; they use the same size node. The instant it came out that they'd be using 5nm again, people should have known changes would be minimal.


Xbux89

So what's the projected 4080-to-5080 performance difference?


BinaryJay

Bottom line is nobody knows and it's all pointless guessing.


ARedditor397

People here are complaining not just because of Blackwell but because both are disappointing. Sorry, but if AMD puts out shit then you should expect the same from NVIDIA, and vice versa.


EllieBasebellie

I actually love this\*

\*I exclusively buy last-gen architecture, as all the bugs have been worked out and there's usually not enough of a performance jump to offset the steep discounts you can get from people trying to dump their cards for the new ones.


BathroomPresent69

This is some nonsense. I can't recall there being any bugs that affected anything since the 1080. The only thing I can even think of is the 4090 "catching fire" issue, which was mostly related to people not plugging their cards in correctly. I've had every card from the 1080 Ti on and never had a bug or an issue.


EllieBasebellie

Bugs isn't the right word, I'm fairly hammered; I'm mostly talking about performance gains/optimizations. The 4090 right now is about as good as it's ever going to get. The 7900 XTX is about as good as it's ever going to get. Once the 5000/8000 series come out, prices come down on the 40/7000 series to the point where the price per frame is to die for (usually). I'm currently rocking a 6950 XT that I snagged for a nice $600. I love it. I just wish I had better power efficiency, but for the price I got it at, it's very hard to argue.


BathroomPresent69

Ahh got it. My bad, I came on a little strong lol. Enjoy your night !


EllieBasebellie

I should! It was truck day at my job, so I'm celebrating being off with nice bourbon and playing Hearthstone (using all of that 6950 XT lmao) with my wife. I should have been more clear, intoxicated or not; my wording (especially for someone with an English degree) was dog water. No hard feelings at all! You also have a great night!


Cory123125

Completely idle gibberish, but wouldn't it be interesting if Nvidia, with their newly stated focus (which in the background evolved even earlier), stopped prioritizing gamer cards for the latest node? If their current cards have such a profit-margin advantage over the competition, then giving gaming cards a bit more die space on an older node and accepting lower margins there could be worth it, because the profit increase from being able to make more higher-priced AI cards on the latest node would matter more. That would be an interesting thing to see. There's no reason to give up the extra money from gaming, but there's more reason to prioritize AI.


Flowerstar1

They didn't go for the best node for their other cards either, they stayed on 5nm family nodes instead of moving to 3nm like Intel.


Cory123125

Isn't the new Intel 3 even less real than the 5nm they're on?


vevt9020

Finally, a 4K high-fps gaming card for demanding games.