
AMD Oberon PlayStation 5 SoC Die Delidded and Pictured

John Wick

Member
This 18% is not taking into account the fluctuations of the PS5's GPU clock speed. The XSX, having a static 1825 MHz GPU clock, will have some advantages beyond the raw compute advantage.
Why do we always get this bullshit about fluctuations of the PS5's GPU clock speed?
Show me a game where the GPU and CPU are taxed to the max the entire time.
Loads differ on a frame-by-frame basis. Unless you've got an SX game that loads the GPU and CPU to the max every frame?
 

John Wick

Member
Doing what? The PS5 will lower GPU frequency to reduce power in demanding games. The XSX is designed so that the GPU can remain at 1825 MHz.
Not that it really matters given how scalable engines are these days.
But people should not claim the PS5's compute performance is a constant; it's not. 10.28 TFLOPS is its maximum capability, not its constant. It will fluctuate between about 10 and 10.28 TFLOPS, but the XSX can run at 12.15 TFLOPS constantly.
Yes, but what's going to run at 12.15 teraflops constantly? Games aren't just about theoretical maximum compute power. It will run into far too many bottlenecks well before hitting 12.15 teraflops of compute in a game.
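For what it's worth, both headline figures are just CU count times clock; a quick sketch using the public specs (the 2.1 GHz line at the end is a purely hypothetical dip, to show how little a small downclock would cost):

```python
# Peak FP32 throughput from the public specs:
# PS5: 36 CUs at up to 2.23 GHz, Series X: 52 CUs at a fixed 1.825 GHz.
def peak_fp32_tflops(cus, clock_ghz, alus_per_cu=64, flops_per_alu_clock=2):
    """Theoretical peak: every shader ALU retires one FMA (2 FLOPs) per clock."""
    return cus * alus_per_cu * flops_per_alu_clock * clock_ghz / 1000.0

print(peak_fp32_tflops(36, 2.23))    # ~10.28 TF, PS5 at its maximum clock
print(peak_fp32_tflops(52, 1.825))   # ~12.15 TF, Series X at its fixed clock
print(peak_fp32_tflops(36, 2.10))    # ~9.68 TF if the PS5 ever dipped to 2.1 GHz (hypothetical)
```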
 

John Wick

Member
We know that a higher clock rate provides more performance on the same chip.
If the GPU did not have to reduce frequency when that "worst-case game" was played, it would perform better.
The devs would design around it though; in the real world it may mean the resolution is slightly higher, or slightly more frames when the framerate is not locked.
You do know that the PS5 can shift power on a frame-by-frame basis? Again, do you have a game that taxes the hardware to the max for every frame?
 

ZywyPL

Banned
Yes, but what's going to run at 12.15 teraflops constantly? Games aren't just about theoretical maximum compute power. It will run into far too many bottlenecks well before hitting 12.15 teraflops of compute in a game.

The clock is fixed but it's the workload that varies. Sometimes the GPU will have headroom due to a 30/60 FPS lock and be loaded to, let's say, 70-80% of those 12 TF; other times it'll be constantly pushed to 99-100%, and that's when all the framerate/resolution drops happen.
 

SlimySnake

Flashless at the Golden Globes
It's not necessarily game engine-specific, but rather entirely game workload-dependent and therefore can change from moment to moment depending on what the game is rendering at any one time.
Also, the PS5 has double the CUs of the base PS4 and triple those of the Xbox One; both of those consoles were the base for video game development last gen, just like the PS5 will be the base for this gen.

BTW, what do you make of Forza Horizon 5's Performance and Quality modes both running at native 4K? One is 30 fps with better visuals while the other is native 4K 60 fps with lower graphics settings. Why not just go with 1440p 60 fps and the same graphics settings? Is this due to a bandwidth issue, maybe? This reminds me of Ratchet and Spider-Man: Miles Morales. Neither was able to reduce resolution by half to get double the framerate like you should be able to; they had to reduce lighting quality, the number of NPCs, and other visual features.

Why are these GPUs not able to double the framerate at the same graphics settings? I've been doing that for over a decade on PC. The GPU load should be the same per pixel, and 1440p is less than half the pixels of native 4K, so what's going on here? The XSX has 560 GB/s of VRAM bandwidth, so it can't be the RAM bandwidth like it might be with the PS5's 448 GB/s of shared RAM. Unless of course its split RAM architecture is causing some bottlenecks when running games at high framerates.
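Just on the pixel math (counting raster pixels only, nothing else):

```python
# Pure pixel counting; says nothing about geometry, CPU work or fixed per-frame passes.
uhd = 3840 * 2160    # 8,294,400 pixels at native 4K
qhd = 2560 * 1440    # 3,686,400 pixels at 1440p
print(qhd / uhd)     # ~0.44 -> 1440p shades well under half the pixels of 4K
print(uhd / qhd)     # 2.25  -> per-pixel work alone would suggest 2.25x the framerate
```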
 

rnlval

Member
I would not compare compute and ray-tracing capabilities by the number of CUs, because the CUs and TMUs+Ray units are designed differently. Thus performance and efficiency may also be different.
bqOGiYS.jpg


UN2r7vT.jpg


The expected 5 MB L2 cache for the GPU with a 320-bit bus.

Prove that the PS5 GPU's hardware accelerates RT BVH traversal like NVIDIA's RTX BVH units do.


NVIDIA's Turing RTX and Ampere RTX advantages when compared to AMD's solution.

6wGWvJY.jpg
EiMN2Og.jpg



PS; For divergent workloads, RDNA WGP has improved scalar units when compared to GCN's CU.

J7XZxCk.jpg
 
Last edited:

SlimySnake

Flashless at the Golden Globes
Yes, but what's going to run at 12.15 teraflops constantly? Games aren't just about theoretical maximum compute power. It will run into far too many bottlenecks well before hitting 12.15 teraflops of compute in a game.
This isn't true. Game engines are all designed to run on PCs, and they push graphics cards to the maximum at all times when running games at unlocked framerates. You can see the GPU utilization at all times, and every game sits at 95-99% at any given moment. You could be standing still in a particle-heavy game like Control and the GPU will still be fully utilized to improve the framerate.

The only time you may not see full 100% GPU utilization is when you purposely lock the framerate at 30 or 60 fps and there is headroom available, but as we have seen time and time again, not a single game this gen runs at a locked 60 fps on either console. We either see dropped frames or dropped resolution, and in both cases the GPU utilization is at its maximum.

This is truer than it's ever been on RDNA 2 cards that run at unlocked frequencies up to 2.7 GHz, like the 6600 XT. That's a card with just 256 GB/s of RAM bandwidth, a bit more than half that of the PS5, and yet it maxes out its clocks, the GPU utilization stays at 99%, and all you see are lower framerates.



Every single game in this video has the 6600 XT pegged at 99% consistently. Only two games are below 98% on the 5700 XT and, aside from Hitman, all of them are above 95%. That's just how it works. My RTX card behaves the same and always sits around 1.95 GHz in game instead of its advertised boost clock of 1.71 GHz.
 

rnlval

Member
I remember seeing someone claiming the register occupancy is usually 60-80%. Though that doesn't tell the whole story either.
Regardless, the only way to get close to 100% of the FLOPS is with a power virus without a framerate limiter, like FurMark. But that wouldn't pass Sony's or Microsoft's compliance tests for publication anyway.


Faster at what?




On compute-limited scenarios the Series X should be up to 20% faster, but on fillrate-limited scenarios the PS5 is up to 20% faster. Which happens more often is probably dependent on game, engine, scenario, etc.


PiDGOyO.png


ROPS bottlenecks can be worked around via the UAV texture path.


GemSPNc.jpg


ROPs can be memory-bandwidth bound. Note why PC RDNA 2 parts like the 6800/6800 XT/6900 XT have a 128 MB L3 cache (a large render cache).
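To put rough numbers on the compute-versus-fillrate trade-off quoted above (treat the 64 ROPs per console as an assumption; it's the commonly cited figure, not something confirmed in this thread):

```python
# Peak-rate comparison only; real workloads sit somewhere between the two extremes.
def peak_fill_gpix(rops, clock_ghz):
    return rops * clock_ghz              # one pixel per ROP per clock

ps5_fill = peak_fill_gpix(64, 2.23)      # ~142.7 Gpix/s
xsx_fill = peak_fill_gpix(64, 1.825)     # ~116.8 Gpix/s
print(ps5_fill / xsx_fill)               # ~1.22 -> PS5 ahead when ROP/fillrate-bound
print(12.15 / 10.28)                     # ~1.18 -> Series X ahead when ALU/compute-bound
```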

This isn't true. Game engines are all designed to run on PCs, and they push graphics cards to the maximum at all times when running games at unlocked framerates. You can see the GPU utilization at all times, and every game sits at 95-99% at any given moment. You could be standing still in a particle-heavy game like Control and the GPU will still be fully utilized to improve the framerate.

The only time you may not see full 100% GPU utilization is when you purposely lock the framerate at 30 or 60 fps and there is headroom available, but as we have seen time and time again, not a single game this gen runs at a locked 60 fps on either console. We either see dropped frames or dropped resolution, and in both cases the GPU utilization is at its maximum.

This is truer than it's ever been on RDNA 2 cards that run at unlocked frequencies up to 2.7 GHz, like the 6600 XT. That's a card with just 256 GB/s of RAM bandwidth, a bit more than half that of the PS5, and yet it maxes out its clocks, the GPU utilization stays at 99%, and all you see are lower framerates.



Every single game in this video has the 6600 XT pegged at 99% consistently. Only two games are below 98% on the 5700 XT and, aside from Hitman, all of them are above 95%. That's just how it works. My RTX card behaves the same and always sits around 1.95 GHz in game instead of its advertised boost clock of 1.71 GHz.

FYI, RX 6600 XT has a 32 MB L3 cache (render cache).


PS5 GPU has 4 MB L2 cache.

XSX GPU has 5 MB L2 cache.

The last-level cache is the final stop before external memory I/O.
 
Last edited:

Rea

Member
This isn't true. Game engines are all designed to run on PCs, and they push graphics cards to the maximum at all times when running games at unlocked framerates. You can see the GPU utilization at all times, and every game sits at 95-99% at any given moment. You could be standing still in a particle-heavy game like Control and the GPU will still be fully utilized to improve the framerate.

The only time you may not see full 100% GPU utilization is when you purposely lock the framerate at 30 or 60 fps and there is headroom available, but as we have seen time and time again, not a single game this gen runs at a locked 60 fps on either console. We either see dropped frames or dropped resolution, and in both cases the GPU utilization is at its maximum.

This is truer than it's ever been on RDNA 2 cards that run at unlocked frequencies up to 2.7 GHz, like the 6600 XT. That's a card with just 256 GB/s of RAM bandwidth, a bit more than half that of the PS5, and yet it maxes out its clocks, the GPU utilization stays at 99%, and all you see are lower framerates.



Every single game in this video has the 6600 XT pegged at 99% consistently. Only two games are below 98% on the 5700 XT and, aside from Hitman, all of them are above 95%. That's just how it works. My RTX card behaves the same and always sits around 1.95 GHz in game instead of its advertised boost clock of 1.71 GHz.

He's talking about ALU utilization. Each CU has 2 SIMDs, and each SIMD has 32 ALUs.
 

ToTTenTranz

Banned
Games made in the current time for the current consoles. I have no interest in looking at how a game functions that is built for the PS4, because why even bother buying a PS5 at that point? It's about today's games.
The high-level architecture between 8th-gen and 9th-gen didn't change all that much. They went from x86 to higher-IPC x86, from AMD GCN to AMD RDNA2. The ISAs are very similar between the generations this time, and they all use unified memory. IIRC AMD's GFX10 ISA still needs to be 100% compliant with the GFX7 (precisely because of the consoles), and only RDNA3 / GFX11 is going to break that.
Save for adapting to the much faster mass storage and using the dedicated ray tracing units, there isn't all that much of a difference between last gen and this one. Even Unreal Engine 5's "software triangles" Nanite and Lumen could probably run pretty well on the XBOne and PS4 (at realistic resolutions of course) if they had access to solid state storage.


And in today's games CUs are utilized without effort, simply by the fact that they are running at higher resolutions, which will already consume those CUs. 36 or 52 CUs aren't that many at the end of the day, especially for 4K; when you look at the PC hardware that sits in a class above, it's kind of low.

People here pretend and try to find evidence to support their narrative that CUs only matter in certain scenarios, while in reality every single game made today, especially at 4K, will use those CUs without effort. Now, will you notice the difference? That all depends on what parity the developer is aiming for, which brings me back to my 30 fps remark.
I think you're conflating a number of different things that don't really mix the way you're assuming.
Both consoles have 4 shader arrays and 2 shader engines. The PS5 uses fewer WGPs per shader engine than the Series X, so while it has a lower maximum theoretical throughput, its work distributors are probably more effective. Not only because each has to handle fewer compute ALUs, but also because they run at higher clocks.

It is absolutely true that a wider and lower-clocked compute subsystem will not be more performant in all scenarios (just look at the RX 5700's 8 TFLOPs with 36 CUs vs. Vega 64's 11.5 TFLOPs with 64 CUs), just as much as the faster fillrate will not be more performant in all scenarios.

As for the "4K" resolution statements, you should know that neither does a wider architecture gets "more" favored by a larger resolution (there's >2 million pixels to process even at a lowly 1080p, it's not like 2300-3300 ALUs would ever be idle due to low resolution), nor do most games ever render at 3840*2160. Almost all games are using variable resolution between 1440p and 1800p with some kind of upsampling on top.


Now obviously you could make a game run within a 36-CU usage cap and get the performance advantage on the PS5, much like how Far Cry 6 performs better with RT on a 6800 XT than on a 3080, even while the 3080 is a ton faster than a 6800 XT at RT. At that point it's just the developers being special and having a marketing agenda to push certain solutions forward. More CUs, however, isn't something added specially for marketing purposes; it's basically how GPUs push performance forward at higher resolutions. And the same could be said about the PS5: higher clocks on a lower-CU GPU is an option you can go for when resolution or RT isn't your first priority.
It could be that Ubisoft (like many others) is simply optimizing their engines for the GPU architecture that was developed for the gaming consoles that constitute most of their software sales, and it has little to do with marketing agendas.
Expect most AAA games from big publishers to follow suit.


Nobody is going to optimize a game so that it runs at 30 fps on an Xbox Series X and 24 fps on the PS5, because they will be review-bombed, so it's a 30 fps lock versus a 30 fps lock.
That would never happen because the Series X isn't 25% faster than the PS5 at anything, really. There's maximum memory bandwidth but the Series consoles seem to lose a bit of effective bandwidth due to memory contention, and nothing ever scaled 100% on memory bandwidth, certainly not games.
 

sncvsrtoip

Member
Why are these GPUs not able to double the framerate at the same graphics settings? I've been doing that for over a decade on PC. The GPU load should be the same per pixel, and 1440p is less than half the pixels of native 4K, so what's going on here?
It shouldn't? The CPU and geometry are still just as heavy to compute regardless of resolution. For example, in Battlefield tests a 6800 XT averages 64 fps at 4K and 97 fps at 1440p (not the 2.25x more you would predict, but just 1.51x).
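A toy two-term model (frame time = resolution-independent work + per-pixel work) fitted to those numbers shows why the scaling falls short; the split it yields is illustrative, not measured:

```python
t_4k, t_1440 = 1000 / 64, 1000 / 97          # frame times: ~15.6 ms and ~10.3 ms
pixel_ratio = (2560 * 1440) / (3840 * 2160)  # 1440p renders ~44% of the 4K pixels
pixel_part = (t_4k - t_1440) / (1 - pixel_ratio)  # ~9.6 ms of resolution-scaled work at 4K
fixed_part = t_4k - pixel_part                    # ~6.1 ms of geometry/CPU/fixed passes
print(round(fixed_part, 1), round(pixel_part, 1))
# With ~6 ms per frame that never shrinks, cutting pixels by 2.25x only buys ~1.5x fps.
```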
 

Darius87

Member
This isn't true. Game engines are all designed to run on PCs, and they push graphics cards to the maximum at all times when running games at unlocked framerates. You can see the GPU utilization at all times, and every game sits at 95-99% at any given moment. You could be standing still in a particle-heavy game like Control and the GPU will still be fully utilized to improve the framerate.

This is truer than it's ever been on RDNA 2 cards that run at unlocked frequencies up to 2.7 GHz, like the 6600 XT. That's a card with just 256 GB/s of RAM bandwidth, a bit more than half that of the PS5, and yet it maxes out its clocks, the GPU utilization stays at 99%, and all you see are lower framerates.



Every single game in this video has the 6600 XT pegged at 99% consistently. Only two games are below 98% on the 5700 XT and, aside from Hitman, all of them are above 95%. That's just how it works. My RTX card behaves the same and always sits around 1.95 GHz in game instead of its advertised boost clock of 1.71 GHz.

SlimySnake, the above post fully details the topic; do what you want with it, but please don't post such heresy anymore.
Do you really believe 99% GPU utilization means that all but 1% of the transistors (15 billion and more inside modern GPUs) are switching from 0 to 1 or 1 to 0 every clock cycle? :messenger_grinning_smiling: Nor does the video say that the 99% utilization is CU utilization only, yet "you can run games utilizing 99%-100% of 12 TFLOPS". :messenger_grinning_smiling:
 

SlimySnake

Flashless at the Golden Globes
Nah, one thing this past year has taught me is that the XSS will be treated like the Switch of this gen. Devs have no issue letting the resolution drop to a paltry 512p in games like Metro. They have no issue completely removing ray tracing from the XSS instead of removing it from the PS5 and XSX just because the XSS can't do it.

UE5's 1080p 30 fps target for hardware-accelerated lighting and 1440p 30 fps target for medium Lumen lighting is proof that the XSS isn't the base. If it was, then UE5's base would've been 1080p 30 fps on the XSS and native 4K 30 fps on the XSX. They clearly don't care if the XSS version dips below 560p.
 

Lysandros

Member
Both consoles have 4 shader arrays and 2 shader engines. The PS5 uses fewer WGPs per shader engine than the Series X, so while it has a lower maximum theoretical throughput, its work distributors are probably more effective. Not only because each has to handle fewer compute ALUs, but also because they run at higher clocks.
You are talking about the Command Processor, ACE and HWS blocks there, I presume, their number being the same across both systems? The scheduler-to-CU ratio is indeed favorable to Sony's machine; I've never thought about this before, and I think that's an interesting fact to consider.
 

Boglin

Member
Nah, one thing this past year has taught me is that the XSS will be treated like the Switch of this gen. Devs have no issue letting the resolution drop to a paltry 512p in games like Metro. They have no issue completely removing ray tracing from the XSS instead of removing it from the PS5 and XSX just because the XSS can't do it.

UE5's 1080p 30 fps target for hardware-accelerated lighting and 1440p 30 fps target for medium Lumen lighting is proof that the XSS isn't the base. If it was, then UE5's base would've been 1080p 30 fps on the XSS and native 4K 30 fps on the XSX. They clearly don't care if the XSS version dips below 560p.
It would suck for XSX owners going forward but I hope you're right about this. I want games targeting the highest hardware possible
 

Loxus

Member
UN2r7vT.jpg


The expected 5 MB L2 cache for the GPU with a 320-bit bus.

Prove that the PS5 GPU's hardware accelerates RT BVH traversal like NVIDIA's RTX BVH units do.


NVIDIA's Turing RTX and Ampere RTX advantages when compared to AMD's solution.

6wGWvJY.jpg
EiMN2Og.jpg



PS; For divergent workloads, RDNA WGP has improved scalar units when compared to GCN's CU.

J7XZxCk.jpg
Look closely at the patterns of the CUs and TMUs; they all have different patterns, which may suggest different performance/efficiency. They are not the same.
 

John Wick

Member
This isn't true. Game engines are all designed to run on PCs, and they push graphics cards to the maximum at all times when running games at unlocked framerates. You can see the GPU utilization at all times, and every game sits at 95-99% at any given moment. You could be standing still in a particle-heavy game like Control and the GPU will still be fully utilized to improve the framerate.

The only time you may not see full 100% GPU utilization is when you purposely lock the framerate at 30 or 60 fps and there is headroom available, but as we have seen time and time again, not a single game this gen runs at a locked 60 fps on either console. We either see dropped frames or dropped resolution, and in both cases the GPU utilization is at its maximum.

This is truer than it's ever been on RDNA 2 cards that run at unlocked frequencies up to 2.7 GHz, like the 6600 XT. That's a card with just 256 GB/s of RAM bandwidth, a bit more than half that of the PS5, and yet it maxes out its clocks, the GPU utilization stays at 99%, and all you see are lower framerates.



Every single game in this video has the 6600 XT pegged at 99% consistently. Only two games are below 98% on the 5700 XT and, aside from Hitman, all of them are above 95%. That's just how it works. My RTX card behaves the same and always sits around 1.95 GHz in game instead of its advertised boost clock of 1.71 GHz.

Hahaha. You just can't make this shit up. So when an RTX 3090 is being utilised at 99% according to that GPU monitoring software, do you think it's hitting 99% of its theoretical 35.58 teraflops? Are you actually naive enough to think that developers have managed to max out the card so quickly? Because to achieve the maximum 35.58 teraflops the card would have to work at its best and fastest at every step. Imagine everything at its maximum. The card would overheat very quickly. These cards sometimes hit 85+ degrees as it is.
 
Last edited:

Darius87

Member
Why would a PSU melt if it has headroom even with a GPU running at its maximum safest utilization? So confused by this lol
You can run at its maximum safe TDP or close to it, but that isn't running at your theoretical TFLOPS limit. Running the GPU at its theoretical limit would need more TDP than your card allows, and before that the GPU would be bottlenecked by memory bandwidth or something else.
There isn't any game that can do that and there never will be. It's impossible.
 
Last edited:

SlimySnake

Flashless at the Golden Globes
Hahaha. You just can't make this shit up. So when an RTX 3090 is being utilised at 99% according to that GPU monitoring software, do you think it's hitting 99% of its theoretical 35.58 teraflops? Are you actually naive enough to think that developers have managed to max out the card so quickly? Because to achieve the maximum 35.58 teraflops the card would have to work at its best and fastest at every step. Imagine everything at its maximum. The card would overheat very quickly. These cards sometimes hit 85+ degrees as it is.
This is literally the dumbest thing I've heard all week. The cards will not overheat, because they are spec'd to hit those tflops out of the box. They will only overheat if you push the clocks beyond the power limits set by the manufacturer; that's called overclocking. The 10.3 and 12.1 tflops limits set by the console manufacturers are there precisely to avoid the chips overheating, because that's what they have determined to be the safest, highest clocks that stay within the power limits.

You need to start paying attention to what other people post around here instead of dismissing everything. You might learn a thing or two. Otherwise you'd end up embarrassing yourself saying dumb shit like cards overheating because they are hitting their theoretical max. Absolute nonsense.
 

PaintTinJr

Member
This is literally the dumbest thing I've heard all week. The cards will not overheat, because they are spec'd to hit those tflops out of the box. They will only overheat if you push the clocks beyond the power limits set by the manufacturer; that's called overclocking. The 10.3 and 12.1 tflops limits set by the console manufacturers are there precisely to avoid the chips overheating, because that's what they have determined to be the safest, highest clocks that stay within the power limits.

You need to start paying attention to what other people post around here instead of dismissing everything. You might learn a thing or two. Otherwise you'd end up embarrassing yourself saying dumb shit like cards overheating because they are hitting their theoretical max. Absolute nonsense.
I'm pretty sure the GPU firmware stops the cards from damaging themselves by drawing too much power, and thereby throttles bandwidth, clocks or whatever else is necessary to stop them frying like the EVGA ones did recently, IIRC.

Epic's information about UE5 being 2-3x more capable than traditional T&L pipelines at getting stuff on screen (because of the draw-call bottleneck, IIRC) tells us that in current games these cards are at best doing 50% of the throughput they could with better software solutions like Nanite/Lumen, and even those won't be 90% efficient, because Epic's info said they still have lots of areas to optimise.
 
It's not that hard, people. The Xbox Series X GPU is faster. It's that simple. Let's not pretend those CUs are hard to push in modern games. PC games do it all day long, and especially at higher resolutions those CUs are easily used.

The reality, however, is that it doesn't matter, because a developer will never make a game that doesn't run at at least 30 fps on the PS5. So optimization will always be done on the most popular platform, the one that makes them the most money, which is the PS5. If they don't, their game will be review-bombed to oblivion and it's bad business as a result; you can see this with Cyberpunk.

If they focus on the PS5 version and push 60 fps as the target, for example, the 18% of the Xbox (if it is 18%, I'm just going with what people say here) is going to be used for minor stuff that nobody really notices or cares about. It's like having an 18% faster PC GPU with a 60 fps lock: congrats, you can now push one shadow setting to a higher level, or they just leave that 18% idle or raise the resolution a little bit.

About RT and CUs: again, if the PS5 can't run RT at a quarter of the resolution, which is what they are doing right now, the game will simply not have RT to start with (Far Cry 6), in order to push parity (most likely through contracts), or they simply use the leftover performance for minor stuff that nobody cares about.

This is why people constantly say it doesn't matter what the differences are: the differences aren't big enough to be noticeable. The 400% zoom-in to spot one quality-preset difference is simply nonsense.

It's the same as what Hardware Unboxed said in their Far Cry 6 review: we can't detect ray tracing in this picture, but the ray tracing experts we put on the job could nitpick the difference if you zoom in 400x. At that point it's useless and defeats its purpose.

Now why do you sometimes see dips below 60 on the Xbox and not the PS5? The same way a 3090 with a 5950X and 64 GB of RAM dips below 60 with microstutter in BF5 and the PS5 doesn't: optimisation is dog shit or the APIs used have issues, nothing to do with hardware.

At the end of the day, the CPUs are great, the RAM is acceptable, the SSDs in those boxes are gigantic improvements over those shit HDDs, and the GPUs are serviceable. No matter what the marketing teams tell you about how gigantic a difference X makes over Y, it's all PR to make you buy their hardware. The boxes are practically identical in the grand scheme of things.

I think people do underestimate the reality of platform parity and the reality that most devs will choose to first optimize for a lead platform, that platform tending to be the one they anticipate most of their sales/revenue etc. will come from. And, well, it's no secret that said platform happens to be the PS5 between it and Series.

Of course there are exceptions, mainly when a platform has marketing rights to a certain game or some kind of exclusivity deal. There's also the reality that versions of games that release later on a different platform probably benefit from the additional expertise gained since the initial release, and from extra development time, lending themselves to further optimizations that may not have been present on the initial platform due to time constraints and a less mature dev environment on the software developer's side (e.g. Falconeer, The Touryst, The Medium).

So it's absolutely not wise to always try saying the reason some of these games perform better on one platform versus the other is down to hardware architecture (perceived) strengths or weaknesses; even if those are a factor, they are usually never the only factor and sometimes aren't the main factor, either. The industry's come a long way from the days of stuff like SNES/Genesis where entirely different teams were put on a given platform (and usually the "A" team was put on the SNES version while the "B" team put on the Genesis/MegaDrive one) leading to some wild differences in 3P multiplats. That type of stuff was neat and is appreciable in its own way but it also created some headaches and situations where one platform may've been hamstrung on performance when it didn't need to be.

And this is all in addition to the things you've brought up, like API issues/differences and certain coding skill sets lending themselves to one group of APIs versus the other. So many people ITT (myself included) have gone on long enough about "teh hardwayah the hardwayarh!!!" as if that's the only factor, when we could be a bit more considerate and mention these other just-as-important (if not MORE important) factors/realities which impact game performance on a given system when it comes to multiplat releases.
 
Also, the PS5 has double the CUs of the base PS4 and triple those of the Xbox One; both of those consoles were the base for video game development last gen, just like the PS5 will be the base for this gen.

BTW, what do you make of Forza Horizon 5's Performance and Quality modes both running at native 4K? One is 30 fps with better visuals while the other is native 4K 60 fps with lower graphics settings. Why not just go with 1440p 60 fps and the same graphics settings? Is this due to a bandwidth issue, maybe? This reminds me of Ratchet and Spider-Man: Miles Morales. Neither was able to reduce resolution by half to get double the framerate like you should be able to; they had to reduce lighting quality, the number of NPCs, and other visual features.

Why are these GPUs not able to double the framerate at the same graphics settings? I've been doing that for over a decade on PC. The GPU load should be the same per pixel, and 1440p is less than half the pixels of native 4K, so what's going on here? The XSX has 560 GB/s of VRAM bandwidth, so it can't be the RAM bandwidth like it might be with the PS5's 448 GB/s of shared RAM. Unless of course its split RAM architecture is causing some bottlenecks when running games at high framerates.
From what I've just seen there's no real visual downgrade between Quality and Performance Mode for FH5. The latter seems to have more draw distance and less per-object motion blur but that's about it. Pretty much same visual fidelity in both with the bonus of 60 FPS in the latter.

Don't know where this narrative that there are graphical settings differences between the two modes came from. At most, Performance Mode seems to have less motion blur and might have more drops in resolution targets (somewhat more frequent, somewhat lower lows) compared to Quality Mode. But that would be par for the course in terms of expectations, IMO.
 
You can run at its maximum safe TDP or close to it, but that isn't running at your theoretical TFLOPS limit. Running the GPU at its theoretical limit would need more TDP than your card allows, and before that the GPU would be bottlenecked by memory bandwidth or something else.
There isn't any game that can do that and there never will be. It's impossible.

Which is why this utterly ignorant insistence on focussing only on BS theoretical TFLOPs marketing numbers is just a massive exercise in stupidity and only serves to muddy any possibility for intelligent discourse on actual computing hardware performance.
 

Corndog

Banned
Why do we always get this bullshit about fluctuations of the PS5's GPU clock speed?
Show me a game where the GPU and CPU are taxed to the max the entire time.
Loads differ on a frame-by-frame basis. Unless you've got an SX game that loads the GPU and CPU to the max every frame?
Isn't that basically what he said?
 

John Wick

Member
This is literally the dumbest thing I've heard all week. The cards will not overheat, because they are spec'd to hit those tflops out of the box. They will only overheat if you push the clocks beyond the power limits set by the manufacturer; that's called overclocking. The 10.3 and 12.1 tflops limits set by the console manufacturers are there precisely to avoid the chips overheating, because that's what they have determined to be the safest, highest clocks that stay within the power limits.

You need to start paying attention to what other people post around here instead of dismissing everything. You might learn a thing or two. Otherwise you'd end up embarrassing yourself saying dumb shit like cards overheating because they are hitting their theoretical max. Absolute nonsense.
So which card has reached its theoretical limit, teraflops-wise? Post it here so we can see your great wisdom. So which test does the GPU manufacturer do to determine the theoretical TF of a GPU?
So are you saying that when a GPU is pushed hard it doesn't heat up more? To reach the theoretical limit you would have to push everything to the max for a sustained time, not for 10 seconds. That would include the power. The card would downclock well before even hitting the limit.
Weren't you the guy crying about why the PS5's IO and SSD hadn't been maxed out by first-year titles, eh?
 
Last edited:

SlimySnake

Flashless at the Golden Globes
So which card has reached its theoretical limit, teraflops-wise? Post it here so we can see your great wisdom. So which test does the GPU manufacturer do to determine the theoretical TF of a GPU?
So are you saying that when a GPU is pushed hard it doesn't heat up more? To reach the theoretical limit you would have to push everything to the max for a sustained time, not for 10 seconds. That would include the power. The card would downclock well before even hitting the limit.
Weren't you the guy crying about why the PS5's IO and SSD hadn't been maxed out by first-year titles, eh?
What, are you blind? Scroll up. I literally posted a comparison of the 6600 XT and 5700 XT which shows both cards consistently hitting their peak clocks and 99% GPU utilization. Do you even read my posts?

You need to read up on this stuff a bit more. Everything you said is wrong. Literally everything. I don't even know where to start. You are seriously asking me which tests these companies do to determine their peak clocks? Are you new to gaming? How old are you? Serious question. Have you ever owned a gaming PC?

I have never met anyone who thinks GPUs will overheat and die if run at max clocks out of the box for more than 10 seconds. It's hilarious to see you post lol emojis on every post, because your replies are laughable.

Take a few hours to watch YouTube videos of PC YouTubers benchmarking cards and see how overclocking is done to push a card beyond its limits. Go look at AIBs' versions of GPUs that are cooled with better cooling solutions and overclocked to get better performance from the same exact chip at a higher clock. Look at the GPU utilization during these benchmarks of games and demos. It will almost always be 99%. Because even with higher clocks on the same chip with the same CU count, the clocks define the performance gains.
 
That would never happen because the Series X isn't 25% faster than the PS5 at anything, really. There's maximum memory bandwidth but the Series consoles seem to lose a bit of effective bandwidth due to memory contention, and nothing ever scaled 100% on memory bandwidth, certainly not games.

I already touched on this earlier; when you factor out CPU usage and even take into account a possible penalty from having two virtualized "split" memory pools, typical GPU bandwidth for Series X is probably around 466.5 GB/s or so, possibly a tad less. That's for a likely typical instance where the CPU is using the memory 10% of the time and audio 5% of the time, with a possible 2% penalty in case data needs to be reshuffled between the "fast" and "slow" pools in the system (or the application needs to switch between the two pools for required data).

Even then, though, that is 93.8 GB/s between the two, and I don't think the CPU in any of these systems is going to need anywhere near that, and audio probably would not need more than 20 GB/s (Tempest can use up to 20 GB/s of the PS5's memory bandwidth, and I doubt Series X's audio needs more than that). So likely usage would be closer to 70 GB/s between the two, meaning GPU-intensive operations probably see Series X's GPU at about 490 GB/s of effective GDDR6 bandwidth.

For comparison, you would factor out that same 70 GB/s from the PS5's total bandwidth due to CPU and audio usage, assuming a scenario where for a given second operations are mostly GPU-bound but the CPU and audio may need to operate at full load (for whatever reason), so GPU effective bandwidth on PS5 in such a scenario would be closer to 378 GB/s. In both cases I'm not taking read/write operations from/to the SSD into account.

Even so, the part of your comment I've quoted is likely factually wrong, because just from the scenario I put forth (which could be seen as a typical usage of hardware resources by a game on both platforms), Series X's effective GPU bandwidth is closer to a 30% advantage over the PS5's, if we're just talking GPU.
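Reproducing that arithmetic (the 70 GB/s of CPU-plus-audio traffic is the assumption carried over from above, not a measured figure):

```python
XSX_TOTAL, PS5_TOTAL = 560, 448        # GB/s of raw GDDR6 bandwidth
CPU_PLUS_AUDIO = 70                    # GB/s assumed above for CPU + audio traffic

xsx_gpu = XSX_TOTAL - CPU_PLUS_AUDIO   # ~490 GB/s left for the GPU
ps5_gpu = PS5_TOTAL - CPU_PLUS_AUDIO   # ~378 GB/s left for the GPU
print(xsx_gpu / ps5_gpu)               # ~1.30 -> the "closer to 30%" figure above
```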
 
Last edited:

CrustyBritches

Gold Member
So which card has reached its theoretical limit, teraflops-wise? Post it here so we can see your great wisdom. So which test does the GPU manufacturer do to determine the theoretical TF of a GPU?
MS chose that core clock based on total system power consumption for government energy standards for next-gen consoles, fan noise, and chip yields. It has little to do with the theoretical capability of the GPU. You could put that chip into a PC and hit 14-15TF pretty easily.
 

John Wick

Member
What, are you blind? Scroll up. I literally posted a comparison of the 6600 XT and 5700 XT which shows both cards consistently hitting their peak clocks and 99% GPU utilization. Do you even read my posts?

You need to read up on this stuff a bit more. Everything you said is wrong. Literally everything. I don't even know where to start. You are seriously asking me which tests these companies do to determine their peak clocks? Are you new to gaming? How old are you? Serious question. Have you ever owned a gaming PC?

I have never met anyone who thinks GPUs will overheat and die if run at max clocks out of the box for more than 10 seconds. It's hilarious to see you post lol emojis on every post, because your replies are laughable.

Take a few hours to watch YouTube videos of PC YouTubers benchmarking cards and see how overclocking is done to push a card beyond its limits. Go look at AIBs' versions of GPUs that are cooled with better cooling solutions and overclocked to get better performance from the same exact chip at a higher clock. Look at the GPU utilization during these benchmarks of games and demos. It will almost always be 99%. Because even with higher clocks on the same chip with the same CU count, the clocks define the performance gains.
Your dumb as fuck. Who is talking about overclocking? Have I mentioned anything about overclocking the GPU? So do you think that by overclocking the 3090 you will hit its 35.5 TF theoretical performance?
This is about the theoretical 12.15 teraflops peak performance that keeps getting bandied about on here, and the claim that the SX can achieve it because of its fixed clock speed.
Which I explained: no GPU can reach its theoretical teraflops number, because everything would have to work at its maximum speed to achieve it. Imagine feeding all 52 CUs and keeping them at maximum all the time? You would run into bottlenecks long before that, and overheating with downclocking kicking in.
As I stated before, the SX will have an advantage in compute and RT. About 10-15% on average.
 
Last edited:

Loxus

Member
What, are you blind? Scroll up. I literally posted a comparison of the 6600 XT and 5700 XT which shows both cards consistently hitting their peak clocks and 99% GPU utilization. Do you even read my posts?

You need to read up on this stuff a bit more. Everything you said is wrong. Literally everything. I don't even know where to start. You are seriously asking me which tests these companies do to determine their peak clocks? Are you new to gaming? How old are you? Serious question. Have you ever owned a gaming PC?

I have never met anyone who thinks GPUs will overheat and die if run at max clocks out of the box for more than 10 seconds. It's hilarious to see you post lol emojis on every post, because your replies are laughable.

Take a few hours to watch YouTube videos of PC YouTubers benchmarking cards and see how overclocking is done to push a card beyond its limits. Go look at AIBs' versions of GPUs that are cooled with better cooling solutions and overclocked to get better performance from the same exact chip at a higher clock. Look at the GPU utilization during these benchmarks of games and demos. It will almost always be 99%. Because even with higher clocks on the same chip with the same CU count, the clocks define the performance gains.
You still have lots to learn.
The first person in the comments was spot on about the PS5's variable frequency. Make sure to read all to get the full understanding.


There's no better explanation than what Mark Cerny has already given in his sermon, and later clarified in his DigitalFoundry interview.

It’s tied to power usage, not temperature. It’s designed so that for the most part the clocks stay at their highest frequencies, regardless of whether the console is in a TV cabinet or somewhere cold.
It’s designed to be deterministic. The purpose is to reduce clocks when they don’t need to be so high to help with power usage and keeping the fans quiet. If a GPU is expected to deliver a frame every 16.6ms (60 FPS) and it’s done its work already in 8ms, then there’s no point it sitting there idle at 2.23Ghz sucking power and generating heat. If it could intelligently drop the clocks so that it finishes its frame just before the 16.6ms you get the same 60 FPS game, the same graphical detail, but with much less fan noise.

Anyone with a gaming PC will know that GPU utilisation is rarely at 100%
It typically takes burn tests and crazy benchmark software to get that.

Cerny seemed to suggest that you’d need quite a synthetic test to really load up both the CPU and GPU enough to cause them to declock for power reasons, and that it won’t show up in any normal game.
He said that same synthetic test would simply cause a PS4 to overheat and shutdown.
And even then, dropping power consumption by 10% only drops core clocks by a “few” percent. Which makes sense if you’re used to overclocking modern GPUs. You need to crank up the power to get even a minimal amount of extra clock, and cranking up an already jacked up GPU clock by a “few” percent barely makes a difference to performance anyway.

It’s all about keeping the fan noise down without sacrificing performance or overheating.

PS5 variable clock speeds aren’t at all the same animals as boost clocks on mobile devices or PC CPUs.
It’s closer to modern GPU overclocking where you max out your power budget, and do so with sufficient cooling that you never hit the thermal limit.

You couldn’t have a game console that scaled its clocks based on temperature. It would never work. Neither would having unpredictable or variable performance.

The PS5 devkits allow developers to intentionally select different power profiles so that they can test and profile performance.

tl;dr The variable frequency system as described by Cerny means every PS5 will play exactly the same way regardless of temperatures (to the extent a PS4 does) or whatever is happening in the game at any given moment. They all behave identically, and outside of the kind of burn-tests that would overheat and crash a PS4, the PS5 clocks stay at the “boost” frequencies. If a developer wants to do something that does exceed the power budget, they can know precisely how it will react.

PS5 has a massive focus on efficiency
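The "drop power by 10%, lose only a couple percent of clock" claim also lines up with a crude dynamic-power model (power roughly proportional to frequency cubed once voltage tracks frequency). This is only an illustration, not Sony's actual voltage/frequency curve:

```python
# Crude model: P ~ C * V^2 * f, with V roughly tracking f near the top of the curve,
# so P ~ f^3.  Invert it to see what a 10% power cut costs in clock speed.
power_cut = 0.10
clock_factor = (1 - power_cut) ** (1 / 3)
print(1 - clock_factor)               # ~0.035 -> only about a 3.5% clock reduction
print(2.23 * clock_factor)            # ~2.15 GHz if the PS5 GPU shed 10% of its power
```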
 
Last edited:

John Wick

Member
MS chose that core clock based on total system power consumption for government energy standards for next-gen consoles, fan noise, and chip yields. It has little to do with the theoretical capability of the GPU. You could put that chip into a PC and hit 14-15TF pretty easily.
No GPU hits its theoretical TF limit, because that is a best-case scenario. It would only achieve that if it calculated FP instructions exclusively and everything ran at 100%, including feeding all 52 CUs simultaneously and at full rate. That would never happen while gaming.
I don't know why you're telling me that if you took the SX GPU and overclocked it, it would hit a higher TF limit. That was never the argument.
 

CrustyBritches

Gold Member
No GPU hits its theoretical TF limit, because that is a best-case scenario. It would only achieve that if it calculated FP instructions exclusively and everything ran at 100%, including feeding all 52 CUs simultaneously and at full rate. That would never happen while gaming.
I don't know why you're telling me that if you took the SX GPU and overclocked it, it would hit a higher TF limit. That was never the argument.
Did you not ask, "So which test does the GPU manufacturer do to determine the theoretical TF of a GPU?" The answer is that MS based their core clock on government environmental regulations, noise levels, and chip yields.
 

rnlval

Member
The high-level architecture between 8th-gen and 9th-gen didn't change all that much. They went from x86 to higher-IPC x86, from AMD GCN to AMD RDNA2. The ISAs are very similar between the generations this time, and they all use unified memory. IIRC AMD's GFX10 ISA still needs to be 100% compliant with the GFX7 (precisely because of the consoles), and only RDNA3 / GFX11 is going to break that.
Save for adapting to the much faster mass storage and using the dedicated ray tracing units, there isn't all that much of a difference between last gen and this one. Even Unreal Engine 5's "software triangles" Nanite and Lumen could probably run pretty well on the XBOne and PS4 (at realistic resolutions of course) if they had access to solid state storage.



I think you're conflating a number of different things that don't really mix the way you're assuming.
Both consoles have 4 shader arrays and 2 shader engines. The PS5 uses fewer WGPs per shader engine than the Series X, so while it has a lower maximum theoretical throughput, its work distributors are probably more effective. Not only because each has to handle fewer compute ALUs, but also because they run at higher clocks.

It is absolutely true that a wider and lower-clocked compute subsystem will not be more performant in all scenarios (just look at the RX 5700's 8 TFLOPs with 36 CUs vs. Vega 64's 11.5 TFLOPs with 64 CUs), just as much as the faster fillrate will not be more performant in all scenarios.

As for the "4K" resolution statements, you should know that neither does a wider architecture gets "more" favored by a larger resolution (there's >2 million pixels to process even at a lowly 1080p, it's not like 2300-3300 ALUs would ever be idle due to low resolution), nor do most games ever render at 3840*2160. Almost all games are using variable resolution between 1440p and 1800p with some kind of upsampling on top.

It could be that Ubisoft (like many others) is simply optimizing their engines for the GPU architecture that was developed for the gaming consoles that constitute most of their software sales, and it has little to do with marketing agendas.
Expect most AAA games from big publishers to follow suit.


That would never happen because the Series X isn't 25% faster than the PS5 at anything, really. There's maximum memory bandwidth but the Series consoles seem to lose a bit of effective bandwidth due to memory contention, and nothing ever scaled 100% on memory bandwidth, certainly not games.
RDNA pipeline's instruction retirement latency is about 33% (using wave64) to 40% (using wave32) less than VEGA's wave64 counterpart. Rendering performance is all about rendering time. AMD got the message on lower latency's importance, not just raw TFLOPS.

wave32 = wavefront with 32 element threads, RDNA mode.
wave64 = wavefront with 64 element threads, GCN mode.

The programming model is MIMT i.e. multiple instructions multiple (data) threads.

NVIDIA CUDA has had warp32, similar to wave32, since the G80. NVIDIA wins on the wave-size programming model. DirectX Shader Model 6.x exposes a wave compute programming model, and GCN's wave64 is less efficient when compared to NVIDIA's wave32/warp32.

NVIDIA / MS imposed wave 32/ warp 32 programming model on AMD!
NVIDIA / MS imposed BVH RT model on AMD!
NVIDIA / MS imposed mesh shader model on AMD!
NVIDIA / MS imposed DirectML on AMD!

Since 2006, NVIDIA's CUDA architecture has seen CELL, Xenos (Xbox 360), TeraScale VLIW5/VLIW4, and GCN fall off the cliff.
 
Last edited:

SlimySnake

Flashless at the Golden Globes
Your dumb as fuck. Who is talking about overclocking? Have I mentioned anything about overclocking the GPU? So do you think that by overclocking the 3090 you will hit its 35.5 TF theoretical performance?
This is about the theoretical 12.15 teraflops peak performance that keeps getting bandied about on here, and the claim that the SX can achieve it because of its fixed clock speed.
Which I explained: no GPU can reach its theoretical teraflops number, because everything would have to work at its maximum speed to achieve it. Imagine feeding all 52 CUs and keeping them at maximum all the time? You would run into bottlenecks long before that, and overheating with downclocking kicking in.
YOU'RE. Not YOUR. Keep that in mind next time you call someone a dumb fuck.

The fact that you dont understand why I am bringing up overclocking just shows how utterly clueless you are about how tflops and clocks correlate.

Which I explained: no GPU can reach its theoretical teraflops number, because everything would have to work at its maximum speed to achieve it.
This right here is asinine. You are spitting in the face of over a decade of computer graphics to spout nonsense that has no basis in reality. Every PC GPU can already outperform its theoretical tflops figure, because it can run beyond the clocks used to calculate it. We saw this in the video you continue to ignore: it shows the 5700 XT hitting well above the 1.905 GHz boost clock AMD themselves used to calculate the card's theoretical tflops number of 9.75.

40 CUs * 64 shader processors * 2 * 1.905 GHz = 9.75 TFLOPS

For the 6600 XT they used the max boost clock of 2.589 GHz to calculate the card's theoretical tflops.

32 CUs * 64 shader cores * 2 * 2.589 GHz = 10.6 TFLOPS
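The same arithmetic as a tiny helper, using AMD's rated boost clocks and then the clocks observed in the video (discussed just below):

```python
def fp32_tflops(cus, clock_ghz):
    return cus * 64 * 2 * clock_ghz / 1000   # 64 ALUs per CU, 2 FLOPs (FMA) per clock

print(fp32_tflops(40, 1.905))   # 5700 XT at its rated boost    -> ~9.75
print(fp32_tflops(32, 2.589))   # 6600 XT at its rated boost    -> ~10.6
print(fp32_tflops(40, 2.153))   # 5700 XT at the observed clock -> ~11.0
print(fp32_tflops(32, 2.790))   # 6600 XT at the observed clock -> ~11.4
```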

GSpLT2l.jpg


Notice how both cards are able to hit higher clocks in most games, which means they are operating beyond their theoretical maximums. Something you said never happens, because cards supposedly never even hit their theoretical maximum.

Here is Horizon, with the 6600 XT running the game at 2.79 GHz while the 5700 XT runs it at 2.153 GHz. Both roughly 200 MHz BEYOND the maximum clock AMD themselves advertised for the card.

i6Gts94.jpg



Your theory about cards not hitting max tflops is WRONG on every level. The only way the card will not be fully utilized is if the game is capped at 30 or 60 fps and the dev is content with leaving a lot of performance on the table. But we have seen every game drop frames this gen, and nearly every game uses DRS, which literally drops the resolution to let the GPU operate at full capacity so as not to leave performance on the table.

I have a UPS (uninterruptible power supply) that lets me view the power consumption of my PS5, TV or PC at any given moment. I can easily see which games fully max out the APU and which ones don't. BC games without PS5 patches top out at 100 W; these are your Uncharted 4s running at PS4 Pro clocks. Then you have games like Horizon, which are patched to use higher PS5 clocks and consume a bit more. Then you have games like Doom Eternal running on the PS5 SDK, fully utilizing the console, and I can see the power consumption at 205-211 watts consistently. Same thing DF reported when they ran Gears 5's XSX native port; it was up to 211-220 W at times. What's consuming all that power if not the goddamn GPU running at its max clocks?

This lines up with what happens on my PC. When I run Hades at native 4K 120 fps, my GPU utilization sits at roughly 40%. If I leave the framerate uncapped, it goes up to 99% and runs the game at 350 fps. Games are designed to automatically scale up; it's been this way for well over a decade, ever since modern GPUs arrived in the mid-2000s. If they didn't scale, you would not see GoW and The Last Guardian automatically hit 60 fps on the PS5 without any patches. If they didn't scale with CUs, you would not see Far Cry 6 have a consistent resolution advantage on the XSX. Same goes for Doom Eternal. These games run well because modern GPUs are able to utilize not just clocks but all of the shader cores.
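That Hades behaviour is exactly what a frame cap does. A minimal sketch of a capped loop (the 3 ms of "GPU work" is a made-up stand-in, not a measurement from any real game):

```python
import time

TARGET = 1 / 120                       # 120 fps cap -> ~8.3 ms frame budget
busy, t0 = 0.0, time.monotonic()
for _ in range(240):                   # simulate two seconds of capped frames
    start = time.monotonic()
    time.sleep(0.003)                  # stand-in for ~3 ms of actual GPU work
    busy += time.monotonic() - start
    leftover = TARGET - (time.monotonic() - start)
    if leftover > 0:
        time.sleep(leftover)           # the cap burns the rest of the budget idling
print(busy / (time.monotonic() - t0))  # ~0.36 "utilization" when capped; uncapped,
                                       # that idle time becomes extra frames and the
                                       # GPU sits pegged near 99% instead
```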
 

SlimySnake

Flashless at the Golden Globes
You still have lots to learn.
The first person in the comments was spot on about the PS5's variable frequency. Make sure to read all to get the full understanding.


There's no better explanation than what Mark Cerny has already given in his sermon, and later clarified in his DigitalFoundry interview.

It’s tied to power usage, not temperature. It’s designed so that for the most part the clocks stay at their highest frequencies, regardless of whether the console is in a TV cabinet or somewhere cold.
It’s designed to be deterministic. The purpose is to reduce clocks when they don’t need to be so high to help with power usage and keeping the fans quiet. If a GPU is expected to deliver a frame every 16.6ms (60 FPS) and it’s done its work already in 8ms, then there’s no point it sitting there idle at 2.23Ghz sucking power and generating heat. If it could intelligently drop the clocks so that it finishes its frame just before the 16.6ms you get the same 60 FPS game, the same graphical detail, but with much less fan noise.

Anyone with a gaming PC will know that GPU utilisation is rarely at 100%
It typically takes burn tests and crazy benchmark software to get that.

Cerny seemed to suggest that you’d need quite a synthetic test to really load up both the CPU and GPU enough to cause them to declock for power reasons, and that it won’t show up in any normal game.
He said that same synthetic test would simply cause a PS4 to overheat and shutdown.
And even then, dropping power consumption by 10% only drops core clocks by a “few” percent. Which makes sense if you’re used to overclocking modern GPUs. You need to crank up the power to get even a minimal amount of extra clock, and cranking up an already jacked up GPU clock by a “few” percent barely makes a difference to performance anyway.

It’s all about keeping the fan noise down without sacrificing performance or overheating.

PS5 variable clock speeds aren’t at all the same animals as boost clocks on mobile devices or PC CPUs.
It’s closer to modern GPU overclocking where you max out your power budget, and do so with sufficient cooling that you never hit the thermal limit.

You couldn’t have a game console that scaled its clocks based on temperature. It would never work. Neither would having unpredictable or variable performance.

The PS5 devkits allow developers to intentionally select different power profiles so that they can test and profile performance.

tl;dr The variable frequency system as described by Cerny means every PS5 will play exactly the same way regardless of temperatures (to the extent a PS4 does) or whatever is happening in the game at any given moment. They all behave identically, and outside of the kind of burn-tests that would overheat and crash a PS4, the PS5 clocks stay at the “boost” frequencies. If a developer wants to do something that does exceed the power budget, they can know precisely how it will react.

PS5 has a massive focus on efficiency

No idea what this has to do with what I posted. I never questioned the PS5's variable clock speeds. I never even mentioned the PS5 in that post. You are either confusing me with someone else or you think I am an Xbox fanboy just because I dared to defend GPUs hitting their theoretical tflops max. Something we just took for granted when Sony said the PS4 was 1.84 tflops because its clock was 800 MHz and it had 18 CUs, and when the PS4 Pro came out and they said it was now 36 CUs at 911 MHz. We consistently saw the PS4 have a 40% resolution advantage over the X1 last gen, which settled around 900p after a rough first year. Then we consistently saw the PS4 Pro offer 2x the resolution of the PS4, in line with the 2.2x increase in theoretical tflops. Then we saw the X1X consistently offer a 40% increase in pixels over the PS4 Pro, consistent with its theoretical tflops difference.

But now we are throwing away a decade of console performance results that tracked theoretical tflops because... why?
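Those last-gen ratios do check out on paper, using the stock specs (real game resolutions obviously varied):

```python
# Theoretical TF = CUs * 64 ALUs * 2 FLOPs * clock (GHz) / 1000
ps4, xbone   = 18 * 64 * 2 * 0.800 / 1000, 12 * 64 * 2 * 0.853 / 1000   # 1.84 vs 1.31
ps4_pro, x1x = 36 * 64 * 2 * 0.911 / 1000, 40 * 64 * 2 * 1.172 / 1000   # 4.20 vs 6.00
print(ps4 / xbone)                    # ~1.41 -> the ~40% gap behind 1080p vs 900p
print((1920 * 1080) / (1600 * 900))   # 1.44  -> pixel ratio of 1080p vs 900p
print(ps4_pro / ps4, x1x / ps4_pro)   # ~2.28 and ~1.43, matching the 2.2x and ~40% above
```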
 

Loxus

Member
No idea what this has to do with what I posted. I never questioned the PS5's variable clock speeds. I never even mentioned the PS5 in that post. You are either confusing me with someone else or you think I am an Xbox fanboy just because I dared to defend GPUs hitting their theoretical tflops max. Something we just took for granted when Sony said the PS4 was 1.84 tflops because its clock was 800 MHz and it had 18 CUs, and when the PS4 Pro came out and they said it was now 36 CUs at 911 MHz. We consistently saw the PS4 have a 40% resolution advantage over the X1 last gen, which settled around 900p after a rough first year. Then we consistently saw the PS4 Pro offer 2x the resolution of the PS4, in line with the 2.2x increase in theoretical tflops. Then we saw the X1X consistently offer a 40% increase in pixels over the PS4 Pro, consistent with its theoretical tflops difference.

But now we are throwing away a decade of console performance results that tracked theoretical tflops because... why?
You replied to someone talking about the XSX reaching 12 TF, who was replying to someone saying the PS5 can't reach 10 TF, with a post about PC GPU utilization.

It's about the PS5's variable frequency, and talking about PC GPU utilization when the topic is consoles creates misinformation.

So I was simply letting you know the PS5 doesn't work like that, and that your post about PC GPU utilization is irrelevant.
 

rnlval

Member
So which game uses 35.58 teraflops of the RTX 3090?
Note that RTX 3090's TFLOPS is split between INT/FP and FP CUDA cores.

Turing SM has INT and FP CUDA cores. Ampere SM evolved Turing INT cores into INT/FP cores.

RdL63yU.png


Integer shader workloads did NOT disappear when RTX Ampere was released!

AMD RDNA has common shader units for both integer and floating-point work. The typical TFLOPS argument between Turing and RDNA hides Turing's extra TIOPS compute capability.

Ampere RTX' extra shader compute power is useful for mesh shaders, denoise raytracing, DirectStorage decompression, DirectML, and 'etc'.


17.79 TFLOPS from the FP CUDA cores. If you add 40% extra TIOPS it would land on 24.906 TOPS. The difference between the RTX 2080 Ti and RTX 3090 is the extra TFLOPS (INT units able to convert into FP units in Ampere), hence RTX 2080 Ti's 24.906 TOPS vs RTX 3090's 38.58 TOPS yields a 58% advantage for the RTX 3090.

In most games, RTX 3080 Ti and RTX 3090 have about 58% advantage over RTX 2080 Ti.
 
Last edited: