
Honest ray-tracing performance?

Would like to preface with this: TFLOPs as a measure of gaming performance are complete and utter bollocks.

Exhibit 1: NVIDIA rates the 3080 at 30 TFLOPs yet claims it is only twice as fast as the 2080, which at their official clocks has 10.77 TFLOPs.

The first problem is NVIDIA massively undersells their clock speeds when it comes to Turing, rating the 2080 at only 1710MHz boost when most retail units have no issue reaching 2000MHz.

To further complicate matters, a TFLOP of RDNA compute isn't even equivalent 1:1 to a TFLOP of Turing, which in turn isn't 1:1 with Ampere, making raw TFLOPs as a measurement of performance utterly pointless.

How do we solve this? Well, we can't really, but the best we can do is use known performance differentials to rate the different cards and add them up. To make this easy, let's use RDNA2 as a baseline.

Series X: 12.2 TFLOPs of RDNA2, said to be roughly equivalent to the 2080.
2080: 12.2 RDNA2-equivalent TFLOPs.
2080 Ti: 15.3, or roughly 25% faster than the 2080.
3080: 20.74, or 60 to 80% faster than the 2080. I used 70% as an average.
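
If you want the arithmetic spelled out, here's a rough back-of-the-envelope sketch. The 25% and 60-80% differentials are the assumptions listed above, not measured data, so the output is only as good as those inputs.

```python
# Back-of-the-envelope: express NVIDIA cards in "RDNA2-equivalent TFLOPs",
# using the assumed differentials above (not measured numbers).
SERIES_X_TFLOPS = 12.2  # RDNA2 baseline, said to be roughly a 2080

relative_to_2080 = {
    "2080": 1.00,     # baseline by assumption
    "2080 Ti": 1.25,  # ~25% faster than the 2080
    "3080": 1.70,     # midpoint of the claimed 60-80% uplift
}

for card, ratio in relative_to_2080.items():
    print(f"{card}: ~{SERIES_X_TFLOPS * ratio:.2f} RDNA2-equivalent TFLOPs")
# -> 2080 ~12.20, 2080 Ti ~15.25, 3080 ~20.74
```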

Moving on to RT now.

NVIDIA initially used gigarays as a metric boasting about 10 gigarays/s for its 2080 Ti.


Microsoft came out later, obviously using different metrics, claiming 380 billion intersections per second for its Series X. We know by now that the metrics used weren't the same at all.

Recently, NVIDIA ostensibly used AMD's metrics to brag about their RT performance, using a similar method to Microsoft, who claimed the Series X has the equivalent of 25 TFLOPs worth of raster (12) + RT (13) performance.

They said their 2080 Ti has 34 RT-TFLOPs.

Now it gets even more nebulous because it's very unclear how NVIDIA got those numbers as they also claimed "58 RT-TFLOPs" for the 3080.


John from DF said Minecraft with RT on the Series X hovered between 30-60fps at 1080p. The RTX 2060 with DLSS on gets 60fps at 1080p, and it was said 1440p with DLSS worked out OK.

With that said, it would seem the RT performance in the Minecraft demo on the SX isn't too far away from the RTX 2060 without DLSS. We could be generous and put it in the ballpark of an RTX 2070.

What's the takeaway from all of that? If the Minecraft demo is even remotely representative of the RT performance of RDNA2, I'm afraid it'll get stomped by Ampere, because it's already losing out to Turing. Do keep in mind this demo was put together in one month by a single engineer, so we might get better performance.

Though as it stands, I would be shocked if the RT performance of RDNA2 came anywhere near Ampere, because as far as we are aware right now, it's worse than Turing.

Discuss.
 

GymWolf

Member
So they are terafakes more than teraflops.

I knew that 30 TF vs 10 TF for only 2 times the performance was a bit strange, even for a noob like me...
 

Dampf

Member
RDNA2's RT performance is basically entirely unknown. We can compare numbers as much as we want, but they may or may not mean much in actual practical usage.

However, we do have two tiny indicators of its performance, as you said.

Minecraft DXR, for example. As you said, a 2060 does around 30 FPS at 1080p without DLSS (it's unfair to include DLSS in that comparison, so let's leave that aside) while a 2070 does around 40 FPS, which falls within that 30-60 FPS range. A 2070 should be pretty comparable, in theory.
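
Just to put those rough figures side by side (these FPS numbers are approximations, not benchmark data, so treat this as a ballpark only):

```python
# Rough relative RT throughput implied by the Minecraft DXR figures above
# (1080p, no DLSS). These FPS values are approximations, not benchmarks.
fps_1080p = {"RTX 2060": 30, "RTX 2070": 40}
xsx_band = (30, 60)  # the 30-60 FPS range reported for the XSX demo

for card, fps in fps_1080p.items():
    print(f"{card}: {fps / fps_1080p['RTX 2060']:.2f}x the 2060")

low, high = (b / fps_1080p["RTX 2060"] for b in xsx_band)
print(f"XSX demo: {low:.2f}x to {high:.2f}x the 2060")
# The band straddles both cards, which is why "somewhere around a 2060-2070"
# is about as precise as this comparison gets (and map complexity differs).
```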

However, we must account for the fact that the maps vary greatly in complexity. The map the XSX demo showed basically pales in comparison to the complex RTX worlds professional Minecraft players built for the RTX demo. That can be a huge factor for performance.

That optimization argument for the XSX can also be applied to Turing. Remember, both are using DXR, so the Xbox was benefiting a lot from the work that was already established with Minecraft RTX. Minecraft RTX received an update to DXR 1.1 a couple of days ago, which improved performance for RTX cards by a lot.

The next indicator would be the technical aspect: RDNA2 only accelerates BVH intersections with fixed-function hardware, as opposed to the RT cores, which accelerate both BVH intersection and traversal.
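
For anyone unclear on what that split means in practice, here is a deliberately toy sketch of a BVH ray query (1D intervals instead of real AABBs, just to show the control flow). The hardware split in the comments reflects what has been reported publicly: intersection tests in fixed-function units on RDNA2, with traversal and stack management left to shader code, versus RT cores handling both on Turing. The exact low-level behaviour isn't public, so take it as an illustration, not a spec.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Toy 1D "BVH" node: an interval stands in for an AABB, hit_t for a triangle hit.
@dataclass
class Node:
    lo: float
    hi: float
    hit_t: Optional[float] = None             # leaf-only: parametric hit distance
    children: List["Node"] = field(default_factory=list)

def trace(ray_t_max: float, root: Node) -> Optional[float]:
    stack, closest = [root], None
    while stack:                               # traversal loop + stack management
        node = stack.pop()                     #   (shader code on RDNA2, RT core on Turing)
        if node.hi < 0 or node.lo > ray_t_max:
            continue                           # box intersection test (fixed-function on both)
        if not node.children:                  # leaf: primitive intersection test
            if node.hit_t is not None and (closest is None or node.hit_t < closest):
                closest = node.hit_t           #   (fixed-function on both)
        else:
            stack.extend(node.children)        # traversal decision (shader code on RDNA2)
    return closest

root = Node(0, 10, children=[Node(0, 4, hit_t=3.0), Node(5, 9, hit_t=7.0)])
print(trace(8.0, root))  # 3.0 -> closest hit along the ray
```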

But we have to wait and see how the performance stacks up.
 
Last edited:
Nothing really to discuss. You have it spot on.

It's a shame that some people still aren't able to understand that you can't compare teraflops across architectures. I've seen posts saying the 3080 is 3x next gen consoles.
 

Akuji

Member
Please stop this stupidity .. if u got no clue what ur talking about then just don’t do it.

Ridiculous...

RTX3080 is a Monster.
PS5 and XSX will be plenty fast for consoles.
Anything else is speculation and absurdity.

All the fucking time ..
 
Please stop this stupidity .. if u got no clue what ur talking about then just don’t do it.

Ridiculous...

RTX3080 is a Monster.
PS5 and XSX will be plenty fast for consoles.
Anything else is speculation and absurdity.

All the fucking time ..
What the fuck are you talking about? I literally used actual numbers provided to us with the context. Did you freakin’ read the OP?
 
Last edited:

martino

Member
You only reach 2GHz manually, no? (real question)
If yes, I doubt benchmarks are using 2GHz (outside the OC section of the reviews).
 

INC

Member
Currently on RTX cards, if games don't have DLSS 2.0 you're basically getting reflections for half your frame rate, i.e. not worth it in the slightest.

Unless you want Minecraft RT... lol
 
Last edited:

llien

Member
If the RT perf of RDNA2 is bad (which covers the major consoles, mind you), the main impact would be that RT becomes even less relevant, more of a failed hype train like VR.
 

vkbest

Member
Though as it stands, I would be shocked if the RT performance of RDNA2 came anywhere near Ampere, because as far as we are aware right now, it's worse than Turing.

We will have to wait and see what implementation Sony and MS go with. I'm sure they will use a mix of software and hardware to improve performance, even if the quality is worse than Nvidia's implementation.
 

VFXVeteran

Banned

Really nothing to discuss. We've known for a long time that AMD doesn't have the capital, engineers, or the resources to compete in the graphics space with Nvidia. But Nvidia isn't trying to compete at the CPU level either.

This doesn't bode well for consoles though. If it wasn't for AMD, I don't think consoles would exist anymore.
 

Ellery

Member
This is a good post by Thugnificient, and it's exactly why I am happy that the PS5 has exclusive games with extremely talented devs tailor-fitting games to the PS5 hardware and bringing out the very best visual fidelity they can within the parameters they are operating with. On the PC end, we have the new beast cards from Nvidia that are going to brute-force their way through multiplatform games, which makes me excited for Gamepass games, but especially for games that support RTX and DLSS.

What a time to be a gamer. Juicy performance wherever you look and Nintendo bringing Mario Kart into your living room.
 

martino

Member
Really nothing to discuss. We've known for a long time that AMD doesn't have the capital, engineers, or the resources to compete in the graphics space with Nvidia. But Nvidia isn't trying to compete at the CPU level either.

This doesn't bode well for consoles though. If it wasn't for AMD, I don't think consoles would exist anymore.

I could see a parallel universe without AMD.
Consoles would still exist but would use Qualcomm/Samsung tech instead of AMD.
 
Last edited:

JimboJones

Member
It will all become clear once the AMD and Nvidia cards and the consoles are out and we get solid benchmarks across a range of games.
Although we'll probably still have people claiming bias at something 😅
 

Zathalus

Member
As to the topic, using XSX = 2080 is a flawed methodology; it was a two-week port using a weaker CPU. Gameplay later demonstrated performance closer to a 2080 Ti. Just judging on statements by AMD that performance per watt has increased by 50% over RDNA 1, and Microsoft's claims on IPC gains, the XSX GPU should actually be a bit faster than a 2080 Super. Not quite 2080 Ti level though.

Really nothing to discuss. We've known for a long time that AMD doesn't have the capital, engineers, or the resources to compete in the graphics space with Nvidia. But Nvidia isn't trying to compete at the CPU level either.

AMD didn't use to, but that is rapidly changing. The company is once again flexing its potential with Zen, and Intel appears to be hopeless at competing with them for the near future. In addition to this, both Microsoft and Sony have been assisting in the development of RDNA 2.

Also, for a company that couldn't compete against Nvidia, RDNA 1 was only 6% behind Turing in IPC, and ahead in performance per dollar. Looking forward, RDNA 2 has a 50% performance per watt increase and 25% increased IPC over RDNA 1. A hypothetical 80 CU RDNA 2 chip at 2-2.2GHz should be able to compete against the 3080 in normal rasterization, if not outright beat it. This is in line with a number of recent leaks about Navi 2X. The 3090 should retain the performance crown however. But you would think so for $1500.

This doesn't bode well for consoles though. If it wasn't for AMD, I don't think consoles would exist anymore.
What do you think the Nintendo Switch is? Chopped liver? Both Sony and Microsoft are using AMD because a) Nvidia screwed them in the past or B) A unified SOC is much cheaper with AMD.

If Nvidia were the only player on the market, then they could have easily thrown in the same Zen 2 CPU and used a separate die for an equivalent GeForce GPU. Microsoft would have likely stuffed an RTX 2080 in there. Of course the impact of this would have been narrower profit margins and perhaps some cooling issues, but it's not like it couldn't have been done. The PS3 is a perfect example of this.

Looking further into the future, with the promised IPC gains for Neoverse, Nvidia also has the potential to deliver a unified SoC with ARM CPUs and GeForce graphics for the PS6 or whatever the next Xbox generation is.
 
Last edited:

VFXVeteran

Banned
Also, for a company that couldn't compete against Nvidia, RDNA 1 was only 6% behind Turing in IPC, and ahead in performance per dollar. Looking forward, RDNA 2 has a 50% performance per watt increase and 25% increased IPC over RDNA 1. A hypothetical 80 CU RDNA 2 chip at 2-2.2GHz should be able to compete against the 3080 in normal rasterization, if not outright beat it. This is in line with a number of recent leaks about Navi 2X.



What do you think the Nintendo Switch is? Chopped liver? Both Sony and Microsoft are using AMD because a) Nvidia screwed them in the past or B) A unified SOC is much cheaper with AMD.

That's why I said what I did. I don't think Sony has a good relationship with Nvidia. And Nvidia would charge too much for the consoles to be viable IMO.
 
Last edited:
Also, for a company that couldn't compete against Nvidia, RDNA 1 was only 6% behind Turing in IPC, and ahead in performance per dollar. Looking forward, RDNA 2 has a 50% performance per watt increase and 25% increased IPC over RDNA 1. A hypothetical 80 CU RDNA 2 chip at 2-2.2GHz should be able to compete against the 3080 in normal rasterization, if not outright beat it. This is in line with a number of recent leaks about Navi 2X. The 3090 should retain the performance crown however. But you would think so for $1500.
A lot of wrong information there.

80 CUs at 2.2GHz wouldn't even match the RTX 3080 if the numbers are to be believed. The 3080 is up to 2.25x faster than the PS5, otherwise over 2x. What are the odds AMD can maintain 2.23GHz on 80 CUs? Good luck with that. Unless NVIDIA massively oversold the 3080 and it's closer to 50% faster than the 2080 rather than 80%, even 80 CUs at 2.23GHz would get beaten out handily.

Second, the 25% gains are over GCN, not over RDNA1, come on man.

As for Gears 5, we also heard the same when they were teasing the Series X, that it'd be faster at release, and then it ended up mostly getting outclassed by the 1070 anyway. How much optimization do you think they need to make in their own game they know by heart?

If they come out with the performance numbers you claim, sure, I’ll upgrade it. As of right now, it doesn’t compare very favorably to the 2080. When further tests are released, I’ll believe it.
 
Last edited:

Zathalus

Member
Snip image.

You don't have to believe me, the maths checks out. The RX 5700 XT is nipping at the heels of the 2070 Super; do you honestly think doubling the SM cores, upping the frequency a bit, and IPC gains won't make it match the 3080? The 3080 is only 70% faster than the 2080 in normal rasterization, and the 2080 is 15% faster than the RX 5700 XT, thus the 3080 is 95% faster than the RX 5700 XT. Not even twice as fast.

Now read what I said again: double the SM units over the 5700 XT (5120 cores total), likely much, much higher memory bandwidth, increased clock speed, and increased IPC, and you don't believe it will be twice as fast as the RX 5700 XT?

That's why I said what I did. I don't think Sony has a good relationship with Nvidia. And Nvidia would charge too much for the consoles to be viable IMO.
Relationships can be mended when a multi-billion dollar industry is at stake. Nvidia is also not averse to striking a deal to increase profit margins. Sony and especially Microsoft (Azure) can give them a massive boost to datacenter revenue.

I see you are also neglecting the Switch.

A lot of wrong information there.

80 CUs at 2.2GHz wouldn't even match the RTX 3080 if the numbers are to be believed. The 3080 is up to 2.25x faster than the PS5, otherwise over 2x. What are the odds AMD can maintain 2.23GHz on 80 CUs? Good luck with that. Unless NVIDIA massively oversold the 3080 and it's closer to 50% faster than the 2080 rather than 80%, even 80 CUs at 2.23GHz would get beaten out handily.
As explained above, the 3080 is only 95% faster than the RX 5700 XT. This is from game benchmarks. I am not sure where you are getting your 2.25 figure from. 2.2GHz might be optimistic, I admit, which is why I specified a range of 2-2.2GHz. And since when has a console GPU been faster in MHz than its dedicated GPU counterpart?
Second, the 25% gains are over GCN, not over RDNA1, come on man.

Which makes a total of zero sense. RDNA 1 already has over 25% greater IPC than GCN (50% against GCN 1). Are you saying that RDNA 2 IPC regressed, despite AMD saying it has increased? That number only makes sense when comparing against RDNA 1, and it lines up nicely with the stated 50% better performance per watt.

As for Gears 5, we also heard the same when they were teasing the Series X, that it'd be faster at release, and then it ended up mostly getting outclassed by the 1070 anyway. How much optimization do you think they need to make in their own game they know by heart?

If they come out with the performance numbers you claim, sure, I’ll upgrade it. As of right now, it doesn’t compare very favorably to the 2080. When further tests are released, I’ll believe it.
Well, the gameplay was everything ultra settings with additional graphical settings above and beyond that, at a locked 60FPS, which a 2080 Super can not even do, so I guess Microsoft is lying?
 
Last edited:
You don't have to believe me, the maths checks out. The RX 5700 XT is nipping at the heels of the 2070 Super; do you honestly think doubling the SM cores, upping the frequency a bit, and IPC gains won't make it match the 3080? The 3080 is only 70% faster than the 2080 in normal rasterization, and the 2080 is 15% faster than the RX 5700 XT, thus the 3080 is 95% faster than the RX 5700 XT. Not even twice as fast.

No, all wrong. First off, the 2080 is way faster than just 15% over the 5700XT. According to Techpowerup the difference is 24%, not 15%. The 3080 is up to 80% faster than the 2080.

5700XT: 1
2080: 1.24
3080: 2.23

It’s way more than 95%.
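
Chaining those ratios is just multiplication. A quick sketch with the numbers I'm using (the 60% and 80% figures are NVIDIA's claimed range, so the top end may well be cherry-picked):

```python
# Chain the relative-performance ratios used above.
r_2080_over_5700xt = 1.24                 # 2080 vs 5700 XT (TechPowerUp figure quoted above)
for r_3080_over_2080 in (1.6, 1.8):       # NVIDIA's claimed 60-80% uplift over the 2080
    total = r_2080_over_5700xt * r_3080_over_2080
    print(f"3080 vs 5700 XT with +{(r_3080_over_2080 - 1) * 100:.0f}%: ~{total:.2f}x")
# ~1.98x at the low end, ~2.23x at the high end.
```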

Now read what I said again: double the SM units over the 5700 XT (5120 cores total), likely much, much higher memory bandwidth, increased clock speed, and increased IPC, and you don't believe it will be twice as fast as the RX 5700 XT?
At best it’ll be twice as fast but increases aren’t linear. Not that it matters because if the numbers of the 3080 are true, it’s over twice the performance of the 5700XT.

As explained above, the 3080 is only 95% faster than the RX 5700 XT. This is from game benchmarks. I am not sure where you are getting your 2.25 figure from. 2.2GHz might be optimistic, I admit, which is why I specified a range of 2-2.2GHz. And since when has a console GPU been faster in MHz than its dedicated GPU counterpart?
From game benchmarks the 2080 is 24% over the 5700XT and the 3080 70-80% over the 2080. Do the math.


Which makes a total of zero sense. RDNA 1 already has over 25% greater IPC than GCN (50% against GCN 1). Are you saying that RDNA 2 IPC regressed, despite AMD saying it has increased? That number only makes sense when comparing against RDNA 1, and it lines up nicely with the stated 50% better performance per watt.
Where exactly did AMD state the IPC gains of RDNA2 over RDNA?


Well, the gameplay was everything ultra settings with additional graphical settings above and beyond that, at a locked 60FPS, which a 2080 Super can not even do, so I guess Microsoft is lying?
Which is bollocks because the damn benchmark had them neck and neck with the same settings. That’s all we care about. The gameplay vid wasn’t benchmarked nor do we know if there was a dynamic res in place.

They tested the game and came to the conclusion the SX performs on par with a 2080 yet here you are monkeying around with made up numbers, drawing your own conclusions.
 

Zathalus

Member
No, all wrong. First off, the 2080 is way faster than just 15% over the 5700XT. According to Techpowerup the difference is 24%, not 15%. The 3080 is up to 80% faster than the 2080.

5700XT: 1
2080: 1.24
3080: 2.23

It’s way more than 95%.

At best it’ll be twice as fast but increases aren’t linear. Not that it matters because if the numbers of the 3080 are true, it’s over twice the performance of the 5700XT.

From game benchmarks the 2080 is 24% over the 5700XT and the 3080 70-80% over the 2080. Do the math.

Not sure where you are getting your numbers from -> https://www.techpowerup.com/gpu-specs/radeon-rx-5700-xt.c3339

2080 is 115% of an RX 5700 XT. So my entire point still stands. I can only assume you were looking at the Radeon RX 5700 and not the Radeon RX 5700 XT.

Where exactly did AMD state the IPC gains of RDNA2 over RDNA?

Not just an IPC gain, but clock speed as well:



You can assume that Microsoft's statement was referring to GCN, but the GCN to RDNA increase was already more than that. I'll grant you that it is a confusing number and should likely not be used, as it's not even clear what metric it is supposed to be measured against. For example, a 12 TFLOP card that only has a 25% IPC gain over GCN won't be able to compete against a 2080 in Gears 5.

Which is bollocks because the damn benchmark had them neck and neck with the same settings. That’s all we care about. The gameplay vid wasn’t benchmarked nor do we know if there was a dynamic res in place.

They tested the game and came to the conclusion the SX performs on par with a 2080 yet here you are monkeying around with made up numbers, drawing your own conclusions.

I'm not making anything up, this was taken directly from the DF video.

But I see where the mix-up came from: the in-game benchmark appears to be much more demanding than regular gameplay. I was referring to the benchmark and not gameplay for the 2080, so my bad on that one. A 2080 can just about do 60FPS gameplay for Gears 5.

As for why the XSX GPU should be a bit more powerful than the 2080:

1. The 2080 System that was used as a comparison had a significantly faster CPU - this does impact the overall framerate, yet both systems were apparently on par.
2. It was only 2 weeks of work, it didn't even leverage the DirectStorage features. Optimization takes time, especially with the rumored XSX dev kit issues. Porting something to a entire new system in 2 weeks and expecting the same optimization as the Turing card that has had months of driver optimizations for Gears 5 and years of optimization for Unreal 4 seems odd. Even VRS and Mesh Shaders were not taken advantage of.

Now taking the above two points into consideration, the XSX GPU, all else being equal, should be roughly around a 2080 Super (obviously overall performance will be slower if the PC has a faster CPU). Or perhaps even slightly better, as the 2080 Super is only about 2.5% better than a regular 2080 in the benchmark suite.

Now by sheer coincidence the 2080 Super is also a 12 TFLOP card (using 1950 Mhz like most cards can get without OC). So this does indicate that RDNA 2 IPC has increased to roughly around Turing level. Which matches the statements that AMD has made about RDNA 2 IPC.

Now this all leads back to my original point: a 5120 shader Navi 2X at 2-2.2GHz (those numbers are not inflated, AMD has already stated increased clock speeds, and almost all 5700 XTs can OC to 2GHz) should be in line with a 3080. It's double an RX 5700 XT, say 10% increased clock speed, a 384-bit bus with 18Gbps GDDR6 (again, roughly double the bandwidth of the 5700 XT), and roughly 6% or so IPC gain. I find it incredibly hard to believe that won't be at least 95% faster than the 5700 XT.
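
To show the arithmetic rather than just assert it: the 2080 Super TFLOP figure is straight shader count x 2 x clock, and the Navi 2X estimate is just stacking the assumed gains, with a scaling-efficiency factor that is purely a guess (doubled CUs and bandwidth never scale perfectly linearly):

```python
# 1) "The 2080 Super is also a ~12 TFLOP card at ~1950 MHz":
shaders, clock_ghz = 3072, 1.95
print(f"2080 Super: ~{shaders * 2 * clock_ghz / 1000:.2f} TFLOPs")   # ~11.98

# 2) Stacking the assumed Navi 2X gains over a 5700 XT (2560 shaders):
cu_scaling = 2.00    # 80 CUs vs 40
clock_gain = 1.10    # ~10% higher clocks (assumed)
ipc_gain   = 1.06    # ~6% IPC over RDNA 1 (assumed)
efficiency = 0.85    # how well doubled CUs/bandwidth actually scale: a guess
print(f"Navi 2X vs 5700 XT: ~{cu_scaling * clock_gain * ipc_gain * efficiency:.2f}x")
# ~1.98x with that efficiency guess; the whole argument rests on how close
# to linear the scaling really is.
```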

It's not going to compete against the 3090, but the 3080 does not appear to be an unreachable target.
 

Mister Wolf

Member
Not sure where you are getting your numbers from -> https://www.techpowerup.com/gpu-specs/radeon-rx-5700-xt.c3339

2080 is 115% of an RX 5700 XT. So my entire point still stands. I can only assume you were looking at the Radeon RX 5700 and not the Radeon RX 5700 XT.




Read the fine print. That 15% is for 1080p. Look at this:

[relative performance chart, 2560x1440]

[relative performance chart, 3840x2160]
 
Not sure where you are getting your numbers from -> https://www.techpowerup.com/gpu-specs/radeon-rx-5700-xt.c3339

2080 is 115% of an RX 5700 XT. So my entire point still stands. I can only assume you were looking at the Radeon RX 5700 and not the Radeon RX 5700 XT.
Dude, it says 1920x1080.

Here

Would you look at this? 16% advantage at 1080p but 23% advantage at 4K for the 2080.

Not just an IPC gain, but clock speed as well.
The Hot Chips presentation was Microsoft comparing against their own previous consoles, not against RDNA. No previous Microsoft console is RDNA based, so why would they compare it to that? The 25% is over what the Xbox One X has, not over RDNA.

But I see where the mix-up came from: the in-game benchmark appears to be much more demanding than regular gameplay. I was referring to the benchmark and not gameplay for the 2080, so my bad on that one. A 2080 can just about do 60FPS gameplay for Gears 5.

As for why the XSX GPU should be a bit more powerful than the 2080:

1. The 2080 System that was used as a comparison had a significantly faster CPU - this does impact the overall framerate, yet both systems were apparently on par.
2. It was only 2 weeks of work, it didn't even leverage the DirectStorage features. Optimization takes time, especially with the rumored XSX dev kit issues. Porting something to a entire new system in 2 weeks and expecting the same optimization as the Turing card that has had months of driver optimizations for Gears 5 and years of optimization for Unreal 4 seems odd. Even VRS and Mesh Shaders were not taken advantage of.

Now taking the above two points into consideration, the XSX GPU, all else being equal, should be roughly around a 2080 Super (obviously overall performance will be slower if the PC has a faster CPU). Or perhaps even slightly better, as the 2080 Super is only about 2.5% better than a regular 2080 in the benchmark suite.

Now by sheer coincidence the 2080 Super is also a 12 TFLOP card (using 1950 Mhz like most cards can get without OC). So this does indicate that RDNA 2 IPC has increased to roughly around Turing level. Which matches the statements that AMD has made about RDNA 2 IPC.

Now this all leads back to my original point: a 5120 shader Navi 2X at 2-2.2GHz (those numbers are not inflated, AMD has already stated increased clock speeds, and almost all 5700 XTs can OC to 2GHz) should be in line with a 3080. It's double an RX 5700 XT, say 10% increased clock speed, a 384-bit bus with 18Gbps GDDR6 (again, roughly double the bandwidth of the 5700 XT), and roughly 6% or so IPC gain. I find it incredibly hard to believe that won't be at least 95% faster than the 5700 XT.

It's not going to compete against the 3090, but the 3080 does not appear to be an unreachable target.
The in-game benchmark has the 2080 Ti with minimums of 51 and an average of 62. The Series X staying locked at 60fps at settings beyond Ultra would mean it trounces the 2080 Ti by 15%+, which we both know isn't happening.
 
Last edited:

psorcerer

Banned
that accelerate BVH intersection and traversal

I'm not sure that it can be counted as universally "good", because you cannot stop traversal early even if you know from somewhere else that you could.
And I think Ampere will not "accelerate" traversal (rumor).
Unfortunately people who do know about it are under heavy NDA, thus always say "accelerated by hw or driver where applicable".
 

GymWolf

Member




That's why I said what I did. I don't think Sony has a good relationship with Nvidia. And Nvidia would charge too much for the consoles to be viable IMO.
But why?

I mean, is getting no money at all better than getting less money?

Why do they leave an open field to AMD?

Can't they produce cheaper stuff for consoles? It's a huge fucking deal having your hardware inside hundreds of millions of consoles...
 
Last edited:

psorcerer

Banned

RT performance depends on the exact algorithms used.
HW plays an insignificant role (obviously it needs fixed paths for certain things, but that's it).
I.e. overall RT perf will be guided by raw power (ALU + RAM bandwidth) and the specific software approach.
 

Zathalus

Member
Dude, it says 1920x1080.

Here

Would you look at this? 16% advantage at 1080p but 23% advantage at 4K for the 2080.

I wouldn't use those numbers, for three reasons.

1. AMD's driver optimizations are not as quick as Nvidia's, so rather use a more recent review.
2. A more recent review includes a larger selection of DX12 and Vulkan games, which is a better comparison for the low-level APIs that console and future games are using.
3. The stock AMD blower cooler is terrible, almost all AIB models use a much better cooler, leading to much better sustained clock speed.

So using the following:


114% for 1080p
116% for 1440p
118% for 4k

So even using the worst case scenario of 18%, that means the 3080 is twice as fast as the 5700 XT. Not an unreachable target for what I have mentioned before. Sure, a doubled 5700 XT with double the bandwidth will likely not scale perfectly, but the clock speed increase and IPC increase should make up for that. Thus, I believe the hypothetical 80 CU RDNA 2 card can compete in regular rasterization with the 3080.
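
Spelling that worst case out (the ~70% figure for the 3080 over the 2080 is taken from the earlier discussion, so this is only as reliable as that input):

```python
# Worst case from the newer review: 2080 = 118% of a 5700 XT (4K).
r_2080_over_5700xt = 1.18
r_3080_over_5700xt = r_2080_over_5700xt * 1.70   # ~70% over the 2080
print(f"3080 vs 5700 XT: ~{r_3080_over_5700xt:.2f}x")  # ~2.01x, i.e. roughly double
```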

I have no opinion on RT performance for RDNA 2, nor do I believe they will have an alternative to DLSS.

The hot chips presentation was Microsoft comparing their consoles, not against RDNA. No Microsoft console is RDNA based so why would they compare it to that? 25% is over what the Xbox One X has, not over RDNA.

Which, as I mentioned, makes very little sense. The GCN -> RDNA leap was already larger than that. And a 25% leap over GCN 1 is basically Vega, and the 13.4 TFLOP Vega 7 gets soundly beaten by the 2080 in Gears 5. They must have been referring to a very specific use case.

You can check the maths here if you don't believe me. GCN 1 (Xbox One) -> RDNA 1 is 50%-60%.

The in-game benchmark has the 2080 Ti with minimums of 51 and average of 62. The Series X staying locked at 60fps at settings beyond Ultra would mean it trounces the 2080 Ti by 15%+ which we both know isn’t happening.
Yes, I already acknowledged my mistake. I was assuming the benchmark and gameplay would be similar when they are clearly not. The XSX with a weaker CPU and little optimization equals a 2080. It's hardly a stretch to say that, all else being equal, it's roughly the equivalent of a 2080 Super or so. A 2080 Ti is totally out of its reach, I do not dispute that at all.

The only point I am making on this is that RDNA 2 appears to have 6%+ IPC improvements, putting it in line with Turing, or a bit better.
 
Last edited:

SF Kosmo

Al Jazeera Special Reporter
3080: 20.74 TFLOPs or 60 to 80% faster than the 2080. I used 70% as an average.
Don't swap out practical game performance, which measures the whole system and not just the GPU, for synthetic GPU metrics mid-conversation. They're not equivalent.

If a GPU is doubled in power, it doesn't mean you're just going to get double the framerate in a game. "Only" getting an 80% uplift does not mean the GPU by itself is "only" 80% more powerful.
 
I wouldn't use those numbers, for three reasons.

1. AMD's driver optimizations are not as quick as Nvidia's, so rather use a more recent review.
2. A more recent review includes a larger selection of DX12 and Vulkan games, which is a better comparison for the low-level APIs that console and future games are using.
3. The stock AMD blower cooler is terrible, almost all AIB models use a much better cooler, leading to much better sustained clock speed.

So using the following:


114% for 1080p
116% for 1440p
118% for 4k

So even using the worst case scenario of 18%, that means the 3080 is twice as fast as the 5700 XT. Not an unreachable target for what I have mentioned before. Sure, a doubled 5700 XT with double the bandwidth will likely not scale perfectly, but the clock speed increase and IPC increase should make up for that. Thus, I believe the hypothetical 80 CU RDNA 2 card can compete in regular rasterization with the 3080.
That’s 19%. 118/99. Which changes almost nothing.
PS5: 1
XSX: 1.19
3080: 2 to 2.14

It would need 2.23GHz on 80 CUs and a linear 2x increase everywhere else, and even then it probably wouldn't match a 3080. Good luck with that.
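
Same arithmetic, spelled out with my inputs (the 5700 XT stands in for the PS5 and the 2080 for the XSX, which is itself an approximation):

```python
# Relative chart values used above: 5700 XT = 99%, 2080 = 118%.
xsx_over_ps5 = round(118 / 99, 2)        # ~1.19
for uplift in (1.7, 1.8):                # 3080 over the 2080 (claimed 60-80% range)
    print(f"3080: ~{xsx_over_ps5 * uplift:.2f}x the PS5-class baseline")
# -> ~2.02x and ~2.14x, which is where the "2 to 2.14" above comes from.
```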

I have no opinion on RT performance for RDNA 2, nor do I believe they will have an alternative to DLSS.
Which is too bad because that’s the point of this thread.

Which, as I mentioned, makes very little sense. The GCN -> RDNA leap was already larger than that. And a 25% leap over GCN 1 is basically Vega, and the 13.4 TFLOP Vega 7 gets soundly beaten by the 2080 in Gears 5. They must have been referring to a very specific use case.

You can check the maths here if you don't believe me. GCN 1 (Xbox One) -> RDNA 1 is 50%-60%.
DF straight up said they were comparing it to GCN.
 
Don't swap out practical game performance, which measures the whole system and not just the GPU, for synthetic GPU metrics mid-conversation. They're not equivalent.

If a GPU is doubled in power, it doesn't mean you're just going to get double the framerate in a game. "Only" getting an 80% uplift does not mean the GPU by itself is "only" 80% more powerful.
Which would make sense if 99% of reviewers worth their lick didn’t ensure the system was GPU-bound during reviews. Else there would be no point in those benchmarks.
 

VFXVeteran

Banned
But why?

I mean, is getting no money at all better than getting less money?

Why do they leave an open field to AMD?

Can't they produce cheaper stuff for consoles? It's a huge fucking deal having your hardware inside hundreds of millions of consoles...

I don't think Nvidia sees it like that. Their main customer is the PC (for a lot of reasons and not just games). I think they see the consoles as just fluff. They could do it but I really don't think consoles will last long. Internet and render farms may be the new norm where the products give you the money. PCs will just always be in there because of other reasons.
 

Mister Wolf

Member
I wouldn't use those numbers, for three reasons.

1. AMD's driver optimizations are not as quick as Nvidia's, so rather use a more recent review.
2. A more recent review includes a larger selection of DX12 and Vulkan games, which is a better comparison for the low-level APIs that console and future games are using.
3. The stock AMD blower cooler is terrible, almost all AIB models use a much better cooler, leading to much better sustained clock speed.

So using the following:


114% for 1080p
116% for 1440p
118% for 4k


So even using the worst case scenario of 18%, that means the 3080 is twice as fast as the 5700 XT. Not an unreachable target for what I have mentioned before. Sure, a doubled 5700 XT with double the bandwidth will likely not scale perfectly, but the clock speed increase and IPC increase should make up for that. Thus, I believe the hypothetical 80 CU RDNA 2 card can compete in regular rasterization with the 3080.


Why would you use benchmarks from a costlier overclocked version of the 5700 XT? Seems a bit disingenuous, and it has nothing to do with improved drivers.
 

SF Kosmo

Al Jazeera Special Reporter
Which would make sense if 99% of reviewers worth their lick didn’t ensure the system was GPU-bound during reviews. Else there would be no point in those benchmarks.
In practical game tests, there's no such thing as totally GPU bound. You reach a certain threshold and the CPU starts becoming the bottleneck again.
 
In practical game tests, there's no such thing as totally GPU bound. You reach a certain threshold and the CPU starts becoming the bottleneck again.
It’s GPU bound enough that the difference is within margin of error. You’re basically trying to tell us GPU benchmarks are useless which is nonsense.
 
Last edited:

geordiemp

Member
Really nothing to discuss. We've known for a long time that AMD doesn't have the capital, engineers, or the resources to compete in the graphics space with Nvidia. But Nvidia isn't trying to compete at the CPU level either.

This doesn't bode well for consoles though. If it wasn't for AMD, I don't think consoles would exist anymore.

I thought you of all people would reserve judgement until we see actual performance rather than teraflops and numbers. If the 3080 is challenged by a big AMD card at 2.1GHz, will you come back and discuss? We don't know how clocks will push on the Samsung node vs TSMC, and so many other questions.

If Nvidia thought they were out of sight, why the pricing? They know what's up; this gen will be competitive as hell IMO.

Back on subject: if stuff like localray is employed, doing less than half the work... things may not be as simple.
 
Last edited:

SF Kosmo

Al Jazeera Special Reporter
It’s GPU bound enough that the difference is within margin of error. You’re basically trying to tell us GPU benchmarks are useless which is nonsense.
I didn't say they're useless -- on the contrary, they're in some sense the most useful -- but they don't scale in a linear way. They're not "within the margin of error," it's complete apples to oranges.

There are synthetic benchmarks designed to test just the GPU using a variety of methodologies, that are maybe better for comparing one component to another independent of the system they're in.
 

GymWolf

Member
I don't think Nvidia sees it like that. Their main customer is the PC (for a lot of reasons and not just games). I think they see the consoles as just fluff. They could do it but I really don't think consoles will last long. Internet and render farms may be the new norm where the products give you the money. PCs will just always be in there because of other reasons.
Dude, fluff or not, they are leaving billions on the table for AMD... I don't care how elitist they can be, this is a moronic move on their part, and I'm not talking only about now, I'm talking about every past console where they left an open field to AMD.
 

VFXVeteran

Banned
I thought you of all people would reserve judgement until we see actual performance rather than teraflops and numbers. If the 3080 is challenged by a big AMD card at 2.1GHz, will you come back and discuss? We don't know how clocks will push on Samsung and so many other questions.

Come back and discuss? I'm going off some of these threads like most of you guys. I believe that guy who talked to AMD about the 3070. If he's wrong, then everyone who believed him was wrong to believe him. No biggie.

If Nvidia thought they were out of sight, why the pricing? They know what's up; this gen will be competitive as hell IMO.

It could be that, or it could be that Nvidia wants more gamers buying their boards. I do believe the 16GB RDNA2 board though. I also believe the consoles are going to cost $600.
 
Last edited:

VFXVeteran

Banned
Dude, fluff or not, they are leaving billions on the table for AMD... I don't care how elitist they can be, this is a moronic move on their part, and I'm not talking only about now, I'm talking about every past console where they left an open field to AMD.

It's the way things go. AMD needs to survive. Why crush them and force a monopoly on the graphics market?
 

SF Kosmo

Al Jazeera Special Reporter
Dude, fluff or not, they are leaving billions on the table for AMD... I don't care how elitist they can be, this is a moronic move on their part, and I'm not talking only about now, I'm talking about every past console where they left an open field to AMD.
I think the reason they don't chase consoles is because they can't. Console OEMs want a vendor who can provide a complete chipset, CPU and GPU together and AMD probably also makes it worth their while to do it that way. Nvidia can't do that, except maybe on mobile where they can nick ARM cores.
 
Last edited:
Lolwhat.

60-ish percent over the 2080 is what it is.
60-80% according to DF, with the upper range being much more common. Of course NVIDIA cherry-picked, which is why I put it down to 70% in the OP.
If Nvidia thought they were out of sight, why the pricing? They know what's up; this gen will be competitive as hell IMO.
They also said that regarding Pascal, yet AMD ended up getting demolished.

If the RTX 3080 numbers we have so far are somewhat representative of reality, I can't see Big Navi matching it.

If they are exaggerated to high-heavens then it’s a different story.

Regardless, I'm much more interested in the RT performance of RDNA2, which we know little about. That 2070/2060-class showing from a 2080-class GPU and the NVIDIA RT-TFLOPs paint a worrying picture for AMD, but then again, it could all be bullshit.
 

VFXVeteran

Banned
I have a feeling that Big Navi does not have RT cores, but just ALUs, which will eat up shader bandwidth on already shader-bound games.
 

GymWolf

Member
I think the reason they don't chase consoles is because they can't. Console OEMs want a vendor who can provide a complete chipset, CPU and GPU together and AMD probably also makes it worth their while to do it that way. Nvidia can't do that, except maybe on mobile where they can nick ARM cores.
so the "nvidia prices are too high" narrative is bullshit?!

your theory make much more sense.
 