
Specs for nVidia Ampere Supposedly Leaked

Driver "support" gets dropped when a new arch arrives for Nvidia, but it makes no difference. What's important is the underlying hardware feature, which in this case are the exact features consoles have for their entire life-cycle. The GPU doesn't stop working just because it's no longer receiving new drivers.
Pascal still gets driver support despite not having the same architecture as Turing so I have no idea what you are even saying with the first part of your post. As for the last part, when did I say it would "stop working"? It'll keep working, just nowhere as well as it should without proper drivers.
 

pawel86ck

Banned
Watch the whole video instead of being lazy.

12:00: "But for the benchmark they just ran it on full Ultra mode with none of those extra settings like SSGI or Contact Shadows and they compared it directly to a PC with an RTX 2080 and a Threadripper 2950X"

12:19 "That PC running the EXACT SAME SETTINGS as the Series X produced NEARLY IDENTICAL results. Now the PC still has SLIGHT advantages, especially the CPU".
I remember that benchmark part, but the gameplay shows higher settings and higher performance even compared to the PC version running on a 2080 Ti. Like I mentioned before, I expect Gears 5 on XSX is probably using something like VRS or dynamic resolution scaling, because the XSX is certainly not faster than a 2080 Ti.
 
I remember that benchmark part, but the gameplay shows higher settings and higher performance even compared to the PC version running on a 2080 Ti. Like I mentioned before, I expect Gears 5 on XSX is probably using something like VRS or dynamic resolution scaling, because the XSX is certainly not faster than a 2080 Ti.
Gameplay shows nothing of the sort. The benchmark compares them directly and tells us it is on par with a 2080; how can it be clearer?
 

BluRayHiDef

Banned
I was thinking in terms of current parts + new GPU, but your 1080 Ti will ofc have no issue keeping up either - you'd just be missing ray tracing initially. 2-3 years later, the situation will probably change as DX12U features might make a disproportionate impact relative to just pure rasterised performance (like Mesh Shaders etc). It usually takes time for all these features to find adoption though; things don't really switch over overnight.

So, you don't think that the lower core count of my CPU (6 cores, 12 threads) relative to the core count of the PS5's and XSX's CPUs (8 cores, 16 threads) will be an issue for the first few years of next gen?
 
Last edited:

pawel86ck

Banned
Gameplay shows nothing of the sort. The benchmark compares them directly and tells us it is on par with a 2080; how can it be clearer?
I'm not talking about the benchmark part, but the gameplay segments where they talk about the differences on each platform. The benchmark results are probably correct, but the gameplay itself runs with higher settings and even better performance than a 2080 Ti, because the 2080 Ti dips below 60 fps during cutscenes, and that's without the additional features the XSX is using (screen space shadows at 5m25s, and GI at 7m5s). I don't believe the XSX is faster than a 2080 Ti, especially when the benchmark suggests RTX 2080-like results, so I expect the Gears 5 tech demo on XSX is using either VRS or dynamic resolution scaling.
 
Last edited:

Rikkori

Member
So, you don't think that the lower core count of my CPU (6 cores, 12 threads) relative to the core count of the PS5's and XSX's CPUs (8 cores, 16 threads) will be an issue for the first few years of next gen?

I can absolutely guarantee you it won't unless you're targeting very high frame rates. If we're talking 60 fps then the chance that will happen is exactly 0%.
 
I'm not talking about the benchmark part, but the gameplay segments where they talk about the differences on each platform. The benchmark results are probably correct, but the gameplay itself runs with higher settings and even better performance than a 2080 Ti, because the 2080 Ti dips below 60 fps during cutscenes, and that's without the additional features the XSX is using (screen space shadows at 5m25s, and GI at 7m5s). I don't believe the XSX is faster than a 2080 Ti, especially when the benchmark suggests RTX 2080-like results, so I expect the Gears 5 tech demo on XSX is using either VRS or dynamic resolution scaling.
Yes, so where are you getting it outperforming a 2080 when there are no apples-to-apples comparisons besides the benchmark? And you readily admit you don't believe it exceeds a 2080 Ti, even though the example you're using suggests otherwise.
 
Last edited:

CrustyBritches

Gold Member
The Gears 5 demo showed "very similar" results to a stock 2080. It was a "2-week port" not 1 day. The game has been out on PC since Sept. 2019, runs on the most widely used game engine that's constantly getting engine-side updates, and supported on hardware(Turing) that has nearly all the same features as the upcoming XSX.

That being said, XSX has pretty great hardware for what I expect will be $499. I'll be going with Lockharts for my boys this Xmas, and updating my PC. Gonna sell my 2060S and R5 1600 when the time is right and upgrade to a $500 Ampere or RDNA 2 GPU depending on who has the best RT. I haven't ruled out going up to around $600, but we'll see. Will upgrade CPU to either R5 3600 or R7 3700X and add another 16GB RAM. My out of pocket will be similar to the price of a new XSX.

Concerning the CPU discussion, we've seen the benchmarks for the PS5 CPU and it's more in line with a R7 1700 at 3-3.15GHz. I suspect they've stripped the gaming cache down substantially. People will be just fine with R5 3600-level and above.
[Image: PS5 CPU benchmark comparison chart]


The GPU Mark Cerny stealth-leaked during the Road to PS5 will probably be the successor to the RX 5700 and run $300-350. It's not going to be some world-beating hardware by the end of the year, but it's a nice jump above the beginning of this gen when 7850 2GB was ~$155 and 7870 was ~$175. I'm highly tempted to buy a PS5, play Ratchet and Clank, then sell it back to someone for the same price I got it before Xmas.

Anyway, I'm most interested in RT performance because I'll be spending most of my time with Cyberpunk and I want the best experience. Consoles won't offer that. That's why I'm in the market for a mid-high range Ampere card.
 

pawel86ck

Banned
Yes, so where are you getting it outperforming a 2080 when there are no apples-to-apples comparisons besides the benchmark? And you readily admit you don't believe it exceeds a 2080 Ti, even though the example you're using suggests otherwise.
In the gameplay segments it really looks like the XSX is outperforming a 2080 Ti, because there are additional effects even compared to PC ultra settings, and locked 60 fps even during cutscenes (that's impossible on a 2080 Ti).

I'm not 100% sure the game is using dynamic resolution or VRS (Digital Foundry didn't mention it), but that's my guess, because otherwise I can't explain why the XSX would run this game with better results than a 2080 Ti (especially when the benchmark results show performance comparable only to an RTX 2080, not a 2080 Ti).
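For anyone unfamiliar with it, dynamic resolution scaling is usually just a feedback loop on GPU frame time: when a frame comes in over budget, the next frame renders at a lower internal resolution and gets upscaled back to the output resolution. A minimal sketch of the idea in Python (the thresholds and step size are made up for illustration; nothing here is taken from the actual Gears 5 implementation):

TARGET_FRAME_MS = 1000.0 / 60.0   # ~16.7 ms budget for 60 fps

def next_render_scale(scale, gpu_frame_ms, min_scale=0.7, max_scale=1.0):
    # Nudge the internal resolution scale up or down based on the last GPU frame time.
    if gpu_frame_ms > TARGET_FRAME_MS * 0.95:    # close to blowing the budget
        scale -= 0.05                            # drop resolution a step
    elif gpu_frame_ms < TARGET_FRAME_MS * 0.80:  # plenty of headroom
        scale += 0.05                            # claw resolution back
    return max(min_scale, min(max_scale, scale))

# Example: a heavy 4K frame taking 17.5 ms -> next_render_scale(1.0, 17.5) gives 0.95,
# so the next frame would render at roughly 3648x2052 and be upscaled to 2160p.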
 

iHaunter

Member
The 2080 Ti is more like 17 TF; remember, boost clocks in-game are different, it's generally 1900 MHz+.
It should still be a good increase, but since performance doesn't scale linearly with TF either, no one should expect to be blown away solely by the TF difference.



My point is, we have some solid data to base an opinion (comparison) on, so let's do that. If MS works some voodoo magic and it turns out it's closer to a 2080 Ti when it launches - cool, we can adapt to new data (not to wishful thinking).

Well, that's not an OC'd 3080 Ti either. I don't see the point in including an OC for the 2080 Ti. It's not a big enough jump for me to upgrade anyway. I'll wait one more gen.
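For context on the TF numbers quoted above: theoretical FP32 throughput is just shader count × 2 ops per clock (FMA) × clock speed, so the figure swings a lot depending on whether you plug in the official boost clock or the roughly 1900 MHz the cards actually hold in games. A quick sketch using the 2080 Ti's 4352 shaders (the in-game clock is the assumption here):

def tflops(shaders, clock_ghz):
    # FP32 TFLOPS = shader count * 2 ops per clock (FMA) * clock in GHz / 1000
    return shaders * 2 * clock_ghz / 1000.0

print(tflops(4352, 1.545))  # ~13.4 TF at the 2080 Ti's official boost clock
print(tflops(4352, 1.90))   # ~16.5 TF at a typical ~1900 MHz in-game clock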
 

Rikkori

Member
Hm, how is a 6-core Haswell at 3.3 GHz in a "very good position" against an 8-core Zen 2 (3?) at 3.2 GHz?

First of all, it's not 3.3 GHz, because it easily OCs to 4.2.
Second of all, it's about framerate targets. If it's 60, then the performance difference between the two is irrelevant because it can easily be accomplished. Sort of like how it doesn't matter if I have a 2080 Ti and you have a 2060 if we're playing at 360p, ya dig?

What you and a lot of other people are missing is that just because the CPU power is there doesn't mean it will be utilised, or make much of a difference, because proper multi-core CPU utilisation in games is VERY DIFFICULT to achieve.

But don't take my word for it, have a look for yourself and see if you can find that magical difference between 6 & 8 cores in games.
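To put a rough number on why 6 vs 8 cores rarely shows up at 60 fps: if only part of a frame's CPU work actually parallelises, Amdahl's law caps what extra cores can add. A small sketch (the 60% parallel fraction is purely an illustrative assumption; real engines vary a lot):

def speedup(parallel_fraction, cores):
    # Amdahl's law: the serial part of the frame doesn't get faster with more cores.
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

p = 0.6                # assume 60% of the frame's CPU work scales with core count
print(speedup(p, 6))   # ~2.00x over a single core
print(speedup(p, 8))   # ~2.11x -- only about 5% ahead of 6 cores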

 
In the gameplay segments it really looks like the XSX is outperforming a 2080 Ti, because there are additional effects even compared to PC ultra settings, and locked 60 fps even during cutscenes (that's impossible on a 2080 Ti).

I'm not 100% sure the game is using dynamic resolution or VRS (Digital Foundry didn't mention it), but that's my guess, because otherwise I can't explain why the XSX would run this game with better results than a 2080 Ti (especially when the benchmark results show performance comparable only to an RTX 2080, not a 2080 Ti).
It obviously is; otherwise, why would it suddenly start performing worse during a benchmark with lower settings?
Hm, how is a 6-core Haswell at 3.3 GHz in a "very good position" against an 8-core Zen 2 (3?) at 3.2 GHz?
Cuz if the target is 60, it should be fine most of the time.
 
When they say 20% "faster", are they talking about clock speeds? Efficiency? IPC gain? Overall power? It is unfortunately not very clear.

I think we've heard numerous rumours about Ampere being 50% gain over 2080Ti, then we heard 40%, then 30% and now 20%?

What is going on at Nvidia? Are these percentages simply across different metrics? (such as efficiency gain per watt vs clock speed etc..).

Or has the gamble with pressuring TSMC and that backfiring so moving mostly to Samsung nodes limited their performance increase somewhat?
 

GHG

Member
Or has the gamble with pressuring TSMC and that backfiring so moving mostly to Samsung nodes limited their performance increase somewhat?

Latest rumours (along with the noise around the increased power draw and need for the new 12 pin connector) would suggest that's the problem.
 

Kenpachii

Member
The Gears 5 demo showed "very similar" results to a stock 2080. It was a "2-week port" not 1 day. The game has been out on PC since Sept. 2019, runs on the most widely used game engine that's constantly getting engine-side updates, and supported on hardware(Turing) that has nearly all the same features as the upcoming XSX.

That being said, XSX has pretty great hardware for what I expect will be $499. I'll be going with Lockharts for my boys this Xmas, and updating my PC. Gonna sell my 2060S and R5 1600 when the time is right and upgrade to a $500 Ampere or RDNA 2 GPU depending on who has the best RT. I haven't ruled out going up to around $600, but we'll see. Will upgrade CPU to either R5 3600 or R7 3700X and add another 16GB RAM. My out of pocket will be similar to the price of a new XSX.

Concerning the CPU discussion, we've seen the benchmarks for the PS5 CPU and it's more in line with a R7 1700 at 3-3.15GHz. I suspect they've stripped the gaming cache down substantially. People will be just fine with R5 3600-level and above.
[Image: PS5 CPU benchmark comparison chart]


The GPU Mark Cerny stealth-leaked during the Road to PS5 will probably be the successor to the RX 5700 and run $300-350. It's not going to be some world-beating hardware by the end of the year, but it's a nice jump above the beginning of this gen when 7850 2GB was ~$155 and 7870 was ~$175. I'm highly tempted to buy a PS5, play Ratchet and Clank, then sell it back to someone for the same price I got it before Xmas.

Anyway, I'm most interested in RT performance because I'll be spending most of my time with Cyberpunk and I want the best experience. Consoles won't offer that. That's why I'm in the market for a mid-high range Ampere card.

Yup, mentioned this before: I think the PS5 CPU will be more in line with a 2600's output, to be honest, if you compare it to PC performance.

One core is reserved and the clocks are lower. We also don't know if the frequency posted by Sony is just single-core performance or the frequency it can sustain across all 8 cores at all times, as AMD mostly advertises single-core boosts.

6 cores will indeed be fine, especially if you sit at the 3600 range; totally agree with that.
 
Last edited:

Kenpachii

Member
Driver "support" gets dropped when a new arch arrives for Nvidia, but it makes no difference. What's important is the underlying hardware feature, which in this case are the exact features consoles have for their entire life-cycle. The GPU doesn't stop working just because it's no longer receiving new drivers.

Source: me and my many GPUs over the past 2 decades.


I was thinking in terms of current parts + new GPU, but your 1080 Ti will ofc have no issue keeping up either - you'd just be missing ray tracing initially. 2-3 years later, the situation will probably change as DX12U features might make a disproportionate impact relative to just pure rasterised performance (like Mesh Shaders etc). It usually takes time for all these features to find adoption though; things don't really switch over overnight.

The 900 series is still supported in every game and optimized for. What are you even talking about? Support for the 700 series stopped. Nvidia mostly drops support after 3 generations of hardware shifts, which roughly lines up with a console generation by itself.
 

Rikkori

Member
The 900 series is still supported in every game and optimized for. What are you even talking about? Support for the 700 series stopped. Nvidia mostly drops support after 3 generations of hardware shifts, which roughly lines up with a console generation by itself.
Nah, this is false. In the past you could see, like you can still see with AMD, where they explicitly said which cards get a performance increase in which game, and the older ones were pretty much forgotten as soon as possible unless it was a game-breaking issue. A lot of Nvidia's performance crown comes from game-specific driver improvements, unlike AMD, so when they get new cards they shift focus, and if you have an old card you won't get the performance tier your hardware would otherwise reach.

Let me give you a concrete example, because I've been collating GPU data for myself. Normally, a GTX 1080 was on-par with a Vega 64, sometimes slightly better, sometimes slightly worse, but they were in the same tier. Now, you can see more and more examples of where the Vega just blows it out of the water. Why? And keep in mind, the funny thing is The Surge 2 was actually developed on Pascal GPUs, you can see this in some of the behind the scenes videos. So it's not even a question of the devs not testing the hardware.
On the other hand, Nvidia pumped so many drivers for Maxwell & Pascal for Witcher 3, you can clearly see the result, and ofc it was a flagship title for Nvidia and got all the special treatment. So when I'm saying what I'm saying about the driver support, I'm talking to you about it from years of experience with this shit, because I've been reading driver notes since the 3dfx days. And ofc, if we compare other games you'll find the same pattern emerging (RDR 2 & Control to name 2 others).

[Image: GPU relative performance comparison chart]


Data is from TechPowerUp, the unreleased cards at the bottom are just something I was visualising for myself, obviously that performance is just an estimate. Shoulda cut them out but meh, forgot.
 

Mister Wolf

Gold Member
Nah, this is false. In the past you could see, like you can still see with AMD, where they explicitly said which cards get a performance increase in which game, and the older ones were pretty much forgotten as soon as possible unless it was a game-breaking issue. A lot of Nvidia's performance crown comes from game-specific driver improvements, unlike AMD, so when they get new cards they shift focus, and if you have an old card you won't get the performance tier your hardware would otherwise reach.

Let me give you a concrete example, because I've been collating GPU data for myself. Normally, a GTX 1080 was on-par with a Vega 64, sometimes slightly better, sometimes slightly worse, but they were in the same tier. Now, you can see more and more examples of where the Vega just blows it out of the water. Why? And keep in mind, the funny thing is The Surge 2 was actually developed on Pascal GPUs, you can see this in some of the behind the scenes videos. So it's not even a question of the devs not testing the hardware.
On the other hand, Nvidia pumped so many drivers for Maxwell & Pascal for Witcher 3, you can clearly see the result, and ofc it was a flagship title for Nvidia and got all the special treatment. So when I'm saying what I'm saying about the driver support, I'm talking to you about it from years of experience with this shit, because I've been reading driver notes since the 3dfx days. And ofc, if we compare other games you'll find the same pattern emerging (RDR 2 & Control to name 2 others).

[Image: GPU relative performance comparison chart]


Data is from TechPowerUp, the unreleased cards at the bottom are just something I was visualising for myself, obviously that performance is just an estimate. Shoulda cut them out but meh, forgot.


 

Shai-Tan

Banned
When they say 20% "faster", are they talking about clock speeds? Efficiency? IPC gain? Overall power? It is unfortunately not very clear.

I think we've heard numerous rumours about Ampere being 50% gain over 2080Ti, then we heard 40%, then 30% and now 20%?

What is going on at Nvidia? Are these percentages simply across different metrics? (such as efficiency gain per watt vs clock speed etc..).

Or has the gamble with pressuring TSMC and that backfiring so moving mostly to Samsung nodes limited their performance increase somewhat?

I stopped paying attention to that kind of speculation and just wait for benchmarks. Because I watch tech channels on YouTube that do reviews, I get bombarded with a lot of nonsense "leak" videos with leading titles. Anyway, with a significant die shrink it seems unlikely that it will be on the lower end.
 

Kenpachii

Member
Nah, this is false. In the past you could see, like you can still see with AMD, where they explicitly said which cards get a performance increase in which game, and the older ones were pretty much forgotten as soon as possible unless it was a game-breaking issue. A lot of Nvidia's performance crown comes from game-specific driver improvements, unlike AMD, so when they get new cards they shift focus, and if you have an old card you won't get the performance tier your hardware would otherwise reach.

Let me give you a concrete example, because I've been collating GPU data for myself. Normally, a GTX 1080 was on-par with a Vega 64, sometimes slightly better, sometimes slightly worse, but they were in the same tier. Now, you can see more and more examples of where the Vega just blows it out of the water. Why? And keep in mind, the funny thing is The Surge 2 was actually developed on Pascal GPUs, you can see this in some of the behind the scenes videos. So it's not even a question of the devs not testing the hardware.
On the other hand, Nvidia pumped so many drivers for Maxwell & Pascal for Witcher 3, you can clearly see the result, and ofc it was a flagship title for Nvidia and got all the special treatment. So when I'm saying what I'm saying about the driver support, I'm talking to you about it from years of experience with this shit, because I've been reading driver notes since the 3dfx days. And ofc, if we compare other games you'll find the same pattern emerging (RDR 2 & Control to name 2 others).

[Image: GPU relative performance comparison chart]


Data is from TechPowerUp, the unreleased cards at the bottom are just something I was visualising for myself, obviously that performance is just an estimate. Shoulda cut them out but meh, forgot.

I don't think you understand how things work.

Witcher 3 was a heavily Nvidia-focused title. They specifically created code simply for their cards in the game, which also tanked AMD performance on purpose, and they got called out for it. It's also a DX11 title that favors Nvidia because their drivers aren't dog shit, unlike AMD's drivers.

Lately we have been getting more Vulkan/DX12 titles, and that's where the differences start to show and the shit-tier AMD drivers stop being a performance hindrance, which shifts the bar a little on that front. So it's just progression.

Here's the absolute newest game that just released, Death Stranding.

[Images: Death Stranding benchmark charts]


Here you've got a DX11 title:

[Images: benchmark charts at 1080p and 2160p]


See how AMD suddenly falls behind drastically again?

See how the 1080 holds up perfectly fine? The 1080 pushes stable performance in any game I've played. I've got the 1080 Ti, however, and it always pushed between 2070 Super and 2080 performance in every title I tested and played.

Even if you look at the 1060, which is basically a 980, and a stock 970, they fall directly in line with what you would expect. The strength of the 970 was that it was heavily overclockable towards a 980, which makes sense outcome-wise.

I've got a 970, a 1650 and a 1080 Ti, and frankly all of those cards push exactly what you're supposed to be seeing in games.
 
Last edited:

Rikkori

Member
I don't think you understand how things work.

Witcher 3 was a heavily Nvidia-focused title. They specifically created code simply for their cards in the game, which also tanked AMD performance on purpose, and they got called out for it. It's also a DX11 title that favors Nvidia because their drivers aren't dog shit, unlike AMD's drivers.

That's not what happened with The Witcher 3 at all; that's what happened with Crysis 2 (hidden tessellation). For TW3 at launch, if you enabled HairWorks it forced 8x MSAA, which is what tanked performance (for both vendors), but in fact the AMD cards ran it (and HBAO+) just as well if not better than the Nvidia ones with some tweaks. It's just that AMD cards have worse MSAA performance than Nvidia ones (and that's a longer discussion).


We can pick titles all day, but like your ACO example which fits my point perfectly: that's a pre-Turing game for which the focus was on Pascal cards (especially considering the engine has been the same since Unity, back when Ubi was an Nvidia partner; you can look it up).

In aggregate, you can look at post-Turing games and see that Pascal and older cards fall further behind their counterparts from both AMD and Turing. And that's even a best case, due to how Pascal was structured, its long support, etc. If you go further back it's even more of a bloodbath.
 

Kenpachii

Member
That's not what happened with The Witcher 3 at all; that's what happened with Crysis 2 (hidden tessellation). For TW3 at launch, if you enabled HairWorks it forced 8x MSAA, which is what tanked performance (for both vendors), but in fact the AMD cards ran it (and HBAO+) just as well if not better than the Nvidia ones with some tweaks. It's just that AMD cards have worse MSAA performance than Nvidia ones (and that's a longer discussion).


We can pick titles all day, but like your ACO example which fits my point perfectly: that's a pre-Turing game for which the focus was on Pascal cards (especially considering the engine has been the same since Unity, back when Ubi was an Nvidia partner; you can look it up).

In aggregate, you can look at post-Turing games and see that Pascal and older cards fall further behind their counterparts from both AMD and Turing. And that's even a best case, due to how Pascal was structured, its long support, etc. If you go further back it's even more of a bloodbath.

Read.

"That's not what happened with The Witcher 3 at all, that's what happened with Crysis 2 (hidden tessellation). For TW3 at launch if you enabled hairworks it had forced MSAA 8x which is what tanked performance (for both vendors)"

"They specifically created code simple for the cards in the game which also tanked AMD performance on purpose and they got called out for it."

just as well if not better than the Nvidia ones with some tweaks

Just gotta reduce settings, guys, to make it run better, but then it's not a fair comparison anymore, is it? Wonder why Nvidia GameWorks gets disabled in the game in every benchmark that involves AMD cards; there you go, to make it fair. An absolutely pointless point you are trying to make here.

We can pick titles all day, but like your ACO example which fits my point perfectly

There is nothing to pick all day; your idea of Nvidia not optimizing anymore after a single generation of cards is laughable to anybody that has those GPUs. I showed you a pure DX11 game that is high profile, sponsored by AMD and optimized for it, which on top of that demonstrates the weakness of AMD's drivers. It's the perfect example: it also launched around the 2000 series and heavily favors AMD, which I already mentioned. There is absolutely no reason why the 1000-series cards would perform this well against AMD or the 2000 series when, according to you, Nvidia isn't pushing those cards anymore.

But even then, that was not my main point, just a side one.

My main point was just picking the latest AAA title, Death Stranding, which just came out, and the performance of the GPUs is exactly where you would expect it to be.

And if you even do a simple search of the Nvidia drivers, you can see what they currently support.


Version:451.67 WHQL
Release Date:2020.7.9
Operating System:Windows 10 64-bit
Language:English (US)
File Size:561.45 MB

SUPPORTED PRODUCTS:
NVIDIA TITAN Series:
NVIDIA TITAN RTX, NVIDIA TITAN V, NVIDIA TITAN Xp, NVIDIA TITAN X (Pascal), GeForce GTX TITAN X, GeForce GTX TITAN, GeForce GTX TITAN Black, GeForce GTX TITAN Z
GeForce RTX 20 Series:
GeForce RTX 2080 Ti, GeForce RTX 2080 SUPER, GeForce RTX 2080, GeForce RTX 2070 SUPER, GeForce RTX 2070, GeForce RTX 2060 SUPER, GeForce RTX 2060
GeForce 16 Series:
GeForce GTX 1660 SUPER, GeForce GTX 1650 SUPER, GeForce GTX 1660 Ti, GeForce GTX 1660, GeForce GTX 1650
GeForce 10 Series:
GeForce GTX 1080 Ti, GeForce GTX 1080, GeForce GTX 1070 Ti, GeForce GTX 1070, GeForce GTX 1060, GeForce GTX 1050 Ti, GeForce GTX 1050, GeForce GT 1030
GeForce 900 Series:
GeForce GTX 980 Ti, GeForce GTX 980, GeForce GTX 970, GeForce GTX 960, GeForce GTX 950
GeForce 700 Series:
GeForce GTX 780 Ti, GeForce GTX 780, GeForce GTX 770, GeForce GTX 760, GeForce GTX 760 Ti (OEM), GeForce GTX 750 Ti, GeForce GTX 750, GeForce GTX 745, GeForce GT 740, GeForce GT 730, GeForce GT 720, GeForce GT 710
GeForce 600 Series:
GeForce GTX 690, GeForce GTX 680, GeForce GTX 670, GeForce GTX 660 Ti, GeForce GTX 660, GeForce GTX 650 Ti BOOST, GeForce GTX 650 Ti, GeForce GTX 650, GeForce GTX 645, GeForce GT 640, GeForce GT 635, GeForce GT 630

While the 700 and 600 series aren't tested anymore, for obvious reasons, I would not be shocked if you could still play games perfectly well with them today without issues, as they should technically still be supported, even though their architectures are utterly outdated by now and barely targeted by any top-end AAA title. The reason this mostly isn't the case with AMD is that they endlessly rebrand their shit, and even that is debatable to some extent.

So I dunno what point you're really trying to make here. But your whole idea of Nvidia dropping support, and your years of experience, is laughable. I've got decades of experience and have had loads of GPUs at my disposal throughout the years, and frankly I don't see it.

Some games cater better to AMD or Nvidia than others because those vendors are more involved with them. It's nothing abnormal.
 
Last edited:

VFXVeteran

Banned
When they say 20% "faster", are they talking about clock speeds? Efficiency? IPC gain? Overall power? It is unfortunately not very clear.

I think we've heard numerous rumours about Ampere being 50% gain over 2080Ti, then we heard 40%, then 30% and now 20%?

What is going on at Nvidia? Are these percentages simply across different metrics? (such as efficiency gain per watt vs clock speed etc..).

Or has the gamble with pressuring TSMC and that backfiring so moving mostly to Samsung nodes limited their performance increase somewhat?

3080 = ~20% faster than the 2080 Ti
3080 Ti = 40-50% faster than the 2080 Ti

That sounds about right to me. It (the 3080 Ti) will be a really big jump. I'd love to have 1 TB/s of bandwidth. The more bandwidth, the faster the card will be at reading/writing datasets. Brute-force rendering is always preferred.
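On the 1 TB/s wish, memory bandwidth is just bus width times effective data rate, so the rumours are easy to sanity-check. A quick sketch (the 384-bit / 21 Gbps combination is only an example of what ~1 TB/s would take, not a confirmed Ampere spec):

def bandwidth_gb_s(bus_width_bits, gbps_per_pin):
    # GB/s = (bus width in bits / 8 bits per byte) * effective data rate per pin
    return bus_width_bits / 8 * gbps_per_pin

print(bandwidth_gb_s(352, 14))  # 2080 Ti: 352-bit GDDR6 at 14 Gbps = 616 GB/s
print(bandwidth_gb_s(384, 21))  # hypothetical 384-bit at 21 Gbps = 1008 GB/s, ~1 TB/s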
 
Last edited:

SF Kosmo

Al Jazeera Special Reporter
I am never going to buy a Ti card so I don't really care how batshit the top tier is. I am mostly interested to see how big of a leap in RT performance we get; the rumors have been 300% improvements to RT on equivalent cards, and that would be really game-changing.
 

Rikkori

Member
Read.



"They specifically created code simple for the cards in the game which also tanked AMD performance on purpose and they got called out for it."

That's speculation, I'm talking about facts. The fact that AMD cards are behind Nvidia's in tessellation wasn't due to "code", it was due to hardware differences.

Just gotta reduce settings, guys, to make it run better, but then it's not a fair comparison anymore, is it? Wonder why Nvidia GameWorks gets disabled in the game in every benchmark that involves AMD cards; there you go, to make it fair. An absolutely pointless point you are trying to make here.

The point about settings wasn't to test each card with different settings, but to change the reference point, because there's a difference between how things get calculated rather than how they look. E.g. I never ran HairWorks with more than 2x MSAA, it was just unnecessary. But doing it at 2x rather than 8x changes the stacking. Though I misremembered this point somewhat, because I had forgotten it was also due to tessellation.
It's also important to note that the cards all excel at different things, so if you only test one thing in one way, it's gonna show a skewed result.


There is nothing to pick all day; your idea of Nvidia not optimizing anymore after a single generation of cards is laughable to anybody that has those GPUs. I showed you a pure DX11 game that is high profile, sponsored by AMD and optimized for it, which on top of that demonstrates the weakness of AMD's drivers. It's the perfect example: it also launched around the 2000 series and heavily favors AMD, which I already mentioned. There is absolutely no reason why the 1000-series cards would perform this well against AMD or the 2000 series when, according to you, Nvidia isn't pushing those cards anymore.

Again, I explained the context earlier. ACO, just like the other 3 ACs before it, is on the same engine, which was developed in heavy collaboration with Nvidia; you can find all the talks on the GDC Vault where Nvidia had engineers work on them (as well as on Far Cry 4). That ACO has an AMD badge doesn't mean anything; that was just because they gave the game away in a bundle with GPUs.

But even then, that was not my main point, just a side one.

My main point was just picking the latest AAA title, Death Stranding, which just came out, and the performance of the GPUs is exactly where you would expect it to be.

And if you even do a simple search of the Nvidia drivers, you can see what they currently support.

Death Stranding is a reversion to the mean, but I'm not saying every new game will show this new skewed tiering, more so that a greater proportion will than you'd see otherwise. And when I'm talking about driver support I'm talking about specific performance improvements for those cards, not whether or not they show up in the drop-down list for driver downloads.
So when you have a situation like this, it gets resolved for Turing - but not for Pascal et al.


Note how they still specify performance improvements in these sorts of driver notes, which they have since stopped doing. And notice that Pascal is ignored.


So in the end my point is very simple (and I think generally uncontroversial), and it is this: when you're an owner of the current generation of Nvidia GPUs, you get the royal treatment in terms of drivers & game-specific performance improvements for said cards. When a new gen arrives, they wish you good luck and send you on your merry way. The cards don't stop working, but you're no longer getting as much from the hardware as you would if they still showed it the driver care they did at first.
 
Last edited:

mitchman

Gold Member
Yup, mentioned this before: I think the PS5 CPU will be more in line with a 2600's output, to be honest, if you compare it to PC performance.
Makes little sense considering they have mobile CPUs like the Ryzen 9 4900HS at 35W, or even the stepped-down version of the Ryzen 7, that all perform well compared to the Ryzen 7 3700X desktop CPU (within 10-15%). The HS is likely the basis for the consoles at 35W, but maybe they can do the H series at 45W. We'll see, but as mobile CPUs they are truly revolutionary at this power envelope and crush Intel CPUs that use more than double the wattage.
 
The only thing getting in my way of the 3080 Ti is that new rumored connector for the GPU.
My PSU is modular and has 6- and 8-pin connectors, so I am wondering if I can just use 2x6-pin and be good, or is this some new standard I will have to buy a whole new PSU for?
 
Last edited:

Ellery

Member
The only thing getting in my way of the 3080 Ti is that new rumored connector for the GPU.
My PSU is modular and has 6- and 8-pin connectors, so I am wondering if I can just use 2x6-pin and be good, or is this some new standard I will have to buy a whole new PSU for?

I would be extremely surprised if anyone (that already has a good 500/600W+ PSU) would need to buy a new specific PSU for a graphics card. That would get a lot of backlash and many people wouldn't have it.

To be fair, it is Nvidia we are talking about, but they know how much hassle a new PSU can be. It's probably just an adapter or something (if those rumors are even true).
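On the 2x6-pin question, the standard PCIe power budgets are easy to add up: the slot supplies up to 75 W, a 6-pin up to 75 W and an 8-pin up to 150 W. A small sketch (the wattage tiers in the comments are just illustrative; nothing about Ampere's actual draw or what the rumored 12-pin delivers is confirmed):

# Per the PCIe spec: slot = 75 W, 6-pin = 75 W, 8-pin = 150 W
SLOT_W, SIX_PIN_W, EIGHT_PIN_W = 75, 75, 150

def board_power_budget(six_pins=0, eight_pins=0):
    return SLOT_W + six_pins * SIX_PIN_W + eight_pins * EIGHT_PIN_W

print(board_power_budget(six_pins=2))                # 225 W - enough for a mid-range card
print(board_power_budget(six_pins=1, eight_pins=1))  # 300 W - a typical high-end setup today
print(board_power_budget(eight_pins=2))              # 375 W - roughly where a 12-pin or adapter starts to matter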
 

Kenpachii

Member
Makes little sense considering they have mobile CPUs like the Ryzen 9 4900HS at 35W, or even the stepped-down version of the Ryzen 7, that all perform well compared to the Ryzen 7 3700X desktop CPU (within 10-15%). The HS is likely the basis for the consoles at 35W, but maybe they can do the H series at 45W. We'll see, but as mobile CPUs they are truly revolutionary at this power envelope and crush Intel CPUs that use more than double the wattage.

Good point, never thought about that to be honest. However, clock speeds will still be lower and not all cores will be up for grabs for games, so it could mean you will need a 3000-series CPU after all.
 
Last edited:

CuNi

Member
I would be extremely surprised if anyone (that already has a good 500/600W+ PSU) would need to buy a new specific PSU for a graphics card. That would get a lot of backlash and many people wouldn't have it.

To be fair, it is Nvidia we are talking about, but they know how much hassle a new PSU can be. It's probably just an adapter or something (if those rumors are even true).

Those new connectors are really overdue though.
I could see them packing an adapter for older PSUs with cards that don't draw as much power, but I honestly wouldn't mind if they forced you to get a new PSU to run that top-tier, power-hungry GPU.
Like I said, a change in the pins and cables used by PSUs is already long overdue.
 

Skyr

Member

Another leak pointing to a 50% boost over the 2080 Ti.
Is the 3090 supposed to replace the Titan or the 2080 Ti?
 
Yeah, a 50% jump is huge. Enjoy those kinds of gains while they last. When was the last time we saw a 50% jump in CPU performance?

If they can do that with substantially better ray tracing performance + DLSS 3.0, I'm in!
 
Last edited:
The Gears 5 demo showed "very similar" results to a stock 2080. It was a "2-week port" not 1 day. The game has been out on PC since Sept. 2019, runs on the most widely used game engine that's constantly getting engine-side updates, and supported on hardware(Turing) that has nearly all the same features as the upcoming XSX.

That being said, XSX has pretty great hardware for what I expect will be $499. I'll be going with Lockharts for my boys this Xmas, and updating my PC. Gonna sell my 2060S and R5 1600 when the time is right and upgrade to a $500 Ampere or RDNA 2 GPU depending on who has the best RT. I haven't ruled out going up to around $600, but we'll see. Will upgrade CPU to either R5 3600 or R7 3700X and add another 16GB RAM. My out of pocket will be similar to the price of a new XSX.

Concerning the CPU discussion, we've seen the benchmarks for the PS5 CPU and it's more in line with a R7 1700 at 3-3.15GHz. I suspect they've stripped the gaming cache down substantially. People will be just fine with R5 3600-level and above.
[Image: PS5 CPU benchmark comparison chart]


The GPU Mark Cerny stealth-leaked during the Road to PS5 will probably be the successor to the RX 5700 and run $300-350. It's not going to be some world-beating hardware by the end of the year, but it's a nice jump above the beginning of this gen when 7850 2GB was ~$155 and 7870 was ~$175. I'm highly tempted to buy a PS5, play Ratchet and Clank, then sell it back to someone for the same price I got it before Xmas.

Anyway, I'm most interested in RT performance because I'll be spending most of my time with Cyberpunk and I want the best experience. Consoles won't offer that. That's why I'm in the market for a mid-high range Ampere card.
PS5 CPU benchmarks? R7 1700? Where did you get that? Someone tell me when the PS5 CPU went backwards to Zen 1? Jeez, this stuff never ends. Anyway, the new 3000 series from Nvidia seems like it's gonna Hulk smash.
 
Last edited:

CrustyBritches

Gold Member
PS5 CPU benchmarks? R7 1700? Where did you get that? Someone tell me when the PS5 CPU went backwards to Zen 1? Jeez, this stuff never ends. Anyway, the new 3000 series from Nvidia seems like it's gonna Hulk smash.
It's the PS5's CPU. This userbench leak came via Komachi a year ago. They quartered the L3 cache and it's paired with GDDR6.
 
It's the PS5's CPU. This userbench leak came via Komachi a year ago. They quartered the L3 cache and it's paired with GDDR6.
The PS5's CPU is Zen 2, dude. Your info is bogus. I honestly don't understand how, with all the information out there, you actually think it's a 1700. Good lord. It's the exact same CPU as the Series X but doesn't reach the same clocks. Also, there are plenty of rumors that Sony's version of Zen 2 has one unified CCX for latency reduction. That hasn't yet been confirmed, but it has been rumored for a while and could be the same for the Series X.
 
Last edited: