
Nvidia Ampere Purportedly 50% Faster Than Turing At Half The Power Consumption

Nvidia could have won any console deal they wanted to in the past couple of generations. They're simply not interested in slicing margins far enough to compete with AMD on that sort of deal. AMD on the other hand will take those low margin, high volume deals every day because it keeps them afloat.

And yet they cut a supposed long term deal with Nintendo?? I mean sure it was to offload Tegra chips, but then why make a long-term partnership if Nintendo is just going to give you similar low margins on future chips??
 

magnumpy

Member
Those are crazy figures. So in summer or whereabouts we'll have cards with 3x the performance available before next gen even arrives. :messenger_face_screaming:

Say next gen boxes are around ~10 TF, the custom 2080 Ti already reaches 16+ TF, so at the same power we're looking at potential ~32 TF aftermarket 3080 Ti's in summer. Assuming NV will release such a big card again.

sad but true, the high end is not what most games are developed for. if we're lucky 10TF will be the baseline for next gen. rumors say ~8TF could be the baseline. it will be an improvement anyway ._.
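For reference on where those TF numbers come from: peak FP32 throughput is just shader count × 2 ops (one FMA counts as two) × clock. A minimal sketch, assuming a ~1.9 GHz boost clock for a factory-overclocked 2080 Ti (the clock is an assumption for illustration, not an official spec):

```python
# Rough FP32 TFLOPS: shaders * 2 ops (FMA) * clock in GHz / 1000.
def tflops(shaders, clock_ghz):
    return shaders * 2 * clock_ghz / 1000.0

custom_2080ti = tflops(4352, 1.90)        # ~16.5 TF for a ~1.9 GHz custom card
print(f"custom 2080 Ti: ~{custom_2080ti:.1f} TF")
print(f"hypothetical 'double it at the same power': ~{2 * custom_2080ti:.1f} TF")
```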
 

McRazzle

Member
And yet they cut a supposed long term deal with Nintendo?? I mean sure it was to offload Tegra chips, but then why make a long-term partnership if Nintendo is just going to give you similar low margins on future chips??
It wasn't to offload Tegra chips.
The TX1 only came out in 2015; they didn't just make a deal in 2015 for a console that was supposed to launch in 2016.
I think the Switch was just a stopgap.

Nintendo first approached Nvidia in 2013.
Volta was originally on Nvidia's roadmap for 2016 to be its gaming architecture on 10nm with next-gen memory.
That roadmap was later changed to Pascal on 16nm.

Nvidia reported in 2016 that most of its revenue increase was due to Nintendo, and about $350 million (USD) of it was unaccounted for after adding up the increases across all of their divisions and comparing against the total increase.
We know from the drop in Nvidia's revenue in the first half of 2019 that Nintendo pays for its chips only on delivery.
Analysts were stating a couple of years ago that the cost of designing an SoC on 7nm EUV was around $270 million.
Take that and add $80 million for profit to Nvidia and I think you have a chip design for Nintendo.

Nvidia has been remastering Nintendo games in China for the Nvidia Shield for over two years now.
The only Wii game coming to the Switch is Xenoblade Chronicles, and that's one that would be a nightmare for Nvidia to localize and likely wouldn't draw much interest there anyway.

Personally, I think they have a cross licensing deal;
Nintendo has a license for Nvidia chips and Nvidia has an exclusive license for N64, GameCube, and Wii games, but Nvidia can't use them worldwide until Nintendo gets its chips.
 
This rumor is just a generic estimate for going from one node to another, not really any kind of inside knowledge of what Nvidia is doing with their chips. People are eager to jump on such fantasy clickbait, but that's all it really is. The theoretical node-transition performance improvements hardly translate to similar performance increases in finished products, and even less so in games themselves.

Nvidia has a challenge in that they've already made massive chips in the current generation, so really bringing a big boost in performance would require making similarly sized chips on the newer node, which further raises prices. The consumer space can only tolerate these price hikes so much, as was evident from the original RTX launch. The other option is to make smaller chips that are cheaper and more efficient, but those won't fulfill people's demands for a performance increase.
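To put that die-size/cost trade-off in rough numbers, here is a back-of-the-envelope sketch; the wafer prices and defect density below are illustrative assumptions, not published figures, and only the ~754 mm² TU102 die size is a real spec:

```python
import math

# Dies per 300 mm wafer (standard approximation) and a simple Poisson yield model.
# Wafer prices and defect density are assumptions for illustration only.
def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    r = wafer_diameter_mm / 2
    return int(math.pi * r ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def yield_rate(die_area_mm2, defects_per_cm2=0.1):
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100)

def cost_per_good_die(die_area_mm2, wafer_cost_usd):
    return wafer_cost_usd / (dies_per_wafer(die_area_mm2) * yield_rate(die_area_mm2))

# TU102-sized die on an assumed cheaper 12/16nm-class wafer vs a pricier 7nm-class
# wafer, and a smaller die on that same pricier wafer.
for area_mm2, wafer_cost in [(754, 6000), (754, 10000), (450, 10000)]:
    print(f"{area_mm2} mm2 die on a ${wafer_cost} wafer: "
          f"~${cost_per_good_die(area_mm2, wafer_cost):.0f} per good die")
```

Keeping the same die area on a pricier wafer scales the silicon cost up with the wafer price, which is exactly the squeeze described above; the smaller die is the cheaper-but-less-impressive alternative.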

As a consumer, the best you can hope for is that there's further architectural optimizations that come into play, as well as software solutions like variable rate shading used in games. The other side is that AMD needs to be more competitive in the market so that it forces Nvidia to compete on price in the high end too. They certainly have room to do that, but right now there's no reason.

One might hope for Intel entering the competition to also improve things, but I wouldn't hold my breath for them to provide anything in the consumer space in 2020, and they aren't exactly known for offering value either. Of course if they want to compete, they are the challenger and they do have the money to push things if need be.
 

pawel86ck

Banned
50% performance boost compared to 2080ti would be great, however I wonder how much longer Nv can push GPU performance before they hit a wall like Intel.

Progress is already much slower than it used to be. I remember when Nvidia was launching a new high end GPU every year with a 70-100% performance boost in the same $500-600 price range. Now Nvidia launches their high end GPUs every 2 years with only a 50% performance boost, and on top of that people can no longer buy a high end GPU in the same price range, so they have to buy even slower GPUs, because Nvidia started selling their high end GPUs for $1000-1200 instead of $500-600. So people need to wait longer, pay more, and still get less performance.
 

Dr.D00p

Gold Member
Yeah, but when is it coming out? Cyberpunk is out in March. We need these cards now.

..You won't be buying these until August/September (Founders Edition) and probably October before all the 3rd party cards are available.
 

MadAnon

Member
It's either 50% performance increase at the same power or the same performance at half the power. +50% performance at half the power is bs.
 

Ellery

Member
You guys shouldn't worry about +50% performance from the next generation of 7nm. That is definitely going to happen. The jump from 12nm to 7nm is quite big and Nvidia always has done sizeable performance jumps with new architecture + manufacturing process.

Here is my realistic take :

- The 50% more performance + half power consumption at the same time is bullshit (from the news). That would literally be too good to be true
- Still the 12nm to 7nm jump is gigantic and allows Nvidia to do more on smaller chips compared to now
- 7nm is more mature now and the yields will probably be great so the first 7nm cards from Nvidia can be expected to be well rounded products
- The question now is whether Nvidia puts more focus on raw GPU power or RTX cores or if they find a way to surprise us (hybrid cores)
- The other question is in what order is Nvidia going to release cards. On 7nm it would be no problem to release a smaller "3080" that comfortably (15-20%) beats the 2080 Ti
- Are they going all out and bringing an actual big chip at release? Let's just call it the TA102 3080 Ti
- What does AMD have by then? If you scale up the 5700 XT to a bigger size it could beat the 2080 Ti (not by much though, because the theoretical 5800 XT or 5900 XT would draw too much power; see the rough sketch after this list)
- AMD has to show RayTracing as well and has to make it fit
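A rough sketch of that 5700 XT scale-up point, assuming performance scales linearly with CU count at fixed clocks and board power scales along with it (figures are public specs or plain assumptions, nothing more):

```python
# RX 5700 XT: 40 CUs x 64 shaders, ~1.9 GHz boost, ~225 W board power (public specs).
cus, shaders_per_cu, clock_ghz, board_power_w = 40, 64, 1.9, 225
target_tflops = 13.4   # roughly an RTX 2080 Ti Founders Edition

navi_tflops = cus * shaders_per_cu * 2 * clock_ghz / 1000   # ~9.7 TF
scale = target_tflops / navi_tflops                          # ~1.4x more CUs needed
print(f"5700 XT: ~{navi_tflops:.1f} TF, scale factor to match a 2080 Ti: {scale:.2f}x")
print(f"naive board power at that size: ~{board_power_w * scale:.0f} W")
```

The naive ~310 W result is why the "would draw too much power" caveat above seems reasonable.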

One thing is for sure: Nvidia on 7nm will definitely allow for something great to happen. The question is just when Nvidia decides to do it, how much they are willing to give to gamers, and at what price. AMD and Intel will play a part in 2020 as well, but mostly in how Nvidia is going to price its products. I just don't see either AMD or Intel being able to come close to Nvidia in the GPU space.

Some random thoughts :

- Big RTX games are releasing in 2020 including Cyberpunk 2077, Doom Eternal, Dying Light 2 and Vampire: The Masquerade – Bloodlines 2 and probably a few more.
- Nvidia would want to have a new RTX product out when Cyberpunk 2077 comes out
- Also new consoles are releasing in late 2020
- AMD will have a RayTracing GPU out in 2020, maybe even in time for Cyberpunk 2077? Nvidia wouldn't want an AMD card better than the 2080 Ti out by then.

What I think might happen (just pure speculation) :

Nvidia is going to release an RTX 3080 in early April or late March that improves on the RTX 2080 Ti in both raw GPU power and RTX capabilities. It is probably going to be a small GPU power jump from a smaller chip (like I listed above: maybe 10-20% over the 2080 Ti, but the RTX increase is much bigger, maybe 40-60%).
AMD is releasing a new Navi / RDNA / RDNA 2.0 card that is bigger and more powerful than the 5700 XT and has dedicated hardware RayTracing. It may be better than the 2080 Ti, but it will be slower than the RTX 3080.
Intel is late to the party and releasing absolute junk that nobody wants. It will be significantly slower than what either AMD or Nvidia has to offer and it will be DOA.
Later in 2020 Nvidia is going to release the bigger 7nm chips, right before the PS5 and Xbox Series X launch. Let's call it the 3080 Ti, with a 50-60% perf increase over the current 2080 Ti.


So that would be my guess if I had to make one. Please remember that this is all speculation and the numbers are out of my ass, beyond the knowledge that we are getting new hardware and the release dates for games and new consoles.
 

Ascend

Member
It's good to see some level-headed posts in here. Too many people with green glasses on while having their logic system in hibernation.
 

Dontero

Banned
50% performance boost compared to 2080ti would be great

You won't get 50% more performance. You are looking at 20-30% at best. In order to get 50% they would have to make a chip as big as today's, which on the new node will be super expensive. Instead of paying $600 for a GPU you would have to pay $10,600.
So they shrink the chip down, which makes it cost efficient, and you get a 20-30% boost instead of 50% or more. Once the node process stabilizes and their architecture is tuned you will get that 50-60%, but not in one generation compared to the current offering.

Progress is already much slower than it used to be. I remember when Nvidia was launching a new high end GPU every year with a 70-100% performance boost in the same $500-600 price range. Now Nvidia launches their high end GPUs every 2 years with only a 50% performance boost

Your memory is shit. There has been no generation which doubled performance in a year other than some completely new segments showing up like dual gpus or introducing new pro segments. Usually it is 20-30% between node jumps and around 10 to 20% between arch changes.

Very rarely you get arch change and node jump at the same time which is when you can get 30+% but it still won't give you 50%.

however I wonder how much longer Nv can push GPU performance before they hit a wall like Intel.

There is no wall for GPUs like there is for CPUs because GPUs by nature are parallel. GPUs in essence are just thousands of small cores doing simple math. You can always add more cores which directly benefits performance.

CPUs have issues because many tasks that are done on CPUs require linear calculations which can't be parallelized.

CPU: You do a calculation and have to wait for its result before you can use it in the next one.
GPU: You do many independent calculations at the same time, with no waiting on earlier results before starting to work.
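A toy illustration of that difference (just a sketch, not from the post): the first loop has a step-to-step dependency and can't be split across cores, while the second is elementwise and maps naturally onto thousands of GPU lanes:

```python
def serial_chain(values):
    # CPU-style dependent work: step i needs the result of step i-1,
    # so the loop cannot be parallelized.
    total = 0.0
    for v in values:
        total = total * 0.5 + v
    return total

def independent_map(values):
    # GPU-style work: every element is computed independently,
    # so each of thousands of cores could handle its own slice at once.
    return [v * v + 1.0 for v in values]

data = [float(i) for i in range(1_000_000)]
print(serial_chain(data))
print(independent_map(data)[:3])
```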
 

LordOfChaos

Member
Intel and Nvidia should do an APU together.


This man will agree to that when and only when he's CEO of Intel




We'll see if Intel comes up with interesting "APUs" though, with Tiger Lake switching their IGP architecture out for Xe.
 

pawel86ck

Banned
You won't get 50% more performance. You are looking at 20-30% at best. In order to get 50% they would have to make a chip as big as today's, which on the new node will be super expensive. Instead of paying $600 for a GPU you would have to pay $10,600.
So they shrink the chip down, which makes it cost efficient, and you get a 20-30% boost instead of 50% or more. Once the node process stabilizes and their architecture is tuned you will get that 50-60%, but not in one generation compared to the current offering.



Your memory is shit. There has been no generation which doubled performance in a year other than some completely new segments showing up like dual gpus or introducing new pro segments. Usually it is 20-30% between node jumps and around 10 to 20% between arch changes.

Very rarely you get arch change and node jump at the same time which is when you can get 30+% but it still won't give you 50%.



There is no wall for GPUs like there is for CPUs because GPUs by nature are parallel. GPUs in essence are just thousands of small cores doing simple math. You can always add more cores which directly benefits performance.

CPUs have issues because many tasks that are done on CPUs require linear calculations which can't be parallelized.

CPU: You do a calculation and have to wait for its result before you can use it in the next one.
GPU: You do many independent calculations at the same time, with no waiting on earlier results before starting to work.
As I remember, my 8800 Ultra was 2x faster compared to the 7900 GTX, and previous Nvidia GPUs were also more than 50% faster.
 

LordOfChaos

Member
Intel already did with AMD.

Technically an MCM: there were three distinct dies on one package, so presumably there's less cross-IP collaboration needed than on a single-die APU, since they can all come from different fabs and just be placed on the substrate.
 

diffusionx

Gold Member
Oh no we got too cocky consolebros... at least we have... SSDs...

But seriously the 2xxx series always looked like a half-step. The performance jump over 1xxx wasn't there and RTX killed all the performance. There had to be a much better card somewhere down the line and hopefully this is it.
 

Kenpachii

Member
Any idea if my i5 2500k will be a bottleneck? :messenger_smirking:

Depends what you do with the card. My 1080 Ti is already getting bottlenecked at 1080p on a 9900K, so there's that.

However, if you're going to game at 8K and 60 fps in titles where your CPU can manage to hit 60 fps, you will be fine :p
 

FireFly

Member
Your memory is shit. There has been no generation which doubled performance in a year other than some completely new segments showing up like dual gpus or introducing new pro segments. Usually it is 20-30% between node jumps and around 10 to 20% between arch changes.
How about the 6800 Ultra vs FX 5900, 7800 GTX 512 vs. 6800 Ultra, 8800 GTX vs 7900 GTX? 980Ti to 1080 Ti was also a pretty big jump (~70% at 1440p), though it took an extra year.
 

diffusionx

Gold Member
How about the 6800 Ultra vs FX 5900, 7800 GTX 512 vs. 6800 Ultra, 8800 GTX vs 7900 GTX? 980Ti to 1080 Ti was also a pretty big jump (~70% at 1440p), though it took an extra year.
The FX series was infamously awful, though.

The 2xxx series reminds me of that series in a lot of ways actually, although the Super line is better.
 

Dontero

Banned
How about the 6800 Ultra vs FX 5900, 7800 GTX 512 vs. 6800 Ultra, 8800 GTX vs 7900 GTX? 980Ti to 1080 Ti was also a pretty big jump (~70% at 1440p), though it took an extra year.

If you are going to use high resolutions to make a point then don't show up at all. I can also prove that modern processors are 10000000000000% faster than an i5-2500 by doing some obscure AVX calculation no one will use for a good while.
When people are talking about performance increases they are talking about the standard resolutions people use daily.
 

TeamGhobad

Banned
This man will agree to that when and only when he's CEO of Intel




We'll see if Intel comes up with interesting "APUs" though, with Tiger Lake switching their IGP architecture out for Xe.

MS and Sony can't use AMD; they need to use rival companies to get the best innovations.
 

FireFly

Member
If you are going to use high resolutions to make a point then don't show up at all. I can also prove that modern processors are 10000000000000% faster than an i5-2500 by doing some obscure AVX calculation no one will use for a good while.
When people are talking about performance increases they are talking about the standard resolutions people use daily.
I am not talking about crazy resolutions though. I'm talking about 1600x1200 or 1080p, which a CRT or 24" widescreen could display. I used 1440p as an example, since that was in 2017, and it had become a pretty standard resolution by then.

At low resolutions you are less likely to be GPU limited, hence the performance increase will be smaller. But here, for example, you can see the 8800 GT (not GTX) destroying the previous generation at 1280x1024:


Or here you can see the same thing with the 6800 Ultra:

 

pawel86ck

Banned
If you are going to use high resolution to make a point
There's no point benchmarking a GPU at lower resolutions (because of CPU bottlenecks). Just admit you were wrong and I will forgive you :messenger_tears_of_joy: :messenger_winking:

TNT2 vs Geforce 1 DDR - 18 fps vs 35 fps (and 44 fps OC), that's 94% improvement (and 144% OC)


GF1 DDR vs GF2 Ultra 1600x1200 - 25 fps vs 55 fps, that's 120% improvement


GF2 Ultra vs GF3 ti 500 - 55fps vs 100fps, that's 81% improvement
[Quake 3 Arena 1600x1200 benchmark chart]


GF 3 Ti 500 vs GF 4 ti 4600 - 23 vs 39 fps, that's 69% improvement


GF 4 ti 4600 vs GF 5 (FX) 5900 XT 1280x1024 - 10 vs 38 fps, that's 280% improvement (and people say FX series was the worst :messenger_beaming: ).
[Doom 3 (High quality) benchmark chart]


GF4 ti 4600 vs FX5900 Ultra - 20 vs 76 fps, that's also 280% improvement
[Unreal Tournament benchmark chart (Asbestos, AA/AF)]


GF 5900 vs 6800 Ultra - 25 vs 69 fps


GF 6800 Ultra vs GF 7900 GTX - 80 vs 156 fps, that's 95% improvement
[Unreal Tournament 2004 (High) benchmark chart]



7900 GTX vs 8800 GTX - 19 vs 48 fps, that's 152% improvement


8800 Ultra vs 280 GTX - 32 vs 50 fps, that's 56% improvement


GF 285 GTX vs GTX 480 (FERMI 1.0) vs 580 (FERMI 2.0) - 60 vs 100 vs 119 fps, that's 66% (GTX480) and 99% (GTX580) improvement

[Far Cry 2 1920x1200 benchmark chart]


####################

And since 2012 Nvidia started selling their mid-range GPUs as high end GPUs. The GTX 680 wasn't the best Kepler, it was only mid-range Kepler, however it was sold at the same price as previous high end GPUs from Nvidia.

GTX 580 vs GTX 680 vs 780 Ti (the real high end Kepler) - 32 vs 43 (47 with newer drivers in the 2nd chart) vs 72 fps. That's 34% improvement (GTX 680) and 125% (780 Ti).

[Battlefield 3 2560x1600 benchmark chart]



So people started paying much more for much less performance :messenger_beaming:. And where are we today?

1080ti vs 2080ti - 59 vs 79 fps, so that's 33% more performance for just $1200
[Far Cry 5 3840x2160 benchmark chart]


If people keep defending Nvidia, their GPUs will only get more and more expensive.
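For reference, the improvement figures above are just (new fps ÷ old fps − 1) × 100; a quick check on a few of the pairs listed (fps values taken straight from the post):

```python
pairs = {
    "TNT2 -> GeForce 1 DDR": (18, 35),
    "6800 Ultra -> 7900 GTX": (80, 156),
    "1080 Ti -> 2080 Ti": (59, 79),
}
for name, (old_fps, new_fps) in pairs.items():
    print(f"{name}: +{(new_fps / old_fps - 1) * 100:.0f}%")   # 94%, 95%, 34%
```

That is also why the 6800 Ultra → 7900 GTX pair works out to roughly +95%.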
 

diffusionx

Gold Member
I really doubt many people spent $1200 to go from a 1080ti to a 2080ti, though.

That's the thing about the late 90s/early 00s - hardware evolved really rapidly. It was actually frustrating, because you'd buy a GPU and it would be a slow POS a year later (especially for a poor kid in school like me).

In fact sometimes even worse - GF3ti 500 came out in 10/01, GF4ti 4600 in 2/02. Four months and 70% improvement. And the games often did use that extra power!
 

McRazzle

Member
You won't get 50% more performance. You are looking at 20-30% at best. In order to get 50% they would have to make a chip as big as today's, which on the new node will be super expensive. Instead of paying $600 for a GPU you would have to pay $10,600.
So they shrink the chip down, which makes it cost efficient, and you get a 20-30% boost instead of 50% or more. Once the node process stabilizes and their architecture is tuned you will get that 50-60%, but not in one generation compared to the current offering.

Not true according to TSMC.


[TSMC process node scaling slide]
 

pawel86ck

Banned
I really doubt many people spent $1200 to go from a 1080ti to a 2080ti, though.

That's the thing about that late 90s/early 00s - hardware evolved really rapidly. It was actually frustrating because you buy a GPU and it's a slow POS a year later (especially for a poor kid in school like me).

In fact sometimes even worse - GF3ti 500 came out in 10/01, GF4ti 4600 in 2/02. Four months and 70% improvement. And the games often did use that extra power!
Indeed, my first PC, a Celeron 2 400MHz (OC'ed to 500 if I remember correctly) + TNT2 32MB, was barely running new games 2 years later. Now people can still play games even on very old hardware; something like an i7 2600 and a 780 Ti still runs many games at playable framerates at 1080p.
 

Dontero

Banned
There's no point benchmarking a GPU at lower resolutions (because of CPU bottlenecks). Rather than making absurd arguments, just admit you were wrong and I will forgive you

I am not talking about lower resolutions but about the resolutions most people used as standard. Right now the standard is 1920x1080, with some switching to 4K. In the PC space there is also 1440p, but it is a big niche.

The point here is that if you want to compare GPUs where they benefit the most, it is precisely at high resolutions, because most of the time new GPUs simply have much more RAM, which allows for greater resolutions.

If you want to argue that this is "fine", then you can expect new GPUs to double or triple performance every single year, because it is very easy to get an old GPU to run 8K and absolutely chug while the new GPU has a shitload of RAM and doesn't chug. When the next GPU comes out, same story: the old one at 16K and the new one with even more memory. When people played Q3 the default resolution was either 800x600 or 1024x768. 1280x10xx was there, but Q3 absolutely shit the bed on most GPUs back then, so it is no wonder that newer GPUs were running it well.

What people can expect is 30% at best.

Not true according to TSMC.

*If you leave the die size the same and don't touch power requirements. Which won't happen, because running a 500mm² die on 7nm would mean $10k GPUs in shops, or more.
 

diffusionx

Gold Member
1080ti is solid. What if Ampere is the shit though and not a pile of doodoo like the 2080 is?

if Ampere is getting us 50%-75% improvements then I'm on board for it. Comfy couch gaming at 4k/60fps would be very doable at that point, and RTX at more than 1080p/30fps. It's a lot better than a PS5 obviously.
 

Agent_4Seven

Tears of Nintendo
It's a lot better than a PS5 obviously.
It's not better for one simple reason - you can't play Ghost of Tsushima, Persona 5 Royal, TLOU2, Spider-Man, Horizon, God of War and Uncharted 4 + Lost Legacy (and who knows what else will be at launch next year) on PC. And what will you get for buying a $700-1200 GPU on PC? Absolutely not worth it for me; I'd rather play more games I can't play on PC.

Also, 75%? Ha-ha, yeah, no fucking way. Even 50% is more like a joke.
 

diffusionx

Gold Member
It's not better for one simple reason - you can't play Ghost of Tsushima, Persona 5 Royal, TLOU2, Spider-Man, Horizon, God of War and Uncharted 4 + Lost Legacy (and who knows what else will be at launch next year) on PC. And what will you get for buying a $700-1200 GPU on PC? Absolutely not worth it for me; I'd rather play more games I can't play on PC.

Obviously it's situational. My situation is that I already have a PS4 Pro, so why would I buy a PS5 to play PS4 games? I thought we were just talking hardware specs here, not console warz.
 

Agent_4Seven

Tears of Nintendo
I already have a PS4 Pro though, so why would I buy a PS5 to play PS4 games?
I'm talking about myself here, obviously. And no console wars, I'm just sayin' I'd rather buy a PS5, and a 1080 Ti is more than enough for me right now.
 

pawel86ck

Banned
I am not talking about lower resolutions but about the resolutions most people used as standard. Right now the standard is 1920x1080, with some switching to 4K. In the PC space there is also 1440p, but it is a big niche.

The point here is that if you want to compare GPUs where they benefit the most, it is precisely at high resolutions, because most of the time new GPUs simply have much more RAM, which allows for greater resolutions.

If you want to argue that this is "fine", then you can expect new GPUs to double or triple performance every single year, because it is very easy to get an old GPU to run 8K and absolutely chug while the new GPU has a shitload of RAM and doesn't chug. When the next GPU comes out, same story: the old one at 16K and the new one with even more memory. When people played Q3 the default resolution was either 800x600 or 1024x768. 1280x10xx was there, but Q3 absolutely shit the bed on most GPUs back then, so it is no wonder that newer GPUs were running it well.

What people can expect is 30% at best.



*If you leave the die size the same and don't touch power requirements. Which won't happen, because running a 500mm² die on 7nm would mean $10k GPUs in shops, or more.
In the early 2000s people used CRTs, so they could play games at different resolutions without issues. But regardless of resolution, modern GPUs are also tested at very high resolutions: most people game at 1080p-1440p, yet many games are benchmarked at 4K.

BTW, the 2080 Ti has around a 30-40% performance improvement when it comes to rasterization, however in RT it's MANY times faster than the 1080 Ti, and not to mention the 2080 Ti has VRS. Quake 2 with RT runs slower on a 1080 Ti at 640x480 than on a 2080 Ti at 1440p. IMO the 2080 Ti is not the worst Nvidia GPU, but $1200 for just 30-40% in standard games is just terrible value.
 

Ascend

Member
Oh. We introduced consoles now. Let's jump in.
What I can get in 2020, from most likely to least likely;

AMD GPU > Nintendo Switch > Xbox Series X > PS5 > nVidia GPU
 

FireFly

Member
The point here is that if you want to compare GPUs where they benefit the most, it is precisely at high resolutions, because most of the time new GPUs simply have much more RAM, which allows for greater resolutions.
Right, but if the game was sufficiently GPU limited at low resolutions then you could also see huge performance improvements between past generations, since Nvidia was basically doubling the compute units each time. You can see a historical comparison of GPU FLOPS here:
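Since the linked chart isn't reproduced here, a minimal sketch of that scaling: peak FP32 is shader count × 2 ops × clock, with approximate boost/shader clocks (the exact clocks below are assumptions for illustration):

```python
# Peak FP32 throughput = shaders * 2 ops (FMA) * clock.
gpus = {                       # (shaders, approx clock in MHz)
    "8800 GTX (2006)":    (128,  1350),
    "GTX 580 (2010)":     (512,  1544),
    "GTX 680 (2012)":     (1536, 1006),
    "GTX 1080 Ti (2017)": (3584, 1582),
    "RTX 2080 Ti (2018)": (4352, 1545),
}
for name, (shaders, clock_mhz) in gpus.items():
    print(f"{name}: ~{shaders * 2 * clock_mhz / 1e6:.2f} TFLOPS")
```

The shader-count growth across those entries is the repeated doubling of compute units being referred to.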

 

Kenpachii

Member
Making lists of performance increases per generation and seeing that it's stalling is kinda pointless.

Nvidia reacts towards the market and nothing else.

All of their GPUs are positioned to compete either with themselves or with AMD.

Ampere will only increase performance to the point where they think you (as a high end consumer) will upgrade to it from their older generation Nvidia card, or to outmuscle AMD.

If AMD releases a card for 400 bucks that pushes 50% more performance than the 2080 Ti, you will see a full Ampere chip straight out of the gate. If it barely beats the 2080 Ti, or doesn't even beat it, you will see another ~30% again.
 