
Specs for nVidia Ampere Supposedly Leaked

CrustyBritches

Gold Member
PS5's CPU is Zen 2, dude. Your info is bogus. I honestly don't understand how, with all the information out there, you actually think it's a 1700. Good lord. It's the exact same CPU as the Series X, but it doesn't reach the same clocks. Also, there are plenty of rumors that Sony's version of Zen 2 has one unified CCX for latency reduction. That hasn't been confirmed yet, but it has been rumored for a while and could be the same for the Series X.
Your reading comprehension needs work. It performs like the 1700 and information points to quartered L3 cache and it's paired with GDDR6, hence the results. Komachi's info was correct the whole time. It's not up for debate: Gonzalo/Oberon/Flute/Prospero info was/is the PS5. This is a test for the actual PS5 CPU.
 
Last edited:

mitchman

Gold Member
It's the PS5's CPU. This userbench leak came via Komachi a year ago. They quartered the L3 cache and it's paired with GDDR6.
Doesn't matter; Zen 2 offers a significant IPC improvement over Zen and Zen+. As I've said previously in this thread, expect CPU performance within 15-20% of a similar 8c/16t Zen 2 desktop CPU at 35-45W. Quartered cache is also in the mobile Ryzen 7 and 9 CPUs, and they show these numbers.
 
Last edited:

CrustyBritches

Gold Member
Doesn't matter; Zen 2 offers a significant IPC improvement over Zen and Zen+. As I've said previously in this thread, expect CPU performance within 15-20% of a similar 8c/16t Zen 2 desktop CPU at 35-45W. Quartered cache is also in the mobile Ryzen 7 and 9 CPUs, and they show these numbers.
This is an actual benchmark for the PS5's CPU. It is what it is. If you choose not to believe it, that's on you.
 

mitchman

Gold Member
The benchmark was quickly removed. I posted the screenshot from before it was taken down. It had other info like 16GB memory with ~154ns latency, fairly high in comparison to most DDR4.
Oh you mean that userbenchmarks image? Questionable image from a very questionable site, and not actual proof of anything.
 
The benchmark was quickly removed. I posted the screenshot from before it was taken down. It had other info like 16GB memory with ~154ns latency, fairly high in comparison to most DDR4.
Is there anything in that leak that suggests it's based on a Zen+ chip, or some Zen 2 7nm prototype? It's possible that this test could have acted as a precursor before Sony made a lateral move to Zen 2.
 

Siri

Banned

Another leak pointing to a 50% boost over the 2080 Ti.

50% = I’ll give Nvidia my money.... and sell my RTX 2080 TI
 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
Those new connectors are really overdue though.
I could see them packing an adapter for older PSUs with cards that don't draw as much power, but I honestly wouldn't mind if they forced you to get a new PSU if you want to run that top-tier power-hungry GPU.
Like I said, a change in the pins and cables used by PSUs is already long overdue.

Why is that?
CPUs and motherboards are more efficient and rarely reach the limits of the cables.....16-core 3950Xs OC'd don't even max out the pins.
Those extra pins on some motherboards are usually totally useless except to people doing absolute-limit LN2 overclocking.

If you need a twelve-pin, connect two 6-pins together.
Everyone else has no need for the twelve-pin that effectively only one GPU on the market needs.

You want to set a new standard and make people buy new PSUs to accommodate one GPU that is actually already accommodated for?
WTF am I reading?!?!

If Nvidia is going to keep getting less efficient and the 4080 Ti requires a full 24-pin to run, I'll probably switch to being a full-time console gamer and just use render farms for all my GPU needs.
 

CrustyBritches

Gold Member
Oh you mean that userbenchmarks image? Questionable image from a very questionable site, and not actual proof of anything.
Flute is in line with Sony's use of Shakespearean codenames for PS5, and its PCI ID falls within their range as well.

Is there anything in that leak that suggests it's based on a Zen+ chip, or some Zen 2 7nm prototype? It's possible that this test could have acted as a precursor before Sony made a lateral move to Zen 2.
It's possible, I suppose. This is the Gonzalo/Oberon SoC being tested that was confirmed by DF to be the final silicon going into PS5.
 
A 50% boost over a 2080 Ti is not that great; might be better if coupled with a further 80% ray-tracing performance increase.
What the fuck are you smoking? 50% boost is on the higher side of a generational leap. Since 2010 only the jump from 980 Ti to 1080 Ti managed that and even then not really because the 980 Ti OC’d like a beast and could easily close that gap to like 40%.

Edit: Never mind, that is the alleged 3090 so yeah, 50% is OK. Not huge.
 
Last edited:

pawel86ck

Banned
Your reading comprehension needs work. It performs like the 1700 and information points to quartered L3 cache and it's paired with GDDR6, hence the results. Komachi's info was correct the whole time. It's not up for debate: Gonzalo/Oberon/Flute/Prospero info was/is the PS5. This is a test for the actual PS5 CPU.
You are correct, and we can't expect console CPUs to match Ryzen 2 CPUs on PC. However, consoles have separate audio and decompression chips, so these CPUs will punch above their weight for sure.
 

ZywyPL

Banned
[...] however, consoles have separate audio and decompression chips, so these CPUs will punch above their weight for sure.

It goes both ways. On one hand, the consoles will have dedicated decompression engines that offload that task from the CPU, plus audio ones, although many mid-to-high-range motherboards also have separate on-board audio chips, so I count that as even. BUT - the consoles will have an entire core locked for the OS, that's 2 threads taken away from the games, whereas Windows takes literally less than 1% of a modern CPU's power.
 

CuNi

Member
Why is that?
CPUs and motherboards are more efficient and rarely reach the limits of the cables.....16-core 3950Xs OC'd don't even max out the pins.
Those extra pins on some motherboards are usually totally useless except to people doing absolute-limit LN2 overclocking.

If you need a twelve-pin, connect two 6-pins together.
Everyone else has no need for the twelve-pin that effectively only one GPU on the market needs.

You want to set a new standard and make people buy new PSUs to accommodate one GPU that is actually already accommodated for?
WTF am I reading?!?!

If Nvidia is going to keep getting less efficient and the 4080 Ti requires a full 24-pin to run, I'll probably switch to being a full-time console gamer and just use render farms for all my GPU needs.

It's not just about being efficient or not. Right now you have a hard limit on how much power your card is allowed to draw. Efficiency just means it can do the same amount of work while needing less power.
Increasing this limit is not about allowing inefficient cards to exist, but rather about giving GPUs more headroom and power they can tap into.
CPUs are now 7nm and GPUs aren't that far behind anymore (in AMD's case even on par already). Silicon is slowly reaching its limits and there is only so much you can do with architecture changes etc.
At some point we either need a new material to be able to increase clocks, or we give it more power to be able to do more things simultaneously.

The other benefit of that connector would be that people with cheaper PSUs, and people who are not tech savvy, would have less trouble knowing if their PSU can even power that card.
If you have something like a daisy-chained PCIe cable, you might think that you can hook up 2 cards that each need one 8 (or 6+2) pin header, or 1 card that needs 2x 8 (or 6+2) pin headers.
BUT.. that would be wrong. That second part is only meant as an EXTENDER! It is NOT a 2nd power line. You could run into all sorts of problems with these.

You can read all of this on igorslab with even more in-depth explanations here.
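For anyone wondering where that hard limit actually comes from, here's a rough back-of-the-envelope sketch. The slot and 8-pin numbers are the standard PCIe spec limits; the 12-pin figure is only a placeholder for the rumored ceiling, not a confirmed rating:

Code:
/* Rough power-budget sketch. The slot and 6/8-pin numbers are the PCIe CEM
 * spec limits; the 12-pin figure is a rumored ceiling, NOT a confirmed rating. */
#include <stdio.h>

#define SLOT_W   75   /* PCIe x16 slot */
#define PIN6_W   75   /* 6-pin PEG connector */
#define PIN8_W  150   /* 8-pin PEG connector */
#define PIN12_W 600   /* rumored 12-pin ceiling -- assumption, not spec */

int main(void)
{
    int classic = SLOT_W + 2 * PIN8_W;  /* typical high-end card today: 375 W */
    int twelve  = SLOT_W + PIN12_W;     /* hypothetical 12-pin card */

    printf("slot + 2x 8-pin: %d W\n", classic);
    printf("slot + 12-pin  : %d W\n", twelve);
    return 0;
}

That 375W total is the ceiling today's cards are designed around; the whole point of the new connector is to raise it without daisy-chaining more 8-pins.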
 

Rikkori

Member
gif-explosion-36.gif


vRfcoGk.png


 

CrustyBritches

Gold Member
Videocardz linked to a new Gamers Nexus video where he discusses the rumored discontinuation of the 2070 Super (which, according to board partners, has not fully happened as claimed) and the alleged announcement date for Ampere...Sept. 9th.


---

I just noticed you posted this, Rikkori. Here's a chart from Videocardz showing the Wccftech spec rumors vs the Igor's Lab rumors...

Videocardz rumor chart(Wccftech vs Igor's Lab):

kodcjRr.jpg


I guess I'm in the market for a 3080 (10GB) or 3070 Super (8GB)? I think the 2080 launched at $699, which is the very top of my price range. That Wccftech-rumored 3070 Super (16GB) sounds good, but we'll see if it ever materializes.
 
Last edited:

DeaDPo0L84

Member
I want the 3080ti but if I have to upgrade other components of my PC then I'll wait. I'm anxious at the moment...

Mobo: MSI mpg Z390 Gaming Pro Carbon AC
Cpu: i7-9700k
Psu: evga 750w
RAM: Corsair Vengeance LPX 16GB (2x8GB)
 

Kuranghi

Member
I bought a GTX 1080 for £500 a couple of years ago, so it feels silly to upgrade again so soon, but I really want to get back to 4K60 in games; in most games I have to do 4K@30 or 1440p-1800p@60. I'd need something that's about 50% faster than what I have for that though, so, a 3080 at least? Which will be £600-700 I'm guessing :pie_thinking:.
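Rough back-of-the-envelope on that 50% figure, just to sanity-check myself (raw pixel counts only, which is obviously a crude proxy for GPU load):

Code:
/* Quick pixel-count arithmetic behind that "~50% faster" estimate.
 * Raw pixel count is only a rough proxy for GPU load. */
#include <stdio.h>

int main(void)
{
    double p1440 = 2560.0 * 1440.0;
    double p1800 = 3200.0 * 1800.0;
    double p2160 = 3840.0 * 2160.0;

    printf("4K vs 1440p: %.2fx the pixels\n", p2160 / p1440); /* 2.25x */
    printf("4K vs 1800p: %.2fx the pixels\n", p2160 / p1800); /* 1.44x */
    /* So ~44-50% more grunt covers 1800p60 -> 4K60,
       while 1440p60 -> 4K60 is closer to a 2.25x jump. */
    return 0;
}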

Maybe I'll just get a PS5 at launch instead; I don't know if it'll be truly worth it until 2021 though. If BC is amazing for PS4 games - i.e. doubled framerates and/or native 4K - then it'd be worth it, since I've not played God of War or RDR2 yet; they would surely keep me going until March 2021.

edit - I'm sorry I sullied the PCMR thread with console speak, I have dishonored my legacy...
 
Last edited:

GreatnessRD

Member
gif-explosion-36.gif


vRfcoGk.png


Looks tasty. I think I'll be interested in that tier 1 2070 Super range with the 16GB. Your move, AMD. Show us the leaks! :messenger_tears_of_joy:
 

Kuranghi

Member
vRfcoGk.png



It feels silly to get a 10GB card if you already have 8GB so they are "forcing" me to get a 16GB card lol.

Even though RE3make's maxed-out texture settings claim you're hitting ~11GB of VRAM usage at native 4K (I think it was 14GB with 150% res scaling lol), it rarely goes above ~6GB from what I could see. Good for the future though.
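If anyone wants to see what their own card actually has allocated (as opposed to what the in-game settings screen claims it needs), a minimal sketch against Nvidia's NVML API does the trick. Note the "used" figure is memory allocated on the card, which still isn't quite the same as what the game truly needs resident:

Code:
/* Minimal NVML sketch: print how much VRAM is allocated right now.
 * Build: gcc vram.c -o vram -lnvidia-ml (nvml.h ships with the CUDA toolkit). */
#include <stdio.h>
#include <nvml.h>

int main(void)
{
    nvmlDevice_t dev;
    nvmlMemory_t mem;

    if (nvmlInit() != NVML_SUCCESS) return 1;
    if (nvmlDeviceGetHandleByIndex(0, &dev) == NVML_SUCCESS &&
        nvmlDeviceGetMemoryInfo(dev, &mem) == NVML_SUCCESS) {
        printf("VRAM used: %.1f GB of %.1f GB\n",
               mem.used / 1073741824.0, mem.total / 1073741824.0);
    }
    nvmlShutdown();
    return 0;
}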
 
If it's all the same "ecosystem", why does the xbox have paid online and the PC doesn't?

xbox exclusive: online paywall

It's ridiculous that Xbox fans have accepted this situation; it seems like brainwashing done by MS.

It makes sense if you think about it. This basically is MS bringing online paywalls to PC.


Gee thanks for that nice trojan horse. Now I can pay more for hardware MS doesn't have to produce, and still give MS a monthly fee.

It's brilliant. Hope I'm wrong.
 
Your reading comprehension needs work. It performs like the 1700 and information points to quartered L3 cache and it's paired with GDDR6, hence the results. Komachi's info was correct the whole time. It's not up for debate: Gonzalo/Oberon/Flute/Prospero info was/is the PS5. This is a test for the actual PS5 CPU.
You know which AMD CPU has quartered L3 Cache?
Renoir.
Which is most certainly based on Zen 2, and absolutely runs circles around the 1700
 

kiphalfton

Member
I have a hard time believing they'll have two memory options for each respective model (for the RTX 2070 Super and RTX 2080 Super successors). I mean this only happened with what, the GTX 1060 (3GB vs 6GB). Even AMD only did it with the RX 570 afaik.
 

Rikkori

Member
Didn't really want to go to just 10 GB but it will all come down to the price. Got a nice chunk of scarola set aside but the price/perf of the 2080ti just felt too disgusting so probably won't jump for the flagship if it's the same situation (which it probably will be or worse).

I fear there's going to be some serious stock shortages though.
 

Antitype

Member
You know which AMD CPU has quartered L3 Cache?
Renoir.
Which is most certainly based on Zen 2, and absolutely runs circles around the 1700

But is Renoir paired with 150ns-latency RAM? Super high latency is crippling for a CPU; there's a reason PCs never went with GDDR for system RAM. If that 150ns figure is true, it's more than double the latency of a PC Ryzen 2 setup, and it has to have an impact.
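For context on why that number matters: memory latency is essentially how long a chain of dependent loads stalls per hop, and a crude way to see it is a pointer chase over a randomly shuffled buffer. Real tools like AIDA64 are far more careful about this, so treat it as a rough sketch:

Code:
/* Crude pointer-chase sketch: walk a randomly shuffled chain so every load
 * depends on the previous one and the prefetcher can't hide the latency. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N    (64 * 1024 * 1024 / sizeof(size_t))  /* 64 MB, well past any cache */
#define HOPS (20 * 1000 * 1000)

int main(void)
{
    size_t *chain = malloc(N * sizeof(size_t));
    if (!chain) return 1;

    /* Sattolo's algorithm: one big cycle through the whole buffer. */
    for (size_t i = 0; i < N; i++) chain[i] = i;
    srand(1);
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;
        size_t tmp = chain[i]; chain[i] = chain[j]; chain[j] = tmp;
    }

    struct timespec t0, t1;
    size_t p = 0;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < HOPS; i++) p = chain[p];  /* dependent loads */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("~%.1f ns per hop (p=%zu)\n", ns / HOPS, p);  /* print p so the loop survives -O2 */
    free(chain);
    return 0;
}

On a typical DDR4 desktop that lands somewhere around 60-80ns per hop, so a ~150ns figure would roughly double the stall on every cache miss.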
 

SF Kosmo

Al Jazeera Special Reporter
gif-explosion-36.gif


vRfcoGk.png


I was not expecting the 3070 Super this year. Interesting.

I'm not looking to replace my 2070 for at least a year or so, and I don't think we're gonna get any games that demand much more than that in that time but I am interested to see what devs do with all the extra raytracing power.

Right now raytracing is really performance limited and has to use a lot of optimizations and cut corners and even then it can be transformative. If they can double the RT performance we might start seeing games that look truly next gen.
 

Rikkori

Member
I was not expecting the 3070 Super this year. Interesting.

I'm not looking to replace my 2070 for at least a year or so, and I don't think we're gonna get any games that demand much more than that in that time but I am interested to see what devs do with all the extra raytracing power.

Right now raytracing is really performance limited and has to use a lot of optimizations and cut corners and even then it can be transformative. If they can double the RT performance we might start seeing games that look truly next gen.

Think it's more a dev-time issue (they need to keep a rasterised lighting model too), since they won't be able to do a full switch for another decade. I think we'll see those kinds of games in due course; RT isn't really that necessary anyway, look at what Lumen can do. I reckon we'll see some true next-gen games from 2022 or so. And ofc this November on PC. ;)
 
Last edited:

magnumpy

Member
game over man this is the end of everything and my last wishes are drowning in a flood of tears and screaming. it wasn't supposed to be like this man. horrible dissolution of things. 1,000 years of darkness. frowny face :(
 

Orta

Banned
As usual I'll be going for the xx70 card. Will it be time to upgrade my CPU though? I'm running an i7-6700K. Bottleneck?
 

Type_Raver

Member
Couple things...

1) if those memory configs are correct, I wonder what the prices of these will be?

2) anyone with info answer this - why the high memory capacity options? Is it because ray tracing demands it, or are they brute-forcing their way to counteract next-gen SSD streaming performance?

Love to know.
 

Chiggs

Member
Couple things...

2) anyone with info answer this - why the high memory capacity options? Is it because ray tracing demands it, or are they brute-forcing their way to counteract next-gen SSD streaming performance?

Love to know.

Lol, no. But kudos to Sony for installing the narrative.

Anyway, my opinion is that it’s because AMD are likely to have a decent card this time, and are likely to increase their memory configs to 12-16GB at the very least, and Nvidia will want to make sure they remain in the eyes of gamers as THE premium solution.

Also, Nvidia have been stuck around the 11/12 GB mark for the past few years (consumer cards). That’s not gonna fly with anyone...especially with a 1k to 1.5k price.
 
Last edited:

Nydus

Member
1440p ( 1070 gpu) and don't really want a 4k monitor at the moment
It will start to limit a bit, but maybe not to the degree that you will notice it. The 3070 should be in the 2080Ti ballpark. Just Google if the 7700k is limiting the 2080ti @ 1440p and you should have a picture.
 

BluRayHiDef

Banned
It will start to limit a bit, but maybe not to the degree that you will notice it. The 3070 should be in the 2080Ti ballpark. Just Google if the 7700k is limiting the 2080ti @ 1440p and you should have a picture.
What about an i7-5820k paired with a 3080Ti or 3090 at 4K resolution?
 

Nydus

Member
What about an i7-5820k paired with a 3080Ti or 3090 at 4K resolution?
Until now 4K is very much pure GPU. If 4K60 is enough for you and you don't have the extra cash, the CPU could be enough. But for okay-ish 4K60 a 2080 Ti equivalent would be enough. If you want 4K120 and want to get the most out of those cards, get an i5-10600K with a good enough board. The 5820K has 6 cores, but if it's not heavily overclocked those 3.6GHz could start to limit the new cards sooner than expected.
 

skneogaf

Member
I have two computers, both with a 9700K, but one with an RTX 2080 Ti and the other a GTX 1080 Ti, so I may sell my GTX 1080 Ti. How much do they go for nowadays?

I'll hopefully pick up the RTX 3080 Ti, if it's called that, although I do believe they will drop the Ti naming now and stick to Super, or even 3090 or something.
 

DeaDPo0L84

Member
I have two computers, both with a 9700K, but one with an RTX 2080 Ti and the other a GTX 1080 Ti, so I may sell my GTX 1080 Ti. How much do they go for nowadays?

I'll hopefully pick up the RTX 3080 Ti, if it's called that, although I do believe they will drop the Ti naming now and stick to Super, or even 3090 or something.

Do you suspect the i7-9700k will be just fine with the 3080ti? I have the same CPU so I'm curious.
 

GreatnessRD

Member
I have two computers, both with a 9700K, but one with an RTX 2080 Ti and the other a GTX 1080 Ti, so I may sell my GTX 1080 Ti. How much do they go for nowadays?

I'll hopefully pick up the RTX 3080 Ti, if it's called that, although I do believe they will drop the Ti naming now and stick to Super, or even 3090 or something.
I've seen $400-$500.
 

kiphalfton

Member
I've seen $400-$500.

This.

B-stock (i.e. refurbs) have gone down to $380 after discount on EVGA's website a couple of times over the past month or so, and I would say that's a pretty good deal. So I'm thinking $400 is pretty spot-on, and they might be able to get $500 if lucky. However, I would definitely try to sell it as quickly as possible, as the 3000 series is only going to drive the price down further when it releases. If it's worth $400 to $500 now, it's probably not going to be worth that when Ampere comes out...
 
Last edited: