
Next-Gen PS5 & XSX |OT| Console tEch threaD


Darius87

Member
Actually it is limited to 4 Shader Engines and you can have a max of 16 CUs per Shader Engine... Raja said in an AnandTech interview they didn't have enough time to make a GCN card with more Shader Engines work... I think that claim shows how hard it is to implement more than 4 Shader Engines in current GCN (that is the limitation we are talking about).

Of course, a newly redesigned architecture can break these limitations.

Rather than adding extra shader engines (each one is 25% of the GPU in a 64CU setup), it would be better to improve the existing components inside the shader engines, like the geometry processors and rasterizers, and add more CUs on top. That avoids having to add a whole 16 CUs just by bolting on one more shader engine, and it's more flexible: adding 2 CUs per shader engine would increase the total by 8, for 72 CUs overall. I think that's a customisation Sony could do with AMD.
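To make that arithmetic concrete, here's a quick sketch (plain Python, purely illustrative; the per-engine numbers are just the GCN limits being discussed):

```python
# GCN layout being discussed: 4 shader engines, up to 16 CUs each.
SHADER_ENGINES = 4
BASE_CUS_PER_ENGINE = 16  # stock ceiling -> 4 * 16 = 64 CUs

def total_cus(extra_cus_per_engine):
    """Total CU count if every shader engine gained a few extra CUs."""
    return SHADER_ENGINES * (BASE_CUS_PER_ENGINE + extra_cus_per_engine)

print(total_cus(0))  # 64 -- today's GCN limit
print(total_cus(2))  # 72 -- the +2 CUs per engine idea above
```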
 

ethomaz

Banned
Rather than adding extra shader engines (each one is 25% of the GPU in a 64CU setup), it would be better to improve the existing components inside the shader engines, like the geometry processors and rasterizers, and add more CUs on top. That avoids having to add a whole 16 CUs just by bolting on one more shader engine, and it's more flexible: adding 2 CUs per shader engine would increase the total by 8, for 72 CUs overall. I think that's a customisation Sony could do with AMD.
GCN is already 8 years old and AMD never did that "easy" config of adding 2 more CUs per shader engine.
It is a hardware limitation, and even Raja said they never had time to break it (each GCN version took a minimum of 2 years of R&D).

So in 8 years they didn't have time to break the 64CU limit... yet somehow for next-gen consoles they will break it in months?

Over 64CUs is impossible... sorry.

Just think about it a bit... do you believe AMD is happy not having a card to trade blows with nVidia? A card with more than 64 CUs? To break that limitation they need a complete redesign of GCN, which is what everybody wants, but AMD couldn't do it yet (or they didn't have time).
 

Darius87

Member
So in 8 years they didn't have time to break the 64CU limit... yet somehow for next-gen consoles they will break it in months?
It's not a question of months, probably 3+ years. What I believe was Sony's goal in the first place was to fix the bottlenecks in the Polaris (PS4) arch.

Over 64CUs is impossible... sorry.
I wouldn't say that, but it's clearly not worth the effort.

Just think about it a bit... do you believe AMD is happy not having a card to trade blows with nVidia? A card with more than 64 CUs? To break that limitation they need a complete redesign of GCN, which is what everybody wants, but AMD couldn't do it yet (or they didn't have time).
No, I don't think their goal is to beat nVidia. The GCN arch wasn't made for gaming in the first place, so competing with nVidia at the high end is chasing a dream that they never..........
 

Von Hugh

Member
11.5 TF - 12 TF is pretty fucking good.

Add a CPU that isn't dead on arrival like Jaguar, faster RAM, a speedy SSD, and games built from the ground up for the new architecture, and the general experience will already feel like a proper generational leap.

I am excite. Base PS4 games already look so good, everything beyond that from now on is just a plus.

This is how you stay positive and don't get let down when you are hoping dearly for 14 TF machines with 32 GB of RAM like the OP is.
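For anyone wondering where numbers like 11.5-12 TF come from, it's just CUs x shaders x clock. A rough sketch (the CU/clock combos below are made-up examples, not leaks):

```python
# Peak FP32 throughput for a GCN/RDNA-style GPU:
# 64 shader ALUs per CU, 2 FLOPs per ALU per clock (fused multiply-add).
def tflops(cus, clock_ghz):
    return cus * 64 * 2 * clock_ghz / 1000.0

print(round(tflops(56, 1.7), 1))  # ~12.2 TF
print(round(tflops(64, 1.4), 1))  # ~11.5 TF
```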
 

ethomaz

Banned
It's not a question of months, probably 3+ years. What I believe was Sony's goal in the first place was to fix the bottlenecks in the Polaris (PS4) arch.


I wouldn't say that, but it's clearly not worth the effort.


No, I don't think their goal is to beat nVidia. The GCN arch wasn't made for gaming in the first place, so competing with nVidia at the high end is chasing a dream that they never..........
Well, everyone can believe anything :messenger_winking:

For me, over 64CUs is impossible. AMD would be super happy to have a card with more than 64CUs to compete with nVidia, and I'd say they worked hard to get there but failed, because that limitation is not as simple as some posters on the other forum are trying to make it.

AMD always competed with nVidia at the high end until GCN couldn't evolve anymore, so they spread this bullshit about not competing at the high end that some fans believe to be true lol
 
Yeah, the costs add up.
People expecting a 12+TF, 2TB SSD, 32GB, 8c/16t 3.2GHz console are looking at a $1300 PC.
Personally, my mind has changed on the whole SSD thing, prices are falling pretty rapidly.

I'm somewhat realistically expecting
  • 1TB QLC NVMe SSD
  • 16GB GDDR6
  • 4GB DDR4 (25% confidence on this one; I really don't think there will be separate RAM for the OS now that the SSD is there)
  • 8C/16T Ryzen 3600 3.6 GHz @ 65W
  • Navi RX3080 50+ CU with customizations for bandwidth @ 1.8 GHz or less

Although my heart wants
  • 1TB SLC NVMe SSD
  • 24GB GDDR6
  • 4GB DDR4
  • 8C/16T Ryzen 3700 3.8 GHz @ 95W
  • Navi RX3080 64CU with customizations for bandwidth @ 1.8 GHz
Some people need to dial it back a bit.
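Since the lists above hinge on GDDR6, the bandwidth math is simple: bus width times per-pin speed divided by 8. A sketch (the bus widths and pin speeds here are assumptions, not specs):

```python
# GDDR6 bandwidth estimate: bus width (bits) * per-pin rate (Gbps) / 8 bits per byte.
def bandwidth_gb_s(bus_bits, gbps_per_pin):
    return bus_bits * gbps_per_pin / 8.0

print(bandwidth_gb_s(256, 14.0))  # 448.0 GB/s -- plausible for a 16GB / 256-bit setup
print(bandwidth_gb_s(384, 14.0))  # 672.0 GB/s -- a wider 24GB / 384-bit setup
```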
 

ethomaz

Banned
A slide showing the hardware limit of GCN:

[Slide: "The AMD GCN Architecture: A Crash Course" by Layla Mah]


- Up to 4 Shader Engines
- 1-16 CUs per Shader Engine
- 1-4 RBEs per Shader Engine (4 ROPs per RBE)

That presentation is from 2014 (GCN 3.0), but it is still true today with GCN 5.1 (Vega 7nm).
You can't go over that with GCN.
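Spelling out the ceiling that slide implies (illustrative only):

```python
# Hard limits from the GCN slide above.
MAX_SHADER_ENGINES = 4
MAX_CUS_PER_ENGINE = 16
MAX_RBES_PER_ENGINE = 4
ROPS_PER_RBE = 4

print(MAX_SHADER_ENGINES * MAX_CUS_PER_ENGINE)                  # 64 CUs max
print(MAX_SHADER_ENGINES * MAX_RBES_PER_ENGINE * ROPS_PER_RBE)  # 64 ROPs max
```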
 

Ellery

Member
Personally, my mind has changed on the whole SSD thing, prices are falling pretty rapidly.

I'm somewhat realistically expecting
  • 1TB QLC NVMe SSD
  • 16GB GDDR6
  • 4GB DDR4 (25% confidence on this one; I really don't think there will be separate RAM for the OS now that the SSD is there)
  • 8C/16T Ryzen 3600 3.6 GHz @ 65W
  • Navi RX3080 50+ CU with customizations for bandwidth @ 1.8 GHz or less

Although my heart wants
  • 1TB SLC NVMe SSD
  • 24GB GDDR6
  • 4GB DDR4
  • 8C/16T Ryzen 3700 3.8 GHz @ 95W
  • Navi RX3080 64CU with customizations for bandwidth @ 1.8 GHz
Some people need to dial it back a bit.

I would say the "somewhat realistic expectations" are reasonable, but I think the GPU will have lower clocks because of the console form factor and the difficulty of cooling it.
I mean, Sony officially confirmed it is going to be a super fast SSD. Maybe they'll find some hybrid solution like a 256GB NVMe SSD plus a 1TB HDD, but I guess it could very well be a 1TB NVMe SSD in late 2020.
With bulk buying and prices going down, I could see that happening, yeah.
 
  • 1TB QLC NVMe SSD
  • 16GB GDDR6
  • 4GB DDR4 (25% confidence on this one; I really don't think there will be separate RAM for the OS now that the SSD is there)
  • 8C/16T Ryzen 3600 3.6 GHz @ 65W
  • Navi RX3080 50+ CU with customizations for bandwidth @ 1.8 GHz or less

More or less in line with what I'm expecting. Minimum of 16GB of application RAM. I'd be disappointed to see 16GB of RAM with 4GB reserved for the OS (at launch, coming down after that). 11-12TF GPU.
 
Okay, someone please ELI5 threads and cores:

Why would you want to disable threads and cores in a CPU? If the thing is capable of N threads, why disable one? Likewise, there are X cores, but 2 of them are disabled. Why is this? Is it something to do with yields? Also, what does that really mean?
 

TeamGhobad

Banned
Okay, someone please ELI5 threads and cores:

Why would you want to disable threads and cores in a CPU? If the thing is capable of N threads, why disable one? Likewise, there are X cores, but 2 of them are disabled. Why is this? Is it something to do with yields? Also, what does that really mean?

Cliff's notes:
Threads are not effective for games due to workload variations. Threads take up die space. Threads are a hype job.
Supposedly MS is working on RT tech and will use the extra threads for that purpose.
 
Supposedly MS is working on RT tech and will use the extra threads for that purpose.

I'm willing to bet they reserve one thread exclusively for the RT stuff, giving the other two back to the developers.

The PS5 will probably work with some sort of hardware solution, which is what its Secret Sauce is. Or maybe every PS5 will come with a bottle of HP.
 

TeamGhobad

Banned
Zen CPUs:

“MS insiders claim PS5 uses Zen 1 and Scarlett Zen 2.”

Cerny confirms 8-core Zen 2:

“MS insiders claim Zen 3, quad-threaded, wallet-constricting Anaconda.”

We knew they were going to be 7nm, and Zen 2 is the only 7nm CPU available. Also, I think MS is willing to take a hit on every Xbox sold.
 

ethomaz

Banned
Okay, someone please ELI5 threads and cores:

Why would you want to disable threads and cores in a CPU? If the thing is capable of N threads, why disable one? Likewise, there are X cores, but 2 of them are disabled. Why is this? Is it something to do with yields? Also, what does that really mean?
You need to understand the concept of SMT first.

Think of a CPU core as being made up of a lot of execution units and registers, but most of the time only a few execution units are doing operations on those registers... so what is SMT about? SMT adds more registers, so that while some execution units are busy with one task, the idle ones can work on these new registers in parallel.

It is a cost-efficient way to make those idle execution units do something... you only need to add new registers to the CPU, and you get a virtual core ready to be used whenever the physical core has idle execution units.

[Diagram: execution pipeline slots without SMT vs. with SMT]


See? The white squares are unused units that, with SMT, get used to do something else (green).

But why disable it? Well, in gaming the CPU is used to such an extent that if you put those idle white units to work on something else, they hold back the performance of the game thread. Even if you only run the game thread on that core, the internal overhead of managing SMT makes the core's performance drop a bit, so a few fps are lost with SMT active.

For games, the best option is to disable SMT and get maximum performance, with the core being used 100% for the game and no parallel task holding it back.
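If you want to see SMT on your own machine, compare logical vs. physical core counts. A small sketch using the third-party psutil package:

```python
import psutil  # third-party: pip install psutil

logical = psutil.cpu_count(logical=True)    # hardware threads the OS schedules on
physical = psutil.cpu_count(logical=False)  # actual physical cores

print(f"{physical} physical cores, {logical} logical threads")
if logical and physical and logical > physical:
    print(f"SMT enabled: {logical // physical} threads per core")
else:
    print("SMT disabled or not supported")
```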
 

TeamGhobad

Banned
So what you’re saying is, you don’t see the reactionary leapfrogging these ‘insiders’ keep pulling?

I do. But I also think MS doesn't want to be in the same position it was in this gen, and they'll want to one-up the PS5. I'm 99% sure the Xbox 2 is gonna be more powerful at the same price.
 

DeepEnigma

Gold Member
I do. But I also think MS doesn't want to be in the same position it was in this gen, and they'll want to one-up the PS5. I'm 99% sure the Xbox 2 is gonna be more powerful at the same price.

You can be more powerful just by tweaking clocks, without breaking the bank.
 

ethomaz

Banned
I do. But I also think MS doesn't want to be in the same position it was in this gen, and they'll want to one-up the PS5. I'm 99% sure the Xbox 2 is gonna be more powerful at the same price.
The big issue here is that MS doesn't know what Sony's chip will be.
Same for Sony.

So both companies target a high spec and hope to end up the stronger one.

That is exactly why Cerny didn't talk about CUs, clocks, etc... because if he did, MS would have a target to beat, and that would probably let them build a stronger machine.

It's the same reason the current devkits are called "slower mode": neither MS nor Sony wants to give the other enough info in time to make changes to their own project and beat the competition.

Insiders are even more blind in this chess game between MS and Sony... Sony's last checkmate, revealing 8GB of GDDR5, broke MS's hopes of having the performance lead last gen, and MS doesn't want to end up in that same situation this generation... Sony is definitely hiding cards like last gen, same for MS, but MS has more to lose in that chess game.
 

TeamGhobad

Banned
You can be more powerful just by tweaking clocks, without breaking the bank.

Tweaking clocks works up to a point; there are limits. Also you need a huge heatsink/vapor chamber, a huge fan, etc., and there's an unwritten law that consoles can't draw more than 350 watts or something like that. MS can't compete on software yet, so they will compete on HW and on multiplats looking better.
 
I do. But I also think MS doesn't want to be in the same position it was in this gen, and they'll want to one-up the PS5. I'm 99% sure the Xbox 2 is gonna be more powerful at the same price.
I do not think so. I feel like the Xbox will be more expensive if it is better spec'd, which is why Lockhart exists. There is a certain limit to the technology that's about to be presented, and I have no reason to believe the PS5 and Anaconda are going to be worlds apart. The only thing Microsoft can do at this point is add hardware to take care of certain tasks like physics, ray tracing, sound, etc. They could add more GDDR6, they could up clock speeds and provide better cooling. But I really don't think they will have extra threads or skip a generation in CPU or GPU, because the process has to exist in order for them to do that. If AMD had this technology now, why would they sell Zen 2 and not jump straight to Zen 3? Same goes for the GPU. It makes no sense.


I would say the "somewhat realistic expectations" are reasonable, but I think the GPU will have lower clocks because of the console form factor and the difficulty of cooling it.
I mean, Sony officially confirmed it is going to be a super fast SSD. Maybe they'll find some hybrid solution like a 256GB NVMe SSD plus a 1TB HDD, but I guess it could very well be a 1TB NVMe SSD in late 2020.
With bulk buying and prices going down, I could see that happening, yeah.

It also all depends on the type of SSD it ends up being: QLC is the slowest and the most cost-efficient, SLC is insanely fast but the most cost-prohibitive.
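The QLC/SLC difference comes down to bits stored per cell: more bits per cell means fewer cells per GB (cheaper) but slower writes and lower endurance. A quick sketch; the 1/bits cost scaling is a deliberately naive assumption:

```python
# Bits per cell for each NAND flash type.
NAND_BITS_PER_CELL = {"SLC": 1, "MLC": 2, "TLC": 3, "QLC": 4}

for kind, bits in NAND_BITS_PER_CELL.items():
    relative_cells = 1.0 / bits  # cells needed per GB, relative to SLC
    print(f"{kind}: {bits} bit(s)/cell, ~{relative_cells:.2f}x the cells of SLC per GB")
```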
 

onQ123

Member
A slide showing the hardware limit of GCN:

[Slide: "The AMD GCN Architecture: A Crash Course" by Layla Mah]


- Up to 4 Shader Engines
- 1-16 CUs per Shader Engine
- 1-4 ROPs per Shader Engine

That presentation is from 2014 (GCN 3.0), but it is still true today with GCN 5.1 (Vega 7nm).
You can't go over that with GCN.



PS4 has 32 ROPs & PS4 Pro has 64 ROPs
 

thelastword

Banned
Zen CPUs:

“MS insiders claim PS5 uses Zen 1 and Scarlett Zen 2.”

Cerny confirms 8-core Zen 2:

“MS insiders claim Zen 3, quad-threaded, wallet-constricting Anaconda.”
It will be a miracle if MS comes on an E3 stage and says, XB2: November 9th 2019, Zen 3, Arcturus GPU...."Yes, we got it before the PC guys".....$699.99........Forget about products being beta tested in the future.....The slogan will be "Tomorrow's technology here today"............Walks off stage....
 

mckmas8808

Mckmaster uses MasterCard to buy Slave drives
Xbox One had 16 ROPs
Xbox One X had 32
PS4 had 32
PS4 Pro had 64

Most likely because the Xbox One was weak as hell.

Interesting..... what are those extra ROPs providing the PS4 Pro, then, considering that it's overall weaker than the X1X?
 

ethomaz

Banned
PS4 has 32 ROPs & PS4 Pro has 64 ROPs
Sorry, my bad... RBEs, not ROPs... Render Back-Ends.

Each RBE has 4 ROPs... 16 RBE = 64 ROPs is the limit.

Edit: a picture of a fully enabled GCN chip.

4 Shader Engines
64 CUs
16 RBEs, 64 ROPs
8 ACEs

[Image: R9 Fury X block diagram]
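As for what the extra ROPs buy (mckmas8808's question above): peak pixel fill rate scales with ROPs times clock. A quick sketch using the commonly cited console clocks:

```python
# Peak pixel fill rate = ROPs * core clock (GHz) -> gigapixels per second.
def fill_rate_gpix(rops, clock_ghz):
    return rops * clock_ghz

print(round(fill_rate_gpix(32, 0.800), 1))  # PS4:     25.6 Gpix/s
print(round(fill_rate_gpix(64, 0.911), 1))  # PS4 Pro: 58.3 Gpix/s
print(round(fill_rate_gpix(32, 1.172), 1))  # X1X:     37.5 Gpix/s
```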
 

ANIMAL1975

Member
I've seen this rumor before, a few weeks back, and I must say, it's the only leak I believe is real.........

I think specs will improve though, 32GB GDDR6 (games) + 4-8GB DDR4 (OS) seems like it will make it into the final kit........They could then fuse the GDDR6 with the DDR4 as an extended RAM allocation if the OS does not use all of the DDR4.....Would be great for devs...
32GB of GDDR6? What?
You're probably the only one who believes that leak. Either we are going to get a bit less RAM, or the PS5 will cost more than $600...
I hope there is a God and that he is listening to you, thelastword! 🙏
 

SonGoku

Member
There is no way the PS5 has more than 64CUs.

To be fair, to get good yields they will probably choose to disable some CUs so they can use partially defective chips... I think at least 4 CUs will be disabled for that.

60 or 56 CUs will be the most likely config.

Give me some time and I will try to do the math to estimate the possible PS5 die size.
I read estimates of close to a 400mm2 APU die: 64CUs (140mm2), 8 Zen cores (75mm2), and all the other jazz taking up 50% of the die
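A toy yield model shows why disabling CUs helps so much. The defect density below is a made-up illustrative number, not a foundry figure:

```python
import math

DEFECTS_PER_MM2 = 0.002  # made-up illustrative defect density

def perfect_die_yield(area_mm2):
    """Poisson model: probability a die of this area has zero defects."""
    return math.exp(-DEFECTS_PER_MM2 * area_mm2)

print(f"{perfect_die_yield(400):.0%}")  # ~45% of ~400mm2 dies come out perfect
# Shipping with a few CUs disabled lets dies with a defect inside a CU
# still be sold, which is why 56-60 of 64 CUs is a common salvage config.
```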
 

SonGoku

Member
Not saying they will, but could a "Crossfire"-like setup be done on-chip to theoretically take advantage of more CUs, similar to running two separate cards in SLI? Or the butterfly method like the Pro?

Let’s say for the sake of argument GCN was stuck for another decade (which it shouldn’t be due to their next gen chip coming after).
Chiplet designs are supposedly going to do just that
For games, the best option is to disable SMT and get maximum performance, with the core being used 100% for the game and no parallel task holding it back.
Wasn't there a grass scene in Crysis 3 that benefited from hyper-threading?
 
Which of these combinations/setups works best for VR?

Which is the most efficient way of computing VR?

When we know that, we will have a good idea of what the specs are.
 

TLZ

Banned


The fat Xbox One was too weak for a machine sold at $500.
Man, that OG Xbone was weak as. I wouldn't have minded if it was sold cheaper, but they added insult to injury and sold it for $100 more because of a forced accessory nobody cared about? So, so stupid. I hope they never ever go back to that stupidity again.
 

SonGoku

Member
Could the PS5 be heavy on the GPU and light on the CPU or heavy on both?
I think the CPU will be pretty beefy; Zen 2 at 3.2GHz is a giant leap.
I hope we get a 12+TF GPU, that way it's balanced.
A mid-gen refresh could drastically improve VR performance if that was the goal; it would be the 4K of next gen.
 