
Next-Gen PS5 & XSX |OT| Console tEch threaD


SonGoku

Member
They'd have to cut the video capture feature (1GB+) and the greedy OS (2GB+, compared to the tens of MB on PS3)... otherwise they'd have nearly no memory at all.

Plus if you halve the number of chips (not the capacity of each chip) your bus is now half width too...
The memory bus wouldn't have been affected; they would have used less dense chips, not fewer chips.
 

xool

Member
Common misconception. The 360's design philosophy is nothing like the XO's.
The 360's main memory pool was GDDR3, which was the fastest graphics memory used in discrete GPUs back then.

The 360 would be equivalent to the PS4's memory setup + eDRAM.

I'd forgotten this - in my head the 360 had DDR3 main RAM - it sort of looks like MS actually just cheaped out this gen, not just made a bad design decision.
 
Last edited:

TeamGhobad

Banned
Maybe 4GB GDDR5 for games and 4GB DDR3 for the OS - yeah, in hindsight that would have been a good fallback plan in case higher-density GDDR5 chips weren't ready. However, I seriously doubt MS was unaware of this; there must have been other unknown factors that ultimately led to ESRAM. Maybe cost?

I honestly think it was a combination of pure incompetence on MS's part - the SoC, always online, Kinect, TV TV TV, $499 - and mild sabotage from AMD.
 

xool

Member
Didn't someone recently post the video where an actual real-world XONE hardware designer said they lowballed the console's power? (I'm sure it's in this thread - I don't want to go looking for it.)
 

SonGoku

Member
I'd forgotten this - in my head the 360 had DDR3 main RAM - it sort of looks like MS actually just cheaped out this gen, not just made a bad design decision.
Although their BOM was lower than the 360's, it was higher than the PS4's.
mild sabotage from AMD.
I don't think that was the case.
Do you really believe MS engineers didn't consider a split pool? They must have had their reasons.
 
Last edited:

TeamGhobad

Banned
Although their BOM was lower than the 360's, it was higher than the PS4's.

I don't think that was the case.
Do you really believe MS engineers didn't consider a split pool? They must have had their reasons.

If they were incompetent with all the other aspects of the XO, why not also in engineering?
 

SonGoku

Member
Even discounting Kinect, their BOM is slightly higher than the PS4's:
[IHS preliminary Xbox One cost estimate by subsystem, USD]
If they were incompetent with all the other aspects of the XO, why not also in engineering?
Suits ≠ engineers.
 

JohnnyFootball

GerAlt-Right. Ciriously.
No need, AMD is already weaker than Intel and Nvidia. Regardless though, there are a lot of rumors of Xbox having Navi also.

PS5 having Navi exclusivity is a mega bomb. It lines up with the next Xbox being weaker. MS can't be happy with AMD right now; once again they are getting shafted by AMD. MS should retaliate and gimp Windows on their HW.
Riiiiiiiiiiiiiiiiiiiight. Makes total sense to gimp the fastest growing CPU maker. Yeah, they might hurt AMD if they did that, but the damage to the Windows brand would be far greater and MS would gain absolutely nothing from such a move.

Also, what evidence do you have that AMD is shafting MS? Let's say it's true and that Navi is exclusively PS tech and Vega 2 is going into the next Xbox. Let's not act like Vega 2 is not a very fast piece of hardware; it went neck and neck with Nvidia's 2080 in many benchmarks. Power is a concern, but there is little doubt that it is being tweaked for consoles. The next Xbox will be an amazing console if Ryzen 7nm is paired with Vega 2.

Also, recall that MS and AMD both appeared on stage together at CES 2019 talking about partnerships and such.

Please stop the bullshit, there is absolutely no evidence whatsoever that AMD is not working with MS in good faith.
 
Last edited:

Aceofspades

Banned
What did Sony actually do to achieve those fast loading times? Also, would MS be able to implement something similar before launch?

Those advancements in loading times seem way better than TF numbers and are a legit bragging point for Sony.
 
What evidence do you have that AMD is shafting MS? Let's say it's true and that Navi is exclusively PS tech and Vega 2 is going into the next Xbox. Let's not act like Vega 2 is not a very fast piece of hardware; it went neck and neck with Nvidia's 2080 in many benchmarks. Power is a concern, but there is little doubt that it is being tweaked for consoles. The next Xbox will be an amazing console if Ryzen 7nm is paired with Vega 2.

Also, recall that MS and AMD both appeared on stage together at CES 2019 talking about partnerships and such.

Please stop the bullshit, there is absolutely no evidence whatsoever that AMD is not working with MS in good faith.

Riiiiiiiiiiiiiiiiiiiight. Makes total sense to gimp the fastest growing CPU maker. Yeah, they might hurt AMD if they did that, but the damage to the Windows brand would be far greater and MS would gain absolutely nothing from such a move.

This is up there with some of the stupidest comments I have seen on GAF.
I would have to agree with you. Radeon VII has proven to be a fairly competent card, but it's also a $700 one. When the RX 580 came out it was ~$250, which is why it was reasonable to put it into the X1X, modified of course. I know it's not really apples to apples, but I don't think it's logical to think AMD or MS would throw the Radeon VII into a console, knowing that it will cost a ton. I really think both consoles are getting Navi 10.
 

JohnnyFootball

GerAlt-Right. Ciriously.
I would have to agree with you. Radeon VII has proven to be a fairly competent card, but it's also a $700 one. When the RX 580 came out it was ~$250, which is why it was reasonable to put it into the X1X, modified of course. I know it's not really apples to apples, but I don't think it's logical to think AMD or MS would throw the Radeon VII into a console, knowing that it will cost a ton. I really think both consoles are getting Navi 10.
I mean, we literally know nothing about MS's specs; all the info we have is on the Sony side. If MS went with Vega 2, it would be an excellent choice, but there is no question a lot of changes would have to be made; they would probably have to replace HBM with GDDR6X and make many other power modifications.

Also, here is something to keep in mind: we have no idea what kind of performer Navi is going to be in the PC market. The chances of Navi underperforming are pretty decent. So it's also possible that Vega 2 could indeed be a better GPU. Just possible.
 

xool

Member
So it's also possible that Vega 2 could indeed be a better GPU. Just possible.

"Vega 2" would almost certainly be just fine - I mean a near 64 CU Vega part on 7nm - to put it in the simplest possible terms "two Radeon RX 570s strapped together" that would get to 10+TF at ~150W , sub 230 mm2 die size

(VegaVII Radeon VII aka Vega20 is a power hungry blip probably because of the additional fp64 support with it being a nerfed Radeon Instinct datacenter GPU)

I wouldn't even count on "Navi" being a substantial upgrade on Vega, except those got from moving to 7nm ..
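As a rough sanity check on that 10+ TF figure: peak FP32 throughput for a GCN/Vega-style part is just CUs × 64 stream processors × 2 ops per clock × clock speed. A minimal sketch, assuming a hypothetical ~1.3 GHz clock (an example number, not a leaked or confirmed spec):

# Peak FP32 throughput estimate for a GCN/Vega-style GPU.
# CUs * 64 stream processors per CU * 2 ops (FMA) per clock * clock in GHz = GFLOPS.
def peak_tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000.0

print(peak_tflops(64, 1.3))  # ~10.6 TF, in line with the "10+ TF" ballpark above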
 
Last edited:
I mean, we literally know nothing about MS's specs; all the info we have is on the Sony side. If MS went with Vega 2, it would be an excellent choice, but there is no question a lot of changes would have to be made; they would probably have to replace HBM with GDDR6X and make many other power modifications.

Also, here is something to keep in mind: we have no idea what kind of performer Navi is going to be in the PC market. The chances of Navi underperforming are pretty decent. So it's also possible that Vega 2 could indeed be a better GPU. Just possible.
I don't know much about the differences between GDDR6 and HBM2, but what I'm reading is that HBM2 requires less power and can have a 1024-bit bus, but it's 2.6Gbps per pin, while GDDR6 is 16Gbps with up to a 384-bit bus. I'm not sure how that equates, but I get the feeling that even though GDDR6 is faster, it has fewer lanes to travel, whereas HBM2 has more. Does anyone know which would technically be better at 16GB or 24GB, or could point me to the formula for the final GB/s number? I apologize if this question has already been asked.
 

SonGoku

Member
I don't know much about the differences between GDDR6 and HBM2, but what I'm reading is that HBM2 requires less power and can have a 1024-bit bus, but it's 2.6Gbps per pin, while GDDR6 is 16Gbps with up to a 384-bit bus. I'm not sure how that equates, but I get the feeling that even though GDDR6 is faster, it has fewer lanes to travel, whereas HBM2 has more. Does anyone know which would technically be better at 16GB or 24GB, or could point me to the formula for the final GB/s number? I apologize if this question has already been asked.
24GB GDDR6 by far
From the OP:
 

xool

Member
I don't know much about the differences between GDDR6 and HBM2, but what I'm reading is that HBM2 requires less power and can have a 1024-bit bus, but it's 2.6Gbps per pin, while GDDR6 is 16Gbps with up to a 384-bit bus. I'm not sure how that equates, but I get the feeling that even though GDDR6 is faster, it has fewer lanes to travel, whereas HBM2 has more. Does anyone know which would technically be better at 16GB or 24GB, or could point me to the formula for the final GB/s number? I apologize if this question has already been asked.

I did a bandwidth calculation table here for 8GB https://www.neogaf.com/threads/rumor-ps5-devkits-13-tflops.1479115/page-16#post-254220586

Depending on how you choose chips, either HBM or GDDR6 can be better... there's an assumption that HBM is also much more expensive, but I think that's out-of-date info from when HBM was brand new - there is at least an additional ~$10-15 cost for HBM for the special interposer it needs.

I agree with SonGoku though - right now it looks like GDDR6 gets more bang per buck.
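To answer the formula question: peak bandwidth is just (bus width in bits / 8) × per-pin data rate in Gbps, which gives GB/s. A minimal sketch using assumed example configurations (the chip speeds and bus widths here are illustrative, not confirmed console specs):

# Peak memory bandwidth = (bus width in bits / 8) * per-pin data rate in Gbps -> GB/s.
def bandwidth_gbs(bus_width_bits: int, gbps_per_pin: float) -> float:
    return bus_width_bits / 8 * gbps_per_pin

# Assumed example configs, purely illustrative:
print(bandwidth_gbs(384, 14.0))   # GDDR6, 384-bit bus @ 14 Gbps       -> 672 GB/s
print(bandwidth_gbs(2048, 2.4))   # HBM2, two 1024-bit stacks @ 2.4 Gbps -> ~614 GB/s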
 

SonGoku

Member
I agree with @SonGoku though - right now it looks like GDDR6 gets more bang per buck.
To add to this, if 24GB HBM3 ever becomes cheaper to use on a console (compared to GDDR6), Sony can switch to HBM3 for a slim revision for cost reductions.
 
Last edited:

Fake

Member
To add to this, if 24GB HBM3 ever becomes cheaper to use on a console (compared to GDDR6), Sony can switch to HBM3 for a slim revision for cost reductions.
Is this even possible? I mean, no side effects from changing the video memory within the same gen?
 

SonGoku

Member
Is this even possible? I mean, no side effects from changing the video memory within the same gen?
Yes, as long as the bandwidth is in excess of the previous setup.
Just look at the X: MS replaced a far more complex memory setup (ESRAM + DDR3 + move engines) with GDDR5.
 
Last edited:

xool

Member
Is a change like that recommended for a console?
Is this even possible? I mean, no side effects from changing the video memory within the same gen?

I think it would mean a redesigned FSB (front-side bus) rather than just slightly different packaging... in theory there's no reason to believe it would break anything or cause any compatibility issues.

In the real world I would expect some games to start behaving oddly, for complicated reasons.
 
Last edited:

Fake

Member
Yes, as long as the bandwidth is in excess of the previous setup.
Just look at the X: MS replaced a far more complex memory setup (ESRAM + DDR3 + move engines) with GDDR5.
So it's a nice bet. I can see Sony or MS changing the memory in possible slim versions, unless by that time GDDR6 has become even cheaper, I guess.
 

SonGoku

Member
I think it would mean a redesigned FSB (front-side bus) rather than just slightly different packaging... there's no reason to believe it would break anything or cause any compatibility issues.

In the real world I would expect some games to start behaving oddly, for complicated reasons.
Why would they? The X's memory setup was a far bigger change and didn't bring any issues.
 

ethomaz

Banned


seems like Navi has some magic to it.
Actually that is not related to performance... maybe more efficient for compute processing.

It makes the CUs smaller though.

BTW he is saying 8 Shader Engines with 5 CUs each... that means 40 CUs total (Polaris's number of units).
 
Last edited:

TeamGhobad

Banned
Actually that is not related to performance... maybe more efficient for compute processing.

It makes the CUs smaller though.

BTW he is saying 8 Shader Engines with 5 CUs each... that means 40 CUs total (Polaris's number of units).
Means you can pack more CUs on the SoC. PS5 beast confirmed!!! jk
 

ethomaz

Banned
Wasn't 4 SEs the most we ever saw on GCN? It was theorized as some limit.
Yes, it was the hardware limit for GCN until 5.1.

Means you can pack more CUs on the SoC. PS5 beast confirmed!!! jk
Not exactly...

They changed from SIMD-16 to SIMD-32... that means the limit of 16 CUs per SE is now 8 CUs per SE.
Still the same 64 CUs max.

The biggest change in my view is that 1x SIMD-32 is smaller than 2x SIMD-16.
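To spell out why the ceiling doesn't move under those per-SE limits (taking the figures above as given from the leak being discussed, not from official documentation), a minimal sketch:

# CU ceiling under the per-Shader-Engine limits described above (taken as given).
gcn_max  = 4 * 16   # classic GCN: 4 SEs * 16 CUs per SE = 64 CUs
navi_max = 8 * 8    # Navi per this leak: 8 SEs * 8 CUs per SE = 64 CUs
print(gcn_max, navi_max)   # 64 64 - same ceiling

# Per-CU ALU count is unchanged too: 4 x SIMD-16 = 2 x SIMD-32 = 64 lanes per CU.
print(4 * 16, 2 * 32)      # 64 64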
 
Last edited:

TeamGhobad

Banned
Yes it was the hardware limit for GCN until 5.1.


Not exactly...

They changed from SIMD-16 to SIMD-32... that means the limit of 16CUs per SE is now 8 CUs per SE.
Still the same 64CUs max.

The biggest change in my view is that 1x SIMD-32 is smaller than 2x SIMD-16.

What are the benefits of doing it this way?
 

Fake

Member
Outside of some simple indie-type pixel scrollers, I don't see PS5 games targeting anywhere near 8K.
They don't need to, IMO. Just an option to change the output to 8K would be good. I guess those 8K TVs already have a next-gen upscaling picture processor.
 

ethomaz

Banned
Which GCN card features more than 4 SEs?
Navi... GCN 6.0? Like I said, it was a hardware limit until GCN 5.1 (Vega 7nm).

what are the benefits of doing it this way?
From what I can guess...

+ Smaller CUs
+ More efficient scheduler for compute tasks

Raw power didn't change at all... 64 CUs before has the same peak performance as 64 CUs now.
 
Last edited:

ethomaz

Banned
But this SE change is drastic; I've read such a change would only be possible with a post-GCN arch.
THIS IS HUGE, 14TF back on the table baby!
How? Peak performance is still capped at 64 CUs like before.

Or do you mean it is easier to get high clocks with 56-60 CUs on Navi, so 14 TF is easier than before? The CUs being smaller would help with better clocks.
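For reference, you can back out the clock a given TF target needs from the same CUs × 64 × 2 × clock formula as the earlier sketch. A rough example - the 56 and 60 CU counts are just the range mentioned above, nothing confirmed:

# Clock in GHz needed to hit a TFLOPS target: clock = TF * 1000 / (CUs * 64 * 2).
def clock_for_tflops(target_tf: float, cus: int) -> float:
    return target_tf * 1000 / (cus * 64 * 2)

# Assumed CU counts from the 56-60 range discussed above:
print(clock_for_tflops(14.0, 56))   # ~1.95 GHz
print(clock_for_tflops(14.0, 60))   # ~1.82 GHz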
 
Last edited:

xool

Member
Why would they? The X's memory setup was a far bigger change and didn't bring any issues.

If the game is highly optimised for the original hardware, and if the change to HBM doesn't bring any great improvement in bandwidth, they might end up breaking the original optimisation - leading to frame-rate drops etc.

One possible example I can think of: if the original software is optimised to fetch data in exactly the size of (and aligned to) the original GDDR6 data width, the change to the wider HBM data bus might result in much reduced utilisation of the wider bus - it could even be a disaster for performance... but I need to think about it more.

Working extreme example - the original game uses scattered 32-bit-wide aligned data fetches... when shifting to a 128-bit data bus width (HBM), those 32-bit fetches now utilise only 25% of the available bandwidth.

It's an extreme example... maybe/maybe not [the old adage - if it can break, it will].
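A tiny sketch of that arithmetic, purely to illustrate the worst case described above (the fetch and bus widths are the hypothetical ones from the example, not real console figures):

# Worst-case bus utilisation when every fetch is narrower than the bus,
# assuming one fetch per bus transaction: utilisation = fetch width / bus width.
def worst_case_utilisation(fetch_bits: int, bus_bits: int) -> float:
    return min(fetch_bits, bus_bits) / bus_bits

print(worst_case_utilisation(32, 32))    # 1.00 - narrow bus fully used
print(worst_case_utilisation(32, 128))   # 0.25 - same fetches on a 128-bit bus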
 