
Next-Gen PS5 & XSX |OT| Console tEch threaD


travisktl

Member
It would be a crime not to release this officially! I mean it is a niche version but 'tis one that would make a buttload of money if they ever did. Instead of all those basic color versions, I really would want this first above all others.

Can you link? Does GAF allow other place links?
It's not too hard to find. Go to the gaming section, and there's a Silent Hill thread on the first page.
 

MistBreeze

Member
I have a question about the Series X having significantly more CUs (52) than the PS5 (36).

In multiplatform games, will developers write code for that many CUs, or will they only write code for the system with fewer CUs and apply it to both?

Can anyone clarify?

And is a huge number of CUs beneficial, or is it harder to harness their power?
 
Last edited:

Bo_Hazem

Banned
Huh, the Elite S2 is £150 in the UK.

It's £319 in that awful meme because it's so popular and has been sold out by Amazon, so their third-party sellers are price gouging.



Don't know what's going on with the One S/One X pricing though :messenger_tears_of_joy: I think retailers here in the UK are just trying to ship as many as possible. You can get the One X for £250, which is a bargain considering the PS4 Pro is holding its value at £350.

Nope, those are in USD:

"renewed"

Most of them are discontinued:

This was for $320 with a game as well:

 

MarkMe2525

Member
I don't know who she is either, but she posts that she is/was a professional engine analyser and optimiser, and lots of people on Era ask her questions, whether Dr Keo or NX Gamer... for what it's worth. She did 3 or 4 posts explaining it in detail. It's a good read. Let's face it, if it were bullshit it would not last long over there lol.

Anyway, we will see, as it's a common SX question, just like the PS5 clocks. The SX still has a bandwidth benefit, but not as much as it looks.

Also, it's a shared bus, so when the slow memory is accessed, the thought is that the fast memory is doing nothing, and if you're accessing at 336GB/s, that's a LONGER time during which the fast memory is not being accessed. Unless dual access to both pools at the same time is possible... most think not.
There was a deep dive on this very subject with an Xbox representative in a video that eludes me. They pointed out that the full pool of RAM can be read simultaneously. The developer would place the appropriate assets into the 10GB fast or 6GB slower pool as required. Now I have no idea how this all works, I just remember this being talked about.
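To picture the shared-bus concern in the quoted post, here's a toy calculation (purely illustrative; the real XSX arbitration scheme hasn't been detailed publicly). The point is just that a byte served from the 6-chip 336GB/s region ties up the shared bus longer than a byte served from the 10-chip 560GB/s region:

Code:
# Toy model only: how long 1GB of traffic occupies the shared GDDR6 bus,
# depending on which region it is striped across (numbers from this thread).
def bus_time_ms(gigabytes, region_gb_per_s):
    return gigabytes / region_gb_per_s * 1000.0

print(bus_time_ms(1, 560))  # ~1.79 ms per GB from the 10-chip "GPU optimal" region
print(bus_time_ms(1, 336))  # ~2.98 ms per GB from the 6-chip "standard" region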
 
raysoft76 said:
I'm not gonna lie. Usually you want more compute rather than speed. It's a balancing act. But now we have enough ROPs etc. for pushing pixels to a 4K screen. The next jump is 8K, and that needs much more pixel power to achieve. I think Cerny just made a more balanced box, with transistors spent where it counts rather than on heavy pixel pushing. What is the XSX going to do with its superior pixel-pushing power? It's not big enough to make the next jump anyway. Why not then design a console with less pixel performance and boost it where things really need upgrading, like storage bandwidth? The XSX has more shaders too of course, which will always come in handy, but you also need to saturate them if they are to earn their place in the silicon.
Sony's ace in the hole is that Cerny is a game developer as well. He has a really good grasp of both software and what is required of the hardware to make it tick.

This logic is interesting. I've been wanting to ask why people use the term "balanced" with regard to the PS5 vs the XSX. I cannot fathom what about the machine implies more balance than its competitor.

On the same note, the Xbox division comprises many developers, software API engineers and hardware engineers, just like Sony WWS. Why would a single individual such as Cerny have more or better insight into how and what a console needs than a company with all of the same resources?

Do you think that Phil Spencer didn't involve his developers in the development of the systems he oversaw (Xbox One X and XSX)?

He works very closely with the heads of many studios and spent years traveling the world asking them what they needed and wanted in a console, much like Cerny. And they had a direct hand in the system design choices.

These are always very curious statements of certainty. I don't understand their origin or why they persist. Any insight you can provide would be helpful to my understanding.

Thanks.
 

MarkMe2525

Member
The problem is it'll use Windows; that will make it even slower and it will face more throttling.

EDIT: Forgot to add that the XSX is not programmed like the PS5, so it will NOT work like the PS5. That means it'll still need duplicates; if not, Phil would've jumped on it and pointed that out for the XSX as well, in a #MeToo fashion like with the misleading Project Acoustics vs The Tempest 3D Audio engine comparison.
are you trolling?
 

ethomaz

Banned
Dumb question...

Do we have full confirmation that Sony's SSD is 825GB? I remember hearing that number around Cerny's talk, but at the time I just assumed it's a 1TB with a huge OS.

I'm making myself a dumb comparison chart for the two consoles so I can pick a day one box, and that one spec just seems odd...
It is 12 channels of 64GiB each.
12 × 64GiB = 768GiB ≈ 825GB.
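For anyone checking that, it's just gibibytes vs decimal gigabytes. A quick sanity check (the 12 × 64GiB split is the assumption stated above; Sony has formally quoted the 825GB total and the 12-channel interface):

Code:
GIB = 2**30          # gibibyte, 1,073,741,824 bytes
GB = 10**9           # decimal gigabyte, as used on spec sheets

capacity_bytes = 12 * 64 * GIB    # assumed layout: 12 channels x 64GiB
print(capacity_bytes / GB)        # -> 824.63..., i.e. the advertised "825GB"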
 
Last edited:

Bo_Hazem

Banned
are you trolling?

Nope, from what I've seen with the loading screen on the XSX it's using traditional PCIe 3.0. I'm using PCIe 3.0 on my PC and it's much faster as well, at 3.5GB/s read and 2.7GB/s write. Those speeds are unsustainable overall and can be throttled, and that's more than likely to happen with the XSX. The PS5 is confirmed to have AT LEAST 5GB/s (RAW).

The other part is true as well: the XSX will always have anchors holding back its hardware's full potential, whether the Xbox One's 1.3TF and Jaguar cores for 2 years, PC later on, or even Lockhart's 4TF.

Is it clear now?

EDIT: The XSX will use duplicates, as it can't directly stream data and has the regular bottlenecks found on PCs.
 
Last edited:

MarkMe2525

Member
Nope, from what I've seen with the loading screen on XSX it's using the traditional PCIe 3.0. I'm using PCIe 3.0 on my PC and it's much faster as well at 3.5GB/s read 2.7GB/s write. Those speeds are unsustainable overall and can be throttled and it's more than likely to happen with XSX. PS5 is confirmed to have AT LEAST 5GB/s (RAW).

The other part is true as well, XSX will always have anchors to hold back its hardware full potential, Xbox One 1.3TF and Jaguar cores for 2 years, PC for later or even Lockhart's 4TF.

Is it clear now?

EDIT: XSX will use duplicates, as it can't directly stream data and has the regular bottlenecks found on PC's.

You surmised that the xsx is using pcie 3.0 from looking at a loading screen? Are you sure you aren't trolling?
 

Kusarigama

Member
This logic is interesting. I've been wanting to ask why people use the term balanced wrt Ps5 vs XSX.. i cannot fathom what about the machine implies more balance than its competitor device.

On the same note, Xbox division is comprised of many developers, software API engineers and HW engineers just like Sony WWS. Why would a single individual such as Cerny have more or better insight than into how and what a console needs than a company with all of the same resources as well?

Do you think that Phil spencer didn't involve his developers in the development of the systems which he controlled (xbone x and xsx)?

He works very closely with heads of many studios and spent years traveling the world and asking them what they needed and wanted in a console much like Cerny. And they had a direct hand in the system designed choices.

These are always very curious statements of certainty. I don't understand their origin or why they persist. Any insight you can provide would be helpful to my understanding.

Thanks.
The unique advantage Mark Cerny brings is that he has first-hand knowledge of the challenges actual game development faces and what is needed to address those issues.
And since he is the lead architect, he can help the games that he works on use the system to its fullest capabilities.
 
Last edited:

Bo_Hazem

Banned

You surmised that the xsx is using pcie 3.0 from looking at a loading screen? Are you sure you aren't trolling?

It's 2.4GB/s read; how about you read up on PCIe 3.0 vs PCIe 4.0? The PS5 doesn't seem to be using either in the meantime, it's totally custom. That's why it's superior and still unmatched, and an expansion drive would need to be PCIe 4.0 rated at 7GB/s RAW to work properly.
 
Last edited:

3liteDragon

Member
zlJtoXx.jpg

Game announcements, a price announcement for the Series X, OR... a Lockhart reveal??

That's Miranda from IGN saying this straight up, so we should be expecting something Xbox-related on Monday.
 
Last edited:

Bo_Hazem

Banned
This logic is interesting. I've been wanting to ask why people use the term balanced wrt Ps5 vs XSX.. i cannot fathom what about the machine implies more balance than its competitor device.

On the same note, Xbox division is comprised of many developers, software API engineers and HW engineers just like Sony WWS. Why would a single individual such as Cerny have more or better insight than into how and what a console needs than a company with all of the same resources as well?

Do you think that Phil spencer didn't involve his developers in the development of the systems which he controlled (xbone x and xsx)?

He works very closely with heads of many studios and spent years traveling the world and asking them what they needed and wanted in a console much like Cerny. And they had a direct hand in the system designed choices.

These are always very curious statements of certainty. I don't understand their origin or why they persist. Any insight you can provide would be helpful to my understanding.

Thanks.

Because the XSX is using an "off the shelf" strategy, like buying from a grocery shop. The PS5 is using heavily customized solutions to erase the bottlenecks that gaming PCs suffer from in the first place. The I/O unit that connects and communicates between the SSD/GPU/CPU is so powerful that it's more powerful than the XSX's CPU, and the PS5's as well (equivalent to 9 Zen 2 cores).

Independent audio processing with unprecedented detail offloads that work from the GPU/CPU and frees them for more visuals.

An ultra-fast SSD that makes games much smaller in storage size, or lets the same size hold at least 5-10x more unique assets, loading them blazingly fast so you don't need to bother the CPU/GPU as much, with dedicated cache scrubbers that free the GPU from unwanted data for improved performance.
 
Last edited:

Bo_Hazem

Banned
We don't know this, Cerny said it has to be higher than the internal to make up for the overhead of the IO unit handling the extra priority levels
By how much we don't know, could be 6GB/s 6.5GB/s 7GB/s etc.

It's been pointed out to match the 5.5GB/s of PS5's speed:

BLA7jBoXYtjsKZCEuaUJMo.jpg
 
Last edited:

Bo_Hazem

Banned
rewatch that section, Cerny was comparing the peak of pcie 3 & 4

Wow, that just keeps giving more details the more we watch it:




It'll use the I/O directly, no worries there. But the problem is that the NVMe M.2 SSD has 2 true priority levels vs 6 in the PS5's SSD. That's 200% more in the PS5 (add it to the SSD speed gap against the XSX as well). It seems the drives are more likely redesigned for the PS5, as 7GB/s might not make up for that difference.
 
Last edited:

MarkMe2525

Member
Wow, that just keeps giving more details the more we watch it:




It'll use the I/O directly, no worries there. But the problem is that the NVMe m.2 SSD have 2 true priority levels vs 6 in PS5's SSD. That's 200% more in the PS5 (add it to the SSD gap against XSX as well). It seems they're more likely redesigned for PS5 as 7GB/s might not make up for that difference.

I just don't see the need to be so flippant towards Microsoft's solutions when they themselves have stated that they have a custom paging solution for different priority levels as well. What Sony did was nice, but it does not mean that Xbox doesn't know how to make a gaming machine. You seem to spend a lot of time looking into this stuff, which leaves me confused about why you keep saying these things. The only answer is that you have to be trolling, or you have a major case of the Dunning-Kruger effect with a little confirmation bias sprinkled in.

Edit: I seem to have quoted the right guy but the wrong post.
 
Last edited:

Bo_Hazem

Banned
I just don't see the need to be so flippant towards Microsofts solutions when they themselves have stated that they have a custom paging solution for different priority levels as well. What Sony did was nice but it does not mean that Xbox doesn't know how to make a gaming machine. Earlier you made a statement alluding to how only ps5 can read directly off of the ssd for "virtual ram" when this is just being obtuse. The developers get 100gb of access to relevant data from the ssd on the XSX. You seem to spend a lot of time looking into this stuff which leaves me confused on why you keep saying these things. The only answer is you have to be trolling or have a major case of the Dunning-Kruger effect with a little confirmation bias sprinkled in.

Edit: I seem to have quoted the right guy but the wrong post.

My friend, why not stop the armchair psychology and personal mockery and keep it informative? Can you explain how "100GB-ready" that is when you can only transfer at 2.4GB/s (4.8GB/s compressed)? I think you're smarter than me, as you've been showing here that I'm suffering from psychological or behavioral issues; in that case you should be smarter than that misleading information that adds nothing new. It's like someone saying "I can eat a burger in one bite" and the other responding "I can eat a whole cow in 2 months." It's not the same.

EDIT: The PS5 is capped at 22GB/s compressed, and BGs has seen 20GB/s in action. The XSX is capped at 6GB/s.
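For what it's worth, here's the burger-vs-cow point in numbers, using only the rates quoted in this thread (the compressed figures are best-case/theoretical, so treat the results as lower bounds on real streaming times):

Code:
# Time to move 100GB at the rates quoted in this thread (best-case figures).
rates_gb_per_s = {
    "XSX raw": 2.4,
    "XSX compressed (typical quoted)": 4.8,
    "PS5 raw": 5.5,
    "PS5 compressed (theoretical peak quoted)": 22.0,
}
payload_gb = 100  # the "100GB instantly accessible" figure
for label, rate in rates_gb_per_s.items():
    print(f"{label}: {payload_gb / rate:.1f} s to stream {payload_gb}GB")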
 
Last edited:

SonGoku

Member
geordiemp, psorcerer, Rolling_Start, thicc_girls_are_teh_best: I figured out Lady Gaia's math, and it checks out
vNpAuwJ.png
Let's start with the PS5:
The CPU's 48GB/s is an average: it can actually access 48GB in 0.1071428571 seconds using the full bus capacity. 0.1071428571 seconds of GPU access stalls equals 48GB/s (448*0.1071428571), so the net loss is 48GB/s, giving the GPU an average of 400GB/s (38.9GB/s per TF).

XSX
  1. The CPU can access 48GB in 0.1428571429 seconds if the data is in the SLOW pool; 0.1428571429 seconds of GPU access stalls equals 80GB/s (560*0.1428571429), leaving the GPU with an average bandwidth of 480GB/s (39.6GB/s per TF).
  2. The CPU can access 48GB in 0.08571428571 seconds if the data is in the FAST pool; 0.08571428571 seconds of GPU access stalls equals 48GB/s (560*0.08571428571), leaving the GPU with an average bandwidth of 512GB/s (42.3GB/s per TF).
In scenario 1, CPU data (system memory) is allocated in the slow pool, for a total of 13.5GB of RAM available to games.
In scenario 2, CPU data (system memory) is allocated in the fast pool, limiting the total memory available to games to only 10GB.
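If anyone wants to poke at the model, here it is as a few lines of Python (same assumption as the post: while the CPU holds the bus, the GPU loses access at that console's full bus rate; the per-TF values shift slightly depending on which TF figure you divide by):

Code:
# Reproduces the averages above under the stated stall model.
def gpu_average(bus_gb_s, cpu_need_gb_s, cpu_pool_gb_s, gpu_tflops):
    stall = cpu_need_gb_s / cpu_pool_gb_s   # fraction of each second the CPU holds the bus
    gpu_avg = bus_gb_s - bus_gb_s * stall   # GPU loses bandwidth at the full bus rate
    return gpu_avg, gpu_avg / gpu_tflops

print(gpu_average(448, 48, 448, 10.28))  # PS5             -> (400.0, ~38.9 GB/s per TF)
print(gpu_average(560, 48, 336, 12.15))  # XSX, slow pool  -> (480.0, ~39.5 GB/s per TF)
print(gpu_average(560, 48, 560, 12.15))  # XSX, fast pool  -> (512.0, ~42.1 GB/s per TF)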
 
Last edited:

Three Jackdaws

Unconfirmed Member
raysoft76 said:
I'm not gonna lie. Usually you want more compute rather than speed. It's a balancing act. But now we have enough ROPs etc. for pushing pixels to a 4K screen. The next jump is 8K, and that needs much more pixel power to achieve. I think Cerny just made a more balanced box, with transistors spent where it counts rather than on heavy pixel pushing. What is the XSX going to do with its superior pixel-pushing power? It's not big enough to make the next jump anyway. Why not then design a console with less pixel performance and boost it where things really need upgrading, like storage bandwidth? The XSX has more shaders too of course, which will always come in handy, but you also need to saturate them if they are to earn their place in the silicon.
Sony's ace in the hole is that Cerny is a game developer as well. He has a really good grasp of both software and what is required of the hardware to make it tick.

This logic is interesting. I've been wanting to ask why people use the term balanced wrt Ps5 vs XSX.. i cannot fathom what about the machine implies more balance than its competitor device.

On the same note, Xbox division is comprised of many developers, software API engineers and HW engineers just like Sony WWS. Why would a single individual such as Cerny have more or better insight than into how and what a console needs than a company with all of the same resources as well?

Do you think that Phil spencer didn't involve his developers in the development of the systems which he controlled (xbone x and xsx)?

He works very closely with heads of many studios and spent years traveling the world and asking them what they needed and wanted in a console much like Cerny. And they had a direct hand in the system designed choices.

These are always very curious statements of certainty. I don't understand their origin or why they persist. Any insight you can provide would be helpful to my understanding.

Thanks.


It really comes down to the perspective and philosophy of these two companies. The development of the Series X was based on the failures and shortcomings of the Xbox One and everything that came with it, including the games. Microsoft looked at the mistakes they made with the Xbox One and tackled them in the development of the Series X; it explains why they prioritised a highly powerful GPU and large memory bandwidth. Microsoft want this machine to be capable, much more so than the One and One X were when they released.

Sony's perspective and philosophy are very different from Microsoft's. Leading into next-gen they are coming off a massive success, and what was introduced in the PS4 will be massively built upon and advanced in the PS5. They saw and learnt what made the PS4 so special and successful and decided to take those factors and amplify them in the next-gen console. Cerny wants to create new "experiences" for gamers; he wants new types of games which have never been seen before. He is looking much further into the future than Microsoft. This is evident in several of his remarks during the "Road to PS5" talk: the PS5 is born from the needs of game developers. I believe the PS5 is going to usher in a new golden age of gaming.

Remember that it's Microsoft, not Sony, who need to win back their market after the shit show that was the Xbox One. Sony have already gained the trust of developers and fans alike, and this has given them the freedom to take the PS5 in a unique direction; in some sense they have transcended the console war. The reception of the DualSense controller is evidence of this.
 
Last edited by a moderator:

SonGoku

Member
It'll use the I/O directly, no worries there. But the problem is that the NVMe m.2 SSD has 2 true priority levels vs 6 in PS5's SSD. That's 200% more in the PS5 (add it to the SSD gap against XSX as well). It seems they're more likely redesigned for PS5 as 7GB/s might not make up for that difference.
The I/O unit handles the extra 4 priority levels for expandable storage; to make up for that overhead the drive needs to be faster than 5.5GB/s, but by how much we don't know.
Could be 6GB/s, 6.5GB/s, 7GB/s, etc.
 
Last edited:
If you think a 2% change in MAX clock when it sees taxing code for an instant is proportionately the same as 560 max vs 336 max, then that's your call.

I have been spending some time trying to understand the memory architecture of the XSX and its comparison to the PS5. I thought I understood it last week but now there are quite a few things that are quite mysterious to me.

Let me go over what I do understand and work my way from there. Anyone with a better understanding please jump in and clarify where necessary.

Each VRAM lane on both the XSX and PS5 is 56GB/s.

The PS5 has 8 lanes for a total of 8*56 = 448GB/s of shared bandwidth. Each lane has two 16-bit channels, so 8 * (2*16) = 256-bit bus.

The XSX has 10 lanes at 56GB/s each for 560GB/s total. Each of those lanes also has two 16-bit channels: 10*32 = 320-bit bus.

Each consumer of RAM on each device can consume a whole lane's bandwidth (i.e. 56GB/s).

So if the PS5 CPU accesses 4 lanes of memory, the CPU will consume 224GB/s and 8 of the 16GB, with the rest for the GPU. In the PS5 each lane is ostensibly connected to a 2GB RAM chip, which both the CPU and GPU can fully consume or not.

In the XSX, the memory is logically split into 6 lanes with 2GB chips and 4 lanes with 1GB chips.

The GPU can see the 4 lanes with the 1GB chips all the time, as well as the lower 1GB of the 6 lanes with 2GB chips = 10GB.

The CPU has been given priority over the top 1GB of the 6 lanes with 2GB chips = 6GB.

If the CPU chooses to consume all 6 lanes, that's 6 * 56GB/s = 336GB/s, and 6GB of slow RAM and maybe 6GB of fast RAM? This is unknown.

Or would the CPU and GPU share the lane bandwidth, each using 28GB/s and interleaving their access? So that the CPU is accessing 6*28GB/s = 168GB/s while the GPU is running at 168+224 = 392GB/s to get its 10GB.

If not, the GPU is left with 224GB/s to consume over the four remaining 1GB lanes (equal to what would happen in a split PS5 scenario), but only 4GB!

The total bandwidth is still 336+224 = 560GB/s.

Conversely, if the GPU consumed all lanes it would get 10GB of VRAM at 560GB/s. But what happens to the top 1GB of each of the 2GB lanes? Is it dormant and unused?

If the CPU consumed 1 of the 6 lanes, would it consume the full 2GB of RAM, with the GPU getting 9GB of VRAM at 9*56 = 504GB/s and the CPU getting 56GB/s, and so on? Or would it consume only the top 1GB while blocking the GPU from accessing the bottom 1GB?

Consumption is always equal to the total bus width and speed available.

Now do I like that design? Not really, because I don't yet understand what happens with the "leftover" RAM when one or the other device consumes a full VRAM lane.

Hope I didn't confuse things even more.
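As a sanity check on the lane arithmetic above, here's the publicly stated chip layout plugged into a few lines (56GB/s per 32-bit chip at 14Gbps). How the controllers actually schedule the upper gigabyte of the 2GB chips is exactly the open question, so nothing below answers that:

Code:
CHIP_BW = 14 * 32 / 8          # 14Gbps x 32-bit chip / 8 bits = 56 GB/s per chip

xsx_chips = [2] * 6 + [1] * 4  # six 2GB chips + four 1GB chips (publicly stated layout)
ps5_chips = [2] * 8            # eight 2GB chips

def summarize(chips):
    return {"capacity_GB": sum(chips),
            "bus_bits": 32 * len(chips),
            "peak_GBps": CHIP_BW * len(chips)}

print("XSX:", summarize(xsx_chips))  # 16GB, 320-bit, 560GB/s
print("PS5:", summarize(ps5_chips))  # 16GB, 256-bit, 448GB/s

# The GPU-optimal region is the first 1GB of every XSX chip: 10GB striped at 10*56 = 560GB/s.
# The remaining 6GB lives only on the six 2GB chips, so it is striped at 6*56 = 336GB/s.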
 

HawarMiran

Banned
Updates, full game installs, and for those with fast internet, full game installs will be very fast.

I'm not even worried about adding extra storage yet because of gigabit internet and how fast the SSD solutions are.
Exactly. I am the type of guy who just deletes something from the drive if he doesn't play it. I mean, I play single-player games most of the time anyway, and there is no need for me to keep them on my drive once I have played through them.
 

pasterpl

Member
Because XSX is using "off the shelf" strategy like buying from a grocery shop. PS5 is doing heavily costumed solutions to erase the bottlenecks the gaming PC's suffer from in the first place. The I/O that connects and communicates between the SSD/GPU/CPU is so powerful that it's more powerful than XSX's CPU and PS5's as well (9x ZEN 2 cores).

Independent audio processing with unprecedented details, offloads GPU/CPU from that work and frees them for more visuals.

Ultra fast SSD that makes games much more smaller in storage size or use the same size to make 5-10x unique assets at least, and loading them blazingly fast that you don't need to bother both the CPU/GPU as much with dedicated cashe scrubbers that free the GPU from unwanted data for improved performance.

I believe that in the DF teardown they said the XSX uses PCIe 4.0, so please stop spreading FUD.

Re. this comment of yours, it is actually the other way around: Sony allows off-the-shelf SSD drives to be inserted into the PS5 as expansion, while MS developed a proprietary expansion card with Seagate. The SSD in the PS5 is the same as the new SSDs that will be available for everyone later this year (when they hit the market). So your statement is correct when you replace Sony with MS. Please stop spreading FUD.
 

Kusarigama

Member
I believe that in df tear down they have said that xbsex uses pci4.0 so please stop spreading FUD.

re. this comment of yours, it is actually other way around. Sony allows of the shelf ssd hard-rives to be inserted into ps5 as expansion while ms developed proprietary expansion card with Seagate. Ssd in ps5 is same as the new ssd that will be available for everylone later this year (when they hit the market). So your statement is correct when you replace Sony with ms. Please stop spreading FUD.
He is not spreading FUD. If it were confirmed to be PCIe 4.0, he wouldn't keep saying that it is PCIe 3.0. Regardless of whether it's 3.0 or 4.0, the XSX decompression unit has a max limit of 6GB/s, whereas the PS5 decompression unit has been known to do 20GB/s.
 
Last edited:
Nope, from what I've seen with the loading screen on XSX it's using the traditional PCIe 3.0. I'm using PCIe 3.0 on my PC and it's much faster as well at 3.5GB/s read 2.7GB/s write. Those speeds are unsustainable overall and can be throttled and it's more than likely to happen with XSX. PS5 is confirmed to have AT LEAST 5GB/s (RAW).

The other part is true as well, XSX will always have anchors to hold back its hardware full potential, Xbox One 1.3TF and Jaguar cores for 2 years, PC for later or even Lockhart's 4TF.

Is it clear now?

EDIT: XSX will use duplicates, as it can't directly stream data and has the regular bottlenecks found on PC's.

It is using a custom PCIe 4.0 SSD and absolutely CAN stream directly from the SSD. I hope you take the time to read up so that we don't have to constantly correct your misunderstandings.
 
He is not spreading fud. If it is confirmed to be pcie 4.0, He won't be constantly saying that it is pcie 3.0. Regardless of it being 3 or 4, the XSX decompressor unit has max limit of 6GB/s where as the PS5 decompressor unit has been known to do 20GB/s
Not the I/O controller, the HW decompression block. So the system can decompress at up to 6GB/s.
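In other words, the practical ceiling is min(raw SSD rate × compression ratio, decompressor cap). A rough sketch with the caps quoted here (the ratios themselves are made-up examples; real ratios depend on the data):

Code:
# Effective compressed throughput = min(raw rate * compression ratio, decompressor cap).
def effective_rate(raw_gb_s, ratio, decompressor_cap_gb_s):
    return min(raw_gb_s * ratio, decompressor_cap_gb_s)

for ratio in (1.5, 2.0, 3.0):               # illustrative ratios only
    xsx = effective_rate(2.4, ratio, 6.0)   # XSX: 2.4GB/s raw, ~6GB/s cap quoted here
    ps5 = effective_rate(5.5, ratio, 22.0)  # PS5: 5.5GB/s raw, ~22GB/s peak quoted here
    print(f"{ratio}x compression -> XSX {xsx:.1f} GB/s, PS5 {ps5:.1f} GB/s")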
 
Dismissed by whom? You have no clue what you are talking about. The interview was taken down but nothing was dismissed or proven wrong.

"Things just got weird. Ali Salehi, the developer who made these comments on PS5, has apparently withdrawn his statements. The interview itself was apparently also taken offline.

The Twitter user who originally translated Salehi’s comments @ man4dead, deleted and said all the tweets about the interview The Crytek engineer “no longer confirms the content of the interview for personal reasons.”

Is that the same as dismissed? I hope that clears the issue up. Let that entire commentary go. He was utterly wrong anyway. Why hold onto it?
 
Last edited:

rnlval

Member
I have been spending some time trying to understand the memory architecture of the XSX and its comparison to the PS5. I thought I understood it last week but now there are quite a few things that are quite mysterious to me.

Let me go over what I do understand and work my way from there. Anyone with a better understanding please jump in and clarify where necessary.

Each VRAM1 lane on both the XSX or PS5 is 56GB/s.

The PS5 has 8 lanes for a total of 8*56 = 448 GB/s of shared bandwidth. Each lane has two 16 bit channels = 8 * (2*16)= 256 bit bus.

Xsx has 10 lanes with 56 GB/s each for 560 GB/s total. Each of those lanes also have 2 16 bit channels. 10*32=320 bit bus.

Each consumer of RAM on each device can consume a whole channel's bandwidth (aka 56GB/s)

So if the PS5 CPU accesses 4 lanes of memory, the CPU will consume 224GBs and 8/16 GB for the GPU. In the PS5 each channel is ostensibly connected to a 2GB ram chips which both CPU and GPU can fully consume or not.

In XSX, the memory is logically split into 6lanes with 2GB chips and 4 lanes with 1 GB chips..

The GPU can see the 4 lanes with the 1 GB chips all the time as well as the lower 1 GB chips in the 6 lanes with 2GB. = 10 GB.

The CPU has been given priority over the top 1GB of the 6 lanes with 2GB chips =6B.

If the CPU chooses to consume all 6 lanes * 56Gb/s = 336 GB and 6GB of slow ram and maybe 6GB of fast ram? This is unknown.

Or would the CPU and GPU share the lane bandwidth and each use 28Gbs and interleave their access? So that the CPU is accessing 6*28GB/s for 168GB/s while the GPU is running at 168+224=392GB per Second to get 10GB.

If not, The GPU is left with 224GB to consume over the four remaining 1GB lanes. (= to what would happen in a split PS5 scenario.) But only 4 GB!

The total bandwidth is still 336+224 =560GB/s.

Conversely if the GPU consumed all lanes it would get 10GB of VRAM storage at 560GB/s. But what happens to the top 1GB of each of the 2GB lanes? Are they dormant and unused?

If the CPU consumed 1/6 would it consume the full 2 Gb of RAM, and the GPU would get 9GB of VRAM at 9*56=504GB/s and the CPU 56GB/s and so on. Or would it consume only the top 1GB while blocking the GPU from accessing the bottom 1GB chip?

Consumption is always equal to total buswidth and speed available.

Now do I like that design? Not really. Because I don't understand as of yet what happens with the "leftover" RAM of one or the other device consumes a full VRAM lane.

Hope I didn't confuse things even more.

wl3uiOn.png

I revised my XSX block diagram; it is now based on RDNA's block diagram. The APU is a multitasking processor, and each 64-bit memory controller will continue to process memory requests based on the memory address target.

Overall XSX memory bandwidth remains at 560 GB/s!
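As a rough picture of "each memory controller processes requests based on the memory address target": with a striped address map, consecutive blocks fan out across all channels, which is what lets a big linear read approach the full 560GB/s. The stripe size and channel ordering below are invented for illustration; the real XSX address map isn't public:

Code:
# Toy interleaving model (stripe size and ordering are assumptions, not the real map).
STRIPE = 256       # bytes per stripe before moving to the next channel (assumed)
CHANNELS = 10      # ten 32-bit GDDR6 channels (five 64-bit controllers) on the 320-bit bus

def channel_for(address):
    """Which channel a physical address lands on in this toy model."""
    return (address // STRIPE) % CHANNELS

# Consecutive stripes hit channels 0..9 in turn, spreading a linear read across the whole bus.
print([channel_for(a) for a in range(0, 10 * STRIPE, STRIPE)])  # [0, 1, ..., 9]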
 

geordiemp

Member
I have been spending some time trying to understand the memory architecture of the XSX and its comparison to the PS5. I thought I understood it last week but now there are quite a few things that are quite mysterious to me.

Let me go over what I do understand and work my way from there. Anyone with a better understanding please jump in and clarify where necessary.

Each VRAM1 lane on both the XSX or PS5 is 56GB/s.

The PS5 has 8 lanes for a total of 8*56 = 448 GB/s of shared bandwidth. Each lane has two 16 bit channels = 8 * (2*16)= 256 bit bus.

Xsx has 10 lanes with 56 GB/s each for 560 GB/s total. Each of those lanes also have 2 16 bit channels. 10*32=320 bit bus.

Each consumer of RAM on each device can consume a whole channel's bandwidth (aka 56GB/s)

So if the PS5 CPU accesses 4 lanes of memory, the CPU will consume 224GBs and 8/16 GB for the GPU. In the PS5 each channel is ostensibly connected to a 2GB ram chips which both CPU and GPU can fully consume or not.

In XSX, the memory is logically split into 6lanes with 2GB chips and 4 lanes with 1 GB chips..

The GPU can see the 4 lanes with the 1 GB chips all the time as well as the lower 1 GB chips in the 6 lanes with 2GB. = 10 GB.

The CPU has been given priority over the top 1GB of the 6 lanes with 2GB chips =6B.

If the CPU chooses to consume all 6 lanes * 56Gb/s = 336 GB and 6GB of slow ram and maybe 6GB of fast ram? This is unknown.

Or would the CPU and GPU share the lane bandwidth and each use 28Gbs and interleave their access? So that the CPU is accessing 6*28GB/s for 168GB/s while the GPU is running at 168+224=392GB per Second to get 10GB.

If not, The GPU is left with 224GB to consume over the four remaining 1GB lanes. (= to what would happen in a split PS5 scenario.) But only 4 GB!

The total bandwidth is still 336+224 =560GB/s.

Conversely if the GPU consumed all lanes it would get 10GB of VRAM storage at 560GB/s. But what happens to the top 1GB of each of the 2GB lanes? Are they dormant and unused?

If the CPU consumed 1/6 would it consume the full 2 Gb of RAM, and the GPU would get 9GB of VRAM at 9*56=504GB/s and the CPU 56GB/s and so on. Or would it consume only the top 1GB while blocking the GPU from accessing the bottom 1GB chip?

Consumption is always equal to total buswidth and speed available.

Now do I like that design? Not really. Because I don't understand as of yet what happens with the "leftover" RAM of one or the other device consumes a full VRAM lane.

Hope I didn't confuse things even more.

Here is a good article on GDDR6. It's written around Nvidia, but...

There are also 4 transfers per clock, and timing / parallel-to-serial conversion and crosstalk also come into play. You're right, it is complex, and maybe timing between the "upper" and "lower" RAM matters?

Maybe that's why it's 10 and 6 independent, and why everyone expected the same RAM capacity on each chip, which is the normal arrangement?

I don't know, but it ain't simple straws lol, it's complex engineering timing, and any explanation is vastly oversimplifying.

This design was definitely meant to be 20GB, or maybe MS thought they could resolve the complex timing and crosstalk; let's see. It's unusual for sure, and anyone thinking it's simple digital straws is funny.

eDTXwNt.png
 
Last edited:
I have a question as for series x having significantly more CUs 52 than ps5 36

are developers In multiplatform games will write codes for these many CUs or they only will write the code for the less CUs system and apply in all

can any clarify?

and are huge number of CUs beneficial or harder to harness their power?

More CPU cores are tricky to work with.
Unique advantage that Mark Cerny brings, is that he has first hand knowledge of what actual game development is facing challenges with and what is needed to address those issues.
And since he is the Lead Architect, he can help the games that he work on use the system to it's fullest capabilities.
As opposed to the stables of lead developers and engineers who work hand in hand to develop games and plot out the DirectX features that go into Nvidia's and AMD's hardware?

Better than those guys?

This is a very strange perspective to have. Cerny is a brilliant man; Xbox staff will tell you that upfront. But he isn't the only brilliant person in the console hardware space with a background in game design.
 
Here is a good article on GDDR6, its written around Nvidia but ....


There are also 4 transfers per clock, and timing / parallel to series and crosstalk also comes into play, your right it is complex, and timing between the "upper and lower" RAM maybe ?

Maybe thats why its 10 and 6 indepenant, and why everyone expected same RAM for each chip which is normal ?

I dont know, but it aint simple straws lol and complex engneering timing, and any explanation is vastly oversimplifying.

This design was definately meant to be 20 GB, or maybe MS thought they could resolve the complex timing and cross talk, lets see. Its unusual for sure and anyone thinking its simple digital straws is funny.

The chips themselves aren't actually slower. By "slow" they mean the bandwidth dedicated to the GPU (10*56) versus that dedicated to the CPU (6*56).

The chips themselves are all the same speed.
 

BGs

Industry Professional
My friend, why not stop the psychic talk and personal mockery and keep it informative? Can you explain how 100GB-ready is when you can only transfer at the speed of 2.4GB/s (4.8GB/s compressed)? I think you're smarter than me as you've been showing here that I'm suffering from psychic or behavior issues, in that case you should be smarter than that misleading information that adds nothing new. It's like someone saying I can eat a burger in one bite, then the other responds that he can eat a whole cow in 2 months, it's not the same.

EDIT: PS5 is capped at 22GB/s compressed, and BGs BGs has seen 20GB/s in action. XSX is capped at 6GB/s.
Saying that the theoretical peak is 20+GB/s does not mean that it is constant or that I have seen it. Let's try not to read beyond my words.
 

geordiemp

Member
The chips themselves aren't actually slower. By slow they mean the bandwith dedicated to the GPU 10*56 versus dedicated to CPU 6*56.

The chips themselves are actually all the same speed.

Yeah, I know the chips are the same speed, 14Gbps... on the wording, I should say narrower / slower access.

However, think picosecond timing and access at 4 transfers per clock... go read the article on GDDR6.

I read a post on Beyond3D where a guy was saying timing would be difficult with this configuration to keep the whole bus active. Let's see, there may be more to this.
 
Last edited: