
(*) Ali Salehi, a rendering engineer at Crytek, contrasts the next-gen consoles in an interview (Update: tweets/article removed)

Panajev2001a

GAF's Pleasant Genius
He was cautioning people not to assume Sony bought an off-the-shelf design just because a similar GPU is available as a PC card at roughly the same time they release (like the Pro and the RX 470/480). They didn't just buy an RDNA2 card; they helped design RDNA2 as part of the collaboration, and AMD then releases it as a discrete card.

If anything this should be definitive evidence that PS5 will support every feature found on RDNA2 cards at time of release

I agree, although there may be parts of the XSX's and PS5's respective GPUs that are co-designed customisations which show up on Big Navi on desktop but are not in the base RDNA2 spec. Conversely, both consoles may lack features that are in the next AMD desktop solution.

With that said, it is not fair to assume that a feature they did not boast about is a feature they do not have; I would disagree with that rule of thumb.

I think some of the frustration in his Road to PS5 speech came from media and gamers misrepresenting what he said over and over: like the “must be SW raytracing, won’t believe you... state it with other terms... no other terms again” and the “must be RDNA1, Cerny clarify it!!!!! Concern!!!!”.
 

sinnergy

Member
It's all starting to feel like Sony completely ducked up with PS5: a controller with gimmicks, voice chat, feedback, touting the SSD as the most important piece... what really happened?

It feels very un-Sony.
 

Panajev2001a

GAF's Pleasant Genius
It's all starting to feel like Sony completely ducked up with PS5: a controller with gimmicks, voice chat, feedback, touting the SSD as the most important piece... what really happened?

It feels very un-Sony.

How so? Powerful yet easy to use? Forward-looking features? Listening to internal and external developers? How are they ducking PS5 up in a way that “concerns” you so much?
 

sinnergy

Member
How so? Powerful yet easy to use? Forward-looking features? Listening to internal and external developers? How are they ducking PS5 up in a way that “concerns” you so much?
Many things. Nothing confirmed, as we haven't seen any games nor heard from a lot of devs. To me the internals look complicated, with multiple co-processors and whatnot. A weird-looking controller and an even weirder choice of color. It all feels very fragmented, even their PR campaign.
 
Last edited:

Panajev2001a

GAF's Pleasant Genius
Many things. Nothing confirmed, as we haven't seen any games nor heard from a lot of devs. To me the internals look complicated, with multiple co-processors and whatnot. A weird-looking controller and an even weirder choice of color. It all feels very fragmented, even their PR campaign.

So just a bit of good ol’ fear, uncertainty, and doubt... and PS5 is the only HW that has not shown games or has multiple coprocessors and complications to it? Nothing else...? Really ;)?

(hint: the additional coprocessors are not a complication to devs, but transparent means to an end)

Honestly I see two consoles with some good differentiation, finally, and some good ol’ console HW like daring choices. Nothing fragmented or unnecessarily complicated.
 
Last edited:

pawel86ck

Banned
And here's the section of the video where Richard from DF points out that, without even using the Series X's new GPU features, a two-week-old unoptimized port running the Gears 5 internal benchmark at equivalent PC settings practically matched the RTX 2080.

Also, if you read the Xbox article carefully, the over-100fps figure was not specifically about multiplayer. Multiplayer was just mentioned as an additional potential benefit of the kind of performance they're seeing from the system. It was specifically about everything mentioned up to that point regarding a beyond-PC-ultra 4K version of Gears with higher-res textures, particles, contact shadows and some ray-traced global illumination.
I don't know how they did it, because I'm assuming the XSX port wasn't using VRS and other new features that should boost performance. On PC even a 2080 Ti will not run this game at 4K and a locked 60fps with ultra settings (especially during cutscenes), not to mention even higher settings. PCMR guys know that very well, so they even assume the XSX port was running below native 4K.

So, PS5 comparison aside, we know for a fact the XSX GPU is indeed more powerful than a 5700 XT, and that's the most important thing.
 
Last edited:

Kumomeme

Member
Many things. Nothing confirmed, as we haven't seen any games nor heard from a lot of devs. To me the internals look complicated, with multiple co-processors and whatnot. A weird-looking controller and an even weirder choice of color. It all feels very fragmented, even their PR campaign.
Your claim is based on stuff we haven't heard yet, while the stuff we have already heard is the opposite of what you say it looks like.

Other than the main CPU there's a separate audio chip, which the XSX also has. The other coprocessors are mainly for the SSD. Other features like the Geometry Engine also exist in the XSX under a different name; we just don't know how different they are. It still uses the same architecture as PS4 but with a custom SSD and audio, and yet lots of devs claim it's easier to develop for, while armchair engineers here say it's complicated. Not much different from the XSX to me.
 
Last edited:

rnlval

Member
You said it, the Gears 5 results speak for themselves. RTX 2080 class performance on just 1 month of work? Better than Gears 5 PC Ultra settings at 4K, running at over 100fps? Come the fuck on, that's incredible. Real results put to bed the myth of the mystical bottleneck.
The context:
1. Cutscene with better-than-Gears 5 PC Ultra settings. <---- bad comparison
2. Built-in benchmark at PC Ultra settings, which lands at RTX 2080-level performance: XSX versus a workstation PC with an RTX 2080 GPU and a Ryzen Threadripper 2950X CPU.
 
Last edited:


PocoJoe

Banned
Many things. Nothing confirmed, as we haven't seen any games nor heard from a lot of devs. To me the internals look complicated, with multiple co-processors and whatnot. A weird-looking controller and an even weirder choice of color. It all feels very fragmented, even their PR campaign.

Well, if you aren't a dev with first-hand experience of PS5, or a really technology-oriented hobbyist with years of knowledge about computers/consoles, then your "to me the internals look complicated" doesn't mean anything. I could say "hmm, a Ferrari engine looks complicated vs a Fiat engine", and yeah, that can be the case, but complicated doesn't mean worse.

To me, a hobbyist who has followed computers, tech and the evolution of the whole area since the 1980s, PS5 doesn't look complicated at all; it looks the opposite: a really well-tuned and optimized whole system without any major bottlenecks.

I agree that the controller looks a bit weird, kind of futuristic, and I didn't like the color scheme at first, but does it stop me or others from buying it? No, and there will surely be many variations, so it is not a big deal.

Also, the PS5 controller is probably really comfortable and fine-tuned from the DS4, which is (IMO) really superior to any Xbox controller on ergonomics + stick placement.

"multiple co processors" isn't a bad thing if they are engineered wisely. And Sony have epic engineers for hardware and software, so I would guess that when they build up games, they dont have to care about "co processors" at all, because API and dev tools just make things happen as easily as possible.

We will see games for sure, so there is no need to panic or jump to conclusions; it is clearly a strategy to give news out bit by bit, and it is working.

PlayStation is at the top of the food chain as a brand, with a HUGE following all around the world, so they can do what they want (release news in small doses). Xbox is the underdog, so their strategy can't be similar, as their following isn't so universally huge. (Xbox is popular in the US/UK; PlayStation is popular across the whole world.)
 

CJY

Banned
PlayStation is at the top of the food chain as a brand, with a HUGE following all around the world, so they can do what they want (release news in small doses). Xbox is the underdog, so their strategy can't be similar, as their following isn't so universally huge. (Xbox is popular in the US/UK; PlayStation is popular across the whole world.)

Interestingly, while what you say is true, that PlayStation is top of the food chain in terms of video game brands, I believe that even if Sony sold 250M PS4s and Xbox sold 50M, PlayStation would still be the underdog. Everybody knows Xbox is Microsoft, one of the biggest companies in the world. Their dominance on PC does not translate into console sales at all. The vast majority of people just don't love Windows.

Sony PlayStation: The perpetual underdog against the school-yard bully of Microsoft.
 
I wonder how MS overcame the asymmetrical RAM speed issue.

Link

Have to say, I'm not a fan of the contents of that link. Their comments on memory strike me as being somewhere between naive and a bit dumb.

They've started with a conclusion - PS5 is better - and created some whack-ass fan-fiction to try and convince themselves and others.

"Either option puts the SX at a disadvantage to the PS5 for more memory intensive games and the latter puts it at a disadvantage all of the time. "


It's bollocks. Even if you don't understand anything about memory or controllers or accesses, ask yourself this:

Both systems have 16GB of RAM at a 14 Gbps effective data rate. But somehow, after extensive profiling and simulation, MS and AMD have managed to engineer a more complex system, with a wider bus and more chips, using more power and taking more die area... and make it worse than the standard 256-bit bus on Navi 10.

I've posted more about this but it's lost in the depths of NeoGaf (things move quick around here!!).

So, can both slow and fast be accessed at the same time? Someone (here) said it can't be done with this configuration.

There's nothing to indicate that it can't!

Instead of thinking about fast and slow areas, try thinking for a moment in terms of memory channels. The "slow" area (actually 75% of PS5's bandwidth!) is "slow" because it's accessed across three channels; the "fast" is accessed across five.

An individual channel, permanently wired up to two memory chips, sees no difference in speed (frequency or bandwidth) between the 10GB and 6GB areas. It's the combined effect of three channels to the "slow" 6GB vs five to the "fast" 10GB that accounts for the bandwidth difference across these areas.

If one controller is accessing the "slow" area, I can see no reason why the other four controllers shouldn't still be able to access the fast. The controller shouldn't care: it's given an address and told to do something there. Some accesses might block others (and some accesses can involve multiple controllers), and that might happen a bit more often on XSX due to the way accesses are distributed, but this is ultimately still an issue the PS5 will have to deal with too (or any setup, for that matter).

So: five channels, all of which run at the same speed, but only three of those channels are physically connected to the RAM that houses the "slow" 6GB section. Unless MS and AMD have fucked something up, this setup won't be a particular problem. I expect that at worst a developer might have to think about where they put some edge-case stuff in memory, but the biggest gains will be common sense anyway.
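If it helps to see where the headline numbers come from, here's a minimal sketch: peak GDDR6 bandwidth is just bus width times data rate. The stated bandwidths are the public figures; the three-vs-five controller split for the slow/fast pools is my working assumption in this thread, not an official spec.

```python
# Peak GDDR6 bandwidth = bus width (bits) / 8 * per-pin data rate (Gbps).
DATA_RATE_GBPS = 14  # 14 Gbps GDDR6, as used by both consoles

def bandwidth_gb_s(bus_bits: int) -> float:
    """Peak bandwidth in GB/s for a bus of the given width."""
    return bus_bits / 8 * DATA_RATE_GBPS

xsx_fast = bandwidth_gb_s(320)  # all five 64-bit controllers -> 560.0
xsx_slow = bandwidth_gb_s(192)  # three controllers on the 2GB chips -> 336.0
ps5      = bandwidth_gb_s(256)  # PS5's uniform 256-bit bus -> 448.0

print(xsx_fast, xsx_slow, ps5)                    # 560.0 336.0 448.0
print(f"slow pool vs PS5: {xsx_slow / ps5:.0%}")  # 75%
```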
 

geordiemp

Member
From https://www.eurogamer.net/articles/digitalfoundry-2020-inside-xbox-series-x-full-specs
but there was one startling takeaway - we were shown benchmark results that, on this two-week-old, unoptimised port, already deliver very, very similar performance to an RTX 2080.

Yes, games that were made for last gen and 8GB of RAM on consoles will run nicely. I am sure TLOU 2 and Gears 5 will run very nicely at 4K60 without breaking a sweat.

Gears 5 or TLOU 2 don't tell us much. See, I am being balanced here, one of each.

What about true next-gen games, with 4K high-res assets and more happening thanks to the stronger CPU, more enemies and physics, ray tracing and effects and other nice things... that they will likely use more memory is the question for both consoles.

Have to say, I'm not a fan of the contents of that link. Their comments on memory strike me as being somewhere between naive and a bit dumb.

They've started with a conclusion - PS5 is better - and created some whack-ass fan-fiction to try and convince themselves and others.

"Either option puts the SX at a disadvantage to the PS5 for more memory intensive games and the latter puts it at a disadvantage all of the time. "

It's bollocks. Even if you don't understand anything about memory or controllers or accesses, ask yourself this:

Both systems have 16GB of RAM at a 14 Gbps effective data rate. But somehow, after extensive profiling and simulation, MS and AMD have managed to engineer a more complex system, with a wider bus and more chips, using more power and taking more die area... and make it worse than the standard 256-bit bus on Navi 10.

I've posted more about this but it's lost in the depths of NeoGaf (things move quick around here!!).



There's nothing to indicate that it can't!

Instead of thinking about fast and slow areas, try thinking for a moment in terms of memory channels. The "slow" area (actually 75% of PS5's bandwidth!) is "slow" because it's accessed across three channels; the "fast" is accessed across five.

An individual channel, permanently wired up to two memory chips, sees no difference in speed (frequency or bandwidth) between the 10GB and 6GB areas. It's the combined effect of three channels to the "slow" 6GB vs five to the "fast" 10GB that accounts for the bandwidth difference across these areas.

If one controller is accessing the "slow" area, I can see no reason why the other four controllers shouldn't still be able to access the fast. The controller shouldn't care: it's given an address and told to do something there. Some accesses might block others (and some accesses can involve multiple controllers), and that might happen a bit more often on XSX due to the way accesses are distributed, but this is ultimately still an issue the PS5 will have to deal with too (or any setup, for that matter).

So: five channels, all of which run at the same speed, but only three of those channels are physically connected to the RAM that houses the "slow" 6GB section. Unless MS and AMD have fucked something up, this setup won't be a particular problem. I expect that at worst a developer might have to think about where they put some edge-case stuff in memory, but the biggest gains will be common sense anyway.

Yes, try thinking about just the memory modules: both consoles have 14 Gbps RAM, it's the same speed; the only reason XSX can get to 560 is by using all of that wider 320-bit bus. All of it.

Once the slower access is in play, the 560 is no longer able to do its thing.

What we don't know is, when the slower access is in use at 336, whether the other channels are doing something (but slower)... or just waiting.

Someone on B3D was posting about memory timing issues that led to this compromise, but it's not verified and could be just FUD.

We shall see when third-party games using more than 10GB roll out next gen.
 
Last edited:

rnlval

Member
Have to say, I'm not a fan of the contents of that link. Their comments on memory strike me as being somewhere between naive and a bit dumb.

They've started with a conclusion - PS5 is better - and created some whack-ass fan-fiction to try and convince themselves and others.

"Either option puts the SX at a disadvantage to the PS5 for more memory intensive games and the latter puts it at a disadvantage all of the time. "

It's bollocks. Even if you don't understand anything about memory or controllers or accesses, ask yourself this:

Both systems have 16GB of RAM at a 14 Gbps effective data rate. But somehow, after extensive profiling and simulation, MS and AMD have managed to engineer a more complex system, with a wider bus and more chips, using more power and taking more die area... and make it worse than the standard 256-bit bus on Navi 10.

I've posted more about this but it's lost in the depths of NeoGaf (things move quick around here!!).



There's nothing to indicate that it can't!

Instead of thinking about fast and slow areas, try thinking for a moment in terms of memory channels. The "slow" area (actually 75% of PS5's bandwidth!) is "slow" because it's accessed across three channels; the "fast" is accessed across five.

An individual channel, permanently wired up to two memory chips, sees no difference in speed (frequency or bandwidth) between the 10GB and 6GB areas. It's the combined effect of three channels to the "slow" 6GB vs five to the "fast" 10GB that accounts for the bandwidth difference across these areas.

If one controller is accessing the "slow" area, I can see no reason why the other four controllers shouldn't still be able to access the fast. The controller shouldn't care: it's given an address and told to do something there. Some accesses might block others (and some accesses can involve multiple controllers), and that might happen a bit more often on XSX due to the way accesses are distributed, but this is ultimately still an issue the PS5 will have to deal with too (or any setup, for that matter).

So: five channels, all of which run at the same speed, but only three of those channels are physically connected to the RAM that houses the "slow" 6GB section. Unless MS and AMD have fucked something up, this setup won't be a particular problem. I expect that at worst a developer might have to think about where they put some edge-case stuff in memory, but the biggest gains will be common sense anyway.
That's not correct.


[image: XSX memory channel diagram]


There are 20 memory channels with XSX.

The CPU, audio, and file I/O don't see any difference between the 6GB and 10GB memory pools.

Only the GPU sees the bandwidth difference between the 6GB and 10GB memory pools.


[image: XSX memory allocation diagram]
 
Last edited:
Once the slower access is in play, the 560 is no longer able to do its thing.

That really depends on whether the channels not accessing the top 6GB can still access the lower 10GB. I see no reason why they couldn't.

What we dont know is when the slower access is in use at 336, if the other channels are doing something but slower....or just waiting.

The BW figures indicate the channels are always running at the same speed. Stalling access on all channels once any access to the upper 6GB was in play would be an unimaginably bad design, potentially costing you up to 80% of your bandwidth and completely killing your GPU.

The odds of that being part of any design are vanishingly small.

The memory controller won't care which part of ram it's accessing on its allotted chips.
 

rnlval

Member
Yes, games that were made for last gen and 8GB of RAM on consoles will run nicely. I am sure TLOU 2 and Gears 5 will run very nicely at 4K60 without breaking a sweat.

Gears 5 or TLOU 2 don't tell us much. See, I am being balanced here, one of each.

What about true next-gen games, with 4K high-res assets and more happening thanks to the stronger CPU, more enemies and physics, ray tracing and effects and other nice things... that they will likely use more memory is the question for both consoles.



Yes, try thinking about just the memory modules: both consoles have 14 Gbps RAM, it's the same speed; the only reason XSX can get to 560 is by using all of that wider 320-bit bus. All of it.

Once the slower access is in play, the 560 is no longer able to do its thing.

What we don't know is, when the slower access is in use at 336, whether the other channels are doing something (but slower)... or just waiting.

Someone on B3D was posting about memory timing issues that led to this compromise, but it's not verified and could be just FUD.

We shall see when third-party games using more than 10GB roll out next gen.


[image: RX 5600 XT vs RX 5700 benchmark chart]

Notice the RX 5600 XT (36 CUs at a 1712 MHz average, 7.89 TFLOPS) with 336 GB/s of memory bandwidth shows only an 8% penalty versus the RX 5700 with 7.7 TFLOPS and 448 GB/s of memory bandwidth.

Whether XSX sees a similar 8% penalty from its 336 GB/s pool depends on the memory access hit rates between the 6GB and 10GB pools.

My argument also applies to PS5's GPU when the CPU consumes its memory bandwidth.
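As a rough sanity check on that comparison, here's the bandwidth-per-TFLOP arithmetic behind it (a sketch using only the figures quoted above; the 8% figure comes from the benchmark and isn't derived here):

```python
# Bandwidth per TFLOP for the two NAVI 10 parts quoted above.
cards = {
    "RX 5600 XT": {"tflops": 7.89, "bw_gb_s": 336},
    "RX 5700":    {"tflops": 7.70, "bw_gb_s": 448},
}

for name, c in cards.items():
    print(f"{name}: {c['bw_gb_s'] / c['tflops']:.1f} GB/s per TFLOP")
# RX 5600 XT: 42.6 GB/s per TFLOP
# RX 5700: 58.2 GB/s per TFLOP
```

So roughly 27% less bandwidth per TFLOP only costs about 8% of real performance, which is the point: game performance is far from linear in memory bandwidth.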
 

rnlval

Member
That really depends on whether the channels not accessing the top 6GB can still access the lower 10GB. I see no reason why they couldn't.



The BW figures indicate the channels are always running at the same speed. Stalling access on all channels once any access to the upper 6GB was in play would be an unimaginably bad design, potentially costing you up to 80% of your bandwidth and completely killing your GPU.

The odds of that being part of any design are vanishingly small.

The memory controller won't care which part of ram it's accessing on its allotted chips.
Not correct.

336 GB/s is 60% of the 560 GB/s memory bandwidth, so the cost is 40% of the memory bandwidth.
 
That's not correct.


[image: XSX memory channel diagram]


There are 20 memory channels with XSX.


[image: XSX memory allocation diagram]

As I've said before, it's 5 x 64-bit, or 10 x 32-bit, or 20 x 16-bit. I just don't want to re-type everything every post. It's exhausting.

The fact that the PHYs are set up to connect to 2 x 16-bit for each chip is because that's how GDDR6 is designed. It doesn't mean that the system schedules accesses with that level of granularity.

The RDNA whitepaper doesn't actually say how many channels each controller has. Whatever, it's going to be an integer multiple of the number of controllers.

The important thing for the discussion I was having is that 1 / 2 / 3 (whatever) channel(s) accessing the higher area doesn't necessarily mean, and shouldn't mean, that every other channel is blocked for the 10GB area.
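To put numbers on that, here's the same 320-bit bus counted each way, plus the chip mix that produces the 10GB/6GB split (the 6 x 2GB + 4 x 1GB layout is from the public XSX specs; the interleaving description is my reading of them):

```python
# One 320-bit bus, counted three equivalent ways.
BUS_BITS = 320
assert BUS_BITS == 5 * 64 == 10 * 32 == 20 * 16  # controllers / chips / GDDR6 channels

chips_gb = [2, 2, 2, 2, 2, 2, 1, 1, 1, 1]  # ten chips, 32 bits each
total = sum(chips_gb)      # 16 GB in total
fast = len(chips_gb) * 1   # 1 GB interleaved on every chip -> 10 GB at 560 GB/s
slow = total - fast        # the remainder on the six 2GB chips -> 6 GB at 336 GB/s
print(total, fast, slow)   # 16 10 6
```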
 
Not correct.

336 GB/s is 60% of the 560 GB/s memory bandwidth, so the cost is 40% of the memory bandwidth.

You seem to be skim reading and looking for a fight by misinterpreting what people write. You snipped part of a sentence to remove context, and that's not a good look. I said:

"Stalling access on all channels once any access to the upper 6GB was in play would be an unimaginably bad design, potentially costing you up to %80 of your bandwidth and completely killing your GPU. "

Take the blinkers off, man. I was saying that a hypothetical situation where, out of five channels, using one to access the upper 6GB blocked all access to the lower 10GB could potentially cost you 80% of your bandwidth.

(100 / 5) * 4 = 80.

But I don't think this is how XSX works. I think that would be dumb. I think the idea that one controller can't access the upper area while another accesses the lower is silly.

It's like you're a bot trying to apply template argument against posts, unable to understand context.
 

CJY

Banned
Sounds to me like 6GB of the fast RAM is on the same channels as the 6GB of slow RAM, and therefore only 4GB is completely unencumbered.

3.5GB is reserved for the OS (let's round it to 4), and let's assume this is suspended when the game is running and we give this area back to the fast RAM, which leaves 8GB of fast RAM and 2GB+2GB of slow/fast RAM.

I'm not going into a lot of detail, because nobody can say with absolute certainty whether the slow and fast RAM can be accessed at the same time. My understanding is that they can't, because it isn't channeled RAM, and RAM chips are normally accessed in parallel.

Either way, the XSX RAM situation sounds like a compromise and a potential headache for developers. Performance-wise it might match PS5, but it's a poor design; then again, potentially it's a smart design to mitigate the limitations of their decision to cut back on RAM.

20GB of same-speed RAM across a 320-bit bus, with maybe 48 CUs, would perhaps have resulted in a very good system without the flaws apparent in XSX as it exists today.

I can't see XSX's RAM setup as anything but a compromise and a cost-cutting measure. It's not a split-RAM setup, but it's not ideal.
 

geordiemp

Member
That really depends on whether the channels not accessing the top 6GB can still access the lower 10GB. I see no reason why they couldn't.

The BW figures indicate the channels are always running at the same speed. Stalling access on all channels once any access to the upper 6GB was in play would be an unimaginably bad design, potentially costing you up to 80% of your bandwidth and completely killing your GPU.

The odds of that being part of any design are vanishingly small.

The memory controller won't care which part of ram it's accessing on its allotted chips.

I also read on Beyond3D about timing issues with such memory access. Sounds like FUD, maybe.

Yes, it does not appear well optimised; maybe they will change to 20GB or do something clever, maybe PS5 will change to 16 Gbps chips... who knows?

Both solutions scream of cost compromises until we know more info.
 
Last edited:

Handy Fake

Member
I also read on Beyond3D about timing issues with such memory access. Sounds like FUD, maybe.

Yes, it does not appear well optimised; maybe they will change to 20GB or do something clever, maybe PS5 will change to 16 Gbps chips... who knows?

Both solutions scream of cost compromises until we know more info.
I think with production being severely affected by the pandemic, it's doubtful we'll see any chip changes from now until launch. We'll see though.
 

geordiemp

Member
[image: RX 5600 XT vs RX 5700 benchmark chart]

Notice the RX 5600 XT (36 CUs at a 1712 MHz average, 7.89 TFLOPS) with 336 GB/s of memory bandwidth shows only an 8% penalty versus the RX 5700 with 7.7 TFLOPS and 448 GB/s of memory bandwidth.

Whether XSX sees a similar 8% penalty from its 336 GB/s pool depends on the memory access hit rates between the 6GB and 10GB pools.

My argument also applies to PS5's GPU when the CPU consumes its memory bandwidth.

Also, stop with the last-gen or RDNA1 PC GPU crap, it's totally irrelevant: PCs have split pools of RAM and two buses, and cannot share data between CPU and GPU without costs between the two pools.

Shared bus on consoles is totally different, some pros, some cons.

Also, the latest consoles will have their own compression and optimisations, and RDNA2 is different and faster, with 50% better perf per watt... so I don't give a damn what a PC part from Nvidia or RDNA1 does at the old max frequencies... who cares?

Your posts are irrelevant, and nobody cares what a PC part did that was 50% worse in perf/watt.
 
Last edited:

CJY

Banned
Also, stop with the last-gen or RDNA1 PC GPU crap, it's totally irrelevant: PCs have split pools of RAM and two buses, and cannot share data without costs between the two pools.

Also, the latest consoles will have their own compression and optimisations, so I don't give a damn what a PC part from Nvidia or RDNA1 does...
Yup. Traditional PCs aren't HSA, which the consoles are, so they're pretty pointless as a comparison. PS5 has a pure HSA implementation, albeit average speed-wise, and XSX's implementation is a compromised/hybrid approach.

As a result, XSX feels distinctly imbalanced as a system. HSA requires balance for a proper unified memory implementation.

The PS5 APU hit upon a breakthrough during design where the thermal density of the CPU and the GPU was exactly the same. This screams balance and zen to me.

We'll see though. Balance doesn't necessarily mean it's good. Just that the imbalance seems to have caused the RAM situation on XSX, and therefore it's bad.
 
Last edited:
Sounds to me like 6GB of the fast RAM is on the same channel as 6GB of the slow RAM, therefore only 4GB is completely unencumbered.

The 4GB won't be competing for accesses with the slower area, that's correct. It might be a good place to put latency-sensitive data for the GPU. If you have that kind of control over where you put stuff in memory, that is.

The other way to look at it, of course, is that on PS5, all of memory is encumbered by CPU / IO / Audio. But I don't think "encumbered" is the right word.

There's data, and things need it. Doing the job you're designed for shouldn't be seen as an encumbrance!

3.5GB is reserved for the OS (let's round it to 4), and let's assume this is suspended when the game is running and we give this area back to the fast RAM, which leaves 8GB of fast RAM and 2GB+2GB of slow/fast RAM.

None of that makes sense. OS reserve is actually 2.5 GB, and it's constant. Games have 10GB fast / 3.5 GB slow. It's literally stated by the guy who designed the system.

I'm not going into a lot of detail, because nobody can say with absolute certainty whether the slow and fast RAM can be accessed at the same time. My understanding is that they can't, because it isn't channeled RAM, and RAM chips are normally accessed in parallel.

RDNA definitely uses channels from multiple controllers, and both the "fast" and "slow" memory are accessed this way. That's why the bandwidth figures are the way they are.

Either way, the XSX RAM situation sounds like a compromise and a potential headache for developers. Performance-wise it might match PS5, but it's a poor design; then again, potentially it's a smart design to mitigate the limitations of their decision to cut back on RAM.

Performance-wise it will significantly exceed PS5's memory bandwidth. There's no evidence MS have "cut back" on RAM.

I can't see XSX's RAM setup as anything but a compromise and a cost-cutting measure. It's not a split-RAM setup, but it's not ideal.

It's not a cost-cutting measure; it's more expensive than the simpler setup used in the PS5 to get 16GB (which is a perfectly fine solution given Sony's slightly less performance-focused aims with PS5).
 

geordiemp

Member
Performance-wise it will significantly exceed PS5's memory bandwidth. There's no evidence MS have "cut back" on RAM.

It's not a cost-cutting measure; it's more expensive than the simpler setup used in the PS5 to get 16GB (which is a perfectly fine solution given Sony's slightly less performance-focused aims with PS5).

Performance-wise, XSX will only significantly exceed it if most access is to the fast 10GB. Otherwise they have very similar bandwidth per TF, if we assume 48 GB/s for non-GPU use. No, it's not significant... at all.

No... of course it's a cost-cutting measure: 20GB would be better, and wide for all accesses, but it costs a lot more, and would be the normal fit for a 320-bit bus.


[image: screenshot of Lady Gaia's bandwidth analysis]
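Here's the arithmetic under Lady Gaia's sequential-bus model, as a sketch: assume that whenever the slow pool is being served the whole bus delivers 336 GB/s, and that the non-GPU traffic (the assumed 48 GB/s) all lives there. Beyond the stated bandwidths and TFLOPS, none of these inputs are official figures.

```python
# GPU-visible bandwidth under the sequential-bus assumption.
CPU_TRAFFIC = 48.0  # GB/s of assumed non-GPU traffic, kept in the slow pool

def xsx_gpu_bw(cpu_gb_s: float) -> float:
    """If slow-pool access monopolizes the bus, the GPU gets the rest."""
    slow_time = cpu_gb_s / 336.0      # fraction of time spent at 336 GB/s
    return (1.0 - slow_time) * 560.0  # remaining time at full speed

def ps5_gpu_bw(cpu_gb_s: float) -> float:
    """PS5's uniform bus just loses whatever the CPU consumes."""
    return 448.0 - cpu_gb_s

print(f"XSX GPU: {xsx_gpu_bw(CPU_TRAFFIC):.0f} GB/s")  # ~480
print(f"PS5 GPU: {ps5_gpu_bw(CPU_TRAFFIC):.0f} GB/s")  # 400
# Divide by each GPU's TFLOPS (12.155 for XSX, 10.28 for PS5):
print(f"per TF: {xsx_gpu_bw(CPU_TRAFFIC)/12.155:.1f} vs {ps5_gpu_bw(CPU_TRAFFIC)/10.28:.1f}")
# per TF: 39.5 vs 38.9 -> very similar bandwidth per TF
```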
 
Last edited:
I also read on Beyond3D about timing issues with such memory access. Sounds like FUD, maybe.

Yes, it does not appear well optimised; maybe they will change to 20GB or do something clever, maybe PS5 will change to 16 Gbps chips... who knows?

Both solutions scream of cost compromises until we know more info.

Oh, you're right about the compromises thing! Both machines are at the cutting edge of compromises... and I kind of find that exciting! Which compromises get chosen really highlights the difference in philosophy between these companies at the start of the generation.

Both are fine machines IMO, well designed to support their respective companies' goals. MS have gone for the uber multiplatform machine; Sony are focusing on console as king and trying to bring something new and redefining to PlayStation-specific experiences.

Would be nice if MS could go to a uniform 20GB, and long term it might not cost too much extra. But I'm inclined to think there's something in this whole signalling / timing business. They need to make millions of these a year cheaply and reliably.
 
Performance-wise, XSX will only significantly exceed it if most access is to the fast 10GB. Otherwise they have very similar bandwidth per TF, if we assume 48 GB/s for non-GPU use. No, it's not significant... at all.

[image: screenshot of Lady Gaia's bandwidth analysis]

I have a different opinion to Lady Gaia, and I don't expect that access to the upper 6GB will automatically tie up the whole 320-bit bus. I can see no reason for that, and she offers no explanation. I would love to hear her explanation on that.

I don't think going above 10GB will significantly hurt XSX at all. Not with the number of memory controllers they have and the sheer weight of accesses the GPU makes compared to the CPU.

MS made XSX with the full intention of using all 13.5GB that games have available.
 

rnlval

Member
You seem to be skim reading and looking for a fight by misinterpreting what people write. You snipped part of a sentence to remove context, and that's not a good look. I said:

"Stalling access on all channels once any access to the upper 6GB was in play would be an unimaginably bad design, potentially costing you up to %80 of your bandwidth and completely killing your GPU. "

Take the blinkers off, man. I was saying that a hypothetical situation where, out of five channels, using one to access the upper 6GB blocked all access to the lower 10GB could potentially cost you 80% of your bandwidth.

(100 / 5) * 4 = 80.

But I don't think this is how XSX works. I think that would be dumb. I think the idea that one controller can't access the upper area while another accesses the lower is silly.

It's like you're a bot trying to apply template argument against posts, unable to understand context.
I understand your context, but I disagreed with your "potentially cost you 80% of your bandwidth" when the potential cost is 40% of your bandwidth.

MS has stated 336 GB/s of BW for the 6GB and 560 GB/s of BW for the 10GB, and this is the framework that I'm working with.

Your "bot" label on me is hypocritical.
 
Last edited:

Handy Fake

Member
Oh, you're right about the compromises thing! Both machines are at the cutting edge of compromises... and I kind of find that exciting! Which compromises get chosen really highlights the difference in philosophy between these companies at the start of the generation.

Both are fine machines IMO, well designed to support their respective companies' goals. MS have gone for the uber multiplatform machine; Sony are focusing on console as king and trying to bring something new and redefining to PlayStation-specific experiences.

Would be nice if MS could go to a uniform 20GB, and long term it might not cost too much extra. But I'm inclined to think there's something in this whole signalling / timing business. They need to make millions of these a year cheaply and reliably.
"The cutting edge of compromises". That's both a grim and hilarious take on it.
Cap doffed.
 

CJY

Banned
The 4GB won't be competing for accesses with the slower area, that's correct. It might be a good place to put latency-sensitive data for the GPU. If you have that kind of control over where you put stuff in memory, that is.

The other way to look at it, of course, is that on PS5, all of memory is encumbered by CPU / IO / Audio. But I don't think "encumbered" is the right word.

There's data, and things need it. Doing the job you're designed for shouldn't be seen as an encumbrance!



None of that makes sense. OS reserve is actually 2.5 GB, and it's constant. Games have 10GB fast / 3.5 GB slow. It's literally stated by the guy who designed the system.



RDNA definitely uses channels from multiple controllers, and both the "fast" and "slow" memory are accessed this way. That's why the bandwidth figures are the way they are.



Performance-wise it will significantly exceed PS5's memory bandwidth. There's no evidence MS have "cut back" on RAM.



It's not a cost-cutting measure; it's more expensive than the simpler setup used in the PS5 to get 16GB (which is a perfectly fine solution given Sony's slightly less performance-focused aims with PS5).
OK, I got the figure for the OS wrong, which makes the situation even more unfavourable for XSX, meaning even less unencumbered fast RAM available for when games are running.

I don't see how the slow RAM encroaching on the fast RAM isn't an encumbrance. It's not only about speed; it's about the design of the whole RAM system. It does make a huge difference whether each RAM chip is on separate channels or on a single channel. I've seen no evidence either way, to be honest. Have you got any proof of it? It doesn't make much sense to me that RDNA has multi-channel RAM when all RDNA implementations have been solely in dGPUs so far.

You bring up PS5. I don't know why. There's nothing wrong with PS5's RAM, as they weren't going for the performance crown. A post from REEE showed that XSX's RAM overall has only 2GB/s more bandwidth than PS5's, due to the larger CPU draw.

Same as how the variable clocks on PS5 look bad from an Xbox fan's perspective, when actually they're a huge benefit from a thermals, cooling and performance perspective.

All I know is that on the surface, the setup looks really poor. I don't see how XSX's tiered RAM setup is superior to a flat one. It seems like a lot of mental gymnastics to try to explain why XSX has less fast RAM than the X1X.

Edit: yup, Lady Gaia, that's it.
 
Last edited:

geordiemp

Member
I have a different opinion to Lady Gaia, and I don't expect that access to the upper 6GB will automatically tie up the whole 320-bit bus. I can see no reason for that, and she offers no explanation. I would love to hear her explanation on that.

I don't think going above 10GB will significantly hurt XSX at all. Not with the number of memory controllers they have and the sheer weight of accesses the GPU makes compared to the CPU.

MS made XSX with the full intention of using all 13.5GB that games have available.

It's fine to have a different opinion to others. However, she claims to have worked professionally on optimisation tools for games... and as such, if Lady Gaia's story/background is legit, she would know this down to the finest details. Other posters at Era such as NXGamer etc. ask her questions, and I've not seen anyone challenge her knowledge... rarely, anyway... so...

As you have not stated your knowledge or expertise, I am inclined to go with her assumptions for now, unless you have some professional first-hand experience in such things. The only assumption is non-GPU traffic at 48 GB/s, and let's face it, it will vary greatly between apps anyway.

Also, we have only been told by MS that CPU-bound data will preferentially be put in the slower part of RAM; we don't know exactly how MS will use it for now.

And for cross-gen, maybe all games will fit in 10GB anyway...
 
Last edited:
I understand your context, but I disagreed with your "potentially cost you 80% of your bandwidth" when the potential cost is 40% of your bandwidth.

Once again, you completely missed the context, even though I put the whole sentence in there for you. You're hard work, I'll give you that! ;)

In the hypothetical situation where 4 of your 5 controllers are stopped from doing anything, what is that cost?

MS has stated 336 GB/s of BW for the 6GB and 560 GB/s of BW for the 10GB, and this is the framework that I'm working with.

Yes, I know they stated that. And I'm working on the assumption that it's entirely correct.

I'll try and spell this out in baby steps:

If 560 GB/s is with five controllers ....

... and 336 GB/s is with 3 controllers ....

And if using one controller to access the "slow 6GB" (because your CPU just wants a little bit of data and doesn't need that full 336 GB/s) blocks access by the other four controllers to fast RAM (because those are the accesses they have queued up; they don't want anything from the slow RAM)...

How much bandwidth would you have potentially lost? I'll do the maths for you:

(560 GB/s / 5 controllers) x 4 blocked controllers = 448 GB/s.

(448 GB/s) / (560 GB/s) = 0.8.

I don't think this happens, I don't think one controller accessing the slow 6 GB would automatically block all access to the fast 10GB for the others. This is an example of just how dumb that setup would be. That's why I used it.
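If it's easier to see in code, here's the toy version of both hypotheses; everything here is illustrative, since nobody outside MS knows which behaviour the arbiter actually has:

```python
# Toy model: one controller is serving a small CPU read from the slow 6GB.
PER_CONTROLLER = 560.0 / 5  # 112 GB/s per 64-bit controller

def blocking_model() -> float:
    """Hypothetical: the other 4 controllers stall during the slow access."""
    return PER_CONTROLLER * 1   # only the busy controller moves data

def non_blocking_model() -> float:
    """My expectation: the other 4 keep serving the fast 10GB."""
    return PER_CONTROLLER * 5   # all controllers stay busy

lost = 1 - blocking_model() / 560.0
print(f"blocking model loses {lost:.0%} of peak bandwidth")       # 80%, as above
print(f"non-blocking keeps {non_blocking_model():.0f} GB/s live")  # 560 GB/s
```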

Quoting MS's figures back at me ... again ... is silly. You didn't read. You're getting triggered by snippets of posts you're not reading properly and throwing macros back at them as replies.

Your "bot" label on me is hypocritical.

You just did it again. You just re-did the same stock quoting of figures against a comment you didn't take the time to read.

Ayy bot LMAO. :messenger_grinning_sweat:
 

ethomaz

Banned
To add some info to the thread, Dark from Digital Foundry confirmed that the Series X version of Gears 5 that was running beyond PC Ultra settings takes advantage of dynamic resolution scaling. That's basically how it maintained its locked 60fps after being ported in two weeks without the newer GPU features utilized. Don't know the minimum resolution yet, but I asked.


edit:

Lowest he saw it drop was to 1080p in a cutscene.
Thanks for the info.

Like I said, they are aiming at two goals:

- Campaign: 4K Ultra with added effects at 60fps (if they don't reach 4K all the time, it will be pretty close to it).
- Multiplayer: Ultra at 120fps (the resolution here will drop below 4K most of the time, to whatever it needs to hold 120fps).

The drastic drops in resolution in the campaign to hold 60fps are due to the lack of optimization... the actual performance without resolution scaling is near the RTX 2080, which means around 40fps... I can see them reaching 60fps at fixed 4K, or with a small resolution drop in heavy scenes.

That is an example that just creating a profile to scale for the stronger machine doesn't work; it needs a lot of optimization...
The magic of scaling across machines doesn't exist.
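For anyone wondering how dynamic resolution scaling holds a target like that, here's a minimal sketch of the usual feedback loop (the numbers are illustrative, not Gears 5's actual tuning):

```python
# Minimal dynamic-resolution-scaling loop: shrink the render resolution
# when frames run long, grow it back when there's headroom.
TARGET_MS = 1000.0 / 60          # 16.67 ms frame budget for 60fps
MIN_SCALE, MAX_SCALE = 0.5, 1.0  # 50% of 2160p height is 1080p, the lowest DF saw

def update_scale(scale: float, last_frame_ms: float) -> float:
    """Nudge the resolution scale toward the frame budget."""
    # GPU cost is roughly proportional to pixel count (scale squared),
    # so correct by the square root of the budget/actual ratio.
    correction = (TARGET_MS / last_frame_ms) ** 0.5
    return max(MIN_SCALE, min(MAX_SCALE, scale * correction))

scale = 1.0
for frame_ms in (14.0, 16.0, 22.0, 25.0, 18.0, 15.0):  # made-up frame times
    scale = update_scale(scale, frame_ms)
    print(f"{frame_ms:5.1f} ms -> render at {scale:.0%} of full resolution")
```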
 

GymWolf

Member
I have a question: in the last two gens, who were the people who talked about Cell being hard to develop for, or the eSRAM problem on the Xbox One, etc.?

First party? Second party? Third party? Journalists? YouTube experts? Sony or MS themselves? Who exactly?

Did ND seriously shit on Cell, or 343 on the Xbox One's eSRAM? Did Ubisoft/Activision/Bethesda ever say anything negative about the hardware, at the risk of getting MS or Sony mad at them?

Who are the most trustworthy sources that are going to tell us (without lying or fanboying) which console has bottlenecks, or how easy each is to work with?
 
Last edited:

geordiemp

Member
So, we do not have an agreement on how the SX memory pool works.

That is true; we just have different posters: some saying it's sequential, which ties in with the MS specs so far (Lady Gaia on Era, and how most buses work), some saying it's individual straws (delusional), and some thinking that maybe while the slow access is occurring the rest of the bus could still contribute (unknown, and could be something new).

Nobody knows yet.
 

ethomaz

Banned
I have a question: in the last two gens, who were the people who talked about Cell being hard to develop for, or the eSRAM problem on the Xbox One, etc.?

First party? Second party? Third party? Journalists? YouTube experts? Sony or MS themselves? Who exactly?

Did ND seriously shit on Cell, or 343 on the Xbox One's eSRAM? Did Ubisoft/Activision/Bethesda ever say anything negative about the hardware, at the risk of getting MS or Sony mad at them?

Who are the most trustworthy sources that are going to tell us (without lying or fanboying) which console has bottlenecks, or how easy each is to work with?
Even Sony admitted early in the gen that the PS3's Cell was hard to code for... specifically to use its SPUs.

 
Last edited:
It's fine to have a different opinion to others. However, she claims to have worked professionally on optimisation tools for games... and as such, if Lady Gaia's story/background is legit, she would know this down to the finest details. Other posters at Era such as NXGamer etc. ask her questions, and I've not seen anyone challenge her knowledge... rarely, anyway... so...

As you have not stated your knowledge or expertise, I am inclined to go with her assumptions for now, unless you have some professional first-hand experience in such things. The only assumption is non-GPU traffic at 48 GB/s, and let's face it, it will vary greatly between apps anyway.

I've done some programming in the past and made some shitty stuff in XNA and Unity that I've been too embarrassed to show another human. I enjoy reading about computers. I've never done any low-level optimisation. My opinions are based on reading the RDNA deep dives and whitepaper, and I'm most definitely not an authority on computer architectures.

Lady Gaia is going off the same public data as me, and I'll bet she's never been hands-on with XSX. I think she's generally right and very smart from what I've seen, but I also think that she's making some assumptions about accesses on one controller (e.g. blocking all accesses on the others uniformly, as opposed to on a per-access basis) that will turn out to be incorrect.

I think by looking at what her proposed arrangement would mean in worst-case scenarios (e.g. if you're accessing slow memory, but only a little bit of it, using only a fraction of the 336 GB/s) and what that would mean for system performance (most of the memory subsystem would be idle in this case), you can see it would be an unworkable solution. IMO the controllers have to be able to carry on with other work when they're not asked to do anything in the slow memory (accesses permitting).

You're free to go with her assumptions, of course! Hey, at least you heard me out. ;)

I look forward to seeing the DF results. Long term the proof is always in the pudding.

Damn, I've spent too much time in here today. I'll be back!
 

geordiemp

Member
I believe a dev on Twitter explained it pretty well with pics.


Dr Keo is an Era poster, not a dev, and the issue was put to bed by the expert, Lady Gaia.

No, they are not individual straws; the bus works in parallel. I'll post it again:


[image]
 

ZywyPL

Banned
So, we do not have an agreement on how the SX memory pool works.

But does it really matter? It just works, and that's what should matter for us end consumers; it's in the devs' hands to make the best use of it. MS already showed Gears 5 running maxed out on XSX at 4K60, and fully path-traced Minecraft, so obviously there is no absolute issue/bottleneck as far as this fancy RAM configuration goes.
 

rnlval

Member
Once again, you completely missed the context, even though I put the whole sentence in there for you. You're hard work, I'll give you that! ;)

In the hypothetical situation where 4 of your 5 controllers are stopped from doing anything, what is that cost?


Yes, I know they stated that. And I'm working on the assumption that it's entirely correct.

I'll try and spell this out in baby steps:

If 560 GB/s is with five controllers ....

... and 336 GB/s is with 3 controllers ....

And if using one controller to access the "slow 6GB" (because your CPU just wants a little bit of data and doesn't need that full 336 GB/s) blocks access by the other four controllers to fast RAM (because those are the accesses they have queued up; they don't want anything from the slow RAM)...

How much bandwidth would you have potentially lost? I'll do the maths for you:

(560 GB/s / 5 controllers) x 4 blocked controllers = 448 GB/s.

(448 GB/s) / (560 GB/s) = 0.8.

I don't think this happens, I don't think one controller accessing the slow 6 GB would automatically block all access to the fast 10GB for the others. This is an example of just how dumb that setup would be. That's why I used it.

Quoting MS's figures back at me ... again ... is silly. You didn't read. You're getting triggered by snippets of posts you're not reading properly and throwing macros back at them as replies.


You just did it again. You just re-did the same stock quoting of figures against a comment you didn't take the time to read.

Ayy bot LMAO. :messenger_grinning_sweat:
Why make it complicated?

Reminder
RX 5600 XT (NAVI 10) already has six GDDR6 chips with 336 GB/s memory bandwidth.
RX 5700 (NAVI 10) already has eight GDDR6 chips with 448 GB/s memory bandwidth.

XSX's six-chip, 336 GB/s slow pool duplicates the RX 5600 XT's six-chip, 336 GB/s GDDR6 configuration.

336 / 560 = 60%, hence the potential memory bandwidth lost is 40%.

GDDR6 enables full-duplex read/write, read/read, write/write or write/read combinations with its dual 16-bit channels.

We don't know if the memory controller arbiters have semi-custom changes that reserve odd or even 16-bit links on the six 2GB chips to give the GPU-optimal memory range higher priority.
 
Last edited: