
Next-Gen PS5 & XSX |OT| Console tEch threaD


Bo_Hazem

Banned
o’dium explained why this is not the case; it is wishful thinking from the SDF :messenger_beaming:

I have a feeling that this whole SSD stuff is “power of the cloud” and the Cell all over again, and some people here are trying very hard to convince others that their favourite plastic box is “da best”. It is as if repeating the same thing hundreds of times will make it reality.

 
This upcoming gen looks to be more interesting than ever, Sony have gone a unique route and MS have gone the common route.

Each gen we're used to getting new hardware with bumped-up specs and numbers. Series X has done just that: everything is bumped up from previous gens, in the typical American way, “more powaaaarr”, a bit like muscle cars: throw an insane engine in there, but the handling is always rubbish.

Sony have gone more customised, a bit like a tuned Japanese car (a Skyline): power, but not over-the-top power, with other components to help, turbos etc. (high clock speeds, LOADS of customised silicon).

Sony also have 2 separate engines, Tempest for sound and the Geometry Engine for shaders etc.; it'll be interesting to learn how these 2 engines work, as they should take strain off the CPU and GPU.

Yes my take of it too.

XsX is more standard, brute force (more off the shelf RDNA2).

PS5 more elegant, esoteric and customised (Geometry Engine, Tempest Engine, GPU cache scrubbers).

:messenger_grinning_smiling:
 

PaintTinJr

Member
There's a lot of confusion on why SSD is so important for next-gen and how it will change things.
Here I will try to explain the main concepts.
TL;DR fast SSD is a game changing feature, this generation will be fun to watch!

It was working fine before, why do we even need that?
No, it wasn't fine, it was a giant PITA for anything other than small multiplayer maps or fighting games.
Let's talk some numbers. Unfortunately not many games have ever published their RAM pools and asset pools to the public, but some did.
Enter the Killzone: Shadow Fall demo presentation.
We have roughly the following:

Type                    Approx. size, %    Approx. size, MB
Textures                      30%               1400
CPU working set               15%                700
GPU working set               25%               1200
Streaming pool                10%                500
Sounds                        10%                450
Meshes                        10%                450
Animations/Particles           1%                 45

*These numbers are rounded sums of various much more detailed numbers presented in the article above.

We are interested in the "streaming pool" number here (but we will talk about the others too).
We have ~500MB of data that is loaded on the fly as the demo progresses.
The whole chunk of data that the game samples from (for that streaming process) is 1600MB.
The load speed of the PS4 drive is <50MB/sec for compressed data (<20MB/sec uncompressed), i.e. it will take at least ~30 sec to load all of that.

It seems like it's not that big of a problem, and for a demo it indeed isn't. But what about the full game?
The game size is ~40GB, you have 6.5GB of usable RAM, you cannot load the whole game, even if you tried.
So what's left? We can either stream things in, or do a loading screen between each new section.
Let's try the easier approach: do a loading screen
We have 6.5GB of RAM, and the resident set is ~2GB from the table above (GPU + CPU working set). We need to load 4.5GB each time. It's 90 seconds, pretty annoying, but it's the best case. Any time you need to load things not sequentially, you will need to seek the drive and the time will increase.
You can't go back, as it will re-load things: another loading screen.
You can't use more than 4.5GB of assets in your whole gaming section, or you will need another loading screen.
It gets even more ridiculous if your levels are dynamic: left an item in a previous zone? Load time will increase (the item is not built into the gaming world; we load the world, then seek for each item/item group on disk).
Remember Skyrim? Loading into each house? That's what will happen.
So, loading screens are easy, but if your game is not a linear, static, theme-park style attraction it gets ridiculous pretty fast.
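A quick back-of-the-envelope sketch of that loading math (Python; the drive speed and data sizes are the rough assumptions from this post, not measured values):

HDD_MB_PER_SEC = 50  # effective PS4 read speed assumed above (compressed data)

def load_time_s(data_mb, speed=HDD_MB_PER_SEC):
    # Best-case sequential load; any seeking only makes this worse
    return data_mb / speed

print(f"Demo streaming chunk (1600 MB): ~{load_time_s(1600):.0f} s")  # ~32 s
print(f"Full loading screen (4500 MB): ~{load_time_s(4500):.0f} s")   # ~90 s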

How do we stream then?
We have a chunk of memory (remember, 500MB) that's reserved for streaming things in from disk.
With our 50MB/sec speed we fill it up every 10 sec.
So, every 10 sec we can have totally new data in RAM.
Let's do some metrics, for example: how much new shit can we show the player in 1 min? Easy: 6*500 = 3GB.
How much old shit does the player see each minute? Easy again: 1400+450+450+45 = ~2.5GB.
So we have roughly 50/50 old to new shit on screen.
Reused monsters? Assets? Textures? NPCs? You name it. You have the 50/50 going on.

But PS4 has 6.5GB of RAM, we've only used 4.5GB so far, what about the other 2GB?
Excellent question!
The answer is: it goes to the old shit. Because even if we increase the streaming buffer to 1.5GB, it does nothing to the 50MB/sec speed.
With the full 6.5GB we get to 6GB old vs 3GB new in 1 minute. Which is 2:1, old shit wins.

But what about 10 minutes?
Good, good. Here we go!
In 10 min we can get to 30GB new shit vs 6GB old.
And that's, my friends, how the games worked last gen.
You as a player were introduced to new gaming moments very gradually.
Or there were tricks, like door-opening animations.
Remember Uncharted, with all the "let's open that heavy door for 15 sec" moments? That's because new shit needs to load: the player needs to get to a new location, but we cannot load it fast.
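The same ratio as a tiny sketch (using this post's own numbers; "old" here is everything kept resident in RAM):

DRIVE_MB_PER_SEC = 50   # effective last-gen read speed
OLD_DATA_GB = 6.0       # resident assets the player keeps seeing

for minutes in (1, 10):
    new_gb = DRIVE_MB_PER_SEC * 60 * minutes / 1000
    print(f"{minutes:2d} min: {new_gb:.0f} GB new vs {OLD_DATA_GB:.0f} GB old"
          f" (ratio {new_gb / OLD_DATA_GB:.1f}:1)")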

So, what about SSDs then?
We will answer that later.
Let's ask something else.

What about 4K?
With 4K "GPU working set" will grow 4x, at least.
We are looking at 1200*4 = 4.8GB of GPU data.
CPU working set will also grow (everybody wants these better scripts and physics I presume?) but probably 2x only, to 700*2 = ~1.5GB
So overall the persistent memory will be well over 6GB, let's say 6.5GB.
That leaves us with ~5GB of free RAM in XSeX and ~8GB for PS5.

Stop, stop! Why PS5 has more RAM suddenly?
That's simple.
XSeX RAM is divided into two pools (logically, physically it's the same RAM): 10GB and 3.5GB.
GPU working set must use the 10GB pool (it's the memory set that absolutely needs the fast bandwidth).
So 10 - 4.8 = 5.2 which is ~5GB
CPU working set will use 3.5GB pool and we will have a spare 2GB there for other things.
We may load some low-frequency data there, like streaming meshes and stuff, but it will be hard to use in each frame: accessing that data too frequently will pull the whole system bandwidth down towards 336GB/sec.
That's why MSFT calls the 10GB pool "GPU optimal".
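The pool arithmetic above, spelled out (the working-set sizes are this post's 4K guesses, not official figures):

GPU_OPTIMAL_GB = 10.0    # fast pool @ 560 GB/s
STANDARD_GB = 3.5        # slower pool left for games @ 336 GB/s (another 2.5 GB is OS-reserved)
GPU_WORKING_SET = 4.8    # assumed 4x the Killzone GPU set for 4K
CPU_WORKING_SET = 1.5    # assumed 2x the Killzone CPU set

print(f"Free GPU-optimal RAM: {GPU_OPTIMAL_GB - GPU_WORKING_SET:.1f} GB")  # ~5.2 GB
print(f"Free standard RAM: {STANDARD_GB - CPU_WORKING_SET:.1f} GB")        # ~2.0 GB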

But what about PS5? It also has some RAM reserved for the system? It should be ~14GB usable!
Nope, sorry.
PS5 has a 5.5GB/sec flash drive that typically loads 2GB in 0.27 sec. Its write speed is lower than the effective read speed, but not less than the 5.5GB/sec raw figure.
What the PS5 can do, and I would be pretty surprised if Sony doesn't do it, is save the system image to disk while the game is playing.
And thus give almost the full 16GB of RAM to the game.
A 2GB system image will load back into RAM in <1 sec (save 2GB of game data to disk in 0.6 sec + load the system from disk in 0.3 sec). Why keep it resident?
But I'm on the safe side here. So it's ~14.5GB usable for PS5.
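The swap arithmetic as a sketch, purely illustrative (Sony has not confirmed any such OS page-out scheme; the 2 GB image and the 0.6 sec write are this post's assumptions):

SSD_RAW_GB_S = 5.5     # PS5 raw read speed
SYSTEM_IMAGE_GB = 2.0  # assumed OS footprint

write_out = 0.6                             # the post's assumed time to flush 2 GB of game data
read_back = SYSTEM_IMAGE_GB / SSD_RAW_GB_S  # ~0.36 s to pull the system image back in
print(f"System image round trip: ~{write_out + read_back:.1f} s")  # roughly the "<1 sec" above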

Hmm, essentially MSFT can do that too?
Yep, they can. The speeds will be less sexy, but not more than ~3 sec, I think.
Why don't they do it? Probably because they rely on the OS constantly running in the background for all the services it provides.
That's why I gave Sony 14.5GB.
But I have a hard time understanding why 2.5GB is needed; all the background services could run in a much smaller RAM footprint just fine, and UI stuff can load on demand.

Can we talk about SSD for games now?
Yup.
So, let's get to the numbers again.
For XSeX's ~5GB of "free" RAM, we can divide it into 2 parts: resident and streaming.
Why two? Because typically you cannot load shit into a frame while that frame is rendering.
The GPU is so fast that every time you ask it "what exact memory location are you reading now?" it has to slow down just to give you an answer.

But can you load things into other part while the first one is rendering?
Absolutely. You can switch "resident" and "streaming" part as much as you like, if it's fast enough.
Anyway, we got to 50/50 of "new shit" to "old shit" inside 1 second now!
2.5GB of resident + 2.5GB of streaming pool and it takes XSeX just 1 sec to completely reload the streaming part!
In 1 min we have 60:1 of new/old ratio!
Nice!
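A sketch of that XSeX streaming cadence (the 2.4 GB/s raw figure and the 50/50 split are assumptions carried over from this post):

FREE_RAM_GB = 5.0
RESIDENT_GB = FREE_RAM_GB / 2   # rendered from
STREAM_GB = FREE_RAM_GB / 2     # refilled in the background
SSD_GB_S = 2.4                  # XSeX raw; compressed peaks are higher

refill_s = STREAM_GB / SSD_GB_S            # ~1 s per swap
new_per_min = (60 / refill_s) * STREAM_GB  # fresh data pushed through each minute
print(f"Refill every ~{refill_s:.1f} s -> ~{new_per_min:.0f} GB new per minute"
      f" vs {RESIDENT_GB:.1f} GB resident (~{new_per_min / RESIDENT_GB:.0f}:1)")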

What about PS5 then? Is it just 2x faster and that's it?
Not really.
The whole 8GB of the RAM we have "free" can be a "streaming pool" on PS5.

But you said "we cannot load while frame is rendering"?
In XSeX, yes.
But in PS5 we have GPU cache scrubbers.
This is a piece of silicon inside the GPU that will reload our assets on the fly while GPU is rendering the frame.
It has full access to where and what GPU is reading right now (it's all in the GPU cache, hence "cache scrubber")
It will also never invalidate the whole cache (which can still lead to GPU "stall") but reload exactly the data that changed (I hope you've listened to that part of Cerny's talk very closely).

But its free RAM size doesn't really matter; we still have 2:1 old/new in one frame, because the SSD is only 2x faster?
Yes, and no.
We do have only 2x faster rates (although the max rates are much higher for PS5: 22GB/sec vs 6GB/sec)
But the thing is, GPU can render from 8GB of game data. And XSeX - only from 2.5GB, do you remember that we cannot render from the "streaming" part while it loads?
So in any given scene, potentially, PS5 can have 2x to 3x more details/textures/assets than XSeX.
Yes, XSeX will render it faster, with a higher FPS or a higher frame-buffer resolution (not both, the perf difference is too small).
But the scene itself will be less detailed, have less artwork.

OMG, can MSFT do something about it?
Of course they will, and they do!
What are the XSeX advantages? More ALU power (FLOPS), more RT power, more CPU power.
What MSFT will do: rely heavily on this power advantage instead of on artwork: more procedural stuff, more ALU used for physics simulation (remember, RT and lighting are physics simulations too, after all).
More compute and more complex shaders.

So what will be the end result?
It's pretty simple.
PS5: relies on more artwork and pushing more data through the system. Potentially 2x performance in that.
XSeX: relies more on in-frame calculations, procedural. Potentially 30% performance in that.
Who will win: dunno. There are pros and cons for each.
It will be a fun generation indeed. Much more fun than the previous one, for sure.

Most of that seems pretty good IMO, but the idea that going as wide as 52 CUs (4 wider than the 48 CU example Cerny used) and slower will result in better procedural or frame-rate performance doesn't sound right for most games IMHO. For a start, the higher the clock, the more clocks per frame for procedural fluidity. It also gives you more clocks per frame to counter or cancel unforeseen workloads that will tank frame-rate.

Being philosophical about the different TF setups, would a Mandelbrot set driving a procedural data set be better as 15-20% bigger in resolution (1080p or above)? Or 15-20% more iterations/further forward in the algorithm in a frame (30 or 60 or 120fps)?

Given the way game frame-rates get tested for rushed release, I envisage the PS5 having far better frame-rate and frame pacing in less linear games because of the narrower 36 CU setup and higher clock.

I'm thinking specifically about games like Dark Souls, because the strategy a gamer employs can produce workloads that exceed the developer's test workloads designed to stay within technical limits. So both 12 and 10TF could be insufficient in such a future game, and the higher clock lets the software detect the excess workload and take LoD action in fewer frames. And when it does drop LoD, the narrower 36 CU setup wastes fewer CUs because the drop is smaller and more frequent; it can also see sooner when the normal workload resumes, and so scale back up to full LoD and full utilisation in fewer frames.
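The clocks-per-frame point is easy to put in numbers; this is just the ratio of the two stated GPU clocks, nothing more:

PS5_GHZ, XSX_GHZ = 2.23, 1.825  # stated peak GPU clocks

def cycles_per_frame(clock_ghz, fps):
    return clock_ghz * 1e9 / fps

for fps in (30, 60, 120):
    ps5 = cycles_per_frame(PS5_GHZ, fps)
    xsx = cycles_per_frame(XSX_GHZ, fps)
    print(f"{fps:3d} fps: PS5 {ps5 / 1e6:.0f}M cycles/frame vs XSX {xsx / 1e6:.0f}M"
          f" (+{(ps5 / xsx - 1) * 100:.0f}%)")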
 
Yes my take of it too.

XsX is more standard, brute force (more off the shelf RDNA2).

PS5 more elegant, esoteric and customised (Geometry Engine, Tempest Engine, GPU cache scrubbers).

:messenger_grinning_smiling:

Lol yes exactly, the average Joe sees higher numbers (MUST mean better!) and assumes game over; anyone with knowledge and understanding of tech can easily see the two different approaches each company has gone with.

Until we see games compared and the differences explained in detail, expect the typical armchair devs and people with little to no tech knowledge to spew hogwash as if they know what they're talking about.
 

Smoke6

Member
For all the hype, the reveal and tech are an over-complicated mess to be honest, not a clean design at all like the PS4.

Something went terribly wrong.
How?

the man said it himself that other devs he talked to practically built this damn console with their input!

man I don’t get this thread at all now, gonna wait for a proper reveal for us GAMERS and then see how this unfolds!
 

Shmunter

Member
Seems kinda too good; if high clocks can overcome 40% more CUs, then why have more CUs taking more die space and costing more? For cooling?

Big Navi is 80 CUs and 96 ROPS. What the heck ROPS means, I don't know, but it's one of those acronyms where more is better, depending on how much workload is expected? 🤷‍♂️

The ROPs are what push pixels out to the screen buffer. ROPs x clock = fill rate.

So assuming the same ROP count between the 2 consoles, the higher clock on PS5 means a higher fill rate.

However, I suspect that the more complex the scene gets, the more the CUs become the bottleneck doing their thing, so max fill won't be reached on many occasions.
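A minimal fill-rate sketch per that formula. The ROP counts below are placeholders; neither console's ROP count had been confirmed at this point:

def fill_rate_gpix(rops, clock_ghz):
    # Peak pixels written per second, in gigapixels
    return rops * clock_ghz

print(f"PS5 (assumed 64 ROPs @ 2.23 GHz): {fill_rate_gpix(64, 2.23):.0f} Gpixel/s")
print(f"XSX (assumed 64 ROPs @ 1.825 GHz): {fill_rate_gpix(64, 1.825):.0f} Gpixel/s")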
 
How?

the man said it himself that other devs he talked to practically built this damn console with their input!

man I don’t get this thread at all now, gonna wait for a proper reveal for us GAMERS and then see how this unfolds!
Something is wrong both with people and that conference.
Cerny: here's my 10.2 TF console that won't drop more than 2-3%
Everyone: sounds like 9.2 most of the time

Like, the fuck
 

pasterpl

Member
Most of that seems pretty good IMO, but the idea that going as wide as 52 CUs (4 wider than the 48 CU example Cerny used) and slower will result in better procedural or frame-rate performance doesn't sound right for most games IMHO. For a start, the higher the clock, the more clocks per frame for procedural fluidity. It also gives you more clocks per frame to counter or cancel unforeseen workloads that will tank frame-rate.

Being philosophical about the different TF setups, would a Mandelbrot set driving a procedural data set be better as 15-20% bigger in resolution (1080p or above)? Or 15-20% more iterations/further forward in the algorithm in a frame (30 or 60 or 120fps)?

Given the way game frame-rates get tested for rushed release, I envisage the PS5 having far better frame-rate and frame pacing in less linear games because of the narrower 36 CU setup and higher clock.

I'm thinking specifically about games like Dark Souls, because the strategy a gamer employs can produce workloads that exceed the developer's test workloads designed to stay within technical limits. So both 12 and 10TF could be insufficient in such a future game, and the higher clock lets the software detect the excess workload and take LoD action in fewer frames. And when it does drop LoD, the narrower 36 CU setup wastes fewer CUs because the drop is smaller and more frequent; it can also see sooner when the normal workload resumes, and so scale back up to full LoD and full utilisation in fewer frames.

Based on this, and many other recent comments, we should probably all get gaming PCs with uber-fast SSDs and smaller/older graphics cards and overclock the sh*t out of them to finally play Crysis 3 on ultra. Oh, I forgot to mention that 2 GB of RAM will be sufficient, as I can run everything off my uber-fast SSD... Sorry to say, but this logic is flawed. Yes, teraflops are only one element, and definitely not the only metric we should be looking at, but it is getting ridiculous how people keep trying to improve their perception of the PS5 with lots of garbage analysis. In 6-8 months we will see for ourselves when the first games are shown, and then we will be playing them before the end of this year. But I am guessing that if XBSX games look and work better, we will hear some more about Cerny's secret sauce etc. Power-of-the-cloud shit all over again, but this time it is the SDF reaching, not the xbots.
 

xool

Member
I think the more interesting question is how the slower CPU access to the GDDR6 bus on the XSX will affect overall effective bandwidth for the GPU. I don't think in practice it'll ever hit 560GB/s with any CPU interactions, and the more CPU interactions there are, the lower the effective bandwidth goes.
Both consoles have the CPU on the same bus, as they did this gen.
 

Smoke6

Member
Something is wrong both with people and that conference.
Cerny: here's my 10.2 TF console that won't drop more than 2-3%
Everyone: sounds like 9.2 most of the time

Like, the fuck
Well, isn't there a thread with an article from some months back about how Sony had all this stuff locked in hardware-wise? So your argument is invalid if that article is to be believed, correct?

but I guess everyone here are developers and shit so carry on
 

SamWeb

Member

Firstly, the total XSX memory bandwidth will never drop below 448 GB/s (560 - 112 = 448, 336 + 112 = 448). They share a common bus.
At worst the two consoles should come out roughly equal, although the XSX setup can be exploited more profitably in some cases.


Again, the PS5 will also be forced to share its bandwidth between the GPU and CPU, so the performance available separately to the GPU and CPU will inevitably decrease.
Secondly, putting the slower, but still sufficiently fast, SSD in the red ("bad") column is unjustified.
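That arithmetic as a sketch (this is the post's own simplified model; real contention on a shared GDDR6 bus doesn't reduce to a flat subtraction):

TOTAL_BUS_GB_S = 560   # all ten chips interleaved
SLOW_POOL_GB_S = 336   # the upper 6 GB only spans six chips
CPU_DRAW_GB_S = 112    # assumed CPU/audio traffic

print(TOTAL_BUS_GB_S - CPU_DRAW_GB_S)   # 448: what the model leaves the GPU on the fast region
print(SLOW_POOL_GB_S + CPU_DRAW_GB_S)   # 448: the post's floor figure, from the other direction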
 

Gamernyc78

Banned
And don't forget the crazy ass SSD which will get PS5 to 15 TF equivalent at the very least baby.

Don't forget, combined with the uber, ultra-powerful Azure servers they will be renting from their nemesis Microsoft, and the Microsoft engineers allowing them to unlock the exponential power of the cloud (at 720p xCloud awesomeness, I might add lol), there will be no stopping Sony 💦💦😂😂
 

xool

Member
Seeing as the talk was essentially a GDC talk aimed at devs, and with Cerny failing to mention anything about how much RAM is being reserved for the OS, I am really curious whether they've decided to use an ARM co-processor again, with its own dedicated RAM and/or storage for OS tasks. That would be really great to be honest, perhaps leaving 100% of the GDDR6 RAM for devs. I dunno if this is functionally possible though.

I consider the OS and its features "consumer-facing", so I fully expect them to go into it at the full reveal. Can you imagine how exciting it would be if the whole 16GB of RAM is available just for games? Would be amazing.


I also saw something mentioning that GameDVR functionality would eat up a whole chunk of RAM... I don't see why this would be the case when the I/O is so fast. The PS5 could hold as little as 5 seconds of video in RAM (for example) and constantly write it out to storage at a low priority. Seems rather obvious to be honest, so maybe I'm missing something on why people think all the video needs to be stored in memory.
It could well be a separate ARM chip for the OS, and some DDR4... I'd like that, but wouldn't they have mentioned it as a feature in "Road to PS5"?

I dunno about writing gameplay capture video direct to the SSD - yes, it's totally possible. But it's a constant ~1+GB per hour even at 1080p (and we're at 4K now) -- that would wear the SSD too much, I think. Maybe it's within usual bounds, not sure.
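For context, a rough write-volume estimate for continuous capture (the bitrates below are my assumptions for illustration, not anything Sony has stated):

def gb_written_per_hour(bitrate_mbps):
    # Mbps -> MB/s -> GB per hour
    return bitrate_mbps / 8 * 3600 / 1000

for label, mbps in (("1080p @ ~8 Mbps", 8), ("4K @ ~30 Mbps", 30)):
    print(f"{label}: ~{gb_written_per_hour(mbps):.1f} GB/hour written to the SSD")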

Edit: on the bandwidth, I believe it's a fairly decent match for the 36 CUs. RedGamingTech went into it extensively enough in the video I linked above. He said there were rumours that the RAM could possibly have been clocked at 512GB/s, but they decided to keep it at 448GB/s.

It's the same bus as a 5700 XT, which only runs at 1.9GHz tops... but this same bus and bandwidth has to support an 8C/16T Zen 2 CPU as well. (Someone told me the 5700 XT benefits more from a memory upclock than from a GPU boost, suggesting it's already bandwidth-bottlenecked - I think - not sure if that's true though.)
 
Well, isn't there a thread with an article from some months back about how Sony had all this stuff locked in hardware-wise? So your argument is invalid if that article is to be believed, correct?

but I guess everyone here are developers and shit so carry on
I don't see the point.
Of course the PS5 hardware was locked long before the reveal; it seems Cerny found a way to get a 9.2 TF GPU to 10.2 with minimal changes, not the 1 TF of variable drop some people claim. Again, his conference, not mine.
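For reference, those TF figures fall straight out of CU count and clock (standard FP32 math: CUs x 64 lanes x 2 ops per clock):

def tflops(cus, clock_ghz):
    # FP32: CUs x 64 lanes x 2 ops (FMA) per clock
    return cus * 64 * 2 * clock_ghz / 1000

print(f"36 CUs @ 2.00 GHz: {tflops(36, 2.00):.2f} TF")    # ~9.2, the rumoured figure
print(f"36 CUs @ 2.23 GHz: {tflops(36, 2.23):.2f} TF")    # ~10.3, Cerny's stated peak
print(f"52 CUs @ 1.825 GHz: {tflops(52, 1.825):.2f} TF")  # ~12.1, XSX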
 

Shio

Member
uuh, you played yourself bruh

So, it's based on this tweet:



so, yeah he says:

"Lots of RAM, and as a modern a CPU and GPU as is reasonable certainly helps"

but of COURSE other things like time/budget/staff matter too.

SO, he says TFLOPs are not the only thing that matters, and that things like time/budget/staff etc. are MORE important. TRUE, of course. Wtf is this crap.
If devs have the same time/budget/staff for PS5 vs Xbox Series X then Xbox Series X will be better according to his argumentation.

He NEVER mentions the SSD or anything like that, only that RAM, CPU and GPU (where Xbox Series X is BETTER than PS5) are important, and just as important as time/budget/staff, because the better the CPU/RAM/GPU is, the smaller the team you need to optimize.

Especially if you have an architecture with variable frequencies like the PS5: then these games will actually look worse on PS5, because optimizing for that takes more time and is harder.
I think it was stated that the PS5 reduces dev time to less than a month, and that the SSD automating the handling of asset streaming will make things easier for devs and save time.
 
MS' choice to go with a massive amount of slower CUs in their GPU has resulted in much less space on the chip for custom engines. The benefit is a theoretically higher amount of vector operations and more potential for ray tracing.

Overall, the downsides are power draw and cost. The power draw is going to be pushed up powering all those transistors, especially when you consider their CPU is also clocked 300MHz higher than PS5's. Power draw is a concern, and the 52CU brute-force GPU approach is going to be costly, especially if those CUs can't be fed as efficiently as a smaller amount.
 
Don't forget, combined with the uber, ultra-powerful Azure servers they will be renting from their nemesis Microsoft, and the Microsoft engineers allowing them to unlock the exponential power of the cloud (at 720p xCloud awesomeness, I might add lol), there will be no stopping Sony 💦💦😂😂
Sony using Azure was a bold move, but MS probably thinks it won't amount to shit, so yeah, take it, why not lol
 

Bo_Hazem

Banned
The ROPs are what push pixels out to the screen buffer. ROPs x clock = fill rate.

So assuming the same ROP count between the 2 consoles, the higher clock on PS5 means a higher fill rate.

However, I suspect that the more complex the scene gets, the more the CUs become the bottleneck doing their thing, so max fill won't be reached on many occasions.

Thanks a lot, mate. I might just build my own customized console before next gen consoles launch if I continue at this rate :messenger_grinning_sweat: Learned a lot from all of the guys around here, Sony fans, Xbox fans, and so-called neutrals!

And before reaching the 4,000-reaction mark within my first 1.5 months (unless I get removed from the thread again:lollipop_tears_of_joy:), I want to thank you all, especially Xbox fans, for being good sports and taking the banter lightly, and all of you for the fun we're having here. I apologize if I have annoyed any of you.:lollipop_raising_hand: Hope we end this pandemic with the least impact on our lives, amen.🙌

But I really hate Microsoft and will celebrate when it goes bankrupt:lollipop_horns:
 
Dolby Atmos Can Also Support Hundreds of Objects Like PS5’s Tempest Audio Engine, Says Dolby

Dolby Atmos, the spatial audio technology developed and maintained by Dolby Laboratories, is nowadays fairly common. It can be found on PC, several high-end smartphones, and Microsoft's Xbox One console.

However, Sony never added Dolby Atmos support on PlayStation 4 and with the recent PlayStation 5 specification reveal, we learned that won't happen with the upcoming next-generation console, either. System architect Mark Cerny said Sony's goal with the Tempest engine for 3D audio was to support 'hundreds of sources, not just the 32 that Dolby Atmos supports'.


To address such claims, a new blog post went up yesterday on the Dolby Atmos developer website.

Is it true Dolby Atmos is capped at 32 objects?

No, that is incorrect. As a technology, Dolby Atmos can support hundreds of simultaneous objects.

That being said, we fall back on sage advice from developers of some of the first Atmos games: objects are a fantastic tool, but restraint should be shown with respect to the number of objects active at any time. Too many objects in motion can create a confusing soundscape.

Developers have also told us that avoiding the horizontal "bed" for an all-object mix is an unnecessarily time-consuming and labor-intensive effort. So far, developers are creating next-generation mixes by blending bed audio and object audio. More is good, but more may not necessarily be "better."

 

Bo_Hazem

Banned

Pure damage control: "we support it, but show restraint, but it makes confusion" 🤷‍♂️ How about Sony just dropping Dolby Vision (HDR10+ instead) and Atmos from their TVs, pushing the prices down a bit, and using the Tempest Engine and making it compatible with any headphones, like the approach on PS5?
 
Did you watch RedGamingTech's video yet where he compares the GPUs of both systems? I timestamped it for you:


I got some things from it, I think.
People's main mistake seems to be comparing the PS5 with PCs, starting from the variable frequency and not taking into account the cache and the effect of such high clock rates being actually sustainable thanks to cooling we know nothing about yet.
Well, this guy actually seems to listen to the lead architect of the console to judge how the console that the lead architect built will function.
This is incredible.
 

Aceofspades

Banned
And don't forget the crazy ass SSD which will get PS5 to 15 TF equivalent at the very least baby.

Let's be honest here, nobody said that the GPU deficit, albeit very small, will magically disappear because of the SSD. What people are saying is that the PS5 actually has areas that can mitigate its overall performance deficit vs the XsX and give it advantages in some areas. For example:

- The powerful 3D audio in the PS5 can free resources from the GPU/CPU. Bear in mind that we don't know how the XsX audio engine compares to Tempest; it might be even more capable, but we don't know (I doubt that personally).

- The SSD, as much as people here try to dismiss it, is the biggest jump seen in console history: we moved from 50MB/s to 2.4GB/s in the XsX and more than double that at 5.5GB/s in the PS5. Clearly the PS5 has a huge advantage in this area, as seen in lots of dev comments, and it's a fact that a faster SSD can translate into faster streaming, faster loading and even better fps, as demonstrated by DF and others.
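The generational jump in raw throughput, using those same figures:

LAST_GEN_MB_S = 50
for name, gb_s in (("XsX", 2.4), ("PS5", 5.5)):
    print(f"{name}: ~{gb_s * 1000 / LAST_GEN_MB_S:.0f}x last gen's effective drive speed")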

The I/O in the PS5 is a BEAST: silicon more capable than 10 Zen 2 CPU cores, designed to keep bottlenecks to an absolute minimum. Sony should be commended for that design.

Nobody is trying to dismiss the Xbox GPU advantage, numbers don't lie, but the fact is the GPU advantage is way smaller than this gen's, and once you reach 4K and beyond a 15% advantage is hardly, if ever, noticeable. Also, MS for some reason (I assume to keep thermals in check) opted for this weird two-speed RAM setup, reducing their bandwidth advantage.

Sony was laser-focused on creating a beast machine revolving around their blazing-fast SSD, and they also managed to fit a beast GPU and consistent RAM speeds into a smaller APU.

Also, we still need to know more about the XsX's other parts so we can start to compare and contrast the specs. All in all, we are in for a fantastic console generation.
 

Leskov

Neo Member
I think you didn't read it all, which is ok. But if you didn't get it then it applies to all PS5 games, it's systematic.
Your input implies that multiplat game design will be different for Xbox and PS5 - so, for example, next year's CoD or Resident Evil will have different game design for the sake of utilising the PS5's SSD and the overall power of the Xbox (and there will be no games for Xbox Series X that aren't also released on the base Xbox One, at least for the first year). Did I get this right?
And I'm sure that multiplat games will be designed around the weakest possible hardware, which implies orienting on the capabilities of a 5400rpm HDD. We know that there won't be games on Xbox Series X that aren't playable on the Xbox One, at least in 2021.
So, aside from loading times, how do you see the utilisation of the PS5's SSD capabilities in multiplat games when game design will clearly be limited by the capabilities of a 5400rpm HDD, at least for the games released in 2021?
 

Lone Wolf

Member
If you use the slower RAM at all, the whole bandwidth goes down to the slower speed

So you have 2 choices: faster bandwidth limited to 10 GB of RAM for games

slower bandwidth with 16 GB of RAM (13.5 GB for games)
Right, so 10GB at 560, used for the GPU.
3.5GB at 336, used for CPU, audio etc.
2.5GB at 336, reserved for the system OS.

We all know that only the GPU can take full advantage of the 560. The 336 is overkill for what it's used for (the Xbox One X has less than 336GB/s total, by the way). Unless the 10GB is not enough and starves the GPU, it's not going to be a problem. Also, we have no idea how the PS5 uses its memory.
 

SamWeb

Member
If you use the slower RAM at all, the whole bandwidth goes down to the slower speed

So you have 2 choices: faster bandwidth limited to 10 GB of RAM for games

slower bandwidth with 16 GB of RAM (13.5 GB for games)
"Microsoft's solution for the memory sub-system saw it deliver a curious 320-bit interface, with ten 14gbps GDDR6 modules on the mainboard - six 2GB and four 1GB chips. How this all splits out for the developer is fascinating." https://www.eurogamer.net/articles/digitalfoundry-2020-inside-xbox-series-x-full-specs

All memory chips run at the same speed; they just share a memory bus that is split asymmetrically.
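A sketch of how the asymmetric regions fall out of that chip layout (ten 14 Gbps chips at 32 bits each on a 320-bit bus; six 2 GB + four 1 GB):

GB_S_PER_CHIP = 14 * 32 / 8  # 56 GB/s per 32-bit GDDR6 chip at 14 Gbps
CHIPS_TOTAL = 10             # six 2 GB + four 1 GB
CHIPS_2GB = 6                # only these carry the "extra" 6 GB

print(f"Interleaved 10 GB region: {GB_S_PER_CHIP * CHIPS_TOTAL:.0f} GB/s")  # 560
print(f"Upper 6 GB region: {GB_S_PER_CHIP * CHIPS_2GB:.0f} GB/s")           # 336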
 