
Next-Gen PS5 & XSX |OT| Console tEch threaD


Audiophile

Member
SSD: 825GB
5.5GB/s Read Bandwidth (Raw)


Internal Storage: Custom 825GB SSD
IO Throughput: 5.5GB/s (Raw), Typical 8-9GB/s (Compressed)

The 8-9GB/s is "Typical".

22GB/s, as mentioned above, is a best-case scenario for Kraken compression.
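As a rough sanity check on those figures, the effective speeds are just the raw bandwidth multiplied by a compression ratio (a minimal sketch; the ~1.6x typical and 4x best-case ratios are back-calculated from the quoted numbers, not official specs):

```python
# Effective throughput = raw read bandwidth x compression ratio.
# The ratios below are back-calculated from the quoted figures, not official numbers.
RAW_GBPS = 5.5  # PS5 raw SSD read speed

for label, ratio in [("typical Kraken", 1.6), ("best-case Kraken", 4.0)]:
    print(f"{label}: {RAW_GBPS} GB/s x {ratio} = {RAW_GBPS * ratio:.1f} GB/s")
# typical Kraken: 5.5 GB/s x 1.6 = 8.8 GB/s   (the "typical 8-9GB/s")
# best-case Kraken: 5.5 GB/s x 4.0 = 22.0 GB/s (the 22GB/s peak)
```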
 

OrionNebula

Member
[attached image]

What's this? I'm having flashbacks of the page 1200s.
 
Because it can reach up to 22GB/s, with devs seeing 20GB/s in action, faster than DDR4 RAM at ~15GB/s. That effectively makes it VRAM you can run your OS directly from, instead of wasting 2.5GB of RAM like in XSX. Plus you need to load and calculate far fewer assets, as you can upload/offload up to 22GB/s, i.e. over 2GB per 0.1 seconds!!! Assuming you're playing DOOM and turning insanely fast.

It's so huge that normal people like us are still struggling to comprehend it until we see some demo in action.
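To put those numbers in per-frame terms, here's a quick back-of-the-envelope sketch (purely illustrative; it assumes the 22GB/s peak is sustained, which is the best case):

```python
# Rough per-frame streaming budget at a given bandwidth and frame rate.
def per_frame_mb(bandwidth_gbps, fps):
    return bandwidth_gbps * 1000 / fps   # GB/s -> MB per rendered frame

for fps in (30, 60):
    print(f"{fps} fps: {per_frame_mb(22.0, fps):.0f} MB of (compressed) data per frame")
# 30 fps: 733 MB per frame
# 60 fps: 367 MB per frame
```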

Think about it like the Horizon Zero Dawn technique but on steroids:




Making you deal with MUCH, MUCH less work on the GPU/CPU/RAM.


Not quite; NX Gamer mentions the possibility that the majority of the OS data could be moved to the SSD and kept there for storage, while keeping just the most essential tasks in the RAM pool. The idea is that you could quickly swap the OS data out to NAND if a large chunk of it isn't needed for a decent stretch of time, then, when it's needed again, clear out space in RAM and repopulate it with that OS data. Both systems should be able to utilize this, but XSX might be more limited in its approach due to its slower drive and (likely) lower drive bandwidth.
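As a toy illustration of that swap idea (an assumed LRU-style paging policy, not how either console's OS actually works), something like:

```python
# Toy model: keep only recently used OS pages in RAM, evict the rest to the SSD.
from collections import OrderedDict

RAM_BUDGET_PAGES = 4          # tiny budget, purely for illustration

ram = OrderedDict()           # page name -> data, most-recently-used last
ssd = {}                      # evicted pages live here

def touch(page):
    """Access an OS page, faulting it back in from the SSD if needed."""
    if page in ram:
        ram.move_to_end(page)                  # mark as recently used
    else:
        ram[page] = ssd.pop(page, b"\x00" * 4096)
        if len(ram) > RAM_BUDGET_PAGES:        # over budget: swap one page out
            victim, data = ram.popitem(last=False)
            ssd[victim] = data                 # a fast SSD makes this swap cheap
    return ram[page]

for p in ["ui", "store", "capture", "party", "ui", "updater"]:
    touch(p)
print(list(ram))  # ['capture', 'party', 'ui', 'updater'] -- 'store' swapped out
```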

That all said, I doubt you could run the OS off the SSD, mainly because the SSD uses NAND, and the OS needs bit- and byte-level read/modify/write capability to run sufficiently. NAND doesn't provide that. PS4 had a NOR flash chip and I'd suspect PS5 has one as well; I wouldn't be surprised if XSX does, too. It's a great way of putting some OS code there to allow for XIP (Execute-In-Place), plus NOR flash data can be read at the bit level, which is actually finer-grained than volatile memory (byte-level), let alone NAND (page-level).

However, the main issue with NOR is its insanely slow write speed (compared to NAND), so whatever OS data lives there would stay there permanently, except for things like firmware updates. Rather than OS file data, the consoles would probably use it for BIOS/UEFI settings.
 

SonGoku

Member
in action through Kraken, but the main spec of the SSD is 8-9GB/s without Kraken.
8-9GB/s is the typical output the decompression block can produce from Kraken-compressed data.
22GB/s is for data that compresses that well.
Raw output (without Kraken) is 5.5GB/s.
However, the results of Project Acoustics might replace audio RT. They make a representation model of the game environment and do a very complex wave simulation on Azure servers. They then simplify this into a usable sound model for use in the actual game. I imagine the Acoustics engine will have dedicated hardware to utilise this model in-game.
I think you are confusing hardware engines with software engines.
Xbox will have an audio chip and PA might control it, but PA functionality/features go beyond the audio chip (i.e. audio RT).
I imagine the Acoustics engine will have dedicated hardware to utilise this model in-game
PA will likely use the same dedicated RT hardware used for visuals; RT hardware is the most suited to handle it. Audio RT will barely tax dedicated RT hw.
Why make a dedicated audio RT chip when you already have dedicated RT hardware that can handle audio and graphics just as efficiently?

Because other than that, I'm afraid this whole fuss around 3D audio will be just a huge market miscalculation, where few people will actually see (hear) its benefits.
The whole point of Sony's approach is that everyone can enjoy it, with headphones being the gold standard.
just mindlessly pursued his personal vision, just like Kutaragi did with Cell.
You are talking as if 3D audio sacrificed other aspects of the console or made it more expensive.
The dedicated audio block amounts to one CU's worth of die space; it's essentially a huge upgrade, penalty-free.
 

Audiophile

Member
SSDs are for loading content. They do not replace teraflops, GPUs, CPUs, RAM, bandwidth, etc.

The SSD doesn't load and compress/decompress data by itself, it needs something to tell it what to do and to carry out those tasks. That something would usually be a CPU.

The additional, dedicated hardware that performs this task is equivalent to x amount of Zen 2 cores. This frees up the CPU to do something else.


No one is saying the SSD replaces the other things, but it does allow more efficient use of some of them. And it allows devs to fundamentally design games in a different way, which can lead to greater experiences for players.

This is true of both boxes, but more so the PS5.


The XSX is more powerful in terms of graphics compute due to its wider CU setup, and while the differences aren't as great as the TF numbers suggest (because of the clockspeed differences), it will still likely hold a roughly 10% lead in resolution/fx. Everyone has their own set of priorities, and some think the small trade-off Sony have made in TF is worth the gains made elsewhere.
 
The XSX is more powerful in terms of graphics compute due to its wider CU setup, and while the differences aren't as great as the TF numbers suggest (because of the clockspeed differences), it will still likely hold a roughly 10% lead in resolution/fx. Everyone has their own set of priorities, and some think the small trade-off Sony have made in TF is worth the gains made elsewhere.
No sir. You're not mentioning that the PS5's clocks are variable. It's now a 10% difference?
 
SSDs are for loading content. They do not replace teraflops, GPUs, CPUs, RAM, bandwidth, etc.
Actually, this one somehow can. With the GPU cache scrubber tech, they can load data without stopping the current GPU job, so the GPU becomes more efficient compared to XSX's streaming tech.

Also, Kraken decompression completely bypasses the CPU, while each XSX Velocity task will use some of the CPU for I/O.
 

Disco_

Member
Either way, I prefer easy, simple replacement of rechargeable batteries vs. taking the controller apart.
I prefer not dumping batteries into the trash. Even Eneloops only last so long, and I have a feeling you're the type of person that doesn't use rechargeables.
 

scie

Member
I am not suggesting that the engine will do audio RT. That can be done by the RT cores.

However, the results of Project Acoustics might replace audio RT. They make a representation model of the game environment and do a very complex wave simulation on Azure servers. They then simplify this into a usable sound model for use in the actual game. I imagine the Acoustics engine will have dedicated hardware to utilise this model in-game.

They bake the audio model similar to how some games bake lighting by using offline calculations.
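To make the baking idea concrete, here's a minimal sketch of what the runtime side of a baked acoustic model could look like (the data layout, names, and numbers are hypothetical illustrations, not Project Acoustics' actual API):

```python
# Hypothetical runtime lookup into a baked acoustics model.
# Offline, a wave simulation precomputes parameters (occlusion, reverb time, etc.)
# for pairs of listener/source probe points; at runtime we just look them up.
from dataclasses import dataclass

@dataclass
class AcousticParams:
    occlusion_db: float    # how much the geometry attenuates the direct path
    reverb_seconds: float  # decay time of the simulated reverb

# Baked table keyed by (listener_probe_id, source_probe_id); filled offline.
baked = {
    (0, 3): AcousticParams(occlusion_db=-6.0, reverb_seconds=1.2),
}

def apply_acoustics(listener_probe, source_probe, dry_gain):
    """Cheap realtime step: look up baked params and attenuate the source."""
    params = baked.get((listener_probe, source_probe),
                       AcousticParams(0.0, 0.5))    # fallback if unbaked
    return dry_gain * 10 ** (params.occlusion_db / 20)  # dB -> linear gain

print(apply_acoustics(0, 3, dry_gain=1.0))  # ~0.50: 6 dB of baked occlusion
```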

Oh, you mean something like "prerendering the sound" on their servers and then baking the result into the game, so the sound chip (or whatever it will be) just needs to encode it?
 
I prefer not dumping batteries into the trash. Even Eneloops only last so long, and I have a feeling you're the type of person that doesn't use rechargeables.
I do use rechargeables. Not Eneloops, though. I bought 6 Duracell rechargeable batteries and swap them out; they've lasted me over a year at least. No screwdriver necessary.
 

Audiophile

Member
No sir. You're not mentioning that the PS5's clocks are variable. It's now a 10% difference?

10% difference in real-world performance, not TF.

The TF difference is 16%. Cerny quotes a 2% clockspeed reduction to cut power consumption by 10%; let's go worst case based on what we know so far and double that to 4%. So, a 20% TF advantage to XSX.

The PS5's clock speed is running 17-23% faster than the XSX's. The XSX still has more CUs, but it's easier for devs to schedule jobs for fewer, faster CUs, and those CUs will work more efficiently (data can get in and out quicker, less time waiting for other elements of the GPU, etc.). Outside of the CUs, the rest of the GPU front end and back end is also running faster. Bringing all these factors together, it's fair to expect the gap to be mitigated by roughly half.
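For anyone who wants to check the arithmetic behind those percentages, a quick sketch using the announced figures (the 4% clock drop is the worst-case assumption from above, not a confirmed number):

```python
XSX_TF = 12.15   # 52 CUs at 1825 MHz
PS5_TF = 10.28   # 36 CUs at up to 2230 MHz

print(f"PS5 clock advantage: {2230 / 1825 - 1:.1%}")                 # ~22.2% at full boost
print(f"PS5 TF deficit at full clocks: {1 - PS5_TF / XSX_TF:.1%}")   # ~15.4%
ps5_worst = PS5_TF * 0.96  # assumed worst-case 4% clock reduction
print(f"PS5 TF deficit with 4% clock drop: {1 - ps5_worst / XSX_TF:.1%}")  # ~18.8%
```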

Expect PS5 to have an advantage in pixel rate and cache speed, and XSX to have an advantage in texture rate. XSX will maintain a compute advantage due to its width, but the gap will be mitigated somewhat by the benefits of a higher clockspeed.
 
Explanation by AMD on how SmartShift works.

A 10% increase in performance when you enable the SmartShift tech.


According to All Might (is he a dev?) in a very detailed post, SmartShift ain't gonna do much in reality, maybe a few percent, even if it's "up to 10%".
Otherwise SmartShift alone could reduce the gap in raw performance between the two GPUs to almost nothing, and I've stopped believing in fancy names doing fancy tricks. At that point RT would be the only practical difference.
Let's see.
 
It won't heat up since it can't go over the fixed power budget. If a particular CPU instruction needs more power than it's being given at the time, it will either:
A) Run CPU at lower frequency to accommodate the increased power consumption of the load/instruction
B) Siphon power from the GPU (causing GPU frequency to drop) to run the instruction without dropping CPU frequency
C) Siphon power from the GPU (switch GPU to a lighter load while retaining frequency) to run the instruction without dropping CPU frequency

In scenario A) GPU runs at max frequency and CPU runs at lower frequency
In scenario B) GPU runs at lower frequency and CPU runs at max frequency
In scenario C) both GPU & CPU run at max frequency

This is all down to the devs, their own choice and design, and of course there can also be in-between scenarios; I just explained the extremes (see the sketch below).
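A minimal sketch of the fixed-budget idea described above (purely illustrative; the wattages and the allocation policy are invented, not AMD's or Sony's actual algorithm):

```python
# Toy model of a fixed total power budget shared between CPU and GPU.
# When one side needs more power, the other gives some up (its frequency drops),
# mirroring scenarios A and B above; scenario C would lighten the GPU's
# workload instead of lowering its clock.
TOTAL_BUDGET_W = 200.0  # made-up total SoC power budget

def allocate(cpu_demand_w, gpu_demand_w, prefer_cpu):
    """Return (cpu_w, gpu_w) so the total never exceeds the budget."""
    if cpu_demand_w + gpu_demand_w <= TOTAL_BUDGET_W:
        return cpu_demand_w, gpu_demand_w      # both run at max frequency
    if prefer_cpu:   # scenario B: siphon power from the GPU
        cpu_w = min(cpu_demand_w, TOTAL_BUDGET_W)
        return cpu_w, TOTAL_BUDGET_W - cpu_w
    else:            # scenario A: CPU clocks down, GPU stays at max
        gpu_w = min(gpu_demand_w, TOTAL_BUDGET_W)
        return TOTAL_BUDGET_W - gpu_w, gpu_w

print(allocate(60, 160, prefer_cpu=True))   # (60, 140): GPU gives up 20 W
print(allocate(60, 160, prefer_cpu=False))  # (40, 160): CPU gives up 20 W
```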

Thanks for clarifying that; it about sums up what Cerny was getting at. The variable frequency isn't a boost clock like some people are trying to say, nor an indication that there'll ever be a drop anywhere near the 9.2TF the Oberon revisions were at.

That said, it sounds like it isn't as automatically handled by the system as I was thinking it'd be. This could create a pretty steep learning curve for developers (3rd-party in particular) to wrap their heads around when optimizing for the variable loads. Hopefully Sony will be able to lend resources to them (including sufficient dev tools) to ease the manual management of it.
 

SgtCaffran

Member
I think you are confusing hardware engines with software engines.
Xbox will have an audio chip and PA might control it, but PA functionality/features go beyond the audio chip (i.e. audio RT).

PA will likely use the same dedicated RT hardware used for visuals; RT hardware is the most suited to handle it. Audio RT will barely tax dedicated RT hw.
Why make a dedicated audio RT chip when you already have dedicated RT hardware that can handle audio and graphics just as efficiently?
I'm not confusing hardware and software. The goal of Project Acoustics is quite simple: use offline simulation (Azure servers) to create a simplified model (baking) of room audio (reflections, reverb, dampening, etc.) that can be easily applied in-game in realtime.

So this technique relies on two parts:
- Creating the audio model. This is done by developers during game development with offline tools.
- Using the audio model. This is done in realtime by the PC or console, either by the CPU or the GPU; I'm not sure which.

For the Series X, Microsoft have probably designed a hardware audio chip that can efficiently handle the audio model so the CPU/GPU don't have to. Again, this technique is a replacement for audio RT, not an implementation of it. Using the simplified audio model for each game environment, you don't actually need audio RT for realistic audio.

- The XSX could do audio RT, same as PS5
- If developers use Project Acoustics, they don't have to use audio RT
- The hardware for Project Acoustics in the XSX does not do audio RT
- The hardware for Project Acoustics will run the baked audio models

Oh, you mean something like "prerendering the sound" on their servers and then baking the result into the game, so the sound chip (or whatever it will be) just needs to encode it?
Exactly!
 

M-V2

Member
According to All Might (is he a dev?) in a very detailed post, SmartShift ain't gonna do much in reality, maybe a few percent, even if it's "up to 10%".
Otherwise SmartShift alone could reduce the gap in raw performance between the two GPUs to almost nothing, and I've stopped believing in fancy names doing fancy tricks. At that point RT would be the only practical difference.
Let's see.
There's nothing fancy about the tech. I don't know who All Might is, but that's coming from AMD and they showed an example. Now, how is that going to affect real-world performance? Idk, yet to be seen.
 

ethomaz

Banned
SSD: 825GB
5.5GB/s Read Bandwidth (Raw)


Internal Storage: Custom 825GB SSD
IO Throughput: 5.5GB/s (Raw), Typical 8-9GB/s (Compressed)
Typical.

It can reach up to 22GB/s with Kraken compression.
 

SonGoku

Member
That said, it sounds like it isn't as automatically handled by the system as I was thinking it'd be. This could create a pretty steep learning curve for developers (3rd-party in particular) to wrap their heads around when optimizing for the variable loads. Hopefully Sony will be able to lend resources to them (including sufficient dev tools) to ease the manual management of it.
There might be some automation at play where devs set loads and priorities and the system takes care of the rest; they can then make fine adjustments to balance resolution and performance.
The specifics are unknown, but I would assume the learning curve isn't steep, considering time-to-triangle is lower than on PS4.

edit: Also, going by Cerny's comment about the GPU/CPU staying at or very close to their capped frequencies most of the time, I would assume the edge cases that require fine-grained adjustment of power allocation are few and far between.
 
There's nothing fancy about the tech. I don't know who All Might is, but that's coming from AMD and they showed an example. Now, how is that going to affect real-world performance? Idk, yet to be seen.
With "fancy" I mean that 10% can be less, just asking how possible is to get such a boost with this. For the long answer I will see the video I guess.
 
Just an update, since I can't edit my last post.





Make of that what you will; it's from an ex-Sony first-party game designer.

Thanks for the call, really, but "staggering" is way exaggerated to me. Unless SeX has some crazy software that PS5 doesn't and boosts performance by that much, the difference in near-photorealistic games is gonna be very, VERY slim.
 
He hasn't been a game developer for 12 years, his Twitter is all about Xbox, and it has been posted many times already.
You do know that he probably still talks to ex-coworkers in the industry, right? Hence "I've chatted with a few devs". And he makes fun of fanboys on both sides.

Alex from DF is also an Xbox fanboy, I take it?
 
He hasn't been a game developer for 12 years, his Twitter is all about Xbox, and it has been posted many times already.
So he's an Xbox fanboy, therefore his industry experience and clear work history are automatically discounted? You people are worse than Alex Jones for conspiracy-level bullshit.
 
According to All Might (is he a dev?) in a very detailed post, SmartShift ain't gonna do much in reality, maybe a few percent, even if it's "up to 10%".
Otherwise SmartShift alone could reduce the gap in raw performance between the two GPUs to almost nothing, and I've stopped believing in fancy names doing fancy tricks. At that point RT would be the only practical difference.
Let's see.
10.3TF was the peak performance with SmartShift. What are you going on about?
 