
SSD and loading times demystified

Entroyp

Member
I went ahead and began reading the 2017 patents. This could potentially be a game changer (no pun intended). My interest in seeing this system in action went through the roof.
 

hyperbertha

Member
Yes and no. It depends on how you design your workload.
I can give you a simple example: tech demos. Why do tech demos look that good? Because all the RAM and GPU power is used only to render that demo and nothing else.
Fast data streaming aims to achieve the same perf balance for every game. I.e. you should have in each frame only the data that's used for that frame. Nothing else.
How can a frame have data that's NOT used in that frame? Do you mean the GPU should only have in its memory the data that's used in that frame and nothing else?
Also you didn't answer how the content in that memory affects framerate. For instance super high poly meshes.
 

psorcerer

Banned
Do you mean the GPU should only have in its memory the data that's used in that frame and nothing else?

In the ideal case - yes.

Also you didn't answer how the content in that memory affects framerate. For instance super high poly meshes.

It doesn't.


This demo was totally interactive on the exhibition floor (you could move camera, change assets).
But we never got to similar in-game quality for any PS4 game (maybe apart from Detroit, which was pretty close).
Which means that "amount of high quality assets in frame" was not a problem, the problem was "how to get it there".
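To make the "only the data used in that frame" idea concrete, here's a toy sketch of a per-frame streaming loop (everything in it is made up for illustration, none of these are actual PS5 figures): evict whatever the next frame doesn't reference, load what's missing, and the real limit becomes how many bytes the drive can deliver inside one frame.

```python
# Toy model of per-frame asset streaming. All numbers are hypothetical.
FRAME_TIME_S = 1 / 30                       # target frame time
SSD_BYTES_PER_S = 5 * 2**30                 # assumed drive throughput: 5 GB/s
BUDGET_PER_FRAME = SSD_BYTES_PER_S * FRAME_TIME_S

resident = {}                               # asset_id -> size currently in RAM

def stream_frame(needed):
    """needed: dict of asset_id -> size (bytes) required by the next frame."""
    # Ideal case from above: keep only what the next frame actually uses.
    for asset in list(resident):
        if asset not in needed:
            del resident[asset]
    # Load what's missing, limited by what the drive can deliver in one frame.
    budget = BUDGET_PER_FRAME
    for asset, size in needed.items():
        if asset in resident:
            continue
        if size > budget:
            return False                    # didn't make it in time: pop-in or stall
        resident[asset] = size
        budget -= size
    return True
```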
 

hyperbertha

Member
It doesn't.

Which means that "amount of high quality assets in frame" was not a problem, the problem was "how to get it there".
Okay so what is it that causes frame rate drops? And how does an increase in resolution cause an increase in memory usage and drops in frame rate?
 

hyperbertha

Member
GPU -> CPU sync points, mostly.



An increase in resolution should not cause frame drops.
It will just make everything slower.
The good thing is that console games are built with a specific resolution as a target.
So it's accounted for.
:/ I'm talking about the hit you get to your frame rate when you turn up resolution, model detail, etc. in PC games. Obviously those things cause the GPU to push out fewer frames at higher quality. I'd like to understand the mechanism behind this.
 

psorcerer

Banned
:/ I'm talking about the hit you get to your frame rate when you turn up resolution, model detail, etc. in PC games. Obviously those things cause the GPU to push out fewer frames at higher quality. I'd like to understand the mechanism behind this.

It's specific to the game.
Different games have different bottlenecks in different places.
But if we talk only about resolution, at the same settings, we can surely see that 2x the resolution means roughly 2x the GPU compute is needed to render the frame, or 2x the time is needed to render that frame with the same amount of compute.
It's not that simple (there are a lot of other factors; sometimes your bottleneck can even move to the CPU, for instance), but roughly it is.
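As a back-of-the-envelope example (my own illustrative numbers): going from 1080p to 4K is 4x the pixels, so a purely fill/shading-bound game would need roughly 4x the compute, or 4x the frame time on the same GPU.

```python
# Back-of-the-envelope resolution scaling for a purely GPU-bound game.
# Illustrative only; real games rarely scale this cleanly.
def pixels(w, h):
    return w * h

scale = pixels(3840, 2160) / pixels(1920, 1080)   # 4.0x the pixels

base_frame_ms = 16.6                              # assumed: 60 fps at 1080p
naive_4k_ms = base_frame_ms * scale
print(f"pixel ratio: {scale:.1f}x")
print(f"same GPU at 4K: ~{naive_4k_ms:.1f} ms/frame (~{1000 / naive_4k_ms:.0f} fps)")
print(f"or ~{scale:.1f}x the compute to keep {1000 / base_frame_ms:.0f} fps")
```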
 

hyperbertha

Member
It's specific to the game.
Different games have different bottlenecks in different places.
But if we talk only about resolution, at the same settings, we can surely see that 2x the resolution means roughly 2x the GPU compute is needed to render the frame, or 2x the time is needed to render that frame with the same amount of compute.
It's not that simple (there are a lot of other factors; sometimes your bottleneck can even move to the CPU, for instance), but roughly it is.
And what about model quality? A super-high-polycount model needs more floating-point operations to render, I presume.
 

vpance

Member
In the ideal case - yes.



It doesn't.


This demo was totally interactive on the exhibition floor (you could move camera, change assets).
But we never got to similar in-game quality for any PS4 game (maybe apart from Detroit, which was pretty close).
Which means that "amount of high quality assets in frame" was not a problem, the problem was "how to get it there".


A PS4 plus a standard/custom SSD would've made for some crazy-looking games.
 

psorcerer

Banned
And what about model quality? A super-high-polycount model needs more floating-point operations to render, I presume.

Not really, that's why deferred shading was born.
And there are even more tricks this gen: geometry shaders, mesh shaders, TSS.
It gets tricky when you have a lot of polys that are smaller than one pixel on screen.
But even that can be dealt with this gen, thanks to the extensively programmable culling on RDNA2.
I hope geometry detail will not be a problem at all.
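A toy cost model of why deferred shading helps here (all costs are arbitrary units I made up, just to show the shape of it): the expensive lighting loop runs once per screen pixel, so it doesn't care how many polygons or how much overdraw produced those pixels.

```python
# Toy cost model: deferred shading decouples lighting cost from geometry.
# All costs are arbitrary units, purely for illustration.
PIXELS = 1920 * 1080
LIGHTS = 100
OVERDRAW = 3                                   # assumed average overdraw
fragments = PIXELS * OVERDRAW

def forward_cost(frags, lights):
    # Forward: every rasterized fragment runs the full lighting loop,
    # so dense geometry and overdraw multiply the lighting work.
    return frags * lights

def deferred_cost(frags, lights):
    # Deferred: the geometry pass only writes the G-buffer (cheap per fragment),
    # then lighting runs once per screen pixel regardless of polycount.
    return frags * 1 + PIXELS * lights

print("forward :", forward_cost(fragments, LIGHTS))
print("deferred:", deferred_cost(fragments, LIGHTS))
```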
 
psorcerer, do you think we will see better crowds in games? Oftentimes you see the same character models repeated throughout, or how zombie models in hordes (or zombies in general) are heavily repeated. Crowds in sports games often have the same models and animations repeated.

Is this currently a limitation of hardware, or a limitation of dev time and money, or both?

Do you see this changing with next gen?
 

psorcerer

Banned
psorcerer, do you think we will see better crowds in games? Oftentimes you see the same character models repeated throughout, or how zombie models in hordes (or zombies in general) are heavily repeated. Crowds in sports games often have the same models and animations repeated.

Is this currently a limitation of hardware, or a limitation of dev time and money, or both?

Do you see this changing with next gen?

I think it depends on the game.
Handling crowds is a complicated topic, because you need to animate them, you need to move them, they need to have some AI.
Etc. etc.
I would suspect that most of the problems come from there and not from the repeated assets or the graphics pipeline.
There will be more CPU and GPU power for all of that though, so I hope it will improve.
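Rough budget math for why it's usually the per-agent CPU work (animation, movement, AI) that caps crowds rather than the repeated meshes (all costs are placeholders I made up):

```python
# Rough crowd budget math. All costs are made-up placeholders.
FRAME_BUDGET_MS = 16.6        # 60 fps
OTHER_CPU_WORK_MS = 12.0      # gameplay, physics, render submission, ...
PER_AGENT_MS = 0.002          # anim update + movement + simple AI per agent

crowd_ms = FRAME_BUDGET_MS - OTHER_CPU_WORK_MS
max_agents = int(crowd_ms / PER_AGENT_MS)
print(f"~{max_agents} agents fit in the remaining {crowd_ms:.1f} ms")
# Drawing the same mesh N times is comparatively cheap (instancing);
# it's this per-agent CPU work that tends to cap crowd size and variety.
```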
 
I think it depends on the game.
Handling crowds is a complicated topic, because you need to animate them, you need to move them, they need to have some AI.
Etc. etc.
I would suspect that most of the problems come from there and not from the repeated assets or the graphics pipeline.
There will be more CPU and GPU power for all of that though, so I hope it will improve.
Playing the RE2 and RE3 demos I did notice the limits in the zombies. MLB The Show has gotten better over the years, but I sometimes see the same character throughout the crowd, all in sync with their umpire-hatred animation lol.

I'm sure there are character models that have higher priority than others tho.
 

StevenPF

Neo Member
The game size is ~40GB, you have 6.5GB of usable RAM, you cannot load the whole game, even if you tried.

Thank you for your message, it was really informative. Even if it doesn't change the explanation, it seems to me that the PlayStation 4 has "ONLY" 5GB of usable RAM for games, not 6.5GB.
 
[In] PS5 we have GPU cache scrubbers.

[•••] on PS5 the GPU can render from 8GB of game data. And XSeX - only from 2.5GB

So in any given scene, potentially, PS5 can have 2x to 3x more details/textures/assets than XSeX.
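For what it's worth, the "2x to 3x" is just the ratio of the two figures quoted above:

```python
# Ratio of the two renderable pools quoted above.
print(f"{8.0 / 2.5:.1f}x")   # ~3.2x
```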

Xbox die hards:

 

vdopey

Member
Yeah, but you don't need to virtualize the OS to do that. It would make more overhead.
It could be that the kernel uses only 80MB, but you have to take into account the caches and other structures used by it, which eat much more ram than that.

The OS can't be suspended, the user interface could be. But you don't need to run it inside a container to do that.

UNIX has a monolithic kernel, but Windows does as well (Windows NT is monolithic with some benefits of a microkernel). So Microsoft could do the same as you are suggesting. I doubt either would be doing it anyway.
You're kind of changing the roles of OS and games here. The OS is the one that must have the control to decide what runs and what not.
An OS that allows a program to take over the system has failed in its basic function of managing the system resources.
If you let developers control which application/process can or cannot be used while their game is played, expect the worst.
The OS moves programs or processes out of memory when they aren't needed, it has been like this since the beginning, but an address space reserved for OS is necessary, so the kernel uses it without fighting with other programs for memory. Letting the kernel and user space fight each other for memory leads to a big penalty in performance.

The kernel uses at most, let's say, 100MB; you have to understand how Unix works, I guess. So it's not just about the kernel. The kernel manages the hardware; above that in Unix/FreeBSD is the init system (in Linux that's now systemd, I won't get into any specific details); then come the service startups, which are managed by the init system; and finally the user interface, which is just another service. (There's more to it, like the initramfs the bootloader uses, etc., but that's too much detail for here.)

In Windows land it's straight from kernel to user interface. I stopped using Windows many years ago, so I don't know the specifics of how their boot loader works, but you can't, for example, run Windows without the user interface; even the Windows Server edition without a UI actually boots to a user interface that does nothing, it doesn't drop you down to a shell.

The reason I mentioned virtualisation is that this is how the PS3 worked; that's how they could support booting another OS (aka Linux), but with the Nvidia GPU inaccessible. I assume the PS4 and PS5 (will) work the same, probably using something like bhyve: the boot loader boots the FreeBSD-based Orbis, or whatever it's called, which then boots the PlayStation user interface, but I believe this is done in a virtual instance. Even Microsoft is talking about virtualised instances for backwards compatibility, so obviously they are using Hyper-V for this.

Essentially, virtualisation is used to get fine-grained control over how many CPU threads, how much RAM, etc. each side gets; it might even be BSD jails (look at Docker and cgroups to see what's available on Linux; FreeBSD has jails, though I don't know if they have an updated version of this now). One advantage PlayStation has by using Unix is the extremely low overhead of the actual OS. I could go into a lot of detail on this, but I reckon Sony have built the new UI from the ground up to be able to suspend to SSD and back, while exposing stuff like party chat through APIs; an API doesn't need a UI, so there's no UI overhead for game devs to access specific hardware and OS features. For Microsoft to do this it would be a lot of work, hence why they haven't done it. It's not that they can't (they certainly can, it's software), but it's a lot of work for them.

I can't really be bothered to speculate on this any more; we will know when Sony releases more info, they haven't mentioned this. We do know for a fact Microsoft aren't doing it, at least not yet, because they specifically mentioned the amount of RAM reserved for Windows and one Zen CPU core (i.e. two threads) reserved.
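For anyone curious what "reserve cores and RAM for the game" looks like in generic Unix terms, here's a minimal Linux-flavoured sketch using ordinary mechanisms (CPU affinity plus rlimits). Purely illustrative: it's not how Sony or Microsoft actually partition their consoles, and the core count, RAM cap, and "./the_game" binary are all hypothetical.

```python
# Minimal sketch: pin a child process to "game" cores and cap its memory.
# Linux-only APIs; purely illustrative, not actual console behaviour.
import os
import resource
import subprocess

GAME_CORES = {1, 2, 3, 4, 5, 6, 7}   # hypothetical: core 0 kept for the OS
GAME_RAM_BYTES = 13 * 2**30          # hypothetical: 13 GB cap for the game

def limit_child():
    # Runs in the child before exec: restrict it to the game cores and
    # cap its address space so it can't eat into the OS reservation.
    os.sched_setaffinity(0, GAME_CORES)
    resource.setrlimit(resource.RLIMIT_AS, (GAME_RAM_BYTES, GAME_RAM_BYTES))

subprocess.run(["./the_game"], preexec_fn=limit_child)   # "./the_game" is a placeholder
```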
 

rnlval

Member
I'm not sure what the difference is. You're expecting passive cooling in the PS5?
My gaming PC's CPU is liquid-cooled, while the GPU is cooled by its own cooling solution, with five 14 cm case fans (two top, two front, one bottom). A comparison with the PS5 is laughable.

 