
OK, time to ask: we have SSDs and still we get pop-in.

01011001

Banned
So it could not be done on PS4 as it is, glad we agree ;).

not as is, they would most likely reduce some of the details in fast transitions like that crystal level, and some of the setpieces would most likely have longer transitions between dimensions.

in order to speed up some of the more VRAM-hungry transitions they would maybe need to replace unique textures/assets with duplicated ones.

but in general I don't think it would be too much of an issue to port it.

maybe it's even less of a change needed than one might think, it's hard to tell without knowing what the game loads when and how fast it has to load it.
 
The whole point of DirectStorage and the PS5's custom I/O block is basically to alleviate strain on the CPU.

(image: GeForce RTX 30 series RTX IO announcement slide)


Until Devs start using GPU de/compression on PC.
Well, if they use GPU decompression, it is going to take away GPU resources, which could still potentially cause framerate drops during gameplay. Those drops could be less impactful than what we see in Spider-Man on PC, though, depending on the GPU headroom.

On PS5, I/O (done properly) takes no resources on either the CPU or the GPU.
 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
Well, if they use GPU decompression, it is going to take away GPU resources, which could still potentially cause framerate drops during gameplay. Those drops could be less impactful than what we see in Spider-Man on PC, though, depending on the GPU headroom.

On PS5, I/O (done properly) takes no resources on either the CPU or the GPU.
DirectStorage/RTXIO would be implemented asynchronously.
Just like current de/compression is async, your CPU doesn't stop doing everything else to focus on de/compression; just like the actual graphics/render queue, de/comp would fit within the budget, and part of the DirectStorage API is to help devs not have to work "too hard" on figuring out how to avoid heavy penalties to their graphics/render queue.
Second of all, the current build of GDeflate (Microsoft's algorithm) does some 6 gigs in less than a second while near completely freeing the CPU in the process; devs aren't exactly going to shut down the GPU to do that de/comp.
Devs still have to plan fallbacks for PCs that have slower drives, but the hit to the GPU is minimal compared to the hit the CPU takes when it needs to do de/compression.
Third, I take it you didn't know that both AMD and Nvidia have dedicated async copy/compute capabilities?
(image: DirectStorage 1.1 GPU decompression slide)
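To illustrate the async idea in the abstract, here's a toy Python sketch: zlib stands in for GDeflate, and a CPU worker thread stands in for the GPU's copy/compute queues. The point is only that decompression is submitted and the "render loop" keeps running instead of blocking on it.

```python
# Toy sketch of async decompression: a worker inflates a compressed
# asset blob while the main loop keeps iterating. zlib is a stand-in
# for GDeflate; real GPU decomp would sit on copy/compute queues.
import zlib
from concurrent.futures import ThreadPoolExecutor

def load_asset_async(pool, blob):
    # Submit decompression and return immediately; the caller
    # never blocks waiting for the data.
    return pool.submit(zlib.decompress, blob)

def main():
    asset = zlib.compress(b"texture data " * 1000)
    with ThreadPoolExecutor(max_workers=2) as pool:
        future = load_asset_async(pool, asset)
        frames = 0
        while not future.done():   # simulated render loop keeps running
            frames += 1
        data = future.result()     # asset is ready, swap it in
    assert data == b"texture data " * 1000

main()
```

Obviously a real engine submits these requests through the DirectStorage queue API rather than Python threads; this just shows the scheduling shape.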
 
I'm playing Cyberpunk with a Ryzen 7 and I'm pretty impressed with the draw distance.


But if you drive 200 km/h on the highway, yeah, the engine can't keep up.
 

Beechos

Member
nah, if that day ever comes I sure hope someone connects a stock PS4 HDD via a USB adaptor and runs it on that.

that's the first thing I'd do, mainly because I literally have a stock PS4 HDD connected to my PC via USB 🤣
(when I bought my PS4 Fat I instantly removed the stock HDD and replaced it with a 2TB one. I did the same with my Pro, whose stock HDD is now my Emulation drive on the Series X)

I uploaded a video in an older thread where I ran the Matrix UE5 demo on that drive... and it ran basically the same as on my internal Samsung SSD, maybe some small load stutters.
but the initial load took ages lol

I believe it, I can see that port never happening.
 

Beechos

Member
play Titanfall 2's time travel level.

2 different level layouts loaded in at the same time, hit the crystal, your character gets teleported to the other version of the level.

both versions of the level use similar assets, which saves VRAM, and the destroyed version uses sparsely placed terrain up close instead of the far more complex geometry of the intact version, meaning the destroyed version most likely needs way less VRAM.

so one way you could do that is to have hidden background streaming of the needed assets from the other version every time you get close to one of the crystals.
the fact that you can't just switch any time you want makes this all way easier than if you could switch at any moment, like in Titanfall 2 for example.



Rift Apart can most likely have more detail in that level than it could have on a PS4 for example, but such a level can be done on last gen. it would simply need to be pared back a bit like any other game too.
smaller textures, some detail dialed back, and maybe a tiny transition animation if absolutely necessary.
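That proximity-triggered streaming could be sketched roughly like this (every name here is invented for illustration; a real engine would enqueue GPU uploads rather than call a Python function):

```python
# Minimal sketch of proximity-triggered background streaming: when the
# player gets within range of a crystal, kick off a background load of
# the other dimension's assets so the swap can be instant.
import math

PREFETCH_RADIUS = 50.0  # hypothetical trigger distance, world units

def update_streaming(player_pos, crystals, loaded, load_dimension):
    """Call once per frame; load_dimension(id) starts an async load."""
    for crystal in crystals:
        if crystal["dimension"] not in loaded and \
           math.dist(player_pos, crystal["pos"]) < PREFETCH_RADIUS:
            load_dimension(crystal["dimension"])
            loaded.add(crystal["dimension"])

# Tiny usage example
loaded, requests = set(), []
crystals = [{"pos": (100.0, 0.0, 0.0), "dimension": "destroyed"}]
update_streaming((200.0, 0.0, 0.0), crystals, loaded, requests.append)
assert requests == []                 # too far away, nothing loads
update_streaming((120.0, 0.0, 0.0), crystals, loaded, requests.append)
assert requests == ["destroyed"]      # in range: prefetch kicks off
```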

Totally forgot about this, man. The Titanfall 2 campaign was one of the GOATs.
 
My personal pet hate is the texture pop-in you get when starting a new level or loading into a saved game. All it would take to fix that is to display the loading screen or a black screen for 1-3 seconds longer before showing the main game screen. Heck, surely there is a way for the engine to report when all the assets are loaded, so the screen is not displayed until then? Problem solved.

This is especially annoying in Unreal Engine 4 games, which already have their own issues such as shader compilation stutter on PC in DX12 games and those white texture glitches (geometry culling) you see when turning the camera quickly. All of these contribute to making the games feel far less polished than they should be, in my opinion.
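The "don't show the game until assets are resident" idea is simple to sketch. This is a hypothetical shape, not any engine's real API: hold the loading screen until every pending asset reports ready, with a timeout so a stuck asset can't hang the game forever.

```python
# Sketch: gate the loading screen on asset residency. The Asset class
# and its 'ready' flag are invented for this example.
import time

class Asset:
    def __init__(self, name):
        self.name = name
        self.ready = False

def hold_loading_screen(assets, timeout=3.0, poll=lambda: None):
    """Return True once all assets are ready, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        poll()  # let the (simulated) streaming system make progress
        if all(a.ready for a in assets):
            return True
    return False

assets = [Asset("hero_diffuse"), Asset("hero_normal")]
def fake_streamer():            # stand-in for the real texture streamer
    for a in assets:
        a.ready = True

assert hold_loading_screen(assets, timeout=1.0, poll=fake_streamer) is True
```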
 
DirectStorage/RTXIO would be implemented asynchronously.
Just like current de/compression is async, your CPU doesn't stop doing everything else to focus on de/compression; just like the actual graphics/render queue, de/comp would fit within the budget, and part of the DirectStorage API is to help devs not have to work "too hard" on figuring out how to avoid heavy penalties to their graphics/render queue.
Second of all, the current build of GDeflate (Microsoft's algorithm) does some 6 gigs in less than a second while near completely freeing the CPU in the process; devs aren't exactly going to shut down the GPU to do that de/comp.
Devs still have to plan fallbacks for PCs that have slower drives, but the hit to the GPU is minimal compared to the hit the CPU takes when it needs to do de/compression.
Third, I take it you didn't know that both AMD and Nvidia have dedicated async copy/compute capabilities?

(image: DirectStorage 1.1 GPU decompression slide)
I agree that the GPU hit would be small, but it would still take GPU resources (and async compute is already used pretty extensively in modern engines on consoles since the PS4 gen).

GPU decompression will be ideal for traditional loading of static (or semi-static, like tight crawl spaces) content. But during gameplay in, say, a very CPU/GPU-demanding open-world game like Spider-Man or Cyberpunk, the GPU resources used to decompress data will be subtracted from the usual image rendering (lighting, RT, textures, etc.).

That won't be a big problem on PC (people will buy bigger GPUs instead of bigger CPUs), but on consoles with much weaker GPUs (and CPUs) the resources (including async jobs) are usually already pretty much maxed out in many modern engines, so GPU decompression will still have a substantial cost.

In the future, the way to go for decompressing data (and storing it in VRAM) is clearly dedicated hardware (bypassing the CPU and GPU, like on PS5), as I/O is clearly becoming the main bottleneck in big open-world games.
 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
I agree that the GPU hit would be small, but it would still take GPU resources (and async compute is already used pretty extensively in modern engines on consoles since the PS4 gen).

GPU decompression will be ideal for traditional loading of static (or semi-static, like tight crawl spaces) content. But during gameplay in, say, a very CPU/GPU-demanding open-world game like Spider-Man or Cyberpunk, the GPU resources used to decompress data will be subtracted from the usual image rendering (lighting, RT, textures, etc.).

That won't be a big problem on PC (people will buy bigger GPUs instead of bigger CPUs), but on consoles with much weaker GPUs (and CPUs) the resources (including async jobs) are usually already pretty much maxed out in many modern engines, so GPU decompression will still have a substantial cost.

In the future, the way to go for decompressing data (and storing it in VRAM) is clearly dedicated hardware (bypassing the CPU and GPU, like on PS5), as I/O is clearly becoming the main bottleneck in big open-world games.
No user can tell how much resource is being taken away from the current workload by any sort of async compute.


If you have even a millisecond of free time waiting for the next job, use it.
There is no way game engines are job-complete, as in they never give the GPU any sort of downtime. The engine or CPU itself couldn't possibly be constantly loading the GPU, otherwise we would see GPU temps go as high as they go in FurMark. (I don't think any game takes GPUs that high.)

The user-facing stats are almost always averages, so just because your GPU is constantly hovering at 99% usage in Afterburner, that doesn't actually mean it's "maxed out".
Part of the reason people are leaning towards GPU decomp is that GPUs are immensely good at parallel workloads; the GPU might give up 1 frame per second while near completely freeing up the CPU.
Would you rather the CPU drop frames to 40fps, or the GPU drop one frame from 60 to 59 when it is fully loaded?
Note that as of right now, with the earliest implementation, GDeflate is doing 6GB/s.
Do you really think Spider-Man, or any game soonish, is going to require anywhere near that?
Spider-Man is, like, what, 300MB/s?
That would be so negligible to the GPU we could scarcely say it even existed as a job to begin with.
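The back-of-the-envelope math behind that claim, taking the two ballpark figures above at face value (~6 GB/s sustained GDeflate throughput, ~300 MB/s streaming demand):

```python
# If GPU decompression sustains ~6 GB/s and a heavy open-world streamer
# asks for ~300 MB/s, how much of each 60 fps frame does decompression
# actually occupy? Both rates are the rough figures quoted above.
GDEFLATE_RATE = 6_000   # MB/s
STREAM_RATE = 300       # MB/s
FRAME_MS = 1000 / 60    # ~16.67 ms per frame at 60 fps

mb_per_frame = STREAM_RATE / 60                  # data needed per frame
decomp_ms = mb_per_frame / GDEFLATE_RATE * 1000  # time to inflate it

print(f"{mb_per_frame:.1f} MB/frame, ~{decomp_ms:.3f} ms "
      f"of each {FRAME_MS:.1f} ms frame")
assert decomp_ms < 1.0   # well under 1 ms of the frame budget
```

So under these assumptions decompression costs under a millisecond per frame, roughly 5% of a 60 fps frame budget at worst.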
 

RoadHazard

Gold Member


The video is timestamped at 28:42. This is just before Ratchet enters the portal.



28:47. Ratchet enters a disguised loading screen where you literally just ride on one rail and can't do anything else.



28:51. Ratchet leaves the rail set piece and enters another portal.



28:54. Ratchet enters another disguised loading screen where you just slide down a wall.



28:58. Ratchet enters another portal.



29:02. Ratchet yet again enters another disguised loading screen where you have virtually no movement or control over your character whatsoever.



29:06. Ratchet enters another portal.



29:08. Yet again, another disguised loading screen...



29:15. Another portal...



29:18. Another disguised loading screen where you can at least move about and fight, but you're placed onto a puny little boat fighting against 4 pirates.

https://youtu.be/beaH_CCw-vA?t=1767

29:27. Fight ends, cutscene begins.

https://youtu.be/beaH_CCw-vA?t=1941

32:21. Finally you are free and not in a scripted on-rails set piece anymore. The game has finally loaded a level.


So you believe all the assets in what you call "disguised loading screens" just magically appear and don't need to be loaded? Those scenes are quite detailed; the fact that you have limited movement control has nothing to do with how much data needs to be loaded. They are simply flashy on-rails sequences.

The PS5 SSD is fast enough to fill the entire RAM with data in about two seconds, so why would what you are claiming even be needed? The loading happens while you're floating in the "void" between each section.
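That two-second figure checks out against Sony's published numbers (5.5 GB/s raw SSD throughput, typically 8-9 GB/s effective with Kraken compression) and the PS5's 16 GB of unified memory:

```python
# Sanity check: how long does it take the PS5 SSD to deliver a full
# 16 GB of unified memory? Rates are Sony's stated figures; 8 GB/s is
# taken as a conservative compressed-throughput estimate.
RAM_GB = 16
RAW_GBPS = 5.5
EFFECTIVE_GBPS = 8.0

print(f"raw: {RAM_GB / RAW_GBPS:.1f} s, "
      f"compressed: {RAM_GB / EFFECTIVE_GBPS:.1f} s")
# → raw: 2.9 s, compressed: 2.0 s
assert RAM_GB / EFFECTIVE_GBPS == 2.0
```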
 

RoadHazard

Gold Member
And what if devs don't take advantage of it, which is my point? Doesn't it then become CPU-related?

Yes, which is a (probably THE) reason why we still see pretty long load times in games that haven't been made to take advantage of the new APIs (just like PS4 games running on PS5).

Games like Demon's Souls and Miles Morales show what happens when you DO take advantage of that stuff (2 second loading fade-outs).
 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
And what if devs don't take advantage of it, which is my point? Doesn't it then become CPU-related?
What are you talking about, man?

Multi-quote so I can at least follow your train of thought. You are randomly talking shit about RTX 2060s and RTX 3070s and CPU-related issues on consoles, tasks being offloaded to other parts... what the fuck, man?






Do you mean console devs not using the PS5's I/O block and the XSX's DirectStorage decomp?
If the devs are already within the budget of their scope, you can't force them to do anything.
If they aren't dropping frames due to CPU decomp, then that's that; they will use CPU decomp.
If a dev doesn't mind a loading screen... guess what, we get a loading screen.
If an engine currently can't maximize the I/O of current-gen consoles but a dev still forces super-high-speed sections, pop-in is likely to happen.
It is very much possible to actually outrun an engine.
 
What are you talking about, man?

Multi-quote so I can at least follow your train of thought. You are randomly talking shit about RTX 2060s and RTX 3070s and CPU-related issues on consoles, tasks being offloaded to other parts... what the fuck, man?






Do you mean console devs not using the PS5's I/O block and the XSX's DirectStorage decomp?
If the devs are already within the budget of their scope, you can't force them to do anything.
If they aren't dropping frames due to CPU decomp, then that's that; they will use CPU decomp.
If a dev doesn't mind a loading screen... guess what, we get a loading screen.
If an engine currently can't maximize the I/O of current-gen consoles but a dev still forces super-high-speed sections, pop-in is likely to happen.
It is very much possible to actually outrun an engine.
You are aware this is a different thread from the one talking about the PC version of Spider-Man, right?
 
Yes, which is a (probably THE) reason why we still see pretty long load times in games that haven't been made to take advantage of the new APIs (just like PS4 games running on PS5).

Games like Demon's Souls and Miles Morales show what happens when you DO take advantage of that stuff (2 second loading fade-outs).
Makes sense
 
Various Sony devs and third-party shills all kept jerking their dicks off about how it was going to be the biggest gaming leap since 3D and revolutionise game design completely. So far all we've seen is fast loading times.

But it is!! It's just that game developer engine technology needs to catch up and that currently takes a LONG time.

We're not talking about something like RT, where it's enabled by a few lines of code in the renderer.

The game design revolution that in-built SSDs will enable will require a fundamental, ground-up rewrite of the asset streaming tech that modern game engines are built on. That can't happen overnight, and especially not when devs are working on getting ongoing projects out the door.

When they do update their tech, though, the console SSDs, dedicated decompression hardware and I/O advancements, together with GPU primitive/mesh shaders, will result in some truly breathtaking game worlds... worlds no longer limited by RAM capacity. With virtualized texture streaming, mesh streaming and direct streaming of other high-fidelity assets, we can, in theory, have game worlds ridiculously larger and more detailed than anything we've ever seen before.

You wouldn't even necessarily be limited anymore by the space on-disc, as you could conceivably have game assets streamed on-demand directly from the cloud and rendered locally on your hardware. Meaning games could be positively colossal, ever-expanding through a GaaS model, and devs would never have to worry about gamers running out of local disc drive space.
 
I blame people vastly overhyping the SSD.
There was an element of fanboys trying to use the PS5 SSD as some sort of graphics equaliser over the XSX's higher specs.
Remember the "phat pipes" horseshit?
I also think Sony overhyped it. If I remember correctly, Cerny said it would reduce loading times to one second, and yet it takes longer than that.

Don't get me wrong, I'm glad these consoles have SSDs. I don't think I could ever go back to the horrendous loading times of the last generation.
However, outside of loading times, nothing shown thus far couldn't be done on an HDD. The rifts in R&C were done in Fortnite on HDDs.
GG said they couldn't do flying in the original Horizon game due to the HDD, yet Horizon 2 had flying on the PS4.
Every game still has the same level designs as older games.
I expected the benefits of the SSD to be seen first on PS5, as it's all hardware-based, while the XSX has more software parts to its XVA and would need more time to be exploited.
At this point, even a demo from MS or Sony showing what can be done with the SSD would be awesome.
 