
DF: Unreal Engine 5 Matrix City Sample PC Analysis: The Cost of Next-Gen Rendering

Guilty_AI

Member
It's impossible to do it dynamically without either sacrificing much of the graphical quality and/or accepting much longer load times that disrupt the gameplay.
Just because you could theoretically do it with stick-figure graphics doesn't mean it's an actual option....
The last R&C game as it is would not have been possible without an SSD. That's a simple, developer-verified fact, and trying to discuss it away is kinda silly......
There you go, the heart of the issue: people really exaggerating the cutbacks that would be necessary to make it possible.

Here, this game loads its entire maps into RAM. This one specifically is 16 km², though there are fan-made ones as big as 120 km². Naturally you can insta-teleport anywhere in those maps.




Now tell me, does this look like "stick-figure graphics" to you?
 
Last edited:

Really digging on that music.
 

Ev1L AuRoN

Member
True. It's still one of the biggest issues with most UE games.
Curious how The Coalition was able to make Gears 5 run super smoothly, with no stutter from either shader compilation or asset streaming.
But Epic seems to still have trouble with this......
I don't understand how that is still an issue nowadays. Why not include the cached shaders with the game, or at the very least give us an option to process all the shading data before playing? It's fucking annoying having a high-end setup and still having to deal with stutters and bad frame pacing in modern titles.
 

8BiTw0LF

Banned
C'mon, OF COURSE. Just look at Unreal 5. We know that's the near future for many, many games, and if this video doesn't prove that 30 fps is the target for consoles (especially when high-end PCs can't hit 60), then there's no way we'll see 60 fps in Unreal 5 games on console. Maybe there will be some secondary mode without Lumen or Nanite that could hit 60, but then the game will barely resemble the quality mode. I don't even know if a performance mode without Nanite is possible in UE5.
It's a tech demo that's not optimized for the thousands upon thousands of PC configs!

UE5 is extremely scalable and already decently optimized for current-gen consoles - just look at the demo, lol. All they need to do is scale things down and it will hit a rock-solid 60 fps.
 

Haggard

Banned
So "My bad, i was wrong but really really don't want to admit it". Oh you're such a tsundere.
You should talk to the developers over at insomniac. Obviously you know better than their engine designers.
Gee if all that packed immense knowledge here on gaf would just somehow reach all those poor stupid incompetent developers out there.....

[image: Simpsons "politicians" meme]
 
Last edited:

Guilty_AI

Member
You should talk to the developers over at Insomniac. Obviously you know better than their engine designers.
Gee, if all that immense knowledge packed here on GAF would just somehow reach all those poor, stupid, incompetent developers out there.....

[image: Simpsons "politicians" meme]
So another person incapable of interpreting comments, it seems 🤷‍♂️. Well, just stay in your dreamland of magical Cernys and flowery SSDs.

 

winjer

Gold Member
I don't understand how that is still an issue nowadays. Why not include the cached shaders with the game, or at the very least give us an option to process all the shading data before playing? It's fucking annoying having a high-end setup and still having to deal with stutters and bad frame pacing in modern titles.

They can't include the cached shaders on PC with low-level APIs like DX12 and Vulkan. These shaders are compiled by the driver, and since there are so many different GPUs on the market, and users run different driver versions, it's impossible.
On consoles it is possible, because consoles have fixed hardware that devs can target. Also, games on console run in a kind of sandbox, where the dev can pin the driver version that was used when they made the game. So when someone runs a game on a console, it runs against the same driver version the shaders were compiled for.

On PC there are solutions, like doing a big initial compile workload: it takes a few minutes, but after that the game won't stutter from compiling shaders.
Other games compile shaders during loading, but just for that level.
Steam with Proton, using Vulkan, now has an option to share compiled shaders. This means one gamer can compile the shaders for a game and upload them, and then people with the same GPU and driver can download that compiled cache.
It would be possible for Nvidia, AMD and Intel to do the same: an online database of compiled shaders, updated and shared by gamers.
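
To make the PC-side solution concrete, here's a minimal C++ sketch of the standard Vulkan pipeline-cache pattern; the file path and function names are mine and purely illustrative, not from any shipped game:

```cpp
// Sketch: persist driver-compiled shader state between runs via Vulkan's
// pipeline cache. On first run the cache file doesn't exist, so the driver
// compiles everything; on later runs the blob is fed back and compilation
// is mostly skipped.
#include <vulkan/vulkan.h>
#include <fstream>
#include <vector>

VkPipelineCache loadPipelineCache(VkDevice device, const char* path) {
    std::vector<char> blob;
    std::ifstream in(path, std::ios::binary);
    if (in)
        blob.assign(std::istreambuf_iterator<char>(in), {});

    VkPipelineCacheCreateInfo info{VK_STRUCTURE_TYPE_PIPELINE_CACHE_CREATE_INFO};
    info.initialDataSize = blob.size();                  // 0 on first run
    info.pInitialData    = blob.empty() ? nullptr : blob.data();
    // The driver checks a GPU/driver UUID in the blob header and silently
    // ignores data compiled for other hardware -- hence no universal cache
    // the publisher could ship on disc.
    VkPipelineCache cache = VK_NULL_HANDLE;
    vkCreatePipelineCache(device, &info, nullptr, &cache);
    return cache;  // pass to every vkCreateGraphicsPipelines() call
}

void savePipelineCache(VkDevice device, VkPipelineCache cache, const char* path) {
    size_t size = 0;
    vkGetPipelineCacheData(device, cache, &size, nullptr);   // query size
    std::vector<char> blob(size);
    vkGetPipelineCacheData(device, cache, &size, blob.data());
    std::ofstream(path, std::ios::binary).write(blob.data(), size);
}
```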
 

Shmunter

Member
They can't include the cached shaders on PC with low-level APIs like DX12 and Vulkan. These shaders are compiled by the driver, and since there are so many different GPUs on the market, and users run different driver versions, it's impossible.
On consoles it is possible, because consoles have fixed hardware that devs can target. Also, games on console run in a kind of sandbox, where the dev can pin the driver version that was used when they made the game. So when someone runs a game on a console, it runs against the same driver version the shaders were compiled for.

On PC there are solutions, like doing a big initial compile workload: it takes a few minutes, but after that the game won't stutter from compiling shaders.
Other games compile shaders during loading, but just for that level.
Steam with Proton, using Vulkan, now has an option to share compiled shaders. This means one gamer can compile the shaders for a game and upload them, and then people with the same GPU and driver can download that compiled cache.
It would be possible for Nvidia, AMD and Intel to do the same: an online database of compiled shaders, updated and shared by gamers.
Pre-compilation seems like a no-brainer. Unless there is more to it, due to some games having dynamic environments/assets that you cannot pre-empt?
 

Clear

CliffyB's Cock Holster
Indeed. The point of it all is that, from a dev perspective, any and all I/O restrictions are gone, *poof*. No workarounds, no cutbacks, no special design decisions around bottlenecks, no balancing acts. Devs are free to harness near-instant asset loading as they please.

Maybe they do see it as magical after all. 💁

The PS5's custom I/O block is doing a lot more than simply facilitating fast data throughput. It's been stated numerous times that its major advantage is that it can seamlessly transform input data without external intervention, which from a programming standpoint is pretty "magical".

Data is rarely used raw; it needs some manipulation to be changed from what is optimal for storage space and access speed into a memory location and format best suited for CPU and GPU usage. Yes, such tasks are traditionally offloaded to a worker thread on the main processing unit, but there are two crucial downsides: the more data, the greater the load on whatever's doing the transformation, and the more the memory bus and caches are in danger of becoming saturated while it handles the transfer.

The latter is particularly interesting because it's the area that is hardest to up-spec. Thrash the caches and everything takes a hit, which in turn lowers overall performance as tolerances need to be adjusted across the board. Timing differences that are minuscule per operation accumulate fast when we're talking about millions of them per frame.

At the end of the day we're always going to run into critical-section scenarios, where task C cannot start until prep tasks A and B have completed. No matter how fast C can execute, it still has to sit there waiting for A and B before it does its thing. So ultimately, when we're looking at a modular system like a PC, where different processing units can perform at radically different levels depending on build and connectivity, and at a tech like UE5, which clearly depends on all parts contributing together (it's drawing more stuff because of the "intelligence" with which it presents its data), a PS5-like solution is demonstrably the most efficient.
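
That critical-section idea is easy to sketch. A toy C++ illustration under my own assumptions (the decompress step is a stand-in for whatever codec an engine really uses; none of this is from any actual engine):

```cpp
// Toy model of the A -> B -> C chain: C (GPU upload) can't start until
// A (disk read) and B (decompress/transform) finish, however fast C is.
// On PS5 the equivalent of B runs on dedicated I/O hardware instead of
// eating CPU time and cache/memory bandwidth on a worker thread.
#include <fstream>
#include <future>
#include <vector>

// Task A: pull the packed chunk off storage.
std::vector<char> readChunkFromDisk(const char* path) {
    std::ifstream in(path, std::ios::binary);
    return {std::istreambuf_iterator<char>(in), {}};
}

// Task B: placeholder for a real codec (zlib/Oodle/...); its real cost
// scales with the amount of data streamed per frame.
std::vector<char> decompress(std::vector<char> packed) {
    return packed;
}

// Task C: hand the ready data to the GPU (stubbed out here).
void uploadToGpu(const std::vector<char>&) {}

void streamAsset(const char* path) {
    // A and B run off the main thread; streaming more data per frame means
    // more CPU load and more memory-bus traffic from this worker.
    auto ready = std::async(std::launch::async,
                            [path] { return decompress(readChunkFromDisk(path)); });
    // ... frame work continues while A and B execute ...
    uploadToGpu(ready.get());  // C sits here until A and B have completed
}

int main() { streamAsset("chunk.bin"); }
```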
 

01011001

Banned
So, with all the arguing going on with NXGamer, I'll just add to the discussion by presenting my performance test here.
Note that the Nvidia performance overlay doesn't get captured by... ShadowPlay... ironically... so while recording the video, I took screenshots that DO show the overlay for some reason... so yeah, here we go:

Settings are all on 3 (the defaults, basically), crowd and car density are at 100, I'm using a version that supports DLSS, and I'm running DLSS Quality, simply because that frees up GPU resources.

my Hardware:

Ryzen 5600X
GeForce RTX 3060 Ti TUF
16 GB 3200 MHz DDR4
The SSD I ran it from is a Kingston SA2000M81000G (my OS drive; I just plonked the demo folder onto my desktop lol)

Screenshots first; I tried to get one where I drive at full speed into a car.


VIDEO TIMESTAMP: ~0:11
[screenshot: Nvidia performance overlay]


VIDEO TIMESTAMP: ~0:17
[screenshot: Nvidia performance overlay]



And here is the video of the gameplay these screenshots were taken from:



There are still a few compilation stutters, it seems. But then again, I haven't played the demo that much on PC, at least not this specific build.
The lowest framerate I saw without compilation stutters was about 29 fps, I think.


I'll try recording with the Xbox Game Bar, maybe that will show the Nvidia overlay lol

edit: indeed it does... Nvidia, get your shit together! LOL

 
Last edited:

OC that memory please
Thanks for the great response.
 

winjer

Gold Member
Pre-compilation seems like a no-brainer. Unless there is more to it, due to some games having dynamic environments/assets that you cannot pre-empt?

There are several games on PC that use low-level APIs and don't have shader compilation issues.
Gears 5 and Doom Eternal are great examples, so it's obvious that it can be solved.
UE does have tools to compile shaders at load time, and some devs have used them. For example, Borderlands 3, when running in DX12, compiled all its shaders the first time it ran.
It took several minutes and some people complained about it, but that was much better than having stutters during gameplay.

Another thing to consider is that UE games have stutter from a second source as well: asset streaming.
This usually happens when the devs don't configure the streaming cvars correctly.
Some gamers have had some success improving their experience with tweaks like these:
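
A typical example of that kind of tweak, as an Engine.ini snippet; the cvars are standard UE4 ones, but the values here are purely illustrative, and the right numbers depend on the game and the GPU:

```ini
[SystemSettings]
; Size of the texture streaming pool, in MB (the default is often too small)
r.Streaming.PoolSize=3000
; Don't let the pool outgrow the actual VRAM
r.Streaming.LimitPoolSizeToVRAM=1
; Move package loading off the game thread
s.AsyncLoadingThreadEnabled=1
```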

 
And Spider-Man runs great on a slow HDD, so at any speed it can't be overly heavy either. You seem to just feel it should be for some reason, but there is nothing that points to that.

I'm confused. Didn't Sony have an I/O demo with Spider-Man? Basically, I think the point was to prove that going through the city extremely fast would have an impact on performance on slow drives. It's why I'm confused when you say "at any speed".
 
Last edited:
I'm confused. Didn't Sony have an I/O demo with Spider-Man? Basically, I think the point was to prove that going through the city extremely fast would have an impact on performance on slow drives. It's why I'm confused when you say "at any speed".
The argument is whether Spider-Man at high speed is heavier on data transfer than the city demo. I'm simply saying nothing points to that.
 

01011001

Banned
Anybody tried the demo on a platter HDD?

I could do that real fast... it's a USB external 2.5" drive, even! lol... it's REALLY shit; I used it for my Xbox 360's storage in the past, and now I use it for media and old games (both old PC games and ROMs).
So worst-case scenario there!

EDIT:

WOW, so the initial load is FUCKING LONG and it stutters more than usual (note that I started recording after the game had already been loading for at least 10 to 20 seconds, so add that to the loading you see).



BUT, here is in-game... it runs absolutely fine considering the drive... lol... wtf?



I literally had worse performance on that drive with last-gen games I tried to run off of it, back when storage on my SSDs was low and I had to install them on this crappy drive!
I had really bad loading stutters in RAGE 2 on it, for example, and had to move it onto one of the SSDs to play it.

it's a 2.5" USB 1TB Seagate Expansion drive, a pretty old one
 
Last edited:

SlimySnake

Flashless at the Golden Globes
Ratchet wasn't maxing it out because it wasn't doing anything that strictly required an SSD. All of its portals were either limited to rail-roaded segments (where you often couldn't even look behind you), portals that teleported you short distances, or portals at specific locations that teleported between two different areas, all of which we've seen before (not talking about Titanfall).
The last two don't even require faster storage; the third one in particular is more stressful for the GPU than anything. We've seen those in Portal games (and other Source-based games) and Prey 2006, respectively.

This is factually incorrect.

The first type of portal can be achieved with good memory management. A game with lower-quality assets (and by lower quality I don't mean it looks bad) can let you teleport between all the worlds you want, or through the entire map, as long as you can fit it all properly into RAM. For higher-quality assets you just need to load and unload them at the appropriate times (since the segment is rail-roaded, you can start loading the next world's assets as soon as the character enters a new world, since the player has no real control over when or where they'll teleport).

No, you cannot. If you fill up your RAM with assets you might need for faster loading, you are limiting what else you can put in there, which might be higher-quality assets among other things. And loading and unloading is the fucking bottleneck here. It takes a long time to load and unload those assets at 'appropriate times', which is precisely why simply attaching a 5.5 GB/s SSD to VRAM isn't enough: you need to ensure the data transfer between the two isn't bottlenecked. It's not PR speak like you say below. It's simple common sense.
No, we were not. Even if no one had thought about optimizing the I/O pipeline, be it on PS5 or other platforms, we'd still get all the 'revolutions' you're talking about merely by adopting SSDs. That's the point: it's the "magical I/O" because people are attributing far more credit to it than it actually deserves, most of the credit actually belonging to SSDs in general.

My guess is that all the shiny-eyed fans watched that Cerny talk where he compared the PS5 with the PS4 and mistakenly started believing all those super 100x speed improvements were something exclusive to the PS5. These comparisons fall much shorter when you put them against modern PCs, even without DirectStorage or whatnot.
So don't worry, nothing will be holding back the PS5's capabilities.

Eh, I wish you hadn't said that bit about shiny-eyed fans. Look, I've been willing to extend you an olive branch throughout this argument. I don't think PS devs will utilize the full 5.5 GB/s of bandwidth. I also think it's overkill. I even think PCs can do all of this by brute-forcing all the compression and decompression done by the I/O block, which according to both Sony and MS could take up to 10-15 Zen 2 cores. Clearly, PCs have faster SSDs and faster, bigger CPUs that can handle all of this.

What gives me pause is you completely dismissing specs that are almost 20 times faster than even what the Matrix demo is doing at 300 MB/s. At some point, we have to start worrying less about shiny-eyed PS fanboys and use basic common sense. Is something that is 10x faster than 500 MB/s SATA SSDs more capable than those SSDs? Of course. Is something designed to FULLY utilize that 5.5 GB/s of bandwidth better than the SATA-class speeds the Matrix demo seems to be targeting? Of course. We have seen this in action with the XSX: more teraflops don't always translate to more performance, because there are architectural bottlenecks preventing the GPU from reaching its full potential, at least in some cases. That's all Cerny was saying with his 100x SSD slide: adding more speed isn't enough if you don't improve other parts of the engine. That's all. The real magic in the PS5 is the SSD speed; everything in the I/O block is simply designed to help it achieve its fullest potential. It's why Nvidia is investing in DirectStorage. It's why MS added a bunch of decompression hardware to their APU.
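
For what it's worth, the multipliers in this post check out if you take the stated figures at face value (300 MB/s from Epic via DF; ~550 MB/s as the practical SATA ceiling):

$$\frac{5.5\ \text{GB/s}}{0.3\ \text{GB/s}} \approx 18\times, \qquad \frac{5.5\ \text{GB/s}}{0.55\ \text{GB/s}} = 10\times$$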

I can tolerate pretty much anything on these boards, but I can't stand people downplaying specs. Teraflops matter. RAM bandwidth matters. SSD speeds matter. We can see how CPU-intensive UE5 is. Now imagine if the CPU also had to do all the decompression the PS5's I/O block is doing in order to fully utilize the SSD, when and if that ever happens. I am not one of those guys who thinks the I/O will magically make the PS5's 10-teraflop GPU perform like a 20-teraflop one, but when it comes to the POTENTIAL of a game-design revolution, the PS5 has all the hardware it needs. All it needs is actual developer ambition, which is hard to come by at the moment.
 

SlimySnake

Flashless at the Golden Globes
They can't include the cached shaders on PC with low-level APIs like DX12 and Vulkan. These shaders are compiled by the driver, and since there are so many different GPUs on the market, and users run different driver versions, it's impossible.
On consoles it is possible, because consoles have fixed hardware that devs can target. Also, games on console run in a kind of sandbox, where the dev can pin the driver version that was used when they made the game. So when someone runs a game on a console, it runs against the same driver version the shaders were compiled for.

On PC there are solutions, like doing a big initial compile workload: it takes a few minutes, but after that the game won't stutter from compiling shaders.
Other games compile shaders during loading, but just for that level.
Steam with Proton, using Vulkan, now has an option to share compiled shaders. This means one gamer can compile the shaders for a game and upload them, and then people with the same GPU and driver can download that compiled cache.
It would be possible for Nvidia, AMD and Intel to do the same: an online database of compiled shaders, updated and shared by gamers.
I'd rather wait through a longer initial load (does it have to be done every time you load the game, or once per installation?) than go through the insane first five minutes of stuttering in Elden Ring whenever I load up the game and enter a previously undiscovered area.

The Matrix City Sample only stutters for me when I reinstall the drivers. After the painful first five minutes, the demo hasn't stuttered at all in the last few hours. So if it's just on the initial load, then just add it to the installation time.
 

Guilty_AI

Member
This is factually incorrect.



No, you cannot. If you fill up your RAM with assets you might need for faster loading, you are limiting what else you can put in there, which might be higher-quality assets among other things. And loading and unloading is the fucking bottleneck here. It takes a long time to load and unload those assets at 'appropriate times', which is precisely why simply attaching a 5.5 GB/s SSD to VRAM isn't enough: you need to ensure the data transfer between the two isn't bottlenecked. It's not PR speak like you say below. It's simple common sense.
A dev who worked on PS3 games that did similar things said pretty much what I said.



Eh, I wish you hadn't said that bit about shiny-eyed fans.
I didn't really mean any offense with the shiny-eyed thing. It's perfectly normal to be blinded by new tech; even seasoned veterans in the field can fall victim to that, even more so fans of a particular brand. It's just that it's important, at some point, to get off the high and look at the reality of things.

Look, I've been willing to extend you an olive branch throughout this argument. I don't think PS devs will utilize the full 5.5 GB/s of bandwidth. I also think it's overkill. I even think PCs can do all of this by brute-forcing all the compression and decompression done by the I/O block, which according to both Sony and MS could take up to 10-15 Zen 2 cores. Clearly, PCs have faster SSDs and faster, bigger CPUs that can handle all of this.

What gives me pause is you completely dismissing specs that are almost 20 times faster than even what the Matrix demo is doing at 300 MB/s. At some point, we have to start worrying less about shiny-eyed PS fanboys and use basic common sense. Is something that is 10x faster than 500 MB/s SATA SSDs more capable than those SSDs? Of course. Is something designed to FULLY utilize that 5.5 GB/s of bandwidth better than the SATA-class speeds the Matrix demo seems to be targeting? Of course. We have seen this in action with the XSX: more teraflops don't always translate to more performance, because there are architectural bottlenecks preventing the GPU from reaching its full potential, at least in some cases. That's all Cerny was saying with his 100x SSD slide: adding more speed isn't enough if you don't improve other parts of the engine. That's all. The real magic in the PS5 is the SSD speed; everything in the I/O block is simply designed to help it achieve its fullest potential. It's why Nvidia is investing in DirectStorage. It's why MS added a bunch of decompression hardware to their APU.

I can tolerate pretty much anything on these boards, but I can't stand people downplaying specs. Teraflops matter. RAM bandwidth matters. SSD speeds matter. We can see how CPU-intensive UE5 is. Now imagine if the CPU also had to do all the decompression the PS5's I/O block is doing in order to fully utilize the SSD, when and if that ever happens. I am not one of those guys who thinks the I/O will magically make the PS5's 10-teraflop GPU perform like a 20-teraflop one, but when it comes to the POTENTIAL of a game-design revolution, the PS5 has all the hardware it needs. All it needs is actual developer ambition, which is hard to come by at the moment.
I'm not downplaying or dismissing any specs. What I'm saying is that others are overplaying them way too much.
I mean, none of the things you are saying happened overnight either; you're even comparing the PS5's SSD speeds with SATA SSDs, despite the fact that much faster NVMe drives have been available on the market for quite some time, at very good prices.

The architectural improvements you mentioned are also just that: improvements. Increasing the efficiency of components that were already there even before the PS5 got announced, one of the many gradual improvements tech makes over time, much like the switch from HDDs to SSDs, or before that from SATA 1 to SATA 2 HDDs. It won't shake the world of game design as you think it will; it'll improve performance and make the life of devs easier, and in the case of PS5-specific architecture, future-proof it somewhat so it won't be too far behind the inevitable new tech that comes.
 
Last edited:

SlimySnake

Flashless at the Golden Globes
And Spider-Man runs great on a slow HDD, so at any speed it can't be overly heavy either. You seem to just feel it should be for some reason, but there is nothing that points to that.
No, it does not. It literally stops and starts.


[gifs: Spider-Man fast-traversal hitching]


We know Epic confirmed to DF that the Matrix demo tops out at 300 MB/s. The PS5 is capable of 5.5 GB/s. When talking about the POTENTIAL of the tech, it is quite obvious that the PS5 should be able to do better than the 300 MB/s asset-streaming requirement of the Matrix city demo.

I really don't know why people have to downplay specs. Downplay developer talent, blame lack of ambition, or even say it will never be fully utilized, but there is zero point in downplaying specs. The potential is there and will always be there.
 
Last edited:

Guilty_AI

Member
This guy explained classic background streaming without mentioning what it entails and what the limitations are; gee, I wonder why..... Trash clickbait nonsense.
But good to know what kind of "sources" you use.....
This isn't my source; I just stumbled upon his video one day, explaining things I was already aware of. It's useful because some here love resorting to appeal-to-authority arguments to dismiss others, so I might as well get an "authority" myself, since that's easier for me than trying to deconstruct fallacies.

Besides, you're ignoring the context. We're talking about Rift Apart's game design specifically. This isn't meant to dismiss the use of SSDs or whatever; in fact it's meant to highlight that this specific game, which some really love to put on a pedestal as an example of "revolutionary SSD-centered design", can be perfectly achieved with classic background streaming, no fast loading required.
 
Last edited:

Haggard

Banned
This isn't my source; I just stumbled upon his video one day, explaining things I was already aware of. It's useful because some here love resorting to appeal-to-authority arguments to dismiss others, so I might as well get an "authority" myself, since that's easier for me than trying to deconstruct fallacies.

Besides, you're ignoring the context. We're talking about Rift Apart's game design specifically. This isn't meant to dismiss the use of SSDs or whatever; in fact it's meant to highlight that this specific game, which some really love to put on a pedestal as an example of "revolutionary SSD-centered design", can be perfectly achieved with classic background streaming, no fast loading required.
So the overall graphical quality, which accounts for basically 90% of the needed bandwidth, is not part of the game's design? K........
That's like comparing 1990 animated movies to 2022 animated movies and saying "they are both movies"..... In the end you say nothing at all while ignoring the glaring technological differences. This seems like such an incredibly nonsensical discussion.
 
Last edited:

Guilty_AI

Member
So the overall graphical quality, which accounts for basically 90% of the needed bandwidth, is not part of the game's design? K........
That's like comparing 1990 animated movies to 2022 animated movies and saying "they are both movies"..... In the end you say nothing at all while ignoring the glaring technological differences. This seems like such an incredibly nonsensical discussion.
And once again, you're really overplaying the cutbacks necessary to achieve a similar result. To get us closer to the same page: what I'm doing is more akin to comparing a 2022 animation with a 2012 one.
Maybe you could complain that a game with Sleeping Dogs graphics wouldn't be up to par, but then you're just nitpicking, since that game is by no means ugly 🤷‍♂️. It just further shows there is no such revolution, just an improvement over what came before (yay, I can now design around fast loading with Cyberpunk graphics instead of Sleeping Dogs graphics).
 
Last edited:

Haggard

Banned
In the end, you're just complaining about graphics 🤷‍♂️,
How many tiny arena levels have we had? How many overlong animations or forced squeeze-through bottlenecks to hide the background loading?
In the end you only think as far as the border of that strange nostalgia box you're sitting in, where games from 10 years ago don't look like crap in comparison to modern ones......
 
Last edited:

Guilty_AI

Member
How many tiny arena levels have we had? How many overlong animations or forced squeeze-through bottlenecks to hide the background loading?
How many seamless open worlds have we had 🤷‍♂️? How many fast-paced racing games have we driven through 🤷‍♂️🤷‍♂️🤷‍♂️? One design philosophy existing doesn't exclude the other.
 
Last edited:

Pedro Motta

Member
There you go, the heart of the issue: people really exaggerating the cutbacks that would be necessary to make it possible.

Here, this game loads its entire maps into RAM. This one specifically is 16 km², though there are fan-made ones as big as 120 km². Naturally you can insta-teleport anywhere in those maps.




Now tell me, does this look like "stick-figure graphics" to you?

Yes.
 

SlimySnake

Flashless at the Golden Globes
A dev who worked on PS3 games that did similar things said pretty much what I said.


That guy doesn't know what he's talking about. An Insomniac engineer confirmed that they were loading levels from the disk on every switch. Each level resides in RAM and is swapped out. The whole point of the SSD and I/O is to free up the RAM so you can make each level more detailed, instead of wasting RAM space on whatever they might want to load next.

Timestamped:
 

Guilty_AI

Member
That guy doesn't know what he's talking about. An Insomniac engineer confirmed that they were loading levels from the disk on every switch. Each level resides in RAM and is swapped out. The whole point of the SSD and I/O is to free up the RAM so you can make each level more detailed, instead of wasting RAM space on whatever they might want to load next.

Timestamped:

And he never denies Insomniac is doing just that. He says as much in the video and in the pinned comment.
All he's saying is that Rift Apart is designed in a way that wouldn't be impossible to replicate on a normal HDD, i.e. it can't really be considered an example of game design taking advantage of SSD speeds. Maybe at best you'd get somewhat better graphics, but this game's world assets aren't exactly the heaviest.
 
Last edited: