Ev1L AuRoN
Member
Disappointed to see shader compilation issues with UE5.
> I'm not triggered, just think whoever typed that doesn't understand the tech.

No, I'm triggered.
> It's impossible to do it dynamically without either sacrificing much of the graphical quality and/or accepting much longer load times disrupting the gameplay. Just because you could theoretically do it with stick-figure graphics doesn't mean that it's an actual option. The last R&C game as it is would not have been possible without an SSD. That's a simple and developer-verified fact, and trying to discuss this away is kinda silly.

There you go, the heart of the issue, which is people really exaggerating the cutbacks that would be necessary to make it possible.
> Disappointed to see shader compilation issues with UE5.
> Spider-Man is likely a lot more asset-heavy based on the UE5 stats, so it stresses I/O more. But indeed, the engines are completely different, a last-gen vs. next-gen sort of thing.

Well, it ran great on PS4, so it can't be that heavy.
> Now tell me, does this look like "stick-figure graphics" to you?

Give it up, you are just being silly.....
> Give it up, you are just being silly.....

I'll take that as you saying "My bad, I was wrong."
There you go, the heart of the issue, which is people really exaggerating the cutbacks that would be necessary to make it possible.
Here, this game loads its entire maps into RAM. This one specifically is 16 km², though there are fan-made ones as big as 120 km². Naturally you can insta-teleport anywhere in those maps.
Now tell me, does this look like "stick-figure graphics" to you?
> I'll take that as you saying "My bad, I was wrong."

Nope, more like: "You either have absolutely no idea what you're talking about, or you are pretending not to. Either way: not my job."
> Well, it ran great on PS4, so it can't be that heavy.

Not at 1000 km/h. Think that's the point the vid was making.
> True. It's still one of the biggest issues with most UE games.

I don't understand how that is an issue nowadays. Why not include the cached shaders with the game, or at the very least give us an option to process all the shading data before playing? It's fucking annoying having a high-end setup and having to deal with stutters and bad frame pacing in modern titles.
Curious how The Coalition was able to make Gears 5 run super smoothly, with no stutter from either shader compilation or asset streaming.
But Epic still seems to have trouble with this......
> Nope, more like: "You either have absolutely no idea what you're talking about, or you are pretending not to. Either way: not my job."

So, "My bad, I was wrong, but really really don't want to admit it." Oh, you're such a tsundere.
> C'mon, OF COURSE. Just look at Unreal 5. We know that's the near future for many, many games, and if this video doesn't prove that 30 fps is the target for console (especially when high-end PCs can't hit 60), then there's no way we'll see 60 fps in Unreal 5 games on console. Maybe there will be some secondary mode that doesn't have Lumen or Nanite that could be 60, but then the game will barely resemble the quality mode. I don't even know if it will be possible to have a performance mode without Nanite in UE5.

It's a tech demo that's not optimized for the thousands upon thousands of PC configs!
> So, "My bad, I was wrong, but really really don't want to admit it." Oh, you're such a tsundere.

You should talk to the developers over at Insomniac. Obviously you know better than their engine designers.
> You should talk to the developers over at Insomniac. Obviously you know better than their engine designers.

So, another person incapable of interpreting comments, it seems. Well, just keep on in your dream land of magical Cernys and flowery SSDs.
Gee, if only all that immense knowledge packed here on GAF would somehow reach all those poor, stupid, incompetent developers out there.....
> So, another person incapable of interpreting comments, it seems. Well, just keep on in your dream land of magical Cernys and flowery SSDs.

k
> I don't understand how that is an issue nowadays. Why not include the cached shaders with the game, or at the very least give us an option to process all the shading data before playing? It's fucking annoying having a high-end setup and having to deal with stutters and bad frame pacing in modern titles.
> They can't include the cached shaders on PC with low-level APIs like DX12 and Vulkan. These shaders are compiled by the driver, and since there are so many different GPUs on the market, and users running different driver versions, it's impossible.
> On consoles it is possible, because consoles have fixed hardware that devs can target. Also, games on console run in a kind of sandbox, where the dev can pin the driver version that was used when they made the game. So when someone runs a game on a console, it runs against the same version the shaders were compiled for.
> On PC there are solutions, like doing a big initial compile workload. This will take a few minutes, but after that the game won't stutter from compiling shaders.
> Other games compile shaders during loading, but just for that level.
> Steam with Proton, using Vulkan, now has an option to share compiled shaders. This means one gamer can compile shaders for a game and upload them, and people who have the same GPU and driver can download the compiled shaders.
> It would be possible for Nvidia, AMD and Intel to do the same: an online database of compiled shaders, updated and shared by gamers.

The precompilation seems like a no-brainer. Unless there is more to it, due to some games having dynamic environments/assets that you cannot pre-empt?
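The sharing scheme described in that quote can be sketched as a cache keyed by GPU model, driver version, and a hash of the shader source. This is a toy model to show why the key must include all three, not any real driver or Steam API; all names here are made up:

```python
import hashlib

class SharedShaderCache:
    """Toy model of a shared compiled-shader database (hypothetical API).

    A compiled blob is only valid for the exact GPU + driver that produced
    it, so both must be part of the cache key.
    """

    def __init__(self):
        self._store = {}  # (gpu, driver, shader_hash) -> compiled blob

    @staticmethod
    def _key(gpu, driver, shader_source):
        shader_hash = hashlib.sha256(shader_source.encode()).hexdigest()
        return (gpu, driver, shader_hash)

    def upload(self, gpu, driver, shader_source, compiled_blob):
        # One player compiles locally and shares the result.
        self._store[self._key(gpu, driver, shader_source)] = compiled_blob

    def lookup(self, gpu, driver, shader_source):
        # A player with an identical GPU + driver can skip compilation.
        return self._store.get(self._key(gpu, driver, shader_source))

cache = SharedShaderCache()
cache.upload("RTX 3060 Ti", "531.41", "void main() {}", b"<compiled blob>")

# Same GPU and driver: cache hit, no in-game compile stutter.
assert cache.lookup("RTX 3060 Ti", "531.41", "void main() {}") == b"<compiled blob>"
# Different driver version: miss, must compile locally.
assert cache.lookup("RTX 3060 Ti", "531.79", "void main() {}") is None
```

The miss on a driver mismatch is exactly why a single shipped cache can't cover the PC market: every (GPU, driver) pair needs its own entry.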
Indeed. The point of it all is that, from a dev perspective, any and all I/O restrictions are gone, *poof*. No workarounds, no cutbacks, no special design decisions around bottlenecks, no balancing acts. Dev freedom to harness relatively instant asset loading as they please.
Maybe they do see it as magical after all.
> Not at 1000 km/h. Think that's the point the vid was making.

But how can you say it's heavier with any certainty?
> But how can you say it's heavier with any certainty?

Because seemingly UE5 is not heavy!?!
> Because seemingly UE5 is not heavy!?!

And Spider-Man runs great on a slow HDD, so at any speed it can't be overly heavy either. You seem to just feel it should be for some reason, but there is nothing that points to that.
> And Spider-Man runs great on a slow HDD, so at any speed it can't be overly heavy either. You seem to just feel it should be for some reason, but there is nothing that points to that.

k
So, with all the arguing going on with NXGamer, I'll just add to the discussion by presenting my performance test here.
Note that the Nvidia performance overlay doesn't get captured by... ShadowPlay... ironically... so while recording the video, I took screenshots that DO show the overlay for some reason... so yeah, here we go:
Settings are all on 3 (the defaults basically), crowd and car density is 100, I use a version that supports DLSS, and I'm running on DLSS quality, simply because that frees up GPU resources for me.
my Hardware:
Ryzen 5600X
Geforce 3060ti TUF
16GB 3200MHz DDR4
SSD I ran it from is a Kingston SA2000M81000G (my OS drive, I just plonked the demo folder down on my desktop lol)
Screenshots first, I tried to get one where I drive at full speed at a car
VIDEO TIMESTAMP: ~0:11
VIDEO TIMESTAMP: ~0:17
And here is the video the screenshots were taken from:
there are still a few compilation stutters it seems. but then again, I haven't played the demo that much on PC, at least not this specific build here.
the lowest framerate I saw without compilation stutters was about 29fps I think
Thanks for the great response.
The precompilation seems like a no-brainer. Unless there is more to it, due to some games having dynamic environments/assets that you cannot pre-empt?
OC that memory please
> Pretty sure beyond 3200MHz there's no tangible performance benefit in almost any game... so why should I? Getting 3 fps more in a 120+ fps game isn't worth tinkering around for, tbh.

Sorry, I meant VRAM. Video memory haha
> And Spider-Man runs great on a slow HDD, so at any speed it can't be overly heavy either. You seem to just feel it should be for some reason, but there is nothing that points to that.

> I'm confused. Didn't Sony have an I/O demo with Spider-Man? Basically I think the point was to prove that going through the city extremely fast would have an impact on performance on slow drives. It's why I'm confused when you say "at any speed".

The argument is whether Spider-Man at high speed is heavier on data transfer than the city demo. I'm simply saying nothing points to that.
Anybody tried the demo on a platter hdd?
Ratchet wasn't maxing it out because it wasn't even doing anything that strictly required SSDs. All of its portals were either limited to railroaded segments (where you often couldn't even look behind you), portals that teleported you short distances, or portals at specific locations that teleported between two different areas, all of which we've already seen before (not talking about Titanfall).
The last two don't even require faster storage; the third one in particular is more stressful for the GPU than anything. We've seen those in the Portal games (and other Source-based games) and Prey 2006, respectively.
The first type of portal can be achieved with good memory management. A game with lower-quality assets (and by lower quality I don't mean it looks bad) can let you teleport between all the worlds you want, or through all of the game's map, as long as you can fit it properly into RAM. For higher-quality assets, you just need to load and unload them at appropriate times (since the segment is railroaded, you can just start loading the next world's assets as soon as the character enters a new world, since the player has no real control of when or where he'll teleport).
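The "start loading the next world as soon as the segment begins" idea is plain background streaming. A minimal sketch of the load-ahead using a worker thread; world names and timings here are invented for illustration:

```python
import threading
import time

loaded_worlds = {}

def load_world(name, seconds):
    """Stand-in for reading a world's assets off disk (duration is made up)."""
    time.sleep(seconds)
    loaded_worlds[name] = f"assets for {name}"

# Player enters the railroaded segment: kick off the next world's load
# in the background while the current world keeps rendering.
prefetch = threading.Thread(target=load_world, args=("next_world", 0.2))
prefetch.start()

# ... the segment itself plays out, hiding the load time ...
time.sleep(0.05)  # gameplay continues meanwhile

# By the time the portal is reached, we only wait for whatever is left.
prefetch.join()
assert "next_world" in loaded_worlds
```

The trick only works because the segment is railroaded: the load can start before the teleport, and the remaining wait is hidden by the scripted traversal.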
No, we were not. Even if no one had thought about optimizing the I/O pipeline, be it on PS5 or other platforms, we'd still get all the "revolutions" you're talking about by merely adopting SSDs. That's the point; it's the "magical I/O" because people are attributing far more credit to it than it actually has, with most of the credit actually belonging to SSDs in general.
My guess is that all the shiny-eyed fans watched that Cerny talk where he compared the PS5 with the PS4 and mistakenly started believing all those super 100x speed improvements were something exclusive to the PS5. These comparisons fall much shorter when you put them against modern PCs, even without DirectStorage or whatnot.
So don't worry, nothing will be holding back the PS5's capabilities.
> All it needs is actual developer ambition, which is hard to come by at the moment.

Devs are full of ambitions. Convincing the big wigs you can make money is the hard part.
> They can't include the cached shaders on PC with low-level APIs like DX12 and Vulkan. [...]

I'd rather wait and do a longer initial load (does it have to be done every time you load the game, or once per installation?) than go through the insane first five minutes of stuttering in Elden Ring whenever I load up the game and enter a previously undiscovered area.
> This is factually incorrect.

A dev who worked on PS3 games that did similar things said pretty much what I said.
No, you cannot. If you fill up your RAM with assets you might need for faster loading, you are limiting the other things you can put in there, which might be higher-quality assets among other things. And loading and unloading is the fucking bottleneck here. It takes a long time to load and unload those assets at "appropriate times", which is precisely why simply attaching a 5.5 GB/s SSD to VRAM isn't enough; you need to ensure the data transfer between the two isn't bottlenecked. It's not PR speak like you say below. It's simple common sense.
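The bottleneck claim is easy to put rough numbers on. A worked example with ballpark sequential read speeds; the 8 GB working-set size is an illustrative assumption, and seek and decode overhead are ignored:

```python
# Time to stream an 8 GB asset working set at typical raw read speeds.
# All figures are ballpark and uncompressed.
working_set_gb = 8.0

drives_gb_per_s = {
    "HDD (~0.1 GB/s)": 0.1,
    "SATA SSD (~0.55 GB/s)": 0.55,
    "PS5 NVMe (5.5 GB/s raw)": 5.5,
}

for name, speed in drives_gb_per_s.items():
    seconds = working_set_gb / speed
    print(f"{name}: {seconds:.1f} s")

# The HDD needs ~80 s, which is impossible to hide mid-gameplay without
# cutbacks; at 5.5 GB/s the same refill takes under 1.5 s.
```

This is the "appropriate times" problem in one line of arithmetic: how far ahead you must start loading scales directly with drive speed.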
> Eh, I wish you hadn't said that bit about shiny-eyed fans.

Didn't really mean any offense with the shiny-eyed thing. It's perfectly normal to be blinded by new tech; even seasoned veterans in the field can fall victim to that, even more so fans of a particular brand. It's just that it's important to, at some point, get off the high and look at the reality of things.
> Look, I've been willing to extend you an olive branch throughout this argument. I don't think PS devs will utilize the full 5.5 GB/s of bandwidth. I also think it's overkill. I even think PCs can do all of this by brute-forcing the compression and decompression done by the I/O, which according to both Sony and MS could take up to 10-15 Zen 2 cores. Clearly, PCs have faster SSDs and faster and bigger CPUs that can handle all of this.
> What gives me pause is you completely dismissing specs that are almost 20 times faster than even what the Matrix demo is doing at 300 MB/s. At some point, we have to start worrying less about shiny-eyed PS fanboys and use basic common sense. Is something that is 10x faster than the 500 MB/s SATA SSDs more capable than those SSDs? Of course. Is something that is designed to FULLY utilize those 5.5 GB/s of bandwidth better than the SATA SSD speeds the Matrix demo seems to be targeting? Of course. We have seen this in action with the XSX. More tflops don't always translate to more performance, because there are architectural bottlenecks preventing the GPU from reaching its full potential, at least in some cases. That's all Cerny was saying with his 100x SSD slide. Adding more speed isn't enough if you don't improve other parts of the engine. That's all. The real magic in the PS5 is the SSD speed. Everything in the I/O is simply designed to help it achieve its fullest potential. It's why Nvidia is investing in DirectStorage. It's why MS added a bunch of decompression hardware to their APU.
> I can tolerate pretty much anything on these boards, but I can't stand people downplaying specs. Tflops matter. RAM bandwidth matters. SSD speeds matter. We can see how CPU-intensive UE5 is. Now imagine if the CPUs also had to do all the decompression the PS5 I/O is doing in order to fully utilize the SSD, when and if that ever happens. I am not one of those guys who think the I/O will magically make the PS5's 10 tflops GPU perform like a 20 tflops GPU, but when it comes to the POTENTIAL of a game design revolution, the PS5 has all the hardware it needs. All it needs is actual developer ambition, which is hard to come by at the moment.

I'm not downplaying or dismissing any specs. What I'm saying is that others are overplaying it way too much.
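The decompression point above can be put into numbers: data read at the raw SSD rate expands by the compression ratio, and doing that expansion on the CPU costs real cores. The compression ratio is an approximation of Sony's cited typical figure, and the per-core throughput below is an assumption for illustration, not a measured value:

```python
raw_read_gb_s = 5.5        # PS5's rated raw SSD bandwidth
compression_ratio = 1.6    # roughly the typical Kraken ratio Sony cites (approximate)
effective_gb_s = raw_read_gb_s * compression_ratio
print(f"effective bandwidth: {effective_gb_s:.1f} GB/s")  # ~8.8 GB/s

# If one CPU core could decompress, say, 1 GB/s (assumed, not measured),
# matching the dedicated unit would eat many Zen 2 cores outright.
per_core_gb_s = 1.0
cores_needed = effective_gb_s / per_core_gb_s
assert cores_needed > 8
```

Under these assumptions the decompression hardware is standing in for most of an eight-core CPU, which is the quoted post's argument for why the I/O block is more than marketing.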
> A dev who worked on PS3 games that did similar things said pretty much what I said.
> And Spider-Man runs great on a slow HDD, so at any speed it can't be overly heavy either. You seem to just feel it should be for some reason, but there is nothing that points to that.

No, it does not. It literally stops and starts.
> This guy explained classic background streaming without mentioning what this entails and what the limitations are. Gee, I wonder why..... Trash clickbait nonsense. But good to know what kind of "sources" you use.....

This isn't my source; I just stumbled upon his video one day explaining things I was already aware of. It's good because some here love resorting to appeal-to-authority arguments to dismiss others, so I might as well get an "authority" myself, since that's easier for me than trying to deconstruct fallacies.
> This isn't my source; I just stumbled upon his video one day explaining things I was already aware of. It's good because some here love resorting to appeal-to-authority arguments to dismiss others, so I might as well get an "authority" myself, since that's easier for me than trying to deconstruct fallacies. Besides, you're ignoring the context. We're talking about Rift Apart's game design specifically. This isn't meant to dismiss the use of SSDs or whatever; in fact, it's meant to highlight that this specific game, which some really love to put on a pedestal as an example of "revolutionary SSD-centered design", can be perfectly achieved with classic background streaming, no fast loading required.

So the overall graphical quality, which is basically 90% of the needed bandwidth, is not part of the game's design? K........
> The argument is whether Spider-Man at high speed is heavier on data transfer than the city demo. I'm simply saying nothing points to that.
> So the overall graphical quality, which is basically 90% of the needed bandwidth, is not part of the game's design? K........

And once again, you're really overplaying the necessary cutbacks to achieve a similar result. To get closer to the same page: what I'm doing is more akin to comparing a 2022 animation with a 2012 one.
That's like comparing 1990 animated movies to 2022 animated movies and saying "they are both movies"..... In the end you say nothing at all while ignoring the glaring technological differences. This is such an incredibly nonsensical discussion.
> Maybe you could complain that a game with Sleeping Dogs graphics wouldn't be up to par, but then you're just nitpicking, since that game is by no means ugly.

You have a very... very bad case of nostalgia..........
> You have a very... very bad case of nostalgia..........

In the end, you're just complaining about graphics, not exactly a "game design" revolution, considering we have popular games with generally worse visuals than Sleeping Dogs.
> In the end, you're just complaining about graphics,

How many tiny arena levels have we had? How many overlong animations or forced level bottlenecks to squeeze through to hide the background loading?
> How many tiny arena levels have we had? How many overlong animations or forced level bottlenecks to squeeze through to hide the background loading?

How many seamless open worlds have we had? How many fast-paced racing games have we driven through? One type of design philosophy existing doesn't really exclude the other.
> There you go, the heart of the issue, which is people really exaggerating the cutbacks that would be necessary to make it possible.
> Here, this game loads its entire maps into RAM. This one specifically is 16 km², though there are fan-made ones as big as 120 km². Naturally you can insta-teleport anywhere in those maps.
> Now tell me, does this look like "stick-figure graphics" to you?

> A dev who worked on PS3 games that did similar things said pretty much what I said.
That guy doesn't know what he's talking about. An Insomniac engineer confirmed that they were loading levels from the disk on every switch. Each level resides in RAM and is swapped out. The whole point of the SSD and I/O is to free up the RAM so you can make each level more detailed, instead of wasting RAM space on what they might want to load next.
Timestamped: