
Epic sheds light on the data streaming requirements of the Unreal Engine 5 demo

VFXVeteran

Banned
Nanite looks cool but lacks so many things... at some point they'll add reflections, ray tracing, etc., and then the performance will start to tank (but hopefully still be awesome).

Simply lit polygons are very fast on GPUs; adding complex shaders is when you become GPU- and TF-bound.

Somebody gets it.
 

Falc67

Member
Hollywood-quality assets? Not a chance. That demo wasn't anywhere near CG assets.

And that demo didn't run at 4K/30; it ran at 1440p/30.

Is it possible to explain how this looks worse than movie CGI? It honestly looks photorealistic to me.

EG11
 

VFXVeteran

Banned
Is it possible to explain how this looks worse than movie CGI? It honestly looks photorealistic to me.

EG11

Yes, look at the normal maps. They are way too low resolution. The small detail on the spear and the chest plate is smudgy looking, for example. The depth of field is grainy and not accurate. The material doesn't represent real-life light scattering.

Stare at this image (which is orders of magnitude better rendering) and then look back at your screenshot and tell me that your eye doesn't see something off about it. All the detail on her holds up being very close to the camera.

[Image: CG still from Love, Death & Robots, "Helping Hand"]
 

Shmunter

Member
You are not considering the cost of rendering at all. How much bandwidth would I need going to 2x textures for every parameter in a shader? How much longer would it take in shader code to ray-cast, for example, a normal map that's 4K instead of 2K?

Which engine? Does it have RT? Which surfaces is it applied to, and at what precision? Is there bump mapping? All super relevant to secondary storage.
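As a rough sketch of the bandwidth side of that question (assuming uncompressed RGBA8 texels; real engines use block compression, which shrinks the constants but not the ratio):

```python
# Doubling a texture's side length quadruples its texels, so the
# "2x textures" question is really a 4x storage/bandwidth question.
def texture_mib(side_px: int, bytes_per_texel: int = 4) -> float:
    return side_px * side_px * bytes_per_texel / 2**20

print(texture_mib(2048))  # 2K normal map: 16.0 MiB
print(texture_mib(4096))  # 4K normal map: 64.0 MiB (4x the 2K map)
```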
 
Yes, look at the normal maps. They are way too low resolution.

They only use normal maps for the fine details; I think the camera needs to be closer for them to show.


The small detail on the spear and the chest plate is smudgy looking


which is common in ancient statues, both in metal and stone

[Images: ancient bronze and stone statues, and Nick Petronzio's sculpts for The Mummy 3 (Dragon Emperor, terra cotta warriors, horse)]


If you are going to compare against an ancient-statue look, you should compare with the same type of asset, or at least the same type of material; fabric gloves will of course look more complex than an ancient, corroded statue. They already look way better than a lot of movie CGI.
 

VFXVeteran

Banned
They only use normal maps for the fine details; I think the camera needs to be closer for them to show.

which is common in ancient statues, both in metal and stone

[Images: ancient bronze and stone statues, and Nick Petronzio's sculpts for The Mummy 3 (Dragon Emperor, terra cotta warriors, horse)]

If you are going to compare against an ancient-statue look, you should compare with the same type of asset, or at least the same type of material; fabric gloves will of course look more complex than an ancient, corroded statue.

Well, it's ludicrous to compare CG to PS5 anyway. It's not even worth a conversation.
 

idrago01

Banned
I think it's more, as you said, that the advantage will primarily be the loading times and snappier systems, plus it's another marketing tool for Sony, of course. It really doesn't matter to me either way; I'm getting a 3080 Ti day one plus the PS5 (primarily just for exclusives). I just pray these consoles can at least somewhat hold their own and we don't see the massive downgrading of games like we did when the PS4/Xbox One launched.
 
Well, it's ludicrous to compare CG to PS5 anyway. It's not even worth a conversation.

PS5?

I was talking about the Unreal Engine 5 demo

CG is a broad term, not a measure of quality. The Mummy 3 uses CG for the statues, and these statues in the UE5 demo look better than those in the film. Is my uncle a true Scotsman? 😉
 

Falc67

Member
Yes, look at the normal maps. They are way too low resolution. The small detail on the spear and the chest plate is smudgy looking, for example. The depth of field is grainy and not accurate. The material doesn't represent real-life light scattering.

Stare at this image (which is orders of magnitude better rendering) and then look back at your screenshot and tell me that your eye doesn't see something off about it. All the detail on her holds up being very close to the camera.

[Image: CG still from Love, Death & Robots, "Helping Hand"]

I think I can kind of see a difference; when zoomed in, some of the shadows are jagged, for example. What would cause that?

Also, correct me if I'm wrong, but given the way Nanite works, as you got closer to that statue, wouldn't it appear more detailed than what we are seeing in the above picture?
 

Stooky

Member
Probably should compare a CG-rendered statue to the Unreal statue. Using a realistic human render against a game-rendered statue kinda doesn't work. I'm curious if some of this new Unreal tech was used on The Mandalorian.
 

Clear

CliffyB's Cock Holster
Dude, all I did was give you an imaginary scenario. You don't have to poke at my career or who I know or what I know, yet again. It's like you guys try your hardest to not agree with simple facts.

No, my issue is your apparent complete and utter disinterest in pragmatic concerns and the realities of game production.

It's a jarring omission, especially for a supposed industry professional.
 

Three

Member
I know that. You guys are still not addressing the rendering point. You are acting like the game can have unbounded resources and will magically render all of them no matter the speed of the GPU.

Man, stop just saying 'the rendering'. It cannot have unbounded resources, but you are not getting where it is bound. If you are streaming your geometry and your textures, it doesn't matter how fast you can render it if the amount on screen is identical in amount but unique in each frame. Does that make sense? A faster drive with low latency will 100 percent help. Say you are looking at a horse with 1,000,000 polygons and 8K textures, and you turn around in one frametime to look at a sheep with 1,000,000 polygons and 8K textures. What extra work is the GPU doing?
The answer is nothing, but you had to pull that data for the sheep in from somewhere. Now why would storing both the sheep and the horse in limited memory be better for the GPU, if you are saying the drive is overkill and could stream the sheep in as fast as needed, but the GPU is too slow to do... what?
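To put rough numbers on that swap (a sketch with assumed asset sizes; real assets are compressed and only partially resident):

```python
# What swapping in ONE unique 1M-poly, 8K-textured asset within a
# single 30 fps frametime asks of the drive, not the GPU.
verts = 1_000_000
bytes_per_vertex = 32                    # packed position + normal + UV
geometry = verts * bytes_per_vertex      # ~32 MB
texture = 8192 * 8192 * 1                # 8K map, BC-compressed ~1 B/texel: ~67 MB
asset = geometry + texture               # ~99 MB

frame_time = 1 / 30
print(asset / frame_time / 1e9, "GB/s")  # ~3.0 GB/s for that one asset
```

The GPU's per-frame load is unchanged (same polygon and texel counts on screen); only the I/O system feels the turn.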

Somebody gets it.

So now the GPU didn't even do much in that demo, because "simply lit polygons are fast on the GPU", yet here we are saying the GPU is the bottleneck and this streaming could be done on old SATA drives. You contradict yourself too often.

Look, at higher res, yes, you can run into a GPU bottleneck, but you really have to get used to the idea that streaming in assets is NOT bottlenecked by the GPU; they are decoupled in some way.
 

BluRayHiDef

Banned
I am really unfamiliar with their technique from a programmer's perspective, so I can't really give any experienced opinion on the matter. I would say that there are several things in the shader pipeline that would slow down given large texture sizes like 8K and upwards.

I did a test one day where I loaded up UDIMs on a 3D model of the character Shaggy from the Scoob movie. He had 65 4K UDIM textures, and the 2080 Ti crashed due to texture memory. I then compressed the textures down and got good performance, but that was for one character. There isn't going to be a game with a character having 65 different textures for a while. But that proves that bandwidth is very important and doesn't just go away with new tech.
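The crash is plausible just from arithmetic (a sketch assuming uncompressed RGBA8 tiles with full mip chains; the post doesn't say which channels were resident):

```python
# Why 65 uncompressed 4K UDIM tiles can exhaust an 11 GB 2080 Ti.
tile = 4096 * 4096 * 4            # RGBA8: 64 MiB per tile
mips = 4 / 3                      # a full mip chain adds ~33%
one_channel = 65 * tile * mips
print(one_channel / 2**30, "GiB")      # ~5.4 GiB for a single channel

# Albedo + normal + roughness per UDIM set triples that:
print(3 * one_channel / 2**30, "GiB")  # ~16.3 GiB, well past 11 GiB
```

Block compression (BC7 is roughly 1 byte per texel) is what pulled it back under budget.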

I was speaking in regard to rasterization. I understand that more memory is needed as the quality of textures increases, but is more processing power needed?
 

jonesxlv

Neo Member
PS5?

I was talking about the Unreal Engine 5 demo

CG is a broad term not a measure of quality, the Mummy 3 uses CG for the statues and this statues in the UE5 demo look better than those in the film, is my uncle a true scotsman? 😉

Stop being pedantic. When he is referencing CG, he is referencing modern CG.
 

BluRayHiDef

Banned
Yes, look at the normal maps. They are way too low resolution. The small detail on the spear and the chest plate is smudgy looking, for example. The depth of field is grainy and not accurate. The material doesn't represent real-life light distribution scattering.

Stare at this image (which is orders of magnitude better rendering) and then look back at your screenshot and tell me that your eye doesn't see something not correct about it. All the detail on her holds up being very close to the camera.

Helping-Hand-Love-Death-and-Robots-Ending-Explained.jpg

1. In regard to that screenshot from the UE5 tech demo, is the problem that while the texture resolution is 8K, the native output resolution is only 1440p? Could the low native output resolution cause the textures to appear less sharp than they would if the native output resolution were 4K or 8K?

2. The second one is better. However, it still looks like CGI.
 

VFXVeteran

Banned
Man, stop just saying 'the rendering'. It cannot have unbounded resources, but you are not getting where it is bound. If you are streaming your geometry and your textures, it doesn't matter how fast you can render it if the amount on screen is identical in amount but unique in each frame. Does that make sense? A faster drive with low latency will 100 percent help. Say you are looking at a horse with 1,000,000 polygons and 8K textures, and you turn around in one frametime to look at a sheep with 1,000,000 polygons and 8K textures. What extra work is the GPU doing?
The answer is nothing, but you had to pull that data for the sheep in from somewhere. Now why would storing both the sheep and the horse in limited memory be better for the GPU, if you are saying the drive is overkill and could stream the sheep in as fast as needed, but the GPU is too slow to do... what?

Yes, I see what you are saying. You are correct. If you have the same amount of data structures loading in, then it doesn't matter what the content of the data structures is. I don't know if the drive's limit is overkill or not. Like I said many times, we have no real-world example of it being taxed at 5.5GB/s. Nor do we have that data being rendered on a PS5, so I'm not sure why you guys are getting ridiculously defensive about it when no one has any real-world demo.

For all the devs trying to make a case for the PS5 SSD's 5.5GB/s max streaming, make a demo! It's that simple. Show what a game can do and then implement the game! That's the best way to get rid of all this speculation.

You contradict yourself too often.

I think it's more of a matter that I'm not clarifying myself well enough.
 

VFXVeteran

Banned
No, my issue is your apparent complete and utter disinterest in pragmatic concerns and the realities of game production.

Its a jarring omission, especially for a supposed industry professional.

Look, if you work in the industry, make a demo of the PS5 and its SSD speed advantages and show that it can't be done on any other hardware. It's that simple. I'm not going to sit here defending myself over something neither of us has any concrete experience with. I have no access to a PS5 devkit, nor do I have source code for the UE5 demo. That's not my job. If it's YOUR job, then hop to it and get a demo up and running. It doesn't have to be UE5. Use your own graphics engine and use the devkit to put something out that people can see. Taking stabs at my title and experience isn't going to get you anywhere and makes you come across as a dick.
 

Psykodad

Banned
Look, if you work in the industry, make a demo of the PS5 and its SSD speed advantages and show that it can't be done on any other hardware. It's that simple. I'm not going to sit here defending myself over something neither of us has any concrete experience with. I have no access to a PS5 devkit, nor do I have source code for the UE5 demo. That's not my job. If it's YOUR job, then hop to it and get a demo up and running. It doesn't have to be UE5. Use your own graphics engine and use the devkit to put something out that people can see. Taking stabs at my title and experience isn't going to get you anywhere and makes you come across as a dick.
You literally have the UE5 tech demo, designed for and running on PS5, to showcase the benefits of its hardware.
 

GreyHand23

Member
Yes, I see what you are saying. You are correct. If you have the same amount of data structures loading in, then it doesn't matter what the content of the data structures is. I don't know if the drive's limit is overkill or not. Like I said many times, we have no real-world example of it being taxed at 5.5GB/s. Nor do we have that data being rendered on a PS5, so I'm not sure why you guys are getting ridiculously defensive about it when no one has any real-world demo.

For all the devs trying to make a case for the PS5 SSD's 5.5GB/s max streaming, make a demo! It's that simple. Show what a game can do and then implement the game! That's the best way to get rid of all this speculation.



I think it's more of a matter that I'm not clarifying myself well enough.

Honestly, you should be happy that these IO improvements are being focused on in the consoles, because AAA game budgets are driven by business realities. The reality is that only a small percentage of PC users will have hardware that surpasses the upcoming consoles, at least in the short term. IO-wise that could take even longer, but the good thing for PC, as I see it, is that it forces all the big players in the industry to improve IO on PC in the future. We're already seeing some efforts towards this, with Microsoft bringing the DirectStorage API to PC. I'm glad that you've come around to the mindset that we should just wait and see what this kind of IO can do, because I suspect these early games can't fully use it; there are still systems in place that just weren't designed with this level of throughput in mind.
 

VFXVeteran

Banned
I was speaking in regard to rasterization. I understand that more memory is needed as the quality of textures increases, but is more processing power needed?

Just PM me if you want to hear what I gotta say. I'm not engaging in this thread because I feel like every word I type is scrutinized, especially since I can't tell who understands what I'm saying and who wants to pick at what I'm saying to start an argument.
 

VFXVeteran

Banned
You literally have the UE5 tech demo, designed for and running on PS5, to showcase the benefits of its hardware.

No. Because no one knows how it works. Until there is source code where people can play with what it actually does, we are all throwing darts.
 

Psykodad

Banned
No. Because no one knows how it works. Until there is source code where people can play with what it actually does, we are all throwing darts.
Fair enough.

I don't know enough about all the GPU tech to get into the details, but I do understand that the fast SSD of the PS5 allows for the low RAM usage for the rendering of the tech demo.
But for actual games, it means more RAM is freed up for all the other stuff, like audio files, animations and whatever else is needed for gameplay.

So I fail to see how the info we have now doesn't show the benefit of the speed of the SSD, because obviously there's a ton of extra data that would need to be loaded during gameplay in an actual game.
Seems to me that people are too focused on just the visual rendering.

Edit:
Might be mixing up some terminology there, but whatever.
 

Lort

Banned
And it output to the G-buffer for Nanite in 4.25 ms at 1440p, and Epic says they are targeting 60 fps.

It will be interesting to see how this scales, as the pixel count is driving the geometry engine: increase the res in Nanite and you increase the CPU and GPU requirements. It won't scale as easily as traditional graphics render pipelines, where geometry work is the same for all resolutions.
It's quite possible that if you up the res to 4K you double the CPU requirements as well as the GPU requirements.
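A sketch of that scaling intuition, assuming Nanite's geometry work tracks pixel count roughly one-for-one (its stated goal is about one triangle per pixel):

```python
# Resolution-driven geometry scaling, 1440p -> 4K at a fixed framerate.
px_1440p = 2560 * 1440
px_4k = 3840 * 2160
print(px_4k / px_1440p)  # 2.25x the pixels -> ~2.25x the triangle work
```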
 
Nanite looks cool but lacks so many things... at some point they'll add reflections, ray tracing, etc., and then the performance will start to tank (but hopefully still be awesome).

Simply lit polygons are very fast on GPUs; adding complex shaders is when you become GPU- and TF-bound.
In one or two more console generations it should be able to run with full path tracing.
Hollywood-quality assets? Not a chance. That demo wasn't anywhere near CG assets.

And that demo didn't run at 4K/30; it ran at 1440p/30.

The later conference said Nanite itself easily runs at 60 fps; it's Lumen that holds it to 30 fps, and they think they can get it to 60 fps with Lumen too. If it can easily run at 60 fps at 1440p, it can very likely run at 4K 30 fps.

As for the statue not being Hollywood level:
the spear has depth of field.
the chest, that's probably artistic design.

Let the polygon counts do the talking

"It points to a statue within the demo that is comprised of 33 million triangles."-techspot on nanite ue5 demo

"When Industrial Light & Magic began working on Michael Bay’s Transformers, the VFX crew thought they would be modelling three or four hero robots that might do 14 transformations. One year later, the team had assembled 60,217 vehicle parts and over 12.5 million polygons into 14 awesome automatons that smash each other, flip cars in the air, crash into buildings and generally cause enough mayhem to make even the most jaded moviegoer feel like a 10-year-old again."-creativebloq
  • Devastator is made up of 52,632 geometric pieces and 11,716,127 polygons. The total length of all the pieces is 13.84 miles. [4]-tfwiki
  • Devastator is made up of 6467 total textures, taking up 32 gigabytes of computer space.-tfwiki
"Clash of the Titans. The Kraken had 7 million Polygons." -mathspig

Stop being pedantic. When he is referencing CG, he is referencing modern CG.
Spider-Man's armor looked quite weak in Avengers: Infinity War. Not all modern CG is up to par.
It's quite possible that if you up the res to 4K you double the CPU requirements as well as the GPU requirements.
That might only apply if it's at the same framerate. If you're pushing 60 fps at 1440p vs 30 fps at 4K, that's near enough the same number of pixels, and geometry.
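That near-parity checks out (straight pixel throughput, ignoring per-frame fixed costs):

```python
# Pixels shaded per second: 1440p60 vs 4K30.
print(2560 * 1440 * 60)  # 221,184,000 px/s
print(3840 * 2160 * 30)  # 248,832,000 px/s, only ~1.13x more
```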
 

HoodWinked

Member
The Xbox SSD part may just be the spec at which the component comes, so there wouldn't be much cost savings if it were to have less bandwidth. It would be like fabricating a Pentium II in present times: it would probably cost as much to manufacture as a modern CPU.
 

Clear

CliffyB's Cock Holster
Taking stabs at my title and experience isn't going to get you anywhere and make you come across as being a dick.

I put my concerns as neutrally as I could, explaining why I found your perspective odd for someone with industry experience. Pragmatic concerns should be paramount for professionals, so your avoidance of them struck me as a red flag.

PS. I'd rather come across as a dick than as a dishonest person. I'm not trying to win popularity contests and I'm not on anybody's side. I just try and keep it real at all times.
 

Rikkori

Member
I think I can kind of see a difference; when zoomed in, some of the shadows are jagged, for example. What would cause that?

Also, correct me if I'm wrong, but given the way Nanite works, as you got closer to that statue, wouldn't it appear more detailed than what we are seeing in the above picture?

Yes, but it's always hamstrung by the resolution, as it ties asset quality to it. That's why, if you import those same assets into UE4 right now, you can have much sharper and more detailed-looking assets compared to Nanite, but with the harsh performance requirements as well.

 

Psykodad

Banned
Yes, but it's always hamstrung by the resolution, as it ties asset quality to it. That's why, if you import those same assets into UE4 right now, you can have much sharper and more detailed-looking assets compared to Nanite, but with the harsh performance requirements as well.


The difference I'm curious about is when, for example, PS5 would use 8K textures at 1440p vs XSX using 4K textures at 4K.

Because, for argument's sake, let's say we will see that difference given the difference in SSD speeds (if that is a realistic comparison).
 

VFXVeteran

Banned
In one or two more console generations it should be able to run with full path tracing.

I remember interviewing at ATI (now AMD) way back in the early 2000s, and they bragged about Phong lighting and how realtime 3D was going to catch up to movie quality. I literally argued with the woman about how flawed that statement was. I didn't care about getting the job at that point, because I felt I would never want to work with such arrogant people. Here we are two decades later, and they are still nowhere near Hollywood. I wouldn't bet your life on your statement.

Let the polygon counts do the talking


That's where you are wrong: poly counts don't mean jack shit when your lighting/shading is nearly 90% of the render time. I never said GPUs can't render a shit-ton of polys. I said GPUs are slow as hell when it comes to shading and lighting. That will remain a fact until the movie industry makes a complete RT CG animation (which they will be the first to do). One very basic step forward would be having more than one shadow-casting light source in a given scene. I'm still waiting for that to happen in a game without tanking the framerate.
 

martino

Member
The difference I'm curious about is when, for example, PS5 would use 8K textures at 1440p vs XSX using 4K textures at 4K.

Because, for argument's sake, let's say we will see that difference given the difference in SSD speeds (if that is a realistic comparison).

You expect more detail with bigger pixels? How?
Why would MS not use 8K textures where you need them (and in games, a lot of them are always displayed on screen: characters/weapons in FPS, etc.)?
 

Psykodad

Banned
You expect more detail with bigger pixels? How?
Why would MS not use 8K textures where you need them (and in games, a lot of them are always displayed on screen: characters/weapons in FPS, etc.)?
I expect higher quality assets/textures due to PS5's speed, as XSX has half the speed.
That's also the same reason that answers your second question.

If PS5 can load 768MB/s of data into the RAM pool, I assume XSX would only be able to load 384MB/s, correct?
So either they have to use lower quality assets to reduce the file sizes and keep up with PS5's speed, or, like others have said, increase the RAM pool, at which point they'd have to make sacrifices elsewhere as they'd be using more RAM (or a mix of added RAM + smaller file sizes).

But maybe I'm making some error in my line of thinking, so feel free to correct me.
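One sanity check on the halving assumption, taking the thread's 768MB/s figure at face value and using the publicly quoted raw SSD speeds (5.5 GB/s for PS5, 2.4 GB/s for XSX; compression changes both):

```python
# Scale the demo's streaming figure by the raw drive-speed ratio.
ps5_raw, xsx_raw = 5.5, 2.4    # GB/s, raw (uncompressed) figures
demo_rate = 768                # MB/s, the figure used in this thread
ratio = xsx_raw / ps5_raw      # ~0.44, slightly worse than half
print(demo_rate * ratio)       # ~335 MB/s equivalent, vs the assumed 384
```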
 

Falc67

Member
Yes, but it's always hamstrung by the resolution as it ties asset quality to it. That's why if you import those same assets into UE4 right now you can have much sharper and more detailed assets in how they look compared to Nanite. But with the harsh performance requirements as well.



That's right. So if you were right up close to the statue, it would look like this in Nanite, right?

[Image: Jerome Platteaux's close-up screenshot of the UE5 statue]


Now that it's even closer, I cannot distinguish that from movie CGI. Hopefully I can find a 4K shot of it.
 

VFXVeteran

Banned
I expect higher quality assets/textures due to PS5's speed, as XSX has half the speed.
That's also the same reason that answers your second question.

If PS5 can load 768MB/s of data into the RAM pool, I assume XSX would only be able to load 384MB/s, correct?
So either they have to use lower quality assets to reduce the file sizes and keep up with PS5's speed, or, like others have said, increase the RAM pool, at which point they'd have to make sacrifices elsewhere as they'd be using more RAM (or a mix of added RAM + smaller file sizes).

But maybe I'm making some error in my line of thinking, so feel free to correct me.

Why would you think that? 768MB/s isn't the maximum speed that the PS5 SSD can fetch, and it's nowhere near the XSX's 2.4GB/s either. If that's the target, then both can stream the data made for that demo at the same speed.
 
That's where you are wrong: poly counts don't mean jack shit when your lighting/shading is nearly 90% of the render time. I never said GPUs can't render a shit-ton of polys. I said GPUs are slow as hell when it comes to shading and lighting. That will remain a fact until the movie industry makes a complete RT CG animation (which they will be the first to do). One very basic step forward would be having more than one shadow-casting light source in a given scene. I'm still waiting for that to happen in a game without tanking the framerate.
Minecraft and Quake already have full path tracing; not sure how that differs from complete RT.

Ampere will also offer 4x the ray tracing performance per tier. -notebookcheck

If true, it seems even the 3060 will be significantly faster than the 2080 Ti in ray tracing performance. Maybe Nvidia can't keep up these performance increases, or maybe they can, and they want to hasten the arrival of full path tracing.

We are now reaching Hollywood CG polygon counts. All the tens of additional teraflops are going to go into physics, shading and lighting.

I remember interviewing at ATI (now AMD) way back in the early 2000s, and they bragged about Phong lighting and how realtime 3D was going to catch up to movie quality. I literally argued with the woman about how flawed that statement was. I didn't care about getting the job at that point, because I felt I would never want to work with such arrogant people. Here we are two decades later, and they are still nowhere near Hollywood. I wouldn't bet your life on your statement.
In static scenes, realtime is already practically photoreal. Isn't The Mandalorian even using real-time rendering for its production?
 

VFXVeteran

Banned
That's right. So if you were right up close to the statue, it would look like this in Nanite, right?

[Image: Jerome Platteaux's close-up screenshot of the UE5 statue]

Now that it's even closer, I cannot distinguish that from movie CGI. Hopefully I can find a 4K shot of it.

The lighting is completely crude. Watch more CG movies. The filtering of the bump textures smooths out at sharp angles. The conventional filter algorithm in GPUs is pretty old (i.e., a box filter). GPUs don't even support Blackman-Harris or Catmull-Rom filtering, which give much better results.

Just because you can't tell the difference doesn't mean there isn't a difference (and one by a LARGE margin).

I also hate when demos use instances of the same geometry to tout that they can render a shit-ton of geometry when the geometry isn't even unique. It's only storing one asset in memory and transforming it to different locations in the world.
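For reference, the two filters being contrasted, as 1-D kernels (a sketch; hardware bilinear sits at the box/tent end of this spectrum, while Catmull-Rom is typically done with extra taps in shader code):

```python
# Box vs Catmull-Rom reconstruction kernels.
def box(x: float) -> float:
    return 1.0 if abs(x) < 0.5 else 0.0

def catmull_rom(x: float) -> float:
    # Cubic kernel with a = -0.5; the negative lobes preserve
    # edge sharpness that a box filter smears away.
    x = abs(x)
    if x < 1:
        return 1.5 * x**3 - 2.5 * x**2 + 1.0
    if x < 2:
        return -0.5 * x**3 + 2.5 * x**2 - 4.0 * x + 2.0
    return 0.0

print([round(catmull_rom(x), 4) for x in (0.0, 0.5, 1.0, 1.5)])
# [1.0, 0.5625, 0.0, -0.0625]  <- note the negative lobe past x=1
```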
 

Rikkori

Member
The difference I'm curious about is when, for example, PS5 would use 8K textures at 1440p vs XSX using 4K textures at 4K.

Because, for argument's sake, let's say we will see that difference given the difference in SSD speeds (if that is a realistic comparison).
The SSD will not make a difference, because the issue that will arise first is VRAM & raw GPU power. What the SSD helps with is getting those assets into VRAM/RAM (PC) in the first place, but you still need enough VRAM/RAM & bandwidth, and of course enough processing power. Think of the RX 470 with 8 GB of VRAM: plenty of VRAM but not enough GPU horsepower for 4K, so the VRAM advantage doesn't really materialise outside of niche scenarios (heavy texture mods, etc.).

And besides, don't believe the "8K texture" hype. That's more an advantage for the devs to simplify development, but you as the end user will a) not see them, because games won't ship with them, as the size requirements would be too large; and b) as seen in the example above, get worse actual texture resolution & detail with Nanite compared to traditional methods, though with the advantage of more performance & more geometry (ala mesh shaders et al).

The SSD speed advantage, if it ever materialises, will only be seen in first-party games. No third party will bother putting extra time & resources into some imperceptible advantage that will only be uncovered by tech channels when they slow down videos & zoom in 800%. Especially when we consider how dev costs keep going up, the demands and expectations of the public are increasing, and now you don't have only 2 platforms to ship to but 5+ (incl. Stadia, mobile, etc.).
 

VFXVeteran

Banned
Minecraft and Quake already have full path tracing; not sure how that differs from complete RT.

Ampere will also offer 4x the ray tracing performance per tier. -notebookcheck

If true, it seems even the 3060 will be significantly faster than the 2080 Ti in ray tracing performance. Maybe Nvidia can't keep up these performance increases, or maybe they can, and they want to hasten the arrival of full path tracing.

We are now reaching Hollywood CG polygon counts. All the tens of additional teraflops are going to go into physics, shading and lighting.

In static scenes, realtime is already practically photoreal. Isn't The Mandalorian even using real-time rendering for its production?


You don't understand what goes into full path tracing. It's all marketing hype.

There are several things missing in those games' implementations of "full" path tracing. You don't know how many secondary bounces are happening; you don't know if there is importance sampling on every material, with all the materials using multiple BSDFs to churn through. You are looking at reflections and not refractions. You aren't looking at area lighting from the environment. There is no hair that's using path tracing. Are they path-tracing all of the material shader components or only two? Where is the anisotropic specular shading on objects that require derivatives? Path tracing on FX particles and volumes? Nope. I could go on and on.
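A rough ray budget shows why those corners get cut (assumed parameters; film renderers run orders of magnitude more samples per pixel):

```python
# Rays per second demanded by even modest path tracing at 4K30.
w, h, fps = 3840, 2160, 30
spp = 4                 # samples per pixel; very low for path tracing
bounces = 3             # secondary bounces per path
rays_per_path = 1 + bounces + bounces   # camera + bounce + shadow rays
print(w * h * fps * spp * rays_per_path / 1e9, "Gigarays/s")  # ~7.0

# Turing's headline figure was ~10 Gigarays/s on a 2080 Ti, and that
# assumes coherent rays; push spp toward film quality and it's gone.
```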
 

Falc67

Member
The lighting is completely crude. Watch more CG movies. The filtering of the bump textures smooths out at sharp angles. The conventional filter algorithm in GPUs is pretty old (i.e., a box filter). GPUs don't even support Blackman-Harris or Catmull-Rom filtering, which give much better results.

Just because you can't tell the difference doesn't mean there isn't a difference (and one by a LARGE margin).

I also hate when demos use instances of the same geometry to tout that they can render a shit-ton of geometry when the geometry isn't even unique. It's only storing one asset in memory and transforming it to different locations in the world.

I appreciate the analysis! I didn’t understand half of what you said, but appreciate it nonetheless.
 

Rikkori

Member
That's right. So if you were right up close to the statue, it would look like this in Nanite, right?

[Image: Jerome Platteaux's close-up screenshot of the UE5 statue]

Now that it's even closer, I cannot distinguish that from movie CGI. Hopefully I can find a 4K shot of it.

That looks much better, definitely. Not sure if that's "full quality" or not, I guess we'd have to check, but I'd take it in a heartbeat. :p

As it stands, the quality during the demo was much lower, but the shoddy TAAU also played a role in that. It's too bad they didn't release some HQ 4K footage for us instead of this upscaled stuff.
 

Psykodad

Banned
Why would you think that? 768MB/s isn't the maximum speed that the PS5 SSD can fetch, and it's nowhere near the XSX's 2.4GB/s either. If that's the target, then both can stream the data made for that demo at the same speed.
I know that isn't the max speed, but that's the RAM pool they were using for the demo, and they will try to get the RAM pool lower.
Going by Cerny, they designed the SSD to reduce the need for RAM as much as possible, freeing up RAM for all other computational tasks (hence why they went with "just" 16GB of RAM).
The above two fall in line.

From what I understand, that means the amount of data that needs to be streamed is just 768MB/s, just for rendering everything seen in the tech demo.
But on top of that, they can stream all kinds of other files for the other things needed in games.
So I assume that's all the other game elements required for gameplay; going by Epic, they rendered 500K objects for the outdoor scene and are able to render at least 1M objects. So that would all require more data to be loaded/streamed as well.

The SSD will not make a difference, because the issue that will arise first is VRAM & raw GPU power. What the SSD helps with is getting those assets into VRAM/RAM (PC) in the first place, but you still need enough VRAM/RAM & bandwidth, and of course enough processing power. Think of the RX 470 with 8 GB of VRAM: plenty of VRAM but not enough GPU horsepower for 4K, so the VRAM advantage doesn't really materialise outside of niche scenarios (heavy texture mods, etc.).

And besides, don't believe the "8K texture" hype. That's more an advantage for the devs to simplify development, but you as the end user will a) not see them, because games won't ship with them, as the size requirements would be too large; and b) as seen in the example above, get worse actual texture resolution & detail with Nanite compared to traditional methods, though with the advantage of more performance & more geometry (ala mesh shaders et al).

The SSD speed advantage, if it ever materialises, will only be seen in first-party games. No third party will bother putting extra time & resources into some imperceptible advantage that will only be uncovered by tech channels when they slow down videos & zoom in 800%. Especially when we consider how dev costs keep going up, the demands and expectations of the public are increasing, and now you don't have only 2 platforms to ship to but 5+ (incl. Stadia, mobile, etc.).
Yeah, that much I get. But doesn't that mean that while the UE5 tech demo is 1440p on PS5, it could, for example, be 4K on XSX?

If so, what I'm curious about is how it holds up on XSX. If XSX can load at half speed, obviously it would need to load lower quality textures with smaller file sizes to match PS5's 2x speed, correct?

I do understand that resolution improves IQ, but I wonder if medium textures at 4K look drastically better than high quality textures at 1440p. Sharper looking, I get, but better?

Having said that, I haven't played games on PC since the 386, centuries ago, so I might be way off here.
 

Falc67

Member
That looks much better, definitely. Not sure if that's "full quality" or not, I guess we'd have to check, but I'd take it in a heartbeat. :p

As it stands, the quality during the demo was much lower, but the shoddy TAAU also played a role in that. It's too bad they didn't release some HQ 4K footage for us instead of this upscaled stuff.

They also didn't get close enough to the model in the trailer to show that kind of detail. But yeah, hopefully there is a direct-feed video to match the 4K shots they released.
 
You don't understand what goes into full path tracing. It's all marketing hype.

There are several things missing in those games' implementations of "full" path tracing. You don't know how many secondary bounces are happening; you don't know if there is importance sampling on every material, with all the materials using multiple BSDFs to churn through. You are looking at reflections and not refractions. You aren't looking at area lighting from the environment. There is no hair that's using path tracing. Are they path-tracing all of the material shader components or only two? Where is the anisotropic specular shading on objects that require derivatives? Path tracing on FX particles and volumes? Nope. I could go on and on.
Still, Pixar didn't use ray tracing until Cars, and then only for the reflections; it wasn't full ray tracing until Monsters University.

We already have some form of ray-traced reflections in real time, and even full ray tracing to some degree in some games.

Keep in mind that ray tracing hardware is likely significantly faster than software ray tracing. And the hardware is getting faster at what seems like an exponential rate, if the 4x performance increase is to be believed.



Highlighting what's achievable with real-time ray tracing, Unity has collaborated with NVIDIA and the BMW Group to showcase the 2019 BMW 8 Series Coupe. Once thought impossible to achieve, performant real-time ray tracing delivers photorealistic image quality and lighting for any task where visual fidelity is essential, such as for design, engineering, or marketing, at a fraction of the time of offline rendering solutions. -from YouTube video

If the results so closely match the reference shot, which is likely reality, does it really matter if they are not technically as accurate?
 

Rikkori

Member
Yeah, that much I get. But doesn't that mean that while the UE5 tech demo is 1440p on PS5, it could, for example, be 4K on XSX?

If so, what I'm curious about is how it holds up on XSX. If XSX can load at half speed, obviously it would need to load lower quality textures with smaller file sizes to match PS5's 2x speed, correct?

I do understand that resolution improves IQ, but I wonder if medium textures at 4K look drastically better than high quality textures at 1440p. Sharper looking, I get, but better?

Having said that, I haven't played games on PC since the 386, centuries ago, so I might be way off here.

I don't think we can say for sure whether the SSD will play a role between resolutions (for Nanite). Texture-wise it would be the same (so no medium vs high; the quality will depend solely on what they ship, and on resolution, unlike how it is done today). What would change between 1440p and 4K is how the geometric detail gets resolved thanks to more pixels, but as far as I know that's more GPU-compute dependent than anything to do with the SSD (and will be further improved in performance by adopting mesh shaders, which they haven't done yet for Nanite). Technically you're feeding in the same assets, so the hit on the SSD should be the same between 1440p and 4K.

The question is how Nanite scales with all these aspects of graphics processing, and that's something we don't know (so we can't totally answer your question). We still need more details tbh, and hopefully a direct feed from a PC; then all will be revealed. :messenger_smiling_horns:
 

VFXVeteran

Banned
I know that isn't the max speed, but that's the RAM pool they were using for the demo, and they will try to get the RAM pool lower.
Going by Cerny, they designed the SSD to reduce the need for RAM as much as possible, freeing up RAM for all other computational tasks (hence why they went with "just" 16GB of RAM).
The above two fall in line.

From what I understand, that means the amount of data that needs to be streamed is just 768MB/s, just for rendering everything seen in the tech demo.
But on top of that, they can stream all kinds of other files for the other things needed in games.
So I assume that's all the other game elements required for gameplay; going by Epic, they rendered 500K objects for the outdoor scene and are able to render at least 1M objects. So that would all require more data to be loaded/streamed as well.

I get what you are saying there, I'm just not sure why the XSX would be half that RAM pool.
 