
Inside Unreal: In-depth look at PS5's Lumen in the Land of Nanite demo (only 6.14 GB of geometry) and deep dive into Nanite


Papacheeks

Banned
That was really only Tim Sweeney... and he was being misleading, that's what happened. There's not much else to it.



Not sure why you keep saying "insane rig setup"... the PS5 demo was using less data than the newer demo.. which works just fine on a standard SSD. And by that I mean it works perfectly... there is no extra pop-in.. or details that take time to resolve. Lumen is a big resource hog, but the data aspects of it do not have high requirements. That's the cut-and-dried truth of it. That does not discount that PS5 has insane I/O; it's just that the UE5 demo wasn't needing it.. and wasn't an improvement over what a slower SSD without any fancy I/O complex would also be able to render on screen.

Like.. I don't know why people can't move past that.. PS5 has amazing I/O either way.

It's mostly going to get used for being able to travel really quickly between areas.. which is really mostly what Ratchet and Clank is doing with it as well.

We saw that with Spider-Man.. which is doing things that a PC can't do... loading more data than a PC currently can into its GPU..

It literally has had me buying some games on PS5 that I normally wouldn't have even considered on console.. because I fucking love how fast it is, despite the fact my PC could render the graphics of a game better.

But being capable of loading new stuff as you turn your head doesn't mean games can really do that often, or for increased DETAIL... that is not logical.. it can be used to have a bunch of variety in a scene though, or "quickly change worlds" and that sort of thing. But adding consistent detail would eat up too much disk storage.
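For scale, a rough back-of-the-envelope on that disk-storage point (the stream rate is an illustrative assumption, not a measured figure):

[code]
// Sketch: how fast continuously-unique streaming would eat disk space.
// Assumes an illustrative sustained 2 GB/s of never-reused detail
// (well under the PS5's 5.5 GB/s raw ceiling) - not a measured figure.
#include <cstdio>

int main() {
    const double unique_stream_gbps = 2.0;   // GB of unique data per second
    const double minutes_of_play    = 10.0;
    double disk_needed_gb = unique_stream_gbps * 60.0 * minutes_of_play;
    std::printf("%.0f GB of on-disk assets for %.0f minutes of unique detail\n",
                disk_needed_gb, minutes_of_play);  // -> 1200 GB
    return 0;
}
[/code]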

Totally agree. Loading in Miles Morales is what blew me away when I first got my PS5. I think the issue is the same people who hate PS5 being brought up in tech-talk comparisons. They are in here hardcore. Not you - you've been a great poster.

I think I was just blindsided by what the early demo was showing in relation to what we see now on PS5. So I thought, from the way they highlighted it, that it was a demo showing something PS5-specific. The PR from PlayStation and from people like Sweeney hasn't helped much.

The other people in here like Dong or whoever they are called are the same people in other threads just shitting on PS5 because they are sick of hearing the praise for it.
I was also confused a little - not by what people were saying, but by someone I talked to recently who is developing on PS5. They have their own internal engine, not UE5, but were telling me the issue of having to be conscious of asset density is basically all but gone.

So that too has added to my disconnect with what Epic has shown.
Thanks for clearing things up and being super understanding.
 

PaintTinJr

Member
Are you even reading Andrew's comments? Look one post above... he literally called Tim's original comments misleading... he's basically scoffing at the idea that anything mentioned needs anything more than just a standard SSD. He implies Tim Sweeney's comments were "marketing."
I get there was a marketing deal, and that Tim had motives - when he said PC gaming was dead, it was to get the PC market to react with GPU compute to the PS3's massively multi-core synergistic units; here it was to kill off the old HDD as a compatibility requirement for PC gaming - and it has probably worked. It still doesn't mean anything Tim said was wrong, and Brian's tweet corroborates what he said about the PS5's IO.
I never did any of this. This isn't about "PC is better".. this is about what the Unreal devs have been adamant about...

I stated multiple times the PS5's solution is ahead of its time.
Sorry, see my edit of that post. The post wasn't aimed at you, just a poor grabbing of a tweet I didn't edit you out of.
 

PaintTinJr

Member
Cerny never even used the word latency in his presentation; nor did any of the slides.. he's talking about the data rate. 5.5GB/second is 55x the speed of the 100MB/second drive in last gen. Add on compression, which he was being conservative about (not talking about Oodle), and he gets to 100x the speed. Where has MS ever talked about latency numbers either? But again.. how would you compare that to the PS5, since they haven't given a latency number?

What you are saying makes no sense.. DirectStorage is an API.. it's not a hardware spec... so any number about XVA is going to be for XSX... not RTX IO.

Your math makes no sense about CPU cores either... 10% of a CPU core means 10% of the time? Huh?
Search the transcript of Road to PS5 for "check-in", read the next paragraph or so, and look at the slide. I'll properly quote it later if you want, when I have more time, but it is latency he's talking about, and I'm pretty sure someone (Matt Haggart?) was tweeting about the reduced latency.
 

IntentionalPun

Ask me about my wife's perfect butthole
Search the transcript of Road to PS5 for "check-in", read the next paragraph or so, and look at the slide. I'll properly quote it later if you want, when I have more time, but it is latency he's talking about, and I'm pretty sure someone (Matt Haggart?) was tweeting about the reduced latency.

I've watched the presentation a dozen times and have the transcript on my desktop lol

That part of the presentation is clearly about bandwidth, not latency.. he even visually represents that, and uses the term "bottleneck".. he's talking about the drive speed the entire time.. 5.5GB/second has nothing to do with latency.. none of the numbers he gives are about latency. Look at this visual and the use of "bottleneck" - the point being that unless the bottleneck is removed, the 10x speed (aka the GB/second) is not maintained:

[Road to PS5 slide: narrowing arrows depicting the I/O bottleneck]


In the next slide he shows the arrows no longer narrowing. He is never talking about latency, whether it's improved or not. He is talking about how the speed is maintained w/o bottlenecks, and without crippling the CPU.

Latency is certainly improved over the old way of doing things.. but we have absolutely no way to quantify PS5's approach vs. something like RTX I/O, at all.
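For concreteness, the numbers being argued over here are pure throughput; a quick sketch of what they mean per frame (the GB/s figures are the publicly quoted raw rates, the rest is arithmetic):

[code]
// Sketch: translating the quoted raw drive rates into a per-frame
// streaming budget at 30fps. GB/s figures are the publicly quoted raw
// rates; the rest is plain arithmetic (and says nothing about latency).
#include <cstdio>

int main() {
    const double fps = 30.0;
    const double gbps[] = {0.1, 2.4, 5.5};  // PS4 HDD, XSX raw, PS5 raw
    const char*  name[] = {"PS4 HDD", "XSX SSD (raw)", "PS5 SSD (raw)"};
    for (int i = 0; i < 3; ++i) {
        double mb_per_frame = gbps[i] * 1000.0 / fps;
        std::printf("%-14s ~%6.1f MB per frame\n", name[i], mb_per_frame);
    }
    return 0;
}
[/code]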
 

IntentionalPun

Ask me about my wife's perfect butthole
I get there was a marketing deal, and that Tim had motives - when he said PC gaming was dead, it was to get the PC market to react with GPU compute to the PS3's massively multi-core synergistic units; here it was to kill off the old HDD as a compatibility requirement for PC gaming - and it has probably worked. It still doesn't mean anything Tim said was wrong, and Brian's tweet corroborates what he said about the PS5's IO.

I really just don't know what else to say here... the UE5 devs have, to me, clearly said the PS5 demo would have been the same on a PC SSD. No DirectStorage... no massive RAM cache.

There is no indication that by "latency" they are talking about the improvements from skipping the system RAM step; all SSDs offer low latency and high bandwidth compared to HDDs.. there is no indication that the demo ever requires the data rates of anything beyond a PC SSD, bottlenecks and all... based on Brian's video.. Andrew's comments, etc. And that does in fact contradict Sweeney's statements, whether he admits it or not. His own dev called his statements misleading.. in a conversation where he scoffed at the idea that any of it even required something like DirectStorage. None of them ever talked about some crazy fast latency requirement either. It's not like a PC SSD is actually high latency lol, they have seek times measured in microseconds... an HDD has literally hundreds of times that latency as far as seek times go... there's zero reason to believe that Brian or Andrew are talking about removing the "data goes through the CPU" bottleneck.
 
The other people in here like Dong or whoever they are called are the same people in other threads just shitting on PS5 because they are sick of hearing the praise for it.
Lol wtf are you talking about? I haven't dismissed any praise you are speaking of.

The problem you have with me, Papacheeks, is that I participate in threads that disprove the dumb shit you fanboys try and push. There's an obvious reason you didn't reply to any of my previous posts that I quoted you on, that debunk each one of your claims. You read them, but choose to disregard them, since I'm not being a fangirl for your plastic box.


Like.. I don't know why people can't move past that.. PS5 has amazing I/O either way.

Why can't you praise the fact that PC doesn't even need DirectStorage to run this demo with better performance? Is that not impressive, that it can even run the demo, after Sweeney sold you guys on the BS PR?
 

PaintTinJr

Member
I've watched the presentation a dozen times and have the transcript on my desktop lol

That part of the presentation is clearly about bandwidth, not latency.. he even visually represents that, and uses the term "bottleneck".. he's talking about the drive speed the entire time.. 5.5GB/second has nothing to do with latency.. none of the numbers he gives are about latency. Look at this visual and the use of "bottleneck" - the point being that unless the bottleneck is removed, the 10x speed (aka the GB/second) is not maintained:

[Road to PS5 slide: narrowing arrows depicting the I/O bottleneck]


In the next slide he shows the arrows no longer narrowing. He is never talking about latency, whether it's improved or not. He is talking about how the speed is maintained w/o bottlenecks, and without crippling the CPU.

Latency is certainly improved over the old way of doing things.. but we have absolutely no way to quantify PS5's approach vs. something like RTX I/O, at all.
"Bottleneck" is a synonym for latency in all of that text.

The reason we know it is latency-related is that they highlighted the eDRAM in the IO complex - which has the latency of an LLC (L2/L3) cache, much lower than RAM - and they highlighted the custom controller with 12 lanes and 6 priorities, all designed for high utilisation - even at the cost of some bandwidth - to ensure a constant isochronous stream that doesn't bottleneck throughput under heavy load.
 

IntentionalPun

Ask me about my wife's perfect butthole
"Bottleneck" is a synonym for latency in all of that text.

The reason we know it is latency-related is that they highlighted the eDRAM in the IO complex - which has the latency of an LLC (L2/L3) cache, much lower than RAM - and they highlighted the custom controller with 12 lanes and 6 priorities, all designed for high utilisation - even at the cost of some bandwidth - to ensure a constant isochronous stream that doesn't bottleneck throughput under heavy load.

The term bottleneck does not apply to latency. A bottleneck would apply to bandwidth.. or throughput.. not latency.. latency is something that gets added to, not "bottlenecked." Latency can limit your throughput, despite having a lot of bandwidth, but by itself you shouldn't ever use "bottleneck" to describe a change in latency.

Like you just said at the end of your post lol... "does not bottleneck throughput"..lol

The only numbers Cerny ever discusses.. are throughput...the PS5 has significantly higher throughput than the XSX.. we know that. We don't really know how it compares to RTX IO though.. as that's not "the XVA".. XVA is the XSX hardware + the DirectStorage API.
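The distinction being argued can be made precise with Little's law (bytes in flight = throughput x latency); a minimal sketch with invented numbers:

[code]
// Sketch: why latency and bandwidth are different axes (Little's law:
// requests in flight = throughput x latency). All numbers invented.
#include <cstdio>

int main() {
    const double request_kb = 64.0;    // size of one read request
    const double latency_us = 100.0;   // completion latency per request
    const int    queue_depth = 32;     // requests kept in flight

    // throughput = in-flight / latency
    double mb_qd32 = queue_depth / (latency_us * 1e-6) * request_kb / 1024.0;
    double mb_qd1  =           1 / (latency_us * 1e-6) * request_kb / 1024.0;
    std::printf("QD32 @ %.0fus -> %.0f MB/s\n", latency_us, mb_qd32);
    std::printf("QD1  @ %.0fus -> %.0f MB/s\n", latency_us, mb_qd1);
    // Same device latency, same link bandwidth - wildly different throughput.
    return 0;
}
[/code]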
 

PaintTinJr

Member
I really just don't know what else to say here... the UE5 devs have, to me, clearly said the PS5 demo would have been the same on a PC SSD. No DirectStorage... no massive RAM cache.

There is no indication that by "latency" they are talking about the improvements from skipping the system RAM step; all SSDs offer low latency and high bandwidth compared to HDDs.. there is no indication that the demo ever requires the data rates of anything beyond a PC SSD, bottlenecks and all... based on Brian's video.. Andrew's comments, etc. And that does in fact contradict Sweeney's statements, whether he admits it or not. His own dev called his statements misleading.. in a conversation where he scoffed at the idea that any of it even required something like DirectStorage. None of them ever talked about some crazy fast latency requirement either. It's not like a PC SSD is actually high latency lol, they have seek times measured in microseconds... an HDD has literally hundreds of times that latency as far as seek times go... there's zero reason to believe that Brian or Andrew are talking about removing the "data goes through the CPU" bottleneck.
They all used very specific wording, and if you just want to take away the gist of those comments then what you are saying is entirely logical.

Clipmaps - or virtual texturing, as the technique is more commonly referred to, of which Nanite is a derived solution, AFAIK, from Brian's words in the video - are a perfect solution for scaling something to "work" across anything from a smartphone to a state-of-the-art PC workstation, while not being exactly equal on both.

Depending on the hardware's specs or the use-case requirements, you can adjust the parameters (or CVars, as the Epic engineer told people to try) of the clipmap frusta and trade off memory footprint, IQ and the streaming speed of the storage device updating the resident clipmap data as camera position changes.

When Brian showed late cluster streaming in his UE5 editor running Lumen in the Land of Nanite, that was completely new - the PS5 demo, which DF needed to be told wasn't native 4K, exhibits no late streaming. Wasting RAM and VRAM on a PC that has no fixed spec for UE5 games using Nanite is no big deal - that's the market, and it has been the same since I started with the Intel 8088 and 80286. However, on a console with just 16GB for the foreseeable future, regardless of a mid-cycle update, adjusting that clipmap to have the lowest memory footprint - while having the lowest streaming latency and bandwidth to get the best IQ from the clipmaps accessed, at any expected game camera traversal speed - is a big deal, and well worth the silicon area and eDRAM cost IMHO.

The guy that claimed an enterprise SSD RAID can match the PS5 IO could actually test it now with the Land of the Ancients, because they could massively reduce residency of the world with that new grid system (eventually down to 768MB - or 1.5GB to be fairer, as the PS5 was using GPU compression for the 768MB), then try to adjust the CVars as the engineer advised, then turn on cluster view and see at what camera traversal speed the engine could keep the clusters below 4 pixels in size while the camera is moving. That's what I believe the PS5 can do - but UE5 doesn't need it, as the engineers have said; it would just make the experience better, as Brian said, and it is still a class-leading IO solution that Tim was fine to promote as important - as UE5 games limited to 16GB of unified RAM will be served better by the machine that uses the least memory for Nanite and still has the smallest clusters while the camera is traversing.
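A toy model of that proposed test - all constants invented for illustration, on the assumption that streaming demand scales with how much new world the camera sweeps per second:

[code]
// Toy model: required streaming rate to keep geometry/texture clusters
// resident as camera speed rises. All constants are invented for
// illustration; the real cost depends on scene density and CVar settings.
#include <cstdio>

int main() {
    const double mb_per_sq_meter = 0.5;   // assumed resident data density
    const double view_width_m    = 100.0; // swath of world entering the frustum
    for (double cam_speed = 5.0; cam_speed <= 80.0; cam_speed *= 2.0) {
        // New area swept per second = speed * swath width
        double mb_per_sec = cam_speed * view_width_m * mb_per_sq_meter;
        std::printf("camera %5.1f m/s -> ~%6.0f MB/s of fresh clusters\n",
                    cam_speed, mb_per_sec);
    }
    return 0;  // the faster the traversal, the closer to raw SSD limits
}
[/code]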
 

PaintTinJr

Member
The term bottleneck does not apply to latency. A bottleneck would apply to bandwidth.. or throughput.. not latency.. latency is something that gets added to, not "bottlenecked." Latency can limit your throughput, despite having a lot of bandwidth, but by itself you shouldn't ever use "bottleneck" to describe a change in latency.

Like you just said at the end of your post lol... "does not bottleneck throughput"..lol

The only numbers Cerny ever discusses.. are throughput...the PS5 has significantly higher throughput than the XSX.. we know that. We don't really know how it compares to RTX IO though.. as that's not "the XVA".. XVA is the XSX hardware + the DirectStorage API.
Of course it does, most routers are bottlenecked by their switching latency, not their bandwidth.

The 5x better latency of the IO complex (a 100x improvement) versus XVA/RTX IO (a 20x improvement) makes perfect sense for a few reasons.

For a start, why would the PS5's bottleneck be bandwidth, when the decompression unit that DMAs can theoretically decompress 6GB more data per second (22GB/s) than the 16GB of RAM the console has? And it is a theoretical figure, because compressing game data by that much would be an edge case - so more bandwidth doesn't alleviate a "bottleneck".

The Road to PS5 words make no sense unless the bottleneck is the wait delay (latency) that stops work getting done immediately.

The major difference between the IO complex and the XVA/RTX IO solutions, AFAIK, is that the latter don't use eDRAM for the decompression process, and use RAM/VRAM respectively. The uncompressed block transfers from the IO complex's eDRAM to RAM will operate as fast as the RAM will allow, but the work done by the decompressor has eDRAM latency bottlenecks while the IO complex is running Kraken on a block - not RAM latency bottlenecks, like XVA using a CPU core to run zlib/Kraken on a block.

RAM is low latency compared to an SSD or HDD, but eDRAM has lower latency still - like an L2/L3 last-level cache - so I still believe the PS5's IO solution has at least 5x less latency.
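For reference, the kind of latency ladder being argued about, as a sketch - these are illustrative textbook orders of magnitude, not measured PS5 or XVA figures:

[code]
// Sketch of the ratios being claimed: typical order-of-magnitude access
// latencies (illustrative textbook values, not measured console figures).
#include <cstdio>

int main() {
    const double llc_edram_ns = 40.0;   // L2/L3- or eDRAM-class access
    const double dram_ns      = 200.0;  // DRAM access incl. controller
    const double nvme_us      = 80.0;   // NVMe SSD read latency
    const double hdd_ms       = 10.0;   // HDD seek

    std::printf("DRAM vs eDRAM-class : %.0fx\n", dram_ns / llc_edram_ns);
    std::printf("SSD  vs DRAM        : %.0fx\n", (nvme_us * 1000.0) / dram_ns);
    std::printf("HDD  vs SSD         : %.0fx\n", (hdd_ms * 1000.0) / nvme_us);
    return 0;
}
[/code]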
 

IntentionalPun

Ask me about my wife's perfect butthole
Of course it does, most routers are bottlenecked by their switching latency, not their bandwidth.

No, their throughput is bottlenecked by having poor switching latency, limiting their ability to actually reach their bandwidth... latency is a measure of time, not a measure of.. anything possible to bottleneck.

Just going to completely disagree on the rest, because I think you are fundamentally just wrong.

Time cannot be faster either; faster means more speed... speed is a measure of unit/time. Like.. the GB/second numbers Cerny had on the slides right before describing 100x faster.

Latency is lowered.. or shortened.
 

PaintTinJr

Member
Road to PS5 said:
For PlayStation 5 our goal was not just that the SSD itself be a hundred times faster, it was that game loads and streaming would be a hundred times faster, so every single potential bottleneck needed to be addressed.

The quote above is the reason I used the term "isochronous" in an earlier post: streaming data at 100x the speed for a game needs that bandwidth evenly spread across the unit of time, as it is no good getting some in the first 5 frames, none for the next 7 frames, and the rest condensed into the last 18 frames (assuming 30fps). Isochronous data is a Quality of Service characteristic of the communication, and requires latency at a specific level. Cerny stating "streaming would be a hundred times faster" means he's talking about isochronous data, so the latency went down by 100x too, AFAIK, for the context.
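A small sketch of the even-versus-bursty distinction being made here (the frame pattern mirrors the post's example; the per-frame need is an invented constant):

[code]
// Sketch: same total bytes across 30 frames, isochronous vs bursty
// delivery. Burst pattern mirrors the post's example (5 frames, a 7-frame
// gap, the rest at the end); the per-frame need is an invented constant.
#include <cstdio>

int main() {
    const double need_mb = 6.0;           // MB the renderer consumes per frame
    const double total_mb = 30 * need_mb; // same total for both schedules

    int even_starved = 0, burst_starved = 0;
    for (int f = 0; f < 30; ++f) {
        double even  = total_mb / 30.0;                  // isochronous
        double burst = (f < 5)  ? total_mb * 0.2 / 5.0   // early trickle
                     : (f < 12) ? 0.0                    // 7-frame gap
                     :            total_mb * 0.8 / 18.0; // late dump
        if (even  < need_mb) ++even_starved;
        if (burst < need_mb) ++burst_starved;
    }
    std::printf("starved frames - even: %d, bursty: %d\n",
                even_starved, burst_starved);
    return 0;  // identical bandwidth, very different frame-to-frame result
}
[/code]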
 

IntentionalPun

Ask me about my wife's perfect butthole
The quote above is the reason I used the term "isochronous" in an earlier post: streaming data at 100x the speed for a game needs that bandwidth evenly spread across the unit of time, as it is no good getting some in the first 5 frames, none for the next 7 frames, and the rest condensed into the last 18 frames (assuming 30fps). Isochronous data is a Quality of Service characteristic of the communication, and requires latency at a specific level. Cerny stating "streaming would be a hundred times faster" means he's talking about isochronous data, so the latency went down by 100x too, AFAIK, for the context.

No, he's talking about.. removing the bottleneck.. which makes it so that the throughput of the SSD, isn't limited elsewhere by.. a bottleneck.. which causes the actual throughput of data going into the GPU to be much lower, like on PC.

Which he describes and represents visually, and not once talks about latency.

And which is also accomplished by XVA... and RTX IO. XVA having lower raw throughput, and lower compression ratios.. RTX IO just isn't known.. but will be much higher than XVA.. Nvidia already showed an example w/ 14 GB/second.

Curious why you think Cerny didn't just, talk about latency? Mention latency? Quote a latency figure?
 

PaintTinJr

Member
No, he's talking about.. removing the bottleneck.. which makes it so that the throughput of the SSD, isn't limited elsewhere by.. a bottleneck.. which causes the actual throughput of data going into the GPU to be much lower, like on PC.

Which he describes and represents visually, and not once talks about latency.

And which is also accomplished by XVA... and RTX IO. XVA having lower raw throughput, and lower compression ratios.. RTX IO just isn't known.. but will be much higher than XVA.. Nvidia already showed an example w/ 14 GB/second.

Curious why you think Cerny didn't just, talk about latency? Mention latency? Quote a latency figure?
Why would he want to use a negative word like "latency" - which is a synonym for "delay", and possibly conjures thoughts of "slow", "weak", "rubbish" :) - when revealing the all-new, powerful PS5 technical specs?

The language he uses to explain the improvements all has positive and strong connotations IMHO.
 

FireFly

Member
Of course it does, most routers are bottlenecked by their switching latency, not their bandwidth.
Not at all. If the router can process all the packets it gets sent, then no traffic will have to be queued up. That's the case no matter what the latency is. It's like the speed limit on a road. You can have it set relatively low (high latency), but as long as the road is big enough to contain the quantity of traffic flowing along it, no one will need to slow down further. (Congestion with routers, like with cars, happens when many sources converge on the same destination).
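FireFly's point in a few lines of simulation - rates are invented; the takeaway is only that queues grow from rate imbalance, not from a fixed latency:

[code]
// Sketch: a router queue only builds when arrival rate exceeds service
// rate; a fixed per-packet latency just shifts delivery in time.
// All rates invented.
#include <cstdio>

int main() {
    const double service_pps = 1000.0;  // packets forwarded per second
    const double latency_ms  = 5.0;     // fixed switching delay per packet
    for (double arrival_pps = 500.0; arrival_pps <= 1500.0; arrival_pps += 500.0) {
        double growth = arrival_pps - service_pps;  // queued packets/sec
        std::printf("arrive %4.0f pps (latency %.0fms) -> queue grows %+5.0f pps\n",
                    arrival_pps, latency_ms, growth > 0.0 ? growth : 0.0);
    }
    return 0;
}
[/code]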
 

PaintTinJr

Member
Not at all. If the router can process all the packets it gets sent, then no traffic will have to be queued up. That's the case no matter what the latency is. It's like the speed limit on a road. You can have it set relatively low (high latency), but as long as the road is big enough to contain the quantity of traffic flowing along it, no one will need to slow down further. (Congestion with routers, like with cars, happens when many sources converge on the same destination).
Which is the real-world scenario of use - which is what I obviously meant, though I probably should have specified - not some uni-directional repeater setup where bandwidth gets saturated.
 
It's not me against the thread, and so far my evidence is in this thread, where I can claim that the editor is less resource-heavy. I even fly fast over the canyon, the same as they did in the last section of the PS5 demo. Editor mode is more flexible for going anywhere on the map without any mechanics enabled.
My own video showed it was not taxing in editor mode; you are fucking blind, that's for sure... so cut the crap.

Same demo, different execution. One is just an editor overview and the other is running in real time on a different system with all game mechanics enabled, which is more taxing.
We did this last week, and it's the editor, not an actual compiled demo (I assume it would run similar to PS5 because why wouldn't it?). Ratchet and Clank reviews/footage got people shook, so I guess we're back to the doom and gloom posts about Sony for the remainder of the week.
You have no clue. Editor mode doesn't have a lot of things turned on that are super taxing, like post-processing effects. Hence why at the end he shows you the portal but there are no effects applied. You can by all means turn things on and run them in editor mode. But the whole point of editor mode is to make changes and test. And the way you would fully test any changes - to see how they look and affect frame rate, hitching, or any other effects - is to play them out in real time. Which means things get added depending on the setup/demo you have created with your assets. Certain scripts and whatnot do not run in editor mode until you want to look at them and run them by activating them. If you were to activate everything you have set up to be applied and compiled for playback, it adds significant stress to the system.

PaintTinJr Bo_Hazem

"The lumen in the land of nanite demo in editor, so performance is lower in editor than it would be in game"
Daniel Wright, Engineering Fellow in Graphics at Epic Games, also runs the PS5 demo on his PC!
 

MonarchJT

Banned
Seeing PC gamers so quick to explain how their $2.5k rigs aren't behind in any facet is juicy. Good entertainment. I too would be upset if I spent that much money to be outdone in any area by a $400 machine.
the point is that the list of who will run UE5 better will look like this:
PC >>> XSX > PS5 > XSS

which is very different from what some fanboys used to say:
PS5 >>>>>>>>>>>>>>>> PC (in some years)
XSX > XSS (they will have to settle for an eternal downgrade)
 
In what world does anyone think the editor is easier to run than the game? What are you all smoking!

Anyway, latency helps this engine, as latency is a floor on how fast the disk can swap out the data. The bandwidth requirements aren't really stressed - lots of small swaps - so systems with lower-latency disks could potentially have less pop-in.
If the whole thing is in RAM, that will have the least pop-in, although that will not be practical for full-fledged games.
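That latency-floor argument in one small sketch - the request sizes and device figures are illustrative assumptions, not benchmarks:

[code]
// Sketch: time to fetch one asset = latency + size/bandwidth.
// For lots of small swaps the latency term dominates. Figures invented.
#include <cstdio>

static double fetch_ms(double size_kb, double latency_ms, double gbps) {
    return latency_ms + (size_kb / 1024.0) / (gbps * 1024.0) * 1000.0;
}

int main() {
    const double small_kb = 64.0, big_kb = 65536.0;  // 64 KB vs 64 MB
    // (latency_ms, GB/s): fast NVMe-ish vs slower SATA-ish (assumed values)
    std::printf("64KB: fast %.3fms vs slow %.3fms\n",
                fetch_ms(small_kb, 0.08, 7.0), fetch_ms(small_kb, 0.15, 0.5));
    std::printf("64MB: fast %.1fms  vs slow %.1fms\n",
                fetch_ms(big_kb, 0.08, 7.0), fetch_ms(big_kb, 0.15, 0.5));
    return 0;  // small swaps: latency-bound; big reads: bandwidth-bound
}
[/code]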
 
the point is that the list of who will run UE5 better will look like this:
PC >>> XSX > PS5 > XSS
I've been saying that since forever, but I would be bombarded by the fanboys saying it can't run on PC or Xbox. It's pretty funny to look back on in retrospect though.

Wonder if Snake29 ever recovered from being so fucking incorrect this whole time. It was like 20 different people telling him the same thing, over and over, and he still never understood how to compile the demo (fucking lmao) despite me explaining in great depth how to export it.
 

MonarchJT

Banned
I've been saying that since forever, but I would be bombarded by the fanboys saying it can't run on PC or Xbox. It's pretty funny to look back on in retrospect though.

Wonder if Snake29 ever recovered from being so fucking incorrect this whole time. It was like 20 different people telling him the same thing, over and over, and he still never understood how to compile the demo (fucking lmao) despite me explaining in great depth how to export it.
Yeah, but apart from the last thread, where Snake29 was very present, it's been a long time. Basically, since the presentation of the engine many have been posting inaccuracies: from the mythical Bo_Hazem to ethomaz, and again assurdum, Panajev2001a, James Sawyer Ford, Papacheeks, and many, many others that I blocked some time ago and who don't come to mind now. The thread on the consoles, the one on next gen, and finally this one on the engine will remain milestones. The crows have been served.
 
This is funny to see. The people who would constantly post the UE5 demo in every other post have nothing to post about anymore. UE5 must have really broken some people.
The PR got to some people so hard that they declared ray tracing entirely useless because Lumen is so much better. It's doubly funny once you realize that Lumen is literally ray tracing.
 

Panajev2001a

GAF's Pleasant Genius
Yeah, but apart from the last thread, where Snake29 was very present, it's been a long time. Basically, since the presentation of the engine many have been posting inaccuracies: from the mythical Bo_Hazem to ethomaz, and again assurdum, Panajev2001a, James Sawyer Ford, Papacheeks, and many, many others that I blocked some time ago and who don't come to mind now. The thread on the consoles, the one on next gen, and finally this one on the engine will remain milestones. The crows have been served.
Nice to live rent free :).
 

Papacheeks

Banned
Yeah, but apart from the last thread, where Snake29 was very present, it's been a long time. Basically, since the presentation of the engine many have been posting inaccuracies: from the mythical Bo_Hazem to ethomaz, and again assurdum, Panajev2001a, James Sawyer Ford, Papacheeks, and many, many others that I blocked some time ago and who don't come to mind now. The thread on the consoles, the one on next gen, and finally this one on the engine will remain milestones. The crows have been served.

What crow? Even the Unreal devs said as much about PS5 being super efficient in bandwidth and data. So people can be wrong when we don't have the information for a whole year and are led to believe specific aspects shown were tailored for a specific set of hardware. Anyone with a brain knows Unreal is coded and built on PC, and that PC is the primary deployment target.

Can you blame people when Tim Sweeney and other developers praise PS5's SSD I/O solution, which is still top class for development currently? Also, I have you blocked as well, and I'm not among the people who literally got banned a while ago for fanboying in other threads.

People can be wrong, and not everyone has the same mindset and experience. I was too caught up in my old knowledge of older Unreal engines to really see that it was running differently on PC, as opposed to console.

It's like you have a complex or something?
 

Bo_Hazem

Banned
Yeah, but apart from the last thread, where Snake29 was very present, it's been a long time. Basically, since the presentation of the engine many have been posting inaccuracies: from the mythical Bo_Hazem to ethomaz, and again assurdum, Panajev2001a, James Sawyer Ford, Papacheeks, and many, many others that I blocked some time ago and who don't come to mind now. The thread on the consoles, the one on next gen, and finally this one on the engine will remain milestones. The crows have been served.

Not sure why those guys are dragging people into an already well-discussed thread just to start some sort of platform wars or brawls. :lollipop_tears_of_joy: Good-games starvation has many side effects.

 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
You think panning around a single static object is as impressive as the dynamic, constantly moving and incredibly full and lifelike world of Forbidden West? You have to be joking.

You won't last long here, NeoMember
I frikken called it.
I thought this cat would at least make it past E3.
But not even.
 

PaintTinJr

Member
PaintTinJr Bo_Hazem

"The lumen in the land of nanite demo in editor, so performance is lower in editor than it would be in game"
Daniel Wright, Engineering Fellow in Graphics at Epic Games, also runs the PS5 demo on his PC!

I'm not exactly sure what the point of your message was, or why I got a mention, but the full video linked below is far more informative:


In the full video, it would seem that whatever his setup (PC + PS5 devkit), he is working specifically on the PS5 version - unless we are to believe that neither PC nor XSX have hardware ray tracing, to be missing from this picture - because when he searches for that feature, only PS5 shows up, and he then notices what it says and hides the PS5 part 3 seconds later (see images).


[Screenshots from the stream: the editor settings search for hardware ray tracing shows only a PS5 platform entry, which he hides moments later]

In the Q&A at the end of the video, Daniel answers a question from someone who was trying to build a forest in UE5 EA and noticed a drop-off in the lighting - which was confirmed to be limited to 200m for being fully illuminated by Lumen's primary SW techniques - as HW RT is limited to 20m IIRC - so surface tracing + mesh distance fields is limited to 200m, and then it falls back to global distance fields - and the user asks if they can extend that range. Daniel confirms it can be done via CVars, but doesn't provide the info; he then mentions that the PS5 Lumen in the Land of Nanite (internally called Project Reverb, like Cerny's comment about reverb for 3D audio ray tracing in Road to PS5) had the long flying-view scene use all SW Lumen features for 1km - via hand-tweaking the CVars.
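For readers keeping track, the distance bands described there as a sketch (the 20m/200m/1km thresholds are the figures quoted in the post; the selection logic is a simplification, not Epic's code):

[code]
// Sketch of the described fallback chain: pick a tracing method by hit
// distance. Thresholds are the figures quoted in the post; the structure
// is an illustration, not Epic's implementation.
#include <cstdio>

const char* TraceMethod(float distance_m, bool hw_rt, float distant_scene_m) {
    if (hw_rt && distance_m < 20.0f)  return "hardware ray tracing";
    if (distance_m < 200.0f)          return "surface tracing + mesh distance fields";
    if (distance_m < distant_scene_m) return "global distance field";
    return "distant-scene fallback";
}

int main() {
    const float kDistantSceneM = 1000.0f; // the hand-tweaked 1km from the demo
    const float sample_distances_m[] = {10.0f, 150.0f, 600.0f, 2000.0f};
    for (float d : sample_distances_m)
        std::printf("%6.0fm -> %s\n", d, TraceMethod(d, true, kDistantSceneM));
    return 0;
}
[/code]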

Loads more great points are made in the video, not least confirming that Lumen is only available for high-end PC and next-gen console - which implies Nanite is available for lower specs - and as the RTX 2000 series is listed, this confirms the China engineering laptop is a high-end PC - as I said it was.

Anyone thinking crow is being served should really watch the video in full IMHO.
 

CamHostage

Member
In other UE5 news... by my count, Unreal Engine 5 has been committed to as the engine behind just one game this year. (Have I missed other ones I didn't notice?)

  • Echoes of the End, by Myrkur Games (*very early)


Of course, I didn't expect UE5 to be all over the show this soon and blowing up the paradigm (the toolkit is still very much in development and the Early Access just came out, though I have to imagine there are other key pilot partners besides Ninja Theory who have been working with it before the Early Access), yet I did have it in my head to expect a lot of commitments to UE5 at E3, or whatever the hell this summerfest is supposed to be. It feels weird that we've seen the new Unreal Engine, and we can play with the new Unreal Engine ourselves, yet unlike other versions of UE and its development suite, we don't yet have a huge list of "Untitled XX UE5 Project" announcements from a bunch of studios. It'll still be a while, I guess.

(Not that this subtopic needs to be in this thread, I just thought maybe a distraction would help calm down the arguing, which is abating a bit now anyway.)
 

FireFly

Member
Further confirmation from Andrew Lauritzen on the IO requirements of Lumen in the Land of Nanite:

"Again I'll won't put words in Nick's mouth, but to your specific question I can reiterate to be absolutely clear: the demo Brian was running on his PC during the twitch stream with just a regular SSD (no DirectStorage or anything fancy) is the same demo/content that was run last year on the PS5. Obviously there have been engine improvements since then, but nothing that really affect the IO question here. An SSD is important to make this stuff work well, but it doesn't need to be a super fancy one. All the Nanite and virtual texture data can happily stream as you move around the world dynamically. Fabian's twitter thread that was linked earlier is a great summary in general."

 

PaintTinJr

Member
One of the differences between the previous talk on Nanite and the latest one on Lumen is that exact performance figures weren't given for Lumen - whereas in the previous talk, Brian reused the PS5's Nanite performance figures, giving a total below ~5ms.

The reason I'm intrigued by this lack of detail for Lumen is that Lumen in the Land of Nanite was internally named Reverb - Daniel mentions he can't hide that from viewers in the talk - which may suggest that Nanite and Lumen began life at a lower fidelity/performance for running on the PS5's 3D-audio Tempest accelerator solution and were extended to include graphics when they realised how effective they could be. But that then begs the question: does Lumen in the original demo use the Tempest engine - and hence the capped 1400p30 for Epic settings, which is far more than the 1080p30 for Epic settings implied as a max res for next-gen consoles in all scenarios? In the video, Daniel also implies that Lumen doesn't really run in a single frame, and lighting results - from scene changes - are accumulated over multiple frames; and in the Q&A he handles a question specifically about a muzzle flash for a game's gun, which he advises should be rendered without Lumen, using UE4 lighting techniques - which then makes the ~14ms Lumen slide figure - from the 2020 UnrealFest PS5 demo video - look very misleading.

I'd speculate there is a possibility that Lumen can be real-time (single frame, ~1400p30) on PS5 provided a developer isn't wanting to use the Tempest engine for audio in a different way than letting UE5's Lumen ray trace using a combination of the GPU and Tempest engine - eliminating redundancy by Lumen's SW-traced lighting being built as a superset of the audio tracing - like the Road to PS5 RT slide implies. That would also mean that the 1080p30 downgrade for PS5 Lumen wouldn't really be a downgrade, because I'd assume that would be for a scenario where Lumen just uses the GPU, and the Tempest engine is used independently for audio.
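On the "accumulated over multiple frames" point: the standard way such lighting is amortised is temporal accumulation; a generic sketch (a plain exponential moving average, not Epic's actual implementation):

[code]
// Sketch: temporal accumulation of a lighting value across frames.
// A generic exponential moving average - illustrative of the idea that
// GI "converges" over several frames rather than resolving in one.
#include <cstdio>

int main() {
    float accumulated = 0.0f;  // lit result carried frame to frame
    const float target = 1.0f; // new lighting after a scene change
    const float alpha  = 0.1f; // blend weight per frame (assumed)

    for (int frame = 1; frame <= 30; ++frame) {
        accumulated += alpha * (target - accumulated); // EMA update
        if (frame % 10 == 0)
            std::printf("frame %2d: %.2f of final lighting\n", frame, accumulated);
    }
    return 0;  // ~10 frames to ~65%, ~30 frames to ~96% with alpha = 0.1
}
[/code]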
 

Lethal01

Member
In other UE5 news... by my count, Unreal Engine 5 has been committed to as the engine behind just one game this year. (Have I missed other ones I didn't notice?)

  • Echoes of the End, by Myrkur Games (*very early)


Of course, I didn't expect UE5 to be all over the show this soon and blowing up the paradigm (the toolkit is still very much in development and the Early Access just came out, though I have to imagine there are other key pilot partners besides Ninja Theory who have been working with it before the Early Access), yet I did have it in my head to expect a lot of commitments to UE5 at E3, or whatever the hell this summerfest is supposed to be. It feels weird that we've seen the new Unreal Engine, and we can play with the new Unreal Engine ourselves, yet unlike other versions of UE and its development suite, we don't yet have a huge list of "Untitled XX UE5 Project" announcements from a bunch of studios. It'll still be a while, I guess.

(Not that this subtopic needs to be in this thread, I just thought maybe a distraction would help calm down the arguing, which is abating a bit now anyway.)


Square Enix is another; they have stated that Dragon Quest 12 will be made using Unreal Engine 5.
 
One of the differences between the previous talk on Nanite and the latest one on Lumen is that exact performance figures weren't given for Lumen - whereas in the previous talk, Brian reused the PS5's Nanite performance figures, giving a total below ~5ms.

The reason I'm intrigued by this lack of detail for Lumen is that Lumen in the Land of Nanite was internally named Reverb - Daniel mentions he can't hide that from viewers in the talk - which may suggest that Nanite and Lumen began life at a lower fidelity/performance for running on the PS5's 3D-audio Tempest accelerator solution and were extended to include graphics when they realised how effective they could be. But that then begs the question: does Lumen in the original demo use the Tempest engine - and hence the capped 1400p30 for Epic settings, which is far more than the 1080p30 for Epic settings implied as a max res for next-gen consoles in all scenarios? In the video, Daniel also implies that Lumen doesn't really run in a single frame, and lighting results - from scene changes - are accumulated over multiple frames; and in the Q&A he handles a question specifically about a muzzle flash for a game's gun, which he advises should be rendered without Lumen, using UE4 lighting techniques - which then makes the ~14ms Lumen slide figure - from the 2020 UnrealFest PS5 demo video - look very misleading.

I'd speculate there is a possibility that Lumen can be real-time (single frame, ~1400p30) on PS5 provided a developer isn't wanting to use the Tempest engine for audio in a different way than letting UE5's Lumen ray trace using a combination of the GPU and Tempest engine - eliminating redundancy by Lumen's SW-traced lighting being built as a superset of the audio tracing - like the Road to PS5 RT slide implies. That would also mean that the 1080p30 downgrade for PS5 Lumen wouldn't really be a downgrade, because I'd assume that would be for a scenario where Lumen just uses the GPU, and the Tempest engine is used independently for audio.

Boy, are you delusional. So instead of listening to Unreal Engine engineers, you have conjured up your own conclusion. My goodness, they literally told you that the demo is 1080p instead of 1440p because it's way heavier than the last demo and far less optimized.

But that doesn't satisfy the required worship of your plastic box. So you invent that the internal names which Epic and other companies create for their internal projects are somehow linked with the PS5's audio chip. Which has absolutely zero connection. This is how delusional you are. Reverb is just the internal name for that demo, just like Topaz is the internal name for the Valley of the Ancient demo. They talked about it in the first stream - that they were not to use the internal names because no one would know what they were referring to - and they sort of had a friendly bet on how many times they would mistakenly use the internal names. And Chance used them the most.

Daniel saying he can't hide the name is referring to that first stream and the banter they had.

The landscape 1km feature for Lumen can already be turned on using the console variable Lumen.DistantScene. This is in the documentation.

I feel sad for you.
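For anyone wanting to try that: a sketch of flipping the console variable named above from C++ (assuming the CVar name exactly as given in the post; requires a UE5 module context):

[code]
// Sketch: enabling the CVar named in the post from game code (UE5 module
// context assumed; equivalent to typing `Lumen.DistantScene 1` in the
// in-editor console).
#include "HAL/IConsoleManager.h"

void EnableLumenDistantScene()
{
    if (IConsoleVariable* CVar =
            IConsoleManager::Get().FindConsoleVariable(TEXT("Lumen.DistantScene")))
    {
        CVar->Set(1); // turn the distant-scene path on
    }
}
[/code]

The same setting can also be pinned for every launch in Engine/Config/ConsoleVariables.ini under the [Startup] section.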
 
Boy, are you delusional. So instead of listening to Unreal Engine engineers, you have conjured up your own conclusion. My goodness, they literally told you that the demo is 1080p instead of 1440p because it's way heavier than the last demo and far less optimized.

But that doesn't satisfy the required worship of your plastic box. So you invent that the internal names which Epic and other companies create for their internal projects are somehow linked with the PS5's audio chip. Which has absolutely zero connection. This is how delusional you are. Reverb is just the internal name for that demo, just like Topaz is the internal name for the Valley of the Ancient demo. They talked about it in the first stream - that they were not to use the internal names because no one would know what they were referring to - and they sort of had a friendly bet on how many times they would mistakenly use the internal names. And Chance used them the most.

Daniel saying he can't hide the name is referring to that first stream and the banter they had.

The landscape 1km feature for Lumen can already be turned on using the console variable Lumen.DistantScene. This is in the documentation.

I feel sad for you.
Lmfaaaao, so it's not just me noticing this. When I read the Tempest engine part, I figured I might as well get my conspiracy hat on for that shit.

Although it's easier to just listen to the developers, rather than make up your own fantasy....
 

PaintTinJr

Member
Boy, are you delusional. So instead of listening to Unreal Engine engineers, you have conjured up your own conclusion. My goodness, they literally told you that the demo is 1080p instead of 1440p because it's way heavier than the last demo and far less optimized.
Not 1440p on PS5. I'm talking about ~1400p30 in the (Reverb) UE5 demo of Lumen in the Land of Nanite showcased on PS5 12 months ago, and it takes ~15ms for Lumen to run according to Epic's slides - and Daniel explicitly said resolution is the main performance issue, so 1080p30 is a massive drop in performance. It takes ~5ms for Nanite in Reverb, and at a capped 30fps you have 33ms, so on average they've got a few ms shy of a 16.6ms frame spare in Reverb.

Daniel also goes into detail on why the Land of the Ancients' use of Lumen is much easier than indoor lighting - inferring that Reverb taxes Lumen harder with the indoor GI lighting, while Ancients is more taxing with Nanite - but Nanite works on everything, and full Lumen like Reverb needs an RTX 2000-series GPU, and Epic's frame-budget info for each technology from Reverb backs that up.
But that doesn't satisfy the required worship of your plastic box.
What plastic box? I'm more about being interested in hardware engineering complementing the problem domain of software engineering for games - and as Sony most definitely do that part best on a fixed budget, I do prefer Sony engineering, so it is no wonder you'd see it as something else.

So you invent that the internal names which Epic and other companies create for their internal projects are somehow linked with the PS5's audio chip. Which has absolutely zero connection. This is how delusional you are. Reverb is just the internal name for that demo, just like Topaz is the internal name for the Valley of the Ancient demo. They talked about it in the first stream - that they were not to use the internal names because no one would know what they were referring to - and they sort of had a friendly bet on how many times they would mistakenly use the internal names. And Chance used them the most.
Daniel saying he can't hide the name is referring to that first stream and the banter they had.

The landscape 1km feature for Lumen can already be turned on using the console variable Lumen.DistantScene. This is in the documentation.
He said it because you can see the filename in the UI frame, and it is interesting that the PC - of intentionally undisclosed specs - he's showing it on only has the PS5 platform target when he searches for Hardware RT options. Inferring that it is a bespoke setup for his work on PS5, like for PS5-specific RT for Reverb audio - but yeah, feel free to ignore the special engineering relationship Sony has with Epic for UE and assume that hundreds of millions of dollars only buys a marketing arrangement.
As for the delusions: are we really to believe that, despite Lumen being pushed harder in the Reverb demo - costing 3x the Nanite performance per frame - and it being capped at 30fps, the PS5 hardware can only do Nanite + Lumen at 1080p30 now, without some change that cost nearly 14ms and renders nearly 50% fewer pixels without a reason?

Either what we were first shown was fake, the software is now more demanding, or the 1080p30 target is there to remove a comparison between the consoles, and is unrelated to actual capabilities.
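The frame-budget arithmetic both sides keep gesturing at, in one place (the millisecond figures are the ones quoted from Epic's slides in this thread; the pixel counts are standard resolutions):

[code]
// Sketch: the 30fps frame budget against the Lumen/Nanite costs quoted
// in this thread, plus the pixel-count ratio behind 1440p vs 1080p.
#include <cstdio>

int main() {
    const double frame_ms  = 1000.0 / 30.0;  // ~33.3ms per frame at 30fps
    const double lumen_ms  = 15.0;           // figure quoted from Epic's slides
    const double nanite_ms = 5.0;            // figure quoted from Epic's slides
    std::printf("budget %.1fms - lumen %.0fms - nanite %.0fms = %.1fms left\n",
                frame_ms, lumen_ms, nanite_ms, frame_ms - lumen_ms - nanite_ms);

    const double px_1440p = 2560.0 * 1440.0;
    const double px_1080p = 1920.0 * 1080.0;
    std::printf("1440p has %.2fx the pixels of 1080p\n", px_1440p / px_1080p);
    return 0;  // ~13.3ms left; 1080p shades ~44% fewer pixels than 1440p
}
[/code]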
 
Not 1440p on PS5. I'm talking about ~1400p30 in the (Reverb) UE5 demo of Lumen in the Land of Nanite showcased on PS5 12 months ago, and it takes ~15ms for Lumen to run according to Epic's slides - and Daniel explicitly said resolution is the main performance issue, so 1080p30 is a massive drop in performance. It takes ~5ms for Nanite in Reverb, and at a capped 30fps you have 33ms, so on average they've got a few ms shy of a 16.6ms frame spare in Reverb.
Yes, Unreal Engine engineers ALREADY SAID that the entire Lumen in the Land of Nanite demo is way less taxing than the Valley of the Ancient demo. They also SPECIFICALLY SAID this is why the demo is 1080p instead of 1440p on consoles.

Stop calling it Reverb. If you are going to call it Reverb, you also have to call Valley of the Ancient "Topaz".

Daniel also goes into detail on why the Land of the Ancients' use of Lumen is much easier than indoor lighting - inferring that Reverb taxes Lumen harder with the indoor GI lighting, while Ancients is more taxing with Nanite - but Nanite works on everything, and full Lumen like Reverb needs an RTX 2000-series GPU, and Epic's frame-budget info for each technology from Reverb backs that up.
Again, stop calling it Reverb and the other one Valley. It's either Lumen in the Land of Nanite and Valley of the Ancient, or Reverb and Topaz. Secondly, no, he wasn't saying that interior lighting taxes Lumen, nor was he saying that interior lighting is heavier to run for Lumen. He was specifically talking about arch viz (architectural visualization), which requires the absolute cleanest, highest-quality, perfect lighting. Any artifact will show up on those arch-viz white walls. It's a statement on quality, not performance. If you had any clue about 3D rendering, maybe you would understand.

Secondly, the Lumen demo does not need an RTX 2000 series. GTFO again with your BS delusional lies. We already have the full Lumen implementation, which you can crank up all the way. Epic did not use hardware ray tracing for Lumen on the PS5 demo - the engineer literally said IT DID NOT EXIST. We have all the software Lumen features available, and they can be cranked up to the highest levels in the Valley of the Ancient demo. That demo does not use the Lumen.DistantScene feature that the PS5 demo used, because it does not have any landscape. But you can still turn it on through the command line using Lumen.DistantScene.

What plastic box? I'm more about being interested in hardware engineering complementing the problem domain of software engineering for games - and as Sony most definitely do that part best on a fixed budget, I do prefer Sony engineering, so it is no wonder you'd see it as something else.
No; instead of listening to the developers and engineers who created something, you are trying to tell them how what they created works, so you can sleep better at night.
He said it because you can see the filename in the UI frame, and it is interesting that the PC - of intentionally undisclosed specs - he's showing it on only has the PS5 platform target when he searches for Hardware RT options.
No, it's not. That's just a project settings option. This is how the engine works. You don't create a target for PC. It's there because, among other scenes, this is one Daniel uses to test.

Inferring that it is a bespoke setup for his work on PS5, like for PS5-specific RT for Reverb audio - but yeah, feel free to ignore the special engineering relationship Sony has with Epic for UE and assume that hundreds of millions of dollars only buys a marketing arrangement.
You are delusional. This has nothing to do with the audio chip the PS5 has.

As for the delusions: are we really to believe that, despite Lumen being pushed harder in the Reverb demo - costing 3x the Nanite performance per frame - and it being capped at 30fps, the PS5 hardware can only do Nanite + Lumen at 1080p30 now, without some change that cost nearly 14ms and renders nearly 50% fewer pixels without a reason?

Either what we were first shown was fake, the software is now more demanding, or the 1080p30 target is there to remove a comparison between the consoles, and is unrelated to actual capabilities.

First of all, Lumen IS NOT PUSHED harder in the Lumen in the Land of Nanite demo. Lumen has always been heavy, then and now. In fact, Lumen is heavier now than it was then, because it uses ray-tracing hardware, which it didn't before.

Lastly: the engineers who created the demo, and Nanite, and Lumen, LITERALLY TOLD YOU. You DON'T WANT TO LISTEN TO THEM. It's a YOU problem.
 

FireFly

Member
I'd speculate there is a possibility that Lumen can be real-time (single frame, ~1400p30) on PS5 provided a developer isn't wanting to use the Tempest engine for audio in a different way than letting UE5's Lumen ray trace using a combination of the GPU and Tempest engine - eliminating redundancy by Lumen's SW-traced lighting being built as a superset of the audio tracing - like the Road to PS5 RT slide implies. That would also mean that the 1080p30 downgrade for PS5 Lumen wouldn't really be a downgrade, because I'd assume that would be for a scenario where Lumen just uses the GPU, and the Tempest engine is used independently for audio.
I find the "Secret Power of the Tempest Engine" talk hilarious, considering we are talking about a single CU, which the PS5 GPU already has 72 of (36 dual CUs/WGPs). Yes, I'm sure a part delivering 1/72nd of the power of the PS5 is the secret to unlocking its almighty power. By my calculations, Tempest would have 0.143 Teraflops, if it was running at the same speed as the GPU. So that's a nice bump over the PS4 CPU, like Cerny was saying.
 
No. They are directly named after the PS5's Tempest engine and Geometry Engine respectively... /s

That's a very valid point though, you might be on to something.

Lol, I don't mind the names that Sony has for their bespoke PS5 technologies. It's like how Epic replaced Havok physics in UE4 with their own Chaos physics; both words convey the idea of realistic destruction.

And in that sense I'm curious to know if the internal names were also conveying something similar, which they decided to change for public consumption; both Lumen and Nanite have connotations of light and geometry, but neither is an actual word in use outside of UE5.
 

Loope

Member
Yeah, but apart from the last thread, where Snake29 was very present, it's been a long time. Basically, since the presentation of the engine many have been posting inaccuracies: from the mythical Bo_Hazem to ethomaz, and again assurdum, Panajev2001a, James Sawyer Ford, Papacheeks, and many, many others that I blocked some time ago and who don't come to mind now. The thread on the consoles, the one on next gen, and finally this one on the engine will remain milestones. The crows have been served.
- That's quite the list you got there, my man.
 