
Tim Sweeney on the Tech Demo: "Nanite and Lumen tech powering it will be fully supported on both PS5 and Xbox Series X and will be awesome on both."

-Arcadia-

Banned
Except Fortnite next year; and we don't know how easy the upgrade to UE5 is going to be.

These features might be built on top of UE4, and they might just be naming it UE5 because of the leap in fidelity, leaving an easy upgrade path for devs. Or it might be a fundamentally different engine and it will take years to see games; we really just don't know yet.

Agreed. Though given that it supports current-generation consoles and mobile hardware from years back, I'm inclined to think it's the former. It would probably be a really easy, nice transition for devs if it were.
 

IntentionalPun

Ask me about my wife's perfect butthole
Agreed. Though given that it supports current-generation consoles and mobile hardware from years back, I'm inclined to think it's the former. It would probably be a really easy, nice transition for devs if it were.
Yeah from what I understand UE4 is an incredibly flexible engine and there'd have to be a really compelling reason to truly re-architect it.

But they could have; or something about the way Nanite works might require some sort of major change, making it not such an easy upgrade path.

Never been so excited to find out more about a game engine though lol... gonna be pretty amazing.
 

VFXVeteran

Banned
Sorry pal, but to me the way you write makes me doubt that you've spent much, if any, time working in the games business. I'm sorry, but contrary to what you say in your final paragraph, you do seem to be unduly fixated on PC, and all too willing to discount the importance of overall I/O throughput to engine performance.

To be precise, I'm not suggesting that you don't have any technical insight or experience; maybe you work in academia or some other commercial field, just not in game dev. Your sentiments seem... out of whack compared to people I know/have known over the years.

Maybe I'm wrong; I make no claims of having more than a strong intuition on the matter. But it has been formed from reading your posts going back a few months.

Yea, you'd be wrong. I've been working in realtime graphics for 3 years now... before that, 20 years in film making CG features, so I know the general techniques and have implemented them. As I mentioned prior, the speed of I/O wasn't the end-all-be-all in that demo. And RAM would definitely be a better choice than an SSD if a dev had infinite resources.

As I also state with all of my claims: if you have a coding background, have done this before, and disagree, show me where I'm wrong. Until then, I'll keep trying to educate people.
 
Last edited:

darkinstinct

...lacks reading comprehension.
It will also be on PC 2 lol
And probably on Switch.


Nanite and Lumen were never even in question, though? It was the flying section and lack of LOD pop-in that people were speculating would only be possible with an ultra fast SSD I/O solution.
Which is complete bullshit, because exactly the same thing was shown in the Hellblade 2 demo. And everybody wondered how it was possible that it had absolutely zero visible pop-in.
 
Last edited:

VFXVeteran

Banned
I don't know.

I remember asking you about PlayStation hardware specifically and you kept bringing up PC hardware, which had absolutely nothing to do with my question. I think if people are talking about consoles specifically, then you don't need to bring PC hardware into the discussion (at least for the most part).

I bring up PC hardware because it's the main platform on which to gauge performance and a medium with enough support for doing R&D. I've been to several gaming studios and have talked with many developers. All of them have PCs they use to develop the groundwork for their graphics engines. ALL of them, and that includes Sony 1st-party devs. I also have friends scattered throughout the entire gaming industry - hell, I've even worked for Kim (CTO @ Epic) before. We all do our discovery on the PC. It's just that simple. So I bring it up in order to make good comparisons.
 
Last edited:

FireFly

Member
I want my vindication first. :messenger_tears_of_joy: How many times have I gone into great detail about how some of these subsystems work, only to get the cold shoulder or get trolled? Even after all of my info on PS5 and Sony's plans, I still get flamed.

I had one developer come in here and educate me on the details of how Nanite works. But no one corrected the misinformation that the demo was done on the PS5 solely because its SSD was the crown jewel.

People think I push the PC narrative, but what I'm really doing is using the PC as an agnostic piece of hardware, a baseline against which all other comparisons can be made. People need to stop trying to defend their hardware of choice and adopt an "it's all about the games" mentality.
You said the demo had nothing to do with the SSD, and you could run it on a PC without one. So I think your vindication is waiting on someone running it off their laptop's mechanical hard drive.
 

DForce

NaughtyDog Defense Force
I bring up PC hardware because it's the main platform on which to gauge performance and a medium with enough support for doing R&D. I've been to several gaming studios and have talked with many developers. All of them have PCs they use to develop the groundwork for their graphics engines. ALL of them, and that includes Sony 1st-party devs. I also have friends scattered throughout the entire gaming industry - hell, I've even worked for Kim (CTO @ Epic) before. We all do our discovery on the PC. It's just that simple. So I bring it up in order to make good comparisons.

This is what I'm talking about.

DForce said:
Spider-Man on Playstation 5 with an SSD
VS
Spider-Man on PlayStation 5 with just a Standard HDD

The way data is streamed into the game, you're telling me they're not able to stream higher-quality assets into the game with an SSD in comparison to just a regular HDD?


VFXVeteran said:
That's the PS5 vs. a standard HDD on the PS4.

That test wasn't a PS5 SSD vs. XSX SSD or PC SSD or a PC w/64G of DDR4 RAM (I have this in my box now). A PC can hold the entire Spider-Man game in memory easily, so it'll crush any SSD on any hardware. That's the ultimate goal. Since the consoles don't have that kind of bandwidth, they use the SSD to stream assets. But just because they used a PS5 SSD doesn't mean that demo wouldn't work on a PC/XSX with an SSD.

1. PC had nothing to do with the conversation.
2. I never referenced the demo Sony showed off. I was giving you a scenario in which the same Spider-Man was running on the PlayStation 5 - one with an SSD and the other with a standard HDD.

You were telling me how a PC can hold the entire game in memory easily when it was irrelevant to our conversation.

If I'm asking how Insomniac can improve the Spider-Man game with an SSD over a standard HDD, then there's no reason to bring up a PC in the conversation.
 

VFXVeteran

Banned
You said the demo had nothing to do with the SSD, and you could run it on a PC without one. So I think your vindication is waiting on someone running it off their laptop's mechanical hard drive.

I didn't say it had nothing to do with the SSD. I said that it wasn't streaming every frame like you guys thought it would. Look I don't need to prove anything to you guys. Those of you who doubt what I say clearly have an agenda even though you have 0 experience programming graphics at all.
 

VFXVeteran

Banned
This is what I'm talking about.






1. PC had nothing to do with the conversation.
2. I never referenced the demo Sony showed off. I was giving you a scenario in which the same Spider-Man was running on the PlayStation 5 - one with an SSD and the other with a standard HDD.

You were telling me how a PC can hold the entire game in memory easily when it was irrelevant to our conversation.

If I'm asking how Insomniac can improve the Spider-Man game with an SSD over a standard HDD, then there's no reason to bring up a PC in the conversation.

I don't remember the context in which I made that comment. But if it was that simple, then you are right. I'm constantly having to debunk the "superior hardware advantage" of Sony products because a lot of you guys are so elitist. The MS guys get thrashed over and over, and the PC guys get roped into arguments with ridiculous claims of the PS being superior to the PC. It's annoying and none of you even admit it.
 
Last edited:

FireFly

Member
I didn't say it had nothing to do with the SSD. I said that it wasn't streaming every frame like you guys thought it would. Look I don't need to prove anything to you guys. Those of you who doubt what I say clearly have an agenda even though you have 0 experience programming graphics at all.

OMG. Are you serious? You criticize Epic's engine every chance you can. Why are you now switching sides? And this demo has nothing to do with SSD. This demo is for every platform.

 

VFXVeteran

Banned

Yes, and that's correct. That demo was a tech demo for the graphics engine. It wasn't a marketing commercial for the incredible speed of the Sony PS5 SSD.

You guys took one comment "PS5 SSD is more efficient than the PC" and then "this demo showcases our Nanite technology" and somehow strung them together to fit your narrative. Admit that.
 
Last edited:

Vawn

Banned
You know what, yes, I have a bad post history. And I've been banned for it before, and I've taken my punishment. I'm in dialogue with the mods on the site, and it is a private matter, so you have no right to talk about it when you have no clue why things happened like they did.

Perhaps not saying PlayStation breaks promises you know they never made would be a better start at this "new you"?

And, yes, I have every right to call you out on this stuff. Don't "I'm not a fanboy like YOU" me and then tell me I have no right to point out your console war statements not only in this thread but throughout your entire posting history.
 

Clear

CliffyB's Cock Holster
Yea, you'd be wrong. I've been working in realtime graphics for 3 years now... before that, 20 years in film making CG features, so I know the general techniques and have implemented them. As I mentioned prior, the speed of I/O wasn't the end-all-be-all in that demo. And RAM would definitely be a better choice than an SSD if a dev had infinite resources.

As I also state with all of my claims: if you have a coding background, have done this before, and disagree, show me where I'm wrong. Until then, I'll keep trying to educate people.

First of all, I'm not trying to be rude or needlessly combative with you. Hopefully I explained my reasoning as to why, specifically, I wasn't sold on your perspective. Secondly, thanks for being moderate and civil in your reply. Sincerely, that means a lot as I wouldn't blame you for feeling slighted, if not straight-up insulted by my post.

My point is that your perspective seems extremely narrow in terms of what is actually important from a production standpoint. I say this as someone who has 20+ years experience as a coder, designer, and producer in commercial game production.

I suspect what may be tripping me up, or at least affecting my impression of what you're saying, is that if your background is primarily in CG movies, your thinking may be bound to a "shot-for-shot" paradigm, as opposed to the more holistic, pragmatic methodology of game production.

For example, you mention "infinite resources", which is anathema to games because it's a mass-market business where only a tiny minority of users and clients will have access to, or even need for, the highest-end hardware.

It's about constraint and limitation management. It's not just about what you're rendering; it's about the whole game-world simulation, dynamics, mechanics, inputs, and how all these things flow in real-time.

What that demo showed visually was the least interesting part of the tech. You could do that climactic flythrough by spooling pre-rendered video, but that'd miss the point. The valuable part is that it showed ultra-high-detail assets being streamed in rapidly, in real-time, on a relatively modestly specced closed system (PS5). Additionally, it promises accelerated workflow thanks to GI and the ability to avoid generating multiple LOD levels for every asset.

The main thing, though, is that the promise of a high-efficiency I/O stack is that all this stuff can fit together seamlessly as part of a larger whole.
Just because that demo can run on a slower SSD doesn't take away from the fact that any additional headroom on I/O is still going to be extremely valuable, especially in the context of a larger, more complex application like a full-blown game.
 

FireFly

Member
Yes, and that's correct. That demo was a tech demo for the graphics engine. It wasn't a marketing commercial for the incredible speed of the Sony PS5 SSD.
I feel like "not specifically about" and "nothing to do with", have different meanings, and it is pretty hard to argue that the SSD played no role in the demo when Tim Sweeney specifically highlighted it as being necessary to achieve the level of fidelity shown off. I am not one of the PS5 "special sauce" people, and it is perfectly possible that a laptop NVMe drive will be adequate to run the demo, but that doesn't mean that the same applies to a mechanical hard drive, given some reasonable amount of installed RAM.
 
Last edited:

VFXVeteran

Banned
First of all, I'm not trying to be rude or needlessly combative with you. Hopefully I explained my reasoning as to why, specifically, I wasn't sold on your perspective. Secondly, thanks for being moderate and civil in your reply. Sincerely, that means a lot as I wouldn't blame you for feeling slighted, if not straight-up insulted by my post.

My point is that your perspective seems extremely narrow in terms of what is actually important from a production standpoint. I say this as someone who has 20+ years experience as a coder, designer, and producer in commercial game production.

I suspect what may be tripping me up, or at least affecting my impression of what you're saying, is that if your background is primarily in CG movies, your thinking may be bound to a "shot-for-shot" paradigm, as opposed to the more holistic, pragmatic methodology of game production.

And that may well be true. I understand the tricks of the trade are important, and I understand that more complicated pipelines are needed to extract performance. However, since I've been in both camps (not gaming, but realtime graphics), I can see a broader picture of where gaming ultimately wants to go. Every generation it becomes more and more apparent that tricks are less coveted and that a straight-up pipeline similar to film's is what's wanted. This entire demo speaks to the desire for a simple pipeline (like in film) where artists can do what they love and not spend enormous amounts of time trying to figure out how to make it work on closed systems like the consoles. Ray-tracing is no exception. It's much better to just cast rays into the scene and get your final pixel color instead of combining multiple layers to capture details from the macro level to the micro level. I know that infinite resources aren't realistic, but having many more resources is better than having too few. To that end, with each generation we get more and more resources, and that makes production faster; you gain more in the long run.

So yes, I am a bit biased when it comes to film and games, because ultimately games are trying to reach film-quality levels and I know what it'll take to get there. Unfortunately, we aren't there yet, and that's OK.
 

VFXVeteran

Banned
I feel like "not specifically about" and "nothing to do with", have different meanings, and it is pretty hard to argue that the SSD played no role in the demo when Tim Sweeney specifically highlighted it as being necessary to achieve the level of fidelity shown off. I am not one of the PS5 "special sauce" people, and it is perfectly possible that a laptop NVMe drive will be adequate to run the demo, but that doesn't mean that the same applies to a mechanical hard drive, given some reasonable amount of installed RAM.

Understood.

But you can't dismiss what I propose as invalid when it's a perfectly realistic idea.

If I were visiting Epic and they showed me this demo, I would ask what would happen if you had 64G of RAM and were able to stream from RAM instead of the SSD. It's a valid question that may well be answered in the near future. If the algorithm can work on mobile applications, it may very well take advantage of RAM in ways we aren't used to. To get trolled because of my stance on that is a bit unfair and shows that the Sony zealots are blatantly biased against anything where their platform may not be required to get the best level of graphics. The fact that I try to rationalize that in a meaningful way and get trolled for it is very telling. So I'm a troll but the Sony guys aren't, because I'm arguing with PC hardware and that's unfair? That's very hypocritical, especially when they take every chance they can to attack the Xbox boys for not having a faster SSD.
 

yurinka

Member
Yea, you'd be wrong. I've been working in realtime graphics for 3 years now... before that, 20 years in film making CG features, so I know the general techniques and have implemented them. As I mentioned prior, the speed of I/O wasn't the end-all-be-all in that demo. And RAM would definitely be a better choice than an SSD if a dev had infinite resources.

As I also state with all of my claims: if you have a coding background, have done this before, and disagree, show me where I'm wrong. Until then, I'll keep trying to educate people.
You are wrong; the huge increase in I/O speed is what makes possible the main innovation of the demo.

The demo has two main points: global illumination and the big-ass streaming to draw insanely detailed scenes in real time. In the second point, the I/O speed/bandwidth is key.

The main point of the demo is that UE5 will now directly accept insanely big assets (with CG-film quality, or huge-poly-count ZBrush sculpts with 8K textures) and the engine will adapt them to the capabilities of each machine depending on its CPU/GPU/RAM/loading speed. More or less what we did in the past, but...

What's new in this demo is that, due to the super fast SSDs (and related I/O stuff), with UE5 you can stream a huge amount of triangles as you walk, to the point that there are more triangles than pixels (over 2x) at the rendered native resolution (which mostly means that even if the engine reduced the polycount, you basically won't notice the difference). So this means you no longer need things like LODs, polycount/texture/memory budgets and optimizations, normal mapping, etc. that you needed with the old system to maximize the amount of stuff you can put in every streamed portion, and to consider in your level design.

Without that I/O speed you aren't able to stream this insanely detailed scene, because being so fast means you can keep in memory mostly just what the camera sees. If you had, let's say, double the memory but an HDD instead of an SSD, you wouldn't be able to have the scene that detailed, because you would also need to keep way more stuff in memory, since the streaming speed would be way slower. Let's say that with this you have one very detailed portion of a room, while without it you'd have maybe 4 or 5 far less detailed rooms (with only part of one of them visible) in memory.

Thanks to the insane I/O speed you can show way more detailed stuff, and on top of that you save a ton of work on many tricks you had to do in the past (and if you also have this global illumination, you save even more work on things like lightmaps and so on).

Obviously GPU, CPU, memory, and I/O are all important, and throwing more of each at the problem would help. But the new paradigm for gamedev that this UE5 demo sets is thanks to the insane streaming from fast SSDs (plus global illumination, which is thanks to the GPU & CPU).

If instead of switching from HDD to SSD they had put in, let's say, 16x or 32x the RAM, we'd be able to render similarly detailed scenes and would also save all that work in the areas I mentioned, but we'd have even longer loading screens than last gen (versus virtually no loading screens in the current PS5+UE5 implementation).

The thing is that the amount of data loaded to RAM with the UE5 method in, let's say, 5 minutes of gameplay is way bigger in the current PS5+UE5 case than in the 16x/32x RAM+HDD case. In the PS5, compared to the previous gen, they doubled the amount of RAM but load it from the SSD almost 100x faster.

P.S.: Tim Sweeney said UE5 can scale down to mobile or current-gen hardware, but disabling Nanite (this insane streaming feature) and Lumen (global illumination), because these are next-gen-exclusive features; on old devices they'd be replaced by traditional techniques.
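To put rough numbers on that 5-minute point, here's a minimal sketch (the ~8GB/s PS5 rate is the decompressed figure discussed in this thread; the HDD speed is my own ballpark):

// Back-of-the-envelope: how much unique data can be streamed during
// 5 minutes of gameplay. Drive speeds are rough assumptions, not
// confirmed figures from Epic or Sony.
#include <cstdio>

int main() {
    const double seconds = 5.0 * 60.0;  // 5 minutes of gameplay
    const double hddGBps = 0.1;         // ~100 MB/s mechanical HDD (assumed)
    const double ssdGBps = 8.0;         // ~8 GB/s PS5 rate w/ decompression (thread's figure)

    std::printf("HDD: %.0f GB streamed\n", hddGBps * seconds);  // ~30 GB
    std::printf("PS5: %.0f GB streamed\n", ssdGBps * seconds);  // ~2400 GB
    // To match the SSD's working set with an HDD you'd have to keep
    // nearly everything resident in RAM up front - hence the
    // long-loading-screen trade-off described above.
    return 0;
}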
 
Last edited:

VFXVeteran

Banned
You are wrong; the huge increase in I/O speed is what makes possible the main innovation of the demo.

The demo has two main points: global illumination and the big-ass streaming to draw insanely detailed scenes in real time. In the second point, the I/O speed/bandwidth is key.

The main point of the demo is that UE5 will now directly accept insanely big assets (with CG-film quality, or huge-poly-count ZBrush sculpts with 8K textures) and the engine will adapt them to the capabilities of each machine depending on its CPU/GPU/RAM/loading speed. More or less what we did in the past, but...

What's new in this demo is that, due to the super fast SSDs (and related I/O stuff), with UE5 you can stream a huge amount of triangles as you walk, to the point that there are more triangles than pixels (over 2x) at the rendered native resolution (which mostly means that even if the engine reduced the polycount, you basically won't notice the difference). So this means you no longer need things like LODs, polycount/texture/memory budgets and optimizations, normal mapping, etc. that you needed with the old system to maximize the amount of stuff you can put in every streamed portion, and to consider in your level design.

Without that I/O speed you aren't able to stream this insanely detailed scene, because being so fast means you can keep in memory mostly just what the camera sees. If you had, let's say, double the memory but an HDD instead of an SSD, you wouldn't be able to have the scene that detailed, because you would also need to keep way more stuff in memory, since the streaming speed would be way slower. Let's say that with this you have one very detailed portion of a room, while without it you'd have maybe 4 or 5 far less detailed rooms in memory.

Thanks to the insane I/O speed you can show way more detailed stuff, and on top of that you save a ton of work on many tricks you had to do in the past (and if you also have this global illumination, you save even more work on things like lightmaps and so on).

Obviously GPU, CPU, memory, and I/O are all important. With, let's say, 16x or 32x the RAM instead of the SSD, we'd be able to render similarly detailed scenes and save all that work in the areas I mentioned, but with long loading-screen times (versus virtually no loading screens here).

The thing is that the amount of data loaded to RAM with the UE5 method in, let's say, 5 minutes of gameplay should be way bigger than with 16x/32x the RAM. In the PS5, compared to the previous gen, they doubled the amount of RAM but load it from the SSD almost 100x faster.

OK, so this is exactly what I mean about people talking up technology when they really don't understand what's going on under the hood. The game developers on these boards should be correcting these kinds of gung-ho Sony gamers so that they can understand how things really work.

We can start with this: only the geometry that's visible in the camera's frustum needs to be resident in RAM - not the entire scene. Secondly, the assets from ZBrush are actually scaled down. The hardware does not stream 1:1 triangles from ZBrush into UE. Another fact that people tend to ignore.
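For anyone who hasn't written this stuff, a minimal sketch of the frustum-residency test I'm describing (types and names are illustrative, not UE API):

// Keep an asset's geometry resident only while its bounding sphere
// intersects the view frustum (six inward-facing planes).
#include <array>

struct Vec3  { float x, y, z; };
struct Plane { Vec3 n; float d; };  // Dot(n, p) + d = 0, with n pointing inward

float Dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

using Frustum = std::array<Plane, 6>;

// True if the bounding sphere touches the frustum at all; anything whose
// sphere sits fully behind any one plane can be evicted or kept at low detail.
bool SphereVisible(const Frustum& f, const Vec3& center, float radius) {
    for (const Plane& p : f) {
        // Signed distance from sphere center to this plane; fully behind
        // one plane means the whole sphere is outside the frustum.
        if (Dot(p.n, center) + p.d < -radius)
            return false;
    }
    return true;
}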
 

yurinka

Member
OK, so this is exactly what I mean about people talking up technology when they really don't understand what's going on under the hood. The game developers on these boards should be correcting these kinds of gung-ho Sony gamers so that they can understand how things really work.

We can start with this: only the geometry that's visible in the camera's frustum needs to be resident in RAM - not the entire scene. Secondly, the assets from ZBrush are actually scaled down. The hardware does not stream 1:1 triangles from ZBrush into UE. Another fact that people tend to ignore.
I've been a game developer (programmer, designer, and other roles) for 15 years, in places like Ubisoft. I'm not a native English speaker, but I suggest you read my post twice and rewatch the UE5 demo video, the related video of Geoff interviewing them, and Cerny's talk.

When I said "and the engine will adapt them to the capabilities of each machine depending on its CPU/GPU/RAM/loading speed", I meant that UE5 will scale down the poly count and texture size to what the hardware can handle. And then I added "to the point that there are more triangles than pixels (over 2x) at the rendered native resolution (which mostly means that even if the engine reduced the polycount, you basically won't notice the difference)". In that demo Epic mentioned that they make a 'lossless' reduction from up to 1B triangles in a frame -in the original assets, reduced obviously before exporting to the console- to ~20M triangles -what is actually put on/seen in the console-.

I think they mentioned 'lossless' because a native 4K frame has around 8.3M pixels and the demo was rendered at 1440p, around 3.7M pixels. So adding more than 20M triangles (I bet those are already way more than enough) wouldn't make sense, because the extra quality wouldn't be appreciated.

In the previous gen/using HDDs, you had to load 'the whole scene' (let's say a portion of a level big enough for around 30-40 seconds of gameplay, including the stuff outside the frustum) into RAM. Then you work in VRAM on the area the camera is seeing, apply culling yourself, shading, etc.

As Cerny said in his talks, and as this demo proved, thanks to the PS5 I/O system they can stream mostly just the portion of the scene that is in the frustum (it really is a bit more, but way smaller than the one streamed using HDDs), and then the console uses its Geometry Engine to apply things like culling and other operations to the geometry in hardware, even before that stuff lands in RAM and is drawn. Which makes room in RAM to put more stuff there.

The whole idea is to focus the console's CPU/GPU/RAM resources mostly on what you're seeing, to apply extra detail there, instead of on stuff in the scene/portion of the level that you may or may not see and that is relatively far away/maybe behind a door, which would be wasting resources. The examples Cerny mentioned were that now you can stream from the SSD what's behind a door during the 0.5/1-second animation of opening the door, or what's behind you during that 0.5s animation of turning the camera around.
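For reference, the arithmetic behind the 'lossless' point (resolutions and the ~20M triangle figure are from this thread; the math is mine):

// Triangles per pixel at the resolutions being discussed.
#include <cstdio>

int main() {
    const double pix1440p = 2560.0 * 1440.0;  // ~3.7M pixels
    const double pix4k    = 3840.0 * 2160.0;  // ~8.3M pixels
    const double tris     = 20e6;             // ~20M rendered triangles (Epic's figure)

    std::printf("1440p: %.1fM px, %.1f tris/px\n", pix1440p / 1e6, tris / pix1440p);  // ~5.4
    std::printf("4K:    %.1fM px, %.1f tris/px\n", pix4k / 1e6, tris / pix4k);        // ~2.4
    // Past ~1 triangle per pixel most triangles are subpixel, so extra
    // geometry stops being visible - hence "lossless".
    return 0;
}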
 
Last edited:

FireFly

Member
Understood.

But you can't dismiss what I propose as invalid when it's a perfectly realistic idea.

If I were visiting Epic and they showed me this demo, I would ask what would happen if you had 64G of RAM and were able to stream from RAM instead of the SSD. It's a valid question that may well be answered in the near future. If the algorithm can work on mobile applications, it may very well take advantage of RAM in ways we aren't used to. To get trolled because of my stance on that is a bit unfair and shows that the Sony zealots are blatantly biased against anything where their platform may not be required to get the best level of graphics. The fact that I try to rationalize that in a meaningful way and get trolled for it is very telling. So I'm a troll but the Sony guys aren't, because I'm arguing with PC hardware and that's unfair? That's very hypocritical, especially when they take every chance they can to attack the Xbox boys for not having a faster SSD.
Right, but phrasing it as a question is already starting from the position that you don't know the answer, because it depends on a further set of complex and unanswered questions. That's different from simply asserting that the PC can do X, or the consoles can do Y. Take a look at the Beyond3D forums, which are mostly frequented by developers. There is no consensus there as to exactly what the I/O requirements of Nanite are, or how demanding released games will be. I think the Reddit post below represents some quite interesting speculation, since it meshes together the idea that this particular demo may be pushing the I/O capabilities of the PS5 hard with the idea that shipping games will be less geometrically complex anyway. That would reconcile Sweeney's comments that NVMe drives can deliver awesome performance with his claims about why the PS5 was chosen for this demo.

"What's up with the SSD? Is the PS5 special?

There are two questions here: does the Nanite technology require PS5-tier SSD speeds, and does this demo require PS5-tier SSD speeds?

The first question is simple: no. As shown above, Nanite is first and foremost a way of storing and rendering geometry. Despite first appearances, this is an especially data-efficient way of storing geometric detail, and should always look more detailed than a normal-map approach at similar file sizes.

For the second question, you should understand how much data this demo had. The demo had "hundreds of billions" of 'virtual triangles', so even if after compression they are spending only a couple of bytes per triangle, and some geometry was reused, that's still way in excess of a hundred gigabytes of data being scanned over in under ten minutes of gameplay. This is so much data that despite the PS5's insane SSD speed, I actually think there was at least one disguised loading screen, most visibly at 7:20-7:40, where the outside is whited out while, I assume, that portion of the level was loaded in. An Xbox Series X's SSD would have taken twice as long to load this, and would likely otherwise have been slightly less timely in loading geometric detail.

However, you should remember that this is a pathological case. Unless we're about to get shipped terabyte file sizes, shipped games will not be quite this geometrically dense. This is still not to say they won't look nearly this good; 8k textures are excessive, and a mix of 2-4k textures is still incredibly detailed, given that each pixel represents geometry."
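For what it's worth, the post's numbers roughly check out; here's a quick sketch (every input below is the poster's guess, not a confirmed figure):

// Sanity check: "hundreds of billions" of triangles at a couple of
// bytes each, scanned over ~10 minutes of gameplay.
#include <cstdio>

int main() {
    const double triangles     = 200e9;  // "hundreds of billions" - pick 200B
    const double bytesPerTri   = 2.0;    // "a couple of bytes per triangle"
    const double reuseFraction = 0.5;    // assume half the geometry is reused/instanced
    const double minutes       = 10.0;

    const double uniqueGB = triangles * bytesPerTri * reuseFraction / 1e9;
    std::printf("~%.0f GB in %.0f min => ~%.2f GB/s average\n",
                uniqueGB, minutes, uniqueGB / (minutes * 60.0));  // ~200 GB, ~0.33 GB/s
    // The average is modest; it's the bursts (like the flying section)
    // that stress the I/O stack.
    return 0;
}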

 

VFXVeteran

Banned
I've been a game developer (programmer, designer, and other roles) for 15 years, in places like Ubisoft.

Good. I'm glad you elaborated on what you're saying, so now we don't have to assume you're a Sony fan with no experience at all.

The whole idea is to focus the console's CPU/GPU/RAM resources mostly on what you're seeing, to apply extra detail there, instead of on stuff in the scene/portion of the level that you may or may not see and that is relatively far away/maybe behind a door, which would be wasting resources. The examples Cerny mentioned were that now you can stream from the SSD what's behind a door during the 0.5/1-second animation of opening the door, or what's behind you during that 0.5s animation of turning the camera around.

OK. But you still don't know how much was streamed and how much wasn't. We have no way of knowing that from the demo. As we've seen from some comments, Epic China mentioned that the scene isn't that starved for SSD bandwidth. I would go as far as to say it's more starved for GPU bandwidth.

My claim of running this on the PC with an HDD and more CPU RAM is a valid one. I just don't know how much RAM would be needed for a given set of conditions with a targeted FPS and resolution.
 

Nikana

Go Go Neo Rangers!
Good. I'm glad you elaborated on what you're saying, so now we don't have to assume you're a Sony fan with no experience at all.



OK. But you still don't know how much was streamed and how much wasn't. We have no way of knowing that from the demo. As we've seen from some comments, Epic China mentioned that the scene isn't that starved for SSD bandwidth. I would go as far as to say it's more starved for GPU bandwidth.

My claim of running this on the PC with an HDD and more CPU RAM is a valid one. I just don't know how much RAM would be needed for a given set of conditions with a targeted FPS and resolution.
So wait, you're telling me... the SSD isn't the GPU?

 

ethomaz

Banned
Good. I'm glad you elaborated on what you're saying, so now we don't have to assume you're a Sony fan with no experience at all.



OK. But you still don't know how much was streamed and how much wasn't. We have no way of knowing that from the demo. As we've seen from some comments, Epic China mentioned that the scene isn't that starved for SSD bandwidth. I would go as far as to say it's more starved for GPU bandwidth.

My claim of running this on the PC with an HDD and more CPU RAM is a valid one. I just don't know how much RAM would be needed for a given set of conditions with a targeted FPS and resolution.
Just to be accurate about Epic China, this is what he said:

2:06:30 - If it's a 1080p screen, 2 triangles per pixel, with some compression on vertices, then you can still run this demo; there's no need for very high bandwidth and I/O like the PS5's.


2:08:00 - SSD bandwidth (for the flying part) isn't as high as people said; you don't need a strictly specced SSD (a decent SSD is OK).
This is very important for understanding the role the SSD plays in the flying scene.



At 1080p with lower quality, you can run it without needing the PS5's SSD level.
The flying part is not SSD-bandwidth intensive... not the whole demo.
He said he used a mobile RTX 2080 (RTX 2070 level) in a notebook, without specifying the SSD... he ran only the opening part, at 40fps (no resolution specified).
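To put the scaling in per-frame geometry terms, a quick sketch (the two configurations are from the talk as relayed above; the arithmetic is mine):

// Per-frame triangle budgets of the two configurations being compared.
#include <cstdio>

int main() {
    const double ps5Tris    = 2560.0 * 1440.0 * 4.0;  // 1440p at 4 triangles per pixel
    const double laptopTris = 1920.0 * 1080.0 * 2.0;  // 1080p at 2 triangles per pixel

    std::printf("PS5: %.1fM tris/frame, laptop: %.1fM tris/frame, ratio %.1fx\n",
                ps5Tris / 1e6, laptopTris / 1e6, ps5Tris / laptopTris);  // ~14.7M vs ~4.1M, ~3.6x
    // The scaled-down config pushes ~3.6x less geometry per frame, which
    // is why a "decent SSD" is enough for it.
    return 0;
}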
 
Last edited:

Rolla

Banned
You don’t need to system war this hard. The plastic box isn’t going to fuck you.

The problem here isn't which piece of plastic I, personally, think is doing better in the lead-up to the next-gen launch. I'm talking about the much broader picture in terms of building momentum for the next-gen launch. If you think the wording is system-war-ish, then you're missing the larger point. This tweet, and the gratification/justification some are getting from it, is so far beside the point that it's in another realm entirely compared to what Sony wanted to achieve.
 
Last edited:

yurinka

Member
OK. But you still don't know how much was streamed and how much wasn't. We have no way of knowing that from the demo. As we've seen from some comments, Epic China mentioned that the scene isn't that starved for SSD bandwidth. I would go as far as to say it's more starved for GPU bandwidth.
All this stuff is a paradigm shift, and I haven't worked with PS5 or UE5 yet, but I understand Cerny won't lie in a GDC talk while describing the potential of his architecture design, and Sweeney won't lie when describing the main feature of the next numbered UE.

Sweeney said that they had ~20M triangles for a 1440p frame that has ~3.6M pixels (btw, this demo runs at 30fps on a PS5 devkit). I assume many of these triangles are bigger than a pixel, so these 20M probably include a bigger portion of the level than the one seen in the frustum (even before culling). I assume these 20M cover around 1 second of gameplay; or probably they were targeting a higher native resolution or framerate but don't have the engine optimized enough yet to achieve that target.

The PS5 SSD runs (counting decompression) at ~8-9GB/s, which means that in half a second it can read ~4.5GB into RAM. I'd say that's more than enough. In fact, I assume this demo will run on Series X and next-gen PCs -with a good SSD- too, with a pretty similar level of quality, so it must be doable with around half of this I/O speed -SSD+decompression+other stuff- which is the Series X's I/O speed. Who knows, maybe it's made to run on slower PC SSDs.

Being a multiplatform engine, it must consider the lowest common denominator in all cases: here, a next-gen GPU/CPU/SSD solution and a decent amount of RAM. So in one device the limit may be the GPU frequency, in another the teraflops, in another the amount of RAM or the I/O speed, etc.
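As a rough sketch of the door/camera-turn masking idea from Cerny's talk (the PS5 rate is this thread's figure; the other drive speeds are my own assumptions):

// Data that fits through the I/O pipe during a ~0.5s masking animation
// (opening a door, turning the camera).
#include <cstdio>

int main() {
    const double window = 0.5;  // seconds hidden by the animation

    struct Drive { const char* name; double gbps; };
    const Drive drives[] = {
        {"PS5 (w/ decompression)", 9.0},  // ~8-9 GB/s per this thread
        {"Series X (compressed)",  4.8},  // roughly half, as estimated above
        {"SATA SSD",               0.5},  // assumed
        {"Mechanical HDD",         0.1},  // assumed
    };
    for (const Drive& d : drives)
        std::printf("%-24s %.2f GB in %.1fs\n", d.name, d.gbps * window, window);
    return 0;
}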
 
Last edited:

psorcerer

Banned
Last edited:

IntentionalPun

Ask me about my wife's perfect butthole
But it's still Fortnite, with low-poly models that are meant to run even on a microwave. It will get migrated to UE5 just like Dota 2 to Source 2, with basically no difference to the end users.
Their entire engine is designed to be massively scalable with Nanite. A game designed for high-end consoles can work on a phone, according to Epic.

I seriously doubt it will be basically no difference. They are going to use Fortnite to show off the engine.
 
Last edited:
Why is there even speculation of 1080p? A gaming laptop paired with an RTX 2080, a Samsung 970, and a 1080p screen, of all screen resolutions in 2019/2020? Which seems like the oddball to you guys?

the laptop wouldn't have a 1080p screen, because the SSD could create holographic visuals in 4D@40+fps, in the Unreal Engine demo
 

oldergamer

Member
I'm certain that the PS5 Unreal demo wasn't coming anywhere close to the peak I/O of the PlayStation, and you would get similar performance from slow NVMe or SSD drives.
 

FireFly

Member
Why is there even speculation of 1080p? A gaming laptop paired with an RTX 2080, a Samsung 970, and a 1080p screen, of all screen resolutions in 2019/2020? Which seems like the oddball to you guys?

the laptop wouldn't have a 1080p screen, because the SSD could create holographic visuals in 4D@40+fps, in the Unreal Engine demo
A quick look on Google and 1080p seems like a pretty common resolution for gaming laptops with 2080s.

 

ethomaz

Banned
Why is there even speculation of 1080p? A gaming laptop paired with an RTX 2080, a Samsung 970, and a 1080p screen, of all screen resolutions in 2019/2020? Which seems like the oddball to you guys?

the laptop wouldn't have a 1080p screen, because the SSD could create holographic visuals in 4D@40+fps, in the Unreal Engine demo
It's an RTX 2080 Mobile... it performs below an RTX 2070.
Nobody knows the SSD.
The resolution is not confirmed, but like he said, if you use 1080p and 2 triangles per pixel (vs. 4 per pixel in the PS5 demo), you don't need the high bandwidth the PS5 allows... so it is probably running at 1080p, even more so with the mobile GPU and 40fps in the opening part.

Plus, most notebooks with an RTX 2080 Mobile have native 1080p screens.
That doesn't tell us anything, because you can run the demo at whatever resolution you choose, and that doesn't mean it's native.

I'm certain that the PS5 Unreal demo wasn't coming anywhere close to the peak I/O of the PlayStation, and you would get similar performance from slow NVMe or SSD drives.
What the Epic China guy said:

"2:06:30 - If it's a 1080p screen, 2 triangles per pixel, with some compression on vertices, then you can still run this demo; there's no need for very high bandwidth and I/O like the PS5's."

The PS5 demo was at 1440p, 4 triangles per pixel, without that vertex compression.
This new video source actually gives us more insight into how the demo ran and what it requires to run at the PS5's level.
 
Last edited:

skneogaf

Member
Unreal Engine games always have texture-streaming issues compared to other engines. I bet Epic is very grateful consoles now have SSDs built in as standard.

The only way it's possible to get Batman: Arkham Knight to run at a locked 60fps at 4K is using an NVMe SSD, and that was due to streaming issues.
 
Unreal Engine games always have texture-streaming issues compared to other engines. I bet Epic is very grateful consoles now have SSDs built in as standard.

The only way it's possible to get Batman: Arkham Knight to run at a locked 60fps at 4K is using an NVMe SSD, and that was due to streaming issues.

True. Texture-streaming issues did seem to pop up pretty often in various UE games.
 
Last edited:
A quick look on Google and 1080p seems like a pretty common resolution for gaming laptops with 2080s.


Maybe I should have been more clear. Why would a DEV have a 1080p screen? And let's say that he did (if you can speculate, so can I), who's to say this wasn't connected to an external display? Just about every dev uses external screens.
It's an RTX 2080 Mobile... it performs below an RTX 2070.
Nobody knows the SSD.
The resolution is not confirmed, but like he said, if you use 1080p and 2 triangles per pixel (vs. 4 per pixel in the PS5 demo), you don't need the high bandwidth the PS5 allows... so it is probably running at 1080p, even more so with the mobile GPU.

Plus, most notebooks with an RTX 2080 Mobile have native 1080p screens.
That doesn't tell us anything, because you can run the demo at whatever resolution you choose, and that doesn't mean it's native.
You said it yourself: the resolution is not confirmed. So why pass this off as fact? You also have no clue what data rate was used in the PS5 demo. Especially as it runs at a higher framerate on the same hardware that Jensen said would be more capable than next-gen hardware. I would have much more faith in the leather-jacket man from Nvidia than in someone who passes off speculation as fact. No offense, but Jensen would be in no position to openly lie to the public, and he would have more knowledge about hardware and even this engine than you or I.
 

ethomaz

Banned
Maybe I should have been more clear. Why would a DEV have a 1080p screen? And let's say that he did (if you can speculate, so can I), who's to say this wasn't connected to an external display? Just about every dev uses external screens.

You said it yourself: the resolution is not confirmed. So why pass this off as fact? You also have no clue what data rate was used in the PS5 demo. Especially as it runs at a higher framerate on the same hardware that Jensen said would be more capable than next-gen hardware. I would have much more faith in the leather-jacket man from Nvidia than in someone who passes off speculation as fact. No offense, but Jensen would be in no position to openly lie to the public, and he would have more knowledge about hardware and even this engine than you or I.
Yeah, it makes total sense to run at the same resolution as the PS5 on a way weaker GPU and slower SSD.
And 1440p fits this comment too :messenger_tears_of_joy: .

2:06:30 - If it's a 1080p screen, 2 triangles per pixel, with some compression on vertices, then you can still run this demo; there's no need for very high bandwidth and I/O like the PS5's.

Jensen is the Epic guy in the video? If so, then he is saying what I'm posting lol

It doesn't matter; the Chinese "tape" mentions 2x less per-pixel detail, 1.78x less resolution, and less vertex detail (unspecified).
I'm just curious... I believe Epic said 4 triangles per pixel in the PS5 demo.
From what the Epic China guy said, it makes really no difference.
 
Last edited: