
Epic sheds light on the data streaming requirements of the Unreal Engine 5 demo

lyan

Member
Because that figure is the peak limitation of what is seen at any given point in time. You're not swapping out that entire pool or even remotely close to that, it defeats the purpose. What were you guys expecting? This is a multiplatform engine, it's configured to encompass everything.
If that is the case, then they don't even need any streaming; just do it like the old days and load everything upfront.
 
If SeX's SSD is overkill, PS5's SSD (as Ned Flanders would put it) is overkill +1. Sony should have scaled back the SSD and put that dough into other specs like MS.

It didn't sound right from the beginning for the PS5 to be a good-spec system (for a console, since uber PC rigs always beat consoles) paired with an SSD better than any PC SSD out there (until recently, I think, when someone said there are PC SSDs just as good or better coming out).

I would not claim to be smarter than MS/Sony engineers just because this demo used a certain amount of data. That's an illogical and somewhat biased conclusion from many, like certain fanboys who just want to say "lol SSD is worthless!"

Those speeds affect many things, and so does I/O. Maybe a slow one would work too, but a super fast one probably allows some new tech in the future, something a slow one can't do.

(I'm not 100% familiar with these terms about the streaming pool, but to me it doesn't sound like the game data on disc is ONLY 768MB total.)


Also, even if it is "only" 768MB: if the disc can move 768 MB/s but the engine needs to load 768 MB in 0.2s, it doesn't work. Or if the engine needs a smaller chunk, say 100MB in 0.01s, it doesn't work on that disc either.
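To put rough numbers on that (just a sketch in Python; the 0.2s deadline and the 100MB chunk are the hypothetical examples above, not anything Epic has stated):

```python
# Back-of-the-envelope: bandwidth needed to move a chunk of data within a deadline.
# Chunk sizes and deadlines are the hypothetical figures from the paragraph above.

def required_bandwidth_mb_s(chunk_mb: float, deadline_s: float) -> float:
    """Minimum sustained throughput (MB/s) to move chunk_mb within deadline_s."""
    return chunk_mb / deadline_s

print(required_bandwidth_mb_s(768, 0.2))   # 3840.0 MB/s -> a 768 MB/s drive can't do it
print(required_bandwidth_mb_s(100, 0.01))  # 10000.0 MB/s -> nor can it here
```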

If someone seriously thinks that MS/Sony spent so much money engineering SSDs that DO ABSOLUTELY NOTHING MORE than a 6-year-old SATA disk, then they are fucking imbeciles.

Barely enough vs. lots of headroom for special situations. One can't handle them, one can. And I bet those engineers have thought them through and made their specs accordingly.

Even this quick resume between games would not work with much slower disks. Even now the XSX switches games, IMO, too slowly (more than 5 seconds) for it to feel special or almost instant; if it took 10-20s with a slower disk, nobody would care.
 
Last edited:

pawel86ck

Banned
It was obvious the streaming memory pool can't be as high as 4.8-9 GB on the limited 13.5 GB of RAM available to developers, so what Epic is saying shouldn't surprise people.

But a 768MB memory pool for streaming doesn't tell the whole story if they are transferring data on a frame-to-frame basis, because if you transfer 768MB of data a couple of times during one second, you will still need an SSD much faster than 768MB/s.

I will wait with my conclusions until more info is available (when the UE5 tech demo launches on PC, I'm sure people will test how fast an SSD is needed), because I'm not fighting for any particular brand and I'm only interested in facts.
 
Last edited:

Gavin Stevens

Formerly 'o'dium'
Oh, look...
k7RRFMi.jpg
 

jigglet

Banned
On the one hand, it's fucking sad people are celebrating the fact that Sony's SSD probably won't be utilised outside of a few first party efforts. This seems like cool tech.

On the other hand, if it's truly overkill then so be it.
 

Aceofspades

Banned
Oh, so just because ONE SPECIFIC USE CASE for the SSD doesn't fully utilize it, it's overkill?

Good BS, OP.

True, the OP reminded me of that famous quote attributed to Bill Gates, that nobody ever needs more than 640K of RAM (I think he denied it later).

But my point is, in technology, never say never to having bigger numbers. It might be overkill today, but it sure will be outclassed tomorrow.
 

geordiemp

Member
It was obvious the streaming memory pool can't be as high as 4.8-9 GB on the limited 13.5 GB of RAM available to developers, so what Epic is saying shouldn't surprise people.

But a 768MB memory pool for streaming doesn't tell the whole story if they are transferring data on a frame-to-frame basis, because if you transfer 768MB of data a couple of times during one second, you will still need an SSD much faster than 768MB/s.

I will wait with my conclusions until more info is available (when the UE5 tech demo launches on PC, I'm sure people will test how fast an SSD is needed), because I'm not fighting for any particular brand and I'm only interested in facts.

Exactly. If you needed to stream up to 768MB every frame, then at 60 FPS that's way too fast, well over 20 GB/s.

I doubt it's 768 MB per frame; it will be a lot less, and Epic said it renders the G-buffer in 4.25 ms.

We need to know how much is prefetched and how it works.
 
Last edited:

Altares13th

Member
I suspect they used the maximum throughput of the PS5's I/O block, which is 22GB/s in the best case with maximum compression (cf. Cerny's presentation). That's why they insist they are still optimising and can do with much lower data sizes. It's basically a POC.

22GB/s is about 733MB per frame at 30fps.
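As a rough sanity check, here is that per-frame arithmetic spelled out (a sketch only; 5.5 GB/s raw and 22 GB/s best-case compressed are the PS5 figures from Cerny's presentation cited above):

```python
# Per-frame streaming budget = sustained bandwidth / frame rate.
# 5.5 GB/s raw and 22 GB/s best-case compressed are the quoted PS5 figures.

def mb_per_frame(bandwidth_gb_s: float, fps: int) -> float:
    return bandwidth_gb_s * 1000 / fps  # GB/s -> MB available per frame

for gb_s in (5.5, 22.0):
    for fps in (30, 60):
        print(f"{gb_s} GB/s at {fps} fps -> {mb_per_frame(gb_s, fps):.0f} MB per frame")
# 22 GB/s at 30 fps -> ~733 MB per frame, which lines up with the 768MB pool figure
```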

We need more info to be sure.
 
Last edited:

Blond

Banned
I'm very interested to see how the hardcore PlayStation-only fanatics get out of this one!

They won't admit it. I'm going to get a PS5 eventually, but anyone who thought an SSD was going to suddenly make a console "punch above its weight" is delusional. It'll provide some great gameplay design opportunities, but if you think a console much more powerful in every other department is suddenly going to be outmatched, you're delusional.

That UE developer from China was telling the truth.

The pushback I got in that thread was nuts. Why did Epic IMMEDIATELY shut down all talk of the host PC and claim it was a video, when it was clear the details he posted couldn't come from a video file? Because they had already made a stupid claim that they had to modify the engine to "keep up with the PS5 SSD". Guess all that engineering work was for naught, or it never happened.
 

oldergamer

Member
Ha, I knew it. I called this out early on: we didn't know whether the PS5 was hitting its max SSD performance in the demo or not. I suspected not, and that the demo would be doable on both SSDs.
 

TriSuit666

Banned
The Demo runs sub 1440p at 30fps. If they could get more out of that GPU then why didn't they?

This is because Nanite+Lumen is currently locked to 30fps, while they 'currently investigate 60fps'.

I think they also confirmed the Hellblade 2 demo was done in UE4 (from one of the slides, they cite Senua and the SeX demo).

Lots of good info in that video, thanks OP.
 
Last edited:

Dontero

Banned
If SeX's SSD is overkill, PS5's SSD (as Ned Flanders would put it) is overkill +1. Sony should have scaled back the SSD and put that dough into other specs like MS.

Those changes don't really cost anything when you manufacture them. Expanding chip size by adding more CUs does.

This is because Nanite+Lumen is currently locked to 30fps, while they 'currently investigate 60fps'.

I don't see a reason why it would be locked to 30fps other than a lack of power in the system.
 
Last edited:

TriSuit666

Banned
I don't see a reason why it would be locked to 30fps other than a lack of power in the system.

Watch the video. It's nothing to do with system power; the engine just isn't optimised yet. It's not launching until Fall 2021, so 'next-gen' features are being backported to UE4 as of 4.25 (for instance, Niagara was used to model the water tech in F2).

The video also confirmed Fortnite will launch on PS5 and XSX 'as soon as the platform launches', that a lot of proposed UE5 tech is already in Fortnite Chapter 2, and that Fortnite will upgrade to UE5 well before anyone else.

Seriously people, it's a really informative video, please watch it.
 
Last edited:

Herr Edgy

Member
5.5GB/s is a speed given in seconds, not a per-frame max capacity.
Going by that raw number, it can load the 768 MB of data from the drive in about 768/5500 seconds, so roughly 140 ms.
At 60 FPS, which is about 16.7 ms per frame, it would take around 8-9 frames to load in those 768 MB. And remember this is theoretical, with several showstoppers along the way.

If you distribute those 768 MB over those 8-9 frames you get roughly 90 MB per frame.

Half the speed of the SSD would mean half the amount it can move in a second, and half per frame.
The OP looks at the data thinking "if 768 MB is the max, anything above 768 MB/s means it's bottlenecked", which isn't true, obviously. Seconds are not frames.

(I got the 768 figure from memory; it might only be close.)
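A quick sketch of that arithmetic (assuming the 768 MB pool and the quoted 5.5 GB/s raw figure; this ignores compression and how much of the pool actually changes per frame):

```python
# How long does it take to fill a 768 MB pool at a given raw bandwidth,
# and how many 60 fps frames is that? (Ignores compression and overheads.)

POOL_MB = 768
FRAME_MS = 1000 / 60  # ~16.7 ms per frame at 60 fps

for bandwidth_mb_s in (5500, 2750):  # full and half speed, as in the post above
    fill_ms = POOL_MB / bandwidth_mb_s * 1000
    frames = fill_ms / FRAME_MS
    per_frame = POOL_MB / frames
    print(f"{bandwidth_mb_s} MB/s: {fill_ms:.0f} ms, {frames:.1f} frames, ~{per_frame:.0f} MB/frame")
# 5500 MB/s: ~140 ms, ~8.4 frames, ~92 MB/frame
# 2750 MB/s: ~279 ms, ~16.8 frames, ~46 MB/frame
```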
 

the_master

Member
It's going to be a shocker for some, and for others like me not so much, because I've been saying this for months now. I hate to break it to some of you, but that demo's data streaming could be handled by a 5-year-old SATA SSD.

8wl1rua.png


768MB is the in-view streaming requirement on the hardware to handle that demo, 768 MEGABYTES... COMPRESSED. And what was the cost of this on the rendering end?

Well, this is the result...

dQOnqne.png


This confirms everything I've said. Not that these SSDs are useless, because they're 100% not; that data streaming would be impossible with mechanical drives. However, and this is a big however: that amount of visual data and asset streaming is already bottlenecking the renderer, it's bringing that GPU to its knees. There's very little cost to the CPU, as you will see below, but as noted about 100 different times on this website and scoffed at constantly by detractors, the GPU will always be the limiting factor.

lNv2lKl.png


I've maintained this since square one: Microsoft and Sony both went overkill on their SSDs. The rendering pipeline simply cannot keep up with the on-demand volume of data streaming these SSDs allow.

So what's the point here? You've got two systems with SSDs far more capable than they need to be, but one came at a particularly high cost everywhere else in the system. I'll let you figure out which one that is and where.

deadest.png
That's it, cancel the new consoles, they have too much SSD speed; a guy on the internet figured it out based on some loose data from one tech demo.
 

Redlight

Member
People are way overthinking things.

1. Epic could have chosen any system to demo Unreal 5, they chose PS5.

2. Epic demoed using the strengths of the PS5 that are unique to PS5.

It’s not long now til we see what XSX is capable of, we should just wait.
This is precisely the point though...

1. Epic chose the PS5 to debut the engine, at least partly, due to the relationship they have with Sony and very tasty $250,000,000 (far from debunked btw).

2) It seems likely that the demo of the "strengths of the PS5 that are unique to the PS5" produced something that can also be done on the Series X.
 

bitbydeath

Member
This is precisely the point though...

1. Epic chose the PS5 to debut the engine, at least partly, due to the relationship they have with Sony and very tasty $250,000,000 (far from debunked btw).

2) It seems likely that the demo of the "strengths of the PS5 that are unique to the PS5" produced something that can also be done on the Series X.

1. If you can’t trust the word of someone actually in the know then clearly you’re already well past the tin foil hat stage.

2. Nobody said it couldn’t scale down to Series X. Epic admitted that themselves.
 

Three

Member
The proof is the images, the proof is in what they said, the proof is in the GPU bottlenecking. You're in denial and lashing out.

You said you were leaving the thread, and yet you remain. There's nothing to argue here so a gif mocking you does suffice.
Can you explain, in as technical terms as you are capable of, why the GPU will bottleneck the streaming with a 4.5ms G-buffer time and a 768MB streaming pool? I'll wait.
 
Last edited:

hyperbertha

Member
Sorry friend, I'm not picking up what you're putting down. The gentleman doesn't break down what the 700+MB of compressed Nanite data is, only that it's Nanite's scene data, that it's a much larger data footprint than they're used to working with, and he assured the viewer that Epic will be working to make it smaller as soon as possible.

You're throwing the 700MB figure against the PS5's 5.5GB/s SSD and equating the two values, saying it only needs 700MB per second. The gentleman in the video didn't make that comparison, so I'm not going to either.
He only said this represented what the Nanite scene needed based on the current view. So, with that in mind, your interpretation doesn't make sense. For example, given that the view can change every frame, at 30FPS the worst-case scenario would be the full ~768MB loaded every 33.33ms, which scales up to something like 23GB of data streamed per second. We don't know how long it takes for the initial scene to load; they may be front-loading the initial data, meaning they only need to stream in a small amount of changed data per frame. 160MB of altered data per frame, for example, would already tax the PS5's raw SSD capability.
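A sketch of those two scenarios (the 160 MB/frame figure is the hypothetical from the paragraph above, not a measured value):

```python
# Worst case (full pool refreshed every frame) vs. a modest per-frame delta, both at 30 fps.
# The 160 MB/frame figure is the hypothetical from the post above, not a measurement.

FPS = 30

def gb_per_second(mb_per_frame: float, fps: int = FPS) -> float:
    return mb_per_frame * fps / 1000

print(gb_per_second(768))  # full pool every frame: ~23 GB/s, only plausible with heavy compression
print(gb_per_second(160))  # 160 MB of changed data per frame: ~4.8 GB/s, close to PS5's 5.5 GB/s raw
```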

As you can see, this can be interpreted in several different ways. Frankly, no one here is qualified or experienced enough to explain it properly.

So, no - Sony have not made the error of building a 5.5GB/s SSD when they only needed a 1GB/s SSD. Microsoft didn't build a 2.4GB/s SSD when they only needed a 1GB/s one. Neither company has wasted hundreds of millions in R&D because they screwed up primary-school math.
psorcerer might be able to shed some light on the matter if he has time.
 

TriSuit666

Banned
Sorry friend, I'm not picking up what you're putting down. The gentleman doesn't break down what the 700+MB of compressed Nanite data is, only that it's Nanite's scene data, that it's a much larger data footprint than they're used to working with, and he assured the viewer that Epic will be working to make it smaller as soon as possible.

They absolutely do... if you watch the whole video.

Later on, they deconstruct the whole UE5 demo in real time on a standard PC.
 

Redlight

Member
1. If you can’t trust the word of someone actually in the know then clearly you’re already well past the tin foil hat stage.

2. Nobody said it couldn’t scale down to Series X. Epic admitted that themselves.

1) It takes a special kind of naive to think that Sweeney wasn't marketing on Sony's behalf, especially considering...

2) ...that it seems that the Series X is equally capable of running that demo without any 'scale down' required.
 

oldergamer

Member
My thought on this is: how much texture memory is being used that is not visible? Could it be 2.2 GB, with only a third of the loaded textures actually being used?

...And what is the performance hit for culling what isn't visible?

IMO this plays right into what MS have said about where they made optimizations. They made smart choices, and it's going to pay off with higher performance when used correctly. They are going to get more bang for the buck if the features are used correctly.

Sony has a more brute-force approach that may hit bottlenecks in the system that we don't know about. Or it's just a matter of giving peak vs. actual numbers. It's pretty obvious that with the state of the demo code, they could not achieve 60fps on PS5 and used dynamic res to maintain 30fps. The kicker is they said frame rendering time was similar to Fortnite at 60fps.

I think the assumption that the lighting was the limiting factor (being GPU-heavy) makes more sense than the SSD being bottlenecked.
 

Kadve

Member
I've been saying for years that unless the game is overloading your RAM or using your HDD as a cache (i.e. MMOs), SSDs won't actually give you that big of a performance boost. More RAM (32GB or so) would have been preferable to a "super" SSD.
 
Last edited:

ZehDon

Gold Member
They absolutely do... if you watch the whole video.

Later on, they deconstruct the whole UE5 demo in real time on a standard PC.
Not really - at least not in the video the OP linked me. Time stamp for the data-stream breakdown and I/O requirements for the demo?

You may have confused a breakdown of the engine's asset pipeline, and the third speaker explaining the methodology that Nanite uses, for a breakdown of the streaming pool figure of 700-ish MB. That info only explains that a streaming pool is used - it doesn't describe the fundamentals of how it is used in relation to the specific demo shown on the PS5. They didn't break down what, when, and where streaming occurred in the demo, or the raw data I/O requirements necessary to use Nanite in the demo. The requirements for this feature in the engine aren't "x GB/s" but asset-dependent, so it follows they wouldn't break down a tech demo for their third-party engine in the specific way we'd need to better understand Sony's bespoke I/O hardware.

Happy for someone with more knowledge to fill in the gaps for me.
 
Last edited:

Rikkori

Member
I think going overkill on the SSD makes sense in the long run, and also if they plan to refresh the console, same as going overkill on the CPU. While the GPU is the most important part and will always be the bottleneck, it's also the easiest to upgrade in a new console without affecting the base spec. Plus, sometimes there are manufacturing limitations to consider, e.g. why they chose to go for 8c/16t Zen 2 instead of 4c/8t: the chiplet nature of Ryzen would have made no economic sense at 4c/8t and likely would not have saved them much money, while also severely restricting their capability. It's somewhat similar with the SSD, its controller and the flash memory. That's why, as great as it is in terms of speed/latency, you'll notice it's limited in capacity, and adding more will be significantly expensive.

Another point I don't see brought up (probably because all the fanboys arguing didn't even bother watching the talk in the first place) is how much you'll be limited by capacity, and indeed it's something we've been saying since the beginning. Epic says they used cinema assets for the demo, but that you wouldn't use these for an actual game, and that they used an unworkable amount of data. What that means is we'll see even smaller assets used for games, which in turn means you won't need as much memory or as fast/low-latency storage for a final game, because otherwise you'd be looking at 300 GB+ game sizes.

And remember, Nanite is LIMITED in what it works with! That's why the GPU bottleneck is still so important: while they resolve geometry with it, there are still a lot of things it can't do, FOLIAGE being a particularly big one.

In the end, this is great news for PC. We have so much excess VRAM & RAM that we have nothing to worry about, and even a PCIe 4.0 SSD isn't required; we can just truck on with a 3.0 x4 one.
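For reference, a rough comparison of that 3.0 x4 ceiling against the consoles' quoted raw figures (a sketch; the PCIe number is a theoretical link rate, real drives sustain less):

```python
# Rough ceiling of a PCIe 3.0 x4 link vs. the consoles' quoted raw SSD figures.
# PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding.

lanes = 4
per_lane_gb_s = 8 * (128 / 130) / 8  # GT/s * encoding efficiency / 8 bits -> ~0.985 GB/s
pcie3_x4 = lanes * per_lane_gb_s

print(f"PCIe 3.0 x4 ceiling: ~{pcie3_x4:.1f} GB/s")   # ~3.9 GB/s theoretical
print("XSX raw: 2.4 GB/s, PS5 raw: 5.5 GB/s")         # quoted platform figures
```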

Have any of you guys downloaded the 4K video of the demo and watched it carefully/critically? I'm a little concerned by the number of prolonged scenes where the Nanite system seems to fail at its cinema-asset-to-pixel interpretation, resulting in some rather nasty-looking IQ. These engine artifacts are readily apparent in real time, in motion, and don't require still frames or slow-mo. Many instances wind up looking worse than traditional current-gen renderers like ND's or Decima.
Yes, I saw that the moment I saw the demo. UE's TAA is probably the most horrendous-looking in the business, sadly. But also, the assets don't hold up as well as all the "8K" / "cinema" bragging suggests. Granted, it's a net positive for performance, but in terms of IQ it's clearly not up to par with using them without Nanite. If you just drop those same assets into UE4 you can see a MUCH sharper and cleaner environment (but of course unworkable in a game like that due to performance). See e.g.:
 

oldergamer

Member
Not to this proportion, but you know that. The extreme point of view you have taken is still nice, as we will see it squarely dedicated to only one console, although by your analysis both have wasted tons of money on something useless... or you are just ignoring latency, the difference in the amount of data to move, and the speed it must be swapped in and out at.

Still, it will be entertaining to see you wage wars against XVA and all the hype behind it ;).
Hmm, I don't recall you stating that anyone should hold off because we had no idea how much SSD bandwidth was being used in the demo.

Speaking of latency, are you assuming the latency on texture streaming is higher or lower on PS5 compared to XSX? If you are, I'm curious what it's based on.
 

Psykodad

Banned
Facts redeeming logic over marketing, no surprise.
The I/O bottleneck has always been mixed/random read capability/IOPS (not the raw sequential numbers) when streaming data, a number a lot lower than the raw speed on most SSDs.
768MB/s is already a huge number in this case, and with optimisation and different scenery, higher can probably be hit.
We really need to know this number, or the IOPS of both, in this case. It's the important number to know here.
Like another poster said, aren't people forgetting the added data for audio, animations and whatnot?
With the move to efficiency by all devs, as well as Sony, MS, AMD and Nvidia, in that flying scene your GPU would not be asked to render the full detail of the buildings as you fly past them, and you won't even notice the detail. It's the exact scenario where things such as VRS and lower-quality textures and assets would be used.
Fair enough, that's what UE5 will be designed to do anyway.

But we don't even know to what extent details will be rendered.
In any case, unless I'm misinterpreting some things, it seems that in the video Epic is pretty clear that there is enough room to do much more than what the demo showed.
 
...and if the OP is correct, then the Series X would indeed be capable of the streaming speeds required by the UE5 demo. It certainly hasn't been denied by any of the major players.

Look... next gen is going to depend on what devs decide to do...

PS5 absolutely CAN stream data faster than Series X; if devs take advantage of it, there will be a difference.
On the flip side, Series X absolutely CAN push visual effects beyond PS5.

Denying the importance of double the throughput is no different to denying the importance of 2 extra TFLOPS.
 

cormack12

Gold Member
It was obvious the streaming memory pool can't be as high as 4.8-9 GB on the limited 13.5 GB of RAM available to developers, so what Epic is saying shouldn't surprise people.

But a 768MB memory pool for streaming doesn't tell the whole story if they are transferring data on a frame-to-frame basis, because if you transfer 768MB of data a couple of times during one second, you will still need an SSD much faster than 768MB/s.

I will wait with my conclusions until more info is available (when the UE5 tech demo launches on PC, I'm sure people will test how fast an SSD is needed), because I'm not fighting for any particular brand and I'm only interested in facts.

Yeah, I agree. The streaming pool for Killzone Shadow Fall was 600MB. I think this is something different from what the OP thinks it is. However, I have no conclusive proof, so we will have to wait for more tech demo specs to be released. I think the demo was about moving those non-streaming textures into memory quicker, rather than having to wait until the mechanical disk had loaded them into memory.

KUtFJoh.png
 