
Epic sheds light on the data streaming requirements of the Unreal Engine 5 demo

It's going to be a shocker for some, and for others like me not so much, because I've been saying this for months now. I hate to break it to some of you, but that demo's data streaming could be handled by a five-year-old SATA SSD.

[Image: 8wl1rua.png]


768 MB is the in-view streaming requirement on the hardware to handle that demo. 768 MEGABYTES... COMPRESSED. And what was the cost of this on the rendering end?

Well, this is the result...

[Image: dQOnqne.png]


This confirms everything I've said. Not that these SSDs are useless, because they're 100% not; that data streaming would be impossible with mechanical drives. However, and this is a big however: that amount of visual data and asset streaming is already bottlenecking the renderer, it's bringing that GPU to its knees. There's very little cost to the CPU, as you'll see below, but as noted about 100 different times on this website and scoffed at constantly by detractors: the GPU will always be the limiting factor.

[Image: lNv2lKl.png]


I've maintained this since square one: Microsoft and Sony both went overkill on their SSDs. That amount of I/O increase can't be matched by the rendering pipeline; the on-demand volume of data streaming these SSDs allow is more than the renderer can absorb.

So what's the point here? You've got two systems with SSDs far more capable than their usefulness, but one came at a particularly high cost everywhere else in the system. I'll let you figure out which one that is and where.
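To put rough numbers on that headroom claim: a minimal sketch using only figures quoted in this thread (the 768 MB/s in-view streaming demand and the consoles' publicly stated drive speeds; the "compressed, typical" rates are the vendors' ballpark figures, and everything here is an assumption from the discussion, not a measurement of mine):

```python
# Rough headroom math: the demo's quoted in-view streaming demand
# (768 MB/s, compressed) against the consoles' stated drive throughput.
# All figures are taken from the thread / public spec sheets.

DEMO_DEMAND_MBPS = 768  # compressed, in-view streaming demand

DRIVES_MBPS = {
    "PS5 (raw)": 5500,
    "PS5 (compressed, typical)": 8500,     # Sony's 8-9 GB/s typical claim
    "Series X (raw)": 2400,
    "Series X (compressed, typical)": 4800,
}

def headroom(drive_mbps, demand_mbps=DEMO_DEMAND_MBPS):
    """How many times over the drive covers the demo's streaming demand."""
    return drive_mbps / demand_mbps

for name, mbps in DRIVES_MBPS.items():
    print(f"{name}: {mbps} MB/s, {headroom(mbps):.1f}x the demo's demand")
```

Even the slower raw Series X figure comes out at roughly 3x the quoted demand, which is the whole point being argued above.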

[Image: deadest.png]
 

StreetsofBeige

Gold Member
If SeX's SSD is overkill, PS5's SSD (as Ned Flanders would put it) is overkill +1. Sony should have scaled back the SSD and put that dough into other specs like MS.

It never sounded right from the beginning for the PS5 to be a good-spec system (for a console, since uber PC rigs always beat consoles) paired up with an SSD better than any PC SSD out there (until recently, I think, when someone said there are some PC SSDs just as good or better coming out).
 
And this is also what I've said, they diverted costs away from the rest of the system in terms of compute and pushed it into the SSD. This was a mistake.

There are already 8-terabyte, 15 GB/s SSDs on the market for PC right now, like today.
 

StreetsofBeige

Gold Member
I'm not a tech whiz, but I don't get why Sony would do this.

Spending money to jam in an SSD which is better than 99.9% of people's PC SSD but pair it with specs that look like can't even take advantage of it leads to some curious decision making and perhaps some secret shit Sony has up its sleeve to use it. Maybe 3 years from now after every dev gets accustomed to console SSDs they are expecting some large jump in development quality and they are hedging their bets that a 5.5 gb/s SSD is worth the investment over a crappier MS 2.4 gb/s SSD.

Who knows.

Even if PS5 core specs were locked and loaded and the SSD part could be a last-minute swap-in, why even bother unless putting in such a better SSD than MS's costs hardly anything? So maybe just bump it up for the sake of it. But if the cost of 5.5 vs 2.4 is a decent amount and game quality might not take advantage, you'd think any corporation would just go with the spec that works fine and bank the cost savings.

At some point we'll find out (might take years) why Sony has gone with an SSD that is such a big jump (even vs. PC) when the rest of the specs can't compare against a high-end rig.
 

Neofire

Member
Hmmm, I have to bookmark this thread. All that R&D at Sony, and it seems they were too incompetent to design a console, and the money put into developing the SSD is for naught...

🤷🏿‍♂️ We shall see
 
Am I crazy, or does that sound like great news? If that's what it looks like to stream 768MB of compressed data... imagine what 2/4/6GB of data would look like.

I'm not gonna lie and pretend like I'm a tech wizard. Not gonna pretend like I even know what I'm talking about, like some in here who talk like they've got a PhD in computer engineering. But I still believe that Xbox/PS chose the right direction by innovating everything around the CPU/GPU. CPU/GPU tech advances every couple of years; that's nothing new. In a year the Xbox Series X/PS5 CPUs/GPUs will be deemed "dated" while, by all indications from the techies talking about it, their architectures won't be, right?

Edit: Like Neofire said, are those engineers at MS and Sony that STUPID? Like, were they really? Or were they on to something? Who knows. I think we'll only know in a year or two, maybe 3. But to call it now, like we know, that's a bit premature I think.
 
Well, I also questioned why for the PlayStation 4 Pro they put in a 128% more powerful GPU and a 31% more powerful CPU, but only bothered to increase the memory bandwidth by a mere 23%, which resulted in their GPU being bottleneck-prone.

After the original PS4 their hardware decisions are a bit questionable.
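For what it's worth, those uplift percentages roughly check out against the commonly cited public specs. A quick sanity check, assuming 1.84 vs 4.2 TFLOPS, 1.6 vs 2.1 GHz, and 176 vs 217.6 GB/s (all assumed from the usual spec sheets, not verified here):

```python
# Sanity-checking the PS4 Pro uplift percentages quoted above, using
# commonly cited public specs (assumptions, not verified measurements).

ps4 = {"gpu_tflops": 1.84, "cpu_ghz": 1.6, "bw_gbps": 176.0}
pro = {"gpu_tflops": 4.20, "cpu_ghz": 2.1, "bw_gbps": 217.6}

def uplift(old, new):
    """Percentage increase from old to new."""
    return (new - old) / old * 100

print(f"GPU: +{uplift(ps4['gpu_tflops'], pro['gpu_tflops']):.0f}%")  # ~+128%
print(f"CPU: +{uplift(ps4['cpu_ghz'], pro['cpu_ghz']):.0f}%")        # ~+31%
print(f"Bandwidth: +{uplift(ps4['bw_gbps'], pro['bw_gbps']):.0f}%")  # ~+24%
```

The bandwidth uplift works out to about 24%, close to the 23% quoted, and in any case far below the GPU uplift, which is the imbalance the post is pointing at.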

You're kind of missing the point: that mere 768MB is bottlenecking their GPU already... It's never been about how much you can stream; it's how much the renderer can take before it collapses. That's proving to be a considerably lower figure than what the drives themselves are spec'd for.
 
Yeah, this matches up with what some other people were speculating earlier on. 700-900 MB/s... maybe 1 GB/s pushing it. I saw a few folks on B3D giving these figures and they weren't far off.

Don't know if this is a compressed or uncompressed figure; I'd assume, since they didn't specify, that it refers to uncompressed. So with that figure alone we know it's nothing that couldn't be handled by the Series systems, or even PCs with lower-end SSDs. And that bodes extremely well for devs across the board going into next-gen. However, I'm still curious about the frame-to-frame streaming amount.
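Whichever it turns out to be, the compressed-vs-uncompressed distinction changes the math quite a bit. A quick sketch with illustrative compression ratios (the ratios are my assumption, not Epic's figures):

```python
# If the quoted 768 MB/s is compressed data coming off the drive, the
# in-memory (decompressed) equivalent scales with the codec's ratio.
# The ratios below are illustrative assumptions, not Epic's numbers.

COMPRESSED_MBPS = 768

def uncompressed_rate(compressed_mbps, ratio):
    """Effective data rate after decompression, for a ratio:1 codec."""
    return compressed_mbps * ratio

for ratio in (1.5, 2.0):
    rate = uncompressed_rate(COMPRESSED_MBPS, ratio)
    print(f"{ratio}:1 compression -> ~{rate:.0f} MB/s uncompressed")
```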
 

Clintizzle

Lord of Edge.
Awesome to see that it's not very resource-heavy.

It's all up to the devs now. The hardware on offer is pretty great, but it won't be realized if the devs don't take advantage of it.

In my opinion, that is what is going to be the differentiator this gen, it will be about how far the 1st party studios are willing to go to create amazing gaming experiences for us.

HYPE
 

Mister Wolf

Member

Excellent post.
 
Regarding whether that 768MB figure is compressed or uncompressed: it's compressed.
 

StreetsofBeige

Gold Member
Would a suitable analogy be something like:

- Someone can read pages in a book at lightning speed, but is gimped by how fast he can turn the pages?
 
That's the thing: why did they both go with such over-spec'd SSDs? Were they really just that dumb? Did they not test how much data could be streamed and how much the renderer could handle? I have a hard time believing they didn't test all this stuff out. The engineering teams must have been pretty big, so this would be a major oversight. So there has to be a reason why. As to what, I don't know. I guess we'll see in a year or two... or sooner, maybe?

Edit: I'm just not ready to condemn either console maker's decision. I mean, they really haven't let us down thus far. We have games that still look damn good today on 7-year-old hardware.
 
Sorry OP, where's the link to the source for this data?
Ask and you shall receive.



The page-turning analogy seems fair.
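The page-turning analogy is the classic pipeline bottleneck: overall throughput is capped by the slowest stage, no matter how fast the others are. A toy sketch with illustrative rates (neither number here is a measurement):

```python
# A pipeline runs no faster than its slowest stage: a reader who scans
# pages instantly still finishes at page-turning speed. The rates below
# are illustrative only.

def effective_throughput(*stage_rates_mbps):
    """Overall rate of a pipeline is the minimum of its stage rates."""
    return min(stage_rates_mbps)

ssd_supply = 5500      # MB/s the drive can deliver
renderer_absorb = 768  # MB/s the renderer can actually consume

print(effective_throughput(ssd_supply, renderer_absorb))  # -> 768
```

Raising the fastest stage (the SSD) doesn't move the result; only raising the slowest stage (the renderer) does, which is the argument being made in this thread.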



Microsoft seems to have gone with a drive that's safe but still excessive, so the storage never becomes the bottleneck. Sony, on the other hand, just went a bit nuts with theirs, and with some real context it's shocking how excessive it actually is.
 
In all fairness, if 768MB is the max right now... they both went a little nuts. They're sporting 4-5 and 8-9 GB/s compressed. It's gotta be for something. To quote a great character: "It can't be for nothing" - Ellie, lol
 
Where is this info coming from?
From Epic themselves, today.

Maybe just for the loading times and having snappier systems, that's a different discussion though.
 
I always doubted the people who said the renderer (i.e., the CPU/GPU) would be fast enough to put everything on screen that the SSD can throw at it. It's obvious that the GPU will still be the bottleneck for most graphics-intense games.

xbox wins again :messenger_sunglasses:
still buying the ps5 though.
I mean, it's common sense. What did people think was going to happen when you're suddenly able to increase the volume, density, and quality of everything around you, spanning god knows how far out into the distance?

It all has to be rendered; there's not much detective work involved. You know?
 
Hold up. I just watched that part of the video, where you clipped the screenshot from. I only heard that the in-view amount, what's visible, from Nanite was 768MB. I didn't hear that it was the max the GPU could process. He said that was the amount in that view and that it could be optimized further. Did I miss something? I'm not sure if he used wording that I just don't understand.
 

GenericUser

Member
Some people just seem to believe every bit of BS that corporations throw at them. The 5.5 GB/s throughput of the PS5 SSD is still neat though, and I think we will definitely see games that use it in clever ways. It's just that it won't make a super huge difference for the average Call of Duty/FIFA player.
 
Basically, the throughput requirement of what you're actually seeing and what's being rendered is 768MB/s, far beyond the data usage of anything we have now, but a far cry from the capability of these drives.

The bad part about it though is what's happening to the GPU, how much it's being hammered with just that relatively small amount of unique visual data being pushed at it all at once.
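To put that 768 MB/s in per-frame terms (my arithmetic, using the thread's figure and the two common frame-rate targets):

```python
# Converting the quoted 768 MB/s stream into a per-frame data budget.
# The rate comes from the thread; the frame rates are just the usual
# 30/60 fps targets.

RATE_MBPS = 768

def per_frame_mb(rate_mbps, fps):
    """MB of newly streamed data available each frame at a given fps."""
    return rate_mbps / fps

for fps in (30, 60):
    print(f"{fps} fps: {per_frame_mb(RATE_MBPS, fps):.1f} MB per frame")
```

So at 30 fps the renderer is ingesting roughly 25 MB of fresh data every frame on top of its normal work, which is one way to read the "hammered GPU" point above.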
 

Mister Wolf

Member

So basically they have a GPU inside that can't even handle 1/5 of what the storage can stream.
 

StreetsofBeige

Gold Member
I'm no techie myself, so without people in forums going over how different specs work together, the average person would read the marketing PR and assume that a super fast SSD is all you need. And that's what Sony has been plugging since Cerny. The 3D audio part is different, as people are smart enough to separate audio from visual, but when Sony promotes the SSD like that's all that matters, many people believe it.

Only a small number of people can figure out how things like SSD I/O speed, RAM bandwidth, rendering, etc. relate to each other... I don't know myself. I just rely on what people say and trust what I hear.
 

Dr Bass

Member
You’re conflating the amount of data transferred with the speed at which that data is transferred. It wouldn’t matter if the amount was 50 megabytes. It’s about latency and throughput.

Maybe you don’t remember, but the N64 had a memory subsystem that could transfer 500 MB/second. What was the biggest cartridge ever released on that system? 64MB? It was about speed and latency, not the “amount” of data. This slide doesn’t change that at all.

You do not know more about the UE5 demo than Epic.

You do not know more about computer and system engineering than the PlayStation team.

There is a serious need to try and disprove Epic's plainly stated facts about UE5 and PlayStation 5 for some reason. Do you really think they made the statements they did just a few weeks ago knowing, if they were lies, that it would be discovered when they talked about it a little more? These people are not idiots. But they would have to be, to be doing what you, the OP, are ascribing to them.

So yeah. Pretty sure everything is in the same place as before this presentation.
 
The first in-denial post; welcome to the thread.

This is the current view streaming demand, and it's 768MB of compressed data. There are no two ways around this: the data demands are far less than many people let on. I can't even remember how many posts there were of people trying to say this wouldn't be possible on laptops, the Series X, other SSDs, and so on and so forth, but the reality is that demand could be met by an SSD from half a decade ago.
 

Dr Bass

Member

You completely didn't understand what I wrote. Care to respond to my original post? It's not denial; it's an affirmation of what Epic has been saying all along. Can I ask what you do for a living? Are you a software or hardware engineer? I am. :messenger_beaming:

Also, that article where they state it takes about as much GPU power to render as Fortnite: that makes complete sense given what they wrote about it and how it's essentially rendering an image that gets compressed down to one triangle per pixel. The whole rendering pipeline sounds incredible, and quite revolutionary.

They expressly mentioned having to load data per frame for it to work. If all they had to do was load 768 MB into RAM, then sure, that's entirely different. You're looking at a raw number ("768 MB!") and drawing a whole bunch of erroneous conclusions from it. It's a lot more in-depth and complicated than that. Again, going back to the initial discussion, which people seem to forget over and over... and over.


It's not just "fast loading!" :rolleyes:

I don't give a $^#$ about PlayStation vs Xbox. I am getting them both. I do care about engineering a heck of a lot more, though, especially when it comes to armchair critics and experts trying to sound smarter than the people who actually engineer the products. It's completely absurd.
 
I understand what you wrote, you just don't like the answer, and that's not my problem.

Oh, and here come the genetic fallacies.

:messenger_tears_of_joy:
 

TheAssist

Member
Disclaimer: I have no idea about tech and just want to add another point of view.

Apart from what Dr. Bass has said about latency, if I remember correctly they stated in an interview that they only used a handful of different textures to create this demo (apart from the statues, which seem to be unique). The rest was done by manipulating those textures to create the illusion that it's all different stuff.
So from that standpoint, this is a very simple scene: just a few rock textures, one character, no NPCs, hardly any audio sources, no AI, and except for the flying part at the end (which I still find impressive no matter what), it's rather slow-paced, so nothing needs to come in quickly.

So how would that number grow in, say... a city like New York? With loads of different and unique textures, massive amounts of NPCs with AI, hundreds of sound sources, and some guy going really fast on top of it all?

Genuine question. Would it go up several times, or is it just a minor increase, since none of these things take that much more data? Or how does it work?
 