
Epic sheds light on the data streaming requirements of the Unreal Engine 5 demo

Basically, the throughput requirement for what you're actually seeing rendered is 768MB/s: far beyond the data usage of anything we have now, but a far cry from the capability of these drives.
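As a quick sanity check, here's a back-of-the-envelope sketch; the 768MB/s demand is the figure from the video, while the raw drive speeds are the publicly quoted console specs, so treat those as my assumptions:

Code:
# Back-of-the-envelope: how much of each console's raw read speed
# does the demo's quoted 768MB/s streaming demand actually use?
# Drive figures are the publicly quoted raw sequential specs
# (assumptions on my part, not numbers from the video).
demand_mb_s = 768

drives = {
    "PS5 (raw)": 5500,       # ~5.5GB/s raw
    "Series X (raw)": 2400,  # ~2.4GB/s raw
}

for name, cap_mb_s in drives.items():
    print(f"{name}: {demand_mb_s / cap_mb_s:.0%} of raw capability")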

The bad part, though, is what's happening to the GPU: how much it's being hammered by just that relatively small amount of unique visual data being pushed at it all at once.
Well, we don't know, and you don't know. This thread is already being held up as proof that the SSDs in these machines are stupid and overengineered, and that the max their GPUs can render is 768MB/s. Already this thread is filling with fanboys coming in with "I told you so" and "I knew it". Damn man, is this what this forum has become?
 
Well, we don't know, and you don't know. This thread is already being held up as proof that the SSDs in these machines are stupid and overengineered, and that the max their GPUs can render is 768MB/s. Already this thread is filling with fanboys coming in with "I told you so" and "I knew it". Damn man, is this what this forum has become?
These threads are fine when it's positive confirmation, but suddenly they're not when it's negative confirmation, and ironically from the horse's mouth and the EXACT SAME SOURCE.

You can't have it both ways. It's not turning into a fanboy anything, it's crushing reality bringing people back in line which is exactly why this thread is so detractor free, there's nothing to argue.
 
Well, we don't know, and you don't know. This thread is already being held up as proof that the SSDs in these machines are stupid and overengineered, and that the max their GPUs can render is 768MB/s. Already this thread is filling with fanboys coming in with "I told you so" and "I knew it". Damn man, is this what this forum has become?
Well, what did you expect after months of "WOAH PS5 SSD SUPERIORITY" that always sounded fishy to anyone with even a tiny bit of knowledge about hardware architecture and render pipelines?
 
These threads are fine when it's positive confirmation but suddenly they're not when it's negative confirmation.
You can't have it both ways.

"You're kind of missing the point, that mere 768MB's is bottlenecking their GPU already..."
Nah man, sorry, but you missed the point. You already claimed that the GPUs are bottlenecking at 768MB/s; I quoted you right there. You're implying that the 768MB/s figure they showed was the max their GPUs could render, thus making their SSDs and I/O completely useless.

If your point was to say that this demo could run on Series X and PC, then that would have been very valid. But your reply to me was not about that; you made a claim that the video never made.
 

Mister Wolf

Gold Member
Well, we don't know, and you don't know. This thread is already being held up as proof that the SSDs in these machines are stupid and overengineered, and that the max their GPUs can render is 768MB/s. Already this thread is filling with fanboys coming in with "I told you so" and "I knew it". Damn man, is this what this forum has become?

The demo runs sub-1440p at 30fps. If they could get more out of that GPU, then why didn't they?
 

Dr Bass

Member
I understand what you wrote, you just don't like the answer, and that's not my problem.

Oh, and here come the genetic fallacies.

:messenger_tears_of_joy:

Man, the thing is, you really, really don't. :messenger_pensive: But no skin off my teeth in the end. Just frustrating to read sometimes.

The demo runs sub-1440p at 30fps. If they could get more out of that GPU, then why didn't they?

Epic claimed they did it really fast. That's how a lot of tech demos work when you finally start showing things with your proof of concept: once you get to the point of it working, you put together a "quick and dirty" version of what you want to show. I'm sure the whole thing is nowhere near production-ready and has quite a bit of work left to do. I'm sure we will see higher-res games running on PS5, Xbox, PC, and whatever else down the road.

There are a number of reasons they might have picked that resolution/framerate, btw. They compared it to Fortnite; maybe they decided to see what they could accomplish using the same GPU requirements, so they only allowed a certain amount of GPU power to be used. Nanite might still need a lot of optimization. Or they just felt like it. Any number of internal reasons that none of us are privy to, and not even worth guessing at, because they already stated how much power it needed in terms of GPU compute.
 
Did you watch the video? Or did you just read the OP? The guy in the video said "there is enough overhead to run this demo at 60fps". Please watch the video.
He said GPU time is well within budget for a 60Hz game, and he goes on to speak of the CPU because the budget is only 4.5ms. That doesn't relate to the overall rasterization cost of this particular demo, and that is immense.
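For anyone following along, the frame-time arithmetic being argued over is simple. A minimal sketch (the 4.5ms is the figure quoted above; which subsystem it measures is exactly what's disputed here):

Code:
# A 30fps frame has ~33.3ms of budget; a 60fps frame has ~16.7ms.
for fps in (30, 60):
    print(f"{fps}fps -> {1000 / fps:.1f}ms per frame")

quoted_ms = 4.5  # the budget figure mentioned in the video
print(f"{quoted_ms}ms is {quoted_ms / (1000 / 60):.0%} of a 60fps frame")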
 

Psykodad

Banned
Did you watch the video? Or did you just read the OP? The guy in the video said "there is enough overhead to run this demo at 60fps". Please watch the video.
Not only that, they also say that during the scene where she flies through the city, they were streaming in 500K objects, and they can easily stream 1 million.

I don't know too much about tech, but I'm pretty sure, going by this, that they weren't maxing out PS5 at all.
Or that the GPU was a bottleneck, if that's a better way of saying it.
 

martino

Member
Facts redeeming logic over marketing, with no surprise.
The I/O bottleneck for streaming data was always mixed/random read capability (IOPS), not sequential reads: a number a lot lower than the raw speed on most SSDs.
768MB/s is already a huge number in that light, and with optimisation and different scenery, higher can probably be hit.
We really need to know this number, or the IOPS of both consoles; it's the important number here.
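To put the IOPS question in numbers: if the 768MB/s demand arrives as scattered reads rather than one long sequential stream, the required IOPS depends entirely on the request size. A small sketch (the block sizes are illustrative assumptions, since we don't know the demo's actual access pattern):

Code:
# IOPS needed to sustain 768MB/s at different request sizes.
# Block sizes are illustrative, not figures from the video.
demand_mb_s = 768
for block_kib in (4, 64, 1024):
    iops = demand_mb_s * 1024 / block_kib
    print(f"{block_kib:>4}KiB requests -> {iops:>9,.0f} IOPS")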
 
So you won't answer. Duly noted.
The claims in this thread are made based on the video in this thread, not the Eurogamer thing. So you're just focusing on the thing that makes your point. So what's the point of me talking to you? You won't watch the video. "But the Eurogamer interview..."

Edit: It's so difficult to try and be impartial on these forums. To try to be as even-handed as possible and not be biased, but damn man, you guys make it so hard. So much petty energy in here.
 
Facts redeeming logic over marketing, with no surprise.
The I/O bottleneck for streaming data was always mixed/random read capability (IOPS), not sequential reads: a number a lot lower than the raw speed on most SSDs.
768MB/s is already a huge number in that light, and with optimisation and different scenery, higher can probably be hit.
We really need to know this number, or the IOPS of both consoles; it's the important number here.
768MB/s is already like 20x the throughput of what we see now in games. People are having a tough time grasping that this is an immense amount of data, and at the same time, given the specifications of these SSDs, they're largely underutilized.
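The "like 20x" figure checks out if you assume an HDD-era streaming budget in the tens of MB/s; that baseline is my assumption, not something from the video:

Code:
# 768MB/s vs a typical HDD-era streaming budget (~40MB/s is an
# assumption for current-gen games on 5400rpm drives).
hdd_era_mb_s = 40
demo_mb_s = 768
ps5_raw_mb_s = 5500  # publicly quoted raw spec

print(f"demo vs HDD era: {demo_mb_s / hdd_era_mb_s:.0f}x")
print(f"PS5 raw vs demo: {ps5_raw_mb_s / demo_mb_s:.1f}x headroom")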
 
He said GPU time is well within budget for a 60Hz game, and he goes on to speak of the CPU because the budget is only 4.5ms. That doesn't relate to the overall rasterization cost of this particular demo, and that is immense.
You're just guessing. You have no idea. You're making claims with no proof. You're twisting numbers to suit a narrative. I'm out. Most of the people in here have already made up their minds, some without even having watched the video. Peace :messenger_peace:
 
You're just guessing. You have no idea. You're making claims with no proof. You're twisting numbers to suit a narrative. I'm out. Most of the people in here have already made up their minds, some without even having watched the video. Peace :messenger_peace:
 
I'm guessing it's still being optimized, since it's not due out until next year.

But let's go ahead and make claims about the consoles based off an engine designed to run across multiple platforms and wide ranges of hardware.
What claims? Who is making claims?

A claim is something which is unsubstantiated, Epic quite literally has substantiated the previous claims we made over the course of the last few months.

We're no longer making claims, we're perpetuating fact, we have been vindicated.
 

Tschumi

Member
Who came up with the narrative that the SSD is forcing Sony to downgrade everything? Watch the deep dive; there is a hell of a lot of extra stuff built into the system to streamline it and eliminate bottlenecks. It's not a few chips strapped to an SSD, it's a whole ecosystem. I mean, can you at least go through Cerny's video line by line, present your criticism, and let us judge what's right?

Sony aren't playing the same game as Xbox. You make it seem like both consoles went for huge power and only one got it right; that's inaccurate and disingenuous to suggest. Everyone with an open mind knows that Sony is trying to move away from the power arms race and do something different. Whether or not they succeed is an entirely different question.

You even use the word "bottleneck", after Cerny spends minutes of that presentation talking about all the bottlenecks they have cleared.

Whatever Unreal Engine has shown you, presenting it as a clear and direct indicator of the precise capabilities of the PS5 is a leap that I, for one, cannot really follow. It's an indicator only of what Unreal have put together, and of how it functions.
 

Silver Wattle

Gold Member
What claims? Who is making claims?

A claim is something which is unsubstantiated, Epic quite literally has substantiated the previous claims we made over the course of the last few months.

We're no longer making claims, we're perpetuating fact, we have been vindicated.
The only thing you can really take from this is that this demo can most likely be done on the XBSX.
Everything else you have said is pulled from your ass.
 

Panajev2001a

GAF's Pleasant Genius
The first in-denial post; welcome to the thread.

This is the current view's streaming demand, and it's 768MB of compressed data. There are no two ways around this: the data demands are far lower than many people let on. I can't even remember how many posts there were of people trying to say this wouldn't be possible on laptops, the Series X, other SSDs, and so on and so forth, but the reality is that this demand could be met by an SSD from half a decade ago.

It's about how quickly you move data to RAM versus how much data, but no, it is better to think that both consoles are several times overspecced. It seems like you went with a fine-tooth comb down confirmation-bias lane to pick the data you wanted, but do not let that stop you.
 
It's about how quickly you move data to RAM versus how much data, but no, it is better to think that both consoles are several times overspecced. It seems like you went with a fine-tooth comb down confirmation-bias lane to pick the data you wanted, but do not let that stop you.
Yes, this is all a setup; I'm cherry-picking everything and trying to frame you guys. You caught me.

Or the reality is that I culled the only pertinent information about the demands on the most questioned piece of hardware (the SSD, and how it bears down on rendering) and let the numbers stand as they are, rather than as fantasized by marketing. You guys took a huge L on this, given the amount of nonsense that was spread based on the original, contextless showing of this demo. Well, the context is here now, and it doesn't frame the excess in these SSDs in a positive light. It frames them as costly, one much more than the other, and in one case as a cost displacement that negatively bears down on the rest of the hardware in the system.

You don't have to like this, and I don't care whether you do one way or another, but it is what it is.
 

Panajev2001a

GAF's Pleasant Genius
What claims? Who is making claims?

A claim is something which is unsubstantiated, Epic quite literally has substantiated the previous claims we made over the course of the last few months.

We're no longer making claims, we're perpetuating fact, we have been vindicated.

You are reading a few bits of data and taking leaps with it to wage war on people you disagree with on a forum: you start from a slide and a few sentences, and the castle you build on top is entirely your claim and your speculation.
We will see, when Epic actually posts a statement of all of that; what they have stated is a minuscule fraction of your “explanation” of it.
 

Mister Wolf

Gold Member
From the rest of the video, the current bottleneck seems to be Lumen (their real-time GI solution) and a lack of optimisation time: they think they can get Lumen working at 60 FPS with more optimisation.
Voxel GI is demanding; that's part of the reason we didn't see it used on consoles this gen. Sadly, I expect most developers to just bake lighting like they've been doing, even when using UE5.
 

-Arcadia-

Banned
I mean, if I'm wrong, I'm wrong, and I might well be.

But I have this crazy feeling that two multibillion-dollar companies, with some of the best engineering teams in the world and advice from every developer in the world, might know a little more than message-board posters.

Consider me skeptical that they're overdesigned.
 
What has the pool size got to do with the bandwidth requirement? In a very stupid implementation one might load and unload the entire pool each frame (23GB/s at 30fps with 768MB of data).
Because that figure is the peak limitation of what is seen at any given point in time. You're not swapping out that entire pool, or even remotely close to it; that would defeat the purpose. What were you guys expecting? This is a multiplatform engine; it's configured to encompass everything.
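To make the arithmetic on both sides of this exchange concrete, a short sketch; the per-frame churn fractions are illustrative assumptions, not figures from the video:

Code:
# Worst case quoted above: reload the entire 768MB pool every frame.
pool_mb, fps = 768, 30
print(f"full pool swap: {pool_mb * fps / 1000:.1f}GB/s")  # ~23GB/s

# More plausible: only a fraction of the pool changes per frame.
# These fractions are illustrative, not from the video.
for frac in (0.01, 0.05, 0.10):
    print(f"{frac:.0%} churn/frame: {pool_mb * frac * fps:,.0f}MB/s")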
 

Panajev2001a

GAF's Pleasant Genius
Yes, this is all a setup; I'm cherry-picking everything and trying to frame you guys. You caught me.

Or the reality is that I culled the only pertinent information about the demands on the most questioned piece of hardware (the SSD, and how it bears down on rendering) and let the numbers stand as they are, rather than as fantasized by marketing. You guys took a huge L on this, given the amount of nonsense that was spread based on the original, contextless showing of this demo. Well, the context is here now, and it doesn't frame the excess in these SSDs in a positive light. It frames them as costly, one much more than the other, and in one case as a cost displacement that negatively bears down on the rest of the hardware in the system.

You don't have to like this, and I don't care whether you do one way or another, but it is what it is.

So both consoles have wasted all this tech to sustain several GB/s of bandwidth (we will need to see how harsh you are as a consumer about getting robbed of an even faster GPU on XSX due to the money invested in XVA), or you are rushing to make this point to wage some console war mostly aimed at one target... mmm 🤔.

In case you missed it, here is your chance to spend a few hours yelling at people that they are being misled by BS overspecced I/O and its fancy name apparently: https://www.neogaf.com/threads/micr...ies-xs-high-speed-secret-sauce.1555518/page-5
 

Dontero

Banned
So what's the point here? You've got two systems with SSDs far more capable than their usefulness, but one came at a particularly high cost everywhere else in the system. I'll let you figure out which one that is, and where.

The problem is that both new and old SSDs have pretty much the same random 4K read performance, which is what games actually use. New SSDs might do 5GB/s or 8GB/s sequential read, but 4K random read has only gone from roughly 100MB/s to 150-200MB/s between the first SSDs and the latest. That random read can't be improved with I/O improvements.

If game reads were sequential, which they are not, then we would be singing a different tune.
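For reference, the 4K random speeds quoted above translate into IOPS like this (a worked example using the post's own numbers):

Code:
# Express the quoted 4K random read speeds as IOPS.
block_kib = 4
for mb_s in (100, 150, 200):
    iops = mb_s * 1024 / block_kib
    print(f"{mb_s}MB/s at 4KiB -> {iops:,.0f} IOPS")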
 
I mean, if I'm wrong, I'm wrong, and I might well be.

But I have this crazy feeling that two multibillion-dollar companies, with some of the best engineering teams in the world and advice from every developer in the world, might know a little more than message-board posters.

Consider me skeptical that they're overdesigned.
Sony created the last console with a big memory-related bottleneck and Microsoft didn't; who was more likely to greatly overengineer?

Microsoft had the 360, which suffered the RROD, and what did they do? They greatly overengineered the Xbox One's form factor to ensure that wouldn't happen again.

Overengineering isn't uncommon, especially on the back of a big misstep.
 

Panajev2001a

GAF's Pleasant Genius
Sony created the last console with a big memory-related bottleneck and Microsoft didn't; who was more likely to greatly overengineer?

Microsoft had the 360, which suffered the RROD, and what did they do? They greatly overengineered the Xbox One's form factor to ensure that wouldn't happen again.

Overengineering isn't uncommon, especially on the back of a big misstep.

Not to this proportion, but you know that. The extreme point of view you have taken is still nice, as we will see it squarely dedicated to only one console, although by your analysis both have wasted tons of money on something useless... or you are just ignoring latency, the difference in how much data must be moved, and the speed at which it must be swapped in and out.

Still, it will be entertaining to see you wage war against XVA and all the hype behind it ;).
 

hemo memo

Gold Member
I mean, if I'm wrong, I'm wrong, and I might well be.

But I have this crazy feeling that two multibillion-dollar companies, with some of the best engineering teams in the world and advice from every developer in the world, might know a little more than message-board posters.

Consider me skeptical that they're overdesigned.

Sometimes they don’t. If they always did, we wouldn’t have Wii U or Vita.
 

dottme

Member
You’re conflating amount of data transferred with speed to transfer that data. It wouldn’t matter if the amount was 50 megabytes. It’s about latency and throughput.

Maybe you don't remember, but the N64 had a memory subsystem that could transfer 500MB/second. What was the biggest cartridge ever released on that system? 64MB? It was about speed and latency, not the "amount" of data. This slide doesn't change that at all.

You do not know more about the UE5 demo than Epic.

You do not know more about computer and system engineering than the PlayStation team.

There is a serious need, for some reason, to try and disprove Epic's plainly stated facts about UE5 and PlayStation 5. Do you really think they made the statements they did just a few weeks ago knowing, if they were lies, that it would be discovered as soon as they talked about it a little more? These people are not idiots. But they would have to be, to be doing what you, the OP, are ascribing to them.

So yeah. Pretty sure everything is in the same place as before this presentation.
So I watched the video. The 768MB is the pool of memory reserved by the Unreal engine, and it is probably going to be updated much more than once per second as they stream in more data, so you clearly need much more bandwidth than 768MB/s.
Also, further into the video, they clearly say that the GPU hit is coming mainly from Lumen and not Nanite.

Maybe I'll also be called a denier, but the information in the OP is wrong and doesn't even match what the video he shared is saying.
 
So I watched the video. The 768MB is the pool of memory reserved by the Unreal engine, and it is probably going to be updated much more than once per second as they stream in more data, so you clearly need much more bandwidth than 768MB/s.
Also, further into the video, they clearly say that the GPU hit is coming mainly from Lumen and not Nanite.

Maybe I'll also be called a denier, but the information in the OP is wrong and doesn't even match what the video he shared is saying.
The entire point of the SSD is scene volume that isn't possible within the limited capacity of dedicated RAM; you're going to swap out data as infrequently as possible. That's the entire point: scene volume, complexity, and consistency.

In terms of Lumen and Nanite, that's a non-starter conversation, because you still need lighting, you still need shadows, you still need AA, AF, possibly RT, and so on and so forth. The combined effect of full-scene rendering is heavy demand on the GPU, and the capability of something like Nanite requires higher accuracy and scaling from a system like Lumen; there's a knock-on effect across the entire rendering stack.

It's all related. I've already got my bases covered; I'm not some shavetail louie.
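To make the "swap as infrequently as possible" point concrete, here is a toy residency-pool sketch; it is purely illustrative (hypothetical tile IDs and sizes) and nothing like Epic's actual implementation:

Code:
# Toy streaming pool: a fixed resident set where only cache misses
# cost disk bandwidth. Illustrative only; not Epic's code.
from collections import OrderedDict

class StreamingPool:
    def __init__(self, capacity_mb):
        self.capacity_mb = capacity_mb
        self.resident = OrderedDict()  # tile_id -> size_mb, LRU order
        self.used_mb = 0

    def request(self, tile_id, size_mb):
        """Return MB actually read from disk for this request."""
        if tile_id in self.resident:          # hit: no disk traffic
            self.resident.move_to_end(tile_id)
            return 0
        while self.used_mb + size_mb > self.capacity_mb:
            _, freed = self.resident.popitem(last=False)  # evict LRU
            self.used_mb -= freed
        self.resident[tile_id] = size_mb
        self.used_mb += size_mb
        return size_mb                        # miss: hits the SSD

pool = StreamingPool(capacity_mb=768)
# A slowly moving camera re-requests mostly resident tiles, so
# per-frame disk traffic stays a small fraction of the pool size.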
 