
Next-Gen PS5 & XSX |OT| Console tEch threaD

Headline from PC Gamer:

"UE5 Demo runs better on laptops than on the PS5"


Reading through it, I think he is just going over the Epic China leak/interview.



The article ends with:



Edit:

It will be really interesting if we are ever able to truly directly compare the demo on all platforms.
Great, so this should run just fine on the XSX.
 
However, as far as pure visual fidelity/image quality goes, I just need to know if there is a difference in how good it can look from platform to platform based strictly on storage bandwidth.

It’s what everyone wants to know. There’s no argument that the engine, Lumen, and Nanite are scalable, or even that the demo shown could be modified to get the best out of a two-year-old Android device using older techniques.

The way I see it, either this particular demo that Sony had a hand in really was designed to showcase the PS5’s strengths, with the excessive amount of movie-quality assets being pulled in.
Or Epic made something that doesn’t really even tickle Sony’s IO, it wasn’t a demo for Sony showcasing what the PS5 can do, and, perhaps even further than that, Sony and Epic’s conversations over the years have led to Sony really fucking up and overspending cash and die space on completely redundant IO accelerators at the expense of CUs.

I find the latter half of the second scenario hard to believe, but I can imagine that Epic came up short on delivering something that really showed what Sony’s IO could do (despite saying it does) and is a bit coy about divulging that.
For me it makes sense to take things at face value rather than invent (even plausible) conspiracy theories just to dismiss what was shown as nothing special.
It also seems that we’re willing to accept that XSX will outright have a modest GPU compute advantage that could have a demo written to showcase it, but that it’s preposterous to accept that PS5 has a significant IO bandwidth and latency advantage that has had a demo written for it in partnership with Epic to promote PS5.

The implication behind all of it seems to be that 18% extra compute is a big deal but 2x or more IO speed directly addressable from the GPU through a significant amount of fixed function hardware accelerators is a complete waste of time, an oversight, and doesn’t enable any new rendering techniques that Cerny and Sweeney were talking about.

It’s like one toy needs to win in all categories.
 

Vaztu

Member
On another note... BCPack might simply be a collection of the different texture compression techniques listed here:


BCPack is just an evolution of previous texture compression methods. DrKeo talked about this pages back.



On another note,

But the CPU/GPU see 100GB of the SSD as RAM


(He talks about virtual memory for textures; that term is still present.)

If the rumour of XSX having 100GB of virtual memory is true, I don't see how that is a good thing. Like a user mentioned several pages back, how is this limit going to apply to games above 100GB? How will it work when switching between 4-5 games quickly?

One way I can see this being true is that they use it for file IO and mapping. MS mentioned that they reduced two Zen 2 CPU cores' worth of workload to one-tenth of a Zen 2 core's worth for file IO and mapping. Is this how they achieved it?

There will be latency issues with this, which I don't see too many people talking about (only SSD bandwidth). Even if the virtual memory rumour is false, the latency issue is still there. It might not be a big problem, but it exists.
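
To make "the CPU/GPU seeing 100GB of the SSD as RAM" concrete, here's a minimal sketch of the general idea using ordinary OS-level memory mapping via Python's mmap. The file name is made up and the real XVA path is obviously not public; this just shows what addressing storage like memory looks like, and where the latency concern comes from.

```python
import mmap
import os

ASSET_FILE = "textures.pak"  # hypothetical packed-asset file

with open(ASSET_FILE, "rb") as f:
    size = os.fstat(f.fileno()).st_size
    # Map the whole file: it becomes addressable like ordinary memory,
    # with the OS paging blocks in from storage on demand.
    view = mmap.mmap(f.fileno(), size, access=mmap.ACCESS_READ)

    # Touching a slice triggers page faults; the first access costs the
    # SSD's read latency, not RAM's -- exactly the latency issue above.
    chunk = view[:65536]
    print(f"mapped {size} bytes; touched {len(chunk)} bytes")
```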
 
"For example, through a custom-designed high-speed SSD, we plan to realize game data processing speeds that are approximately 100 times faster than PS4."

Do I believe that data can be accessed 100 times faster?

Yes I do.

But can it be processed 100 times faster?

I have some serious doubts about that.

Sony really needs to clarify this, otherwise it can be seen as dishonest marketing. They have to fix this before it gets serious.
 

JLB

Banned
Everybody watched it knowing it was a game engine tech demo. The premise alone prevents it from being deceptive.

And the demo ran with v-sync on and the frame rate capped at 30 for stable video output; Epic already confirmed it runs above 30 (though obviously not at 60 yet) with both of those options off. It probably tears frames like nobody's business too, though. I think they will get it up to 60fps on both PS5 and XSX before final release.
High-end PCs will easily reach (and maybe even pass) 60fps without breaking a sweat; I'm confident of that too.

Something can be deceptive even under those circumstances.
See, when a company puts "game engine" or whatever in 8px white font at the bottom of the screen, many people can get confused into thinking it's the actual game, when it's not.
MS, Sony and all the others know this.
 

Bo_Hazem

Banned
Gran Turismo 7 vs Forza 8 is going to make the talk around the UE5 demo look civilized.

GT7 is gonna look nasty. Look at how good it is now on PS4 Pro (watch in 4K@60fps):





They are aiming for 240fps now! Probably for PSVR2, and maybe 120fps for regular TVs.

 

DeepEnigma

Gold Member
Do I believe that data can be accessed 100 times faster?

Yes I do.

But can it be processed 100 times faster?

I have some serious doubts about that.

Sony really needs to clarify this, otherwise it can be seen as dishonest marketing. They have to fix this before it gets serious.

Nobody said it can be processed 100 times faster by the GPU, but GPUs are data-starved by current storage solutions and can do much more than what they are fed through RAM by streaming engine tech.

Insomniac specifically talked about this when developing Spider-Man: they had to scale back the graphics to a streaming baseline set by mechanical drives. They settled on 20MB/s as the baseline, since the stock drive can be replaced by shittier ones. In conclusion, they said they could have gotten more out of the game/GPU graphically if the storage solution hadn't held them back.
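
To put that baseline in perspective, here are quick per-frame numbers at 30fps. Only the 20MB/s figure is Insomniac's; the other baselines are rough ballparks for illustration.

```python
# Per-frame streaming budget at 30fps for a few storage baselines.
# Only the 20MB/s figure comes from the post above; the rest are
# rough ballparks, not official numbers.

baselines_mb_s = {
    "Spider-Man HDD baseline": 20,
    "typical SATA SSD (ballpark)": 500,
    "PS5 SSD, raw (Road to PS5)": 5500,
}

FPS = 30
for name, mb_s in baselines_mb_s.items():
    print(f"{name:28s} {mb_s:5d} MB/s -> {mb_s / FPS:6.1f} MB per frame")
```

At 20MB/s that's well under 1MB of fresh data per frame, which is why the graphics had to be scaled back.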
 

NewChoppa

Banned
GT7 is gonna look nasty. Look at how good it is now on PS4 Pro (watch in 4K@60fps):





They are aiming for 240fps now! Probably for PSVR2, and maybe 120fps for regular TVs.



Those reflections at the very end of the video look crazy...


 
But can it be processed 100 times faster?
Dude, do you even know what you're talking about? What does "processed" mean to you? Don't just throw scientific-sounding, on-point-looking keywords around like that; it's too general. If you're talking about feeding the GPU directly, then yes, Nanite will be the most data-starved GPU application, requiring not only high bandwidth but also low latency. The PS5 is not crunching big data for enterprises; of course it will "process" data 100 times faster, or why the hell else would they engineer and design a console like that in the first place??
 
Dude, do you even know what you're talking about? What does "processed" mean to you? Don't just throw scientific-sounding, on-point-looking keywords around like that; it's too general. If you're talking about feeding the GPU directly, then yes, Nanite will be the most data-starved GPU application, requiring not only high bandwidth but also low latency. The PS5 is not crunching big data for enterprises; of course it will "process" data 100 times faster, or why the hell else would they engineer and design a console like that in the first place??

Basically taking that information from the SSD and then displaying it on the screen.
 

Bo_Hazem

Banned
I wonder if this Nvidia paper (PRACTICAL REAL-TIME VOXEL-BASED GLOBAL ILLUMINATION FOR CURRENT GPUS) gives us a ballpark for extrapolating how the PC hardware measures up on Lumen/Nanite tech.


[Four slides from the Nvidia paper: image-quality comparisons and performance figures]

In the paper, we can see that the best-looking images (slide 1) need a GTX Titan and 2.5GB of VRAM, render at 1080p with a frame rate just shy of 40fps (25.4ms per frame), trace 1/4 or less of the UE5 demo's rays, and do so with models with a fraction of the polygons (see slide 3) and texture detail (even at ultra settings).

The thing that got me thinking is that UE5 doesn't use the PS5's hardware RT or the fixed-function rasterization path, IIRC. So even if the latest RTX 2060s/3080s can scale through those multi-factor deficits of the older GTX Titan to match the PS5 image, the PS5 will still have GPU headroom to do much more, presumably, if some (or all?) of this is being done on the PS5's CPU async cores (with acceleration help from its IO complex). The logical rationale is that a current PC's CPU can't replace the compute of even a GTX Titan, never mind something that exceeds its capabilities.

Maybe Nvidia has found major algorithm improvements since then and this is a poor reference point to extrapolate from, but I thought it was quite interesting to extrapolate from, and to marvel again at the visuals of UE5 at 1440p30 with 20M polys on screen per frame.

This post, people! I'm not sure how it went without enough attention! Such an intriguing analysis, and it even makes us wonder how much headroom is being left for the GPU to work much more efficiently! Could we even expect small movie makers to try to hack the PS5 and use it as the cheapest, most seamless 3D production system? Interesting things are unfolding. Keep them coming, sir!
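
For anyone who wants to sanity-check the comparison, the arithmetic is simple. Only the 25.4ms Titan figure and the demo's 1440p30 come from the posts above; everything else is basic math.

```python
# Sanity check of the frame-time figures quoted above.

titan_frame_ms = 25.4                     # Nvidia paper, GTX Titan @ 1080p
print(f"GTX Titan: {1000 / titan_frame_ms:.1f} fps")       # ~39.4 fps, "just shy of 40"

demo_fps = 30                             # UE5 demo @ 1440p
print(f"UE5 demo frame budget: {1000 / demo_fps:.1f} ms")  # ~33.3 ms

# Pixel-count ratio of the two resolutions being compared:
print(f"1440p vs 1080p: {2560 * 1440 / (1920 * 1080):.2f}x the pixels")  # ~1.78x
```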
 

Vaztu

Member
I fully agree. But some people are calling some sort of victory before reaching the finish line...

Of course it's not a victory.

But it does give concrete evidence for the dream Mark Cerny talked about in Road to PS5. Before the UE5 tech demo, there was not much conversation about the PS5's IO capabilities. Afterwards, it's the buzz around the forums. Who is right, no one yet knows.

I, for one, am waiting to see game designs that are revolutionary.


 

SlimySnake

Flashless at the Golden Globes
Do I believe that data can be accessed 100 times faster?

Yes I do.

But can it be processed 100 times faster?

I have some serious doubts about that.

Sony really needs to clarify this, otherwise it can be seen as dishonest marketing. They have to fix this before it gets serious.
Yes. Cerny made sure to mention that you don't just need a 100x faster SSD; you need to add enhancements to the I/O on the APU itself to make sure the data can be processed 100x faster.

There is literally a slide where he shows that both the SSD and the I/O are getting a 100x boost this gen.
 
Basically taking that information from the SSD and then displaying it on the screen.
Yup. Even at that breakneck speed the GPU can still be data-starved, given how highly parallel compute shader workloads are. Having a narrow, fast GPU really helps, though; this is what Mr. Cerny was talking about when he said he prefers 36 CUs at higher frequencies: it's easier to efficiently utilize all those CUs in parallel when you have fewer of them. If the frame target were 40 million polys, for example (instead of 20), the bandwidth probably wouldn't be enough.

The target of 20 million was most likely carefully selected so that even in the fastest portion at the end, the detail when paused is still high, higher in fact than last gen. So it all depends on the target. Say you're making an Uncharted game: the fastest traversal in the whole game sets the target to strive for, and if the character moves slower than in the UE5 demo, the target could be 30 or 40 million polys. Now imagine even more detailed world design and assets than in UE5, which is just a tech demo anyway.
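
To make the scaling argument concrete, here's some rough arithmetic. The bandwidth figures are the Road to PS5 numbers; the bytes-per-polygon value is a made-up assumption purely to show how doubling the poly target doubles the data pulled in per frame.

```python
# Illustrative only: per-frame streaming headroom at 30fps, and how a
# poly-budget increase scales the data that has to come in each frame.

FPS = 30
for label, gb_s in [("raw 5.5 GB/s", 5.5), ("compressed ~9 GB/s", 9.0)]:
    print(f"{label}: ~{gb_s * 1000 / FPS:.0f} MB streamable per frame")

BYTES_PER_POLY = 4  # hypothetical figure, for illustration only
for polys in (20_000_000, 40_000_000):
    mb = polys * BYTES_PER_POLY / 1e6
    print(f"{polys // 1_000_000}M polys -> ~{mb:.0f} MB if all geometry "
          "were streamed fresh that frame")
```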
 
Yes. Cerny made sure to mention that you don't just need a 100x faster SSD; you need to add enhancements to the I/O on the APU itself to make sure the data can be processed 100x faster.

There is literally a slide where he shows that both the SSD and the I/O are getting a 100x boost this gen.

It's not like I haven't seen them. I'm just trying to understand this better.

[Road to PS5 slides: PS5 SSD and I/O speed comparisons]
 

Corndog

Banned
I think this is the Tweet you are referring to?



If so then I take it he is referring to texture filter units in the GPU. Below you can see them in RDNA 1:
[RDNA 1 block diagram showing the texture filter units]


No doubt Microsoft has tweaked/customised these units to better serve its needs.
I don't think that's it, but I could be wrong. I thought it was a question of whether it was part of RDNA.
 

Corndog

Banned
Yup. Even at that breakneck speed the GPU can still be data-starved, given how highly parallel compute shader workloads are. Having a narrow, fast GPU really helps, though; this is what Mr. Cerny was talking about when he said he prefers 36 CUs at higher frequencies: it's easier to efficiently utilize all those CUs in parallel when you have fewer of them. If the frame target were 40 million polys, for example (instead of 20), the bandwidth probably wouldn't be enough.

The target of 20 million was most likely carefully selected so that even in the fastest portion at the end, the detail when paused is still high, higher in fact than last gen. So it all depends on the target. Say you're making an Uncharted game: the fastest traversal in the whole game sets the target to strive for, and if the character moves slower than in the UE5 demo, the target could be 30 or 40 million polys. Now imagine even more detailed world design and assets than in UE5, which is just a tech demo anyway.
PS4 seemed to have an advantage over Xbox One by having more CUs even though they were a bit slower. Are you saying this is no longer the case?
 
Yup. Even at that breakneck speed the GPU can still be data-starved, given how highly parallel compute shader workloads are. Having a narrow, fast GPU really helps, though; this is what Mr. Cerny was talking about when he said he prefers 36 CUs at higher frequencies: it's easier to efficiently utilize all those CUs in parallel when you have fewer of them. If the frame target were 40 million polys, for example (instead of 20), the bandwidth probably wouldn't be enough.

The target of 20 million was most likely carefully selected so that even in the fastest portion at the end, the detail when paused is still high, higher in fact than last gen. So it all depends on the target. Say you're making an Uncharted game: the fastest traversal in the whole game sets the target to strive for, and if the character moves slower than in the UE5 demo, the target could be 30 or 40 million polys. Now imagine even more detailed world design and assets than in UE5, which is just a tech demo anyway.

Oh I get it now. It's the flow of the river and not how wide it is.

Sony's SSD has to be fast so it can feed data into the GPU quickly enough. A slower GPU that's wider doesn't need the data as quickly.

From the looks of it, Cerny wanted to go narrow and fast from the very beginning, and for that to work well you need a very fast I/O system.
 

HeisenbergFX4

Gold Member
Is your source still confident in June 4th? Jeff Grubb thinks it moved out from this date.

I just wish Sony would confirm a date. Put us out of our misery.

He thinks, as of yesterday, it's still early June.

He has not heard a 100% confirmed date, just the June 4th talk in the office as the target.

Even if it moves a few days, say to the following Mon/Tues, I'm okay with that; let's just get this show started.
 

HAL-01

Member
How about talking about something new? What's everyone's opinion on game sizes next gen?
On the one hand, we can expect no file redundancy and higher compression of assets. On the other, the quality of textures will go up, as well as the capability to shuffle them around. Overall, can we expect smaller install sizes, or is 825GB/1TB going to be a real pain?
The lack of asset duplication will make a huge difference, especially in open-world games. Though the increase in asset and texture detail might just make up for it; I expect games to be around the same size, if not a bit bigger for the Uncharteds and such.
 

Dodkrake

Banned
It might be more efficient in some cases, not more effective. That's exactly what the Tweet said too. If their SSD speeds were equal, the XSX one would be better. Except they are not equal. The PS5's is faster. That means that the gap between them is (partially) diminished, but the PS5 would probably still be stronger.
If you were actually interested in reading rather than reacting and mocking, you would have understood that.

If my grandma had balls she'd be my grandad.
 

Lunatic_Gamer

Gold Member
The Ascent Dev: Microsoft Is Very Approachable; Xbox Series X Is Easy to Develop for, Leap Is Huge

The Xbox Series X leap is huge. We're doing, of course, 4K 60 FPS in The Ascent, which is blazingly fast. The hard drive read speed and stuff like that - there are so many things. Faster load speeds - it just comes out, magically. You just click, and there you go, it loads way faster.

The draw distances are going to be as long as we can possibly make them anyway, so it's mostly about the clarity of detail, because we have more detail than you can actually see at lower resolutions.

 

quest

Not Banned from OT
Ps4 seemed to have an advantage over Xbox one by having more cu’s even though they were a bit slower. Are you saying this is no longer the case?
It's whatever people want to make up. You really think a PS5 Pro will be 36 CUs at 4GHz? Sony has great messaging: today, 52 CUs are impossible to program for; next, 72 CUs will be great and the new sweetness. Best messaging in the industry. I admire the coordination and timing, and locking up Epic was pure genius. By the time Microsoft can respond with an Unreal 5 demo, they will have been dreamcasted, as they say. Brilliantly played and pulled off. This was the game-sharing video of E3 2013 all over again, the death blow. There is no recovery from this. I applaud it as someone who loves to watch strategy play out.
 

Corndog

Banned
Disclaimer: He was only referring to PS5, you may suffer from pop-ins, stutter, frame drops, screen tearing, and load screens on other inferior platforms.
I watched from your timestamp onward for several minutes. Nowhere did I hear that. Do you have an exact timestamp for where they say there's no pop-in only on PS5 and not on other platforms?
 

Corndog

Banned
If I understood correctly, someone asked at the video presentation if his own PC (with a 2070 or something) would be able to run the demo. To which he got a yes (obviously), "pretty good" or something. No remarks on level of detail or anything.

Is this supposed to be the grand Epic-Sony conspiracy?
No, just people saying that the demo was only possible on PS5, when that's not true.
 

Corndog

Banned
Bullshit.
You cannot mmap flash into RAM without decryption. Otherwise it's as good as having no copy protection whatsoever.
And decrypting it in a memory controller that needs to run at 560GB/s? Good luck with that.
Why would the data and other assets need to be encrypted? Is that how it is done now?

Edit: That quote about it not being part of RDNA was the one I was talking about earlier.
 

Ascend

Member
Bullshit.
You cannot mmap flash into RAM without decryption. Otherwise it's as good as having no copy protection whatsoever.
And decrypting it in a memory controller that needs to run at 560GB/s? Good luck with that.
As far as I'm aware, the portion from the 100GB would be decompressed and sent directly to the GPU for processing, bypassing RAM.
 

FeiRR

Banned
Shaders are assets. They run on the GPU. Change a shader and it gives you root access. Problem solved. The OG Xbox was hacked via a shader, AFAIR.
Or change a shader's input data, making it fail and give you root access.
The possibilities are endless.
I thought about it before but forgot to ask: how much of an impact do you think decryption will have on performance? We discuss decompression, but there's another layer of processing for data coming from storage. Will it be hardware this time? AFAIR, on PS3 one SPU was dedicated to it, and on PS4 it's one Jaguar core.
 

psorcerer

Banned
I thought about it before but forgot to ask: how much of an impact do you think decryption will have on performance? We discuss decompression, but there's another layer of processing for data coming from storage. Will it be hardware this time? AFAIR, on PS3 one SPU was dedicated to it, and on PS4 it's one Jaguar core.

Sony has it all in the SSD patent. The same decompression blocks will run the decryption and tamper-detection procedures, i.e. the stated speeds already include all of that.
 

Bo_Hazem

Banned
I watched from your timestamp onward for several minutes. Nowhere did I hear that. Do you have an exact timestamp for where they say there's no pop-in only on PS5 and not on other platforms?

Here, released a few days ago, is a $5,500 RTX 8000 struggling with frame rates, stutter, resolution, and VRS ruining the overall image:





Again, that's a $5,500 graphics card with 48GB of VRAM. That's what happens when you can't offload/upload data seamlessly: you take a nasty hit on the GPU from computational waste, not to mention traditional GPU caches flushing everything instead of doing the PS5's partial scrubbing, which avoids stalls. If anything could brute-force those deficiencies, it would be this $5,500 card. Funny to see the ball at semi-480p/240p just to keep up, or whatever craziness is happening there.
 

Ascend

Member
And stored where?

My problem with the XBSX discussion is that it's all too low-level. Dunno, like the physics discussions in The Big Bang Theory: cringy and ignorant.
Not even Rick and Morty level.
Let me put it another way... Normally, data travels like this (extremely simplified):
Disk -> RAM -> GPU -> Display

What XVA claims to do is make 100GB of the SSD work as RAM. So technically it's this:
Disk -> GPU -> Display

But the system would see that exact same operation like this:
RAM -> GPU -> Display

So the loaded data doesn't need to be stored anywhere. It might use the 16GB RAM pool itself for buffering after the data is transferred to the GPU.
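
If it helps, here's a toy contrast of those two paths, using plain file reads and OS-level memory mapping as stand-ins. This is an analogy for the data flow, not the actual XVA mechanism, which hasn't been detailed publicly.

```python
import mmap

def traditional_path(path: str, offset: int, size: int) -> bytes:
    # Disk -> RAM: read() copies the bytes into a new buffer in RAM,
    # which would then be uploaded RAM -> GPU.
    with open(path, "rb") as f:
        f.seek(offset)
        return f.read(size)

def mapped_path(path: str, offset: int, size: int) -> bytes:
    # Disk "as RAM": the whole file is addressable immediately; pages
    # are only faulted in from storage when the slice is touched.
    with open(path, "rb") as f:
        view = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
        return view[offset:offset + size]
```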
 

psorcerer

Banned
So the loaded data doesn't need to be stored anywhere. It might use the 16GB RAM pool itself for buffering after the data is transferred to the GPU.

To decode/decompress data you need to copy from RAM to RAM.
You cannot fetch something into a GPU register or cache if it's not in RAM (not to mention that "piecewise" (de)compressors that can fetch small chunks compress badly).
Unless page faults are served by the memory controller directly; but that means the data needs to be prepared (decompressed/decoded) by the controller.
You can read everything in Sony's SSD patent. I don't see any deficiencies there; it's an optimal path given the constraints, and XBSX has the same constraints.
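
A concrete illustration of that RAM-to-RAM point, with zlib standing in for the software decompressor: the output necessarily lands in a second buffer in RAM, which is exactly the copy the dedicated hardware blocks are there to absorb.

```python
import zlib

# A software decompressor always writes its output into a new buffer
# in RAM -- you can't decompress "into" a GPU register or cache line.

original = b"geometry and texture data " * 1000
packed = zlib.compress(original)        # what would sit on the SSD

unpacked = zlib.decompress(packed)      # second buffer, also in RAM
assert unpacked == original
print(f"{len(packed)} compressed bytes -> {len(unpacked)} bytes in RAM")
```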
 

Bo_Hazem

Banned
How about talking about something new? What's everyone's opinion on game sizes next gen?
On the one hand, we can expect no file redundancy and higher compression of assets. On the other, the quality of textures will go up, as well as the capability to shuffle them around. Overall, can we expect smaller install sizes, or is 825GB/1TB going to be a real pain?

With textures being roughly 60% of total game size, the many LODs and HDD duplicates going away, and the PS5's ~10% better compression, I'd say games will remain around 50-200GB for a few years, depending on DLC as well. I can see PS5's linear/semi-linear games using insane 8K assets, and open-world games using 4K-1080p assets.
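
Rough arithmetic with those figures (the 60% texture share and ~10% compression gain are from the post above; the totals are just examples):

```python
# Illustrative install-size math: shave ~10% off the texture share
# (60% of the total), before any savings from removing duplicates.

for total_gb in (50, 100, 200):
    textures = total_gb * 0.60
    other = total_gb * 0.40
    shrunk = textures * 0.90 + other
    print(f"{total_gb}GB game -> ~{shrunk:.0f}GB with 10% better "
          "texture compression alone")
```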
 