
Digital Foundry claims the PS5 doesn't exhibit any evidence of VRS (Variable Rate Shading) in the PS5 showcase.

pyrocro

Member
Access vs. loading are two different words. Laughing at some people here thinking the XSX will be able to stream that much data, skipping the system/GPU RAM altogether, and have that data rendered by the GPU straight from the SSD.

You would think Aaron Greenberg would be tweeting about this if that were the case.

reference:
www.dictionary.com
Can we stop this shit? It GOES to RAM FIRST. TO MEMORY FIRST. SSD -> MEMORY -> GPU.
You don't need to make up shit for the PS5 SSD to win.
No one is running extra traces on a motherboard from the SSD to the GPU for 5.5GB/s of bandwidth.

Tim himself outlines it here.
 
Last edited:

sendit

Member
Can we stop this shit? It GOES to RAM FIRST. TO MEMORY FIRST. SSD -> MEMORY -> GPU.
You don't need to make up shit for the PS5 SSD to win.
No one is running extra traces on a motherboard from the SSD to the GPU for 5.5GB/s of bandwidth.

Tim himself outlines it here.


Lol, what? Did you actually read what I typed? Or did you go into auto-defense mode?
 

Tomeru

Member
DF used to be perfectly fine and THE source for comparisons between PS4 and Xbox One, very professional, la crème de la crème, but something happened in 2017, something that would change Digital Foundry's credibility completely through to the present day. Can anyone guess what happened?

What? What? I'm dying to know!

WHAT?!?!?!?!
 

chilichote

Member
DF used to be perfectly fine and THE source for comparisons between PS4 and Xbox One, very professional, la crème de la crème, but something happened in 2017, something that would change Digital Foundry's credibility completely through to the present day. Can anyone guess what happened?
Even in the PS360 era it was clear that DF leaned towards Xbox, and from 2013 onwards it became even clearer. 2017, i.e. the release of the One X, has nothing to do with it. People are trying to invent something here that never happened in reality.
 

Jon Neu

Banned
Dial it back on the constant ad hominem and personal insults. Attack the argument, not the person.
If people are only interested in hearing what they want to hear, I cannot spend more time on this beyond the following answer. I use you as an excuse but the answer is addressed to all those interested in the subject.

Funny, all your posts are about overhyping the SSD in the PS5 and downplaying all the benefits the XSX has over the PS5. You seem to be exactly the thing you are condemning.

You're Spanish, so it's no surprise you have a Sony bias, coming from the country of "la plei es la plei".

Actually, are you Lherre by any chance?

That would explain a lot.
 

geordiemp

Member
With that logic, the 2080 Ti should lose to the 2080, right? Since when does wider lose to narrower in the same GPU family? I wonder why billion-dollar company Nvidia would sell their faster GPU for less, but what do they know.

Oh oh, I know the answer, I know, pick me :lollipop_raising_hand: pick me :lollipop_raising_hand:

RTX 2080Ti - 13.4TF
RTX 2080 - 10.1TF and VRAM limited

So at 30% more TF, do you see 30% more performance at 4K?
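As a quick sanity check of the gap (a sketch using the TF figures quoted above, which are round forum numbers rather than vendor spec-sheet values):

```python
# TF figures as quoted in the post above (illustrative, not official specs).
rtx_2080ti_tf = 13.4
rtx_2080_tf = 10.1

# Relative compute advantage of the wider, lower-clocked card.
advantage = rtx_2080ti_tf / rtx_2080_tf - 1.0
print(f"2080 Ti compute advantage: {advantage:.0%}")  # ~33%
```

Whether that ~33% compute advantage shows up as ~33% more frames at 4K is exactly the point being argued; bandwidth, fill rate, and VRAM limits usually shrink the gap in practice.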
 

BGs

Industry Professional
Funny, all your posts are about overhyping the SSD in the PS5 and downplaying all the benefits the XSX has over the PS5. You seem to be exactly the thing you are condemning.

You're Spanish, so it's no surprise you have a Sony bias, coming from the country of "la plei es la plei".

Actually, are you Lherre by any chance?

That would explain a lot.

Jon Neu

Ignored.
 

Lethal01

Member
WTF do you think "optimizing a game" means? No wonder you are not impressed; you are clueless.
At least read some of the posts here to get a clue.


Optimization doesn't always mean making compromises the player cares about; Spider-Man optimized its compression with no drawbacks.
 

DeepEnigma

Gold Member
How are they not? The whole video they are claiming the Xbox SX couldn't actually achieve this level of fidelity.
Now they are implying the PS5 is missing a specific feature.

Both are concerns about the system being lacking.

I would rather have a backhanded compliment than concern trolling for the fanatics to run with.
 

magnumpy

Member
Isn't VRS just a software feature? I'm not aware of any piece of hardware that would enable VRS on one system and not the other. They both use the same GPU architecture.
 

Jon Neu

Banned
I would rather have a backhanded compliment than concern trolling for the fanatics to run with.

Questioning the authenticity isn’t a compliment, it’s the normal reaction to seeing a level of graphical fidelity that is literally CGI level.

If the game finally delivers those graphics, then it will be a well earned compliment.
 

DeepEnigma

Gold Member
Questioning the authenticity isn’t a compliment, it’s the normal reaction to seeing a level of graphical fidelity that is literally CGI level.

If the game finally delivers those graphics, then it will be a well earned compliment.

"I can't believe that is running in real time on there, that looks damned near CGI."

It is all about perspective, innit?
 

Jon Neu

Banned
"I can't believe that is running in real time on there, that looks damned near CGI."

It is all about perspective, innit?

They literally say they don't believe it. It's not a compliment; it's actually a lack of trust, which is the opposite of a compliment.

And it's pretty normal; I'm skeptical too about either of these consoles reaching a graphical level that damn good.

I hope they do, but right now the normal reaction to something like that being possible in-game is skepticism.
 
Isn't VRS just a software feature? I'm not aware of any piece of hardware that would enable VRS on one system and not the other. They both use the same GPU architecture.
Yes, it's 95% a software API using one small RDNA2 hardware feature available on all RDNA2 GPUs. But even then it's a compromise, as we can easily see the artefacts it brings, at least with MS's own implementation.

From patents we know Sony has a different implementation of a very similar feature, but they won't talk about it because their job is to promote their games, not the software APIs hidden behind strict NDAs.
 

sendit

Member
Funny, all your posts are about overhyping the SSD in the PS5 and downplaying all the benefits the XSX has over the PS5. You seem to be exactly the thing you are condemning.

You're Spanish, so it's no surprise you have a Sony bias, coming from the country of "la plei es la plei".

Actually, are you Lherre by any chance?

That would explain a lot.

An awesome example of what happens when you can't rebut a point.

"100 GB rendered straight from the SSD" :messenger_loudly_crying:
 

Lethal01

Member
Did you actually see the gameplay footage, here:

?

I think you got something wrong. There is no such thing as instantly loading a whole new game, there is still a hidden loading screen, see here:

They are basically just jumping, as you can see here:






For a real level change, we are seeing a very long loading screen; the loading screen starts as soon as the camera zooms in:



I think this is even possible on current-gen easily. Titanfall 2 has a similar mission.


Nah, loading probably starts right as you go through the portal; before that it's just a cinematic to make the transition more interesting.
Under 2 seconds of loading would probably be 30 on current consoles. The mechanic is absolutely not feasible without an SSD, but it's way too early to say whether it's doable on XSX.
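For rough arithmetic behind that guess, a minimal load-time model (the level size and HDD figures below are illustrative assumptions, not anything measured from the demo):

```python
# Rough load-time model: time = seek latency + data volume / bandwidth.
def load_time_s(data_gb: float, bandwidth_gb_s: float, seek_s: float = 0.0) -> float:
    return seek_s + data_gb / bandwidth_gb_s

level_gb = 8.0  # hypothetical size of the streamed world chunk

print(f"PS5 SSD (5.5 GB/s raw): {load_time_s(level_gb, 5.5):.1f} s")        # 1.5 s
print(f"XSX SSD (2.4 GB/s raw): {load_time_s(level_gb, 2.4):.1f} s")        # 3.3 s
print(f"Current-gen HDD (~0.1 GB/s): {load_time_s(level_gb, 0.1, 0.015):.0f} s")  # 80 s
```

Even with generous HDD numbers, the gap between "a couple of seconds" and "half a minute or more" is a bandwidth effect, not a software trick.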

Edit: Nvm he's banned.
 

Jon Neu

Banned
An awesome example of what happens when you can't rebut a point.

"100 GB rendered straight from the SSD" :messenger_loudly_crying:

What point do I have to refute? The point where Sony fanboys say the 100GB is bullshit? It's hardly a point, and more like wishful thinking and damage control.

Seems to me you are all coming to the realization that the difference between the SSDs is going to be the definition of diminishing returns, so you have to resort to childish antics such as "I don't believe what MS says!".

At the same time, you all believe things like the CPU and GPU of the PS5 being capable of their maximum theoretical speeds all the time, because Sony says so.

It's the marvelous world of fanboyism: swimming in hypocrisy.
 

Xplainin

Banned
The issue I see going forward is how we would know whether something can be done on one console and not the other. It would have to be an exclusive, which would never be on the rival console.

Obviously resolution and framerates will differ between rival consoles. But I feel that when people talk about things that cannot be done, it comes down to game design and features.
Yep, just like with Ratchet being a PS5 exclusive: people are saying it's only possible on PS5, so we will never know.
If people are only interested in hearing what they want to hear, I cannot spend more time on this beyond the following answer. I use you as an excuse, but the answer is addressed to everyone interested in the subject.

Obviously I don't know how much data is being loaded in R&C; I don't work at Insomniac. But I have enough experience to get an idea that it is more than 2.4GB/s (which is the XSX SSD bandwidth limit).

You start from the false belief that XSX can load 100GB/s. That is not "true" as you understand it. The bandwidth is 2.4GB/s and, at most, with compression, you effectively get the equivalent of 4.8GB/s. Equivalent. No more.

PS5 is designed to offer up to 22GB/s effective. It would be necessary to explain some peculiarities of PS5 for you to understand the "why", and since I cannot explain the "why", I accept that you may not believe it. It is not your fault that you don't have access to these peculiarities.

If you still want to take at face value the self-interested claims about the 100GB of XSX, then you could also say that on PS5 it is 458GB. And surely you must also believe the "600 megas" of Movistar (a telephone company). I don't know if this happens in all countries; I suppose this example will only be understood by residents of Spain. It doesn't take much explanation: it is not reality. With data, equivalences are one thing and actual bandwidth is another. There's no more to it.

Now the questions I ask. Do you think R&C is loading more than 2.4 or 4.8GB/s of data? And second: at what speed do you think XSX can do it in practice, from the moment the data leaves the SSD until it reaches the screen? And not doing it once in one specific place, but being able to do it at any time, several times, in different areas, with different assets.

As I said in my first post about it, I'm not saying that XSX can't deliver that quality. I'm just saying that XSX cannot do R&C as it is intended; it would have to be done differently. And as a user you should care little about how they achieve it, as long as it is achieved. But in no case would it be done the way PS5 can do it.

And assuming the amount loaded is less than 4.8GB/s, assets prepared specifically for XSX would be needed: texture compression, geometry optimization, etc. would have to be readjusted (I will not explain why).

And assuming the amount loaded is less than 2.4GB/s, then we could start talking about comparisons.

Do you think R&C loads less than 2.4GB/s?

In any case, if these explanations do not help you either, I cannot say more. I respect your opinions, but sometimes they are not realistic in one direction or the other. PS5 does not have 12TF. And XSX does not have the PS5 data management system. The XSX GPU has over 15% more theoretical raw power (albeit at a slower clock, and clock speed is essential and transcendent for some tasks). The PS5 data management system, on the other hand, is in practice around 5 times better than that of XSX in the worst case, and about 9 times better in the best possible scenario for PS5. And on average, with XSX at maximum capacity with compression and PS5 at minimum without compression, PS5 still manages twice as much data as XSX in its most idyllic, non-constant moments.
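Those ratios can be sanity-checked against the figures the post itself cites; the 22GB/s "effective" number is the poster's own claim, not an official spec, so treat this as a sketch of the claimed comparison:

```python
# Bandwidth figures quoted in the post (GB/s).
xsx_raw, xsx_compressed = 2.4, 4.8
ps5_effective = 22.0  # the poster's claim, not a published Sony figure

best_case = ps5_effective / xsx_raw          # PS5 effective vs XSX raw
worst_case = ps5_effective / xsx_compressed  # PS5 effective vs XSX compressed

print(f"best case:  {best_case:.1f}x")   # ~9.2x, close to the "9 times" claim
print(f"worst case: {worst_case:.1f}x")  # ~4.6x, close to the "5 times" claim
```

So the "5 times worst case, 9 times best case" framing is internally consistent with the 2.4/4.8/22 numbers, whatever one thinks of the 22GB/s figure itself.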

It would be good to retire the argument that the PS5 SSD will bring no benefit.

XSX has a GPU with superior theoretical raw power. That's where it all ends. What was shown in the PS5 presentation does not do justice to what these machines (both of them) can do, but you have to start accepting the limitations of each one. It's nice to have a long penis, but sometimes it's better to have it a little shorter but much fatter. It could offer you a more satisfying experience in the long run (or not, according to everyone's tastes, of course).

It would be very debatable whether it is necessary to use, for example, 8K textures instead of 4K textures; that would be legitimately debatable. But the reality is that the volume of data managed by PS5 cannot be managed by XSX, nor by a home PC. At the beginning of the generation, what you see on screen will surely make you think this difference is not relevant, but by the end of the generation you will realize the relevance it has. You are within your rights to believe it or not; I am only stating what there is. Neither company pays me; my only interest is VR with respect to hardware, and games in general with respect to software, whatever the platform. And I think I have been quite critical of the games presented by Sony; perhaps you missed that.

I will also tell you that I have not created the hardware, I only use it, and internally there are many things I do not fully understand, but I can analyze the results. Still, I know there are many people around here who, if they had the data on their screens, could explain better than I can how it works and its limitations (like other coworkers). But unfortunately I cannot offer you this information.

Cheers
When you use the figure of 22GB/s for the PS5 SSD compared to 2.4GB/s for the XSX, I question your motives.
You made a statement that the XSX can't do a screen swap, yet you have never seen, let alone worked on, an XSX; you have no idea how much data is needed to do it, and you are intimating that it would take up a great percentage of the XSX's RAM, which it wouldn't.
 

sendit

Member
What point I have to refuse? The point in which Sony fanboys say that the 100GB is bullshit? It’s hardly a point and more like wishful thinking and damage control.

Seems to me you are all coming to the realization that the difference between the SSD’s is going to be the definition of diminishing returns, so you have to retort to childish antics such as “I don’t believe what MS says!”.

At the same time that you all believe things like the CPU and GPU of the PS5 being capable of their maximum theorical speeds all the time because Sony says so.

It’s the marvelous world of fanboyism; swimming in hypocrisy.

First, I think the XSX is a very well designed console.

However, the 100GB is bullshit unless Microsoft explains how they calculated that number. You can't logically explain how the GPU will render data straight from the SSD, skipping the system/GPU RAM.

Additionally, can you provide a source on who is claiming the PS5 will run at 100% CPU/GPU? Because Cerny clearly said "most of the time".

You yelling “fanboy” and “hypocrisy” is like the pot calling the kettle black.
 

geordiemp

Member
First, I think the XSX is a very well designed console.

However, the 100GB is bullshit unless Microsoft explains how they calculated that number. You can't logically explain how the GPU will render data straight from the SSD, skipping the system/GPU RAM.

Additionally, can you provide a source on who is claiming the PS5 will run at 100% CPU/GPU? Because Cerny clearly said "most of the time".

You yelling “fanboy” and “hypocrisy” is like the pot calling the kettle black.

The access time of the current Phison controller is about 25 microseconds, or thereabouts, if anyone can be arsed to read the spec sheet of each model :messenger_beaming:. However, that is just the access time to find where the data is, not to transfer it.

"Instantly accessible" is a play on words; it does not mean it instantly loads.
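That distinction is easy to make concrete: total read time is access latency plus size over bandwidth, and for anything beyond a few megabytes the transfer term dominates completely. A sketch using the figures mentioned in the thread:

```python
ACCESS_LATENCY_S = 25e-6  # ~25 microsecond controller access time (figure from the post)
BANDWIDTH_GB_S = 2.4      # XSX raw sequential bandwidth

def read_time_s(size_gb: float) -> float:
    # "Instantly accessible" covers only the latency term, not the transfer term.
    return ACCESS_LATENCY_S + size_gb / BANDWIDTH_GB_S

for size_gb in (0.001, 1.0, 100.0):  # 1 MB, 1 GB, 100 GB
    print(f"{size_gb:>7.3f} GB -> {read_time_s(size_gb):.4f} s")
```

Moving the full 100 GB at 2.4 GB/s still takes around 42 seconds, so "instant" can only describe where the data can be addressed, never how fast it arrives.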
 

BGs

Industry Professional
Yep, just like with Ratchet being a PS5 exclusive: people are saying it's only possible on PS5, so we will never know.

When you use the figure of 22GB/s for the PS5 SSD compared to 2.4GB/s for the XSX, I question your motives.
You made a statement that the XSX can't do a screen swap, yet you have never seen, let alone worked on, an XSX; you have no idea how much data is needed to do it, and you are intimating that it would take up a great percentage of the XSX's RAM, which it wouldn't.
Knowing that there were professionals who questioned Tim Sweeney (and they had to retract publicly), I don't know why it is strange that my words are questioned. But you know that I don't care. You did not know anything when you wrote your message, and you will not know anything when you finish reading mine. Ignored. Next?
 

Dolomite

Member
I think most PS Studios already have their own version of that Nanite tech, though probably not Polyphony's engine, as it's the most likely to have used VRS: some parts showed the ladder effect to me, with some pop-in as well. GT7 is still using old LOD methods.
That's a dangerous claim. Epic copyrights their engines for a reason. That's like saying XGS has their own version of Décima. If you're instead implying that Sony's in-house devs have an answer for achieving the LOD and high poly counts offered by Unreal 5 (a multiplatform engine that scales down to mobile devices), then sure. But every next-gen first-party studio should, unless their vision doesn't call for photorealism.
 
Isn't VRS just a software feature? I'm not aware of any piece of hardware that would enable VRS on one system and not the other. They both use the same GPU architecture.

You need the hardware to perform it if you want the benefits of hardware acceleration. Otherwise you implement your own solution in software, which is less efficient but can technically be done.

IIRC, among AMD GPUs, RDNA1 does not support VRS; just RDNA2 and onward. If Sony has an equivalent to VRS, they are implementing it differently, maybe with some customizations to the GE and PSes. And it would not be called VRS, as that particular term is patented by Microsoft.

Some people mentioned MS and Intel's patents referencing Sony's, but didn't keep in mind that Sony's was for foveated rendering in application with the PSVR. That and VRS are similar in some aspects but operate and are applied differently. You can have two technologies with similar base DNA but very different implementations and functionality in practice, just look at PCM technologies like 3D Xpoint and ReRAM. It's nothing new.

First, I think the XSX is a very well designed console.

However, the 100GB is bullshit unless Microsoft explains how they calculated that number. You can't logically explain how the GPU will render data straight from the SSD, skipping the system/GPU RAM.

The 100 GB bit, some of us have speculated, might be in regards to the GPU addressing a partition of data on the drive as extended RAM (it sees it more or less as RAM) through GPU modifications built off of pre-existing features of the XBO such as executeIndirect (which only a couple of Nvidia cards have support for in hardware). GPUDirectStorage, as nVidia terms it, already allows GPUs in non-hUMA setups to access data from storage into the VRAM. It's particularly useful for GPUs in that type of setup, but since these are hUMA systems that on the surface wouldn't seem necessary.

But...what if there's more to that virtual pool partition on XSX than meets the eye? We know the OS is managing the virtual partitions of the two RAM pools on the system, is it possible in some case that the GPU can access the 4x 1 GB RAM modules while the CPU accesses the lower-bound 1 GB of the 6x 2 GB modules? We don't know if this is the case or not, but if the OS can virtualize a split pool for optimizing the bus access of the GPU and CPU in handling contention issues, it might also theoretically be able to implement a mode, even if just in specific usage cases, to virtualize the pool as a 4x 1 GB chunk to the GPU and 6x 1 GB chunk to the CPU that can have them work simultaneously on the bus in those instances.

The tradeoff there would be that collectively only 10 GB of system memory is being accessed, but the OS could then just re-virtualize the normal pool partition logic as needed, with the usual timing penalties factoring in. Those wouldn't necessarily be massive at all; if Sony can supposedly automate the power load adjustment in their variable frequency setup in 2 ms or less, I don't see how MS would be unable to do what's proposed here in an even smaller time range.

Anyway, the 100 GB being "instantly available" was never a reference to the speed of access but maybe something in regards to the scenario I've just described; even if the data is going to RAM, and the RAM it can go to is cut down to 4 GB physical with this method (if it would need to go to more RAM than that and/or need a parallel rate of data transfer greater than 224 GB/s, it'd have to re-virtualize the normal memory pool logic), at the very least the GPU can still transfer data while the CPU has access to data in the 6 GB pool on the rest of the bus, simultaneously.

Again, though, it'd depend on what customizations they've done with the GPU here and also, what extent the governing logic in the OS and kernel for virtualizing the memory pool partitions operates at. But it certainly seems like a potential capability and a logical extension of the GPUDirectStorage features already present in nVidia GPUs as well as things like AMD's SSG card line (it works very similarly I would assume, i.e drawing data directly from the 2 TB of NAND and transferring it to the GPU's onboard HBM2 VRAM, rather than needing to have the CPU draw the data from storage, dump it in system RAM, and then have the GPU shadow-copy those assets to the VRAM as how many older-generation CPU/GPU setups on PC operate). I'm gonna do a little more thinking on this because there might be some plausibility in it being what MS has done with their system setup, IMHO.
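The bus figures cited in this split-pool discussion fall out of simple per-chip arithmetic (the chip counts and 14 Gbps data rate below are the publicly stated XSX memory configuration; the grouping into "pools" is the speculation above, not an official breakdown):

```python
# GDDR6 on a 32-bit channel at 14 Gbps moves 32 * 14 / 8 = 56 GB/s per chip.
per_chip_gb_s = 32 * 14 / 8

full_bus   = 10 * per_chip_gb_s  # all ten chips: the 560 GB/s "GPU-optimal" figure
six_chips  = 6 * per_chip_gb_s   # the 6x 2GB chips alone: 336 GB/s
four_chips = 4 * per_chip_gb_s   # the 4x 1GB chips alone: the 224 GB/s mentioned above

print(full_bus, six_chips, four_chips)  # 560.0 336.0 224.0
```

The same arithmetic also explains the 112 GB/s figure that comes up later in the thread: 560 GB/s minus the PS5's 448 GB/s (256-bit at 14 Gbps) is exactly two chips' worth of bandwidth.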
 

Bo_Hazem

Banned
That's a dangerous claim. Epic copyrights their engines for a reason. That's like saying XGS has their own version of Décima. If you're instead implying that Sony's in-house devs have an answer for achieving the LOD and high poly counts offered by Unreal 5 (a multiplatform engine that scales down to mobile devices), then sure. But every next-gen first-party studio should, unless their vision doesn't call for photorealism.

It's not stealing; it's like the collaboration between Sony and AMD, or, say, Sony and Panasonic for Blu-ray: shared copyright, or mutually owning the same technology.

 

Dolomite

Member
Depends which theory about the 100GB you're talking about.

If it's the theory that 100GB of data can be made instantly accessible to the GPU, that part is bullshit. And the main reason why is the hard limits of the SSD, which determine how fast data can transfer over.
MS (IMHO purposefully) never said the 100GB of instant assets is being STREAMED via an NVMe SSD. The XVA is more of a hive mind than individual moving parts. If anything, the research I've come across and the info teased by hardware developers (much of it available on Twitter) seems to hint at the 100GB of data being either coded directly into RAM or mapped onto the XVA as a whole, without the SSD's streaming speed as a crucial variable.
 

Yoboman

Member
What would be the reason Sony wouldn't have a technology that's already been out for over a year and has been in the pipeline for graphics cards even longer?
 

Genx3

Member
Correct. Being able to locate that data at instant speed isn't the same thing as transferring it over.

I believe that's where a lot of this confusion is coming from.

I think what MS means by "100 GB is instantly accessible" is that you can keep 100 GB of assets on the SSD ready to be moved into RAM. Of course, it will be moved into RAM at a rate the SSD can handle, which, if managed properly, will be almost unnoticeable; but the PS5 should have an advantage in hiding this load time better than the XSX.
 
MS (IMHO purposefully) never said the 100GB of instant assets is being STREAMED via an NVMe SSD. The XVA is more of a hive mind than individual moving parts. If anything, the research I've come across and the info teased by hardware developers (much of it available on Twitter) seems to hint at the 100GB of data being either coded directly into RAM or mapped onto the XVA as a whole, without the SSD's streaming speed as a crucial variable.
I think what MS means by "100 GB is instantly accessible" is that you can keep 100 GB of assets on the SSD ready to be moved into RAM. Of course, it will be moved into RAM at a rate the SSD can handle, which, if managed properly, will be almost unnoticeable; but the PS5 should have an advantage in hiding this load time better than the XSX.


I think the big thing here is that Microsoft never really clarified it when they should have. I've read so many different theories on the 100GB/s that I don't really know which ones are true. Well except for the ones that are complete BS of course.
 

Dolomite

Member
It's not stealing; it's like the collaboration between Sony and AMD, or, say, Sony and Panasonic for Blu-ray: shared copyright, or mutually owning the same technology.


Yeah, no. I, and everyone with a working monitor, understood the marketing partnership that went into developing the demo. Not the engine; the engine powers the demo. What I said was that Sony can't have a copycat of UE5; they can license it like everyone else, or achieve the same or better results with similar techniques. Which is good, because Décima and all of Sony's in-house engines should strive to be the standard, as they have been.
Check this out

And get this: Quixel is available right now FOR FREE to valid students with a purchase of UE4. The video alone offers more impressive LOD and geometry than the tech demo under optimal circumstances.

These visuals can be achieved. An SSD won't save us, and TF damn sure won't. Skilled and passionate development will.
 
BGs, I don't know what you mean by the following:

XSX has a GPU with superior theoretical raw power. That's where it all ends

Because in truth that's not where it all ends, not even close. You can't hand-wave away a larger physical L1 and L2 GPU cache (I'm talking actual data amount, not cache speed). You can't hand-wave away the additional 112 GB/s at which the XSX's GPU can access data across the bus compared to the PS5, because that affects how quickly the caches can be loaded with new data. You can't hand-wave away the ML customizations MS have seemingly done to the GPU, with Tensor Core-like equivalents that can essentially provide it around 13 TF of RT performance (or you can convert that to ML compute performance if preferred; the point being that for features like RT they have other hardware besides simply relying on the CUs, same with ML), etc.

That seemingly downplays aspects of the XSX's GPU, when even people like Matt on Era have hinted that there are some notable differences between the two systems' GPUs that the paper specs don't really mention, as they almost never do (which is why even early on I would say context is always important). FWIW, faster GPU clocks generally help the cache speed (I'm almost convinced now that the reason for the cache scrubbers is to ensure there are no cache misses or errors given the speed of the caches at that GPU clock, i.e. something like the scrubbers might have been necessary because of the clock rate, to help facilitate it) and the pixel fillrate, but when you look at benchmarks between GPUs of the same architecture, where one is smaller and clocked higher and the other is larger and clocked lower, the latter tends to win in almost all the major benchmark categories.

That doesn't mean the smaller GPUs don't hold their own; far from it. But it does kind of point how, if you're a designer forced between a smaller GPU clocked higher or a larger GPU clocked lower, assuming the frontend in the GPU architecture is designed well, you'll most likely always prefer taking the larger GPU clocked lower.

I think the big thing here is that Microsoft never really clarified it when they should have. I've read so many different theories on the 100GB/s that I don't really know which ones are true. Well except for the ones that are complete BS of course.

The time at which they mentioned it wasn't meant to be a deep dive. That might happen later this month, or in August at Hot Chips. These companies delve deeper into their tech when they feel it's time; I figure they thought May was not the time.

Considering how much of their approach is software-driven, they were likely ironing out some kinks to finalize things; the system went into its manufacturing state at the end of May IIRC, or maybe just very early this month (as the PS5 has).
 

Tchu-Espresso

likes mayo on everthing and can't dance
I do wonder if Sony first party might start offering a 4K/30fps with RT etc but also offer a 60fps mode with the res and RT scaled back.

I am sure a game running at 1800p will be difficult to tell apart from 4K.
I remember a DF video theorizing about the PS4 Pro before it came out, saying that 1440p scales more cleanly to 4K than 1800p (coinciding with their test system finding parity with the base PS4 at 1440p).

I presume this is still the case?

I can't find that video anywhere now.
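The scale factors themselves are easy to check: 1440p maps to 2160p at a clean 1.5x, while 1800p gives an awkward 1.2x, which is one plausible reading of that claim (my interpretation, not DF's wording):

```python
# Vertical scale factor from common render resolutions to 4K (2160 lines).
# Simple ratios like 2.0x and 1.5x produce more regular resampling grids
# than 1.2x, which is the usual argument for 1440p upscaling more cleanly.
for height in (1080, 1440, 1800):
    print(f"{height}p -> 2160p: {2160 / height:.2f}x")
```

This prints 2.00x, 1.50x, and 1.20x respectively; only 1080p is a true integer scale, but 1.5x still aligns every second output line exactly.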
 

Dolomite

Member
I think the big thing here is that Microsoft never really clarified it when they should have. I've read so many different theories on the 100GB/s that I don't really know which ones are true. Well except for the ones that are complete BS of course.
Very true. The man behind the curtain needs to pop out already. The August hardware breakdown can't come fast enough
 

Men_in_Boxes

Snake Oil Salesman
If you think I'd be caught dead playing a game without VRS, you GOT to be out of your God damn mind.

We must have standards, people, or we are no better than the beasts in the field!
 

Razvedka

Banned
If people are only interested in hearing what they want to hear, I cannot spend more time on this beyond the following answer. I use you as an excuse but the answer is addressed to all those interested in the subject.

Obviously I don't know how much data is being loaded into R&C. I don't work at Insomniac. But I have enough experience to get an idea that they are more than 2.4GB (which is the XSX SSD bandwidth limit).

You start from the false belief that XSX can load 100GB/s. That is not entirely "true" as you understand it. It is not like this. The bandwidth is 2.4GB/s and at most, effectively, compressed you get an equivalent of 4.8GB/s. Equivalent. No more.

PS5 is designed to offer up to 22GB/s effective. And it would be necessary to explain some peculiarities of PS5 for you to understand the "why", and since I cannot explain the "why" I accept the fact that you could not believe it. It is not your fault not having access to these peculiarities.

If you still want to give for certain the interested claims about the 100GB of XSX then you could also say that on PS5 they are 458GB. And surely you must also believe the 600 "megas" of Movistar (telephone company). I do not know if this happens in all countries, but I suppose that this example will only be understood by those residents in Spain. I think it doesn't take much explanation in this regard. It is not reality. In the case of data, equivalences are one thing and actual bandwidth is another. There's no more.

Now the questions I ask. Do you think R&C is loading more than 2.4 or 4.8GB/s of data? And second: at what speed do you think XSX can do it in practice, from the moment the data leaves the SSD until it reaches the screen? And not just once in one specific place, but at any time, several times, in different areas, with different assets.

As I said in my first post about it, I'm not saying that XSX can't deliver that quality. I'm saying that XSX cannot do R&C as it is designed; it would have to be done differently. And as a user you should care little about how they achieve it, as long as it is achieved. But in no case would it be done the way PS5 can do it.

And even assuming the amount loaded is less than 4.8GB/s, the assets would still have to be readjusted specifically for XSX: texture compression, geometry optimization, etc. (I will not explain why).

Only if the amount loaded is less than 2.4GB/s could we start talking about comparisons.

Do you think R&C loads less than 2.4GB/s?

In any case, if these explanations do not help you either, I cannot say more. I respect your opinions, but sometimes they are not realistic in one direction or the other. PS5 does not have 12TF, and XSX does not have the PS5 data-management system. The XSX GPU has over 15% more theoretical raw power (albeit at a lower clock speed, and clock speed is essential and transcendent for some tasks). The PS5 data-management system, on the other hand, is in practice around 5 times better than that of XSX in the worst-case scenario, and about 9 times better in the best possible scenario for PS5. And even comparing XSX at maximum capacity with compression against PS5 at minimum without compression, PS5 still manages about twice as much data as XSX in its most idyllic, non-sustained moments.
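For the arithmetic-minded, a minimal sketch of where those ratios come from, using only the figures quoted above (the 22GB/s effective figure is this post's own claim, not a published spec; the rest are the publicly stated SSD numbers):

```python
# Sanity-check of the claimed SSD data-management ratios.
# Figures as quoted in the post; 22GB/s is the post's own claim.
XSX_RAW = 2.4         # GB/s, XSX raw SSD bandwidth
XSX_COMPRESSED = 4.8  # GB/s, XSX best-case equivalent with compression
PS5_EFFECTIVE = 22.0  # GB/s, claimed PS5 best-case effective rate

worst_case = PS5_EFFECTIVE / XSX_COMPRESSED  # PS5 best vs XSX best
best_case = PS5_EFFECTIVE / XSX_RAW          # PS5 best vs XSX raw

print(f"~{worst_case:.1f}x")  # ~4.6x, i.e. "around 5 times"
print(f"~{best_case:.1f}x")   # ~9.2x, i.e. "about 9 times"
```

So the "5x worst case / 9x best case" language is just these two divisions rounded up; whether the 22GB/s input figure is realistic is the actual point of contention.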

It would be interesting to start retiring the argument that the PS5 SSD will bring no benefit.

XSX has a GPU with superior theoretical raw power. That's where it ends. What was shown in the PS5 presentation does not do justice to what either machine can do, but you have to start accepting the limitations of each one. It's nice to have a long penis, but sometimes it's better to have it a little shorter but much fatter. It could offer a more satisfying experience in the long run (or not, according to everyone's tastes, of course).

It would be legitimately debatable whether it is necessary to use, for example, 8K textures instead of 4K textures. But the reality is that the volume of data PS5 manages cannot be handled by XSX, nor by a home PC. At the start of the generation, what you see on screen will surely make you think this difference doesn't matter; by the end of the generation you will realize how much it does. You are within your rights to believe it or not; I am only laying out what is there. Neither company pays me; my only interest is VR with respect to hardware, and games in general with respect to software, whatever the platform. And I think I have been quite critical of the games Sony presented, though perhaps you missed that.

I will also say that I did not create the hardware, I only use it, and internally there are many things I do not fully understand, but I can analyze the results. I know there are many people around here who, if they had the data on their screens, could explain how it works and its limitations better than I can (as could other coworkers). But unfortunately I cannot offer you that information.

Cheers

I assume you're mod verified. If so, that makes you the genuine article as a game developer with access to these devkits (as you claim).

This is a fascinating write-up. If you don't mind my asking, when you say that PS5 is 22GB/s effective, do you mean this isn't a pipe-dream figure but something that developers can meaningfully achieve? Is this something your team, for instance, is seeing right now in development?

I know you're bound out the wazoo in NDAs, if you tell me you can't talk about it I get it.
 

Tchu-Espresso

likes mayo on everthing and can't dance
The Medium showed this done on XSX already. In fact, XSX didn't need the one-second broken-glass loading screen to do it.
Only it loads significantly fewer assets, or far less asset complexity.

There isn’t really a simple equivalence there.
 

Bo_Hazem

Banned
Yeah, no. I and everyone with a working monitor understood the marketing partnership that went into developing the demo. Not the engine; the engine powers the demo. What I said was that Sony can't have a copycat of UE5: they can license it like everyone else, or achieve the same or better results with similar techniques. Which is good, because Décima and all of Sony's in-house engines should strive to be the standard, as they have been.
Check this out

And get this: Quixel is available right now FOR FREE to valid students with a purchase of UE4. The video alone offers more impressive LOD and geometry than the tech demo under optimal circumstances.

These visuals can be achieved. An SSD won't save us, and TF damn sure won't. Skilled and passionate development will.


That Quixel demo is using 5.2x gimped graphics (25% compressed 4K assets), or even more, compared to the UE5 demo, and running at less than 4K (wide-ratio, cinematic) at 24fps. Plus, it's a cutscene.

The UE5 demo is using 8K uncompressed, Hollywood-level assets with 16K shadows, losslessly compressing over a billion polygons down to 20 million polygons per frame, equivalent to 4.7 polygons per pixel at 1440p, running at around 40fps and aiming for a solid 60fps when the engine is finalized.
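As a back-of-the-envelope check of the triangles-per-pixel figure (the demo used dynamic resolution, so the exact ratio varies frame to frame; this is a ballpark, not an official number):

```python
# Triangles per pixel for the UE5 "Lumen in the Land of Nanite" demo,
# using the 20 million drawn-triangles-per-frame figure quoted above
# and a full 1440p frame. Dynamic resolution means the real per-frame
# ratio moves around this value.
triangles_per_frame = 20_000_000
pixels_1440p = 2560 * 1440  # 3,686,400 pixels

ratio = triangles_per_frame / pixels_1440p
print(f"~{ratio:.1f} triangles per pixel")  # ~5.4 at a full 2560x1440
```

At a full 2560x1440 this comes out a bit above the 4.7 figure quoted above; the difference comes down to the demo's dynamic resolution and exactly how many triangles are counted as drawn.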

So far the Godfall PS5 version is embarrassing the PC version in case you didn't keep up:

Native 4K and a more stable 60fps, with better graphics and lighting:




Only 1080p shown; it could well be running at 1080p and stuttering down to below 20fps. Check out the dislike count:




All the theoretical talk is useless if it's not matched by convincing results. Let's talk again when we see another version of UE5 running on another platform, which more than likely will be heavily gimped.
 
Last edited:

Dolomite

Member
That Quixel demo is using 5.2x gimped graphics (25x compressed 4K assets), or even more, compared to the UE5 demo, and running at less than 4K (wide-ratio, cinematic) at 24fps.

The UE5 demo is using 8K uncompressed, Hollywood-level assets with 16K shadows, losslessly compressing over a billion polygons down to 20 million polygons per frame, equivalent to 4.7 polygons per pixel at 1440p, running at around 40fps and aiming for a solid 60fps when the engine is finalized.

So far the Godfall PS5 version is embarrassing the PC version in case you didn't keep up:

Native 4K and a more stable 60fps, with better graphics and lighting:




Only 1080p shown; it could well be running at 1080p and stuttering down to below 20fps. Check out the dislike count:




All the theoretical talk is useless if it's not matched by convincing results. Let's talk again when we see another version of UE5 running on another platform, which more than likely will be heavily gimped.

You've literally proved my point🤦🏾‍♂️🤦🏾‍♂️
The Quixel presentation was powered by Unreal Engine 4 and is more than a year old. Why would you choose to believe these results can't be achieved at higher parity on UE5 with more capable hardware?
First you say it's not impressive (Stevie Wonder could tell you it looks better than the Land of Nanite demo, and it's a year old), then you say that's because of its lack of 8K texture resolution and frame rate? That's no better than the fanboys trashing the UE5 demo because it was sub-4K and ran at last gen's FPS. Pick an argument. They are both impressive, period. The way they achieved these goals is moot: if the Quixel Megascans gameplay is dismissed because of polycount or resolution, then you can't cape for the UE5 demo on the same merit. Btw, you seem to forget that they are the same engine. It's great to have brand loyalty, as it is how we identify ourselves within our community, but we can't act like no other platform can do what mine does on an engine built for mobile games too. Coding is not a one-route freeway; you can use many means to reach many results. Hell, Epic had a copy of Gears 2 running on PS3, for Pete's sake. Why? Because it was a multiplatform engine being tested.
 

JimboJones

Member
Even in the PS360 era it was clear that DF leaned towards Xbox, and from 2013 it became even clearer. 2017, or the release of the One X, has nothing to do with it. People are trying to invent something here that never actually happened.

Was this a dream (or nightmare in your case) you had?
 

Bo_Hazem

Banned
You've literally proved my point🤦🏾‍♂️🤦🏾‍♂️
The Quixel presentation was powered by Unreal Engine 4 and is more than a year old. Why would you choose to believe these results can't be achieved at higher parity on UE5 with more capable hardware?
First you say it's not impressive (Stevie Wonder could tell you it looks better than the Land of Nanite demo, and it's a year old), then you say that's because of its lack of 8K texture resolution and frame rate? That's no better than the fanboys trashing the UE5 demo because it was sub-4K and ran at last gen's FPS. Pick an argument. They are both impressive, period. The way they achieved these goals is moot: if the Quixel Megascans gameplay is dismissed because of polycount or resolution, then you can't cape for the UE5 demo on the same merit. Btw, you seem to forget that they are the same engine. It's great to have brand loyalty, as it is how we identify ourselves within our community, but we can't act like no other platform can do what mine does on an engine built for mobile games too. Coding is not a one-route freeway; you can use many means to reach many results. Hell, Epic had a copy of Gears 2 running on PS3, for Pete's sake. Why? Because it was a multiplatform engine being tested.

I'm done with theories here anyway; let the rest enjoy their VRS talk.
 

Andodalf

Banned
I remember a DF video theorizing about the PS4 Pro before it came out, saying that 1440p scales to 4K more cleanly than 1800p (coinciding with their test system finding parity with the base PS4 at 1440p).

I presume this is still the case?

I can’t find that video anywhere now

With modern TAA and image reconstruction, native resolution isn't as important, and by extension neither is scaling at a perfect 4:1 / 2:1 ratio to native res.
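For what it's worth, the scale factors are easy to check (assuming standard 16:9 widths for each resolution):

```python
# Per-axis scale factor from common render resolutions up to 4K UHD.
# An integer factor (1080p -> 4K is exactly 2x per axis, i.e. a 4:1
# pixel mapping) means each source pixel fills an exact block of target
# pixels; non-integer factors force filtering or reconstruction.
from fractions import Fraction

TARGET = 3840  # 4K UHD width

for width, label in [(1920, "1080p"), (2560, "1440p"), (3200, "1800p")]:
    r = Fraction(TARGET, width)
    print(f"{label}: scale {r} per axis")
# 1080p: scale 2    -> clean 2x2 block per pixel
# 1440p: scale 3/2  -> every 2 source pixels span 3 target pixels
# 1800p: scale 6/5  -> every 5 source pixels span 6 target pixels
```

By this arithmetic, 1440p's simpler 3:2 ratio is presumably what the DF video quoted earlier in the thread meant by scaling "cleaner" than 1800p's 6:5, even though only 1080p hits a true integer factor.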
 