
Next-Gen PS5 & XSX |OT| Console tEch threaD

Can't June 4th just come already?

It’s going to be a massive attraction for sure, just based on how much interaction the DualSense reveal got. I’m really looking forward to HZD2, that's probably my most anticipated next-gen game right now
 
Last edited:
While not RAM-intensive in terms of volume, it does use RAM, just not with a huge footprint, thanks to the near-100% VALU utilization Mark talks about. It still moves a ton of data, and as Mark says, it can even move enough to negatively impact system performance if left unchecked.

"Bandwidth-wise, the Tempest engine can use over 20GB/s, but we have to be a little careful because we don't want the audio to take a notch out of the graphics processing. If the audio processing uses too much bandwidth, that can have a deleterious effect if the graphics processing happens to want to saturate the system bandwidth at the same time."

If it can potentially have a "deleterious effect" on system bandwidth, it's capable of hitting memory hard.
Yeah. The data stream from the SSD needs to be prioritized, and then audio ray tracing gets its data allocated.
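A quick back-of-the-envelope sketch of what that quote implies, using the 20GB/s figure above and the PS5's published 448GB/s of GDDR6 bandwidth; the 60fps frame rate is only an assumption for the per-frame number:

```python
# Back-of-the-envelope: how big a slice of PS5 system bandwidth could the
# Tempest engine claim in the worst case? 20 GB/s comes from the Cerny quote
# above, 448 GB/s is the published GDDR6 bandwidth; the 60 fps frame rate is
# only an assumption for the per-frame figure.

TEMPEST_BW_GBPS = 20.0    # worst-case audio traffic per the quote
SYSTEM_BW_GBPS = 448.0    # PS5 unified memory bandwidth (public spec)
FPS = 60                  # assumed frame rate

share = TEMPEST_BW_GBPS / SYSTEM_BW_GBPS
per_frame_mb = TEMPEST_BW_GBPS * 1000 / FPS

print(f"Worst-case audio share of system bandwidth: {share:.1%}")          # ~4.5%
print(f"Audio data moved per frame at {FPS} fps: ~{per_frame_mb:.0f} MB")  # ~333 MB
```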
 

Andodalf

Banned
Good heavens. No wonder we have them swarming here with misinformation.

Who is "them"? And what misinformation?

Following the link instead of just reading the tweet, the article only mentions needing an "SSD". The article also actually links another one, which links some primary sources and offers no source for some important quotes. We're left to assume that "SSD" on its own means SATA here, because the article goes on to say that an NVMe drive would be ideal, something we've known since the DF article with the actual Tim Sweeney quotes came out mid-reveal. A 2070 Super has also been talked about for a while, which is pretty much a dead ringer for the 2080 mobile mentioned earlier today, if that's what's bothering you.

There was not a single new thing in this article, and the tweet contradicts the article itself.
 
While not RAM-intensive in terms of volume, it does use RAM, just not with a huge footprint, thanks to the near-100% VALU utilization Mark talks about. It still moves a ton of data, and as Mark says, it can even move enough to negatively impact system performance if left unchecked.

"Bandwidth-wise, the Tempest engine can use over 20GB/s, but we have to be a little careful because we don't want the audio to take a notch out of the graphics processing. If the audio processing uses too much bandwidth, that can have a deleterious effect if the graphics processing happens to want to saturate the system bandwidth at the same time."

If it can potentially have a "deleterious effect" on system bandwidth, it's capable of hitting memory hard.


Isn't that more about occupying the bus the GPU uses, rather than it actually using RAM?

My understanding was the audio is streamed directly from the SSD to the modified cacheless CUs without going into RAM first, but that the pathway it takes is shared with the GPU. I might have misunderstood it, though.
 

Andodalf

Banned
Isn't that more about occupying the bus the GPU uses, rather than it actually using RAM?

My understanding was the audio is streamed directly from the SSD to the modified cacheless CUs without going into RAM first, but that the pathway it takes is shared with the GPU. I might have misunderstood it, though.

Not too sure about that exactly either, but when they say bandwidth, that usually refers to system memory. My understanding is that the SPU-like modified CU gets data from the SSD, but also from RAM. And this would need to be the case, as the CU needs data from RAM about where all the objects that will be making sounds are in the scene, not to mention data about changing environments and player movement. Static data can be pulled from the SSD, but dynamic stuff lives in memory.

When it's finished with its processing, it presumably sends the finished audio off to RAM to be output, and the CU waits until it receives the information about the next scene.

My current understanding is that the PS5's memory is completely unified, in contrast to the XSX's asymmetric setup, so the GPU and CPU share all the same memory and bandwidth to it. Honestly not sure how the audio output works here; I'd imagine it needs to wait for the rest of the APU, as I doubt the CU handles I/O (I/O here meaning the HDMI or optical out).
 

GermanZepp

Member
Because multiplat games will have to take into account the slowest SSD of the two consoles. Ergo, they won't take full advantage of the PS5 SSD as much as the PS5 exclusives will.

For example, if WB makes a Flash game (based on the DC comics), and they make The Flash fast enough to run through an entire city in ten seconds on the Series X and three seconds on the PS5, they will need to make him travel in ten seconds on the PS5 version in the end as well, in order for the two versions to play identically.

Ergo my ass. I'm not into the "lowest common denominator" thing. Was Tomb Raider on PS4 held back by the Xbox One? No, it had double the framerate. If there's any obvious advantage to be taken that makes the job easy, developers are going to use it. I'll tell you this: if the lowest common denominator is the target in a third-party project and an at-least-2x difference in performance isn't used in any way, you know they're in a parity arrangement.

And the same goes both ways. You think if developers can push more resolution or frames on Xbox they're not going to do it?

Of course, all those things are tied to game design, time, budget, deadlines, etc. So if nobody takes advantage of the systems, those are the reasons.
 
Last edited:

sircaw

Banned
Who is "them"? And what misinformation?

Following the link instead of just reading the tweet, the article only mentions needing an "SSD". The article also actually links another one, which links some primary sources and offers no source for some important quotes. We're left to assume that "SSD" on its own means SATA here, because the article goes on to say that an NVMe drive would be ideal, something we've known since the DF article with the actual Tim Sweeney quotes came out mid-reveal. A 2070 Super has also been talked about for a while, which is pretty much a dead ringer for the 2080 mobile mentioned earlier today, if that's what's bothering you.

There was not a single new thing in this article, and the tweet contradicts the article itself.

I personally think Windows Central is trying to muddy the waters here. I do not believe their article was written in good faith.

Their goal here is to try and downplay whatever Sony is saying, like North Korean or Chinese state propaganda.
They are not telling lies, but neither are they telling the whole truth. I think their motives are slanted.

Just like the other day with the tweets that a couple of the Microsoft head honchos fired out; they felt cheap and unnecessary.

At the end of the day we know that demo runs at 30fps at 1440p; that Chinese video could be running at a lower res with fewer assets. In the end it scales, or is meant to scale, really well.

Windows Central's goal here is to say that the SSD is nothing special and "we can run that exact demo", without mentioning any requirements or specs beyond the fact that they can run it. It's a clever way of throwing shade at the PS5 while praising their own device.

So are they lying? No. Are they being completely honest? No.
Can you prove this? No.

Headlines are important; the majority of people read those and skip or flip past the rest.
In the end, context matters.
 
Last edited:

DrKeo

Member
And in fairness, the tweets pushing the lossy texture compressor BCPack as a silver bullet to match the I/O complex and 2x raw SSD speed double down on that idea, because even if the XSX can largely match that demo, the 8K textures used on PS5 are going to be less detailed on XSX (at possibly 4K or less) if they need to use the lossy compression of BCPack to make up that I/O deficit.
4K textures are 4x smaller than 8K textures, not 2x. And there is no reason to have lower-quality textures or models; they will just stream more slowly, which means more pop-in, but not the same pop-in we know today on consoles.

This was running at 1080p@30FPS with ray-traced global illumination, shadows and reflections enabled all at the same time right? REALLY interested to see how PS5 does hardware-accelerated triangle-based ray-tracing. Guess we'll find out June 4th. (Hopefully)
It's much more than just that; it's fully path traced. It was the most intensive RT demo I'd ever seen until NVIDIA showed their Marbles demo, which melted my face.

Well, so far it's been using "software-based" global illumination on the PS5, which is the most critical type of ray tracing and much more visible than reflections, which need certain reflective surfaces. So with that moved to HW RT later on, when they implement it, that demo should benefit from higher resolution/framerates. Plus, we're not sure if it was utilizing the Tempest 3D audio engine or only traditional, pre-baked audio behavior. Global illumination, shadows and reflections were all used in the video in real time to some extent; not sure about ray-traced audio. All of those taxing techniques aren't utilizing the built-in HW RT.

What Epic is using can't use the RT hardware because they don't use rays. They are using three GI methods in Lumen and none of them use rays (actually, UE4's SSGI might but even if it does, it casts very few rays), probably because they want to be hardware-agnostic.

Can I have Metroid Prime 4 using Nanite ?
You've just described my wet dream.

Things are unclear... It was stated in a Eurogamer article that one statue uses 24 8K textures. That would amount to 24 x 7680 x 4320 x 32 / (8,000,000,000) = 3.2GB of assets for one statue. If all statues are the same, no duplicate data is necessary. In any case, the full uncompressed statue can be fully streamed in less than a second (0.6 seconds) by the PS5, while the XSX would take 1.3 seconds. As a reference, an HDD at 50 MB/s would take 64 seconds.

Aaaaand there we go again...
8K textures are actually 8192 x 8192, not 7680 x 4320 like TV resolution. When developers talk about texture sizes they're talking about a 1:1 ratio, always in powers of two, so 1K is 1024x1024, 4K is 4096x4096, and so on. But even if you are using 8K textures you're still paging them and using different mips, so it's not like there are 24 8K textures in memory at the same time. Also, BCPack and Kraken will make them 30%-60% smaller to stream on average.
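For what it's worth, here's the quoted statue math redone with the corrected 8192x8192 size; the 32 bits per texel and the roughly-half compression figure are just assumptions carried over from the posts above, not confirmed numbers:

```python
# The quoted statue math redone with the corrected 8192x8192 texture size.
# The 32 bits (4 bytes) per texel and the "roughly half" streaming compression
# are assumptions carried over from the posts above, not confirmed figures.

TEXTURES = 24
SIDE = 8192               # an 8K game texture is 8192 x 8192, not 7680 x 4320
BYTES_PER_TEXEL = 4       # assuming uncompressed RGBA8

uncompressed_gb = TEXTURES * SIDE * SIDE * BYTES_PER_TEXEL / 1e9
print(f"Uncompressed statue textures: {uncompressed_gb:.1f} GB")   # ~6.4 GB

# Time to stream the whole set at the raw SSD speeds discussed in the thread
for name, gbps in [("PS5 (5.5 GB/s raw)", 5.5),
                   ("XSX (2.4 GB/s raw)", 2.4),
                   ("HDD (50 MB/s)", 0.05)]:
    print(f"{name}: {uncompressed_gb / gbps:.1f} s")

# With ~50% compression (the Kraken / BCPack ballpark mentioned above),
# roughly halve those times.
```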
 
Last edited:

DrKeo

Member
Isn't that more about occupying the bus the GPU uses, rather than it actually using RAM?

My understanding was the audio is streamed directly from the SSD to the modified cacheless CUs without going into RAM first, but that the pathway it takes is shared with the GPU. I might have misunderstood it, though.
If it uses memory bandwidth, it goes through the memory.
 

PaintTinJr

Member
4K textures are 4x smaller than 8K textures, not 2x. And there is no reason to have lower-quality textures or models; they will just stream more slowly, which means more pop-in, but not the same pop-in we know today on consoles.
No, you misunderstood my point: comparing an uncompressed 8K cinema texture to an 8K DXT lossy-compressed texture is like comparing it to a lower-resolution texture in absolute picture-quality terms of signal to noise.
 
I was reading a certain book on SSD technology for about the third time now, because I love computer engineering, and after passing a certain point I remembered something about NVMe drives that made something click in my mind.

If you didn't know, certain NVMe drives on PC, like the Pro and EVO series from Samsung, have had a small soldered-in controller containing four or more low-power ARM Cortex-A CPU cores (from the A50 series, like the A55 and A57) that, in Samsung's case, are slightly customized and repurposed for handling certain storage I/O operations, as well as taking some compression/decompression load off the CPU to free up its resources for other things. Not all NVMe SSDs have these controllers, though; only those from certain brands do.

And it made me wonder: is that the reason why Microsoft chose to use a single decompression block on the Series X? I mean, they've basically been trying to cater to the PC community since 2016, when they started bringing their games to the PC platform while attempting to unify PC and Xbox as one platform. The PC already has NVMe Gen 4 drives with speeds faster than those of the SSD in the Series X; there's one in my build, the Aorus Gen 4 NVMe SSD from Gigabyte, which manages 5.5GB/s and 4.5GB/s read and write respectively. Could it be that Microsoft chose an SSD with speeds of 2.4GB/s raw and 4.8GB/s compressed to match the speeds already available on PC, in order to make games easier to port?

It might sound a little outrageous, but if you think about it for a bit, it really starts to make sense. I don't know guys, that's just my two cents based on stuff I've read, publicly available information, and my own knowledge. What do you think? I would love to hear your thoughts on this.
 
Last edited:
Not too sure about that exactly either, but when they say bandwidth, that usually refers to system memory. My understanding is that the SPU-like modified CU gets data from the SSD, but also from RAM. And this would need to be the case, as the CU needs data from RAM about where all the objects that will be making sounds are in the scene, not to mention data about changing environments and player movement. Static data can be pulled from the SSD, but dynamic stuff lives in memory.

When it's finished with its processing, it presumably sends the finished audio off to RAM to be output, and the CU waits until it receives the information about the next scene.

My current understanding is that the PS5's memory is completely unified, in contrast to the XSX's asymmetric setup, so the GPU and CPU share all the same memory and bandwidth to it. Honestly not sure how the audio output works here; I'd imagine it needs to wait for the rest of the APU, as I doubt the CU handles I/O (I/O here meaning the HDMI or optical out).

I wasn't thinking about the bandwidth involved in returning the processed audio back out. That's probably what is meant, as 20GB/s would be too much for the SSD to push, especially as audio data is already compressed.

So yeah, I'd imagine the main game code running on the CPU primes the GPU "SPU" with information on the kinds of transformations it needs to apply, along with an address for the source audio on the SSD (accessed directly via the DMAC), before the processed audio is then pushed back into RAM or some other buffer to be handled by whatever comes next.

I'd not heard there were any differences between how PS5 and XSX use their memory, CPU and GPU? I thought they were the same there? At least PS4 and XO were unified? XSX has the unusual split pool of memory compromise, but that's more of a quirk of likely falling back to 16GB instead of 20GB than an architectural difference.
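For reference, these are the published memory figures behind that "split pool" remark; the per-frame numbers below are just the public bandwidths divided by a 60fps frame time:

```python
# Public memory figures behind the "split pool" remark: XSX exposes 10 GB at
# 560 GB/s and 6 GB at 336 GB/s, PS5 has a single 16 GB pool at 448 GB/s.
# The per-frame numbers are just bandwidth divided by a 60 fps frame time.

configs = {
    "XSX 10 GB (GPU-optimal)": 560.0,   # GB/s
    "XSX 6 GB (standard)":     336.0,
    "PS5 16 GB (unified)":     448.0,
}

FRAME_S = 1 / 60   # one 60 fps frame

for name, bw_gbps in configs.items():
    per_frame_gb = bw_gbps * FRAME_S
    print(f"{name}: {bw_gbps:.0f} GB/s -> ~{per_frame_gb:.1f} GB touchable per frame")
```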
 
Could it be that Microsoft chose an SSD with speeds of 2.4GB/s raw and 4.8GB/s compressed to match the speeds already available on PC, in order to make games easier to port?

I personally doubt that. Having extra capacity still keeps you compatible with PC. Having more of something doesn't make something harder to port. It's also a machine designed to last for 7 years, so if they wanted parity they'd likely have targeted what they expect the average PC to have 4-5 years from now or something.
 

DrKeo

Member
No, you misunderstood my point: comparing an uncompressed 8K cinema texture to an 8K DXT lossy-compressed texture is like comparing it to a lower-resolution texture in absolute picture-quality terms of signal to noise.
Oh, got it. I disagree, because XSX doesn't use DXT, it uses BCn, and BC7 looks wonderful even at ~70% compression, not to mention closer to 60%, plus whatever enhancements BCPack has over BC7. When you add all sorts of filtering and post-processing on top of that, I doubt even DF will be able to tell the difference. I mean, most users can't even tell when an image is reconstructed or if VRS is used.

What I do hope they solve is the noise on the alpha channel, which DXT actually handles. Maybe that's what BCPack is: three channels compressed with BC7 and one channel compressed with DXT5. Who knows? We will have to wait and see.
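For context on the formats being thrown around, here are the fixed-rate sizes of an 8K texture under the classic block-compression schemes; any further gain BCPack gets on top of BC7 isn't public, so it's left out:

```python
# Fixed-rate sizes of a single 8192x8192 texture under the block-compression
# formats being discussed. These ratios are inherent to the formats (BC1 is
# 8 bytes per 4x4 block, BC3/BC7 are 16 bytes per 4x4 block); whatever extra
# BCPack gains over BC7 isn't public, so it's left out.

SIDE = 8192
texels = SIDE * SIDE

formats = {
    "RGBA8 (uncompressed)": 4.0,   # bytes per texel
    "BC7 / BC3 (DXT5)":     1.0,   # 16 bytes per 4x4 block
    "BC1 (DXT1)":           0.5,   # 8 bytes per 4x4 block
}

for name, bytes_per_texel in formats.items():
    print(f"{name}: {texels * bytes_per_texel / 2**20:.0f} MiB")   # 256 / 64 / 32 MiB
```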
 
Last edited:

Radical_3d

Member
That UE5 demo surely ruffled some feathers, let's dig up garbage info from an unknown person in China no less, and believe it!
June the 4th will come and go. Then some presentations will be suspicious and it'll all start again until the July event. Then comparisons between games that are not the same game. Console warring all summer and then in November, finally, DF comparisons of the same game.

But some games will have advantages on each system, and so console warring all winter and Sales Age Forum for the whole launch window. After that, 3-4 years of more console warring until the Pro versions.

Boy. Do I love new hardware and GAF.
 

DrKeo

Member
June the 4th will come and go. Then some presentations will be suspicious and it'll all start again until the July event. Then comparisons between games that are not the same game. Console warring all summer and then in November, finally, DF comparisons of the same game.

But some games will have advantages on each system, and so console warring all winter and Sales Age Forum for the whole launch window. After that, 3-4 years of more console warring until the Pro versions.

Boy. Do I love new hardware and GAF.
Don't forget the warring on PS5 Pro VS XSX-X and then, after they are out, speculation on PS6 VS next-box starts!
 
Last edited:

Bo_Hazem

Banned
Yeah it really is a TRUE GENUINE NEXT-GENERATION LEAP!!

Guys, we are in for a treat next-gen. If this is just a tech demo thrown out casually like that, you'll see games with crazier graphics, animations, destruction, AI, world simulation and so much more in the next three years.

Just look at Richard Leadbetter's opinion and what he said about it in the picture below:


RQBHaG8.png

It's the unbearable truth that even DF had to leak a bit. :lollipop_tears_of_joy:

The tempest engine is the one thing about PS5 that I'm sceptical about.

Do these sound-boosting techs really make a worthwhile difference? Afraid I won't notice, It's like my ears are constantly drunk and horny, they're happy with whatever they get.

You'll get it and understand it when you hear it ;) It's safe to say any headset, especially a good one, would give you a great experience. The current PS Gold and Platinum already sport 50mm drivers, and the Platinum supports 3D audio in selected games (pre-baked 3D audio). So if you already have a good headset you'll enjoy it. If you don't have one yet, better to wait for a new PS headset, or the next Sony 1000XM4 if you have the budget. Avoid bland headsets that lack punchy bass, like the Astro A40 that I own (it does have an optical mixer, though).

Listening from the DS4 controller right now is only 2.0 channel, the lowest quality no matter what headset you use.
 

Radical_3d

Member
Don't forget the warring on PS5 Pro VS XSX-X and then, after they are out, speculation on PS6 VS next-box starts!
You know, I was out of the gaming stuff. In fact I have to catch up on a lot of this generation, and that's why I want a PS5: to play a lot of this generation with better graphics on my 4K TV. I couldn't care less about HZD2. So I came back recently, and then all of this next-gen drama happened, and suddenly you can get an account on GAF easily.

The best of times.
 

Ascend

Member
8K textures are actually 8192 x 8192, not 7680 x 4320 like TV resolution. When developers talk about texture sizes they're talking about a 1:1 ratio, always in powers of two, so 1K is 1024x1024, 4K is 4096x4096, and so on. But even if you are using 8K textures you're still paging them and using different mips, so it's not like there are 24 8K textures in memory at the same time. Also, BCPack and Kraken will make them 30%-60% smaller to stream on average.
Thanks for the size correction. But from the perspective of the UE5 demo, no mips are being used (for Nanite and GI at least) as far as I'm aware, because you don't need LODs for those.
 
Last edited:

Bo_Hazem

Banned
It will most likely depend on what kind of audio device you are using and whether those HRTF profiles suit you.

Try watching the UE5 demo with a headset/headphones. Watch the character and the environment she's in and listen to the sound. To me it felt way better and more realistic than the games I've played lately.

Not just that, with ray traced audio it should give you awareness of your surroundings, check this old, short demo:




Another old, short demo video about ray traced audio:




Keep in mind that all those impressive tech demo videos are outdated and inferior to the Tempest engine.
 
Last edited:
Don't forget the warring on PS5 Pro VS XSX-X and then, after they are out, speculation on PS6 VS next-box starts!

I hope Microsoft's beast GPU causes them to really make some inroads and be seen as a marketing success. That would make the potential PS5 Pro/XSX-X battle absolutely titanic. Even the power supplies will have a bag of teraflops stuffed in the corner.

Not just that, with ray traced audio it should give you awareness of your surroundings, check this old, short demo:



Quake 3's doppler effect as a rocket flies past is still the coolest sound related thing in any game ever to me. That rasp as one just misses you is terrifying.
 
Last edited:

PaintTinJr

Member
Oh, got it. I disagree, because XSX doesn't use DXT, it uses BCn, and BC7 looks wonderful even at ~70% compression, not to mention closer to 60%, plus whatever enhancements BCPack has over BC7. When you add all sorts of filtering and post-processing on top of that, I doubt even DF will be able to tell the difference. I mean, most users can't even tell when an image is reconstructed or if VRS is used.

What I do hope they solve is the noise on the alpha channel, which DXT actually handles. Maybe that's what BCPack is: three channels compressed with BC7 and one channel compressed with DXT5. Who knows? We will have to wait and see.
As far as I know, they are all formats derived from DXT, and no matter how nice they look - no one said they looked awful - they are not the same as the source texture, and can be downscaled and measured against the uncompressed version in terms of signal to noise and lower-order detail. When you are doing UE5 graphics at infinite geometry and texture detail, the noise from lossy compression is going to be more problematic than in a game like Gears 5 using classic rendering. Ultimately the XSX version has to choose a compromise.
 

ThisIsMyDog

Member
Forgive my ignorance if this is a stupid question: can UE5 tech be used to improve rendering, remove pop-in, etc. in more lively environments, such as forests, jungles and cities?
 
Last edited:
Forgive my ignorance if this is a stupid question: can UE5 tech be used to improve rendering, remove pop-in, etc. in more lively environments, such as forests, jungles and cities?

I think Nanite ("infinite" detail, no LODing or pop in) is just for static objects at the moment. So yes for buildings, but maybe no for moving cars etc.
 

Bo_Hazem

Banned
That's what he said.
There's basically going to be a menu to configure the sound output depending on the device you are using, the HRTF profile you are using, and maybe even the distance to your speakers.

Then the audio engine will calculate how to modify the sound output so that you receive the sound as if you were actually in-game.



Sony's new XH95 (X950) TV, for example, has this interesting tech that senses your surroundings for optimal speaker output, timestamped:




Sony TVs are obviously the first to be optimized, if any TV or sound system outside Sony gets optimized at all.
 
Last edited:

Ascend

Member
Forgive my ignorance if this is a stupid question: can UE5 tech be used to improve rendering, remove pop-in, etc. in more lively environments, such as forests, jungles and cities?
As far as I'm aware, Nanite won't work if there is animation involved. The character in the demo is still rendered the traditional way.
 
Has Sony said anything about noise levels or the cooling system of the PS5?

They just said “FFFFFFFFFFFFFUUUURRRRHHHHHHH”

The only official statement is that we should be quite happy with what the engineering team came up with; they also admitted they hadn’t done the best job of it on PS4.

Their fixed-wattage, variable-frequency paradigm is centered around cooling. They should know exactly what they need to achieve a sound-level goal.

Personally I just hope it’s a simple task to clean any blockages out of the heatsink. The XSX looks like it will be straightforward to do that on, which is nice.
 
Last edited:

DrKeo

Member
Thanks for the size correction. But from the perspective of the UE5 demo, no mips are being used as far as I'm aware, because you don't need LODs.
I actually don't remember anything regarding mips in Nanite, just geometry. I mean, mips are still required; they're just streamed in pages from the SSD to memory on GPU demand, aren't they?
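A small sketch of why paging mips matters: the full chain only adds about a third on top of the base level, and page-based streaming means only the mips you actually sample need to be resident. The 1 byte per texel is an assumed BC7-class rate, purely for illustration:

```python
# Mip-chain overhead for an 8K texture. The 1 byte per texel is an assumed
# BC7 / BC3-class compression rate, not a confirmed figure.

BYTES_PER_TEXEL = 1.0   # assumed BC7-class rate

side = 8192
total_bytes = 0.0
levels = 0
while side >= 1:
    total_bytes += side * side * BYTES_PER_TEXEL
    side //= 2
    levels += 1

base_bytes = 8192 * 8192 * BYTES_PER_TEXEL
print(f"{levels} mip levels, full chain is {total_bytes / base_bytes:.3f}x the base level")  # ~1.333x
print(f"Full chain: {total_bytes / 2**20:.1f} MiB vs base level only: {base_bytes / 2**20:.1f} MiB")
```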

I hope Microsoft's beast GPU causes them to really make some inroads and be seen as a marketing success. That would make the potential PS5 Pro/XSX-X battle absolutely titanic. Even the power supplies will have a bag of teraflops stuffed in the corner.

Quake 3's doppler effect as a rocket flies past is still the coolest sound related thing in any game ever to me. That rasp as one just misses you is terrifying.
Competition is good, especially if your PSU gets some TF!

As far as I know, they are all formats derived from DXT, and no matter how nice they look - no one said they looked awful - they are not the same as the source texture, and can be downscaled and measured against the uncompressed version in terms of signal to noise and lower-order detail. When you are doing UE5 graphics at infinite geometry and texture detail, the noise from lossy compression is going to be more problematic than in a game like Gears 5 using classic rendering. Ultimately the XSX version has to choose a compromise.
DXT1 is BC1, DXT3 is BC2 and DXT5 is BC3. No one really calls them DXT and they aren't used much anymore, but I went along because you called them DXT :)
If you are still using DXT to describe them, you should check out BC7, for instance, which gives extremely good quality at the compression rates MS talked about, and BCPack should be an upgrade over BC7.

Regarding compromises, you don't have to compromise on quality; you can compromise on delivery time. If the PS5 can stream in the extra detail for an asset in 33ms, what's stopping the XSX from streaming the same in 67ms? Yes, it will give it a delay of 33ms and an extremely quick pop-in that morphs in, but will it actually be noticeable when it only takes 33ms? I have no idea; I guess we'll have to wait for the DF analysis on that one.
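Roughly the math behind those 33ms/67ms figures. The throughputs are the vendors' quoted compressed rates, and the payload size is just picked so the PS5 case lands on one 30fps frame; with these rates the XSX side actually comes out nearer 58ms than 67ms, so the exact numbers depend on which quoted figure you plug in:

```python
# Rough math behind the 33 ms / 67 ms discussion. Throughputs are the quoted
# compressed rates (PS5 ~8-9 GB/s "typical" with Kraken, XSX 4.8 GB/s with
# BCPack); the 280 MB payload is an arbitrary assumption chosen so the PS5
# case lands at roughly one 30 fps frame.

PAYLOAD_MB = 280                      # assumed per-asset detail burst
RATES_GBPS = {"PS5": 8.5, "XSX": 4.8}

FRAME_MS_30FPS = 1000 / 30

for name, gbps in RATES_GBPS.items():
    ms = PAYLOAD_MB / (gbps * 1000) * 1000
    print(f"{name}: {ms:.0f} ms (~{ms / FRAME_MS_30FPS:.1f} frames at 30 fps)")
```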

As far as I'm aware, Nanite won't work if there is animation involved. The character in the demo is still rendered the traditional way.
GI too; at least two out of the three methods Lumen uses for GI require non-animated objects, and the third is SSGI, which IMO looks like shit :)
 
Last edited:
Don't forget the warring on PS5 Pro VS XSX-X and then, after they are out, speculation on PS6 VS next-box starts!
Cerny presenting the PS5 Pro:

-Guys, we did it!
-Oh no, what did you do now?
-We reached 4.6GHz in our new GPU, and now our SSD reaches 11GB/s in raw bandwidth, using a cooling solution similar to the ones used
in nuclear reactors in Japan.
-What?


On the other side, Andrew Goossen/Phil Spencer with the XSX^2:

-We put 56 WGPs in our console.
-Why do you repeat the meme of the refrigerator behind you?
-What meme? That is the console. We also created the Hypervelocity Architecture with a bandwidth of 6GB/s and instant access to 200GB.
-What?
 
Last edited:

PaintTinJr

Member
No clue. We have no idea what it takes. What you describe sounds amazing. What if it's 30 times what we saw last gen? If it's 30 times as demanding, which is a revelatory increase, then no, the PS5 is hardly breaking a sweat. There's just no point of reference for this. The I/O leap from 5400rpm drives as a baseline to SATA SSDs is about as big a jump as we've seen in gaming, and the new consoles are jumping far past SATA to NVMe.
Actually, there is a point of reference for this (IMHO). RAM speeds have increased roughly linearly over the last few decades while processors have followed Moore's law, so there has always been a memory shortfall in consoles, and developers will exhaust however much they are given each generation. So for the UE5 demo: if the PS5 has 16GB of memory with 448GB/s of bandwidth, and an I/O complex/SSD that can deliver between 4.8GB/s and 22GB/s from storage into RAM - which is still on the order of 20x-100x less than memory - and we already know the data being loaded per frame is exhausting the system to the point of needing dynamic resolution - a resolution directly tied to memory bandwidth exhaustion - then IMHO it doesn't sound plausible that the storage device, which is still the bottleneck, won't be pushed past 50%.

And that's forgetting that in this thread people linked tweets where Sweeney was asked about the UE5 demo's use of the PS5 I/O and SSD and he referred them to Cerny's GDC talk for the details. We've gone from 60-100MB/s read speeds streaming from mechanical SATA, with five minutes' worth of game-world assets loaded at LoD-varying quality before a loading delay, to the UE5 demo loading all-new data per frame at effectively unlimited geometric quality. Even if we said that five minutes was really only ten seconds' worth of game data in the old model, the difference between the old data paradigm and the new one would be 30 frames x 10 seconds: a factor of 300x.

The other aspect of streaming technology is that it can use whatever it is given, as is the case with clipmaps or the derivative megatextures. The clipmap pyramid just has more of the pyramid available at higher quality to render, and again, filling RAM from storage becomes the limiting factor. Is it really so hard to believe that the PS5 I/O system gets a thorough workout in the UE5 demo?
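The 300x figure spelled out; all the inputs are the post's own rough assumptions rather than measured data:

```python
# The 300x figure from the post above, spelled out. Both inputs are the
# poster's rough assumptions (a ~10 second streaming buffer in the old model,
# 30 fps in the new one), not measured data.

old_buffer_s = 10    # assumed: old streaming model buffers ~10 s of world data
fps = 30             # the UE5 demo's frame rate

frames_per_old_batch = old_buffer_s * fps   # 300

print(f"Old model: one streamed batch covers ~{frames_per_old_batch} frames")
print(f"New model: fresh data every frame -> ~{frames_per_old_batch}x more turnover")
```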
 

PaintTinJr

Member
DXT1 is BC1, DXT3 is BC2 and DXT5 is BC3. No one really calls them DXT and they aren't used much anymore, but I went along because you called them DXT :)
If you are still using DXT to describe them, you should check out BC7, for instance, which gives extremely good quality at the compression rates MS talked about, and BCPack should be an upgrade over BC7.

Regarding compromises, you don't have to compromise on quality; you can compromise on delivery time. If the PS5 can stream in the extra detail for an asset in 33ms, what's stopping the XSX from streaming the same in 67ms? Yes, it will give it a delay of 33ms and an extremely quick pop-in that morphs in, but will it actually be noticeable when it only takes 33ms? I have no idea; I guess we'll have to wait for the DF analysis on that one.
Only people that use DirectX call them BC - because the underlying techniques they derive from weren't invented by Microsoft - and I'm pretty sure the BCPack compressor is even fed a DXTn flag, going by a link I checked out about 900 pages ago in this thread :)

As for pop-in, how does that work when you are loading the entire detail per frame? A delay means half the frame rate, yes? Or it's a reduction in IQ for the frame you do load on time, and that's still not accounting for the noise issue in the lossy-compressed textures you needed to use to avoid losing another order of magnitude of I/O bandwidth compared to the PS5 I/O/SSD solution. UE5 seems to be rendering at such fine grain that I'd speculate lossy textures with linear gradients will cause pixels in the final image to flash between two shades if a lookup lands on a lossy error in a texture.
 