
Next-Gen PS5 & XSX |OT| Console tEch threaD


Bo_Hazem

Banned
It's razor sharp before the action because that's how compression works. With the worse quality of most streaming services right now, you're going to get artifacts in the moving parts of the video, that ball to be precise. Raytracing artifacts look different, more grainy. Also, there's no reason why a powerful RTX card wouldn't be able to deal with one object of that size.

I'm sorry, mate, but the ball here is probably 480p level; enlarge the image. I'm well aware of moving fragments and so on, and it should be sharper in the original demo, but some things are obvious:

Shitty ball quality at some locations (whether it's ray tracing, VRS, etc.)

148963.jpg


Severe ladder effect caused by lowered (partial, VRS) resolution

148964.jpg


Insanely sharp, flawless quality at the beginning

148965.jpg


It's VERY obvious, and I even noticed all of that in one go. This is only the second time I've watched it.
 
Like I said from the start, that demo is designed to run on pretty much any high-end machine. It was far from hitting peak IO.

I know what you mean, but people asserting opinion as fact is what makes it so hard to get any real information. You may very well be right, but nobody knows unless Epic reveal more. There's a lot of confident analysis and estimation going on right now from hobbyists and journalists alike. Meanwhile, an Epic Games graphics engineer on Twitter is having a chuckle about how nobody he's seen has got it right yet, despite everyone having an opinion, though he says it's good to hear what other people think is going on, as a reality check to confirm the way they went was right.
 

xacto

Member
Exactly, we're talking about software up against custom hardware, in a closed-box system!... With years of research and development dedicated to its engineering!
And that will come with a dedicated, and also fully customized, software support tool set... the high-level and low-level PS5 APIs that we haven't even seen yet.

But... but I have this laptop, you see... what about this laptop?! It also has a decent SSD... 😂
 

Bo_Hazem

Banned
I think that X3D packaging is spot on for the PS5 SoC; in fact, it could be a whole SiP, which is something the Moore's Law Is Dead YouTube guy heard from one of his sources, who said about the PS5 chip, "Think Vita." It makes perfect sense when you also figure the cooling patent into the picture.

Article explaining PS Vita chip

New Picture (1).png

New Picture (2).png


Now remember Sweeney mentioned NAND chips so close to the main processor that it almost removes any access latency, so it's quite possible those chips are stacked around the main die in a single package: APU in the middle and NAND chips around all four sides, stacked three high to make 12 total, connected by Infinity Fabric and a secondary storage-controller chip. They would be so close that the latency compares to that of VRAM chips, which are still conventionally placed all around the SiP.

PS5-heatsink-patent-playstation-5-650x387.png


Just look at the monstrous package size in that image; it is almost as big as the heatsink. The middle is the PCB where the heat-dissipation rods go through, and the thing labeled '5' is the giant SiP with the main APU, storage controller, and stacked NAND chips, all connected with Infinity Fabric. That gives you >10x bandwidth, and it makes perfect sense for a custom-engineered piece of hardware like PS5. Now we know why the console and cooling design is the best-kept secret, hyped up beyond anything for the reveal.

"A heatsink (21) is disposed on a lower surface of a circuit board (10). The circuit board (10) has through holes (h1) that penetrate the circuit board (10) in an area (A) where an integrated circuit apparatus (5) is disposed. Heat conduction paths (11) are provided in the through holes (h1). The heat conduction paths (11) connect the integrated circuit apparatus 5 and the heatsink (21). This structure allows for disposition of a component different from the heatsink (21) on the same side as the integrated circuit apparatus (5), thus ensuring a higher degree of freedom in a component layout"

Wow, that's something new to me. And that bold part (thus ensuring a higher degree of freedom in a component layout) suggests a PS5 Pro might use stacked dies? Pretty insane tech, more like how future GPUs could be upgradeable without losing the old one!

41pn8z.jpg
 

Bo_Hazem

Banned
WD2 crushed my PC when I tried maxing it out at 1440p when I had a 1070 Ti. Haven't tried it with my 2070 Super.

I think I remember analysis videos saying it looks really good on the Pro and One X. Can’t remember the details tho

It was pretty decent on PS4 Pro, even better on Xbox One X. I don't mind reconstructed 1440p/1800p if the final quality is great, but you can easily notice the difference when put side by side with native 4K. Here's native 4K:




It was the first game I played on PS4 Pro; I started it on the base PS4 (gave it to a friend after buying the Pro), then shifted to the PS4 Pro. The difference in detail between PS4 and the Pro was mind-blowing! Shop signs were readable from a moderate distance, while on the base PS4 that was near impossible! 1800p is the closest you get to native 4K, and 1440p is a noticeable downgrade from that. Watch Dogs 2 was 1800p checkerboarded to 4K according to DF:
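
For anyone curious about the actual numbers, here's a quick back-of-the-envelope sketch (Python; the resolutions are the standard figures, and shading half the pixels per frame is the usual checkerboard simplification):

```python
# Quick pixel math: native 4K vs 1800p vs 1800p checkerboard.
native_4k = 3840 * 2160    # 8,294,400 pixels
p1800 = 3200 * 1800        # 5,760,000 pixels
cbr_1800 = p1800 // 2      # checkerboard shades ~half the pixels per frame

print(f"native 4K : {native_4k:,} px")
print(f"1800p     : {p1800:,} px ({p1800 / native_4k:.0%} of 4K)")
print(f"1800p CBR : {cbr_1800:,} px shaded/frame ({cbr_1800 / native_4k:.0%} of 4K)")
```

So 1800p is already ~69% of the 4K pixel count, which is why a good reconstruction gets so close.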

 

Exodia

Banned
Just a little something:

Textures are generally square to make them easier to tile and break up for parallel computing, or for different quality levels within the same file. It's a 1:1 aspect ratio image file. It's a piece of data.

A 4K texture has nothing at all to do with a 4K game or TV.

One is literally a 4096x4096 image that gets pasted over a 3D model. The other is the final output resolution of a game, or TV, at 3840x2160 at 16:9.

The same applies to an 8K texture versus a new 8K TV. They are totally different things, and you don't choose 4K textures for a 4K game and a "1080P" texture for a 1080P game, etc.

There's no such thing as a 1080P texture, or a 480P texture. There is such a thing as a 512x512 texture, a 1024x1024 texture, etc.

When you confuse the two or imply they’re connected to make some kind of a point, you just show that you don’t really understand what you’re talking about.
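
To put rough numbers on that distinction, here's a small sketch (assuming uncompressed RGBA8 at 4 bytes per pixel, purely for illustration):

```python
# "4K texture" vs "4K display": same number, unrelated things.
texture_4k = 4096 * 4096 * 4   # one uncompressed 4K texture: 64 MiB
frame_4k = 3840 * 2160 * 4     # one uncompressed 4K output frame: ~32 MiB

print(f"4096x4096 texture : {texture_4k / 2**20:.0f} MiB")
print(f"3840x2160 frame   : {frame_4k / 2**20:.1f} MiB")
# In practice textures ship block-compressed (often 4-8x smaller) and
# carry mip chains (~+33%), but the point stands: texture resolution is
# an asset property, output resolution is a display property.
```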

Also, nobody (that I can recall in this thread) ever said Nanite and Lumen wouldn’t run on XSX or PC. The point being made was that this particular “Lumen in the land of Nanite” scripted demo was made entirely for PS5 by leaning heavily on its IO capabilities, and wouldn’t be possible without changes on other hardware with slower IO, like XSX and PCs.
This is what Sweeney said when it was first revealed, and it's what he was recently compelled to clarify.

The demo shown and the engine are two different things.
UE5 with its Nanite and Lumen technologies will do amazing things on PS5, XSX and high-end PCs with NVMe SSDs.
The specific UE5 demo shown is pushing Nanite hard and was only possible due to Sony’s IO in this instance.

It’s not hard. It’s what Sweeney said originally. It’s what an Epic spokesperson said when contacted by Kotaku. It’s what Sweeney again clarified on Twitter.

Separate the game from the engine.

UE5 works on pretty much all devices.
Lumen and Nanite can be used on all devices with the required resources (CPU/GPU/IO).
Lumen in the Land of Nanite can run on anything with enough CPU/GPU/IO to keep up with the amount of assets in it and how fast the character is moving through the world, something Sweeney has repeatedly said is only possible with Sony's IO.
It would make no sense at all to build a tech demo for PS5 and then use less than 2.4GB/s raw storage speed. That's not a technical demo. If LitLoN did use way less than 2.4GB/s raw storage speed, you can bet they'd already have shown it running on XSX and PC as part of how amazing their engine is, which is what they're really there to sell.
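
As a sense check on that raw-speed argument, here's the per-frame streaming budget at the publicly quoted raw figures (a rough sketch; compression multiplies all of these, so treat them as floors):

```python
# Per-frame streaming budget at the publicly quoted raw SSD speeds.
def mb_per_frame(raw_gb_per_s: float, fps: int) -> float:
    """How many MB can be read from disk within a single frame."""
    return raw_gb_per_s * 1000.0 / fps

for name, raw in [("PS5 raw", 5.5), ("XSX raw", 2.4), ("SATA SSD", 0.55)]:
    print(f"{name:8s}: {mb_per_frame(raw, 30):6.1f} MB/frame @ 30 fps")
```

At 30 fps that's roughly 183 MB per frame on PS5 versus 80 MB on XSX, before compression.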

Just wanted to point out that the textures used in that exact demo are nothing special; the assets for the UE4 Kite demo all used 8K textures (rocks, boulders, cliffs, caves, etc.). So the idea that having 8K textures for a tiny vertical slice needs a mythical SSD is simply illogical. And that's without taking into account virtual texturing, which wasn't available in the Kite demo and which reduces texture-memory streaming overhead dramatically.

When you look at the small number of unique textures and unique assets, it's a no-brainer.
 
While AMD is now introducing chiplets into their manufacturing process, we have no idea at this point whether this cutting-edge tech has been adopted for PS5. There are advantages like higher yields (lower cost), better heat dissipation (the PS5 GPU is clocked pretty high), and more flexibility in layout. We could see stacked dies, dies at different process nodes, cutting-edge unified caches, etc., and Sony's cooling patent does suggest a stacked APU. As others have pointed out, Sony already did something similar with the PS Vita.

It seems pretty clear at this point that the XSX uses a monolithic die, so this difference, if it materializes, would be pretty interesting.

So much cool speculation in this forum, keep up the good work people!
 

Mod of War

Ω
Staff Member
Can we know who the developer is, or at least get a bit more info, like:

- Worked on AAA games before?
- Is an engine programmer?

Just something, man. Otherwise the dude could be anyone, and I can't even rate his opinion.

Third party dev that works on game design and engine applications.

Nothing more out of me.

This was not from that infamous Crytek developer interview, was it?

Negative.
 
If that demo was the result of a Sony/Epic collaboration, why would a PC version even exist?

Why would it not? It was pointed out earlier that a PC with sufficient memory would have no problem running it, and lots of development would still occur on a PC. Epic is after all targeting a wide variety of computers with their software. A fast SSD helps overcome memory constraints.

Past console iterations have seen massive jumps in system memory, around 16 times. With memory costs being what they are today, next-generation consoles are only seeing a 2x jump. By better integrating the I/O, a fast SSD effectively extends the available memory. Also, improvements in the rendering system allow fast system memory to be used more efficiently.
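
The numbers behind that, as a quick sketch (launch RAM figures are well known; the refill time is simple division on the quoted raw speed):

```python
# Launch system memory across PlayStation generations.
ram_gb = {"PS3": 0.5, "PS4": 8.0, "PS5": 16.0}

print(f"PS3 -> PS4: {ram_gb['PS4'] / ram_gb['PS3']:.0f}x")  # 16x
print(f"PS4 -> PS5: {ram_gb['PS5'] / ram_gb['PS4']:.0f}x")  # 2x

# At a raw 5.5 GB/s, refilling all 16 GB takes under 3 seconds, which is
# why a fast, well-integrated SSD is pitched as an effective extension
# of that much smaller memory jump.
```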
 

husomc

Member
I'm sorry, mate, but the ball here is probably 480p level; enlarge the image. I'm well aware of moving fragments and so on, and it should be sharper in the original demo, but some things are obvious:

Shitty ball quality at some locations (whether it's ray tracing, VRS, etc.)

148963.jpg


Severe ladder effect caused by lowered (partial, VRS) resolution

148964.jpg


Insanely sharp, flawless quality at the beginning

148965.jpg


It's VERY obvious, and I even noticed all of that in one go. This is only the second time I've watched it.
It might be the result of denoising.
 
The demo can run on all platforms supported by Unreal 5. Even an Android version could exist, but it wouldn't use any feature of interest.
Do you have any official source to back that up?

Because the only thing they said is that UE5 tech will be fully supported on all platforms.

That tech demo is very likely exclusive to PS5, and if not, where is that demo footage running on a PC?

32QQdHu.jpg
 

Bo_Hazem

Banned
Mag = person from Newcastle. Short for Magpie. Also known as Geordies.
Mackem = person from Sunderland


It's a thing. :D

Yup, I get it, I hear the Magpies every time Man United play against them (Newcastle) :lollipop_tears_of_joy: But that Sunderland nickname is new, Mr. Mackem ;) Plus I know Scouser (Liverpool) and Mancunian (Manchester):messenger_winking_tongue:

Thanks for the details!:lollipop_raising_hand:
 

Handy Fake

Member
Yup, I get it, I hear the Magpies every time Man United play against them (Newcastle) :lollipop_tears_of_joy: But that Sunderland nickname is new, Mr. Mackem ;) Plus I know Scouser (Liverpool) and Mancunian (Manchester):messenger_winking_tongue:

Thanks for the details!:lollipop_raising_hand:
Mackem comes from shipbuilding. Sunderland used to "Make them and take them".
In our native speech, it's pronounced "Mak'em and tak'em" and that's where "Mackem" comes from. ;)
 

Bo_Hazem

Banned
Mackem comes from shipbuilding. Sunderland used to "Make them and take them".
In our native speech, it's pronounced "Mak'em and tak'em" and that's where "Mackem" comes from. ;)

That's interesting! I read as well that the main rivalry between Liverpool and Manchester was caused by Manchester stealing the show and becoming the main port for trade, causing many Scousers to lose their trades/jobs. Probably you Mackems had some under-the-table deal with the Mancunians. :messenger_winking_tongue:
 

Thirty7ven

Banned
Regarding the information coming from the Chinese presentation on UE5, you can find it at: https://forum.beyond3d.com/threads/unreal-engine-5-2021-tech-demo.61740/page-28

-The Epic guy is saying the first scene (Lumen) can run at 40fps on his notebook, not the whole demo.

-If it's a 1080p screen, with 2 triangles per pixel and some vertex compression, you can still run this demo; there's no need for very high bandwidth and IO like PS5's.

-UE4.25 implemented asynchronous/overlapped loading (because the bottleneck was the CPU). They overhauled their shaders to work well with the event-driven loader. This gave them a >50% loading-speed improvement.

-In the final UE5 scene, compression and careful disk layout avoided the need for a high-speed SSD. The workload wasn't that high.


A few things that I noticed from skimming through the video.
* The guy mentioned they can run the demo in the editor at 40fps, not 40+, but did not specify resolution.
* Currently Nanite has some limitations: it only works on static meshes, doesn't support deformation for animation, doesn't support skinned character models, and supports opaque materials but not masked ones.
* Lumen costs quite a bit more than Nanite.
* UE5 could eventually be a hybrid renderer using both Lumen and raytracing in the future.
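
The "2 triangles per pixel" point above, expressed as plain arithmetic (just a sketch of the claim, not a measurement):

```python
# Triangle budget implied by "2 triangles per pixel at 1080p".
pixels_1080p = 1920 * 1080         # 2,073,600 pixels
tris_on_screen = pixels_1080p * 2  # ~4.1M triangles actually rendered

print(f"{tris_on_screen:,} triangles")  # 4,147,200
# Nanite's pitch is that rendered triangle count is bounded by output
# resolution, not by source-asset polygon counts, so a lower-res screen
# genuinely needs less geometry throughput (and less IO).
```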


By the way, I just love how a guy on a Chinese forum, posting in a thread with around 6 or 7 users fanboy-warring each other, says he "called" the Epic dude and got from him that they were using a 2080, then he himself speculated that they were using a 970 EVO, and that is now being parroted around the web as fact. It's fucking hilarious.
 
Where is this original post and who made it?
 

Bo_Hazem

Banned
My biggest question regarding the UE5 demo is how much of this detail can be maintained at 90 or 120fps. VR will be very compelling on PS5 if they can keep a lot of this detail in place. No more PS3 looking VR games, hopefully.

I think Nanite itself is tunable, meaning you can push it down to render less without changing all the assets, to the point that it meets your latency target, or lower the assets themselves and have a faster pipeline down to 20M polygons.
 

geordiemp

Member
Regarding the information coming from the Chinese presentation on UE5, you can find it at: https://forum.beyond3d.com/threads/unreal-engine-5-2021-tech-demo.61740/page-28






By the way, I just love how a guy on a Chinese forum, posting in a thread with around 6 or 7 users fanboy-warring each other, says he "called" the Epic dude and got from him that they were using a 2080, then he himself speculated that they were using a 970 EVO, and that is now being parroted around the web as fact. It's fucking hilarious.

So in summary the laptop was running it at 1080p...?
 

LED Guy?

Banned
First Dictator, now Tom Warren. All praise the UE5 demo; it was that disruptive.

giphy.gif
Yeah it really is a TRUE GENUINE NEXT-GENERATION LEAP!!

Guys, we are in for a treat next-gen. If this is just a tech demo thrown together casually like that, you'll see games with crazier graphics, animations, destruction, AI, world simulation and so much more in the next 3 years.

Just look at Richard Leadbetter's opinion and what he said about it in the picture below:


RQBHaG8.png
 

Lunatic_Gamer

Gold Member
PS5’s Tempest Engine Is More Exciting Than GPU/CPU Speeds, Says Developer

Speaking with GamingBolt, Halestorm said that the Tempest Engine personally excites him more than other aspects of the PS5's hardware, like its CPU or its clock speed, and that he is looking forward to how it can help get people more immersed in games through better audio.

“You know, features like the Tempest excite me more than GPUs and GPU speeds,” Halestorm said. “For me, there are other ways to get invested and immersed in a game than impressive visuals; audio – and in specific directional audio – is one of those ways, at least for me.”

 

kuncol02

Banned
Currently Nanite has some limitations: it only works on static meshes, doesn't support deformation for animation, doesn't support skinned character models, and supports opaque materials but not masked ones.

That would confirm what I was afraid of. I don't think there is a way around the lack of deformation, which would probably require rebuilding the data on the SSD. Rigid animation could be achieved with multiple passes composited with each other, but that doesn't seem achievable on the current generation. Those are exactly the same limitations as in sparse voxel octree rendering.
If someone needs proof, look at the stones falling at the beginning of the demo: they are not in the scene when they show the mesh view. Also, the rotating wheel used to open the doors looks much worse than the statues; details are much softer on it.
 

banjo5150

Member
PS5’s Tempest Engine Is More Exciting Than GPU/CPU Speeds, Says Developer

Speaking with GamingBolt, Halestorm said that the Tempest Engine personally excites him more than other aspects of the PS5's hardware, like its CPU or its clock speed, and that he is looking forward to how it can help get people more immersed in games through better audio.

“You know, features like the Tempest excite me more than GPUs and GPU speeds,” Halestorm said. “For me, there are other ways to get invested and immersed in a game than impressive visuals; audio – and in specific directional audio – is one of those ways, at least for me.”


I am confused about this Tempest Engine. I get that it is for audio, but what exactly is it doing? Do I need to buy a whole new home audio setup that supports Tempest?
 

Lunatic_Gamer

Gold Member
Xbox Series X production is underway, France on priority list

The news comes from French website Xboxygen, which is evidently specific to both Xbox and France, in case you couldn’t tell.

“What we learn today is that [Xbox Series X] is in production, and that France is one of the priority countries for the delivery of machines at the end of the year,” says the article when roughly translated. Admittedly, I am a linguistic philistine, so I had to use Google Translate. Regardless of my likely poor job, I think the meaning carries. Xbox Series X consoles are currently in production.

“We have the units for the launch,” Xbox France director Ina Gelbert told Xboxygen. “France is in the priority countries.”

“Now, will we have enough … This is always a big question, and this is where we enter into discussions, and it is my role, to favor France over other countries. Show that we have the community, that we have the market, that there are huge expectations around the Xbox Series X in France and that we need units to cover this demand.”

 

Bo_Hazem

Banned
It's hard to say how much data was "crunched" in this demo, especially since they used an army of the same model. If each model had been different, it seems like that would have caused more data overhead.

If each model is the same, the workload is the same. Even in a simple engine-driven game like LBP, if you replicate the same model, the stress is the same. Someone may correct me if I'm wrong?

Yup, hence the demo running at 60 FPS with 4K assets (or whatever the scaling is) seems a better way forward; it still requires the same high-speed streaming SSD and delivery, as you're working on smaller assets but at twice the frame rate.

Got to admit, the CGI-quality assets made a strong point.

I think using HW RT will be more than enough to make it 1440P@60fps or 4K@30fps without compressing the assets.

That's how marketing works. Sell people on something that they don't need. And then make a killing.

Yup, to me VRS seems like a shit tech; I don't wanna see it used heavily, except combined with the likes of DLSS to mitigate the side effects.
 

DaGwaphics

Member
If each model is the same, the workload is the same. Even in a simple engine-driven game like LBP, if you replicate the same model, the stress is the same. Someone may correct me if I'm wrong?



I think using HW RT will be more than enough to make it 1440P@60fps or 4K@30fps without compressing the assets.



Yup, to me VRS seems like a shit tech; I don't wanna see it used heavily, except combined with the likes of DLSS to mitigate the side effects.

You only need to move one model to memory; moving 50 different models in the same frame would be more demanding.

I'm also curious why you think HW RT would improve the demo's performance; given everything we've seen, that is a highly unlikely scenario. Take any game available and toggle RT on/off: there is typically a performance penalty (whether or not you have dedicated hardware for the BVH).
 
I am confused on this Tempest Engine. I get that it is for audio but exactly what is it doing? Do I need to buy a whole new home audio setup that supports Tempest?
To put it simply: no. The Tempest Engine computes audio in a way that will sound more immersive to your specific ears.

In more detail: it's a chip that can do certain calculations very fast, which matters when you're computing hundreds of audio sources. The engine takes those sources and adjusts them to an HRTF profile that helps you hear the sound much more naturally, the way your biological ear would, so the sound feels more 'real' to you, like you are actually in the game's environment. All of that is done on the PS5's Tempest chip. You can still use your current speakers/headphones and just configure the console's audio output to achieve this, though Cerny already said it is going to work best with headphones. Speakers and surround sound systems will also be supported but might feel less '3D'.

Also, those HRTF profiles might not suit you perfectly, because everybody hears sound a little differently depending on the structure of their ears and such.
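
For the technically inclined, the core of HRTF rendering is just convolution. Here's a minimal sketch (numpy/scipy; the function name and the random stand-in impulse responses are purely illustrative, and none of this is Sony's actual Tempest API, which isn't public):

```python
import numpy as np
from scipy.signal import fftconvolve

def binauralize(mono: np.ndarray, hrir_left: np.ndarray,
                hrir_right: np.ndarray) -> np.ndarray:
    """Convolve a mono source with one direction's HRIR pair -> stereo."""
    left = fftconvolve(mono, hrir_left, mode="full")
    right = fftconvolve(mono, hrir_right, mode="full")
    return np.stack([left, right], axis=-1)

rate = 48_000
source = np.random.randn(rate)        # 1 second of noise as a stand-in source
hrir_l = np.random.randn(256) * 0.01  # stand-in impulse responses; real ones
hrir_r = np.random.randn(256) * 0.01  # come from measured HRTF datasets
stereo = binauralize(source, hrir_l, hrir_r)
print(stereo.shape)                   # (48255, 2)
```

A game has to do this for hundreds of moving sources and mix the results every frame, which is exactly the kind of embarrassingly parallel math a dedicated DSP is built for.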
 

Exodia

Banned
Regarding the information coming from the Chinese presentation on UE5, you can find it at: https://forum.beyond3d.com/threads/unreal-engine-5-2021-tech-demo.61740/page-28






By the way, I just love how a guy on a Chinese forum, posting in a thread with around 6 or 7 users fanboy-warring each other, says he "called" the Epic dude and got from him that they were using a 2080, then he himself speculated that they were using a 970 EVO, and that is now being parroted around the web as fact. It's fucking hilarious.

These translations are almost all wrong.
 

PaintTinJr

Member
It's hard to say how much data was "crunched" in this demo, especially since they used an army of the same model. If each model had been different, it seems like that would have caused more data overhead.
I'm not sure that's true. The assets are at a level where you probably can't place them entirely in RAM, so you can't gain the typical benefit of repeatedly rendering the same model. In the scene, the crunched polygons would be different even for two copies of the same model with the same orientation, because their worldspace relationships and projected sizes would result in different crunched subsets of polygons. There could definitely be an advantage earlier in the disk access, but since it looks like the CPU does an initial workload on the data, followed by the GPU (and IIRC there has been mention of two async-compute software rasterization processes in this thread), that process might be looking directly at the data from disk and crunching as it loads through the pipeline, so it wouldn't distinguish at all between the same model and 500 different ones in terms of workload.
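
A toy sketch of that per-instance behavior (the function and all the numbers are hypothetical; Epic hasn't published Nanite's actual selection logic, but screen-space-driven LOD generally works like this):

```python
import math

def tri_budget(distance_m: float, source_tris: int,
               object_size_m: float = 2.0,
               screen_height_px: int = 2160,
               vfov_deg: float = 60.0) -> int:
    """Pick a triangle budget from an instance's projected screen size,
    aiming for roughly one triangle per pixel of projected area."""
    proj_px = (object_size_m
               / (2 * distance_m * math.tan(math.radians(vfov_deg) / 2))
               * screen_height_px)
    return min(source_tris, max(1, int(proj_px * proj_px)))

statue = 33_000_000  # source triangles, the figure Epic quoted per statue
for d in (2, 10, 50):
    print(f"{d:3d} m away -> {tri_budget(d, statue):>9,} triangles")
```

Two copies of the same statue at different distances land on different budgets, so instancing saves disk reads at best, not the per-instance crunch work.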
 