
Next-Gen PS5 & XSX |OT| Console tEch threaD

Status
Not open for further replies.

IntentionalPun

Ask me about my wife's perfect butthole
Possible, but not confirmed. You're just speculating from your "random thoughts".

You gotta be fucking kidding me here.. I'm posting a slide from a Microsoft presentation, surely vetted by AMD... you are posting tweets from some random dude on Twitter with ~1300 followers.

AMD is developing its super resolution for RDNA 2. Where are the Tensor Cores? There's no dedicated acceleration hardware, only shaders. Why do you assume they made hardware modifications for Mesh Shaders? More random thoughts coming from you.

How is this an answer to my question?

We have no idea if FidelityFX Super Resolution will be locked behind RDNA 2 cards or not.. as we don't know if it requires Infinity Cache or some other RDNA 2 feature.

If it doesn't, you can bet AMD will enable it on older cards.. as they have a long history of not arbitrarily locking features to new generations when older hardware can support them.

And that's why it'd be really shitty of them to do that specifically for competing with an nVidia feature we know resides on specific hardware units like Mesh Shaders.

Not even sure why I need to even explain this logic considering MS detailed them as a separate item on a GPU hardware block diagram..

But just... you know.. using logic here... to support an argument.. backed by official sources.
 
Last edited:
You gotta be fucking kidding me here.. I'm posting a slide from a Microsoft presentation, surely vetted by AMD... you are posting tweets from some random dude on Twitter with ~1300 followers.



How is this an answer to my question?

We have no idea if FidelityFX Super Resolution will be locked behind RDNA 2 cards or not.. as we don't know if it requires Infinity Cache or some other RDNA 2 feature.

If it doesn't, you can bet AMD will enable it on older cards.. as they have a long history of not arbitrarily locking features to new generations when older hardware can support them.

And that's why it'd be really shitty of them to do that specifically for competing with an nVidia feature we know resides on specific hardware units like Mesh Shaders.

Not even sure why I need to even explain this logic considering MS detailed them as a separate item on a GPU hardware block diagram..

But just... you know.. using logic here... to support an argument.. backed by official sources.
You don't have confirmation, only some evidence. "Geometry Engine" could mean software only. You don't know.

You can't confirm there are hardware differences between RDNA 1's Primitive Shaders and RDNA 2's Mesh Shaders.

Don't agree? Log into Twitter and talk to him. There's your test. Don't be shy.

 
Last edited:

PSX

Member
Ah gotcha, thanks.

Sony has tiny FPU tee hee
Tiny PS5 FPU destroyer of the power dreamers.
q92U1WT.jpg
 
Being totally honest, I haven't gamed much with many of my friends of late because I have been spending time playing PC games that my main friends don't play, mainly World of Tanks.

I do plan on having a fishing trip next month with my little birdie friend and will ply him with generous amounts of alcohol to get his roadmap, so to speak.
You don't want to knock him out now so go with Vodka. He won't have as much of a hangover.
If all else fails use handcuffs and lotion. Call me if you'd like me to show you how.
 
Lysandros Lysandros and IntentionalPun IntentionalPun ,

As far as I can tell, VRS is enabled by the updated RDNA RB+

image-135-1536x912.png


Once again, AMD hasn’t provided many details on what has changed with the RDNA 2 RBs (RB+). What we do know is that the throughput has been doubled and fine-tuned for VRS and other optimizations that come with DX12 Ultimate.

Each RB+ can process 8 32-bit pixels, a 2x increase compared to RDNA 1 and 1.1. This is primarily the result of the doubled 32bpp color rate. The new multi-precision RBs are also supplied to the shader engine at twice the rate, primarily improving the performance with mixed-precision workloads such as VRS.
Source: Here.
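To put the quoted RB+ numbers in perspective, here's a back-of-the-envelope fill rate sketch. The RB counts and clocks below are purely illustrative assumptions, not official figures; only the 4-vs-8 pixels-per-clock doubling comes from the quote above.

```python
# Peak 32bpp pixel fill rate from the quoted RB numbers.
PIXELS_PER_RB_RDNA1 = 4   # 32bpp pixels per RB per clock (pre-RB+)
PIXELS_PER_RB_RDNA2 = 8   # doubled 32bpp rate per the quote

def fill_rate_gpix(num_rbs: int, clock_ghz: float, pixels_per_rb: int) -> float:
    """Peak 32bpp fill rate in gigapixels per second."""
    return num_rbs * pixels_per_rb * clock_ghz

# Hypothetical 16-RB GPU at 2.0 GHz, before and after the RB+ doubling:
before = fill_rate_gpix(16, 2.0, PIXELS_PER_RB_RDNA1)  # 128.0 Gpix/s
after = fill_rate_gpix(16, 2.0, PIXELS_PER_RB_RDNA2)   # 256.0 Gpix/s
```

Same RB count, same clock, twice the 32bpp throughput, which is the "2x increase" the article describes.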
 

Interfectum

Member
So Sackboy A Big Adventure is a 3rd party developed, 1st party published game? And that is easier than just saying it's a 2nd party game? Calling it 2nd party, indicating that the IP is owned by the platform owner even if they didn't develop the game, seems easier. Publishing is assumed to be by the platform owner for both 1st and 2nd party titles. I never thought 2nd party was an unnecessary term, especially when dealing with the specifics of who developed a title.
bernie sanders GIF
 

Rea

Member


From this video, what I understand is that VRS happens after processing triangles, and it works best when triangles are big. The PS5, using its custom GE, can process only the necessary primitives just before VRS, saving bandwidth. Maybe that's the reason the PS5 doesn't need VRS. Also, moving forward, game engines are going to use small triangles (like UE5), and VRS is not very useful for very small triangles. IMO.
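For what it's worth, the saving VRS offers is easy to sketch: a coarse shading rate of, say, 2x2 shades one fragment per 2x2 pixel block instead of one per pixel. The numbers below are just the arithmetic, not measurements from either console.

```python
from math import ceil

def shaded_fragments(width: int, height: int, rate_x: int, rate_y: int) -> int:
    """Fragments actually shaded for a full-screen region under a coarse
    shading rate of rate_x x rate_y (1x1 = full rate, one shade per pixel)."""
    return ceil(width / rate_x) * ceil(height / rate_y)

full = shaded_fragments(3840, 2160, 1, 1)    # 8,294,400 shades
coarse = shaded_fragments(3840, 2160, 2, 2)  # 2,073,600 shades (4x fewer)
```

The catch, as the post says, is that the rate applies per triangle: a triangle smaller than the coarse block still costs at least one shade, so a scene full of tiny triangles gets little benefit from VRS.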
 


From this video, what I understand is that VRS happens after processing triangles, and it works best when triangles are big. The PS5, using its custom GE, can process only the necessary primitives just before VRS, saving bandwidth. Maybe that's the reason the PS5 doesn't need VRS. Also, moving forward, game engines are going to use small triangles (like UE5), and VRS is not very useful for very small triangles. IMO.


Well Cerny did say why he chose only 36 CUs for the system.

Also, it's easier to fully use 36 CUs in parallel than it is to fully use 48 CUs; when triangles are small it's much harder to fill all those CUs with useful work.

Seems like working with small triangles is a big thing with the PS5.
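One concrete reason small triangles waste shader work: GPUs rasterize in 2x2 pixel quads (4 lanes per quad), so a triangle that only covers a pixel or two still occupies whole quads. A toy efficiency model, with made-up coverage numbers for illustration:

```python
def quad_shading_efficiency(covered_pixels: int, quads_touched: int) -> float:
    """Fraction of shader-lane work that lands on visible pixels when
    rasterizing in 2x2 quads (4 lanes per quad)."""
    return covered_pixels / (quads_touched * 4)

# A large triangle: 10,000 covered pixels spread over ~2,600 quads.
large = quad_shading_efficiency(10_000, 2_600)  # ~0.96 of lanes do useful work
# A tiny triangle: 2 covered pixels straddling 2 quads.
tiny = quad_shading_efficiency(2, 2)            # 0.25 -- three lanes in four wasted
```

The same effect at the CU level is why keeping 48 CUs busy with pixel-sized triangles is harder than keeping 36 busy.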
 

Rea

Member
Well Cerny did say why he chose only 36 CUs for the system.



Seems like working with triangles when they are small seems to be a big thing with the PS5.
It makes sense when you think about the design choices of the PS5. Cerny knows that devs are making new engines that focus mainly on heavy use of small triangles with very fast I/O throughput, so he discarded features that would not benefit the PS5's design and created his own versions of features for the PS5, for example the GPU cache scrubbers and I/O co-processors. Edit: also the Tempest Engine, which can be used for games as well.
 
Last edited:

SSfox

Member
Do not misunderstand me. I really respect you and the information you have. It's just so weird that Microsoft has so many insiders and Sony has virtually none. I don't know about Katarhsis. She says that there is a Silent Hill being developed by Sony Japan Studio. And as far as I understand, you haven't heard about it.
The fact that Sony has less leakers is a good thing actually.
 
OK, you want an argument, lets have a conversation and to help lets bring Mark in on it.

After I had that chat I went back and re-watched the video again to see what MC was saying and I quote.

p1.png
p2.png
p3.png
p4.png
p5.png
p6.png
p7.png


You can see what he is saying and alluding to here: to support this on PS5, they needed the VF (variable frequency) design to enable that power shift from CPU to GPU.

As such, ALL he is saying is that the process is now deterministic, and even though an engine MAY need 256-bit instructions (not just AVX), usage may be low to medium. Nothing has changed from that statement, other than that the die shot shows some sacrifices here, this area being a lower priority than others, which is exactly what he said.

Supporting 256-bit instructions IS NOT the same thing as supporting ALL 256-bit instructions.

Can somebody explain to me why we are arguing about a CPU feature of questionable usefulness for games, when both consoles are already mainly GPU-bound at 1440-1600p internal resolutions, even in cross-gen games?
 

Garani

Member
I don't know about that but I have seen this.

rdna-2-arch-100867215-large.jpg

EuNU-3oXMAIiJ6a


The layout does seem similar to me.

Layout wise, the PS5 is more aligned to the 68xx chips, while XSX/S are designed to look like the APU used in the XBOne. Means nothing, in the end, other than data paths being different (and maybe latencies).


I find that custom PS5 adorable. Oh, and Durex sucks. There are better brands and products out there.
 

sinnergy

Member
I mean, I don't have anywhere near leviathan's level of knowledge, I'm just an enthusiast, but more or less I understood the logic behind the development. I already suspected mesh shaders don't magically require new hardware architecture engineering, but still people fell for the MS propaganda again. Jeez, even VRS was done on some PS4 engines. Is anyone here really convinced everything can only be replicated via hardware? There is a lot of confusion about what mesh shaders in RDNA2 really are, but it seems really absurd to argue the GE on PS5 can't use any mesh shaders just because "full RDNA2 does it".
You can write a whole engine purely in software.. that's also what they did in the '90s, and it's basically what Epic is doing now with UE5. How is that proof? It's the offloading to the fixed-function hardware that's important.

Otherwise you are taking percentages of the CPU and GPU for these tasks. Percentages otherwise used for other tasks.

But the problem is with Sony: they won't disclose their hardware the way MS did.. MS clearly showed they are RDNA 2 compliant and that Series hardware supports VRS, Mesh Shading and SFS..
 
Last edited:

Panajev2001a

GAF's Pleasant Genius
But the problem is with Sony: they won't disclose their hardware the way MS did.. MS clearly showed they are RDNA 2 compliant and that Series hardware supports VRS, Mesh Shading and SFS..
That is not a problem; there is no problem with Sony (or MS), despite best efforts to make it look that way via spec-sheet checkbox wars. Funny, considering the "full RDNA2" spiel now apparently means fully supporting (only) the features AMD revealed that day :lol

Both had a certain minimum performance target and then a remaining budget for other features, and it is clear what those were. For MS, having AMD match the DX12U API (with some changes for a console environment) was an overall much bigger priority than it was for Sony (see MS's transition from the XDK to the GDK and their overall big plan of simultaneous distribution on all platforms).
 
Last edited:

Elog

Member
Can somebody explain to me why we are arguing about a CPU feature of questionable usefulness for games, when both consoles are already mainly GPU-bound at 1440-1600p internal resolutions, even in cross-gen games?
It is interesting from a hardware point of view. It is clear that 256-bit instructions are not important for games. Reading the Anandtech article regarding Microsoft's talk about the XSX APU and its power and thermals, it is clear that the 256-bit instruction capability is the deciding factor behind the entire APUs power/thermal/frequency choices - it is the limiting APU HW spot.

In other words, native 256-bit instruction capabilities hold back the entire XSX APU and without it the same power and thermal envelope would be able to sustain higher frequencies.

To then see a cut back in exactly that area of PS5 is interesting and looks like a smart move by Sony in light of the XSX data.
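The power/frequency tradeoff claimed here follows the standard CMOS dynamic-power relation P ≈ C·V²·f. The voltages and clocks below are entirely made up for illustration; only the shape of the relationship is real.

```python
def dynamic_power(c_eff: float, voltage: float, freq_ghz: float) -> float:
    """Classic CMOS dynamic power model: P = C_eff * V^2 * f (arbitrary units)."""
    return c_eff * voltage**2 * freq_ghz

# Toy scenario: heavy 256-bit SIMD forces a higher voltage at a given clock.
# If avoiding it lets voltage drop from a hypothetical 1.10 V to 1.00 V,
# the same power budget buys more frequency.
budget = dynamic_power(1.0, 1.10, 3.6)     # power needed at 3.6 GHz, 1.10 V
new_freq = budget / (1.0 * 1.00**2)        # clock sustainable at 1.00 V, same budget
```

In this toy model `new_freq` comes out around 4.36 GHz, i.e. roughly a fifth more clock headroom from a ~9% voltage drop, which is the sense in which wide-vector support can be "the limiting spot" of an APU's power envelope.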
 

MonarchJT

Banned
I am late to the party, so please bear with me.


I am still amazed by the fact that Cerny explained all this almost a year ago and we are still going around in circles. It's clear that Sony had one strategy and MS another. The end result is that, at the moment, they are pretty much on par, with a bit of an advantage for Sony.



Please, you can't be serious. Photomode has nothing to do with gameplay. Trying to get a "win" thanks to a still image that doesn't have to deal with the rest of the game's workload is quite petty.

I respectfully disagree. At the moment, during a game, Sony has better GPU output. The really big difference between the systems is the I/O component. It's not the SSD in itself that makes things different, but the fact that the whole process is totally offloaded. And Cerny told us all about it in detail.
There's no win.. if you aren't using your hardware. The gen is long enough, and I'm sure the specs in the end will speak for themselves, as always happens with every piece of hardware on every platform.

Photomode gave a glimpse of the GPU capability differences.. now the devs have to pull that out.
 
Last edited:

Panajev2001a

GAF's Pleasant Genius
It is interesting from a hardware point of view. It is clear that 256-bit instructions are not important for games. Reading the Anandtech article regarding Microsoft's talk about the XSX APU and its power and thermals, it is clear that the 256-bit instruction capability is the deciding factor behind the entire APUs power/thermal/frequency choices - it is the limiting APU HW spot.

In other words, native 256-bit instruction capabilities hold back the entire XSX APU and without it the same power and thermal envelope would be able to sustain higher frequencies.

To then see a cut back in exactly that area of PS5 is interesting and looks like a smart move by Sony in light of the XSX data.

I am interested, for both consoles, in what gotchas there are if you run AVX-256 on more than one core, or on all of them at the same time. Intel famously reduced clock speed on their desktop and server CPUs when running such workloads (like they do if you enable SMT).

It looks like they redesigned/customised more of the Ryzen 2 core than I expected (which was very very close to zero) and yet got wide and enhanced compatibility with PS4 and PS4 Pro titles.

All in all they kept power and SoC cost (and size) in check, kept the cooling solution's cost in check (MS went for a bigger die and an even more expensive cooling fan and vapour chamber approach; they did get to 3.6 GHz with the unmodified Ryzen 2 core, which is 100 MHz higher), and got a very performant solution.
 
Do you know what's funniest about those Xbox fans' allegations? As far as we know, there could be two leprechauns inside the machine making it work; the PS5 is still performing better than the XSX in most games.
Sadly, we are stuck where we are in the console war, because supply issues mean the final result will be massively delayed. In the meantime, Xbox customers make threads about how MS has already won, or how victory is somehow inevitable. I guess they need to have their fun while they still can.
 
Microsoft calls out a "Mesh Shading Geometry Engine" on their GPU block diagram:

XSX-10.jpg


So this is an advancement of the GE from RDNA 1 most likely.

The assumption is Sony's custom GE is doing something similar.. but it's not AMD's Mesh Shader hardware implementation.

And nVidia uses a combo of what they call a Task Shader Unit and a Mesh Shader Unit:


But either way.. you never answered any of my questions the other day. Do you really think AMD, of all companies, is going to develop a software solution for something that nVidia does in hardware, and then hide it behind RDNA 2, locking out RDNA 1 cards?

Sounds like the least AMD thing ever.
AMD has not stated or highlighted any architectural changes to their geometry engine or command processor, even though every other architectural change from RDNA 1 to 2 was highlighted. It also lines up with why Mesh Shaders are still compiled into Primitive Shaders in code on the RDNA 2 cards: Mesh Shaders are simply an API implementation.

Microsoft stating "Mesh Shading Geometry Engine" was because it was an advancement over the GE of the Xbox One and One X.
You gotta be fucking kidding me here.. I'm posting a slide from a Microsoft presentation, surely vetted by AMD... you are posting tweets from some random dude on Twitter with ~1300 followers.

He’s not just some “random twitter dude”; he has a degree in Computer Science and specialises in gaming architecture. If anyone knows about this stuff, it’s him.
 
Last edited:

Zoro7

Banned
You gotta be fucking kidding me here.. I'm posting a slide from a Microsoft presentation, surely vetted by AMD... you are posting tweets from some random dude on Twitter with ~1300 followers.



How is this an answer to my question?

We have no idea if FidelityFX Super Resolution will be locked behind RDNA 2 cards or not.. as we don't know if it requires Infinity Cache or some other RDNA 2 feature.

If it doesn't, you can bet AMD will enable it on older cards.. as they have a long history of not arbitrarily locking features to new generations when older hardware can support them.

And that's why it'd be really shitty of them to do that specifically for competing with an nVidia feature we know resides on specific hardware units like Mesh Shaders.

Not even sure why I need to even explain this logic considering MS detailed them as a separate item on a GPU hardware block diagram..

But just... you know.. using logic here... to support an argument.. backed by official sources.
Damn he got banned. Never understood why he consistently lost his shit during debates. Like chill dude!
 

Garani

Member
There's no win.. if you aren't using your hardware. The gen is long enough, and I'm sure the specs in the end will speak for themselves, as always happens with every piece of hardware on every platform.

Photomode gave a glimpse of the GPU capability differences.. now the devs have to pull that out.
Oh please, still going on with the photomode FUD. You don't play in photomode, and you don't care about stable framerate in photomode: you only care about taking a cool static picture.

Let it rest. A console is not just the GPU, it's a lot more than that, and I'd rather play at a locked 30 than at 40-ish with random dips.
 
Last edited:

roops67

Member
It is interesting from a hardware point of view. It is clear that 256-bit instructions are not important for games. Reading the Anandtech article regarding Microsoft's talk about the XSX APU and its power and thermals, it is clear that the 256-bit instruction capability is the deciding factor behind the entire APUs power/thermal/frequency choices - it is the limiting APU HW spot.

In other words, native 256-bit instruction capabilities hold back the entire XSX APU and without it the same power and thermal envelope would be able to sustain higher frequencies.

To then see a cut back in exactly that area of PS5 is interesting and looks like a smart move by Sony in light of the XSX data.
BINGO!
 

Garani

Member
Doubting the almighty Mark Cerny can lead you to a dark path... may the light find him again
I could write the same stupid thing about Spencer.

You people have to understand, once and for all, that we are talking about multi-billion dollar corporations that are publicly traded on the stock market. It's not a mom and pop shop. If they misrepresented something, the consequences in court would be very, very dire: gamers like us can't even wrap our heads around the kind of money we're talking about. Shareholders care only about money, not about the specs of our plastic boxes.
 

Fafalada

Fafracer forever
Supporting 256-bit instructions IS NOT the same thing as supporting ALL 256-bit instructions.
I mean, OK. But you realise there really isn't any use for 256-bit outside of SIMD. And while a custom ISA is totally possible, the die shot implies the execution units shrank.
As far as optimisation goes, the most obvious cutback consoles have almost always made from desktop-class chips is double-precision FP. Not sure if it would account for the reduction seen, but it sounds more likely to me than any other suggestion so far.
 

Zheph

Member
I could write the same stupid thing about Spencer.

You people have to understand, once and for all, that we are talking about multi-billion dollar corporations that are publicly traded on the stock market. It's not a mom and pop shop. If they misrepresented something, the consequences in court would be very, very dire: gamers like us can't even wrap our heads around the kind of money we're talking about. Shareholders care only about money, not about the specs of our plastic boxes.
Or you could have taken a look at the context here and seen that I was just making a little joke about IntentionalPun

"You people" :messenger_tears_of_joy:
 

chilichote

Member

Patch 1.31 on PS5 adds in the fog and screen space reflections that were missing in version 1.30. The black crush also seems to be fixed on PS5 with patch 1.31. PS5 uses a dynamic resolution with the highest native resolution found being approximately 3456x1944 and the lowest native resolution found being 1920x1080. Native resolution pixel counts above 2400x1350 seem to be rare and native resolution pixel counts at or slightly above 1920x1080 seem to be common. There is a form of temporal reconstruction used that can increase the resolution up to 3840x2160. Note that the highest native resolution found here was in a less demanding scene than the previous test.
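The resolution figures quoted above are easier to compare as fractions of native 4K; this is just arithmetic on the numbers in the quote.

```python
def fraction_of_4k(width: int, height: int) -> float:
    """Pixel count of a given resolution as a fraction of native 3840x2160."""
    return (width * height) / (3840 * 2160)

# The three native resolutions mentioned in the quote:
for w, h in [(3456, 1944), (2400, 1350), (1920, 1080)]:
    print(f"{w}x{h}: {fraction_of_4k(w, h):.0%} of native 4K pixels")
# 3456x1944 -> 81%, 2400x1350 -> ~39%, 1920x1080 -> 25%
```

So when the counts sit "at or slightly above 1920x1080", the temporal reconstruction to 3840x2160 is filling in roughly three quarters of the output pixels.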
 
I could write the same stupid thing about Spencer.

You people have to understand, once and for all, that we are talking about multi-billion dollar corporations that are publicly traded on the stock market. It's not a mom and pop shop. If they misrepresented something, the consequences in court would be very, very dire: gamers like us can't even wrap our heads around the kind of money we're talking about. Shareholders care only about money, not about the specs of our plastic boxes.
JOqXvOb.gif
 

BGs

Industry Professional
Due to the private messages I have decided to write it publicly for the last time (on this subject).

Advance notice, this message is not created to please anyone, and I will not respond to any response it may generate.

- Custom RDNA.
- Custom Tools with custom nomenclatures (better than DX Tools).
- Custom RT.
- Custom 3D audio.
- Custom...
- Cust...
- C...

There are more things in the world besides DX (and a lot better). In fact, in the world of consoles, a generic DX is the worst thing you can use if you want to take advantage of specific hardware. And the XSX does not use a specific, dedicated DX. It's about time they made one if they want to compete with the PS5.

You are wasting your time comparing names and dispositions.

You should compare results.

It's the only thing that matters. The XSX could have a flux capacitor and it would be useless if the console is not able to maintain stable performance superior to that of its direct competitor. According to many, the superiority should be clear and evident, but the reality is that at the moment it is not, not to say that currently the superiority is null (and I would not hold much hope that this will ever change; Sony is not going to sit idly by while MS improves its tools, if MS improves them). So you can keep talking, but that reality is not going to change. MS still does not understand what is important in this sector (and many of you don't either): games.

Remember, I will not answer any more messages about this; things that people want to understand are understood quickly. I can't waste any more time on this topic. If you're not happy with your product, sell it and buy another.
 
Last edited: