
PS5 Die Shot has been revealed

MonarchJT

Banned
lmao...welcome back

Your delusions still haven’t gone away. “XSX” in a “performance tier” above PS5, yet overwhelming evidence points to that NOT being the case.

The situation doesn’t generally change over time either, if past gens are anything to go by. The XSX is not some esoteric, complex piece of hardware to develop for.

Can’t believe you’re coming back and spouting the usual fanboy nonsense despite being proven wrong.
Good old Colonel James, it's nice to meet you again after more than 10 years and see you're still fighting for your team.
Proven wrong by what? Just so I can educate myself on it.
 

Loxus

Member
There is nothing, like z e r o, in the engineering of the PS5 that makes it special compared to the other consoles (and I'm pulling the Switch into this as well). The PS5 is a nice Sony console of 10.2 TF, and it's special because it's the only console that can run Sony exclusives... and that's it. And given its performance/size, it could have been engineered much better, considering the competition's performance / features / dimensions.
do not mythologize something that has nothing mythical about it.
Quit whining because I'm having a discussion. Who doubts that both consoles are heavily engineered beyond our basic understanding? Nobody doubts their custom nature to fit the particular needs of their respective clients (Microsoft and Sony). What's impressive about the Xbox Series X is far from just the teraflops, just as there is more to PS5 than its teraflops. I'm excited about the features Series X has confirmed on top of the high compute performance. Just because we are having a discussion where I believe one may be better than the other doesn't have to mean it's automatically a war.

Put the knife down.
So why is the PS5 keeping pace with, and sometimes even outperforming, the XBSX?
You guys are the ones who make this forum look bad with your fanboy nonsense.

Tools alone aren't enough to explain why the PS5 is performing the way it is.
Engineering also plays a role.
 

James Sawyer Ford

Gold Member
No offense, but as far as I’m concerned everyone has a preference, and I’m not sure I would trust a pre-launch video from DF about whether a higher clock speed could give PS5 an advantage...

Personally I’m thinking that if they’re both closer than we assumed, which it sounds like they are (no full RDNA 2 here, etc.), then the big differences are the number of CUs and the clock speed. I don’t see why a higher clock speed wouldn’t be beneficial, at least in these early games.

But then I’m thinking, if they’re close, why is the XSX 0.4GHz slower? The cooling seems decent. Why not go higher?

Can’t go higher because of thermal issues.

A 400MHz increase on a wider chip would consume a ton more power; that’s not realistic for a console form factor.
 

longdi

Banned
FUD machine. :messenger_tears_of_joy:
Eh, no. I'm just curious and concerned about why one of the more defining feature upgrades from Zen to Zen 2 was dropped. The extra FPU muscle helps for 3D games, IIRC.

The workarounds suggested here seem weak.

- Do it on the GPU? Why not both?
- It's too hot! I thought part of the excitement of moving from Jaguar to Zen 2 was that we finally have a fuller, fatter CPU now. But the liquid metal and continuous boost capabilities...
- Do it as per Zen, split it between units and run another cycle... but we're on Zen 2 now!
- Probably not as useful because the PS5 doesn't need to run Windows or something... but this is AVX; I doubt my Win10 PC requires or uses AVX-256.
- Mark Cerny said it! ...but he just said 256-bit, not AVX-256? He even hinted this stuff gets hot and affects their variable boost design.
 

Loxus

Member
There is no Infinity Cache... my god... this is the bullshit inculcated by the FUD spread by that YouTuber, Red Gaming something.
Read my post again, properly.
So you're telling me RDNA 2 GPUs don't have Infinity Cache?
 

MonarchJT

Banned
So why is the PS5 keeping pace with, and sometimes even outperforming, the XBSX?
You guys are the ones who make this forum look bad with your fanboy nonsense.

Tools alone aren't enough to explain why the PS5 is performing the way it is.
Engineering also plays a role.
Because none of the games on which a comparison was made use a single one of the XSX's hardware advantage features. Hivebusters and Gears Tactics are the only ones (and in any case they take advantage of all the features).
And still we have something like Hitman 3 that manages to render 44% more pixels on screen.
We don't make the forum look bad; the users who think they have the truth in their pocket without knowing what's going on do. The PS5 will improve its performance until it reaches, more or less, its ceiling of 10.2 TF. The XSX will improve until its performance gets closer to its ceiling too, at 12.1 TF.
 
Last edited:

Md Ray

Member
I guess this is why MS showed their die shot forever ago. They knew they had the advantage, and it'll show even more later down the road.
Let me guess... Teraflops advantage?

Here's a reminder to the 'teraflops' crowd by Unity's principal engineer:


Basically means "most of the time GPUs aren't 100% ALU (TFLOPs) bound". He is also the developer of Claybook, and is showing evidence of the ALU usage using profiling tools.

This "it'll show even more down the road" isn't true because games/scenes can be bound by different parts of the GPU, it's not teraflops always. PS5 GPU has 22% more geometry, triangle rasterization throughout, and faster caches, etc. And when scenes are bound by these, PS5 will pull ahead or at least be on par XSX and we've seen this in multiple titles now.
 
Eh, no. I'm just curious and concerned about why one of the more defining feature upgrades from Zen to Zen 2 was dropped. The extra FPU muscle helps for 3D games, IIRC.

The workarounds suggested here seem weak.

- Do it on the GPU? Why not both?
- It's too hot! I thought part of the excitement of moving from Jaguar to Zen 2 was that we finally have a fuller, fatter CPU now. But the liquid metal and continuous boost capabilities...
- Do it as per Zen, split it between units and run another cycle... but we're on Zen 2 now!
- Probably not as useful because the PS5 doesn't need to run Windows or something... but this is AVX; I doubt my Win10 PC requires or uses AVX-256.
- Mark Cerny said it! ...but he just said 256-bit, not AVX-256? He even hinted this stuff gets hot and affects their variable boost design.

Your concern is touching. The PS5s of the world appreciate your concern for their well-being and whether they're full RDNA 2 or not.

Why don’t you just type lol rdna1 and be done with it?
 

MonarchJT

Banned
Let me guess... Teraflops advantage?

Here's a reminder to the 'teraflops' crowd by Unity's principal engineer:


Basically means "most of the time GPUs aren't 100% ALU (TFLOPs) bound". He is also the developer of Claybook, and is showing evidence of the ALU usage using profiling tools.

This "it'll show even more down the road" isn't true because games/scenes can be bound by different parts of the GPU, it's not teraflops always. PS5 GPU has 22% more geometry, triangle rasterization throughout, and faster caches, etc. And when scenes are bound by these, PS5 will pull ahead or at least be on par XSX and we've seen this in multiple titles now.

Absolutely true.
Now list all the advantages of a wider GPU and you will understand why the whole market and every GPU maker are going that route :)
 
But then I’m thinking, if they’re close, why is the XSX 0.4GHz slower? The cooling seems decent. Why not go higher?
There is a point of diminishing returns on your yields; 400MHz can make a big difference in voltage (and heat) on a giant APU like this, especially when everything has to run at full throttle all the time, for a relatively small gain in performance.

Somewhere an engineer made the call; it's probably the best they could muster given all the other factors they had to account for.
 
Even the 3090 isn't in a performance league above the 3080, and that's a card far more capable than your Xbox, with a bigger gap than the one between PS5 and SX. As most devs have said, they are pretty close.

I feel they're close also, but maybe my definition of a different performance tier is being looked at the wrong way. Whenever Nvidia and AMD release GPUs they have performance tiers. They have mid-range, and then they have their enthusiast options that are a bit better at higher resolutions/visual settings.

In Nvidia GPU terms I think Series X is the RTX 2080 and potentially better approaching or exceeding 2080 Super at full potential.
In Nvidia GPU terms I think PS5 is the RTX 2070 and potentially better approaching or matching 2070 Super at full potential.

Many would consider those cards fairly close also in practical terms, but we just know the 2080 cards are a cut above the 2070 cards. That's what I believe the difference to be between the two machines.

Why do I use the word exceeding for Series X in relation to the 2080 Super, but not for PS5 in relation to the 2070 Super? Memory bandwidth. There's a key memory bandwidth element that I think is being largely overlooked between the two consoles. Series X has 560GB/s for the GPU at its very best (yes I know, shared with the CPU and an asymmetric design). PS5 has 448GB/s (also shared with the CPU, but a much more ideal, fully symmetrical design).
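To put those bandwidth figures next to the compute on each side, here's a rough bandwidth-per-teraflop sketch using the public peak specs (peak figures only; real workloads never sit at these numbers):

```python
# Bandwidth per peak FP32 teraflop, from the public specs.
# Peak TFLOPS = CUs * 64 lanes * 2 ops per clock * clock (GHz) / 1000.

def peak_tflops(cus, clock_ghz):
    return cus * 64 * 2 * clock_ghz / 1000

xsx_tf = peak_tflops(52, 1.825)   # ~12.1 TF
ps5_tf = peak_tflops(36, 2.233)   # ~10.3 TF

print(f"XSX fast pool:  {560 / xsx_tf:.1f} GB/s per TF (560 GB/s, 10GB)")
print(f"XSX slow pool:  {336 / xsx_tf:.1f} GB/s per TF (336 GB/s, 6GB)")
print(f"PS5 whole pool: {448 / ps5_tf:.1f} GB/s per TF (448 GB/s, 16GB)")
```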

The key drawback I feel for Series X right now is in its ability to fully benefit from all that compute and use all that memory bandwidth in a satisfying and consistent enough fashion to really show what it can do. With 10GB as a hard limit for 560GB/s that isn't quite so easy, which is why Sampler Feedback Streaming is so vital, and also why Mesh Shaders are an important factor as well. But the faster short-term solution is SFS, I think.

If these two consoles were runners, PS5 would be Tyson Gay in his proper track gear; Series X would be Usain Bolt, but wearing these pants and shoes. Series X needs to get its track gear on :messenger_tears_of_joy:

[image: the pants and shoes in question]


No PS5 only owner will ever feel like they got shafted is my take. Just because I feel it's ultimately going to prove to be a bit weaker compared to Series X doesn't mean I think the PS5 is weak or lacking in real capability. I don't.
 
- It's too hot! I thought part of the excitement of moving from Jaguar to Zen 2 was that we finally have a fuller, fatter CPU now. But the liquid metal and continuous boost capabilities...
- Do it as per Zen, split it between units and run another cycle... but we're on Zen 2 now!
- Probably not as useful because the PS5 doesn't need to run Windows or something... but this is AVX; I doubt my Win10 PC requires or uses AVX-256.
- Mark Cerny said it! ...but he just said 256-bit, not AVX-256? He even hinted this stuff gets hot and affects their variable boost design.
All chips eventually run 'too hot'.

The rest is (on your part)
🤢🤮
- Do it on the GPU? Why not both?
Same reason you don't walk on both your hands and legs... We have a better way to perform 'FPU' tasks now.
 
So why is the PS5 keeping pace with, and sometimes even outperforming, the XBSX?
You guys are the ones who make this forum look bad with your fanboy nonsense.

Tools alone aren't enough to explain why the PS5 is performing the way it is.
Engineering also plays a role.

You want the simplest answer?

Series X:
10GB of GPU Optimal Memory at 560GB/s (Series X GPU is using this, its performance is great)
6GB of Standard Memory at 336GB/s (If the GPU touches any of this RAM performance WILL suffer)

Meanwhile on PS5:
16GB at 448GB/s

That is the primary reason we see PS5 winning ANY performance battles so far. It comes down to developers having games whose memory access patterns are organized in a certain way, and it's just plain simpler to work with the PS5's fully symmetric memory design. There's less to plan for or think about. Series X's design requires more careful planning, or use of a feature like Sampler Feedback Streaming, to greatly mitigate having just 10GB of faster RAM for the GPU. If the Series X GPU touches that other 3.5GB reserved for games past that 10GB, the GPU's performance will dip, because now you have a 52 Compute Unit, 1825MHz GPU trying to survive on just 336GB/s of memory bandwidth. It becomes bandwidth starved.

Some developers handle this better than others. In cases where Series X and PS5 performance are identical, a dev has just opted for parity (I don't have an issue here so long as the game is solid on both), or the additional headroom offered by Series X allows it to JUST match what PS5 is outputting, or brings it slightly below. Control is clear evidence of there being more headroom for higher performance on Series X. The existence of that additional performance headroom on Series X is how you get cases like Hitman 3.

That is the reason PS5 sometimes performs better. It's down to the memory design causing the Series X GPU to be bandwidth starved in momentary blips. There are solutions to this.
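As a rough illustration of that bandwidth-starvation effect, here's a sketch of what Series X's effective GPU bandwidth would look like if some fraction of its traffic landed in the slower pool. The traffic splits are made-up examples, not measured numbers, and real memory contention is more complicated than this simple blend:

```python
# Illustrative only: blended GPU bandwidth on Series X when a fraction of its
# memory traffic is served from the 336 GB/s pool instead of the 560 GB/s pool.
# The splits below are invented for illustration, not measurements.

FAST_GBPS, SLOW_GBPS = 560.0, 336.0

def effective_bandwidth(slow_fraction):
    # Each byte is served at its pool's rate; blend the rates accordingly.
    return 1.0 / ((1.0 - slow_fraction) / FAST_GBPS + slow_fraction / SLOW_GBPS)

for frac in (0.0, 0.1, 0.2, 0.3):
    print(f"{frac:.0%} of traffic in the slow pool -> ~{effective_bandwidth(frac):.0f} GB/s effective")
```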
 

Fredrik

Member
Can’t go higher because of thermal issues.

A 400MHz increase on a wider chip would consume a ton more power; that’s not realistic for a console form factor.
Overclocking by 400MHz would be overkill, but if I were MS I would absolutely try going up to 1.9 or 2GHz just to see what would happen. IIRC they increased the CPU clock on the Xbox One.
 
The unified cache stuff was always odd considering Sony's own diagram showing cache scrubbers showed a cache only attached to the GPU, not the CPU.

The Cerny talk was pretty damn detailed.. why everyone thinks they had major features not shown is beyond me lol
IIRC unified cache was only in reference to the CPU's L3$, not a unified last-level cache between CPU and GPU. The Infinity Cache was something separate and referencing the GPU specifically.

Maybe I've heard wrong, though.

Well, you did give me a better answer than Digital Foundry's "I don't know" one.

It's still on the table whether or not there will be any big differences between the two in the future. The current results make it seem like they are practically on par with each other.

And for all intents and purposes, right now they're practically performing on par with each other and that's the main takeaway in terms of analysis so far.

However Microsoft should be announcing some more features at their event later this month. I think DirectML is supposed to be focused on among a few other technologies (not all of them pertaining to Series consoles tbf). I'm curious if anything they discuss will elaborate on things they've mentioned in the past but haven't gone super in-depth on.

Let me guess... Teraflops advantage?

Here's a reminder to the 'teraflops' crowd by Unity's principal engineer:


Basically means "most of the time GPUs aren't 100% ALU (TFLOPs) bound". He is also the developer of Claybook, and is showing evidence of the ALU usage using profiling tools.

This "it'll show even more down the road" isn't true because games/scenes can be bound by different parts of the GPU, it's not teraflops always. PS5 GPU has 22% more geometry, triangle rasterization throughout, and faster caches, etc. And when scenes are bound by these, PS5 will pull ahead or at least be on par XSX and we've seen this in multiple titles now.


Some of this is true for sure: PS5 has a higher culling rate (17.84 gtri/s) and triangle rasterization rate (8.92 gtri/s). The cache part is more tricky though. Assuming 1 ns latency for the L0$, PS5's L0$ bandwidth should be about 10.172 TB/s. The thing is that while the GPU does run 405 MHz faster, that mainly affects per-CU cache bandwidth on a cyclic basis.

AKA if you break down L0$ bandwidth on a per-frame basis that's 169.53 GB/s per frame in a 60 FPS target on PS5 vs. 200.425 GB/s per frame for the same on Series X. However, like I was saying earlier this throughput on Series X only really comes with game engines that are designed to scale higher. Assuming a game is running PS5's GPU at peak, for similar performance on Series X you'd need to saturate 44 of the CUs to account for the slower GPU clock.
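Working the arithmetic behind those per-frame numbers (the per-CU bytes-per-clock figure here is an assumption; the exact value shifts the absolute numbers, but the PS5-vs-XSX ratio is just CUs × clock either way):

```python
# Back-of-the-envelope aggregate L0 cache bandwidth, per second and per 60 FPS frame.
# 128 bytes per CU per clock is an assumed idealized peak, not a confirmed figure.

BYTES_PER_CU_PER_CLK = 128   # assumption
FPS = 60

def l0_tb_per_s(cus, clock_ghz):
    return cus * BYTES_PER_CU_PER_CLK * clock_ghz / 1000  # TB/s

for name, cus, clk in [("PS5", 36, 2.233), ("XSX", 52, 1.825)]:
    tbs = l0_tb_per_s(cus, clk)
    print(f"{name}: ~{tbs:.2f} TB/s aggregate L0, ~{tbs * 1000 / FPS:.0f} GB per frame at {FPS} FPS")

# Ratio: (52 * 1.825) / (36 * 2.233) ≈ 1.18 — roughly 18% more aggregate cache
# bandwidth on XSX, but only when the workload actually occupies all 52 CUs.
```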

But that's just mainly in terms of achieving similar cache bandwidth rates; obviously once you saturate more of the Series X's GPU beyond 36 CUs that starts to give it some other advantages particularly with texture fillrate and BVH traversal intersections for RT. So I guess what I'm getting at is yes, it's always been more than TF, but there's also some idea Microsoft designed their system solely around TF and that actually isn't true, either.

Both Sony and Microsoft took a lot of other things into consideration besides raw TF peaks; they just prioritized many of those things somewhat differently.
 

FlyyGOD

Member
You want the simplest answer?

Series X:
10GB of GPU Optimal Memory at 560GB/s (Series X GPU is using this, its performance is great)
6GB of Standard Memory at 336GB/s (If the GPU touches any of this RAM performance WILL suffer)

Meanwhile on PS5:
16GB at 448GB/s

That is the primary reason we see PS5 winning ANY performance battles so far. It comes down to developers having games whose memory access patterns are organized in a certain way, and it's just plain simpler to work with the PS5's fully symmetric memory design. There's less to plan for or think about. Series X's design requires more careful planning, or use of a feature like Sampler Feedback Streaming, to greatly mitigate having just 10GB of faster RAM for the GPU. If the Series X GPU touches that other 3.5GB reserved for games past that 10GB, the GPU's performance will dip, because now you have a 52 Compute Unit, 1825MHz GPU trying to survive on just 336GB/s of memory bandwidth. It becomes bandwidth starved.

Some developers handle this better than others. In cases where Series X and PS5 performance are identical, a dev has just opted for parity (I don't have an issue here so long as the game is solid on both), or the additional headroom offered by Series X allows it to JUST match what PS5 is outputting, or brings it slightly below. Control is clear evidence of there being more headroom for higher performance on Series X. The existence of that additional performance headroom on Series X is how you get cases like Hitman 3.

That is the reason PS5 sometimes performs better. It's down to the memory design causing the Series X GPU to be bandwidth starved in momentary blips. There are solutions to this.
Or could it be that these are first-generation ports of old games on new hardware? I don't think any developers have scratched the surface of what these machines are capable of. Why don't we wait until the RDNA 2 features are utilized before we make judgements on the power narrative?
 

DESTROYA

Member
So will we finally see AMD Infinity Cache support on the PS5 Pro?
Infinity Cache amplifies the bandwidth available to a GPU. According to AMD, Infinity Cache can deliver up to 3.25x the effective bandwidth of 256-bit GDDR6 VRAM.
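For a sense of scale, here's what that claim works out to if you assume 16 Gbps GDDR6 on a 256-bit bus (the configuration of the desktop RDNA 2 cards; the 3.25x multiplier is AMD's figure as quoted above):

```python
# What "up to 3.25x the effective bandwidth of 256-bit GDDR6" works out to,
# assuming 16 Gbps GDDR6 modules on a 256-bit bus.

BUS_WIDTH_BITS = 256
GBPS_PER_PIN = 16            # assumption: 16 Gbps GDDR6

raw_gbs = BUS_WIDTH_BITS * GBPS_PER_PIN / 8    # 512 GB/s
effective_gbs = raw_gbs * 3.25                  # ~1664 GB/s per AMD's claim

print(f"Raw 256-bit GDDR6:        {raw_gbs:.0f} GB/s")
print(f"Claimed effective w/ IC: ~{effective_gbs:.0f} GB/s")
```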
 
IIRC unified cache was only in reference to the CPU's L3$, not a unified last-level cache between CPU and GPU. The Infinity Cache was something separate and referencing the GPU specifically.

Maybe I've heard wrong, though.



And for all intents and purposes, right now they're practically performing on par with each other and that's the main takeaway in terms of analysis so far.

However Microsoft should be announcing some more features at their event later this month. I think DirectML is supposed to be focused on among a few other technologies (not all of them pertaining to Series consoles tbf). I'm curious if anything they discuss will elaborate on things they've mentioned in the past but haven't gone super in-depth on.

I don't want to sound like an asshole but less talk and more delivering is what I want to see.

If they promise massive performance gains that will be what I will expect.
 

assurdum

Banned
So basically what this seems to boil down to is what I got banned in the speculation thread for saying... PS5 is based more off of the RDNA 1 architecture and was ready for release probably at least a year before the XSX... PS5 tools had MUCH more time to mature. XSX was waiting for more RDNA 2 features to implement, so tools are way behind. Hence why we are seeing PS5 keeping up at the moment. Once XSX hits its full stride and the tools have a chance to mature, it very well may have an advantage, period, moving forward. *tooools
So basically it seems you don't have a single clue what this thread is talking about... but sure, Series X rulez.
 

quest

Not Banned from OT
Overclocking by 400MHz would be overkill, but if I were MS I would absolutely try going up to 1.9 or 2GHz just to see what would happen. IIRC they increased the CPU clock on the Xbox One.
Microsoft met their goals; they are taking zero chances after the RROD cost them a billion or two. They have overcooled their hardware since, and left plenty of headroom, for a billion reasons. The original Xbox One was laughably huge for this reason. The One X was a cooling beast, like the Series X. The Series S is laughably conservatively clocked to avoid any cooling issues in a small form factor.
 

Clear

CliffyB's Cock Holster
Absolutely true.
Now list all the advantages of a wider GPU and you will understand why the whole market and every GPU maker are going that route :)

Well, an awful lot of PC code isn't especially optimal. So if you are aiming for a broad spectrum of applications, the easiest route is just more of everything. There are plenty of people (and corporate clients especially) who will happily pay through the nose for performance gains, so there's less impetus to try the sort of radical rethink that we see with PS5's design.

As I've been saying, Cerny's team was free to approach the problem of price/performance without nearly so much legacy baggage as the Xbox team, who will have had to keep in mind that the software ecosystem was always going to include PC as a target.
 
The right question should actually be: why is the XSX not performing like a 12 TF RDNA GPU? Because the PS5 does seem to perform like a 10 TF GPU.
Devs not having enough time to optimise the game for Xbox is the most likely answer to that. Dev kits for PS5 came out in early 2019, whereas Xbox dev kits came out in late 2019. With a lot of countries going into lockdown in early 2020, access to the dev kits was restricted since they had to stay at the studios. Which also explains why MS didn't have gameplay to show us last year and why Halo was shown on a similar-spec PC. Devs likely had maybe 14 to 16 months with the PS5 dev kits but 8 to 10 with Xbox.
 

DESTROYA

Member
Listen, a single game is enough to prove the others are just rushed ports, bad tools, or secretly paid off by Sony to screw the image of Series X. You have to accept the reality of the facts; don't be a child.
You really think SONY paid off a company to make Hitman 3 worse for a competitor? LMAO

If you think any company would agree to purposely make something they produce look bad on another console, you're pretty delusional.
 
You want the simplest answer?

Series X:
10GB of GPU Optimal Memory at 560GB/s (Series X GPU is using this, its performance is great)
6GB of Standard Memory at 336GB/s (If the GPU touches any of this RAM performance WILL suffer)

Meanwhile on PS5:
16GB at 448GB/s

That is the primary reason we see PS5 winning ANY performance battles so far. It comes down to developers having games whose memory access patterns are organized in a certain way, and it's just plain simpler to work with the PS5's fully symmetric memory design. There's less to plan for or think about. Series X's design requires more careful planning, or use of a feature like Sampler Feedback Streaming, to greatly mitigate having just 10GB of faster RAM for the GPU. If the Series X GPU touches that other 3.5GB reserved for games past that 10GB, the GPU's performance will dip, because now you have a 52 Compute Unit, 1825MHz GPU trying to survive on just 336GB/s of memory bandwidth. It becomes bandwidth starved.

Some developers handle this better than others. In cases where Series X and PS5 performance are identical, a dev has just opted for parity (I don't have an issue here so long as the game is solid on both), or the additional headroom offered by Series X allows it to JUST match what PS5 is outputting, or brings it slightly below. Control is clear evidence of there being more headroom for higher performance on Series X. The existence of that additional performance headroom on Series X is how you get cases like Hitman 3.

That is the reason PS5 sometimes performs better. It's down to the memory design causing the Series X GPU to be bandwidth starved in momentary blips. There are solutions to this.
I think part of the problem with the Series X is that it doesn't have a DRAM cache for its SSD, unlike the PS5, which I think does have additional DRAM for its SSD.

Also, I think games could very well start using over 10GB of graphics memory at a time. Thanks to the massive I/O ASICs and DRAM, the PS5 can not only use 11-12GB of graphics memory but can do so per scene, as it is fast enough to stream 12+GB into memory for the next scene in about a second.
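As a rough sanity check on that "about a second" figure, using the raw and compressed throughput numbers both vendors have published (actual in-game streaming rates will vary):

```python
# Time to stream a 12 GB working set at the published SSD throughput figures.
# PS5: 5.5 GB/s raw, ~8-9 GB/s typical compressed; XSX: 2.4 GB/s raw, ~4.8 GB/s compressed.

PAYLOAD_GB = 12

rates_gbps = {
    "PS5 raw": 5.5,
    "PS5 compressed (typical)": 9.0,
    "XSX raw": 2.4,
    "XSX compressed": 4.8,
}

for label, rate in rates_gbps.items():
    print(f"{label}: ~{PAYLOAD_GB / rate:.1f} s to stream {PAYLOAD_GB} GB")
```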
 

James Sawyer Ford

Gold Member
Overclocking by 400MHz would be overkill, but if I were MS I would absolutely try going up to 1.9 or 2GHz just to see what would happen. IIRC they increased the CPU clock on the Xbox One.

their power supply and cooling likely won’t allow for it, even small gains like that have a huge impact on power
 

ToTTenTranz

Banned
This "it'll show even more down the road" isn't true because games/scenes can be bound by different parts of the GPU, it's not teraflops always. PS5 GPU has 22% more geometry, triangle rasterization throughout, and faster caches, etc. And when scenes are bound by these, PS5 will pull ahead or at least be on par XSX and we've seen this in multiple titles now.
Glad to see sebbbi stepping in to clear some nonsense.
The PS5 also has the same number of shader engines as the Series X, so fewer WGPs/CUs per work distributor, meaning ALU utilization should be higher.
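Roughly, assuming four shader arrays on each GPU (the layout seen on other Navi parts; treat it as an assumption here):

```python
# Active CUs per shader array, assuming four shader arrays on each GPU.

SHADER_ARRAYS = 4   # assumption

for name, active_cus in [("PS5", 36), ("XSX", 52)]:
    print(f"{name}: {active_cus} CUs -> {active_cus / SHADER_ARRAYS:.0f} per shader array")
```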


The only clear-cut advantage I saw in the SeriesX was theoretical memory bandwidth, but it seems that has been somewhat mitigated by the memory contention issues due to the memory pools with different bandwidths.
I wonder if Microsoft didn't lose an opportunity here to just launch the console with all 10 channels populated with 2GB chips for a total of 20GB GDDR6, even if they had to charge an extra $50 for the console. They'd have both the bandwidth and memory amount advantages with no caveats other than price.


Regardless, IMO the Series X's biggest problem will be developers in general concluding that they need to make the game for the Series S first, as the common denominator with the lower RAM amount, and then scale up. If they do the opposite, the result will be the Series S losing high-FPS modes and/or ray tracing effects.
The list of developers complaining about that has been steadily piling up.


So will we finally see AMD Infinity Cache support on the PS5 Pro?
Infinity Cache amplifies the bandwidth available to a GPU. According to AMD, Infinity Cache can deliver up to 3.25x the effective bandwidth of 256-bit GDDR6 VRAM.
If Infinity Cache becomes a crucial component of Navi 3x's chiplet architecture (e.g. for faster coherency between GPU chiplets?), then I think it's very likely yes.
Sony already has patents on multi-chip (multi-GPU or even multi-SoC) solutions, probably for the PS5 Pro.
 

MonarchJT

Banned
I think part of the problem with the Series X is that it doesn't have a DRAM cache for its SSD, unlike the PS5, which I think does have additional DRAM for its SSD.

Also, I think games could very well start using over 10GB of graphics memory at a time. Thanks to the massive I/O ASICs and DRAM, the PS5 can not only use 11-12GB of graphics memory but can do so per scene, as it is fast enough to stream 12+GB into memory for the next scene in about a second.
Yes, I am also sure that the I/O will play a fundamental role over the course of the generation, but as someone has already said, it's not as if MS hasn't also thought about these situations. SFS, Mesh Shaders, VRS: they all come into play to substantially decrease the required bandwidth. But until devkits, engines, tools, and ESPECIALLY devs all move in agreement towards using these, the console will not express its potential. To get some glimpse of its performance in the short term we can only look to Turn 10, The Coalition, and Playground... it's usually those teams that experiment with new technologies first.
 
I don't want to sound like an asshole but less talk and more delivering is what I want to see.

If they promise massive performance gains that will be what I will expect.

You're certainly not alone in that; Microsoft IMHO dropped the ball both by not having a 1P showcase ready by launch (they could've even delayed games like Bleeding Edge to spruce up with optimizations for Series X and S), and by virtually not hyping the Gears 5 Hivebusters DLC whatsoever (felt like a stealth release basically).

Relying too much on 3P timed exclusives nets you games like The Medium, which, well, has great-looking backgrounds and a main character model that looks nice enough (plus she has a nice butt ;) ), but heavily uncanny-valley and janky animations, among other things. Microsoft will need more impressive visual showcases to come through in order to start bringing the performance narrative their way (or at least mitigate the 3P performance delta if the current trend continues).

So will we finally see AMD Infinity Cache support on the PS5 Pro?
Infinity Cache amplifies the bandwidth available to a GPU. According to AMD, Infinity Cache can deliver up to 3.25x the effective bandwidth of 256-bit GDDR6 VRAM.

It's practically guaranteed. The only question is how large the cache will be. 32 MB - 64 MB would be my guess. If there's a Series X upgrade in the pipeline I guess that will also have some form of IC at a similar size; by that point (2023), it should be relatively commonplace.
their power supply and cooling likely won’t allow for it, even small gains like that have a huge impact on power

Seems both systems are topping out at around 215 watt TDPs, which is already a good bit for consoles. A GPU clock increase of even 75 MHz would probably add another 5-6 watts on top of that, which could increase thermal output disproportionately (power and heat don't share a linear relationship).
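A rough way to see why small clock bumps get expensive is the classic dynamic-power approximation; the voltages below are made-up illustrative values, not the consoles' real ones:

```python
# Dynamic power scales roughly with frequency * voltage^2 (P ≈ C * V^2 * f).
# A clock bump usually needs a voltage bump too, so power climbs faster than clock.
# Voltages here are invented for illustration only.

def relative_power(freq_mhz, volts, base_freq_mhz=1825, base_volts=1.00):
    return (freq_mhz / base_freq_mhz) * (volts / base_volts) ** 2

for freq, volts in [(1900, 1.03), (2000, 1.08)]:
    extra_power = (relative_power(freq, volts) - 1) * 100
    extra_clock = (freq / 1825 - 1) * 100
    print(f"{freq} MHz @ {volts:.2f} V -> ~{extra_power:.0f}% more dynamic power "
          f"for {extra_clock:.1f}% more clock")
```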

If Microsoft do any clock adjustments or upgrades, it'd most definitely be CPU related. The GPU would cause too much of an issue there, and even if they could, they are still bound by the memory bandwidth, so that's more peak GPU power with less bandwidth per TF (and with the GPU clocked even higher, they would probably need to increase the CPU clock by a dependent amount to make sure it doesn't bottleneck the GPU).

So basically, from greatest to least chance of clock adjustments for Series X would be CPU >>> GDDR6 memory clock >>>>>>>>> GPU clock
 

Clear

CliffyB's Cock Holster
Yes, I am also sure that the I/O will play a fundamental role over the course of the generation, but as someone has already said, it's not as if MS hasn't also thought about these situations. SFS, Mesh Shaders, VRS: they all come into play to substantially decrease the required bandwidth. But until devkits, engines, tools, and ESPECIALLY devs all move in agreement towards using these, the console will not express its potential. To get some glimpse of its performance in the short term we can only look to Turn 10, The Coalition, and Playground... it's usually those teams that experiment with new technologies first.

Once again, this saves rendering bandwidth, not data bandwidth. Everything you describe helps to discard redundant data, but what it doesn't address is feeding the data to the GPU. If you feed it a 10,000-vertex mesh that is then culled and tessellated down to 5,000 for rendering, rasterization speed increases but the I/O load remains untouched.
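A toy example of that distinction (the 32 bytes per vertex is just an assumed typical layout, not a figure from either console):

```python
# Toy illustration: culling halves what the rasterizer sees, but the bytes that
# had to travel the I/O path are unchanged.

VERTS_FED = 10_000
VERTS_RASTERIZED = 5_000
BYTES_PER_VERT = 32   # assumption: position + normal + UV

io_kib = VERTS_FED * BYTES_PER_VERT / 1024

print(f"I/O load: ~{io_kib:.0f} KiB delivered either way")
print(f"Rasterizer workload: {VERTS_RASTERIZED} of {VERTS_FED} vertices after culling")
```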
 
You're certainly not alone in that; Microsoft IMHO dropped the ball both by not having a 1P showcase ready by launch (they could've even delayed games like Bleeding Edge to spruce up with optimizations for Series X and S), and by virtually not hyping the Gears 5 Hivebusters DLC whatsoever (felt like a stealth release basically).

Relying too much on 3P timed exclusives nets you games like The Medium, which, well, has great-looking backgrounds and a main character model that looks nice enough (plus she has a nice butt ;) ), but heavily uncanny-valley and janky animations, among other things. Microsoft will need more impressive visual showcases to come through in order to start bringing the performance narrative their way (or at least mitigate the 3P performance delta if the current trend continues).

They certainly did screw up by not proving the system's power right up to launch. Since they bragged about the XSX's power, I was expecting a clear difference at launch. I'm listening to what people have to say about the XSX, but I'd prefer they actually deliver instead of just talking about what might be possible. It kind of reminds me of Sony and the PS3's Cell in a way.

What Microsoft needs more than ever is their own Demon's Souls. Something that shows what the system is really capable of. Even Digital Foundry was extremely impressed with what Bluepoint achieved in that game. Now it's Microsoft's turn to do something similar.

P.S. Oh, and no more Craigs, please.
 
In Nvidia GPU terms I think Series X is the RTX 2080 and potentially better approaching or exceeding 2080 Super at full potential.
In Nvidia GPU terms I think PS5 is the RTX 2070 and potentially better approaching or matching 2070 Super at full potential.

Am I the only one that doesn't like using Nvidia GPUs to compare?
Turing/Ampere and RDNA 2 are completely different architectures, making comparisons always inconsistent because each will deliver different results depending on the game.
 

MonarchJT

Banned
Once again, this saves rendering bandwidth, not data bandwidth. Everything you describe helps to discard redundant data, but what it doesn't address is feeding the data to the GPU. If you feed it a 10,000-vertex mesh that is then culled and tessellated down to 5,000 for rendering, rasterization speed increases but the I/O load remains untouched.
I think there will not be any problem regarding that; with both architectures' customizations and the NVMe SSDs, IMHO both consoles are over-specced in that regard. I do not see how there will be a leap forward in data usage such that an NVMe drive with a hardware decompressor and the other features will not be enough.
 
I think there will not be any problem regarding that; with both architectures' customizations and the NVMe SSDs, IMHO both consoles are over-specced in that regard. I do not see how there will be a leap forward in data usage such that an NVMe drive with a hardware decompressor and the other features will not be enough.
I think the lack of DRAM could cause issues when relying on fast streaming. Aren't both the PS5 and Series X TLC SSDs? TLC tends to have worse performance; sometimes a portion of a TLC SSD is run as SLC to serve as a cache and improve performance. Not sure if either of the consoles is doing that.
 

Clear

CliffyB's Cock Holster
I think there will not be any problem regarding that; with both architectures' customizations and the NVMe SSDs, IMHO both consoles are over-specced in that regard. I do not see how there will be a leap forward in data usage such that an NVMe drive with a hardware decompressor and the other features will not be enough.

My point was mainly that what we are seeing is a fundamental philosophical difference in approach to how best to manage workload. To my mind that's a far more interesting thing to observe than just comparing spec sheets.

Especially because these things are not mutually exclusive; Sony have gone fairly conservative in terms of core specs but are hoping to make up the shortfall with efficiency gains along the entire pipeline. However, there's no reason those same alterations couldn't be applied to a much larger, faster, more state-of-the-art chip.

I just think we should want Sony's stuff to be as effective as advertised because it could have massive benefits for all AMD devices going forwards.
 