
Analysis Hardware Microsoft's Series X Hot Chips Presentation - Full Video

Md Ray

Gold Member
Nov 12, 2016
2,409
7,732
685
India
Quoting myself once again since I see there are some serious reading comprehension issues:


2) Even if it's there, it doesn't mean it's being used, at all, like in PS4 Pro, hence "mythical".





I never said that, quite the opposite actually, but what can you expect from one of the most famous die-hard PS fanboys, who seeks out console warring wherever possible? Colour me surprised... Same as above: learn to read first, then write.

Again, it doesn't matter if it's being used or not. There are no "ifs" and "buts": the doubling of TF comes from FP16 compute, hence it's not mythical.
 

Tschumi

Banned
Jul 4, 2020
2,559
2,968
590
Japan
 

ZywyPL

Member
Nov 27, 2018
4,475
7,505
675
Again, it doesn't matter if it's being used or not. There are no "ifs" and "buts": the doubling of TF comes from FP16 compute, hence it's not mythical.

So I stand by what I said - until MS is able to put the tech to actual use, it'll be nothing but an on-paper feature, just like RPM in the Pro or Mesh Shaders in Turing/Ampere GPUs. Theory means nothing if it cannot be translated into practical use.
 

Md Ray

Gold Member
Nov 12, 2016
2,409
7,732
685
India
So I stand by what I said - until MS is able to put the tech to actual use, it'll be nothing but an on-paper feature, just like RPM in the Pro or Mesh Shaders in Turing/Ampere GPUs. Theory means nothing if it cannot be translated into practical use.
It's been used on PC. Vega GPUs' FP16 capability was put to use in FC5. Behind the scenes, more games could be using these features; we just don't know.

Ubisoft details their use of FP16 compute/Rapid Packed Math in Far Cry 5 | OC3D News (overclock3d.net)
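To put rough numbers on the doubling claim, here's a back-of-envelope sketch using the public Series X launch specs (the 64 FP32 lanes per CU is the standard RDNA figure; everything else is just multiplication):

```python
# Peak-throughput arithmetic: each FP32 lane retires one FMA
# (2 floating-point ops) per clock; Rapid Packed Math packs two
# FP16 values into each 32-bit lane, doubling ops per clock.
cus = 52                # Series X compute units (public spec)
lanes_per_cu = 64       # FP32 ALU lanes per CU (RDNA)
clock_hz = 1.825e9      # fixed 1825 MHz GPU clock

fp32_tflops = cus * lanes_per_cu * 2 * clock_hz / 1e12
fp16_tflops = 2 * fp32_tflops   # the RPM doubling, by definition

print(f"FP32: {fp32_tflops:.2f} TF, FP16: {fp16_tflops:.2f} TF")
```

That works out to roughly 12.15 TF FP32 and 24.29 TF FP16 - the doubling is arithmetic on the spec sheet, separate from the question of whether a given game actually ships FP16 shaders.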
 

rnlval

Member
Jun 26, 2017
899
750
355
Sector 001
gpucuriosity.wordpress.com
This is the optional (originally RDNA1) vector ALU feature that extends RPM from FP16 down to INT8 and INT4 (2x, 4x and 8x the FP32 throughput, respectively).
Rapid Packed Math doesn't resolve into 32-bit datatypes, e.g. dual 16-bit packed operands resolve into a 16-bit datatype.

ML's dot math resolves into 32-bit datatypes, e.g. dot2 16-bit packed operands resolve into a 32-bit datatype, dot4 8-bit packed operands resolve into a 32-bit datatype, and dot8 4-bit packed operands resolve into a 32-bit datatype.




AMD separates ML's dot Math from RPM.
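The distinction can be sketched in a few lines of Python, with NumPy standing in for the hardware (a toy model of the widening behaviour, not AMD's actual ISA): packed math that resolves at operand width wraps on overflow, while a dot4-style instruction accumulates the 8-bit products into a 32-bit result.

```python
import numpy as np

a = np.array([100, 120, 90, 110], dtype=np.int8)
b = np.array([90, 100, 80, 100], dtype=np.int8)

# dot4-style: widen the operands, multiply-accumulate into 32 bits
dot4 = int(np.dot(a.astype(np.int32), b.astype(np.int32)))

# operand-width packed math: int8 * int8 wraps modulo 2^8
packed = a * b                             # per-lane products, wrapped
wrapped_sum = int(packed.astype(np.int32).sum())

print(dot4, wrapped_sum)   # 39200 vs a wrapped, meaningless 32
```

Which is why the dot instructions matter for ML inference: INT8/INT4 throughput is only useful if the accumulator is wide enough to hold the sum.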
 

rnlval

Member
Jun 26, 2017
899
750
355
Sector 001
gpucuriosity.wordpress.com
Go look up the GPU specs tab on AMD's website or look up the difference between FP16 and FP32. Again, it's not mythical, it's math. FP16 TF is meant to be 2x that of FP32's compute perf.





1. Big Navi has machine learning tensor math support.

2. Tensor math needs packed math that can resolve into a 32-bit datatype! Rapid Packed Math doesn't resolve into a 32-bit datatype.
 
Aug 28, 2019
3,949
8,211
630
www.instagram.com
You would think so: if MS just extended the shader arrays so they could keep 4, their technical people would already have known they'd get less back in performance. However, I believe their thinking was, "well, we will beat 36 CUs anyway."

8 shader arrays were not an option, so they had no other choice...

My gut tells me MS is really mainly interested in the cloud for expansion... but I don't know.

Well, I came across some interesting posts on B3D that could also speak to this. This one in particular, from iroboto, feels like a nail-on-the-head moment and might explain why we're seeing some of the performance differences in some 3P games between PS5 and Series X, though it feels like a somewhat bleak (if realistic) take:

To be clear, I only say this because I haven't measured the wattage myself. But 135W/145W respectively on SX seems erroneously bad. I will check myself to hold that claim, which is what started this discussion in the first place.

I don't necessarily think in particular narrow/fast is delivering more performance than wide/slow. I think in any GPU lineup for a family of cards, you'd generally see the smaller, narrow chips at the beginning of the lineup, and get slower but much wider as they get to the flagship card. Clockspeed is king because everything goes faster, but the power/mm^2 is unsustainable as clockspeeds go higher which is why to gain more performance we move to going wider, and thus slower. But if there was a GPU lineup today, the XSX would be the 6600. And a PS5 would undoubtedly be a 6500 and the 6500 would never outperform the 6600. And the 6600 would not outperform the 6700 and vice versa to the 6800 XT.

But here we are with data suggesting differently: let's get tools out of the way entirely then for the sake of supporting your argument.
Consider comparing, say, a 2080 and a 2080 Ti. There is a massive clock speed difference on the spec sheet, but in comparison after video comparison, the boost just takes the clock right to the top and keeps it in line with what the 2080 does. This probably applies to a whole slew of GPU families. The 3070 runs at 1965MHz for AC Valhalla, against an advertised boost clock of 1725MHz. And both of these cards, the 2080 Ti and the 3070, are super wide compared to the 2080 but are putting in the same clock rates, because they can.

I think it's just going to come down to what you said (? I'm pretty sure it was you): variable clock rates have been in the industry for so long, and they're appreciably better than fixed clocks, likely tenfold. If I had to attribute the XSX to a critical performance failure, it's not about narrow and fast; it's about maximizing the power envelope, and SX cannot. I really underestimated how effectively the GPU was being utilized here, and that cost them. Like, if I look at this video here:

DOOM Eternal RTX 3070 | 1080p - 1440p - 2160p(4K) | FRAME-RATE TEST - YouTube

The marketed spec is 1730MHz for boost, but in the test it's running 2070MHz at 1080p, 1920MHz at 1440p, and back up to 2070MHz for 4K. That really shows you how it's maximizing the full potential of the card. It's so lightly loaded it's going to overclock itself beyond the marketed spec. Nowhere there did it even come close to dropping to its marketed clock rate. As long as it's running cool, it'll take its boost way above the expected values.

And so when I think about 2230MHz, I think yeah, it's probably sitting there 95% of the time, as there looks to be so much downtime that it's got no issues holding it, because things aren't as heavy-duty as we've made them out to be.
And in doing so, it's making full usage of its power envelope.

But the SX: if you've got titles coming in at 135W, that's still a fraction of what the power supply is able to provide, and it's just not maximizing that power envelope.
It could have easily jacked the clock rate all the way up as long as the power didn't go beyond, say, 210W. And so I'm not even sure the advertised clock rate ever really mattered or matters; it's whether or not a fluctuating 135-155W worth of processing can compete with a constant 200W pumping at all times.

I think it would be a very different story if SX had the same variable clock rate system as PS5, because then it would be maximizing its power envelope at all times. A simple graph of the wattage for PS5 is just a straight line for nearly all of its titles, and I'm positive that if I do this for XSX, it's going to be all over the place. And in the end, I'm pretty convinced that if there is a reason (hardware-wise) why XSX can't really beat PS5, it's this. It's not about the front end; it's not about sizing work for CUs. It's a complete inability to take full advantage of the power envelope available. When I think about the other reasons brought forward as to why PS5 is performing better:
ROPs? No, the 4Pro had 2x the ROPs the X1X had, and got beat handily.
Secret cache? No, 32MB of ESRAM operating at 100% clock speed with simultaneous read/write, and the Xbox got creamed.
Faster clock speed? The X1S is clocked faster than the PS4 and has the cache, so nope.
Bandwidth? The X1S has higher aggregate bandwidth than the PS4 with less latency. Nope.
The 4Pro has 2.2x the compute of the PS4 and more bandwidth, but a great many titles still only ran at 1080p (10%, I think we counted).
But for all console generations prior to this, they all ran fixed clocks. So they all suffered the ups and downs of unoptimized code together.

So really, I'm looking at one thing here as the major difference: it's not about narrow-and-fast beating slow-and-wide; it's about maximizing that power envelope. 200W of a faster but narrower chip will not outperform the same 200W of a slower but wider chip. PS5 is going to run 200W continuous 99.99% of the time, while the rolling average of the XSX will be < 200W. The 20% increase in TF may not be enough to make up the deficit in the rolling average of power consumption. And that's a problem for MS: there are likely very few games that will come along and keep the power envelope around 200W for XSX, at least not any time soon.

That's my explanation. I may have seriously underestimated how much performance can be left on the table using fixed clocks.

^ Bolded and underlined emphasis mine.

This is after they'd done some power consumption measurements on a few multiplat 3P games on Series X and realized some were only drawing around 135 to 150 watts on average. Until I read this post I hadn't actually considered the implications fixed clocks could have as an impacting factor in fully leveraging resources. With a variable clock setup, on any downtime the system can just shift its power around to draw out more performance, and it's also a good mask for sloppy programming. Tons of computer software throughout the '90s and early '00s benefited from ever-faster CPUs due to this, and we saw what happened when they lost the advantage of fast clocks once multi-core designs came about: entire rewrites of the software had to be done in order to take advantage of the design paradigm shift.
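iroboto's argument can be boiled down to a toy power model (all the numbers here are invented for illustration; real DVFS is far more complex): a fixed-clock console draws power in proportion to how busy each frame keeps the GPU, while a variable-clock console raises its frequency until it hits the power cap, converting idle headroom into clock speed.

```python
import random

random.seed(1)
POWER_CAP_W = 200.0
BASE_CLOCK_GHZ = 1.825
# Calibrate so 100% utilization at the base clock exactly hits the
# cap; power is modelled as proportional to utilization * freq^3.
K = POWER_CAP_W / BASE_CLOCK_GHZ ** 3

# Per-frame GPU utilization, fluctuating between 50% and 100%
frames = [random.uniform(0.5, 1.0) for _ in range(1000)]

# Fixed clocks: frequency pinned, power rides utilization
fixed_power = [K * u * BASE_CLOCK_GHZ ** 3 for u in frames]

# Variable clocks: pick the highest f with K * u * f^3 <= cap
var_clock = [(POWER_CAP_W / (K * u)) ** (1 / 3) for u in frames]

avg_fixed_w = sum(fixed_power) / len(fixed_power)
avg_var_ghz = sum(var_clock) / len(var_clock)
```

With these made-up numbers, the fixed-clock machine averages around 150W - a rolling average well under its cap, much like the 135-155W measurements quoted above - while the variable-clock machine sits at the cap every frame and averages a clock comfortably above the fixed one.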

That feels to me like what the Series X might be presenting to 3P devs. All consoles up to now have used fixed clocks, and if you look at the bolded parts above, because of that, no additional features like more ROPs, more cache etc. actually made much of an impact unless the software was specifically tailored to them, which takes time and effort. So I actually don't know anymore if the "tools" argument is the only one at play when it comes to explaining some of the 3P results on Series X; a bigger factor could actually just be the choice of fixed clocks. And if that's the case, it's an architectural design choice, something devs will have to work around regardless of how much the tools improve.

The irony is that a lot earlier I thought it'd be the PS5 presenting this challenge because of its variable clocks and when Cerny gave the example of AVX 256 instructions being "particularly taxing" on the CPU that just further solidified the earlier viewpoint. But apparently I crossed some wires and forgot about how outside of gaming, variable frequency has generally been the rule on PC development, since you can't program to a particular spec and therefore a particular frequency that spec might lock in at (and I do specifically mean PCs here as in IBM PC-compatibles; it was different with microcomputers since only one manufacturer made them and they generally shared a range of specifications that could be more easily targeted directly, though that doesn't mean I'm sure if microcomputer devs programmed targeting fixed clocks).

So on that note, like others have said here Series X (and S) aren't doing anything different by going with fixed clocks; all consoles before have used fixed clocks. But that also might have factored into previous difficulties in maximizing the power envelopes (and therefore performance) of those consoles, and that will inevitably be a factor with the Series consoles as well. That's what makes the PS5 particularly interesting since it is the first console going with variable clocks, which means, yes, it is operating at full or near-full performance 99% of the time, and it'll only dip when power budget is exceeded, just as was said back in March.

Now do I think that will still create some complications? Yes. But those are complications that can be worried about later in the generation which is usually when devs start trying to squeeze more performance out anyway. I didn't think I'd be saying this but for MS's system the learning curve might actually be rougher than I initially thought because devs will have to go through the traditional process of learning the hardware and optimizing for it over time to make sure it stays fully saturated with work to ensure maximized use of the power envelope (power consumption should appear more or less steady; I think Gamers Nexus actually showed off what I'm talking about here in his 6800 review showing the power consumption patterns on 6800 with SAM active versus the Nvidia 3080 (or 3070) in some gaming benchmarks, where you could see the latter going up and down all over the place while the 6800 kept steady the whole time), and we know how much time that usually takes.

It literally could be years, so I'm now thinking two things, and this is in line with a few other posters on B3D. Mainly, I'm not expecting a 20% - 30% performance gain over PS5 to ever manifest itself. This is mainly because even though there's a lot of room for devs to grow on Series X, there's at least about as much room (if slightly less) for them to grow on PS5, just in a different manner. I might go a step further and say I'm not sure even an 18% gain will ever manifest itself outside of some of the 1P MS titles. So it's maybe realistic to say 3P game performance between PS5 and Series X will be basically even, and for games that don't adapt their usage of the Series X silicon to saturate it better, PS5 might actually retain a lead in 3P game performance. Unless those suggested "stress points" on the power budget start to take effect and force downclocking of the GPU and CPU, in which case I can see Series X having the edge in those 3P games. How often will that actually happen, though?
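For reference, the 18% figure is just the paper-spec ratio from the public launch numbers (64 FP32 lanes per CU, with the PS5 clock taken at its 2230 MHz peak):

```python
# TFLOPS = CUs * lanes * 2 ops (FMA) * clock
ps5_tf = 36 * 64 * 2 * 2.23e9 / 1e12    # ~10.28 TF at peak clock
xsx_tf = 52 * 64 * 2 * 1.825e9 / 1e12   # ~12.15 TF at fixed clock
gap_pct = (xsx_tf / ps5_tf - 1) * 100

print(f"{gap_pct:.1f}% on paper")       # ~18.2%
```

And note that's the best case for the gap: any PS5 downclocking widens it, while any Series X under-utilization of its power envelope narrows or erases it.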

And how long will that go on? A year, like some are saying? Well, it could take up to a year for MS to iron out GDK tool issues and for devs to get more or less familiarized with those tools, but there's still the learning curve of fully saturating the power envelope on the system and keeping it saturated 100% of the time on fixed clocks. And there's the kicker: the fixed clocks. GPUs don't have an issue saturating their silicon with work, but that's also because they don't operate on fixed clocks; that basically uncaps them from otherwise limiting restraints to let games get the performance they need, maximizing the power envelope. That's still possible on a system with fixed clocks, but it requires a lot more developer-specific tuning and optimization, which takes time and effort that not every developer has. Even more so if you factor in how market share could impact priorities.

I hope MS's tools make it as easy as possible to utilize system resources and maximize the effective room of the power envelope; we can see some optimized 1P titles like Gears 5 doing this, but it looks like 3P devs, at least for a while, will struggle to do so. For a year? Two years? Three? It's a bit ironic, since MS messaged the Series X as the best place for 3P titles, but in reality they may have to rely on their 1P titles to push the system to its limits if Sony's narrative with 3P game performance solidifies itself. That also means MS needs those big 1P games to hit sooner, not later.

And in all of this, what does that say about PS5? Is it being tapped out early? Well again, variable clocks haven't really been used in a games console until now, so it's hard to say. I still say that eventually devs will have to begin contending with the fixed power budgets for GPU and CPU and begin tailoring their code to optimize its power consumption, so that they don't run into issues with downclocks (which for now we can take Cerny's word on as being 2%, though realistically we don't know if it'll be more than that in practice). The benefit of their design, though, is that they won't really need to start worrying about that until a few years down the road.

In light of this I'm maybe a bit flustered with some of MS's design choices, TBQH. This is maybe an example of wanting to dual-purpose the APU rearing its ugly head: fixed clocks and a wider net of CUs are great for a server rack, because if the clocks are just high enough to give enough performance, but just low enough to ensure steady performance for prolonged periods, and you're going to be crunching a ton of compute data on them in said server rack, then that design is optimal. It's also optimal for streaming multiple One S gameplay instances, since even at a clock maybe conservative for an RDNA2 design, it's still more than enough for running virtualized One S machines.

Likewise with things like the SSD size and bandwidth, where I think the Series S factored into the decision. You can't go with too many smaller-capacity NAND modules in a solution because of space concerns, even if you tried PoP and multi-channel setups. You'd want a footprint that could work not just for a larger console, but a smaller one. And you wouldn't want the bandwidth to be too high, because larger bandwidth means more NAND modules or higher-performance NAND, which means more power consumption, which means more heat, which means more required cooling, which means increased costs.

But these (particularly the APU point) ultimately mean concessions were made to accommodate those use-cases, and this might have a heavier impact on the gaming performance of such a design than first imagined. Take that same APU, perfectly fit for a server rack, and try to leverage it as a gaming APU? Well, now you're coming in under the power envelope and have to optimize your software to deal with it, and that requires a lot of work and time. I'm almost tempted to say this is the second generation in a row where MS let factors outside of a pure games machine influence their console design. With the XBO it was multimedia/TV, and now with Series X/S it feels like it might've been servers/Azure (for Xcloud game streaming).

Where does that leave Series X? I still think it's an excellent system in many respects, and MS has a lot going for them WRT Gamepass and upcoming 1P content. Xcloud has tons of potential, too. But I think some of the early results we're seeing also show that MS should avoid boastful messaging going forward, especially boasts circling around a competitor whose specifications you don't 100% know. If MS hadn't pushed messaging regarding power, or things to suggest it (like the "full RDNA2" stuff back in October), I don't think some of these 3P multiplat issues would have much weight against them. Granted, if they had an actual 1P game or two that was new at launch and optimized for Series X, the 3P multiplat issues would recede even further in people's minds, because at least the 1P games would show what the system can actually do when optimized.

But that isn't what happened, and if their going with fixed clocks is what's really affecting some of the 3P non-BC performance on the platform, then improved tools won't be enough. Even a year of further familiarity by 3P devs may not be enough to manifest any tangible performance gains over PS5. PS5 might still hold that performance lead on 3P games going into next fall, probably even longer. And if that happens, MS will be forced to rely on one thing and one thing only to keep early adopters glued: games. New games, exclusive to the ecosystem, optimized to show what the system can really do. Which will almost certainly mean closely-tied timed 3P exclusives, and 1P exclusives. Some of which need to start showing up within the next 3-4 months.

They have some confirmed: Scorn, The Medium, Exo-Mecha, Bright Memory Infinite, The Gunk, etc. There are hints of some 1P stuff for next year, like an FS2020 port and maybe Forza Horizon 5. But they'll need a lot more than that if Sony's 2021 turns out to be as stacked as it's looking: Horizon Forbidden West, GoW Ragnarok, R&C Rift Apart, Gran Turismo 7, Kena: Bridge of Spirits, Deathloop (provided things haven't happened behind the scenes there), and now rumors of both a Silent Hill remake (reboot?) and a Metal Gear Solid remake. That's an onslaught; I'm not sure even Halo Infinite (provided it's well-polished) is enough against all of that.

Here's wishing them the best, because if it's really the fixed clocks providing the most resistance to easily squeezing performance out of the system, that doesn't leave the Series X with a lot of advantages over PS5 aside from GamePass (and if Sony attempts to make their own version of that, probably by revamping PS Now, things swing Sony's way there too) and, in some areas, BC (again, if Sony adds even PS1 & PS2 BC, let alone PS3, I think that strips away Series X's BC advantage). Maybe Xcloud, but I don't see that benefiting individual games unless a game can leverage the innate benefits of multi-device local/streaming sessions for design purposes (so the game would have to know you're on Series X in one session, then on your smartphone in another, and maybe be able to tell if you're indoors or outdoors, that sort of thing). That could be a unique feature for MS considering the range of devices they'll want to integrate Xcloud into going forward.

ADDENDUM: While at it, I guess I should touch on the GPU issues affecting some PS5s as well. Reason being, if the problem is tied to an overstressed GPU at the current power budgets, and Sony is forced to lower the power budget, that could have its own implications for PS5 titles going forward.

It's not necessarily a good thing to speculate on this further until more evidence comes forth of bad systems having GPU issues, though. For all we know those problems could be related to systems with bad yield APUs, it happens. If it turns out there's something more to it, though, I think it's fair at that point to speculate if the clock peaks are the reason, considering Sony (and MS) are very likely on "only" 7nm DUV.
 
Feb 14, 2010
2,499
976
1,020
Yeah, despite people mocking the cardboard cutout audience, it was much better presented than this borefest. Don't get me wrong, both were borefests, but this was on another level of boring.
You weren't the target audience. It wasn't originally made available for everyone to see.
 

jroc74

Phone reception is more important to me than human rights
Jun 1, 2013
8,627
2,971
770
Well I came across some interesting posts on B3D that could also suggest into this. This one in particular from iroboto feels like a nail-on-the-head moment and might explain why we're seeing some of the performance differences in some 3P games between PS5 and Series X, though it feels like a somewhat bleak (if realistic) take:



^ Bolded and underlined emphasis mine.

This is after they've done some power consumption measurements on a few multiplat 3P games on Series X and realized some were only getting around 135 watts to 150 watts on average. Until I read this post I didn't actually consider what the implications around fixed clocks could produce WRT being an impacting factor in fully leveraging resources. With a variable clock setup on any downtime the system can just shift its power around to draw out more performance, and it's also a good mask for sloppy programming. Tons of computer software throughout the '90s and early '00s benefited from ever-faster CPUs due to this, and we saw what happened when they lost the advantage of fast clocks once multi-core designs came about; entire rewrites of the software had to be done in order to take advantage of the design paradigm shift.

That feels like to me what the Series X might be presenting to 3P devs; all consoles up to now have used fixed clocks but if you look at the bolded parts above, due to that, no other additional features like more ROPs, more cache etc. actually made that much of an impact unless you made the software specifically tailored to it, which takes time and effort. So I actually don't know anymore if the "tools" argument is the only one at play when it comes to explaining some of the 3P results on Series X, a bigger factor there could actually just be them going with fixed clocks. And if that's the case, such would be an architectural feature/design choice, something devs still have to work around regardless how much more improved the tools get.

The irony is that a lot earlier I thought it'd be the PS5 presenting this challenge because of its variable clocks and when Cerny gave the example of AVX 256 instructions being "particularly taxing" on the CPU that just further solidified the earlier viewpoint. But apparently I crossed some wires and forgot about how outside of gaming, variable frequency has generally been the rule on PC development, since you can't program to a particular spec and therefore a particular frequency that spec might lock in at (and I do specifically mean PCs here as in IBM PC-compatibles; it was different with microcomputers since only one manufacturer made them and they generally shared a range of specifications that could be more easily targeted directly, though that doesn't mean I'm sure if microcomputer devs programmed targeting fixed clocks).

So on that note, like others have said here Series X (and S) aren't doing anything different by going with fixed clocks; all consoles before have used fixed clocks. But that also might have factored into previous difficulties in maximizing the power envelopes (and therefore performance) of those consoles, and that will inevitably be a factor with the Series consoles as well. That's what makes the PS5 particularly interesting since it is the first console going with variable clocks, which means, yes, it is operating at full or near-full performance 99% of the time, and it'll only dip when power budget is exceeded, just as was said back in March.

Now do I think that will still create some complications? Yes. But those are complications that can be worried about later in the generation which is usually when devs start trying to squeeze more performance out anyway. I didn't think I'd be saying this but for MS's system the learning curve might actually be rougher than I initially thought because devs will have to go through the traditional process of learning the hardware and optimizing for it over time to make sure it stays fully saturated with work to ensure maximized use of the power envelope (power consumption should appear more or less steady; I think Gamers Nexus actually showed off what I'm talking about here in his 6800 review showing the power consumption patterns on 6800 with SAM active versus the Nvidia 3080 (or 3070) in some gaming benchmarks, where you could see the latter going up and down all over the place while the 6800 kept steady the whole time), and we know how much time that usually takes.

It literally could be years, so I'm kind of now thinking two things and this is in line with a few other posters on B3D. Mainly, I'm not expecting a 20% - 30% performance gain over PS5 to ever manifest itself. This is mainly because even though there's a lot of room for devs to grow on Series X, there's at least about as much room (if slightly less) for them to grow on PS5, just in a different manner. Might go a step further and even say I'm not sure an 18% gain will ever manifest itself outside of some of the 1P MS titles. So it's actually maybe realistic to say 3P game performance between PS5 and Series X will be basically even and for games that don't start to adapt usage of the Series X silicon better to saturate it, PS5 might actually retain a lead in 3P game performance. Unless those suggested "stress points" on the power budget start to take effect and force downclocking of the GPU and CPU, in which cases I can see Series X having the edge in those 3P games. How often will that actually happen, though?

And how long will that go on? A year like some are saying? Well, it could take up to a year for MS to iron out GDK tool issues and devs to get more or less familiarized with those tools, but there's still the learning curve of fully saturating the power envelope on the system to keep it saturated 100% of the time on fixed clocks. And that there's the kicker: the fixed clocks. GPUs don't have an issue with saturating their silicon with work but that's also because they don't operate on fixed clocks. That basically uncaps them from otherwise limiting restraints to let games get the performance they need, maximizing the power envelope. That's still possible on a system with fixed clocks but it also requires a lot more developer-specific tuning and optimization, which will take time and effort, not all of which every developer has. Even moreso if you start to factor how marketshare could impact priorities.

I hope MS's tools make for as easy utilization of system resources to maximize effective room of their power envelope as possible; we can see some 1P optimized titles like Gears 5 doing this but it looks like 3P devs, at least for a while, will struggle to do so. By a year? Two years? Three years? It's a bit ironic since MS did message the Series X as being the best place for 3P titles but in reality they may have to rely on their 1P titles to try pushing the system to its limits if Sony's narrative with 3P game performance solidifies itself. That also means MS needs those big 1P games to hit sooner, not later.

And in all of this, what does that say about PS5? Is it being tapped out early? Well again, variable clocks haven't really been used in a games console until now, so it's hard to say. I still say that eventually, devs will have to begin contending with the fixed power budgets to GPU and CPU and begin tailoring their code to optimize the power consumption of that code, that way they don't run into issues with downclocks (which for now we can take at Cerny's word as being 2%, but realistically we don't know if it'll be more than that in practice). The benefit with their design, though, is that they won't need to really start worrying about that until a few years down the road.

In light of this I'm maybe a bit flustered with some of MS's design choices, TBQH. This is maybe an example where wanting to dual-purpose the APU is rearing its ugly head; fixed clocks and a wider net of CUs is great for a server rack, because if the clocks are just high enough to give enough performance but just low enough to ensure steady performance for prolonged periods, and you're going to be crunching a ton of compute data on them in said server rack, then that design is optimal. It's also optimal for streaming multiple One S gameplay instances, since even at a clock maybe conservative for an RDNA2 design, it's still more than enough for running virtualized One S machines.

Likewise to things like the SSD size and bandwidth, where I think the Series S factored into that decision. You can't go with too many smaller-capacity NAND modules for a solution because of space concerns, even if tried PoP and multi-channel setups. You'd want a footprint that could work not just for a larger console, but a smaller one. And you wouldn't want the bandwidth to be too high because larger bandwidth means more NAND modules or higher-performance NAND which means more power consumption which means more heat generation which means more required cooling which means increased costs.

But these (particularly the APU point) ultimately mean concessions were made to accommodate those use-cases, and this might have a heavier impact on the gaming performance of such a design than first imagined. Taking that same APU perfectly fit for a server rack and now trying to leverage it as a gaming APU? Well, now you're coming in under the power envelope and have to optimize your software to deal with it, and that requires a lot of work and time. I'm almost tempted to say this is the second generation in a row where MS let factors outside of a pure games machine influence their console design. With the XBO it was multimedia/TV, and now with Series X/S it feels like it might've been servers/Azure (for Xcloud game streaming).

Where does that leave Series X? I still think it's an excellent system in many respects, and MS has a lot going for them WRT Gamepass and upcoming 1P content. Xcloud has tons of potential, too. But I think some of the early results we're seeing also show that MS should reconsider messaging built on boasts going forward, especially boasts circling around a competitor whose specifications you don't 100% know. If MS hadn't pushed messaging around power, or things to suggest it (like the "full RDNA2" stuff back in October), I don't think some of these 3P multiplat issues would carry much weight against them. Granted, if they had an actual 1P game or two that was new at launch and optimized for Series X, the 3P multiplat issues would recede even further in people's minds, because at least the 1P games would show what the system can actually do when optimized.

But that isn't what happened, and if their going with fixed clocks is what's really affecting some of the 3P non-BC performance on the platform, then improved tools won't be enough. Even a year of further familiarity by 3P devs may not be enough to manifest any tangible performance gains over PS5. PS5 might still hold that performance lead in 3P games going into next fall, probably even longer. And if that happens, MS will be forced to rely on one thing and one thing only to keep early adopters glued: games. New games, exclusive to the ecosystem, optimized to show what the system can really do. Which will almost certainly mean closely-tied timed 3P exclusives, and 1P exclusives. Some of which need to start showing up within the next 3-4 months.

They have some confirmed: Scorn, The Medium, Exo-Mecha, Bright Memory Infinite, The Gunk, etc. There are hints at some 1P stuff for next year, like the FS2020 port and maybe Forza Horizon 5. But they'll need a lot more than that if Sony's 2021 turns out to be as stacked as it's looking: Horizon Forbidden West, GoW Ragnarok, R&C Rift Apart, Gran Turismo 7, Kena Bridge of Spirits, Deathloop (provided things haven't happened behind-the-scenes there), and now rumors of both a Silent Hill remake (reboot?) and a Metal Gear Solid remake. That's an onslaught; I'm not sure if even Halo Infinite (provided it's well-polished) is enough against all of that.

Here's wishing the best for them, because if it's really the fixed clocks providing the most resistance to easily squeezing performance out of the system, that doesn't leave the Series X with a lot of advantages over PS5. There's GamePass (though if Sony attempt their own version of it, probably through revamping PS Now, that swings things Sony's way too) and, in some areas, BC (again, if Sony add even PS1 & PS2 BC, let alone PS3, I think that strips away Series X's BC advantage). Maybe Xcloud, but I don't see that being particularly beneficial to individual games unless a game can leverage the innate benefits of multi-device local/streaming sessions for design purposes (so the game would have to know you're on Series X in one session, then on your smartphone in another, and maybe be able to tell if you're indoors or outdoors, that sort of thing). That could be a unique feature for MS considering the range of devices they'll want to integrate Xcloud into going forward.

ADDENDUM: While at it, I guess I should touch on the GPU issues affecting some PS5s as well. Reason being that, if the problem is tied to an overstressed GPU at the current power budgets, and Sony are forced to lower the power budget, that could have its own implications for PS5 titles going forward.

It's not necessarily a good idea to speculate on this further until more evidence comes forth of systems having GPU issues, though. For all we know, those problems could be related to systems with bad-yield APUs; it happens. If it turns out there's something more to it, though, I think it's fair at that point to ask whether the clock peaks are the reason, considering Sony (and MS) are very likely on "only" 7nm DUV.
This is a lot, not gonna lie. I checked out after the Beyond3D post, lol. I'll finish reading later.

The part I stopped at lines up with what Matt said on Resetera. I'm paraphrasing, and IIRC, but basically he said the same thing: that if the Series X had been made with variable clocks, it would perform better than with fixed clocks. I just remembered some others said this too and showed examples.

He said this before the consoles even launched, before any face offs, before any games were even shown running on Series X.

Cerny even said because of variable clocks the PS5 can handle some situations with game code better vs fixed clocks. That interview with DF after The Road to PS5.
 

geordiemp

Member
Sep 5, 2013
11,898
24,906
1,010
UK
Well, I came across some interesting posts on B3D that could also speak to this. This one in particular from iroboto feels like a nail-on-the-head moment and might explain why we're seeing some of the performance differences in some 3P games between PS5 and Series X, though it feels like a somewhat bleak (if realistic) take:



^ Bolded and underlined emphasis mine.

This is after they'd done some power consumption measurements on a few 3P multiplat games on Series X and realized some were only drawing around 135 to 150 watts on average. Until I read this post I hadn't actually considered the implications fixed clocks could have as a factor in fully leveraging resources. With a variable clock setup, on any downtime the system can just shift its power around to draw out more performance, and it's also a good mask for sloppy programming. Tons of computer software throughout the '90s and early '00s benefited from ever-faster CPUs because of this, and we saw what happened when the advantage of fast clocks was lost once multi-core designs came about: entire rewrites of the software had to be done in order to take advantage of the design paradigm shift.
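That paradigm shift is the classic Amdahl's-law problem, and it's the same math that makes a wide-but-slower chip harder to feed than a narrow-but-faster one. A quick sketch (standard formula, my own numbers):

```python
# Amdahl's law: a faster clock speeds up ALL code "for free", while more
# parallel units only speed up the parallelizable fraction p of it.

def amdahl_speedup(p: float, units: int) -> float:
    """Overall speedup for parallel fraction p spread over `units` units."""
    return 1 / ((1 - p) + p / units)

# With 50% parallelizable code, doubling the units gives only 1.33x...
print(round(amdahl_speedup(0.5, 2), 2))          # 1.33
# ...and even a million units cap out at 2x.
print(round(amdahl_speedup(0.5, 1_000_000), 2))  # 2.0
```

Which is exactly why software had to be restructured to benefit from more cores, while clock bumps had helped everything automatically.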

That feels to me like what the Series X might be presenting to 3P devs. All consoles up to now have used fixed clocks, but if you look at the bolded parts above, because of that, additional features like more ROPs, more cache, etc. never actually made that much of an impact unless you tailored the software specifically to them, which takes time and effort. So I actually don't know anymore if the "tools" argument is the only one at play when it comes to explaining some of the 3P results on Series X; a bigger factor could actually just be the choice of fixed clocks. And if that's the case, it's an architectural design choice, something devs will still have to work around regardless of how much the tools improve.

The irony is that a lot earlier I thought it'd be the PS5 presenting this challenge because of its variable clocks, and when Cerny gave the example of AVX 256 instructions being "particularly taxing" on the CPU, that just solidified the earlier viewpoint. But apparently I crossed some wires and forgot that, outside of consoles, variable frequency has generally been the rule in PC development, since you can't program to a particular spec and therefore to a particular frequency that spec might lock in at (and I do specifically mean PCs here, as in IBM PC-compatibles; it was different with microcomputers, since only one manufacturer made each of them and they generally shared a range of specifications that could be targeted directly, though I'm not sure whether microcomputer devs actually programmed against fixed clocks).

So on that note, like others have said here Series X (and S) aren't doing anything different by going with fixed clocks; all consoles before have used fixed clocks. But that also might have factored into previous difficulties in maximizing the power envelopes (and therefore performance) of those consoles, and that will inevitably be a factor with the Series consoles as well. That's what makes the PS5 particularly interesting since it is the first console going with variable clocks, which means, yes, it is operating at full or near-full performance 99% of the time, and it'll only dip when power budget is exceeded, just as was said back in March.

Now, do I think that will still create some complications? Yes. But those are complications that can be worried about later in the generation, which is usually when devs start trying to squeeze out more performance anyway. I didn't think I'd be saying this, but for MS's system the learning curve might actually be rougher than I initially thought, because devs will have to go through the traditional process of learning the hardware and optimizing for it over time to make sure it stays fully saturated with work, ensuring maximized use of the power envelope. Power consumption should appear more or less steady; I think Gamers Nexus actually showed what I'm talking about in their 6800 review, where the 6800's power consumption with SAM active stayed steady across some gaming benchmarks while the Nvidia 3080's (or 3070's) went up and down all over the place. And we know how much time that kind of optimization usually takes.

It literally could be years, so I'm now thinking two things, in line with a few other posters on B3D. Mainly, I'm not expecting a 20%-30% performance gain over PS5 to ever manifest itself. That's because even though there's a lot of room for devs to grow on Series X, there's at least about as much room (if slightly less) for them to grow on PS5, just in a different manner. I might go a step further and even say I'm not sure an 18% gain will ever manifest outside of some of the 1P MS titles. So it's actually maybe realistic to say that 3P game performance between PS5 and Series X will be basically even, and for games that don't adapt their usage of the Series X silicon to saturate it better, PS5 might actually retain a lead in 3P game performance. Unless those suggested "stress points" on the power budget start to take effect and force downclocking of the GPU and CPU, in which case I can see Series X having the edge in those 3P games. How often will that actually happen, though?
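For reference, the oft-quoted 18% falls straight out of the public paper specs (52 CUs at a fixed 1.825 GHz vs 36 CUs at up to 2.23 GHz, with 64 FP32 ALUs doing 2 ops per clock per CU):

```python
# Peak FP32 TFLOPS = CUs * 64 ALUs * 2 ops/clock * clock (GHz) / 1000
def tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000

series_x = tflops(52, 1.825)  # fixed clock
ps5_peak = tflops(36, 2.23)   # variable clock, "up to"

print(round(series_x, 2), round(ps5_peak, 2))  # 12.15 10.28
print(f"{series_x / ps5_peak - 1:.0%}")        # 18%
```

Of course that's a peak-rate comparison; it says nothing about how often each machine actually runs at that rate, which is the whole argument above.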

And how long will that go on? A year, like some are saying? Well, it could take up to a year for MS to iron out GDK tool issues and for devs to get more or less familiarized with those tools, but there's still the learning curve of saturating the power envelope on the system and keeping it saturated 100% of the time on fixed clocks. And there's the kicker: the fixed clocks. PC GPUs don't have an issue saturating their silicon with work, but that's also because they don't operate on fixed clocks; variable clocks uncap them from otherwise limiting restraints, letting games get the performance they need and maximizing the power envelope. That's still possible on a system with fixed clocks, but it requires a lot more developer-specific tuning and optimization, which takes time and effort, and not every developer has both. Even more so once you factor in how marketshare could impact priorities.


Firstly I would like to address the clock; let's face it, some AIB 6800XTs with air cooling have hit 2.5 GHz, so add liquid metal to a die half the size and PS5 can probably sit at 2.23 GHz all day. Sony would up the fan curve anyway, and the only reason to do that would be the memory temps, not the APU.

I had read some of the power/watts discussion. We know the GPU on XSX is not stressed, so the work is not being given to the 14-CU arrays in a timely manner. No need to measure watts; we can see it in how games run.

My thoughts are MS wanted 4 game instances, so they had to do 4 shader arrays**. They knew the hit was losing, say, XX% efficiency, but it is what it is (distributing wavefronts, parameter and LDS cache hits). I have no idea what XX% is, but Microsoft will have known for 3 years, and they did it anyway, as it's still great for a 1440p60 target and the main goal was servers, IMO.

** Note I don't know anything about server operation; I am assuming each game runs privately per shader array, and that's why they distributed RBs and primitives to each array. Again, how that affects efficiency vs. sharing over a shader engine is anybody's guess.

Look on the bright side: some workloads could favour the large shader arrays, possibly some GPU compute, maybe ray tracing, maybe some ML functions, so it's never clear-cut.
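One crude way to picture that wide-vs-fast trade-off (my own toy model; real wavefront scheduling is far more involved): treat a batch of wavefronts as taking ceil(waves / CUs) issue passes, with each pass shorter at a higher clock. The wide machine only pulls ahead when batches are big enough to fill it:

```python
import math

def batch_time(waves: int, cus: int, clock_ghz: float) -> float:
    """Toy cost: issue passes needed for the waves, scaled by clock."""
    return math.ceil(waves / cus) / clock_ghz

def wide(waves):    # Series X-like: 52 CUs at 1.825 GHz
    return batch_time(waves, 52, 1.825)

def narrow(waves):  # PS5-like: 36 CUs at 2.23 GHz
    return batch_time(waves, 36, 2.23)

print(wide(1000) < narrow(1000))  # big batch: the wide GPU wins -> True
print(wide(36) < narrow(36))      # batch that just fills 36 CUs: clock wins -> False
```

Obviously real GPUs pipeline and overlap work, but it shows why under-fed wide arrays can lose to a narrower, faster machine.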

I'm also guessing XSX will perform better with the whole game and graphics in the 10 GB, but who knows; look at what bandwidths the 6800 has.

We still know so little: for PS5, everything except die size, the 36 CUs and the clocks; for XSX, how those 14-CU arrays are supported (LDS/param cache, and whether they did anything expansive here).

I know I have some GIF fun, especially with my boy Riky, but let's face it, the consoles are twins.
 
Aug 28, 2019
3,949
8,211
630
www.instagram.com

Yep, and I'm now starting to see how all of that might just be true. Hell, I might even owe Matt an apology in a while, even though their posts are pretty insubstantial snark a lot of the time.

I don't think it necessarily reflects badly on MS; they went in with a design philosophy proven to bear results in the past. But the shift to variable clocks (with a fixed power cap) on consoles might accelerate development paradigms much like the console industry's shift from mostly assembly language to C in the mid '90s (coinciding with the shift to mainstream 3D). And I think, maybe, the desire for conformity between a high-power and low-power dual-console approach (Series X, Series S), not to mention commonality for a high-power APU intended for both gaming and server-blade (Azure) usage, might have forced certain compromises that could impede maximum utilization of resources in a gaming environment (or at least raise the effort needed to achieve it by several degrees).

If there's one company that has the financial means and resources to leverage that type of hardware over time (not to mention overall corporate interest in doing so thanks to Azure, etc.), it's probably Microsoft. It might be a bit of a Herculean effort, and might be for them what maximizing the Cell was for Sony back with the PS3, but MS clearly have the means to do it. Question is how much will they be willing to assist 3P devs in doing the same?

If MS's 1P output hits a certain threshold and level of consistency starting next year, then in truth they may not need to rely on 3P content to that degree. They're on course to become the largest platform-holder publisher ever (in terms of volume of games) behind Sega, who published nearly 150 games (across multiple platforms) in 1995. And unlike Sega, that won't just be on MS-branded devices (though it doesn't mean most, if any, will be on Sony or Nintendo platforms, either). But yeah, if it's really down to the fixed clocks, the burden's going to shift onto their 1P and collaborative 3P exclusive content to fully leverage the hardware, because for most 3P multiplats the PS5 might retain the lead or at least be flush even with Series X, and that would kind of betray one of MS's marketing points from earlier this year.

They'd need to make up for that in some way, and that comes down to leveraging ecosystem exclusives to push the system. Knowing that, nothing's going to stop Sony from doing the same with their own titles on PS5, so no matter how you cut it, MS is in for a fight this gen, and they have a surprisingly steep hill to climb.


This seems like a good take on it, and as time goes on it becomes more apparent that various design choices in the Series X were made to accommodate the server and game-streaming purposes of multiple virtualized instances on the GPU, which could create some challenges in leveraging the full hardware. Not that PS5 doesn't have some of its own potential challenges there, because it does. But the challenges are unique to each system.

So in many ways I agree the systems are practically twins, particularly when talking about the CPU and GPU. However, I think the question is: since MS is using fixed clocks, which bring unique challenges in saturating the hardware with work at all times, how much more difficult will that be for most devs compared to Sony's setup at present, or even once Sony's setup starts to bring its own challenges to bear (the GPU's power budget being exceeded and downclocking coming into play, requiring tuned management and optimization of game code to keep it power-efficient, inside the power envelope, without severe clock reductions on the CPU or GPU)?

In that respect, the systems do clearly bear out differences in design, and that's where my hope is that MS's GDK tools and optimizations reach a point where maximizing use of the power envelope is as easy for 3P devs as it will inevitably be for their 1P teams in time. Because for the areas of advantage you list WRT Series X's setup, we already have a sampling going back to PS4 of how that can be leveraged, but it was mainly leveraged by 1P titles.

I think at least on some things like RT, we're already seeing some of the benefits of their design, regardless of how much or how little those benefits exceed 3P titles on Sony's system currently. Leveraging the wider net of CUs wouldn't be as much of a necessity if variable clocks were implemented, though, so no matter what, there's potentially more work for 3P devs on Series X to get certain results in line with PS5 titles, the range depending on how efficient MS's tools become and how willing they are to assist devs there.

If things do shift to programming models around, say, mesh shaders and ML, however, that changes general development models quite a lot, in ways that will tend to benefit MS's designs more often than not. Which is probably what they're banking on, hence the design decisions they've made, and also hence unifying everything through GDK. I guess it'll be very interesting to see how things shake out in the meta of the game-dev scene within a year or two.
 