
Imo MS locking at sustained speeds is a mistake to be corrected

I’ve just got to laugh at all the people overreacting and calling the whole gen over the first few multiplatform games and the most minuscule of performance differences.

It was stated before that the PS5 is easier to develop for, right now. Things can change and they’ve even upped the frequency of the Xbox One via an update before, not that they’ll even need to.

If a year or two in the X is still being outperformed then yes, it can be called but until then I think people are jumping the gun. It might come back to bite them in the ass after all the gloating and armchair analysis.
 

Snake29

RSI Employee of the Year
man all that unlimited 12TF, True 4K power talk didn’t age well.

From the beginning this was the wrong marketing. So far they have underdelivered on their promises. Yes, Sony was quiet, but so far they are delivering; they only have to improve some OS and BC features.
 
Last edited:

kyliethicc

Member
Nah, MS was too conservative. The SX still has room to stretch, but I guess they calculated that a locked 1.8 GHz is enough to win.
It just bugs me to see higher potential go to waste.
MS should have also gone with the narrow-and-fast approach.

A 44 CU GPU at 2156 MHz would have landed them at the same compute power as their current setup. ROPs and rasterization rate would have been 18% faster too as a result.

EDIT: I mean, 18% faster than their current 52 CU @1825 MHz setup.

Fun fact: their One X devkits are equipped with 44 GCN CUs, and One X retail units have 4 of them disabled.


They wanted to build a chip for XCloud to run 4 Xbox One S games at once.
Phil also said they wanted to double the One X's 6 TFLOPs. So 12 TFLOPs was the marketing target.

Xbox One S has 14 CUs (12 active) and uses around 5 GB of RAM.
14x4=56 CUs. 5x4=20 GB. And 20 GB of RAM uses a 320 bit bus.

So they built a die with 56 CUs and a 320 bit bus, and so the XCloud chip was built.

But how to use it as a $500 console? Well, 20 GB RAM cost too much, so they went down to 16 GB and made it work with the same die. (Thus the split bandwidth.)

Then consoles need to disable 4 CUs for yields. So 52 active CUs for the XSX.

Plus, they save money and get better yields because they only need 48 active usable CUs for their server chips, so they can salvage some of the chips that don't yield the 52 CU min needed for the console supply.

What clock speed does 52 CUs need to be set at to hit 12 TFLOPs?
52 CUs x 64 shaders per CU x 2 IPC x 1800 MHz = 11.98 TFLOPs

So @1.8 GHz they'd only have 11.98 TFLOPs, not quite the round 12 they wanted for marketing, so 1.825 GHz got them to their goal.

The Xbox Series X SoC was not made to beat the PS5. It was a dual design chip built for XCloud servers to run Xbox One games to be streamed to phones via Game Pass, but also serves as their $500 Xbox One X replacement with 2X the floppies.
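The arithmetic in this post can be checked with a quick back-of-envelope script (the 64 shaders per CU and the 2 FLOPs-per-clock FMA factor are the figures the post itself uses; nothing here comes from an official spec sheet):

```python
# Back-of-envelope math from the post: scale Xbox One S resources by 4x,
# then find the clock needed for the 12 TFLOPs marketing target.

def tflops(cus, mhz, shaders_per_cu=64, flops_per_clock=2):
    """FP32 TFLOPs = CUs x shaders/CU x 2 (FMA) x clock."""
    return cus * shaders_per_cu * flops_per_clock * mhz * 1e6 / 1e12

# 4x an Xbox One S (14 CUs on die, ~5 GB of RAM in use):
print(14 * 4)   # 56 CUs on the XCloud die
print(5 * 4)    # 20 GB of RAM, which implies a 320-bit bus

# 52 active CUs at the two candidate clocks:
print(round(tflops(52, 1800), 2))   # 11.98 -- just short of 12
print(round(tflops(52, 1825), 3))   # 12.147 -- hits the marketing target
```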
 
Last edited:

ethomaz

Banned
You want an upclock like MS did with the Xbox One, but remember the upclock on the Xbox One was done six months before the machine launched... meaning it was still in the design/project phase (even if production was about to start).

So is it possible to upclock now, after launch? Well, it is possible... just look at the PSP or Vita (I don’t remember which) upclock years after release.

But I don’t think MS will do that... it is too risky.

Another point to keep in mind is whether the actual CUs are capable of going higher... there are some leaks that say the Series X CUs are actually RDNA CUs and not RDNA 2 like in the PS5... if that is the case they won’t have room to upclock.
 
Last edited:

nosseman

Member
You want an upclock like MS did with the Xbox One, but remember the upclock on the Xbox One was done six months before the machine launched... meaning it was still in the design/project phase (even if production was about to start).

So is it possible to upclock now, after launch? Well, it is possible... just look at the PSP or Vita (I don’t remember which) upclock years after release.

But I don’t think MS will do that... it is too risky.

Another point to keep in mind is whether the actual CUs are capable of going higher... there are some leaks that say the Series X CUs are actually RDNA CUs and not RDNA 2 like in the PS5... if that is the case they won’t have room to upclock.

I think a straight-up upclock is impossible. What is possible is to remove the cap at 1.825 and then tell the system: "OK, for short periods you can exceed the power envelope."

But I am not sure it is needed, as I am not sure it is what's causing the problem.

It is still pretty early, and in some games the XSX wins, in some it's pretty much dead even, and in some the PS5 wins. Also, in some games and situations there seem to be strange hiccups with the XSX, almost like microstutters.

As it stands now I am more inclined to think it's bad APIs and/or bad optimization.

Edit: What I mean by "OK, for short periods you can exceed the power envelope" is something like Intel's Turbo Boost.

 
Last edited:
To add to my last post: the minuscule difference in perf could be mitigated through the games themselves, because it’s the games that matter, according to this forum.

Xbox doesn’t have the track record with game studios, with a few exceptions, but they now have 21+ studios and are guaranteed a hit or three, and that’s what will matter in the end.
 

Md Ray

Member
They wanted to build a chip for XCloud to run 4 Xbox One S games at once.
Phil also said they wanted to double the One X's 6 TFLOPs.
So 12 TFLOPs was the marketing target.

Xbox One S has 14 CUs (12 active) and uses around 5 GB of RAM.
14x4=56 CUs. 5x4=20 GB. And 20 GB of RAM uses a 320 bit bus.

So they built a die with 56 CUs and a 320 bit bus, and so the Xcloud chip was built.

But how to use it as a $500 console? Well, 20 GB RAM cost too much, so they went down to 16 GB and made it work with the same die. (Thus the split bandwidth.)

Then consoles need to disable 4 CUs for yields. So 52 active CUs for the XSX.

Plus, they save money and get better yields because they only need 48 active usable CUs for their server chips, so they can salvage some of the chips that don't yield the 52 CU min needed for the console supply.

What clock speed does 52 CUs need to be set at to hit 12 TFLOPs?
52 CUs x 64 shaders per CU x 2 IPC x 1800 MHz = 11.98 TFLOPs

So @1.8 GHz they'd only have 11.98 TFLOPs, not quite the round 12 they wanted for marketing, so 1.825 GHz was all they needed to reach their goal.

The Xbox Series X GPU was not made to beat the PS5. It was a dual-design chip built for XCloud servers to run Xbox One games streamed to phones via Game Pass, and it also serves as their $500 One X replacement with 2X the floppies.
That makes a lot of sense. Thanks. geordiemp geordiemp was saying this too. Need to pay more attention.
 

kyliethicc

Member
That makes a lot of sense. Thanks. geordiemp geordiemp was saying this too. Need to pay more attention.
I agree with your post in theory, if they were just making a console chip.

But clearly that's not their key business priority anymore, so they built a compromise chip to serve multiple functions and also meet their marketing target.

The biggest flaw in Xbox console design over the last 5 years is obsessing over TFLOPs. The One X was 6, the SX is 12! Nice big round numbers, sure, but totally arbitrary. Who cares.

The architects and engineers should be allowed to design the system for ideal performance, unrestricted by marketing pressure. Like the PS5 was.
 
Last edited:

M1chl

Currently Gif and Meme Champion
For the smart asses out there: John Sell, of past Intel glory, is behind the APU (outside of AMD; meanwhile you all say "lord Cerny" as if tens of years of work at ATi/Radeon count for nothing).

Ronald is probably tasked with other stuff.

And, me being an obvious (and, these days, the only logical choice) nVidia fan, Radeon is behind the performance curve; for the midgen refresh they should demand the Radeon team deliver at least something remotely powerful.

Using something like "Infinity Cache" is just a poor crutch, not to mention an expensive one; in a better GPU that die area goes to tensor cores and that kind of now-standard thing.

However, I feel like the XSX still should perform better, and hopefully we will see some write-up on why this is, from a reputable source.
 
The way I see it, the PS5 has a 20% advantage for the following reasons (and in the following scenarios).

When you are issuing jobs to a GPU, you have a time budget for those jobs to complete, and a lot of the time those jobs need to be synchronized. This means the slowest job bottlenecks the system: we have to wait to synchronize, leaving GPU utilization below 100%. To take up the slack, you can issue more, smaller jobs in the meantime, if you have any to run. That's the issue: most engines won't have any jobs to issue while they wait for the previous pipeline to complete. Most engines just don't have anything to do in this time!

While game code is waiting for jobs to complete, within the CU itself you have SIMD units which ALSO need to be saturated. This is usually where GPU parallelism shines, as SIMDs are great at processing pixels in parallel in a fixed timestep, and the SIMDs can share a cache, but that's within the CU itself. So you potentially have underutilized CUs while they are performing jobs, CUs which are idle or waiting, and sharing of higher-level (slower) caches between CUs versus SIMDs within a CU.

The reason we're seeing a 20% increase on PS5? Because it's clocked 20% faster for the same tasks, and because any engine distributing work across more CUs hits diminishing returns. There are no games utilizing the entirety of the PS5's CUs, never mind the Series X's, and there probably won't be for a long time.

To improve the situation, the engine pretty much has to be written from scratch with the above in mind, which of course is possible. For multiplats, I believe id Tech 7 has a brand-new job system designed for this.
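A toy sketch of that barrier cost (made-up job sizes, nothing measured from real hardware): when every job in a batch must finish before the next batch starts, the slowest unit sets the pace, so a wider machine only stays busy if there are enough jobs to go around.

```python
# Toy model of GPU job synchronization: N workers (think CUs) share a bag
# of jobs, then hit a barrier that waits for the slowest worker.
# Utilization = total busy time / (workers * wall-clock time).

def utilization(job_times, workers):
    loads = [0.0] * workers
    for t in sorted(job_times, reverse=True):   # longest-job-first greedy
        loads[loads.index(min(loads))] += t     # assign to least-loaded worker
    wall = max(loads)                           # barrier waits for the slowest
    return sum(job_times) / (workers * wall)

jobs = [5, 4, 4, 3, 3, 2, 2, 1]   # 24 units of uneven work

print(round(utilization(jobs, 4), 2))   # 1.0 -- narrower: no idle time
print(round(utilization(jobs, 8), 2))   # 0.6 -- wider: idling at the barrier
```

With the same bag of work, four workers stay fully busy while eight spend 40% of the wall time waiting at the barrier, which is the diminishing-returns effect the post describes.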
 
Last edited:

Kumomeme

Member
Microsoft went wide and slow, as opposed to the PS5's narrow and fast. There are reasons why they opted for 1.8 GHz and more CUs (52). The Xbox engineering team is capable. Like Cerny said, for the PS5 they could have gone with 48 CUs at a slower speed, but 36 CUs at a faster speed still brings similar performance. MS apparently chose the former over the latter. Surely there are reasons the team made this decision: they probably aimed for a small console from the beginning, so they needed to account for thermals and the cooling solution. Also, ray tracing performance relates to CU count, and RT is supposed to be the main next-gen attraction, so it's no surprise if they focused on that. Hence the wider-and-slower philosophy. No need to fret; everything has strengths and weaknesses. What's probably holding them back is their GDK tooling, and they need capable first-party teams to squeeze out the console's strengths and advantages.

Either way, it's funny that people used to laugh at the PS5's over-2 GHz GPU, to the point where some said Sony did it at the last minute to close the gap because the XSX specs caught them off guard, lmao.
 
Last edited:

Larvana

Member
This brings up another point...

The XSS is a complete disaster. It is not just a simple "downrez": games are flat-out performing worse, running at 30 fps vs 60.

This is a stillborn console IMHO, and looks like a horrible value compared to PS5 DE.

What the hell was Phil thinking? They are going to have to carry this baggage around for the entire gen...ugh. Just awful.
No wonder why people think neogaf is a Sony platform with mindsets like yours, lol... ignorance is bliss.
 

RaZoR No1

Member
Did we ever get a performance/frequency bump through an update?
I don't think we will get one; even though in the current situation it could help a bit, it probably would not solve all the problems.
But why should MS push the Series S/X further to the limit and risk broken consoles? The current specs were tested and proved sufficient for the cooling. I think they do not want any RROD situation, therefore they do not want to risk anything.
 

AeneaGames

Member
Nah, MS was too conservative. The SX still has room to stretch, but I guess they calculated that a locked 1.8 GHz is enough to win.
It just bugs me to see higher potential go to waste.

Since MS loves to talk about power, it is rather safe to assume that if they could have gone higher with the current box and cooling, they most certainly would have.

Yes, the GPU could have been clocked higher, but that decision would have had to be made before designing the cooling system and housing.

If it were as easy as a firmware upgrade to clock it higher, that would mean it was designed to run at those speeds from the get-go; then why on earth would they not do that right away?

This really sounds a lot like the hidden-GPU fantasies of the X1.

It's a great console, nothing wrong with it, nothing needs changing, except perhaps lower settings in the game engines...
 

onQ123

Member
Did we ever get a performance/frequency bump through an update?
I don't think we will get one; even though in the current situation it could help a bit, it probably would not solve all the problems.
But why should MS push the Series S/X further to the limit and risk broken consoles? The current specs were tested and proved sufficient for the cooling. I think they do not want any RROD situation, therefore they do not want to risk anything.


PSP or PS Vita did


Edit: PSP


https://bit-tech.net/news/psp_processor_gets_a_boost/1/

We know quite a few of you (most notably our former Chief Editor, Wil Harris) love your PSPs. And who can blame you? It really is a great little system. But since the PS3's debut, people have been looking to hook them both up - and Sony's version 3.5 firmware (released at the very end of May in North America) finally let that happen. But what they didn't tell us is that it does something else cool, too.

The new PSP firmware slaps some "go faster" stripes on the little handheld, bumping its processor up 50 percent from a wimpy 222MHz to 333MHz. That's right - no new hardware required. The PSP actually has always run on a bit of an underclocked processor, presumably to aid in battery life until some extra horsepower was needed.

Now, game developers will be able to flag their software for faster speeds if necessary. The bump won't be noticeable in older games, as they have all been programmed on and will run at 222MHz. Instead, developers will be required to tell the device whether to step up its speeds - the firmware just unlocks the potential. This way, older games don't act buggy or accelerated without intent.

Sony's latest speed bump won't be used by many in the commercial sector just yet, though the new Ratchet and Clank has brought speeds up to 266MHz. However, home brew source code has long since fiddled with the device's clock speed through modified firmware, often accelerating it at the expense of battery life. It seems that finally, proper developers will get the chance as well.

Initially, Sony hid the increase under mounds of paperwork detailing the firmware upgrade, not much interested in making it a public note. However, SCEA has now finally confirmed that the bump up did happen, after being none too pleased (from what we understand) with the initial leak. Unfortunately, a worldwide update has not been offered yet - the firmware upgrade is still North America only for the time being.
 
Last edited:

Bitmap Frogs

Mr. Community
That clock was very deliberate to deliver 12tflops, I wouldn't be bothered about a couple of cross gen launch games in the grand scheme of things.

None of these games are next gen, none are using the RDNA2 performance features or even Velocity Architecture. There is a lot more to come over the next seven years.

Waiting! The new exclusive game from Microsoft game studios, available for free to all owners of Microsoft hardware. Waiting!
 

RaZoR No1

Member
PSP or PS Vita did
I totally forgot about that.
But it was not a typical overclock; they just unlocked CPU/GPU speeds that had been there since release.
If you owned a hacked one, you could already access them much earlier.
I think it is even the same for the Switch: they underclocked the already conservative mobile SoC for more battery time.

Of course, this could apply to the PS5 and XSX too: theoretically some modes are not unlocked, for different reasons.
 

Pedro Motta

Member
I think the SX design has more to offer, but by locking it at only 1.8 GHz it seems like MS is selling it short.
Look at the RX 6800, the closest GPU to the SX.

The RX 6800 runs Metro at 1440p ultra at around 2.25 GHz and takes only 230 W. It delivers 85 fps and beats the 2080 Ti FE.
From reviews, a 7% overclock on the 6800 gives about a 3% fps improvement.
Working backwards, 1.8 GHz to 2.25 GHz is a 25% increase, so if we capped the RX 6800 to 1.8 GHz it would lose maybe 10-12% in perf; Metro at 1440p might lose ~10 fps. 🤷‍♀️

Let's see if any reviewers test capped 6800/6800 XT clocks. Please share if you come across them.

[attached: RX 6800 benchmark charts and a clock-speed comparison]
And how are they going to correct it now? Flash new firmware and change all the tools and profilers? Lol
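The back-of-envelope scaling in the quoted post can be written out (the 3%-fps-per-7%-clock ratio is taken from the overclocking reviews quoted above; treat the result as a rough extrapolation, not a measurement):

```python
# Rough scaling model from the quoted post: reviews showed a 7% overclock
# on the RX 6800 giving ~3% more fps, so assume fps moves at ~3/7 of the
# relative clock change.

FPS_PER_CLOCK = 3 / 7   # ~0.43, from the quoted overclock result

def fps_at_clock(base_fps, base_mhz, new_mhz):
    clock_delta = (new_mhz - base_mhz) / base_mhz
    return base_fps * (1 + FPS_PER_CLOCK * clock_delta)

# RX 6800: ~85 fps in Metro 1440p ultra at ~2250 MHz, capped to XSX's 1825 MHz:
print(round(fps_at_clock(85, 2250, 1825), 1))   # ~78 fps, roughly 8% lower
```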
 

ZywyPL

Banned
Wouldn't it go against the Hovis method, which fine-tunes each unit individually, like PC guys do with their CPUs/GPUs? I think it's too late for any changes to the frequency/voltage curves once the consoles have left the production line and are already out in the wild. They could of course set every console to the exact same profile, like out-of-the-box PC components, but I don't think they would ever want to do that after all the effort put into incorporating the Hovis method. Not that they need any more power to begin with; I think people who desperately want their preferred console to crush the opposing box every single time are just overreacting. They will do just fine in the longer run, especially once last-gen platforms stop being supported.
 

onQ123

Member
I totally forgot about that.
But it was not a typical overclock; they just unlocked CPU/GPU speeds that had been there since release.
If you owned a hacked one, you could already access them much earlier.
I think it is even the same for the Switch: they underclocked the already conservative mobile SoC for more battery time.

Of course, this could apply to the PS5 and XSX too: theoretically some modes are not unlocked, for different reasons.



The PS5 is actually able to go above 2.23 GHz, but it's limited to make sure everything works right at those clock rates. So I'm guessing a madman could let the CPU drop down to PS4/PS4 Pro clock rates and then push the GPU clocks over 2.5 GHz.
 

Md Ray

Member
I agree with your post in theory, if they were just making a console chip.

But clearly that's not their key business priority anymore, so they built a compromise chip to serve multiple functions and also meet their marketing target.

The biggest flaw in Xbox console design over the last 5 years is obsessing over TFLOPs. The One X was 6, the SX is 12! Nice big round numbers, sure, but totally arbitrary. Who cares.

The architects and engineers should be allowed to design the system for ideal performance, unrestricted by marketing pressure. Like the PS5 was.
My thoughts exactly.
 

mrmeh

Member
Both consoles are similar in performance... it was speculated before release that devs have been having teething issues with the new Xbox GDK, which would explain some of the issues or slight performance drops in a few of the handful of games tested so far.

Both consoles are the same price and have a similar power envelope.

I'm an XSX owner and I don't care if it's a few fps slower or faster than a PS5... what a pathetic existence that would be.

I do want devs to get the most out of it though, so hopefully MS can improve their tools and APIs. It's harsh to judge MS too badly; they have a bigger ecosystem with DirectX to manage, and I think they have done an excellent job with the form factor and engineering of the XSX.

Strong competition makes things better for both ecosystems.
 
Last edited:
You mean all the people who were gloating and doing armchair analysis before the consoles launched? Yeah, they definitely got bitten in the ass.

Both sides were gloating as much as each other, it’s not exclusive to one side. I did my fair share too but at least I can admit it unlike some.

As I said though, the difference is minuscule especially compared to XB1 vs PS4 so it’s literally nothing to worry about.
 

Three

Member
Did we ever get a performance/frequency bump through an update?
I don't think we will get one; even though in the current situation it could help a bit, it probably would not solve all the problems.
But why should MS push the Series S/X further to the limit and risk broken consoles? The current specs were tested and proved sufficient for the cooling. I think they do not want any RROD situation, therefore they do not want to risk anything.
Only for the Vita or PSP as far as I remember.
 

diffusionx

Gold Member
Take it from someone who screwed up and got a One early on: instead of wondering about MS design flaws or waiting for dev-kit secret sauce, just buy the better system and enjoy it. You backed the wrong horse; just reverse course while it’s still early.

You want an upclock like MS did with the Xbox One, but remember the upclock on the Xbox One was done six months before the machine launched... meaning it was still in the design/project phase (even if production was about to start).

So is it possible to upclock now, after launch? Well, it is possible... just look at the PSP or Vita (I don’t remember which) upclock years after release.

The PSP CPU was purposely underclocked to 222 MHz for battery life, but the CPU could always run at 333 MHz; they waited until the more power-efficient Slim release to enable it.
 

rnlval

Member
Oh longdi longdi ... Not so long ago:

So what you're saying is that you now want Microsoft to do a "last minute overclock"?
For certain games at 4K resolution, the XSX's fixed CPU power allocation with a fixed clock frequency is wasteful, e.g. some games do not use AVX2 256-bit across all seven available CPU cores, yet the CPU keeps its full power allocation.

MS should offer developers the ability to configure the CPU:GPU power allocation behavior, e.g. a single-CCX mode still has 8 threads, which could cut the CPU power budget in half and increase the GPU's clock speed.
 
Last edited:

DustQueen

Banned
What's worse is the series S clocks. 1.565 Ghz. I mean, really ? That is a low clock speed for rdna 1, let alone rdna 2. I don't understand why they are being so conservative.
I do not really think so... but let's fantasize a little.
Sony actually did it with the PSP back in the day...

Maybe MS can push clock speeds with software updates? Because if you look at the cooling both the S and X have, they'd probably handle it fine. So maybe they'll unlock an extra 200-400 MHz in 2 years as a marketing move or something.
 

rnlval

Member
They wanted to build a chip for XCloud to run 4 Xbox One S games at once.
Phil also said they wanted to double the One X's 6 TFLOPs. So 12 TFLOPs was the marketing target.

Xbox One S has 14 CUs (12 active) and uses around 5 GB of RAM.
14x4=56 CUs. 5x4=20 GB. And 20 GB of RAM uses a 320 bit bus.

So they built a die with 56 CUs and a 320 bit bus, and so the XCloud chip was built.

But how to use it as a $500 console? Well, 20 GB RAM cost too much, so they went down to 16 GB and made it work with the same die. (Thus the split bandwidth.)

Then consoles need to disable 4 CUs for yields. So 52 active CUs for the XSX.

Plus, they save money and get better yields because they only need 48 active usable CUs for their server chips, so they can salvage some of the chips that don't yield the 52 CU min needed for the console supply.

What clock speed does 52 CUs need to be set at to hit 12 TFLOPs?
52 CUs x 64 shaders per CU x 2 IPC x 1800 MHz = 11.98 TFLOPs

So @1.8 GHz they'd only have 11.98 TFLOPs, not quite the round 12 they wanted for marketing, so 1.825 GHz got them to their goal.

The Xbox Series X SoC was not made to beat the PS5. It was a dual design chip built for XCloud servers to run Xbox One games to be streamed to phones via Game Pass, but also serves as their $500 Xbox One X replacement with 2X the floppies.
The XSX GPU clock speed is 1825 MHz.

52 CUs x 64 shaders per CU x 2 IPC x 1825 MHz = 12.147 TFLOPs (vector shader math units only, not including scalar units). The 2 IPC refers to FMA (fused multiply-add) operations.

A CU includes texture mapping units, texture filtering units, ray-tracing units, vector stream processors, local data store, L0 cache, scalar processors, etc.
 
Last edited:
Variable clocks are bad they said... last minute clock boost to make up for lack of power they said...
Smartshift is a marketing gimmick they said... it won't affect performance they said...

Few months later...

[attached meme image]


Lmao... for real though, nothing wrong with seeing the writing on the wall. OP understands.
 

Onironauta

Member
Both consoles were designed by experts in their fields. I'm sure engineers at MS had a good reason for this choice.
Maybe SX's thermal design would not handle higher clocks even in short bursts.
 

Whitecrow

Banned
If the XSX could run at higher clocks, do you think they would leave it as it is now?

The PS5 can clock higher thanks to having fewer CUs. It also punches above its weight thanks to a more specific, dedicated API than DX12U, which must support all Xboxes and PCs.

Everything is set in stone now.
 

ToadMan

Member
Well, the RX 6800 at 2.2 GHz takes only 230 W. The Zen 2 mobile APU is very watt-friendly, probably another 30 W max. Then you add in the BR drive, wireless, SSD, everything. I don't think the SX will exceed 280-300 W.

Straight away they’d need a bigger PSU if they wanted to reach those power consumption levels. The XSX has a 315 W PSU; 70% load is 220 W, and even pushed to 80% that’s only 252 W. Going beyond that isn’t recommended: you’ll get power spikes, and that can lead to shutdowns and reliability problems.

It also bumps up the heat output, which would force the system to throttle clocks for thermals, so the XSX would need a boost system like a PC has. MS would have to build an algorithm for that and let devs experiment with it to try to get the performance they want.

What it would also mean is that the XSX gets what many erroneously said the PS5 had: variable performance depending on ambient temperature.

But this is all hypothetical; the clocks on the XSX are fixed and can’t be raised, because there’s not sufficient power available.
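The PSU headroom argument works out as simple percentages of the 315 W supply (the 70-80% sustained-load rule of thumb is the poster's assumption, not a published spec):

```python
# PSU headroom math from the post: a 315 W supply, with the common rule of
# thumb of keeping sustained draw to 70-80% of rated capacity.

PSU_WATTS = 315

sustained_70 = PSU_WATTS * 0.70   # ~220 W -- about the XSX's actual draw
sustained_80 = PSU_WATTS * 0.80   # 252 W -- the practical ceiling

print(round(sustained_70), round(sustained_80))
headroom = sustained_80 - sustained_70
print(round(headroom))   # only ~32 W of headroom for any clock bump
```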
 
Hmm, if the rumour that the Xbox is partially RDNA1 is true, that could also explain why they need to stick to slower clock speeds. Remember, the RDNA1 5700 XT doesn't clock as high as those new RDNA2 GPUs from yesterday.
 

Allandor

Member
I think the SX design has more to offer, but by locking it at only 1.8 GHz it seems like MS is selling it short.
Look at the RX 6800, the closest GPU to the SX.

The RX 6800 runs Metro at 1440p ultra at around 2.25 GHz and takes only 230 W. It delivers 85 fps and beats the 2080 Ti FE.
From reviews, a 7% overclock on the 6800 gives about a 3% fps improvement.
Working backwards, 1.8 GHz to 2.25 GHz is a 25% increase, so if we capped the RX 6800 to 1.8 GHz it would lose maybe 10-12% in perf; Metro at 1440p might lose ~10 fps. 🤷‍♀️

Let's see if any reviewers test capped 6800/6800 XT clocks. Please share if you come across them.

[attached: RX 6800 benchmark charts and a clock-speed comparison]

Your assumptions go in the wrong direction.
The 6800 has a really, really big cache which helps quite a lot here.

And my ears really thank MS for that quiet box. Yes, they could clock it higher, but then they would need more power, produce more heat, and run the fan faster. I'm happy with how this thing is built, but the software side is a whole other story.
We can even see this on PC, where DirectX 12 is, after so many years, still not the norm. And when it is used, chances are high that the DX11 mode works better.
It also does not help that the Xbox uses DirectX 12, because that makes PC ports easy. If the game works, developers won't optimize much, especially when a release date has been announced and the game must ship. The PS5 has the advantage here that developers have to optimize for the system, precisely because it is not done with a simple PC port. The API is different, so they must change things, and along the way they automatically optimize the game/engine for the system.
It also didn't help that developers got their dev kits so late.
MS is a software company, but they are really slow at delivering software.
 

Greggy

Member
You want an upclock like MS did with the Xbox One, but remember the upclock on the Xbox One was done six months before the machine launched... meaning it was still in the design/project phase (even if production was about to start).

So is it possible to upclock now, after launch? Well, it is possible... just look at the PSP or Vita (I don’t remember which) upclock years after release.

But I don’t think MS will do that... it is too risky.

Another point to keep in mind is whether the actual CUs are capable of going higher... there are some leaks that say the Series X CUs are actually RDNA CUs and not RDNA 2 like in the PS5... if that is the case they won’t have room to upclock.
So the XSX is the RDNA1 console now? I think I've heard it all. MS quotes AMD saying that they are the only console with all the RDNA2 features but GAF knows "some leaks".
I guess we'll go with the "leaks" (the console is in everybody's hands, was the subject of multiple developer conferences but we still need leaks).
 

ethomaz

Banned
So the XSX is the RDNA1 console now? I think I've heard it all. MS quotes AMD saying that they are the only console with all the RDNA2 features but GAF knows "some leaks".
I guess we'll go with the "leaks" (the console is in everybody's hands, was the subject of multiple developer conferences but we still need leaks).
It is not a GAFer leaker... it is an AMD leaker.
The Xbox indeed is the only console GPU that supports all the new RDNA 2 features with DirectX 12U.
That doesn't mean the hardware is fully RDNA 2 (it is not... even if you ignore the CUs and front end, it has no Infinity Cache, for example).

BTW, the CU IPC increase from Xbox One X to Xbox Series X is 25%, which matches exactly the GCN-to-RDNA1 IPC increase... the RDNA2 CU IPC increase is around 10-15% over RDNA1.
 
Last edited:

John Wick

Member
Both sides were gloating as much as each other, it’s not exclusive to one side. I did my fair share too but at least I can admit it unlike some.

As I said though, the difference is minuscule especially compared to XB1 vs PS4 so it’s literally nothing to worry about.
I don't think so. MS fans were far more vocal spreading FUD, especially the 12-teraflops nonsense.
 

Papacheeks

Banned
Nah, MS was too conservative. The SX still has room to stretch, but I guess they calculated that a locked 1.8 GHz is enough to win.
It just bugs me to see higher potential go to waste.

They were not conservative in how they designed the Series X; they just don't have the same design philosophy as Sony, and their hardware reflects that. They have always tried to make a powerful PC you can use from the couch. The OG Xbox was basically that. Same with the Xbox 360.

Sony has a different approach to gaming than MS, so the products each company produces reflect that.

The Series X has a ton of raw power. The issue is that it's tied down by its use of DirectX and whatever updates happen along the way. All of the extra speed of the SSD is tied to the Velocity Architecture, which in turn uses DirectStorage, part of a DirectX rewrite that isn't even finished and won't be until sometime next year.

That's why, when you see how far along Sony's games are, they all have ray tracing in some shape or form. It's also why we heard Sony's dev tools were way ahead of Xbox's in terms of maturity. I think things will even out a little next year once the DirectX features are finished.

If Xbox had gone with something that was not tied down to Windows, this thread would make more sense.

But honestly, these are two companies with different visions of the industry, and therefore the choices in how they design their hardware differ.
 