
Xbox Series X CPU Performance May Be on Par With Ryzen 5 1600/1600 AF, Analysis Suggests


The Xbox Series X's rumored 12 TFLOPS figure was officially confirmed earlier this week, alongside some new details on the console's processor. Most of the analyses shared so far seem to have ignored the console's CPU performance, but a new one suggests some interesting things.

John Prendergast has been taking a good look at what is currently known about both next-gen consoles on his blog. He recently performed a calculation on the Xbox Series X's CPU performance based on the latest reveals, including Phil Spencer's statement that the new console offers four times the performance of the Xbox One. After looking at various Cinebench results, using UserBenchmark as well for comparison, the analysis suggests that the Xbox Series X's CPU processing power is on the same level as the Ryzen 5 1600/1600 AF, roughly matching the power of the leaked Flute APU that emerged last year.

The full analysis, which goes quite in-depth, can be found here. It is based on incomplete information, as there is still a lot we do not know about Microsoft's next-gen console, but it is an interesting read nonetheless.

Last week, we also learned that the Xbox Series X is going to feature audio ray tracing, a new spatial audio feature. The console is also going to feature dedicated audio hardware acceleration, which was supposed to be discussed at a panel at the now-postponed GDC 2020.

"With the introduction of hardware accelerated ray tracing with the Xbox Series X, we're actually able to enable a whole new set of scenarios, whether that's more realistic lighting, better reflections, we can even use it for things like spatial audio and have ray traced audio."
The Xbox Series X launches later this year worldwide.

Yesterday's information release from Microsoft had one final tidbit that I didn't address, and that was the allusion to the fact that the Xbox Series X is four times as powerful, in terms of CPU power, as the Xbox One:
"Delivering four times the processing power of an Xbox One and enabling developers to leverage 12 TFLOPS of GPU (Graphics Processing Unit) performance – twice that of an Xbox One X and more than eight times the original Xbox One."

Yes, it isn't specifically stated that this is the CPU improvement, but by a process of elimination (4x the original Xbox One's GPU performance would land below the One X's TFLOPS value, let alone the confirmed 12 TFLOPS) it appears to refer to CPU processing power. So what, exactly, does this mean, and how does it fit into the information we have for current Ryzen APU offerings?
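As a quick sanity check of that elimination (a minimal sketch; the 1.31 and 6.0 TFLOPS figures are the commonly quoted GPU numbers for the Xbox One and One X):

# Sanity check for the process-of-elimination argument: if "four times the
# processing power" referred to the GPU, the numbers wouldn't add up.
xbox_one_gpu_tflops = 1.31    # launch Xbox One
xbox_one_x_gpu_tflops = 6.0   # Xbox One X
series_x_gpu_tflops = 12.0    # confirmed figure

print(4 * xbox_one_gpu_tflops)                    # 5.24 - below even the One X
print(series_x_gpu_tflops / xbox_one_gpu_tflops)  # ~9.2x - "more than eight times"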

Codenames and performance...

I noted back in November 2019 that it looked, to me, like the leaked codenames for the development kit processors were referencing mid-tier performance; an X6XX part (e.g. R5 1600, 2600, or 3600). The leaked performance numbers for the "Flute" APU also loosely corroborated this, with performance just below a Ryzen 7 1700X over at UserBenchmark. Since then, the benchmarking suite has changed the weightings of the tests it runs, so it's a bit difficult to compare these figures with current results. Although Flute's original results are no longer listed in the database, they were compared against a Ryzen 7 3700X at the time, so I can use the 3700X's current results alongside its previous ones to see how the weightings changed and "interpolate" Flute's scores accordingly.

It's not a perfect manipulation but it gives us results of 1-core: 88.6; 4-core: 341.4; and 8-core: 626.6. We'll come back to these results later on...
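In code form, that manipulation looks something like this (a minimal sketch; the bucket scores below are placeholders, not the real database entries - only the method matters):

# Scale each of Flute's previously listed bucket scores by how much the
# Ryzen 7 3700X's score moved in the same bucket after the re-weighting.
def reweight(old_scores, ref_old, ref_new):
    """Map each bucket's old score through the reference CPU's old->new ratio."""
    return {k: old_scores[k] * ref_new[k] / ref_old[k] for k in old_scores}

# Placeholder values purely for illustration - not the real database entries.
flute_old = {"1-core": 84.0, "4-core": 325.0, "8-core": 600.0}
r7_3700x_old = {"1-core": 132.0, "4-core": 490.0, "8-core": 940.0}
r7_3700x_new = {"1-core": 139.0, "4-core": 515.0, "8-core": 982.0}

print({k: round(v, 1) for k, v in reweight(flute_old, r7_3700x_old, r7_3700x_new).items()})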

However, we also have Cinebench scores for many prior and current CPUs and APUs that are applicable here. Digital Foundry noted that an Athlon 5370 clocked at 1.6 GHz roughly corresponds to a PS4. Extrapolating these Cinebench R15 results to the 1.75 GHz clock speed of the Xbox One, by cross-referencing them with results for a stock 2.05 GHz Athlon 5350 and a 2.3 GHz Athlon 5370, we arrive at an approximate R15 single-core score of 38 and multi-core score of 140.
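The extrapolation itself is just linear scaling of score against clock speed between two measured points (a sketch; the Athlon scores here are illustrative placeholders):

# Linearly extrapolate a Cinebench R15 score to a target clock from two
# measured (clock_ghz, score) points on the same Jaguar-class architecture.
def score_at_clock(point_a, point_b, target_ghz):
    (c1, s1), (c2, s2) = point_a, point_b
    slope = (s2 - s1) / (c2 - c1)         # points per GHz
    return s1 + slope * (target_ghz - c1)

athlon_5350 = (2.05, 45.0)   # illustrative single-core score
athlon_5370 = (2.30, 50.0)   # illustrative single-core score
print(round(score_at_clock(athlon_5350, athlon_5370, 1.75)))   # ~39, near the 38 used here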

[Image: Processor speed tests (1)]

The orange cells are calculated from surrounding data. Data that could not be calculated, or was absent entirely, is marked with a dark grey cell.

One aspect that must be taken into account with the Digital Foundry results is that their multicore "simulation" of the PS4 CPU is only 4-core, not 8-core as found in the PS4/XBO. Whilst this doesn't have an impact on the single-core result, you'll note that the stated 4.7x improvement between a 3.2 GHz 3700X and a "stock" PS4 is actually more like 4.3x (I presume there was a transcription error here, because the other calculated ratios are correct). The multi-core result would therefore need to be multiplied by a factor of 2 in order to be closer to the actual performance.

The PS4 is not the console we have a performance number for, though. I performed the same extrapolation as above for the XBO UserBenchmark scores in order to have a second test suite that might corroborate the approximations. Multiplying all of these calculated scores by a factor of 4 (as per the Xbox news blog post), and factoring in the 4 extra cores for the multi-core benchmark by multiplying the result by 2, we arrive at a theoretical Xbox Series X single-core score of 153 and a multi-core score of 1120. The corresponding UserBenchmark scores are 117, 427 and 888 for the 1-core, 4-core and 8-core results, respectively.
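As a worked version of that arithmetic (using the 38/140 approximations from above):

# Theoretical Xbox Series X targets: 4x the approximated Xbox One scores,
# with the Cinebench multi-core figure also doubled to account for 8 cores
# versus Digital Foundry's 4-core simulation.
xbo_r15_single, xbo_r15_multi_4core = 38, 140

sx_r15_single = xbo_r15_single * 4           # 152, ~ the quoted 153
sx_r15_multi = xbo_r15_multi_4core * 2 * 4   # 1120, as quoted
print(sx_r15_single, sx_r15_multi)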

Now that we have these theoretical values, we can look at the Cinebench scores for the Zen 2-based APUs, the 4800U and the 4800H (listed in the table above, along with their UserBenchmark scores). Here we can see the huge performance uplift and optimisation of the Zen 2 APU cores compared to the Zen, Zen+ and Zen 2 desktop parts. With TDPs of just 15-45 W, the APUs are almost matching the single-core performance of the 3700X and beating the 1600, 1600 AF and 2600 considerably. However, as I noted last time, reducing the TDP and the L2 + L3 cache sizes has affected multicore performance: the chips are unable to run as hot for as long, and are bottlenecked by having to draw in/swap out data to main system memory much sooner than their desktop counterparts.

What is apparent is that whatever "Flute" was, it has been improved upon considerably because, going by its results, it was only matching a Ryzen 5 3550H part (included in the table above) for the 1-core and 4-core tests but outperforming the 3550H in the multicore score, 627 vs 493. This is a very confusing result, and it raises the question of what, exactly, "Flute" was meant to be testing and what architectural makeup it had.

[Image: Processor speed tests (2)]

Comparison of a theoretical Xbox Series X CPU performance (4x Xbox One) with derived results for down-clocked 4800U and 4800H parts. The "Middle" column allows for a slight increase in TDP, and these scores show that a Zen 2 APU clocked at 2.55 GHz in a 25 W TDP envelope would potentially match the 4x improvement over the XBO.

Either way, the theoretical Xbox Series X (above) gives us our target benchmark results. Looking at the 4800U and 4800H APU benchmark results, we can see that these already represent a performance improvement in the ballpark of 3.78x to 6.11x across the various benchmarks. However, there are likely to be significant TDP and power constraints on the Xbox SX in comparison with these two APUs, because the two APUs sport only 8 and 7 GPU CUs (Compute Units) respectively, in comparison to the >40 CUs rumoured to be present on the SX APU die. It is likely that, in order to guarantee the performance of the GPU, the CPU would necessarily be clocked lower to allow a greater TDP budget for the GPU portion of the die.

Previously, I tried to estimate relative die areas for raytracing, CPU and GPU cores. I looked at existing EPYC embedded parts on the market and concluded that the 3251 would fit the design spec of the rumoured next gen consoles, though I determined that its 2.5 GHz, 50 W TDP would need to be scaled back in order to "fit" within a console form factor and match the Gonzalo leak in terms of 1.6 GHz base clock.

I also calculated that Flute was operating below its maximum boost clock (though above its base clock) during the leaked testing, averaging around 2.6 GHz. In the process, I also determined that it was unlikely to be final silicon (and may not even have been the final architecture [i.e. Zen 2 - though at the time I thought that the consoles would inherit Zen 3]) and that it was not operating at the desired performance target at the time of its testing. We can see above that Flute is some way behind the theoretical Xbox Series X and, I presume, an almost equal PS5 CPU block. Personally, I believe that there will be a smaller difference between the two consoles in the CPU than in the rest of the system.

Anyway, to get back to the point at hand: I calculated the clock speed at which the 4800U and 4800H APUs would match the theoretical single-core performance in Cinebench R15. This gave a clock speed of 2.55 GHz. From this I recalculated each remaining benchmark for these two APUs at the new, downclocked frequency. Then, seeing that the 15 W part was not quite performant enough to match the multicore score, I increased the theoretical TDP to 25 W by interpolating between the U and H parts. This gives us a CPU approximating all of the benchmark results apart from the UserBenchmark 8-core result, which lags behind by 197 points.
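The downclocking calculation, sketched out (the reference clock and scores are illustrative placeholders; only the linear-scaling method is the point):

# Find the clock at which an APU matches the target single-core score,
# assuming score scales roughly linearly with frequency, then rescale the
# remaining benchmarks by the same ratio.
def matched_clock(target_score, ref_clock_ghz, ref_score):
    return ref_clock_ghz * target_score / ref_score

def rescale(scores, ref_clock_ghz, new_clock_ghz):
    ratio = new_clock_ghz / ref_clock_ghz
    return {name: score * ratio for name, score in scores.items()}

# Illustrative placeholder reference point and scores, not real measurements.
new_clock = matched_clock(153.0, 4.2, 252.0)
print(round(new_clock, 2))   # 2.55 with these inputs

apu_scores = {"R15 multi": 1700.0, "UB 4-core": 470.0, "UB 8-core": 900.0}
print({k: round(v) for k, v in rescale(apu_scores, 4.2, new_clock).items()})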

[Image: dimensions]


I didn't have another image to fill this gap, so here's one I made earlier...

The reason for this discrepancy could be down to the way I extrapolated the 8-core result for the theoretical Xbox One. I multiplied the Cinebench multicore score by 2 in order to account for the extra 4 cores and performed the same, simple approximation for the UserBenchmark score. However, whilst this appears to have been fine for Cinebench, it may not be appropriate for UserBenchmark because of the way that testing suite weights the results of its testing procedure. It could be that with double the cores, you do not get double the score.

Ideally, we need to find score results for two CPUs using the same architecture and similar base and boost clocks - though one needs to have 4 cores and the other 8. Unfortunately, the third-generation Ryzen desktop processors bottom out at 6 cores, and the mobile APUs have not been completely benchmarked on UserBenchmark (the 4700U is not found). However, we can compare the 4300U (4C/4T) to the 4800U (8C/16T), and the 3200GE (4C/4T) to the 3400GE (4C/8T).

Compiling the data (in the table below), you can see that there is an observable effect on the benchmark results upon adding more threads. It seems that UserBenchmark's testing overly favours additional threads (whether logical or physical) over increased clock speed - it is not a 1:1 increase with frequency or available threads/cores. Most notably, if we apply the same scaling factor seen from the (4C/4T) 3200GE to the (4C/8T) 3400GE in our theoretical scaling from the Athlon 5370 @ 1.75 GHz to the Xbox One, we reach an 8-core score of 174 points. Multiplying that by a factor of 4 gives us 697 points, which is in line with what we see for the other scores in our "Middle" CPU, above.
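Roughly, the correction works like this (a sketch; the 3200GE/3400GE scores are placeholders standing in for the table values):

# UserBenchmark's 8-core result doesn't simply double when the thread count
# doubles, so reuse the scaling factor between two chips that differ mainly
# in threads rather than assuming a naive 2x. Scores are placeholders.
r3_3200ge_8core = 230.0   # 4C/4T, illustrative
r3_3400ge_8core = 305.0   # 4C/8T, illustrative
smt_factor = r3_3400ge_8core / r3_3200ge_8core   # ~1.33 rather than 2.0
print(round(smt_factor, 2))

# Applied to the Athlon-derived figures this lands at ~174 points for the
# Xbox One; four times that is the Series X estimate:
print(174 * 4)   # 696, ~ the quoted 697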

So I'm not that concerned about this small discrepancy.

If you take a look back at those numbers in the two tables above, you'll see that my original assumption was pretty close to the mark - this performance level is right around that of the Ryzen 5 1600 and 1600 AF. It also correlates pretty closely to the performance of the Flute CPU scores, though improved somewhat.

In this context, it could be inferred that Flute was a development chip testing out TDP and approximating performance with a fully enabled 8C/16T Zen+ die running at or near R5 1600 clock speeds. In fact, this could hint at the entire reason the 1600 AF exists: as a precursor to scaling existing core technology down as a test case for console development kits.

[Image: UserBenchmark comparison]

There's an outsized effect from increased logical and physical thread counts in the way UserBenchmark's suite works...

So, what have we got? Well, I think this approximation looks pretty good. A CPU operating at a 25 W TDP and a 2.55 GHz frequency provides almost exactly four times the processing power of the original Xbox One CPU. This TDP leaves plenty of thermal room for a large GPU to operate on-die, considering that the One X had a TDP of 180 W - around 150 W of headroom before you reach that prior upper limit. Given that RDNA 2 is meant to be much more power efficient than Navi, 36-40 CUs could easily fit into that TDP (for the PS5), though the SX might have more of an issue if its GPU is as large as rumoured.

However, I'm not quite so sure that the SX will have 56 CUs. My calculations put the number of possible CUs in the Xbox SX at around 48. Doing some more back-of-the-napkin maths, I arrive at 12.00 TFLOPS for a 48 CU GPU running at 1.84 GHz with 14 Gbps GDDR6. In comparison, with the same memory configuration, 36 CUs running at 2.0 GHz (for the PS5) yields 9.78 TFLOPS. Yes, that's a bit of a difference, but I spoke last time about the trade-offs between general and dedicated hardware, which appears to be the difference in direction between Microsoft and SONY.

What is interesting here is that, if we assume that the PS5 has a 36 CU GPU running under a 125 W or 130 W TDP (putting the total at around 165 W for total APU die*) that would peg the SX at around 133 W TDP for a 48 CU GPU (putting the total at around 155-160 W TDP for the total APU die**). [Please see below for an update on these calculations]
*I'm assuming 5 W for ancillary silicon not accounted for by the CPU and GPU.
**I'm assuming no extra wattage due to the ancillary elements being accounted for by the GPU in the case of the SX.
Now, I should stress that these thermal meanderings are far less rigorous than the calculations I performed for the number of CUs and the CPU power/frequency, so don't pay too much attention to them until we get more information from the platform holders. I'm just eyeballing the TDP of AMD's RX 5000 series cards and extrapolating some optimisation based on the 8 Vega CUs present in the 4800U (1792 GFLOPS @ 8 CUs) and the 7 in the 4800H (1433 GFLOPS @ 7 CUs).

When you run those 7 nm Vega numbers through a conversion, you arrive at 11.3 TFLOPS for a 48 CU Vega-based APU at 1840 MHz and 9.2 TFLOPS for a 36 CU Vega-based APU at 2000 MHz. These numbers are incredibly close to those slated for the Xbox Series X and rumoured for the PS5 - and RDNA 2 should be able to push more than Vega does.
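For reference, the conversion is the standard shader-count arithmetic (64 shaders per CU, 2 FLOPS per shader per clock):

# Standard GCN/RDNA throughput: 64 shaders per CU, 2 FLOPS (one fused
# multiply-add) per shader per clock.
def gpu_tflops(cus, clock_ghz, shaders_per_cu=64, flops_per_clock=2):
    return cus * shaders_per_cu * flops_per_clock * clock_ghz / 1000.0

print(round(gpu_tflops(48, 1.84), 1))   # 11.3 TFLOPS
print(round(gpu_tflops(36, 2.00), 1))   # 9.2 TFLOPS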

Anyway, as always, I hope you've enjoyed this descent into madness. I'll post another update as and when new information comes to light.

[UPDATE]

I realised I had estimated the TDPs incorrectly because I had not subtracted the 25 W of the CPU + 8 CU block from the initial value for the PS5, nor taken the lower clock of the SX's CUs into account. Looking at the TDPs of the two APUs I used in the comparison, it is clear that the majority of the heat is being emitted by the CPU portion of the die. The 4800H runs 7 CUs at a lower clock speed (1.6 GHz) but its CPU runs at both a faster base and boost frequency (2.9/3.8 GHz). The 4800U runs 8 CUs at a higher clock speed (1.75 GHz) but its CPU is clocked lower for both base and boost (1.8/3.2 GHz).

If we assume for the 15 W part that 8 W is generated by the CPU and 7 W by the GPU under full load, then we could estimate that the 45 W part has 40 W from the CPU and 5 W from the GPU. It's probably not that cut and dried, but it's a starting point. We can then extrapolate that a static 2.55 GHz chip (with no boost frequency) with 8 CUs at 2 GHz at 25 W could have 15-16 W from the GPU and 9-10 W from the CPU* for the PS5. The SX would have a different TDP because the CPU portion remains static but the heat generated by the GPU would be lower due to the lower clock speed. So, let's say we keep that 9-10 W CPU and decrease the 8 CUs to 10 W - which gives a 20 W total for the unit.
*I'm going with this split because when chips are pushed to higher frequencies they run hotter in a non-linear manner. Since the CPU would be static and within the turbo frequency range of the 4800U, I'm not expecting it to generate a lot more heat, and that heat would be constant instead of fluctuating, and so easier to wick away.
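Pulling the assumed split together (all wattages here are my estimates from above, not measured figures):

# Rough CPU/GPU power split for the 25 W "Middle" part, using the midpoints
# of the 9-10 W CPU and 15-16 W GPU estimates above.
cpu_w = 9.5           # static 2.55 GHz CPU, estimated
ps5_gpu_8cu_w = 15.5  # 8 CUs @ 2 GHz, estimated
sx_gpu_8cu_w = 10.0   # 8 CUs @ ~1.84 GHz, estimated lower heat

print(round(cpu_w + ps5_gpu_8cu_w))   # ~25 W PS5-style combo
print(round(cpu_w + sx_gpu_8cu_w))    # ~20 W SX-style combo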
Taking those assumptions, I'm realising that I was lowballing the TDP of an optimised 36 CU 2 GHz part. If the RX 5700 @ 1.625 GHz was a 180 W TDP part, then even with optimisations we're looking at 130 W for the same clock speed, not significantly higher. Looking across the aisle at Nvidia, a 36 CU optimised successor to the RX 5700 could look something like the base RTX 2070 (it's only a 20%-ish jump in performance). That's still a 175 W part versus the 180 W TDP of the RX 5700...

Looking back at historical Radeon cards, we have the RX 5700 delivering 22% more performance than the RX Vega 56 while generating 85% of the TDP and using 64% of the CUs (36 vs 56). To make a CU-to-CU comparison with the RX 580, the 5700 is 54% more performant while generating 97% of the TDP and using the same number of CUs. The RX 5700 actually has almost equal performance to the higher-end RX Vega 64, but that chip was pushed incredibly hard, almost to its limit (in fact, undervolting the Vega series was a thing), and that produced an incredibly hot chip (295 W TDP!).

So, if the 7 nm process combined with the switch from GCN to Navi could result in a TDP drop to 61%, it seems possible that moving to the 7 nm+ node and RDNA 2 could result in another 20-40% drop too. That would put 36 CUs at a 110 W TDP when running at 1.625 GHz. Pushing that up to 2 GHz could land us at 130-150 W TDP - within the realm of my initial thoughts.
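As arithmetic (every factor here is an assumption laid out above, not measured data; the second 0.61 happens to sit inside the assumed 20-40% range):

# Node/architecture TDP scaling, with every factor an assumption from above.
vega64_tdp = 295.0                  # RX Vega 64, ~RX 5700 performance
navi_tdp = vega64_tdp * 0.61        # ~180 W: GCN -> Navi on 7 nm
rdna2_36cu_1625 = navi_tdp * 0.61   # ~110 W: an assumed further ~39% drop
rdna2_36cu_2000 = rdna2_36cu_1625 * (2.0 / 1.625)   # ~135 W, inside 130-150 W

print(round(navi_tdp), round(rdna2_36cu_1625), round(rdna2_36cu_2000))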

Putting all those things together, this would mean that a theoretical PS5 CPU+8CU combo would be a 43 W TDP part*. This is not that crazy given that 8 CUs in an RX 5700 correspond to 40 W TDP and in an RX 5700 XT, 45 W. Yes, it's a pretty large improvement in efficiency and performance but we saw the same thing from Vega to Navi.
* [150/(36/8) = 33.3] + 10 W CPU = 43.3 W
The SX, though running at a lower frequency, has more CUs. I *know* that the TDP's relationship to frequency is non-linear; however, I don't know of an equation that can help me estimate a curved TDP/frequency relationship for any given GPU architecture. So, as a first approximation, I'm going to assume a linear relationship...

We have 33.3 W TDP per 8 CU @ 2 GHz for the PS5; linearly speaking, we get 30.7 W TDP for those 8 CUs @ 1.84 GHz, and thus 184 W TDP for the full 48 CU GPU. This would mean that the theoretical TDP of the SX CPU+8CU combo would be 40.7 W. That's still incredibly close to the estimated TDP of the PS5 CPU combo, which is somewhat reassuring - from a mathematically beautiful standpoint. The equivalent linear reduction in Navi (RX 5700 XT to RX 5700) gives us a 6 W TDP decrease over the 2 GHz to 1.84 GHz range. Yes, that's a 50% increase in efficiency over the 3 W calculated here, but not that unreasonable based on all the assumed improvements.
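The linear scaling, spelled out (same assumptions as above):

# Linear TDP-vs-frequency assumption for the Series X GPU estimate.
ps5_8cu_tdp, ps5_clock_ghz = 33.3, 2.0   # W per 8 CU @ 2 GHz, from above
sx_clock_ghz, sx_cus = 1.84, 48

sx_8cu_tdp = ps5_8cu_tdp * (sx_clock_ghz / ps5_clock_ghz)  # ~30.6-30.7 W
sx_gpu_tdp = sx_8cu_tdp * (sx_cus / 8)                     # ~184 W for 48 CUs
sx_combo = sx_8cu_tdp + 10.0                               # CPU + 8 CU, ~40.7 W
print(round(sx_8cu_tdp, 1), round(sx_gpu_tdp), round(sx_combo, 1))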

Finally, after all that recalculation, the PS5 now has an estimated 43 W + 116.5 W + 5 W = 165 W total die TDP. The Xbox Series X has an estimated 10 W + 184 W = 194 W... OR we have a 10 W + 163 W = 173 W TDP if we assume the wattage decrease of the current Navi GPUs.

Looking at the historical APU TDPs of the current generation of consoles, these numbers are pretty much in line with what we've seen before: 160 W for the PS4 Pro and 180 W for the One X.

Once again, these TDP musings are nowhere near as accurate or definitive as the CPU/GPU performance calculations and even those are estimations based on extrapolations from current and prior technology. I have no insider information - this is purely for fun.
 

M1chl

Currently Gif and Meme Champion
Ryzen 5 1600 is Zen 1 architecture, no? And it's a low-end CPU. I was hoping for far more...
 
Ryzen 5 1600 is Zen 1 architecture, no? And it's a low-end CPU. I was hoping for far more...
Don't pay too much attention to an analyst who missed something like this, which was already clarified by the manufacturer.

"To say the least, the Flute CPU, if released with these specifications, will be at least two times faster than previous-generation Jaguar cores."

Probably not; Microsoft has already said and specifically clarified that it's a factor of 4x faster.
 

McHuj

Member
Ehh? It's Zen 2, so it should have the same performance as an identically clocked Ryzen 3xxx part. Maybe it will have less cache, but comparing it to a first-gen "5" series CPU is too much...

Standalone Ryzen 3XXX parts have TDPs that are around 65-95 W.

The CPUs in the next-gen consoles will probably be limited to around half of that, so they will not perform the same as an 8-core/16-thread desktop part. Even a 200-250 W console will dedicate most of the power envelope to the GPU.
 

Ma-Yuan

Member
Those were weak-ass CPUs back in 2013. How can we get just a 4x CPU increase from these CPUs after 7 years? oO I would assume they will also be a bottleneck later in this generation ><

Edit: we get a 10x GPU increase minimum because of better architecture, but only 4x CPU. They were already imbalanced back then oO
 

FireFly

Member
I struggle to believe it will perform worse than the 4800U, since that part is 15 W for both GPU and CPU. It's most natural to assume Microsoft meant at least 4 times faster.
 

Entroyp

Member
hmmm.. I was thinking that the new consoles would be more on par with the 4800 CPUs than with old Ryzen CPUs. We’ll see.
 

kiphalfton

Member
That's... not what I expected.

Edit: to be clear, I expect a lot more, something around a 3600-3700X.

Well, high expectations generally lead to disappointment.

$500 is only going to get you so far.

I say that matter-of-factly, and not to try and get a reaction. Even when PC gamers expect next-gen graphics cards to have double the performance or twice as much VRAM, it's hard not to point out how ridiculous those expectations are.
 
Those were weak-ass CPUs back in 2013. How can we get just a 4x CPU increase from these CPUs after 7 years? oO I would assume they will also be a bottleneck later in this generation ><

Edit: we get a 10x GPU increase minimum because of better architecture, but only 4x CPU. They were already imbalanced back then oO
Not exactly; the viability and compute uplift of a CPU has a much broader and longer-lasting effect than a GPU's.

For example, I am still on my 2600K from 2011; it's a 9-year-old CPU and it's not really limiting me in any harmful capacity in terms of gaming.

You merely have to reach a certain threshold of CPU performance and the system will operate fine without any substantial form of bottlenecking.
 

Butz

Banned
If it's indeed 4K, the CPU matters the least, to be fair.
2600 vs 3600 vs 4600 doesn't matter; if you look at frames, you're looking at a difference of around 3 FPS.
 

kiphalfton

Member
Not exactly; the viability and compute uplift of a CPU has a much broader and longer-lasting effect than a GPU's.

For example, I am still on my 2600K from 2011; it's a 9-year-old CPU and it's not really limiting me in any harmful capacity in terms of gaming.

You merely have to reach a certain threshold of CPU performance and the system will operate fine without any substantial form of bottlenecking.

The 2600K isn't exactly a good comparison: it's a desktop CPU, you can overclock it, and up until 1-2 years ago we still had only 4-core/8-thread CPUs from Intel (outside HEDT CPUs). The last part is probably why the 2600K had the legs it did, mainly cuz Intel didn't really innovate until AMD forced them to.
 

GHG

Gold Member
I don't doubt it, said as much in the Lockhart thread the other day.

People expecting a 3XXX level of CPU in next-gen consoles are dreaming.

I expect something that is architecturally similar to the 2700 but severely compromised to ensure thermals are kept in check. There are also a lot of features in these desktop chips that are redundant when being used exclusively for gaming. Stripping them down as much as possible will help keep the costs down.

Pretty much anything resembling a desktop chip from the last 5 years will be a huge improvement over what's in the current gen consoles.
 

GHG

Gold Member
Also, for people panicking: the 1600 AF is a solid choice for a gaming-focused PC build today:

[Embedded video: Hardware Unboxed's Ryzen 5 1600 AF review]

The moment you get to 1440p and above, the difference between this chip and higher-end chips is negligible. Something like this chip with an extra 2C/4T would perform amazingly in any console that is focused on pumping out higher resolutions.

This represents a massive step forward for consoles compared to the current gen, without breaking the bank or creating potential heat issues. Be happy.
 
Also, for people panicking: the 1600 AF is a solid choice for a gaming-focused PC build today:

[Embedded video: Hardware Unboxed's Ryzen 5 1600 AF review]

The moment you get to 1440p and above, the difference between this chip and higher-end chips is negligible. Something like this chip with an extra 2C/4T would perform amazingly in any console that is focused on pumping out higher resolutions.

This represents a massive step forward for consoles compared to the current gen, without breaking the bank or creating potential heat issues. Be happy.


Everything you’ve written in this thread is, in my opinion, spot on.

What confuses me more than anything else is that so many of the people who seem to believe that these consoles will be as powerful as truly high-end PC gaming rigs also hate the idea that they could cost $499 or more.

I don’t get it.

The consoles will be beasts for what they are and what they aim to do, at a price point more people can afford than a $1200+ gaming PC.
 
Well, high expectations generally lead to disappointment.

$500 is only going to get you so far.

I say that matter-of-factly, and not to try and get a reaction. Even when PC gamers expect next-gen graphics cards to have double the performance or twice as much VRAM, it's hard not to point out how ridiculous those expectations are.
There is no reason for it to perform like a first-gen Ryzen CPU.
 
It's not that hard if it's 4x: the Jaguar sits at around 400 in Cinebench multicore at best, so that would put it at around 1600.
Which pretty much equals a:
3600
1800
2700
I think the thread title is misleading; the article doesn't say it's like a 1600... for some reason it ended up in the title.
 

GHG

Gold Member
So how does that compare to the Jaguar cores?

Different planet altogether.

People need to understand that in isolation a 1600 (or 1600 AF) is a very good chip for gaming purposes.

There's no point in comparing these chips to Intel chips (or even more recent Ryzen chips) and saying they are bad while looking at 1080p benchmarks in games that are single-thread intensive. That's not the context in which these CPUs will be used in next-gen consoles. The games will be multi-threaded and the resolutions will be in excess of 1440p.
 

Kenpachii

Member
Also, for people panicking: the 1600 AF is a solid choice for a gaming-focused PC build today:

[Embedded video: Hardware Unboxed's Ryzen 5 1600 AF review]

The moment you get to 1440p and above, the difference between this chip and higher-end chips is negligible. Something like this chip with an extra 2C/4T would perform amazingly in any console that is focused on pumping out higher resolutions.

This represents a massive step forward for consoles compared to the current gen, without breaking the bank or creating potential heat issues. Be happy.


That video is such bullshit.

I've got a 9900K at 5 GHz, which stomps that Ryzen 1600, and I still get limited in many games, with drops even in AC Odyssey to the low 60s (61 fps) in cities - and that's where my GPU sits at 90 or even 80% at ultra settings. His high settings don't reflect anything realistic about next-gen solutions.

(He also forgets to mention the loads of issues 1000-series Ryzen chips had with motherboards/memory/microstutter, which made the chip a no-go straight out of the gate for PC.)

You will be lucky to hit 40 fps in towns with that Ryzen 1600. There is a massive difference between top-end CPUs and the 1600. Also, consoles will have 8 cores, not 6, which will stomp that 1600; if you want to look at 6-core CPUs you have to compare against the 3600, not a 1600. An extra 2 cores and 4 threads actually do a ton of work in games when games are optimised for them - which on PC isn't much the case at this point, so performance comparisons fall flat real quick.

The problem with this "CPU performance doesn't matter" logic is that it's the same trap the Jaguar fell into with the Assassin's Creed games and The Witcher 3. No matter the resolution, no matter the visual settings, the Jaguar was never able to push The Witcher 3 to 60 fps without massive sacrifices across the board, which they didn't want to bother with - and the same goes for so many other games.

If Microsoft and Sony are going to take 60 fps seriously, even at reduced visual settings, they will need a beast of a CPU in that box - the fastest they can get their hands on - and make it work. 4x the performance, which equals a Ryzen 3600 or Ryzen 2700 in a mobile solution, makes total sense.
 

GHG

Gold Member
That video is such bullshit.

I've got a 9900K at 5 GHz, which stomps that Ryzen 1600, and I still get limited in many games, with drops even in AC Odyssey to the low 60s (61 fps) in cities - and that's where my GPU sits at 90 or even 80% at ultra settings. His high settings don't reflect anything realistic about next-gen solutions.

(He also forgets to mention the loads of issues 1000-series Ryzen chips had with motherboards/memory/microstutter, which made the chip a no-go straight out of the gate for PC.)

You will be lucky to hit 40 fps in towns with that Ryzen 1600. There is a massive difference between top-end CPUs and the 1600. Also, consoles will have 8 cores, not 6, which will stomp that 1600; if you want to look at 6-core CPUs you have to compare against the 3600, not a 1600. An extra 2 cores and 4 threads actually do a ton of work in games when games are optimised for them - which on PC isn't much the case at this point, so performance comparisons fall flat real quick.

The problem with this "CPU performance doesn't matter" logic is that it's the same trap the Jaguar fell into with the Assassin's Creed games and The Witcher 3. No matter the resolution, no matter the visual settings, the Jaguar was never able to push The Witcher 3 to 60 fps without massive sacrifices across the board, which they didn't want to bother with - and the same goes for so many other games.

If Microsoft and Sony are going to take 60 fps seriously, even at reduced visual settings, they will need a beast of a CPU in that box - the fastest they can get their hands on - and make it work. 4x the performance, which equals a Ryzen 3600 or Ryzen 2700 in a mobile solution, makes total sense.

Where to begin...

  • I'll start by saying the AC:Odyssey PC port is poor and should not be the basis of comparison for anything - https://www.dsogaming.com/news/assa...used-by-denuvo-but-by-insane-driver-overhead/
    • The drops you mentioned are also seen in the 1600 AF review - they also benchmark AC Odyssey and the minimum framerates are displayed.
  • The chip in the video I linked is the 1600 AF, not the vanilla 1600. The 1600 AF is a 2nd-gen Ryzen CPU; it's essentially a rebranded 2600.
  • There are other reviews that state exactly the same thing as the Hardware Unboxed video:


You don't have to take my word for it either; you can look up some written reviews as well. The 1600 AF is a very good value gaming CPU, especially if you are looking to run at higher resolutions where the bottleneck is the GPU rather than the CPU.
  • The issues that 1st-gen Ryzen CPUs experienced at launch have since been ironed out via chipset updates. This is not even relevant to this discussion, since motherboard/memory configurations etc. are tightly controlled by console manufacturers.
  • I already mentioned the extra 2C/4T - the chip in consoles would essentially be based on a 2700, since a 2600 is a 2700 with 2 cores disabled.
  • Ultra settings at 60 fps is always the talk at the start of every generation. They have no commitment to this and never do; it will soon devolve into the usual scenario of 30 fps at medium-high settings.
  • It's not the same scenario as the Jaguar chip going into consoles. A chip equivalent to the 1600 AF would be more than adequate; this isn't some shitty mobile chip we are talking about.
  • "4x the performance" is meaningless until we know what metric/benchmark they are measuring it by. Is it an across-the-board average? Is it a single 7-Zip benchmark? Etc.
Anyway, I've already said I expect something like the 2700 in this thread. The 1600 AF isn't far off that at all.
 

GHG

Gold Member
MS said it was 4 times faster (crunches the numbers) so that means it should be about 2 times faster.... okay

What numbers?

There's a reason why hardware reviews run a suite of benchmarks rather than a single benchmark to determine how well a chip performs. In certain benchmarks a chip might perform 4x better than another, but in others it might only perform 2x as well. Until we are told what they are measuring by, there's no use in talking about it; we need more information on the hardware.

Like FFS, we've all seen those infamous graphs Nvidia love to throw in our faces every time they launch a new product.
 
What numbers?

There's a reason why hardware reviews run a suite of benchmarks rather than a single benchmark to determine how well a chip performs. In certain benchmarks a chip might perform 4x better than another, but in others it might only perform 2x as well. Until we are told what they are measuring by, there's no use in talking about it; we need more information on the hardware.

Like FFS, we've all seen those infamous graphs Nvidia love to throw in our faces every time they launch a new product.
That was the joke: their only real source for this was a statement by Microsoft saying it was 4 times faster, and then they somehow reached the conclusion that it is 2 times faster.
 

GHG

Gold Member
That was the joke: their only real source for this was a statement by Microsoft saying it was 4 times faster, and then they somehow reached the conclusion that it is 2 times faster.

Ah gotcha.

Sorry, I get a bit cranky about this stuff. The amount of times I've been excited about new hardware when it gets revealed/announced, only to be bitterly disappointed once the real benchmarks start surfacing.

 
I really don't see how anyone could be unhappy about this.

This will be the most powerful console CPU, relative to the PC CPUs of the time, ever. Not only is it VERY powerful, but it's also extremely easy to program for - something the Cell certainly could not claim.

An 8-core, 16-thread Zen 2 CPU is monstrous. It probably won't have as much L2 cache and will be clocked lower, but other than that, it's a full-on high-end PC CPU... in a console.
 

THE:MILKMAN

Member
Looking at the historical APU TDPs of the current generation of consoles, these numbers are pretty much in line with what we've seen before: 160 W for the PS4 Pro and 180 W for the One X.

I've always said and expected the CPUs of the next consoles to have cut-back/mobile or otherwise lower performance than the desktop chips many thought we were getting. Even Microsoft only says a 4x CPU increase, though I'm not sure if that refers to the One or the One X. They clarify this with the GPU by saying >8 times the One and 2x the One X...

As for the bit I quote above, it seems you are saying the next console APUs alone will consume what the entire Pro and X consoles consumed at the wall? Something is wrong here!
 

01011001

Banned
when MS said 2x the GPU performance of the One X, they meant hard TFLOP numbers, so I think they are talking about CPU numbers in exactly the same way...

does anyone know the exact details of the Xbox One's CPU?
 

ZywyPL

Banned
when MS said 2x the GPU performance of the One X, they meant hard TFLOP numbers, so I think they are talking about CPU numbers in exactly the same way...

does anyone know the exact details of the Xbox One's CPU?

Aren't the next-gen CPUs 16 threads tho? Effectively being not 4x but 8x more capable than the current-gen Jaguar?
 

AGRacing

Gold Member
It’s amazing how an obviously wrong headline can save you from reading the 40 paragraphs after it....

I’m going to use this time to do something good for myself!!! Where’s my Fitbit!?
 

xool

Member
;tldr fuck this clown

Based on a Flute performance that matches an 8-core Zen 1700X, the conclusion is "performance the same as a 6-core 1600" .. wtf.

Also too many words.

[edit] No bad intent to the OP, but pls never link to a fucking nonsense word salad blog like that ever again
 

01011001

Banned
Aren't the next-gen CPUs 16 threads tho? Effectively being not 4x but 8x more capable than the current-gen Jaguar?

the current ones have 8 threads, so 16 threads would be a 2x increase.

but I think theoretical performance numbers are what Microsoft means when they say 4x as powerful.

it could mean 2x the threads and 2x the clocks?
it could also mean TFLOP performance, just like the GPU.
 

xool

Member
the current ones have 8 threads, so 16 threads would be a 2x increase.

but I think theoretical performance numbers are what Microsoft means when they say 4x as powerful.

it could mean 2x the threads and 2x the clocks?
it could also mean TFLOP performance, just like the GPU.

If you're counting theoretical FP32 FLOPS, then Zen 2 has 2x the IPC of Jaguar (the extra threads just share that). At 3.5 GHz or higher, that's 2x the 1.75 GHz clock of the Xbox One S. Combine those two 2x's and you get "4x the compute power of the Xbox One" .. just like MS said.

Seems a straightforward case that matches realistic expectations and everything MS has said.

;tldr 8C/16T Zen 2 @ 3.5 GHz has 4x the FP32 performance of the Xbox One S
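(As plain arithmetic - a minimal sketch, with the 2x IPC figure taken as assumed above:)

# The 2x IPC * 2x clock argument as plain arithmetic.
jaguar_clock_ghz = 1.75   # Xbox One S CPU
zen2_clock_ghz = 3.5      # assumed next-gen CPU clock
ipc_ratio = 2.0           # assumed FP32 IPC uplift, per the post above

print(ipc_ratio * (zen2_clock_ghz / jaguar_clock_ghz))   # 4.0x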
 

01011001

Banned
If you're counting theoretical FP32 FLOPS, then Zen 2 has 2x the IPC of Jaguar (the extra threads just share that). At 3.5 GHz or higher, that's 2x the 1.75 GHz clock of the Xbox One S. Combine those two 2x's and you get "4x the compute power of the Xbox One" .. just like MS said.

Seems a straightforward case that matches realistic expectations and everything MS has said.

;tldr 8C/16T Zen 2 @ 3.5 GHz has 4x the FP32 performance of the Xbox One S


there you have it...
I really think Microsoft's message is unambiguous by design, and their messaging is very straightforward.

so this is exactly what we should expect
 