
Digital Foundry: PC Gaming on the Xbox Series X CPU

DenchDeckard

Moderated wildly
All this talk about PS5 Pro....

What if it uses the same CPU, like the PS4 Pro did.......



Also, wait a minute.....someone told me the Series X CPU said 8K on it and MS were tricking us! Lies!
 
Last edited:
I mean, if anything I'm even more impressed with current gen console performance compared to PC. Has it ever been this tight 3 years into the generation?
We don't get to see PC developers push the systems these days since so many games are lead on consoles. I've never been one to compare a PC to a console, and I never understood why people feel the need to, myself.
 
FFS.

This is the kind of stuff DF do, particularly Richard. It's not meant to mean anything profound, just an interesting theoretical exercise that a lot of us find fascinating.

If you don't, don't watch it...
I find it theoretically utterly pointless myself
 

Bojji

Member
Trouble is, UE5, now the de facto game engine, is such a CPU hog that even the very latest desktop CPUs struggle to cope with it.

I really, really hate UE.

YES, it runs like crap even on the best CPUs and doesn't really use more than 6 cores/12 threads. Why they aren't making this engine less CPU intensive (and better multithreaded) is beyond me, especially since console CPUs are not so great...

Most 30fps-only console games are CPU limited thanks to the unoptimized engines they use (UE4). Redfall drops to 40fps on an R5 3600, same for Gotham Knights, and Jedi: Survivor runs like shit in performance mode. Developers can't really bring those games to 60fps because the engine doesn't allow them to.
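As a rough illustration of why extra threads stop helping, here's a minimal Amdahl's-law sketch in C. The 40% serial fraction is an assumed number purely for illustration, not a measured figure for UE4/UE5:

```c
#include <stdio.h>

/* Amdahl's law: speedup = 1 / (serial + (1 - serial) / n_threads).
   The 0.40 serial fraction is an assumption for illustration only;
   the real serial share of a UE title varies per game and per scene. */
static double amdahl(double serial, int threads)
{
    return 1.0 / (serial + (1.0 - serial) / threads);
}

int main(void)
{
    const int counts[] = { 1, 2, 4, 6, 8, 12, 16 };
    for (int i = 0; i < (int)(sizeof counts / sizeof counts[0]); i++)
        printf("%2d threads -> %.2fx speedup\n",
               counts[i], amdahl(0.40, counts[i]));
    return 0;
}
```

With a 40% serial share, going from 6 to 16 threads only moves you from ~2.0x to ~2.3x, which is why piling on cores doesn't rescue a poorly threaded engine.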
 

shamoomoo

Member
Yeah, and as @SmokSmog said earlier, the 8 MB L3$ is really 2x 4 MB chunks so you have 4 cores/8 threads accessing 4 MB of L3$ in these systems.

There wasn't really anything Sony or Microsoft could've done there, though; that setup was all on AMD. A PS5 Pro actually has a more realistic chance of having Zen 4 than first thought, and since Zen 4 is BC with Zen 2 code, it might make the change there more acceptable from a design POV. IIRC the reason the PS4 Pro and One X used the same Jaguar cores was because AMD's new CPUs at the time were not BC in microcode with the Jaguar CPUs, so too much work would've been required in recompiling games.

At least, that's my understanding of things. But for next-gen systems, I genuinely think they need to move away from GDDR and towards HBM3 (maybe even HBM-PIM, but that depends on what can be done with AMD in GPU/CPU architecture by then), if they want a unified memory solution. Which is still preferable to a non-unified memory pool, especially in a console, plus generally being cheaper due to more volume allowing better economies of scale.



HBM prices have come down a lot in the past few years, and there's supposed to be a design that simplifies (or even removes) the interposer, which is where a lot of the cost comes from. But also, the main reason HBM prices have been high compared to DDR and GDDR, IMO, is because none of the customers for it order in large enough volumes to make the memory manufacturers lower their per-chip prices. The volumes ordered are simply too small for economies of scale, so the manufacturers make up for it by keeping prices high for the clients willing to pay for the privilege.

If Microsoft or especially Sony were ordering HBM for their next consoles, they'd be ordering in volumes so large that prices would naturally come down: the memory manufacturers would make more through the sheer bulk of the order than they ever did with companies ordering far smaller quantities, so it wouldn't matter if the profit margin on each chip were lower for massive clients like Sony & Microsoft. That's my theory on HBM pricing, anyway.

I really don't see non-unified memory ever returning in consoles; it's just not worth the headaches. PC does have SAM/BAR, but it's still not preferable to an actual unified memory pool. There's a reason the embedded-systems industry has been trending towards hUMA: the benefits simply far outweigh the alternative. But if next-gen consoles want to keep that while avoiding these issues, and don't address cache, then they have to move away from GDDR and towards something like HBM.

A cache option could provide a nice balance and would help a lot, though the consoles wouldn't want too much cache, not without ways of smartly managing the cache data. In that respect I think the PS5 has a good idea with its GPU cache scrubbers, but yes, they should also increase the amount of cache and fully unify it as well.
Why use Zen3+ but make it like Zen4c? The mobile variation had most of the frequency and cache with, I believe, less instruction set support.
 

Clear

CliffyB's Cock Holster
I'd also add that people absolutely need to stop considering performance purely on the basis of the speed/ratings of individual components. Yes, it works on PC because the way these parts connect is mostly the same - obviously that's the case because its necessary to support a variety of standardized discrete components.

However consoles are not bound in the same way. In fact not only may the hardware connecting the various subsystems differ but all the low-level software and firmware layer can be radically different due to not needing to support the same multiplicity of components.
 

winjer

Gold Member
Yeah, and as @SmokSmog said earlier, the 8 MB L3$ is really 2x 4 MB chunks so you have 4 cores/8 threads accessing 4 MB of L3$ in these systems.

There wasn't really anything Sony or Microsoft could've done there, though; that setup was all on AMD. A PS5 Pro actually has a more realistic chance of having Zen 4 than first thought, and since Zen 4 is BC with Zen 2 code, it might make the change there more acceptable from a design POV. IIRC the reason the PS4 Pro and One X used the same Jaguar cores was because AMD's new CPUs at the time were not BC in microcode with the Jaguar CPUs, so too much work would've been required in recompiling games.

AMD gave Sony and MS what they wanted, inside a budget. Remember that AMD already had Zen3 with a single 32 MB of L3 cache by the time the PS5 and Series X released.
But Sony and MS chose to sacrifice CPU cache to have more CUs on the GPU. And to save die space on the SoC.
Also, AMD probably would charge more for a Zen3 SoC than for one with Zen2, as it would be more recent.
We have to remember that consoles are very price sensitive.

At least, that's my understanding of things. But for next-gen systems, I genuinely think they need to move away from GDDR and towards HBM3 (maybe even HBM-PIM, but that depends on what can be done with AMD in GPU/CPU architecture by then), if they want a unified memory solution. Which is still preferable to a non-unified memory pool, especially in a console, plus generally being cheaper due to more volume allowing better economies of scale.

HBM prices have come down a lot in the past few years, and there's supposed to be a design that simplifies (or even removes) the interposer, which is where a lot of the cost comes from. But also, the main reason HBM prices have been high compared to DDR and GDDR, IMO, is because none of the customers for it order in large enough volumes to make the memory manufacturers lower their per-chip prices. The volumes ordered are simply too small for economies of scale, so the manufacturers make up for it by keeping prices high for the clients willing to pay for the privilege.

If Microsoft or especially Sony were ordering HBM for their next consoles, they'd be ordering in volumes so large that prices would naturally come down: the memory manufacturers would make more through the sheer bulk of the order than they ever did with companies ordering far smaller quantities, so it wouldn't matter if the profit margin on each chip were lower for massive clients like Sony & Microsoft. That's my theory on HBM pricing, anyway.

I really don't see non-unified memory ever returning in consoles; it's just not worth the headaches. PC does have SAM/BAR, but it's still not preferable to an actual unified memory pool. There's a reason the embedded-systems industry has been trending towards hUMA: the benefits simply far outweigh the alternative. But if next-gen consoles want to keep that while avoiding these issues, and don't address cache, then they have to move away from GDDR and towards something like HBM.

A cache option could provide a nice balance and would help a lot, though the consoles wouldn't want too much cache, not without ways of smartly managing the cache data. In that respect I think the PS5 has a good idea with its GPU cache scrubbers, but yes, they should also increase the amount of cache and fully unify it as well.

Doesn't matter how much HBM prices have lowered, it's still way too expensive for a console.

The issue with latency is not so much GDDR6, but rather the memory controller, as it is tuned for bandwidth, sacrificing latency.
On a PC, with 2 memory pools, one memory controller is tuned for latency, as that is better for a CPU. The other is tuned for bandwidth, as that is better for a GPU.
Once again, consoles are very price sensitive. So having 2 memory controllers means more die space.
But cache also means greater cost.
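To make the latency point concrete: a dependent-load "pointer chase" is the standard way to expose it, since each load must wait for the previous one and prefetchers can't hide anything. A minimal C sketch; the sizes and hop count are arbitrary choices, and on these consoles the interesting cliff would sit near the 4 MB per-CCX L3 boundary:

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Pointer chase: every load depends on the previous one, so the average
   time per hop approximates raw load latency at each working-set size.
   Past the L3 capacity (4 MB per CCX on these consoles) every hop pays
   the full DRAM round trip, where a bandwidth-tuned GDDR6 controller
   is at its weakest. */
int main(void)
{
    const size_t sizes_kb[] = { 512, 2048, 4096, 8192, 65536 };
    const long hops = 20 * 1000 * 1000;

    for (size_t s = 0; s < sizeof sizes_kb / sizeof sizes_kb[0]; s++) {
        size_t n = sizes_kb[s] * 1024 / sizeof(size_t);
        size_t *next = malloc(n * sizeof(size_t));
        if (!next)
            return 1;

        /* Sattolo's algorithm: one random cycle covering the whole array. */
        for (size_t i = 0; i < n; i++)
            next[i] = i;
        for (size_t i = n - 1; i > 0; i--) {
            size_t j = (size_t)rand() % i;
            size_t t = next[i]; next[i] = next[j]; next[j] = t;
        }

        struct timespec t0, t1;
        size_t idx = 0;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (long h = 0; h < hops; h++)
            idx = next[idx];
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
        /* Printing idx keeps the chase from being optimized away. */
        printf("%6zu KB: %5.1f ns/load (sink=%zu)\n",
               sizes_kb[s], ns / hops, idx);
        free(next);
    }
    return 0;
}
```

On a bandwidth-tuned controller the ns/load figure past the cache cliff is the number that hurts game logic, regardless of how wide the bus is.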
 

OverHeat

« generous god »
Barf…even my last PC is more powerful than this. Who in their right mind would play PC games on this?
 

DaGwaphics

Member
This is so pathetic and ultimately pointless. Consoles are bottlenecked, the shock!
What next, putting an OG Xbox CPU in Alex's old PC and getting sh8t results?

Consoles are different to a PC; that's why one is called a console and the other a PC, and a console user can't just happily change their CPU or GPU.

I think these kinds of tests are quite interesting. Can't say I'm surprised at all by the results; these CPUs never looked particularly great on paper with the castrated 2x4MB cache. Their claim to fame is just that they are still a huge improvement over the Jaguar CPUs.

I would expect refreshed hardware to at minimum double the cache, but preferably quadruple it to 32MB, and punch up the max clocks by 1 GHz at least. You could probably solidify 60fps on the CPU side with that in games that are CPU bound to 30 on the base hardware.
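The frame-time arithmetic behind that guess, with openly assumed numbers (the 25 ms CPU frame time and the cache-driven IPC gain are illustrations, not measurements): 60fps needs the CPU share of a frame done in 16.7 ms.

```c
#include <stdio.h>

int main(void)
{
    /* A title capped to 30fps but CPU-bound somewhere under 40fps; the
       25 ms CPU frame time is an assumption for illustration only. */
    double cpu_ms    = 25.0;
    double budget_60 = 1000.0 / 60.0;   /* 16.7 ms per frame at 60fps */
    double clock_up  = 4.5 / 3.5;       /* ~1.29x from a +1 GHz clock bump */
    double cache_up  = 1.20;            /* assumed IPC gain from 4x the L3 */
    double projected = cpu_ms / (clock_up * cache_up);

    printf("60fps budget: %.1f ms, projected CPU time: %.1f ms -> %s\n",
           budget_60, projected, projected <= budget_60 ? "fits" : "misses");
    return 0;
}
```

Under those assumptions the combined ~1.55x uplift just fits; a title hard CPU-bound at exactly 33.3 ms would still miss, which is why the cache and clock bumps both matter.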
 

intbal

Member
Remember that AMD already had Zen3 with a single 32 MB of L3 cache by the time the PS5 and Series X released.
In one of the DF Weekly episodes, Richard and John said that Microsoft told them the Series X APU design was already finalized in 2016. Although they didn't get any more specific than that.
I assume PS5 would be roughly the same.

I believe PS5/XSX were the best they could possibly do for a 2020 launch.
 

winjer

Gold Member
In one of the DF Weekly episodes, Richard and John said that Microsoft told them the Series X APU design was already finalized in 2016. Although they didn't get any more specific than that.
I assume PS5 would be roughly the same.

I believe PS5/XSX were the best they could possibly do for a 2020 launch.

That is just complete bullshit. In 2016 there wasn't even the first Zen CPU, as that released in 2017.
And the first RDNA GPU was still 3 years away.
How the frack would AMD develop Zen2 and RDNA2 when they hadn't even developed Zen, Zen+ and RDNA1 at that time?
 

intbal

Member
That is just complete bullshit. In 2016 there wasn't even the first Zen CPU, as that released in 2017.
And the first RDNA GPU was still 3 years away.
How the frack would AMD develop Zen2 and RDNA2 when they hadn't even developed Zen, Zen+ and RDNA1 at that time?
They didn't have Zen released, but it could have been in the design phase, just like Zen 2.
And don't blame me. DF said it.
 

DaGwaphics

Member
That is just complete bullshit. In 2016 there wasn't even the first Zen CPU, as that released in 2017.
And the first RDNA GPU was still 3 years away.
How the frack would AMD develop Zen2 and RDNA2 when they hadn't even developed Zen, Zen+ and RDNA1 at that time?

I think they begin the development process many years before we see the final results. By the time Zen 1 released AMD was likely already deep into the design of Zen 3, maybe even Zen 4. I assume that the console makers are working off of AMD's internal roadmap and likely had a good idea what was going to be mass producible in 2020 all the way back in 2016.
 
Last edited:

winjer

Gold Member
I think they begin the development process many years before we see the final results. By the time Zen 1 released AMD was likely already deep into the design of Zen 3, maybe even Zen 4. I assume that the console makers are working off of AMD's internal roadmap and likely had a good idea what was going to be mass producible in 2020 all the way back in 2016.

In 2016, AMD was near bankrupt. They barely had enough money and people to develop Zen1. In fact, they really didn't finish developing it.
Soon after, they released Zen+, basically the same CPU but with improved cache speeds.
A lot of people thought Zen was going to be AMD's last CPU before the company folded.
So there was no way they would have been developing Zen2 in 2016.
Much less would they have been developing RDNA2 by then. Even RDNA1 only came out in 2019.
 
Last edited:

twilo99

Member
Zen 2 mobile is just like Zen 1 desktop.
And why are they saying the Xbox is 4800S and PS5 4700S when the PS5 has better frames in most games?

3rd party developers decide which box has better frames; it's not a metric that anyone should be looking at.

We need a standardized bench test to really see which hardware is better..
 

DaGwaphics

Member
In 2016, AMD was near bankrupt. They barely had enough money and people to develop Zen1. In fact, they really didn't finish developing it.
Soon after, they released Zen+, basically the same CPU but with improved cache speeds.
A lot of people thought Zen was going to be AMD's last CPU before the company folded.
So there was no way they would have been developing Zen2 in 2016.
Much less would they have been developing RDNA2 by then. Even RDNA1 only came out in 2019.

I just think designing these things from start to finish is a more time-consuming process than you are allowing for. Jim Keller started work on Zen1 all the way back in 2012, with the part not reaching the public until 2017. That's a realistic timeline for these things: when you see it on the shelf, it has likely been in development for at least 4 years. Maybe 3 if it was something fast-tracked out of desperation.
 

winjer

Gold Member
I just think designing these things from start to finish is a more time-consuming process than you are allowing for. Jim Keller started work on Zen1 all the way back in 2012, with the part not reaching the public until 2017. That's a realistic timeline for these things: when you see it on the shelf, it has likely been in development for at least 4 years. Maybe 3 if it was something fast-tracked out of desperation.

In 2016 Zen2 was at best a name on a PowerPoint slide.
And RDNA1 probably wasn't even that.
 

twilo99

Member
Barf…even my last PC is more powerful than this. Who in their right mind would play PC games on this?

The whole thing has been a farce for a while...at least we used to have games made for PC; now we only get some shitty ports from console and people climaxing over how it runs better on their console compared to PC.


Valve is the only dev that still does unique work for PC; the rest is all optimized for the low-end hardware found in the consoles.
 
Last edited:
AMD gave Sony and MS what they wanted, inside a budget. Remember that AMD already had Zen3 with a single 32 MB of L3 cache by the time the PS5 and Series X released.
But Sony and MS chose to sacrifice CPU cache to have more CUs on the GPU. And to save die space on the SoC.
Also, AMD probably would charge more for a Zen3 SoC than for one with Zen2, as it would be more recent.
We have to remember that consoles are very price sensitive.

I think it's true in theory that Sony & MS could have gone with a Zen 3 CPU, or at least Zen 3's unified L3$. But it might also be possible that AMD did not have an APU design ready with those features at the time Sony & MS needed APUs to test and iterate on in time for a 2020 launch.

Agreed with pricing being a factor; Zen 3 would definitely have cost more than Zen 2.

Doesn't matter how much HBM prices have lowered, it's still way too expensive for a console.

Again, maybe currently. But at large economies of scale, the per-chip price for a company like Sony would be a lot lower than, say, what some server company pays for HBM in their ad-hoc rack setup. I don't think Sony would pay that much more for decent HBM memory than they currently do for GDDR6, at the scale of orders they put in for their consoles.

I mean, even Microsoft admitted they considered HBM at one point but went against it due to JEDEC. It didn't seem to be due to pricing concerns on their end, and they have much less economies-of-scale benefit than Sony does with PlayStation.
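A toy volume-pricing model makes the economies-of-scale argument concrete. Every figure in it is invented for illustration; real HBM contract pricing isn't public:

```c
#include <stdio.h>

/* Toy model: per-chip price = marginal cost + amortized fixed costs.
   Every figure below is invented purely for illustration. */
int main(void)
{
    const double fixed = 50e6;  /* assumed: line setup, masks, qualification */
    const double unit  = 18.0;  /* assumed marginal cost per stack, USD */
    const long volumes[] = { 100000, 1000000, 10000000, 50000000 };

    for (int i = 0; i < 4; i++)
        printf("%9ld units -> $%.2f per chip\n",
               volumes[i], unit + fixed / volumes[i]);
    return 0;
}
```

At niche-server volumes the amortized overhead dominates the per-chip price; at console volumes it nearly vanishes, which is the heart of the argument above.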

The issue with latency is not so much GDDR6, but rather the memory controller, as it is tuned for bandwidth, sacrificing latency.

True. But, this is where HBM would be an obvious advantage, because you get better latency without sacrificing bandwidth.

On a PC, with 2 memory pools, one memory controller is tuned for latency, as that is better for a CPU. The other is tuned for bandwidth, as that is better for a GPU.

Yeah, and that is great for PC. But I don't see that approach being cost-effective in a console. You'd lose economies of scale by splitting the order volume between two different memory types, so per-chip costs increase by that alone. Then you lose the hUMA advantages, so you need things in place to assist in data management and enforce coherency, and you probably need some buffer memory on the controllers between the two memories to mitigate the performance drops from splitting up the memory pools.

That essentially complicates things for developers. In a sense I could see Microsoft taking that approach, but not Sony.

Once again, consoles are very price sensitive. So having 2 memory controllers means more die space.
But cache also means greater cost.

Yep. It really just comes down to what works best for market and product needs.

I can see a future where Microsoft prioritizes a modular, PC-like memory setup in their next system (whether it's on a console business model or not is uncertain) to address the CPU latency issue and get good GPU bandwidth, while opening up the possibility of capacity expansion.

Meanwhile, I can see Sony opting for a more cache-orientated approach while sticking to a hUMA memory setup, potentially HBM3-based, with fixed memory capacity. That way they still address the CPU latency issues and get bandwidth for the GPU, decent capacity, and maximized benefits from economies of scale.

Ah imagine a game 100% optimized for a 4090 based machine... one can dream.

If arcades still existed (as actual arcades, not FECs), you'd be getting that. Probably from Sega.

They didn't have Zen released, but it could have been in the design phase, just like Zen 2.
And don't blame me. DF said it.

DF say a lot of things. Maybe the Series X CPU was decided by 2016 (strong doubt), but I have many reasons to suspect the actual full design of the X and S was in a compressed development stage from mid-to-late 2017 through to late 2019/early 2020.

"Compressed", as in bulk of design and development. Microsoft likely had some amount of concept work on a 9th-gen Xbox prior and pulled parts of that towards Series X and S, but R&D into that earlier design was likely quite slow due to uncertainty around the division's future after XBO's launch. The division getting funding reduced during the Myerson years would have also had this effect.

Then, after the One S and One X came about, Microsoft likely took that feedback and predicated the bulk of the Series X and S design and product strategy on those two devices.

Why use Zen3+ but make it like Zen4c? The mobile variation had most of the frequency and cache with, I believe, less instruction set support.

No, for PS5 Pro I was thinking they could use Zen 4. Or Zen 3. Whatever, just something not Zen 2. I'm going off the idea that Zen 3 and 4 are BC with Zen 2 microcode, whereas the CPUs AMD had at the time of the PS4 Pro weren't BC in microcode with the Jaguar CPUs. Hence why the PS4 Pro stuck with Jaguar cores.
 
Last edited:

FireFly

Member
In 2016, AMD was near bankrupt. They barely had enough money and people to develop Zen1. In fact, they really didn't finish developing it.
Soon after, they released Zen+, basically the same CPU but with improved cache speeds.
A lot of people thought Zen was going to be AMD's last CPU before the company folded.
So there was no way they would have been developing Zen2 in 2016.
Much less would they have been developing RDNA2 by then. Even RDNA1 only came out in 2019.
"Zen 2, which began life in mid-2015 according to CPU chief Mike Clark, was designed primarily to boost the all-important instructions per clock cycle (IPC) metric which historically has been lacking on AMD chips when compared directly to Intel. IPC has become more important as liberal increases in frequency have dried up: reliably hitting 5GHz on any modern processor is extremely difficult."


Also note that at Zen 2's launch, AMD said they were already working on Zen 5, which is like 5 years in advance.

"Beyond Zen 3, AMD has already stated that Zen 4 and Zen 5 are currently in various levels of their respective design stages, although the company has not committed to particular time frames or process node technologies. AMD has stated in the past that the paradigms of these platforms and processor designs are being set 3-5 years in advance".

 

Zathalus

Member
I mean, if anything I'm even more impressed with current gen console performance compared to PC. Has it ever been this tight 3 years into the generation?

In terms of CPU the gap is roughly the same compared to previous generations (PS3/4) if you take single core performance into account (2-2.5x). The gap is the widest it has ever been if you are referring to multicore, but that hardly matters because very few games take advantage of that.

In terms of GPU, the pure rasterization gap at the moment is about the same as in the PS3/PS4 generation. The 4090 is roughly 3x-3.5x the console GPUs, while the 1080 was roughly 3x-3.5x the PS4 GPU, and the same for the Radeon 4870 over the 360. The difference is in RT: there the gap widens significantly, something like 8-12x in pure RT workloads. Then PC still has the advantage with AI upscaling and frame generation. It must be said that the pricing gap has shifted in favour of consoles, though.

Consoles do currently have the advantage in I/O with PCs only very recently having the hardware and API to match this aspect.
 

DaGwaphics

Member
In 2016 Zen2 was at best a name on a PowerPoint slide.
And RDNA1, probably wasn't even that.

By the time anything even becomes a bullet point a lot of engineering work has probably already been completed on it. No way Zen 2 hit the market when it did without being in development by 2016. Simply not realistic when you look at the timelines.

Edit: Thanks for bringing the receipts @FireFly
 
Last edited:

winjer

Gold Member
I think it's true in theory that Sony & MS could have gone with a Zen 3 CPU, or at least Zen 3's unified L3$. But it might also be possible that AMD did not have an APU design ready with those features at the time Sony & MS needed APUs to test and iterate on in time for a 2020 launch.

Agreed with pricing being a factor; Zen 3 would definitely have cost more than Zen 2.

MS and Sony are very important for AMD. If they wanted Zen3 for the new consoles, AMD would have developed the SoCs for them. It's just a matter of price.

Again, maybe currently. But at large economies of scale, the per-chip price for a company like Sony would be a lot lower than, say, what some server company pays for HBM in their ad-hoc rack setup. I don't think Sony would pay that much more for decent HBM memory than they currently do for GDDR6, at the scale of orders they put in for their consoles.

I mean, even Microsoft admitted they considered HBM at one point but went against it due to JEDEC. It didn't seem to be due to pricing concerns on their end, and they have much less economies-of-scale benefit than Sony does with PlayStation.

Economies of scale only go so far.
And GDDR6 benefits even more from economies of scale, as it is used in many more products.
Of course Sony would have considered it. They probably considered a ton of things.
But its cost was prohibitive for a console.
 

KungFucius

King Snowflake
To be fair, neither system is maxing out games with resolution or framerate. Some might be interested in systems that allow a stable 60 FPS or maxed-out DRS.
Which really makes the case for not buying a console and instead opting for a high-spec PC. Why would Sony want to make people who bought PS5s/XSXs regret their purchase? There is no new TV resolution making the OG PS5/XSX underpowered; it is just their cheap design.
 
Which really makes the case for not buying a console and instead opting for a high-spec PC. Why would Sony want to make people who bought PS5s/XSXs regret their purchase? There is no new TV resolution making the OG PS5/XSX underpowered; it is just their cheap design.

Well, consoles are very convenient, which appeals to many people. As for regretting purchases, that didn't really happen with the Pro or the One X. It's not like they could have done this at a reasonable price point years ago.
 

winjer

Gold Member
"Zen 2, which began life in mid-2015 according to CPU chief Mike Clark, was designed primarily to boost the all-important instructions per clock cycle (IPC) metric which historically has been lacking on AMD chips when compared directly to Intel. IPC has become more important as liberal increases in frequency have dried up: reliably hitting 5GHz on any modern processor is extremely difficult."


Also note that at Zen 2's launch, AMD said they were already working on Zen 5, which is like 5 years in advance.

"Beyond Zen 3, AMD has already stated that Zen 4 and Zen 5 are currently in various levels of their respective design stages, although the company has not committed to particular time frames or process node technologies. AMD has stated in the past that the paradigms of these platforms and processor designs are being set 3-5 years in advance".


That's a stretched interpretation of what was really going on.
AMD and Jim Keller had ideas for where the Zen arch could go in the future. But it's one thing to set targets and ideas; developing a full product is very different.
AMD did not have the resources back then that they have now. AMD even had to sell their headquarters to be able to keep working.

And don't forget that in 2016, RDNA1 was still 3 years away. And RDNA2 was 5 years away.
Worse yet, the specs for RDNA2 were not set. Remember that it was Nvidia that won the DX12_2 spec with Turing, a GPU released in 2018.
And AMD had to rework RDNA2 to those specs.
 
Last edited:

TrueGrime

Member
I'm really surprised that people are willing to shell out more money than what they spent on a PS5. I just got mine last year on Amazon after having to wait however long for a code to pop up. Not only that, it didn't come standalone, but bundled with a game. I don't believe the 2020-2023 gap will amount to a jump significant enough to justify what Sony is going to ask for a PS5 Pro.
 
MS and Sony are very important for AMD. If they wanted Zen3 for the new consoles, AMD would have developed the SoCs for them. It's just a matter of price.

I can agree with that. If Sony & MS can pull future GPU features into their systems, they could do the same with the CPU, like take Zen 3's unified cache blocks and use them in PS5 & Series systems.

So yeah, I can see what you mean on this one.

Economies of scale only go so far.
And GDDR6 benefits even more from economies of scale, as it is used in many more products.
Of course Sony would have considered it. They probably considered a ton of things.
But its cost was prohibitive for a console.

Maybe HBM was too much for PS5 & Series X, but the point is it won't stay that way by the time the next consoles are ready to release. It's really about weighing the advantages of what the tech brings for the price, and in that respect, HBM is getting to a point where it wins cleanly over GDDR, even if it will still cost a bit more.

Keep in mind Sony have already shown an understanding of this in the past, when they took a gamble on GDDR5 for the PS4 instead of the cheaper DDR3. They did this even knowing they were going into 8th gen from an unfavorable position market-wise in 7th gen. They were willing to gamble on less memory capacity for the benefits of GDDR5 and the hUMA advantages. When larger capacities became available, they took another gamble and doubled up on capacity even knowing they'd be spending more on memory as a result, because again, the advantages outweighed the drawbacks in cost.

I think they're going to see similarly clear benefits to HBM for PS6 and take a similar approach in making that switch, because the limitations of GDDR are becoming more and more noticeable as time goes on, particularly for embedded designs like consoles. Whether in latency, memory controller complexity (which factors into APU size), power usage, or physical footprint, those limitations are becoming readily apparent, and I think Sony in particular will decide that some HBM-based memory makes more sense overall moving forward, for the performance uplift it can contribute to the whole system design.

Microsoft IMO have fewer reasons to consider HBM going forward, at least in the same way or capacity as Sony, and I think they'll probably prioritize memory that works better for modularity. Because I see them diverging a lot for the next generation, and where they diverge it might make more sense to go non-hUMA and stick with somewhat more conventional, cheaper memories (well, GDDR seems to be cheaper than DDR, but still).
 
Console CPUs are handicapped by low L3 cache and high GDDR6 latency. Nothing new.
PS5 and XSX CPUs = Zen1 in gaming.

This is why desktop PCs are using low latency dedicated system RAM for CPUs paired with a lot of cache. GPUs have their own high bandwidth version with higher latency called GDDR.
PCs have to cope with the latency and CPU cost of copying data between main RAM and VRAM, which consoles don't.

So, having lower latency for the main RAM and larger L3 cache mostly just offsets those costs.

Consoles benefit greatly from a shared memory pool for CPU and GPU and having the CPU and GPU on the same die. At the same time, they have to deal with bus contention issues when CPU and GPU are both trying to access the same memory pool at the same time.

So there are pros and cons to each platform.
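To put a rough number on the copy cost being traded against contention, here's a plain-C stand-in that just times a bulk copy of a 256 MB "asset", the kind of RAM-to-staging hop a discrete-GPU path implies. A real PCIe transfer behaves differently; this is only a sketch:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

/* Stand-in for the extra RAM -> VRAM hop on a discrete GPU: time a bulk
   copy of a 256 MB buffer. A unified pool just hands the GPU a pointer. */
int main(void)
{
    const size_t bytes = 256UL * 1024 * 1024;
    char *src = malloc(bytes), *dst = malloc(bytes);
    if (!src || !dst)
        return 1;
    memset(src, 1, bytes);  /* touch the pages so we time the copy, not faults */
    memset(dst, 0, bytes);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    memcpy(dst, src, bytes);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ms = (t1.tv_sec - t0.tv_sec) * 1e3 + (t1.tv_nsec - t0.tv_nsec) / 1e6;
    printf("256 MB copy: %.1f ms (%.1f GB/s)\n",
           ms, (bytes / 1e9) / (ms / 1e3));
    free(src);
    free(dst);
    return 0;
}
```

That's milliseconds of CPU time and bandwidth per bulk upload that a unified pool never spends, which is the trade being weighed against bus contention above.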
 
Last edited:
Hopefully the Pro versions of the consoles (Xbox needs one too) get a bump in CPU spec as well, and not only a modest GPU bump. Compatibility shouldn't be too much of an issue nowadays.
Pro versions of both would be nice but both the PS5 and XSX are powerful enough for most people.
 

FireFly

Member
That's a stretched interpretation of what was really going on.
AMD and Jim Keller had ideas for where the Zen arch could go in the future. But it's one thing to set targets and ideas; developing a full product is very different.
AMD did not have the resources back then that they have now. AMD even had to sell their headquarters to be able to keep working.

And don't forget that in 2016, RDNA1 was still 3 years away. And RDNA2 was 5 years away.
Worse yet, the specs for RDNA2 were not set. Remember that it was Nvidia that won the DX12_2 spec with Turing, a GPU released in 2018.
And AMD had to rework RDNA2 to those specs.
There's no contradiction in claiming that AMD were working on the early design for Zen 2, but needed Zen 1 to be a success to have the resources to complete development. (You should also note that R&D spending only increased by 40% from 2016-2018, while it has increased by 2.5X since then. So it still seems that Zen 2 was made on a relatively tight budget, as AMD did not see huge revenue growth until later years).

In terms of CPU the gap is roughly the same compared to previous generations (PS3/4) if you take single core performance into account (2-2.5x). The gap is the widest it has ever been if you are referring to multicore, but that hardly matters because very few games take advantage of that.

In terms of GPU, the pure rasterization gap at the moment is about the same as in the PS3/PS4 generation. The 4090 is roughly 3x-3.5x the console GPUs, while the 1080 was roughly 3x-3.5x the PS4 GPU, and the same for the Radeon 4870 over the 360.
The difference is that the 1060 was already 2x console performance, and the 4060 isn't close to that. Also that the 4090 in itself is 3X a 4060, while the 1080 was maybe 70% faster than the 1060.
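Normalizing the thread's own ballpark multipliers makes this point clearer; these are the figures quoted above, not benchmarks:

```c
#include <stdio.h>

/* Encode the multipliers quoted in this thread and normalize everything
   to console performance. These are ballpark claims, not measurements. */
int main(void)
{
    double gtx1060 = 2.0;            /* "the 1060 was already 2x console" */
    double gtx1080 = gtx1060 * 1.7;  /* "1080 maybe 70% faster than the 1060" */
    double rtx4090 = 3.25;           /* "4090 roughly 3x-3.5x the consoles" */
    double rtx4060 = rtx4090 / 3.0;  /* "the 4090 in itself is 3X a 4060" */

    printf("2016: mainstream %.1fx console, halo %.1fx console\n",
           gtx1060, gtx1080);
    printf("2023: mainstream %.1fx console, halo %.2fx console\n",
           rtx4060, rtx4090);
    return 0;
}
```

So the halo-card gap is similar across generations, while the mainstream card has slid from ~2x console performance to roughly parity.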
 

winjer

Gold Member
Maybe HBM was too much for PS5 & Series X, but the point is it won't stay that way by the time the next consoles are ready to release. It's really about weighing the advantages of what the tech brings for the price, and in that respect, HBM is getting to a point where it wins cleanly over GDDR, even if it will still cost a bit more.

Keep in mind Sony have already shown an understanding of this in the past, when they took a gamble on GDDR5 for the PS4 instead of the cheaper DDR3. They did this even knowing they were going into 8th gen from an unfavorable position market-wise in 7th gen. They were willing to gamble on less memory capacity for the benefits of GDDR5 and the hUMA advantages. When larger capacities became available, they took another gamble and doubled up on capacity even knowing they'd be spending more on memory as a result, because again, the advantages outweighed the drawbacks in cost.

I think they're going to see similarly clear benefits to HBM for PS6 and take a similar approach in making that switch, because the limitations of GDDR are becoming more and more noticeable as time goes on, particularly for embedded designs like consoles. Whether in latency, memory controller complexity (which factors into APU size), power usage, or physical footprint, those limitations are becoming readily apparent, and I think Sony in particular will decide that some HBM-based memory makes more sense overall moving forward, for the performance uplift it can contribute to the whole system design.

Microsoft IMO have fewer reasons to consider HBM going forward, at least in the same way or capacity as Sony, and I think they'll probably prioritize memory that works better for modularity. Because I see them diverging a lot for the next generation, and where they diverge it might make more sense to go non-hUMA and stick with somewhat more conventional, cheaper memories (well, GDDR seems to be cheaper than DDR, but still).

There isn't much data on HBM prices, but it was at least double the price of GDDR. On a price-sensitive product like a console, this is prohibitive.
But the worst part is that recently, GDDR prices have fallen a lot, while HBM prices have increased fivefold.

 

winjer

Gold Member
There's no contradiction in claiming that AMD were working on the early design for Zen 2, but needed Zen 1 to be a success to have the resources to complete development. (You should also note that R&D spending only increased by 40% from 2016-2018, while it has increased by 2.5X since then. So it still seems that Zen 2 was made on a relatively tight budget, as AMD did not see huge revenue growth until later years).

Zen1 was an unexpected success for AMD, as it did relatively well. This was in 2017.
Only after this did AMD manage to increase its R&D budget. So like you said, in 2018, AMD increased its budget by 40%.

And you continue to ignore the most important part in the SoC of a console: the GPU.
In 2016 RDNA2 was still 5 years away from release. Not even the specs for DX12_2 were chosen.

According to @intbal, this is what they said: "Richard and John said that Microsoft told them the Series X APU design was already finalized in 2016."
No way the Series X APU was finalized in 2016.
 

Bojji

Member
There's no contradiction in claiming that AMD were working on the early design for Zen 2, but needed Zen 1 to be a success to have the resources to complete development. (You should also note that R&D spending only increased by 40% from 2016-2018, while it has increased by 2.5X since then. So it still seems that Zen 2 was made on a relatively tight budget, as AMD did not see huge revenue growth until later years).


The difference is that the 1060 was already 2x console performance, and the 4060 isn't close to that. Also that the 4090 in itself is 3X a 4060, while the 1080 was maybe 70% faster than the 1060.

True, mainstream GPUs are now much weaker and more expensive compared to 2013/14 (in relation to consoles).
 

Vox Machina

Banned
I really don't understand the complaints about the PS5 and Series X hardware. Both machines were great for $499 in 2020, with better-balanced hardware than the PS4/One generation and its pretty bad Jaguar CPU: 8 Zen 2 cores, 10-12 TFLOPS, 16 GB of RAM, a 2.4-5.5 GB/s SSD, etc. There should be little complaint.

I'm no console gamer, but only getting 3 years of 60 FPS games (and even 60 is low relative to PC) before the graphically demanding ones bump back down to 30 fps would be pretty disappointing to me.

I'd be more inclined to buy a console if they committed to a base platform that put out bi-yearly user-swappable upgrade packages. Sort of a half-step between the current user-friendliness of consoles and the customizability and flexibility of PCs.
 

intbal

Member
According to @intbal, this is what they said: "Richard and John said that Microsoft told them the Series X APU design was already finalized in 2016."
No way the Series X APU was finalized in 2016.
I'm trying to find it, but we're talking about dozens or hundreds of hours of video, just to locate one sentence. And not every video has a transcript (and those often have misspellings, anyway). I don't even remember what type of video it was. Not necessarily a DF Weekly. It could have been a platform comparison video.
 

FireFly

Member
Zen1 was an unexpected success for AMD, as it did relatively well. This was in 2017.
Only after this did AMD manage to increase its R&D budget. So like you said, in 2018, AMD increased its budget by 40%.

And you continue to ignore the most important part in the SoC of a console: the GPU.
In 2016 RDNA2 was still 5 years away from release. Not even the specs for DX12_2 were chosen.

According to @intbal, this is what they said: "Richard and John said that Microsoft told them the Series X APU design was already finalized in 2016."
No way the Series X APU was finalized in 2016.
Well, I was addressing the CPU side, because the question was whether Microsoft could realistically have used a newer CPU architecture (i.e. Zen 3).

As for the GPU, it depends what is being claimed to have been "finalized". I can see Microsoft agreeing to an RDNA 2 product supporting X number of functional units, with a rough teraflop target. That doesn't mean that the features had all been locked down yet. At this point we're talking about an interpretation of a quote of a quote.
 

StereoVsn

Member
I am really hoping that the PS5 Pro goes for Zen 4. This video from DF demonstrates how cut down the CPU is in the current gen. Still significantly better than PS4/Xbone, but not enough to run modern Unreal, basically.

Also, why are people complaining about this DF video? It's interesting from both a technical and a speculative perspective. If you don't like it, don't watch it, ffs.
 