
Next-Gen PS5 & XSX |OT| Console tEch threaD


01011001

Banned
50% perf/watt improvement over RDNA1, goddamn. That's how they managed to put 12 goddamn teraflops in the XSX. That's insane.

I must say I'm impressed.

Well, they could have done this with RDNA1 as well, I bet; it would just have turned out a bit louder and drawn more power.
 

Old Empire.

Member
So if Microsoft co-developed RDNA 2.0 technology, how would that work out for the PS5? Forgive my ignorance lol


The quote was: "The latest Microsoft DXR 1.1 API was co-architected and co-developed by AMD and Microsoft to take full advantage of the full ray tracing architecture."

If Sony wasn't involved, it will be interesting to see if the API is better than what Sony has developed for their console. Sony isn't a software developer, so we don't know if they have the same tools to replicate and match the performance.
 
Found a very good post (among lots of others) over on Beyond3D that should put the Ariel/Oberon stuff in much better perspective for people who don't understand what the Oberon tests were actually testing. The post is from user AbsoluteBeginner:

It's not that. It's that people think Ariel = Oberon. It's not true, never was. I talked about this for months, hence I am called a cult leader. Ariel has been known since December 2018. Ariel was known as GFX1000; it was a Navi 10 derivative, an RDNA1 chip.

Oberon is a later chip, it's the 2nd APU, and the entire GitHub repo for Oberon was made up of Native/BC1/BC2 tests which are run against the Ariel iGPU test list. Therefore, since Ariel was Navi 10 Lite, you couldn't have ray tracing and variable rate shading running Ariel's test list.

People are trying to discredit a leak without understanding the basics about it; it's frustrating.

This is basically the best working case for Oberon that can also fit what most insiders have been saying, too: that Oberon's tests were regression tests run against the Ariel iGPU profile. Since Ariel was an RDNA1-based chip, it did not have RT/VRS built into it. Even if Oberon has RT/VRS (in fact it's pretty much guaranteed now after today's AMD Financials thingy), they would not be enabled when running the Ariel iGPU regression; even users here like R600 mentioned this months ago.

It also would indicate that the Oberon tests that have been datamined so far do not tell everything about the chip. They may or may not mention the chip's CU count (IIRC the first Oberon stepping listed "full chip" in its log), but we've already seen later steppings change the memory controller to increase the bandwidth to the chip. We don't know if Oberon has an extra cluster of CUs disabled on the chip with later steppings beyond the very first one, but I'm thinking if there were, they would have been from the 2nd stepping onward, and I would think something like that would call for a chip revision instead of just another stepping, but I dunno. Even so, we don't know how many additional CUs are present, if any.

And something else to consider: I saw some people mentioning that AMD said "multi-GHz GPUs" during a segment for GPU products and systems releasing this year? Did that happen? If so, I don't think they would use that phrase if they weren't talking 2GHz or greater, and we know Oberon has a clock at 2GHz. And now we practically know PS5 is RDNA2, which has upwards of 50% better efficiency versus RDNA1. That would obviously also shift the sweet spot northward, which makes an RDNA2 chip at those clocks a lot more feasible. It's still maybe a bit crazy, but not as crazy as a lot of people were thinking before today's news, eh?

Although that actually raises an interesting question about why XSX's clocks are "so low" if RDNA2 efficiency is so much better. Either the 50% claim over RDNA1 is AMD PR talk, or MS felt no need to push the clock higher and chose guaranteed stability at a cooler GPU clock. However, that presumably also means they designed with headroom to up the clocks if Sony outperformed them on the GPU front in terms of TFs. The fact they seemingly have gone with a 1.675GHz - 1.7GHz clock on an RDNA2 chip (with the sweet spot probably shifted a good bit northward from the 1.7GHz - 1.8GHz of RDNA1) might hint that they are fairly certain they have the stronger of the two machines, but the question is now by how much? (Also, I kinda shamelessly took the idea of XSX clocks and what they indicate relative to PS5 from another post over there, but I thought it was worth thinking about.)

So yeah, there are still a lot of unknowns, but given Oberon E0 was tested into December of last year, I'm pretty much 100% sure Oberon is the PS5 chip. However, I'm also pretty much 100% sure we haven't really seen a benchmark testing Oberon itself, just the Ariel iGPU profile regressed on Oberon, meaning we haven't seen the entirety of the chip (I think this is exactly why Matt also said "disregard it" in reference to GitHub, because it wasn't testing the full chip or even much of anything on the chip outside of the Ariel iGPU). And that's the fun part, because it can run a wide gamut. However, knowing RDNA2 efficiency and XSX's pretty "tame" GPU clock, and the fact that high-level MS and Sony people would know a lot more about each other's systems than any of us, I think that might signal MS is comfortable with the lower clock because they're fairly certain they at least have the bigger chip. Whether that means PS5 is 36/40 or (like a die estimate from a few months ago speculated) 48 CUs, or maybe even into the very low 50s, is unknown.

That's why I've been rolling with 48 CUs as Oberon's actual size, with four probably disabled for yields. At 2GHz that actually hits around 11.26TF, which is better than my earlier numbers, even. It does kinda depend on Oberon's full size being 48, however, and on whether they can actually keep the 2GHz clock stable, because that is probably still a tad north of RDNA2's upper sweet-spot range.
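For anyone who wants to check that arithmetic, here's a quick Python sketch using the usual RDNA math of 64 shaders per CU and 2 FLOPs per shader per clock. The CU counts and the 2GHz clock are the speculated values from above, not confirmed PS5 specs:

def rdna_tflops(active_cus, clock_ghz):
    # 64 shaders per CU, 2 FP32 FLOPs per shader per clock (standard RDNA math)
    return active_cus * 64 * 2 * clock_ghz / 1000  # result in TFLOPS

print(rdna_tflops(44, 2.0))  # 11.264 -> ~11.26 TF with 48 CUs and 4 disabled
print(rdna_tflops(48, 2.0))  # 12.288 -> ~12.3 TF with all 48 CUs active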

Either way, I think we can ALMOST certainly put the 9.2TF PS5 talk to rest now, but funnily enough today's news just reaffirms the datamines, the leak, and even the insiders if there's more to Oberon in terms of CUs than the initial test that showed 40 as the "full chip" (which, to be perfectly fair, could have just been referencing the Ariel iGPU profile, since Ariel is a 40CU RDNA1 chip). And being 100% fair, while I do think MS clocking XSX as low as it is (1.675GHz - 1.7GHz) is both odd and maybe indicative they're comfortable they have a performance edge over PS5, Oberon could also be a 58 or 60 CU chip if we're being honest, because again there's the whole butterfly thing and 18x3 gives you 54. So it could be more a case of MS knowing they have an advantage right now, but Sony could have upped performance, and then you get MS responding by having headroom to push their clocks higher.

Or it could even be a case that maybe MS doesn't know as much about PS5 as some think, but they might know Oberon is also a big chip, and they want to see for certain where PS5 actually lands by throwing 12TF out there. So if PS5 reveals their number and it's the same or somewhat larger, MS can enable an upclock on the GPU to match or surpass it. And I would think they have already tested the GPU at higher clocks by now just in case that type of scenario plays out. That's the other way to read their announcement from last week, anyway.

But again, it all hinges on what Oberon actually fully is, and we'll only know for sure if another benchmark test gets datamined that isn't running the chip on an Ariel iGPU profile. That could maybe come this week, or within the next few weeks. Hopefully soon. If it does and we still see it's a max 40CU chip, then it's time for people to accept that. If it's a larger chip at around 48 CUs, then they could either be running it with 4 CUs disabled or all 48 on, and that would get them between 11.26TF - 12.28TF @ 2GHz, aka virtually identical to XSX. If it's even larger, like a 60CU chip, and they're running it at 2GHz even in that case, then it just means MS can upclock the XSX at a rate they've already internally tested as a contingency plan to close the performance gap, because anything beyond 2GHz in a console-like form factor is probably gonna melt silicon.

Thing is, all three of those scenarios have an even chance of playing out, and we're only going to get a better, fuller indication a few weeks from now. Don't throw away one of those possibilities even if you prefer another, because there honestly isn't a very strong reason to throw any of these scenarios out of the window just yet.

But we CAN throw out the idea that PS5 isn't using RDNA2; that much is essentially official.
 

DeepEnigma

Gold Member
The quote was: "The latest Microsoft DXR 1.1 API was co-architected and co-developed by AMD and Microsoft to take full advantage of the full ray tracing architecture."

If Sony wasn't involved, it will be interesting to see if the API is better than what Sony has developed for their console. Sony isn't a software developer, so we don't know if they have the same tools to replicate and match the performance.

Uh, Sony is a software developer and has an extensive network of teams like XDev, ICE, and plenty of others that do fantastic work on APIs and toolsets.

What year is it?
 
Jesus tapdancin' Christ, that Colbert dude is insufferable.

How many times and in how many ways can GitHub be shot down as not being an entirely accurate picture of anything, and yet you refuse to give up?

This all reminds me of The Last Samurai, wherein the Github clingers are being smacked down by Ujio with a wooden sword at every turn, but refuse to stay down.
 

DeepEnigma

Gold Member
An API for hardware ray tracing costs money. This is not a graphics driver.

Holy shit dude, where have you been the last 7+ years, especially with the design of the PS4? They have an entire team of competent architects working on the damn thing.

They have their own in-house APIs, game engines, toolsets, and everything under the sun, like any other platform.

Are you being purposely obtuse, or?
 

Ellery

Member
So a 225W RDNA1 GPU would only need about 112W as RDNA2 for the same performance.

No. A 225W GCN GPU would only need about 112W as RDNA2 for the same performance (or roughly around there).

The way AMD and Nvidia do their math on this can be a bit confusing. The math itself is correct, but they do it in a way to make it sound the best (for them).

When AMD says RDNA1 -> RDNA2 is a 50% perf/watt increase then that also includes the lower power consumption by the more advanced manufacturing node.

Bear with me. I know what I am about to say sounds wrong, but it is actually correct, and reviews back me up on this.

To put it very simply, with an easy example:

The Radeon VII uses about 270W during gaming. This is a 7nm GCN card.
The 5700 XT uses about 210W during gaming. This is a 7nm RDNA1 card.
Those two cards have roughly the same performance.
If we were to have an RDNA2 card with the same performance as the two cards mentioned above, it would only need about 140W-150W.

The way the math works is that a % increase and a % decrease are different. If you have 100 apples and you INCREASE your apple count by 50%, you have 150 apples. But if you now DECREASE your apple count by 50%, you end up at 75 apples (going from 150). This is kind of what is happening here. To get back to 100 from 150 you would only need to DECREASE by 33%.
But saying 50% better perf/watt sounds much better in presentations than saying a 33% decrease in power consumption.

When AMD says 50% better perf/watt, what they mean is roughly a 30-33% decrease in power consumption at the same performance.
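To make that concrete, here's a tiny sketch of the math in Python. The 210W figure is the approximate 5700 XT gaming power from above; the "RDNA2 card matching a 5700 XT" is purely hypothetical:

def power_at_same_perf(old_watts, perf_per_watt_gain):
    # A perf/watt gain at identical performance means dividing power by (1 + gain)
    return old_watts / (1 + perf_per_watt_gain)

new_watts = power_at_same_perf(210, 0.50)  # hypothetical RDNA2 card matching a 5700 XT
print(new_watts)                           # 140.0 W
print(1 - new_watts / 210)                 # ~0.333, i.e. a 33% power decrease, not 50%

# Same asymmetry as the apples: +50% then -50% does not get you back to the start
apples = 100 * 1.5    # 150.0
print(apples * 0.5)   # 75.0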
 
Jesus tapdancin' Christ, that Colbert dude is insufferable.

How many times and in how many ways can GitHub be shot down as not being an entirely accurate picture of anything, and yet you refuse to give up?

This all reminds me of The Last Samurai, wherein the Github clingers are being smacked down by Ujio with a wooden sword at every turn, but refuse to stay down.

AMD says PS5 is RDNA2 and Navi 2X; Colbert and Proven will disagree and say the GitHub leak with RDNA1 and Navi 10 is the PS5 😆
 

Old Empire.

Member
Holy shit dude, where have you been the last 7+ years, especially with the design of the PS4? They have an entire team of competent architects working on the damn thing.

They have their own in-house APIs, game engines, toolsets, and everything under the sun, like any other platform.

Are you being purposely obtuse, or?

The API was co-architected and co-developed by AMD and Microsoft! You think Sony will just magically generate an API on its own that matches? Microsoft worked with AMD to get the best ray tracing performance from the chip. Engineering on the silicon pipeline and new code paths are needed to bring FPS gains with ray tracing on. This is not an overnight job; the API was likely in development for a year or more.
 

Ellery

Member
No.

12TFs in RDNA = 12TFs in RDNA2.
But RDNA 2 consumes less than RDNA.

That is what Perf/watt means.


Actually I am not sure about this one. I think 12 RDNA2 TF are actually better than 12 RDNA TF in terms of gaming performance.

I would need to take a deeper look into this. I don't like comparing Teraflops too much. I am more into gaming performance benchmarks.

But if I am not mistaken then those 12 TF RDNA2 should be genuinely impressive depending on what architectural changes actually happened going from RDNA -> RDNA 2.

I don't like saying stuff like 12 TF RDNA2 is like 15 TF RDNA, but we will have to wait and see how the benchmarks for the RDNA2 AMD cards are going to turn out.

Teraflops themselves aren't the important thing here, but the architecture. For example, the 5700 XT (9.7 TF) is as fast as the Radeon VII (13.44 TF).
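Those paper TF numbers can be reproduced from the public shader counts and the peak clocks that give the commonly quoted figures (2560 shaders at ~1.905GHz boost for the 5700 XT, 3840 shaders at 1.75GHz for the Radeon VII), which is exactly why I don't like leaning on TF alone:

def paper_tflops(shaders, clock_ghz):
    # 2 FP32 FLOPs per shader per clock
    return shaders * 2 * clock_ghz / 1000

rx_5700_xt = paper_tflops(2560, 1.905)  # ~9.75 TF, RDNA1
radeon_vii = paper_tflops(3840, 1.75)   # 13.44 TF, GCN
print(radeon_vii / rx_5700_xt)          # ~1.38x the paper FLOPS for roughly equal gaming performance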
 

spartan30gr

Member


PS5 is RDNA 2
 

DeepEnigma

Gold Member
The API was co-architected and co-developed by AMD and Microsoft! You think Sony will just magically generate an API on its own that matches? Microsoft worked with AMD to get the best ray tracing performance from the chip. Engineering on the silicon pipeline and new code paths are needed to bring FPS gains with ray tracing on. This is not an overnight job; the API was likely in development for a year or more.

Sony works closely with AMD as well; it's the same vendor, so I am more than positive they co-developed their own API too.

This is the DX12 crap all over again.

🤡 🌎
 
As far as I know we didn't have clock speeds in the leaks for the Series X; maybe they are higher than we think? Maybe Microsoft chose 1.8GHz or 1.9GHz and fewer active CUs for better yields?

That is a possibility. Another possibility is that 56 CUs is the full chip and they have 52 active? Because if it's a 60CU chip and they have, say, 8 CUs disabled, that feels a bit like leaving performance on the table IMO. Might as well have gone with a smaller chip in that case.

In any case they do have the headroom to upclock depending on where PS5 actually lands, but if, for example, Oberon is a fat chip similar to XSX at 2GHz, well, that would have seemed insane to me until seeing AMD double down on RDNA2 efficiency (I hope it's that efficient for their sake, because Nvidia is NOT playing around xD).
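And just to show why the ~12TF figure by itself doesn't settle the CU/clock question, here's the same RDNA math run over the combinations people are guessing at. All of these are speculation from the thread, not confirmed XSX specs:

def rdna_tflops(active_cus, clock_ghz):
    return active_cus * 64 * 2 * clock_ghz / 1000  # 64 shaders/CU, 2 FLOPs/clock

for cus, clock in [(56, 1.675), (52, 1.8), (52, 1.825), (48, 1.95)]:
    print(cus, "CUs @", clock, "GHz ->", round(rdna_tflops(cus, clock), 2), "TF")

# 56 CUs @ 1.675 GHz -> 12.01 TF
# 52 CUs @ 1.8 GHz   -> 11.98 TF
# 52 CUs @ 1.825 GHz -> 12.15 TF
# 48 CUs @ 1.95 GHz  -> 11.98 TF

Several very different configurations all land on roughly 12TF, which is why the full-chip CU count matters more than the headline number.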
 

ethomaz

Banned
Actually I am not sure about this one. I think 12 RDNA2 TF are actually better than 12 RDNA TF in terms of gaming performance.

I would need to take a deeper look into this. I don't like comparing Teraflops too much. I am more into gaming performance benchmarks.

But if I am not mistaken then those 12 TF RDNA2 should be genuinely impressive depending on what architectural changes actually happened going from RDNA -> RDNA 2.

I don't like saying stuff like 12 TF RDNA2 is like 15 TF RDNA, but we will have to wait and see how the benchmarks for the RDNA2 AMD cards are going to turn out.

Teraflops themselves aren't the important thing here, but the architecture. For example, the 5700 XT (9.7 TF) is as fast as the Radeon VII (13.44 TF).
I mean, the 50% perf/watt figure isn't related to arch IPC improvements.
Of course it could have better IPC, but AMD didn't talk about that.
 

Bludbro

Neo Member
Found a very good post (among lots of others) over on Beyond3D that should put the Ariel/Oberon stuff in much better perspective for people who don't understand what the Oberon tests were actually testing. [...]

But we CAN throw out the idea that PS5 isn't using RDNA2; that much is essentially official.
Excellent analysis, thanks.
 

OsirisBlack

Banned
That is a possibility. Another possibility is that 56 CUs is the full chip and they have 52 active? Because if it's a 60CU chip and they have, say, 8 CUs disabled, that feels a bit like leaving performance on the table IMO. Might as well have gone with a smaller chip in that case.

In any case they do have the headroom to upclock depending on where PS5 actually lands, but if, for example, Oberon is a fat chip similar to XSX at 2GHz, well, that would have seemed insane to me until seeing AMD double down on RDNA2 efficiency (I hope it's that efficient for their sake, because Nvidia is NOT playing around xD).

This sounds oddly familiar.
Lawrence Julius Taylor
 

Old Empire.

Member
Sony works closely with AMD as well; it's the same vendor, so I am more than positive they co-developed their own API too.

This is the DX12 crap all over again.

🤡 🌎

Not really the DX12 crap again. AMD is claiming they co-developed an API with Microsoft. Ray tracing is a new technology for developers in the console space. Sony making its own API, now that's interesting!

We have to wait and see if PS5 can match the Xbox Series ray tracing performance. Sony uses different APIs for their console, so AMD would have to be involved in the development of a new API to be compatible. Right now AMD has only confirmed working with MS to get the best performance from their silicon.
 


Fake

Member
No. 12 TF is super impressive. 50% perf/watt for RDNA2 makes it even more impressive (and much more logical from a power consumption standpoint).

Otherwise the Xbox Series X would be way too power hungry.
Yeah, that explains a lot. The SeX will not be as power hungry as I thought. RDNA 2.0 explains a lot, in fact.
 

01011001

Banned
The GPU was confirmed to be Navi 2X, which is big Navi. The 9.2 TF figure is from Navi 10 on GitHub, based on RDNA1.

Which then means with 100% certainty that the PS5 is not 9.2TF, of course, because it is physically impossible to manufacture a chip with that performance profile if it's RDNA2! That's one of the basic laws of physics, after all!
Even Einstein once said: "Nah, RDNA2 will never deliver only 9.2TF! Impossible!"

It's also impossible that Sony ever used an RDNA1 chip in early development and then changed to RDNA2 with the same performance goal! Impossible! Like, what's next? Ghosts? Vampires? Nah, all of this is impossible.
 