
Next-Gen PS5 & XSX |OT| Console tEch threaD

Status
Not open for further replies.

IceManCat

Member
AFAIK Zen+/Navi APU does not exist, only Zen2 and Navi (end of the year release).

Also I struggle, I really do, to find any hardware that could pack 18Gbps GDDR6 chips as SYSTEM memory. It would be unprecedented for a laptop or a PC. For a laptop, because it's completely counterproductive (much higher TDP/costs and actually worse performance than 16GB of DDR4), and for a PC because an APU with 16GB of GDDR6 at 18Gbps would be... completely puzzling? You would build APU-based PCs only for the low end; what's the reason for incredibly high-powered APUs in a PC when you can go discrete?

Weird thing is, these are the absolute fastest chips Samsung makes and they are yet to be found in any product. Even high-performance GPUs such as the 2080S use 16Gbps chips. These speeds almost sound like too much even for a console, but not for a console with a "narrow" bus. On a 256-bit bus, slightly downclocked 18Gbps chips would bring 528GB/s of bandwidth. That would leave 440GB/s for the GPU alone...
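For reference, the bandwidth figures above follow from the standard GDDR formula (bus width in bits times per-pin speed, divided by 8). A quick sketch using the post's numbers:

```python
# Quick sanity check of the GDDR6 bandwidth figures in the post above.
# Formula: total bandwidth (GB/s) = bus width (bits) * per-pin speed (Gbps) / 8.

def gddr6_bandwidth_gbps(bus_width_bits: int, pin_speed_gbps: float) -> float:
    """Peak bandwidth in GB/s for a given bus width and pin speed."""
    return bus_width_bits * pin_speed_gbps / 8

print(gddr6_bandwidth_gbps(256, 18.0))  # 576.0 GB/s at full 18Gbps
print(gddr6_bandwidth_gbps(256, 16.5))  # 528.0 GB/s when slightly downclocked
```

So 528GB/s on a 256-bit bus implies an effective pin speed of 16.5Gbps, i.e. 18Gbps parts running a notch below spec.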


Most compelling argument I've seen so far. In theory, how powerful is "this machine"? Around 9TF?
 

R600

Banned
Most compelling argument I've seen so far. In theory, how powerful is "this machine"? Around 9TF?
Yes, I would say ~9TF. It's the upper limit of what I thought, because it seems Zen2 in consoles will have a lot of cache removed, therefore even lower TDP and more die space for CUs. But I think PS5 is 100% ~320mm², so not more than 40CUs.

IMO, with RT hardware and an SSD, very powerful.

Considering it's a closed box and all, I think we would get absolutely incredible-looking games worthy of $499.

Tbh I think we pretty much know what the PS5 will look like. Gonzalo/Flute and the PCB leak narrow it down incredibly well.
 
Last edited:

FrostyJ93

Member
Yes, I would say ~9TF. It's the upper limit of what I thought, because it seems Zen2 in consoles will have a lot of cache removed, therefore even lower TDP and more die space for CUs. But I think PS5 is 100% ~320mm², so not more than 40CUs.

IMO, with RT hardware and an SSD, very powerful.

Considering it's a closed box and all, I think we would get absolutely incredible-looking games worthy of $499.

Tbh I think we pretty much know what the PS5 will look like. Gonzalo/Flute and the PCB leak narrow it down incredibly well.

I bet the casing design will be quite the looker too.
 

IceManCat

Member
Yes, I would say ~9TF. It's the upper limit of what I thought, because it seems Zen2 in consoles will have a lot of cache removed, therefore even lower TDP and more die space for CUs. But I think PS5 is 100% ~320mm², so not more than 40CUs.

IMO, with RT hardware and an SSD, very powerful.

Considering it's a closed box and all, I think we would get absolutely incredible-looking games worthy of $499.

Tbh I think we pretty much know what the PS5 will look like. Gonzalo/Flute and the PCB leak narrow it down incredibly well.

I'm kinda in the same boat as SonGoku; I think 9TF just isn't enough to give a real leap. I have no doubt it'll look good, I'm just not sure how much better. 9TF would be about 11-12TF GCN.
 
Another thing to consider is the 1.6GHz base clock, which is the same as the PS4's. Isn't that important for backwards compatibility?

Such clocks would make no sense for some random Chinese console.
This logic makes zero sense, considering the fact that Zen uarch has 50% higher IPC than Jaguar. Who says that "code to metal" (which doesn't exist, but let's assume that it does) games won't break due to uarch differences?

Besides, it's a stupid idea if you want to push PS4 games with unlocked framerates to rock solid 60 fps.

PS4 Pro has no problem running all unpatched PS4 games at 2.13 GHz with boost mode, so it's fair to assume the PS5 will be able to run all PS4/VR games at 3.2 GHz.
 

R600

Banned
This logic makes zero sense, considering the fact that Zen uarch has 50% higher IPC than Jaguar. Who says that "code to metal" (which doesn't exist, but let's assume that it does) games won't break due to uarch differences?

Besides, it's a stupid idea if you want to push PS4 games with unlocked framerates to rock solid 60 fps.

PS4 Pro has no problem running all unpatched PS4 games at 2.13 GHz with boost mode, so it's fair to assume the PS5 will be able to run all PS4/VR games at 3.2 GHz.
“To give an example, the GPU of the prior version of the system might run at a GPU clock of 500 MHz, and the current system might run at a GPU clock [156] of 750 MHz. The system would run with [156] set to 750 MHz when an application is loaded that is designed only for the current system. In this example, the cycle counter [CC] would correspond to the 750 MHz frequency (i.e., it is a true cycle counter). When a legacy application (i.e., an application designed for the prior version of the system) is loaded, the system [100] may run at a frequency slightly higher than the operating frequency of the prior system (e.g., with [156] set to 505 MHz). In this backward compatible mode, the GPU spoof clock [135] would be configured to run at 500 MHz, and the cycle counter CC would be derived from the spoof clock, thus providing the expected value to the legacy application.”

From Sony patent itself.
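The mechanism the patent describes can be sketched in a few lines. This is a toy illustration only, with made-up names, not Sony's actual implementation:

```python
# Toy model of the patent's "spoof clock": the GPU really runs at real_mhz,
# but the cycle counter exposed to a legacy title is derived from a slightly
# lower spoof_mhz, so the title sees the timing it expects.

def spoofed_cycle_counter(elapsed_us: float, real_mhz: float, spoof_mhz: float) -> int:
    real_cycles = elapsed_us * real_mhz              # cycles actually elapsed
    return int(real_cycles * spoof_mhz / real_mhz)   # value the legacy app reads

# Patent's example: hardware at 505 MHz, legacy title expects 500 MHz.
print(spoofed_cycle_counter(1000.0, 505.0, 500.0))  # 500000 cycles per millisecond
```

The point is that the real clock can sit slightly above the legacy frequency for safety margin while the counter the game reads still ticks at exactly the old rate.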
 

xool

Member
9TF RDNA is also better than the Radeon 5700... basically 5700XT territory... which is 200W territory on its own (RAM included)...

I bet the casing design will be quite the looker too.
I had to make some concessions for cooling, but I came up with a few designs. Hope people like it.


 
I'm kinda in the same boat as SonGoku; I think 9TF just isn't enough to give a real leap. I have no doubt it'll look good, I'm just not sure how much better. 9TF would be about 11-12TF GCN.
9TF is around 13TF GCN. IMO more than what many thought was possible half a year ago.

and here we go again :messenger_tears_of_joy:, some voodoo shit in this thread going in loops.

I had to make some concessions for cooling, but I came up with a few designs. Hope people like it.

jokes aside, i think one of them might go with a cylinder case for better cooling.
 

R600

Banned
So from AnandTech, an 8-core Zen2 chip is 74mm² with ~32mm² going directly to cache.

If this leak is true, upon second look, it should have 1/4 of the cache, equaling 8MB instead of 32MB. This would save them ~24mm² of die, therefore the Zen2 chip in a hypothetical PS5 would take 50mm². This means that they could fit full Navi XT and RT (probably an additional 4CUs as well) inside a ~320mm² chip.

From Scarlett's POV, a 320-bit bus would already take 16mm², all else staying the same. If Scarlett fits 8MB of L3 cache more, it would result in an additional 8mm² on top of the 16mm² from the wider bus.

We could be in a situation where PS5 is ~320mm² with a narrower bus but 528GB/s of total BW due to faster memory modules, and Scarlett is 345mm² with a wider bus and 560GB/s of total BW (or even a bit less if downclocked) and a bit more cache, but the same number of CUs and therefore very, very similar numbers.

Which would be the better way? MS's approach would give them the ability to push for more BW with even faster RAM modules if necessary, but since the number of CUs is already locked, there is little point in going for more BW if there are no additional CUs to feed.

So we could get pretty much the same systems, with one having narrower but faster RAM and less L3 cache for a considerably smaller die, but the same performance as the bigger one that went for a wider bus and slower RAM but more L3 cache. The only question then would be who can clock them higher and have more headroom with regard to cooling.
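The arithmetic above can be tallied in a few lines. All figures are the poster's estimates, not confirmed silicon numbers:

```python
# Die-budget and bandwidth arithmetic from the post (poster's estimates).

zen2_chiplet_mm2 = 74      # AnandTech figure for an 8-core Zen 2 chiplet
cache_mm2 = 32             # of which ~32 mm^2 is L3 cache
cache_kept = 1 / 4         # 8 MB kept out of 32 MB

cpu_mm2 = zen2_chiplet_mm2 - cache_mm2 * (1 - cache_kept)
print(cpu_mm2)             # 50.0 mm^2 for the trimmed CPU block

# Bandwidth comparison: narrow/fast vs wide/slow.
ps5_bw = 256 * 16.5 / 8    # 528.0 GB/s (downclocked 18Gbps chips)
scarlett_bw = 320 * 14 / 8 # 560.0 GB/s (wider bus, 14Gbps chips)
print(ps5_bw, scarlett_bw)
```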
 
Last edited:

R600

Banned
Not long now and you’ll be in the double digits.

Nope. I always said 8.3-9.2TF (36 or 40CUs at 1800MHz).

It's just that 9TF was the upper limit, due to not expecting them to skimp so much on L3, which will save them some watts and die space.
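The 8.3-9.2TF range falls straight out of the GCN/RDNA FLOPS formula (CUs, times 64 shaders per CU, times 2 ops per clock, times frequency). A quick check:

```python
# TF = CUs * 64 shaders/CU * 2 ops/clock * clock(Hz), divided by 1e12.

def teraflops(cus: int, clock_mhz: float) -> float:
    return cus * 64 * 2 * clock_mhz * 1e6 / 1e12

print(round(teraflops(36, 1800), 2))  # 8.29 TF
print(round(teraflops(40, 1800), 2))  # 9.22 TF
```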

and here we go again :messenger_tears_of_joy:, some voodoo shit in this thread going in loops.
It's not voodoo. Navi XT (which doesn't hold its boost clocks) easily beats the Vega 64, which is a 12.6TF card, and pretty much matches the R7 (13.8TF).
 
Last edited:

MadAnon

Member
This logic makes zero sense, considering the fact that Zen uarch has 50% higher IPC than Jaguar. Who says that "code to metal" (which doesn't exist, but let's assume that it does) games won't break due to uarch differences?

Besides, it's a stupid idea if you want to push PS4 games with unlocked framerates to rock solid 60 fps.

PS4 Pro has no problem running all unpatched PS4 games at 2.13 GHz with boost mode, so it's fair to assume the PS5 will be able to run all PS4/VR games at 3.2 GHz.
PS4 Pro runs patched games perfectly fine in boost mode, but Cerny said it himself that you can run non-patched games in boost mode too, though there might be unexpected problems.

I'm pretty sure they want to avoid problems, because games will most likely not be patched by the devs themselves like they were for the Pro. You expect they will patch the PS4 library for the PS5?

And judging by their patent, it's clocks that are important.
 
Last edited:

vpance

Member
Yes, I would say ~9TF. It's the upper limit of what I thought, because it seems Zen2 in consoles will have a lot of cache removed, therefore even lower TDP and more die space for CUs. But I think PS5 is 100% ~320mm², so not more than 40CUs.

IMO, with RT hardware and an SSD, very powerful.

Considering it's a closed box and all, I think we would get absolutely incredible-looking games worthy of $499.

9TF, $499, bite-size APU. Arrogant Sony is back.
 
“To give an example, the GPU of the prior version of the system might run at a GPU clock of 500 MHz, and the current system might run at a GPU clock [156] of 750 MHz. The system would run with [156] set to 750 MHz when an application is loaded that is designed only for the current system. In this example, the cycle counter [CC] would correspond to the 750 MHz frequency (i.e., it is a true cycle counter). When a legacy application (i.e., an application designed for the prior version of the system) is loaded, the system [100] may run at a frequency slightly higher than the operating frequency of the prior system (e.g., with [156] set to 505 MHz). In this backward compatible mode, the GPU spoof clock [135] would be configured to run at 500 MHz, and the cycle counter CC would be derived from the spoof clock, thus providing the expected value to the legacy application.”

From Sony patent itself.
Sony has lots of patents:


Not all of them come to fruition. We know for a fact that PS4 Pro boost mode has no adverse effects. Same for XB1X.

And you still didn't explain how they are going to tackle uarch differences... same GHz doesn't really tell us anything. Different uarch, different IPC/performance. So again: how are they going to emulate Jaguar IPC/cycle behavior?

If Sony was crazy, they'd probably make a big.LITTLE x86 monstrosity (8 Jaguar cores for BC + Zen cores for next-gen games). Remember how the PS2 had the PS1 MIPS CPU embedded? Thank god they don't have to do that anymore.

PS4 Pro runs patched games perfectly fine in boost mode, but Cerny said it himself that you can run non-patched games in boost mode too, but there might be problems because the games are not coded with this extra juice in mind.

I'm pretty sure they want to avoid problems because games will most likely not be patched by devs themselves like they did for Pro.

And judging by their patent, it's cycles that are important and not IPC.
Do we have any examples of this or is it an entirely theoretical scenario (hence the patent "just in case")?

Don't forget that Cerny also said that they chose Jaguar on PS4 Pro for BC reasons, which sounds like a PR excuse (since the PS5 will have 100% native PS4 BC with Zen 2 uarch, isn't that contradictory?). The truth is that both PS4 Pro and XB1X didn't have enough time to integrate Zen cores in their semi-custom design (PS4 Pro APU was finalized in 2015, 2 years before Zen 1).

Last time I checked, it was during the DOS era when certain games relied on having certain MHz (and the turbo button broke them). That's an arcane programming practice and I seriously doubt anyone uses it these days.

Modern x86 CPUs are impossible to "code to metal", since there's microcode and a CISC/RISC translation layer.

The only way to truly "code to metal" would be if Intel/AMD allowed programmers to have direct access to those internal RISC cores (no microcode/translation shenanigans). Guess why they don't do that? It's because the internal architecture (aka microarchitecture) tends to change a lot from time to time, so there's no guarantee about keeping certain opcodes, which is problematic for BC.

Even stuff like automatically managed caches (versus Cell's local store memory) and out-of-order execution tend to get in the way a lot if you care about absolute determinism.
 

MadAnon

Member
Sony has lots of patents:


Not all of them come to fruition. We know for a fact that PS4 Pro boost mode has no adverse effects. Same for XB1X.

And you still didn't explain how they are going to tackle uarch differences... same GHz doesn't really tell us anything. Different uarch, different IPC/performance. So again: how are they going to emulate Jaguar IPC/cycle behavior?

If Sony was crazy, they'd probably make a big.LITTLE x86 monstrosity (8 Jaguar cores for BC + Zen cores for next-gen games). Remember how the PS2 had the PS1 MIPS CPU embedded? Thank god they don't have to do that anymore.


Do we have any examples of this or is it an entirely theoretical scenario (hence the patent "just in case")?

Don't forget that Cerny also said that they chose Jaguar on PS4 Pro for BC reasons, which sounds like a PR excuse (since the PS5 will have 100% native PS4 BC with Zen 2 uarch, isn't that contradictory?). The truth is that both PS4 Pro and XB1X didn't have enough time to integrate Zen cores in their semi-custom design (PS4 Pro APU was finalized in 2015, 2 years before Zen 1).

Last time I checked, it was during the DOS era when certain games relied on having certain MHz (and the turbo button broke them). That's an arcane programming practice and I seriously doubt anyone uses it these days.

Modern x86 CPUs are impossible to "code to metal", since there's microcode and a CISC/RISC translation layer.

The only way to truly "code to metal" would be if Intel/AMD allowed programmers to have direct access to those internal RISC cores (no microcode/translation shenanigans). Guess why they don't do that? It's because the internal architecture (aka microarchitecture) tends to change a lot from time to time, so there's no guarantee about keeping certain opcodes, which is problematic for BC.

Even stuff like automatically managed caches (versus Cell's local store memory) and out-of-order execution tend to get in the way a lot if you care about absolute determinism.

I actually misremembered some of what he said. I looked up the interview with DF. He actually said this.

"First, we doubled the GPU size by essentially placing it next to a mirrored version of itself, sort of like the wings of a butterfly. That gives us an extremely clean way to support the existing 700 titles," Cerny explains, detailing how the Pro switches into its 'base' compatibility mode. "We just turn off half the GPU and run it at something quite close to the original GPU.

"For variable frame-rate games, we were looking to boost the frame-rate. But we also wanted interoperability. We want the 700 existing titles to work flawlessly," Mark Cerny explains. "That meant staying with eight Jaguar cores for the CPU and pushing the frequency as high as it would go on the new process technology, which turned out to be 2.1GHz. It's about 30 per cent higher than the 1.6GHz in the existing model."

"Moving to a different CPU - even if it's possible to avoid impact to console cost and form factor - runs the very high risk of many existing titles not working properly," Cerny explains. "The origin of these problems is that code running on the new CPU runs code at very different timing from the old one, and that can expose bugs in the game that were never encountered before."

So yeah... Besides that patent, no idea how they will do backwards compatibility. That's the only info we have to discuss.
 
Last edited:

Darklor01

Might need to stop sniffing glue
Sony has lots of patents:


Not all of them come to fruition. We know for a fact that PS4 Pro boost mode has no adverse effects. Same for XB1X.

And you still didn't explain how they are going to tackle uarch differences... same GHz doesn't really tell us anything. Different uarch, different IPC/performance. So again: how are they going to emulate Jaguar IPC/cycle behavior?

If Sony was crazy, they'd probably make a big.LITTLE x86 monstrosity (8 Jaguar cores for BC + Zen cores for next-gen games). Remember how the PS2 had the PS1 MIPS CPU embedded? Thank god they don't have to do that anymore.


Do we have any examples of this or is it an entirely theoretical scenario (hence the patent "just in case")?

Don't forget that Cerny also said that they chose Jaguar on PS4 Pro for BC reasons, which sounds like a PR excuse (since the PS5 will have 100% native PS4 BC with Zen 2 uarch, isn't that contradictory?). The truth is that both PS4 Pro and XB1X didn't have enough time to integrate Zen cores in their semi-custom design (PS4 Pro APU was finalized in 2015, 2 years before Zen 1).

Last time I checked, it was during the DOS era when certain games relied on having certain MHz (and the turbo button broke them). That's an arcane programming practice and I seriously doubt anyone uses it these days.

Modern x86 CPUs are impossible to "code to metal", since there's microcode and a CISC/RISC translation layer.

The only way to truly "code to metal" would be if Intel/AMD allowed programmers to have direct access to those internal RISC cores (no microcode/translation shenanigans). Guess why they don't do that? It's because the internal architecture (aka microarchitecture) tends to change a lot from time to time, so there's no guarantee about keeping certain opcodes, which is problematic for BC.

Even stuff like automatically managed caches (versus Cell's local store memory) and out-of-order execution tend to get in the way a lot if you care about absolute determinism.

"Coding to the metal" is a BS marketing term; however, it is impossible to deny that a console OS lacks the processing overhead of a computer OS. It is simply a way of saying that it is easier to dedicate a higher percentage of the resources to the requirements of a game.

You are correct that some older games did break with the turbo button.

I don’t recall Cerny stating they used Jaguar for BC reasons; I’d have to look that up.
 

SonGoku

Member
That comparison is apples to oranges; at a base level we pretend PS3 had no access to more than half of its programmable compute.
Take the 360 then, which was more or less on par with the PS3:
a shitty 3x PPE and an early TeraScale-based GPU at 240GF.
The jump to PS4 is ~7.5x before even taking into account the TeraScale->GCN arch jump, which was huge.
Single numbers across disparate architectures don't mean much.
We know GCN punches above its weight compared to TeraScale, so the gap between the PS4 and 360 GPUs is even higher than what the raw numbers say.
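The raw-FLOPS jump being described can be checked directly. The 240GF Xenos figure is from the post; 1.84TF is the commonly cited PS4 GPU number:

```python
# Raw FLOPS jump from the 360's GPU (~240 GF, per the post) to the PS4's
# GPU (~1.84 TF), before any TeraScale -> GCN efficiency gains.

xenos_tf = 0.24
ps4_tf = 1.84
print(round(ps4_tf / xenos_tf, 1))  # ~7.7x, which the post rounds to ~7.5x
```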
 

SonGoku

Member
Efficiency (ie polys per second per million transistors or something) doesn't increase as shaders become more programmable - it actually decreases - I have no way of putting a number on it though
There's more to GPUs than poly crunching; the 360 GPU had similar raw numbers but was much more capable due to its unified shader architecture.
The benchmark posted yesterday with 16GB was with downclocked 18Gbps chips on a 256-bit bus, resulting in 528GB/s. So more bandwidth than the 2080 Super.
I didn't see any 18Gbps chip benchmark you keep mentioning, link?
There is not a single product I can think of that would pack something like this. A workstation laptop?
Gaming laptops, boxes etc.
This is not a console at all, very mediocre
Makes sense, since consoles typically get the lower-powered, mobile variant.
Funny how one generation is enough to become a trend :messenger_tears_of_joy: typically implies more than once...
This means that they could fit full Navi XT and RT (probably additional 4CUs as well) inside ~320mm² chip.
Here is where your crazy theory crumbles:
a 344 vs 320mm² chip will have minimal short-term cost savings and practically none long term.
The minimal savings are not worth the performance tradeoff that cripples the CPU's IPC (3700X -> 1700X) for a measly 24mm².
 
Last edited:

R600

Banned
By nature APUs have integrated GPUs. Doesn't prove it's related to consoles

That's not the point of the tweet. The point of the tweet is that Navi 10 Lite does not mean it's a cut-down version of the desktop GPU.

To answer how we know the chips are 18Gbps... look at the bandwidth write score: 33, which would mean 66GB/s x 8 = 528GB/s of total BW. There is no RAM in the world that can provide that BW except 18Gbps chips (slightly downclocked here). So not even the 16Gbps chips used in the 2080S, their first time in mass production.
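Working backward from the benchmark score as the post does (the x2 read/write doubling is the poster's assumption, not something the benchmark states):

```python
# Inverting the benchmark number: 33 GB/s write score -> ~66 GB/s per chip
# -> 8 chips (32 bits each = 256-bit bus) -> 528 GB/s total, which implies
# an effective pin speed of 16.5 Gbps, i.e. downclocked 18 Gbps parts.

write_score_gbs = 33
per_chip_gbs = write_score_gbs * 2       # poster's doubling assumption
total_gbs = per_chip_gbs * 8             # 8 chips on a 256-bit bus
pin_speed_gbps = total_gbs * 8 / 256     # invert: BW = bits * speed / 8

print(total_gbs, pin_speed_gbps)  # 528 16.5
```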

Care to explain what laptop in the world has, or has ever had, 16GB of the absolute fastest GDDR6 RAM as system memory? It's completely unheard of and makes 0% sense.

To make things interesting, the benchmark has now been deleted.
 
Last edited:

SonGoku

Member
That's not the point of the tweet. The point of the tweet is that Navi 10 Lite does not mean it's a cut-down version of the desktop GPU.
I just answered because of the triggered emoji lol. It's cool, but I don't think it changes anything: Gonzalo went from shitty to slightly less shitty?
To answer how we know the chips are 18Gbps... look at the bandwidth write score: 33, which would mean 66GB/s x 8 = 528GB/s of total BW. There is no RAM in the world that can provide that BW except 18Gbps chips (slightly downclocked here). So not even the 16Gbps chips used in the 2080S, their first time in mass production.
Thanks for the info, I noticed that too, but are you sure that wasn't total RAM?
Care to explain what laptop in the world has, or has ever had, 16GB of the absolute fastest GDDR6 RAM as system memory?
Not even in the top 5 craziest; there are dual and even triple GPU gaming laptops, some water-cooled.
Could be a gaming APU for OEMs.
To make things interesting, the benchmark has now been deleted.
"OPN" is suspect, possibly meaning Orderable Part Number? An OEM part?
 

R600

Banned
I know there have been crazy laptops for years, but there has never been a laptop with unified GDDR6 RAM on board. The fastest possible kind, at that.

There is a reason why PCs have split RAM: it's cheaper, better for battery and obviously performance, since the latency of GDDR6 as system RAM would be horrible. So what's the point? There is no point, obviously, and that is why I think this just can't be a laptop benchmark.
 

CrustyBritches

Gold Member
Honestly, it's weird to have a performance laptop with an APU with a powerful iGPU like the 5700XT and 16GB of high-speed GDDR6, and no DDR4 memory. That's a console build. A powerful one, with futuristic RAM and ~$350-400 GPU performance.
 

SonGoku

Member
but there has never been a laptop with unified GDDR6 RAM on board.
APUs are just now starting to get any good with Zen2
8GB would be too low to be shared between the system, CPU and GPU. 16GB of GDDR6 must be cheaper than 8GB GDDR6 + 8GB DDR4 (added complexity), and a split pool defeats the purpose of an APU. It's cheaper than dual GPU setups anyway.

Not entirely sure it's a laptop; it could be a gaming APU in general. But battery life isn't really a good argument, considering gaming laptops are meant to be plugged in.
There is a reason why PCs have split RAM: it's cheaper, better for battery and obviously performance, since the latency of GDDR6 as system RAM would be horrible. So what's the point?
An APU is best served with one memory pool; otherwise it drives up the cost and loses the APU's unified-pool advantages.
The options are super-expensive HBM, GDDR6, or DDR4. For a budget gaming-oriented APU, GDDR6 is the most appealing option. The Subor Z+, a Windows-based "console", went with a unified GDDR5 pool.
Honestly, it's weird to have a performance laptop with an APU with a powerful iGPU like the 5700XT and 16GB of high-speed GDDR6, and no DDR4 memory. That's a console build. A
What's so weird about it? Laptops with high-end GPUs have released before; this APU, equipped with a midrange GPU, is probably cheaper.
It's a gaming APU.
 
Last edited:

R600

Banned
APUs are used for low/mid-tier solutions in laptops. Find me one single APU that compares to entry-level Nvidia GPUs in a laptop, let alone the top end.

Sure, you would rather want 8+8, as general-purpose tasks will suffer due to ~60% higher latency, and the costs will certainly be smaller no matter the complexity.

In the end, they would certainly not use a 256-bit bus in a laptop with a total BW of 528GB/s. That is just absurd if we are talking about an APU, since APUs were created as lower-tier, cost-effective solutions, not the highest-performing ones that also make you sacrifice price and the entire PC design.

There might be an AMD Navi APU with ~20CUs (which would be enough for 1080p gaming). There won't be laptops that are faster than their top-end discrete PC equivalent.
 

CrustyBritches

Gold Member
Exactly. APUs are for cheap and easy laptops. More capable laptops have a separate CPU and GPU. They come in many tiers and configs where you would see DDR4 as well. Not only that, it's weird to even see a powerful APU like this on desktop. I have a Ryzen 2400G and it's like an Xbox One at best.

It could be another Subor Z+ type console, or an Alienware Steam Machine, etc. PS5 still seems most likely to me. :pie_thinking:
 
Last edited:

SonGoku

Member
APUs are used for low/mid tier solutions in laptops. Find me one single APU that compares to entry level Nvidia GPUs in laptop, let alone top end.
The 5700XT is a midtier GPU, and as I've said, with Zen2 AMD APUs will become much more popular.
This APU (if laptop oriented) will be targeted at gaming laptops, or maybe it's a general gaming APU.
Sure, you would rather want 8+8, as general-purpose tasks will suffer due to ~60% higher latency, and the costs will certainly be smaller no matter the complexity.
General-purpose tasks are not as important to a gaming-oriented APU; they would still have decent performance.
I think APUs have inherent advantages in being unified pools, be they economic or performance benefits; otherwise the Subor would have gone with a 4 + 8 setup.

Are there any AMD APUs with split pools, btw?

That is just absurd if we are talking about an APU, since APUs were created as lower-tier, cost-effective solutions,
A 340mm² APU will probably be cheaper than a CPU + discrete GPU + RAM setup. It would be a cost-effective performance gaming laptop/HTPC.
Exactly. APUs are for cheap and easy laptops. More capable laptops have a separate CPU and GPU.
In other words, a market just waiting to be tapped into.
Gamers have generally been disinterested in AMD APUs with their low-performance GPUs and CPUs.

Zen2 and 7nm have the potential to disrupt the APU market and offer superior gaming performance for a lower cost of entry.
Intel definitely sees the potential of such a market.
It could be another Subor Z+ type console, or an Alienware Steam Machine, etc. PS5 still seems most likely to me. :pie_thinking:
Or a gaming APU for OEMs to use how they see fit (console, HTPC, laptop, tablet).
 
Last edited:

R600

Banned
The 5700XT is the fastest AMD GPU (well, matching the R7). I am sure they are looking to put it inside a laptop 1:1, with an additional 8GB of RAM. Not only that, this one would have even faster RAM (14Gbps for the desktop part, 18Gbps for the PC/laptop APU). Does this sound convincing to you? Because to me it's completely absurd. Why buy an APU with a GPU matching the desktop part and a crippled Zen2?

The simplest explanation is that this now-removed benchmark was a console part. They removed some L3 cache to save die space and put in the maximum they could in terms of GPU performance for a closed box on the 7nm node.
 

SonGoku

Member
Not only that, this one would have even faster RAM (14Gbps for the desktop part, 18Gbps for the PC/laptop APU). Does this sound convincing to you? Because to me it's completely absurd. Why buy an APU with a GPU matching the desktop part and a crippled Zen2?
It would still be cheaper than discrete parts; maybe therein lies the appeal.

This is most likely a gaming APU for sale, not necessarily for laptops.
 

R600

Banned
It would still be cheaper than discrete parts; maybe therein lies the appeal.

This is most likely a gaming APU for sale, not necessarily for laptops.
But it's not. There is no appeal in this, SonGoku. Literally everything points in a different direction.

They will not be putting the most expensive, fastest RAM available in their laptop or PC APU. In fact, the RAM in question is not even in mass production. What is the purpose of putting it in there? What's the purpose of putting out a gaming laptop with more performance than your fastest discrete solution, which requires 200W?
 

CrustyBritches

Gold Member
I've found use for APUs in builds for family and friends going back to the original Llano. They're best in a scenario with mostly business or general application usage while still maintaining better graphics capability for multi-monitor, media playback, or light gaming. The idea was that you could get near entry-level dGPU performance for only a bit more cash without adding another possible point of failure.

Gonzalo is a specialized, high-performance gaming chip. It's like the opposite of what AMD does with their laptop/desktop APUs. It wouldn't surprise me if even the newer 3400G is still weaker than the PS4, a 2013 design. They don't seem to step on their semi-custom contracts with desktop equivalents.
 
Last edited:

xool

Member
I didn't see any 18Gbps chip benchmark you keep mentioning, link?
To answer how we know the chips are 18Gbps... look at the bandwidth write score: 33, which would mean 66GB/s x 8 = 528GB/s of total BW. There is no RAM in the world that can provide that BW except 18Gbps chips (slightly downclocked here). So not even the 16Gbps chips used in the 2080S, their first time in mass production.
I had to check at another site, but the single-core write was reported at 33.1 GB/s...

It's 16Gbps GDDR6, and here's why: the experimentally measured bandwidth score includes L2 and L3 cache hits, which pushes the score just over the 32GB/s expected for 16Gbps RAM (it's +3.4%).

The alternative of buying cutting-edge RAM and downclocking it for a fraction of a percent improvement over commercially available RAM doesn't make sense.
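This counter-reading can be checked the same way: the expected single-channel write for 16Gbps chips is ~32GB/s, and the measured score sits only a few percent above it:

```python
# 16 Gbps interpretation: the expected single-core write for 16 Gbps RAM is
# ~32 GB/s, and the measured 33.1 GB/s is only a few percent above that,
# plausibly explained by L2/L3 cache hits in the benchmark.

expected_gbs = 32.0    # per-channel expectation for 16 Gbps chips
measured_gbs = 33.1    # reported single-core write score
inflation_pct = (measured_gbs - expected_gbs) / expected_gbs * 100
print(round(inflation_pct, 1))  # 3.4
```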
 

SonGoku

Member
But it's not. There is no appeal in this
The appeal of a gaming APU is gaming performance.
A Navi10 + Zen2 APU would provide decent performance while being cheaper than buying separate discrete parts. This would be appealing for HTPCs, Windows-based consoles and even laptops.

Literally everything points in a different direction.
It makes no sense for a console; I'm 99% sure it's unrelated.
They will not be putting the most expensive, fastest RAM available in their laptop or PC APU. In fact, the RAM in question is not even in mass production.
I'm still undecided about this; I read some posts on ree that speculated HBM, DDR3, etc.
The whole thing is bogus; drawing conclusions from it would be naive... 16 chips, when 2GB chips are more cost-effective.

Are 1GB 18Gbps chips even possible? I believe only 2GB chips are privy to those speeds.
 
Last edited:

xool

Member
Can someone ELI5 this:

Say I'm Alienware and I'm making a gaming laptop with an AMD APU... why would/wouldn't I choose GDDR6 over DDR? Cost? Performance? Something else?

...(thinking)... If GDDR6 is the right choice for a console APU, why not other applications?
 

xool

Member
[more on "Flute" memory]

(The bench reported the memory as 16 blocks of 1024M chips.)

Are 1GB 18Gbps chips even possible? I believe only 2GB chips are privy to those speeds.

What if we are looking at GDDR6 in dual-channel (2x16-bit data) mode? That might report 8x 2GB chips as 16x 1GB chips... maybe... OK, I have no idea how this bench reports multi-channel setups.

Just a thought
 