
Possible hint at AMD's next-gen APU (codename: Gonzalo) - 8 cores, 3.2GHz clock, Navi 10-based GPU

DeepEnigma

Gold Member
One cool thing to consider with massive bandwidth but a not-so-extreme GPU is we could have insanely good particles, shadows and texture filtering. The PS2 was the weakest overall, but with the highest bandwidth it did some cool stuff.

I would think memory bandwidth that high would be great for physics calculations as well. Makes the worlds feel so much more alive.
 

Darius87

Member
When he sent the info to the mod, it had the CPU at 8 cores clocked at 3.2GHz & the GPU clocked at 1.121GHz. This matches up with the leak.

Looks legit, except the GPU clock is so low for 7nm: only a 210MHz increase over the Pro at 16nm. I thought it might be around 1.5GHz.
 

onQ123

Member
Yeah, lol.

Considering what gorgeous games we are currently getting out of a 1.84TF baseline PS4, I can only imagine more than 5 times that as a baseline with huge memory bandwidth and a massive CPU upgrade over the netbook one.

Not to mention the advances in hardware that we should be getting.

One cool thing to consider with massive bandwidth but a not-so-extreme GPU is we could have insanely good particles, shadows and texture filtering. The PS2 was the weakest overall, but with the highest bandwidth it did some cool stuff.


With the stacking of chips, I'm hoping that a small amount of memory, maybe 512MB or 1GB, is stacked on the GPU with a wide bus, just because.
 

onQ123

Member
Looks legit, except the GPU clock is so low for 7nm: only a 210MHz increase over the Pro at 16nm. I thought it might be around 1.5GHz.


The lower clock only tells me that there's more going on than just pushing the clock rate up, so I'm thinking it's going to be above 64 CUs, or the CUs will have more ALUs.

Stacking, maybe, but the only real information we have about Navi is that it's 7nm, going to be scalable, and will use new memory.


 

Shin

Banned
The GPU clock is so low for 7nm: only a 210MHz increase over the Pro at 16nm. I thought it might be around 1.5GHz.
It is low because the Xbox One X is running at 1172MHz; plus, you wouldn't even be at 10TF at 1121MHz if the max CU count is 64.
64 CUs × 64 × 2 × 1121MHz = ?
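For reference, that's the standard GCN peak-FLOPS napkin math: CUs × 64 shaders per CU × 2 FLOPs per clock (FMA) × clock. A minimal sketch of the arithmetic being pointed at here:

```python
# Peak FP32 throughput for a GCN-style GPU (64 shaders/CU, 2 FLOPs/clock).
def gcn_tflops(cus, clock_mhz, shaders_per_cu=64, flops_per_clock=2):
    return cus * shaders_per_cu * flops_per_clock * clock_mhz / 1e6

print(gcn_tflops(64, 1121))  # ~9.18 TF -- short of 10TF, as noted
print(gcn_tflops(64, 1172))  # ~9.60 TF even at the One X's clock
```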
 

LordOfChaos

Member
One cool thing to consider with massive bandwidth but a not-so-extreme GPU is we could have insanely good particles, shadows and texture filtering. The PS2 was the weakest overall, but with the highest bandwidth it did some cool stuff.


Same reason, in my theory, that the bizarro-world-hardware Wii U was able to have fur on Donkey Kong but the Switch couldn't. One thing it did have going for it was the eDRAM right on the GPU die (though that also meant giving up half the die for everything else it could have had), whereas the Switch has a better GPU but just the shared system bandwidth.

PS2 was a fur monster at the time.
 
Same reason, in my theory, that the bizarro-world-hardware Wii U was able to have fur on Donkey Kong but the Switch couldn't. One thing it did have going for it was the eDRAM right on the GPU die (though that also meant giving up half the die for everything else it could have had), whereas the Switch has a better GPU but meh bandwidth.
I believe this is just an artistic difference, if you're referring to Smash Bros. After all, Tropical Freeze is 1080p on Switch with fur (though they messed up the shading and lighting in the conversion process) and it's 720p on Wii U.

The numbers might seem to be in the Wii U's favor because of the eDRAM, but remember its main pool is half of the Switch's 25.6GB/s bandwidth, and it's a bigger difference than that suggests because Maxwell's delta color compression helps save bandwidth.

Essentially, the Switch has better effective bandwidth than the Wii U.
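A back-of-envelope version of that comparison, assuming the commonly cited pool bandwidths; the compression factor is purely an assumption, since Maxwell's delta color compression gains vary by workload:

```python
# Main-pool bandwidth only (the Wii U's eDRAM is deliberately excluded).
wii_u_ddr3_gb_s = 12.8     # Wii U 2GB DDR3 pool, commonly cited figure
switch_lpddr4_gb_s = 25.6  # Switch LPDDR4 shared pool, docked
assumed_dcc_gain = 1.25    # hypothetical average benefit from Maxwell DCC

effective = switch_lpddr4_gb_s * assumed_dcc_gain
print(f"Switch effective: ~{effective:.1f} GB/s "
      f"({effective / wii_u_ddr3_gb_s:.1f}x the Wii U main pool)")
```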
 

onQ123

Member
Same reason, in my theory, that the bizarro-world-hardware Wii U was able to have fur on Donkey Kong but the Switch couldn't. One thing it did have going for it was the eDRAM right on the GPU die (though that also meant giving up half the die for everything else it could have had), whereas the Switch has a better GPU but just the shared system bandwidth.

PS2 was a fur monster at the time.


This is why I'm hoping that Navi is using WoW (wafer-on-wafer) stacking, with the top layer being the CUs, the layer below it embedded memory, & so on.

[image: TSV die-stacking diagram]
 

CrustyBritches

Gold Member
Whatever happened to that one guy who said he was going to drop insider info on the PS5 after he got approval from the mods?

Do ninjas have him tied up in the cellar still?

*moved from Radeon VII thread I accidentally posted in*
 

Armorian

Banned
Fixed
The CPU is better than expected at 3.2GHz; it's the GPU that's shit. Not even 2x the X; that's unacceptable.

How? The X1X was literally the best MS could do in 2017 with AMD parts for $500. Navi in the PS5/X4 won't be better than Vega 56 due to heat/power issues (for a console-sized box) and GCN CU limits (yeah, I'm pretty sure it's another GCN), so 10-12TF is where these consoles will land.
 

SonGoku

Member
How? The X1X was literally the best MS could do in 2017 with AMD parts for $500. Navi in the PS5/X4 won't be better than Vega 56 due to heat/power issues (for a console-sized box) and GCN CU limits (yeah, I'm pretty sure it's another GCN), so 10-12TF is where these consoles will land.
I'm expecting the best Sony can do, not some half-assed job.
If the GPU is smaller than the OG PS4's, then Sony cheaped out.
Navi is post-GCN btw, and if by any chance it wasn't, Sony would be the biggest fools on earth to do a repeat of the RSX.

I'm fine with 12TF btw; it's 10TF I find unacceptable.
 

ethomaz

Banned
Fixed
The CPU is better than expected at 3.2GHz; it's the GPU that's shit. Not even 2x the X; that's unacceptable.
Perhaps you are looking at it the wrong way...

10-12TFs is about 6x more powerful than the PS4... that is a similar jump to the one from PS3 to PS4.

That is a generational leap... people need to stop trying to compare with mid-gen refreshes that came out two years ago.
 

SonGoku

Member
Perhaps you are looking at it the wrong way...

10-12TFs is about 6x more powerful than the PS4... that is a similar jump to the one from PS3 to PS4.

That is a generational leap... people need to stop trying to compare with mid-gen refreshes that came out two years ago.
Previous gen leaps have been 8x to 10x, and I'm fine with 12TF btw.
If the GPU is smaller than the OG PS4's, Sony clearly cheaped out.
 

DeepEnigma

Gold Member
Perhaps you are looking at it the wrong way...

10-12TFs is about 6x more powerful than the PS4... that is a similar jump to the one from PS3 to PS4.

That is a generational leap... people need to stop trying to compare with mid-gen refreshes that came out two years ago.

Mid-gen refreshes that are also not fully taken advantage of, since they don't get their own games built from the ground up for them.
 
Previous gen leaps have been 8x to 10x, and I'm fine with 12TF btw.
If the GPU is smaller than the OG PS4's, Sony clearly cheaped out.

I wouldn't necessarily call that cheaping out; perhaps more like a shift in priorities if we get super bandwidth, good CPUs and SSDs vs. moar GPU lol. But we'll have to wait and see about the RAM and SSDs.

The PS4 was a 9x leap in GPU over the 360 (so even more over the PS3's GPU), but good lord, that's nothing compared to PS1 to PS2, or SNES to N64, N64 to GC, etc. But yes, 12TF would be MORE than fine; hell, 10 is fine if we get other nice parts.
 

SonGoku

Member
I wouldn't necessarily call that cheaping out; perhaps more like a shift in priorities if we get super bandwidth, good CPUs and SSDs vs. moar GPU lol. But we'll have to wait and see about the RAM and SSDs.

The PS4 was a 9x leap in GPU over the 360 (so even more over the PS3's GPU), but good lord, that's nothing compared to PS1 to PS2, or SNES to N64, N64 to GC, etc.
lol, an SSD for faster loading times would be a waste of resources; the end user can buy his own SSD and swap it in. You get bigger gains from a beefier GPU. A Zen core at 7nm is roughly the same size as a Jaguar core at 28nm, so no excuses there.

24GB is not even that much to ask; it's just that RAM companies are all together in a scummy scheme to inflate memory prices.
 

onQ123

Member
Fixed
The CPU is better than expected at 3.2GHz; it's the GPU that's shit. Not even 2x the X; that's unacceptable.

You do realize that the Xbox One X just came out in 2017, and it's a $500 premium product based on older hardware & matched up with the same old CPU, right?


New hardware will do more with them flops than the older GPU, especially with a better CPU.

You haven't even seen what can be done with the Xbox One X GPU, because it's tied down to a weak CPU & Xbox One software.
 
lol, an SSD for faster loading times would be a waste of resources; the end user can buy his own SSD and swap it in. You get bigger gains from a beefier GPU. A Zen core at 7nm is roughly the same size as a Jaguar core at 28nm, so no excuses there.

24GB is not even that much to ask; it's just that RAM companies are all together in a scummy scheme to inflate memory prices.

Not sure if you noticed, but load times can already be in excess of 1 minute on current consoles. The issue will grow worse next gen if we get terrible laptop HDDs again.

It's not just a matter of load times either. If developers know they have an SSD to stream data from, that can affect world traversal speed and general game complexity for the better. Just imagine if 7th gen had had no HDD to stream assets from!
 

SonGoku

Member
It's not just a matter of load times either. If developers know they have an SSD to stream data from, that can affect world traversal speed and general game complexity for the better. Just imagine if 7th gen had had no HDD to stream assets from!
More memory will influence that much more than an SSD.
And as I said, as long as the end user has the option to replace the HDD with an SSD, it's a non-issue; a stock SSD is not gonna happen anyways.
You do realize that the Xbox One X just came out in 2017, and it's a $500 premium product based on older hardware & matched up with the same old CPU, right?
I expect Sony to do their best like MS did their best with the X.
As long as the PS5 GPU is not smaller than the OG PS4's, I'm satisfied.
New hardware will do more with them flops than the older GPU, especially with a better CPU.
You haven't even seen what can be done with the Xbox One X GPU, because it's tied down to a weak CPU & Xbox One software.
True! But it's no excuse to cheap out on HW.
 
More memory will influence that much more than an SSD.
And as I said, as long as the end user has the option to replace the HDD with an SSD, it's a non-issue; a stock SSD is not gonna happen anyways.
Not necessarily. It does no good to have tons of memory when you can't fill it fast enough. 32GB of RAM would be an incredible waste if paired with a laptop HDD.

An optional SSD upgrade is nice, but it can't help game design like an SSD in every unit would.
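Rough numbers behind the "fill it fast enough" point; a sketch with ballpark sustained read speeds (the drive figures are assumptions, not measurements):

```python
# Time to fill a RAM pool from storage at a given sustained read speed.
ram_gb = 32
drives = {"laptop HDD": 100, "SATA SSD": 500}  # MB/s, ballpark sustained reads

for name, mb_s in drives.items():
    seconds = ram_gb * 1024 / mb_s
    print(f"{name}: ~{seconds:.0f}s to fill {ram_gb}GB")
# laptop HDD: ~328s; SATA SSD: ~66s
```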
 

SonGoku

Member
Not necessarily. It does no good to have tons of memory when you can't fill it fast enough. 32GB of RAM would be an incredible waste if paired with a laptop HDD.
An optional SSD upgrade is nice, but it can't help game design like an SSD in every unit would.
Why though? Load times would just be longer, and you can always upgrade the HDD if you are bothered by it, but you can't upgrade the memory or the GPU.

I'm not really seeing it being worth the sacrifice in specs to include an SSD (unless they get a good bulk-deal-type contract that negates the cost difference); if the laptop HDD is limiting, they can use a full-sized desktop HDD instead.
 
Why though? Load times would just be longer, and you can always upgrade the HDD if you are bothered by it, but you can't upgrade the memory or the GPU.

I'm not really seeing it being worth the sacrifice in specs to include an SSD (unless they get a good bulk-deal-type contract that negates the cost difference); if the laptop HDD is limiting, they can use a full-sized desktop HDD instead.

I don't see how load times much worse than PS1 games' are acceptable these days. Well, they're not to me.


Let's say you have Crackdown 4 or something with destruction designed for 32GB. You can either have extremely bad pop-in, or wait 3 minutes every time you boot up and long times every time you die. Not to mention any stutters from data streaming while speeding through the world. With an SSD, you won't have to choose between insane load times and bigger, more complex worlds, because you can stream data so much faster.

Desktop HDDs are an option, yes; with optional SSD upgrades that would be a fine compromise. Somehow I don't see it happening, though, since it's bound to make the consoles bigger, and the OG Xbox One got flak for that. Plus, SSDs are rapidly becoming the standard for PC gaming and prices are dropping all the time. I would argue for the SSD over a 3.5-inch drive.
 

onQ123

Member
True! But it's no excuse to cheap out on HW.

Huh? If it's true that Sony helped design Navi, how can you say they cheaped out on hardware just by looking at the floating-point number?

If Sony is smart they will have an FPGA on the PS5 for machine learning.
 

Pimpbaa

Member
lol, an SSD for faster loading times would be a waste of resources

No, but an SSD being standard would mean developers could make a game that streams more data than any mechanical drive could handle, meaning more detailed visuals (particularly in regard to textures).
 
Perhaps you are looking at it the wrong way...

10-12TFs is about 6x more powerful than the PS4... that is a similar jump to the one from PS3 to PS4.

That is a generational leap... people need to stop trying to compare with mid-gen refreshes that came out two years ago.

PS3 to PS4 was 230 GFLOPS to 1.84 TFLOPS, ~8x. PS3 to PS4 also didn't have to contend with a 4x pixel-rendering standard increase.

PS2 to PS3 was ~6 GFLOPS to 230. Xbox to 360 was about 20 to 240. A >10x increase, and that gen also came with a large resolution jump.

A substantial amount of the power increase is going to be lost to rendering just PS4-level assets at 4K. This is why Cerny himself noted PS4 games would need ~8 TFLOPS to be rendered at 4K as the standard.

Forget the resolution increase; we need far larger jumps in power to see the same jump in perceived rendering, and yet here people are advocating for and defending the prospect of getting one of the smallest leaps in console history (Nintendo excluded). I don't get it.

*edit - Maybe I wasn't fair. I understand that, due to technological limitations, it may be the best we can get in the next year. 12 TFLOPS I think would be a decent jump (~10x XB1), although still nothing like we've gotten before, all things considered. What I'm worried about is more the 10 TFLOPS-or-less end of the spectrum.
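Putting the multipliers from this exchange side by side; a sketch using the (approximate) GPU-FLOPS figures quoted in the thread:

```python
# Generational GPU-FLOPS multipliers, using the thread's own numbers (TFLOPS).
gens = [
    ("PS2 -> PS3",           0.006, 0.230),
    ("Xbox -> 360",          0.020, 0.240),
    ("PS3 -> PS4",           0.230, 1.840),
    ("PS4 -> 10TF next-gen", 1.840, 10.0),
    ("PS4 -> 12TF next-gen", 1.840, 12.0),
]
for name, old, new in gens:
    print(f"{name}: {new / old:.1f}x")
# PS2->PS3 38.3x, Xbox->360 12.0x, PS3->PS4 8.0x, next-gen 5.4x / 6.5x
```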
 

onQ123

Member
PS3 to PS4 was 230 GFLOPS to 1.84 TFLOPS, ~8x. PS3 to PS4 also didn't have to contend with a 4x pixel-rendering standard increase.

PS2 to PS3 was ~6 GFLOPS to 230. Xbox to 360 was about 20 to 240. A >10x increase, and that gen also came with a large resolution jump.

A substantial amount of the power increase is going to be lost to rendering just PS4-level assets at 4K. This is why Cerny himself noted PS4 games would need ~8 TFLOPS to be rendered at 4K as the standard.

Forget the resolution increase; we need far larger jumps in power to see the same jump in perceived rendering, and yet here people are advocating for and defending the prospect of getting one of the smallest leaps in console history (Nintendo excluded). I don't get it.

*edit - Maybe I wasn't fair. I understand that, due to technological limitations, it may be the best we can get in the next year. 12 TFLOPS I think would be a decent jump (~10x XB1), although still nothing like we've gotten before, all things considered. What I'm worried about is more the 10 TFLOPS-or-less end of the spectrum.



Maybe you forgot that a lot of the games in the PS3/Xbox 360 generation were actually below 720p?
 
Now you're talking about something completely different. He said 30 watts is too much for a console, which it would be on the current Zen 2. By Zen 2 he means Zen+.

Damn forums lol. Anyways, yes, they'll either shrink Zen+ to 7nm or just use the full-fat 7nm Zen 2 cores. That's if Zen 2 isn't just a clock-speed boost and actually has IPC gains over Zen+. Either way, we're getting 7nm, yes.


DemonCleaner said:
3.2 is actually pretty high if you ask me.

It's not on Zen 2.

That should need around 30W at full load.

I see you figured it out already, but here's the full quote again, just to be sure. I meant Zen 2 (3rd gen) and not Zen+. In fact, I've been saying it will be 7nm with Zen 2 and Navi (or derivatives of both) since 2017 or so, when most here thought that would be far too optimistic. Considering economics, it was the only viable option for the suggested release window even back then. So no surprises there.

In the quote above I said that 3.2GHz isn't too much on a 7nm process, and that this boost speed would fit nicely in a 30W power target for the CPU part. I wouldn't be surprised if clocks end up even a bit higher.
 

SonGoku

Member
I don't see how load times much worse than PS1 games' are acceptable these days. Well, they're not to me.
The thing is, you can always buy an SSD if load times bother you, but you can't upgrade the GPU. Why sacrifice GPU for something you can upgrade on your own?
Huh? If it's true that Sony helped design Navi, how can you say they cheaped out on hardware just by looking at the floating-point number?
I'm not going by the TF number alone but by chip size; if it's smaller than the OG PS4's, it means they cheaped out.
No, but an SSD being standard would mean developers could make a game that streams more data than any mechanical drive could handle, meaning more detailed visuals (particularly in regard to textures).
Is there a use case for this that couldn't be replicated on an HDD, just with longer load times?
With plenty of RAM, streaming assets becomes less of a bottleneck (as speedy as SSDs are, they are no match for RAM, let alone VRAM); streaming was used heavily on the PS3/360 gen.
4 PS4-GPU-sized modules
What!? Are you suggesting they duct-tape 4 PS4s together? It's a terrible idea to hold back the PS5 with an old-ass architecture just for easy backwards compatibility; if that's what it takes, I'd rather not have it at all.

And GDDR5? lol, good one.
 

onQ123

Member
The thing is, you can always buy an SSD if load times bother you, but you can't upgrade the GPU. Why sacrifice GPU for something you can upgrade on your own?

I'm not going by the TF number alone but by chip size; if it's smaller than the OG PS4's, it means they cheaped out.

Is there a use case for this that couldn't be replicated on an HDD, just with longer load times?
With plenty of RAM, streaming assets becomes less of a bottleneck (as speedy as SSDs are, they are no match for RAM, let alone VRAM); streaming was used heavily on the PS3/360 gen.

What!? Are you suggesting they duct-tape 4 PS4s together? It's a terrible idea to hold back the PS5 with an old-ass architecture just for easy backwards compatibility; if that's what it takes, I'd rather not have it at all.

And GDDR5? lol, good one.



That wouldn't be the same architecture just because it's broken down into 4 smaller GPU modules; this is a smarter way to get power without making large dies.


Now, do you think you're smarter than Intel, AMD & Nvidia?



https://research.nvidia.com/publication/2017-06_MCM-GPU:-Multi-Chip-Module-GPUs

Historically, improvements in GPU-based high performance computing have been tightly coupled to transistor scaling. As Moore's law slows down, and the number of transistors per die no longer grows at historical rates, the performance curve of single monolithic GPUs will ultimately plateau. However, the need for higher performing GPUs continues to exist in many domains. To address this need, in this paper we demonstrate that package-level integration of multiple GPU modules to build larger logical GPUs can enable continuous performance scaling beyond Moore's law. Specifically, we propose partitioning GPUs into easily manufacturable basic GPU Modules (GPMs), and integrating them on package using high bandwidth and power efficient signaling technologies. We lay out the details and evaluate the feasibility of a basic Multi-Chip-Module GPU (MCM-GPU) design. We then propose three architectural optimizations that significantly improve GPM data locality and minimize the sensitivity on inter-GPM bandwidth. Our evaluation shows that the optimized MCM-GPU achieves 22.8% speedup and 5x inter-GPM bandwidth reduction when compared to the basic MCM-GPU architecture. Most importantly, the optimized MCM-GPU design is 45.5% faster than the largest implementable monolithic GPU, and performs within 10% of a hypothetical (and unbuildable) monolithic GPU. Lastly we show that our optimized MCM-GPU is 26.8% faster than an equally equipped Multi-GPU system with the same total number of SMs and DRAM bandwidth.

[image: MCM-GPU package diagram from the NVIDIA paper]


 

llien

Member
From the rumours there were around Raja's departure and even after (that Sony was strongly collaborating on Navi with AMD), and from the semi-custom work we have seen AMD do for MS and particularly Sony from the HW point of view, I am not sure why you would say that... unless you see those consoles taking features from a generation post-Navi.
If "normal Navi" has fab issues, so does its custom version. That was my point.


Previous gen leaps have been 8x to 10x, and I'm fine with 12TF btw.
Previous leaps were in times when doubling and tripling GPU perf each gen was nothing extraordinary. We are past that mark.
 

Panajev2001a

GAF's Pleasant Genius
If "normal Navi" has fab issues, so has its custom version. That was my point.

If it has major issues that are not related to the desktop-targeted chip (so worrying about PCB integration for the dedicated card, GDDR6 vs HBM2/3, much higher clock rate and thus TDP, etc...) but to something wrong with the core architecture, then perhaps; but there are lots of variables (including time and paid prioritisation) at play. Rumours about a high-end-targeted desktop card and a console semi-custom SoC sharing some of that tech might relate to different things.
 

ethomaz

Banned
PS3 to PS4 was 230 GFLOPS to 1.84 TFLOPS, ~8x. PS3 to PS4 also didn't have to contend with a 4x pixel-rendering standard increase.

PS2 to PS3 was ~6 GFLOPS to 230. Xbox to 360 was about 20 to 240. A >10x increase, and that gen also came with a large resolution jump.

A substantial amount of the power increase is going to be lost to rendering just PS4-level assets at 4K. This is why Cerny himself noted PS4 games would need ~8 TFLOPS to be rendered at 4K as the standard.

Forget the resolution increase; we need far larger jumps in power to see the same jump in perceived rendering, and yet here people are advocating for and defending the prospect of getting one of the smallest leaps in console history (Nintendo excluded). I don't get it.

*edit - Maybe I wasn't fair. I understand that, due to technological limitations, it may be the best we can get in the next year. 12 TFLOPS I think would be a decent jump (~10x XB1), although still nothing like we've gotten before, all things considered. What I'm worried about is more the 10 TFLOPS-or-less end of the spectrum.
PS3 to PS4 was around 6x.

Cell helped with rendering while Jaguar does not.

10-12TFs will be about the PS3 to PS4 jump, with 10 being a bit smaller and 12 a bit bigger.
 

Redneckerz

Those long posts don't cover that red neck boy
4 PS4-GPU-sized modules with 6 or 9 stacks of GDDR5: PS4 BC no problem, PS4 & PS5 game streaming from the same servers no problem, continuing to sell cheap PS4s no problem, a PS5 Pro a few years later no problem.
He says with such certainty.

What is this even supposed to imply? That MCM is a thing with AMD? Because congrats, that's been known for 10 years at least. You are literally supplying a picture of a Radeon Embedded E8860 here.

You already have console-like digital signage players from Giada and Ibase with these GPUs in them.
 

ethomaz

Banned
He says with such certainty.


What is this even supposed to imply? That MCM is a thing with AMD? Because congrats, that's been known for 10 years at least. You are literally supplying a picture of a Radeon Embedded E8860 here.
Let it go :D
He even made a thread about that lol
Those are the same theories MxM had for the XB1 that turned out to be bullshit.
 

Redneckerz

Those long posts don't cover that red neck boy
Let it go :D
He even made a thread about that lol
Those are the same theories MxM had for the XB1 that turned out to be bullshit.
I mean, it would be interesting to compare the current consoles with those signage players so we can establish some more details, but just randomly link-dumping without explaining what it is you want to point out is just... you know?
 

ethomaz

Banned
I mean, it would be interesting to compare the current consoles with those signage players so we can establish some more details, but just randomly link-dumping without explaining what it is you want to point out is just... you know?
My commentary was self-explanatory... there is nothing more to add.

These are exactly the same bullshit theories MxM has spread from 2012 through today... even similar pictures, AMD docs, etc.
 

Redneckerz

Those long posts don't cover that red neck boy
My commentary was self-explanatory... there is nothing more to add.

These are exactly the same bullshit theories MxM has spread from 2012 through today... even similar pictures, AMD docs, etc.
Well yeah, I was referring to 123 :)

Oh god, not MxM... Sega has also had a guy like that for years, with the Dreamcast 2 having ray tracing and all.
 

onQ123

Member
https://www.techpowerup.com/gpu-specs/radeon-next.c3339


AMD Radeon Next

Graphics Processor: Navi 10
Cores: 4096
TMUs: 256
ROPs: 64
Memory Size: 8 GB
Memory Type: GDDR6
Bus Width: 256-bit

This graphics card is not released yet. Data on this page may change in the future.
The Radeon Next will be a graphics card by AMD. Built on the 7 nm process, and based on the Navi 10 graphics processor in its Navi 10 XT variant, the card supports DirectX 12.0. It features 4096 shading units, 256 texture mapping units and 64 ROPs. AMD has placed 8,192 MB of GDDR6 memory on the card, connected using a 256-bit memory interface. The GPU is operating at a frequency of 1000 MHz, which can be boosted up to 1000 MHz; memory is running at 1750 MHz.
Being a dual-slot card, its power draw is not exactly known. This device has no display connectivity, as it is not designed to have monitors connected to it. The Radeon Next is connected to the rest of the system using a PCI-Express 3.0 x16 interface.

Graphics Processor
GPU Name: Navi 10
GPU Variant: Navi 10 XT
Architecture: GCN 6.0
Foundry: TSMC
Process Size: 7 nm
Transistors: unknown
Die Size: unknown

Clock Speeds
GPU Clock: 1000 MHz
Boost Clock: 1000 MHz
Memory Clock: 1750 MHz (14000 MHz effective)

Memory
Memory Size: 8 GB
Memory Type: GDDR6
Memory Bus: 256-bit
Bandwidth: 448.0 GB/s

Theoretical Performance
Pixel Rate: 64.00 GPixel/s
Texture Rate: 256.0 GTexel/s
FP16 (half) performance: 16,384 GFLOPS (2:1)
FP32 (float) performance: 8,192 GFLOPS
FP64 (double) performance: 512.0 GFLOPS (1:16)
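The sheet's "Theoretical Performance" rows all follow from its own listed unit counts and clocks; here's a quick sanity check using the standard peak-rate formulas (nothing assumed beyond the sheet's figures):

```python
# Recompute the TechPowerUp theoretical numbers from the listed inputs.
cores, tmus, rops = 4096, 256, 64
clock_ghz = 1.0                      # 1000 MHz base/boost
bus_bits, gbps_per_pin = 256, 14.0   # 256-bit @ 14000 MHz effective

print(cores * 2 * clock_ghz)         # 8192.0  GFLOPS FP32 (FMA = 2 FLOPs/clock)
print(cores * 4 * clock_ghz)         # 16384.0 GFLOPS FP16 at the listed 2:1 rate
print(rops * clock_ghz)              # 64.0    GPixel/s
print(tmus * clock_ghz)              # 256.0   GTexel/s
print(bus_bits / 8 * gbps_per_pin)   # 448.0   GB/s
```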
 
64 CUs at 1GHz would make a lot of sense for the PS5. AMD architectures' power consumption doesn't scale well at high clocks, but at low clocks they're at *least* as good as Nvidia per watt.

448GB/s would be disappointing though. Might be what we get...
 

ethomaz

Banned
64 CUs at 1GHz would make a lot of sense for the PS5. AMD architectures' power consumption doesn't scale well at high clocks, but at low clocks they're at *least* as good as Nvidia per watt.

448GB/s would be disappointing though. Might be what we get...
64 CUs at 1GHz is 8.2TFs for a GCN-like chip.

I don't believe the clock will be lower than 1.3GHz at 7nm.
 