
Windows Central on US December NPD: PS4 sold 1,568,000 (238K Pro), XB1 sold 1,511,000

Proelite

Member
You can increase the memory bandwidth however you want; that won't fix the GPU power bottleneck for 4K.

And about Vega and clocks... it seems reasonable for a console, in APU form, to use a lower clock than the dedicated desktop version. The GPUs the XB1/PS4 are based on run at 1100-1200MHz on desktop with way more SP units, while the console versions had to stay at 800-900MHz with fewer SP units.

Even if they use Vega, it will be a capped Vega with lower clocks.

My view: Scorpio will have Vega at 50 CUs, 3200 SPs, ~950MHz in a console APU form factor... a much higher clock won't be viable for console APUs.

Your clocks are too conservative. Vega 10 is clocked at 1500MHz (64 CUs, 12.5 teraflops).

I don't think Scorpio will have more than 48 CUs. I'm thinking it will have 40 CUs at 1.2GHz.

It's a balance between cooling difficulty and SoC size. Considering TSMC 16nm is not as dense, Scorpio will probably have fewer CUs at higher clocks.
 
Why do you expect the Xbox Scorpio SoC on TSMC 16nm FF+ and not GloFo 14nm LPP?
 

ethomaz

Banned
Why do you expect the Xbox Scorpio SoC on TSMC 16nm FF+ and not GloFo 14nm LPP?
TSMC 16nm is better than GF 14nm, and that is why desktop Vega will use it... it allows higher clocks and needs less voltage... it is overall a better choice, even for AMD's high-end chips.
 

BeforeU

Oft hope is born when all is forlorn.
Man, if Sony could only sell this many Pros, can you imagine MS this holiday? Scorpio will not only compete with the Xbox One and PS4 but also with the cheaper Pro. It is going to be a hard sell if they don't message it right. And show some stellar games.
 

Proelite

Member
TSMC 16nm is better than GF 14nm, and that is why desktop Vega will use it... it allows higher clocks and needs less voltage... it is overall a better choice, even for AMD's high-end chips.


The Pro's 911MHz is 81.34% of the RX 480's core clock of 1120MHz.

Vega 10 at 64 CUs will need to run at 1542MHz to reach the reported 12.5 teraflops.

81.34% of 1542MHz is 1254MHz.

I expect Scorpio to have at least 40 CUs at 1254MHz. ~6.4 teraflops.
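As a sanity check, this scaling estimate can be reproduced with the standard GCN throughput formula (CUs × 64 shaders × 2 FLOPs per cycle × clock); the 1542MHz Vega clock and the 40 CU count are rumored figures, not confirmed specs:

```python
# GCN throughput: CUs x 64 shaders x 2 FLOPs/cycle x clock
def tflops(cus, clock_mhz):
    return cus * 64 * 2 * clock_mhz * 1e6 / 1e12

ratio = 911 / 1120              # Pro clock as a fraction of the RX 480 base clock
scorpio_clock = 1542 * ratio    # scale the rumored Vega 10 clock the same way

print(f"{ratio:.2%}")                        # 81.34%
print(round(scorpio_clock))                  # 1254
print(round(tflops(40, scorpio_clock), 1))   # 6.4
```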
 

sirronoh

Member
Man, if Sony could only sell this many Pros, can you imagine MS this holiday? Scorpio will not only compete with the Xbox One and PS4 but also with the cheaper Pro. It is going to be a hard sell
if they don't message it right. And show some stellar games.
no matter what

FTFY. Your second sentence is 100% accurate. The Scorpio's biggest competition is the Xbox One S, which it will almost certainly not outsell anytime soon. Meanwhile, the PS4 will have a huge catalog of new 2017 console or platform exclusives by fall + cheaper prices + marketing deals for
  • Red Dead Redemption
  • Star Wars
  • Call of Duty

In order to do well in November and December, the best chance Scorpio has is exclusive non-VR games + $399 in the United States and UK. Any other scenario or market will show bad-to-dire performance relative to the expectations some people on GAF have for how well this thing will sell.
 

ethomaz

Banned
The Pro's 911MHz is 81.34% of the RX 480's core clock of 1120MHz.

Vega 10 at 64 CUs will need to run at 1542MHz to reach the reported 12.5 teraflops.

81.34% of 1542MHz is 1254MHz.

I expect Scorpio to have at least 40 CUs at 1254MHz. ~6.4 teraflops.
The RX 480 runs at 1266MHz to deliver 5.8TFs.

~72%, not 81%.

1542MHz delivers 12.6TFs... for 12.5TFs you need 1525MHz... for 12TFs you need 1465MHz... AMD's slide shows 12TFs.

72% of 1525MHz is 1100MHz... 72% of 1465MHz is 1050MHz.

PS: 1120MHz is the base clock of the RX 480... Vega 10's base clock is 1200MHz... if you compare base clocks, then Vega has a small boost over the RX 480.
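These corrected figures come from inverting the same GCN throughput formula to solve for clock; a quick sketch (the TFLOP targets here are rumored, not confirmed):

```python
# Clock (MHz) a GCN GPU needs to hit a TFLOP target:
# clock = target / (CUs x 64 shaders x 2 FLOPs/cycle)
def clock_mhz_for(tflops_target, cus=64):
    return tflops_target * 1e12 / (cus * 64 * 2) / 1e6

print(round(clock_mhz_for(12.5)))   # 1526 (truncates to ~1525)
print(round(clock_mhz_for(12.0)))   # 1465
print(f"{911 / 1266:.0%}")          # Pro vs. RX 480 boost clock: 72%
```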
 

Proelite

Member
The RX 480 runs at 1266MHz to deliver 5.8TFs.

~72%, not 81%.

1542MHz delivers 12.6TFs... for 12.5TFs you need 1525MHz... for 12TFs you need 1465MHz... AMD's slide shows 12TFs.

72% of 1525MHz is 1100MHz... 72% of 1465MHz is 1050MHz.

PS: 1120MHz is the base clock of the RX 480... Vega 10's base clock is 1200MHz... if you compare base clocks, then Vega has a small boost over the RX 480.


I've read 1550MHz for the base clock, from here:

http://www.christianpost.com/news/b...tes-amd-vega-10-vs-nvidia-gtx-1080-ti-173092/

http://wccftech.com/amd-vega-10-20-slides-double-precision-performance-1500-mhz-vega-10-x2-2017/
 

Proelite

Member
That boost clock... and it gives 12.6TFs instead of the rumored 12.5TFs... AMD's slide shows 12TFs.

Core clock is 1200MHz.

PS: Of course, these are all rumors.

48 CUs at 976MHz is exactly 6 teraflops.

What is Nvidia doing that gets their core clocks so high?
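The 48 CU figure does check out against the same GCN formula (64 shaders per CU, 2 FLOPs per cycle):

```python
# 48 CUs x 64 shaders x 2 FLOPs/cycle x 976 MHz, expressed in teraflops
tf = 48 * 64 * 2 * 976e6 / 1e12
print(round(tf, 2))  # 6.0
```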
 

ethomaz

Banned
48 CUs at 976MHz is exactly 6 teraflops.

What is Nvidia doing that gets their core clocks so high?
Different architectures have different clocks... Nvidia works with fewer SP units at a higher clock, while AMD works with more SP units at a lower clock.

That recalls Intel's NetBurst with its super-high clocks losing to AMD's Barton at half the clock lol
 

Proelite

Member
Different architectures have different clocks... Nvidia works with fewer SP units at a higher clock, while AMD works with more SP units at a lower clock.

That recalls Intel's NetBurst with its super-high clocks losing to AMD's Barton at half the clock lol

That doesn't explain the RX 480 having a similar shader count, more power draw, and a lower clock than the 1070.

It might be a TSMC 16nm vs. GF 14nm thing.
 

ethomaz

Banned
That doesn't explain the RX 480 having a similar shader count, more power draw, and a lower clock than the 1070.

It might be a TSMC 16nm vs. GF 14nm thing.
To be fair, I was surprised by the RX 480's power draw in certain circumstances... it is not stable and can reach 180W (absurd for a card that is supposed to be sub-130W).

From the places I've read (mostly technical sites), the point is that TSMC 16nm FF+ FinFET is better than even Intel's 14nm FinFET... GF 14nm FinFET is actually the worst of the three, and that is why AMD is no longer using it for high-end chips.

This article tries to explain some of the points: https://www.semiwiki.com/forum/content/4585-motley-fooled-finfets.html

So in the end, how do they stack up? If you use Intel's per-drawn-micron metric, TSMC 16FF+ has ~10% more drive current than Intel 14nm (all other things being equal, including leakage and voltage). If you use another metric like current/fin, or current/Weff, TSMC has an even stronger advantage.

That is why during the TSMC symposium last month Dr. BJ Woo emphatically stated TSMC had "the best" transistor in the 14-16nm technologies. It will be interesting to watch how this unfolds as 10nm process details are disclosed. In my 30 years in the semiconductor industry I don't remember a more exciting time, absolutely.

TSMC, at least in the 14/16nm war, is actually the top dog.
 

leeh

Member
It is so easy to understand...

- 63 GB/s to a 1.3TF GPU is not enough = MS worked around that bottleneck by adding a big, fast cache (eSRAM)... a 1.3TF GPU is not enough to reach 1080p at mid-to-high graphics quality.
So that's why games like GoW4 and FH3 run at ridiculously high graphical settings in comparison to similarly specced PCs? For example, Gears 4 and its Ultra shadows.

- 320GB/s is more than enough for a 6TF GPU... there is no RAM bottleneck here... no cache needed... a 6TF GPU is not enough for 4K at mid-to-high graphics quality.
Based on what? The only evidence you have is that the games said to be running at 4K (first party) already run at comparatively high graphical settings versus what a similarly specced PC would do, even in crazy-maths land. The GPU alone has 4x the power, so at a high level it should be possible to run games like Gears and FH at 4K. Then you have the ability to upgrade the assets for 4K based on the extra RAM and bandwidth.

If MS had been smarter when they designed the XB1, they could have chosen better RAM bandwidth (GDDR5?) and skipped eSRAM... making a chip with more GPU power, more RAM bandwidth, and real mid-to-high 1080p power... ohhhh, Sony did that.
Sony got very lucky with GDDR; this is a known thing. There was the fire at the production factory where DDR was mostly made, which knocked back the entire DDR supply. Sony nearly got lumped with 4GB of RAM compared to 8GB on the X1. That would have made things a bit more interesting. They were lucky to even have the GDDR supply to be able to put 8GB in there.

About your Titan comment... every console has price and power-draw limits... a chip bigger than ~320mm² is really something Sony and MS want to avoid at all costs, to the point that both put what they could inside the chip at launch, and Sony put what it could into the Pro chip right now.
Well, that's funny, because both the X1 and PS4 have chips over 320mm². The X1 had a stupidly oversized cooling solution and an absurdly large die... all wasted on eSRAM.

"Your GPU bottlenecks are often caused by not enough available throughput in RAM, especially at 4K"... what you described there is a RAM bottleneck and has nothing to do with a GPU bottleneck... a GPU bottleneck is when the GPU holds the render back from reaching a higher level... in this case the 6TF GPU holds the render back from reaching 4K at mid-to-high quality, while there is enough RAM bandwidth for the GPU and CPU (320GB/s... it is indeed overkill for 4K).
You're forgetting the exponential loss of RAM bandwidth when the CPU is doing operations, something people forget about with the PS4. A RAM bottleneck has nothing to do with a GPU bottleneck? OK then...

Whether I hate Scorpio or not won't change the actual delivery of its specs.

I'm surprised your posts are full of crazy claims without any technical basis.
Should be saying that to you, pal.
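For what it's worth, the bandwidth-per-teraflop ratios being argued over can be computed directly from the figures quoted in this exchange (deliberately leaving the XB1's eSRAM out, which is the disputed point):

```python
# GB/s of main-RAM bandwidth per teraflop, using the figures as quoted
def gbps_per_tflop(bandwidth_gbps, tflops):
    return bandwidth_gbps / tflops

print(f"XB1 DDR3: {gbps_per_tflop(63, 1.3):.0f} GB/s per TF")            # ~48
print(f"Scorpio (rumored): {gbps_per_tflop(320, 6.0):.0f} GB/s per TF")  # ~53
```

By this crude ratio, the rumored Scorpio is only slightly better provisioned per teraflop than the XB1's DDR3 pool alone, which is why whether the eSRAM counts matters to the argument.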
 

Space_nut

Member
So that's why games like GoW4 and FH3 run at ridiculously high graphical settings in comparison to similarly specced PCs? For example, Gears 4 and its Ultra shadows.


Based on what? The only evidence you have is that the games said to be running at 4K (first party) already run at comparatively high graphical settings versus what a similarly specced PC would do, even in crazy-maths land. The GPU alone has 4x the power, so at a high level it should be possible to run games like Gears and FH at 4K. Then you have the ability to upgrade the assets for 4K based on the extra RAM and bandwidth.


Sony got very lucky with GDDR; this is a known thing. There was the fire at the production factory where DDR was mostly made, which knocked back the entire DDR supply. Sony nearly got lumped with 4GB of RAM compared to 8GB on the X1. That would have made things a bit more interesting. They were lucky to even have the GDDR supply to be able to put 8GB in there.


Well, that's funny, because both the X1 and PS4 have chips over 320mm². The X1 had a stupidly oversized cooling solution and an absurdly large die... all wasted on eSRAM.


You're forgetting the exponential loss of RAM bandwidth when the CPU is doing operations, something people forget about with the PS4. A RAM bottleneck has nothing to do with a GPU bottleneck? OK then...


Should be saying that to you, pal.

FH3 and Gears 4 are wizardry!! Running at 1080p to boot, too. I really love the real-time global illumination in FH3, just astounding.
 