
AMD VEGA: Leaked TimeSpy DX12 benchmark?

Locuza

Member
Let's assume that you're right - how exactly this changes what I'm saying? AMD can use all of this on MI25 and not use it on RX Vega - the end result would be the same as if they'd just used binned chips, with RX Vega running slower and hotter than what seems to be hinted by MI25 specs. So what exactly are you arguing with?
I guess with nothing besides the general uncertainty of not knowing the exact power consumption and clock speeds in practice, or of never having tested any Vega product at all.
 
Are we pretending we really care about power draw on a gaming GPU again? I bought a 650W Antec EarthWatts for $74 in 2010. It's had a GTX 460, a 7870 XT and now a GTX 1070.

I bought a Rosewill 430W in 2012; it has a 750 Ti on it. I used the Cooler Master power supply calculator just now and my 430W PSU can support a Ryzen 1600, SSD, HDD, Blu-ray (why? HTPC) and an AMD 480 with 70W to spare.

I agree that NVIDIA is ahead here and there are several factors that play into it. It's a notable difference, but pretending it matters to a consumer is a stretch unless we are talking smaller form factors, and even then we're talking a < 50W difference in most cases.
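For anyone curious, those PSU calculators basically just sum per-component estimates against the PSU rating. A rough sketch of that math in Python; every wattage below is my own ballpark figure, not pulled from the calculator:

```python
# Rough PSU headroom estimate in the spirit of those online PSU calculators.
# Every wattage below is my own ballpark figure, not an official number.
build = {
    "Ryzen 5 1600": 65,            # stock TDP
    "RX 480": 150,                 # reference board power
    "SSD": 5,
    "HDD": 10,
    "Blu-ray drive": 25,
    "Motherboard, RAM, fans": 50,
}

psu_watts = 430
total = sum(build.values())
print(f"Estimated load: {total} W, headroom: {psu_watts - total} W")
# -> Estimated load: 305 W, headroom: 125 W
# (the Cooler Master calculator presumably uses more conservative per-part
#  estimates, which is how it lands on ~70 W to spare)
```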
 
I care about power draw because more power = more heat, and the last thing people want is to be baking in their rooms. Vega needs to be priced competitively and be somewhat power efficient for the kind of performance it's offering. I don't care if it's not 1080 Ti-tier: if it's faster than the 1080, they can undercut it somehow and it'll age better than Nvidia's offerings, I'm all for it... But Navi had better be high-end compared to Volta.
 

Locuza

Member
[...]
I agree that NVIDIA is ahead here and there are several factors that play into it. It's a notable difference, but pretending it matters to a consumer is a stretch unless we are talking smaller form factors, and even then we're talking a < 50W difference in most cases.
Consumers might not care much, but from a technical perspective it's incredibly important to be as power efficient as possible.

Nobody is getting meaningful design wins in the SFF market with 30% or worse perf/watt.
Also, enterprise clients, who must pay not only for the products but also for the cost of ownership, are hard to convince without really pushing down the prices.

AMD has no other choice than to massively improve here; the current situation is really bad.
 
Consumers might not care much, but from a technical perspective it's incredibly important to be as power efficient as possible.

Nobody is getting meaningful design wins in the SFF market with 30% or worse perf/watt.
Also, enterprise clients, who must pay not only for the products but also for the cost of ownership, are hard to convince without really pushing down the prices.

AMD has no other choice than to massively improve here; the current situation is really bad.

I'm not talking about enterprise though...
 

dr_rus

Member
Are we pretending we really care about power draw on a gaming GPU again? I bought a 650W Antec EarthWatts for $74 in 2010. It's had a GTX 460, a 7870 XT and now a GTX 1070.

I bought a Rosewill 430W in 2012; it has a 750 Ti on it. I used the Cooler Master power supply calculator just now and my 430W PSU can support a Ryzen 1600, SSD, HDD, Blu-ray (why? HTPC) and an AMD 480 with 70W to spare.

I agree that NVIDIA is ahead here and there are several factors that play into it. It's a notable difference, but pretending it matters to a consumer is a stretch unless we are talking smaller form factors, and even then we're talking a < 50W difference in most cases.

Again, less power draw = less heat dissipation = less noise or simpler coolers, leading to cheaper cards. And that's just generally speaking. There's also the fact that a lot of GPU applications require them to fit into a specific power consumption envelope - such applications span from mobile phones / tablets / handhelds through gaming laptops to ultra high end GPUs, where better power efficiency usually directly transforms into better performance.

Even if you take a typical mainstream desktop, better power efficiency can mean a difference between an ITX and ATX build for example. So even if you don't care about it the market certainly does.
 
Are we pretending we really care about power draw on a gaming GPU again? I bought a 650W Antec EarthWatts for $74 in 2010. It's had a GTX 460, a 7870 XT and now a GTX 1070.

I bought a Rosewill 430W in 2012; it has a 750 Ti on it. I used the Cooler Master power supply calculator just now and my 430W PSU can support a Ryzen 1600, SSD, HDD, Blu-ray (why? HTPC) and an AMD 480 with 70W to spare.

I agree that NVIDIA is ahead here and there are several factors that play into it. It's a notable difference, but pretending it matters to a consumer is a stretch unless we are talking smaller form factors, and even then we're talking a < 50W difference in most cases.

Nvidia is destroying AMD with its Pascal lineup for laptops, which actually sell more than desktop PCs.
 

tuxfool

Banned
Nvidia is destroying AMD with its Pascal lineup for laptops, which actually sell more than desktop PCs.

I doubt it. Intel is the one that owns the laptop market. I very much doubt that gaming laptops, or even laptops with a dGPU, sell in massive quantities compared to desktop PCs.
 

tuxfool

Banned
Intel is not competing with Nvidia, you know?

That's not my point. Your assertion is that Nvidia sells more laptop GPUs than desktop GPUs.

I don't think that's correct. Laptops do sell more overall than desktops, but the vast majority will only come equipped with the iGPU of Intel processors.
 
These leaked benches are very worrying and more evidence Vega will be a compute beast but not very spectacular for gaming (around 1080 performance in DX12/Vulkan at final clocks).

Now, I keep telling you: unfortunately for us, AMD has survival in its sights and so is prioritizing markets where it can make more money, like deep learning or other fields working in AI. This of course will backfire, as no one in those markets will buy it over Nvidia anyway.
 

ISee

Member

will update OP, thanks.

edit:

A new entry in the CompuBench database was spotted. The entry bears the name gfx 9000 (Polaris is gfx 8) and has 64 Compute Units. This is also the first AMD GPU reaching 1600 MHz. We don't know if this is a consumer or a professional Vega variant.
The card still has 64 Compute Units. If AMD kept the same principles for the GFX9 architecture (64 cores per CU), then we should expect 4096 shaders, making this a 13.1 TFLOPS card. Lately, AMD has needed significantly more raw power to get to Nvidia performance levels.
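The 13.1 TFLOPS figure is just shaders x 2 FLOPs per clock x clock speed. A quick sketch, assuming GCN's usual 64 stream processors per CU carries over to GFX9:

```python
# Napkin math behind the 13.1 TFLOPS figure, assuming GCN's usual layout of
# 64 stream processors per compute unit and 2 FLOPs per shader per clock (FMA).
compute_units = 64
shaders = compute_units * 64            # 4096
clock_ghz = 1.6                         # the 1600 MHz CompuBench entry
tflops = shaders * 2 * clock_ghz / 1000
print(f"{shaders} shaders @ {clock_ghz} GHz -> {tflops:.1f} TFLOPS FP32")
# -> 4096 shaders @ 1.6 GHz -> 13.1 TFLOPS FP32
```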
 

ISee

Member
1600 MHz is 33% faster than 1200 MHz, so if we take the 5721 graphics score from the TimeSpy leak and multiply it by 1.33 we get 7609. Which, I guess, is closer to a 1080 than a 1080 Ti.

So far, AMD needs more raw power to get to Nvidia performance levels. An RX Vega running at 1600 MHz 'only' reaching GTX 1080 performance wouldn't be too surprising. The GPU used in the TimeSpy benchmark would have a floating-point performance of 9.8 TFLOPS. For comparison, a 1070 has ~5.9 TFLOPS (no overclock at all).

Overall the leaks don't paint Vega's performance in a bad light. It just arrives about 9 months too late to the party. But we'll see how the market is going to adapt. Price will be very important here.
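That projection is simple linear clock scaling, which is best-case; here's the same extrapolation written out under that assumption:

```python
# Naive linear clock scaling of the leaked TimeSpy graphics score.
# Assumes performance scales 1:1 with core clock, which real GPUs never quite
# manage, so treat the result as an optimistic upper bound.
leaked_score = 5721      # TimeSpy graphics score from the leak
leaked_clock = 1200      # MHz
target_clock = 1600      # MHz (the CompuBench entry)
projected = leaked_score * target_clock / leaked_clock
print(f"Projected graphics score at {target_clock} MHz: {projected:.0f}")
# -> ~7628, roughly stock GTX 1080 territory
```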
 

V_Arnold

Member
So far, AMD needs more raw power to get to Nvidia performance levels. An RX Vega running at 1600 MHz 'only' reaching GTX 1080 performance wouldn't be too surprising. The GPU used in the TimeSpy benchmark would have a floating-point performance of 9.8 TFLOPS. For comparison, a 1070 has ~5.9 TFLOPS (no overclock at all).

Overall the leaks don't paint Vega's performance in a bad light. It just arrives about 9 months too late to the party. But we'll see how the market is going to adapt. Price will be very important here.

You can't be serious. So this is the expectation now? Barely reaching a vanilla 1080? :D Mate, it is not the 1600 MHz clock alone that will reach GTX 1080 perf, but the rest of the architecture.
One of the biggest downsides of the RX 4** architecture was not being able to sustain high clocks (which is invalidated by 1600 MHz like... instantly) and not having fast enough memory (in the RX 470's case, this was crucial) - HBM2 solves this as well.

Vega only being as good as a 1080 WILL be a huge surprise, but I am not surprised to read this speculation on GAF.
 

ISee

Member
You can't be serious.

?

The RX 580 is a 6.1 TFLOPS card.
The GTX 1060 is a 3.8 TFLOPS card (~5.1 TFLOPS overclocked).

Both cards seem to perform very close to each other in benchmarks.
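Those spec-sheet numbers fall out of the same shaders x 2 x clock formula; a small sketch, where the clocks I'm plugging in (RX 580 boost, GTX 1060 base, and a ~2 GHz overclock) are my guesses at which figures are being quoted:

```python
# The TFLOPS figures above come from shaders * 2 FLOPs * clock. The clocks
# plugged in here (RX 580 boost, GTX 1060 base, ~2 GHz overclock) are my
# assumptions about which numbers the spec-sheet figures are based on.
def fp32_tflops(shaders, clock_mhz):
    return shaders * 2 * clock_mhz / 1e6

print(round(fp32_tflops(2304, 1340), 1))   # RX 580 @ boost       -> 6.2
print(round(fp32_tflops(1280, 1506), 1))   # GTX 1060 @ base      -> 3.9
print(round(fp32_tflops(1280, 2000), 1))   # GTX 1060 overclocked -> 5.1
```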


You can't be serious. So this is the expectation now? Barely reaching a vanilla 1080? :D Mate, it is not the 1600 MHz clock alone that will reach GTX 1080 perf, but the rest of the architecture.
One of the biggest downsides of the RX 4** architecture was not being able to sustain high clocks (which is invalidated by 1600 MHz like... instantly) and not having fast enough memory (in the RX 470's case, this was crucial) - HBM2 solves this as well.

Vega only being as good as a 1080 WILL be a huge surprise, but I am not surprised to read this speculation on GAF.

But are there significant architectural gains? The first benchmark only suggests that performance is around 1070 level at 1200 MHz (9.8 TFLOPS). The rest of the leaks suggest that 4096 shaders are a constant on Vega (leaks 2 and 3), making Vega a 13.1 TFLOPS GPU running at 1600 MHz. If we take the first TimeSpy benchmark as a starting point, 'just' 1080 performance isn't that far-fetched.
 
These leaked benches are very worrying and more evidence Vega will be a compute beast but not very spectacular for gaming (around 1080 performance in DX12/Vulkan at final clocks).

Now, I keep telling you: unfortunately for us, AMD has survival in its sights and so is prioritizing markets where it can make more money, like deep learning or other fields working in AI. This of course will backfire, as no one in those markets will buy it over Nvidia anyway.

More to the point, Intel is getting geared up to go all-in on deep learning and AI. AMD has no hope after Intel jumps in to compete with Nvidia.

Nvidia has built up a massive advantage in software for AI and deep learning, and Intel is absolutely not a software company and never has been. What people don't understand about Nvidia's success is how critical software has been. Nvidia wrote an entire driver for Maxwell and Pascal which simulates the hardware scheduler in AMD's GCN and offers a good 90-95% effective performance in software compared to the hardware solution. Doing this has let them dramatically cut die size and offer an overwhelming power and efficiency advantage, which has translated to a massive clock speed advantage.

The real reason why Nvidia dominates AMD is not just hardware. I would say that hardware is maybe 20% of Nvidia's advantage. The other 80% is Nvidia's immense software engineering talent and depth, built up over decades of writing drivers for video cards. This advantage will continue to make AMD's challenge to surpass Nvidia almost impossible because they have to do in hardware what Nvidia does in software, and it will pose an enormous challenge for Intel in competing with Nvidia for deep learning and AI.
 

V_Arnold

Member
?

The RX 580 is a 6.1 TFLOPS card.
The GTX 1060 is a 3.8 TFLOPS card (~5.1 TFLOPS overclocked).

Both cards seem to perform very close to each other in benchmarks.

I expanded on my post. I know that the RX 580 packs more raw power (the 480 does too, in fact so does the 470), but the architecture does not translate that power difference into a gaming difference. In my experience, clock speed and memory speed/bandwidth are the main issues with these cards. Both are being solved (not to mention additional optimizations).
 

ISee

Member
It's hard to imagine Vega not being faster than a 1080 if it can sustain 1.6 GHz



FYI, even the Founders Edition 1060 runs well above 3.8 TFLOPS in virtually every review

I know, but I'm using the official AMD / Nvidia specs.

I expanded on my post. I know that the RX 580 packs more raw power (the 480 does too, in fact so does the 470), but the architecture does not translate that power difference into a gaming difference. In my experience, clock speed and memory speed/bandwidth are the main issues with these cards. Both are being solved (not to mention additional optimizations).

I also expanded my post :)
 

llien

Member
Correct me if I'm wrong, but Fury X perf/watt was very close to the 980 Ti's.
Well, of course it had water cooling and reduced temperatures going for it, but still.


PS
It's hard to imagine Vega not trading blows with the 1080 Ti/Titan if it runs a sustained 1600 MHz.
 

Renekton

Member
The real reason why Nvidia dominates AMD is not just hardware. I would say that hardware is maybe 20% of Nvidia's advantage. The other 80% is Nvidia's immense software engineering talent and depth, built up over decades of writing drivers for video cards. This advantage will continue to make AMD's challenge to surpass Nvidia almost impossible because they have to do in hardware what Nvidia does in software, and it will pose an enormous challenge for Intel in competing with Nvidia for deep learning and AI.
If software engineering chops are so key, then Google's TPU may be ahead.
 
Are we pretending we really care about power draw on a gaming GPU again? I bought a 650W Antec EarthWatts for $74 in 2010. It's had a GTX 460, a 7870 XT and now a GTX 1070.

I bought a Rosewill 430W in 2012; it has a 750 Ti on it. I used the Cooler Master power supply calculator just now and my 430W PSU can support a Ryzen 1600, SSD, HDD, Blu-ray (why? HTPC) and an AMD 480 with 70W to spare.

I agree that NVIDIA is ahead here and there are several factors that play into it. It's a notable difference, but pretending it matters to a consumer is a stretch unless we are talking smaller form factors, and even then we're talking a < 50W difference in most cases.

You should absolutely care about power draw on a gaming GPU, because how much power the top-end card draws has certain limits that you can't exceed. 2x 8-pin PCI-e power connectors can supply a total of 150 x 2 = 300W, plus 75W supplied by the PCI-e slot itself, for 375W total. So your nominal power draw cannot ever exceed 375W unless you are insane and want to use 3x PCI-e power connectors on your video card. I don't think AMD are insane, so I'm not expecting Vega to sport 3x PCI-e power connectors. We can assume Vega will also use 2x PCI-e power connectors.
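For reference, that 375W ceiling is just the PCIe power budget added up, assuming each 8-pin connector is held to its 150W rating and the slot to 75W:

```python
# The 375 W ceiling comes straight from the PCIe power budget: each 8-pin
# connector is rated for 150 W and the slot itself for 75 W.
EIGHT_PIN_W = 150
SLOT_W = 75
connectors = 2
board_power_limit = connectors * EIGHT_PIN_W + SLOT_W
print(board_power_limit)   # 375
```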

The 1080 Ti actually has a power limit of exactly, you guessed it, 375W when set to the highest power limit allowed by Nvidia. So Nvidia requires 375W to supply a 1080 Ti going full-tilt with max core voltage and power limit. Now we look at our friend the RX 580.

http://www.tomshardware.com/reviews/amd-radeon-rx-580-review,5020-6.html

The RX 580, a card which competes with the GTX 1060, draws almost as much power as the 1080 when it's at max overclock. So it is more or less offering half the performance of the 1080 while pulling about as much juice.

Now we know that Vega is going to be TSMC instead of GloFo, but unless Vega is some incredible miracle of efficiency compared to Polaris, there is absolutely no way that it can come anywhere close to offering the 1080 Ti's performance while also staying under 375W. If Vega offers roughly 1080 performance while pulling 1080 Ti power, it will be a massive improvement over Polaris.

So yeah, we're not pretending power draw matters on a gaming GPU. Because it's actually the most important thing right now. The amount of juice these things require to blast those 4K graphics straight into our eyeballs is already hitting the limits of what you can reasonably expect a computer to be able to supply.

If software engineering chops are so key, then Google's TPU may be ahead.

You are absolutely right. Google is probably actually the most secretly dangerous company in AI right now. However Google isn't a hardware company at all, so someone needs to supply Google with the hardware they run their software on. In that sense, Nvidia and Google are potentially competitors but Nvidia supplies a piece of the puzzle (GPUs) that Google does not.
 

llien

Member
...which has translated to a massive clock speed advantage...

This is like claiming Prescott had a massive clock advantage over Athlon 64.

The architectures are different; one goes for more IPC, the other for higher clocks.
The 580's die size is about 10% bigger than the 1060's, so it's not that much of a difference.


AMD doesn't have the resources to spend on optimizing its DX11 driver as much as nVidia does, it is true. But with DX12 there isn't nearly as much that a GPU manufacturer can do in drivers, to my knowledge, so this advantage will diminish. (And as far as stability goes, my perception is that AMD drivers have been more stable over the last 1-2 years.)

Not sure what you meant by a "software solution" to the async stuff. AMD cards get more performance from it, while nVidia's don't, last time I checked.


Now we know that Vega is going to be TSMC instead of GloFo, but unless Vega is some incredible miracle of efficiency compared to Polaris
Vega has more in common with Fiji than with Polaris.
And Fiji wasn't that far from Maxwell (on water :))
 
You should absolutely care about power draw on a gaming GPU, because how much power the top-end card draws has certain limits that you can't exceed. 2x 8-pin PCI-e power connectors can supply a total of 150 x 2 = 300W, plus 75W supplied by the PCI-e slot itself, for 375W total. So your nominal power draw cannot ever exceed 375W unless you are insane and want to use 3x PCI-e power connectors on your video card. I don't think AMD are insane, so I'm not expecting Vega to sport 3x PCI-e power connectors. We can assume Vega will also use 2x PCI-e power connectors.

The 1080 Ti actually has a power limit of exactly, you guessed it, 375W when set to the highest power limit allowed by Nvidia. So Nvidia requires 375W to supply a 1080 Ti going full-tilt with max core voltage and power limit. Now we look at our friend the RX 580.

http://www.tomshardware.com/reviews/amd-radeon-rx-580-review,5020-6.html

The RX 580, a card which competes with the GTX 1060, draws almost as much power as the 1080 when it's at max overclock. So it is more or less offering half the performance of the 1080 while pulling about as much juice.

Now we know that Vega is going to be TSMC instead of GloFo, but unless Vega is some incredible miracle of efficiency compared to Polaris, there is absolutely no way that it can come anywhere close to offering the 1080 Ti's performance while also staying under 375W. If Vega offers roughly 1080 performance while pulling 1080 Ti power, it will be a massive improvement over Polaris.

So yeah, we're not pretending power draw matters on a gaming GPU. Because it's actually the most important thing right now. The amount of juice these things require to blast those 4K graphics straight into our eyeballs is already hitting the limits of what you can reasonably expect a computer to be able to supply.

^ You're forgetting that HBM is significantly more power efficient than GDDR5. Look at the Fury X vs 390X power numbers: the Fury X performed 20-30% better while drawing about 100 watts less.

You can't really use Polaris as a way to extrapolate Vega power draw when there are pretty big differences between both of them.
 

llien

Member
Fury X's price was very high but it barely beat a 980

Fury X at release beat a stock 980 Ti only at 4K; four months later, also at 1440p.
The 980 was no match, and was beaten even by the Nano.




So, we're supposed to know something more today...

https://twitter.com/GFXChipTweeter/status/864347035033522178

The 31st can't come soon enough... I hate this chip for all the frustrated waiting... =/
 
D

Deleted member 465307

Unconfirmed Member
Vega has more in common with Fiji than with Polaris.
And Fiji wasn't that far from Maxwell (on water :))

I admit I'm not very educated on this subject, but this seems counterintuitive to me. Why does Vega have more in common with an older architecture than with their most recent one?
 
^ You're forgetting that HBM is significantly more power efficient than GDDR5. Look at Fury X vs 390X power numbers: Fury X performed 20-30% better while drawing 100 less watts.

You can't really use Polaris as a way to extrapolate Vega power draw when there are pretty big differences between both of them.

We'll see. For AMD's sake, let's hope you are right!
 

FingerBang

Member
Fury X at release beat a stock 980 Ti only at 4K; four months later, also at 1440p.
The 980 was no match, and was beaten even by the Nano.



The 31st can't come soon enough... I hate this chip for all the frustrated waiting... =/

Don't tell me. I've wanted to build a new rig for months now. I want to go with AMD because of FreeSync, but I've been waiting for these new cards for a year now. I really hope it's shockingly powerful and much cheaper than Nvidia, otherwise I'll have to change my plans. The fact we might see Volta by the end of the year offering 1080 Ti performance for half the price bothers me.

I'm prepared to be disappointed.
 

llien

Member
The fact we might see Volta by the end of the year offering 1080 Ti performance for half the price bothers me.

I beg your pardon, but where does the expectation of 1080 Ti performance for half the price come from? We didn't see even remotely as big a jump with the Maxwell to Pascal transition. Heck, with FE pricing, was there even a perf/$ jump at all?
 

ISee

Member
https://www.techpowerup.com/reviews/AMD/R9_Fury_X/31.html

At no point was it ever only barely faster than a 980.

Vega does not have more in common with Fiji than Polaris. Where are people getting this stuff from?

Fury X performance fluctuated a lot during its release window. It was sometimes even faster than a 980 Ti and sometimes only on par with a 980. Today it is mostly as fast as a 980 Ti. That said, it is the more powerful card, at least on paper. A Fury X running at 1130 MHz has 9.2 TFLOPS and a 980 Ti running at 1350 MHz has 7.7 TFLOPS. That's a difference of about 20%, and still both cards are mostly on the same level of performance in gaming.

Will this gap change with Vega? Of course it could, but most people seem to assume that Fiji and Vega have a lot in common in their architecture, just like Maxwell and Pascal (which perform very close to each other if you overclock/downclock them to roughly the same TFLOPS levels).
For now, assuming that a 13.1 TFLOPS Vega (@1600 MHz) won't be able to reach the 13.6 TFLOPS of a 1080 Ti (@1900 MHz) isn't a far-fetched assumption, in a thread about leaks and speculation.
That said, maybe Vega will be able to reach even higher clock speeds? 1750 MHz could start to bring in 1080 Ti performance, imo.
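For the curious, here's the paper math behind those numbers (the shader counts are the published specs; the clocks are the ones assumed above):

```python
# Paper-TFLOPS comparison (shaders * 2 FLOPs * clock). Shader counts are the
# published specs; the clocks are the ones assumed in the post above.
def fp32_tflops(shaders, clock_mhz):
    return shaders * 2 * clock_mhz / 1e6

fury_x    = fp32_tflops(4096, 1130)   # ~9.3
gtx980ti  = fp32_tflops(2816, 1350)   # ~7.6
vega      = fp32_tflops(4096, 1600)   # ~13.1
gtx1080ti = fp32_tflops(3584, 1900)   # ~13.6

print(f"Fury X leads the 980 Ti by {fury_x / gtx980ti - 1:.0%} on paper")
print(f"Vega @ 1600 MHz vs 1080 Ti @ 1900 MHz: {vega:.1f} vs {gtx1080ti:.1f} TFLOPS")
```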

So is the 1080 Ti better or not?

No way to say till we have benchmarks from review sites. We're mostly just speculating here.
 
Fury X performance fluctuated a lot during its release window. It was sometimes even faster than a 980 Ti and sometimes only on par with a 980. Today it is mostly as fast as a 980 Ti. That said, it is the more powerful card, at least on paper. A Fury X running at 1130 MHz has 9.2 TFLOPS and a 980 Ti running at 1350 MHz has 7.7 TFLOPS. That's a difference of about 20%, and still both cards are mostly on the same level of performance in gaming.

Will this gap change with Vega? Of course it could, but most people seem to assume that Fiji and Vega have a lot in common in their architecture, just like Maxwell and Pascal (which perform very close to each other if you overclock/downclock them to roughly the same TFLOPS levels).
For now, assuming that a 13.1 TFLOPS Vega (@1600 MHz) won't be able to reach the 13.6 TFLOPS of a 1080 Ti (@1900 MHz) isn't a far-fetched assumption, in a thread about leaks and speculation.
That said, maybe Vega will be able to reach even higher clock speeds? 1750 MHz could start to bring in 1080 Ti performance, imo.



No way to say till we have benchmarks from review sites. We're mostly just speculating here.

There is a much bigger architectural improvement going from Fiji to Vega than from Maxwell to Pascal. It's not even close. Fury X fluctuated initially due to weaker drivers and poorly optimized games; as devs left behind all the legacy PS360 code and started using lots of compute, Fury X performance just continued to improve. Devs also learned how to get better geometry performance through all the console work.
 

FingerBang

Member
I beg your pardon, but where does the expectation of 1080 Ti performance for half the price come from? We didn't see even remotely as big a jump with the Maxwell to Pascal transition. Heck, with FE pricing, was there even a perf/$ jump at all?

Well, a 1070 is as powerful as a 980 Ti/Titan and cost around half the price (well, slightly more, since they raised the price this time around) when it came out. It seemed like a pretty big jump to me.
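Rough perf/$ math behind that, using launch prices from memory (so treat them as approximate) and assuming the 1070 roughly matches a stock 980 Ti:

```python
# Rough perf-per-dollar comparison at launch. MSRPs are from memory, and the
# "1070 roughly equals a 980 Ti" performance assumption is the one made above.
msrp_980ti = 649
msrp_1070 = 379        # Founders Edition launched at $449
fe_1070 = 449

print(f"perf/$ gain vs 980 Ti at MSRP: {msrp_980ti / msrp_1070:.2f}x")
print(f"perf/$ gain vs 980 Ti at FE pricing: {msrp_980ti / fe_1070:.2f}x")
# -> roughly 1.7x at MSRP, ~1.45x against the $449 FE card
```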
 