
AMD VEGA: Leaked TimeSpy DX12 benchmark?

So you do take power draw into consideration when building a system, even if you end up with more capacity than you need.


Those are fine for sub-100W GPUs... the RX 480 can reach 200W in some cases without any overclocking... with anything below a 500W PSU you will need to check the power draw of the other components.


The same can be said about these "enthusiasts" saying perf/watt is irrelevant in modern hardware lol

plz explain to me the benefit a consumer receives from the lower power draw of a 1060 v a 480/580
 

ethomaz

Banned
Of course power draw is important. But when you are talking about 150W versus 220W I couldn't care less. If Vega performs slightly faster than a 1080 and draws twice the amount of power it's a problem. If it has that performance and needs 25% more juice I couldn't give a monkeys.
Let's clear things up here...

My comment: since Maxwell, nVidia is probably one or two generations ahead of AMD in perf/watt (that was in reply to somebody else about nVidia hardware being ahead of AMD hardware).

Somebody: downplayed this advantage just for the sake of downplaying it.

I replied: that in today's world perf/watt is getting more and more important for every hardware component, and that companies are focusing heavily on this point. I gave the example of the RX 480, which reaches peaks of 200W while the GTX 1060 peaks at 120-130W, and that matters for how the components work: the 6-pin PCI-E RX 480 throttles trying to hold its 1266MHz clock because it exceeds the 150W available from the slot plus the 6-pin connector and has to downclock internally... so it doesn't reach its actual peak performance because of the power draw (which is not the case for the 8+6-pin PCI-E RX 480 boards).

Somebody: downplayed it because he has never used power draw as a reference when building a system.

I replied: so how do you build a system without looking at the PSU specs, which have to account for the overall power draw of the system?

That is what is happening here... if some guys want to downplay "perf/watt" just because it is bad for their brand of choice... fine, but just don't try to say it is irrelevant when it is one of the key features of current and future hardware components... something that every company in the world is focusing on.
 

Xyphie

Member
Graphics cards are made towards board power targets of ~75/150/225/300W. Perf/W effectively sets the upper limit of what you're going to get in terms of performance.
 
Let's clear things up here...

My comment: since Maxwell, nVidia is probably one or two generations ahead of AMD in perf/watt (that was in reply to somebody else about nVidia hardware being ahead of AMD hardware).

Somebody: downplayed this advantage just for the sake of downplaying it.

I replied: that in today's world perf/watt is getting more and more important for every hardware component, and that companies are focusing heavily on this point. I gave the example of the RX 480, which reaches peaks of 200W while the GTX 1060 peaks at 120-130W, and that matters for how the components work: the 6-pin PCI-E RX 480 throttles trying to hold its 1266MHz clock because it exceeds the 150W available from the slot plus the 6-pin connector and has to downclock internally... so it doesn't reach its actual peak performance because of the power draw (which is not the case for the 8+6-pin PCI-E RX 480 boards).

Somebody: downplayed it because he has never used power draw as a reference when building a system.

I replied: so how do you build a system without looking at the PSU specs, which have to account for the overall power draw of the system?

That is what is happening here... if some guys want to downplay "perf/watt" just because it is bad for their brand of choice... fine, but just don't try to say it is irrelevant when it is one of the key features of current and future hardware components... something that every company in the world is focusing on.

amd has supported mixed graphics and compute in the same SM since tahiti. nvidia is now 5 years behind amd technology
 
plz explain to me the benefit a consumer receives from the lower power draw of a 1060 v a 480/580

Several months back I saw the absolute pittance someone would save running a 1060 heavily vs a 480 over the course of a year (it was something like $5), so from a cost POV, I shake my head at any argument about the lower power draw being important.
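For anyone who wants to sanity-check that figure, here is a rough back-of-the-envelope calculation; the ~40W average gap, the 3 hours of gaming a day and the $0.12/kWh price are all illustrative assumptions, not measurements:

Code:
# Rough annual electricity-cost difference between two cards.
# All three inputs are illustrative assumptions, not measured values.
extra_watts = 40        # assumed average extra draw of the 480 vs the 1060 while gaming
hours_per_day = 3       # assumed daily gaming time
price_per_kwh = 0.12    # assumed electricity price in $/kWh

extra_kwh_per_year = extra_watts * hours_per_day * 365 / 1000
print(round(extra_kwh_per_year * price_per_kwh, 2))  # ~5.26 dollars per year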
 

ethomaz

Banned
plz explain to me the benefit a consumer receives from the lower power draw of a 1060 v a 480/580
There are a lot of benefits, and most of them follow from each other...

+ More performance at the same power target
+ Cheaper PSU
+ Less power consumption (daaaaa)
+ Less heat
+ Cheaper cooling system for the GPU and the build
+ Cheaper GPU parts
+ Fewer issues with clock throttling

amd has supported mixed graphics and compute in the same SM since tahiti. nvidia is now 5 years behind amd technology
Do you mean the different implementations of Async Compute?

AMD: an SM/CU can work on both graphics and compute at the same time (e.g. half the SPs can work on compute and half on graphics).
nVidia: an SM can work on either graphics or compute at a time (e.g. all the SPs work on either graphics or compute).
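A toy way to picture the difference in granularity being described here; this is just an illustrative sketch of the scheduling idea, not a simulation of either vendor's actual hardware:

Code:
from math import ceil

# Toy model: "units" of shader work scheduled onto "lanes" of execution resources.
def steps_shared(graphics_units, compute_units, lanes):
    # Lanes can be split between graphics and compute in the same step
    # (the per-CU mixing described above for AMD).
    return ceil((graphics_units + compute_units) / lanes)

def steps_exclusive(graphics_units, compute_units, lanes):
    # In any given step, all lanes do only one kind of work
    # (the per-SM either/or behaviour described above for nVidia).
    return ceil(graphics_units / lanes) + ceil(compute_units / lanes)

# Example: 6 graphics units + 2 compute units on 4 lanes.
print(steps_shared(6, 2, 4))     # 2 steps: idle graphics lanes pick up compute work
print(steps_exclusive(6, 2, 4))  # 3 steps: the half-empty graphics step can't take compute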
 
Several months back I saw the absolute pittance someone would save running a 1060 heavily vs a 480 over the course of a year (it was something like $5), so from a cost POV, I shake my head at any argument about the lower power draw being important.

the few bucks youd save over 3 years doesnt even make up for the price premium youd pay for a 1060. its just comedy

There are a lot of benefits, and most of them follow from each other...

+ More performance at the same power target (net performance is what matters)
+ Cheaper PSU (LUL)
+ Less power consumption (daaaaa) (doesn't matter)
+ Less heat (both gpus run at similar temps with a similar cooler at similar noise levels)
+ Cheaper cooling system for the GPU and the build (what???)
+ Cheaper GPU parts (what???)
+ Fewer issues with clock throttling (the 480 has no problems with throttling. still waiting on that evidence)


Do you mean the different implementations of Async Compute?

AMD: an SM/CU can work on both graphics and compute at the same time (e.g. half the SPs can work on compute and half on graphics).
nVidia: an SM can work on either graphics or compute at a time (e.g. all the SPs work on either graphics or compute).

the former is clearly better than the latter
 
Let's clear things up here...

My comment: since Maxwell, nVidia is probably one or two generations ahead of AMD in perf/watt (that was in reply to somebody else about nVidia hardware being ahead of AMD hardware).

[...]

Depends on what performance metrics/figures you are using to judge this.

Abstract Polaris' performance without heavy driver overhead and software optimisation and you get something like the performance in DOOM Vulkan - vastly faster than Pascal but drawing a little more juice if we look at 480 vs 1060.

But if you look at performance in old DX11 titles, the 480 loses badly in perf/watt. But again, check all of the big new PC releases this year, 480 vs 1060 in DX12 and the 480 is ahead slightly in performance (I'm not posting all the benches again I had this debate with Mr Rus) but uses more juice. So behind in perf/watt but not anywhere near close to a gen behind.
 

ethomaz

Banned
Depends on what performance metrics/figures you are using to judge this.

Abstract Polaris' performance without heavy driver overhead and software optimisation and you get something like the performance in DOOM Vulkan - vastly faster than Pascal but drawing a little more juice if we look at 480 vs 1060.

But if you look at performance in old DX11 titles, the 480 loses badly in perf/watt. But again, check all of the big new PC releases this year, 480 vs 1060 in DX12 and the 480 is ahead slightly in performance (I'm not posting all the benches again I had this debate with Mr Rus) but uses more juice. So behind in perf/watt but not anywhere near close to a gen behind.
Depends on what?

The GTX 1080 draws the same power as (or even less than) the RX 480.

The discrepancy in perf/watt is pretty clear and big... nVidia's architecture is some generations ahead of AMD on this point... to be fair, since Maxwell.
 

ethomaz

Banned
and if nvidia sold 1080 at the same price as 480 it would matter
What are you talking about? What does price have to do with being "generations ahead in perf/watt"??? You have a high-end product from company A that performs way better and consumes the same power as (or even less than) a mid-range product from company B.
 
Depends on what?

The GTX 1080 draws the same power as (or even less than) the RX 480.

The discrepancy in perf/watt is pretty clear and big.

That's totally unfair as Polaris is a low-mid range arch most efficient at low clocks. Pascal is the opposite - it excels at higher clocks and the 1080 doesn't currently have a competitor from AMD. Wait until AMD delivers a high-end card and then you can compare perf/watt and find that AMD isn't 'one or two' generations behind.
 
What are you talking about?

He's saying that price and performance are what matters to the consumer.

If you want to spend $250 on a card, you buy the fastest you can afford at that bracket (1060 or 480/580 depending on game(s) and who you ask).

If you want to spend $500, you buy a 1080.

Performance/watt doesn't enter the analysis for almost all consumers. Yes Nvidia is more power efficient. No, most purchasers don't actually care about that.
 

ethomaz

Banned
That's totally unfair as Polaris is a low-mid range arch most efficient at low clocks. Pascal is the opposite - it excels at higher clocks and the 1080 doesn't currently have a competitor from AMD. Wait until AMD delivers a high-end card and then you can compare perf/watt and find that AMD isn't 'one or two' generations behind.
Of course it is not unfair. It is Polaris vs Pascal...

- At the same performance, Pascal will draw way less power
- At the same power draw, Pascal will perform way better

There is no way to spin that... those are architectural differences, and that is why I said...

nVidia is probably one or two generations ahead of AMD in perf/watt with its GPU tech... AMD is still playing catch-up.
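To put rough numbers on that claim, here is a quick perf/watt ratio using the ballpark power figures quoted in this thread; the 1.7x relative-performance factor is an illustrative assumption, not a benchmark result:

Code:
# Rough perf/watt ratio, GTX 1080 vs RX 480, using ballpark numbers.
rx480_power_w = 160          # average gaming load cited earlier in the thread
gtx1080_power_w = 180        # roughly the same ballpark as the RX 480
relative_performance = 1.7   # assumed 1080-vs-480 performance factor (illustrative)

ratio = (relative_performance / gtx1080_power_w) / (1.0 / rx480_power_w)
print(round(ratio, 2))       # ~1.51x perf/watt advantage under these assumptions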

He's saying that price and performance are what matters to the consumer.

If you want to spend $250 on a card, you buy the fastest you can afford at that bracket (1060 or 480/580 depending on game(s) and who you ask).

If you want to spend $500, you buy a 1080.

Performance/watt doesn't enter the analysis for almost all consumers. Yes Nvidia is more power efficient. No, most purchasers don't actually care about that.
That has nothing to do with the discussion from the beginning, or with what I claimed and am defending now... what I said never compared prices.

He is changing the subject.
 
Of course it is not unfair. It is Polaris vs Pascal...

+ At the same performance, Pascal will draw way less power
+ At the same power draw, Pascal will perform way better

There is no way to spin that... those are architectural differences, and that is why I said...

Lol you're unbelievable.

Pascal is an architecture that scales from low-end to high-end. You know Polaris isn't the same for AMD. It's low-mid, so compare the 460/470/480 with the 1050/1050 Ti/1060 for fairer perf/watt comparisons. AMD is going to have a high-end competitor in about 6-8 weeks.
 
That has nothing to do with the discussion from the beginning, or with what I claimed and am defending now... what I said never compared prices.

He is changing the subject.

Huh?

Perf/watt is irrelevant for 99% of consumers

Tell that to the world because every silicon company is working hard to reach better perf/watt.

That is really a big deal in CPUs and even more so in GPUs... anybody who has an RX 480 knows how bad its power usage is, to the point that it doesn't reach its default clocks most of the time.

No. It is not irrelevant.

It was like the first comment in this "argument" and you responded to it directly. How does it have nothing to do with the discussion?
 

ethomaz

Banned
Lol you're unbelievable.

Pascal is an architecture that scales from low-end to high-end. You know Polaris isn't the same for AMD. It's low-mid, so compare the 460/470/480 with the 1050/1050 Ti/1060 for fairer perf/watt comparisons. AMD is going to have a high-end competitor in about 6-8 weeks.
That is an architectural difference that affects all products.

The GTX 1080 draws less power than the RX 480.
The GTX 1060 draws less power than the RX 470.
The GTX 1050 draws less power than the RX 460.

You can go up or down the lineup because it is not a market-segment difference but an architectural difference between Polaris and Pascal... that is why I said nVidia is probably one or two generations ahead of AMD in terms of perf/watt.

 
the notion that a 480 usually draws 200W is plain bullshit.

my overclocked 480 (1380Mhz) draws 80 to 110W on the vcore when under load. add 30-50W for VRAM and cooler and that's it.
 

ethomaz

Banned
the notion that a 480 usually draws 200W is plain bullshit.

my overclocked 480 (1380Mhz) draws 80 to 110W on the vcore when under load. add 30-50W for VRAM and cooler and that's it.
It peaks at 200W... the average under load is ~160W... that is what most reviews show.

[chart: average power consumption under load]

[chart: gaming power consumption overview from the Tom's Hardware review]


We skipped long-term overclocking and overvolting tests, since the Radeon RX 480's power consumption through the PCIe slot jumped to an average of 100W, peaking at 200W. We just didn't want to do that to our test platform.
 

dr_rus

Member
We don't know that it is 300W, and there is also the fact that it is passively cooled.

Just think how crazy it is to passively cool 300W (not that passively cooling +175W isn't crazy already).
It's a server rack card; it's not "passively cooled", it's in fact the exact opposite - the whole cooler is one giant radiator which is supposed to be cooled by the rack's external fans. And it is rated at 300W - last time, people were laughing at the RX480 being rated at 150W going off just its peak power supply figure, and how did that turn out for them?

MI6(5.8) has the same amount of tflops as RX480(5.8). How are they considerably higher? As far as I'm aware all the server parts are clocked the same or more conservatively than their desktop equivalent.

So expecting them to be less this time around really has no basis in reality, and you should actually expect higher clocks from the consumer version if anything.
The RX480 is not a 5.8 TFLOPS card because:
1. It doesn't provide this performance at the 150W stated for the MI6.
2. It doesn't provide this performance for compute applications at all, as that would require it to run at boost clocks all the time in compute, and it's hardly able to do that even in gaming.
The MI6 specs clearly hint at AMD using the best chips for these cards, which they likely have in very small quantities.

AMD has a history of specifying higher clocks for embedded parts, and this coupled with the pricing of Instinct parts pretty much means that it's totally unrealistic to expect such clocks on the RX Vega. It would be cool if they'd be able to reach them though.
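For reference, here is where the 5.8 TFLOPS figure comes from and why the sustained clock matters; the throttled clock at the end is purely a hypothetical value for illustration:

Code:
# FP32 throughput = stream processors x 2 ops per clock (FMA) x clock.
stream_processors = 2304    # Polaris 10 / RX 480
ops_per_clock = 2           # one fused multiply-add per SP per clock
boost_clock_ghz = 1.266     # advertised RX 480 boost clock

print(stream_processors * ops_per_clock * boost_clock_ghz / 1000)  # ~5.83 TFLOPS at full boost

throttled_clock_ghz = 1.12  # hypothetical sustained clock, for illustration only
print(stream_processors * ops_per_clock * throttled_clock_ghz / 1000)  # ~5.16 TFLOPS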

plz explain to me the benefit a consumer receives from the lower power draw of a 1060 v a 480/580

Lower power draw -> less heat dissipation -> less noise from air cooling / cheaper coolers used leading to lower retail prices.
 
It's a server rack card; it's not "passively cooled", it's in fact the exact opposite - the whole cooler is one giant radiator which is supposed to be cooled by the rack's external fans. And it is rated at 300W - last time, people were laughing at the RX480 being rated at 150W going off just its peak power supply figure, and how did that turn out for them?


The RX480 is not a 5.8 TFLOPS card because:
1. It doesn't provide this performance at the 150W stated for the MI6.
2. It doesn't provide this performance for compute applications at all, as that would require it to run at boost clocks all the time in compute, and it's hardly able to do that even in gaming.
The MI6 specs clearly hint at AMD using the best chips for these cards, which they likely have in very small quantities.

AMD has a history of specifying higher clocks for embedded parts, and this coupled with the pricing of Instinct parts pretty much means that it's totally unrealistic to expect such clocks on the RX Vega. It would be cool if they'd be able to reach them though.



Lower power draw -> less heat dissipation -> less noise from air cooling / cheaper coolers used leading to lower retail prices.

except that 480s/580s run cool, quiet and cost less than 1060s
 

Locuza

Member
[...]
The RX480 is not a 5.8 TFLOPS card because:
1. It doesn't provide this performance at the 150W stated for the MI6.
2. It doesn't provide this performance for compute applications at all, as that would require it to run at boost clocks all the time in compute, and it's hardly able to do that even in gaming.
The MI6 specs clearly hint at AMD using the best chips for these cards, which they likely have in very small quantities.
[...]
1. It's "<150W"
2. Who says that the MI6 is always reaching its boost clock?
 
You probably need to check some reviews for the RX 480 because it runs hotter and makes more noise than even the high-end cards of the competition.

technically sure, but at a low-30s dBA rating it's not even going to be discernible. as for temps, they run high 60s to low 70s. again, a few degrees lower than that doesn't actually provide the user any real benefit
 

ethomaz

Banned
technically sure, but at a low-30s dBA rating it's not even going to be discernible. as for temps, they run high 60s to low 70s. again, a few degrees lower than that doesn't actually provide the user any real benefit
More like 50dBA and over 80°C under load for the RX 480.
 
It peaks at 200W... the average under load is ~160W... that is what most reviews show.

well, 160W falls in line with what i said. it would help if you linked your sources (especially when you quote something, otherwise no one can comprehend the context). if i go to the techpowerup review of the reference 480 i find this:

[chart: peak power consumption, from the TechPowerUp review]


[chart: maximum power consumption, from the TechPowerUp review]


and those are peak values, not representative of mean load.


i don't really know why you throw the MSI card into the mix. it runs with a core voltage of 1150mV. that's 70mV higher than the reference card (1082mV). of course it will draw more power (it also has a higher TDP rating because of its 8-pin configuration). while the gamingX is one of the best 480s, i'm not really sure why MSI needed to go with such high voltages. even my 480 clocks considerably higher than the gamingX with just 1110mV (+30mV over reference) while having merely average silicon. my card (XFX 480 RS) on stock settings has an even lower voltage (1062mV) than the reference card and in contrast can hold its boost clock (1288MHz) without any problems at all while consuming less power than the reference card.


Which is interesting, since an RX580 actually providing 5.8 TF uses over 200 Watts at load.

the RX580 is between 6.2 and 6.5 TF depending on which manufacturer you go with. to get there you need higher clock speeds. to get polaris to those clock speeds you need more core voltage. more core voltage means more power draw, especially if you move towards saturation... so what's your point exactly?
 

llien

Member
The GTX 1080 draws the same power as (or even less than) the RX 480.

Please stop spreading FUD.

Besides, for PSU-wary users AMD has great utilities, such as WattMan (easily saves 30-ish watts with no performance impact, just by setting less aggressive voltages) and Chill (which can cut power consumption by two thirds).
 

llien

Member
Well, Vega will introduce tiled rasterization on AMD GPUs, while Nvidia introduced it with Maxwell in 2014 (without telling anyone).
Any other "technology" that nVidia is using that is "years ahead"?


Yes, I accept the 1800X has an amazing P/P ratio compared to the Intel-based chips, but in individual core performance the situation is a little bit different...

It's a GPU thread, but since you've mentioned it:

[benchmark chart]
 

ISee

Member
Any other "technology" that nVidia is using that is "years ahead"?




It's a GPU thread, but since you've mentioned it:

[benchmark chart]

Please start posting large pics in quotes or code, like the rest of us (or most of us). Thank you :)
It's way easier to read the thread this way, and if people want to see a larger version they can click on it to enlarge it. It's common practice on NeoGAF, except in dedicated screenshot threads.
 

dr_rus

Member
except that 480s/580s run cool, quiet and cost less than 1060s
On what planet? 480 is hotter, noisier and definitely not cheaper than 1060. 580 is a 1070 level card when it comes to power dissipation and noise and it's definitely not cheaper than 1060 either.

1. It's "<150W"
2. Who says that the MI6 is always reaching its boost clock?
1. Which is even more suspicious.
2. It is rated at a 5.8 TFLOPS figure, which in the HPC world means it must be able to provide that in a sustained manner, which means it must run at ~1260MHz all the time while consuming less than 150W - something even the RX480 isn't able to achieve, let alone the RX580. Why is it so hard for some of you to see?
 
On what planet? 480 is hotter, noisier and definitely not cheaper than 1060. 580 is a 1070 level card when it comes to power dissipation and noise and it's definitely not cheaper than 1060 either.


1. Which is even more suspicious.
2. It is rated at a 5.8 TFLOPS figure, which in the HPC world means it must be able to provide that in a sustained manner, which means it must run at ~1260MHz all the time while consuming less than 150W - something even the RX480 isn't able to achieve, let alone the RX580. Why is it so hard for some of you to see?

its absolutely cheaper

https://www.newegg.com/Product/Prod...0446076&PID=6149513&SID=j2d6qiygw300ag8y00053

https://www.newegg.com/Product/Prod...43&IsNodeId=1&bop=And&Order=PRICE&PageSize=36

it runs a bit hotter and a bit noisier, but it matters to literally no one if a card is 3 dBA quieter when you're in the low to mid 30s, or a few °C cooler when you're in the high 60s to low 70s.

edit - 580 is cheaper too

https://www.newegg.com/Product/Prod...NE&IsNodeId=1&N=100007709 601296377 600494828

and also faster

https://www.youtube.com/watch?v=H6st6QZTDxE
http://www.techspot.com/review/1393-radeon-rx-580-vs-geforce-gtx-1060/

no sane person would ever buy a 1060 at this point without highly specific circumstances
 
On what planet? 480 is hotter, noisier and definitely not cheaper than 1060. 580 is a 1070 level card when it comes to power dissipation and noise and it's definitely not cheaper than 1060 either.

In the US, you could definitely find plenty of 480s cheaper than equivalent 1060s. Hell, around December-February you could pick up a custom 480 4GB for as low as $150.

580 hovers around 1060 6GB prices.
 
On what planet? 480 is hotter, noisier and definitely not cheaper than 1060. 580 is a 1070 level card when it comes to power dissipation and noise and it's definitely not cheaper than 1060 either.


The 480, in both its 4GB and 8GB versions, is universally and noticeably cheaper than a 1060. It also performs similarly, or slightly better in DX12.

Mine runs at 1300+MHz, topping out at 72°C, and it's not noisy; my CPU cooler is noisier than it is.
 

dr_rus

Member
its absolutely cheaper

https://www.newegg.com/Product/Prod...0446076&PID=6149513&SID=j2d6qiygw300ag8y00053

https://www.newegg.com/Product/Prod...43&IsNodeId=1&bop=And&Order=PRICE&PageSize=36

it runs a bit hotter and a bit noisier, but it matters to literally no one if a card is 3 dBA quieter when you're in the low to mid 30s, or a few °C cooler when you're in the high 60s to low 70s.

edit - 580 is cheaper too

https://www.newegg.com/Product/Prod...NE&IsNodeId=1&N=100007709 601296377 600494828

and also faster

https://www.youtube.com/watch?v=H6st6QZTDxE
http://www.techspot.com/review/1393-radeon-rx-580-vs-geforce-gtx-1060/

no sane person would ever buy a 1060 at this point without highly specific circumstances

Local prices in places I visit:

1060: http://www.regard.ru/catalog/tovar231790.htm (this isn't even the cheapest one; you can get a mini-ITX one or a blower-style one significantly cheaper)
580: http://www.regard.ru/catalog/tovar250424.htm

1060: https://www.gigantti.fi/product/tie...us-dual-geforce-gtx-1060-oc-naytonohjain-3-gb
480: https://www.gigantti.fi/product/tie.../asus-dual-radeon-rx-480-oc-naytonohjain-4-gb
(Pretty safe to assume the 580 will cost more here than the 480 once it appears)

So why would "no sane person" buy a card which is less expensive, consumes less electricity, and performs more or less the same?

I also see that you've chosen to just ignore the rest of my arguments as you probably can't think of anything to say about them?
 

Locuza

Member
[...]
1. Which is even more suspicious.
2. It is rated at a 5.8 TFLOPS figure, which in the HPC world means it must be able to provide that in a sustained manner, which means it must run at ~1260MHz all the time while consuming less than 150W - something even the RX480 isn't able to achieve, let alone the RX580. Why is it so hard for some of you to see?
The MI6 has the same specs as the Radeon Pro WX7100; the latter is rated at 130W, and for the MI6 AMD says "<150W".
The reasons why the Pro GPU is more efficient than an RX480 are the lower memory speed, better components on the PCB, a more restricted power budget and, just for you, maybe also better-binned P10 chips.
http://www.tomshardware.co.uk/amd-radeon-pro-wx-7100,review-33783-2.html

I'd rather believe that the MI6 works like the WX7100 than speculate that the boost clocks are always reached and that better-binned P10 chips would provide such a massive perf/watt advantage.
 

dr_rus

Member
Firestrike result of the same RX Vega sample: http://www.3dmark.com/fs/12296284
Comparison with potential competition: https://www.3dcenter.org/news/erster-3dmark-firestrike-wert-zu-amds-vega-10-aufgetaucht

The MI6 has the same specs as the Radeon Pro WX7100; the latter is rated at 130W, and for the MI6 AMD says "<150W".
The reasons why the Pro GPU is more efficient than an RX480 are the lower memory speed, better components on the PCB, a more restricted power budget and, just for you, maybe also better-binned P10 chips.
http://www.tomshardware.co.uk/amd-radeon-pro-wx-7100,review-33783-2.html

I'd rather believe that the MI6 works like the WX7100 than speculate that the boost clocks are always reached and that better-binned P10 chips would provide such a massive perf/watt advantage.

I don't think any specs for the Instinct lineup were ever disclosed, and memory speed is very important for the deep learning market.

The WX7100 is a good example, which you've decided to see in completely the opposite fashion. People were pointing to it before the 500 series launch as a sign of a new Polaris revision with significantly improved power consumption, but the result in the form of the RX580 doesn't show anything of the sort, clearly pointing to the fact that the WX7100 is just using rare binned chips (it's rather doubtful that a 1GHz memory downclock alone would shave >20W off the RX480's power consumption; from recent examples, +1GHz on the 1060's GDDR5 memory added less than 5W to its power draw). And the Instinct lineup is supposedly several times more expensive than the Pro line, so it can use even higher-binned and even rarer chips.
 

Locuza

Member
[...]
I don't think any specs for the Instinct lineup were ever disclosed, and memory speed is very important for the deep learning market.
[table: Radeon Instinct specifications]

http://www.anandtech.com/show/10905/amd-announces-radeon-instinct-deep-learning-2017/2

The WX7100 is a good example, which you've decided to see in completely the opposite fashion.
Which in my opinion is the more reasonable viewpoint.

People were pointing to it before the 500 series launch as a sign of a new Polaris revision with significantly improved power consumption [...]
I wasn't among them and I doubted that the WX7100 would use a new stepping or that the fabrication process got so much better in comparison to the first RX480 results.

but the result in the form of the RX580 doesn't show anything of the sort, clearly pointing to the fact that the WX7100 is just using rare binned chips (it's rather doubtful that a 1GHz memory downclock alone would shave >20W off the RX480's power consumption
http://www.tomshardware.co.uk/amd-radeon-pro-wx-7100,review-33783-2.html

Page 1 said:
It certainly appears that AMD put a lot more effort into the Radeon Pro's [PCB] design than the Radeon RX 480. This should manifest as higher efficiency under load.
Page 4 said:
At the start of this review we noted the Radeon Pro's restrictive power limit that tried to keep the card below its 130W ceiling. Demanding workloads appear to be pegged at 137 watts without any room to push higher, which is good.
Page 4 said:
The Radeon RX 480's maximum power consumption seems a bit high for a mainstream product. But the workstation-class Radeon Pro WX 7100 is more reasonable. Solid performance through our real-world benchmark suite and closer proximity to Ellesmere's sweet spot make WX 7100 a more efficient product.

I've now repeated myself for the third time, but apparently it's always just better-binned chips.

And the Instinct lineup is supposedly several times more expensive than the Pro line, so it can use even higher-binned and even rarer chips.
You can get a WX7100 with 8GB for ~$625 on Amazon:
https://www.amazon.com/AMD-Radeon-100-505826-256-bit-GDDR5/dp/B01N8XS96E

The MI6 will come with 16GB, but AMD hasn't announced any prices.
The Radeon Instinct MI6 accelerator based on the acclaimed Polaris GPU architecture will be a passively cooled inference accelerator optimized for jobs/second/Joule with 5.7 TFLOPS of peak FP16 performance at 150W board power and 16GB of GPU memory
http://www.amd.com/en-us/press-releases/Pages/radeon-instinct-2016dec12.aspx
(The press release speaks about "150W" and not "<150W"; somebody is trying to mislead us here.)

AMD might once again use a more restrictive power target, higher-quality components for the board design, get a few watts back from lower memory clocks and, YES, maybe also bin the P10 chips better, but that's not the only reason why these cards score better efficiency results than something like an RX480 or RX580, where the latter further decreases perf/watt.
 

Avtomat

Member
the few bucks youd save over 3 years doesnt even make up for the price premium youd pay for a 1060. its just comedy



the former is clearly better than the latter

My PC sits beside me on my desk - when I had a 780 I got noticeably warmer when it was being pushed - this was @ 1.1 Ghz.

Many people go out of their way to get quieter systems myself included - more power will typically (not all the time) mean more heat to dissipate and more noise.

If solving these 2 issues means a 2-3% drop in performance then sign me up.
 
My PC sits beside me on my desk - when I had a 780 I got noticeably warmer when it was being pushed - this was @ 1.1 Ghz.

Many people go out of their way to get quieter systems myself included - more power will typically (not all the time) mean more heat to dissipate and more noise.

If solving these 2 issues means a 2-3% drop in performance then sign me up.

Then why wouldnt you have just dropped the oc on the 780?
 

Avtomat

Member
Then why wouldnt you have just dropped the oc on the 780?

I did in the end, it got irritating enough that I would overclock harder in winter than I would in summer even though the card was stable.

My point was that, all other things being equal, power and heat dissipation are valid concerns. Not the top priority, but valid concerns nonetheless.
 

dr_rus

Member
AMD might once again use a more restrictive power target, higher-quality components for the board design, get a few watts back from lower memory clocks and, YES, maybe also bin the P10 chips better, but that's not the only reason why these cards score better efficiency results than something like an RX480 or RX580, where the latter further decreases perf/watt.

Let's assume that you're right - how exactly does this change what I'm saying? AMD can use all of this on the MI25 and not use it on RX Vega - the end result would be the same as if they'd just used binned chips, with RX Vega running slower and hotter than what seems to be hinted at by the MI25 specs. So what exactly are you arguing about?
 

Pagusas

Elden Member
My PC sits beside me on my desk - when I had a 780 I got noticeably warmer when it was being pushed - this was @ 1.1 Ghz.

Many people go out of their way to get quieter systems myself included - more power will typically (not all the time) mean more heat to dissipate and more noise.

If solving these 2 issues means a 2-3% drop in performance then sign me up.

If it's that big of an issue to you, then water-cool it.
 
There are a lot of benefits, and most of them follow from each other...

+ More performance at the same power target
+ Cheaper PSU
+ Less power consumption (daaaaa)
+ Less heat
+ Cheaper cooling system for the GPU and the build
+ Cheaper GPU parts
+ Fewer issues with clock throttling.
-The first one is definitely important on the engineering side of things, but the consumer doesn't need to worry about that

-You can get PSUs (550W) that can run practically any setup for about $35. You could go a bit cheaper with a 1060 over a 480, but only by about $5, and you'd get fewer watts per dollar. I have an OC'ed 4790K/Titan XP system and power draw doesn't exceed 600W from the wall. I have an 80+ Bronze efficiency PSU, so the actual power requirements are more in line with 500W (see the rough sketch after this list). I picked up that particular 750W PSU for $50 too.

-Less power consumption, yes, but it's insignificant once you factor in everything a normal house has running, and it matters less when you consider the power consumption of the entire system.

-It's true that there is less heat, but not by a substantial amount. I can't say I can tell the difference between a Titan XP, an R9 290, a GTX 980 and 2x670s as far as heat. Noise is more important IMO and that depends on the cooling. There are plenty of quiet designs on both sides.

-You won't need extra cooling capacity for an AMD card over an NV one.

-Important for the manufacturer, but the end user doesn't have to worry about anything except what it's costing them, and there are lots of great deals on AMD cards.

-Depends entirely on the cooling solution. Rarely a problem with any aftermarket card.
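A quick sketch of the wall-draw vs. actual-load arithmetic behind the PSU point above; the 85% figure is an assumed efficiency for an 80+ Bronze unit at typical load, not a measured value:

Code:
# Wall draw vs. DC load actually delivered to the components.
wall_draw_w = 600    # measured at the wall, per the post above
efficiency = 0.85    # assumed 80+ Bronze efficiency at this load

print(wall_draw_w * efficiency)  # ~510 W delivered to the system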
 