
Radeon RX Vega thread

Such a disappointment. I was hoping for more information, especially after the AMA hype they did on Reddit.

I was in need of a GPU for my build and was looking forward to this event.

With the lack of information, coupled with the RX 580 8GB being out of stock everywhere for an indefinite amount of time, I decided to buy a GTX 1060 today. Too bad, because I wanted to go full AMD for my build.
 

dr_rus

Member
Consumer Volta is unlikely to launch before SK Hynix gets GDDR6 going, and that's not before next year.

There's no reason why Volta can't launch with GDDR5X, especially the upper mid-range one. GDDR5X's highest specced speed is well within that of GDDR6, and it's entirely possible that Micron will be able to hit it this year.

I also can't remember a single time when NV waited for some memory tech to launch their new GPUs. They have updated previously launched GPUs with new memory a number of times though.
 
There's no reason why Volta can't launch with GDDR5X, especially the upper mid-range one. GDDR5X's highest specced speed is well within that of GDDR6, and it's entirely possible that Micron will be able to hit it this year.

I also can't remember a single time when NV waited for some memory tech to launch their new GPUs. They have updated previously launched GPUs with new memory a number of times though.

This timetable you suggest for Volta sounds an awful lot like fanboyism, without anything backing it up. Sure, Nvidia can use older memory in cheaper products, but that's hardly what people are waiting for.

No one's saying Nvidia is "waiting" on memory. They're not releasing anything to replace 1080 Ti so soon after its release.
 
Take this with a grain of salt, but people on Reddit are pointing out that there were likely multiple tears per frame in the awful Vega Prey demo. This could be a CrossFire issue to do with the unfinished cards.

But it could also be that the framerate was much higher than the projector could output. The question then is: why didn't they just use a single card? Or show an FPS counter?

Anyway: The wait for Vega continues.

Edit: sources


https://www.reddit.com/r/Amd/comments/6edd3h/amd_rx_vega_computex_pray_demo_fps/

https://www.reddit.com/r/Amd/comments/6ecdfr/slug/di9irs6
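For what it's worth, the "more tears than the projector can show" idea is easy to quantify: with vsync off, the number of tear seams per displayed frame is roughly the render rate divided by the refresh rate. A minimal sketch (the frame rates here are purely illustrative, not measurements from the demo):

```python
def tears_per_refresh(render_fps: float, refresh_hz: float) -> float:
    """With vsync off, a new frame is swapped in roughly every 1/render_fps s
    while one scanout takes 1/refresh_hz s, so about render_fps / refresh_hz
    swap points (tear seams) land inside each displayed frame."""
    return render_fps / refresh_hz

# Illustrative numbers: a 60 Hz projector fed by a rig rendering ~200 fps
print(round(tears_per_refresh(200, 60), 1))  # 3.3 seams per displayed frame
```

So if the CF rig really was rendering well past the projector's refresh, several tears per frame is exactly what you'd expect to see.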
 
Take this with a grain of salt, but people on Reddit are pointing out that there were likely multiple tears per frame in the awful Vega Prey demo. This could be a CrossFire issue to do with the unfinished cards.

But it could also be that the framerate was much higher than the projector could output. The question then is: why didn't they just use a single card? Or show an FPS counter?

Anyway: The wait for Vega continues.

Edit: sources


https://www.reddit.com/r/Amd/comments/6edd3h/amd_rx_vega_computex_pray_demo_fps/

https://www.reddit.com/r/Amd/comments/6ecdfr/slug/di9irs6

Stage 1.
 

Renekton

Member
Take this with a grain of salt, but people on Reddit are pointing out that there were likely multiple tears per frame in the awful Vega Prey demo. This could be a CrossFire issue to do with the unfinished cards.

But it could also be that the framerate was much higher than the projector could output. The question then is: why didn't they just use a single card? Or show an FPS counter?

Anyway: The wait for Vega continues.
I think Reddit was more concerned about needing to CrossFire Vegas when Prey zips through 4K on one 1080 Ti.
 
I think Reddit was more concerned about needing to CrossFire Vegas when Prey zips through 4K on one 1080 Ti.
Not just Reddit's concern. A 980 Ti gets around 45 fps, I think. So if that Vega card sits between the 980 Ti and the 1080 Ti, they should really only need one. It sent all the wrong signals.
 

dr_rus

Member
This timetable you suggest for Volta sounds an awful lot like fanboyism, without anything backing it up. Sure, Nvidia can use older memory in cheaper products, but that's hardly what people are waiting for.

No one's saying Nvidia is "waiting" on memory. They're not releasing anything to replace 1080 Ti so soon after its release.

Suuuure. I've just told you what's backing it up, in addition to a whole lot of other factors. What sounds like fanboyism is your apparent inability to read what I've said.

Autumn is not "soon" after 1080Ti release.
 
Suuuure. I've just told you what's backing it up, in addition to a whole lot of other factors. What sounds like fanboyism is your apparent inability to read what I've said.

Autumn is not "soon" after 1080Ti release.

If you call that backing up your statements, you're delusional. And in terms of GPU generations these days, spring to autumn is the blink of an eye.
 

dr_rus

Member
If you call that backing up your statements, you're delusional. And in terms of GPU generations these days, spring to autumn is the blink of an eye.

Of course, I'm delusional, and not the guy who expects consumer Volta later than general GV100 availability in 4Q17.

Spring to autumn is 6-9 months (considering that the 1080 Ti launched in March). 6-9 months is not a "blink of an eye". GP104 launch to autumn is 15-18 months, and that is actually at the higher end of a typical NV product update schedule (the lower end is about 12 months). If you still can't see it, then there's no reason to continue this discussion.
 
This timetable you suggest for Volta sounds an awful lot like fanboyism, without anything backing it up. Sure, Nvidia can use older memory in cheaper products, but that's hardly what people are waiting for.

No one's saying Nvidia is "waiting" on memory. They're not releasing anything to replace 1080 Ti so soon after its release.

If the GV104 is smaller and cheaper to make than the GP102 used in the 1080 Ti, then why wouldn't Nvidia release it?

Honestly, Nvidia can do whatever they want at this point and they'd still have more goodwill with enthusiasts than AMD.
 
Of course, I'm delusional, and not the guy who expects consumer Volta later than general GV100 availability in 4Q17.

Spring to autumn is 6-9 months (considering that the 1080 Ti launched in March). 6-9 months is not a "blink of an eye". GP104 launch to autumn is 15-18 months, and that is actually at the higher end of a typical NV product update schedule (the lower end is about 12 months). If you still can't see it, then there's no reason to continue this discussion.

What you lack is any proof of your suggested timetable for the launch of consumer Volta. Sorry if I don't just take things on blind faith based on no factual information. I've seen it time and time again: fanboys have these ridiculous notions of when the next gen "should arrive", yet have nothing but vague assumptions based on when previous gens arrived. There's solid information out there from Nvidia's own GTC that GDDR6 is aimed at early 2018, and that's as close to a reliable hint as you'll get at this point. Could Nvidia launch some low-end Volta before that? Perhaps, but who cares; people talking Volta are expecting a 1080/Ti successor.
 

PFD

Member
Of course, I'm delusional, and not the guy who expects consumer Volta later than general GV100 availability in 4Q17.

Spring to autumn is 6-9 months (considering that the 1080 Ti launched in March). 6-9 months is not a "blink of an eye". GP104 launch to autumn is 15-18 months, and that is actually at the higher end of a typical NV product update schedule (the lower end is about 12 months). If you still can't see it, then there's no reason to continue this discussion.

Was there ever only a 12-month wait between one x80 launch and the next? It's usually longer, no?
 

NeOak

Member
Looking forward to Ryzen + Vega APUs no matter what.



It was a risk nVidia didn't have to take.
AMD, on the other hand, being underfunded and an underdog, had to gamble.

It seems it didn't quite work out.
My main concern is not that it doesn't take on the 1080 Ti, but that they are still not able to actually bring the product to market (faster than 1070/1080 covers the biggest part of the high-end market anyhow).


Also, Raja's claim that there are only a handful of GPU manufacturers because it is so hard to write drivers smells... like:

Lisa: dafuq Raja, why am I not impressed by how our HBM2 500mm2 chip performs?
Raja: Oh, that's... drivers! It rocks, but we don't have good enough drivers (points at the dudes responsible for software)

Ask Intel how hard it is. Seriously.

I don't think you remember or know what gaming on S3, Trident, Matrox, or PowerVR cards meant back then, or what it was like when a new DirectX release came out and made your brand-new GPU obsolete.

There's no reason why Volta can't launch with GDDR5X, especially the upper mid-range one. GDDR5X's highest specced speed is well within that of GDDR6, and it's entirely possible that Micron will be able to hit it this year.

I also can't remember a single time when NV waited for some memory tech to launch their new GPUs. They have updated previously launched GPUs with new memory a number of times though.
GeForce FX 5800.
 

OryoN

Member
After all this waiting for Vega, they're still talking about a dual GPU setup as something for the enthusiasts, all without first mentioning whether we can expect a single Vega GPU that rivals the competition's top cards? Very poor messaging, as usual.

AMD has been way too unclear about their product roadmap. After the "cheap high-performance" Polaris line, and the hype build-up to their new GPU architecture, you would've thought AMD would already be trying to challenge high-end Nvidia cards with the introduction of Vega. Nope! Likewise, I don't even know what they are doing with Ryzen. Even when they have good products, their current planning, mass-market advertising/messaging, execution, future plans, and how they intend to expand their business... it's all just too poorly done.

It's no wonder there's a lot of uncertainty around AMD's stock right now! It's like asking investors to admire your beautiful painting from the outside of a stained-glass window. What are your ultimate goals, AMD? No one knows! Meanwhile, Nvidia is not only busy drumming up excitement for the world's AI revolution, but also making it crystal clear to investors exactly how they plan to be the driving force behind this inevitable future. What a f#&$ing contrast!
 

napata

Member
What you lack is any proof of your suggested timetable for the launch of consumer Volta. Sorry if I don't just take things on blind faith based on no factual information. I've seen it time and time again: fanboys have these ridiculous notions of when the next gen "should arrive", yet have nothing but vague assumptions based on when previous gens arrived. There's solid information out there from Nvidia's own GTC that GDDR6 is aimed at early 2018, and that's as close to a reliable hint as you'll get at this point. Could Nvidia launch some low-end Volta before that? Perhaps, but who cares; people talking Volta are expecting a 1080/Ti successor.

No one is expecting a Ti successor this year. Nvidia always releases their x80 & x70 cards first, with a Ti months later. I could easily see the 1070 & 1080 successors using GDDR5X.
 
What you lack is any proof of your suggested timetable for the launch of consumer Volta.

Yeah. Everybody lacks that, yourself included. All we can do is speculate based on what Nvidia has done in the past.

No one is expecting a Ti successor this year. Nvidia always releases their x80 & x70 cards first, with a Ti months later. I could easily see the 1070 & 1080 successors using GDDR5X.

This. We're talking about a Volta-based cutdown where the "1180" would outstrip the 1080Ti by a few percentage points, and an "1170" that would be positioned in the neighborhood of the 1080. Wouldn't be the first time Nvidia has used that approach on a similar timetable. The "1180Ti" would launch ~spring 2018, with the aforementioned GDDR6.
 

ethomaz

Banned
This timetable you suggest for Volta sounds an awful lot like fanboyism, without anything backing it up. Sure, Nvidia can use older memory in cheaper products, but that's hardly what people are waiting for.

No one's saying Nvidia is "waiting" on memory. They're not releasing anything to replace 1080 Ti so soon after its release.
GV104 is not a Ti.
GDDR5X allows 384 GB/s on a 256-bit bus... and that's only if the manufacturer doesn't have 13/14 Gbps GDDR5X.
 

NeOak

Member
I can't imagine any other explanation, besides "performance is lacking".




You mean that company that is 5 times bigger than AMD + nVidia together?
Are you sure they'd say "we suck at writing software"?
Look at their compilers, as well as a bunch of other software they write.

Graphics drivers require more effort than all their other drivers combined. They don't invest in those.
 

PFD

Member
This. We're talking about a Volta-based cutdown where the "1180" would outstrip the 1080Ti by a few percentage points, and an "1170" that would be positioned in the neighborhood of the 1080. Wouldn't be the first time Nvidia has used that approach on a similar timetable. The "1180Ti" would launch ~spring 2018, with the aforementioned GDDR6.

There was a 2 year wait between the 980Ti and 1080Ti
 

llien

Member
MT9WO57.png


Not sure why 60 is something to boast about.


Look at their compilers, as well as a bunch of other software they write.
What am I supposed to see? Fantastic compiler?

Graphics drivers require more effort than all their other drivers combined. They don't invest in those.
The key here is that it requires DIFFERENT expertise/experience and many iterations to progress (as well as making silicon for those). In my opinion, of course; I'm not claiming these are facts.
The size of that effort is pretty much irrelevant in the context of a monster company like Intel.
 

dr_rus

Member
What you lack is any proof of your suggested timetable for the launch of consumer Volta.
Okay.

Was there ever only a 12-month wait between one x80 launch and the next? It's usually longer, no?
Sure:
- 480->580 in 8 months
- 680->780 in 14 months

But then, 12 months from the 1080 launch is right now, so the proper question is: what's the biggest typical period between NV gen updates? If we take the longest periods, we're looking at:
- 980->1080 in 20 months (not their decision but a result of TSMC's failure with 20nm)
- 780->980 in 16 months

The second one is important because there are many similarities between the Kepler-to-Maxwell and the upcoming Pascal-to-Volta switches.

So where does that put Volta? Obviously we've already passed the minimal 12-month mark, so it will be more. May '16 + 16 months is Sep '17. May '16 + 20 months is Jan '18.

Those who expect an x80 Volta next year assume (for some reason) that Volta will be as late relative to Pascal as Pascal was relative to Maxwell (or even later) - but Pascal was late because of the production process failure at TSMC, and nobody expects Volta to wait for 7nm (which is likely 1H19 for GPU production). That's why I really don't get people who expect Volta next year. It will most likely launch this autumn, starting with GV104 cards which will replace the 1080 and 1080 Ti, as usual.
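The month arithmetic above is simple to check mechanically. A quick sketch using the launch gaps cited in the post (the dates are the thread's own figures, not confirmed schedules):

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by a number of calendar months."""
    y, m = divmod(d.month - 1 + months, 12)
    return d.replace(year=d.year + y, month=m + 1)

gtx1080_launch = date(2016, 5, 1)  # GP104 / GTX 1080 launch month cited above

# 16-month gap (the 780 -> 980 pattern) and 20-month gap (980 -> 1080, 20nm slip)
print(add_months(gtx1080_launch, 16))  # 2017-09-01 -> Sep '17
print(add_months(gtx1080_launch, 20))  # 2018-01-01 -> Jan '18
```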

GeForce FX 5800.
GDDR2 wasn't really a new memory type; it was basically just overclocked DDR1 - which is partially why the FX 5800 failed in power efficiency, as this OCed DRAM dissipated insane amounts of heat. For all that went wrong with the FX series, I don't think DRAM was the reason for its late introduction.
 

Papacheeks

Banned
MT9WO57.png


Not sure why 60 is something to boast about.



What am I supposed to see? Fantastic compiler?


The key here is that it requires DIFFERENT expertise/experience, as well as making silicon for those. In my opinion, of course; I'm not claiming these are facts.
The size of that effort is pretty much irrelevant in the context of a monster company like Intel.

Maybe if those cards were the 4GB variant and coming in under $400? That would be a big deal if that were the case.

Rumors from a while back point to there being a somewhat budget Vega card close to the 1070 in terms of performance. So if those were the lower-end versions with only 4GB of memory, running together in CrossFire at 4K 60fps or well above, then that is impressive, as it's a little over 1080 Ti price for two cards running together. But all this is conjecture at this point without solid evidence to go by.
 

ZOONAMI

Junior Member
GV104 is not a Ti.
GDDR5X allows 384 GB/s on a 256-bit bus... and that's only if the manufacturer doesn't have 13/14 Gbps GDDR5X.

Wait, what? Wouldn't you need like 12GB of GDDR5X at 11-12 Gbps to get near 400 GB/s of bandwidth? And isn't a 12GB config restricted to a 352-bit bus?

I don't think we will see a 16GB 1170/1180. At those price points, aren't they kind of stuck with 8GB and a 256-bit bus?
 

llien

Member
Hard to disagree with dr_rus on timings, if 11xx does indeed use GDDR5x.
It seems like too soon only because underfunded underdog AMD is struggling to roll out high-end cards (and even that is likely because they bet on HBM2).

Maybe if those cards were the 4GB variant and coming in under $400? That would be a big deal if that were the case.

Rumors from a while back point to there being a somewhat budget Vega card close to the 1070 in terms of performance. So if those were the lower-end versions with only 4GB of memory, running together in CrossFire at 4K 60fps or well above, then that is impressive, as it's a little over 1080 Ti price for two cards running together. But all this is conjecture at this point without solid evidence to go by.

I'm pretty sure that if those were lesser Vegas, they'd mention it. (Raja would, in any case.)
Also note the "not optimized for Prey" driver (heh).
 

ZOONAMI

Junior Member
Maybe if those cards were the 4GB variant and coming in under $400? That would be a big deal if that were the case.

Rumors from a while back point to there being a somewhat budget Vega card close to the 1070 in terms of performance. So if those were the lower-end versions with only 4GB of memory, running together in CrossFire at 4K 60fps or well above, then that is impressive, as it's a little over 1080 Ti price for two cards running together. But all this is conjecture at this point without solid evidence to go by.

That isn't impressive at all, as two 8GB 580s deliver about the same performance for cheaper.

AMD needs a performance-crown card that beats a 1080 Ti in at least some situations. Running two 480s/580s has been a thing for a while now if you want to beat a 1080 Ti and save a bit of money, but with 1070 prices down so much it makes more sense to grab two of those. Either way, AMD is severely lacking in the enthusiast department, as running CF or SLI really doesn't make any sense given the existence of the 1080 and 1080 Ti. Honestly, the 1070/1080 is a pretty insane value, especially with a bit of OC. It makes zero sense to go CF or SLI given Nvidia's dominant cards right now.

My 1070 at 2000+/9200 is basically a stock 1080 with just slightly less bandwidth; you really can't beat that for $400ish. And sometimes you can grab a 1080 for under $500. AMD has nothing right now given NV pricing. The 580 is a good budget card, but it makes more sense to get a 1070 for $350ish, imo.
 

PFD

Member
Okay.


Sure:
- 480->580 in 8 months
- 680->780 in 14 months

But then, 12 months from the 1080 launch is right now, so the proper question is: what's the biggest typical period between NV gen updates? If we take the longest periods, we're looking at:
- 980->1080 in 20 months (not their decision but a result of TSMC's failure with 20nm)
- 780->980 in 16 months

The second one is important because there are many similarities between the Kepler-to-Maxwell and the upcoming Pascal-to-Volta switches.

So where does that put Volta? Obviously we've already passed the minimal 12-month mark, so it will be more. May '16 + 16 months is Sep '17. May '16 + 20 months is Jan '18.

Those who expect an x80 Volta next year assume (for some reason) that Volta will be as late relative to Pascal as Pascal was relative to Maxwell (or even later) - but Pascal was late because of the production process failure at TSMC, and nobody expects Volta to wait for 7nm (which is likely 1H19 for GPU production). That's why I really don't get people who expect Volta next year. It will most likely launch this autumn, starting with GV104 cards which will replace the 1080 and 1080 Ti, as usual.

Interesting timeline comparison. I wonder where the Titan fits in all this.

I'm considering upgrading from my 980 Ti to the Volta Titan; it would be such a massive jump.
 
MT9WO57.png


Not sure why 60 is something to boast about.

Most likely he meant "above 60", as even a Fury X does close to 60 at 4K.

Because otherwise, 2x Vegas doing "about 60" could only mean one of two things:

1) A single Vega isn't much faster than a Fury X and Crossfire isn't working (which would make the whole demo pointless)

or

2) A single Vega is only slightly faster than a 480, which would be impossible unless it's running at like... 800MHz.
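That dichotomy can be put in rough numbers. Assuming a typical ~80% two-way CrossFire scaling figure (my assumption, not anything AMD stated), "about 60" from two cards implies a surprisingly low single-card rate, while 0% scaling (CF not engaging) puts a single card right at Fury X territory:

```python
def implied_single_fps(cf_fps: float, scaling: float) -> float:
    """Two-way CrossFire delivers roughly single_fps * (1 + scaling),
    so the implied single-card rate is cf_fps / (1 + scaling)."""
    return cf_fps / (1 + scaling)

# "About 60" from two cards, under two assumed scaling scenarios:
print(round(implied_single_fps(60, 0.8), 1))  # 33.3 -> barely above 480 class
print(round(implied_single_fps(60, 0.0), 1))  # 60.0 -> CF not working at all
```

Either way the number reads badly, which is the point of the two-option breakdown above.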
 
Wait, what? Wouldn't you need like 12GB of GDDR5X at 11-12 Gbps to get near 400 GB/s of bandwidth? And isn't a 12GB config restricted to a 352-bit bus?

I don't think we will see a 16GB 1170/1180. At those price points, aren't they kind of stuck with 8GB and a 256-bit bus?

If they went with the 12 Gbps chips, it should be 384 GB/s with an 8GB / 256-bit bus.
 

Papacheeks

Banned
Hard to disagree with dr_rus on timings, if 11xx does indeed use GDDR5x.
It seems like too soon only because underfunded underdog AMD is struggling to roll out high-end cards (and even that is likely because they bet on HBM2).



I'm pretty sure that if those were lesser Vegas, they'd mention it. (Raja would, in any case.)
Also note the "not optimized for Prey" driver (heh).

I doubt he would either way, as I think they have been having production issues with the gaming version of Vega. Or they are trying to work with partners to get their drivers running well.

I think it's the latter, since they made a point of showing a game from a publisher they directly partnered with.
 

dr_rus

Member
Interesting timeline comparison. I wonder where the Titan fits in all this.

I'm considering upgrading from my 980 Ti to the Volta Titan; it would be such a massive jump.

Titan Volta likely fits into early 2018, as per the Hynix 384-bit GDDR6 video card tidbit. This also kinda hints at non-Titan Volta parts launching earlier than early 2018, btw.

MT9WO57.png


Not sure why 60 is something to boast about.

Why boast about Threadripper's IO capabilities for a dual card setup? x16x2 is only 32 lanes - hardly something which only Threadripper provides at the moment.

This whole demonstration was bad. They kinda missed the moment when their hype attempts for Vega started to look weak and actually damage Vega's image. Hype can take you only so far without actual products or at least some solid benchmark results.

I also like how he just plainly admits that they are using software they heavily optimized themselves to demonstrate Vega's capabilities. They should just stop saying anything until the cards hit the market at this point.
 

llien

Member
This whole demonstration was bad. They kinda missed the moment when their hype attempts for Vega started to look weak and actually damage Vega's image. Hype can take you only so far without actual products or at least some solid benchmark results.

I also like how he just plainly admits that they are using software they heavily optimized themselves to demonstrate Vega's capabilities. They should just stop saying anything until the cards hit the market at this point.


Well, except Prey runs better on nvidia cards.
Agreed on ThreadRipper.

Can't make any sense of either this demo or Raja's comment...
 

ethomaz

Banned
Wait what? Wouldn't you need like 12gb of gddr5x at 11-12gbps to get near 400 bandwidth? And isnt a 12gb config restricted to a 352 bus?

I don't think we will see a 16gb 1170/1180. At those price points aren't they kind of stuck with 8gb and 256 bit bus?
You only need 12GB of RAM for 384 bits... 256 bits allows 8GB of RAM.

The speed is based on the GDDR5X frequency and bus... 12 Gbps at 256 bits = 384 GB/s... 12 Gbps at 384 bits = 576 GB/s. The fastest GDDR5X on the market is 12 Gbps, but a 13 or 14 Gbps version by late this year is not out of the question.

The non-Ti GeForces use 256 bits... so 8GB with 384 GB/s (12 Gbps GDDR5X).

BTW, the amount of memory (4GB, 8GB, 12GB, 16GB) doesn't affect bandwidth... bandwidth is bus × speed / 8.
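That relation (bandwidth = bus width × per-pin rate / 8) is easy to sketch. The first two configurations are the ones from the post; the third is the 1080 Ti's shipping 352-bit / 11 Gbps setup for comparison:

```python
def peak_bandwidth_gb_s(bus_bits: int, rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s:
    bus width (bits) * per-pin data rate (Gb/s) / 8 bits per byte."""
    return bus_bits * rate_gbps / 8

print(peak_bandwidth_gb_s(256, 12))  # 384.0 GB/s (12 Gbps GDDR5X, 256-bit)
print(peak_bandwidth_gb_s(384, 12))  # 576.0 GB/s (12 Gbps GDDR5X, 384-bit)
print(peak_bandwidth_gb_s(352, 11))  # 484.0 GB/s (GTX 1080 Ti's actual config)
```

Capacity and bandwidth are indeed independent here: the chip count per channel sets capacity, while only bus width and per-pin rate enter the bandwidth product.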
 
What's going to be interesting is how the Frontier Edition will perform in games compared to the eventual RX Vega. There are no doubt going to be some game benchmarks for the FE, but it might end up looking like a disappointment at first if the drivers aren't ready, and the gaming version should have higher clocks anyway.

The Prey demo certainly didn't make a whole lot of sense, other than showcasing how AMD will finally have a premium option for both CPU and GPU. I highly doubt ThreadRipper will be useful for gaming, even with CrossFire. Maybe if they had showcased working quad-CF...
 

Durante

Member
Yeah, complex, so what. Is it complex to the point of "Intel's GPUs are underwhelming because Intel doesn't know how to write drivers"? (Raja's point) I doubt it, to be honest.
It's so complex that it's a contributing reason why Intel gave up on creating a high-end GPU with a revolutionary architecture (read up on Larrabee if you are unfamiliar with it).

Seriously, when I say that modern high-performance PC graphics drivers (which need to support everything from OpenGL 1 and DX8 to Vulkan and DX12) are some of the most complex software projects on the planet, what I mean is that they are some of the most complex software projects on the planet.
 
What's going to be interesting is how the Frontier Edition will perform in games compared to the eventual RX Vega. There are no doubt going to be some game benchmarks for the FE, but it might end up looking like a disappointment at first if the drivers aren't ready, and the gaming version should have higher clocks anyway.

The Prey demo certainly didn't make a whole lot of sense, other than showcasing how AMD will finally have a premium option for both CPU and GPU. I highly doubt ThreadRipper will be useful for gaming, even with CrossFire. Maybe if they had showcased working quad-CF...

Subjecting oneself to quad CF to play games sounds like a form of torture. Yes, I do want to burn myself to death in an oven fueled by my own wasted money and shattered dreams of actual CF or SLI support. How did you know?!
 
Subjecting oneself to quad CF to play games sounds like a form of torture. Yes, I do want to burn myself to death in an oven fueled by my own wasted money and shattered dreams of actual CF or SLI support. How did you know?!

That's why I used the word working. Of course it's a purely hypothetical scenario that would showcase the benefit of the extra PCIe lanes. Quad CF/SLI never was reasonable, but people still did it for bragging rights and benchmark scores. It's actually surprising AMD and Nvidia seem to have given up on that.
 
D

Deleted member 465307

Unconfirmed Member
So...Vega news in July? I wonder how much time there will be between Vega's consumer launch and Volta's consumer launch or even just Volta consumer launch rumors. Vega is going to have to be incredible if it's aiming for the enthusiast market and sandwiched by Pascal and Volta.
 
He confirmed in a tweet that he meant above 60. The second part, "comfortably above *any* single GPU", is more interesting.

So is he implying above 1080ti then?

I thought that he meant that dual Vega is comfortably above any single card. Which is a "well, duh" statement.

Previously (on previous threads) I posted that I expected Vega to land between the 1080 and the 1080Ti, and I stand by this prediction.
 

dr_rus

Member
Well, except Prey runs better on nvidia cards.
Agreed on ThreadRipper.

Can't make any sense of either this demo or Raja's comment...

It's a DX11 game which hardly uses more than 8 CPU threads and is likely limited by single-core CPU performance when it's CPU limited. So it's no wonder that it runs somewhat better on NV cards at the moment. Hopefully they'll add something genuinely useful with their Vega optimizations, like a Vulkan renderer, not just (de)optimize the DX11 renderer so that their cards are faster in it.
 