
AMD Radeon 300 series (possible) specs

wachie

Member
290 series cards don't have the bandwidth compression stuff from Tonga, right? Meanwhile, Maxwell 200-series (GM2xx) cards have similar bandwidth compression tech. This leads to the situation where, at lower resolutions where bandwidth matters less, that 50% bandwidth advantage is less pronounced and gets negated by the compression tech advantage (something equating to roughly 35% more "bandwidth", I believe).

Maxwell faring so well with less bandwidth has a lot to do with that compression tech; even so, the 290 series fares better and scales better at higher resolutions in spite of the bandwidth compression on Maxwell.

Now imagine a card with much higher shading power, the bandwidth compression tech from Tonga (i.e. Maxwell would no longer be the only one holding that trump card for its bandwidth deficiencies), as well as HBM with a huge 1024-bit interface. Maxwell could quite conceivably be left in the proverbial dust.
To be frank, I'm not too impressed by Tonga's compression algorithms. Either the card is really unbalanced or something else is up.
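For a rough sense of what that ~35% figure would actually buy Maxwell, here's a quick back-of-the-envelope comparison. The raw numbers are the published 290X/980 specs; the 35% gain is just the figure quoted above, not a measured constant, so treat this as a sketch:

# Rough effective-bandwidth comparison (illustrative only). Raw figures
# are published specs; the 35% gain is the number quoted above.
R9_290X_RAW_GBPS = 320.0   # 512-bit bus x 5 Gbps GDDR5
GTX_980_RAW_GBPS = 224.0   # 256-bit bus x 7 Gbps GDDR5
COMPRESSION_GAIN = 0.35    # assumed average benefit of delta color compression

effective_980 = GTX_980_RAW_GBPS * (1 + COMPRESSION_GAIN)
print(f"290X raw bandwidth:      {R9_290X_RAW_GBPS:.0f} GB/s")
print(f"980 raw bandwidth:       {GTX_980_RAW_GBPS:.0f} GB/s")
print(f"980 effective bandwidth: {effective_980:.0f} GB/s")  # ~302 GB/s, close to the 290X

At 1080p that gap barely matters; at 4K the 290X's real bandwidth headroom starts to show, which matches the scaling people see in benchmarks.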
 
Lower power and not eating up half the board space means we might actually see higher end cards in smaller form factors as well. New generation of HTPCs that can actually be good at gaming incoming!
 

Seanspeed

Banned
So you're really thinking about buying a 290X 8GB then?

http://www.sapphiretech.com/presentation/product/?cid=1&gid=3&sgid=1227&pid=2394&psn=&lid=1&leg=0

Because your key theoretical point rests on having a much faster 4GB card performing worse than your effectively 3.5GB card at the higher resolutions?
I never said anything about performing worse. Just that its advantage isn't that big of an advantage if its best use cases are situations where its limited VRAM becomes an issue anyway.

For me, I have to wonder whether it's really worth getting a refund on my 970 now, using my old 670 for possibly up to six months or more, and then possibly paying even *more* for a 390 that will probably draw more power and not really give me the memory advantage I'm waiting for.

I plan on moving to a 1440p monitor at some point, and when the Oculus Rift CV1 releases I'll basically need something that can handle resolutions higher than 1440p (as it renders content at higher than display resolution). I don't want to keep upgrading every year, so I'm hoping to find something that will last me longer. And I'm worried a 390 isn't going to be the answer due to its limited RAM. But I also don't want to be using a GTX 670 for another whole year or more.

But yes, an 8GB 290X is a potential option, but maybe a bit too power hungry for my PSU.
 
That reminds me, does AMD's Shadowplay alternative (I always forget what it's called) support desktop capture now?

I hope no one on GAF is running 4 GPUs of any kind. I wouldn't wish that kind of latency, compatibility issues and frame pacing on anyone.

The Gaming Evolved app is Raptr; it supports DVR functionality now. I've never used it, but I'm sure it supports desktop capture.
 

Rafterman

Banned
In every single thread

At this point one has to wonder why those comments aren't a bannable offence, when they're mostly made by Nvidia fanboys still hung up on how the drivers were 10 years ago.

Because in many ways people are telling the truth. We have actual proof in the DX12 thread, you guys just don't like to see it. Why would you ban people for telling it like it is?

http://www.neogaf.com/forum/showthread.php?t=987116&highlight=dx12



Anyway, what's with the 4gb of ram? With all the talk of "future proofing" around here, you'd think there would be a little more concern with the fact that these cards won't last any longer than the last generation of cards. Yeah, they'll be faster, right up until you hit the 4gb mark and they choke just like every other card. I didn't upgrade to a 980 because it's only 4gb, I'm sure as hell not going to spend even more money to do it in the middle of 2015. Either this is a huge mistake on AMD's part or there will be more ram on these cards and we just don't know it.
 

Human_me

Member
Anyway, what's with the 4gb of ram? With all the talk of "future proofing" around here, you'd think there would be a little more concern with the fact that these cards won't last any longer than the last generation of cards. Yeah, they'll be faster, right up until you hit the 4gb mark and they choke just like every other card. I didn't upgrade to a 980 because it's only 4gb, I'm sure as hell not going to spend even more money to do it in the middle of 2015. Either this is a huge mistake on AMD's part or there will be more ram on these cards and we just don't know it.

Because as Serandur said:
There is one issue the 390X will have to deal with from HBM though. First-gen HBM tech limits the card's potential (HBM) VRAM size to 4GB. For something as theoretically powerful and well-equipped for high-resolution gaming as it is, that 4GB limitation might be particularly painful in contrast to "affordable" GM200's likely 6GB (or whatever portion of it cut-down GM200s will actually be able to access at full speed...).

I hope AMD implement secondary GDDR5 memory controllers for some additional VRAM or something, but it seems unlikely.

I do agree 4GB isn't enough.
Especially as the high end has so much bandwidth and GPU power that it would be foolish not to downsample in many games.
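For anyone wondering where that hard ceiling comes from, it really is just stacks times per-stack capacity. A quick sketch, assuming the commonly cited four-stack layout and the per-stack sizes attributed to each HBM generation:

# Where the 4GB ceiling comes from (illustrative; assumes the commonly
# cited 4-stack layout and per-stack capacities for each HBM generation).
STACKS = 4
GB_PER_STACK_HBM1 = 1   # first-gen HBM stack capacity
GB_PER_STACK_HBM2 = 8   # expected upper bound for second-gen stacks

print(f"HBM1: {STACKS} stacks x {GB_PER_STACK_HBM1} GB = {STACKS * GB_PER_STACK_HBM1} GB")
print(f"HBM2: {STACKS} stacks x {GB_PER_STACK_HBM2} GB = {STACKS * GB_PER_STACK_HBM2} GB")

More stacks would mean a bigger interposer, which is why nobody expects them to just bolt on extra capacity this generation.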
 

Paganmoon

Member
Because in many ways people are telling the truth. We have actual proof in the DX12 thread, you guys just don't like to see it. Why would you ban people for telling it like it is?

http://www.neogaf.com/forum/showthread.php?t=987116&highlight=dx12



Anyway, what's with the 4gb of ram? With all the talk of "future proofing" around here, you'd think there would be a little more concern with the fact that these cards won't last any longer than the last generation of cards. Yeah, they'll be faster, right up until you hit the 4gb mark and they choke just like every other card. I didn't upgrade to a 980 because it's only 4gb, I'm sure as hell not going to spend even more money to do it in the middle of 2015. Either this is a huge mistake on AMD's part or there will be more ram on these cards and we just don't know it.

Could you link to said proof? I browsed a few pages of that thread and all it's boiled down to at this point is X1 talk.
 

Rafterman

Banned
Could you link to said proof? I browsed a few pages of that thread and all it's boiled down to at this point is X1 talk.

The proof is in how poorly AMD drivers deal with CPU utilization in DX11 in those benchmarks. DX11 is five years old. This whole narrative that it's some internet meme and not reality is false. Hell, the Omega drivers, which were a significant improvement, weren't even released until the end of last year. How can people claim the drivers have been good since 2006, 2010, or whatever year they want to pull out of their behind, when it was literally three months ago that they significantly improved? And even as good as the Omega drivers are, these benchmarks were done after they were released.

This is a timely article, especially since we were just discussing here on GAF a few days ago how much worse AMD DX11 drivers are in terms of CPU utilization compared to NV DX11 drivers. There was no hard data out there, but - at least in my opinion - lots of circumstantial evidence pointing towards "a lot worse". Now it's rather rigorously confirmed.



Because as Serandur said:
I do agree 4GB isn't enough.
Especially as the high end has so much bandwidth and GPU power that it would be foolish not to downsample in many games.

Well, it's unfortunate that they're stuck with 4GB. All Nvidia would have to do is release 6-8GB cards to turn these into 970s all over again. Show a few benchmarks of games at 4K using more than 4GB of VRAM and these cards would be destroyed. HBM is sexy as hell, but they probably should refine it a bit more before using it on their high-end cards.
 
Where are these news sites getting this 640GB/s number? Hynix is currently only providing chips with a maximum of 128GB/s of bandwidth each, and four of those only add up to 512GB/s.
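Spelled out, assuming Hynix's published 128GB/s per first-gen HBM stack and a four-stack card, the arithmetic is:

# Sanity check on the headline bandwidth figures (assumes Hynix's
# published 128 GB/s per HBM1 stack and a four-stack configuration).
PER_STACK_GBPS = 128.0
STACKS = 4

print(f"{STACKS} x {PER_STACK_GBPS:.0f} GB/s = {STACKS * PER_STACK_GBPS:.0f} GB/s")  # 512 GB/s
print(f"640 GB/s would need {640 / STACKS:.0f} GB/s per stack")  # 160 GB/s, well above current parts

So 640GB/s would require either faster stacks than anything Hynix currently lists or more than four of them.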

I'd also expect these chips not to be that power efficient. Clock for clock, Nvidia GPUs currently use a lot less power. I doubt first-gen HBM will bring down the power usage that much, considering it is AMD at the helm. Like Mantle, it will cover up AMD's deficiencies until its competitors have their answer.
 

Fularu

Banned
Because in many ways people are telling the truth. We have actual proof in the DX12 thread, you guys just don't like to see it. Why would you ban people for telling it like it is?

http://www.neogaf.com/forum/showthread.php?t=987116&highlight=dx12

Because people are never talking about "poor" CPU optimization when they discuss AMD drivers; they're talking about graphical performance, graphical glitches, "missing features" (supersampling, PhysX, you name it) or game crashes.

Those (outside of PhysX which Nvidia just bought out anyway) are mostly a myth.
 

riflen

Member
Because in many ways people are telling the truth. We have actual proof in the DX12 thread, you guys just don't like to see it. Why would you ban people for telling it like it is?

http://www.neogaf.com/forum/showthread.php?t=987116&highlight=dx12



Anyway, what's with the 4gb of ram? With all the talk of "future proofing" around here, you'd think there would be a little more concern with the fact that these cards won't last any longer than the last generation of cards. Yeah, they'll be faster, right up until you hit the 4gb mark and they choke just like every other card. I didn't upgrade to a 980 because it's only 4gb, I'm sure as hell not going to spend even more money to do it in the middle of 2015. Either this is a huge mistake on AMD's part or there will be more ram on these cards and we just don't know it.

PC GPUs don't exist to provide any kind of proof of future performance. The entire premise is flawed. These devices are superseded at their price point 18-24 months following release, sometimes sooner. There is no future proofing. There is only 'what I am willing to spend'.
It's simply totally at odds with the realities of semiconductor development, and especially that of the GPU, whose performance scales almost linearly with transistor count.

In the case of the design in question, there has to be an HBM1 design so they can sell some units and make money to cover the R&D costs. There'll be an HBM2 design later on that can support higher densities. The HBM1 process was only very recently finalised, I think.
It's not a huge mistake, because the product will certainly outperform their existing design by a considerable margin, and not everyone owns a 290X or 970 already.

But yes, an 8GB 290X is a potential option, but maybe a bit too power hungry for my PSU.

I wouldn't bother at all. Hunt down some benchmarks and you'll find a 970 performs to within 5% of an 8GB 290X at 3840x2160 and the 970 is on a par or faster at lower resolutions.
 

Durante

Member
The Gaming Evolved app is Raptr; it supports DVR functionality now. I've never used it, but I'm sure it supports desktop capture.
It does, but in my experience it's not very good. The overlay causes some performance problems when recording.
Thanks for the answers. By "it does", do you mean it does support desktop shadow recording?

Because I play almost everything in borderless windowed ("desktop"), so anything else is almost useless to me. I have shadowplay running 24/7 on my 2560x1440 desktop and it works very well.
 

Kezen

Banned
I wonder where the gaffer "artist" has gone. I remember him being a staunch supporter of AMD in my lurking days.

Such news would probably bring a smile to his face.

AMD seems to be back with a vengeance and the GPU landscape will only benefit from that.
 
I wonder where the gaffer "artist" has gone. I remember him being a staunch supporter of AMD in my lurking days.

Such news would probably bring a smile to his face.

AMD seems to be back with a vengeance and the GPU landscape will only benefit from that.

Artist trolled too much for his own good. Pretty sure he's perma'd
 

Human_me

Member
I wonder where the gaffer "artist" has gone. I remember him being a staunch supporter of AMD in my lurking days.

Such news would probably bring a smile to his face.

AMD seems to be back with a vengeance and the GPU landscape will only benefit from that.

Banned.
His last post is hilarious.
 

Durante

Member
Artist trolled too much for his own good. Pretty sure he's perma'd
Makes sense. At some point I put him on my ignore list. Whenever I do that I usually see a few hidden posts here and there from those people, and then at some point they just stop.

Shame. Oh well wachie will take his place.
Hmmm, wachie is on my ignore list now... ;)
Though I'm considering removing him; from what I see when other people quote him, he doesn't seem as batshit crazy as artist was.
 

Zaptruder

Banned
Because people are never talking about "poor" CPU optimization when they discuss AMD drivers; they're talking about graphical performance, graphical glitches, "missing features" (supersampling, PhysX, you name it) or game crashes.

Those (outside of PhysX which Nvidia just bought out anyway) are mostly a myth.

And the fuckers don't even use the PhysX technology. It's like that time Creative bought out Aureal and just fucking killed spatial audio technology.
 

riflen

Member
Probably me but why does it say DDR when it's supposed to say GDDR?

It's not supposed to say GDDR. The new devices use an HBM interface to stacked DRAM modules, I believe. DDR refers to Double Data Rate, which just means the memory can transfer data twice per clock cycle.
GDDR5 is just another type of DRAM.
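If it helps, the back-of-the-napkin formula for peak bandwidth is just effective per-pin data rate times bus width. The specific numbers below are a generic GDDR5/HBM1 example I'm assuming for illustration, not confirmed specs for the new cards:

# Peak theoretical bandwidth = per-pin data rate x bus width / 8.
# The "double data rate" part is already folded into the effective
# per-pin rate quoted on spec sheets.
def peak_bandwidth_gb_s(data_rate_gbps_per_pin, bus_width_bits):
    """Return peak memory bandwidth in GB/s."""
    return data_rate_gbps_per_pin * bus_width_bits / 8

print(peak_bandwidth_gb_s(7.0, 256))    # GDDR5 at 7 Gbps, 256-bit bus        -> 224 GB/s
print(peak_bandwidth_gb_s(1.0, 4096))   # HBM1 at 1 Gbps, four 1024-bit stacks -> 512 GB/s

HBM gets its bandwidth from a very wide, relatively slow interface, which is also where the power savings come from.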
 

Scandal

Banned
390 or X it is. I want the new and modern things. My budget is $499 for the card though, otherwise I'm not interested. I'll wait for Pascal most likely if it's over $499 for the 390 and just utilize my PS4 during the wait. I want a card which delivers 60 FPS at 1440p.
 

Bytes

Member
anyone that values their time and patience knows AMD drivers are a joke

WhatYearIsIt.jpg

I would like to upgrade to a 390, but I currently have an HD 7970 with 6 GB of VRAM. I am not sure whether or not losing 2 GB of VRAM would negatively impact my performance, or if it even matters because games don't seem to use that much VRAM at this point.
 

Nachtmaer

Member
Only 4GB? That's so disappointing considering I recently bought a 4k monitor for 4k gaming.

ARMlQkU.png


This is the reason why. For now they're using the first generation of HBM, which allows 1GB stacks; generation two is rumored to be ready by next year for their 400 series. In theory they could add more stacks, but the problem would be the interposer (the thing connecting these RAM stacks to the GPU) becoming too big or too expensive to make. I assume more stacks also means a bigger and more complex GPU, because it still needs to communicate with the RAM.
 
I don't think it's explicitly stated anywhere that the number of stacks will be four. The slides merely give the per-stack capacity as the limit. Of course 4GB is probably still the most likely case.
 
ARMlQkU.png


This is the reason why. For now they're using the first generation of HBM, which allows 1GB stacks; generation two is rumored to be ready by next year for their 400 series. In theory they could add more stacks, but the problem would be the interposer (the thing connecting these RAM stacks to the GPU) becoming too big or too expensive to make. I assume more stacks also means a bigger and more complex GPU, because it still needs to communicate with the RAM.

If this is the case how come the PS4 got 8GB and why couldn't they use that technology for a graphics card? Also isn't there a 6GB graphics card already out by Nvidia?
 
The PS4 doesn't use HBM, and neither do the 6GB Titan or the 8GB R9 290 variants. They all use GDDR5; this is new technology, stacked DRAM.
 

mephixto

Banned
I wonder where the gaffer "artist" has gone. I remember him being a staunch supporter of AMD in my lurking days.

Such news would probably bring a smile to his face.

AMD seems to be back with a vengeance and the GPU landscape will only benefit from that.

I thought you were artist's second account. Same style of avatars, but with less trolling.
 

DieH@rd

Banned
If this is the case how come the PS4 got 8GB and why couldn't they use that technology for a graphics card? Also isn't there a 6GB graphics card already out by Nvidia?

The PS4 has a GCN GPU with a 256-bit memory interface, i.e. 32-bit communication with each of its 8 chip mounting points. What Sony did there was, instead of mounting 8 chips, add 2 on each mounting point [one above and one below the motherboard, taking advantage of a specific feature of GDDR5 whereby 2 chips can share one communication array] and mount 16 512MB chips [which we thought were not manufacturable in high quantities back then]. When Cerny announced 8GB, GAF freaked the fuck out, with good reason. :)

Nvidia achieved 6GB by having a 384-bit memory interface [12 mounting points (no double-siding used) x 512MB chips = 6GB]. The Radeon 290X has a 512-bit interface, which enables it to go up to 8GB of VRAM with no double-sided mounting either. edit - I'm not sure if they used 512MB chips; maybe they went with 256MB chips and double-sided arrays.

In new GPU series, AMD is moving away from GDDR5 and going straight to the new tech.
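If it helps to see the arithmetic spelled out, here's a rough sketch following the configurations described above. The chip sizes and bus widths are the ones mentioned in this post, so treat them as assumptions about the boards rather than confirmed layouts:

# GDDR5 capacity arithmetic: each chip hangs off a 32-bit slice of the bus,
# and clamshell (double-sided) mounting puts two chips on one slice,
# doubling capacity without widening the interface.
def gddr5_capacity_gb(bus_width_bits, chip_size_mb, clamshell=False):
    chips = (bus_width_bits // 32) * (2 if clamshell else 1)
    return chips * chip_size_mb / 1024

print(gddr5_capacity_gb(256, 512, clamshell=True))   # PS4: 16 x 512MB = 8.0 GB
print(gddr5_capacity_gb(384, 512))                   # 6GB Nvidia card: 12 x 512MB = 6.0 GB
print(gddr5_capacity_gb(512, 512))                   # 8GB 290X: 16 x 512MB = 8.0 GB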
 

Nachtmaer

Member
If this is the case how come the PS4 got 8GB and why couldn't they use that technology for a graphics card? Also isn't there a 6GB graphics card already out by Nvidia?

Graphics cards (and the PS4) use GDDR5, which has been around for quite a few years now. Putting it simply, the reason the PS4 and high-end graphics cards have more memory is that the actual chips have a higher density. The PS4 currently uses 16 512MB chips, so 8GB in total. Production of 1GB GDDR5 chips started recently, so the PS4 will probably get a revision that only has 8 of those to save costs.

HBM is just another memory standard that will slowly replace GDDR5, just like GDDR5 replaced GDDR3 in the past. The main reason is that it has a lot more bandwidth while lowering power consumption, space and price in the long run. HBM2 will increase the density and bandwidth even more in the future.
 

wachie

Member
I'd also expect these chips not to be that power efficient. Clock for clock, Nvidia GPUs currently use a lot less power. I doubt first-gen HBM will bring down the power usage that much, considering it is AMD at the helm. Like Mantle, it will cover up AMD's deficiencies until its competitors have their answer.
The last rumours were pointing to 290X levels of TDP or higher, and water cooling. However, TDP is never a concern for the enthusiast segment.
 

tuxfool

Banned
WhatYearIsIt.jpg

I would like to upgrade to a 390, but I currently have an HD 7970 with 6 GB of VRAM. I am not sure whether or not losing 2 GB of VRAM would negatively impact my performance, or if it even matters because games don't seem to use that much VRAM at this point.

Does the 7970 really benefit from all that memory? Those versions of the card solely exist because it was the previous flagship card, so they had to make allowances for those wanting to use high resolution displays.
 

tuxfool

Banned
The LinkedIn profile mentioned a 300W TDP. But as we all know, TDP is not actual power consumption.
A 290X at stock didn't consume 290W, and neither did some overclocked models:
http://www.techpowerup.com/reviews/Sapphire/R9_290X_Tri-X_OC/22.html

No it doesn't. And the Nvidia cards at peak consume more than their official specs (AMD and Nvidia also measure TDP differently). Notably, some of the non-reference cards manage to keep power down when overclocked because they also cool the GPU better. Running silicon at lower temperatures reduces its leakage power consumption.

The issue here is that many benchmarks do full-system power consumption measurements (such as AT). They're not entirely wrong for doing it, as Nvidia cards do have lower overheads on the rest of the system, even if they aren't as efficient as stated (or implied) on paper.
 

Kezen

Banned
No it doesn't. And the Nvidia cards at peak consume more than their official specs. Notably, some of the non-reference cards manage to keep power down when overclocked because they also cool the GPU better. Running silicon at lower temperatures reduces its leakage power consumption.

The issue here is that many benchmarks do full-system power consumption measurements (such as AT). They're not entirely wrong for doing it, as Nvidia cards do have lower overheads on the rest of the system, even if they aren't as efficient as stated on paper.

I don't really know how Nvidia calculates their TDPs, but it seems to vary from generation to generation. Maxwell cards still have impressive perf/watt, but their advertised TDPs are sometimes exceeded.
Like this custom 970 (MSI), which is 20W above standard 970 consumption:
http://www.guru3d.com/articles_pages/msi_geforce_gtx_970_gaming_review,7.html
 

tuxfool

Banned
I don't really know how Nvidia calculates their TDPs, but it seems to vary from generation to generation. Maxwell cards still have impressive perf/watt, but their advertised TDPs are sometimes exceeded.

Yeah, it is impressive, but people often quote the TDP figures as relative comparisons when they aren't really appropriate for that. You could only measure a realistic figure with full-system tests if you could isolate the AMD overheads.

You could only do real card-level measurements if you could tap into the PCIe and motherboard power rails (or measure before the graphics card's VRMs).
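For what it's worth, a minimal sketch of what that card-only measurement would look like, assuming you can log voltage and current on the slot and auxiliary rails (the rail names and sample readings below are purely hypothetical, e.g. from an instrumented riser plus shunts on the 6/8-pin cables):

# Card-only power from per-rail measurements (sketch; rail names and
# sample readings are hypothetical).
rails = {
    "pcie_slot_12v": (12.1, 4.8),    # (volts, amps)
    "pcie_slot_3v3": (3.3, 1.2),
    "aux_8pin_12v":  (12.0, 11.5),
    "aux_6pin_12v":  (12.0, 6.0),
}

card_power_w = sum(volts * amps for volts, amps in rails.values())
print(f"Card-only power draw: {card_power_w:.1f} W")   # ~272 W for these samples

Sites like TechPowerUp do essentially this, which is presumably why their card-only numbers differ from the full-system figures elsewhere.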
 