
AMD's next desktop graphics card talk (rumored 380x)

The 980/970 are really mid-range cards, the kind that would have been sold as mid-range in the old days.

These days, $329/$549 is treated as such a big deal, given that GPU process manufacturing is getting more difficult.

I'm aware that they are not the full potential of Maxwell, and that Nvidia "could" have brought out a beastly high-end card if it had wanted to. What's more, I would be surprised if AMD went balls to the wall by delivering a card 6 months later that is 40% more powerful; that seems at odds with recent trends, where card makers would rather provide somewhat more incremental upgrades in raw performance. I guess if AMD were feeling extremely threatened in the GPU space it might make sense, but the GPU side of their business is quite healthy compared to the CPU side; I would have expected them to seek very high margins rather than go for the throat.
 
AMD essentially leveled the playing field with the R9 290 series cards, but the 970 and 980 killed the 290s on price-to-performance; AMD went from being in a good place to a bad one very quickly. Their only real opening is the fact that the 980 isn't nVidia's "big" card: with the Titan pretty much dead for gaming, the ultra-high-end slot is open.

A new, more efficient iteration of GCN on a likely much bigger chip, with HBM and water cooling, sounds like it would fit that slot quite nicely right now.
 
If I have to upgrade my PSU, I've got to factor that into the cost of the GPU upgrade. It's also a bit of a pain.
It's not only the extra cost of a new, stronger PSU. You also have to factor in another $40-50 for a quality aftermarket heatsink. The default heatsink will probably be "just barely enough" for your card not to burst into flames. You also have to factor in the noise.

With low-consumption cards you can at least keep the default heatsink, and only change it if you want to keep your card below 50°C at all times and plan to use it for 6+ years (like I do).
 
You don't ever have to buy a new heat sink unless you really want to. Your card will not suddenly burst into flames if you keep the stock cooler/heatsink.

The thermal design of new cards means that even if one runs at 80°C (or even 90°C in a lot of cases) while gaming, you are well within its limits.

So yes, if you are looking at a card with a higher TDP, all you need to do is replace the PSU if your current one is not sufficient, assuming you are OK with any noise the new card puts out.

The things I read in this place sometimes...
 
That's 5GB+ of total RAM in the consoles, though. They don't have separate system memory like a PC, where games use both.

I agree that 4GB isn't going to be good enough for >1080p, but it's not fair to compare console and PC memory because they aren't the same.

You're forgetting that the CPU also uses RAM from the same memory pool. I highly doubt that we'll see games running on the X1/PS4 that use 5GB or more for VRAM, not with the current system reserve at least.

I am not forgetting anything. Right now, the X1 has 5GB allocated for the game and 3GB for the OS; the PS4 has a slightly more complicated but similar setup. It is expected that the OS reserve will shrink to 2GB, leaving 6GB for the game.

Out of that, 90% will go to the visuals, and only 10% will go to game logic and other things that could be handled by slower RAM. Code just doesn't take much space in memory compared to textures. So, out of today's 5GB, you have 4.5GB worth of graphics data, and 5.5GB by the generation's end. At 1080p. This means you need a GPU with at least 5.5GB of VRAM.

You will need more for 1440p+. 8GB minimum would be my guess.
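If you want to see the arithmetic behind that, here's a quick sketch (the 90/10 split and the shrinking OS reserve are my assumptions, not measured figures):

```python
# Back-of-the-envelope VRAM estimate. The 90/10 graphics/logic split
# and the 2GB OS reserve are assumptions, not measured figures.
def estimated_vram_gb(game_ram_gb, graphics_share=0.90):
    """Rough share of the console's game allocation spent on graphics data."""
    return game_ram_gb * graphics_share

print(estimated_vram_gb(5.0))  # today: 5GB for the game -> 4.5GB of graphics
print(estimated_vram_gb(6.0))  # OS shrinks to 2GB -> 5.4GB (rounded up to 5.5 above)
```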

The console GPUs are what they are, and there's no reason to think they could even handle 5GB worth of textures and geometry with acceptable performance anyway.

Why is there no reason to think that? You seem to think that all the textures are being rendered on screen at once, but that's not the case. Loading more textures into memory for the next room allows for reduced or eliminated loading times. Check out Bloodborne vs. Dark Souls 2 for an example of that.
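To be clear about what I mean, here is a toy sketch (the names are hypothetical, not any real engine's API): the current room's textures stay resident while the neighbouring rooms' textures are prefetched, so the transition needs no load screen.

```python
# Toy texture-streaming sketch; hypothetical names, not a real engine API.
# Keep the current room resident, prefetch its neighbours ahead of time.
resident = {}  # room name -> loaded texture set

def load_textures(room):
    return f"textures:{room}"  # stand-in for an actual disk read

def enter_room(room, neighbours):
    if room not in resident:            # only touch the disk on a miss
        resident[room] = load_textures(room)
    for n in neighbours:                # prefetch while the player explores
        resident.setdefault(n, load_textures(n))

enter_room("cathedral_ward", ["central_yharnam", "old_yharnam"])
enter_room("central_yharnam", ["cathedral_ward"])  # already resident, no load
```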

Seriously, 5GB just for the GPU? Not all games are the next Quantic Dream-style games.

Not all, but some, and those tend to be the most ambitious ones, so that's not a good thing!
 
Out of that, 90% will go to the visuals, and only 10% will go to game logic and other things that could be handled by slower RAM. Code just doesn't take much space in memory compared to textures. So, out of today's 5GB, you have 4.5GB worth of graphics data, and 5.5GB by the generation's end. At 1080p. This means you need a GPU with at least 5.5GB of VRAM.

You will need more for 1440p+. 8GB minimum would be my guess.

90% of RAM for graphics? Please source this. 3GB cards have been matching or exceeding console settings consistently, even on textures. Even 2GB cards like the 750 Ti are trading blows with the PS4 in many games.
 
I am not forgetting anything. Right now, the X1 has 5GB allocated for the game and 3GB for the OS; the PS4 has a slightly more complicated but similar setup. It is expected that the OS reserve will shrink to 2GB, leaving 6GB for the game.

Out of that, 90% will go to the visuals, and only 10% will go to game logic and other things that could be handled by slower RAM. Code just doesn't take much space in memory compared to textures. So, out of today's 5GB, you have 4.5GB worth of graphics data, and 5.5GB by the generation's end. At 1080p. This means you need a GPU with at least 5.5GB of VRAM.

You will need more for 1440p+. 8GB minimum would be my guess.



Why is there no reason to think that? You seem to think that all the textures are being rendered on screen at once, but that's not the case. Loading more textures into memory for the next room allows for reduced or eliminated loading times. Check out Bloodborne vs. Dark Souls 2 for an example of that.



Not all, but some, and those tend to be the most ambitious ones, so that's not a good thing!

This is an impressive amount of talking out of your ass.
 
So if the 380X is going to be AMD's flagship card, as the rumors seem to suggest, I wonder what the role of the 390X would be. Where does that leave it?
 
Will probably be the dual GPU solution.

I don't think AMD is going to go bigger than this on 28nm. It's already a huge chip, and a little faster than what I expect GM200 will be.

This is probably going to be AMD's flagship until next year and 16/14nm.
 
That's what I was thinking. At 28nm, and even 20nm, 4096 SPs will be huge. I don't see how the 390X could improve on this unless it's a dual GPU.
 
Maybe, but I really doubt that's the case. It's not the first time we've seen AMD hardware performing worse than Nvidia's; what else could it be aside from drivers?


I disagree. We do have reliable data.

Nvidia drivers are far superior.
Those benchmarks literally could not be any worse for assessing crossfire and SLI performance.

*edit*

Whoops, thought that was last page, turns out it was on the first.
 
That's what I was thinking. At 28nm, and even 20nm, 4096 SPs will be huge. I don't see how the 390X could improve on this unless it's a dual GPU.

IMO they will be dual GPU if the 380 is the new flagship. It brings their product naming scheme in line with Nvidia's product lines, which have the X80 cards (680, 780, 980) as flagships and X90 for dual GPU (590, 690).
 
From a marketing perspective that makes a lot of sense, and it seems like this is going to be the case. I wonder if Nvidia will stick to numbers for their next-gen GPUs. I think 1070/1080 is too much, but then again there was the 9800 series... that is, until AIB partners start complaining again.
 
I am not forgetting anything. Right now, the X1 has 5GB allocated for the game and 3GB for the OS; the PS4 has a slightly more complicated but similar setup. It is expected that the OS reserve will shrink to 2GB, leaving 6GB for the game.

Out of that, 90% will go to the visuals, and only 10% will go to game logic and other things that could be handled by slower RAM. Code just doesn't take much space in memory compared to textures. So, out of today's 5GB, you have 4.5GB worth of graphics data, and 5.5GB by the generation's end. At 1080p. This means you need a GPU with at least 5.5GB of VRAM.
I love it when people start to talk authoritatively and then reveal just how little they know in one sentence.

Seriously, code is almost completely irrelevant size-wise. It's also far from the only non-graphics thing stored in memory during the execution of a program.
 
From a marketing perspective that makes a lot of sense, and it seems like this is going to be the case. I wonder if Nvidia will stick to numbers for their next-gen GPUs. I think 1070/1080 is too much, but then again there was the 9800 series... that is, until AIB partners start complaining again.


IMO they should have abandoned these back-and-forth naming schemes aeons ago. It's clear that any scheme that only allows 10 generations max, and sometimes fewer (because they start at, like, 200 or something), is unsustainable when these companies are in business for decades. They need to make the scheme meaningful and sustainable for the long term.

Something like

(Brand) (Year of product range launch)-(class)(modifier)

So maybe

GeForce 14-80M

instead of GeForce 980M
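A tiny sketch of how that scheme would compose (purely illustrative):

```python
# Purely illustrative: compose a name from the proposed
# (Brand) (Year of product range launch)-(class)(modifier) scheme.
def model_name(brand, launch_year, tier, modifier=""):
    return f"{brand} {launch_year % 100}-{tier}{modifier}"

print(model_name("GeForce", 2014, 80, "M"))  # GeForce 14-80M
print(model_name("GeForce", 2016, 70))       # GeForce 16-70
```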
 
Yeah, but if they made it simple, then people like myself wouldn't feel nearly as good about ourselves for being able to navigate the maze that is the current naming scheme and offer advice based on that.
 
What they really should do is keep the X2 moniker for the dual-GPU card.

It's descriptive, rolls off the tongue, and extends to any GPU in their lineup. You could have a 370X2, a 380X2, and so on. But I'm not expecting good naming schemes at this point. It seems like a real sore point for the industry; nobody can give anything a good name.

Maybe they want to move away from suffixes because they cloud the shopping experience too much. That could be why they're dropping the "X2".

I like your idea too.
 
I missed this earlier:

It's not only the extra cost of a new, stronger PSU. You also have to factor in another $40-50 for a quality aftermarket heatsink. The default heatsink will probably be "just barely enough" for your card not to burst into flames. You also have to factor in the noise.

With low-consumption cards you can at least keep the default heatsink, and only change it if you want to keep your card below 50°C at all times and plan to use it for 6+ years (like I do).
If you aren't familiar with the current trends in GPUs, most come with pretty beastly non-reference coolers now.
 
Those benchmarks literally could not be any worse for assessing crossfire and SLI performance.

I was not interested in multi-GPU benches. The single-GPU ones definitely put Nvidia's DX drivers above AMD's by a significant margin. Will that change in the future? Well, crazier things have happened.

In the meantime, drivers alone will not be sufficient to narrow the hardware gap in some departments. Ryse is an interesting example, as it highlights Nvidia's deficiencies when it comes to compute; class-leading drivers won't be of much help there.
 
90% of RAM for graphics? Please source this. 3GB cards have been matching or exceeding console settings consistently, even on textures. Even 2GB cards like the 750 Ti are trading blows with the PS4 in many games.

Guys... I didn't mean 90% as a precise, unwavering number.

1- The percentage will obviously vary from game to game
2- We don't typically have access to those numbers so their representativeness is unknown.
3- We are still at the beginning of the gen, so even if we had them, they would not represent games that come out mid-late gen.

This is an impressive amount of talking out of your ass.


Well, thank you, DieHard. Looking at those numbers, and taking into account the caveats I mentioned earlier, I can see that video memory takes about 67% of 4.5GB, or 3.1GB.

Technically, if you add the sound files and the Havok physics data (about 900MB) to the actual VRAM, using your PC's AMD GPU hardware to process the audio and physics as the PS4 is built to do, you end up with almost 4GB in VRAM, which is about 86% of the total.

553MB (sound) + 350MB (physics) = 903MB
Video RAM = 3072MB
VRAM incl. sound + physics = 3975MB
Non-VRAM = 633MB = 14%

Granted, I'm stacking the chips in my favor here; the real number is probably somewhere between 70% and 80%.
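For anyone who wants to check my numbers, here is the same breakdown as a quick script (same figures as above, in MB):

```python
# Sanity check of the breakdown above (all figures in MB).
game_total = 4.5 * 1024   # the game's 4.5GB allocation
video      = 3072         # graphics data
sound      = 553
physics    = 350          # Havok physics working set

gpu_side = video + sound + physics         # if audio/physics run on the GPU
print(round(100 * video / game_total))     # ~67% for graphics alone
print(round(100 * gpu_side / game_total))  # ~86% with sound + physics
print(round(game_total - gpu_side))        # ~633MB left over (~14%)
```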

I love it when people start to talk authoritatively and then reveal just how little they know in one sentence.

Seriously, code is almost completely irrelevant size-wise. It's also far from the only non-graphics thing stored in memory during the execution of a program.

Et tu, Durante?

My 90% figure was a bit high, I will readily admit, but I am deliberately targeting worst-case scenarios, not average ones.

I'm disappointed that no one in this thread (except DieHard) has countered with their own educated estimates and recommendations. Let's hear 'em! :)
 
Then why did you post benches that had information on dual GPU solutions?

Regardless, avg/min FPS is still an absolute travesty of a benchmark for looking closely at driver performance.
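To illustrate why (made-up frame times, not from any review): two runs can share the same average FPS while one of them stutters badly, and only the frame-time percentiles show it.

```python
# Made-up frame times in ms, not from any review. Both runs average
# ~60 FPS; only the 99th-percentile frame time exposes the stutter.
def percentile(samples, p):
    ordered = sorted(samples)
    return ordered[int(p / 100 * (len(ordered) - 1))]

smooth  = [16.7] * 100                 # steady ~60 FPS
stutter = [14.0] * 90 + [41.0] * 10    # same average, periodic spikes

for run in (smooth, stutter):
    avg_fps = 1000 / (sum(run) / len(run))
    print(round(avg_fps), "FPS avg,", percentile(run, 99), "ms 99th percentile")
```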
 
My point was to highlight how much better Nvidia's DX driver is in CPU-bound scenarios. However, the new "Omega" drivers warrant new benchmarks.

Fair point, but you can't outright dismiss that data. It does mean something.

It's going to be interesting to see how things evolve in the DX12 era: less dependency on drivers means Nvidia's driver supremacy will matter a lot less.
I trust them to trounce AMD once more though.
 
So you use a single game with dual GPUs that are notoriously inconsistent, with the NVIDIA dual GPU solution being a consistently better performer, all things being equal?

I'm not sure I follow.

Games which are CPU bound are just that, CPU bound. You're not going to see much of a difference between multiple cards.

Here's a game that is known to be CPU bound, as it is on UE3:

[Chart: Borderlands 2, 99th-percentile frame times]
 
Alien: Isolation is also very CPU-bound, and Nvidia's drivers apparently best AMD's:
[Chart: Alien: Isolation, 99th-percentile frame times]

http://techreport.com/review/27702/nvidia-geforce-gtx-960-graphics-card-reviewed/9

Same goes for Beyond Earth:
[Chart: Civilization: Beyond Earth, 99th-percentile frame times]

I'm surprised to see Nvidia doing so well in this game considering it's part of AMD's Gaming Evolved program.

[Chart: 99th-percentile frame times]

Another very interesting benchmark.

But regardless, I'm curious about AMD's next offerings. I don't know the Nvidia vs. AMD GPU market share numbers, but I get the feeling it must be somewhat even-steven.
 
See, now that's much better data to support your point. That's all I was after.

*edit* wait a second. Those cards are all slotting in the appropriate places in terms of performance. How are you chalking that up to drivers?

The 970 is *supposed* to outperform the 290. Its direct competitor is the 290X, which is nowhere to be found on those posted benches.
 
And you were correct about averages not telling the full story. Unfortunately, frame times are harder to find than raw averages.
I wish Gamegpu.ru were more "rigorous" in their testing; I understand it's an exhaustive exercise, but that's the most relevant data we can have.
 
Slow down there, slick. I never said I was going to buy the card. Maybe you need to justify your purchases, but I don't. If I want something, I can buy it.

Don't you find it extremely selfish to want to limit the potential of graphics cards just because you don't want to upgrade from a 550W PSU? You are willing to make people pay a premium for less powerful cards because, apparently, upgrading your shitty PSU is too expensive.

Let me put it this way, I am glad AMD isn't asking for your opinion on the subject.

Yeah, it's sad; people actually just want to be told they have a high-end GPU even when they don't, because it helps them rationalise overpaying so much for it.
 
I love how these barren humans on FB ask AMD the question and tie in a threat to buy their rival's product.

It's understandable. AMD has nothing that really competes with the 970/980 right now. People want info!
 
I'm a bit worried now... in the latest rumors, only the 390X will have HBM and come with liquid cooling, while the other cards will be refreshes of Hawaii.

Kinda regretting my LAN-size case now.

#teamredconcern
 
Games which are CPU bound are just that, CPU bound. You're not going to see much of a difference between multiple cards.
That's not really true. If one driver is more effective in its use of CPU cycles than another, then that driver will perform better in CPU-limited scenarios. That's definitely what you see in those Alien: Isolation benchmarks (which are very clearly CPU limited as confirmed by their CPU scaling).
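As a toy model of that (illustrative numbers only, not measurements): per-frame cost is roughly the maximum of CPU-side work, including driver overhead, and GPU-side work, so a leaner driver is invisible when GPU-bound and decisive when CPU-bound.

```python
# Toy model, illustrative numbers only: frame time is roughly
# max(game CPU work + driver CPU overhead, GPU work).
def fps(game_cpu_ms, driver_ms, gpu_ms):
    return 1000 / max(game_cpu_ms + driver_ms, gpu_ms)

# GPU-bound: a leaner driver changes nothing (40 vs 40 FPS).
print(round(fps(8, 2, 25)), round(fps(8, 4, 25)))
# CPU-bound: the leaner driver wins outright (~45 vs ~42 FPS).
print(round(fps(20, 2, 10)), round(fps(20, 4, 10)))
```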

It's just that sites other than GameGPU never do actual CPU-limited testing. And sadly, even they very rarely do it for both NV and AMD. I just tried searching for another article where they did, but couldn't find one :/

Someone should do an article dedicated to this.

Edit: finally found another one!
[Charts: Dead Rising 3 CPU scaling tests, AMD and Nvidia (gamegpu.ru)]


Edit2: and one more:

Clearly a CPU-limited scenario, clearly much better performance on NV.
 