
Radeon RX 6900 XT and 6800/6800 XT final specs. 16 GB GDDR6. Up to ~21 TFLOPs (~24 TFLOPs boost)

Dampf

Member
12 GB of VRAM would mean a 192-bit memory bus and lower bandwidth, since there are no memory modules available in sizes that would give 12 GB of VRAM on a 256-bit bus.
Oh, that's right, I hadn't thought about that.

Oh well I hope there will be versions with lower VRAM though. AMD needs to destroy Nvidia when it comes to price.
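For context, a quick sketch of why the quoted point holds, assuming the usual GDDR6 chip options of 1 GB or 2 GB per 32-bit channel; the numbers are illustrative, not a confirmed spec:

```python
# Sketch: which VRAM capacities a given bus width allows with uniform GDDR6 chips.
# Assumes each chip has a 32-bit interface and comes in 1 GB (8 Gb) or 2 GB (16 Gb)
# densities - the common options at the time.

CHIP_BUS_BITS = 32
CHIP_SIZES_GB = (1, 2)

def possible_capacities(bus_width_bits):
    chips = bus_width_bits // CHIP_BUS_BITS
    return sorted({chips * size for size in CHIP_SIZES_GB})

for bus in (128, 192, 256, 320, 384):
    print(f"{bus}-bit bus -> {possible_capacities(bus)} GB")

# 256-bit -> [8, 16] GB, so 12 GB implies a narrower 192-bit bus (and less bandwidth).
```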
 

ZywyPL

Banned
Sounds good, but I'm skeptical about the VRAM. Such high amounts of VRAM will only lead to higher prices. Why not a 6800 with 12 GB of VRAM instead? Such high amounts of VRAM are unnecessary.

Well, looking at how much concern there is about Ampere's "low" amount of RAM, AMD will be able to market their cards on the RAM numbers alone to win gamers' wallets.
 

Dampf

Member
Well, looking at how much concern there is about Ampere's "low" amount of RAM, AMD will be able to market their cards on the RAM numbers alone to win gamers' wallets.
I believe that is their strategy.

But the VRAM fearmongering crowd is a vocal minority. Features like DLSS, better ray tracing, Nvidia Broadcast, CUDA, OptiX, stable drivers, etc. are far more important selling points than VRAM. AMD has to significantly undercut Nvidia's price in the same performance category to compete.
 
Last edited:

FireFly

Member
It's near impossible for AMD to put out ray tracing performance between the 3070 and 3080 on their first gen of this. Nvidia has two generations of ray tracing hardware on the market while AMD has none as of now. They won't match Turing's dedicated RT cores, never mind land between the 3070 and 3080.
They don't need to match Turing, at least at the high end. They need above 2060 performance in raytracing for 40 CUs clocked at 2 GHz+. That would be less efficient than Turing, since in rasterisation we would expect 2080 Super performance. But double it up (for 80 CUs) and you get to 2080 Ti/3070 levels.

Even if you are less efficient per CU, you can still scale performance by increasing the number of CUs. And actually it's not clear that Ampere is massively more efficient in raytracing than Turing from a transistor perspective, since they increased the number of transistors by 108% vs the 2080, and performance seems to have scaled by 120%-130%.
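Putting that same back-of-envelope math into a small script, purely to restate the post's reasoning; the relative performance figures are the rough ones quoted above, not measurements:

```python
# Restating the post's back-of-envelope scaling argument (illustrative only; the
# relative ray tracing figures are the rough ones quoted above, not measurements).

def scale_by_cus(base_perf, base_cus, target_cus):
    """Assume RT throughput scales roughly linearly with CU count at similar clocks."""
    return base_perf * target_cus / base_cus

# If a 40 CU RDNA2 part lands around RTX 2060-level ray tracing (index 1.0)...
estimate_80cu = scale_by_cus(1.0, base_cus=40, target_cus=80)
print(f"80 CU estimate: ~{estimate_80cu:.1f}x a 2060 (roughly 2080 Ti / 3070 territory)")

# Transistor aside: +108% transistors (3080 vs 2080) for +120-130% performance
# works out to only a modest gain per transistor.
for gain in (1.20, 1.30):
    print(f"perf per transistor vs Turing: {(1 + gain) / (1 + 1.08):.2f}x")
```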
 
Last edited:

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
For RDNA1, the average game clocks are about 3% higher than advertised in existing 'last gen' games.

I'm not sure if RDNA2/6000 will have similar headroom, seeing as they are already pushing 7nm with a 2 GHz game clock.

My prediction is that the 6000/RDNA2 game clocks are very much the end-game limits, with no more upside. 🤷‍♀️

I predict the opposite.
I think AIBs will be handling 2.4 GHz, easy work.
+150 MHz shouldn't be outside the realm of possibility with a Strix-level cooler, assuming the chips aren't already at their power limits.
 

martino

Member
Rhoooooooo...
It seems the cycle won't be broken this time.
I've decided to wait... I want to see how games using DX12 Ultimate perform before jumping to one side or the other.
 
Last edited:

longdi

Banned
I predict the opposite.
I think AIBs will be handling 2.4 GHz, easy work.
+150 MHz shouldn't be outside the realm of possibility with a Strix-level cooler, assuming the chips aren't already at their power limits.
By AMD's own admission, Ryzen 5000 will have even less overclocking headroom than before. I'm not optimistic 🤷‍♀️
 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
By AMD's own admission, Ryzen 5000 will have even less overclocking headroom than before. I'm not optimistic 🤷‍♀️

Ryzen ≠ RDNA2
It would be a big fuck you to AIBs if there is literally not even 150 MHz of headroom in these chips.
150 MHz... I don't think even OEM GPUs have failed to give at least 150 MHz.

You really think AMD's GPUs are already at their max on AMD's coolers?

Why would any high-end variant cards need to exist then?
AIBs are going to have a very, very hard time charging anything above MSRP for a card that's only marginally quieter at full load and doesn't perform any better.
 
Last edited:

CuNi

Member
Can someone ping Ascend?
When I posted Igor's prediction of 320W yesterday he called it "FUD" :messenger_tears_of_joy: :messenger_tears_of_joy:
Someone check on him, maybe he's having a heart attack.
 

ZywyPL

Banned
I think AIBs will be handling 2.4 GHz, easy work.

They will most likely use the same 2.5-3 slot coolers used on Ampere cards, as is always the case, so thermals shouldn't be much of an issue; my guess is that the power delivery/chip capabilities will be the bottleneck.


By AMD's own admission, Ryzen 5000 will have even less overclocking headroom than before. I'm not optimistic 🤷‍♀️

Ryzen ≠ RDNA2
It would be a big fuck you to AIBs if there is literally not even 150 MHz of headroom in these chips.
150 MHz... I don't think even OEM GPUs have failed to give at least 150 MHz.


But does it matter whether you get X clocks out of the box or by manually tuning the chip yourself? IMO, 2+ GHz on a GPU out of the box is a much better situation than counting on the silicon lottery.
 

longdi

Banned
Ryzen ≠ RDNA2
It would be a big fuck you to AIBs if there is literally not even 150 MHz of headroom in these chips.
150 MHz... I don't think even OEM GPUs have failed to give at least 150 MHz.

You really think AMD's GPUs are already at their max on AMD's coolers?

Why would any high-end variant cards need to exist then?
AIBs are going to have a very, very hard time charging anything above MSRP for a card that's only marginally quieter at full load and doesn't perform any better.

I'm talking about stock operation; that's the game clock this time round, it won't be higher than the specs, unlike RDNA1 with its +50-70 MHz of extra headroom.

I guess AIBs can go higher than the spec game clocks; maybe the very best Lightning/ROG/Xtreme models can do about 2.15 GHz.

Because 7nm is 7nm. :messenger_clapping:
 
Last edited:

regawdless

Banned
Ryzen ≠ RDNA2
It would be a big fuck you to AIBs if there is literally not even 150 MHz of headroom in these chips.
150 MHz... I don't think even OEM GPUs have failed to give at least 150 MHz.

You really think AMD's GPUs are already at their max on AMD's coolers?

Why would any high-end variant cards need to exist then?
AIBs are going to have a very, very hard time charging anything above MSRP for a card that's only marginally quieter at full load and doesn't perform any better.

You mean just like most 3080 AIB cards that charge 100 bucks more while only offering like one or two percent more performance compared to the Founders Edition? :messenger_tears_of_joy:
 

Md Ray

Member
If PS5 were to be like the 6800 and 6900, at voltages normally seen with stock PC GPUs, its average sustainable frequency would be about 2 GHz.

Just saying.... 🤷‍♀️
But is PS5 really like the 6800 and 6900?
 
Last edited:

Mhmmm 2077

Member
It's GDDR6 though. I'll be interested to see how well it scales next to 6X. If it gives the same bandwidth as the 10 GB of GDDR6X it might be a moot point, but I'd bet it'll scale better.

I really can't wait to see the benchmarks. I'm hopeful AMD is competitive again. We really need someone to knock Nvidia down a peg and get them to get their heads out of their asses.
RTX 3070 is also GDDR6; only the 3080 and 3090 are GDDR6X.
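For reference, peak bandwidth is just per-pin data rate × bus width. A minimal sketch using the published Ampere figures and a rumoured, unconfirmed 16 Gbps/256-bit Radeon configuration:

```python
# Peak memory bandwidth = per-pin data rate (Gbps) * bus width (bits) / 8 bits per byte.
# The Ampere figures are the published ones; the Radeon line is a rumoured
# 16 Gbps GDDR6 on a 256-bit bus, not a confirmed spec at this point.

def bandwidth_gb_s(data_rate_gbps, bus_width_bits):
    return data_rate_gbps * bus_width_bits / 8

configs = {
    "RTX 3070 (GDDR6, 14 Gbps, 256-bit)":    (14, 256),
    "RTX 3080 (GDDR6X, 19 Gbps, 320-bit)":   (19, 320),
    "RX 6800 XT? (GDDR6, 16 Gbps, 256-bit)": (16, 256),
}

for name, (rate, width) in configs.items():
    print(f"{name}: {bandwidth_gb_s(rate, width):.0f} GB/s")
```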
 

fermcr

Member
I'm willing to spend 300-350€ on my next graphics card... whichever company offers the best option at that price gets my money.
Thanks and goodbye.
 
Just going to leave this here....




Igor's speculation below based on some readings from an AIB card

This very special board partner card sets the GPU clock limit to 2,577 MHz. Converting that using the experience with Navi10, roughly 2.3 to 2.4 GHz of maximum "boost" clock should remain. Under real load that could still mean a "gaming" clock between 2.1 and 2.3 GHz, depending on cooling and load scenario. Although this is still quite speculative, it is quite plausible.

Would be interested to get your thoughts on this, longdi?
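Igor's conversion restated as arithmetic; the derating factors below are just the ones implied by his speculative, Navi10-based estimate, not measured values:

```python
# Rough restatement of Igor's conversion (speculative; the derating factors are
# simply the ones implied by his Navi10-based estimate, not measurements).

bios_clock_mhz = 2577          # clock limit read from the board partner BIOS

# Per the quote: max boost lands ~7-11% below the BIOS limit, and the sustained
# "game" clock a further few percent below that.
boost_range = [round(bios_clock_mhz * f) for f in (0.89, 0.93)]
game_range  = [round(bios_clock_mhz * f) for f in (0.82, 0.89)]

print(f"BIOS limit:          {bios_clock_mhz} MHz")
print(f"Expected max boost:  ~{boost_range[0]}-{boost_range[1]} MHz")
print(f"Expected game clock: ~{game_range[0]}-{game_range[1]} MHz")
```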
 
Can someone ping Ascend?
When I posted Igor's prediction of 320W yesterday he called it "FUD" :messenger_tears_of_joy: :messenger_tears_of_joy:
Someone check on him, maybe he's having a heart attack.

Em... you do realize that those power draw figures are not confirmed, right? The ones quoted by the OP are simply referencing Igor's speculative numbers. Literally the same thing. Personally, I doubt those power draw numbers as well, based on what I've seen discussed in the tech sphere.
 

Kadve

Member
What happened to HBM?

HBM's only real advantage over GDDR was its lower power consumption, which, as history has shown, gamers don't really care about. Otherwise you would just get the same performance at a much higher price.

The only place where it's preferable nowadays is in enterprise products.
 
Last edited:

VFXVeteran

Banned
Sounds good, but I'm skeptical about the VRAM. Such high amounts of VRAM will only lead to higher prices. Why not a 6800 with 12 GB of VRAM instead? Such high amounts of VRAM are unnecessary.

Not at all. Crysis Remake on Ultra settings at 4K with an unlocked framerate allocates over 16 GB of VRAM. These newer games coming out will absolutely take advantage of the amount of VRAM on board to stabilize framerates.
 

regawdless

Banned
Not at all. Crysis Remake on Ultra settings at 4K with an unlocked framerate allocates over 16 GB of VRAM. These newer games coming out will absolutely take advantage of the amount of VRAM on board to stabilize framerates.

Do you and I really need to have a discussion about VRAM allocation vs usage again?
(In which case you'll mop the floor with me because I'm way less knowledgeable.)

Just joking of course :messenger_grinning_sweat:
 

Ascend

Member
By AMD's own admission, Ryzen 5000 will have even less overclocking headroom than before. I'm not optimistic 🤷‍♀️
You mean the CPU that just reached 6 GHz?

Can someone ping Ascend?
When I posted Igor's prediction of 320W yesterday he called it "FUD" :messenger_tears_of_joy: :messenger_tears_of_joy:
Someone check on him, maybe he's having a heart attack.
Why would I? There are likely AIB cards that will use that much power. I doubt AMD will release their own version using that much power, especially because they likely don't need to push the GPU to its absolute limit to be competitive.

The thing is, barely anyone worries about the power consumption of the Nvidia cards, but now that AMD's cards are close to coming out, everyone wants to jump on the "AMD cards draw too much power" train.
 
Last edited:

VFXVeteran

Banned
Do you and I really need to have a discussion about VRAM allocation vs usage again?
(In which case you'll mop the floor with me because I'm way less knowledgeable.)

Just joking of course :messenger_grinning_sweat:

:messenger_tears_of_joy:

If I have to allocate storage just because I think I might need it, I'm still locking it up. I could be doing a number of swaps within that memory block. If I had a smaller memory store, I would allocate less and probably destroy more often. It all depends but my point is that memory is a precious commodity. The more you have the better off you'll be.
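A toy illustration of that allocation-vs-usage point, not how any particular engine actually works: with a larger VRAM budget more resources simply stay resident, while a smaller budget forces the same workload to evict and recreate resources far more often.

```python
# Toy model of allocation vs per-frame usage (not any specific engine's behaviour).
# A bigger VRAM budget lets more textures stay resident; a smaller one forces the
# same workload to evict and recreate resources far more often.

from collections import OrderedDict

class TexturePool:
    def __init__(self, budget_mb):
        self.budget_mb = budget_mb
        self.resident = OrderedDict()   # texture id -> size in MB, kept in LRU order
        self.used_mb = 0
        self.evictions = 0

    def request(self, tex_id, size_mb):
        if tex_id in self.resident:                 # already resident: just touch it
            self.resident.move_to_end(tex_id)
            return
        while self.used_mb + size_mb > self.budget_mb and self.resident:
            _, freed = self.resident.popitem(last=False)  # evict least recently used
            self.used_mb -= freed
            self.evictions += 1
        self.resident[tex_id] = size_mb
        self.used_mb += size_mb

# 100 frames, each touching 20 of 40 possible 200 MB textures (a sliding window).
frames = [[((f + i) % 40, 200) for i in range(20)] for f in range(100)]
for budget in (16000, 4000):    # MB of VRAM available to the pool
    pool = TexturePool(budget)
    for frame in frames:
        for tex in frame:
            pool.request(*tex)
    print(f"{budget} MB budget -> {pool.used_mb} MB allocated, {pool.evictions} evictions")
```

With the larger budget everything ends up allocated and nothing is ever destroyed; with the smaller one the allocation stays low but evictions pile up, which is the trade-off being described.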
 

nemiroff

Gold Member
:messenger_tears_of_joy:

If I have to allocate storage just because I think I might need it, I'm still locking it up. I could be doing a number of swaps within that memory block. If I had a smaller memory store, I would allocate less and probably destroy more often. It all depends but my point is that memory is a precious commodity. The more you have the better off you'll be.

That would show up in 3080 vs 3090 benchmarks. Does it?
 
Last edited:

VFXVeteran

Banned
That would show up in 3080 vs 3090 benchmarks. Does it?

I have never seen a benchmark tool that measures allocated VRAM space on a per-GPU basis per frame in a game. It would certainly be an interesting metric to examine.
 
Last edited:

nemiroff

Gold Member
I have never seen a benchmark tool that measures allocated VRAM space on a per-GPU basis per frame in a game. It would certainly be an interesting metric to examine.

If Crysis Remake allocates more VRAM to somehow take advantage of it and speed things up, we'd simply see a bigger gap in the 3090 vs 3080 benchmarks than the usual 5-15%. I couldn't find any benchmarks though.
 

mansoor1980

Member
Never owned an AMD GPU before, but might buy the 6700 XT instead of a 3060 after a month simply because of that VRAM... how are the drivers for AMD cards?
 
Never owned an AMD GPU before, but might buy the 6700 XT instead of a 3060 after a month simply because of that VRAM... how are the drivers for AMD cards?

We won't know until they release. New launches can sometimes have driver issues, crashes or hiccups, as anyone following the 3000 series launch will know. Hopefully they will have solid drivers; some whispers suggest that one of the reasons they have not launched these cards earlier is that they are trying to perfect the drivers, or at least remove any deal-breaker issues like what happened with the 3000 series. If it is any consolation, it doesn't look like they are rushing this launch, so hopefully that will translate to more stable drivers.
 

Papacheeks

Banned
Never owned an AMD GPU before, but might buy the 6700 XT instead of a 3060 after a month simply because of that VRAM... how are the drivers for AMD cards?

Drivers are fine now. They had lots of issues late last year following the 5700/XT launch but seem to have it all ironed out, and have been taking big strides in fixing things in a timely manner. Brand-wise, if you're not interested in the reference card, then among the AIBs, XFX and Sapphire are in my opinion top tier, with MSI being second tier.
 

mansoor1980

Member
We won't know until they release. New launches can sometimes have driver issues, crashes or hiccups, as anyone following the 3000 series launch will know. Hopefully they will have solid drivers; some whispers suggest that one of the reasons they have not launched these cards earlier is that they are trying to perfect the drivers, or at least remove any deal-breaker issues like what happened with the 3000 series. If it is any consolation, it doesn't look like they are rushing this launch, so hopefully that will translate to more stable drivers.
OK cool... and also AMD GPUs age well because of good driver support... I mean, look at how well the Vega 56 is doing compared to its Nvidia counterpart.
 

mansoor1980

Member
Drivers are fine now. They had lots of issues late last year following the 5700/XT launch but seem to have it all ironed out, and have been taking big strides in fixing things in a timely manner. Brand-wise, if you're not interested in the reference card, then among the AIBs, XFX and Sapphire are in my opinion top tier, with MSI being second tier.
Sapphire is lower priced compared to the others where I live... is it good quality? I've only had experience with MSI and Gigabyte before.
 
OK cool... and also AMD GPUs age well because of good driver support... I mean, look at how well the Vega 56 is doing compared to its Nvidia counterpart.

Well, the fine wine meme does seem to be real, although I'm not sure if that is because Radeon's initial drivers are maybe "just OK" and don't bring out the best in the hardware until later, or if it is just AMD being committed to continued improvement. I think it is probably the latter.

Most of the time Nvidia cards also improve performance with drivers over time, sometimes with great gains in edge cases, but on average the improvements don't seem to be as large as what AMD delivers over time versus its initial launch drivers.
 

GustavoLT

Member
devs optimizing games on consoles:
[GIF]

devs optimizing games on PC:
[GIF]
 

mansoor1980

Member
Well, the fine wine meme does seem to be real, although I'm not sure if that is because Radeon's initial drivers are maybe "just OK" and don't bring out the best in the hardware until later, or if it is just AMD being committed to continued improvement. I think it is probably the latter.

Most of the time Nvidia cards also improve performance with drivers over time, sometimes with great gains in edge cases, but on average the improvements don't seem to be as large as what AMD delivers over time versus its initial launch drivers.
OK then... but I was not happy with the GTX 970 VRAM bullshit... although it was a great card.
 
Nice, so AMD might actually have decent graphics cards for once.

The power draw might be concerning tho, let's wait and see.
 
Without DLSS I really can't justify buying AMD, so I guess I'm going team green for my next card.

Everyone has their own priorities about what is important to them for their purchases. Personally, I think DLSS is pretty cool tech; I've no idea if AMD has anything to compete with it. Having said that, currently it is only supported in around six titles and needs to be implemented on a game-by-game basis by the developers. Granted, that number of supported titles will likely grow over the next two years as RT starts to gain adoption due to consoles etc.

Having said that, I certainly wouldn't base my purchase on an ad hoc feature that needs to be supported on a game-by-game basis and currently only works on a tiny handful of titles. Until DLSS can be turned on at the driver level with no work from developers, it is not going to be a huge thing, despite the paid promotional hype from Digital Foundry and the like.

But if DLSS is something you can't live without and you want to future-proof yourself, then I can't really complain; buying a 3000 series card would be a great investment for you if that is the case.
 

waylo

Banned
Still don't see this competing well if it does not have good RT and a DLSS equivalent.
This right here is my main concern. I really, really hope they offer up a solid RT solution, but even if they do, you know they aren't going to have something like DLSS. And while it may not be for every game, DLSS is a HUGE benefit and not something I'd want to be without. AMD would seemingly have to just go for insane raw performance to try and keep up, and I don't see that happening. I hope they prove me wrong though. Either way I'm interested to see what they announce next week.
 

RedVIper

Banned
Everyone has their own priorities about what is important to them for their purchases. Personally, I think DLSS is pretty cool tech; I've no idea if AMD has anything to compete with it. Having said that, currently it is only supported in around six titles and needs to be implemented on a game-by-game basis by the developers. Granted, that number of supported titles will likely grow over the next two years as RT starts to gain adoption due to consoles etc.

Having said that, I certainly wouldn't base my purchase on an ad hoc feature that needs to be supported on a game-by-game basis and currently only works on a tiny handful of titles. Until DLSS can be turned on at the driver level with no work from developers, it is not going to be a huge thing, despite the paid promotional hype from Digital Foundry and the like.

But if DLSS is something you can't live without and you want to future-proof yourself, then I can't really complain; buying a 3000 series card would be a great investment for you if that is the case.

The number of games running DLSS is increasing a lot; there are tons of games announced to support it at release, or that will be adding DLSS support Soon™.

I expect most AAA games to support it in the future, and that's kind of all that matters, because those are the games that will be pushing these cards. I don't care if indies support it because they're usually not that demanding.
 
The number of games running DLSS is increasing a lot; there are tons of games announced to support it at release, or that will be adding DLSS support Soon™.

I expect most AAA games to support it in the future, and that's kind of all that matters, because those are the games that will be pushing these cards. I don't care if indies support it because they're usually not that demanding.

Those are fair points to be honest; can't really argue with you. DLSS does seem like great tech. Either way, I'm looking forward to seeing what AMD has in store for the reveal on the 28th.
 

regawdless

Banned
Everyone has their own priorities about what is important to them for their purchases. Personally, I think DLSS is pretty cool tech; I've no idea if AMD has anything to compete with it. Having said that, currently it is only supported in around six titles and needs to be implemented on a game-by-game basis by the developers. Granted, that number of supported titles will likely grow over the next two years as RT starts to gain adoption due to consoles etc.

Having said that, I certainly wouldn't base my purchase on an ad hoc feature that needs to be supported on a game-by-game basis and currently only works on a tiny handful of titles. Until DLSS can be turned on at the driver level with no work from developers, it is not going to be a huge thing, despite the paid promotional hype from Digital Foundry and the like.

But if DLSS is something you can't live without and you want to future-proof yourself, then I can't really complain; buying a 3000 series card would be a great investment for you if that is the case.

DLSS 2.1 will be a step in the right direction. I'm convinced that Nvidia will keep pushing it and that most demanding games will support it sooner rather than later.
 