
AMD Radeon VII Announced

If the June/July release for Navi is true, then getting the Radeon VII doesn't make sense unless you're a collector or a prosumer. That card is just there to bridge the gap from now to Navi's release so Nvidia isn't the only one offering high-end GPUs.
Yep, pretty much. But I think getting an RTX card makes just as little sense. Unless you're literally swimming in money, then sure, get the 2080 Ti... But people who work for a living should not support these prices.
 

llien

Member
Why can't people have a normal conversation on GAF anymore...
Please, don't overgeneralize.

If the June/July release for Navi is true, then getting the Radeon VII doesn't make sense unless you're a collector or a prosumer. That card is just there to bridge the gap from now to Navi's release so Nvidia isn't the only one offering high-end GPUs.
Navi will not get beyond 2070 in 2019 according to leaks.

That's fair. But if the cards are equal in performance and equal in price, I'd take the one with more VRAM.
On another note... This is slightly relevant, if people are interested in how it came about that the Radeon VII got announced instead of something else (Navi).



This is as relevant as it gets, barring the annoying Twitter drama part. The highlights:

  • WCCNewsStraightFromArse actually hinted at a "Radeon 7", later to become the VII, when one of the AMD old-timers was fired (13 Dec 2018)
  • The VII costs $750 to produce
  • Nevertheless, Vega 20 production was ramping up out of fear of Turing, which turned out to be not at all scary, so it was called off
  • Navi first came out of TSMC in Sep 2018 and was a weird mix: better than expected on one hand (performance at least, likely power consumption too), but with problems (stability?) that need fixing on the other, which is why it wasn't demoed at CES
  • Navi is still a low-to-mid-range part, smaller than Polaris
  • In 2019 the fastest Navi card will be at around 1080/2070 level
  • A bigger Polaris was planned but called off, with Vega poised to fill the gap and underdelivering; AMD "won't repeat the same mistake", and there will be a bigger Navi card in 2020
  • Navi will use GDDR6
  • End of summer is named as the expected launch window for Navi

The 7nm woes mentioned there also explain why the PS5 is postponed.
 
Last edited:

Leonidas

Member
I guess we have to wait for 3rd party benchmarks to judge that

Lisa Su already said the 2080 will be faster in some games. And we'll also see the VII running 28-40 FPS (unplayable to borderline unplayable) at 4K max in some games that came out last year or earlier.
 

JohnnyFootball

GerAlt-Right. Ciriously.
Navi will not get beyond 2070 in 2019 according to leaks.

Nothing leaked about Navi has been shown to be true. The pre-CES leaks in regards to Navi were mostly garbage. So we have no idea how it will perform.

AMD said literally nothing about Navi and to be honest, that concerns me.
 
Yep, pretty much. But I think getting an RTX card makes just as little sense. Unless you're literally swimming in money, then sure, get the 2080 Ti... But people who work for a living should not support these prices.
People who work for a living have the capitalism-given right to spend their discretionary income as they see fit. If they are interested in the new ray tracing technology then why wouldn't they buy an RTX card? It's first-generation tech to be sure but there's always a first generation for everything. Someone always has to go first and God knows AMD has never done anything to push the industry forward the way Nvidia has.
 

Panajev2001a

GAF's Pleasant Genius
People who work for a living have the capitalism-given right to spend their discretionary income as they see fit. If they are interested in the new ray tracing technology then why wouldn't they buy an RTX card? It's first-generation tech to be sure but there's always a first generation for everything. Someone always has to go first and God knows AMD has never done anything to push the industry forward the way Nvidia has.

Unified Shaders with Xenos?
 
Dear nvidia slave -

Unified shader architecture.
Dear AMD cultist,

If you have to go back 15 years to find an example, you haven't won the argument. Your answer is also wrong: the Xbox 360 launched in 2005, and the GeForce 8800 GTX launched a year later. If you think Nvidia just goes into the lab and shits out a GPU in a year, you have a fundamental misunderstanding of how new GPU hardware is developed. Nvidia was already working on a unified shader architecture before AMD/ATI prepared Xenos for the 360; they just launched it later, as an add-in card for PCs instead of embedded in a console.

Also if we are going into the distant past, the real and correct answer would have been DirectX 9/PS 2.0 support as seen in the original Half-Life 2. That's the only instance where what was then still ATI really moved the industry forward.

edit: Saw what Leonidas wrote below me. He is actually correct: AMD acquired ATI after the deal to use Xenos in the 360 had already been done by what was then still ATI. So the answer is wrong in two ways: first, it presupposes that Nvidia wasn't already working on unified shaders (they were), and second, it gets the history wrong (as did I), since we all forgot that the Xenos deal was done before ATI was acquired by AMD. But Leonidas remembered. Sparta never forgets.
 
Last edited:

Ascend

Member
People who work for a living have the capitalism-given right to spend their discretionary income as they see fit. If they are interested in the new ray tracing technology then why wouldn't they buy an RTX card? It's first-generation tech to be sure but there's always a first generation for everything. Someone always has to go first and God knows AMD has never done anything to push the industry forward the way Nvidia has.
Really? Look at this video by AMD from 2018. I'm tired of people acting like nVidia somehow invented ray tracing:


That's not even mentioning Radeon Rays, which has been around since 2016... Here's another nice little video...



Also, AMD has never done anything to push the industry forward? LOL
Unified shaders (As already mentioned)
TressFX?
HBM?
FreeSync over HDMI?
Who was generally the first on a new node again? Oh right, AMD.
Remember DX10.1 that brought a bunch of benefits but was stripped out of games by nVidia? (https://techreport.com/news/14707/ubisoft-comments-on-assassin-creed-dx10-1-controversy-updated)
Remember Tessellation? Yeah. ATi invented that. Since that's now AMD's property, it still counts.

Uh... You know what... Forget listing things. This link should suffice...
https://www.fastcompany.com/most-innovative-companies/2018/sectors/consumer-electronics

If you have to go back 15 years to find an example, you haven't won the argument.
You don't get to say AMD has never done X and then complain about the timing of the example.
 
Last edited:

Leonidas

Member
TressFX?
HBM?
FreeSync over HDMI?
Who was generally the first on a new node again? Oh right, AMD.
Remember DX10.1 that brought a bunch of benefits but was stripped out of games by nVidia? (https://techreport.com/news/14707/ubisoft-comments-on-assassin-creed-dx10-1-controversy-updated)

Removed the things that predate AMD.

TressFX was used in like 2 games. Wouldn't say that moved the industry forward.
With HBM, Fury and Vega couldn't beat older Nvidia cards that sold for the same price. Wouldn't say that moved the industry forward.
Rushing to the new node isn't really moving the industry forward if they're offering the same level of performance as older competing GPUs.

Things that predate the ATI buyout are irrelevant; we're talking about what AMD has done. ATI used to do some good things, but I can't say they've pushed the industry forward in any meaningful way since they became a part of AMD.
 
Last edited:

thelastword

Banned
HairWorks vs TressFX? I'll take TressFX any day. TressFX made it to consoles, and that alone speaks to AMD's stance on things: AMD wants the technology out there for everyone and playable everywhere. They're not in it to cripple performance so you keep buying more powerful cards to run resource-hogging features with no great visual or aesthetic advantage. Lara's hair looks very nice; Geralt's HairWorks hair was an abomination. Then you have something like FF, where NV features just ruin performance: they have HairWorks enabled for the entire map, even for creatures not in view or in the vicinity. Just a mess and shady stuff all round.

Raytracing was being worked on at AMD way before NV even announced anything about raytracing, and AMD is not in it to cripple performance with some ho-hum raytracing technology. They really want to deliver raytracing that looks good and performs well, where everyone gets an affordable, good-looking solution. I'm pretty sure that when AMD delivers RT, it will be available on consoles too, so no one has to buy a $1,200.00 card to get 60fps at 1440p on Ultra with hybrid raytracing just for reflections.

AMD has always had the better tech and pushed the industry forward, but people are blinded by Nvidia's monopolistic ways. You want to know another thing AMD was right about? FreeSync. Yet look at the hypocrisy: now that Nvidia is on board with FreeSync, they're already making statements like "they have better FreeSync support than AMD". Yes, you can't make this up. Right now NV is embracing FreeSync like they invented it and own it. Also, look at the NV fans who never had a good thing to say about FreeSync before, but now FreeSync is so great because it's available to NV card owners. The irony of it all.

You know, it's a bit sad when people don't see who is doing things to advance the tech rather than cripple its forward progress. You know whose games don't intentionally cripple performance on the competitor's cards? Games developed in collaboration with AMD; they all work well on NV cards: Far Cry 5, Sniper Elite, Hitman, etc. It's crazy, isn't it: because of AMD's open policy, NV can use FreeSync, and NV should also have access to their raytracing tech "Radeon Rays" through GPUOpen too. So while NV freeloads and keeps their monopoly alive through proprietary and segregative tactics, they're only able to make this work because of their hold on the industry, since people just blindly buy Nvidia even if they get worse performance. 1050 Ti vs a cheaper RX 470: people buy the 1050 Ti, and then they wonder why NV is selling cards for $1,200.00/$2,499, yet it's a cardinal sin for AMD to price a high-bandwidth, high-memory card at $700.00.

I'll tell you what these NV fans want: 1080 Ti performance for $200.00 from AMD. Really hopeless stuff. Forget that AMD is selling you a card which costs $750 to make for $700.00, with double the RAM and bandwidth of the nearest NV competitor card. Oh no, the AMD card is a joke, you see, it should be available for peanuts, even though it has more expensive parts than the competition. We should start being a bit more honest about things, to be frank. It's the only way things can get better in this industry.
 

Ascend

Member
Removed the things that predate AMD.

TressFX was used in like 2 games. Wouldn't say that moved the industry forward.
Because nVidia hijacked it and called it PureHair. But in any case, it was hair done in compute rather than through tessellation. Lots of things can be done in compute, and TressFX showed the benefits of that.

With HBM, Fury and Vega couldn't beat older Nvidia cards that sold for the same price. Wouldn't say that moved the industry forward.
Oh so performance is all that matters? I guess then that the R9 290X definitely moved the industry forward.

Rushing to the new node isn't really moving the industry forward if they're offering the same level of performance as older competing GPUs.
Again with the performance excuse. Supporting a node shrink is an important step in the industry, no matter when it happens. That their architecture is old because they get no funding from gamers to improve it is a whole other story. It doesn't mean they didn't drive the industry forward.
Not to mention... There is a reason AMD is considered innovative, as linked previously.

Things that predate the ATI buyout are irrelevant; we're talking about what AMD has done. ATI used to do some good things, but I can't say they've pushed the industry forward in any meaningful way since they became a part of AMD.
It's not a directly fair comparison either, because nVidia's mind share is a lot stronger than in the past. Great technologies get left behind simply because they're not done by nVidia. Look at async compute... It's been proven that it can drastically improve performance, but it is not widespread simply because nVidia cards suck at it.

And what happened to FreeSync over HDMI? Glossed over that one? It's the reason consoles can have adaptive sync.
 

Leonidas

Member
Because nVidia hijacked it and called it PureHair. But in any case, it was hair done in compute rather than through tessellation. Lots of things can be done in compute, and TressFX showed the benefits of that.

Oh so performance is all that matters? I guess then that the R9 290X definitely moved the industry forward.

Again with the performance excuse. Supporting a node shrink is an important step in the industry, no matter when it happens. That their architecture is old because they get no funding from gamers to improve it is a whole other story. It doesn't mean they didn't drive the industry forward.
Not to mention... There is a reason AMD is considered innovative, as linked previously.


It's not a directly fair comparison either, because nVidia's mind share is a lot stronger than in the past. Great technologies get left behind simply because they're not done by nVidia. Look at async compute... It's been proven that it can drastically improve performance, but it is not widespread simply because nVidia cards suck at it.

And what happened to FreeSync over HDMI? Glossed over that one? It's the reason consoles can have adaptive sync.

Performance matters a lot, but it's not the only thing that matters.
Nvidia supports new node shrinks also; rushing to a new node isn't innovation.

I didn't gloss over FreeSync over HDMI, I just don't consider that innovation on AMD's part (FreeSync came after G-Sync).
Support over HDMI is good, but I won't forget the years when I had to buy Nvidia to use HDMI 2.0...
You mention console support, but only Xbox supports adaptive sync today, and the experience isn't good on many FreeSync monitors since most console games run outside the FreeSync range.
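To make the range point concrete, here's a rough sketch in Python with illustrative numbers only (a 48-75 Hz window is common on cheaper FreeSync panels; the values are assumptions, not the spec of any particular monitor):

    # Rough check of whether a frame rate is covered by a monitor's variable-refresh
    # window. The window values below are illustrative assumptions.
    def vrr_covered(fps, vrr_min_hz, vrr_max_hz):
        if vrr_min_hz <= fps <= vrr_max_hz:
            return True
        # LFC (low framerate compensation) repeats frames to push low frame rates
        # back into the window; it needs the max to be at least ~2x the min.
        lfc_available = vrr_max_hz >= 2 * vrr_min_hz
        return lfc_available and fps < vrr_min_hz

    print(vrr_covered(30, 48, 75))    # False: a 30 fps console game falls outside a narrow 48-75 Hz window
    print(vrr_covered(30, 40, 144))   # True: a wide window supports LFC, so 30 fps gets frame-doubled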
 
Last edited:
Because nVidia hijacked it and called it PureHair. But in any case, it was hair done in compute rather than through tessellation. Lots of things can be done in compute, and TressFX showed the benefits of that.

Oh so performance is all that matters? I guess then that the R9 290X definitely moved the industry forward.

Again with the performance excuse. Supporting a node shrink is an important step in the industry, no matter when it happens. That their architecture is old because they get no funding from gamers to improve it is a whole other story. It doesn't mean they didn't drive the industry forward.
Not to mention... There is a reason AMD is considered innovative, as linked previously.


It's not a directly fair comparison either, because nVidia's mind share is a lot stronger than in the past. Great technologies get left behind simply because they're not done by nVidia. Look at async compute... It's been proven that it can drastically improve performance, but it is not widespread simply because nVidia cards suck at it.

And what happened to FreeSync over HDMI? Glossed over that one? It's the reason consoles can have adaptive sync.

Well, Turing actually fixed their async compute woes; take a look at the Wolfenstein benches. Still, it was something AMD had been doing since the dawn of GCN, and Nvidia took how many years to catch up? Nvidia drones are easily exposed in here lol.
 

ethomaz

Banned
You're a corporate slave when you're slamming a product that isn't out yet. On PC, where drivers are a thing. Yes that fits my definition.

Maybe not you, but duck guy is; he knocked AMD because he had a faulty power supply lol.

Yes, we do not know comprehensive numbers for the VII yet, and YES, the RTX 2080 will run into VRAM limitations where the Radeon won't.
I started the discussion because somebody said “this AMD card is the best 4k card on the market”.

I asked “Is it? It costs the same as the RTX 2080, so how do you know it is better?”

I asked for a source and got nothing.

Then somebody said it's because it has 16GB of RAM... I called out again that that doesn't make it a better 4K card... and so on.

The picture you paint didn't happen in that thread... quite the opposite.

You are hyping something that is not out yet.
 
Last edited:

ethomaz

Banned
Tbh I would never buy the VII either; I'm only arguing that from a performance standpoint it will be superior to the 2080 in time; it's simply a more capable card. Taking power usage out of the equation, imagine these cards in a console: you should know damn well that the memory setup and lack of PC overhead would put it well ahead of Nvidia.

Anyway, I wouldn't buy any current high-end card right now: on the Nvidia side because fuck them, they need a kick in the balls, and they're too expensive; on the Radeon side it's too expensive as well, and there's the power consumption.

And I definitely think it's dumb as hell to upgrade before a new console generation sets the baseline. Just a really shoddy time for high-end PC parts.
So you have your mind made up that the Radeon VII will be the better or more capable card without any evidence?

Now we know what corporate slave means... nothing better than real examples to understand... thanks.
 
Last edited:
So you have your mind made up that the Radeon VII will be the better or more capable card without any evidence?

Now we know what corporate slave means... nothing better than real examples to understand... thanks.

Because we can see, before the card is even out, that it performs neck and neck with the 2080 in Far Cry 5, for example, when Turing has been out longer and drivers have had time to mature.

Then, doing some 1st grade math, I deduced that 16 is greater than 8 and that the VII has a greater than 300 GB/s lead in bandwidth. THEN I said imagine this in a console, where Nvidia's DX11 implementation does not come into play for exclusives or console-centric AAA titles. Do you even know how much could be done with that bandwidth?!
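For what it's worth, here's a quick back-of-the-envelope sketch of that bandwidth figure, using the published memory specs (4096-bit HBM2 at 2.0 Gbps per pin on the VII vs. 256-bit GDDR6 at 14 Gbps on the 2080):

    # Peak memory bandwidth (GB/s) = bus width in bits / 8 * per-pin data rate in Gbps
    def peak_bandwidth_gb_s(bus_width_bits, data_rate_gbps):
        return bus_width_bits / 8 * data_rate_gbps

    radeon_vii = peak_bandwidth_gb_s(4096, 2.0)   # HBM2  -> 1024 GB/s
    rtx_2080 = peak_bandwidth_gb_s(256, 14.0)     # GDDR6 ->  448 GB/s
    print(radeon_vii - rtx_2080)                  # 576 GB/s lead, well past 300 GB/s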

Welcome to the list slave.
 

ethomaz

Banned
Because we can see, before the card is even out, that it performs neck and neck with the 2080 in Far Cry 5, for example, when Turing has been out longer and drivers have had time to mature.

Then, doing some 1st grade math, I deduced that 16 is greater than 8 and that the VII has a greater than 300 GB/s lead in bandwidth. THEN I said imagine this in a console, where Nvidia's DX11 implementation does not come into play for exclusives or console-centric AAA titles. Do you even know how much could be done with that bandwidth?!

Welcome to the list slave.
There is no benchmark yet...

Drivers have a better chance of improving the RTX 2080, because it is a new arch, than the Radeon VII, which is the old and already-optimized Vega 64 arch.

You don't look that good at 1st grade maths, because more VRAM and bandwidth don't necessarily mean better performance.
 
Last edited:
There is no benchmark yet...

Drivers have a better chance of improving the RTX 2080, because it is a new arch, than the Radeon VII, which is the old and already-optimized Vega 64 arch.

You don't look that good at 1st grade maths.

Turing has plenty in common with Pascal. It's no alien architecture, and the games already take advantage of async compute to the extent they can on PC.


Furthermore, we simply don't know which architecture gets more optimization on PC... Well, no, it's clearly Nvidia's domain there, with their market share lead and shenanigans like the tessellation in Crysis 2 and GameWorks titles.

Anything else?
 

SonGoku

Member
Lisa Su already said the 2080 will be faster in some games.
That's the same old situation of similarly performing cards edging each other out in games that favor one arch over the other.
And we'll also see the VII running 28-40 FPS (unplayable to borderline unplayable) at 4K max in some games that came out last year or earlier.
The same will apply to the 2080, so what's your point?
Yeah dood, just that thing that helped 360 games leave their PS3 counterparts in the dust for a long time and dragged Nvidia into making the 8800 series... no big deal :p
I'm pretty sure Nvidia was working on Tesla R&D looong before the 360 reveal.
You can't just whip out a massive arch change like that in a couple of years.
 
Last edited:
SonGoku Even if that were true, which I'm not so sure about, ATI beat them to it by a full year. And one more scumbag point for Nvidia: they proclaimed dedicated shaders were overall superior to unified before the 8800 launch and hoodwinked Sony into buying a much inferior product for the PS3.
 

SonGoku

Member
@SonGoku Even if that were true, which I'm not so sure about, ATI beat them to it by a full year
I'm not defending that other poster's point either; AMD contributed its fair share to the industry.
It's just that major architectures take years of R&D; it's not something you can pull off as a reactionary measure, especially something as good as the 8800 was. That arch gave Nvidia a huge boost.
And one more scumbag point for Nvidia: they proclaimed dedicated shaders were overall superior to unified before the 8800 launch and hoodwinked Sony into buying a much inferior product for the PS3.
Scummy PR 101: companies lie to keep selling the inferior product while they get ready for the next big thing.
 
Last edited:

Ascend

Member
Performance matters a lot, but it's not the only thing that matters.
Nvidia supports new node shrinks also; rushing to a new node isn't innovation.

I didn't gloss over FreeSync over HDMI, I just don't consider that innovation on AMD's part (FreeSync came after G-Sync).
Support over HDMI is good, but I won't forget the years when I had to buy Nvidia to use HDMI 2.0...
You mention console support, but only Xbox supports adaptive sync today, and the experience isn't good on many FreeSync monitors since most console games run outside the FreeSync range.
Interesting how you keep calling it 'rushing'. Which node shrink where AMD was first was an actual failure?
Why does it matter if FreeSync came after G-Sync? It's not as if AMD decided to create their own module to put in monitors and copy nVidia directly. No. They used a universal standard and made it mainstream. So much so that nVidia has now been forced to jump on board. If you really want to dismiss that one, I guess this conversation is over, because we will never agree and things need to remain civil.
 

SonGoku

Member
AMD is not rushing to 7nm by any means; if anything, they are playing it smart by porting a familiar arch to 7nm first to get a grip on the node.
It's a similar strategy to what they did back with the 40nm HD 4770, and it turned out great for their new arch (HD 5000) on the new process.
 
Last edited:

Leonidas

Member
Which node shrink where AMD was first was an actual failure?

Never called them a failure, just said they weren't innovative and the last two shrinks seem rushed.
Radeon VII. AMD needed two years and a die shrink to match an old Nvidia GPU. And it uses more power. World's first 7nm consumer GPU. Sounds bad and seems rushed.
AMD was first at the mid-range with Polaris. Power issues at launch with single 6-pin reference design and out of spec PCI-e draw. Seems rushed.
 
Last edited:

llien

Member
Nothing leaked about Navi has been shown to be true. The pre-CES leaks in regards to Navi were mostly garbage. So we have no idea how it will perform.

AMD said literally nothing about Navi and to be honest, that concerns me.
If the release is indeed as far off as late summer or later, with Nvidia's next product being 7nm and coming in 2020, it would simply be too early to say anything.


And it uses more power.
Good thing Fermi is so far in the past that power consumption has started to matter, chuckle.
We have only a very vague idea of how much power the VII consumes.
 

llien

Member
"I can't go into details about performance but I will say you will be surprised, just as I was. The new Radeon VII isn't perfect, but it's a damn good card that I'm excited to dive into with my full review on February 7."

tweaktown
 
Last edited: