
3DMark Time Spy rigged in Nvidia's favor?

Sini

Member
Vsync option does not do anything other than get rid of the framecap when set to off, but the frames are always syncing to refresh. There is no tearing in spite of what you select.
200FPS cap? I have vsync set off and it still doesn't go above that, not that it gets that high anywhere but in classic Doom maps.

I haven't seen any tearing, nor is there any of the input lag I had with OpenGL, so what kind of Vsync are they using then?
 
200FPS cap? I have vsync set off and it still doesn't go above that, not that it gets that high anywhere but in classic Doom maps.
Yeah I should have been more specific. It gets rid of the frame cap which is usually up to your refresh rate.
I haven't seen any tearing, nor is there any of the input lag I had with OpenGL, so what kind of Vsync are they using then?

Seems like real triple buffering I think. And yeah, you would see no tearing.
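For anyone curious what that behaviour looks like at the API level, here is a minimal sketch of how a Vulkan renderer could end up with "vsync off = uncapped but no tearing". This is illustrative only (Doom's actual swapchain code isn't public), and ChoosePresentMode is a hypothetical helper; it assumes an already-created physical device and surface.

```cpp
#include <vulkan/vulkan.h>
#include <vector>

// MAILBOX behaves like the "real triple buffering" described above: the GPU
// renders as fast as it can, the newest finished image replaces the queued
// one, and only complete images are flipped at refresh, so no tearing and no
// classic vsync back-pressure.
VkPresentModeKHR ChoosePresentMode(VkPhysicalDevice physicalDevice,
                                   VkSurfaceKHR surface,
                                   bool vsyncOff)
{
    uint32_t count = 0;
    vkGetPhysicalDeviceSurfacePresentModesKHR(physicalDevice, surface, &count, nullptr);
    std::vector<VkPresentModeKHR> modes(count);
    vkGetPhysicalDeviceSurfacePresentModesKHR(physicalDevice, surface, &count, modes.data());

    if (vsyncOff) {
        for (VkPresentModeKHR m : modes)
            if (m == VK_PRESENT_MODE_MAILBOX_KHR)    // uncapped, no tearing
                return m;
        for (VkPresentModeKHR m : modes)
            if (m == VK_PRESENT_MODE_IMMEDIATE_KHR)  // uncapped, can tear
                return m;
    }
    return VK_PRESENT_MODE_FIFO_KHR;                 // classic vsync, always available
}
```

If the engine prefers MAILBOX when "vsync off" is selected, you would see exactly what's described above: the frame cap goes away, but no tearing ever appears.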
 

Karanlos

Member
Wuh? I have seen more people saying that Vsync doesn't do anything at all.

With AMD and Vsync off my fps is above 130 on my 1080p 60 Hz monitor; with it on it's capped at 60, but it's an unstable 60 that fluctuates between 59 and 60 fps, which actually causes tearing according to the internal measuring tools, so I don't know.

I've made an FCAT layer, so Vulkan has that now, but I don't know if the colors are correct.
 
Lol, this is hilarious.
FYI I prefer NVIDIA, so everything I am about to say is obviously 100% false and my statement is rigged for Nvidia's purposes.

I like how over the last year or so the entire "tech" segment of GAF has been pro low-level code, pro AMD and definitely pro async compute. To the point that they assume async/low-level can compensate for actual hardware differences. The people who spout "optimization" as the thing which will allow an RX 480 to outclass a GTX 1080. Low-level will allow a 6 TF Scorpio to be "untouched" for a few years, again besting a GTX 1080. Async compute allows more things to be processed at once, cutting out any downtime the GPU might have while waiting. It doesn't increase the number of actual compute cores, and it doesn't make those existing cores faster. A 10-15% improvement is fantastic, but not enough to close the gap with Nvidia's superior processing ability.

So when a benchmark comes out that doesn't show an RX 480 besting a GTX 1080, then holy shit, it must be rigged.

Of course, I prefer NVIDIA so I'm obviously wrong.
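The back-of-envelope arithmetic behind that argument, as a quick sketch. The clock and shader counts below are approximate public boost specs, and theoretical FP32 peak is only a proxy for game performance, so treat the output as illustrative rather than a benchmark result.

```cpp
#include <cstdio>

int main() {
    // Approximate theoretical FP32 peaks: shaders x 2 ops/clock x boost clock.
    const double rx480_tflops   = 2304 * 2 * 1.266e9 / 1e12; // ~5.8 TFLOPS
    const double gtx1080_tflops = 2560 * 2 * 1.733e9 / 1e12; // ~8.9 TFLOPS

    const double async_uplift    = 0.15;                     // generous 15% from async compute
    const double rx480_effective = rx480_tflops * (1.0 + async_uplift);

    printf("RX 480 + 15%% async: %.1f TFLOPS vs GTX 1080: %.1f TFLOPS (remaining gap: %.0f%%)\n",
           rx480_effective, gtx1080_tflops,
           (gtx1080_tflops / rx480_effective - 1.0) * 100.0);
    return 0;
}
```

Even with the full 15% credited, the theoretical gap to a GTX 1080 remains around 30%, which is the point being made above.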
 

dr_rus

Member
So when a benchmark comes out that doesn't show an RX 480 besting a GTX 1080, then holy shit, it must be rigged.

I think what hurts AMD fans the most is that the RX 480 isn't beating not just the 1080 but even the 1060 in that benchmark. This is certainly unexpected when everyone knows* that AMD is miles better than NV in DX12**.

* from AMD's marketing documents
** in AMD-sponsored titles only
 

TarNaru33

Banned
Lol AMD fans taking it to the next level.

I think what hurts AMD fans the most is that the RX 480 isn't beating not just the 1080 but even the 1060 in that benchmark.

Lol, this is hilarious. FYI I prefer NVIDIA, so everything I am about to say is obviously 100% false and my statement is rigged for Nvidia's purposes.

In all honesty, you all are acting like children in your responses, which doesn't help at all lol.
 
FYI I prefer NVIDIA, so everything I am about to say is obviously 100% false and my statement is rigged for Nvidia's purposes.

Lol, this is hilarious.
 

Mabufu

Banned
A DirectX12 benchmark where its advantages are not being utilized.

Great benchmark.

This is indeed rigged in Nvidia's favor, as the things AMD's architecture is good at are not being tested at all.
This is just another DX11 bench.
 

Jebusman

Banned
In all honesty, you all are acting like children in your responses, which doesn't help at all lol.

They can't help it. Involving yourself in the GPU wars, it changes you.

Smugness and egos sort of take over. Even if you feel you're right (and the evidence is there to back you up), you can't help but fan the flames. It's what keeps the war going.
 

ethomaz

Banned
The opposite of what they did would be favoring one brand. They chose the unbiased path that runs on both cards and just shows how strong each card is.

Or do people really want them to do a bench that runs a DX12_1 + async compute code path for nVidia vs a DX12_0 + async compute code path for AMD???

They did the best with what they had on hand.
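For context on the "code path" argument, here is a minimal sketch of how a DX12 application can ask the device which optional 12_1-class features it exposes. SupportsOptional12_1Features is a hypothetical helper, and it assumes an already-created ID3D12Device; a neutral benchmark would note the answer but still run the same common workload on every card.

```cpp
#include <d3d12.h>

bool SupportsOptional12_1Features(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                           &opts, sizeof(opts))))
        return false;

    // Conservative rasterization and ROVs are the headline 12_1 extras.
    const bool hasConservativeRaster =
        opts.ConservativeRasterizationTier != D3D12_CONSERVATIVE_RASTERIZATION_TIER_NOT_SUPPORTED;
    const bool hasROVs = (opts.ROVsSupported == TRUE);

    // A per-vendor build would branch on this; a comparable benchmark sticks
    // to the 12_0 baseline that every tested card supports.
    return hasConservativeRaster && hasROVs;
}
```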
 

twobear

sputum-flecked apoplexy
At best you could claim that the benchmark doesn't correspond to a real-world scenario where optimisations for different cards are likely to exist. That's fine and, coincidentally, it also isn't a new problem, which is part of the reason why 3DMark is no longer considered a be-all, end-all benchmark, with more emphasis placed on performance in actual games. But to claim that it's biased because it doesn't optimise for AMD cards is insane; that's the whole point of it being a benchmark program: otherwise it only measures performance relative to other cards that share the same architecture. Although perhaps that is what AMD stans want now.
 
But to claim that it's biased because it doesn't optimise for AMD cards is insane; that's the whole point of it being a benchmark program.

I suppose that is exactly why it allows turning on/off features that give significant performance gains, like tessellation (for AMD) and DOF (for Nvidia), which completely invalidates their comparability, right?

Honestly, they could have easily added a code path that actually takes advantage of DX12 features the way DX12 games will, and just left it optional.

As is, their 'baseline DX12' code path is just pure shit, a DX11 benchmark dressed up as a DX12 benchmark to please their Nvidia overlords.
 

ethomaz

Banned
I suppose that is exactly why it allows turning on/off features that give significant performance gains, like tessellation (for AMD) and DOF (for Nvidia), which completely invalidates their comparability, right?
It is AMD/nVidia that forces these features on/off at the driver level... not 3DMark's fault... I don't think the app even knows these features are disabled at the driver level, because it's a one-way path (the app sends instructions to the driver and the driver to the hardware, not the other way around).

BTW it is a benchmark that needs to run the same code to gauge performance between cards... so they are doing the right thing here... the complaint is pure shit. They could, for example, have optimized the bench for DX12_1 and left AMD with worse performance, but they didn't... they chose to be unbiased (unless you have some proof of the opposite).
 
They could, for example, have optimized the bench for DX12_1 and left AMD with worse performance, but they didn't... they chose to be unbiased (unless you have some proof of the opposite).

Actually this response itself is a great example of pure shit because they couldn't optimize a DX12_1 bench and leave AMD with worse performance. You really don't seem to know what you are talking about. Like, at all.

EDIT: Regardless, for all intents and purposes, Time Spy has basically invalidated itself as a credible gauge of DX12 performance; it's a travesty. It will just be a tool for Nvidia-aligned review sites to let Pascal and Maxwell cards retain some form of dignity on the DX12 front.

Which is like... history just repeats itself. Remember the first DX9 3DMark, 3DMark03?

We all remember how that actually turned out. Fuck you Futuremark.
 
Actually this response itself is a great example of pure shit because they couldn't optimize a DX12_1 bench and leave AMD with worse performance. You really don't seem to know what you are talking about. Like, at all.

explain

please explain

you talk in platitudes as if they were self-evident, when they are not

I can imagine a hypothetical scenario where Futuremark added a specific effect or rendering feature that uses Conservative Rasterization (a 12_1 feature) and is thus a great deal slower on AMD.
 

Kysen

Member
it was never just "the async compute boost". The DX12 and Vulkan renderers allow major steps up over their OpenGL and DX11 drivers in other areas.

Yep, this is the conclusion I am coming to. The AMD drivers for DX11 and OpenGL are poor, which is why the jump with DX12 is so big.
 

cirrhosis

Member
I think what hurts AMD fans the most is that the RX 480 isn't beating not just the 1080 but even the 1060 in that benchmark.

thread in a nutshell

can't say i expected anything else really. another day, another amd thread.

never change, GAF
 

ethomaz

Banned
Actually this response itself is a great example of pure shit because they couldn't optimize a DX12_1 bench and leave AMD with worse performance. You really don't seem to know what you are talking about. Like, at all.
Yeap... I don't know.

Just try to make a code path using DX12_1 for nVidia and compare it with a code path doing the same thing in DX12_0 for AMD... come back and post the results please ;)
 
I can imagine a hypothetical scenario where Futuremark added a specific effect or rendering feature that uses Conservative Rasterization (a 12_1 feature) and is thus a great deal slower on AMD.

Which would in the worst case cause that rendering feature to run in software around 20% slower, or in the best case just be ignored or optimized for in the drivers, à la 3DMark Vantage, which nobody gives a fuck about when tessellation is off.
 

Durante

Member
A DirectX12 benchmark where its advantages are not being utilized.
That's utterly wrong. Which everyone who actually knows more about 3D rendering engine development than whatever is convenient to justify hardware biases should immediately realize.

This benchmark does things which would slow a DX11 pipeline to a crawl. You know how? By taking advantage of all the truly important and in-depth API-level changes in DX12.

Which is also what actual Futuremark developers are telling people. But clearly, the only fair benchmark is the one which is completely tailored to one particular hardware architecture at the readily apparent expense of everything else.
 
This benchmark does things which would slow a DX11 pipeline to a crawl. You know how? By taking advantage of all the truly important and in-depth API-level changes in DX12.

It also purposefully does things that hamstring its performance on certain hardware in ways that games using DX12 would never do.

Are you actually telling us that its purpose as a gaming benchmark is fulfilled here? Or are we arguing void morals and semantics?
 

Durante

Member
It also purposefully does things that hamstring its performance on certain hardware in ways that games using DX12 would never do.
Bullshit.

Which would in the worst case cause that rendering feature to run in software around 20% slower
Do you have any idea what conservative rasterization is or what it does? You don't simply "run it in software". Depending on what it is used for, replicating in software the same thing that is done in hardware has an order-of-magnitude performance impact, not 20%. So it's a good thing a general benchmark utility doesn't use it, right?
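To make the point concrete: in DX12, conservative rasterization is a rasterizer-state flag baked into the pipeline state object and executed by fixed-function hardware on GPUs that report support for it. There is no cheap "software toggle"; emulating the effect means extra geometry or compute passes, which is where the order-of-magnitude cost comes from. A minimal sketch follows; EnableConservativeRaster is a hypothetical helper, and psoDesc is assumed to be an otherwise fully filled-in graphics PSO description.

```cpp
#include <d3d12.h>

void EnableConservativeRaster(D3D12_GRAPHICS_PIPELINE_STATE_DESC& psoDesc,
                              const D3D12_FEATURE_DATA_D3D12_OPTIONS& opts)
{
    if (opts.ConservativeRasterizationTier !=
        D3D12_CONSERVATIVE_RASTERIZATION_TIER_NOT_SUPPORTED)
    {
        // Hardware path: a single rasterizer-state flag on 12_1-capable GPUs.
        psoDesc.RasterizerState.ConservativeRaster =
            D3D12_CONSERVATIVE_RASTERIZATION_MODE_ON;
    }
    else
    {
        // No built-in fallback: unsupported hardware simply can't request it,
        // and any emulation has to be done by the application itself.
        psoDesc.RasterizerState.ConservativeRaster =
            D3D12_CONSERVATIVE_RASTERIZATION_MODE_OFF;
    }
}
```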
 

ethomaz

Banned
Which is also what actual Futuremark developers are telling people. But clearly, the only fair benchmark is the one which is completely tailored to one particular hardware architecture at the readily apparent expense of everything else.
The way people here are complaining, Futuremark will need to create a 3DMark AMD and a 3DMark nVidia, two different benchmarks whose scores can't be compared at all lol
 
Bullshit.

Yeah. I guess that "Bullshit." just explains away the disparity between this benchmark's results and DX12 games' benchmark results, the way other "Bullshit."s explained 3DMark03's crazy supposed DX9 scores.

The way people here are complaining, Futuremark will need to create a 3DMark AMD and a 3DMark nVidia, two different benchmarks whose scores can't be compared at all lol

They already have. Vantage with tessellation off for AMD and Vantage with DOF off for nVidia. Which makes this whole thing and the benchmarking morality explanation Futuremark provides even more hypocritical.
 

Durante

Member
Yeah. I guess that "Bullshit." just explains away the disparity between this benchmark's results and DX12 games' benchmark results
What disparity? Whenever I've seen the possibility to measure the impact of asynchronous compute separately outside of a microbenchmark, 10-15% is what the advantage on GCN was. Which is also what it is in Time Spy, surprise.

So what in god's name are you actually, specifically talking about when you make accusations as specific and severe as claiming that it "purposefully does things that hamstring its performance on certain hardware"?
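For readers following the async compute part of this argument, here is roughly what "async compute" means at the API level: work submitted on a second, compute-only queue that the GPU may overlap with graphics work. A minimal sketch, assuming an already-created ID3D12Device; CreateAsyncComputeQueue is a hypothetical helper, and the API itself guarantees nothing about how much overlap (and therefore how much of the 10-15%) the hardware scheduler actually delivers.

```cpp
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

HRESULT CreateAsyncComputeQueue(ID3D12Device* device,
                                ComPtr<ID3D12CommandQueue>& computeQueue,
                                ComPtr<ID3D12Fence>& fence)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type     = D3D12_COMMAND_LIST_TYPE_COMPUTE;      // compute-only queue, separate from DIRECT
    desc.Priority = D3D12_COMMAND_QUEUE_PRIORITY_NORMAL;

    HRESULT hr = device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));
    if (FAILED(hr)) return hr;

    // A fence is still needed so the graphics queue only waits at the point
    // where it actually consumes the compute results.
    return device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));
}
```

Typical use would be: record dispatches on a COMPUTE command list, execute them on this queue, Signal the fence, and have the graphics queue Wait on that fence value right before it reads the output.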
 

twobear

sputum-flecked apoplexy
Yeah. I guess that "Bullshit." just explains away the disparity between this benchmark's results and DX12 games' benchmark results, the way other "Bullshit."s explained 3DMark03's crazy supposed DX9 scores.

So now you're accusing nVidia of tampering with their drivers instead of Futuremark deliberately instituting an nVidia-friendly DX12 benchmark. Does nVidia's nefarious moneyhatting never end???
 

ethomaz

Banned
They already have. Vantage with tessellation off for AMD and Vantage with DOF off for nVidia. Which makes this whole thing and the benchmarking morality explanation Futuremark provides even more hypocritical.
Proof?

3DMark Vantage has both on for both cards. If AMD/nVidia force them to be disabled at the driver level, then what can Futuremark do? lol

See, what you say makes no sense... users or companies that disable these features at the driver level are cheating the benchmark.
 
I can imagine a hypothetical scenario where Futuremark added a specific effect or rendering feature that uses Conservative Rasterization (a 12_1 feature) and is thus a great deal slower on AMD.

I actually would like an independent vendor to produce a benchmark of this type, because I'm really curious what uplift the 12_1 features actually bring performance-wise to Maxwell/Pascal.
 

dr_rus

Member
So you figured your best bet was to continue to hang out in the thread and make what few users do respect you, respect you less?

Ace.
Yes, I figured just that - I'll continue to hang out in this thread until even you understand the stupidity of this whole noise around TS. Note that it's you who started moving the thread into a discussion of my persona instead of contributing anything of value. So let me tell you that my respect for you is somewhere in the negatives because of this.

At best you could claim that the benchmark doesn't correspond to a real-world scenario where optimisations for different cards are likely to exist.

This is somewhat moot though. There isn't a lot of indication that DX12 games will even use vendor-specific optimizations, let alone different paths for different hardware. That is both budget-heavy and prone to issues on future hardware, where you may actually need to add a 3rd, 4th, etc. path to provide the best possible performance.

What DX12 games we have now are hardly optimized for each DX12-capable GPU on the market. Most of them are optimized solely for AMD because AMD paid for this. RoTTR was optimized solely for NV for the same reason, but has been getting AMD optimizations lately - possibly because of a nearing PS4 release, which allows Nixxes to transfer some of the renderer work there back to PC. QB? Lol, ok. What else? GearsUE? Doubtful that it uses different paths for NV and AMD; otherwise it wouldn't have had the launch issues it had.

So from what we have now, an approach where the DX12 renderer contains only one path, optimized either for one vendor or for all of them, seems to be the prevalent scenario. This actually puts TS in line with the real-world DX12 games we have, as it's using the same approach.

It also purposefully does things that hamstring its performance on certain hardware in ways that games using DX12 would never do.
Name these things.

Are you actually telling us that its purpose as a gaming benchmark is fulfilled here? Or are we arguing void morals and semantics?
Yes. It actually shows a rather objective comparative picture between NV and AMD cards in DX12 in the case where the s/w in question is equally optimized for both IHVs, instead of being skewed in one IHV's favor.

Which would in the worst case cause that rendering feature to run in software around 20% slower, or in the best case just be ignored or optimized for in the drivers, à la 3DMark Vantage, which nobody gives a fuck about when tessellation is off.

20% slower is about two times more than what AMD cards are getting from the fabled async compute on average. I'd also like to know where you got that 20% from.
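On the "different paths for different hardware" point above: detecting the vendor is trivial, so the real cost of per-vendor paths is writing and maintaining the paths themselves. A minimal sketch of what the branching would even look like; IdentifyVendor is a hypothetical helper, the adapter is assumed to be the IDXGIAdapter1 the device was created on, and the constants are the standard PCI vendor IDs.

```cpp
#include <dxgi.h>

enum class GpuVendor { Nvidia, Amd, Intel, Other };

GpuVendor IdentifyVendor(IDXGIAdapter1* adapter)
{
    DXGI_ADAPTER_DESC1 desc = {};
    if (FAILED(adapter->GetDesc1(&desc)))
        return GpuVendor::Other;

    switch (desc.VendorId) {              // PCI vendor IDs
        case 0x10DE: return GpuVendor::Nvidia;
        case 0x1002: return GpuVendor::Amd;
        case 0x8086: return GpuVendor::Intel;
        default:     return GpuVendor::Other;
    }
}

// A per-vendor renderer would switch on this to pick, say, a different barrier
// or async-compute strategy; a single-path renderer (or benchmark) runs the
// same code on everything regardless of the answer.
```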
 

Mabufu

Banned
If optimization for AMD architecture >>>> optimization for Nvidia, this bench is bullshit.

If that's not the case, and Nvidia's architecture is as capable as AMD's in DX12, I'll come here and eat lots of crow.

Sincerely, I don't see much sense in doing a benchmark that deactivates the things that will be in the games.
 

ethomaz

Banned
Sincerely, I don't see much sense in doing a benchmark that deactivates the things that will be in the games.
You mean things that will only be in games for one GPU brand??? That is the point... you can't compare performance between something and nothing.
 

Mabufu

Banned
You mean things that will only be in games for one GPU brand??? That is the point... you can't compare performance between something and nothing.

No. I mean the potential optimization for AMD being bigger than Nvidia's, in a context where AMD's architecture is better suited to DX12's capabilities.
 
http://radeon.com/radeon-wins-3dmark-dx12/ - I don't know how much more approval AMD can give 3DMark, short of having Raja Koduri call you on the phone and say he approves.

AMD cards get the expected bump in performance from async compute support over Maxwell cards, pushing them past Maxwell in DX12. The RX 480 is now at GTX 980 level instead of being closer to a 970, the Fury X soundly beats the 980 Ti, and the 390 soundly beats the 970. This is exactly the expected outcome using DX12 + async over Maxwell GPUs, and the independently tested gain that async alone provides matches what has been seen in actual games so far.
 

dr_rus

Member
Sincerely, I don't see much sense in doing a benchmark that deactivates the things that will be in the games.

What things?
 

dogen

Member
http://radeon.com/radeon-wins-3dmark-dx12/ - I don't know how much more approval AMD can give 3DMark, short of having Raja Koduri call you on the phone and say he approves.

It's far more than approval. Every GPU vendor has heavy involvement in the development process.

https://m.youtube.com/watch?feature=youtu.be&v=8OrHZPYYY9g

In this video they explain that not only do AMD, Nvidia, Futuremark and Intel extensively discuss exactly what goes into the benchmark (everyone has to agree on everything included, btw), but they all have access to the source code repository and review every single line of code. Multiple times, even. Futuremark even said that they've discussed including vendor-specific optimizations, and the hardware vendors have said that a neutral benchmark is more useful for them.

Anyone who still objects at this point is literally in denial of reality. If they want an AMD optimized tech demo they should ask AMD for one.
 