
3DMark Time Spy rigged in Nvidia's favor?

Renekton

Member

https://www.reddit.com/r/nvidia/comments/4thlwx/futuremarks_time_spy_directx_12_benchmark_rigged/

Edit. Adapted from btgorman:
[1] 3DMark releases a benchmark called Time Spy that is touted as an ideal DX12 benchmark.

[2] This DX12 benchmark allegedly targets the lowest GPU functionality denominator and does not use AMD hardware to its full capacity.

[3] There is a GDC slide stating that "correct" DX12 requires specific code for Nvidia, AMD, etc. Therefore a "correct" DX12 benchmark/game should not be vendor neutral, but rather have custom code for Nvidia and custom code for AMD.

[4] Given the above, the 3DMark benchmark seems to favor Nvidia.

[5] Futuremark claims this is a neutral approach.
 

Buggy Loop

Member
Perhaps you should clearly state what is happening, since you're the OP.

From what I can tell

[1] 3DMark releases a DX12 benchmark called Time Spy that is "vendor neutral".

[2] This DX12 benchmark targets the lowest GPU functionality denominator and does not use AMD hardware to its full capacity.

[3] There is a GDC slide stating that "correct" DX12 requires specific code for Nvidia, AMD, etc. Therefore a "correct" DX12 benchmark/game should not be vendor neutral, but rather have custom code for Nvidia and custom code for AMD.

[4] Given the above, the 3DMark benchmark favors Nvidia.



Seriously, this industry is making me facepalm..

3DMark used to be a showcase of features from the latest DirectX version, typically before devs would use them, not the lowest GPU feature denominator. WTF is this?
 

dr_rus

Member
Short answer: no.

Long answer: read the linked discussion through to the end if you want to know what the people who actually understand how this stuff works think about it.

As for the separate-paths vs. one-path contention - this is where a benchmark and a game can and will differ in their approach. A benchmark should use one path on all h/w if it is to be an accurate indication of the comparative performance of that h/w in that particular workload. A game should use whatever it can to provide the maximum possible performance for all gamers out there. Thus what is a limitation for a benchmark isn't one for a game engine.

A nice example of different rendering paths would be Doom's Vulkan renderer. In its current beta stage it uses async compute and shader intrinsic functions on GCN h/w while it uses neither on NV's h/w, making it run different workloads which result in wildly different performance between IHVs/APIs. But even when (if?) they add shader intrinsics for NV h/w, these will be completely different from what it uses on AMD's h/w, since they are based on the h/w architecture (basically a low-level extension of the shader language for a particular h/w architecture). So even in that case it will in fact be running two different rendering paths to achieve optimal performance on different h/w.
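To make the "two different rendering paths" idea concrete, here is a minimal, hypothetical C++/Vulkan sketch of how an engine might check at startup whether an AMD shader-intrinsic extension is exposed by the driver and pick a per-vendor shader path. The function names and the path enum are invented for illustration; this is not Doom's actual code.

```cpp
// Hypothetical sketch only (not Doom's actual code): detect whether the
// driver exposes an AMD-specific shader-intrinsic extension and pick a
// rendering path accordingly. Assumes a VkPhysicalDevice was already
// obtained from an existing VkInstance.
#include <vulkan/vulkan.h>
#include <cstring>
#include <vector>

enum class ShaderPath { Generic, GcnIntrinsics };   // invented for illustration

bool SupportsAmdShaderIntrinsics(VkPhysicalDevice gpu) {
    // Query the device-level extensions exposed by the installed driver.
    uint32_t count = 0;
    vkEnumerateDeviceExtensionProperties(gpu, nullptr, &count, nullptr);
    std::vector<VkExtensionProperties> exts(count);
    vkEnumerateDeviceExtensionProperties(gpu, nullptr, &count, exts.data());

    // VK_AMD_gcn_shader is one of the AMD shader-intrinsic extensions.
    for (const VkExtensionProperties& e : exts)
        if (std::strcmp(e.extensionName, "VK_AMD_gcn_shader") == 0)
            return true;
    return false;
}

ShaderPath PickShaderPath(VkPhysicalDevice gpu) {
    // The engine would compile different shader variants per path.
    return SupportsAmdShaderIntrinsics(gpu) ? ShaderPath::GcnIntrinsics
                                            : ShaderPath::Generic;
}
```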

One could also argue that Futuremark's decision to not use anything above DX12 FL11_0 is actually in favor of AMD's h/w since some features of FL12_1 supported by Maxwell/Pascal could have resulted in a performance gain on them compared to the current "common rendering path". So if you're really into it you could go and start another thread which should be named "3DMark Time Spy rigged in AMD's favor?"

http://www.futuremark.com/pressreleases/a-closer-look-at-asynchronous-compute-in-3dmark-time-spy

Futuremark released an article in response. By catering to the lowest denominator (Nvidia) for async compute, they claim it is a neutral approach.

Did you actually read the statement? Where does it say "catering to lowest denominator (Nvidia) for Async Compute"?
 

Bolivar687

Banned
Doesn't sound like it will be very indicative of DX12 performance, especially when you consider that most DX12 games are also AMD sponsored.
 
Eh... Eh...
The reason for using IHV-specific code is to access things from certain feature levels (FL 12_0 vs FL 12_1 vs FL 12_2) or vendor-specific extensions pending addition to DX12.

The command to send work to a compute queue is the same for all IHVs, so you wouldn't write vendor-specific async compute code; there currently isn't a way to do that. You set up a command list on a thread and send it to the GPU via ExecuteCommandLists.
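For reference, here is a minimal D3D12 sketch of that submission flow (not Time Spy's code; the device, compute PSO, and root signature are assumed to already exist, and error handling is omitted). The calls are the same for every vendor; whether the compute work overlaps the graphics work is the driver's decision.

```cpp
// Illustrative sketch (not Time Spy's code): submitting work to a separate
// COMPUTE queue in D3D12. The same calls are used regardless of IHV; whether
// the work actually overlaps the direct (graphics) queue is the driver's
// decision. The device, compute PSO and root signature are assumed to exist;
// error handling is omitted.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void SubmitAsyncCompute(ID3D12Device* device,
                        ID3D12PipelineState* computePso,
                        ID3D12RootSignature* computeRootSig)
{
    // 1. Create a dedicated compute queue.
    D3D12_COMMAND_QUEUE_DESC queueDesc = {};
    queueDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&queueDesc, IID_PPV_ARGS(&computeQueue));

    // 2. Record a compute command list (this can happen on any CPU thread).
    ComPtr<ID3D12CommandAllocator> allocator;
    device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_COMPUTE,
                                   IID_PPV_ARGS(&allocator));
    ComPtr<ID3D12GraphicsCommandList> cmdList;
    device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_COMPUTE,
                              allocator.Get(), computePso,
                              IID_PPV_ARGS(&cmdList));
    cmdList->SetComputeRootSignature(computeRootSig);
    cmdList->Dispatch(64, 64, 1);            // some compute workload
    cmdList->Close();

    // 3. Submit. This *allows* parallel execution alongside the direct queue;
    //    it does not demand it.
    ID3D12CommandList* lists[] = { cmdList.Get() };
    computeQueue->ExecuteCommandLists(1, lists);
}
```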

DX12 is low level in the same way C++, Objective-C, and D are low level languages. You're operating pretty close to the hardware, but you're not determining how the hardware ultimately does its job.
 

dr_rus

Member
That's my own interpretation of it so far.

How did you come to this interpretation? The statement clearly shows that there is no application-level difference in work submission for any card used.

Eh... Eh...
The reason for using IHV-specific code is to access things from certain feature levels (FL 12_0 vs FL 12_1 vs FL 12_2) or vendor-specific extensions pending addition to DX12.

Not IHV-specific either. Any IHV can support any feature level with their h/w. FL12_1 is supported not only by NV but also by Intel iGPUs starting with Skylake, and I'm pretty sure those need a different set of optimizations.
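As a hedged illustration of that point, an application can simply ask the driver which feature level the device supports instead of assuming it from the vendor. A minimal D3D12 sketch, with error handling omitted:

```cpp
// Illustrative sketch: ask the driver which feature level the device supports
// instead of assuming it from the vendor. Error handling omitted.
#include <d3d12.h>

D3D_FEATURE_LEVEL QueryMaxFeatureLevel(ID3D12Device* device)
{
    const D3D_FEATURE_LEVEL requested[] = {
        D3D_FEATURE_LEVEL_12_1,
        D3D_FEATURE_LEVEL_12_0,
        D3D_FEATURE_LEVEL_11_1,
        D3D_FEATURE_LEVEL_11_0,
    };

    D3D12_FEATURE_DATA_FEATURE_LEVELS levels = {};
    levels.NumFeatureLevels        = sizeof(requested) / sizeof(requested[0]);
    levels.pFeatureLevelsRequested = requested;

    device->CheckFeatureSupport(D3D12_FEATURE_FEATURE_LEVELS,
                                &levels, sizeof(levels));
    return levels.MaxSupportedFeatureLevel;   // reported per device, not per vendor
}
```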
 

Spinifex

Member
ah yes, this is the kind of neutrality CNN displays when having on a climate scientist and a chemtrails theorist. Gotta be neutral! ¯\_(ツ)_/¯
 

TSM

Member
There was no way Futuremark could win this one. If they had instead maximized for each vendor and Nvidia came out on top there would be nothing but whining about how Futuremark was in bed with Nvidia rigging the benchmark for Nvidia to come out on top. It looks to me like they tried to be neutral by avoiding that whole situation with vendor neutral code. I'm not sure why AMD fans think they would have come out on top if Nvidia engineers had given Futuremark help maximizing for their hardware.
 
Use actual game benchmarks and not synthetics when deciding which card to purchase. DOOM Vulkan benchmarks are probably the best low-level benchmarks atm.
 
There was no way Futuremark could win this one. If they had instead maximized for each vendor and Nvidia came out on top there would be nothing but whining about how Futuremark was in bed with Nvidia rigging the benchmark for Nvidia to come out on top. It looks to me like they tried to be neutral by avoiding that whole situation with vendor neutral code. I'm not sure why AMD fans think they would have come out on top if Nvidia engineers had given Futuremark help maximizing for their hardware.

I also question the purpose of a benchmark with vendor-specific code. The purpose of a benchmark isn't to show how your hardware is going to do under optimized conditions. It's to show how your card is going to do in the average case, in which code is written to be as vendor-unspecific as possible unless it takes advantage of a hardware feature not present on other hardware. That is, developers aren't likely to optimize async compute to fully use AMD's or Nvidia's hardware. However, they will take advantage of tiled rendering or multi-projection or any number of features from DX 12_1 or 12_2 where possible.
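As an aside, optional features like those are generally enabled wherever the device reports them, via per-device capability queries rather than per-vendor code (multi-projection is exposed through vendor APIs rather than a D3D12 cap bit, so it is not shown here). A minimal, illustrative D3D12 sketch; the OptionalCaps struct and decision logic are invented, the queried fields are D3D12's:

```cpp
// Illustrative sketch: optional D3D12 caps are reported per device via
// CheckFeatureSupport, so an engine enables them wherever the hardware says
// they exist. The OptionalCaps struct is invented for illustration.
#include <d3d12.h>

struct OptionalCaps {
    bool conservativeRaster;
    bool rasterizerOrderedViews;
    bool tiledResources;
};

OptionalCaps QueryOptionalCaps(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                &opts, sizeof(opts));   // error handling omitted

    OptionalCaps caps = {};
    caps.conservativeRaster = opts.ConservativeRasterizationTier !=
                              D3D12_CONSERVATIVE_RASTERIZATION_TIER_NOT_SUPPORTED;
    caps.rasterizerOrderedViews = opts.ROVsSupported != FALSE;
    caps.tiledResources = opts.TiledResourcesTier !=
                          D3D12_TILED_RESOURCES_TIER_NOT_SUPPORTED;
    return caps;
}
```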
 
How is using a slide valid? Would it not be much better to reference official guidelines/documentation?

Overall, the argument seems to have some validity. (From my lay perspective.)
 
There was no way Futuremark could win this one. If they had instead maximized for each vendor and Nvidia came out on top there would be nothing but whining about how Futuremark was in bed with Nvidia rigging the benchmark for Nvidia to come out on top. It looks to me like they tried to be neutral by avoiding that whole situation with vendor neutral code. I'm not sure why AMD fans think they would have come out on top if Nvidia engineers had given Futuremark help maximizing for their hardware.

Yup. Optimization for a particular vendor is a far worse look than what they did now, which is just a neutral implementation. The premise of this thread is wrong.
 

TSM

Member
Overall, the argument seems to have some validity. (From my lay perspective.)

If you have been following all the "controversy" over the new DX12 test, then you would have read about all the fanboys initially being livid because they thought Futuremark had written an Nvidia-specific code path that gave them better performance. Now that it turns out the opposite is true, they are livid because apparently the only right way to do it is to have an Nvidia-specific code path (and an AMD one as well, obviously). Futuremark was not going to be on the winning side of this either way.
 

Renekton

Member
I also question the purpose of a benchmark with vendor-specific code. The purpose of a benchmark isn't to show how your hardware is going to do under optimized conditions. It's to show how your card is going to do in the average case, in which code is written to be as vendor-unspecific as possible
The GDC DX12 slide said that if you're going to be vendor-unspecific you might as well use DX11, implying that doing so misses the point of DX12. Not sure if that was taken out of context here.

If you have been following all the "controversy" over the new DX12 test, then you would have read about all the fanboys initially being livid because they thought Futuremark had written an Nvidia-specific code path that gave them better performance.
Well, it's more that they suspected it de-emphasized AMD's async compute capability via the lowest denominator in its "neutral" code path.
 

Head.spawn

Junior Member
Why would an unbiased benchmark optimize code in either direction? That seems like it would defeat the entire purpose.
 

DieH@rd

Banned
So they are intentionally ignoring the computational capability of GCN.

Stupidity. A benchmark should take full advantage of GPU cards so that we can know what they are truly capable of.
 

dogen

Member
So they are intentionally ignoring the computational capability of GCN.

Stupidity. A benchmark should take full advantage of GPU cards so that we can know what they are truly capable of.

Seems like they do a pretty good amount of async compute to me.

 

dr_rus

Member
So they are intentionally ignoring the computational capability of GCN.

Stupidity. A benchmark should take full advantage of GPU cards so that we can know what they are truly capable of.

They are not. Stupidity is what's happening in this thread.
 
Seems pretty fair to me. The feature level 11_0 baseline doesn't take advantage of the full capabilities of Nvidia GPUs either (12_1 features), although we have no idea how performant those are.
 

Jebusman

Banned
So by not favoring AMD they are favoring Nvidia?

This is how videocard wars are fought. Either side will complain if they're not getting the treatment they believe they deserve (aka, the graph shows their product being the better one).

It just goes to show the difficulty in trying to benchmark performance on 2 different devices that want to handle graphics in 2 different ways, while pleasing everyone.

If product A performs using method X or method Y, and product B performs using either method X or method Z, coding for X is the only "fair" method because it's the only one both cards could do.

People will argue that product B's "method Z" runs faster than its "method X", or that product A's "method Y" is faster as well, but then comparing "AY and BZ" isn't really a direct comparison between them, which is what 3DMark is trying to be.

So unless they want to separate the benchmarks for Nvidia and AMD cards into their own categories (in which case the only use would be comparing cards from the same vendor, which isn't as useful), there's not much they can do but target the lowest common denominator.
 
You could say that a lot of DX12's benefits come from lower-level access to GPUs, which leads to IHV-specific code to get the most benefit. So in essence, Time Spy not utilizing any IHV-specific rendering paths limits it to the generic benefits of DX12 applicable to both sides. Given how much effort AMD and Nvidia push into their sponsored titles, more and more games tend to be optimized for one vendor over the other, so Time Spy probably isn't indicative of how most demanding AAA games are developed.

All in all, I'd say 3DMark is becoming a less and less useful tool for comparing graphics performance as it relates to actual games. They might go for the lowest common denominator and ignore the rest, but game developers won't.
 
All I've taken from everything I've read is a bunch of very angry people attacking 3DMark while having no idea how any of this works. I don't have any idea how any of this works either, so I have no opinion on the subject yet.
 

pottuvoi

Banned
I do not see how they could have done it differently.
Perhaps by writing a path for every IHV and running all paths on all hardware capable of running them.

That certainly would have produced lots of interesting information, but I'm not sure how feasible a coding challenge it would be. (>7 paths, with and without extensions for each IHV, plus a baseline?)
 
Lol, I wouldn't have written "rigged in Nvidia's favor?" - that sounds too partisan and slanted - but there is a nugget of truth in there: you are supposed to do low-level optimizations for the GPUs. That's one of the big advantages of DX12/Vulkan, the amount of gains you can get by doing it; otherwise you are leaving tons of possible extra fps on the table.

In fact that's the 'dark side' of the new APIs: all the super numbers, all the promises they've made are only true if the programmers do a good, extensive job. Just changing from DX11 to DX12 isn't going to give you a magical extra 40% performance. And we all know not all developers will do it.
 

TSM

Member
Lol, I wouldn't have written "rigged in Nvidia's favor?" - that sounds too partisan and slanted - but there is a nugget of truth in there: you are supposed to do low-level optimizations for the GPUs. That's one of the big advantages of DX12/Vulkan, the amount of gains you can get by doing it; otherwise you are leaving tons of possible extra fps on the table.

In fact that's the 'dark side' of the new APIs: all the super numbers, all the promises they've made are only true if the programmers do a good, extensive job. Just changing from DX11 to DX12 isn't going to give you a magical extra 40% performance. And we all know not all developers will do it.

As soon as the benchmark isn't using common code, you can no longer directly compare Nvidia hardware to AMD hardware. Using separate code paths essentially turns this into two separate benchmarks: one for Nvidia cards and one for AMD cards.
 
So they are intentionally ignoring the computational capability of GCN.

Stupidity. A benchmark should take full advantage of GPU cards so that we can know what they are truly capable of.

The Futuremark page reads:

The implementation is the same regardless of the underlying hardware. In the benchmark at large, there are no vendor specific optimizations in order to ensure that all hardware performs the same amount of work. This makes benchmark results from all vendors comparable across multiple generations of hardware.

Whether work placed in the COMPUTE queue is executed in parallel or in serial is ultimately the decision of the underlying driver. In DirectX 12, by placing items into a different queue the application is simply stating that it allows execution to take place in parallel - it is not a requirement, nor is there a method for making such a demand. This is similar to traditional multi-threaded programming for the CPU - by creating threads we allow and are prepared for execution to happen simultaneously. It is up to the OS to decide how it distributes the work.
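To illustrate the CPU analogy in that last paragraph, here is a trivial C++ sketch (purely illustrative, not from Futuremark's statement): spawning threads permits parallel execution, but the scheduler decides whether it actually happens.

```cpp
// Purely illustrative: creating threads allows, but does not require,
// parallel execution -- the OS scheduler decides, much like a D3D12 driver
// decides how to schedule work placed on a COMPUTE queue.
#include <thread>
#include <vector>

void do_work_item(int id) { (void)id; /* independent piece of work */ }

int main() {
    std::vector<std::thread> workers;
    for (int i = 0; i < 4; ++i)
        workers.emplace_back(do_work_item, i);   // may or may not run concurrently
    for (std::thread& t : workers)
        t.join();
    return 0;
}
```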

The graphs they present show asynchronous compute workloads behaving as expected: items originally on the direct queue are thrown into the compute queue to fill gaps. Indeed, in the DX12 benchmarks for Time Spy we see the following (courtesy of PCPer):

[Image: PCPer graph of Time Spy gains with async compute on Maxwell, Pascal, and GCN]


Maxwell doesn't benefit, Pascal benefits a little, and GCN benefits moderately.

So what exactly is the controversy here? GCN is better at it than Pascal is and it shows. Maxwell continues to get no benefits. I can only imagine that people are used to seeing AMD cards get insane 50% boosts from DX11 > DX12 because their DX11 drivers are simply less efficient than the Nvidia equivalents - it was never just "the async compute boost". The DX12 and Vulkan renderers allow major steps up over their OpenGL and DX11 drivers in other areas.
 

dr_rus

Member
The gains from async in TS on Radeons are also pretty close to what they gain in games where you can check this by toggling the async option.

So, as I've said - the stupidity.
 
The gains from async in TS on Radeons are also pretty close to what they gain in games where you can check this by toggling the async option.

So, as I've said - the stupidity.

10 to 15% is probably what you can expect going forward. I don't see how anyone can have a problem with this benchmark.
 

twobear

sputum-flecked apoplexy
So wait the argument is

'The benchmark is biased against AMD because it isn't tailored to test precisely the things that AMD is faster at?'

I...
 

Mivey

Member
This is one of the most embarrassing recent "controversies" in PC technology. Jesus.
Yeah, the only scandal here is the confirmation bias displayed by average users.
Sad that even this probably doesn't help AMD actually sell more hardware, especially now that the 1060's early release seems to have taken them by surprise.
 
Use actual game benchmarks and not synthetics when deciding which card to purchase. DOOM Vulkan benchmarks are probably the best low-level benchmarks atm.

Yeah.. Curiously, a good deal of (supposedly trustworthy) review outlets that went to great lengths to measure the RX 480's PCIe power draw simply fell back to benchmarking Doom on OpenGL for their 1060 reviews.

Curious.. Not. Plain fucked. Fuck this industry and its moneyhatted review sites that will do anything for their free review hardware.
 
Yeah.. Curiously, a good deal of (supposedly trustworthy) review outlets that went to great lengths to measure the RX 480's PCIe power draw simply fell back to benchmarking Doom on OpenGL for their 1060 reviews.

Curious.. Not. Plain fucked. Fuck this industry and its moneyhatted review sites that will do anything for their free review hardware.

"Money hatted review sites".

As a reminder, doom Vulkan is currently kind of imperfect as it does not have FCAT, the ability to externally capture via fraps or RTSS, and on NV you cannot currently disable Vsync. It is not exactly perfectly ready for benches in the traditional sense.
 

cyen

Member
"Money hatted review sites".

As a reminder, doom Vulkan is currently kind of imperfect as it does not have FCAT, the ability to externally capture via fraps or RTSS, and on NV you cannot currently disable Vsync. It is not exactly perfectly ready for benches in the traditional sense.

Vsync can be disabled, at least on a 1080. It's the only way to play, because it stutters like hell with Vsync enabled on Vulkan.
 
Wuh? I have seen more people saying the Vsync setting doesn't do anything at all.

Vsync can be disabled, at least on a 1080. It's the only way to play, because it stutters like hell with Vsync enabled on Vulkan.

The Vsync option does not do anything other than remove the frame cap when set to off; the frames are always synced to the refresh rate. There is no tearing regardless of what you select.
 