
3DMark releases new DX12 benchmark - Time Spy

Durante

Member
Yeah, makes sense. Cool, thanks for the explanation. I guess some of the 1080s really just aren't boosting very high with no OC.
The Gamerock in that post in particular is already at a stable 1974 MHz without OC according to the CB review, so add that OC and you're up to ~2050 MHz, which makes sense for the score (7681). My more "modest" Inno3D runs at ~2GHz in Time Spy with a +150 OC, and I got a graphics score of 7554.

What can certainly be said is that the graphics test does a good job of being predominantly GPU-limited, and is quite consistent, given that such small differences in clock are actually visible in the final score.
 

ToD_

Member
I'm pretty happy with the results. It's clear my CPU needs to be upgraded, however. The good ol' i5-2500k (@4.5) is showing its age.

[Time Spy result screenshot]
 

justjim89

Member
[Time Spy result screenshot]


Is this good? These numbers seem so arbitrary and I don't know what they mean. Running a 1070 and a 6600K with no overclocking. I was only getting about 30 fps on average in the graphics tests.
 

Haunted

Member
Can we take a step back from the penis measuring and appreciate that a museum of past 3Dmark benchmarks is an awesome idea for the setting? Good job, Futuremark.

Computerbase has an article up:
[ComputerBase Time Spy GPU comparison chart]


Nothing really surprising in the placement of these GPUs.
3505 with a 970, seems to fit right in.
 

ZOONAMI

Junior Member
[Time Spy result screenshot]


Is this good? These numbers seem so arbitrary and I don't know what they mean. Running a 1070 and a 6600K with no overclocking. I was only getting about 30 fps on average in the graphics tests.

For a stock 1070 and no OC on the i5, yes, you're right in line. If you OC your CPU you could get closer to a 6k total score, and an OC on the GPU would get you closer to or above a 6k graphics score.
 

GashPrex

NeoGaf-Gold™ Member
If you don't mind me asking, how much was your NUC and external setup all told?

Yikes - lots of money.

Skull Canyon - $658
32 GB DDR4 RAM - $129
512 GB Samsung 950 Pro - $334
1070 GTX - $399
Razer Core - $499

So $2000...

But I wanted to get rid of my dedicated gaming PC and still have a low power/always on small PC for everyday home office use. The external GPU was a good solution, but expensive.

Latest timespy

5342 Overall

5631 GPU

http://www.3dmark.com/spy/66961
 

ss_lemonade

Member
Lol, what was the point of the time spy jumping at the end of the demo? Felt so anticlimactic.

http://www.3dmark.com/spy/56063

Got around 3000 with a slightly oc'ed 780 and 3570k. Guess I should start increasing overclocks again and see how high I can go.

Anyone know why the app is not reading my gpu's clocks properly again? This happened before, then fixed itself and now it's happening again.

Can we take a step back from the penis measuring and appreciate that a museum of past 3Dmark benchmarks is an awesome idea for the setting? Good job, Futuremark.
The idea was great, but I don't know. I always feel like there's something off with these 3DMark benchmark demos. Not sure if it's just the low framerate, or if it's animation related, or something else. I think the one that impressed me the most was the old Nature one with the frog. Can't remember what version that was, but I think it was a DX9-exclusive demo.
 
+70 core from what though? There are 3rd party designs of the 1080 which boost up to ~2GHz reliably without any additional OC -- they are ~18% faster out of the box than a FE. Add a small additional OC, get 20% total, 6500*1.2=7800, mystery solved?

Yep, my Zotac 1080 OC version stays at 2GHz out of the box, and can maintain 2080+ MHz with minimal power, temp and voltage increases. It's a phenomenal improvement over the FE.
 
The test is being hammered on the Steam forums right now; some proper info can be found here: http://www.overclock.net/t/1605674/computerbase-de-doom-vulkan-benchmarked/220#post_25351958

Just LOL

So it seems like the whole basis of the callout is that Maxwell performance didn't tank with async on, therefore Time Spy isn't doing async. It was then explained by the 3DMark dev, along with a description of how they did async in Time Spy, that Maxwell's performance was a wash because async isn't enabled at the driver level on Maxwell-based cards.

It also seems like a whole lot of people don't understand what DX12's and Vulkan's main purpose is, which is lowering the performance overhead of the hardware by offering a lower-level API. Most of AMD's gains come from that, because their DX11 drivers were mediocre and their OGL drivers were dumpster garbage, which explains why they've seen decent DX12 gains but ridiculous Vulkan gains. The rest of the gains come from things like async and shader intrinsic functions. The people citing the 50% performance gain in Doom under Vulkan on AMD as proof that 3DMark isn't doing async properly hurt my brain.
 

TSM

Member
Yeah, the fanboys are going crazy about this benchmark. Some of them seem to think that all the AMD gains going from OpenGL to Vulkan or DX11 to DX12 are due solely to async compute. The users telling them that you have to compare DX12 with async vs DX12 without async, or Vulkan with async vs Vulkan without async, are being completely ignored. The best part is that their Steam moderator said it'd be Monday before the engineers are back to put out the fire by issuing a statement.
 

DonMigs85

Member
So did any Gaffers make it into the top 10? I know a guy who's rank 6 on this benchmark. Everyone has GTX 1080s in SLI.
 
The latest (and last, I guess) BIOS for my Gigabyte mobo has this issue where, if I leave everything on "auto", my uncore/NB frequency (as reported by CPU-Z in Windows) ends up significantly lower than it should be. I don't remember the exact number it dropped to, nor do I know exactly what uncore affects, but it significantly hurt my CPU performance.

If I manually set either the RAM or the CPU to the exact same value the automatic setting used, everything's fine: my NB frequency is as it should be and CPU performance is in line with others.

No idea if you have the same issue of course, but it's something to check.

Well, I found out that my ASRock Z97 Anniversary had my Turbo Boost set to the default 3.5GHz frequency for some reason. I manned up and followed an overclocking guide, and now at 4.4GHz my CPU score is 3543.

I sure hope my overclock is stable and doesn't kill my CPU!
 

RedRum

Banned
I just signed up. Still getting used to it.

Are the benchmark downloads free? If not, what's the best one to get?
 

yamo

Member
Intel Core i5-3570K Processor and NVIDIA GeForce GTX 1070

3DMark Score: 5302
Graphics Score: 6025
CPU Score: 3157
 

Wag

Member
5820k @ 4.3GHz, 3x 980Ti:

10,621 total.

Not sure why I'm scoring on the low-end of comparable setups. Some people with 2x 980Ti's are scoring higher.

I might need to reinstall Win10.
 
Finally bought 3DMark since it was on sale again. First time running this: 970 overclocked through Afterburner, which admittedly I don't use a whole lot.




EDIT: Second run was pretty much the same; GPU score went up three points, CPU score went down two, overall 3871.


EDIT 2: 3617 overall with a 3722 graphics score, with my GPU back to just the default boost of 1430/1417 and stock memory speed. CPU was down a bit too at 3119; I might've left my browser open...
 

DjRalford

Member
[Time Spy result screenshot - R9 Nano]


My little nano is not that far off the full fat Fury X, and 1 point above the 980 Ti in that table posted earlier in the thread.
 

Durante

Member
So it seems like the whole basis of the callout is that Maxwell performance didn't tank with async on, therefore Time Spy isn't doing async. It was then explained by the 3DMark dev, along with a description of how they did async in Time Spy, that Maxwell's performance was a wash because async isn't enabled at the driver level on Maxwell-based cards.

It also seems like a whole lot of people don't understand what DX12's and Vulkan's main purpose is, which is lowering the performance overhead of the hardware by offering a lower-level API. Most of AMD's gains come from that, because their DX11 drivers were mediocre and their OGL drivers were dumpster garbage, which explains why they've seen decent DX12 gains but ridiculous Vulkan gains. The rest of the gains come from things like async and shader intrinsic functions. The people citing the 50% performance gain in Doom under Vulkan on AMD as proof that 3DMark isn't doing async properly hurt my brain.
It's incredibly stupid.

I mean, it's ok not to understand how any of this works, but please don't get indignant when someone explains to you how it does.
 
Can we change the name to PissMark instead of 3D Mark? It just looks like everything is bathed in piss while a piss mist swirls through the air.
 
The test is being hammered on the Steam forums right now; some proper info can be found here: http://www.overclock.net/t/1605674/computerbase-de-doom-vulkan-benchmarked/220#post_25351958

Just LOL

So it seems like the whole basis of the callout is that Maxwell performance didn't tank with async on, therefore Time Spy isn't doing async. It was then explained by the 3DMark dev, along with a description of how they did async in Time Spy, that Maxwell's performance was a wash because async isn't enabled at the driver level on Maxwell-based cards.

It also seems like a whole lot of people don't understand what DX12's and Vulkan's main purpose is, which is lowering the performance overhead of the hardware by offering a lower-level API. Most of AMD's gains come from that, because their DX11 drivers were mediocre and their OGL drivers were dumpster garbage, which explains why they've seen decent DX12 gains but ridiculous Vulkan gains. The rest of the gains come from things like async and shader intrinsic functions. The people citing the 50% performance gain in Doom under Vulkan on AMD as proof that 3DMark isn't doing async properly hurt my brain.

Man, is that embarrassing.
 
Eh, works good enough for me in-game though.

Just a warning though. If your GPU isn't stable in benchmarks, you may run into problems later down the line in games. I remember OCing a gtx 460 quite heavily when I got it and then gradually turning down the clocks over the years as it aged because it kept crashing in new games that pushed it harder.
 

cyen

Member
Well, it seems that there is some discussion regarding the use of async in the Time Spy benchmark. There are several threads about this on overclock.net/reddit/Steam.
It seems that Maxwell cards are always running with async disabled (the driver disables async to avoid degrading performance) even if the application launches with async on; in my opinion, async-on Maxwell scores should be considered invalid.
Another question that was raised is that the benchmark is not using "true async" (I'm not a technical guy, so I don't know what true or false async is), since it's using context switches as recommended by Nvidia in its white papers to get the most out of async on their cards.
I'm not saying this is all true, but the debate is clearly open, since one of the guys from Futuremark has already escalated this internally to get a better explanation of what's happening in Time Spy.

Post #83

"..and just FYI so you don't think I'm just ignoring this thread:

Whole thread (and the Reddit threads - all six or seven of them - and a couple of other threads in other places - you guys have been posting this everywhere...) have been forwarded to the 3DMark dev team and our director of engineering and I have recommended that they should probably say something.

It is a weekend and they do not normally work on weekends, so this may take a couple of days, but my guess is that they would further clarify this issue by expanding on the Technical Guide to be more detailed as to what 3DMark Time Spy does exactly.

Those yelling about refunds or throwing wild accusations of bias are recommended to calm down ever so slightly. I'm sure a lot more will be written on the oh-so-interesting subject of DX12 async compute over the coming days."

http://steamcommunity.com/app/223850/discussions/0/366298942110944664/
 

dr_rus

Member
Well, it seems that there is some discussion regarding the use of async in the Time Spy benchmark. There are several threads about this on overclock.net/reddit/Steam.
It seems that Maxwell cards are always running with async disabled (the driver disables async to avoid degrading performance) even if the application launches with async on; in my opinion, async-on Maxwell scores should be considered invalid.
DX12 doesn't allow an application to "disable async". You can code a separate rendering path which doesn't submit commands to compute queues, and this would essentially be a way to not use async - in the same way that, say, a game which doesn't use vsync isn't using vsync even though the ability is there. But this doesn't "disable" anything; it just doesn't take advantage of a feature which is still present and active.

What most people still can't seem to get is that multiengine / async compute is a mandatory part of the DX12 API which must be supported by all DX12-compatible h/w. Supporting something, however, says little about how that support is implemented, and for multiengine / async compute there is no guidance in the API or anywhere else on how the h/w must provide that support - only the requirement that such support must be provided.

Thus a) all DX12 h/w "supports async compute", but b) it's up to the h/w (meaning h/w + driver) to decide exactly how to support its execution. There is no "proper" or "wrong" way to handle it.

What NV does on Maxwell cards is execute compute from the secondary queues sequentially, on the same h/w graphics queue as the graphics commands themselves. This results in exactly the same performance with "async compute" enabled as with it disabled - since when an app disables submission to the compute queues, that workload doesn't magically disappear; it gets sent to the main graphics queue instead.

So what happens on Maxwell with async on is that the driver puts the compute into the graphics command stream - essentially doing the same thing the application does when it "turns async off". There's nothing invalid in these results.

The question it would be cool to have an answer to is what would happen on Maxwell if NV were to enable concurrent compute execution on a fixed SM partition for the TS benchmark. My guess would be that Maxwell would perform worse than it does right now - otherwise I think NV would have done it for the benchmark. Again, there's nothing wrong with running async compute sequentially if that results in higher performance.
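
To make those two paths concrete, here's a rough D3D12 sketch of what "async on" vs "async off" means from the application's side. This is purely illustrative and has nothing to do with Futuremark's actual code; RecordGraphicsPass and RecordComputePass are made-up helpers standing in for whatever passes an app records.

#include <windows.h>
#include <d3d12.h>

void RecordGraphicsPass(ID3D12GraphicsCommandList* list);  // hypothetical: draws, barriers, ...
void RecordComputePass(ID3D12GraphicsCommandList* list);   // hypothetical: Dispatch() calls

void SubmitFrame(bool asyncCompute,
                 ID3D12CommandQueue*        directQueue,   // created as D3D12_COMMAND_LIST_TYPE_DIRECT
                 ID3D12CommandQueue*        computeQueue,  // created as D3D12_COMMAND_LIST_TYPE_COMPUTE
                 ID3D12GraphicsCommandList* directList,    // recorded from a DIRECT allocator
                 ID3D12GraphicsCommandList* computeList)   // recorded from a COMPUTE allocator
{
    RecordGraphicsPass(directList);  // graphics work always goes to the direct queue

    if (asyncCompute) {
        // "Async on": compute dispatches go into their own list on a separate compute
        // queue. Whether they actually run concurrently with graphics is up to the
        // h/w + driver - the API only requires that the queue exists and the work gets done.
        RecordComputePass(computeList);
        computeList->Close();
        directList->Close();

        ID3D12CommandList* gfx[] = { directList };
        ID3D12CommandList* cmp[] = { computeList };
        directQueue->ExecuteCommandLists(1, gfx);
        computeQueue->ExecuteCommandLists(1, cmp);
    } else {
        // "Async off": the exact same dispatches are recorded into the direct list
        // instead, so they run in-line after the graphics commands. The work doesn't
        // disappear - it just isn't exposed on a separate queue anymore.
        RecordComputePass(directList);
        directList->Close();

        ID3D12CommandList* gfx[] = { directList };
        directQueue->ExecuteCommandLists(1, gfx);
    }
}

Note that nothing in the "async on" path forces concurrency - the driver is still free to drain the compute queue on the same hardware queue as graphics, which is exactly what happens on Maxwell.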

Another question that was raised is that the benchmark is not using "true async" (I'm not a technical guy, so I don't know what true or false async is), since it's using context switches as recommended by Nvidia in its white papers to get the most out of async on their cards.
I'm not saying this is all true, but the debate is clearly open, since one of the guys from Futuremark has already escalated this internally to get a better explanation of what's happening in Time Spy.

Post #83

"..and just FYI so you don't think I'm just ignoring this thread:

Whole thread (and the Reddit threads - all six or seven of them - and a couple of other threads in other places - you guys have been posting this everywhere...) have been forwarded to the 3DMark dev team and our director of engineering and I have recommended that they should probably say something.

It is a weekend and they do not normally work on weekends, so this may take a couple of days, but my guess is that they would further clarify this issue by expanding on the Technical Guide to be more detailed as to what 3DMark Time Spy does exactly.

Those yelling about refunds or throwing wild accusations of bias are recommended to calm down ever so slightly. I'm sure a lot more will be written on the oh-so-interesting subject of DX12 async compute over the coming days."

http://steamcommunity.com/app/223850/discussions/0/366298942110944664/

There is no "true async". No specification for async compute execution exist in the universe. Each h/w is fully allowed to perform it in the way which is most beneficial to this particular h/w. Also - all h/w do context switches when running more than one context. When you run compute queues in parallel to graphics one you are running more than one context so even GCN h/w must perform context switch although in its case this is a global scheduling level switch only since GCN CUs are context agnostic.
 

cyen

Member
DX12 doesn't allow an application to "disable async". You can code a separate rendering path which doesn't submit commands to compute queues, and this would essentially be a way to not use async - in the same way that, say, a game which doesn't use vsync isn't using vsync even though the ability is there. But this doesn't "disable" anything; it just doesn't take advantage of a feature which is still present and active.

What most people still can't seem to get is that multiengine / async compute is a mandatory part of the DX12 API which must be supported by all DX12-compatible h/w. Supporting something, however, says little about how that support is implemented, and for multiengine / async compute there is no guidance in the API or anywhere else on how the h/w must provide that support - only the requirement that such support must be provided.

Thus a) all DX12 h/w "supports async compute", but b) it's up to the h/w (meaning h/w + driver) to decide exactly how to support its execution. There is no "proper" or "wrong" way to handle it.

What NV does on Maxwell cards is execute compute from the secondary queues sequentially, on the same h/w graphics queue as the graphics commands themselves. This results in exactly the same performance with "async compute" enabled as with it disabled - since when an app disables submission to the compute queues, that workload doesn't magically disappear; it gets sent to the main graphics queue instead.

So what happens on Maxwell with async on is that the driver puts the compute into the graphics command stream - essentially doing the same thing the application does when it "turns async off". There's nothing invalid in these results.

The question it would be cool to have an answer to is what would happen on Maxwell if NV were to enable concurrent compute execution on a fixed SM partition for the TS benchmark. My guess would be that Maxwell would perform worse than it does right now - otherwise I think NV would have done it for the benchmark. Again, there's nothing wrong with running async compute sequentially if that results in higher performance.



There is no "true async". No specification for async compute execution exist in the universe. Each h/w is fully allowed to perform it in the way which is most beneficial to this particular h/w. Also - all h/w do context switches when running more than one context. When you run compute queues in parallel to graphics one you are running more than one context so even GCN h/w must perform context switch although in its case this is a global scheduling level switch only since GCN CUs are context agnostic.

So why is the score invalid if I turn async off (running as a single queue, like Maxwell with async off)?
 

bj00rn_

Banned
Well, it seems that there is some discussion regarding the use of async in the Time Spy benchmark. There are several threads about this on overclock.net/reddit/Steam.


http://s3.amazonaws.com/download-aws.futuremark.com/3DMark_Technical_Guide.pdf
In Time Spy, asynchronous compute is used heavily to overlap rendering
passes to maximize GPU utilization. The asynchronous compute workload per
frame varies between 10-20%.

So when Nvidia actually does support the standard, I guess Futuremark "doesn't need to do anything" here. In this instance the GPU becomes a black box: https://en.wikipedia.org/wiki/Black-box_testing
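
Very roughly, the kind of overlap the technical guide describes would look something like this in D3D12 - my own sketch, not Futuremark's code, and the pass names are made up:

#include <windows.h>
#include <d3d12.h>

void OverlapPasses(ID3D12CommandQueue* directQueue,    // graphics queue
                   ID3D12CommandQueue* computeQueue,   // async compute queue
                   ID3D12Fence*        fence,
                   ID3D12CommandList*  shadowPass,     // graphics work
                   ID3D12CommandList*  lightCulling,   // compute work independent of the shadows
                   ID3D12CommandList*  mainPass,       // graphics work that consumes the compute results
                   UINT64              fenceValue)
{
    // Kick off a graphics pass and an independent compute pass at the same time.
    directQueue->ExecuteCommandLists(1, &shadowPass);
    computeQueue->ExecuteCommandLists(1, &lightCulling);
    computeQueue->Signal(fence, fenceValue);

    // The main pass needs the compute output, so the direct queue waits on the
    // fence before running it. This is a GPU-side wait; the CPU is not blocked.
    directQueue->Wait(fence, fenceValue);
    directQueue->ExecuteCommandLists(1, &mainPass);
}

Whether the shadow pass and the culling pass actually run concurrently is up to the hardware and driver - if they don't, you just get the sequential behaviour described above, which is the whole point of the "black box" argument.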

So why is the score invalid if I turn async off (running as a single queue, like Maxwell with async off)?

It appears to me you're missing the point explained in dr_rus' post.


Disclaimer: I may be talking out of my ass... I'm just trying to understand.
 

dr_rus

Member
So why is the score invalid if I turn async off (running as a single queue, like Maxwell with async off)?

You mean when a user turns async off in TS's settings? If this results in the score being invalid - I haven't tried it myself - then that's down to Futuremark's policy; they decide which settings are valid for a score submission.

I'd guess that this policy is actually protecting the Radeons' standing more than anything, since they're the ones losing the most performance with async turned off.
 

DonMigs85

Member
I find it amusing that this benchmark also has Steam achievements. One is for overclocking your CPU more than 50% over stock, and another is for getting over 9000 in Time Spy. There's also cheevos for having an unbalanced GPU and CPU.
 

Henrar

Member
That CPU test, god damn. It went from 60FPS+ to 30FPS in a second.

CPU is 5960x @ 4.4 and GPUs are dual Titan Xs.
[Time Spy result screenshot]
 