DF: Quantum Break: Better on DirectX 11! Gameplay Frame-Rate Tests

Durante

Member
One thing dr_rus' largely correct writeup doesn't mention explicitly, which I think is also relevant, is that HW abstraction is not just important across vendors, it's also relevant for different designs by a single vendor. You can e.g. see in those CB benchmarks that the 390 is gaining a bit of performance while the RX480 isn't really getting anything -- and those are rather minor hardware revisions, not an entirely new architecture (like e.g. going from TeraScale to GCN)!

One great thing about GPUs is that you could always get the same code to run significantly faster on new HW, partially because GPU code is inherently parallel, but also partially because of the level of abstraction afforded by the API and drivers.

One exciting prospect is the reduction of engine input lag.
Is it? You can get single-frame input lag on DX11 if you design for it (VR stuff generally does); I don't really see the advantage of DX12 there.
 

dr_rus

Member
What are the chances for engines like UE4 and others to implement this so developers don't have to deal with it themselves for every game anew?
It's possible, but there's still a catch: what will happen in a year, on a new GPU architecture, to a DX12 game built on a UE4 version released today? There's a chance that it won't run well, and in that case the game will have to be patched, because there's no way to fix this with drivers anymore. Granted, the chance is rather small, as the hardware is mostly improving, in both design and raw numbers, but it's still there.

The implementation of the same engine can also differ quite a bit between titles, which may mean that while some of them will be able to use the default D3D12 path provided by the ISV, some won't and will have to tweak it, resulting in issues. The fact that no widely used 3rd-party engine has properly adopted D3D12 (or Vulkan) yet says a lot about how hard it is to do properly and how little gain there is in it for them in general. You'd expect a free performance boost from running on D3D12 to be a great selling point for an engine like UE4, but it seems that Epic themselves don't think so, as the cost of the effort right now is higher than the gains for them.

Interesting read... I'm not an expert on this stuff, but to me this complete removal of validation checks, as you put it, doesn't sound good, and honestly it leaves me more than a bit skeptical about the proper utilization of DX12 if we're left at the "mercy" of developers who have to manually account for so many variables. You make DX12 sound like a "one step forward, two steps back" solution: there's potential to free up the CPU, but making use of it sounds like a lot of work.

Based on your post, wouldn't the best solution be some kind of a middle ground, where this DX11 "code analyzer" part is streamlined and simplified, but not entirely scrapped?
Validation checks are still there in the development environment; they are removed only from the runtime, so it's kind of like removing a debug layer that slows down execution from the product that ships to users.
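To illustrate how opt-in the validation is (just a minimal sketch of the standard D3D12 pattern, nothing from Quantum Break itself; the helper name is mine): a development build explicitly turns the debug layer on before creating the device, while a shipping build simply never does, so users never pay for the checks.

    #include <d3d12.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    // Hypothetical helper: development builds opt into validation; retail
    // builds skip this block entirely, so the runtime does no checking.
    void EnableDebugLayerIfAvailable()
    {
    #if defined(_DEBUG)
        ComPtr<ID3D12Debug> debugController;
        if (SUCCEEDED(D3D12GetDebugInterface(IID_PPV_ARGS(&debugController))))
            debugController->EnableDebugLayer(); // must be called before D3D12CreateDevice
    #endif
    }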

Yeah, with DX12 we're at the mercy of the developers and publishers, and it's a lot easier for them to make something which won't run well on some GPUs. The fact that most (if not all) independent publishers are avoiding the new APIs at the moment says a lot about how hard it is to make them work properly. This, however, is by design: the removal of this driver-level "magic" is what gives these APIs their main strength - the ability to free up a lot of CPU resources and to use multicore CPUs to the fullest. Putting some of it back may mean losing this benefit, which would turn these APIs back into the old DX11/OGL ones, so it probably won't happen.

We'll just have to wait till the devs figure the new APIs out, instead of being forced or paid to do it right now in cases where it may not even make much sense - like FH3, for example, where the game is ported from a slow console CPU, and as a result it should not be CPU limited on PC and thus should not even benefit from D3D12 at all.
 

M3d10n

Member
The jury is still out on the long term benefits of DX12 (and Vulkan) on PC games, performance-wise.

As was already posted, both APIs allow devs to micromanage GPUs a great deal more than the more abstracted DX11 and OpenGL APIs, which can cause games to have less consistent performance across GPU vendors and generations, since the drivers have fewer opportunities to interfere with a game's rendering process to better suit their particular architecture's quirks.

These low-level APIs seem more suited for middleware engines like Unreal, Unity and CryEngine, which can afford to spend significant development time making sure their render code works well on a wide range of hardware.
 

ethomaz

Banned
Makes sense. Nvidia made cards better designed around DX11 from what I gather, and AMD did with DX12/Mantle. And the team is more comfortable with DX11 (which probably makes the biggest impact).
So why does AMD run the same with DX11/DX12? It's supposed to be better, like you said.

This just shows that driver optimization for an API matters more than the API itself... DX11 on nVidia already has the low CPU overhead that DX12 is supposed to bring.
 
Makes sense. Nvidia made cards better designed around DX11 from what I gather, and AMD did with DX12/Mantle. And the team is more comfortable with DX11 (which probably makes the biggest impact).

From what I've seen, it's not that NV or AMD made cards better designed around any API. The cards have certain features that work better than the competitor's in the new APIs, notably async compute on AMD cards, which is why people are claiming AMD is a 'real' DX12 card and NV isn't. At the same time, NV has DX12 hardware features AMD doesn't have, like conservative rasterization, though I don't think that has been implemented in a game yet.
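To be concrete about what "async compute" even is at the API level (a minimal sketch of the standard D3D12 pattern; the function name is mine, not any game's actual code): you create a second, compute-only queue next to the usual graphics queue, and whether work submitted to the two queues actually overlaps on the GPU is entirely up to the hardware and driver - which is why AMD benefits from it and NV often doesn't.

    #include <d3d12.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    // A compute-only queue alongside the direct (graphics) queue; feeding
    // both at once is what D3D12 calls async compute. Overlap isn't guaranteed.
    ComPtr<ID3D12CommandQueue> CreateComputeQueue(ID3D12Device* device)
    {
        D3D12_COMMAND_QUEUE_DESC desc = {};
        desc.Type     = D3D12_COMMAND_LIST_TYPE_COMPUTE;
        desc.Priority = D3D12_COMMAND_QUEUE_PRIORITY_NORMAL;

        ComPtr<ID3D12CommandQueue> queue;
        device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue));
        return queue;
    }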

The most important part of the DX12/Vulkan APIs, the lowering of CPU overhead by removing some of the abstraction, works just as well on both vendors' cards, provided that the developers implement the API properly. AMD gets huge gains in some DX12 games, like Ashes, because their DX11 driver is plagued with overhead issues. NV sees little gain or even a loss depending on the implementation, because their DX11 driver is already really good about overhead. Doom's Vulkan path also shows notable gains for NV, especially with more recent driver updates that include a newer runtime; Pascal cards are seeing a solid 10-15% increase in overall FPS over OpenGL.

Ultimately, every game with a DX12 render path right now is designed with DX11 in mind, and we probably won't see what either company's cards can really do in DX12 until games start being built from the ground up with it in mind. The only application I can think of that was built from the ground up for DX12 is the Time Spy benchmark, and Nvidia performed well enough in it that people thought Nvidia had paid off Futuremark, because until then Nvidia cards had struggled with the haphazard DX12 implementations of early DX12 games.
 

Vash63

Member
I have the Steam version and for some reason get a 30FPS stutter every time I fire the Heavy Pistol. Anyone else have this bug? My FPS counter still shows ~90FPS, but the screen pans at 30 for the recoil; it's extremely noticeable since it's out of sync with the frame rate. This is on a 1080 with the latest Nvidia drivers.
 

Kevin

Member
Any visual differences between the Windows Store DX12 version and the Steam DX11 version?

Any way to get a Steam key for someone who has purchased the Windows Store version without having to rebuy the game?
 

Locuza

Member
[...] At the same time, NV has DX12 hardware features AMD doesn't have, like conservative rasterization, though I don't think that has been implemented in a game yet.
[...]
Rise of the Tomb Raider (VXAO) and The Division (HFTS) do use Conservative Rasterization, but not through DX11.3 or DX12 - it's done through NVAPI on top of DX11.
(VXAO is not available under the DX12 path for Tomb Raider.)
 

M3d10n

Member
From what I've seen, it's not that NV or AMD made cards better designed around any API. The cards have certain features that work better than the competitor's in the new APIs, notably async compute on AMD cards, which is why people are claiming AMD is a 'real' DX12 card and NV isn't. At the same time, NV has DX12 hardware features AMD doesn't have, like conservative rasterization, though I don't think that has been implemented in a game yet.

AFAIK, no GPU "fully" implements DX12 feature level 12_1 yet. Each one is missing a different feature.

Also, devs don't need to rewrite their games in DX12 to get those features, since new DX12 feature levels are always matched with a new DX11 version exposing the same features (11.3 matches 12_1).
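For example (a rough sketch of the usual cap-check pattern; the helper name is mine), the DX11 route to one of those features looks like this - no DX12 port required:

    #include <d3d11_3.h>

    // Hypothetical helper: ask an existing D3D11 device whether conservative
    // rasterization is exposed. On older runtimes/hardware the query fails or
    // reports NOT_SUPPORTED, so the renderer can fall back gracefully.
    bool SupportsConservativeRaster(ID3D11Device* device)
    {
        D3D11_FEATURE_DATA_D3D11_OPTIONS2 opts = {};
        if (FAILED(device->CheckFeatureSupport(D3D11_FEATURE_D3D11_OPTIONS2,
                                               &opts, sizeof(opts))))
            return false;
        return opts.ConservativeRasterizationTier
               >= D3D11_CONSERVATIVE_RASTERIZATION_TIER_1;
    }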
 

FaintDeftone

Junior Member
What's the reasoning behind the Win10 version not getting DX11? It doesn't make sense to me. Why punish those early adopters with an inferior game? Is DX12 forced in Win10 Store or something?
 
What's the reasoning behind the Win10 version not getting DX11? It doesn't make sense to me. Why punish those early adopters with an inferior game? Is DX12 forced in Win10 Store or something?

Marketing for the W10 store. It'd look bad for MS if this game had had a DX11 version that performed better than DX12 at launch, on the very Windows Store that brags about DX12.
 

Locuza

Member
AFAIK, no GPU "fully" implements DX12 feature level 12_1 yet.
[...]
You either support a feature level or you don't; there is nothing in between.
Maxwell v2, Pascal and Intel's Gen 9 architecture (found in Skylake) support DX12 FL12.1.
But FL12.1 does not include all features that are available under the DX12 spec.

A feature level is a package of features, but you can also check for and use only the individual features you need. For example, you can use FL11.0 as a base and additionally check for Tiled Resources Tier 2, like Forza (Apex and Horizon) does.
So every GPU which supports this mix can run Forza, and every GPU which doesn't, can't.
But you don't have to require FL12.0 just because you need Tiled Resources Tier 2.
Since Forza combines FL11.0 with TR Tier 2, Kepler and GCN Gen 1 can run the game. If you had to jump from feature level to feature level without the option to pick the individual features you need, you wouldn't be able to support such a wide variety of hardware. (A rough sketch of this pattern is below.)
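Roughly what that Forza-style check looks like in code (a minimal sketch, not Turn 10's actual code; the function name is mine):

    #include <d3d12.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    // Create the device against the FL11.0 baseline, then probe the single
    // extra cap the renderer needs on top of it (Tiled Resources Tier 2).
    bool SupportsForzaStyleCaps(IUnknown* adapter)
    {
        ComPtr<ID3D12Device> device;
        if (FAILED(D3D12CreateDevice(adapter, D3D_FEATURE_LEVEL_11_0,
                                     IID_PPV_ARGS(&device))))
            return false; // GPU doesn't even reach the FL11.0 baseline

        D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
        if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                               &options, sizeof(options))))
            return false;

        // Individual cap checked on top of the baseline feature level
        return options.TiledResourcesTier >= D3D12_TILED_RESOURCES_TIER_2;
    }

This is why Kepler and GCN Gen 1 pass the check, while a strict FL12.0 requirement would have excluded them.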

Intel's Gen 9 supports more or less everything from the DX12 spec. There are some features which are currently not exposed by the driver, like the standard swizzle format for textures, and FP16 is also not exposed under DX12 (but it is supported under DX11.3).

https://translate.google.de/translate?sl=de&tl=en&js=y&prev=_t&hl=de&ie=UTF-8&u=http%3A%2F%2Fwww.pcgameshardware.de%2FGrafikkarten-Grafikkarte-97980%2FNews%2FDirectX-12-Support-Geforce-Radeon-Treiber-1204767%2F&edit-text=
https://en.wikipedia.org/wiki/Feature_levels_in_Direct3D
 

dr_rus

Member
AFAIK, there no GPU "fully" implements DX12 feature level 12_1 yet. Each one is missing a different feature.

What Locuza said. Also worth noting that Skylake's iGPU supports every feature DX12 currently has, at its fullest tier, so if you want a GPU with "full DX12 support", buy a Skylake.

Any visual differences between the Windows Store DX12 version and the Steam DX11 version?

Any way to get a Steam key for someone who has purchased the Windows Store version without having to rebuy the game?

No and no.
 

SapientWolf

Trucker Sexologist
One thing dr_rus' largely correct writeup doesn't mention explicitly, which I think is also relevant, is that HW abstraction is not just important across vendors, it's also relevant for different designs by a single vendor. You can e.g. see in those CB benchmarks that the 390 is gaining a bit of performance while the RX480 isn't really getting anything -- and those are rather minor hardware revisions, not an entirely new architecture (like e.g. going from TeraScale to GCN)!

One great thing about GPUs is that you could always get the same code to run significantly faster on new HW, partially because GPU code is inherently parallel, but also partially because of the level of abstraction afforded by the API and drivers.

Is it? You can get single-frame input lag on DX11 if you design for it (VR stuff generally does); I don't really see the advantage of DX12 there.
The API has features like bundling that allow devs to get data to the GPU earlier in the frame. And multithreading doesn't come with the same latency hit that it carries in DX11. So devs can render the frame faster and more efficiently, which should help with latency as well.

In theory, getting low latency should be easier with DX12.
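For the curious, a bundle is just a small command list you record once and then replay cheaply from the per-frame command list. A minimal sketch of the pattern (all names are mine; 'rootSig' and 'pso' are assumed to already exist):

    #include <d3d12.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    // Record a tiny bundle once. 'bundleAllocator' must have been created
    // with D3D12_COMMAND_LIST_TYPE_BUNDLE.
    ComPtr<ID3D12GraphicsCommandList> RecordBundle(
        ID3D12Device* device,
        ID3D12CommandAllocator* bundleAllocator,
        ID3D12RootSignature* rootSig,
        ID3D12PipelineState* pso)
    {
        ComPtr<ID3D12GraphicsCommandList> bundle;
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_BUNDLE,
                                  bundleAllocator, pso, IID_PPV_ARGS(&bundle));

        bundle->SetGraphicsRootSignature(rootSig);
        bundle->IASetPrimitiveTopology(D3D_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
        bundle->DrawInstanced(3, 1, 0, 0); // e.g. a fullscreen triangle
        bundle->Close();
        return bundle;
    }

    // Each frame, the direct command list just replays the pre-recorded work:
    //     directList->ExecuteBundle(bundle.Get());

Recording once and replaying is part of why the CPU side of a DX12 frame can finish earlier than a DX11 one.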
 

ethomaz

Banned
The API has features like bundling that allow devs to get data to the GPU earlier in the frame. And multithreading doesn't come with the same latency hit that it carries in DX11. So devs can render the frame faster and more efficiently, which should help with latency as well.

In theory, getting low latency should be easier with DX12.
Are there any tests of this latency on nVidia? Because I'm really interested to see actual low-level tests of all these APIs under nVidia drivers.

I won't be surprised if DX11/OpenGL ends up being equal or better than these newer "close to metal" APIs.
 
I'm happy to report that I was able to get a refund just now from Microsoft support. It was like pulling teeth though. I started in chat, explained in detail the situation, they stated that they couldn't do anything to help me. I had to request a supervisor on three different occasions before they finally agreed.

They requested a number to call, and the supervisor called me about 5 minutes later. I brought her up to speed on the phone, she put me on hold for another 5 minutes or so, and told me that she could do a one-time refund as a courtesy to me.

So yeah, $63.29 refunded to my card, for my Quantum Break Win 10 copy that I purchased back in April.

i'm going to try this on Monday too.

literally played all of 3 minutes on the UWP release before shutting down and never touching it again.

hoping i can get a refund so i can buy this Steam release.
 

galv

Unconfirmed Member
I won't be surprised if DX11/OpenGL ends up being equal or better than these newer "close to metal" APIs.

I'd say the only reason it looks like DX11/OpenGL run as well as DX12/Vulkan right now is that devs simply haven't had the same amount of time with the newer APIs, as opposed to the 7+ years that devs have had with DX11. When it comes to good implementations of DX12, I'd say we have to wait for a while. Vulkan, on the other hand, has had a very good showing in DOOM.
 

ethomaz

Banned
I'd say the only reason it looks like DX11/OpenGL run as well as DX12/Vulkan right now is that devs simply haven't had the same amount of time with the newer APIs, as opposed to the 7+ years that devs have had with DX11. When it comes to good implementations of DX12, I'd say we have to wait for a while. Vulkan, on the other hand, has had a very good showing in DOOM.
Vulkan on Doom only showed a good performance improvement for AMD, due to their shit OpenGL drivers having a lot of overhead.

nVidia was another case, with Vulkan matching or being a bit better than OpenGL, because the OpenGL drivers from nVidia already have low overhead... nVidia even has an OpenGL extension that allows more "close to metal" coding.

A well-optimized driver will run all APIs at near the same performance... that is what I think.
 

Vash63

Member
Vulkan on Doom only showed a good performance improvement for AMD, due to their shit OpenGL drivers having a lot of overhead.

nVidia was another case, with Vulkan matching or being a bit better than OpenGL, because the OpenGL drivers from nVidia already have low overhead... nVidia even has an OpenGL extension that allows more "close to metal" coding.

A well-optimized driver will run all APIs at near the same performance... that is what I think.

On Pascal there is a good boost with Vulkan.
 