
Quantum Break PC performance thread

[RX 480 benchmark charts from the HardwareCanucks review linked below]

http://www.hardwarecanucks.com/foru...9-radeon-rx480-8gb-performance-review-21.html

I don't see why it would run any better in QB; with the improvements to general graphics pipeline efficiency, Polaris predictably lost some of the async gains visible on GCN2/3.
Thanks, I had suspected this after seeing other benchmarks. Still not bad for $200 though.
 

Mikeside

Member
I've got a 980ti - is this game fixed enough that it'll play ~60 at 1440p yet? (I don't need max settings, advice for what settings to aim for to achieve this appreciated!)

How about at 1080p? Am I still in for a rough ride, or is it OK now?
 
I've got a 980ti - is this game fixed enough that it'll play ~60 at 1440p yet? (I don't need max settings, advice for what settings to aim for to achieve this appreciated!)

How about at 1080p? Am I still in for a rough ride, or is it OK now?

1080p at medium with scaling off runs at 60 fps on a 980ti

I've got an i5 4690K, a 980 Ti (Asus Strix), 16 GB RAM, and Win 10, for a better reference
 

hwalker84

Member
I've got a 980ti - is this game fixed enough that it'll play ~60 at 1440p yet? (I don't need max settings, advice for what settings to aim for to achieve this appreciated!)

How about at 1080p? Am I still in for a rough ride, or is it OK now?
1080p on my GTX 1080 ran pretty well with everything maxed. 60 fps isn't happening at 1440p on a 980TI.
 

Mikeside

Member
1080p at medium with scaling off runs at 60 fps on a 980ti

I've got an i5 4690K, a 980 Ti (Asus Strix), 16 GB RAM, and Win 10, for a better reference

Thanks!
And that looks OK? I'm not a pixel counting "oh man this is so ugly" kind of guy, so as long as it's reasonably crisp and such, I'll be happy.
 

mosdl

Member
With a 1070 (EVGA SC) and a i5 4670 I am able to get near 60 with a mixture of medium/high settings and scaling turned off at 1080p.
 

drotahorror

Member
Medium settings and ultra textures with upscaling on. More than this and the framerate will drop.

That's kind of why I asked if the new patch did some optimization, because the guy up there said his 980 Ti got 60 fps with everything on medium and upscaling off. On my GTX 1080 and a 6700K, with everything on medium plus ultra textures, I couldn't hold 60.
 

SimplexPL

Member
I sometimes wonder if people realize how much shit this game has going on at times. It's not broken, the highest settings are just that demanding.

https://www.youtube.com/watch?v=ANIG2HQJZDI#t=05m37s
I tested the very beginning of the game, where nothing happens.

I have 1080 MSI Gaming X OC'ed to over 2GHz and i7 6700K OCed to 4.5GHz. So this is basically the fastest rig possible (game does not support SLI). I could not get a faster PC even if I wanted to.

With upscaling off and all options set to absolute max I was getting around 40-50 fps... in 1080p.
After I dialled down some options I was getting between 50-55 - still not locked 60. At 1080p. On a 1080 with 6700K. In a scene where nothing was happening.

That's my definition of broken.
 

suedester

Banned
I don't have a frame rate indicator, but I was getting around 40 fps at 1440p with scaling off and most settings maxed on my overclocked 1070. No AA though, as it was causing a black screen issue. Very playable and a really good game. Almost Naughty Dog levels of presentation.
 
I tested the very beginning of the game, where nothing happens.

I have 1080 MSI Gaming X OC'ed to over 2GHz and i7 6700K OCed to 4.5GHz. So this is basically the fastest rig possible (game does not support SLI). I could not get a faster PC even if I wanted to.

With upscaling off and all options set to absolute max I was getting around 40-50 fps... in 1080p.
After I dialled down some options I was getting between 50-55 - still not locked 60. At 1080p. On a 1080 with 6700K. In a scene where nothing was happening.

That's my definition of broken.

I think AMD cards take the win in this game, as it is DX12 and uses async compute. It's just a poor PC release as far as optimization goes.
 

CHC

Member
So I wound up beating the game this week on a GTX 1070 G1 and a 2500k @ 4.5 GHz.

I went with everything at medium except for textures and shadow resolution, both of which I put at Ultra. Upscaling was OFF and the game was running at 1080p.

60 FPS pretty much all the time, and the game looked out of this world. Upscaling off and textures at ultra gave a pretty clean, but soft, image quality. After all the doom and gloom I wound up being pleasantly surprised with the quality of the port, so I'm glad I waited to play it.
 
I tested the very beginning of the game, where nothing happens.

I have 1080 MSI Gaming X OC'ed to over 2GHz and i7 6700K OCed to 4.5GHz. So this is basically the fastest rig possible (game does not support SLI). I could not get a faster PC even if I wanted to.

With upscaling off and all options set to absolute max I was getting around 40-50 fps... in 1080p.
After I dialled down some options I was getting between 50-55 - still not locked 60. At 1080p. On a 1080 with 6700K. In a scene where nothing was happening.

That's my definition of broken.
Volumetric lighting above medium isn't recommended on Nvidia.
 

jackdoe

Member
Every time this thread gets bumped, I eagerly click, expecting some miraculous patch to unshit performance on Nvidia cards. And every time, my hopes are quickly dashed. At least I'll know that when I'm able to successfully run this game at a native 1440p at 60 fps that I have a beast of a machine that can brute force almost anything.
 
I tested the very beginning of the game, where nothing happens.

I have 1080 MSI Gaming X OC'ed to over 2GHz and i7 6700K OCed to 4.5GHz. So this is basically the fastest rig possible (game does not support SLI). I could not get a faster PC even if I wanted to.

With upscaling off and all options set to absolute max I was getting around 40-50 fps... in 1080p.
After I dialled down some options I was getting between 50-55 - still not locked 60. At 1080p. On a 1080 with 6700K. In a scene where nothing was happening.

That's my definition of broken.
Nvidia cores sitting unused between cycles due to lack of asynchronous compute. This game basically requires it with all the processing it does in real time.


Edit: I was mistaken, some things are prebaked, but the things that do use global illumination, dynamic shadows, occlusion, and volumetric lighting eat tons of resources.
 

Zeneric

Member
Every time this thread gets bumped, I eagerly click, expecting some miraculous patch to unshit performance on Nvidia cards. And every time, my hopes are quickly dashed. At least I'll know that when I'm able to successfully run this game at a native 1440p at 60 fps that I have a beast of a machine that can brute force almost anything.

Reminds me of Crysis.
"Can it run Quantum Break?"
 
Every time this thread gets bumped, I eagerly click, expecting some miraculous patch to unshit performance on Nvidia cards. And every time, my hopes are quickly dashed. At least I'll know that when I'm able to successfully run this game at a native 1440p at 60 fps that I have a beast of a machine that can brute force almost anything.

Quoting myself from another thread

The game is just doing too much at too high settings for our current hardware. It's not a case of having a bad port. It's a demanding title.

People are upset about the drop in performance when running native resolution on PC. The game is built around multiple lower-resolution, higher-FPS frames, each getting various post-processing passes. Those four frames have different effects added to them and are then combined into one frame with all of the effects shown. When rendering one larger frame instead, which takes extra time, the framerate drops or stutters because there's just not enough time for our hardware to crunch those numbers all at once.

The only way to get a "better running port" would be if the devs labeled medium as ultra and low as medium. Then people could say "look how well my PC runs this on ultra" instead of "this is such a bad port, I have to set it on medium to get a good framerate."

Either that, or they'd have had to completely remake the game with prebaked effects.
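
For a rough sense of the numbers behind that trade-off, here's a back-of-the-envelope sketch. The 1280x720 internal resolution is an assumption (roughly what has been reported for the game's upscaling mode), not a figure from this thread.

// Illustrative pixel math only; the 720p internal resolution is an assumption.
#include <cstdio>

int main() {
    const long long internalPixels = 1280LL * 720;   // one reconstructed sample frame
    const long long nativePixels   = 1920LL * 1080;  // one native 1080p frame

    // With upscaling on, each new frame shades ~0.92 MP and reuses data from
    // previous frames; with upscaling off, every frame shades ~2.07 MP.
    std::printf("internal frame: %lld pixels\n", internalPixels);
    std::printf("native frame:   %lld pixels\n", nativePixels);
    std::printf("native / internal shading cost: %.2fx per frame\n",
                static_cast<double>(nativePixels) / internalPixels);
    return 0;
}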


Edit:

This also plays into the issues that Nvidia cards have with the game. This game uses asynchronous compute on the Xbox One.

What that means is that as soon as a process is finished, the processor can pull another task from the pipeline and get started.

Without async compute, that processor has to wait until every one after it and before it have all received their instructions before it can get another task, so it sits unused until then. So, if its process takes more than one cycle, it has to wait through the next cycle until it can work again.

Asynchronous compute effectively acts like hyperthreading the GPU cores. So if a core fires off two or three times in one cycle crunching simple tasks, it has effectively acted as two or three logical cores, while the other physical cores get to start on the meatier processes, instead of having three cores knock out three simple processes and then sit unused.

With trying to do so much in real time, the non-async cards stutter. That's why it takes more powerful Nvidia cards to reach the performance of the AMD cards: they need more physical cores to make up for not having "hyperthreaded" logical cores, and even then there aren't enough to brute force it so that there's always a core ready to take the next process within a cycle.
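
To make the async compute part concrete: in D3D12 terms, "async compute" just means submitting work on a second, COMPUTE-type queue that the GPU is allowed to overlap with the graphics queue. Below is a minimal sketch of the idea, assuming a D3D12 renderer; it is illustrative only, not Remedy's actual code, and the function and parameter names are made up.

// Submit compute work on its own queue so the GPU may overlap it with graphics.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void SubmitFrame(ID3D12Device* device,
                 ID3D12CommandQueue* graphicsQueue,        // D3D12_COMMAND_LIST_TYPE_DIRECT
                 ID3D12GraphicsCommandList* gfxList,       // recorded graphics work
                 ID3D12GraphicsCommandList* computeList)   // recorded as a COMPUTE-type list
{
    // Dedicated compute queue (a real app would create this once, not per frame).
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));

    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));

    // Kick off compute work (lighting, particles, etc.) on its own queue...
    ID3D12CommandList* compute[] = { computeList };
    computeQueue->ExecuteCommandLists(1, compute);
    computeQueue->Signal(fence.Get(), 1);

    // ...while graphics is submitted independently. Whether the two actually run
    // concurrently, and how much that helps, is entirely up to the hardware.
    ID3D12CommandList* gfx[] = { gfxList };
    graphicsQueue->ExecuteCommandLists(1, gfx);

    // Only block the graphics queue where it consumes the compute results.
    graphicsQueue->Wait(fence.Get(), 1);
}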
 
Every time this thread gets bumped, I eagerly click, expecting some miraculous patch to unshit performance on Nvidia cards. And every time, my hopes are quickly dashed. At least I'll know that when I'm able to successfully run this game at a native 1440p at 60 fps that I have a beast of a machine that can brute force almost anything.

I finished the game on a 970 at 1440p with G-Sync without any major issues; there were three or four segments where the game lagged out a bit, but otherwise it was fine. I even got 1k GS, so I REALLY finished it: two complete runs (one on the hardest difficulty) and all branches explored. I have no idea what my FPS was since there is no counter in Win 10, but I assume it was between 30 and 40 for the majority of the game.

IMO the 20-minute TV show segments are a bigger issue than any technical problems at this point; they really halt the momentum of the game completely, and they're the only thing I will really remember from the game, in a "remember that dumb game that had 3 hours of cutscenes in it?" way. I enjoyed the game, story, and acting, but it's on the same tier as The Order: 1886, Far Cry Primal, or Assassin's Creed Syndicate for me: decent games, completely forgettable after you finish.
 

dr_rus

Member
Do you have some dev diary footage / article about the "effects to multiple render targets" thing? Just curious

All modern games using a deferred rendering approach are doing what he's describing. It is a bad port which isn't optimized for the majority of PC GPU h/w.
 
All modern games using a deferred rendering approach are doing what he's describing. It is a bad port which isn't optimized for the majority of PC GPU h/w.
They do, but you're downplaying how much this game does in real time vs prebaked. They use async compute to get that much processed fast enough, I'm sure they'd love it if Nvidia had chosen to incorporate the feature, but that's hardly Remedy's fault.

The simple truth is that this is a game built from the ground up for Xbox One using Asynchronous Compute in order to do as many things in real-time as possible. No amount of optimization can make up for lack of a standard Dx12 feature on the hardware level.
 

SimplexPL

Member
Even on Xbox one it runs at upscaled 720p and does not hold 30fps at all times, so it's not like it's a well optimized title in general. Alan Wake ran at 960x540 on Xbox 360, so that's almost Remedy's signature.
 

cyen

Member
They do, but you're downplaying how much this game does in real time vs prebaked. They use async compute to get that much processed fast enough, I'm sure they'd love it if Nvidia had chosen to incorporate the feature, but that's hardly Remedy's fault.

The simple truth is that this is a game built from the ground up for Xbox One using Asynchronous Compute in order to do as many things in real-time as possible. No amount of optimization can make up for lack of a standard Dx12 feature on the hardware level.

It's crap on NV cards, so it's a bad port. Nothing to see here.
 

Durante

Member
They do, but you're downplaying how much this game does in real time vs prebaked. They use async compute to get that much processed fast enough, I'm sure they'd love it if Nvidia had chosen to incorporate the feature, but that's hardly Remedy's fault.

The simple truth is that this is a game built from the ground up for Xbox One using Asynchronous Compute in order to do as many things in real-time as possible. No amount of optimization can make up for lack of a standard Dx12 feature on the hardware level.
You are making some heavy assumptions here. There's really no way to ascertain that the performance of the game is all down to asynchronous compute -- as opposed to a standard "classic" reason like the shader code being heavily optimized for one particular architecture. I find that a lot more likely.
 

frontieruk

Member
You are making some heavy assumptions here. There's really no way to ascertain that the performance of the game is all down to asynchronous compute -- as opposed to a standard "classic" reason like the shader code being heavily optimized for one particular architecture. I find that a lot more likely.

But did you read through the SIGGRAPH PDF I posted a few posts back? I'd welcome your views on how they've achieved their real-time lighting.
 

cyen

Member
But did you read through the SIGGRAPH PDF I posted a few posts back? I'd welcome your views on how they've achieved their real-time lighting.

I think it's a mix of both: heavily optimized for the GCN arch since it was developed as an Xbox One title only, and it's normal that almost nothing can be done to improve it on the NV arch, since GCN probably has the upper hand on the arch features they used to optimize the game.

I suspect more cases like this will happen in the future as DX12 and console-optimized games start to push the envelope of the consoles, since developers will be exploiting their arch to get everything out of the limited h/w currently present in consoles, and since DX12 is more engine dependent than driver dependent, Nvidia cannot optimize it like they did with DX11 code.

I'm not saying that GCN is superior to Pascal/Maxwell, but AMD will probably collect some "wins" due to the fact they have the console ecosystem.
 

dr_rus

Member
They do, but you're downplaying how much this game does in real time vs prebaked. They use async compute to get that much processed fast enough, I'm sure they'd love it if Nvidia had chosen to incorporate the feature, but that's hardly Remedy's fault.

The simple truth is that this is a game built from the ground up for Xbox One using Asynchronous Compute in order to do as many things in real-time as possible. No amount of optimization can make up for lack of a standard Dx12 feature on the hardware level.

QB is doing a lot more pre-baked than some other games out there which are running just fine on NV's h/w. If you think about it, you'd understand why.

I'm not at all sure that the reason for the bad performance lies in async compute usage. But even if we assume it does - this is exactly what "a bad port" means: a feature ported as-is to a market where 80% of the h/w can't take advantage of it. Nothing says that async compute must provide performance increases on all h/w out there. Pascals support concurrent async compute just fine - and you can check on previous pages how much performance they got from it in QB. So yeah, it's a bad port.
 

riflen

Member
Given the heavy hit to performance on Nvidia hardware that is otherwise perfectly capable, and the circumstances under which this port was conceived and released, I'm far more inclined to believe that the cause is the very short lead time Remedy was probably given.
I can easily believe that Microsoft gave them weeks to port this as part of pushing their games store. In those circumstances a developer will be forced to go for the low-hanging fruit first (get a build working well on AMD hardware).
Another important point is that a PC port was probably never on the cards when the renderer was being conceived; it was developed purely as a showcase for one piece of fixed hardware, back when hopes for the Xbox One's success were high.
Now that Microsoft have decided to go in a different direction, this game has low-level design decisions that don't make sense if you were creating a cross-platform title.

Edit: I also want to add that because this is a DirectX12 title, Nvidia themselves are far less able to intervene and correct less-than-optimal code at the driver level. Something that historically has happened all the time in DirectX11 titles on Nvidia hardware.
 

dr_rus

Member
Oh sure, I don't blame Remedy, they're more than capable of producing great PC versions of their games. So it is most likely a case of limited time / budget provided by the publisher.
 
It's crap on NV cards, so it's a bad port. Nothing to see here.
Funnily enough, when it's AMD hardware suffering due to games being coded to Nvidia's strengths, it's because AMD makes bad hardware, but when Nvidia is actually missing a Dx12 feature it seems to be everyone else's fault...

And no, Pascal's async is a bandaid. It doesn't function at the SM/CU level. So all it really does is help it better schedule tasks that were intended for async compute, rather than actually natively supporting the feature. So it will still get hammered when trying to run code meant to use the feature, just not as badly as it would without it.
 

dr_rus

Member
Funnily enough, when it's AMD hardware suffering due to games being coded to Nvidia's strengths, it's because AMD makes bad hardware, but when Nvidia is actually missing a Dx12 feature it seems to be everyone else's fault...
You've got some proof for that claim, or is that just regular stories out of A?

And no, Pascal's async is a bandaid. It doesn't function on the SM/CU level. So all it really does is help it better schedule tasks that were intended for async compute rather than actually natively support the feature. So it will still get hammered when trying to run code meant to use the feature, just not as badly as it would without it.

There's no "bandaid" or "non bandaid" way to handle async compute execution in h/w. There are no reasons why async compute would even result in performance gains on an architecture which is fully utilized by graphics. And GCN can be hammered by async compute very easily as well, so it's really no different to what Pascal is capable of.
 

horkrux

Member
They do, but you're downplaying how much this game does in real time vs prebaked. They use async compute to get that much processed fast enough, I'm sure they'd love it if Nvidia had chosen to incorporate the feature, but that's hardly Remedy's fault.

The simple truth is that this is a game built from the ground up for Xbox One using Asynchronous Compute in order to do as many things in real-time as possible. No amount of optimization can make up for lack of a standard Dx12 feature on the hardware level.

I find this very unlikely. Async compute doesn't just work out of the box on PC afaik - it requires serious additional optimization. Optimization that the port was obviously lacking.
 
You've got some proof for that claim, or is that just regular stories out of A?



There's no "bandaid" or "non bandaid" way to handle async compute execution in h/w. There are no reasons why async compute would even result in performance gains on an architecture which is fully utilized by graphics. And GCN can be hammered by async compute very easily as well, so it's really no different to what Pascal is capable of.
GCN essentially multithreads the GPU, i.e. actual async compute; Pascal does not, it just tries to schedule tasks better in order to keep its lack of hardware support for the feature from impacting performance as badly as it would otherwise.


I find this very unlikely. Async Compute doesn't just work out of the box on PC afaik - it requires serious additional optimization. Optimization, that the port was obviously lacking.
So, optimization = building a new game from scratch that doesn't actually use asynchronous compute? Or can we agree that had Nvidia actually gone with a design that uses async compute at the hardware level, instead of using a scheduler for a few tasks to try to mimic async compute and calling it async compute on a technicality, then it may have been possible to better and more easily optimize the game for their cards?


Edit:

A better explanation from Eteknix.

http://www.eteknix.com/pascal-gtx-1080-async-compute-explored/

Even with all of these additions, Pascal still won’t quite match GCN. GCN is able to run async compute at the SM/CU level, meaning each SM/CU can work on both graphics and compute at the same time, allowing even better efficiency.

It's worth noting that Nvidia's version of async isn't in spec for async compute support as it was created to be. The only reason what they are doing is even being called asynchronous compute is that they have the audacity to call it by that name despite not actually supporting it. Pascal cores still only work on either compute or graphics, not both at the same time, as asynchronous compute was intended to do.
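
As a toy illustration of why that granularity argument matters for frame times (all numbers are made up; this is not a simulation of either architecture):

// Made-up frame timings showing the potential win from overlapping compute with graphics.
#include <cstdio>

int main() {
    const double graphicsMs   = 12.0; // graphics pass, if it had the GPU to itself
    const double computeMs    = 3.0;  // compute pass, if it had the GPU to itself
    const double overlapShare = 0.6;  // fraction of compute that hides in idle graphics
                                      // gaps; finer-grained sharing (per CU) tends to
                                      // raise this, coarser sharing (per SM) to lower it

    const double serial     = graphicsMs + computeMs;                        // no overlap
    const double overlapped = graphicsMs + computeMs * (1.0 - overlapShare); // partial overlap

    std::printf("serial:     %.1f ms (%.1f fps)\n", serial, 1000.0 / serial);
    std::printf("overlapped: %.1f ms (%.1f fps)\n", overlapped, 1000.0 / overlapped);
    return 0;
}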
 

horkrux

Member
So, optimization = building a new game from scratch that doesn't actually use asynchronous compute?

No, but optimizing it at all. Considering how fast they managed to shove that port out of the door, I wouldn't be surprised if the PC port didn't even utilize async compute.
 
No, but optimizing it at all. Considering how fast they managed to shove that port out of the door, I wouldn't be surprised if the PC port didn't even utilize async compute.
If it didn't, it wouldn't run well on GCN cards, it would run like crap across the board.

GCN cores run both graphics and compute at the same time, so each core is essentially functioning as two logical cores, one running graphics, the other compute. Then once a task is done, that core can immediately get another task during the same cycle, so one core can function as 3 or 4 logical cores per cycle. Pascal requires at least two physical cores to complete the tasks of one GCN core in the best case, 4:1 in the worst case while running this game (or anything using asynchronous compute). No amount of optimization can add extra cores to the Nvidia cards to make up for the fact that they can't asynchronously run both compute and graphics on the same core at the same time.


Also, Remedy have released several patches that helped optimize the scheduling for Nvidia cards, but it seems there's only so much they can do given the circumstances.

Quantum Break's engine was built for the Xbox One, which means they designed it to use all of the features of async compute, otherwise they would have been leaving performance sitting on the table unused. Nvidia cores simply can't do as much per cycle.
 

dr_rus

Member
GCN essentially multithreads the GPU, i.e. actual async compute; Pascal does not, it just tries to schedule tasks better in order to keep its lack of hardware support for the feature from impacting performance as badly as it would otherwise.

That's just not true at all.

A. All GPUs are natively multithreaded since early days of programmable shaders.

B. Both GCN and Pascal allow concurrent execution of different contexts. The difference is that GCN allows to mix threads of different contexts in flight on one CU (aka SM) while Pascal allows to mix them only between SMs (aka CUs).

C. What Pascal does requires h/w support. Otherwise they'd do it on Kepler and up.

D. Async compute running serially doesn't impact performance at all, as can be seen from most Maxwell benchmarks. It doesn't provide any performance boost, yes, but it doesn't make things slower.

E. What you still seem to not get is that there's no "proper" way of handling this execution in h/w. Theoretically speaking it would be great to have Pascal support the same context-agnostic execution of threads on its SMs -- but in practice there's no such thing as a free feature. Just as an example, what if implementing GCN-style async execution would result in Pascal losing its frequency advantage over GCN? Would a ~1-5% performance gain from such an implementation be a good trade-off for losing ~25-33% of the clocks? A rhetorical question.
 
Oh sure, I don't blame Remedy, they're more than capable of producing great PC versions of their games. So it is most likely a case of limited time / budget provided by the publisher.

I agree. The decision to port the game to Windows 10 was probably made late in development, once the higher-ups decided that simultaneous pc/console releases would be the company's strategy moving forward. The same goes for Gears of War. I am confident that future pc releases will be much better.
 
That's just not true at all.

A. All GPUs are natively multithreaded since early days of programmable shaders.
That was an autocorrect, it was supposed to be hyperthreaded.

B. Both GCN and Pascal allow concurrent execution of different contexts. The difference is that GCN allows to mix threads of different contexts in flight on one CU (aka SM) while Pascal allows to mix them only between SMs (aka CUs).
They both allow a core to receive instructions as soon as a task is finished, but Pascal cores can only do graphics or compute, while GCN cores can do both at once. So a task that requires both needs two Pascal cores to complete, while one GCN core can do both.

C. What Pascal does requires h/w support. Otherwise they'd do it on Kepler and up.
What it does do needed hardware support, but that doesn't mean it's full asynchronous compute integration; it's just a half step.

D. Async compute running serially doesn't impact performance at all, as can be seen from most Maxwell benchmarks. It doesn't provide any performance boost, yes, but it doesn't make things slower.
It makes things slower compared to cores that fully integrate async compute, otherwise we wouldn't be having this conversation.

E. What you still seem to not get is that there's no "proper" way of handling this execution in h/w. Theoretically speaking it would be great to have Pascal support the same context-agnostic execution of threads on its SMs -- but in practice there's no such thing as a free feature. Just as an example, what if implementing GCN-style async execution would result in Pascal losing its frequency advantage over GCN? Would a ~1-5% performance gain from such an implementation be a good trade-off for losing ~25-33% of the clocks? A rhetorical question.
So you agree that Nvidia released a poorly designed DX12 architecture in order to increase theoretical performance over actual capability to run code fully optimized for DX12? One of the greatest advantages of async compute is the ability to hyperthread graphics and compute on the same core. Nvidia chose not to include that, and 4-year-old GCN vs modern Pascal performance shows the actual performance that trade-off cost them. If you can't see that, you're in denial.

Responses below each quoted point.

It could even be argued, considering the performance boosts older GCN cards get, that Nvidia's choice not to adopt the tech years ago has been holding the industry back.
 

Brandon F

Well congratulations! You got yourself caught!
Yikes, I just bought this during the sale and there's constant stuttering and crazy long menu delays. I mean, every time I pause to the options menu there's a several-second pause where I question if the game crashed. The game stutters regularly in action as well; I have no other games installed that act like this on Steam/Origin/GoG...

i7 6700k
GTX 970
16gb RAM

I haven't updated my Nvidia drivers in about 3 weeks, so I'm not sure if anything pertinent was released in mid/late June that would help with this?
 
Thanks!
And that looks OK? I'm not a pixel counting "oh man this is so ugly" kind of guy, so as long as it's reasonably crisp and such, I'll be happy.

Yeah, it looks pretty good. Honestly there isn't too much of a difference between upscaling on and off.


Quoting myself from another thread

I think whatever they are doing just isn't worth the performance cost. The game looks good, but isn't mind-blowing IMO.
 

scitek

Member
Yikes, I just bought this during the sale and there's constant stuttering and crazy long menu delays. I mean, every time I pause to the options menu there's a several-second pause where I question if the game crashed. The game stutters regularly in action as well; I have no other games installed that act like this on Steam/Origin/GoG...

i7 6700k
GTX 970
16gb RAM

I haven't updated my Nvidia drivers in about 3 weeks, so I'm not sure if anything pertinent was released in mid/late June that would help with this?

Try turning the textures and effects down. That 3.5 GB limit could be biting you in the ass with the 970.
 