
Watch_Dogs PC Performance Thread

Not enough difference to matter at all. There are still ugly ass textures on Ultra. If you want to see specifically how small of a difference there is, here you go. http://international.download.nvidi...dogs-textures-comparison-1-ultra-vs-high.html

Thanks for the link.

Finally managed to get it locked to 30 FPS, with some very minor dips while driving. Lowered textures to High, and AA is now set to 2x MSAA. I actually needed to add MSAA; running on V-sync level 2 causes the framerate to tank, but setting it to level 1 keeps it mostly at 30. To prevent it from rising to 60 intermittently, I added MSAA, which seemed to do the job.

Still, the game does dip here and there; got weird stutters randomly when doing the first ctOS hacking mission, when jumping from camera to camera. At times, it would be at 60, sometimes it would dip lower than 20. I'm assuming it has to do with the game's poor optimization.
 

J.Edge

Neo Member
Hey everyone,

Was considering using Watch_Dogs as an excuse to upgrade from my 560 Ti to a 6GB 780. If this is the rest of my setup, is Ultra a pipe dream at 1080p (I'm not bothered about 30 fps as long as it's stable)?

i5 2500k @ 4.4GHz
8GB DDR3 2000 MHz
Windows 8.1

(Posted before but got quickly buried under the day-of-release chatter)
 

mkenyon

Banned
Hey everyone,

Was considering using Watch_Dogs as an excuse to upgrade from my 560 Ti to a 6GB 780. If this is the rest of my setup, is Ultra a pipe dream at 1080p (I'm not bothered about 30 fps as long as it's stable)?

i5 2500k @ 4.4GHz
8GB DDR3 2000 MHz
Windows 8.1

(Posted before but got quickly buried under the day-of-release chatter)
You should do really well at 1080p/ultra. Performance will be right around 60 FPS for most of the time, with no significant chugging.
 

dsk1210

Member
Hey everyone,

Was considering using Watch_Dogs as an excuse to upgrade from my 560 Ti to a 6GB 780. If this is the rest of my setup, is Ultra a pipe dream at 1080p (I'm not bothered about 30 fps as long as it's stable)?

i5 2500k @ 4.4GHz
8GB DDR3 2000 MHz
Windows 8.1

(Posted before but got quickly buried under the day-of-release chatter)

You will be fine. I am running a similar rig except with a 780 Ti, and I can keep a very regular 60 FPS, with stuttering every now and again as it streams data in.
 
There are geometry differences between High and Medium textures in NVIDIA's article/comparison shots (which could help a LOT with performance, IMO).

I wish we could separate these two things, textures and model quality, into two separate settings, because I'd like higher-quality textures and don't care much for the added model quality.

http://international.download.nvidi...ogs-textures-comparison-1-high-vs-medium.html


Excuse my ignorance, but what is the difference between frame time and frames per second? What's so important about frame time, and how does it translate into gameplay? How can end users measure it, and ultimately does it matter more than FPS?

I know there are geometry differences between normal and high, but many people here are talking about the difference between high and ultra, and I do not see a geometry difference between those, but would still consider the texture differences significant.
 
There's no sure-fire fix, it seems. Some people seem to have alleviated it in different ways, but many are still experiencing it after trying these methods. People with high-end setups as well.

There's something going wrong somewhere.

Ah, ok, thanks.

Is there a way to remove the Ubisoft startup intro movie?
 

Dennis

Banned
I am scratching/shaking my head with all the negativity on PC performance.

I am running old as fuck 3GB GTX580s and I could not be happier with the performance.

I am getting 25-30 fps at 2560 x 1600 resolution, totally playable, with every setting at Ultra and Temporal SMAA.

If I were willing to lower some of the settings I would get even more frames.

Using 2x2 supersampling is too rich for gameplay but useable for screenshots.
 

LilJoka

Member
I am scratching/shaking my head with all the negativity on PC performance.

I am running old as fuck 3GB GTX580s and I could not be happier with the performance.

I am getting 25-30 fps at 2560 x 1600 resolution, totally playable, with every setting at Ultra and Temporal SMAA.

If I were willing to lower some of the settings I would get even more frames.

Using 2x2 supersampling is too rich for gameplay but useable for screenshots.

I don't think a lot of us would be happy with 25-30 fps...
 
I think you're still confused. You should read that post again with this in your head: FPS and frame time are the same data, just presented differently. FPS simply averages out frame times over a given second, which makes it a less accurate metric.

If you have 20 frames over one second, then frame time analysis looks at each frame individually. FPS just averages out those 20 frames as "20 FPS". In reality, you may have one frame at 200ms, one at 15ms, one at 60ms, etc. For translation, 16.7ms = 60 FPS, 33.3ms = 30 FPS.

I see what you're saying: measuring the latency between each frame, correct? The benefit is knowing more detailed info about what end-feel the game is giving you, beyond simply frames per second.

What I'm asking is, why is this info more relevant than FPS, and if it is important to measure, how can end users work at improving it on their end if they have poor frame-time performance?

I ask because it's kinda pointless to measure something if you can't adjust it or improve its performance with certain methods of optimization. Whereas with FPS, you can just alter certain graphical settings up or down to adjust the resulting FPS very easily.
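
To make the conversion in the quoted explanation concrete (16.7 ms per frame is 60 FPS, 33.3 ms is 30 FPS), here is a minimal sketch in plain Python; the function names are just illustrative, not from any tool mentioned in the thread:

```python
# Frame time (milliseconds per frame) and FPS (frames per second) are just
# reciprocals of each other, scaled by 1000.
def ms_to_fps(frame_time_ms: float) -> float:
    """Instantaneous FPS implied by a single frame's render time."""
    return 1000.0 / frame_time_ms

def fps_to_ms(fps: float) -> float:
    """Per-frame time budget for a target frame rate."""
    return 1000.0 / fps

print(ms_to_fps(16.7))   # ~59.9  -> reads as "60 FPS"
print(ms_to_fps(33.3))   # ~30.0  -> reads as "30 FPS"
print(ms_to_fps(200.0))  # 5.0    -> a single 200 ms frame is a visible hitch
print(fps_to_ms(60.0))   # ~16.7 ms budget per frame at 60 FPS
```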
 
Performance will be roughly the same as two Titans in SLI.

Well, yeah, but I am asking if there is a rich GAFer who has tried maxing everything on such beasts, be it a Titan Z or SLI Titans. Maybe this game is an evil plot between Ubi and Nvidia to release the most un-optimized AAA game in history in order to increase the number of Titan Zs sold to more than a couple.
 

Guri

Member
Thought this would be the most appropriate thread to share our findings.
Those are the default render configurations that the dev team seemed to aim for on PS4/XO :p
They're contained in an internal engine config that also allows for some interesting graphical enhancements on PC (see screenshots below taken by Vortex), but it also seems to cause some glitches with AA if modified.

Eventually this could help to tweak the performance; I will keep you updated.

Did you test the performance with those settings?
 

Dennis

Banned
This is done through Nvidia Inspector, yes? I wonder what performance I'd get by trying this and disabling AA @ 1080p. Using a GTX 760 4GB.

No, it is using the game's own config-file option. You can enable SS as an option. Supposedly downsampling is the better alternative, but that doesn't play nice for me.
 
Thought this would be the most appropriate thread to share our findings.
Those are the default render configurations that the dev team seemed to aim for on PS4/XO :p
They're contained in an internal engine config that also allows for some interesting graphical enhancements on PC (see screenshots below taken by Vortex), but it also seems to cause some glitches with AA if modified.

Eventually this could help to tweak the performance; I will keep you updated.

Could you release this for us to try out?
 

Gumbie

Member
I am scratching/shaking my head with all the negativity on PC performance.

I am running old as fuck 3GB GTX580s and I could not be happier with the performance.

I am getting 25-30 fps at 2560 x 1600 resolution, totally playable, with every setting at Ultra and Temporal SMAA.

If I were willing to lower some of the settings I would get even more frames.

Using 2x2 supersampling is too rich for gameplay but useable for screenshots.

I watched a webm you posted yesterday and it looked like a slideshow to me. To each his own though. :)
 

Skyzard

Banned
Most of us are playing the game while Dennis is just walking around taking photos like a tourist. 25 fps is perfect for that.

It's performing well for me, even on 2560 with max settings and temporal smaa. I just want to set v-sync but doing so in-game is weird and doesn't have great results. I set rendered gpu frames to 3... and I'm going to set v-sync through inspector, with triple buffering. I'm playing with a controller. Do those numbers sound right?
 

lifa-cobex

Member
Grrr

Steam is playing me up.

Telling me "missing executable". I've deleted the appcache folder, but I'm still having no luck.

Anyone got any ideas?

Edit: never mind. Fixed it.
 

mkenyon

Banned
I see what you're saying: measuring the latency between each frame, correct? The benefit is knowing more detailed info about what end-feel the game is giving you, beyond simply frames per second.

What I'm asking is, why is this info more relevant than FPS, and if it is important to measure, how can end users work at improving it on their end if they have poor frame-time performance?

I ask because it's kinda pointless to measure something if you can't adjust it or improve its performance with certain methods of optimization. Whereas with FPS, you can just alter certain graphical settings up or down to adjust the resulting FPS very easily.
They're the same thing, except FPS is an average of frame times. If you have 30 frames per second, each of those frames was created in a certain amount of time. The total time to create all 30 frames is one second, giving you an average of 33.3ms per frame. However, within that one second, you may have had a number of frames at 100ms and some at 10ms. If they average out to 33.3ms, then on paper it would look exactly the same as a second of gameplay where every frame was created in 33.3ms.

For example, both of these lines would be shown as "30 FPS"

[Image: BvEZx4O.png — frame-time chart of two example series that both average to 30 FPS]


Obviously series 2 has some major stuff going on with huge spikes above 120ms (8.3 FPS). But that's the issue, FPS averages out all of that data to give you a single data point, rather than ALL of the data.

It's like driving long distance. Let's say you wanted a chart of the speed you were going. FPS takes each 10-minute chunk of an hour-long trip and gives you an average for that 10 minutes. Instead, it's a lot more accurate to see a line chart of your speed throughout the entire hour. Does that make sense?
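
A quick way to see this numerically: two frame-time traces can average out to the same FPS and still feel completely different. A toy sketch (the numbers below are made up for illustration, not taken from the chart above):

```python
# Two made-up traces of 30 frames that each cover ~one second of gameplay.
steady = [33.3] * 30                            # every frame ~33.3 ms
spiky  = [20.0] * 27 + [150.0, 150.0, 159.0]    # mostly fast, three big hitches

for name, frames in (("steady", steady), ("spiky", spiky)):
    seconds = sum(frames) / 1000.0              # wall-clock time the frames cover
    fps = len(frames) / seconds                 # the single number an FPS counter reports
    worst = max(frames)
    print(f"{name}: {fps:.1f} FPS average, worst frame {worst:.0f} ms "
          f"(~{1000.0 / worst:.1f} FPS at that instant)")
# Both print ~30 FPS, but the second trace stutters hard three times in that second.
```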
 
Thinking about upgrading my 670 to a 780 Ti. What would the performance boost be like given I only have an i5 3570K? Won't be upgrading that until the refresh. I get a pretty steady 30 FPS right now with everything at max except AA (2x TXAA), aside from minor hitching during driving.
 
The state this game is currently in is utter nonsense. >(

I was really looking forward to WD.
I never bought Ubisoft games for the PC (X360 for Ubi all the way last gen), but WD and AC:IV were a sweet deal that I couldn't pass up.
AC:IV, with all its problems, ran beautifully on my rig (with RadeonPro triple-buffering vsync).

The experience with WD so far was disheartening to say the least. Inconsistent performance, subpar graphics, etc.

My rig:

Intel Core i5 4670K @ default 3.4GHz
8GB RAM
Radeon 7850 2GB (@ 975/1250 MHz)
Samsung 840 Pro SSD 256GB
Win 7 x64 Pro
Latest Catalyst 14.6
1920x1200/60Hz

What settings do you recommend?
Any way to triple buffer?
What should I do?

Should I just wait a couple of days/weeks? *sigh* #hopeless
 

noomi

Member
I don't know how this makes any sense.....

Running 1080p, everything maxed out, TXAA x4, using between 2900-3050 MB of VRAM... (3GB GTX 780)

How are people getting 3500-3900 MB of VRAM usage? Higher resolutions?
 

mkenyon

Banned
Thinking about upgrading my 670 to a Ti. What would the performance boost be like given I only have an i5 3570K? Won't be upgrading that until the refresh.
To a 780 Ti?

The 3570K isn't holding anything back. The difference between an i7 and i5 at the same frequency in this game is probably not even noticeable.
 
I don't know how this makes any sense.....

Running 1080p, everything maxed out, TXAA x4, using between 2900-3050 MB of VRAM... (3GB GTX 780)

How are people getting 3500-3900 MB of VRAM usage? Higher resolutions?

No idea. At Ultra w/ SMAA @ 4K, max VRAM usage has been exactly 3030MB out of 3072MB.
 
I have got:

4770K
8 gigs of RAM
780 Ti

I get about 50 frames with everything on Ultra with Temporal SMAA at 2560 x 1440, but sometimes it drops into the low teens for some weird reason. Performance for me has been all over the place.
 
To a 780 Ti?

The 3570K isn't holding anything back. The difference between an i7 and i5 at the same frequency in this game is probably not even noticeable.

Yeah, not sure why I left that out. And good, was just worried about such a high end card being limited by the CPU. Just really want to be able to downsample these new games and still get playable framerates.
 
I have an i5 3570K at 4.2 and two 7950s.

I get about 40-60 fps with vsync, no matter what settings I choose.

I tried with just one card as well, with similar results.

Tried locking to 30 fps, but then it dips to 25 fps every so often, which is annoying.
 

Denton

Member
DirectX doesn't feature triple buffering. Optionally, developers can use a render queue, which uses 3 buffers by default but can be adjusted from 2 to 8. This has some of the advantages of triple buffering (it won't crash your frame rate down to sync rate / 2), but it adds latency compared to true triple buffering, because it displays old frames instead of being able to discard them.

I don't know how much Ubisoft can actually do. DirectX has needed true triple buffering for years and years now.

Huh, is that true? I thought when I forced triple buffering on AC4 via D3D Overrider (and it worked; it let me have 55+ fps instead of 30), I was, you know, forcing triple buffering. On a DirectX game. So I was actually just forcing a 3-frame buffer?
And in Watch Dogs, I can set 1 frame or 2, 3, 4 or 5, but it seems to have no effect. Vsync is shit no matter the setting.
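
As a rough back-of-envelope for the render-queue point quoted above (a simplified model: it ignores scan-out, engine pipelining and driver behaviour, and just assumes each queued frame was generated one frame time earlier):

```python
# Rough model (an assumption, not a measurement): worst-case extra input latency
# from a render-ahead queue is roughly (queue depth) x (frame time), because the
# frame on screen was submitted that many frames ago.
def queue_latency_ms(fps: float, queued_frames: int) -> float:
    frame_time_ms = 1000.0 / fps
    return queued_frames * frame_time_ms

for depth in (1, 2, 3, 5):
    print(f"{depth} pre-rendered frame(s) at 30 FPS: ~{queue_latency_ms(30.0, depth):.0f} ms extra")
# A 3-deep queue at 30 FPS is ~100 ms on top of everything else, which is why a
# deep queue can feel sluggish even when the frame rate counter looks fine.
```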
 

Nokterian

Member
I am scratching/shaking my head with all the negativity on PC performance.

I am running old as fuck 3GB GTX580s and I could not be happier with the performance.

I am getting 25-30 fps at 2560 x 1600 resolution, totally playable, with every setting at Ultra and Temporal SMAA.

If I were willing to lower some of the settings I would get even more frames.

Using 2x2 supersampling is too rich for gameplay but useable for screenshots.

25-30fps..yeah..no.
 

dsk1210

Member
Thinking about upgrading my 670 to a 780 Ti. What would the performance boost be like given I only have an i5 3570K? Won't be upgrading that until the refresh. I get a pretty steady 30 FPS right now with everything at max except AA (2x TXAA), aside from minor hitching during driving.


That's what I upgraded from and to. The 780 Ti is a great card, about double the power of a 670, and Watch Dogs aside, it's amazing for downsampling nearly any game.
 
They're the same thing, except FPS is an average of frame times. If you have 30 frames per second, each of those frames was created in a certain amount of time. The total time to create all 30 frames is one second, giving you an average of 33.3ms per frame. However, within that one second, you may have had a number of frames at 100ms and some at 10ms. If they average out to 33.3ms, then on paper it would look exactly the same as a second of gameplay where every frame was created in 33.3ms.

For example, both of these lines would be shown as "30 FPS"

[Image: BvEZx4O.png — frame-time chart of two example series that both average to 30 FPS]


Obviously series 2 has some major stuff going on with huge spikes above 120ms (8.3 FPS). But that's the issue, FPS averages out all of that data to give you a single data point, rather than ALL of the data.

It's like driving long distance. Let's say you wanted a chart of the speed you were going. FPS takes each 10-minute chunk of an hour-long trip and gives you an average for that 10 minutes. Instead, it's a lot more accurate to see a line chart of your speed throughout the entire hour. Does that make sense?
I understand; my point is, if you have a set of data showing huge spikes, how does recording frame times help you?

Can you realistically use the data to improve frame times? I.e., going from the red line to the blue line is what I'm asking. Are there reliable ways to improve this? Otherwise, what's the point of recording more detailed data?

You say FPS obfuscates deeper issues that are visible in frame-time data, so how can we remedy or improve on those deeper issues now that we can accurately measure them?
 

mkenyon

Banned
I understand; my point is, if you have a set of data showing huge spikes, how does recording frame times help you?

Can you realistically use the data to improve frame times? I.e., going from the red line to the blue line is what I'm asking. Are there reliable ways to improve this? Otherwise, what's the point of recording more detailed data?

You say FPS obfuscates deeper issues that are visible in frame-time data, so how can we remedy or improve on those deeper issues now that we can accurately measure them?
So, in the case of swapping in assets, you'll see huge spikes in frame times upwards of 150ms or more. Then you know that your slowdowns to 20-40 FPS are the result of that, as opposed to something else going on. If the frame times are consistent around a given number, then things are working as intended for your system, and you just have shitty performance.

With HardOCP's data, for example, you can't tell that at all. From the previous page:

[Image: okh8Xv8.png — HardOCP FPS-over-time chart]


Looking at that chart, you think, "not too much going on, a bit of a slowdown there towards the end." That could be physics, it could be your PC running some background task, it could be all sorts of stuff. But then you look at the frametime data for the exact same sequence:

[Image: VurndvA.png — frame-time chart for the same sequence, showing large spikes]


Obviously, those huge spikes in frame time are when the card basically takes a shit and can barely render frames. It's taking nearly a quarter of a second to produce one frame at times. This is an obvious indication that either A) something is going terribly wrong with the game engine and is out of your control, or B) your card is swapping in new assets.

Now that you have limited the variables, you can then turn down textures to see if that helps rather than playing around with a ton of settings.
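
To do this kind of triage on your own captures, a small script along these lines works; this is just a sketch under the assumption that you have a log with one frame time in milliseconds per line (not the output format of any particular tool):

```python
import sys

def find_hitches(frame_times_ms, threshold_ms=100.0):
    """Return (frame index, frame time) pairs for frames slower than the threshold."""
    return [(i, ft) for i, ft in enumerate(frame_times_ms) if ft > threshold_ms]

def summarize(frame_times_ms):
    avg_fps = 1000.0 * len(frame_times_ms) / sum(frame_times_ms)
    hitches = find_hitches(frame_times_ms)
    print(f"average: {avg_fps:.1f} FPS, frames over 100 ms: {len(hitches)}")
    for i, ft in hitches:
        print(f"  frame {i}: {ft:.0f} ms (~{1000.0 / ft:.1f} FPS at that moment)")

if __name__ == "__main__":
    # Usage: python hitches.py frametimes.txt  (hypothetical file name)
    with open(sys.argv[1]) as f:
        summarize([float(line) for line in f if line.strip()])
```

If the flagged frames cluster around fast driving or entering new areas, that points at asset streaming (so try lowering textures, as suggested above); if they are spread evenly, the problem is more likely systemic.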
 

mkenyon

Banned
Without SLI I get a solid 30 fps. When I enable SLI I average ~50 fps, but I get stutters.
No difference in settings? You might consider turning down your textures and running SLI to get better general performance. Perhaps the greater disparity in frame rate when swapping in new textures with two cards makes it more noticeable.
 

mkenyon

Banned
He's playing at 1600p (essentially two 1080p panels) and everything at ultra on a GPU 3 gens old.

I think that's pretty good. We're talking about doing the rendering work of what, like, three PS4s do for this game.
To be fair, that GPU is only a single generation old. Its successor is the GTX 780.
 
I can't believe the crazy amount of stuttering I've been having with this game. I ended up having to use a mixture of Medium/High/Low settings to get a smooth framerate.

Is this typical?

Here's my specs.

i7-4771 3.9ghz
MSI GTX 770 2gb
MSI Z87-G45 Pro
16gb G.Skill DDR3 2400
 

x3sphere

Member
I swapped my 780 for a 290X.

Completely eliminated any stuttering with SMAA / everything Ultra and HBAO+ High. I'm gaming at 3440x1440. Looks like I'll be selling the 780...

To get it running on the 780, I had to disable depth of field, set HBAO+ to Low, and use no AA. There was still some stuttering from time to time as well.
 