
Death Stranding - In-depth PS5 vs Nvidia & AMD PC - Performance Deep Dive

3060 Ti, this was my lowest fps. I think it matches the frame pretty closely, sorry the overlay is so small. Also, does anyone know how to pick non-native resolutions without setting the desktop to 4K?
PnXzP5y.png

Edit
So 51 fps wins?
 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
3060 Ti, this was my lowest fps. I think it matches the frame pretty closely, sorry the overlay is so small. Also, does anyone know how to pick non-native resolutions without setting the desktop to 4K?
PnXzP5y.png

Which RTX 3060 Ti do you have?

Also, is DSR set up in the Nvidia Control Panel? It should let you choose resolutions up to 4x your desktop resolution.

aid_3587_1.jpg
 
Maybe it's a general Ampere bottleneck, people; we may not know.

It's clear that the RTX 3000 series brought huge numbers like 10k cores and such, but not every core is created equal, never forget that :)

In the end, sadly, from the GTX 1060 to the RTX 3090, Nvidia has a problem: frequency. All those GPUs run at around 1.9 GHz. At least AMD made some strides there, and games always like frequency.

Maybe we're seeing an internal Ampere bottleneck here that somehow causes the 3070 not to shine. But that's an Nvidia problem; in the end, the PS5 runs at 2.2-2.3 GHz and RDNA 2 cards are flying above 2.5 GHz.

Maybe in that particular transition scene, frequency becomes the bottleneck. As I've said in my other post, not every 99% GPU reading is created equal. You can see 99% GPU utilization in AC: Valhalla while the GPU is consuming only 160W. What does 160W mean for a 220W TDP GPU? Only about 73% of the power budget: it means some parts of the GPU are internally bottlenecked and not working at all, even though the core load presents itself as 99%. So in this particular case, maybe his 2070 Super gets higher "computational" usage and the 3070 is somewhat underused. But as I've said, it's a problem with the Ampere architecture. I noticed this when I did a comparative benchmark against my friend's GTX 1080 Ti in NBA 2K19; practically, the 3070 failed to scale beyond 1080 Ti performance in certain old games.
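If you want to check the utilization-versus-power pattern on your own card, here's a minimal host-side sketch (my illustration, not something from this thread) that reads both the utilization counter and the actual power draw through NVML, the query library that ships with Nvidia's driver and backs the same counters Afterburner displays; build with `-lnvidia-ml`:

```cpp
// Poll GPU "utilization" alongside actual power draw via NVML.
// A 99% busy reading combined with power well under TDP suggests parts
// of the chip are idle or starved, which is the effect described above.
#include <cstdio>
#include <nvml.h>

int main() {
    nvmlInit();
    nvmlDevice_t dev;
    nvmlDeviceGetHandleByIndex(0, &dev);

    nvmlUtilization_t util;   // .gpu and .memory, in percent
    unsigned int mw = 0;      // power draw in milliwatts
    nvmlDeviceGetUtilizationRates(dev, &util);
    nvmlDeviceGetPowerUsage(dev, &mw);

    printf("util %u%%, power %.0f W\n", util.gpu, mw / 1000.0);
    nvmlShutdown();
}
```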



In this video, this valuable person talks about something being awkward on Ampere and claims that Nvidia's software team can tackle the issue with software optimizations. Then again, stuff like this makes me uneasy: if a GPU's performance depends on the game developer and Nvidia, that's not good. It may even suggest that Ampere will end up like Kepler: alone, forgotten and left behind... :( This is why GCN cards aged better; AMD kept the same architecture over the years.

Nvidia, however, constantly shifts architectures, and I'm pretty sure that causes trouble for game development.


I'm not as technically sound as some of the folks here, but the clock frequency could be an interesting factor at play here.

We already know that there are elements of the graphics pipeline that scale really well with high clock frequency, such as rasterisation, pixel fill rate, alpha effects, geometry throughput, and cache bandwidth and latency, amongst other things.

This has already been discussed an enormous number of times in other threads so I don't want to go into too much detail, but we can speculate as to what's allowing the PS5 to "pull ahead" in certain rendering scenarios that are sensitive to GPU clock frequency.

How Sony first-party developers architect their graphics engines around such advantages in the GPU is also an interesting discussion, and how well these engines "could" scale across AMD and perhaps even Nvidia cards with high clock frequencies is even more interesting, but of course beyond the scope of this thread.
 

Md Ray

Member
Hmm, very interesting. I never knew about this. Those benchmarks are nuts.

That said, this would be a great CPU to test AMD cards like the 6600 XT and the 6700 XT, the two RDNA 2 cards that are closest to the PS5 and XSX. The 2700 performs similarly to the PS5-equivalent CPU that went on the market recently, the 4800U I believe. So you can use this CPU to do some very interesting comparisons that will either confirm or bust any myth of console coding-to-the-metal. If the cards perform identically when downclocked to match the PS5 and XSX TFLOPS, we can say there is no secret sauce in the PS5's low-level API. This will also help point out any advantages the PS5 I/O might have over the PC and the XSX.

I am surprised that no one seems interested in doing those comparisons. Alex is happy jerking off to 2060 Super comparisons to prove his point, but I think there are far more interesting findings to be had if they stick with these two RDNA 2 cards.
For an even closer console-equivalent CPU, my suggestion would be the Ryzen 7 4700G rather than the Zen+ 2700 or the 3700X. Why?
  1. Firstly, it's Zen 2, and the only desktop 8-core whose make-up comes closest to what's inside the consoles.
  2. It's a monolithic design like the console Zen 2, not a chiplet like the 3700X with its separate IO die, which brings drawbacks, like higher latency when communicating between CCXs, that monolithic designs don't have.
  3. It has the exact same 8MB of L3 cache, not the 16MB or 32MB you see in the 2700 or 3700X.
And that's about where the HW similarities end. It will still require a downclock to get even closer, since the 4700G is a 65W TDP part with out-of-the-box clock speeds very similar to a 3700X's. Beyond this, the console CPUs have received some custom tweaks here and there that you can't really do anything about.

For the GPU, I really wish they'd release a 36 CU RDNA 2 part, call it RX 6700 or whatever, with at least 384 GB/s of bandwidth; that would be the closest thing to the PS5 GPU while having an identical config in terms of CUs/ROPs/TMUs. I'll definitely do these tests/comparisons if I can get my hands on one.
 

GHG

Member
3060 Ti, this was my lowest fps. I think it matches the frame pretty closely, sorry the overlay is so small. Also, does anyone know how to pick non-native resolutions without setting the desktop to 4K?
PnXzP5y.png

Edit
So 51 fps wins?

Create a custom resolution in the Nvidia Control Panel. After you create it, make sure the checkbox next to it is ticked so that the resolution appears in all your games.
 
Hmm? Strange.


P.S. Unless I'm mistaken, the Gaming X Trio is the fastest RTX 3060 Ti available.
You lucky dog. Imma hate you forever
Yeah, I enjoy it quite a bit. I picked it because it was scanned into inventory in front of me for MSRP in January. It's quite fast and runs cool. OCed very well too.
For an even closer console-equivalent CPU, my suggestion would be the Ryzen 7 4700G rather than the Zen+ 2700 or the 3700X. Why?
  1. Firstly, it's Zen 2, and the only desktop 8-core whose make-up comes closest to what's inside the consoles.
  2. It's a monolithic design like the console Zen 2, not a chiplet like the 3700X with its separate IO die, which brings drawbacks, like higher latency when communicating between CCXs, that monolithic designs don't have.
  3. It has the exact same 8MB of L3 cache, not the 16MB or 32MB you see in the 2700 or 3700X.
And that's about where the HW similarities end. It will still require a downclock to get even closer, since the 4700G is a 65W TDP part with out-of-the-box clock speeds very similar to a 3700X's. Beyond this, the console CPUs have received some custom tweaks here and there that you can't really do anything about.

For the GPU, I really wish they'd release a 36 CU RDNA 2 part, call it RX 6700 or whatever, with at least 384 GB/s of bandwidth; that would be the closest thing to the PS5 GPU while having an identical config in terms of CUs/ROPs/TMUs.
I feel like picking that one is kinda putting the PC at a disadvantage. The PS5 is set up to not need the cache, while the PC most certainly is not. It's also PCIe 3.0 and doesn't support fast RAM.
 

Md Ray

Member
Maybe it's a general Ampere bottleneck, people; we may not know.

It's clear that the RTX 3000 series brought huge numbers like 10k cores and such, but not every core is created equal, never forget that :)

In the end, sadly, from the GTX 1060 to the RTX 3090, Nvidia has a problem: frequency. All those GPUs run at around 1.9 GHz. At least AMD made some strides there, and games always like frequency.

Maybe we're seeing an internal Ampere bottleneck here that somehow causes the 3070 not to shine. But that's an Nvidia problem; in the end, the PS5 runs at 2.2-2.3 GHz and RDNA 2 cards are flying above 2.5 GHz.

Maybe in that particular transition scene, frequency becomes the bottleneck. As I've said in my other post, not every 99% GPU reading is created equal. You can see 99% GPU utilization in AC: Valhalla while the GPU is consuming only 160W. What does 160W mean for a 220W TDP GPU? Only about 73% of the power budget: it means some parts of the GPU are internally bottlenecked and not working at all, even though the core load presents itself as 99%. So in this particular case, maybe his 2070 Super gets higher "computational" usage and the 3070 is somewhat underused. But as I've said, it's a problem with the Ampere architecture. I noticed this when I did a comparative benchmark against my friend's GTX 1080 Ti in NBA 2K19; practically, the 3070 failed to scale beyond 1080 Ti performance in certain old games.



In this video, this valuable person talks about something being awkward on Ampere and claims that Nvidia's software team can tackle the issue with software optimizations. Then again, stuff like this makes me uneasy: if a GPU's performance depends on the game developer and Nvidia, that's not good. It may even suggest that Ampere will end up like Kepler: alone, forgotten and left behind... :( This is why GCN cards aged better; AMD kept the same architecture over the years.

Nvidia, however, constantly shifts architectures, and I'm pretty sure that causes trouble for game development.

Big fan of NerdTechGasm. That's a fantastic channel right there that does in-depth analyses of GPU architectures.
 

SlimySnake

Flashless at the Golden Globes
For an even closer console-equivalent CPU, my suggestion would be the Ryzen 7 4700G rather than the Zen+ 2700 or the 3700X. Why?
  1. Firstly, it's Zen 2, and the only desktop 8-core whose make-up comes closest to what's inside the consoles.
  2. It's a monolithic design like the console Zen 2, not a chiplet like the 3700X with its separate IO die, which brings drawbacks, like higher latency when communicating between CCXs, that monolithic designs don't have.
  3. It has the exact same 8MB of L3 cache, not the 16MB or 32MB you see in the 2700 or 3700X.
And that's about where the HW similarities end. It will still require a downclock to get even closer, since the 4700G is a 65W TDP part with out-of-the-box clock speeds very similar to a 3700X's. Beyond this, the console CPUs have received some custom tweaks here and there that you can't really do anything about.

For the GPU, I really wish they'd release a 36 CU RDNA 2 part, call it RX 6700 or whatever, with at least 384 GB/s of bandwidth; that would be the closest thing to the PS5 GPU while having an identical config in terms of CUs/ROPs/TMUs. I'll definitely do these tests/comparisons if I can get my hands on one.
Yeah, that's the CPU I was wondering about. But I saw some benchmarks posted and the 2700 was roughly on par with this CPU, give or take 10%.

BTW, I noticed that your 3070 might be underperforming a bit. For example, you are hitting 49 fps here, but Werewolfgrandma's 3060 Ti is hitting 51 fps at roughly the same point.


tBr2RDz.png


PnXzP5y.png



If I were you, I'd look into what might be bottlenecking your PC, because it should be at least 15-20% faster than the 3060 Ti. Your GPU utilization reads 99%, but the results should be higher than they are. I'd recommend running some Time Spy or Fire Strike benchmarks and comparing your score with other users on 3070s and 3700Xs. Fire Strike's graphics score is actually a pretty good indicator of GPU performance because it's kept separate from the combined CPU+GPU score. I somehow lost 2k points off my graphics score in two years, so my card is definitely degrading.
 

Md Ray

Member
I feel like picking that one is kinda putting the PC at a disadvantage. The PS5 is set up to not need the cache, while the PC most certainly is not. It's also PCIe 3.0 and doesn't support fast RAM.
Are you talking about the CPU? The 4700G I'm talking about would actually perform better than a Ryzen 2700 when paired with a Radeon dGPU.
 
Yeah, that's the CPU I was wondering about. But I saw some benchmarks posted and the 2700 was roughly on par with this CPU, give or take 10%.

BTW, I noticed that your 3070 might be underperforming a bit. For example, you are hitting 49 fps here, but Werewolfgrandma's 3060 Ti is hitting 51 fps at roughly the same point.


tBr2RDz.png


PnXzP5y.png



If I were you, I'd look into what might be bottlenecking your PC, because it should be at least 15-20% faster than the 3060 Ti. Your GPU utilization reads 99%, but the results should be higher than they are. I'd recommend running some Time Spy or Fire Strike benchmarks and comparing your score with other users on 3070s and 3700Xs. Fire Strike's graphics score is actually a pretty good indicator of GPU performance because it's kept separate from the combined CPU+GPU score. I somehow lost 2k points off my graphics score in two years, so my card is definitely degrading.
My 3060 Ti is beastly, though. Also, it runs at 2100 MHz most of the time, so that probably helps.
Are you talking about the CPU? The 4700G I'm talking about would actually perform better than a Ryzen 2700 when paired with a Radeon dGPU.
Yeah, it only supports PCIe 3.0 and 3200 MHz RAM. On PC you do have to be a bit faster to have equal specs. Also, the PC is missing PS5 hardware. Why get rid of PC hardware the PS5 doesn't have?

Side note:
TAA is the AA method to use for PS5-equivalent settings? Also, I actually got slightly higher FPS with vsync on than with vsync off.
 

Md Ray

Member
My 3060 Ti is beastly, though. Also, it runs at 2100 MHz most of the time, so that probably helps.
Truly, a rising tide lifts all boats. :pie_ssmiling:
Yeah, it only supports PCIe 3.0 and 3200 MHz RAM. On PC you do have to be a bit faster to have equal specs. Also, the PC is missing PS5 hardware. Why get rid of PC hardware the PS5 doesn't have?

Side note:
TAA is the AA method to use for PS5-equivalent settings? Also, I actually got slightly higher FPS with vsync on than with vsync off.
Yes, TAA is what all three (PS4, Pro, PS5) use.
 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
Yeah, that's the CPU I was wondering about. But I saw some benchmarks posted and the 2700 was roughly on par with this CPU, give or take 10%.

BTW, I noticed that your 3070 might be underperforming a bit. For example, you are hitting 49 fps here, but Werewolfgrandma's 3060 Ti is hitting 51 fps at roughly the same point.


tBr2RDz.png


PnXzP5y.png



If I were you, I'd look into what might be bottlenecking your PC, because it should be at least 15-20% faster than the 3060 Ti. Your GPU utilization reads 99%, but the results should be higher than they are. I'd recommend running some Time Spy or Fire Strike benchmarks and comparing your score with other users on 3070s and 3700Xs. Fire Strike's graphics score is actually a pretty good indicator of GPU performance because it's kept separate from the combined CPU+GPU score. I somehow lost 2k points off my graphics score in two years, so my card is definitely degrading.
Werewolf's RTX 3060 Ti is literally the fastest 3060 Ti available... he should be capable of keeping it above a 2000 MHz core constantly.
It's punching above its weight class for sure.

Is that...Windows 7?
:messenger_open_mouth:
Hahahah has it been that long since you saw Windows 7?

Not my screenshot, btw; it was the first result on a Google image search. Was too lazy to take a screenshot.
 

SlimySnake

Flashless at the Golden Globes
My 3060 Ti is beastly, though. Also, it runs at 2100 MHz most of the time, so that probably helps.

Yeah, it only supports PCIe 3.0 and 3200 MHz RAM. On PC you do have to be a bit faster to have equal specs. Also, the PC is missing PS5 hardware. Why get rid of PC hardware the PS5 doesn't have?

Side note:
TAA is the AA method to use for PS5-equivalent settings? Also, I actually got slightly higher FPS with vsync on than with vsync off.
That's pretty impressive. You seem to be getting 200 MHz more than Md Ray's 3070 in that shot, though you do have fewer shader cores. Still, at that frequency and shader count you are at about 20 TFLOPS while he is at about 22 TFLOPS, so he should still be a bit higher.

Or maybe Cerny was right and higher frequencies offer more performance than more shader cores.
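For what it's worth, those 20 and 22 TFLOPS figures fall straight out of the standard peak-FP32 formula: 2 FLOPs (one FMA) per core per clock. A quick sketch using Ampere's public core counts, with the clocks from the overlays as the only assumption:

```cpp
// Peak FP32 throughput: TFLOPS = 2 FLOPs per FMA * shader cores * GHz / 1000.
#include <cstdio>

static double tflops(int cores, double ghz) {
    return 2.0 * cores * ghz / 1000.0;
}

int main() {
    printf("3060 Ti (4864 cores @ 2.1 GHz): %.1f TFLOPS\n", tflops(4864, 2.1)); // ~20.4
    printf("3070    (5888 cores @ 1.9 GHz): %.1f TFLOPS\n", tflops(5888, 1.9)); // ~22.4
}
```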
 
Werewolf's RTX 3060 Ti is literally the fastest 3060 Ti available... he should be capable of keeping it above a 2000 MHz core constantly.
It's punching above its weight class for sure.


Hahahah has it been that long since you saw Windows 7?

Not my screenshot, btw; it was the first result on a Google image search. Was too lazy to take a screenshot.
Call me Grandma. It's usually between 2080 and 2100 MHz.
 

martino

Member
That's actually really impressive for a 1080 Ti.
The card that keeps on giving.

Did you manage to get the average by the end of the cutscene?
In fact, I redid the test with all the options Afterburner can offer and found 32 fps for the 0.1% low (the "min" fps seems to be the 1% low, and my previous number was based on that).
This is the full cutscene data for me this second time:

2nlohy.png

The 32 fps didn't appear at the motorbike moment but during a camera change before it.
 

yewles1

Member
For an even closer console-equivalent CPU, my suggestion would be the Ryzen 7 4700G rather than the Zen+ 2700 or the 3700X. Why?
  1. Firstly, it's Zen 2, and the only desktop 8-core whose make-up comes closest to what's inside the consoles.
  2. It's a monolithic design like the console Zen 2, not a chiplet like the 3700X with its separate IO die, which brings drawbacks, like higher latency when communicating between CCXs, that monolithic designs don't have.
  3. It has the exact same 8MB of L3 cache, not the 16MB or 32MB you see in the 2700 or 3700X.
And that's about where the HW similarities end. It will still require a downclock to get even closer, since the 4700G is a 65W TDP part with out-of-the-box clock speeds very similar to a 3700X's. Beyond this, the console CPUs have received some custom tweaks here and there that you can't really do anything about.

For the GPU, I really wish they'd release a 36 CU RDNA 2 part, call it RX 6700 or whatever, with at least 384 GB/s of bandwidth; that would be the closest thing to the PS5 GPU while having an identical config in terms of CUs/ROPs/TMUs. I'll definitely do these tests/comparisons if I can get my hands on one.
Can't you take a 6700 XT and disable 4 CUs?
 
That's pretty impressive. You seem to be getting 200 MHz more than Md Ray's 3070 in that shot, though you do have fewer shader cores. Still, at that frequency and shader count you are at about 20 TFLOPS while he is at about 22 TFLOPS, so he should still be a bit higher.

Or maybe Cerny was right and higher frequencies offer more performance than more shader cores.
I added 150 MHz to an already overclocked card. Md Ray has a 3070 FE, but I don't know if they overclocked it.

Cerny was right, in the area of 6%. Some engines will do better, some worse, but PC parts show an average of around 6%, I believe.
 

yamaci17

Member
For an even closer console-equivalent CPU, my suggestion would be the Ryzen 7 4700G rather than the Zen+ 2700 or the 3700X. Why?
  1. Firstly, it's Zen 2, and the only desktop 8-core whose make-up comes closest to what's inside the consoles.
  2. It's a monolithic design like the console Zen 2, not a chiplet like the 3700X with its separate IO die, which brings drawbacks, like higher latency when communicating between CCXs, that monolithic designs don't have.
  3. It has the exact same 8MB of L3 cache, not the 16MB or 32MB you see in the 2700 or 3700X.
And that's about where the HW similarities end. It will still require a downclock to get even closer, since the 4700G is a 65W TDP part with out-of-the-box clock speeds very similar to a 3700X's. Beyond this, the console CPUs have received some custom tweaks here and there that you can't really do anything about.

For the GPU, I really wish they'd release a 36 CU RDNA 2 part, call it RX 6700 or whatever, with at least 384 GB/s of bandwidth; that would be the closest thing to the PS5 GPU while having an identical config in terms of CUs/ROPs/TMUs. I'll definitely do these tests/comparisons if I can get my hands on one.

The PS5/Xbox Series X CPUs are not like the 4700G; this is a huge misconception that some people still have. The PS5 CPU is exactly like the 4700S: it has two CCX clusters, so it's still bound by high inter-CCX latency:



This user talks about the PS5/Xbox CPU in detail in a long thread of tweets; have a read, it's nice :)
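To make "inter-CCX latency" concrete: the usual way to measure it is to pin two threads to specific cores and bounce a cache line between them; core pairs sitting in different CCXs show a much longer round trip. A rough, hypothetical sketch (Linux-only, and the core IDs are placeholders since the CCX layout differs per CPU):

```cpp
// Ping-pong a value between two pinned threads and time the round trip.
// Run once with CORE_A/CORE_B in the same CCX, once across CCXs, and compare.
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>
#include <pthread.h>

constexpr int CORE_A = 0, CORE_B = 4;  // placeholder core IDs
constexpr long ITERS = 1000000;
std::atomic<long> token{0};

static void pin(int core) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

int main() {
    std::thread peer([] {
        pin(CORE_B);
        for (long i = 1; i <= 2 * ITERS; i += 2) {
            while (token.load(std::memory_order_acquire) != i) {}  // wait for ping
            token.store(i + 1, std::memory_order_release);          // send pong
        }
    });
    pin(CORE_A);
    auto t0 = std::chrono::steady_clock::now();
    for (long i = 0; i < 2 * ITERS; i += 2) {
        token.store(i + 1, std::memory_order_release);              // send ping
        while (token.load(std::memory_order_acquire) != i + 2) {}   // wait for pong
    }
    std::chrono::duration<double, std::nano> dt = std::chrono::steady_clock::now() - t0;
    peer.join();
    printf("avg round trip: %.0f ns\n", dt.count() / ITERS);
}
```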
 

Md Ray

Member
Yeah, that's the CPU I was wondering about. But I saw some benchmarks posted and the 2700 was roughly on par with this CPU, give or take 10%.

BTW, I noticed that your 3070 might be underperforming a bit. For example, you are hitting 49 fps here, but Werewolfgrandma's 3060 Ti is hitting 51 fps at roughly the same point.


tBr2RDz.png


PnXzP5y.png



If I were you, I'd look into what might be bottlenecking your PC, because it should be at least 15-20% faster than the 3060 Ti. Your GPU utilization reads 99%, but the results should be higher than they are. I'd recommend running some Time Spy or Fire Strike benchmarks and comparing your score with other users on 3070s and 3700Xs. Fire Strike's graphics score is actually a pretty good indicator of GPU performance because it's kept separate from the combined CPU+GPU score. I somehow lost 2k points off my graphics score in two years, so my card is definitely degrading.
BOOM! :messenger_winking: Nothing a small OC can't fix.

53 fps now... It's a Founders Edition, btw, so it's basically a reference card. My initial tests were at stock frequency.

d1Pc2wF.png
 
BOOM! :messenger_winking: Nothing a small OC can't fix.

53 fps now... It's a Founders Edition, btw, so it's basically a reference card. My initial tests were at stock frequency.

d1Pc2wF.png
Nice, now nut up and OC that card to the moon.

I had the option to buy a 3070 for $165 more. Didn't seem worth it for a few %.
 

Dream-Knife

Banned
That's pretty impressive. You seem to be getting 200 MHz more than Md Ray's 3070 in that shot, though you do have fewer shader cores. Still, at that frequency and shader count you are at about 20 TFLOPS while he is at about 22 TFLOPS, so he should still be a bit higher.

Or maybe Cerny was right and higher frequencies offer more performance than more shader cores.
You can't compare Nvidia's TFLOPS count with AMD's; Nvidia calculates it differently. Ampere's figure counts its doubled-up FP32 units, half of which share a datapath with INT32 work, so the headline number overstates real-world throughput relative to RDNA 2.
 

Md Ray

Member
The PS5/Xbox Series X CPUs are not like the 4700G; this is a huge misconception that some people still have. The PS5 CPU is exactly like the 4700S: it has two CCX clusters, so it's still bound by high inter-CCX latency:



This user talks about the PS5/Xbox CPU in detail in a long thread of tweets; have a read, it's nice :)

The 4700G is basically Renoir in desktop APU form and has two CCX clusters just like 4700S/PS5 CPU. What am I missing? :pie_thinking:
 

Md Ray

Member
Nice, now nut up and OC that card to the moon.

I had the option to buy a 3070 for $165 more. Didn't seem worth it for a few %.
Definitely doing that.

I paid just a little over $100 above the 3070 FE's $499 launch price, so I got lucky. :messenger_grinning_smiling:
All the other 3060 Ti and 3070 AIB cards were selling for over 2x, nearly 3x, the price of the Founders Edition when I bought this. Pretty crazy.
 

Pedro Motta

Member
Yeah, that's the CPU I was wondering about. But I saw some benchmarks posted and the 2700 was roughly on par with this CPU, give or take 10%.

BTW, I noticed that your 3070 might be underperforming a bit. For example, you are hitting 49 fps here, but Werewolfgrandma's 3060 Ti is hitting 51 fps at roughly the same point.


tBr2RDz.png


PnXzP5y.png



If I were you, I'd look into what might be bottlenecking your PC, because it should be at least 15-20% faster than the 3060 Ti. Your GPU utilization reads 99%, but the results should be higher than they are. I'd recommend running some Time Spy or Fire Strike benchmarks and comparing your score with other users on 3070s and 3700Xs. Fire Strike's graphics score is actually a pretty good indicator of GPU performance because it's kept separate from the combined CPU+GPU score. I somehow lost 2k points off my graphics score in two years, so my card is definitely degrading.
The comparison isn't at the same spot; the 3070 shot has a lot more geometric detail on display. I bet half a second later the fps would drop a bit more.
 

rnlval

Member
One is NX Gamer's (at the top), which has the 2070 and PS5 in it; the other is my own screenshot (at the bottom), with the MSI Afterburner OSD saying "3070". I thought it was obvious?

Anyway, you can see the VRAM consumption there as well. It's under 6GB, so that's not the issue. As NXG said, it's a combination of the engine favoring AMD's architecture, the driver cost of the DX12 layer, and all of that overhead affecting the PC, including vsync, but I had turned that off in my test.

The API and driver overhead on consoles is very lean in comparison, so devs are able to get the most out of their HW.
PC's Doom 2016 uses vendor-specific shader intrinsics via a Vulkan API extension.

For example


Doom Eternal (Vulkan API) has equal software optimizations for both AMD and NVIDIA GPUs.

DirectX 12's Shader Model 6 exposes the GPU hardware's wavefront compute programming model.
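For anyone who hasn't met the wave/wavefront model: SM6's wave intrinsics (WaveActiveSum and friends) let every lane of a wave read each other's registers directly instead of going through groupshared memory. As a hypothetical illustration of the same idea, here's the CUDA analogue, a warp-level sum using shuffle intrinsics (a sketch, not the HLSL itself):

```cpp
// CUDA analogue of SM6-style wave ops: a 32-lane warp reduces its values
// with register-to-register shuffles, no shared memory involved.
#include <cstdio>

__global__ void warpSum(const float* in, float* out) {
    float v = in[threadIdx.x];
    for (int offset = 16; offset > 0; offset >>= 1)
        v += __shfl_down_sync(0xffffffff, v, offset);  // pull from lane + offset
    if (threadIdx.x == 0) *out = v;  // lane 0 ends up with the warp total
}

int main() {
    float h_in[32], h_out = 0.0f, *d_in, *d_out;
    for (int i = 0; i < 32; ++i) h_in[i] = 1.0f;  // expect a sum of 32
    cudaMalloc((void**)&d_in, sizeof(h_in));
    cudaMalloc((void**)&d_out, sizeof(float));
    cudaMemcpy(d_in, h_in, sizeof(h_in), cudaMemcpyHostToDevice);
    warpSum<<<1, 32>>>(d_in, d_out);
    cudaMemcpy(&h_out, d_out, sizeof(float), cudaMemcpyDeviceToHost);
    printf("warp sum = %.0f\n", h_out);  // prints 32
    cudaFree(d_in);
    cudaFree(d_out);
}
```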

Death Stranding doesn't use hardware raytracing, so it may not reflect games that do.
 

rnlval

Member
Well, here is my 2070S at 4K max settings and v-synced. I guess Super is the operative word, as it crushes the 2070 in the video. It's not overclocked and sits at 1950 to 1980 MHz out of the box, so it's not a shit card, but not some absurd silicon-lottery win either.

I didn't cherry-pick other than looking for bits where the PS5 looked to be miles ahead, so I've included plenty of shots where the PS5 wins.

It's behind the PS5, but not by that much, and Shadowplay eats 2-4 fps. If a 3070 is 25-30% faster than a 2070S, it should absolutely clean house here unless there is something wrong with the rest of the PC.

The images are 1440p as I just screenshotted my recording, but the game was running with these settings:



The UI will be really small on mobile as I didn't bump up the font size, so I'll list the figures for each set of images.

1)

2070 in video 42fps
PS5 57fps
My 2070s 55fps




2)

2070 in video 41fps
PS5 60fps
My 2070s 54fps




3)

2070 in video 42fps
PS5 60fps
My 2070s 57fps




4)

2070 in video 30fps
PS5 40fps
My 2070s 41fps




5)

2070 in video 38fps
PS5 52fps
My 2070s 54fps




And here is an average from the end of the cutscene:

Avg fps:
2070 in video 41.63
PS5 56.97
My 2070s 54




I'd guess the PS5 would sit between a 2080 and a 2080 Super, which is no surprise for a game favoring AMD, and still impressive to be fair. Not sure why the 2070 in the video performs so poorly, but I'm not going to speculate.

The RTX 2070 FE was beaten by the RTX 2060 Super.


YroWTda.png
 

Md Ray

Member
The comparison isn't at the same spot; the 3070 shot has a lot more geometric detail on display. I bet half a second later the fps would drop a bit more.
You're right, it's not the same spot, so I went back and re-checked my benchmark capture: it was still at 49 fps in the exact spot/frame where grandma was getting 51 fps. After OCing the card, I'm seeing 53 fps consistently.
 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
The RTX 2070 FE was beaten by the RTX 2060 Super.


You say that as if the 2060S isn't just an overclocked 2070 that's ~100 CUDA cores shy.
It's the reason a lot of sites stopped including the 2070 in their benchmarks... it performs pretty much identically to the 2060S, and they trade blows basically on a run-to-run basis.
 
You're right, it's not the same spot, so I went back and re-checked my benchmark capture: it was still at 49 fps in the exact spot/frame where grandma was getting 51 fps. After OCing the card, I'm seeing 53 fps consistently.
Yeah, I had another low of 51 quite a bit further into that jump, but 51 was the lowest. I have the average and 1% low set up the same as the others in RTSS, but they just won't show up on screen.
 

SlimySnake

Flashless at the Golden Globes
Does anyone know why the graphics options are no longer available in the PS5 version? Or rather, where can I set the 21:9 mode? I played my old save file for a few hours in widescreen mode and knew exactly where that setting was in the options menu, but when I started a new save, it defaulted to fullscreen and I can't find the damn setting to change it back. I can't even change it from performance to quality because that option isn't there either.
 