
Horizon Forbidden West Complete Edition PC specifications revealed

S0ULZB0URNE

Member
Yeah, but it won't be noticeably large.

The game on PS5 already looks outstanding and sharp on a 4K screen regardless of which mode you pick.

It is highly optimized for the PS5, and that just won't be the case on PC.
It's a PS4 game at heart, which means it should scale with all types of builds and maybe some legacy (like PS4-era) PC parts.
PC is good at brute-forcing such games.
I don't think your rig will be able to do 4K 120, but it may get close.
With "fake resolution and FPS"
It might.
Why did you laugh at my post?
I recommend putting him on ignore as he's a rampant troll.
I know you said "like", but the PS5 is about 5.4x in terms of fill rate, and in terms of TFLOPS it's literally two PS4 Pros plus the base PS4.
Plus it's got the industry's fastest data-streaming I/O.
Which they always leave out and don't understand.
 
Last edited:

yamaci17

Member
Nah, Rift Apart doesn't scale as well as other PC games. The PS5 has more GPU grunt in that game than a 3070 and approaches the 3080, even though that shouldn't be the case.
that has nothing to do with the ps5 getting close to a 3070 or 3080. it has to do with the 3070 and 3080 getting downgraded performance due to heavy VRAM bottlenecks in Nixxes ports.

it is like claiming the 4060ti magically performs like the 4070

Kep4BGR.png


what you say would be true on paper, but it's untrue in practice. as you can see here, due to extreme vram pressure, the 4070 and the 4060ti look like they perform the same.

yet when you take the vram limitation out of the equation, you get the expected performance

jmO9S8o.png



what part of this is so hard to understand that I've had to repeat it at least 10 times now? how much clearer can I be? I don't understand. I even gave examples between GPUs, showing an actual outlier with an actual example in the actual game you've been talking about.

PS5 is not getting magic-sauce better GPU performance than a 3070 or 3080. if you believe that, then you'd have to believe the 4060ti gets that over the 4070 too, which is nonsensical. the way their engine handles things greatly lowers your GPU-bound performance when you're VRAM bound, which is where the 3070 and 3080 sit (7 and 9 GB DXGI budgets respectively).

if the 16 gb 4060ti performs 30% faster than the ps5 in these games, your logic falls on its face, because it is generally accepted that 4060ti = 3070 in 99% of cases. and by virtue of their actual GPU chips, it is true.


4vzQHus.png



as I said, you're confusing things.
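
to put rough numbers on why vram spill hurts so much (back-of-the-envelope using published specs, not measured in-game, so treat it as a sketch):

Code:
# once data no longer fits in VRAM, the GPU has to fetch it from system RAM
# over PCIe instead of its own memory bus. published peak figures:
vram_bw_3070 = 448.0  # GB/s, RTX 3070 GDDR6 memory bandwidth
pcie3_x16 = 16.0      # GB/s, PCIe 3.0 x16, approximate one-way peak
pcie4_x16 = 32.0      # GB/s, PCIe 4.0 x16, approximate one-way peak

for name, bw in [("PCIe 3.0 x16", pcie3_x16), ("PCIe 4.0 x16", pcie4_x16)]:
    print(f"{name} is ~{vram_bw_3070 / bw:.0f}x slower than the 3070's VRAM")

any frame that has to pull assets over that link stalls accordingly, which is why the same chip with enough vram looks like a different GPU.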
 
Last edited:

Gaiff

SBI’s Resident Gaslighter
What is with the debate regarding performance vs the PS5? The PS5 does 1800p checkerboard at 60fps, while a 3070 should do 1440p at 60fps. 2.88 million pixels vs 3.68 million pixels, so 28% more pixels, which is basically the power difference between the PS5 and the 3070. Nothing really to debate over.
PS5 upscales to 4K though. This has a cost.

The problems we have now, of course, are that we don't have matched settings, for one. And for two, the 3070 could run it at 1440p/80fps for all we know; it's apparently enough for 1440p/60 but not enough for 4K/60. It's a bit foolish to make definitive judgments without benchmarks.
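
For reference, here's the pixel math from the quote above spelled out (plain arithmetic, nothing more):

Code:
# native 1440p vs 1800p checkerboard (CB shades roughly half the grid per frame)
native_1440p = 2560 * 1440        # 3,686,400 pixels shaded
cb_1800p = (3200 * 1800) // 2     # 2,880,000 pixels shaded
print(f"{native_1440p / cb_1800p:.2f}x")  # ~1.28x, the "28% more pixels" figure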
 

Zathalus

Member
PS5 upscales to 4K though. This has a cost.

The problems we have now, of course, are that we don't have matched settings, for one. And for two, the 3070 could run it at 1440p/80fps for all we know; it's apparently enough for 1440p/60 but not enough for 4K/60. It's a bit foolish to make definitive judgments without benchmarks.
I don't believe it upscales to 4K; it's 1800p checkerboard, which does have some cost, but nothing on the level of temporal upscalers.

But you are right, it is pretty pointless to debate over recommended requirements, as we all know how inaccurate they can be.
 
The console version is usually 1440p/30fps (it can drop to 1080p at times, and even below 30), so you need at least 2x the GPU for 4K (4070S) and another 2x for 60fps.

So ~4x the power of the PS5; a 4090 won't be able to do it without upscaling (rough math in the sketch below):

SVHG93h.jpg
FF16 seems more CPU- than GPU-limited
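
For what it's worth, the quoted ~4x estimate holds up on pure pixel arithmetic (a rough sketch that assumes GPU-bound scaling, which is exactly what's in dispute here):

Code:
pixels_1440p = 2560 * 1440
pixels_4k = 3840 * 2160
res_factor = pixels_4k / pixels_1440p  # 2.25x the pixels
fps_factor = 60 / 30                   # 2x the frame rate
print(f"~{res_factor * fps_factor:.1f}x")  # ~4.5x, in the ballpark of the 4x claim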
 

nowhat

Member
Anyway, is the CB in Forbidden West any better than in Death Stranding? Because I believe it's the first PS5 game that will allow us to compare them directly.
It's not, really (the first game that will allow for the comparison). First of all, we're talking about different iterations of Decima. But beyond that, when it comes to reconstruction, Death Stranding on PS5 may use some form of it, but it's not CB, at least according to John from DF.

 

Bojji

Member
Nah, Rift Apart doesn't scale as well as other PC games. The PS5 has more GPU grunt in that game than a 3070 and approaches the 3080, even though that shouldn't be the case.

Is there any good comparison of PS5 vs. GPUs in RA?

FF16 seems more CPU- than GPU-limited

In GFX mode it's entirely GPU-limited; it drops even to 1080p and below 30fps. Performance mode runs above 30 in those same areas, so it's not a CPU limit.
 

S0ULZB0URNE

Member
What "fake resolution and FPS" are you talking about?



This game has DirectStorage on PC. "Industry's fastest" is going to depend, but I don't think it really matters.
If a technique is used to simulate a resolution or add fps, it's not true/native, and is fake.

If it does DirectStorage, that's great, but it doesn't make what I stated wrong.
 

Gaiff

SBI’s Resident Gaslighter
If a technique is used to simulate a resolution or add fps, it's not true/native, and is fake.
That's almost every game on PS5? FW and ZD both use checkerboard rendering. FF XVI uses FSR, and the list goes on.

Checkerboard is even faker than DLSS since, you know, it's worse.
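
For anyone unfamiliar with how CB works, here's a toy illustration of the idea (a sketch only, not Guerrilla's actual pipeline):

Code:
import numpy as np

# each frame shades only half the pixels in a checker pattern; the other half
# is filled from the previous frame's result, and the pattern alternates.
h, w = 4, 8
y, x = np.indices((h, w))
shaded_this_frame = (x + y) % 2 == 0   # which pixels the GPU renders now

prev_frame = np.zeros((h, w))          # last frame's reconstructed image
new_samples = np.random.rand(h, w)     # this frame's freshly shaded pixels

reconstructed = np.where(shaded_this_frame, new_samples, prev_frame)
print(reconstructed)

Real implementations typically also use motion vectors and ID buffers to reject stale pixels, which is where the artifacts come from.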
 

S0ULZB0URNE

Member
He's mocking DLSS upscaling and frame generation, yet if/when similar technologies come to the PS5 Pro, watch him change his tune on them completely.
I didn't put it down or downplay it.

Constantly mentioning artificial fps increases and AI-upscaled resolutions as if they were native is disingenuous, no matter how you and the circle-jerk crew spin it.

I own multiple 4000-series cards, including a 4090, because I am a fan of this technology.

Keep trying though.
 

yamaci17

Member


f5l3JjA.png


can you give me any sensible explanation for why the 16 gb 4060ti performs 1.5x faster than the 8 gb version?



exact same GPU chip. same GPU core count, same frequency, same bus, same bandwidth, same cache, same TFLOPS, same architecture, literally the same GPU

if you are not going to make any effort to understand how the VRAM bottleneck plays into performance in ratchet and spider-man, just say so and I will not bother you anymore, because this is getting really tiring.
 
that has nothing to do with the ps5 getting close to a 3070 or 3080. it has to do with the 3070 and 3080 getting downgraded performance due to heavy VRAM bottlenecks in Nixxes ports.

it is like claiming the 4060ti magically performs like the 4070

Kep4BGR.png


what you say would be true on paper, but it's untrue in practice. as you can see here, due to extreme vram pressure, the 4070 and the 4060ti look like they perform the same.

yet when you take the vram limitation out of the equation, you get the expected performance

jmO9S8o.png



what part of this is so hard to understand that I've had to repeat it at least 10 times now? how much clearer can I be? I don't understand. I even gave examples between GPUs, showing an actual outlier with an actual example in the actual game you've been talking about.

PS5 is not getting magic-sauce better GPU performance than a 3070 or 3080. if you believe that, then you'd have to believe the 4060ti gets that over the 4070 too, which is nonsensical. the way their engine handles things greatly lowers your GPU-bound performance when you're VRAM bound, which is where the 3070 and 3080 sit (7 and 9 GB DXGI budgets respectively).

if the 16 gb 4060ti performs 30% faster than the ps5 in these games, your logic falls on its face, because it is generally accepted that 4060ti = 3070 in 99% of cases. and by virtue of their actual GPU chips, it is true.


4vzQHus.png



as I said, you're confusing things.
I don't think it's just vram pressure, though it certainly plays a big part. Always appreciate the informative posts; you could easily take a negative tone, but you're always respectful, unlike others on this site. Thank you.
 
This is the game that will bring the 4080 and 4090 cards to their knees. Time to give y'all that Sony port special since you want that "greatness" in ya face so bad. :messenger_sunglasses:

Just make sure you let those shaders compile first before you lose your marbles over how good it looks. I know Alan Wake 2 looks good, but once this game gets modded with path tracing... Cyberpunk who?

Horizon Zero Dawn Ps4 GIF by PlayStation
 

Gaiff

SBI’s Resident Gaslighter
can you give me any sensible explanation for why the 16 gb 4060ti performs 1.5x faster than the 8 gb version?



exact same GPU chip. same GPU core count, same frequency, same bus, same bandwidth, same cache, same TFLOPS, same architecture, literally the same GPU

if you are not going to make any effort to understand how the VRAM bottleneck plays into performance in ratchet and spider-man, just say so and I will not bother you anymore, because this is getting really tiring.
It's not an exact comparison. The DRS on PS5 is more aggressive than on PC. Nixxes told Alex that on PS5, the DRS has certain levels it immediately drops to when it cannot maintain the target frame rate. For instance, if the target frame rate is 60fps and it's at 1440p but drops to 57fps, it'll immediately lower to the next level, which might be 1080p. On PC, it's more granular and will always attempt to maintain the highest resolution even if the target frame rate isn't met.

DRS on PS5 functions differently than it does on PC. DRS on PS5 works from a list of pre-configured resolutions to choose from, with limits on the top and bottom res and coarse adjustments (e.g. 1440p down to 1296p). PC DRS is free-floating and fine-grained. If you turn on IGTI with DRS set to 60, it will essentially max out your GPU at the highest and most fine-grained res possible.

F21z-uqWQAAY3H3


The resolution on the PC side is clearly higher.
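
In pseudocode terms, the difference described above looks something like this (illustrative numbers only, not Nixxes' actual logic):

Code:
PS5_RUNGS = [1440, 1296, 1080]  # pre-configured resolution levels

def drs_ps5(current_p, fps, target=60.0):
    # misses the target -> drop straight to the next pre-configured rung
    if fps < target:
        lower = [p for p in PS5_RUNGS if p < current_p]
        return lower[0] if lower else current_p
    return current_p

def drs_pc(current_p, fps, target=60.0):
    # free-floating: trim or grow the resolution a few percent at a time
    scale = 0.97 if fps < target else 1.01
    return max(720, min(1440, int(current_p * scale)))

print(drs_ps5(1440, 57))  # 1296 -- a whole rung down
print(drs_pc(1440, 57))   # 1396 -- a slight trim

That's why the PC side can hold a higher average resolution in like-for-like scenes.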
 
Last edited:

yamaci17

Member
i think you quoted the wrong person (i personally don't take Nxgamer videos seriously, as he knows that VRAM causes extreme performance bottlenecks on 8 GB cards and actively abuses this fact to proclaim the ps5 is punching above its weight, resulting in uninformed users claiming the ps5 is now approaching the 3080, apparently). of course he won't do the same comparisons with a 16 gb 4060ti or a 12 gb 3060 (the cheaper option) because that would defeat the whole purpose (his claims)
 
Last edited:

Topher

Gold Member
If a technique is done to simulate a resolution or add fps it's not true/native and is fake.

If it does DirectStorage that's great but doesn't make what I stated wrong.

Sure, it is not true native. I think everyone understands that. I don't think "fake" is the right way to characterize it, though. That sounds extremely derogatory.

I didn't say you were wrong about anything I/O-related. You made a point about it being left out, so I was making sure you were aware that the PC version has similar tech.

He's mocking DLSS upscaling and frame generation, yet if/when similar technologies come to the PS5 Pro, watch him change his tune on them completely.

Current consoles use upscaling tech as well though.
 

Gaiff

SBI’s Resident Gaslighter
i think you quoted the wrong person (i personally don't take Nxgamer videos seriously, as he knows that VRAM causes extreme performance bottlenecks on 8 GB cards and actively abuses this fact to proclaim the ps5 is punching above its weight, resulting in uninformed users claiming the ps5 is now approaching the 3080, apparently)
No, I quoted you correctly. You were addressing some of the videos playsave3 posted, and some of them were from Nxgamer, so assuming you watched those, I was just telling you that his comparisons aren't even apples to apples.
 

yamaci17

Member
I don’t think it’s just vram pressure though it certainly plays a big part always appreciate the informative posts you could easily take a negative tone but your always respectful unlike others on this site thank you
it is purely vram pressure. these are the exact same GPU chips.

in the old days, when you ran out of vram, in most cases the game would tank to single digits. nowadays you either get reduced but playable performance by some percentage, or you get lower-quality textures (depending on the engine, the severity of this will vary)

ulIDIcF.png



the problem is that you take that "reduced but playable" performance as the actual performance of the chip, compare it against the PS5, and come to wrong conclusions.

now tell me: the 4 gb 6500xt drops to 44 fps, but the exact same chip with 8 GB of vram renders 64 fps in the same scene. will you claim that the 6500xt chip is only capable of 44 FPS and treat it as such, when you know it performs much better when not under vram pressure?
 

Zathalus

Member
This is the game that will bring the 4080 and 4090 cards to their knees. Time to give yall that Sony port special since want that "greatness" in ya face so bad. :messenger_sunglasses:

Just make sure you let those shaders compile first before you lose your marbles over how good it looks. I know Alan Wake 2 look good, but once this game gets moded with path tracing.....Cyberpunk who?

Horizon Zero Dawn Ps4 GIF by PlayStation
Looking at the requirements, it doesn't seem like either a 4080 or a 4090 will be brought to its knees, unless over 4K/60 counts as that.

As for wanting the game 'so bad', while I personally mostly enjoyed it, it is going up against Dragon's Dogma 2, so I doubt it will light the sales charts on fire.
 
Last edited:

yamaci17

Member
No, I quoted you correctly. You were addressing some of the videos playsave3 posted, and some of them were from Nxgamer, so assuming you watched those, I was just telling you that his comparisons aren't even apples to apples.
I get you now. I did watch those videos a while back, so I know their contents, and I know they're not apples to apples. but even if they were apples to apples, sadly, due to how NIXXES handles their ports, you would still get weird performance penalties under VRAM limitation (which is the point i'm trying to get across to playsave3; otherwise he will keep claiming the ps5 is approaching the 3080 and is on its way to conquer the 3090 or something lol). how can the ps5 come close to a 3080 while still being slower than a 16 gb 4060ti?

this is why I personally hate Nixxes ports. they don't even inform users that they will get heavy performance penalties with very high textures + ray tracing (a scenario that nx gamer abuses to destroy GPU-bound performance on 8 GB cards that are otherwise competent at ray tracing).
 
it is purely vram pressure. these are the exact same GPU chips.

in the old days, when you ran out of vram, in most cases the game would tank to single digits. nowadays you either get reduced but playable performance by some percentage, or you get lower-quality textures (depending on the engine, the severity of this will vary)

ulIDIcF.png



the problem is that you take that "reduced but playable" performance as the actual performance of the chip, compare it against the PS5, and come to wrong conclusions.

now tell me: the 4 gb 6500xt drops to 44 fps, but the exact same chip with 8 GB of vram renders 64 fps in the same scene. will you claim that the 6500xt chip is only capable of 44 FPS and treat it as such, when you know it performs much better when not under vram pressure?
The reason I don't think it's just vram pressure is that there are some scenes in the game where 16 GB+ cards still fall below 50, though not by as much. It's probably a mix of issues depending on the scene.
 
Then I wouldn't hold my breath for that. If the game is rife with shitty textures, it has little to do with the hardware and more to do with the time and resources it would take to rework every texture to look good. My guess is the highest texture preset on PC will be identical to the one on the PS5. Unless, of course, they add some super textures that require 16GB+ of VRAM, but I don't think that's happening.
Would love for them to add super textures, especially because they could get backported to the Pro.
 

yamaci17

Member
The reason I don't think it's just vram pressure is that there are some scenes in the game where 16 GB+ cards still fall below 50, though not by as much. It's probably a mix of issues depending on the scene.
these are maxed-out ultra settings on top of maxed-out ray tracing settings, which also include ray-traced shadows and ambient occlusion that are not present on PS5... and even then those options are maxed out too (not high; very high is used)

of course it will drop below 50. if you used ps5-equivalent ray tracing settings, that 16 gb 4060ti would easily push upwards of 70 FPS.

there are no direct 4060ti 16 gb vs ps5 comparisons at ps5-equivalent performance mode settings. if you find one where the ps5 punches above the 4060ti, please send it to me (spoiler alert: you can't, because I searched a lot and most people crank everything to ultra for testing).

the 4060ti 16 gb can literally render native 1440p at 45-50 fps with ray-traced reflections + shadows + ambient occlusion + maxed-out raster settings. that alone should prove to you that the 4060ti punches way above the PS5. the ps5 at those settings would probably crumble to <30 FPS due to the lack of proper ray tracing performance on its GPU.
 
Last edited:

S0ULZB0URNE

Member
Sure, it is not true native. I think everyone understands that. I don't think "fake" is the right way to characterize it, though. That sounds extremely derogatory.

I didn't say you were wrong about anything I/O-related. You made a point about it being left out, so I was making sure you were aware that the PC version has similar tech.



Current consoles use upscaling tech as well though.
What does "not real" mean, then?
I get to the point with my content. Why sugar-coat?

I wasn't saying you were wrong about the DirectStorage; I wasn't aware it had it, and that's good news.

I never said anything about console upscaling... certain posters run to it and mention it often, yet conveniently forget that DLSS isn't native either.
 

Gaiff

SBI’s Resident Gaslighter
I get you now. I did watch those videos a while back, so I know their contents, and I know they're not apples to apples. but even if they were apples to apples, sadly, due to how NIXXES handles their ports, you would still get weird performance penalties under VRAM limitation (which is the point i'm trying to get across to playsave3; otherwise he will keep claiming the ps5 is approaching the 3080 and is on its way to conquer the 3090 or something lol). how can the ps5 come close to a 3080 while still being slower than a 16 gb 4060ti?

this is why I personally hate Nixxes ports. they don't even inform users that they will get heavy performance penalties with very high textures + ray tracing (a scenario that nx gamer abuses to destroy GPU-bound performance on 8 GB cards that are otherwise competent at ray tracing).
Yeah, and this game is weird because, at least for a few weeks after release, AF hit VRAM really hard. Usually, 16x AF on PC is basically free, but in this game it tanked the performance of cards with less than 12GB of VRAM. Even the 2080 Ti could struggle with Very High textures + 16x AF. The recommendation was to reduce AF to 8x or textures to High. I can't recall the last time anyone with a card released in the past decade had to forgo AF.

It also thrashed the PCIe bus and, if I remember correctly, even 3.0 x16 sometimes got bottlenecked and you needed 4.0 x16, which is insane. Not sure if this was ever fixed.

On Beyond3D, you had guys benchmarking the game and the 2080 Ti outperformed the 3080, very likely due to VRAM limitations on the part of the 3080.
 
Last edited:

yamaci17

Member
Yeah, and this game is weird because, at least for a few weeks after release, AF hit VRAM really hard. Usually, 16x AF on PC is basically free, but in this game it tanked the performance of cards with less than 12GB of VRAM. Even the 2080 Ti could struggle with Very High textures + 16x AF. The recommendation was to reduce AF to 8x or textures to High. I can't recall the last time anyone with a card released in the past decade had to forgo AF.

It also thrashed the PCIe bus and, if I remember correctly, even 3.0 x16 sometimes got bottlenecked and you needed 4.0 x16, which is insane. Not sure if this was ever fixed.

On Beyond3D, you had guys benchmarking the game and the 2080 Ti outperformed the 3080, very likely due to VRAM limitations on the part of the 3080.
it is not really fixed; it still heavily relies on PCI-e even to this day (source: i played it 2 weeks ago on PC). it always used upwards of 2.5-3 GB of shared vram with very high textures. high textures bring huge performance improvements, but they are very inconsistent in quality

it also gets a smaller dxgi budget due to its own weird structure (other games often use 7.2 gb of dxgi budget on my 8 gb card, but this one did not want to go above 6.7 GB for some reason, which makes things even worse)

regardless, games like avatar and alan wake 2 restored my personal faith in 8 GB cards. these games do proper texture streaming without thrashing PCI-e, so i cannot take whatever ratchet and spider-man are doing seriously. i did look at 4060ti 8 gb vs 16 gb videos in avatar, and even though the 16 gb card uses more vram, both cards perform the same there and most textures look the same. you would literally have to look closely to notice differences at far distances. meanwhile, ratchet asks you to throw away half the performance you have. it is just bad design, and somehow they congratulate themselves for it in a GDC talk.
 
Last edited:

Bojji

Member
The reason I don't think it's just vram pressure is that there are some scenes in the game where 16 GB+ cards still fall below 50, though not by as much. It's probably a mix of issues depending on the scene.

The PS5 has constant DRS in every mode, no? With that, I don't think it can be directly compared to PC GPUs.

That's why DF didn't do any comparisons; they only do them with games where they can match settings.
 

Gaiff

SBI’s Resident Gaslighter
it is not really fixed; it still heavily relies on PCI-e even to this day (source: i played it 2 weeks ago on PC). it always used upwards of 2.5-3 GB of shared vram with very high textures. high textures bring huge performance improvements, but they are very inconsistent in quality

it also gets a smaller dxgi budget due to its own weird structure (other games often use 7.2 gb of dxgi budget on my 8 gb card, but this one did not want to go above 6.7 GB for some reason, which makes things even worse)

regardless, games like avatar and alan wake 2 restored my personal faith in 8 GB cards. these games do proper texture streaming without thrashing PCI-e, so i cannot take whatever ratchet and spider-man are doing seriously. i did look at 4060ti 8 gb vs 16 gb videos in avatar, and even though the 16 gb card uses more vram, both cards perform the same there and most textures look the same. you would literally have to look closely to notice differences at far distances. meanwhile, ratchet asks you to throw away half the performance you have. it is just bad design, and somehow they congratulate themselves for it in a GDC talk.
Agreed. Personally not a big fan of the hard-on some people have for Nixxes. I think it's nice that their games scale well above PS5 in terms of visuals (Spider-Man and Rift Apart can look quite a bit better with RT) but their performance profiles leave a lot to be desired, especially for people on older systems or lower-specced machines.

I guess these are PS5 games at heart though, so it's not like Insomniac worried about 8GB of VRAM or slower PCIe buses when making them.
 

yamaci17

Member
The PS5 has constant DRS in every mode, no? With that, I don't think it can be directly compared to PC GPUs.

That's why DF didn't do any comparisons; they only do them with games where they can match settings.
eh, even then



this stupid game engine reduces its performance by another 50% if you happen to have software that uses A BIT of vram in the background. LOL.

all other games will just stream in some lower-detail textures in the distance to accommodate the loss of an extra 200-300 mb of vram (background tasks). this game somehow gets its performance reduced by a large margin if you have programs that use 300 mb of extra vram in the background. whatever they're doing is super inefficient, and it is understandable that Digital Foundry refrained from direct comparisons. their ports literally cannot handle dynamic vram loads within your system.

all of this happens while the dxgi budget is not even fully utilized by the game. if this happens in horizon forbidden west, the exact same discussion we're having will be repeated
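
if you want to see how much vram your background apps are already holding before a game launches, nvidia's nvidia-ml-py bindings can show it (note: this reads raw adapter usage, not the game's dxgi budget, so treat it as a rough proxy):

Code:
# pip install nvidia-ml-py
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)   # first GPU
mem = pynvml.nvmlDeviceGetMemoryInfo(handle)

gib = 1024 ** 3
print(f"total vram:         {mem.total / gib:.1f} GiB")
print(f"used by other apps: {mem.used / gib:.1f} GiB")
print(f"left for the game:  {mem.free / gib:.1f} GiB")
pynvml.nvmlShutdown()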
 
Last edited:

Bojji

Member
eh, even then



this stupid game engine reduces its performance by another 50% if you happen to have software that uses A BIT of vram in the background. LOL.

all other games will just stream in some lower-detail textures in the distance to accommodate the loss of an extra 200-300 mb of vram (background tasks). this game somehow gets its performance reduced by a large margin if you have programs that use 300 mb of extra vram in the background. whatever they're doing is super inefficient, and it is understandable that Digital Foundry refrained from direct comparisons. their ports literally cannot handle dynamic vram loads within your system.


Lol, it's comical with that VRAM scaling. But Alex didn't compare any GPU to the PS5 because it's impossible to do fairly.
 

TrebleShot

Member
Can't wait to crank this WITH DLDSR. Such a shame you can't do a custom 21:9 res with it at the same time, so I'll have to pick between ultrawide and DLDSR, but I'll def try both.
 

Besides Phantom Liberty, nothing comes close to HFW in graphical fidelity, it's just facts. You may post linear trash like AW2 or FF, but they're all hot garbage compared to the scale and detail HFW brings. It's the queen of graphics, with CP2077 being the king, regardless of platform.

EDIT: as for your Avatar vid, right, I forgot about that game because, well, Ubisoft. I haven't played it, but it does indeed look insane. I guess third place then, or close enough.
 
Last edited: