The port is fine for the most part. The PS5 does outperform a 3070, but not a 2080 Ti, and that's because the latter has an extra 3 GB of VRAM to play with (11 GB vs. 8 GB).
Also, even at 1080p, all those RT effects put a substantial amount of pressure on VRAM. The PS5 only uses RT reflections, at roughly the High setting, in its Performance Mode.
Because you're running the game at much higher settings but a lower resolution. You'd get better frame time stability running at PS5 settings and 1440p with RT reflections than at 1080p max settings with all RT effects: the former uses less VRAM despite the higher resolution, because the maxed-out settings and extra RT effects are what eat the memory.
This is what this port amounts to: it is heavily PCIe-limited even at low textures/low settings/no ray tracing.
Seriously, look at the PCIe bandwidth usage; no other game does this. This is at 1080p/DLSS with low settings + low textures + no ray tracing.
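If you want to sample that number yourself instead of eyeballing an overlay, here's a rough sketch of one way to do it on an NVIDIA card via NVML, which exposes PCIe RX/TX throughput counters. This is my own illustration, not anything shipped with the game or with any monitoring tool:

```cpp
// Minimal NVML sketch: read PCIe throughput for the first GPU.
// Build against the NVIDIA driver's NVML library (nvml.lib / libnvidia-ml).
#include <nvml.h>
#include <cstdio>

int main() {
    if (nvmlInit_v2() != NVML_SUCCESS) return 1;

    nvmlDevice_t gpu;
    if (nvmlDeviceGetHandleByIndex_v2(0, &gpu) != NVML_SUCCESS) return 1;

    // NVML reports throughput in KB/s, sampled over a short interval.
    unsigned int txKBs = 0, rxKBs = 0;
    nvmlDeviceGetPcieThroughput(gpu, NVML_PCIE_UTIL_TX_BYTES, &txKBs);
    nvmlDeviceGetPcieThroughput(gpu, NVML_PCIE_UTIL_RX_BYTES, &rxKBs);

    printf("PCIe TX %.2f GB/s, RX %.2f GB/s\n", txKBs / 1e6, rxKBs / 1e6);

    nvmlShutdown();
    return 0;
}
```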
There's 2.6 GB of empty, usable DXGI budget on the GPU, but the game still uses PCIe and shared VRAM anyway; it's simply unavoidable. The exact same thing happens with TLOU. Notice how the framerate tanks, with GPU usage maxed out, when PCIe usage climbs to 11-12 GB/s. Performance comes back to normal at 7-8 GB/s, but even 7-8 GB/s is excessive and limits the GPU anyway.
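For reference, the "DXGI budget" figure comes from a query any D3D app or monitoring tool can make. A minimal sketch, assuming the first enumerated adapter is the discrete GPU: the LOCAL segment is dedicated VRAM, NON_LOCAL is the shared system-memory pool reached over PCIe, and the 2.6 GB of headroom is just Budget minus CurrentUsage on the LOCAL segment:

```cpp
// Query per-adapter memory budgets via IDXGIAdapter3::QueryVideoMemoryInfo.
// Link with dxgi.lib.
#include <dxgi1_4.h>
#include <cstdio>
#pragma comment(lib, "dxgi.lib")

int main() {
    IDXGIFactory4* factory = nullptr;
    if (FAILED(CreateDXGIFactory1(__uuidof(IDXGIFactory4), (void**)&factory))) return 1;

    IDXGIAdapter1* adapter = nullptr;
    if (FAILED(factory->EnumAdapters1(0, &adapter))) return 1;   // first GPU

    IDXGIAdapter3* adapter3 = nullptr;
    if (FAILED(adapter->QueryInterface(__uuidof(IDXGIAdapter3), (void**)&adapter3))) return 1;

    DXGI_QUERY_VIDEO_MEMORY_INFO local = {}, shared = {};
    adapter3->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &local);      // dedicated VRAM
    adapter3->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_NON_LOCAL, &shared); // shared RAM over PCIe

    printf("VRAM   budget %.2f GB, in use %.2f GB, headroom %.2f GB\n",
           local.Budget / 1e9, local.CurrentUsage / 1e9,
           (local.Budget - local.CurrentUsage) / 1e9);
    printf("Shared budget %.2f GB, in use %.2f GB\n",
           shared.Budget / 1e9, shared.CurrentUsage / 1e9);

    adapter3->Release(); adapter->Release(); factory->Release();
    return 0;
}
```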
Idiotic port. There's no excuse for it to still transfer data over PCIe with 2.6 GB of VRAM sitting empty. It is literally hard-coded to do this, and there really is no avoiding it: even if you lower settings drastically and cut VRAM requirements immensely, the game refuses to let go of the PCIe/shared VRAM usage.
The reason it's inexcusable is that Avatar: Frontiers of Pandora and Alan Wake 2 proved you can have texture streaming without relying on shared VRAM and stalling on PCIe bandwidth. If those games can do it, so should Ratchet and The Last of Us.
It simply makes no sense to rely on shared VRAM this much when other engines can work within a buffer space and stream textures without causing GPU performance stalls. Best case, you get texture downgrades at far distances that you will most likely never notice; worst case, you get a noticeable texture quality reduction, but without a big hit to GPU performance either way, and you can at least recover texture quality by lowering other settings or the resolution.
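To make the "buffer space" point concrete, here's a hedged sketch of the kind of policy I mean. It is not code from Avatar, Alan Wake 2, or any Insomniac/Nixxes engine, and every name and number in it (SAFETY_BUFFER_BYTES, pickMipDrop, the 12-level mip cap) is made up for illustration. The idea is simply: when a streaming request would eat into the VRAM headroom, drop top mip levels instead of placing the texture in shared memory.

```cpp
// Illustrative-only streaming heuristic: keep a safety buffer inside dedicated
// VRAM and, once it would be exhausted, satisfy new requests at reduced mips
// rather than spilling into shared system memory over PCIe.
#include <cstdint>
#include <cstdio>

constexpr uint64_t SAFETY_BUFFER_BYTES = 512ull << 20;  // headroom kept free in VRAM (hypothetical)

// Bytes needed for a texture if the top 'dropMips' levels are never streamed in.
// Each dropped mip level roughly quarters the cost of the level above it.
uint64_t mipChainBytes(uint64_t fullResBytes, int dropMips) {
    uint64_t bytes = 0;
    for (int level = dropMips; level < 12; ++level)
        bytes += fullResBytes >> (2 * level);
    return bytes;
}

// Pick how many top mips to drop so the request still fits in local VRAM.
// A higher number means blurrier textures, but no spill to shared memory.
int pickMipDrop(uint64_t vramBudget, uint64_t vramInUse, uint64_t fullResBytes) {
    for (int drop = 0; drop < 4; ++drop) {
        uint64_t needed = mipChainBytes(fullResBytes, drop);
        if (vramInUse + needed + SAFETY_BUFFER_BYTES <= vramBudget)
            return drop;                       // fits with headroom to spare
    }
    return 4;                                  // worst case: heavily reduced mips
}

int main() {
    // Example: 7.6 GB budget, 7.0 GB already resident, 256 MB texture request.
    int drop = pickMipDrop(7600ull << 20, 7000ull << 20, 256ull << 20);
    printf("stream texture with %d top mip level(s) dropped\n", drop);
    return 0;
}
```

The trade-off is exactly the one described above: worst case you see blurrier distant textures, but the GPU never has to pull texture data across PCIe mid-frame.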
With Nixxes ports there is simply no winning: by design, they always use a fair amount of shared VRAM. So you can dial in settings that only need about 4.5 GB of VRAM (1080p, no ray tracing, low textures, DLSS) and still take the VRAM-bound hit; you end up with a GPU-bound performance loss and subpar textures at the same time.
In most other engines, you get better texture quality without such a hit on the GPU. This is especially apparent in games like Jedi: Survivor (I know, funny), Avatar and Alan Wake 2. I played Jedi: Survivor at 4K/DLSS Quality and was quite VRAM-limited; the game was streaming textures all the time, but I never got a GPU-bound performance hit. The only noticeable texture quality downgrade was at very far distances, mountains especially, and even then it looked very subtle, like a LOD change. Alan Wake 2 and Avatar prove that you can manage VRAM far better than whatever Nixxes is doing.