
PS5 Pro Specs Leak Is Real, Releasing Holiday 2024 (Insider Gaming)

Gaiff

SBI’s Resident Gaslighter
I love how the narrative has shifted to the CPU bound games somehow not being able to take advantage of everything else the ps5 pro has on offer

Talk about a completely disingenuous argument
It will take advantage of everything the Pro has to offer but 30fps only isn’t off the table. There probably will be a Performance Mode but GTA VI will no doubt be very CPU-intensive.

I said a few months back that the consoles CPUs are good enough for 60fps in GTA VI but I’m not so sure anymore.
 

solidus12

Member
are they aware of their lies and bad faith or not?

FEXJVfY.jpg
 

yamaci17

Member
It will take advantage of everything the Pro has to offer but 30fps only isn’t off the table. There probably will be a Performance Mode but GTA VI will no doubt be very CPU-intensive.

I said a few months back that the consoles CPUs are good enough for 60fps in GTA VI but I’m not so sure anymore.
rockstar will always push the limits. there's no way they will target 60 fps as a baseline for a Zen 2 CPU.

at least we can expect that they will make it worthwhile though. my personal problem with dragon's dogma 2 and jedi survivor is that they're not making it worthwhile. 30 fps cpu bound performance is worthwhile as long as the game has impressive stuff. gta 6 surely will have some impressive stuff if the trailer is any indication. draw distance, npc density, drawcalls, details, geometric density are off the charts. even on PC, I don't expect anything short of a 5800X3D to hit 60 fps in that title. it will be a brutal wake up call for everyone involved.
 

yamaci17

Member
It will take advantage of everything the Pro has to offer but 30fps only isn’t off the table. There probably will be a Performance Mode but GTA VI will no doubt be very CPU-intensive.

I said a few months back that the consoles CPUs are good enough for 60fps in GTA VI but I’m not so sure anymore.
it is not like I'm arguing that the ps5 pro is useless or something. it is just that it will be useless for people whose expectation is that it will fix games with bad performance.

it will be like the xbox one x. the one x has nearly 3x-3.5x the GPU power of the xbox one, yet you have to play rdr 2 at 30 fps on both. one renders at 864p and the other at 4K. it would be possible to play rdr 2 at 1080p-1200p 60 fps on the xbox one x, if it had a decent CPU. Series S is proof of that.
 
So I read the article again and it says:

The PlayStation 5 Pro PSSR currently supports 3840×2160 and is currently aiming for 4K 60 FPS and 8K 30FPS, but it’s unclear if those internal milestones can be passed.

So let's say worst case scenario they can easily do:

1440p->PSSR->1800p->classic upscale to 2160p
 
yeah I'm being disingenuous for wanting a better, balanced product for console folks. enjoy your new glorified Xbox One X then, if you really want me to be disingenuous.

people get mad over someone pairing a midrange cpu with a highend GPU. here ps5 pro is pairing a super lowend CPU with a decent upper midrange GPU. you just can't defend that. this mentality is the reason consoles are being kept to 30 FPS time and time again

Yeah well console players don’t want a $1500 machine, if we did we’d get a PC

There are only a small handful of titles that don’t have a 60 fps mode, and that’s down to dev incompetence rather than some technical limitation of the CPU.

Game logic of GTA6 isn’t going to be some radical departure from what they have running on Jaguar cores

Take your fake concern somewhere else
 

Bojji

Member
spiderman is just well optimized but it is an exception to the norm. no need to dismiss the game regardless, it just proves that running a BVH structure at 60+ fps cpu bound is possible. but it was only possible because base spiderman already targeted a perfectly frame-paced 30 fps on 1.6 GHz jaguar cores.

the amount, type or quality of ray tracing effects does not have a big impact on CPU performance. you need the very same BVH structure no matter what you're going to do with ray tracing. it is a fixed cost you pay regardless of what you run. you could build and update the ray tracing BVH structure while no ray tracing effects are present and you would still be hit with the CPU cost

please stop talking about things you don't even understand
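
To put the "fixed cost" point in concrete terms, here is a minimal sketch with made-up numbers (nothing from any real engine, just the shape of the argument):

```python
# Hypothetical per-frame CPU costs, purely for illustration; real engines differ.
BVH_UPDATE_MS = 4.0            # BVH refit/build on the CPU, paid once per frame
DISPATCH_MS_PER_EFFECT = 0.2   # small extra CPU dispatch overhead per RT effect

def cpu_rt_cost_ms(num_effects: int) -> float:
    """Per-frame CPU cost of RT: a fixed BVH cost plus a tiny per-effect overhead."""
    if num_effects == 0:
        return 0.0
    return BVH_UPDATE_MS + DISPATCH_MS_PER_EFFECT * num_effects

for n in (1, 3):  # e.g. reflections only vs. reflections + shadows + GI
    print(f"{n} RT effect(s): ~{cpu_rt_cost_ms(n):.1f} ms of CPU time per frame")
```

The benchmarks below show that pattern: the CPU-bound framerate barely moves as effects are added.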


just reflections

67 fps cpu bound
p3E9cnL.png


just shadows 68 fps cpu bound
Hj3f8Aj.png


path tracing 63 fps cpu bound (a mere 6% drop)
jFYFfDI.png


reflections + shadows 68 fps cpu bound

fKE565K.png


reflections + shadows + gi 67 fps cpu bound

wITRgsC.png




no ray tracing 87 fps cpu bound (1.29x cpu bound performance hit from ray tracing)

Lr8lJcZ.png


all values are within margin except path tracing (which is 6% more cpu heavy compared to shadows+gi+reflections)



actually spiderman miles morales has a bigger hit on CPU with ray tracing on PC:

TVT0mOB.png

hZXlZGw.png


115 to 80 fps (a 1.43x hit)

reason spiderman is able to hit 60+ fps with ray tracing is because it is less cpu bound at its baseline than cyberpunk. and funny thing is, spiderman's BVH cpu cost is much heavier than cyberpunk which defeats your logic entirely

In most games it works exactly like that: they have an almost fixed CPU rendering cost no matter how many RT effects are active, but this is not always the case. In Hitman, RT shadows had almost no impact but reflections were a different story:

34OmrIZ.png


Replying to other posters, I don't doubt that the cost of ray tracing on the PS5's CPU can be different; the console is using a totally different implementation of RT compared to DX12.

Even when some games performed better on PS5 than on Series X with RT on, many people were speculating that this could be thanks to a lighter CPU load from that RT.
 
So I read the article again and it says:

The PlayStation 5 Pro PSSR currently supports 3840×2160 and is currently aiming for 4K 60 FPS and 8K 30FPS, but it’s unclear if those internal milestones can be passed.

So let's say worst case scenario they can easily do:

1440p->PSSR->1800p->classic upscale to 2160p
Hopefully not. PSSR upscaling should always output at 4K because it's always better to upscale only once using a custom implementation. What they can do is reduce the quality of the upscaling (but I doubt many games will need to render at less than 1080p), and that will still be better than two consecutive upscales. With CBR it was different because the reconstruction can only double the resolution, but PSSR will be much more flexible here, like DLSS is.
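
For reference, the scale factors involved, as back-of-the-envelope arithmetic (my own numbers, no claims about how PSSR is actually implemented):

```python
# Per-axis scale factors for the resolutions being discussed.
def per_axis_factor(src_height: int, dst_height: int) -> float:
    """Upscale factor per axis between two 16:9 vertical resolutions."""
    return dst_height / src_height

# One ML upscale straight to the output resolution:
print(per_axis_factor(1440, 2160))   # 1.5x per axis (the DLSS "Quality" ratio)

# The quoted 1440p -> 1800p -> 2160p chain, i.e. two consecutive upscales:
print(per_axis_factor(1440, 1800))   # 1.25x ML upscale...
print(per_axis_factor(1800, 2160))   # ...then another 1.2x generic stretch on top
```

Checkerboard, by contrast, roughly only ever doubles the pixel count (e.g. 1920×2160 reconstructed to 3840×2160), which is the fixed ratio a DLSS-style upscaler isn't bound to.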
 

RoadHazard

Gold Member
At least these are more realistic.

The PlayStation 5 Pro PSSR currently supports 3840×2160 and is currently aiming for 4K 60 FPS and 8K 30FPS, but it’s unclear if those internal milestones can be passed.

Still don't think we're getting there unless they use something equivalent to DLSS Performance and run the internal res at 1080p and upscale it to 4K.

That's exactly what it's said to be doing. 2x scale on each axis.
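
In pixel terms (plain arithmetic, nothing console-specific), 2x on each axis works out to 4x the pixels, which is the DLSS Performance ratio:

```python
# 2x per axis is 4x the pixels, i.e. the DLSS Performance ratio (1080p -> 4K).
internal, output = (1920, 1080), (3840, 2160)
print(output[0] / internal[0], output[1] / internal[1])        # 2.0 per axis
print((output[0] * output[1]) / (internal[0] * internal[1]))   # 4.0x pixel count
```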
 

SlimySnake

Flashless at the Golden Globes
No no no no and no.

Why make this complicated. They can take 1440p and PSSR that to 1800p if they want. There is nothing stopping them from doing that. It does not have to be PSSR to 4K.
If the internal res was native 1440p, that is equivalent to 4k dlss quality. You would get a 4k image, not a 1800p image.
They said, the target is to match the IQ of quality mode. In this example, that just happens to be 1800p in the game being used.

How you match that with an internal rez of 1440p, is to then use PSSR to reconstruct that back up to 1800p. Get it?

They also say to match the framerate of performance mode, which runs internally at 1080p. Dropping the rez of fidelity mode from 1800p to 1440p will give them the GPU headroom to then run it at a higher framerate than what the OG fidelity mode could manage. And then using PSSR, which is AI accelerated on the Pro, allows them to reconstruct that up with the lowest possible frametime cost. It would at least cost significantly less than FSR.
you can interpret it that way if that's what you want. I am seeing that they used PSSR in front of the 1440p figure, and specifically stated the IMAGE QUALITY target being equivalent to 1800p. NOT the resolution. They are NOT attempting to improve upon the image quality of the resolution mode, which is 1800p and thus worse than 4k. If their internal res was indeed 1440p, their image quality would be BETTER than 1800p.

Again, I would be inclined to believe that they were able to take a 1080p game to 1440p if there was a 75% increase in raw GPU performance. But we know the increase is only 45% and even if we take the best case scenario and assume the raw increase is 65%, the math still doesn't add up because you are still 10% short of the 75% increase needed to get to 1440p natively. On top of that, upscaling has a cost of its own, so you would have needed 100% more GPU power to take a 1080p image and take it to 1440p internal resolution to get a 4k dlss quality image.
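
The pixel counts behind that argument, spelled out (standard 16:9 resolutions; the ratios are mine, not from the leak):

```python
# Pixel counts for the standard 16:9 resolutions being argued over here.
res = {"1080p": 1920 * 1080, "1440p": 2560 * 1440, "1800p": 3200 * 1800, "4K": 3840 * 2160}

print(res["1440p"] / res["1080p"])  # ~1.78x -> the roughly "75%" jump from 1080p to 1440p
print(res["1800p"] / res["1080p"])  # ~2.78x -> fidelity-mode pixel load relative to 1080p
print(res["4K"] / res["1440p"])     # 2.25x  -> what an upscaler still has to make up
```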


Game 1
Target – image quality close to Fidelity Mode (1800p) with Performance Mode FPS (60 FPS)

Standard PlayStation 5 –


  • Performance Mode – 1080p at 60FPS
  • Fidelity Mode – 1800p at 30FPS
PlayStation 5 Pro –

  • 1440p at 60FPS (PSSR used)
 

yamaci17

Member
In most games it works exactly like that: they have an almost fixed CPU rendering cost no matter how many RT effects are active, but this is not always the case. In Hitman, RT shadows had almost no impact but reflections were a different story:

34OmrIZ.png


Replying to other posters, I don't doubt that the cost of ray tracing on the PS5's CPU can be different; the console is using a totally different implementation of RT compared to DX12.

Even when some games performed better on PS5 than on Series X with RT on, many people were speculating that this could be thanks to a lighter CPU load from that RT.
thanks for clarification. this one seems like a very odd case indeed.

Though I have a theory as to why that happens. I remember that reducing reflections quality reduced the CPU boundness of that setting quite a lot, by reducing the amount of reflected objects. with ray traced reflections, drawcall load is also increased since the game now has to render certain things twice.

this is why you can also manipulate the cpu boundness of ray tracing in spiderman by reducing or increasing ray tracing object distance. it seems like hitman is rendering ray traced objects twice while most other games don't (?). I'm not technically in the know about this though. just a guess.

i'm sure cyberpunk is doing things differently because it reflects much more stuff (and in more detail) than hitman and spiderman and still has insane cpu bound efficiency. something tells me spiderman and hitman trace rays to find the places where reflections should be, and then render the things there, while cyberpunk must be drawing the "picture" of what the reflection should be instead of rendering things in the reflection

though this is just a guess.
 
Last edited:
If the internal res was native 1440p, that is equivalent to 4k dlss quality. You would get a 4k image, not a 1800p image.

But that's not what they wrote...

Their PSSR target is 2160p/60 fps not 1440p/60

What you are suggesting is that PSSR tops out at 1440p and then it's traditionally upscaled to 2160p by the game code or by the console itself

Doesn't seem like the target they have
 

Jigsaah

Gold Member
From what limited knowledge I have, actually comparing to the PS5, it's really half the 33 TF number

Besides, too many people are missing the main thing in all this, even if it was 100 TFs

  • Rendering 45% faster than PS5
So why mention the TF number at all if it's the rendering that's most important? Buzzwords?
 

SlimySnake

Flashless at the Golden Globes
Far apart enough where you should see a decent difference if really cpu limited, as shown below.

bwvbbnu.jpg
0lDWwun.jpg

I am losing brain cells here Chief. It shows an 11% increase going from one CPU to the other. Your own benchmark shows CPUs going from 68 to 92 fps, and the 5800X3D isn't even the best one; the 12900K goes all the way up to 130 fps, as I showed earlier. That is what a CPU benchmark looks like. Please educate yourself. This is getting extremely frustrating.
No you don't. It's the same CPU. That's the mistake I'm referring to that DF made.

e0aumKp.jpg
DF made no mistake here. They took the same CPU and GPU and turned off RT in a "CPU" limited scene where the GPU was not being fully utilized, and gained 45 fps. That shows that RT has an impact on the CPU. It is the most basic test you will ever see. You don't need to be a genius to figure this out. But you do need to understand PC benchmarking, which I'm hoping you learn by taking your time instead of simply posting more nonsense over and over again. Just take a deep breath and watch other benchmarks and it will begin to make sense eventually. I'm not going to waste my time trying to make you understand the basics of CPU and GPU benchmarking, sorry.
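
The logic of that kind of test, written out as a sketch (not DF's actual methodology; the utilisation figure is illustrative and the fps numbers are the approximate ones quoted above):

```python
# Sketch of the bottleneck reasoning being argued about; thresholds are arbitrary.
gpu_utilisation_pct = 80          # "the GPU was not being fully utilized" (illustrative)
fps_rt_on = 65
fps_rt_off = fps_rt_on + 45       # "gained 45 fps" with RT off, same CPU and GPU

gpu_bound = gpu_utilisation_pct >= 95
print("frame-rate hit from RT:", round(fps_rt_off / fps_rt_on, 2))   # ~1.69x
print("bottleneck:", "GPU" if gpu_bound else "CPU (so the RT hit is landing on the CPU)")
```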
 

Mr.Phoenix

Member
If the internal res was native 1440p, that is equivalent to 4k dlss quality. You would get a 4k image, not a 1800p image.
Dude, they can set the internal and output resolutions to anything they want them to be.

You really saying you don't know this?

Hell, the internal render resolution can even be constantly in flux using DRS.
you can interpret it that way if that's what you want. I am seeing that they used PSSR in front of the 1440p figure, and specifically stated the IMAGE QUALITY target being equivalent to 1800p. NOT the resolution. They are NOT attempting to improve upon the image quality of the resolution mode, which is 1800p and thus worse than 4k. If their internal res was indeed 1440p, their image quality would be BETTER than 1800p.
You can see it as you want to then. But that makes no sense. That means you are suggesting that they are running internally at under 1440p and using PSSR to get it to 1440p, then generic upscaling that to 4K. That makes absolutely no sense.
Again, I would be inclined to believe that they were able to take a 1080p game to 1440p if there was a 75% increase in raw GPU performance. But we know the increase is only 45% and even if we take the best case scenario and assume the raw increase is 65%, the math still doesn't add up because you are still 10% short of the 75% increase needed to get to 1440p natively. On top of that, upscaling has a cost of its own, so you would have needed 100% more GPU power to take a 1080p image and take it to 1440p internal resolution to get a 4k dlss quality image.
I don't get how or why you are looking at it this way, regardless of the examples I have tried giving you. This is confirmation bias at its worst I guess.

Again...

PS5 10TF quality mode = 1800p@30fps (that's 5.7M rendered pixels)

PS5pro PSSR mode = 1440p@ up to 60fps. The console that has 45% more power (as per your example), is running the same quality mode at just over half the pixel load.

Think about that. If they were to run the 10TF OG PS5 at 1440p instead of 1800p, what framerate do you think they would hit? 40fps? 45fps? Now imagine them running that on the PS5 Pro and at 1440p. Then they use PSSR to scale that to the higher "target" out rez, in this case that's 1800p.

But somehow, your theory is that they run it internally at 1080p and use the AI hardware accelerated PSSR to get it up to 1440p? So they can get 60fps? You don't see how that doesn't make sense?
 
But somehow, your theory is that they run it internally at 1080p and use the AI hardware accelerated PSSR to get it up to 1440p? So they can get 60fps? You don't see how that doesn't make sense?

He just wants to find the worst case narrative to complain about.

It makes no sense for it to be interpreted any other way than native 1440p60 upscale to 4K60 using PSSR which approximates 1800p30 quality mode but with 60fps

So we can see from this how good the upscale results are; 1440p to 1800p. That’s a really good result
 
Last edited:

shamoomoo

Member
So it's half? So why are they saying 33 and 67 in the OP post?
That's the difference between FP32 and FP16; RDNA3 can dual-issue certain operations to double the TFLOPS performance.


Here is some info if you are interested.
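
For anyone wondering where numbers like 33 and 67 even come from, here's the back-of-the-envelope version (the ALU count and clock are placeholders in the rumored ballpark, not confirmed specs; the point is where the doubling comes from):

```python
# Back-of-the-envelope TFLOPS math with placeholder figures (~3840 shader ALUs at
# ~2.18 GHz); only the doubling steps matter, not the exact clocks.
alus, clock_ghz = 3840, 2.18

fp32 = 2 * alus * clock_ghz / 1000   # an FMA counts as 2 ops -> ~16.7 TF
fp32_dual = 2 * fp32                 # RDNA 3 dual-issue headline figure -> ~33 TF
fp16 = 2 * fp32_dual                 # packed FP16 doubles again -> ~67 TF

print(round(fp32, 1), round(fp32_dual, 1), round(fp16, 1))
```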

 
Last edited:

SlimySnake

Flashless at the Golden Globes
You really saying you don't know this?

Hell, the internal render resolution can even be constantly in flux using DRS.

You can see it as you want to then. But that makes no sense. That means you are suggesting that they are running internally at under 1440p and using PSSR to get it to 1440p, then generic upscaling that to 4K. That makes absolutely no sense.


I don't get how or why you are looking at it this way, regardless of the examples I have tried giving you. This is confirmation bias at its worst I guess.

Again...

PS5 10TF quality mode = 1800p@30fps (that's 5.7M rendered pixels)

PS5pro PSSR mode = 1440p@ up to 60fps. The console that has 45% more power (as per your example), is running the same quality mode at just over half the pixel load.

Think about that. If they were to run the 10TF OG PS5 at 1440p instead of 1800p, what framerate do you think they would hit? 40fps? 45fps? Now imagine them running that on the PS5 Pro and at 1440p. Then they use PSSR to scale that to the higher "target" out rez, in this case that's 1800p.

But somehow, your theory is that they run it internally at 1080p and use the AI hardware accelerated PSSR to get it up to 1440p? So they can get 60fps? You don't see how that doesn't make sense?
You are looking at this backwards. Don't worry about what they are doing with their 1800p 30 fps mode. That mode is its own thing. The 60 fps mode has different CPU requirements, different VRAM bottlenecks, different GPU requirements. What they are telling you is that on the base PS5, that mode runs at 1080p 60 fps. That means 2.1 million pixels. For whatever reason, doubling the framerate cost them 3.5 million pixels or 2.66x the GPU performance. In a perfect world, reducing resolution by half to 2.8 million pixels would've got them 60 fps, but that didn't happen and they had to settle for a much lower amount due to some kind of bottleneck in the PS5 hardware that is resulting in these discrepancies.

We have seen this in many games that run at native 4K 30 fps then drop all the way down to 1080p, sacrificing 4x resolution for 2x more frames, including the latest Spider-Man. I've been guilty of this myself because on PC if I reduce resolution by half I get 2x the frames, but console games don't scale like that. We just had Skull and Bones release with a 720p 60 fps mode while the fidelity mode ran at native 4K 30 fps. In theory, for this unannounced 1800p 30 fps game, yes, 45% extra GPU power gets them from 5.6 million pixels to 8.1 million pixels, divide that by half and you get 4 million pixels, which is closer to 1440p. But it doesn't work like that in reality, at least not on consoles.

You have to start at the 1080p 60 fps mode and apply the 1.45x multiplier there. You either get 3.04 million pixels at 60 fps or 87 fps. Let's go with increasing the resolution, so you are at 3 million pixels instead of the 3.7 million pixels needed to get native 1440p. So right, you don't have enough to hit 1440p native, let alone use the extra GPU resources required to handle the upscaling. DLSS and FSR2 both have a cost on the GPU. So yes, my theory is that the majority of the 45% extra power is being used by the PSSR tech to get to a 1440p output resolution using 1440p DLSS Quality. I had initially assumed that they would be using that extra GPU power to target 4K DLSS Performance, which also has a 1080p base, but that's not what THEIR goal is here. In THEIR OWN words, the goal is to get 1800p image fidelity. If the goal was to hit native 4K image fidelity then yes, I would've assumed that the 1440p figure there was PRE-PSSR upscaling, because 4K DLSS Quality uses a base of 1440p and is widely considered either on par with or superior to native 4K, let alone 1800p.
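
Spelling out that arithmetic (the 1.45x figure is the rumored uplift being assumed in this thread, and linear pixel scaling is the simplification being argued about):

```python
# The scaling argument above with the numbers written out. It assumes GPU cost
# scales roughly with pixel count, which (as noted) console games don't really do.
perf_mode_pixels = 1920 * 1080        # ~2.07M, base PS5 performance mode at 60 fps
native_1440p     = 2560 * 1440        # ~3.69M

pro_budget = perf_mode_pixels * 1.45  # ~3.0M pixels at 60 fps with 45% more GPU
print(round(pro_budget / native_1440p, 2))  # ~0.82 -> short of native 1440p, before
                                            # the upscaler's own GPU cost is even paid
```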
 

ChiefDada

Gold Member
I am losing brain cells here Chief. It shows an 11% increase going from one CPU to the other. Your own benchmark shows CPUs going from 68 to 92 fps, and the 5800X3D isn't even the best one; the 12900K goes all the way up to 130 fps, as I showed earlier. That is what a CPU benchmark looks like. Please educate yourself. This is getting extremely frustrating.

The feeling is mutual.

DF made no mistake here. They took the same CPU and GPU and turned off RT in a "CPU" limited scene where the GPU was not being fully utilized, and gained 45 fps. That shows that RT has an impact on the CPU.

Of course it does! But in classic Slimy fashion you are interpreting incorrectly and attributing the performance difference from RT off to High RT as purely CPU bound. You're WRONG! Guess what? The CPU was already "limited" when RT mode was off and pushing only 100fps. Otherwise it would have been hitting FPS illustrated in GPU benchmark test below that used 5800X3D as CPU.

In this test using 5800X3D: 178 FPS RT Off vs 124 FPS RT ON = 30% Performance Loss.
In the DF Test using 11400F: 100 FPS RT Off vs 65 FPS RT On =35% Performance Loss

IT IS THE GPU!!!!!!!!!!!!!!!!!!!!!!!!!!!! It doesn't matter that the GPU isn't at full utilization with RT off. When RT is turned on, the GPU will have increased utilization and in this case is the primary cause for performance dip.
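
Those two comparisons as plain arithmetic (same numbers as above, nothing new):

```python
# Relative performance loss from enabling RT, per the two figures quoted above.
def rt_loss(fps_off: float, fps_on: float) -> float:
    """Fraction of performance lost when RT is enabled."""
    return 1 - fps_on / fps_off

print(round(rt_loss(178, 124), 2))  # ~0.30 -> ~30% on the 5800X3D bench
print(round(rt_loss(100, 65), 2))   # 0.35  -> ~35% in the DF test on the 11400F
```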

q7Nic5Q.jpg


1CHsqmH.jpg
 

simpatico

Member
Mofos really about to be $1,100 invested into a way to play PS4 games at a higher res and frame rate? We don’t even have a true first party classic yet this Gen.
 
Last edited:

SlimySnake

Flashless at the Golden Globes
So it's half? So why are they saying 33 and 67 in the OP post?
because it looks better? AMD knew what they were doing with this. Nvidia started this whole mess with the 30 series. They were claiming the 3080 was 35 tflops but in reality it was equivalent to a 20 tflops 6800xt. The 4090 has 63 tflops but is only 3.5x more powerful than the PS5 so in reality its more of a 35 tflops card.

AMD with the 7000 series then decided to do the same thing knowing full well it would not translate into any meaningful performance gains. They released a 52 tflops gpu that was basically a 26 tflops gpu and acted like a 26 tflops gpu compared to their 23 tflops 6900xt.

the 67 figure is even more nonsense. the PS5 would be 20 tflops by the same metric. All of this is simply a dick measuring contest where people are inflating the cock size by 100% to get girls into bed. thankfully, nerds like us are too smart to fall for this shit because we have seen the 7000 series benchmarks.
 
Last edited:

DenchDeckard

Moderated wildly
because it looks better? AMD knew what they were doing with this. Nvidia started this whole mess with the 30 series. They were claiming the 3080 was 35 tflops but in reality it was equivalent to a 20 tflops 6800xt. The 4090 has 63 tflops but is only 3.5x more powerful than the PS5 so in reality its more of a 35 tflops card.

AMD with the 7000 series then decided to do the same thing knowing full well it would not translate into any meaningful performance gains. They released a 52 tflops gpu that was basically a 26 tflops gpu and acted like a 26 tflops gpu compared to their 23 tflops 6900xt.

the 67 figure is even more nonsense. the PS5 would be 20 tflops by the same metric. All of this is simply a dick measuring contest where people are inflating the cock size by 100% to get girls into bed. thankfully, nerds like us are too smart to fall for this shit because we have seen the 7000 series benchmarks.

I think plenty will fall for it still.

BUT, if it all looks good on screen and if the frame gen tech is decent.....I'm like 4 metres from my 65" TV....I'll take a fake 60FPS if it still feels responsive enough over 30FPS on my OLED. So I am looking forward to how the comparisons come out.
 

SlimySnake

Flashless at the Golden Globes
The feeling is mutual.



Of course it does! But in classic Slimy fashion you are interpreting incorrectly and attributing the performance difference from RT off to High RT as purely CPU bound. You're WRONG! Guess what? The CPU was already "limited" when RT mode was off and pushing only 100fps. Otherwise it would have been hitting FPS illustrated in GPU benchmark test below that used 5800X3D as CPU.

In this test using 5800X3D: 178 FPS RT Off vs 124 FPS RT ON = 30% Performance Loss.
In the DF Test using 11400F: 100 FPS RT Off vs 65 FPS RT On =35% Performance Loss

IT IS THE GPU!!!!!!!!!!!!!!!!!!!!!!!!!!!! It doesn't matter that the GPU isn't at full utilization with RT off. When RT is turned on, the GPU will have increased utilization and in this case is the primary cause for performance dip.
Brother, and I say this with nothing but love, please learn to read charts better.

In this test using 5800X3D: 178 FPS RT Off vs 124 FPS RT ON = 30% Performance Loss.
In the DF Test using 11400F: 100 FPS RT Off vs 65 FPS RT On =35% Performance Loss
This right here shows you that it's a CPU bottleneck. The same GPU is giving you 124 FPS with RT on with one CPU while literally dropping to 65 FPS with the other CPU. If that is not a CPU bottleneck, I don't know what is.

If you don't believe me? Fine. Feel free to create a new thread, post this result, and let others tell you if it's CPU or GPU related.
 
Those that want to play GTA6 and DS2 at the best fidelity on consoles, along with every game they play on ps5?

This isn’t rocket science
Graphical fidelity and LOD isn't what the majority of people here have had issues/concerns with since the generation started. It's the want for the same level of fidelity we get in high quality modes while simultaneously running at a locked 60fps. I think most would have been happier if the GPU had gotten a lesser upgrade while the CPU received an overhaul.
 

Mr.Phoenix

Member
You are looking at this backwards. Don't worry about what they are doing with their 1800p 30 fps mode. That mode is its own thing. The 60 fps mode has different CPU requirements, different VRAM bottlenecks, different GPU requirements. What they are telling you is that on the base PS5, that mode runs at 1080p 60 fps. That means 2.1 million pixels. For whatever reason, doubling the framerate cost them 3.5 million pixels or 2.66x the GPU performance. In a perfect world, reducing resolution by half to 2.8 million pixels would've got them 60 fps, but that didn't happen and they had to settle for a much lower amount due to some kind of bottleneck in the PS5 hardware that is resulting in these discrepancies.

We have seen this in many games that run at native 4K 30 fps then drop all the way down to 1080p, sacrificing 4x resolution for 2x more frames, including the latest Spider-Man. I've been guilty of this myself because on PC if I reduce resolution by half I get 2x the frames, but console games don't scale like that. We just had Skull and Bones release with a 720p 60 fps mode while the fidelity mode ran at native 4K 30 fps. In theory, for this unannounced 1800p 30 fps game, yes, 45% extra GPU power gets them from 5.6 million pixels to 8.1 million pixels, divide that by half and you get 4 million pixels, which is closer to 1440p. But it doesn't work like that in reality, at least not on consoles.

You have to start at the 1080p 60 fps mode and apply the 1.45x multiplier there. You either get 3.04 million pixels at 60 fps or 87 fps. Let's go with increasing the resolution, so you are at 3 million pixels instead of the 3.7 million pixels needed to get native 1440p. So right, you don't have enough to hit 1440p native, let alone use the extra GPU resources required to handle the upscaling. DLSS and FSR2 both have a cost on the GPU. So yes, my theory is that the majority of the 45% extra power is being used by the PSSR tech to get to a 1440p output resolution using 1440p DLSS Quality. I had initially assumed that they would be using that extra GPU power to target 4K DLSS Performance, which also has a 1080p base, but that's not what THEIR goal is here. In THEIR OWN words, the goal is to get 1800p image fidelity. If the goal was to hit native 4K image fidelity then yes, I would've assumed that the 1440p figure there was PRE-PSSR upscaling, because 4K DLSS Quality uses a base of 1440p and is widely considered either on par with or superior to native 4K, let alone 1800p.
You are just not looking at this entire thing holistically.

The simple explanation to your conundrum is memory bandwidth. That is the literal explanation of why they would have to go down to 1080p to get 60fps on the current PS5. 448GB/s of combined bandwidth simply is not enough to allow for 1440p and 60fps.

The PS5 Pro does not have that problem. It has 576GB/s of bandwidth. And I don't know why you insist on looking at the current fidelity mode as a separate thing when they have clearly used both that and the performance mode to show what their targets are.

Their targets are to match the IQ of fidelity mode and match the performance of performance mode. The only logical way that is happening is using 1440p internally and PSSRing that to 1800p/4K (whatever) while running at up to 60fps.

But let me indulge your approach. You say start from 1080p. The first mistake you are making is assuming that even on the OG PS5, that 1080p 60fps mode is running with zero overhead. Which is impossible: to hit and maintain a 60fps output, your game has to be able to natively average around 70fps, and/or scale rez dynamically when you cannot average that 70fps. Now the PS5 Pro has 1.45x faster rendering and more bandwidth. So even if you are not getting the game to exactly 1440p, you are at least in the ballpark. And AI-accelerated reconstruction would be far less costly than traditional FSR2 or any other non-AI-assisted reconstruction tech... we should think so, or else what was the point of investing in the hardware to do it. So my take is the cost of the reconstruction wouldn't be significantly more than just doing TAA.

Point is, even if you look at a best or worst-case scenario, it still makes more sense that the PSSR mode, which is an amalgamation of the fidelity and performance modes, is taking the internal render rez as close to 1440p as they can to allow them to hit 40fps with PSSR, which would give you IQ on par with running 1800p native. And even if that means they are averaging an internal rez of 1280p-1440p dynamically, but reconstructing that to 4K using PSSR... DLSS has shown us that that would still give IQ very close to native 4K. Or in this case, probably on par with 1800p.
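
That "average ~70 fps to lock 60" point in frame-time terms (a rule of thumb, not anything from the leak):

```python
# Rule-of-thumb frame-time headroom behind the "average ~70 fps to lock 60" claim.
target_ms  = 1000 / 60    # 16.7 ms budget per frame for a locked 60
average_ms = 1000 / 70    # ~14.3 ms if the game actually averages 70 fps

print(round(target_ms - average_ms, 1), "ms of slack per frame to absorb spikes")
```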
 

yamaci17

Member
This guy has no clue. Still continues with the "half the resolution, double the performance" gimmick.

it has nothing to do with how consoles operate. you don't get linear performance improvements from resolution. most games will also be GPU heavy even at 1080p due to geometry and many other aspects. not to mention upscaling is extremely heavy.


Modern GPU performance will scale worse and worse as you go down the resolution. A game's GPU load does not come from resolution alone.

and DLSS/FSR upscaling is heavy:





native 1080p = 2.1 million pixels
1080p dlss/fsr quality = 0.9 million pixels

perf difference = 1.6x at best, 1.3x at worst

y5UQ8nL.png


you're free to explain why the 3060 gets a 1.3x framerate improvement with a more than 2x pixel reduction. have a good day.
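
His own example in numbers (the 2/3-per-axis Quality ratio is standard for DLSS/FSR; the fps ratios are the ones he cites from the charts):

```python
# The non-linear scaling example in numbers. DLSS/FSR "Quality" renders at 2/3 of
# the output resolution per axis, so 1080p output means a 1280x720 internal frame.
native_1080p     = 1920 * 1080   # ~2.07M pixels
quality_internal = 1280 * 720    # ~0.92M pixels

print(round(native_1080p / quality_internal, 2))  # 2.25x fewer pixels rendered,
# yet only ~1.3x-1.6x more fps in the charts above, because geometry, simulation
# and the upscale pass itself don't shrink with resolution.
```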
 
Last edited:
Graphical fidelity and LOD isn't what the majority of people here have had issues/concerns with since the generation started. It's the want for the same level of fidelity we get in high quality modes while simultaneously running at a locked 60fps. I think most would have been happier if the GPU had gotten a lesser upgrade while the CPU received an overhaul.

That’s still a function of fidelity

You want greater fidelity at the ubiquitous 60fps modes that exist on the vast majority of games
 
You are looking at this backwards. Dont worry about what they are doing with their 1800p 30 fps mode. That mode is its own thing. The 60 fps mode has different CPU requirements, differnt vram bottlenecks, different GPU requirements. What they are telling you is that on the base PS5, that mode runs at 1080p 60 fps. That means 2.1 million pixels. For whatever reason, doubling the framerates cost them 3.5 million pixels or 2.66x the GPU performance. In a perfect world, reducing resolution by half to 2.8 million pixels wouldve got them 60 fps but that didnt happen and they had to settle for a much lower amount due to some kind of bottleneck in the PS5 hardware that is resulting in these discrepancies.

We have seen this in many games that run at native 4k 30 fps then drop all the way down to 1080p sacrificing 4x resolution for 2x more frames including the latest spiderman. Ive been guilty of this myself because on PC if i reduce resolution by half i get 2x the frames, but console games dont scale like that. We just had skull and bones release with a 720p 60 fps mode while the fidelity mode ran at native 4k 30 fps. In theory, for this unannounced 1800p 30 fps game, yes, 45% extra GPU power gets them from 5.6 million pixels to 8.1 million pixels, divide it by half and you get 4 million pixels which is closer to 1440p. But it doesnt work like that in reality, at least not on consoles.

You have to start at the 1080p 60 fps mode and apply the 1.45x multiplier there. You either get 3.04 million pixels at 60 fps or 87 fps. Lets go with increasing the resolution so you are at 3 million pixels instead of the 3.7 million pixels needed to get native 1440p. So right, you dont have enough to hit 1440p native. let alone use the extra GPU resources required to handle the upscaling. DLSS and FSR2 both have a cost on the GPU. So yes, my theory is that the majority of the 45% extra power is being used by the PSSR tech to get to 1440p output resolution using 1440p dlss quality. I had initially assumed that they would be using that extra GPU power to target 4k dlss performance which also has a 1080p base but thats not what THEIR goal is here. In THEIR OWN words, the goal is to get 1800p image fidelity. If the goal was to hit native 4k image fidelity then yes, I wouldve assumed that the 1440p figure there was PRE-PSSR upscaling because 4k dlss quality uses a base of 1440p and is widely considered either on par or superior to native 4k let alone 1800p.

1800p30 is the target on base PS5.

Why isn’t 1440p60 achievable with the PS5 Pro?
 

Quantum253

Member
This can't be 599 without the disk drive...

I need to see this running games but it looks like the money is on AI upscaling to make up the lacking specs on the cpu. Memory is welcomed.

I was hoping for 18 to 20 tf of raster performance with the ai upscaling/frame gen.

If it's priced at what I feel is correct, which I feel should be £579 with a disk drive, I'll grab one. If not, my standard will do, as I haven't touched it since Spider-Man 2.
I was thinking about that, and I think if Sony wants to keep the price down, they might sell the Pro without the disc drive and controller. Factoring in that the target audience would already have a controller or two, they'd only purchase the drive if wanted. The initial adopters would then have to purchase the disc drive and controller. But I don't think a console has launched without a controller before.
 

paolo11

Member
You are just not looking at this entire thing holistically.

The simple explanation to your conundrum is memory bandwidth. That is the literal explanation of why they would have to go down to 1080p to get 60fps on the current PS5. 448GB/s of combined bandwidth simply is not enough to allow for 1440p and 60fps.

The PS5 Pro does not have that problem. It has 576GB/s of bandwidth. And I don't know why you insist on looking at the current fidelity mode as a separate thing when they have clearly used both that and the performance mode to show what their targets are.

Their targets are to match the IQ of fidelity mode and match the performance of performance mode. The only logical way that is happening is using 1440p internally and PSSRing that to 1800p/4K (whatever) while running at up to 60fps.

But let me indulge your approach. You say start from 1080p. The first mistake you are making is assuming that even on the OG PS5, that 1080p 60fps mode is running with zero overhead. Which is impossible: to hit and maintain a 60fps output, your game has to be able to natively average around 70fps, and/or scale rez dynamically when you cannot average that 70fps. Now the PS5 Pro has 1.45x faster rendering and more bandwidth. So even if you are not getting the game to exactly 1440p, you are at least in the ballpark. And AI-accelerated reconstruction would be far less costly than traditional FSR2 or any other non-AI-assisted reconstruction tech... we should think so, or else what was the point of investing in the hardware to do it. So my take is the cost of the reconstruction wouldn't be significantly more than just doing TAA.

Point is, even if you look at a best or worst-case scenario, it still makes more sense that the PSSR mode, which is an amalgamation of the fidelity and performance modes, is taking the internal render rez as close to 1440p as they can to allow them to hit 40fps with PSSR, which would give you IQ on par with running 1800p native. And even if that means they are averaging an internal rez of 1280p-1440p dynamically, but reconstructing that to 4K using PSSR... DLSS has shown us that that would still give IQ very close to native 4K. Or in this case, probably on par with 1800p.

Question: instead of quality mode going to 60fps, will they make performance mode have better IQ and resolution while maintaining 60fps thanks to PSSR?
 
Rockstar utilized the Xbox One X, the least popular console SKU last generation, to its maximum, managing to reach a native 4K resolution in Red Dead Redemption 2, a Sony-sponsored title.

... and you doubt that they're going to take full advantage of the PS5 Pro?
XBOX One X had already been out for a year by that stage and all it got was a resolution bump.

GTA V got nothing.
 
Last edited:

Perrott

Gold Member
XBOX One X had already been out for a year by that stage and all it got was a resolution bump.
"All it got was a resolution bump"... do you realize that they took the best looking title of all time up to that point and made it run at native 4K on 2017 console hardware? They reached the golden standard of what an Xbox One X could aspire to.

They'll do the same for the Pro, not only because Rockstar are perfectionists and will always shoot for the stars, but because Sony will insist on it.
GTA V got nothing.
Grand Theft Auto V released years before the enhanced consoles, so unlike with any new releases, they weren't forced to take advantage of the newest console SKUs.

I don't think you understand that Grand Theft Auto VI won't be able to release on PlayStation 5 if it doesn't take any bespoke advantage of the Pro, per the terms and conditions associated with publishing new games after the release of these mid-gen refreshes.
 