
Evolution of Graphics Technology

Welcome to the thread! I hope you are ready to do a lot of searching and posting of evidence to prove your point.

I've owned every single console to date except these latest ones. Yes, I've played ALL of the AAA games on each one and have a thorough understanding of what's going on under the hood. That's why I mentioned all of the graphics techniques in my OP. We will dissect them one by one if we have to.



The argument isn't whether the hardware is maxed out or whether there will be NO improvements at all. Those aren't the claims! Please read the claims, bud!

The BIG question is not whether developers will get time to optimize for a particular piece of hardware, but whether that will translate to A SIGNIFICANT LEAP in visuals from its predecessor. If you say YES, then state WHAT was improved upon significantly! It will take much more than one improved feature to make a dramatic difference in overall visuals. Cyberpunk is proof of that claim I just made.

Let's take one of our favorite companies, GG.

KZ:SF:



Deferred lighting system
SSR are abundant
Mie scattering environment lighting
PBR shaders
Normal maps
Conventional smoke that's colored like in Cyberpunk today
SSAO on rocks and debris


How about Horizon on PS4:



Same Mie scattering of environment lighting (a quick sketch of the idea is included below)
DIFFERENT Clouds!! - They look much better
PBR shaders in environment - same!
Conventional smoke - same
SSAO on rocks and debris - same
Shadows - same
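Side note for anyone wondering what the "Mie scattering" line item actually boils down to in practice: real-time engines almost never solve Mie scattering directly; they usually approximate its strongly forward-scattering lobe with a Henyey-Greenstein phase function. A minimal sketch of that idea (the function name and the g value here are just illustrative):

[CODE]
#include <cmath>

// Henyey-Greenstein phase function, the usual cheap stand-in for Mie
// scattering in real-time fog/atmosphere shading.
// cosTheta: cosine of the angle between the view and light directions.
// g in (-1, 1): 0 = isotropic, ~0.76 = strongly forward-scattered haze.
float HenyeyGreenstein(float cosTheta, float g)
{
    const float pi = 3.14159265358979f;
    float g2 = g * g;
    float denom = 1.0f + g2 - 2.0f * g * cosTheta;
    return (1.0f - g2) / (4.0f * pi * std::pow(denom, 1.5f));
}

// The in-scattered light along a view ray is then weighted by this term, e.g.
// scatter = lightColor * HenyeyGreenstein(dot(viewDir, lightDir), 0.76f);
[/CODE]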




I back up my statements with concrete proof. You do the same and you can ridicule to your heart's content.

Man, your example of Shadow Fall and Horizon.
In Horizon there are more polygons and textures and denser vegetation, as well as more enhanced lighting and a dynamic time of day. Not to mention the open world. Horizon shows the 3x improvement you are trying to deny.
 

regawdless

Banned
Thanks for making this thread! Also, very interesting points so far. The point I was making in other threads regarding this discussion was the following one:

Devs had a whole generation to learn and optimize for last gen consoles. Now the next gen consoles build on that and aim to be even easier to develop for. They have very similar development environments, and devs are already familiar with the systems. That makes development more efficient, lets devs achieve better results earlier in the lifecycle, and flattens the learning curve over the generation.
Which causes me to think this gen will be the one with the smallest gains compared to all previous console generations. Games will look amazing and there will be improvements, of course. I don't deny that games will look better; I think they will peak in the next three years. Mid gen refreshes might bring new life though.

But I have a hard time agreeing with people saying there will be vast and significant improvements on these base versions. At least from my current understanding, I don't come to that conclusion.

Correct me please if I'm wrong here.
 

Lethal01

Member
Thanks for making this thread! Also, very interesting points so far. The point I was making in other threads regarding this discussion was the following one:

Devs had a whole generation to learn and optimize for last gen consoles. Now the next gen consoles build on that and aim to be even easier to develop for. They have very similar development environments, and devs are already familiar with the systems. That makes development more efficient, lets devs achieve better results earlier in the lifecycle, and flattens the learning curve over the generation.
Which causes me to think this gen will be the one with the smallest gains compared to all previous console generations. Games will look amazing and there will be improvements, of course. I don't deny that games will look better; I think they will peak in the next three years. Mid gen refreshes might bring new life though.

But I have a hard time agreeing with people saying there will be vast and significant improvements on these base versions. At least from my current understanding, I don't come to that conclusion.

Correct me please if I'm wrong here.

Devs have been continually saying that they are just scratching the surface of what the PS5 can do.
This adds weight to Mark Cerny's statement that they aimed to "balance innovation with revolution this gen", saying that there is more to be gained from digging deep into mastering the customizations of this hardware (geometry engine, slightly customized RT pipeline, I/O storage architecture) than there was for the PS4.

They clearly stated their intention to both make PS5 easier to get started with while also giving more benefit for mastering than PS4.

While nowhere near the level of early PS1 to late PS1, there was still a clear increase in graphical prowess from early PS4 to late PS4.
The evidence suggests we can expect a bigger jump when it comes to early PS5 vs. late PS5.

Or at least, it does when you are speaking with true Video Game Graphic Veterans.
 

VFXVeteran

Banned
Cool? Didn't say there was.


So essentially how it looks to the eye. So I can say for example this looks better than anything in Cyberpunk.



Sure some cinematics turn up everything to the max.


Looks good, I don't think anybody said it didn't.


Sure, let's.

Here is an HZ:FW cinematic screenshot. It shows a marked improvement over the screenshot you posted earlier, doesn't it? Yet you were here saying Horizon Zero Dawn on PC would look better than Forbidden West on PS5.
6IZtd49.png



Looks good as well. I don't recall anyone saying it was the best. I do recall some people saying it can look plasticky at times but there are moments where it looks really good. Can't be arsed to find screenshots.


There is nothing there that is impressive to the point where I would say it looks better than

This.
50628085231_c1fc2917a5_o.png


Or this
rFyaaT0.jpg


Or this
E9u12nd.jpg


Your arguments are flawed. I gave complete and utter GAMEPLAY videos. You are giving me cinematics which I've already said don't count in the rendering pipeline when running the gameloop.

NONE of the images you put up are from gameplay.
 

regawdless

Banned
Devs have been continually saying that they are just scratching the surface of what the PS5 can do.
This adds weight to Mark Cerny's statement that they aimed to "balance innovation with revolution this gen", saying that there is more to be gained from digging deep into mastering the customizations of this hardware (geometry engine, slightly customized RT pipeline, I/O storage architecture) than there was for the PS4.

They clearly stated their intention to both make PS5 easier to get started with while also giving more benefit for mastering than PS4.

While nowhere near the level of early PS1 to late PS1, there was still a clear increase in graphical prowess from early PS4 to late PS4.
The evidence suggests we can expect a bigger jump when it comes to early PS5 vs. late PS5.

Or at least, it does when you are speaking with true Video Game Graphic Veterans.

Of course devs say that. They also say that it's never been easier to get performance out of consoles.

You've mentioned some of the new buzzwords; can you help me understand what improvements we are expecting, in what areas, for which of these new techniques?

Everyone throws these words around, but let's be specific and define what we are expecting out of every new tech. Again, I'm not saying that there won't be improvements. I absolutely believe that devs will have more efficient processes and will have more time for optimization, design, artwork etc. Just not resulting in vastly improved visuals.

But what I see so far are a bunch of people being absolutely convinced that there'll be massive improvements without being able to really explain where they will come from specifically. It feels like most are just saying it because they want to feel good about their purchase. But so far, no one has been able to go into more detail and convince me with a logical explanation outside of "rofl just look at the past".

Please, someone educate me. I'm honestly curious and trying to understand the expectations.
 

VFXVeteran

Banned
Man, your example of Shadow Fall and Horizon.
In Horizon there are more polygons and textures and denser vegetation, as well as more enhanced lighting and a dynamic time of day. Not to mention the open world. Horizon shows the 3x improvement you are trying to deny.

Where are there more textures? Dense vegetation, because Horizon's setting is a forest? That doesn't prove that their graphics engine couldn't do it when they made KZ:SF. Yes, TOD is dynamic, but that doesn't make the game "look" better. It's a nice added feature. You have not shown that Horizon is a 3x graphical improvement over Shadow Fall.
 

VFXVeteran

Banned
Devs have been continually saying that they are just scratching the surface of what the PS5 can do.
This adds weight to Mark Cerny's statement that they aimed to "balance innovation with revolution this gen", saying that there is more to be gained from digging deep into mastering the customizations of this hardware (geometry engine, slightly customized RT pipeline, I/O storage architecture) than there was for the PS4.

They clearly stated their intention to both make PS5 easier to get started with while also giving more benefit for mastering than PS4.

While nowhere near the level of early PS1 to late PS1, there was still a clear increase in graphical prowess from early PS4 to late PS4.
The evidence suggests we can expect a bigger jump when it comes to early PS5 vs. late PS5.

Or at least, it does when you are speaking with true Video Game Graphic Veterans.

Where is the evidence? That's what this thread is for, to show the proof.
 
As someone mentioned, this is a straw man argument. You are claiming that because games from day 0 use the same (or similar) techniques as the games at the tail end of the generation, they are equivalent and therefore the same.

They are not. Again, as someone already pointed out, you are faced with an issue that sits at the intersection of time and employee resources. If there were no means to extract better performance over a generation, then there would be no need to update the dev tools. But that's not what happens. Every GDC you have game devs coming out and discussing new ways that an existing technique can be optimized to save cycle time.

It happens ALL. THE. TIME. That's the name of the game. Reducing cycle time, optimizing memory management, getting rid of areas in the engine that waste cycles...this doesn't happen on day 0 that a console launches. It takes time to navigate through what works well and what doesn't. And as the tools refine, so does the ability of the technical artists to easily get their vision into the game. Meaning, less time dealing with bugs and issues and more time iterating on artistic quality.

Yes, Horizon DOES look better than Killzone. It's in a huge open world setting, features gigantic robotic beasts, and does it from a third person perspective (which also adds to the resources required on screen). Guerrilla Games constantly refines their Decima engine and has even gone back and forth with Team Kojima on making it better.

This coming generation, the biggest gains will be seen from memory management thanks to the I/O and SSDs. That's not something that can be easily leveraged day 1. It will take a while for devs to retool their engines to optimize asset streaming in ways they didn't think would be possible. UE5 has already done some leg work in this area, but we will see much more of that "Nanite" look in a few years that will look dramatically better than what we have on the market today.
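To make the "retool their engines to optimize asset streaming" point a bit more concrete, here is a very rough sketch of the basic pattern involved: a background thread pulls asset requests off a queue, reads from storage, and hands finished loads back to the game loop. All names are made up for illustration; real engines add prioritization, decompression and GPU upload on top of this.

[CODE]
#include <condition_variable>
#include <fstream>
#include <iterator>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <utility>
#include <vector>

// Hypothetical sketch: gameplay code enqueues asset requests, a worker
// thread reads them from the SSD, and the render thread picks up results.
struct AssetRequest { std::string path; };
struct LoadedAsset  { std::string path; std::vector<char> bytes; };

class AssetStreamer {
public:
    AssetStreamer() : worker_(&AssetStreamer::Run, this) {}
    ~AssetStreamer() {
        { std::lock_guard<std::mutex> lock(m_); quit_ = true; }
        cv_.notify_one();
        worker_.join();
    }

    // Called from gameplay code as the camera approaches new content.
    void Request(std::string path) {
        { std::lock_guard<std::mutex> lock(m_); pending_.push({std::move(path)}); }
        cv_.notify_one();
    }

    // Called once per frame to collect whatever finished loading.
    std::vector<LoadedAsset> DrainFinished() {
        std::lock_guard<std::mutex> lock(m_);
        std::vector<LoadedAsset> out;
        out.swap(finished_);
        return out;
    }

private:
    static std::vector<char> ReadFile(const std::string& path) {
        std::ifstream f(path, std::ios::binary);
        return std::vector<char>(std::istreambuf_iterator<char>(f), {});
    }

    void Run() {
        for (;;) {
            AssetRequest req;
            {
                std::unique_lock<std::mutex> lock(m_);
                cv_.wait(lock, [&] { return quit_ || !pending_.empty(); });
                if (pending_.empty()) return;   // quit requested and nothing left
                req = std::move(pending_.front());
                pending_.pop();
            }
            LoadedAsset asset{req.path, ReadFile(req.path)};  // blocking I/O, off the game loop
            std::lock_guard<std::mutex> lock(m_);
            finished_.push_back(std::move(asset));
        }
    }

    std::mutex m_;
    std::condition_variable cv_;
    std::queue<AssetRequest> pending_;
    std::vector<LoadedAsset> finished_;
    bool quit_ = false;
    std::thread worker_;
};
[/CODE]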
 
Your arguments are flawed. I gave complete and utter GAMEPLAY videos. You are giving me cinematics which I've already said don't count in the rendering pipeline when running the gameloop.

NONE of the images you put up are from gameplay.
The problem is the PS5 has more than enough extra performance to run PS4 cinematic models in game, and that includes Death Stranding, which runs circles around any game.

Morales random npcs in game


Cyberpunk Main character npc maximum settings


Death Stranding cinematic, likely easy to match in game on ps5.
 

Aion002

Member
So.... games got significantly "prettier" within every gen, as the devs got better and better with the hardware. Then, when things can't improve anymore, a new gen arrives... Right?

faa45a9831b2a0148e0cd60615c724a7.gif


Why are you belittling that?
 
I'm not sure what the point of this is, having just skimmed it, but I definitely noticed that the improvements seen in the PS4 generation were vastly smaller than in the previous 360 generation.

On the other hand, there have always been launch/early titles that can go toe to toe with late gen software.

Seeing how the PS4 was just a more powerful version of the Xbox 360, and the PS5 is more revolutionary with its components, I expect bigger improvements than we saw last generation.
 

VFXVeteran

Banned
As someone mentioned, this is a straw man argument. You are claiming that because games from day 0 use the same (or similar) techniques as the games at the tail end of the generation, they are equivalent and therefore the same.

I'm saying several things, none of which you seem to be interpreting properly. The issue here has always been defining what you guys mean by "dramatically" different at the tail end. The issue has always been how much optimization will be had that will translate into a BIG boost in performance, thereby lending itself to more graphics features.

As someone has already shown with screenshots over the lifespan of the PS4: not much, certainly not enough to justify calling it a "dramatic" difference. Performance on the PS4 has been constant throughout its generation as well.

They are not. Again, as someone already pointed out, you are faced with an issue that sits at the intersection of time and employee resources. If there were no means to extract better performance over a generation, then there would be no need to update the dev tools. But that's not what happens. Every GDC you have game devs coming out and discussing new ways that an existing technique can be optimized to save cycle time.


Can you stop implying that I'm saying there is NO optimization to be had? You drag the argument out longer and longer by adding erroneous words to my claims so you can call yourself correcting me. Sure, there is optimization. Again, sure, there is optimization. How MUCH is needed to the point of visual change? Can YOU specify? Do you see the PS5 being able to run native 4K/60FPS in every game later in the gen? That kind of optimization?

It happens ALL. THE. TIME. That's the name of the game. Reducing cycle time, optimizing memory management, getting rid of areas in the engine that waste cycles...this doesn't happen on day 0 that a console launches. It takes time to navigate through what works well and what doesn't. And as the tools refine, so does the ability of the technical artists to easily get their vision into the game. Meaning, less time dealing with bugs and issues and more time iterating on artistic quality.

I don't disagree with anything you said. I will say, however, that you don't know how much time that is. It could be 3 years or 6 months. You also aren't considering that the PS4 and PS5 aren't that radically different and therefore wouldn't need the same amount of ramp-up time. You fail to mention that. Why?

Yes, Horizon DOES look better than Killzone. It's in a huge open world setting, features gigantic robotic beasts, and does it from a third person perspective (which also adds to the resources required on screen). Guerrilla Games constantly refines their Decima engine and has even gone back and forth with Team Kojima on making it better.

Never said it didn't look better. I said it didn't look dramatically better if you examine the tech and ignore the scope (i.e. dinos vs. soldiers and wilderness vs. enclosed areas). If you guys want to think that KZ:SF looks dramatically worse than Horizon, then others can interpret that going forward as mediocre improvement. You would say Spiderman MM looks dramatically better than Spiderman PS4, and it really doesn't. Do the Insomniac devs still need time to understand the PS5 so they can make an even bigger jump with Spiderman 2? What kind of things do you think will be in Spiderman 2 that will make it pop over Spiderman MM?

UE5 has already done some leg work in this area, but we will see much more of that "Nanite" look in a few years that will look dramatically better than what we have on the market today.

First, Epic is a 3rd party multiplatform company. They were able to get the UE5 demo up and running before the PS5 even released. That tech will be used across UE5 multiplatform games including PC/XSX. Agree? Second, you believe that the UE5 demo looks better than all PC games now, right? If so, then we don't even have to wait, because you already think every other game out looks dramatically worse even now, based solely on geometry/texturing.
 

VFXVeteran

Banned
The problem is the PS5 has more than enough extra performance to run PS4 cinematic models in game, and that includes Death Stranding, which runs circles around any game.

Death Stranding cinematic, likely easy to match in game on ps5.

Omega Supreme Holopsicon if you post another comparison of cinematics vs. gameplay, I'm going to beg the mods to kick you out of my thread. How many times do I have to tell you guys to stop using cinematics as a means of comparing gameplay? It's COMPLETELY unfair.

There isn't a single gameplay character you can show that has better detail than the main characters in Cyberpunk.
 

Lethal01

Member
Of course devs say that. They also say that it's never been easier to get performance out of consoles.

You've mentioned some of the new buzzwords; can you help me understand what improvements we are expecting, in what areas, for which of these new techniques?

Everyone throws these words around, but let's be specific and define what we are expecting out of every new tech. Again, I'm not saying that there won't be improvements. I absolutely believe that devs will have more efficient processes and will have more time for optimization, design, artwork etc. Just not resulting in vastly improved visuals.

But what I see so far are a bunch of people being absolutely convinced that there'll be massive improvements without being able to really explain where they will come from specifically. It feels like most are just saying it because they want to feel good about their purchase. But so far, no one has been able to go into more detail and convince me with a logical explanation outside of "rofl just look at the past".

Please, someone educate me. I'm honestly curious and trying to understand the expectations.

Everything is "buzzwords" when you don't have your hands on it.
A quick example would be the improved geometry engine helping with things like more efficient culling and dynamic level of detail. I don't know what more proof you are expecting short of someone leaking the PS5 SDK and architecture layout so that we can discuss the exact layout of the shaders or something.
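For anyone who wants that claim pinned down a little: "more efficient culling and dynamic level of detail" ultimately come down to operations like the two below, wherever they end up running (CPU, geometry engine, or a mesh-shader stage). Purely an illustrative sketch; the structs, plane convention and LOD distances are made up.

[CODE]
#include <array>
#include <cmath>

// Illustrative only: a sphere-vs-frustum visibility test and a
// distance-based LOD pick.
struct Vec3   { float x, y, z; };
struct Plane  { Vec3 n; float d; };            // dot(n, p) + d >= 0 means "inside"
struct Sphere { Vec3 center; float radius; };  // per-object bounding volume

inline float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Cull the object if its bounding sphere is fully behind any frustum plane.
bool IsVisible(const std::array<Plane, 6>& frustum, const Sphere& bounds)
{
    for (const Plane& p : frustum)
        if (Dot(p.n, bounds.center) + p.d < -bounds.radius)
            return false;   // completely outside this plane
    return true;
}

// Pick a mesh LOD from the camera distance (0 = full detail).
int SelectLod(Vec3 cameraPos, const Sphere& bounds)
{
    Vec3 v{bounds.center.x - cameraPos.x,
           bounds.center.y - cameraPos.y,
           bounds.center.z - cameraPos.z};
    float dist = std::sqrt(Dot(v, v));
    if (dist < 20.0f)  return 0;
    if (dist < 60.0f)  return 1;
    if (dist < 150.0f) return 2;
    return 3;   // lowest detail / impostor
}
[/CODE]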

The customized geometry engine existing is a fact.
Developers stating that while getting games running on PS5 is easy there is still tons more to get out of it is also a fact.
What kind of evidence do you think would exist that could prove that PS5 will have less of an improvement over time that PS4?

If you had some kind of evidence that these devs are lying, then I'd get it. But all I'm seeing is baseless speculation that there is some conspiracy.

Omega Supreme Holopsicon Where is the evidence? That's what this thread is for, to show the proof.

You want to say that the devs are lying about there being much more power to gain? Then prove it.
 

VFXVeteran

Banned
Everything is "buzzwords" when you don't have your hands on it.
Geometry is basically expected to give a substantial boost to processing.. geometry. Things like more efficient culling and dynamic level of detail. I don't know what kind of proof you are expecting without someone leaking the PS5 SDK and die shot.

He wants estimates from the people here that act like they know exactly what will happen.

You mention geometry. But geometry has to be rendered. If you have a game like Spiderman MM that has to dumb down geometry/shading to render reflections, adding more geometry into the scene isn't going to change this, for example.

Developers stating that, while getting games running on PS5 is easy, there is still tons more to get out of it is also a fact.

"tons" more? He wants to know like what? UE5 demo showed the most advanced geometry streaming this new generation. And yet the GPU couldn't render that demo at native 4k/60FPS. The limit was reached even with the SSD. This game:

fIigYJi.gif


doesn't look far from it, and it's not using an SSD. So the question is: what is the quantitative reality of the meaning of the words "tons more" to get out of the PS5? How much did Insomniac get out of the PS5 using its SSD and RT? 60%, 70%, 30%?


If you had some kind of evidence that these devs are lying, then I'd get it. But all I'm seeing is baseless speculation that there is some conspiracy.

It's not that they are lying, it's that we aren't seeing it yet. Insomniac has done their due diligence on MM, and we'd like to quantify that. If they haven't even scratched the surface of what the PS5 can do, performance-wise, why can't we see native 4K/60FPS, or better, more accurate ray tracing already?
 
The 360 era alone had games jump forward in massive leaps and strides in terms of graphical fidelity, seven times in total, including one that left the PC fanbase completely at odds with their hardware: Gears of War.

From base launch titles like Condemned, to GTA 4, then Gears of War and its sequels, and then the many, many countless reviewers who said in chorus that BF3-level graphics were something many thought impossible on that gen of console hardware. It is obviously a very shaky argument to infer that generational leaps in visuals are not possible within the same generation, and history is categorically not on your side.

To dissuade others, you established an argument I've never seen made in actuality... by anyone...

Instead what I see are users arguing that visual leaps within a generation are in fact tangible occurrences that happen with frequency, and your argument falls apart completely as long as others consider the fact that non-linear, in fact downright disruptive, leaps in visuals happen within a generation. Who even needs to create an argument about the linear progression of visual fidelity when disruptive, exponential leaps are what actually occur historically during a console generation?

Also, console hardware has only been stressed to the point of its own limitations when a custom offering struggles to deliver a game that maintains 24 FPS. Aside from this fact, you must typically wait at least four and a half years into a console's lifecycle to begin to see these types of advances made in stride.

This was the foundation of a long-running debate on Mad Onion/Futuremark, when countless hardware overclockers/computer scientists joined together to set the record straight.

The argument was: at what point can we safely believe, going forward, that a console has in fact been tapped out/found maturity in its dev cycle, and would the PC equivalent in fact outperform it?

They tested this by reducing the question to PC performance vs. console performance, knowing the latest PC hardware was in fact five generations more advanced than the Xbox 360.

This test also resolved the issue of relative potential left on consoles vs. PCs, conceding that console manufacturers do as they notoriously have for decades: hardcode in attributes that they believe no developer will ever utilize in totality. Attributes in fact more advanced than the hardware available on PC at time of launch.

It became evident that when a game was made intentionally to tap out a console's performance, that same game when ported to PC would in fact underperform its console brethren by 4-7 frames a second, even on PC hardware released a year and a half after the console launched.

The testbed was an Xbox 360 and a high-end PC with a 5870 GPU, a GPU released a year and a half after the 360 launched. The games were Fallout: New Vegas and then later Skyrim, and both variants underperformed their 360 equivalents.

They noted they could not perform this test without waiting four and a half years into the console's life cycle, as four and a half years of maturity were needed to test the notion that console hardware was underutilized at the onset of a console's launch cycle.

The Xbox 360 outperformed the PC, even when the PC was outfitted with the best hardware released a year and a half after the 360 launched.

So this was astronomically better hardware than what was on offer in the 360, specifically considering it was in fact a generational step up from the 360 testbed.

Five and six generations of hardware had been released on PC by the time New Vegas/Skyrim finally released, and the testbed, as I stated, utilized hardware that was in fact a legit generational step up from the consoles. The 5870 offered true programmable shaders and 6 percent better performance than the previous high-end GPU.

This occurrence made headlines on gaming websites big time.

So even while utilizing the BEST hardware within the allotted time frame, hardware a generational step up, released a year and a half after the 360's launch (a 5870 GPU, far superior to what was available in the 360, yet five generations behind the latest hardware on PC), these tests resolved your so-called improbability: waiting four and a half years to run the comparison on console and PC, the 360 outperformed the superior PC test chambers.

So on one hand you have a 360, on the other a PC with hardware that is specifically far more advanced, with higher clocks and more horsepower all around... underperforming the console.

This test was timed so that it would properly benchmark a console considered to be tapped out, so that devs would have found maturity with that ecosystem. And the final results, the Xbox 360 defeating its PC counterpart and the better, more advanced parts released a year afterward, were extrapolated forward ad infinitum.

So yes, I am inferring that a game that taps out consoles four and a half years from now will certainly outperform, with consistency, a 3080/AMD equivalent, given these results. Will the Series X outperform the 3080 four years from now? Yes. Will the Series X outperform the 3080 with DLSS 2.0 enabled? Only, and I stress only, if Microsoft evokes a massive undertaking to utilize its MLRender stack in the DX12U pipeline. Then and only then will the Series X, with a direct answer to DLSS 2.0, outperform the 3080 on games utilizing DLSS 2.0 standards four and a half years from now, as the previous test chamber inferred.

Will it outperform the latest hardware on PC? No. That is not why this testbed scenario was, with great effort, applied in a timed manner to non-current yet exceedingly more powerful PC hardware vs. the 360.
 
Omega Supreme Holopsicon if you post another comparison of cinematics vs. gameplay, I'm going to beg the mods to kick you out of my thread. How many times do I have to tell you guys to stop using cinematics as a means of comparing gameplay? It's COMPLETELY unfair.

There isn't a single gameplay character you can show that has better detail than the main characters in Cyberpunk.
People can see the Morales in-game vid, and the Cyberpunk in-game vid with one of the main characters, near the beginning of the vid I posted. They look quite comparable. That's a cross-gen game compared to the top PC game at max settings.
 

Lethal01

Member
If they haven't even scratched the surface of what the PS5 can do, performance-wise, why can't we see native 4K/60FPS, or better, more accurate ray tracing already?

So you're asking, if developers are going to get more performance out of the machine, then why aren't we seeing those improvements from the future now?

"tons" more? He wants to know like what? UE5 demo showed the most advanced geometry streaming this new generation. And yet the GPU couldn't render that demo at native 4k/60FPS. The limit was reached even with the SSD. This game:

The demo ran at 1440p/30 while they expect to have it running at 1440p/60 at launch; this is already a big improvement. That is clear evidence of there being more to get out of the PS5. In fact, just the existence of the Unreal demo is clear evidence that there is more to get from it, since it looks more advanced than anything on PS5 right now. Going from what we have now to games that look like the UE5 demo will be a big jump.

We will see higher resolution textures, or more geometry, or better lighting. I don't see why he would be asking about what kind of improvements we would see; that's obvious.
 

VFXVeteran

Banned
People can see the Morales in-game vid, and the Cyberpunk in-game vid with one of the main characters, near the beginning of the vid I posted. They look quite comparable. That's a cross-gen game compared to the top PC game at max settings.

Link your vid. I'm sure I'm going to laugh at you comparing any NPC in Spiderman MM with Cyberpunk's.
 

Lethal01

Member
fIigYJi.gif


doesn't look far from it, and it's not using an SSD. So the question is: what is the quantitative reality of the meaning of the words "tons more" to get out of the PS5? How much did Insomniac get out of the PS5 using its SSD and RT? 60%, 70%, 30%?

Well since you are literally just eyeballing it so will I.

To me it looks like a noticeable step down from the UE5 demo in geometry and texture quality.
Not to mention they never said it's not utilizing the SSD, or a higher amount of RAM to compensate for the lack of the SSD. Plus, you are making assumptions that the lighting is as dynamic as in the UE5 demo, when it could be more baked, unlike the UE5 demo, which we know was dynamic.
 

VFXVeteran

Banned
So you're asking if developer are going to get more performance out of the machine then why aren't we seeing those improvements from the future now?

If you have an unoptimized system but plenty of power, you can increase bandwidth and the GPU should continue to scale. If it runs like shit at 1440p/30FPS because the GPU isn't being fully utilized, then it would run like shit at 4K/30FPS too, with no decrease in performance. That's the hallmark of unoptimized code. It's the old tried-and-true benchmark for determining whether a game is CPU or GPU limited.
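The resolution test described above is easy to sketch: time the same scene at two resolutions and compare. Everything here is a stand-in (the fake RenderFrame just burns CPU time proportional to the pixel count so the snippet runs on its own); in a real engine you would time the actual frame loop.

[CODE]
#include <chrono>
#include <cstdio>

// Stand-in for an engine's frame function: does work proportional to the
// pixel count so the example is self-contained.
volatile float g_sink = 0.0f;
void RenderFrame(int width, int height)
{
    for (int i = 0; i < width * height / 64; ++i)
        g_sink = g_sink + 0.001f;
}

double AverageFrameMs(int width, int height, int frames = 200)
{
    using clock = std::chrono::steady_clock;
    auto start = clock::now();
    for (int i = 0; i < frames; ++i)
        RenderFrame(width, height);
    std::chrono::duration<double, std::milli> total = clock::now() - start;
    return total.count() / frames;
}

int main()
{
    double ms1440 = AverageFrameMs(2560, 1440);
    double ms2160 = AverageFrameMs(3840, 2160);
    std::printf("1440p: %.2f ms/frame, 2160p: %.2f ms/frame\n", ms1440, ms2160);
    // Roughly equal numbers -> the GPU isn't the bottleneck (CPU/engine-bound).
    // A much slower 2160p result -> GPU-bound, so optimization can buy resolution.
}
[/CODE]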

The demo ran at 1440p 30 while they expect to have it running at 1440p 60 at launch, this is already a big improvement.
We will see high resolution textures, or more geometry or better lighting. I don't see why he would be asking about what kind of improvements we would see, that's obvious.

I'd like to see 1440p/60FPS. That would mean dynamic 4K/60FPS. The only caveat with that demo is that it's not a game, so it will naturally not run at that performance level when several sub-systems are put into play (e.g. FX, NPCs, audio, etc.). Again, here is the hardware limitation I'm talking about.

He's asking about improvements because certain improvements are more costly than others. The best improvements to overall image quality cost more and therefore are rejected (e.g. RT, local dynamic light sources, or skin shaders).
 

VFXVeteran

Banned
Well since you are literally just eyeballing it so will I.

To me it looks like a noticeable step down from the UE5 demo in geometry and texture quality.

How so? That presentation looks very similar to UE5 demo. Grab a part of that UE5 demo where she is climbing a wall. I'm not talking about the camera looking at the rocks with the geometry. We already know UE5 will have the advantage there. Also, not sure how you can see a difference in texture sizes in that UE5 demo compared to this trailer. Can you point out what you are talking about in both?

Not to mention they never said it's not utilizing SSD or a higher amount of ram to compensate for the lack of the SSD.

We know it's a PC trailer. You could be correct, but I don't think you need an SSD to run that game tbh. The amount of RAM on a PC is insignificant, since that's part of the specs, which are unknown at this time.

Plus you are making assumptions that the lighting is as dynamic as the UE5 demo when it could be more baked unlike the UE5 demo which we know was dynamic.

That lighting is dynamic. That's why I took that GIF. Look at the dark part of the wall near the character light up when the wall breaks (and the opposite side as well). That's dynamic GI. To add, many games (e.g. AC: Valhalla, Cyberpunk, Watch Dogs, Horizon, etc.) have dynamic environment sky GI right now. They use dynamic light probe generation, so it would look indistinguishable from the Lumen technique if RT is not involved, except for the removal of geometry that blocks direct sunlight; but ND perfected that with UC4 and TLOU 2 with their local GI properties (and Alien: Isolation and Kingdom Come do it too).
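For readers who haven't run into it, the "dynamic light probe" approach mentioned here boils down to storing irradiance at grid points and blending the eight probes around each shaded point, re-filling the probes as the lighting changes. A simplified sketch (RGB-only probes, origin-anchored grid, at least two probes per axis; all names invented):

[CODE]
#include <cmath>
#include <utility>
#include <vector>

struct Color { float r, g, b; };

// A regular grid of irradiance probes, refreshed as time of day / lights change.
struct ProbeGrid {
    int nx, ny, nz;             // probe counts along each axis (>= 2 each)
    float spacing;              // world-space distance between probes
    std::vector<Color> probes;  // nx * ny * nz entries

    const Color& At(int x, int y, int z) const {
        return probes[(z * ny + y) * nx + x];
    }
};

Color Lerp(const Color& a, const Color& b, float t) {
    return {a.r + (b.r - a.r) * t,
            a.g + (b.g - a.g) * t,
            a.b + (b.b - a.b) * t};
}

// Trilinearly blend the eight probes surrounding a world-space position
// (grid assumed to start at the origin; positions are clamped to the grid).
Color SampleIrradiance(const ProbeGrid& grid, float wx, float wy, float wz)
{
    auto cell = [&](float w, int n) {
        float g = std::fmin(std::fmax(w / grid.spacing, 0.0f), float(n - 1) - 1e-3f);
        return std::pair<int, float>(int(g), g - float(int(g)));
    };
    auto [x, fx] = cell(wx, grid.nx);
    auto [y, fy] = cell(wy, grid.ny);
    auto [z, fz] = cell(wz, grid.nz);

    Color c00 = Lerp(grid.At(x, y,     z    ), grid.At(x + 1, y,     z    ), fx);
    Color c10 = Lerp(grid.At(x, y + 1, z    ), grid.At(x + 1, y + 1, z    ), fx);
    Color c01 = Lerp(grid.At(x, y,     z + 1), grid.At(x + 1, y,     z + 1), fx);
    Color c11 = Lerp(grid.At(x, y + 1, z + 1), grid.At(x + 1, y + 1, z + 1), fx);
    return Lerp(Lerp(c00, c10, fy), Lerp(c01, c11, fy), fz);
}
[/CODE]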
 

RaySoft

Member
CLAIM #1 - No matter what the generation, developers will ALWAYS need to start their learning process of the hardware all over again from ground zero.
No they don't have to start from scratch.

CLAIM #2 - The hardware is nowhere near fully utilizing it's GPU at the start of the generation.
True. Optimization plays a big role here. You can have some code maxing both the CPU and GPU just for the sake of it. The point is that over time devs both share and learn knowledge about a given hardware's strengths. With that knowledge you can optimize your code in many ways. This is alpha/omega. For instance, CPU-heavy code could be broken down into many bits running asynchronously on multiple compute shaders. It's all up to the devs how much time and effort they put into it.
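The "break CPU-heavy code into many bits running asynchronously" idea, sketched here on the CPU side with std::async; on the GPU, each chunk would instead become a thread group in a compute shader dispatch. The workload (summing a big buffer) is just a placeholder.

[CODE]
#include <algorithm>
#include <cstddef>
#include <functional>
#include <future>
#include <numeric>
#include <vector>

// Sum one chunk of the buffer; each chunk can run on its own worker.
double SumChunk(const std::vector<double>& data, std::size_t begin, std::size_t end)
{
    return std::accumulate(data.begin() + begin, data.begin() + end, 0.0);
}

// Split the work into `jobs` pieces, run them asynchronously, then join.
double ParallelSum(const std::vector<double>& data, std::size_t jobs = 8)
{
    std::vector<std::future<double>> futures;
    std::size_t chunk = (data.size() + jobs - 1) / jobs;
    for (std::size_t begin = 0; begin < data.size(); begin += chunk) {
        std::size_t end = std::min(begin + chunk, data.size());
        futures.push_back(std::async(std::launch::async, SumChunk,
                                     std::cref(data), begin, end));
    }
    double total = 0.0;
    for (auto& f : futures) total += f.get();   // wait for every job
    return total;
}
[/CODE]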

CLAIM #3 - Graphics technology has an exponential output based on linear time. A game today will look exponentially better in 2-3yrs.

I want to discuss these fallacies in detail and get to the heart of why people think that generations are always the same despite evidence that they are not always the same. The assumption that developers are always inexperienced and lack knowledge of basic graphics principles at the beginning of a generation. And we will discuss why people think a game today will look dramatically worse than its successor within the SAME generation. We will get down to details such as
Why do you use the word exponential? Ofc it won't be, but it's a fact that over time devs learn to exploit a set hardware's strengths and find other ways to mitigate its weaknesses.
 

Lethal01

Member
How so? That presentation looks very similar to UE5 demo. Grab a part of that UE5 demo where she is climbing a wall. I'm not talking about the camera looking at the rocks with the geometry. We already know UE5 will have the advantage there. Also, not sure how you can see a difference in texture sizes in that UE5 demo compared to this trailer. Can you point out what you are talking about in both?



We know it's a PC trailer. You could be correct, but I don't think you need an SSD to run that game tbh. The amount of RAM on a PC is insignificant, since that's part of the specs, which are unknown at this time.



That lighting is dynamic. That's why I took that GIF. Look at the dark part of the wall near the character light up when the wall breaks (and the opposite side as well). That's dynamic GI. To add, many games (e.g. AC: Valhalla, Cyberpunk, Watch Dogs, Horizon, etc.) have dynamic GI right now. They use dynamic light probe generation, so it would look indistinguishable from the Lumen technique if RT is not involved.

That specific gif could easily be part of a totally scripted cinematic, which means they have tons of ways to fake it being their GI doing the work.

603.png

600.png


Geometric and texture density look like a clear step up in the demo compared to the CD trailer.
Now that I look at it more, the shadow quality and AO look better too.

Of course I can't open the textures and measure exactly how big they are; neither can you. As I said, you eyeballed it and said they were close; I'm eyeballing it, and to me the demo pulls strongly ahead.

Also, it's not like this demo is just a big cutscene; it could have enough systems working to be a game like Uncharted. Game code is clearly being run.
 

Tripolygon

Banned
Your arguments are flawed. I gave complete and utter GAMEPLAY videos. You are giving me cinematics which I've already said don't count in the rendering pipeline when running the gameloop.

NONE of the images you put up are from gameplay.
First of all, those were not cinematics; they were screenshots from gameplay photo mode up close, bar the Horizon Forbidden West one, which I captured from the cinematic trailer. And I disagree with your narrow criterion that only the running gameplay loop counts. Everything counts, from cinematics to gameplay, as they are all running in realtime on the hardware, showcasing what the engine is capable of doing. That is like saying a CGI movie doesn't count because it is prerendered offline. Do baked lighting and shadows not count either?

And you are welcome to take photo mode screenshots as well.
 
First of all, those were not cinematics; they were screenshots from gameplay photo mode up close, bar the Horizon Forbidden West one, which I captured from the cinematic trailer.
He doesn't want photomode either. You're going to have to get close to a building and force the camera to get a close up of the characters in game outside photo mode.
 

Tripolygon

Banned
He doesn't want photomode either. You're going to have to get close to a building and force the camera to get a close up of the characters in game outside photo mode.
Here is the thing with the sort of discussion he wants: it never ends, because the goalposts and criteria keep moving and shrinking. That is not a game I want to play. And he is free to take photo mode screenshots as well, I won't be offended.
 
Here is the thing with the sort of discussion he wants: it never ends, because the goalposts and criteria keep moving and shrinking. That is not a game I want to play. And he is free to take photo mode screenshots as well, I won't be offended.
Morales character photomode pics > Cyberpunk character photomode pics. But he won't acknowledge that little fact.
 

diffusionx

Gold Member
True. Optimization plays a big role here. You can have some code maxing both the CPU and GPU just for the sake of it. The point is that over time devs both share and learn knowledge about a given hardware's strengths. With that knowledge you can optimize your code in many ways. This is alpha/omega. For instance, CPU-heavy code could be broken down into many bits running asynchronously on multiple compute shaders. It's all up to the devs how much time and effort they put into it.

The point is linked to the first one: they're not starting from scratch. They're not learning how to do compute shaders from scratch on a totally new hardware architecture and needing to learn the process from the start, so the process is in good shape from day one. Will it iterate? Of course it will, but instead of going from 0 to 50 (totally arbitrary numbers) or 50 to 90, it is like 90 to 95.

Even this gen saw the rise of physically based rendering, which was brand new, but devs kind of hit the ground running and now, in 2020, they're well versed in the techniques.
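Since PBR keeps coming up in this thread as a single buzzword, here is one small concrete piece of it for reference: the GGX (Trowbridge-Reitz) normal distribution term used in most current engines' specular models. The roughness-squared remapping is the common artist-facing convention; treat the snippet as illustrative rather than any particular engine's code.

[CODE]
#include <cmath>

// GGX / Trowbridge-Reitz normal distribution function D(h).
// NdotH:     cosine between the surface normal and the half vector (clamped >= 0).
// roughness: artist-facing roughness in [0, 1].
float GgxDistribution(float NdotH, float roughness)
{
    const float pi = 3.14159265358979f;
    float a  = roughness * roughness;  // common perceptual-roughness remapping
    float a2 = a * a;
    float d  = NdotH * NdotH * (a2 - 1.0f) + 1.0f;
    return a2 / (pi * d * d);
}
[/CODE]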
 

Lethal01

Member
The point is linked to the first one: they're not starting from scratch. They're not learning how to do compute shaders from scratch on a totally new hardware architecture and needing to learn the process from the start, so the process is in good shape from day one. Will it iterate? Of course it will, but instead of going from 0 to 50 (totally arbitrary numbers) or 50 to 90, it is like 90 to 95.

Even this gen saw the rise of physically based rendering, which was brand new, but devs kind of hit the ground running and now, in 2020, they're well versed in the techniques.

That "1st Claim" is the dumbest part of the OP to be honest, non of the example of what people are actually saying even come close to implying it. People aren't saying that they are starting from scratch, they are saying that they haven't mastered it yet. There is no denying that claim without saying that all the devs are lying.
 

RaySoft

Member
The point is linked to the first one: they're not starting from scratch. They're not learning how to do compute shaders from scratch on a totally new hardware architecture and needing to learn the process from the start, so the process is in good shape from day one. Will it iterate? Of course it will, but instead of going from 0 to 50 (totally arbitrary numbers) or 50 to 90, it is like 90 to 95.

Even this gen saw the rise of physically based rendering, which was brand new, but devs kind of hit the ground running and now, in 2020, they're well versed in the techniques.
It was just an example. If new hardware introduces new and better ways to do certain things, ofc utilizing that is the better option.
But this again just makes my point, that software is king. Just think about it: the hardware is set, but the code is evolving over time. One person could make their code run sluggishly while another person could do the same task much better just by utilizing the hardware better. Really, really good code will get closer to that coveted 1:1 ratio with the hardware, but a set piece of hardware will NEVER reach its full 100% potential anyway... We can only strive to get as close to it as possible.
 

VFXVeteran

Banned
That specific gif could easily be part of a totally scripted cinematic, which means they have tons of ways to fake it being their GI doing the work.

How was him swinging from a rope and latching on (very awkwardly) a cinematic? You are arguing for the sake of it now.


600.png


kKdxmm5.png

Geometric and texture density look like a clear step up in the demo compared to the CD trailer.

It's not a clear step up. The lighting in both games looks very similar. In fact, I'd give CD the edge in lighting, as it's using a good tone mapping shader at the right points on the rocks to make the lighting pop. The texture color palette on each looks the same. I like the rock formation of UE5, but I also like the formation in CD. Obviously UE5 has the geometry advantage, as we already agreed, but that's not giving the demo a SIGNIFICANT advantage. If you saw these two games running together in this exact sequence, you'd say the UE5 demo has more geometric detail, but that would be all you'd say. Claiming that CD, which is a full game, is leagues behind would be just wrong.
 
I'm saying several things, none of which you seem to be interpreting properly. The issue here has always been defining what you guys mean by "dramatically" different at the tail end. The issue has always been how much optimization will be had that will translate into a BIG boost in performance, thereby lending itself to more graphics features.

As someone has already shown with screenshots over the lifespan of the PS4: not much, certainly not enough to justify calling it a "dramatic" difference. Performance on the PS4 has been constant throughout its generation as well.



Can you stop implying that I'm saying there is NO optimization to be had? You drag the argument out longer and longer by adding erroneous words to my claims so you can call yourself correcting me. Sure, there is optimization. Again, sure, there is optimization. How MUCH is needed to the point of visual change? Can YOU specify? Do you see the PS5 being able to run native 4K/60FPS in every game later in the gen? That kind of optimization?



I don't disagree with anything you said. I will say, however, that you don't know how much time that is. It could be 3 years or 6 months. You also aren't considering that the PS4 and PS5 aren't that radically different and therefore wouldn't need the same amount of ramp-up time. You fail to mention that. Why?



Never said it didn't look better. I said it didn't look dramatically better if you examine the tech and ignore the scope (i.e. dinos vs. soldiers and wilderness vs. enclosed areas). If you guys want to think that KZ:SF looks dramatically worse than Horizon, then others can interpret that going forward as mediocre improvement. You would say Spiderman MM looks dramatically better than Spiderman PS4, and it really doesn't. Do the Insomniac devs still need time to understand the PS5 so they can make an even bigger jump with Spiderman 2? What kind of things do you think will be in Spiderman 2 that will make it pop over Spiderman MM?



First, Epic is a 3rd party multiplatform company. They were able to get the UE5 demo up and running before the PS5 even released. That tech will be used across UE5 multiplatform games including PC/XSX. Agree? Second, you believe that the UE5 demo looks better than all PC games now, right? If so, then we don't even have to wait, because you already think every other game out looks dramatically worse even now, based solely on geometry/texturing.

So basically our disagreement is with what we could classify as "dramatic"?

I think that the improvements we will see throughout this gen will be similar to what was seen when we went from GTAV/RDR2 to Cyberpunk 2077 on PC.

Do you consider that a dramatic difference?
 
I'm not straying from the topic, I'm showing you how daft your premise was from the beginning. Nobody claimed volumetric smoke was new in Horizon Zero Dawn. You made up a dumbass strawman you can argue with.

I can say that Naughty Dog is not reinventing the wheel here but also appreciate the marked improvement between Uncharted 4 and The Last of Us Part 2.
I'm fairly sure that nobody thinks that, because the FX chip in the SNES had shading and texture mapping, there was no difference between it and the PS1.

The difference is not in the existence of the techniques. A lot of the math behind 3D graphics algorithms is hundreds or even thousands of years old, and for some reason we still find new uses for it, yet nobody thinks they could run Cyberpunk 2077 with a pen and paper, even if they are very good at math.

The effects in Killzone: SF are very comparable to those in Horizon: ZD; the quality is in the same league, the performance is in the same league, etc.

Arguing over whether the steps are huge or expected... well, it shows there is nothing unexpected.
 

Kataploom

Gold Member
You've demonstrated an inability to appreciate steps in graphics that most of us notice. We've seen this with your comments on the RT in Miles Morales. I don't know if this is a by-product of you working in the field, but you clearly don't view games like a normal gamer. We're not graphics engineers, we're just shmoes. Some of us are engineers, but again, we don't specialize in graphics work. All I have to rely on are the differences that my own eyes pick up, and they are vast. Yes, Horizon looks a step up from KZ to me. I'm not dissecting the tech that went into it, because I have no reason to care. The totality of the final product is what matters.
I was about to say the same, because I can understand how VFXVeteran feels, and I'm not even a AAA dev (I've worked on high quality 2D and 3D graphics for clients though).

I don't see TLOU2 as a big leap even from the first game. I mean, it looks better for sure, but it's not like there's something "absurdly good" about how it looks... and I feel the same about the first game against any other games from that generation... it looks pretty average, and I don't see what's so unique, maybe because I see it from a technical point of view.

There are things that impact me, like the god rays and lit smoke in the first Crysis game (PC), or a lot of BOTW stuff, because of the time or hardware they were done on, though.

The improvements in game visuals are mostly due to tools (yes, they are THAT important) and to developers in general finally knowing how to take advantage of their possibilities, but the techniques are mostly already in use, just in a less smart way, at the beginning of a generation (since the 7th gen or so, thanks to shaders), and I don't see how that is down to hardware.

And to be fair, most of the things consoles get to see were already implemented in PC versions of games all the way back in the last gen; it's just that they sit on top of a game developed for consoles, so the "base" version is still last gen.

Take a look at games like Batman AC, Tomb Raider 2013 and Remember Me in their PC versions maxed out (or even medium/high) and you can see many things that most gamers only started to see on 8th gen consoles (mostly regarding resolution, texture quality, ambient occlusion, particle density, etc.).

A lot of this stuff was already there and devs really knew how to use it. Since shader-based graphics computing came out, maxing out hardware capabilities has mostly been tied to art direction IMO; a lot of the time we hear that devs waited to finally be able to use a given technique because it got pushed back due to the consoles' inability to process it at playable performance.
 

stetiger

Member
VFXVeteran As a programmer I would like to chip in here. I think there are two conversations going on here and I would like to delineate between them.

What does taking full advantage of the hardware mean?
This could mean a lot of things the way I see it as an engineer. It could mean making the most of your clock cycles, making the most of your threads by using a much more multithreaded/job-based pipeline, and/or using the instructions that are inherent to the machine itself. I think it's fair to say that most games are not using all Zen 2 instructions, because that would be a pain to implement as a programmer. The best I could do would be to find the largest supported instructions and leverage those. For example, AVX256 is not being used by Cyberpunk on PC/PS5/Series consoles. Using these instructions would cripple the PS4/Xbone versions and old computers. I expect developers will start using those features in the coming years, but I don't see it happening until developers are targeting next gen only and know what to use these types of instructions for.

The other example would be async compute: Battlefield 3 looks amazing on PS3/Xbox 360 because DICE moved to that pipeline. The PS3 did well there and outpaced the Xbox as well (even though it had been trailing for the whole generation); that, to me, is an example of dramatic change from using the hardware properly. Or using streaming from the HDD to increase texture quality as opposed to DVDs. I would say in those cases developers were still learning how to best use the hardware six years after release. On PC, those things are brute-forced or already available. It's hard to say that someone is making the best use of the PS4/Xbone without async compute support, which was not ubiquitous at the beginning of the generation (think AC:IV versus AC:Unity). I will point out that this only happens once a generation, and can provide a visual signature for the generation. I also have to point out that most studios don't have the resources or the technological know-how to do this. So most developers don't really change their pipeline that much.
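To put one concrete face on the "instructions inherent to the machine" point (the post's "AVX256" presumably meaning 256-bit AVX/AVX2), here is the same reduction written scalar and with 8-wide SIMD intrinsics. Illustrative only; it assumes the element count is a multiple of 8 and an AVX-enabled build.

[CODE]
#include <immintrin.h>
#include <cstddef>

// Plain scalar sum for comparison.
float SumScalar(const float* v, std::size_t n)
{
    float s = 0.0f;
    for (std::size_t i = 0; i < n; ++i) s += v[i];
    return s;
}

// Same loop with 256-bit AVX intrinsics: 8 floats per add.
float SumAvx(const float* v, std::size_t n)   // n must be a multiple of 8
{
    __m256 acc = _mm256_setzero_ps();
    for (std::size_t i = 0; i < n; i += 8)
        acc = _mm256_add_ps(acc, _mm256_loadu_ps(v + i));

    float lanes[8];
    _mm256_storeu_ps(lanes, acc);             // fold the 8 partial sums
    float s = 0.0f;
    for (float x : lanes) s += x;
    return s;
}
[/CODE]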

What does graphics improving over the generation mean?
Again, not a one-thing solution. Most visual improvements seem to stem from better art, hardware limitations being better understood, and levels being built in a way that leverages that gained knowledge. So the game looks better without really changing much, except on a few occasions. DICE, I believe, is the only studio that truly embodies using better techniques to make their games look better as well as better hardware usage (think Battlefield 4 vs. Battlefront).

TLDR: It is both. Some developers learn how to use the hardware better and amplify existing techniques, while some developers reuse their techniques smartly. I am curious to see what will happen when AVX256 is used in interesting ways, or when mesh shaders are better implemented with geometry culling. In theory, that would provide higher geometry counts. Some things, on the other hand, will not improve, like resolution and ray tracing (but perhaps how they are implemented could be, let's say, more impactful).
 

borborygmus

Member
The limiting factor is productivity, and this is what improves over time within the same hardware generation, which results in better graphics from the same developers on the same hardware.

- Every codebase has some rigidity and idiosyncrasies so even if a technique is known, it's not a no-brainer to incorporate it into your preexisting systems.

- It's extremely hard to evaluate the performance of a feature, so many times experimental features are left out due to time constraints.

- It's prohibitive to benchmark every single feature of a codebase. Very often there are quick wins that weren't noticed. It often turns out that a feature that was thought to be important doesn't actually contribute very much to the final output and cutting it improves performance significantly. The reduction in rendering cost can be used on something else with more visual bang-for-buck.

- There are always idiosyncratic optimizations that can make an expensive feature work for your particular game. Optimizations can come with heavy limitations, but it's fine if they're not noticeable in your game. This is not a straightforward determination.

- Codebases are always suboptimal. There's almost always something you could rewrite to run better. The constraint here is time/productivity.

- Game design happens much faster than implementation. The code is always playing catch up to what the designers want. This means you have very limited time for refactoring or playing with experimental things. Sometimes the designer needs that feature now and then everyone moves on to something else ASAP and you end up with no time trying to get more out of it, so you save your idea for the next project because it's just not gonna happen in this one.

All of these constraints are mitigated with experience.
 

Lethal01

Member
How was him swinging from a rope and latching on (very awkwardly) a cinematic? You are arguing for the sake of it now.

I wasn't referring to him swinging; I was specifically talking about the other scene where we see the top of the cave break apart, letting direct light in, which then bounces around. I assumed that's what you were talking about, since it's the clear example of dynamic GI in the trailer.

Unlike the Unreal demo, which was specifically showcased in a way that demonstrated exactly how the GI performed, I need to give that scene the benefit of the doubt. I don't think it's smart to make any definitive statement that CD has GI that's as good as the UE5 demo's.

600.png


kKdxmm5.png



It's not a clear step up. The lighting in both games looks very similar. In fact, I'd give CD the edge in lighting, as it's using a good tone mapping shader at the right points on the rocks to make the lighting pop. The texture color palette on each looks the same. I like the rock formation of UE5, but I also like the formation in CD. Obviously UE5 has the geometry advantage, as we already agreed, but that's not giving the demo a SIGNIFICANT advantage. If you saw these two games running together in this exact sequence, you'd say the UE5 demo has more geometric detail, but that would be all you'd say. Claiming that CD, which is a full game, is leagues behind would be just wrong.

Firstly, I do think increased geometry makes a huge difference and is enough to make it look much better than CD. You keep asserting that a game doesn't look far better if they "only" improve or add one major feature. You don't need a huge list of improvements to get a far better looking game. A jump in geometry, or lighting, or effects is more than enough.

Secondly, while I can agree on it having good tone mapping, the shadow detail also looks worse. The shadow from the character just doesn't connect with the wall as it should, and I can't see the same level of ambient occlusion.

So, as said before, games going from looking like Demon's Souls or Miles Morales to looking like the Unreal demo is the kind of jump I would say will happen, and I think that jump will be a great demonstration of how much better games get once devs get used to the hardware.
 

Darius87

Member
How was his swinging from a rope and latching on (very awkwardly) a cinematic? You are arguing for the sake of it now.


[two screenshots being compared]



It's not a clear step up. The lighting in both games looks very similar. In fact, I'd give CD the edge in lighting, as it's using a good tone mapping shader at the right points on the rocks to make the lighting pop. The texture color palettes look the same. I like the rock formation in UE5, but I also like the formation in CD. Obviously UE5 has the geometry advantage, as we already agreed, but that doesn't give the demo a SIGNIFICANT advantage. If you saw these two games running together in this exact sequence, you'd say the UE5 demo has more geometric detail, but that would be all you'd say. Claiming that CD, which is a full game, is leagues behind would just be wrong.
Not a clear step up? Better lighting? :pie_roffles: This is just more proof of how blind and delusional you are. The UE5 demo looks like an offline render running in real time; even REAL devs have said that. But hey, you think CD has better lighting because it has a better shader that makes the lighting pop? Prove it. What's the complexity of that shader compared to UE5's? Why does it look better than UE5?
Now, CD looks good, but it still has a gamey look to it and is nowhere near UE5 quality. The only reason you're saying this is because only the PS5 renders visuals like UE5 at the moment; not even your overpriced $2K 3090 has shown it can run this.
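For what it's worth, a "tone mapping shader" in this context usually just means a filmic curve applied to the HDR frame before display. Below is a minimal sketch using Krzysztof Narkowicz's published ACES approximation; it is purely illustrative and not the actual shader of either the UE5 demo or CD.

Code:
#include <algorithm>
#include <cstdio>

// Maps an HDR luminance value (linear, unbounded) to displayable [0, 1]
// using the fitted ACES curve published by Krzysztof Narkowicz.
static float acesApprox(float x) {
    float mapped = (x * (2.51f * x + 0.03f)) / (x * (2.43f * x + 0.59f) + 0.14f);
    return std::clamp(mapped, 0.0f, 1.0f);
}

int main() {
    // Bright HDR values get compressed instead of clipping, which is what
    // makes lit rock faces "pop" without the highlights blowing out.
    const float samples[] = {0.1f, 0.5f, 1.0f, 2.0f, 8.0f};
    for (float hdr : samples)
        std::printf("HDR %.1f -> display %.3f\n", hdr, acesApprox(hdr));
}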
 
Take a look at games like Batman AC, Tomb Raider 2013 and Remember Me on their PC versions maxed out (or even medium/high) and you can see many things that most gamers started to see on 8th gen consoles (mostly regarding resolution, texture quality, ambient occlusion, particle density, etc.).
From what I can recall, Remember Me was the first game to heavily implement PBR. It was certainly quite the looker at the time of release.
 

VFXVeteran

Banned
So basically our disagreement is with what we could classify as "dramatic"?

I think that the improvements we will see throughout this gen will be similar to what was seen when we went from GTAV/RDR2 to Cyberpunk 2077 on PC.

Do you consider that a dramatic difference?

Nope. However, it depends on whether you want Cyberpunk 2077 at MAX settings or not, because I don't see that kind of rendering becoming the norm in every game. Not when an RTX 3090 is rendering it all in the 30-40 FPS range. If you think what the PS5/XSX put out next year is what we will see all generation, then I'm OK with that.
 

FireFly

Member
Is it? I much prefer the art direction of Dark Souls 1, but Dark Souls 3 is clearly a much better-looking game.

People need to learn how to put their subjective opinion aside, otherwise I could say Furi looks better than Uncharted because I prefer its art direction.
I am not saying that effective art direction eliminates the need for higher fidelity visuals. Or that we can't make comparisons in fidelity across generations. Clearly, Doom 3 "looks better" than Doom 2, in the sense of having more realistic graphics, whether or not you personally prefer pixelised sprites to high poly models.

My point is that when we make comparisons in the first place, we're comparing art! We're comparing models, textures, animations, level design. (Without denying there is also the technical aspect of how these things are rendered.) So you can't have a purely technical discussion about how a game looks, because the quality of the art is an inescapable part of the process. It would be like comparing two paintings based on the number of colours used and the material of the canvas, while ignoring the quality of the representations themselves. And my main point with regard to generational improvements is that the quality (read: perceived fidelity) of the art can improve significantly, even while the technology remains the same.

So, for example, I take the main praise for TLoU 2 to be not about the rendering techniques themselves but about the consistency with which they are applied. Naughty Dog are still using cube maps for reflections, which reflect off-screen content in a static way, but they went in and hand-aligned them to match the SSR reflections (which only reflect on-camera detail) to create the illusion of a "real" reflection. So the reflections look great, not because Naughty Dog discovered some amazing new technique, but through sheer artistry. But does the player care, if the end result is the same? No. That's why it's disingenuous, every time a player says that a particular scene or game looks better than another, to try to force them to establish some kind of purely technical difference.
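As a rough illustration of the pattern described here, a minimal sketch of the usual SSR + cube map fallback blend: use the screen-space hit where one exists, and fade to the artist-placed cube map where the ray leaves the screen. The hand-alignment mentioned above is what makes that seam invisible. Struct names and values are hypothetical stand-ins, not Naughty Dog's code.

Code:
#include <cstdio>

struct Color { float r, g, b; };

struct SSRResult {
    Color color;      // what the screen-space ray hit
    float confidence; // 1 = solid on-screen hit, 0 = ray left the screen / missed
};

static Color lerp(Color a, Color b, float t) {
    return { a.r + (b.r - a.r) * t,
             a.g + (b.g - a.g) * t,
             a.b + (b.b - a.b) * t };
}

// Final reflection = cube map fallback blended toward the SSR hit by confidence.
static Color resolveReflection(SSRResult ssr, Color cubeMapSample) {
    return lerp(cubeMapSample, ssr.color, ssr.confidence);
}

int main() {
    SSRResult ssr = { {0.9f, 0.6f, 0.3f}, 0.25f }; // ray mostly off-screen -> low confidence
    Color cube = { 0.8f, 0.55f, 0.35f };           // hand-placed cube map, authored to match
    Color out = resolveReflection(ssr, cube);
    std::printf("reflection = (%.2f, %.2f, %.2f)\n", out.r, out.g, out.b);
}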

And my wider point is that how we evaluate all the disparate elements that make up a scene, including the technology *and* the art, is subjective. We can agree that Doom 3 "looks better" than Doom 2 and Half-Life 2 looks better than Half-Life. This is the case because there are multiple generations between them, so practically everything is massively upgraded. But does Half-Life 2 look better than Doom 3? They were both targeting the same hardware and released at the same time, yet Doom 3 had per pixel lighting for everything, and a full dynamic shadowing system, and Half-Life 2 didn't. On the other hand Half-Life 2 had baked radiosity lighting, which made indoor environments seem more naturally lit, and the shadows were softer than Doom 3's hard edged stencil shadows. The textures were also higher resolution, overall.

So how do we determine objectively, which game looked better? My contention is that there is no formula for this. Some people were bothered by the hard-edged shadows in Doom 3, some were not. Some people were bothered by the "faked" NPC shadows in Half-Life 2, some were not.
 

VFXVeteran

Banned
TL;DR: It is both. Some developers learn how to use the hardware better and amplify existing techniques, while some developers reuse their techniques smartly. I am curious to see what will happen when AVX2 (256-bit) is used in interesting ways, or when mesh shaders are better implemented with geometry culling. In theory, that would allow higher geometric detail. Some things, on the other hand, will not improve, like resolution and ray tracing (though perhaps how they are implemented could become, let's say, more impactful).

Thanks for your input! Welcome to the thread!

Would you say, in your opinion, that with everything going on in a game like Cyberpunk (especially on a high-end PC GPU) running at max throughput @ 4K, RTX set to Ultra with DLSS 2.0, you'd see a console exclusive exceed that amount of rendering at the same settings, with no DLSS in hardware? We've had guys claim that "there will be 1st party games on PS5 that will look better than this game from a technical perspective." Many Sony gamers here want to use subjective comparisons of art direction to say a game "looks better" technically. The problem is that they never qualify their comments with "subjectively," and instead declare that optimization has made a $500 box do "more" than a $2,000 high-end PC. This is the crux of argument after argument on these boards. I believe that if more developers were on these boards (besides me and a handful of others) putting things into perspective, we'd see fewer of these ridiculous claims.
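On the mesh shader + geometry culling point in the quote above: the usual win is per-meshlet culling, i.e. rejecting whole clusters of triangles (bounding-sphere test plus a normal-cone backface test) before any vertex work is done. A rough sketch under those assumptions, with hypothetical structs rather than any real mesh shading API:

Code:
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static float len(Vec3 a) { return std::sqrt(dot(a, a)); }

struct Meshlet {
    Vec3 center; float radius;       // bounding sphere
    Vec3 coneAxis; float coneCutoff; // average normal + spread (cosine threshold)
};

// Keep the meshlet only if it might be visible from 'eye' looking roughly down +Z.
static bool meshletVisible(const Meshlet& m, Vec3 eye) {
    // Trivial frustum proxy: anything entirely behind the eye is rejected.
    if (m.center.z + m.radius < eye.z) return false;
    // Backface cone test: if every triangle in the cluster faces away, reject.
    Vec3 toMeshlet = { m.center.x - eye.x, m.center.y - eye.y, m.center.z - eye.z };
    float d = len(toMeshlet);
    if (d > 0.0f && dot(m.coneAxis, toMeshlet) / d > m.coneCutoff) return false;
    return true;
}

int main() {
    Vec3 eye = {0, 0, 0};
    std::vector<Meshlet> meshlets = {
        { {0, 0,  10}, 1.0f, {0, 0, -1}, 0.9f }, // facing the camera -> drawn
        { {0, 0,  10}, 1.0f, {0, 0,  1}, 0.9f }, // facing away -> culled
        { {0, 0, -10}, 1.0f, {0, 0, -1}, 0.9f }, // behind the camera -> culled
    };
    for (size_t i = 0; i < meshlets.size(); ++i)
        std::printf("meshlet %zu: %s\n", i,
                    meshletVisible(meshlets[i], eye) ? "drawn" : "culled");
}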
 

VFXVeteran

Banned
Firstly, I do think increased geometry makes a huge difference and is enough to make it look much better than CD. You keep asserting that a game doesn't look far better if they "only" improve or add one major feature. You don't need a huge list of improvements to get a far better-looking game. A jump in geometry, or lighting, or effects is more than enough.

We will have to disagree here. The geometric detail on props in today's games absolutely does need more tessellation, and that added geometry would do wonders for the look of the environment in this case. However, I don't think that's the overall game changer moving forward. I will stand by lighting/shading being the long-run factor. Perhaps it's due to me working in the industry for such a long time and seeing how that's what makes the visuals (they spend way less time on models and geometry).

Secondly, while I can agree it has good tone mapping, the shadow detail also looks worse. The shadow from the character just doesn't connect with the wall the way it should, and I can't see the level of ambient occlusion.

I think that grab on the ledge was a bug, as he wasn't close enough to it. Obviously the game has AO, and it will probably have a higher-end form of it, HDAO (which is in every PC game nowadays).

So, as said before, games going from looking like Demon's Souls or Miles Morales to looking like the Unreal demo is the kind of jump I'd say will happen, and I think that jump will be a great demonstration of how much better games get once devs get used to the hardware.

So you think Nanite will be the deciding factor for this generation rather than lighting. I'd have to disagree with it being the biggest jump, but I will agree it will be a jump. Not a generational leap, as implied by comments like "if it's doing this so early on, imagine what it will look like in 3 years." We both know there is a lot of hyperbole going on with the Sony crowd. No use in defending that.
 

VFXVeteran

Banned
The improvements in games' visuals are mostly due to tools (yes, they are THAT important) and developers in general finally knowing how to take advantage of their possibilities; the techniques are mostly already in use at the beginning of a generation, just in a less smart way (since the 7th gen or so, thanks to shaders), and I don't see how that is down to the hardware.

That is my point. People seem to think that a game like Spider-Man MM will look much, much better later in the generation, when I feel that while optimizations can be made, they won't be dramatic enough to call it a "much, much better looking" implementation. If they struggled with RT reflections and had to cull a lot of things out of the reflection, I don't think that's going to change significantly. I don't see them freeing up enough resources to implement even more RT features, for example, in the next game.
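To be clear about what that culling looks like in practice, here is a minimal sketch of the idea: only objects likely to be noticed in a reflection get pushed into the ray-traced reflection scene, which keeps the acceleration structure small. The heuristic, thresholds, and struct are hypothetical, not Insomniac's actual rules.

Code:
#include <cstdio>
#include <vector>

struct SceneObject { const char* name; float distance; float radius; bool dynamic; };

// Decide whether an object is worth including in the reflection acceleration structure.
static bool includeInReflections(const SceneObject& o) {
    float apparentSize = o.radius / o.distance;        // rough on-screen size proxy
    if (apparentSize < 0.02f) return false;            // too small to be noticed in a reflection
    if (o.dynamic && o.distance > 60.0f) return false; // skip distant movers to avoid BVH refits
    return true;
}

int main() {
    std::vector<SceneObject> scene = {
        {"skyscraper",     300.0f, 80.0f,  false},
        {"nearby car",      20.0f,  2.0f,  true },
        {"far pedestrian",  90.0f,  0.9f,  true },
        {"trash bag",       10.0f,  0.15f, true },
    };
    for (const auto& o : scene)
        std::printf("%-15s -> %s\n", o.name,
                    includeInReflections(o) ? "in reflection scene" : "culled");
}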

And to be fair, most of the things consoles get to see were already implemented in PC versions of games throughout last gen; it's just that they're layered over a game developed for consoles, so the "base" version is still last gen.

This statement is heresy to the Sony crowd. It is exactly my base case for every prediction I make about future games: if it's not seen on the PC, then it's not going to be seen on the console. The UE5 demo is most definitely running on a PC, but I understand the marketing around that demo and how Sony wanted first rights to showing it.

It also brings up another point that turns into arguments here - that technology is moved forward by the console exclusives. That is just completely wrong, and yet I get shunned for saying otherwise.

A lot of this stuff is already there and devs really know how to use it. Since shader-based graphics computing came out, maxing out hardware capabilities has mostly been tied to art direction IMO; a lot of the time we hear that devs had been waiting to finally use a given technique because it had been pushed back due to the consoles' inability to process it at playable performance.

That's another HUGE point right there. With the advent of pixel/vertex shaders, pretty much every studio has been able to implement common shader techniques for their games without having to write custom low-level code to get good results.
 

VFXVeteran

Banned
And my wider point is that how we evaluate all the disparate elements that make up a scene, including the technology *and* the art, is subjective. We can agree that Doom 3 "looks better" than Doom 2 and Half-Life 2 looks better than Half-Life. This is the case because there are multiple generations between them, so practically everything is massively upgraded. But does Half-Life 2 look better than Doom 3? They were both targeting the same hardware and released at the same time, yet Doom 3 had per pixel lighting for everything, and a full dynamic shadowing system, and Half-Life 2 didn't. On the other hand Half-Life 2 had baked radiosity lighting, which made indoor environments seem more naturally lit, and the shadows were softer than Doom 3's hard edged stencil shadows. The textures were also higher resolution, overall.

So how do we determine objectively, which game looked better? My contention is that there is no formula for this. Some people were bothered by the hard-edged shadows in Doom 3, some were not. Some people were bothered by the "faked" NPC shadows in Half-Life 2, some were not.

Very good points, FireFly. But can we, in good faith, rely on a game's popularity on these boards (which means the most voices speaking subjectively) to dictate a game's ranking on the "best looking" chart? We have to somehow establish a way that is objective. This very thread concentrates on that objectivity. When people declare that a particular game looks the best BUT then declare that the technology had something to do with it, we've moved from a subjective opinion to an objective claim that can be dissected in detail. Many people steer away from that because they want their subjective opinion to have weight.
 

Sun Blaze

Banned
A lot of arguments stem from misunderstanding.

The main thing I see is that VFXVeteran and several other users disagree on the very basic definition of what they are arguing about.

I think the disconnect stems from USAGE vs. EFFICIENCY. Some people have claimed that the consoles don't use their full power, which is false; they are already tapped out. However, do devs currently make the best use of the hardware?

Case in point: Spider-Man: Miles Morales. People often say it has the best RT implementation on consoles, which is factually incorrect; Watch_Dogs Legion is at least as good. The difference is, Insomniac focused on key areas that are OBVIOUS to the player, such as draw distances. At first glance, Spider-Man will appear better because the compromises it makes are much less obvious. But then you realize Watch_Dogs does things like reflections within reflections, and runs at a higher framerate within them. These things will not be perceptible to the average player unless they run up close to the reflections and search for them. Hence, in that sense, Spider-Man may appear to put the PS5's GPU cycles to better use, because what it does best is much more noticeable. That doesn't make it better, though, because it doesn't use more advanced techniques or effects than Watch_Dogs (if I remember correctly, Watch_Dogs Legion also has particle reflections, which Spider-Man does not, but I can't confirm).
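A quick sketch of why "reflections within reflections" is a cost dial rather than just a checkbox: every extra level is another trace, so a renderer picks a maximum bounce depth. maxDepth = 1 gives single-bounce reflections (mirrors look flat inside mirrors); 2 or more lets reflective surfaces show up inside each other. This is a purely illustrative pseudo-renderer, not either game's code.

Code:
#include <cstdio>

struct Hit { bool mirror; float brightness; };

// Stand-in for a real ray cast into the scene.
static Hit castRay(int bounce) {
    return { bounce < 3, 1.0f / float(bounce + 1) };
}

// Accumulate reflected light, recursing only while bounces remain in the budget.
static float traceReflection(int bounce, int maxDepth) {
    if (bounce >= maxDepth) return 0.0f;   // depth exhausted: no further reflection
    Hit h = castRay(bounce);
    float color = h.brightness;
    if (h.mirror)                          // hit another reflective surface?
        color += 0.5f * traceReflection(bounce + 1, maxDepth);
    return color;
}

int main() {
    std::printf("1 bounce:  %.3f\n", traceReflection(0, 1));
    std::printf("3 bounces: %.3f\n", traceReflection(0, 3));
}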

About the second point, of graphics getting much better over time: yeah, I think this stopped being true starting with the PS4. In the olden days of the N64, PS2 and PS3, the new machines were markedly different from their predecessors and from everything devs knew; it was practically learning a new language for them. In this day and age? I'd argue there are still improvements, but this is more due to the development of new techniques and technologies over time than to devs "mastering" the hardware. Killzone Shadow Fall is from 2013, Infamous Second Son from 2014. I'd say they still look fantastic to this day, and their successors don't look significantly better. Horizon: Zero Dawn and Ghost of Tsushima really leverage their art styles, but in terms of rendering techniques they do little their predecessors did not, and where they do, it's mostly due to new innovations introduced after 2014, which has nothing to do with the consoles but is just how the world works. Ryse is also a top-tier Xbox One game despite being a launch title. AC: Unity, to me, is to THIS DAY still the best-looking AC game in the series, and certainly still one of the best-looking open-world titles despite being 6 years old.

This should be obvious with Ratchet & Clank and Horizon: Forbidden West, neither of which looks far beyond its predecessor.
 