
Crytek CEO Cevat Yerli: "Crysis 3 maxes out consoles, not even 1% left"

It should not be "active at all times".

The reason fast movement blurs a bit in your vision is the time it takes for the light receptors in your eye to change the signals they send to your brain.

That means that you will always get that slight blurring effect no matter what! Including when playing games!
No, this isn't how it works.

There's a reason video recordings have to be set to a certain shutter angle to get a particular amount of motion blur. That's simply because TVs and monitors cannot create motion blur the way actual, physical motion does. No matter how high the frame rate of your monitor gets, it will never, ever produce motion blur on its own. In fact, a 60 FPS video with a shutter angle equivalent to 1/240 of a second will look completely unnatural. There will be hardly any motion blur at all! It'll look a lot like what video games without per-object motion blur look like today.

I am no physicist, so I can't tell you the physics behind why physical movement produces motion blur and TVs do not, but it is a thing and it is real. My hypothesis is that TVs merely emit light through tiny colored dots, whereas the phenomenon that produces motion blur requires reflection of light across a surface; hence an object moving quickly reflects light into our eyes in a smooth stream, whereas emitted light does not, because it is actually a bunch of tiny colored dots changing their color very rapidly.

Look for videos shot at a low shutter angle and then compare them to videos shot at a standard shutter angle (e.g. any normal movie trailer). The difference should be obvious and immediate.
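For anyone following the shutter-angle arithmetic above: exposure time per frame is the shutter angle's fraction of 360 degrees, divided by the frame rate. A minimal sketch (the function names are my own, purely illustrative):

```python
def exposure_time(shutter_angle_deg: float, fps: float) -> float:
    """Seconds the shutter is open per frame: the shutter angle's
    fraction of a full 360-degree rotation, times the frame interval."""
    return (shutter_angle_deg / 360.0) / fps

def shutter_angle(exposure_s: float, fps: float) -> float:
    """Inverse: the shutter angle implied by a given exposure time
    at a given frame rate."""
    return exposure_s * fps * 360.0

# A 180-degree shutter at 24 fps is the classic film look: 1/48 s per frame.
assert abs(exposure_time(180, 24) - 1 / 48) < 1e-12

# The post's example: a 1/240 s exposure at 60 fps is a 90-degree shutter,
# i.e. the shutter is open only a quarter of each frame interval,
# which is why there is so little recorded blur.
assert abs(shutter_angle(1 / 240, 60) - 90.0) < 1e-9
```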
 
you're crazy dude.

Why? Because I judge a game by the merits of the developer, including a fully delivered package at the settings the developer recommends. It's crazy that when I play a game I should expect the best quality for my dollar, the best use of art and design to produce the best game the developer can make. That a developer appropriately balances things like story, gameplay and visuals so that no one element is completely dominant or non-existent. To expect a properly designed and thought-out game which doesn't rely on garbage shortcuts in design to overcome glaring faults in art direction, storytelling and game design?

I think the people who are crazy are the ones who say "oooohh, shiny!" then proceed to squabble over which mods they can use to make the game better. Screw that noise, the community shouldn't have to pick up the slack for the game developer. A game developer shouldn't get a free ride because they spent a lot of money making their game, doing the same thing everyone else has done too many times before. The industry is stagnating in design because they don't get to think outside the box. It's like that Toyota-of-character-design thread: everyone looks the same because it gets focus-tested and no one "hates" it. Well, people should hate stuff in games; it makes other stuff better in comparison, and I for one hate "cinematic" experiences. Leave that shit for movies; create "gaming" experiences.
 
Why? Because I judge a game by the merits of the developer, including a fully delivered package at the settings the developer recommends. It's crazy that when I play a game I should expect the best quality for my dollar, the best use of art and design to produce the best game the developer can make. That a developer appropriately balances things like story, gameplay and visuals so that no one element is completely dominant or non-existent. To expect a properly designed and thought-out game which doesn't rely on garbage shortcuts in design to overcome glaring faults in art direction, storytelling and game design?

I think the people who are crazy are the ones who say "oooohh, shiny!" then proceed to squabble over which mods they can use to make the game better. Screw that, the community shouldn't have to pick up the slack for the game developer. A game developer shouldn't get a free ride because they spent a lot of money on their game.

Games should be experienced the exact way they were thought of? Sorry, but not every developer wants to make Uncharted or Portal 2. You cannot have all that control, and that shouldn't be the only way to play, especially when it is so exciting to come up with something the developers didn't anticipate.

But that doesn't have anything to do with this topic; we are talking about graphics, right? Are you actually implying that Crytek should focus on making a single version of their game? It would be interesting, I'll give you that, but I think I'd rather be able to play that game.

Btw, good luck watching many movies made before 1950 the way they were intended.
 
Games should be experienced the exact way they were thought of? Sorry, but not every developer wants to make Uncharted or Portal 2. You cannot have all that control, and that shouldn't be the only way to play, especially when it is so exciting to come up with something the developers didn't anticipate.

But that doesn't have anything to do with this topic; we are talking about graphics, right? Are you actually implying that Crytek should focus on making a single version of their game? It would be interesting, I'll give you that, but I think I'd rather be able to play that game.

Btw, good luck watching many movies made before 1950 the way they were intended.

we're talking about Cevat blowing his own horn about how he's pushing consoles further than anyone else, when in reality tech is as subjective as art direction. An engine designed to handle a specific lighting model, a different rendering pipeline and a different variety of effects is no less capable an engine than his. The engine will produce a different effect altogether: no better, no worse, just different, and the two aren't comparable. It's like the "Unreal effect" in games; people can just tell when a game has been made in Unreal, regardless of whether it looks like Gears of War or not.

He says his engine is better than the ones powering Halo and Gears, but that's as subjective as saying the art in his game is better. I say if you have a prebaked lighting model that looks better than realtime global illumination, then you fucking lose, Cevat, because not only is the prebaked model cheaper, it's producing a better result. It doesn't matter if the global illumination is "harder" to do, or more advanced.
 
we're talking about Cevat blowing his own horn about how he's pushing consoles further than anyone else, when in reality tech is as subjective as art direction. An engine designed to handle a specific lighting model, a different rendering pipeline and a different variety of effects is no less capable an engine than his. The engine will produce a different effect altogether: no better, no worse, just different, and the two aren't comparable. It's like the "Unreal effect" in games; people can just tell when a game has been made in Unreal, regardless of whether it looks like Gears of War or not.

He says his engine is better than the ones powering Halo and Gears, but that's as subjective as saying the art in his game is better. I say if you have a prebaked lighting model that looks better than realtime global illumination, then you fucking lose, Cevat, because not only is the prebaked model cheaper, it's producing a better result.

hahahaha. oh wow.
 
we're talking about Cevat blowing his own horn about how he's pushing consoles further than anyone else, when in reality tech is as subjective as art direction. An engine designed to handle a specific lighting model, a different rendering pipeline and a different variety of effects is no less capable an engine than his. The engine will produce a different effect altogether: no better, no worse, just different, and the two aren't comparable. It's like the "Unreal effect" in games; people can just tell when a game has been made in Unreal, regardless of whether it looks like Gears of War or not.

He says his engine is better than the ones powering Halo and Gears, but that's as subjective as saying the art in his game is better. I say if you have a prebaked lighting model that looks better than realtime global illumination, then you fucking lose, Cevat, because not only is the prebaked model cheaper, it's producing a better result. It doesn't matter if the global illumination is "harder" to do, or more advanced.

You make it sound like crytek is some 3rd grade engine company... :(
 
we're talking about Cevat blowing his own horn about how he's pushing consoles further than anyone else, when in reality tech is as subjective as art direction.
That completely invalidated anything else you may or may not have said in your entire posting career.

It's just so...

WRONG...

...I don't even know what to say to that. How do you even talk to someone whose frame of reality is completely out of whack? I certainly don't know the answer to that!
 
That completely invalidated anything else you may or may not have said in your entire posting career.

It's just so...

WRONG...

...I don't even know what to say to that. How do you even talk to someone whose frame of reality is completely out of whack? I certainly don't know the answer to that!

So there's a right way and a wrong way to draw pixels on a screen? Tell every developer to stop right now. Don't even bother working on rendering research; we've got the answer, folks. Everyone is doing it wrong. Zyrusticae is going to tell us how to make the best realtime renderer in the world on NeoGAF! I sure as hell don't know what it is.

If I was WRONG, then why are other developers bothering to make their own engines? And why do developers say that some engines are better than others for different tasks?
 
we're talking about Cevat blowing his own horn about how he's pushing consoles further than anyone else, when in reality tech is as subjective as art direction. An engine designed to handle a specific lighting model, a different rendering pipeline and a different variety of effects is no less capable an engine than his. The engine will produce a different effect altogether: no better, no worse, just different, and the two aren't comparable. It's like the "Unreal effect" in games; people can just tell when a game has been made in Unreal, regardless of whether it looks like Gears of War or not.

He says his engine is better than the ones powering Halo and Gears, but that's as subjective as saying the art in his game is better. I say if you have a prebaked lighting model that looks better than realtime global illumination, then you fucking lose, Cevat, because not only is the prebaked model cheaper, it's producing a better result. It doesn't matter if the global illumination is "harder" to do, or more advanced.

When it comes to engine tech, Crytek are forward-looking developers. They make effects for now and tomorrow.

You can bet Naughty Dog or 343i will be going for a more realtime lighting engine next gen. It's great to knock Crytek, but they are only helping the advancements in next generation technology.

Look at Epic post-CryEngine 3.

They're now fully focusing on a realtime lighting engine for mass consumption with Unreal Engine 4. That's the future of graphics tech.
 
Didn't they say this same shit about Crysis 2? And we all know how that piece-of-crap game turned out on consoles.
 
If I was WRONG, then why are other developers bothering to make their own engines?

Because there are always ways to improve and everyone wants to do things their own way, but that doesn't mean tech is something that's subjective. Some engines simply are better than others. It just means that most developers value other things over having bleeding edge tech.
 
When it comes to engine tech, Crytek are forward-looking developers. They make effects for now and tomorrow.

You can bet Naughty Dog or 343i will be going for a more realtime lighting engine next gen. It's great to knock Crytek, but they are only helping the advancements in next generation technology.

Look at Epic post-CryEngine 3.

They're now fully focusing on a realtime lighting engine for mass consumption with Unreal Engine 4. That's the future of graphics tech.

I'm not saying that a prebaked model is better. I'm saying that, technically, if a prebaked model is producing a better result, or the same result for cheaper, then Cevat shouldn't be claiming that their tech is pushing the console harder, because the reality is an inefficient use of the hardware. I agree that a realtime lighting model is the future, but I don't agree that, for current hardware, their model is best. And while they might be using 99%, I think it's bullshit to claim no one else can do better.

Because there are always ways to improve and everyone wants to do things their own way, but that doesn't mean tech is something that's subjective. Some engines simply are better than others. It just means that most developers value other things over having bleeding edge tech.

I agree, bleeding-edge tech doesn't make a better game, and there are ways to improve, and there are ways to do it differently, which is why I think tech has to be subjective. Again, if there is a right way to render an image, I've yet to see everyone agree on it.
 
No, this isn't how it works.

There's a reason video recordings have to be set to a certain shutter angle to get a particular amount of motion blur. That's simply because TVs and monitors cannot create motion blur the way actual, physical motion does. No matter how high the frame rate of your monitor gets, it will never, ever produce motion blur on its own. In fact, a 60 FPS video with a shutter angle equivalent to 1/240 of a second will look completely unnatural. There will be hardly any motion blur at all! It'll look a lot like what video games without per-object motion blur look like today.

I am no physicist, so I can't tell you the physics behind why physical movement produces motion blur and TVs do not, but it is a thing and it is real. My hypothesis is that TVs merely emit light through tiny colored dots, whereas the phenomenon that produces motion blur requires reflection of light across a surface; hence an object moving quickly reflects light into our eyes in a smooth stream, whereas emitted light does not, because it is actually a bunch of tiny colored dots changing their color very rapidly.

Look for videos shot at a low shutter angle and then compare them to videos shot at a standard shutter angle (e.g. any normal movie trailer). The difference should be obvious and immediate.

I don't think anyone's complaining about object blur (except when it's overdone, which it usually is). The issue is camera blur, which has basically no reason to exist in games, especially not in first-person games that are presumably trying to emulate eyes, not cameras.

EDIT: Also, I would dispute your claim that videos shot at a low shutter angle are "completely unnatural." The reality is that we expect motion blur because we're conditioned by TV and movies to associate it with cameras. Thus, when we watch a video that doesn't have it, we feel like something is missing.

When it comes to engine tech, Crytek are forward-looking developers. They make effects for now and tomorrow.

You can bet Naughty Dog or 343i will be going for a more realtime lighting engine next gen. It's great to knock Crytek, but they are only helping the advancements in next generation technology.

Look at Epic post-CryEngine 3.

They're now fully focusing on a realtime lighting engine for mass consumption with Unreal Engine 4. That's the future of graphics tech.

Fully-realtime lighting has been a thing in games since Doom 3, which came out in 2004. The currently popular technique for doing dynamic lighting, deferred rendering, has been around since roughly 2007, when STALKER and GTA4 popularized it. Generally, current game engines are years behind what's being done in offline rendering; the "next big thing" in image rendering is always discovered and popularized by the movie/CGI guys, then adopted by the gaming industry when computers get fast enough to do it in real time. Obviously there are some exceptions to this, but to say that Crytek is doing anything legitimately new in Crysis 3 is a bit of an exaggeration.
 
I don't think anyone's complaining about object blur (except when it's overdone, which it usually is). The issue is camera blur, which has basically no reason to exist in games, especially not in first-person games that are presumably trying to emulate eyes, not cameras.

EDIT: Also, I would dispute your claim that videos shot at a low shutter angle are "completely unnatural." The reality is that we expect motion blur because we're conditioned by TV and movies to associate it with cameras. Thus, when we watch a video that doesn't have it, we feel like something is missing.



Fully-realtime lighting has been a thing in games since Doom 3, which came out in 2004. The currently popular technique for doing dynamic lighting, deferred rendering, has been around since roughly 2007, when STALKER and GTA4 popularized it. Generally, current game engines are years behind what's being done in offline rendering; the "next big thing" in image rendering is always discovered and popularized by the movie/CGI guys, then adopted by the gaming industry when computers get fast enough to do it in real time. Obviously there are some exceptions to this, but to say that Crytek is doing anything legitimately new in Crysis 3 is a bit of an exaggeration.

Are they? It's not particularly a good idea, because it's an illusion that is far too easily broken; that's why you'll see devs doing things like color grading and filmic tone-mapping, etc.

A lot of the stuff Crytek did that was new actually went into CryEngine 3 already. You talk about the movie guys, but do you know what they're doing? Because I haven't seen any papers about Light Propagation Volumes outside of Crytek's work. You seem to be mistaking an effect for a technique. While it's true that films have had the global illumination features of color bleeding, caustics, and glossy and mirror reflections, among others, for some time, it's not the same thing as putting it into a game, and if you're evaluating features (or hell, even techniques) then the film guys aren't sitting there coming up with everything. Phong wrote the paper for his reflectance model in '75. Kajiya described path tracing in '86, and as far as I know he never worked in the movie industry. Now, at the same time, some Lucasfilm guys were working on REYES, which has been used a ton in film, and there have been a lot of advances from that, but there have also been a lot of advances out of stuffy intellectual researchers and, yes, games developers.
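The Phong reflectance model mentioned here is simple enough to sketch in a few lines. A minimal illustration of just the specular term (the vector representation and shininess value are my own assumptions, not anything from the post):

```python
def phong_specular(light_dir, view_dir, normal, shininess=32):
    """Specular term of the classic Phong reflectance model.

    All vectors are 3-tuples assumed to be unit length, pointing away
    from the surface (light_dir toward the light, view_dir toward the eye).
    """
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    # Reflect the light direction about the surface normal: R = 2(N.L)N - L
    nl = dot(normal, light_dir)
    reflect = tuple(2 * nl * n - l for n, l in zip(normal, light_dir))

    # Highlight intensity falls off with the angle between R and the view,
    # sharpened by the shininess exponent.
    return max(dot(reflect, view_dir), 0.0) ** shininess

# Light straight overhead, viewer looking straight down the reflection:
# a perfect highlight.
assert phong_specular((0.0, 0.0, 1.0), (0.0, 0.0, 1.0), (0.0, 0.0, 1.0)) == 1.0
```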
 
Are they? It's not particularly a good idea, because it's an illusion that is far too easily broken; that's why you'll see devs doing things like color grading and filmic tone-mapping, etc.

I would assume so. Under what circumstance is a first-person game not experienced through the eyes of the player?

A lot of the stuff Crytek did that was new actually went into CryEngine 3 already. You talk about the movie guys, but do you know what they're doing? Because I haven't seen any papers about Light Propagation Volumes outside of Crytek's work. You seem to be mistaking an effect for a technique. While it's true that films have had the global illumination features of color bleeding, caustics, and glossy and mirror reflections, among others, for some time, it's not the same thing as putting it into a game, and if you're evaluating features (or hell, even techniques) then the film guys aren't sitting there coming up with everything. Phong wrote the paper for his reflectance model in '75. Kajiya described path tracing in '86, and as far as I know he never worked in the movie industry. Now, at the same time, some Lucasfilm guys were working on REYES, which has been used a ton in film, and there have been a lot of advances from that, but there have also been a lot of advances out of stuffy intellectual researchers and, yes, games developers.

I didn't mean to imply that Crytek doesn't do any research, or that making games like Crysis 3 run in real time isn't challenging or time-consuming. I was more referring to the fact that game graphics development basically follows the same trends as offline rendering, only lagging behind a few years. Sure, we can talk all we want about how impressive it is that CryEngine 3 and Unreal Engine 4 are running global illumination in real time. And I would never want to imply that real-time global illumination isn't impressive as hell. It takes a lot of work and a lot of mastery of math and computer science to make a modern game engine. But everyone already knew that the next big thing next gen was going to be global illumination. Everyone already knew that global dynamic lighting on tile-based deferred renderers was going to dominate next gen. Everyone already knew that the next big thing wasn't going to be ray tracing. These things are very predictable because they've all been done before by the offline guys. The post I was quoting was talking about CryEngine 3 like it came out of the blue and everyone else was being influenced by it, changing their engine specs to mimic it. The reality is that all these next-gen engines (CryEngine 3, Unreal Engine 4, Frostbite 2, Fox Engine, you name it) are so similar because the "next step" is, frankly, really obvious. It's the optimization that's the hard part.
 
I wish I could eliminate this from the internet. None of that is even true, for reasons I could link to, but I'm just tired of linking them.

Basically, that stupid report that came out about tessellation being unoptimized in Crysis 2 did it really stupidly: it looked around in wireframe mode.

The CryEngine turns off all tessellation culling in wireframe mode, hence why you can see the water under the level and the blocks are like a million polygons. It does not work that way in-game...

Sigh...


Yes. I put in my two cents 15 months ago, but it did not help much.

http://maldotex.blogspot.com.es/2011/09/tesselation-myth-in-crysis-2-el-mito-de.html


Wave your hand in front of you. Does it look blurry? Yeah. "Camera" motion blur is actually pretty unrealistic, though.


It looks blurry because of visual retention. Your senses are not binary; they have retention.

After receiving a slap, your nerve endings take a moment to stop being excited, and you can feel the contact after the slap.

After hearing a thud, the membrane of your ear is still undulating, and you can hear the sound as it decreases over time.

If you look at a light and then close your eyes, you can still see a fading light because of the retention of the rods and cones in your eye.

This image retention keeps an image of your hand where it no longer is and makes it fade slowly. So a hand moving in front of your eye is not one clear picture, but the overlap of several, causing motion blur. Your brain is used to cleaning up the image by analyzing the contours, but if someone writes some letters on your hand and moves it in front of your eyes (without you moving your eyes), you cannot read the letters.

The only way to avoid this motion blur is to move your eyes to follow the hand; then the motion blur will appear in the rest of the image.
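The retention effect described above can be imitated numerically: blend each incoming "frame" with a decaying copy of what the receptors were already reporting, and a moving bright spot leaves a fading trail behind it. A toy sketch in plain Python (the decay constant is an arbitrary illustration, not a measured property of the eye):

```python
def retained_signal(frames, decay=0.5):
    """Simulate visual retention over a 1-D strip of 'pixels'.

    Each new frame is blended with an exponentially decaying copy of
    what the receptors were already reporting, so a moving bright spot
    leaves a fading trail behind it -- i.e. motion blur.
    """
    width = len(frames[0])
    retained = [0.0] * width
    for frame in frames:
        retained = [decay * old + (1 - decay) * new
                    for old, new in zip(retained, frame)]
    return retained

# A bright spot (1.0) sweeping left to right across a 5-pixel strip:
frames = [[1, 0, 0, 0, 0],
          [0, 1, 0, 0, 0],
          [0, 0, 1, 0, 0]]
trail = retained_signal(frames)

# The current position is brightest, but the earlier positions still glow:
assert trail[2] > trail[1] > trail[0] > 0
```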
 
Toon Link >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> this shit

You've added so much to the debate with your post..

There will always be a balance between both looks. For instance, this weekend I was playing both Battlefield 3 on PC, which leans towards the 'realistic', and the demo for Ni no Kuni

[image: Ni no Kuni screenshot]


Which goes the other way, but still retains its own art style and purpose. There is always room for various approaches, imo.
 
we're talking about Cevat blowing his own horn about how he's pushing consoles further than anyone else, when in reality tech is as subjective as art direction. An engine designed to handle a specific lighting model, a different rendering pipeline and a different variety of effects is no less capable an engine than his. The engine will produce a different effect altogether: no better, no worse, just different, and the two aren't comparable. It's like the "Unreal effect" in games; people can just tell when a game has been made in Unreal, regardless of whether it looks like Gears of War or not.

He says his engine is better than the ones powering Halo and Gears, but that's as subjective as saying the art in his game is better. I say if you have a prebaked lighting model that looks better than realtime global illumination, then you fucking lose, Cevat, because not only is the prebaked model cheaper, it's producing a better result. It doesn't matter if the global illumination is "harder" to do, or more advanced.

How you optimize your engine to do certain things is different from a rather more objective measure: how much of the console's processing power is being utilized.

I assume that when they talk about the latter, they mean that the consoles are being utilized in such a way as to not leave any significant gaps in usage: all cores at full blast, memory fully loaded, information being shuttled around and processed as much as possible.

Which is... quite an impressive technical achievement.
 
How you optimize your engine to do certain things is different from a rather more objective measure: how much of the console's processing power is being utilized.

I assume that when they talk about the latter, they mean that the consoles are being utilized in such a way as to not leave any significant gaps in usage: all cores at full blast, memory fully loaded, information being shuttled around and processed as much as possible.

Which is... quite an impressive technical achievement.

That's how most current top games are utilized. I think Cevat was talking about algorithms: they have the cheapest SSAO, the best HDR, they have dynamic aniso, bokeh and tons of other mindblowing things they render in 33ms on consoles [or 40ms :P], and they probably can't optimize them further.
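For context, the frame times quoted here map directly onto frame rates: 33 ms per frame is the budget for 30 FPS, and 40 ms works out to 25 FPS. A quick sanity check (the helper names are illustrative, not from any engine API):

```python
def frame_budget_ms(fps: float) -> float:
    """Milliseconds available per frame at a target frame rate."""
    return 1000.0 / fps

def fps_from_frame_time(ms: float) -> float:
    """Frame rate implied by a per-frame render time in milliseconds."""
    return 1000.0 / ms

# The "33 ms" figure is the 30 FPS budget...
assert round(frame_budget_ms(30), 1) == 33.3

# ...and a 40 ms frame drops you to 25 FPS.
assert fps_from_frame_time(40) == 25.0
```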
 
That's how most current top games are utilized. I think Cevat was talking about algorithms: they have the cheapest SSAO, the best HDR, they have dynamic aniso, bokeh and tons of other mindblowing things they render in 33ms on consoles [or 40ms :P], and they probably can't optimize them further.

Fair enough. So the objective measure would boil down to a combination of platform utilization and the useful* information density of what is being processed.

*as opposed to just information - i.e. the difference between good image compression and no image compression.
 
That's how most current top games are utilized. I think Cevat was talking about algorithms: they have the cheapest SSAO, the best HDR, they have dynamic aniso, bokeh and tons of other mindblowing things they render in 33ms on consoles [or 40ms :P], and they probably can't optimize them further.

if he could prove it then maybe I'd listen... of course, if he could prove it, he might want to start looking at P vs NP.
 
I only hope it looks and plays as well as Battlefield 3 Crysis 2, then, god forbid I might actually play it for more than 30 hours =\
 
Toon Link >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> this shit
What's the point of a comment like this? Seriously.

The approach to visual style is so different. Neither one invalidates the other and the whole thing is subjective anyways.

Besides, we're way past "toon Link" on the cel shading front. As much as I don't care for Naruto, the way the Ultimate Ninja games look and animate is pretty mind blowing (looks like an actual animated show).
 
Toon Link >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> this shit



What's the point of a comment like this? Seriously.

The approach to visual style is so different. Neither one invalidates the other and the whole thing is subjective anyways.

Besides, we're way past "toon Link" on the cel shading front. As much as I don't care for Naruto, the way the Ultimate Ninja games look and animate is pretty mind blowing (looks like an actual animated show).

I'm also not that into Naruto, but I have to agree with you. The cel-shaded graphics of these games are so good! You really have to see it in motion.
It is like playing a cartoon. Real fans will go crazy for sure.
 
When it comes to engine tech, Crytek are forward-looking developers. They make effects for now and tomorrow.

You can bet Naughty Dog or 343i will be going for a more realtime lighting engine next gen. It's great to knock Crytek, but they are only helping the advancements in next generation technology.

Look at Epic post-CryEngine 3.

They're now fully focusing on a realtime lighting engine for mass consumption with Unreal Engine 4. That's the future of graphics tech.

To be fair, the advancements in lighting were a natural evolution that the industry, including Epic, was heading towards with or without CE3.

No, it's also a console.

What I was told years ago was that while developers know the hardware well and what they are working with, it's nearly impossible to completely "max out" a console. To do so you would have to create perfect code, but we are imperfect by nature, so...
 
A game that runs like shit using all but 1% of a system's processing power still runs like shit.
Exactly. A developer can write code that does something inefficiently and maxes out the CPU. Another can write more efficient code, that does everything required and more, and leaves the CPU free to do other stuff.

It's like the old "lines of code" boast. It isn't a good measurement.
 
What I was told years ago was that while developers know the hardware well and what they are working with, it's nearly impossible to completely "max out" a console. To do so you would have to create perfect code, but we are imperfect by nature, so...
Whoever told you that is a moron.

It's very, very easy to completely max-out a console. All that happens when a console is maxed out is that the frame rate drops, and when the frame rate drops you know you've already maxed out the console's capabilities.

At that point it's about making whatever compromises are necessary such that you get decent visuals while still maintaining a decently smooth frame rate. All recent games have done this, most notably in the realm of frame rate (hence why pretty much everything is getting released at 30 FPS nowadays).

If someone wants to suggest that you can never attain 100% efficiency: that's completely pointless. Who really cares? Why should anyone ever shoot for that? It's an unattainable goal, and one that hardly makes any difference next to simply improving the hardware.
 
Whoever told you that is a moron.

It's very, very easy to completely max-out a console. All that happens when a console is maxed out is that the frame rate drops, and when the frame rate drops you know you've already maxed out the console's capabilities.

At that point it's about making whatever compromises are necessary such that you get decent visuals while still maintaining a decently smooth frame rate. All recent games have done this, most notably in the realm of frame rate (hence why pretty much everything is getting released at 30 FPS nowadays).

If someone wants to suggest that you can never attain 100% efficiency: that's completely pointless. Who really cares? Why should anyone ever shoot for that? It's an unattainable goal, and one that hardly makes any difference next to simply improving the hardware.

He's a developer so I wouldn't say he's a moron. ;p

By your definition of "max out", any poorly coded game has maxed out a system, since the frame rate drops.

I actually thought his point was you can't reach 100% efficiency. Not sure how it's completely pointless since everyone strives for their code to be more and more efficient.
 
Toon Link >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> this shit
Style > technical, but even in this case I think I can find technical feats I'm more impressed by. Both The Last of Us and (hopefully) Beyond look to hit great levels of detail without sacrificing, at the least, 30 FPS for it.
And this is part of why as much detail as you can manage at 30 FPS > pushing it and hovering around 24 at best. No one will give a shit if your multiplat pushes consoles only to fail to be smoothly playable; they'll just ooh and aah at the PC version that has the hardware to truly make the game shine.
 
Of course. You can also have technical improvements & style.

The Naruto cel shaded games blow away Toon Link.
 
Yeah, but that's Naruto. Seriously, I prefer the visual style of Wind Waker over that, and without going to the lows of the DS games I think there's still a case to be made.

But you are right (as the DS games inadvertently highlighted): you have to have at least ENOUGH technical power to make that style shine, though a good style does help make up for or mask a system's limitations. Mega Man Legends looks better than a lot of 3D PS1 games to me just because it recognizes it can't push too much detail and goes for something simple that holds up and lets the characters be more animated, similar to the cel shading being cited here.
 
Honestly I'm not that impressed with Wind Waker's cel shading.

I found Jet Set Radio Future's cel shading on the OG Xbox more impressive. And that ran at 60 FPS.
 
He's a developer so I wouldn't say he's a moron. ;p

By your definition of max out, any poorly coded game maxed out a system since the frame rate drops.
That's exactly what I'm saying.

All hardware has a limit. Consoles will reach this limit rather easily, especially nowadays. This is why we need better hardware.

This is also why I'm almost exclusively a PC gamer (I say almost because, obviously, there are some games exclusive to consoles - particularly fighting games).

I actually thought his point was you can't reach 100% efficiency. Not sure how it's completely pointless since everyone strives for their code to be more and more efficient.
I'm saying that diminishing returns kick in sooner rather than later, to the point where every single optimization you make is a compromise rather than an actual gain.

It doesn't even take particularly long to reach this point with current-gen consoles.
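The diminishing-returns point can be sketched with some made-up numbers: if each optimization pass claws back roughly the same fraction of the remaining frame time, the absolute gain per pass shrinks steadily. All figures here are hypothetical, purely to illustrate the shape of the curve:

```python
# Hypothetical diminishing-returns curve: each optimization pass saves
# 10% of whatever frame time is left, so each pass buys less than the
# one before it (illustrative numbers, not real profiling data).
frame_ms = 40.0  # assumed starting frame time
for pass_no in range(1, 6):
    saved = frame_ms * 0.10
    frame_ms -= saved
    print(f"pass {pass_no}: saved {saved:.2f} ms, frame time now {frame_ms:.2f} ms")
```

The first pass saves 4 ms, the fifth barely 2.6 ms, and each pass costs roughly the same developer effort — which is the trade-off being argued about here.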
 
I think you're mistaken on some things. Hopefully I get time from work to explain what I mean soon.
 

No, it's also a console.

Keep telling yourself you've maxed something out if it makes you feel better. Somebody will come along and get a little bit more out of it.

Like I said, it isn't the hardware, it is the developer, or more importantly the developers' willingness to invest significant time for minor advances.

Nobody pushes it further because it isn't worth pushing further, not because it can't be pushed further.
 