
"Lazy devs" - is this really an argument?

Biker19

Banned
Almost every time a PS3 port performed worse than the 360 version, people cried that it was due to lazy devs. We heard it for over half a decade.

Now you have a problem with it when the shoe is on the other foot.

Exactly. I just love the hypocrisy.

At least with the PS3, third-party developers can actually get the ports up to par with the Xbox 360 versions, since the two systems are basically a wash overall: the 360 has a better GPU, while the PS3 has a stronger CPU.

That won't happen with the Xbox One, as the system has a GPU that's 50% weaker than the one inside the PS4, along with 8 GB of slower RAM. And if that weren't bad enough, there's the eSRAM (only 32 MB of it), making things even more difficult for third-party developers.

There will always be differences between Xbox One & PS4 versions of multiplat games.
 

Eusis

Member
Exactly. I just love the hypocrisy.

At least with the PS3, third-party developers can actually get the ports up to par with the Xbox 360 versions.

That won't happen with the Xbox One, as it has a GPU that's 50% weaker, along with 8 GB of slower RAM and the eSRAM (only 32 MB of it), making things even more difficult for third-party developers.
Yeah, you basically HAVE to hold back the PS4 version in order to get parity, and even then you may be inclined to go "well, fuck it, let's fit some supersampling in," as Lego The Hobbit readily displayed. It comes off even more as being a petulant child. At least with PS3/360 it really was about figuring out how to use what you have, and even then I didn't really get upset if they didn't or couldn't do it (though I had access to a 360 before a PS3, and got my own of each at roughly the same time). So seeing something like a poster going "not my problem, get it done" when a developer outlines how the XB1 simply can't match up to the PS4 seems absurd and even childish.
 
This argument is stupid and just shows how silly some people can be.
As a handheld-only gamer, my favorite line is "They had lazy devs make a shitty port to the DS/PSP/whatever."
Like, no. People had to sit down and make that "shitty" port.
That version of Spider-Man: Web of Shadows you look down upon because it's not the 3D one? Fucking FANTASTIC. The Harry Potter games for the GB? Radical games.

I'm not saying that there aren't lazy devs out there. I'm just saying that it's a stupid thing to say when it comes to the industry as a whole.
 

Surface of Me

I'm not an NPC. And neither are we.
IW is lazy devs; the PS4 and X1 versions look almost identical to their last-gen counterparts.


A lot of the devs having trouble with X1 probably aren't lazy, though.
 

Lathentar

Looking for Pants
Regardless of the platform you run on, when developing real-time applications, it is a well-known basic rule that you do not tie your program logic to update timing: you scale every numeric value that gets updated over time by the time delta between updates. This is a fundamental principle, as crazy things can happen when, for whatever reason, your update runs faster or slower than the expected, supposedly locked rate. Not doing this is very bad practice, and even amateur game developers on the Game Maker forums know it. To imagine that a released, professional video game could be programmed without this principle is baffling.
You'd be super baffled then in the console space.
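As an aside, here's a minimal sketch of what the quoted rule looks like in practice (illustrative C++, not from any particular engine): every per-second quantity is multiplied by the measured delta, so the player covers the same distance per real second whether the loop runs at 30, 60, or 144 updates per second.

```cpp
#include <chrono>
#include <cstdio>

struct Player {
    float x = 0.0f;
    float speedPerSecond = 120.0f;   // units per second, not per frame
};

int main() {
    Player player;
    auto previous = std::chrono::steady_clock::now();

    for (int frame = 0; frame < 300; ++frame) {   // bounded stand-in for a game loop
        auto now = std::chrono::steady_clock::now();
        float dt = std::chrono::duration<float>(now - previous).count();
        previous = now;

        // Scale the per-second speed by the measured delta between updates.
        player.x += player.speedPerSecond * dt;
    }
    std::printf("final x = %.2f\n", player.x);
}
```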
 

Tain

Member
Regardless of the platform you run on, when developing real-time applications, it is a well-known basic rule that you do not tie your program logic to update timing: you scale every numeric value that gets updated over time by the time delta between updates. This is a fundamental principle, as crazy things can happen when, for whatever reason, your update runs faster or slower than the expected, supposedly locked rate. Not doing this is very bad practice, and even amateur game developers on the Game Maker forums know it. To imagine that a released, professional video game could be programmed without this principle is baffling.

I think I'll let it slide and not call them "bad developers" when the single targeted platforms they worked on were reliable enough for them to design game logic around vblank timing, and when the game slowing down could even have practical use for players. Well, that, and they made games like fucking Metal Slug, lol. I can't ever let an individual principle like "the engine must be framerate-independent" completely overshadow all of the good design in the entirety of Metal Slug and Jet Set Radio and Super Mario Bros. and Street Fighter II and DoDonPachi and Shinobi and Makaimura and Herzog Zwei and Rockman X and Gradius and Metal Gear Solid and just about every other Japanese classic.
 

MaxiLive

Member
I work with the industry myself, albeit from an outside perspective, and I don't know of a single developer who is lazy; 80%+ of them work crazy hours sometimes just to complete something they have a passion for. I know it hurts them when they get called lazy for skipping a feature, or for something not quite hitting the mark people expected. Making software seems damn challenging, and there are always going to be compromises during a project as various factors kick in.

As consumers we are spoilt for choice, so why shouldn't we expect game X to have the graphics of X, the features of X, the MP of X, and the physics of X on all platforms for a minimal price point (X = the leading title in that field)? So I think the term "lazy dev" gets used mostly to express consumer frustration at the lack of feature X.

I don't know anyone who actually wants to make a bad game, therefore the "lazy dev" point is moot.

You do get poor design choices or decision-making on a project, but more often than not they come about due to other limitations in budget, timescale, or resources.
 

Morokh

Member
It's far from accurate, but it's a convenient shorthand for what is a much bigger problem in the industry.

Devs themselves just shouldn't take it personally.
 
Considering that it's commonplace for developers to do 60-80 hour weeks during crunch time, "lazy" is not a good word. While it sucks that substandard products get released, there are a multitude of reasons why many games fail to impress. People have to place the blame somewhere, so we often hear cries of lazy devs and greedy publishers.
 

klaus

Member
As consumers we are spoilt for choice, so why shouldn't we expect game X to have the graphics of X, the features of X, the MP of X, and the physics of X on all platforms for a minimal price point (X = the leading title in that field)? So I think the term "lazy dev" gets used mostly to express consumer frustration at the lack of feature X.

A perfect way of describing the situation imo (and without name-calling), but I'm sure somebody will dispute the point by claiming that "today's games are incomplete and buggy".
 

nynt9

Member
I think I'll let it slide and not call them "bad developers" when the single targeted platforms they worked on were reliable enough for them to design game logic around vblank timing, and when the game slowing down could even have practical use for players. Well, that, and they made games like fucking Metal Slug, lol. I can't ever let an individual principle like "the engine must be framerate-independent" completely overshadow all of the good design in the entirety of Metal Slug and Jet Set Radio and Super Mario Bros. and Street Fighter II and DoDonPachi and Shinobi and Makaimura and Herzog Zwei and Rockman X and Gradius and Metal Gear Solid and just about every other Japanese classic.

Pre-double-buffering and post-NES systems actually had ways to mitigate this, so it doesn't really apply to the pre-double-buffering era. I was talking more about modern games, where double-buffered 3D engines are common.
 

Draft

Member
It's great because it gives devs and their lapdogs in the media something to rally around during their pity parties. Entitled gamers call us lazy? Do they even know how hard it is to make a game? Don't forget to pre-order the season pass.
 

klaus

Member
Pre-double-buffering and post-NES systems actually had ways to mitigate this, so it doesn't really apply to the pre-double-buffering era. I was talking more about modern games, where double-buffered 3D engines are common.

Just to clarify: do you mean that game logic should account for the time that has passed since the last update (but still be executed every frame, whatever the framerate is), or should game logic be executed on a separate thread with fixed timing (i.e. completely independent of drawing)?
 

nynt9

Member
Just to clarify: do you mean that game logic should account for the time that has passed since the last update (but still be executed every frame, whatever the framerate is), or should game logic be executed on a separate thread with fixed timing (i.e. completely independent of drawing)?

Ideally, a separate thread with time scaling; minimally, the same thread with time scaling. Otherwise your character moves different distances over the same amount of time at different (logic) framerates.
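To make the two options concrete, here's a minimal sketch (hypothetical names, C++ assumed) of the single-threaded variant using the common fixed-timestep accumulator pattern: logic ticks at a fixed 60 Hz while rendering runs as often as it can.

```cpp
#include <chrono>
#include <cstdio>

static float positionX = 0.0f;                                // toy game state

void updateGame(float dt) { positionX += 50.0f * dt; }        // stub: 50 units/sec
void renderFrame() { std::printf("x = %.2f\n", positionX); }  // stub renderer

int main() {
    const float logicDt = 1.0f / 60.0f;   // fixed 60 Hz simulation step
    float accumulator = 0.0f;
    auto previous = std::chrono::steady_clock::now();

    for (int frame = 0; frame < 300; ++frame) {   // bounded for the sketch
        auto now = std::chrono::steady_clock::now();
        accumulator += std::chrono::duration<float>(now - previous).count();
        previous = now;

        // Run as many fixed steps as real time demands: the character moves
        // the same distance per real second at any render framerate.
        while (accumulator >= logicDt) {
            updateGame(logicDt);
            accumulator -= logicDt;
        }
        renderFrame();
    }
}
```

The threaded variant moves the inner while-loop onto its own thread ticking at a fixed interval; the tradeoff is that rendering then needs synchronized access to the game state.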
 

klaus

Member
Ideally, a separate thread with time scaling; minimally, the same thread with time scaling. Otherwise your character moves different distances over the same amount of time at different (logic) framerates.

Yeah, I'm well aware of the problems that come with unstable framerates, but I'm still not sure what the ideal solution might be. Just making everything framerate-dependent (and accounting for delta times) might not actually be the best solution in every case, even if it makes raising (or lowering) the framerate rather simple. For one, there are issues with floating-point precision, especially if you go really high with the framerate (from memory, Quake 3 had that issue: there were "magic" frame/refresh rates that gave players an unfair advantage, so Carmack decided to cap logic updates at 60 fps for Doom 3 multiplayer), and when the framerate goes really low you risk collision bugs, bad approximations, and other nasty things.

For multiplayer, I guess it would be ideal to assume a (more or less) constant update of the network data (say, 30 times per second) and inter-/extrapolate it for each drawn frame, with special care taken for local prediction.

So those two examples suggest using a constant game-logic refresh rate, but that leaves the problem of handling the inter-/extrapolation (assuming rendering runs asynchronously at a varying framerate), which means more controller input lag (ideally you want to poll the controllers at the very last moment before rendering view-dependent stuff).

And then there is a third possibility, one that nowadays is probably viewed as outdated or obsolete: you lock your logic to the framerate without any delta-time adjustments (so you can have perfect integer/fixed-point arithmetic that works ideally, without any numerical artefacts) and do your damn best to keep the framerate above your chosen number (obviously also capping it so it runs "locked"). Yes, this means that if for whatever reason the game fails to update in 1/60 s, things will slow down. But on the other hand, you get that pixel-perfect, snappy feel where pixels move at a constant rate (ideally also at constant speed, if the framerate is 100% stable), making single-frame, pixel-precise jumps and moves possible that nowadays simply aren't possible anymore due to triple buffering, extrapolation, floating-point math, upscaling, and whatnot.

But I fully understand that it's nearly impossible to recreate that feeling on today's hardware with all its abstraction layers and multitasking, and I also understand that most people don't have the patience anymore to really nail moves with 100% accuracy. I might get flamed for this, but today's games are simply much, much easier than they were in the past, and they have to be easier for two reasons (imo): the broadened audience isn't willing to work that hard to beat a game anymore, and today's hardware simply isn't capable of truly precise timing/input/display anymore.

/rant of a grumpy gamer & part-time dev
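For what it's worth, the usual compromise for the inter-/extrapolation problem raised above is to render a blend of the two most recent fixed-rate simulation states. A toy sketch (hypothetical names, nothing engine-specific), bought at the cost of up to one logic tick of extra display latency:

```cpp
#include <cstdio>

struct State { float x; };

// Linear blend between the two most recent fixed-rate simulation states.
State interpolate(const State& prev, const State& curr, float alpha) {
    return State{ prev.x + (curr.x - prev.x) * alpha };
}

int main() {
    const float logicDt = 1.0f / 30.0f;       // fixed 30 Hz logic, as in the post
    State prev{ 0.0f };
    State curr{ prev.x + 50.0f * logicDt };   // pretend one tick at 50 units/sec

    // A frame rendered 40% of the way into the current tick draws the
    // blended position, hiding the 30 Hz stepping from the player.
    float alpha = 0.4f;                       // in a real loop: accumulator / logicDt
    State drawn = interpolate(prev, curr, alpha);
    std::printf("drawn at x = %.3f\n", drawn.x);
}
```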
 

-GOUKI-

Member
It's a terrible argument. Anyone who has worked as a software developer knows that game programmers get the least compensation and the most stressful workload in the industry.
 

Figments

Member
I'm disturbed by the lack of Animal Farm references in this thread.

And yes, it's a dumb argument given certain pretexts. It's illogical to think it's a holistically bad argument.
 

KJRS_1993

Member
"Lazy devs" is an argument used by muppets who have not just blatantly never worked in the video game industry, but likely haven't worked in any professional environment where costs and resources are a consideration.

It's irritating, immature, and used by armchair developers at their absolute worst.
 

nynt9

Member
Yeah, I'm well aware of the problems that come with unstable framerates, but I'm still not sure what the ideal solution might be. Just making everything framerate-dependent (and accounting for delta times) might not actually be the best solution in every case, even if it makes raising (or lowering) the framerate rather simple. For one, there are issues with floating-point precision, especially if you go really high with the framerate (from memory, Quake 3 had that issue: there were "magic" frame/refresh rates that gave players an unfair advantage, so Carmack decided to cap logic updates at 60 fps for Doom 3 multiplayer), and when the framerate goes really low you risk collision bugs, bad approximations, and other nasty things.

For multiplayer, I guess it would be ideal to assume a (more or less) constant update of the network data (say, 30 times per second) and inter-/extrapolate it for each drawn frame, with special care taken for local prediction.

So those two examples suggest using a constant game-logic refresh rate, but that leaves the problem of handling the inter-/extrapolation (assuming rendering runs asynchronously at a varying framerate), which means more controller input lag (ideally you want to poll the controllers at the very last moment before rendering view-dependent stuff).

And then there is a third possibility, one that nowadays is probably viewed as outdated or obsolete: you lock your logic to the framerate without any delta-time adjustments (so you can have perfect integer/fixed-point arithmetic that works ideally, without any numerical artefacts) and do your damn best to keep the framerate above your chosen number (obviously also capping it so it runs "locked"). Yes, this means that if for whatever reason the game fails to update in 1/60 s, things will slow down. But on the other hand, you get that pixel-perfect, snappy feel where pixels move at a constant rate (ideally also at constant speed, if the framerate is 100% stable), making single-frame, pixel-precise jumps and moves possible that nowadays simply aren't possible anymore due to triple buffering, extrapolation, floating-point math, upscaling, and whatnot.

But I fully understand that it's nearly impossible to recreate that feeling on today's hardware with all its abstraction layers and multitasking, and I also understand that most people don't have the patience anymore to really nail moves with 100% accuracy. I might get flamed for this, but today's games are simply much, much easier than they were in the past, and they have to be easier for two reasons (imo): the broadened audience isn't willing to work that hard to beat a game anymore, and today's hardware simply isn't capable of truly precise timing/input/display anymore.

/rant of a grumpy gamer & part-time dev

I feel you, and I'm glad we're having this conversation. BTW, the collision issues at low framerates are addressable by doing your collisions via ray casting between positions instead of checking whether there's a collision at the endpoint; with ray casting you'll always end up at the correct spot.
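A toy 1D illustration of that swept approach (hypothetical names; a real engine would cast against actual level geometry): instead of testing for overlap only at the endpoint, which can tunnel straight through a thin wall during one long low-framerate step, test the whole movement segment and clamp to the first hit.

```cpp
#include <cstdio>

// A wall modeled as a plane at wallX, for a point moving along x.
// Returns the position the mover actually reaches this step.
float sweptMove(float x0, float x1, float wallX) {
    bool crosses = (x0 < wallX && x1 >= wallX) || (x0 > wallX && x1 <= wallX);
    if (crosses) return wallX;   // stop at the first intersection
    return x1;                   // no hit: the full move is safe
}

int main() {
    // One huge step (think a 5 fps frame): an endpoint-only test at x = 12
    // would miss the wall at x = 10 entirely; the swept test does not.
    float reached = sweptMove(2.0f, 12.0f, 10.0f);
    std::printf("reached x = %.1f\n", reached);   // prints 10.0
}
```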
 

nakedeyes

Banned
This is pretty much how I thought it all worked, so thanks for confirming my inner speculations about how porting to Xbox One would work.
This is a really nice insightful post and I'd love to see more like it.
I appreciate the feedback, dudes. Hopefully I'll make it out of Junior Member limbo soon! :)

Well put, thanks for the enlightening read! Do you have any insight into whether tiled rendering might become an option on XB1, with DX12 (supposedly) reducing draw-call/CPU overhead? As far as I remember, the biggest problem with doing a tiled renderer was having to send the geometry twice, or am I missing something (perhaps vertex shader overhead, or perhaps the final image tiles having to be moved to DDR3, choking bandwidth)?

Just asking because MS promised the same for the 360 (tiled rendering overcoming the limitations of the eDRAM), but to my knowledge that never really happened...

I wish I knew more about that... My opinion on DX12 is as follows: I don't think it's going to be the silver bullet some think it will be. Typically, these big DX updates add features, and they typically offer minimal (but tangible) performance increases.

If DX12 has some magic new render tech that happens to help out the XBONE, that's awesome! (I own one, so I hope that's the case.) But my intuition says the gains won't be so over the top. And I also know that if there are any super awesome new techniques in DX12 that can halve render overhead or something, you can bet your bum those techniques will make it over to other graphics SDKs and consoles.

Full disclosure: I own both consoles, but I kinda prefer the PS4 thus far. I am a console game programmer (for over six years now), but I only moonlight in graphics; gameplay is my main jam.
 

_machine

Member
Usually it's the producers/publishers that cut corners.
I feel that "cutting corners" isn't the appropriate term either. At least in a producer role, I don't try to cut corners; it's my job to make sure the game is done on time, on budget, and that it's as good as we can make it. That includes making a lot of compromises, especially on things aimed at a minority of the audience, but that's what making games is in the end: making thousands and thousands of compromises to try to come up with the best possible game, and it's extremely hard. Developers are in the business of making the best damn games they can, and they sacrifice a lot for that, but games are simply immensely tough to develop, and there are always millions of little things you could do to make a game better; you have to draw the line somewhere.
 

Mariolee

Member
I feel that "cutting corners" isn't the appropriate term either. At least in a producer role, I don't try to cut corners; it's my job to make sure the game is done on time, on budget, and that it's as good as we can make it. That includes making a lot of compromises, especially on things aimed at a minority of the audience, but that's what making games is in the end: making thousands and thousands of compromises to try to come up with the best possible game, and it's extremely hard. Developers are in the business of making the best damn games they can, and they sacrifice a lot for that, but games are simply immensely tough to develop, and there are always millions of little things you could do to make a game better; you have to draw the line somewhere.

Sorry to bump this thread, but this topic came to mind when the recent unofficial reveal of a certain Smash Bros. character led another poster and me into an argument about whether Sakurai was lazy for not changing that character's moveset. I believe that in a game with tons of different modes and features, it is simply straight-up ignorant to call a developer like Sakurai lazy. Like you described in the post above, he is prioritizing. He obviously felt his time was better spent working on a whole new character than on changing that one's moveset, if anything even needed changing at all.

I remember (I believe it was on GAF) that a third-party developer who worked for what was widely regarded as a "shovelware" company actually came to defend himself, stating that even if the games they make are shitty, they're not intentionally made that way. It's because of the lack of budget, time, and manpower that they come to be produced that way, and despite the resulting quality, the developers worked their asses off to get the games on shelves. No one is lazy.

OK, maybe the guy who made "The Letter" is lazy, but he's an obvious exception.
 

Krejlooc

Banned
"Lazy Devs" is the dunning-kruger effect in motion.

The Dunning–Kruger effect is a cognitive bias manifesting in unskilled individuals suffering from illusory superiority, mistakenly rating their ability much higher than is accurate. This bias is attributed to a metacognitive inability of the unskilled to recognize their ineptitude.[1]

David Dunning and Justin Kruger of Cornell University conclude, "the miscalibration of the incompetent stems from an error about the self, whereas the miscalibration of the highly competent stems from an error about others".

If you’re incompetent, you can’t know you’re incompetent. […] the skills you need to produce a right answer are exactly the skills you need to recognize what a right answer is.
—David Dunning
 