Indie Game Development Discussion Thread | Of Being Professionally Poor


bumpkin

Member
Welp. Fired my program lead today. He got into it with my artist, and I didn't appreciate it. The "god" attitude came out, and that did it for me. I can't stand that shit. None of us are amazing or have a leg to stand on as a small studio, so pretentious BS can hit the street. This now means a larger workload for me. I'll need to assess current/potential projects; if I'm going to spend the next year or two creating, it might as well be on something that has weight and, hopefully, lasting impact. I might just go straight for the heart. Will take a week to think about it.
If you don't mind me asking, what sort of game are you making? What engine are you using?
 

Blizzard

Banned
¯\(°_o)/¯
I guess I was just disappointed that they wrote that big explanation post about why it was crashing, and the answer seemed to be "no that's wrong" -- are you saying almost the entire rest of the post was also invalid and we're back to square one on the crash? =(
 
What kind of game do you want to make? Because what you want to build will make or break any engine choice.

Do you want a job in the industry? Make a game with Unity and/or Unreal, it'll get you a job. Most likely.

Do you want to make a game like To The Moon, Ib or Skyborne? RPG Maker has a lot of that infrastructure in place, so it'd be faster and more efficient for you to use RPG Maker. However, if you want to make a 3D car racing game, Unity or Unreal would make way more sense.

There are always a few people who will absolutely delve into a tool and push it to limits you wouldn't expect like Hotline Miami does with Game Maker. I know RPG Maker has Ruby or some other scripting language behind it and masters of that will do the equivalent of making an elephant do back flips.

Evaluating engines is tricky. It's like you found a rectangular peg to fit into a square hole. You'll have to put in some effort to work around something silly that seems to you like it should be obvious, but all in all you will save time down the line going with the right tool.
Thanks. I think I'll keep with Game Maker now and make two games or so with it and then proceed to move onto something else down the line.
 

cbox

Member
This just feels so damn good to watch.

The 2D, arcade shooter version of this.
[GIF: VioletKaleidoscopicGar.gif]

At the least, it's a good substitute for not being able to otherwise show your game running in 60 fps. It gives a good idea of what's actually going on, which can be helpful to novices.

Those GIFs look great! ☺︎

hehe thanks :D Fortunately, my next trailer will be 60fps. I really hope YouTube has that feature ready soon. I might just upload it directly to Steam, as they allow that frame rate (I think).
 

Jobbs

Banned
If you do have access to the source code for your post process vertex shader, it might be worth changing the erroneous "attribute vec4 aVertex;" declaration to a vec2. It's possible the nvidia driver compiles it happily but doesn't run it without issue.

okay, it's been changed to vec2. if anyone is around for whom the tests were crashing, give it a shot please. www.ghostsonggame.com/jscreens/newdantest.rar (feel free to reply here, but if it's already been addressed by a few replies then either pass or PM me, I don't want to clutter the whole thread with my test again)

thanks guys.
 
That's an interesting analysis of PSASBR, a game which I tried very much to love but could not.

Could you explain why you're using inverse health in your game, or would that rabbit trail into a hundred other systems?

I don't have inverse health, but parts of my scoring system feel inverse.


For example the MP has these "power-ups" that are dropped from dead players.

[Image: BnPEFoDIQAArvIn.png]



Collecting them should be good and letting enemies collect them should be bad. Right now that's done through a simple "collect and add x points to your score." The "twist" though is that these power-ups can also be shot and destroyed. If a player sees an enemy going for a power-up, I want the player to try and shoot the power-up before the enemy player gets there.


It already works that way when we play test because you want to stop the enemy from getting points, but I fear that most players will only focus on their own score and thus hardly ever shoot the power-ups.

There are some other aspects that also fall into the same trap.
 
but I fear that most players will only focus on their own score and thus hardly ever shoot the power-ups.

I would suggest rewarding the players for a successful denial. Something like half or a quarter of the points they would get if they collected it themselves.

The goal would be to reward players for a denial, but not give them a reward for shooting every one they see just to get points. You could even get trickier and award more points (up to full points for the collectible) based on how close the other player was to the collectible when you denied them.
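A rough sketch of that scaled reward, with made-up point values and distance thresholds (nothing here is from the actual game):

```python
def denial_reward(full_points, enemy_distance, max_denial_distance=200.0):
    """Points for shooting a power-up out from under an enemy.

    Scales linearly from a quarter of the pickup's value (enemy far away,
    but still contesting) up to the full value (enemy right on top of it).
    Returns 0 if no enemy was close enough for it to count as a denial.
    """
    if enemy_distance > max_denial_distance:
        return 0  # nobody was contesting it: no points for aimless shooting
    closeness = 1.0 - enemy_distance / max_denial_distance  # 0.0 .. 1.0
    return round(full_points * (0.25 + 0.75 * closeness))
```

So an uncontested pop is worth nothing, a long-range denial pays a quarter, and sniping the power-up off an enemy's nose pays the same as collecting it yourself.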

The tricky part would be deciding what is and isn't a denial. Maybe tracking if a player (other than the shooter) has been looking at that collectible for X amount of time? You could do a simple distance-to-nearest-player check, but that wouldn't prevent aimless shooting of collectibles.

A combination of the above two would probably work pretty well, but I can't say how good/bad performance would be.

My two cents on the matter. :)
 
I would suggest rewarding the players for a successful denial. Something like half or a quarter of the points they would get if they collected it themselves.

The goal would be to reward players for a denial, but not give them a reward for shooting every one they see just to get points. You could even get trickier and award more points (up to full points for the collectible) based on how close the other player was to the collectible when you denied them.

The tricky part would be deciding what is and isn't a denial. Maybe tracking if a player (other than the shooter) has been looking at that collectible for X amount of time? You could do a simple distance-to-nearest-player check, but that wouldn't prevent aimless shooting of collectibles.

A combination of the above two would probably work pretty well, but I can't say how good/bad performance would be.

My two cents on the matter. :)

Thanks for the input.

I like the "denial" idea, but as you said deciding what is and isn't a denial is the tricky part.


Could do it with three triggers maybe.

If an enemy is within X (really close) it counts as a denial.

If an enemy is within Y (kinda close) and was recently seen by the shooter, it also counts as a denial.

If an enemy is within Z (less close), has seen the power-up, has moved toward the power-up, and was recently seen by the shooter, it also counts as a denial.
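Roughly, those three triggers could be sketched like this (the thresholds and flag names are placeholders, not tuned values):

```python
# Placeholder distance thresholds: X < Y < Z, in world units.
X, Y, Z = 50.0, 150.0, 300.0

def is_denial(enemy_distance,
              enemy_recently_seen_by_shooter=False,
              enemy_saw_powerup=False,
              enemy_moved_toward_powerup=False):
    """Decide whether shooting a power-up counts as denying an enemy.

    The closer the enemy, the less evidence of intent we require.
    """
    if enemy_distance <= X:        # really close: always a denial
        return True
    if enemy_distance <= Y:        # kinda close: shooter must have seen them
        return enemy_recently_seen_by_shooter
    if enemy_distance <= Z:        # less close: require clear intent all round
        return (enemy_recently_seen_by_shooter
                and enemy_saw_powerup
                and enemy_moved_toward_powerup)
    return False
```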



I'll code it in and see how it works.
 

Pehesse

Member
ok thanks guys. :\

Wish I had better news to report!


As for Honey, some new stuff:

[GIFs: tumblr_inline_nbp61tPVea1rfzuuq.gif, tumblr_inline_nbp61y6Frf1rfzuuq.gif]


I'm trying to steer clear of the trap of endlessly reworking old stuff, but some of the really early animation for Honey had to go, so I'm on that. Should be done pretty soon, and then back to new stuff, hopefully!
 
hope this is the appropriate time/place for this. If you're interested (or know someone who'd be a good fit) we'd love to hear from you - thanks:

We're seeking an ambitious audio auteur for Narcosis, a narrative-driven, first-person survival story shipping on PC and consoles in 2015.

The Game:
Narcosis is a survival story set in the sunless depths of the Pacific Ocean. Stranded after an accident with little light and a handful of tools, an industrial diver takes desperate steps to surface before his oxygen — and sanity — give out. Inspirations include:

  • Gone Home, Amnesia, Dead Space, Silent Hill
  • Gravity, The Abyss, Solaris
  • ...and what you bring to the project.
More details at our LinkedIn posting.
 

Ashodin

Member
Thanks. It's a rough spot at the moment, but I'll just have to change my timeline a bit.


Ha! You are too kind, Feep. You can expect it but I might need to rearrange which project gets finished first. I can, however, hook you up with a Feep-only playroom where you can wall-stuff to your heart's content to tide you over! Let me know and I'll rig one up one of these days :p


Aye. I want all my dudes to take a round table approach to deciding what goes in or out of a game, including me - I might be the leader of the pack but I am certainly not immune to making stupid decisions. I believe the best food for a creative mind is another creative mind, so discussing and sharing ideas is an ethic I like to have. But when that attitude quickly turns into "I'm better than everyone, my way or the highway" that's when I choose highway. A big head like that completely cripples the rest of my team and undermines their ability to have a voice. It doesn't allow for important discussion topics like why or how, it restricts the flow of development and puts a gun to its head, instead.

Dude is a good programmer, for sure. But wanting to work as a lone wolf and holding development hostage because you think you are awesome (he actually said this a few times) is bad juju.

That's really dumb IMO. Egos need to be left at the door when you game dev.
 

backstep

Neo Member
ok thanks guys. :\

So I felt bad about sending you on a wild goose chase yesterday over that 'stride' length, and had another look this evening. I think I've fixed your bug, but before I go off on an explanation again (of an API I don't even use), could someone who previously had the crash on nvidia cards check that the test executables now work for them? Here's the download:

https://www.dropbox.com/s/3cyo0ia2z5ou3lg/danshadertests.zip?dl=0

Thanks!
 

JulianImp

Member
Thanks. It's a rough spot at the moment, but I'll just have to change my timeline a bit.


Ha! You are too kind, Feep. You can expect it but I might need to rearrange which project gets finished first. I can, however, hook you up with a Feep-only playroom where you can wall-stuff to your heart's content to tide you over! Let me know and I'll rig one up one of these days :p


Aye. I want all my dudes to take a round table approach to deciding what goes in or out of a game, including me - I might be the leader of the pack but I am certainly not immune to making stupid decisions. I believe the best food for a creative mind is another creative mind, so discussing and sharing ideas is an ethic I like to have. But when that attitude quickly turns into "I'm better than everyone, my way or the highway" that's when I choose highway. A big head like that completely cripples the rest of my team and undermines their ability to have a voice. It doesn't allow for important discussion topics like why or how, it restricts the flow of development and puts a gun to its head, instead.

Dude is a good programmer, for sure. But wanting to work as a lone wolf and holding development hostage because you think you are awesome (he actually said this a few times) is bad juju.

Just read up on this, and it seems like a real pain, but I'd say you've done the right thing. It's okay when someone actively participates in the game's development, but only as long as that participation doesn't turn into trying to steal the spotlight from other team members, or trying to downright monopolize the game's development pipeline, clogging it up on purpose whenever their demands aren't being fulfilled.

For Quark Storm, the two guys who offered to help are new to game development, so I guess they don't feel like they're good enough to make big suggestions to me or something. Still, their feedback was invaluable for settling on the mechanics and gameplay elements that we'd be using, and the brainstorming session we had ended up helping us come up with an easily identifiable aesthetic for the enemies and things under their influence (for example, in the level select screen).

Even though they got kind of busy with other stuff shortly after they joined the team, their presence actually helped speed up the game's development considerably by offering more points of view for discussion and making suggestions.

...Which reminds me that I'm slowly getting accustomed to Japan's timezone, so I should probably get back to actually working on the game! TGS starts in a week, so I guess I'll have enough time to make a fun little demo.

Also, having a week before TGS means I'll probably barely have enough time to make the actual levels, so would any IndieGAFfers be willing to help me playtest them a bit? If anyone's up for it, just let me know by PM so I can send you the runnables as I build them (iOS won't be supported yet, since so far I've only been able to install directly from Xcode onto a device, rather than compiling a package I can send and install manually on any number of devices, like an apk on Android).
 


HelloMeow

Member
So I felt bad about sending you on a wild goose chase yesterday over that 'stride' length, and had another look this evening. I think I've fixed your bug, but before I go off on an explanation again (of an API I don't even use), could someone who previously had the crash on nvidia cards check that the test executables now work for them? Here's the download:

https://www.dropbox.com/s/3cyo0ia2z5ou3lg/danshadertests.zip?dl=0

Thanks!

Yes, now it works.
 

Raonak

Banned
Been working on Super Pokemon Eevee Edition again. I had set a target date of "late July" for the next beta. I missed that very badly, due to underestimating my time. I've set a new target date of "before November"; hopefully I'll finish the new beta in time *fingers crossed*

Made a new map~ After working on the shiny system for so long (which is boring and repetitive).... It feels so damn good to return to mapping again.
Really gets my creative juices flowing. I especially love working with Gen2 pokemon graphics for some reason. Probably nostalgia.
 

Jobbs

Banned
So I felt bad about sending you on a wild goose chase yesterday over that 'stride' length, and had another look this evening. I think I've fixed your bug, but before I go off on an explanation again (of an API I don't even use), could someone who previously had the crash on nvidia cards check that the test executables now work for them? Here's the download:

https://www.dropbox.com/s/3cyo0ia2z5ou3lg/danshadertests.zip?dl=0

Thanks!

Works for me

Yes, now it works.

okay, holy shit, let me in on it. what's the deal here?
 
So I felt bad about sending you on a wild goose chase yesterday over that 'stride' length, and had another look this evening. I think I've fixed your bug, but before I go off on an explanation again (of an API I don't even use), could someone who previously had the crash on nvidia cards check that the test executables now work for them? Here's the download:

https://www.dropbox.com/s/3cyo0ia2z5ou3lg/danshadertests.zip?dl=0

Thanks!

Yep, it works here too. Nice work! Now I want the explanation. :)

Also, now everyone can ask you to fix their mysterious bugs right? :D
 
I'm thinking about making a demo to showcase some of the main features of my RPG. It'll most likely include the first "dungeon" of the game as well as a few explorable quests to find. Any ideas on how long it should be?
 

Ashodin

Member
I'm thinking about making a demo to showcase some of the main features of my RPG. It'll most likely include the first "dungeon" of the game as well as a few explorable quests to find. Any ideas on how long it should be?

Most people get an idea of whether they want your game within 15 to 30 minutes of play, in my experience.

There's also a lot of data on indie titles suggesting that releasing any sort of demo can turn people off entirely. So it's a mixed bag.
 

Jobbs

Banned
I'm thinking about making a demo to showcase some of the main features of my RPG. It'll most likely include the first "dungeon" of the game as well as a few explorable quests to find. Any ideas on how long it should be?

Unless your demo is DAMN GOOD, I'd generally be cautious about demos. It may not ultimately matter much, but if your goal is to sell people on your game, I think you can very easily do more harm than good.

Last year during the Legend of Iya Kickstarter campaign, we had a game that looked terrific in video, and then he released a demo that was clearly very unfinished (which is fine, because that's what happens when you make a game; it's very unfinished for a while). But its lack of sound effects, poor controls, some bugs, unfinished art, etc. damaged the mystique of the game.

His campaign funded, and who knows if it ultimately hurt or helped him, but in my eyes it felt like he'd have been better off not doing it.

(Around the corner, with some trepidation, I'll be releasing my own demo to backers who backed for it. I'm doing it because I said I'd do it, but if my game weren't a KS game I'd probably not do this)
 

backstep

Neo Member
Ok, I'll try to keep this concise, but it's worth posting rather than PM'ing, since when the next person hits this bug the fix will be available as a Google result. I know random forum posts in Google have helped me in the past. Also, this bug is exactly why I haven't looked at OpenGL in like 10 years; it's got so much cruft, even its deprecations are deprecated!

Reference my earlier long post (save quoting code again), where I thought the stride length in the two glVertexAttribPointer calls was causing the memory access violation. Well, it really is the first of those calls that causes the later glDrawArrays to crash the driver with a bad memory access. It isn't the stride, it's the last parameter, the offset. When you're not using VBOs, the last parameter is a memory pointer to the vertex data, when you are using VBOs it's an offset into the currently bound VBO. Even though a VBO is bound (VBO 7), the nvidia driver crashes because with that last param it's trying to load the vertex data from memory location 0x00000000 (which is NULL, nothing) rather than the start of the bound VBO. The driver is ignoring the bound VBO because of a single call made about 100 calls earlier:
Code:
glEnableClientState(GL_VERTEX_ARRAY)
That call is super deprecated, but still mentioned in the OpenGL manpages, and it's superseded by glEnableVertexAttribArray() in modern GL which your app calls anyhow. All it does is update vertex array 0 on the pipeline, but according to the spec if you enable array 0 using that old style call, it tells glDrawArrays that the array using index 0 in glVertexAttribPointer will be a literal pointer to memory. That's my understanding of it from a non-GL user. The earlier glDrawArrays calls in your app don't error because their glVertexAttribPointer calls do use literal pointers to their vertex data (no VBO is bound). Your post-processing draw call is the only one that uses a bound VBO so that's where it crashes. Disabling the legacy call allows the post-processing draw call to access the bound VBO correctly. The bug only happens on recent nvidia drivers because the spec doesn't explicitly say that binding a VBO disables the behaviour specified by the old style call, the crash is just the result of "unspecified behaviour". Older nvidia drivers and AMD ones make an assumption, if I had to guess.

That's the google-worthy info out of the way.

The reason I found out is that I keep a pristine windows 7 install for testing purposes and it has an older nvidia 331.40 driver. Your test apps worked with that driver, so I compared the GL API logs to when it crashed and they're identical, and I figured it really must be something funny in the memory access of that draw call that the new driver won't tolerate. While looking through them I noticed the earlier working calls used memory pointers instead of offsets and did a quick google and came across this page - https://code.launchpad.net/~3v1n0/unity/fix-nvidia-glDrawArrays-crash/+merge/117559 scroll to the green highlighted diff at the bottom and you'll see it's the same bug. It's a patch for a linux desktop manager from 2012, but after a bit of checking around with the terminology it seemed the likely culprit in your app too.

So how to disable it in your app? Well I looked for some kind of GL function injector DLL to disable that legacy feature with its opposite glDisableClientState call (something like the many FXAA injectors etc), but I couldn't find anything for injecting specific GL commands. Instead I figured the quickest test is to change the function argument to something invalid. I looked in that gDEBugger program to see roughly where in your app the legacy API call was made; it's from the nme.ndll, not the actual executable. Then I loaded the nme.ndll into IDA and located the function that calls glEnableClientState with an argument of GL_VERTEX_ARRAY. Uppercase means a constant so I checked the headers and it translates to 8074 in hex, see - https://www.khronos.org/registry/glsc/api/1.0/gl.h . That page is also useful for making sure not to change the argument to another valid define which could cause a different bug. With the function located in IDA by hex address, I could load the nme.ndll in a hex editor (I used HxD since it's free), and edited the argument of 8074 to something benign like 0074. Saved the dll in HxD and your app worked fine, with the problematic call now issued as a fairly harmless glEnableClientState(Unknown), since 0074 isn't a valid argument.

I didn't need to reproduce the whole IDA disassembly for all of the other test executables, I just searched each one's nme.ndll in HxD for the hex-value "74800000FF15" and changed the 74 80 to 74 00 (the 8074 is backwards because x86 is little-endian). You might be safe to do the same edit on any other versions of your game till the guys at Stencyl get around to addressing it. It depends whether they're actually using the legacy style of OpenGL at any point, and if they are then they need to add glDisableClientState(GL_VERTEX_ARRAY) guards around their modern VBO bindings (which might be time consuming). Otherwise they can probably quickly remove the glEnableClientState(GL_VERTEX_ARRAY) calls from their generated code for the win32 desktop target.
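For anyone who'd rather script the patch than click around in HxD, here's a rough sketch of the same byte edit (illustrative only; work on a copy of the dll, and the function name is mine, not a real tool):

```python
# GL_VERTEX_ARRAY is 0x8074; as a 32-bit little-endian immediate that's
# 74 80 00 00, followed here by the FF 15 bytes of the indirect call.
PATTERN = bytes.fromhex("74800000FF15")
PATCHED = bytes.fromhex("74000000FF15")  # 0x8074 -> 0x0074, not a valid enum

def patch_client_state(dll_image):
    """Return a copy of the dll image with every matching
    glEnableClientState(GL_VERTEX_ARRAY) argument neutered to 0x0074."""
    if PATTERN not in dll_image:
        raise ValueError("pattern not found - wrong binary or already patched?")
    return dll_image.replace(PATTERN, PATCHED)
```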

Did I mention, I don't like OpenGL! :)
 

Jobbs

Banned
Ok, I'll try to keep this concise, but it's worth posting rather than PM'ing, since when the next person hits this bug the fix will be available as a Google result. I know random forum posts in Google have helped me in the past. Also, this bug is exactly why I haven't looked at OpenGL in like 10 years; it's got so much cruft, even its deprecations are deprecated!

Reference my earlier long post (save quoting code again), where I thought the stride length in the two glVertexAttribPointer calls was causing the memory access violation. Well, it really is the first of those calls that causes the later glDrawArrays to crash the driver with a bad memory access. It isn't the stride, it's the last parameter, the offset. When you're not using VBOs, the last parameter is a memory pointer to the vertex data, when you are using VBOs it's an offset into the currently bound VBO. Even though a VBO is bound (VBO 7), the nvidia driver crashes because with that last param it's trying to load the vertex data from memory location 0x00000000 (which is NULL, nothing) rather than the start of the bound VBO. The driver is ignoring the bound VBO because of a single call made about 100 calls earlier:
Code:
glEnableClientState(GL_VERTEX_ARRAY)
That call is super deprecated, but still mentioned in the OpenGL manpages, and it's superseded by glEnableVertexAttribArray() in modern GL which your app calls anyhow. All it does is update vertex array 0 on the pipeline, but according to the spec if you enable array 0 using that old style call, it tells glDrawArrays that the array using index 0 in glVertexAttribPointer will be a literal pointer to memory. That's my understanding of it from a non-GL user. The earlier glDrawArrays calls in your app don't error because their glVertexAttribPointer calls do use literal pointers to their vertex data (no VBO is bound). Your post-processing draw call is the only one that uses a bound VBO so that's where it crashes. Disabling the legacy call allows the post-processing draw call to access the bound VBO correctly. The bug only happens on recent nvidia drivers because the spec doesn't explicitly say that binding a VBO disables the behaviour specified by the old style call, the crash is just the result of "unspecified behaviour". Older nvidia drivers and AMD ones make an assumption, if I had to guess.

That's the google-worthy info out of the way.

The reason I found out is that I keep a pristine windows 7 install for testing purposes and it has an older nvidia 331.40 driver. Your test apps worked with that driver, so I compared the GL API logs to when it crashed and they're identical, and I figured it really must be something funny in the memory access of that draw call that the new driver won't tolerate. While looking through them I noticed the earlier working calls used memory pointers instead of offsets and did a quick google and came across this page - https://code.launchpad.net/~3v1n0/unity/fix-nvidia-glDrawArrays-crash/+merge/117559 scroll to the green highlighted diff at the bottom and you'll see it's the same bug. It's a patch for a linux desktop manager from 2012, but after a bit of checking around with the terminology it seemed the likely culprit in your app too.

So how to disable it in your app? Well I looked for some kind of GL function injector DLL to disable that legacy feature with its opposite glDisableClientState call (something like the many FXAA injectors etc), but I couldn't find anything for injecting specific GL commands. Instead I figured the quickest test is to change the function argument to something invalid. I looked in that gDEBugger program to see roughly where in your app the legacy API call was made; it's from the nme.ndll, not the actual executable. Then I loaded the nme.ndll into IDA and located the function that calls glEnableClientState with an argument of GL_VERTEX_ARRAY. Uppercase means a constant so I checked the headers and it translates to 8074 in hex, see - https://www.khronos.org/registry/glsc/api/1.0/gl.h . That page is also useful for making sure not to change the argument to another valid define which could cause a different bug. With the function located in IDA by hex address, I could load the nme.ndll in a hex editor (I used HxD since it's free), and edited the argument of 8074 to something benign like 0074. Saved the dll in HxD and your app worked fine, with the problematic call now issued as a fairly harmless glEnableClientState(Unknown), since 0074 isn't a valid argument.

I didn't need to reproduce the whole IDA disassembly for all of the other test executables, I just searched each one in HxD for the hex-value "74800000FF15" and changed the 74 80 to 74 00 (the 8074 is backwards because x86 is little-endian). You might be safe to do the same edit on any other versions of your game till the guys at Stencyl get around to addressing it. It depends whether they're actually using the legacy style of OpenGL at any point, and if they are then they need to add glDisableClientState(GL_VERTEX_ARRAY) guards around their modern VBO bindings (which might be time consuming). Otherwise they can probably quickly remove the glEnableClientState(GL_VERTEX_ARRAY) calls from their generated code for the win32 desktop target.

Did I mention, I don't like OpenGL! :)

my head is spinning a bit from reading this, but I've passed it along to the stencyl team. XD the engine is open so it's possible for me to make quick edits once they tell me what to do, since I really don't know much about this stuff. thanks again, if this is the solution then I'll owe you one. :)
 

domino99

Neo Member
Been working on Super Pokemon Eevee Edition again. I had set a target date of "late July" for the next beta. I missed that very badly, due to underestimating my time. I've set a new target date of "before November"; hopefully I'll finish the new beta in time *fingers crossed*

Made a new map~ After working on the shiny system for so long (which is boring and repetitive).... It feels so damn good to return to mapping again.
Really gets my creative juices flowing. I especially love working with Gen2 pokemon graphics for some reason. Probably nostalgia.

You know what Pokémon needs? An MMORPG. I tried several times to make one, but it's a huge project for one person alone :p
 

backstep

Neo Member
my head is spinning a bit from reading this, but I've passed it along to the stencyl team. XD the engine is open so it's possible for me to make quick edits once they tell me what to do, since I really don't know much about this stuff. thanks again, if this is the solution then I'll owe you one. :)

You're welcome, I learnt some new stuff, and I did feel bad about offering the earlier non-solution. If the Stencyl fix is a bit slow coming you can always use that last paragraph of mine to fix your builds, takes less than a minute and it's much less complicated than it looks.

Yep, it works here too. Nice work! Now I want the explanation. :)

Also, now everyone can ask you to fix their mysterious bugs right? :D

I missed the spoilered thing earlier, and NOPE! I did learn a lesson about poking my nose in code I don't really understand though.

What does it cost to hire you

Roughly the reported cost of Destiny's development to get me to work on any legacy GL project at least, hehe.
 
Ok, I'll try to keep this concise, but it's worth posting rather than PM'ing, since when the next person hits this bug the fix will be available as a Google result. I know random forum posts in Google have helped me in the past. Also, this bug is exactly why I haven't looked at OpenGL in like 10 years; it's got so much cruft, even its deprecations are deprecated!

Reference my earlier long post (save quoting code again), where I thought the stride length in the two glVertexAttribPointer calls was causing the memory access violation. Well, it really is the first of those calls that causes the later glDrawArrays to crash the driver with a bad memory access. It isn't the stride, it's the last parameter, the offset. When you're not using VBOs, the last parameter is a memory pointer to the vertex data, when you are using VBOs it's an offset into the currently bound VBO. Even though a VBO is bound (VBO 7), the nvidia driver crashes because with that last param it's trying to load the vertex data from memory location 0x00000000 (which is NULL, nothing) rather than the start of the bound VBO. The driver is ignoring the bound VBO because of a single call made about 100 calls earlier:
Code:
glEnableClientState(GL_VERTEX_ARRAY)
That call is super deprecated, but still mentioned in the OpenGL manpages, and it's superseded by glEnableVertexAttribArray() in modern GL which your app calls anyhow. All it does is update vertex array 0 on the pipeline, but according to the spec if you enable array 0 using that old style call, it tells glDrawArrays that the array using index 0 in glVertexAttribPointer will be a literal pointer to memory. That's my understanding of it from a non-GL user. The earlier glDrawArrays calls in your app don't error because their glVertexAttribPointer calls do use literal pointers to their vertex data (no VBO is bound). Your post-processing draw call is the only one that uses a bound VBO so that's where it crashes. Disabling the legacy call allows the post-processing draw call to access the bound VBO correctly. The bug only happens on recent nvidia drivers because the spec doesn't explicitly say that binding a VBO disables the behaviour specified by the old style call, the crash is just the result of "unspecified behaviour". Older nvidia drivers and AMD ones make an assumption, if I had to guess.

That's the google-worthy info out of the way.

The reason I found out is that I keep a pristine Windows 7 install for testing purposes, and it has an older nvidia 331.40 driver. Your test apps worked with that driver, so I compared the GL API logs against the crashing run and they're identical, and I figured it really must be something funny in the memory access of that draw call that the new driver won't tolerate. While looking through the logs I noticed the earlier working calls used memory pointers instead of offsets, did a quick Google, and came across this page: https://code.launchpad.net/~3v1n0/unity/fix-nvidia-glDrawArrays-crash/+merge/117559 (scroll to the green highlighted diff at the bottom and you'll see it's the same bug). It's a patch for a Linux desktop manager from 2012, but after a bit of checking around with the terminology, it seemed the likely culprit in your app too.

So how to disable it in your app? Well, I looked for some kind of GL function injector DLL to disable that legacy feature with its opposite, glDisableClientState (something like the many FXAA injectors etc.), but I couldn't find anything for injecting specific GL commands. Instead I figured the quickest test was to change the function argument to something invalid. I looked in that gDEBugger program to see roughly where in your app the legacy API call was made; it's from nme.ndll, not the actual executable. Then I loaded nme.ndll into IDA and located the function that calls glEnableClientState with an argument of GL_VERTEX_ARRAY. Uppercase means a constant, so I checked the headers and it translates to 8074 in hex; see https://www.khronos.org/registry/glsc/api/1.0/gl.h . That page is also useful for making sure not to change the argument to another valid define, which could cause a different bug. With the function located in IDA by hex address, I could load nme.ndll in a hex editor (I used HxD since it's free) and edit the argument of 8074 to something benign like 0074. Saved the DLL in HxD and your app worked fine, with the problematic call now issued as a fairly harmless glEnableClientState(Unknown), since 0074 isn't a valid argument.

I didn't need to reproduce the whole IDA disassembly for all of the other test executables; I just searched each one's nme.ndll in HxD for the hex value "74800000FF15" and changed the 74 80 to 74 00 (the 8074 is backwards because x86 is little-endian). You might be safe to do the same edit on any other versions of your game till the guys at Stencyl get around to addressing it. It depends whether they're actually using the legacy style of OpenGL at any point; if they are, then they need to add glDisableClientState(GL_VERTEX_ARRAY) guards around their modern VBO bindings (which might be time-consuming). Otherwise they can probably quickly remove the glEnableClientState(GL_VERTEX_ARRAY) calls from their generated code for the win32 desktop target.

Did I mention, I don't like OpenGL! :)
After reading all of that:

Seriously, though. I've never seen a bug report that detailed. You deserve major props for not only being so thorough, but also having the chops to figure out what the problem (most likely) was.

That's some divine debugging.
 

Blizzard

Banned
Very interesting, backstep!

Roughly the reported cost of Destiny's development to get me to work on any legacy GL project at least, hehe.
You should have said your price was $2 billion, like Notch's tweet about endorsing stuff. :p

I noticed you mentioned IDA (IDA Pro?). I know very little about it. Do you have anything you are allowed to share with unfamiliar people? It appears to be a general multiprocessor disassembler/debugger/environment...typically used for embedded systems stuff? Console game programming? All of the above?
 
Thanks. It's a rough spot at the moment, but I will just have to change my timeline a bit.


Ha! You are too kind, Feep. You can expect it but I might need to rearrange which project gets finished first. I can, however, hook you up with a Feep-only playroom where you can wall-stuff to your heart's content to tide you over! Let me know and I'll rig one up one of these days :p


Aye. I want all my dudes to take a round-table approach to deciding what goes in or out of a game, including me; I might be the leader of the pack, but I am certainly not immune to making stupid decisions. I believe the best food for a creative mind is another creative mind, so discussing and sharing ideas is an ethic I like to have. But when that attitude quickly turns into "I'm better than everyone, my way or the highway," that's when I choose highway. A big head like that completely cripples the rest of my team and undermines their ability to have a voice. It doesn't allow for important discussion topics like why or how; it restricts the flow of development and puts a gun to its head instead.

Dude is a good programmer, for sure. But wanting to work as a lone wolf and holding development hostage because you think you are awesome (he actually said this a few times) is bad juju.

Are you looking to replace him?

EDIT- I looked up Absinthe Games and conveniently found that you guys are located in Chicago, and so am I. I PM-ed you my email.
 

Popstar

Member
The driver is ignoring the bound VBO because of a single call made about 100 calls earlier:
Code:
glEnableClientState(GL_VERTEX_ARRAY)
That call is super deprecated, but still mentioned in the OpenGL manpages, and it's superseded by glEnableVertexAttribArray() in modern GL, which your app calls anyhow. All it does is update vertex array 0 on the pipeline, but according to the spec, if you enable array 0 using that old-style call, it tells glDrawArrays that the array using index 0 in glVertexAttribPointer will be a literal pointer to memory. That's my understanding of it as a non-GL user, anyway.
You can actually use the old style – glEnableClientState(GL_VERTEX_ARRAY) – with vertex buffer objects (link). I'm not sure why Nvidia would change the driver to turn VBOs off when they're used. Odd.

That said, the Stencyl people shouldn't be mixing the old-style and new-style calls in their engine; it's a recipe for grief.
 

Blizzard

Banned
You can actually use the old style – glEnableClientState(GL_VERTEX_ARRAY) – with vertex buffer objects (link). I'm not sure why Nvidia would change the driver to turn VBOs off when they're used. Odd.

That said, the Stencyl people shouldn't be mixing the old-style and new-style calls in their engine; it's a recipe for grief.
That's alllllll the way back from OpenGL 2.1 though. I wonder if there's some nuance in the spec that allows it to be vague, and that's what nVidia went with.
 
You can actually use the old style – glEnableClientState(GL_VERTEX_ARRAY) – with vertex buffer objects (link). I'm not sure why Nvidia would change the driver to turn VBOs off when they're used. Odd.

That said, the Stencyl people shouldn't be mixing the old-style and new-style calls in their engine; it's a recipe for grief.

I'm not as knowledgeable in this arena as I'd like to be, but I read a blog post by the C4 engine creator on VBOs a couple of months back. This may or may not pertain / be helpful to you, but here goes - http://the31stgame.com/blog/?author=1
 

Jobbs

Banned
You're welcome, I learnt some new stuff, and I did feel bad about offering the earlier non-solution. If the Stencyl fix is a bit slow coming, you can always use that last paragraph of mine to fix your builds; it takes less than a minute and it's much less complicated than it looks.

Yeah, I looked over that, made the change, and will test it with the people I know who reported the crash in the first place. But if the responses here on NeoGAF to your upload are any indication, it looks optimistic!

That's some divine debugging.

completely agree, and I intend to recommend him for the neogaf medal of honor.
 

Popstar

Member
That's alllllll the way back from OpenGL 2.1 though. I wonder if there's some nuance in the spec that allows it to be vague, and that's what nVidia went with.
It's vague what happens when you enable both old-style explicit attribute arrays (vertex, colour, texcoord, etc...) and new style generic attribute arrays at the same time.
 

Jobbs

Banned
It's vague what happens when you enable both old-style explicit attribute arrays (vertex, colour, texcoord, etc...) and new style generic attribute arrays at the same time.

I don't know a lot about this stuff, not being a coder, but I believe Stencyl is a very cobbled-together open-source engine, constantly being changed and added to, and thriving on the contributions of many people. That would probably explain how something like this happens.
 

Blizzard

Banned
As someone crazy enough to maintain their own (2D) engine, I can certainly sympathize with the pain of trying to upgrade ancient OpenGL calls to more modern equivalents. That, and more modern C++ features, can easily drive one into a never-ending cycle of upgrades. Well, what if you use a new C++ feature... but then you have to support a platform where the compiler DOESN'T give you that? Woe!

I still haven't upgraded my font and texture rendering system to be nice and use a shader and allow colorization of fonts.
 
Thanks. It's a rough spot at the moment, but I will just have to change my timeline a bit.


Ha! You are too kind, Feep. You can expect it but I might need to rearrange which project gets finished first. I can, however, hook you up with a Feep-only playroom where you can wall-stuff to your heart's content to tide you over! Let me know and I'll rig one up one of these days :p


Aye. I want all my dudes to take a round-table approach to deciding what goes in or out of a game, including me; I might be the leader of the pack, but I am certainly not immune to making stupid decisions. I believe the best food for a creative mind is another creative mind, so discussing and sharing ideas is an ethic I like to have. But when that attitude quickly turns into "I'm better than everyone, my way or the highway," that's when I choose highway. A big head like that completely cripples the rest of my team and undermines their ability to have a voice. It doesn't allow for important discussion topics like why or how; it restricts the flow of development and puts a gun to its head instead.

Dude is a good programmer, for sure. But wanting to work as a lone wolf and holding development hostage because you think you are awesome (he actually said this a few times) is bad juju.

Very good idea. A programmer with an ego can seriously kill a project. The Sonic 2 HD fan project was monopolized by a single, egotistical programmer who, on top of writing a surprisingly inefficient engine, used it to force decisions on the rest of the team, and even wrote in a DRM system and a poorly-written input scheme that caused a virus scare. The team fired the guy when all the bad press hit and cancelled the project outright, and the programmer earned nothing but the scorn of the entire Sonic community. Sonic 2 HD has restarted recently, but with a different team.
 

Rubikant

Member
I'm thinking about making a demo to showcase some of the main features of my RPG. It'll most likely include the first "dungeon" of the game as well as a few explorable quests to find. Any ideas on the length it should be?

Unless your demo is DAMN GOOD, I generally would be cautious about demos. It may not ultimately matter much, but if your goal is to sell people on your game, I think you can very easily do more damage than good.

Last year during the Legend of Iya Kickstarter campaign, we had a game that looked terrific in video, and then he released a demo that was clearly very unfinished (which is fine, because that's what happens when you make a game; it's very unfinished for a while), but its lack of sound effects, poor controls, some bugs, unfinished art, etc. damaged the mystique of the game.

His campaign funded, and who knows if it ultimately hurt or helped him, but in my eyes it felt like he'd have been better off not doing it.

(Around the corner, with some trepidation, I'll be releasing my own demo to backers who backed for it. I'm doing it because I said I'd do it, but if my game weren't a KS game I'd probably not do this)

One of the most useful tidbits I learned from working within the industry is related to this. Apparently, several large game companies paid an independent 3rd-party research firm to do a study on how game sales were affected by various factors beyond the game itself. Then, when one of these large companies commissioned the company I worked for to make a game for them, they let us in on the results of the study, and I was lucky enough to be in the meeting where this info was revealed.

The study looked at how game sales correlated to having a demo, having a trailer, and having good reviews. The result was that having a demo usually HURT game sales, almost never increasing it. The theory was that not only was it incredibly hard to make a compelling demo, but that even if the demo was really GOOD, players would sometimes feel they really got all they wanted out of the game from the demo, or were even content to just play the demo repeatedly, and not bother buying the product.

Interestingly, reviews also had relatively little effect on sales. What did have a positive effect on sales though was doing a good trailer, which almost always had a large positive influence (over, say, screenshots and written press releases). They found that the best sales came from games that had a good trailer, good reviews, but no demo.

Years after finding this "inside info" I saw a video explaining more theories about why demos aren't necessarily a good idea for developers, check this out: http://www.youtube.com/watch?v=7QM6LoaqEnY
 
One of the most useful tidbits I learned from working within the industry is related to this. Apparently, several large game companies paid an independent 3rd-party research firm to do a study on how game sales were affected by various factors beyond the game itself. Then, when one of these large companies commissioned the company I worked for to make a game for them, they let us in on the results of the study, and I was lucky enough to be in the meeting where this info was revealed.

The study looked at how game sales correlated to having a demo, having a trailer, and having good reviews. The result was that having a demo usually HURT game sales, almost never increasing it. The theory was that not only was it incredibly hard to make a compelling demo, but that even if the demo was really GOOD, players would sometimes feel they really got all they wanted out of the game from the demo, or were even content to just play the demo repeatedly, and not bother buying the product.

Interestingly, reviews also had relatively little effect on sales. What did have a positive effect on sales though was doing a good trailer, which almost always had a large positive influence (over, say, screenshots and written press releases). They found that the best sales came from games that had a good trailer, good reviews, but no demo.

Years after finding this "inside info" I saw a video explaining more theories about why demos aren't necessarily a good idea for developers, check this out: http://www.youtube.com/watch?v=7QM6LoaqEnY

Wow, when you put it all together that makes sense. I've downloaded quite a few demos of games I've never bought and even if they were good, I found actually playing what I had satisfying enough. I do enjoy video editing so a trailer would definitely be the right answer to my predicament. Thanks for the info!
 

Ashodin

Member
backstep I would love for you to figure out why NVIDIA cards revert saved profile changes on startup of Windows. I have to change my colors back EVERY DAY I turn on my computer. Fucking annoying.
 
backstep I would love for you to figure out why NVIDIA cards revert saved profile changes on startup of Windows. I have to change my colors back EVERY DAY I turn on my computer. Fucking annoying.

Huh? I've been using Nvidia cards since I had a computer and I've never had to do this. What color settings are you talking about?
 

Ashodin

Member
The NVIDIA Control Panel settings where you have NVIDIA set the colors instead of letting the program take over. Not only do other programs forcibly take over in fullscreen (dual-monitor setup here, too) and switch the colors back (sometimes even in windowed fullscreen), but when I shut down the computer and restart, they're back to the original settings.
 
Just lost my job today, folks.

...Which is okay, because now I get to pretend I'm a full-time indie game developer (until I find a new job)!

My first release is called Sphere Blade. I made it using Unreal Engine 4 for a small game jam between friends. Brought a build out last weekend, and people promptly played it for about 3 hours straight. It seemed to be a hit, so I spent some time tweaking it so it could be a halfway-decent game.

It's a 2.5D 1 vs 1 combat game. Each player controls a ball with a sword coming out of it. Left and right triggers spin the sword, the left analog stick spins the wheel, and the A button jumps. Every contact scores a point (no more than 1 point every .6 seconds or so). First to 20 points (or highest score on timeout) wins.

You guys can grab the pre-alpha here. It requires two Xbox 360 controllers to play.

I really should call it a prototype. I'm tossing this build out since I threw it together with blueprints, which makes it difficult to get certain things done. I'm gonna start rebuilding the game from scratch in C++ tomorrow.
 

Lautaro

Member
Just lost my job today, folks.

...Which is okay, because now I get to pretend I'm a full-time indie game developer (until I find a new job)!

Welcome to the club (of jobless people pretending to be indies). I want to dedicate at least 5 more months to my project before I go back to the wage-slave life though, I'm sick of it.
 
Unless your demo is DAMN GOOD, I generally would be cautious about demos. It may not ultimately matter much, but if your goal is to sell people on your game, I think you can very easily do more damage than good.

Last year during the Legend of Iya Kickstarter campaign, we had a game that looked terrific in video, and then he released a demo that was clearly very unfinished (which is fine, because that's what happens when you make a game; it's very unfinished for a while), but its lack of sound effects, poor controls, some bugs, unfinished art, etc. damaged the mystique of the game.

His campaign funded, and who knows if it ultimately hurt or helped him, but in my eyes it felt like he'd have been better off not doing it.

(Around the corner, with some trepidation, I'll be releasing my own demo to backers who backed for it. I'm doing it because I said I'd do it, but if my game weren't a KS game I'd probably not do this)

I don't mean to be mean, but that demo was atrocious. It was like playing a game in the middle of development, which I assume it was. A demo should be a sneak preview of a nearly finished, if not finished, game.
 