
GAF Indie Game Development Thread 2: High Res Work for Low Res Pay


correojon

Member
You need to test this out with your backgrounds.
The decision rests on how they are drawn. Do they have thick outlines as well, or are they just flat colors? What palette was chosen for the backgrounds? Etc.
A looks safe, but I kinda like C as well; it softens the sprite somehow, though I think it needs something else to really stand out - don't know what, though! Maybe you could try another color than complete black if you go with A, so it meshes better with the environment (a dark reddish tone, maybe?).
So yeah, A for me if you don't want to think about this too much, or C if you feel like experimenting some more :-D
I also prefer C.
I think C is best. Not enough games go for that style imo.
Thin and inner borders look best to me.
Fan of C here as well
Thanks everyone for the comments, I've been doing some tests with both and I think I'll finally go with "thin and inner lines" (A) instead of the thick borders (C). I've been trying to add more details to the sprites and the problem with C is that the thick borders don't work well with small details like the fingers of the hand. Also, C takes much more work than A, as the thick border is not just a black 2px wide line, but a 1px outside black border and an inner 1px darker "selout" border. Speaking of borders, does anyone here know of an easy way to make sprite outlines in Cosmigo Promotion? I've seen there's a brush mode named something like this, but I haven't been able to understand how to use it. This program is great, but it lacks a good manual, and there are few tutorials on the web for the million options it has.

Regarding the game, I've completely destroyed the editor file system and the way objects and their properties are handled. I've managed to get a huge performance boost in the editor from this, and the system is now ready for levels with sub-worlds (think of when Mario enters a pipe and the level changes from the overworld to the underground). But this was a HUGE change in the most basic core of the engine, so I'm still rebuilding the most basic stuff. This is a lot of work that is invisible and I have nothing to show for it, but I hope it will help me speed up development once I finish and bring the game to a better state. On the bad side, I'm pretty burnt out from all this thankless work; hope I can get it finished in a week or so and get back to "real" development that makes the game advance.
 
For those developers working within GameMaker

I have been transitioning from a Windows target build to an HTML5 build and discovered two differing behaviors that can significantly impact the expected performance/operation of your game (in my case, breaking the game altogether):

1. The order objects (and their instances) are processed
2. The handling of (dynamically) destroyed instances

I just created a reddit post explaining each observation for those interested.

I hope you find this information helpful!
 

mStudios

Member
If I'd known 3D was so fun and so easy to animate, I woulda never touched 2D in my life.
As cool as it looks, 2D animation takes too much time -_-

We're just gonna do 2D for cutscenes and stuff like that from now on.

For those developers working within GameMaker

I have been transitioning from a Windows target build to an HTML5 build and discovered two differing behaviors that can significantly impact the expected performance/operation of your game (in my case, breaking the game altogether):

1. The order objects (and their instances) are processed
2. The handling of (dynamically) destroyed instances

I just created a reddit post explaining each observation for those interested.

I hope you find this information helpful!

GameMaker and its HTML5 Module is:

Divi9yo.png
 
If I'd known 3D was so fun and so easy to animate, I woulda never touched 2D in my life.
As cool as it looks, 2D animation takes too much time -_-

We're just gonna do 2D for cutscenes and stuff like that from now on.



GameMaker and its HTML5 Module is:

Divi9yo.png

It surely seems to be lacking...
 

LordRaptor

Member
Wait! They're going to sell people splashscreens?

Previously, freeloader edition had a compulsory unmodifiable splash screen, pro edition let you turn splash screen off entirely (or BYO).

Then they changed the pricing structures, made pro more expensive for most, and added a cheaper 'midtier' indie licence, but that paid licence also has the compulsory non-modifiable splash screen that the free edition has, so understandably people paying money for Unity don't want their product looking no different from the free edition to consumers.

e:
I mean, there's a deeper issue re: branding, in that they're effectively making "tell people your game is made in Unity" the downside of not paying for Unity, which is sort of crazy.
 

LordRaptor

Member
^ That's so sick! Jesus, where does it lead to?

I like Unity, so I think making aspects of the logo customisable as well as seamlessly adding your own logo is a good thing, because I think people making their own stuff in Unity shouldn't be embarrassed to say that's what they used :D
 

Pazu

Member
Friday means a new Chiaro VR gif! This is a bit from Blue Crab Lake, puzzle-filled and hiding relics of the First Settlers.

6a4171bdd50fe3d38e72a57f83c012df.gif


Anybody know a good app to make longer gifs? Excited to share some more gameplay and story stuff, 10 sec clips are tough...
 
Unity has some issues with pooling and trail renderers. You can't just disable them and enable them. You need to set their trail time to -1 before the parent gets disabled, then set the trail time back to its original value 1 frame after the parent is enabled and moved, or they will SOMETIMES (sometimes) draw the trail between their last position and their new position, despite not being active and having a trail time of -1 when they move.

That is weird.
 

Ontoue

Member
So I've finally gotten around to starting a project I've had floating around in my head for a while now, but I've sort of run into a roadblock with my inexperience in Blender and UE4. I'm trying to animate a robot character in Blender, but the problem is that when the bones are controlling separate objects in blender it doesn't export to UE4 correctly, it comes in as one big stiff object spinning around wildly. But when I join all the objects together in Blender it becomes a nightmare with weight painting trying to get all the little polygons that are pressed up against other polygons and shapes that intersect and such. Is there a way to join objects together after I finish animating it so that it becomes one large moving mesh rather than a bunch of moving pieces?

Edit: Alright I think I got it, I wasn't renaming my root bone to 'Root' and that was what was causing the problem. It seems to be working just fine now.
 

Blizzard

Banned
So I've finally gotten around to starting a project I've had floating around in my head for a while now, but I've sort of run into a roadblock with my inexperience in Blender and UE4. I'm trying to animate a robot character in Blender, but the problem is that when the bones are controlling separate objects in blender it doesn't export to UE4 correctly, it comes in as one big stiff object spinning around wildly. But when I join all the objects together in Blender it becomes a nightmare with weight painting trying to get all the little polygons that are pressed up against other polygons and shapes that intersect and such. Is there a way to join objects together after I finish animating it so that it becomes one large moving mesh rather than a bunch of moving pieces?

Edit: Alright I think I got it, I wasn't renaming my root bone to 'Root' and that was what was causing the problem. It seems to be working just fine now.
I don't know if it works in UE4, but here's the guide I followed to make simple robots for UDK with Blender:

https://www.youtube.com/watch?v=KLsBnLn41-U
 
Last night I finally implemented achievements and leaderboards in Lolly Joe. UE4 makes it surprisingly easy - just a few straightforward nodes are required to submit the achievement progress to Steam. The hardest part was just getting Steamworks integrated into the game (it requires messing with config files), but I already took care of that a while back.

Today I'm working on trading cards. I've been looking forward to this part. :)
 

HelloMeow

Member
Unity has some issues with pooling and trail renderers. You can't just disable them and enable them. You need to set their trail time to -1 before the parent gets disabled, then set the trail time back to its original value 1 frame after the parent is enabled and moved, or they will SOMETIMES (sometimes) draw the trail between their last position and their new position, despite not being active and having a trail time of -1 when they move.

That is weird.

TrailRenderer.Clear should do the trick.
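
Something like this, if you're pooling - just a sketch of the idea (the pool hooks here are made up; only the TrailRenderer calls are real Unity API):

Code:
using UnityEngine;

// Sketch: clear a pooled object's trail when it's reused, so no segment
// gets drawn between its old resting position and its new spawn position.
public class PooledProjectile : MonoBehaviour
{
    private TrailRenderer trail;

    void Awake()
    {
        trail = GetComponent<TrailRenderer>();
    }

    // Hypothetical hook your pool would call when handing the object out.
    public void OnSpawnedFromPool(Vector3 position)
    {
        transform.position = position;
        trail.Clear();              // drop all existing trail points
        gameObject.SetActive(true);
    }

    // Hypothetical hook your pool would call when taking the object back.
    public void OnReturnedToPool()
    {
        gameObject.SetActive(false);
    }
}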
 

Dascu

Member
It's been way too long since I posted updates or new screens of The Godbeast.

This gif shows off the updated character model, attack animation variety, new particle effects, crosshair, charged attacks and in the background part of the City area.

tg1shsli.gif


Is anyone else going to GamesCom?
 
Still waiting on a proper 5.4 release before I dive in and fix all the shit that will break.

Same boat here. I'm expecting it to drop next week since they're already on Release Candidate 2 (and it was supposed to be out last month).

They stopped working on features a few weeks ago and have been focusing on bug fixes, so the "release" version should be arriving shortly. Given how long it's been, and how broken 5.3 is, I think I might break my "wait for the first big patch" rule and jump into 5.4 immediately.

I've just gotta get off of 5.3 -- possibly the worst update I've done.
 
Same boat here. I'm expecting it to drop next week since they're already on Release Candidate 2 (and it was supposed to be out last month).

They stopped working on features a few weeks ago and have been focusing on bug fixes, so the "release" version should be arriving shortly. Given how long it's been, and how broken 5.3 is, I think I might break my "wait for the first big patch" rule and jump into 5.4 immediately.

I've just gotta get off of 5.3 -- possibly the worst update I've done.
I'm still on 5.2

I tried a beta of 5.4 and noped the fuck out. I use translate to move stuff and no matter the speed objects always went in one direction.

I'm assuming there are deprecations, which would be easy fixes for the most part, but I definitely wanted to wait on a proper release, considering I read 5.4 now allows you to create patches for your game so you don't have to do it manually. Unless that's stripped, too.

Plus 5.3 had some nasty TrailRenderer issues where trails would just disappear at random.

Keeping my fingers crossed.
 
Hey Indie Dev Gaf! I just wrapped up a bunch of work on chiptunes for Genesis and NES games from a client and I'm ready to take on a new game.

Any of you devs on here working on a game that's in need of an original score, sound design, or both? I'd much rather forge partnerships with GAF devs than look for a group in the massive sea of random indie devs on the internet. There are always so many cool screenshots and progress updates in this thread, whereas most of the time when I get involved with a group from the countless other indie dev communities, it falls apart before a game ever gets made -_-

Here's a list of stuff I've worked on in the past (it needs an update)
Here's a bunch of my music in a variety of genres if you want to hear what I can do.

Let me know if any of y'all are looking for a good composer/audio guy and hopefully we can work something out!
 

Popstar

Member
After all of this, my CPU usage went from 50% to 11-15%. The core 1 temperature was maybe 8-10 degrees cooler. For a desktop this is just a fan usage annoyance, but for a laptop I'm hoping it's a battery and heat savings. :) Just in case it causes issues on some systems, I made it an optional feature, "vsync CPU saver" or whatever.
@Blizzard: Resurrecting conversation from a couple months ago to tell you that Jonathan Blow is currently tweeting about this problem if you're interested.
 
Oh, that's too bad. I took a peek at the decompiled UnityEngine assembly and it's an internal call. No easy fix there.

Well, I mean, if you can get at it with Reflection you could theoretically grab the MethodInfo once on object creation, and use that to create a callable delegate. Calling the delegate should only be slightly slower than a direct method call -- the only significant overhead would be the setup.

But if it's in Unity 5.3+, it seems like it'd just be better to wait and upgrade to 5.4, whenever it's stable enough.
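
For reference, the reflection-plus-delegate pattern I mean looks roughly like this (generic sketch, nothing Unity-specific; the target type and method name are hypothetical):

Code:
using System;
using System.Reflection;

public class ReflectedCaller
{
    // Built once, then calling it is only marginally slower than a direct call.
    private readonly Func<float> getValue;

    public ReflectedCaller(object target)
    {
        // "GetInternalValue" stands in for whatever non-public method you need.
        MethodInfo mi = target.GetType().GetMethod(
            "GetInternalValue",
            BindingFlags.Instance | BindingFlags.NonPublic);

        // Bind the MethodInfo to the instance as a strongly typed delegate.
        getValue = (Func<float>)Delegate.CreateDelegate(typeof(Func<float>), target, mi);
    }

    public float Read()
    {
        return getValue();
    }
}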
 

HelloMeow

Member
Well, I mean, if you can get at it with Reflection you could theoretically grab the MethodInfo once on object creation, and use that to create a callable delegate. Calling the delegate should only be slightly slower than a direct method call -- the only significant overhead would be the setup.

But if it's in Unity 5.3+, it seems like it'd just be better to wait and upgrade to 5.4, whenever it's stable enough.

It's a call internal to the engine. I was hoping for something that was fully implemented in UnityEngine.
 
Anyone know if developers are allowed to reveal the Steam achievements, badges, and trading cards for their game before the game's actually released on the store?

I'm paranoid about breaking an NDA, so I always err on the side of caution. But I'd love to share some of these with the fans following the game.
 

Limanima

Member
Sounds good! Am on a similar track, also working on some lighting and shading.
Well, wanted to ask: what C/C++ dev environment are you using under OS X and
iOS to get your stuff out there?

-Xcode - OS X/iOS
-Visual Studio - Windows
-Eclipse - Android

Visual Studio is miles ahead of the other two development environments, and Eclipse is the worst by far.
I want to get the thing to work on Windows Phone too, but SDL isn't fully supported. I'm thinking about ditching SDL for the visual part on WP (and developing a Pokemon GO game there and getting rich!!)

I'm currently cracking my head with a simple thing: pass an array of structs to a shader. It simply doesn't want to work and I don't know why...
 

missile

Member
-Xcode - OS X/iOS
-Visual Studio - Windows
-Eclipse - Android

Visual Studio is miles ahead of the other two development environments, and Eclipse is the worst by far.
I want to get the thing to work on Windows Phone too, but SDL isn't fully supported. I'm thinking about ditching SDL for the visual part on WP (and developing a Pokemon GO game there and getting rich!!) ...
Indeed, Visual Studio is the best thing that ever came out of Microsoft.

... I'm currently cracking my head with a simple thing: pass an array of structs to a shader. It simply doesn't want to work and I don't know why...
Perhaps there is a size mismatch. Try passing everything as one huge
array of bytes, ignoring the size of the structs, and see if the data gets
passed correctly. If it does, it must be a size mismatch of some type, or perhaps
the wrong amount of data was copied, misaligning the structs within the copied
bytes. (GPU-side struct alignment/padding rules often differ from the CPU side,
so this kind of mismatch is easy to run into.)

Btw, do you know one of the fastest ways (if not the fastest) to blit an image
onto the screen under iOS?
 
Anyone know if developers are allowed to reveal the Steam achievements, badges, and trading cards for their game before the game's actually released on the store?

I'm paranoid about breaking an NDA, so I always err on the side of caution. But I'd love to share some of these with the fans following the game.
If you have signed an NDA, it should tell you what you can and can't talk about.

Plenty of folks in here have their games on Steam, but who's to say what rules changed between when they signed theirs and when you signed yours.

Best course of action is to read the NDA or contact Valve.

You really should read every word before you sign ANYTHING, regardless of how typical the agreements are. I read every word from Sony, MS and Nintendo. Not exactly exciting, but yeah.
 
If you have signed an NDA, it should tell you what you can and can't talk about.

Plenty of folks in here have their games on Steam, but who's to say what rules changed between when they signed theirs and when you signed yours.

Best course of action is to read the NDA or contact Valve.

You really should read every word before you sign ANYTHING, regardless of how typical the agreements are. I read every word from Sony, MS and Nintendo. Not exactly exciting, but yeah.

Good point.

I always read everything I sign too, to the last letter, but I always wonder if I misunderstood something. Legalese can seem like gibberish at times, and a lot of it can be up to interpretation.

But anyway, I took a look at games that have a live store page on Steam but haven't been released yet, and none of them make their achievements or trading cards visible, so I think it's safe to assume Valve doesn't want them seen until the game is actually available for purchase.
 
^ You never know. They might just not be showing them or it could be a hard rule. /shrug

-

Any Unity folks know if only casting a LineCast or Raycast when needed has any performance impact vs keeping them up full time?

I only need to check the object a line intersected and the point at which it intersects a collider when I hit a button. I really don't need it returning that information full time.

I could toss it up and just request that info when I need it, but currently I don't draw a line until the frame I need it in. Curious if constantly drawing a line for a single frame and then not has any impact vs keeping the line up full time and only returning info in the frame it's needed.

I don't know much about performance impacts of keeping it up vs just throwing it for a single frame. I'd assume only drawing a line for a single frame would be more efficient.
 
I don't know much about performance impacts of keeping it up vs just throwing it for a single frame. I'd assume only drawing a line for a single frame would be more efficient.

The most efficient option would be to only draw the ray when you need it, but if it's just a single ray, then you're unlikely to see any noticeable performance changes between the two implementations. Just as a rule of thumb if you don't need to do something, don't do it :p

Like, if you think about what's actually happening (Assuming you don't do any other raycasts in your game at all), you would simply be doing one raycast every physics tick. Since they're separated by ticks, the fact that you are doing many of them over an extended period of time isn't going to really make much difference. What would end up being taxing is if you were doing multiple raycasts per update. That can get expensive pretty quickly.

I had to deal with this in one of the games I worked on in the past. The majority of our weapons used raycasts for their projectiles. We had some edge cases where, in addition to all the guns that were firing, we could have 3 different turrets that needed to independently track 4 different enemies each, only open fire when they had LOS, but then use a separate ray for the projectile (due to bullet spread), etc. Let's just say that the first implementation of that managed to grind the game to a halt. (Granted, that was Box2D, but in principle it's the same thing).

Optimise now, save yourself the pain later :)
 
The most efficient option would be to only draw the ray when you need it, but if it's just a single ray, then you're unlikely to see any noticeable performance changes between the two implementations. Just as a rule of thumb if you don't need to do something, don't do it :p

Like, if you think about what's actually happening (Assuming you don't do any other raycasts in your game at all), you would simply be doing one raycast every physics tick. Since they're separated by ticks, the fact that you are doing many of them over an extended period of time isn't going to really make much difference. What would end up being taxing is if you were doing multiple raycasts per update. That can get expensive pretty quickly.

I had to deal with this in one of the games I worked on in the past. The majority of our weapons used raycasts for their projectiles. We had some edge cases where, in addition to all the guns that were firing, we could have 3 different turrets that needed to independently track 4 different enemies each, only open fire when they had LOS, but then use a separate ray for the projectile (due to bullet spread), etc. Let's just say that the first implementation of that managed to grind the game to a halt. (Granted, that was Box2D, but in principle it's the same thing).

Optimise now, save yourself the pain later :)

Roger that!

Right now I'm only drawing a line when firing a gun - using LineCast so it will only look at the closest object hit. If it hits something, I return the object that was hit to be used by the weapon controller, and I return the intersection of the line with the object's collider so I can use that info for effects like wall/ground hits, etc.

I do use RayCasts in Mainframe One on the main character but only ever 4 in the direction of movement. Just wasn't sure how LineCasts reacted to being drawn for one frame based on the fire rate - so a few times per second.

I might experiment with them a bit more for this side project - they are a real Swiss Army Knife of sorts :D
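
Roughly what that looks like stripped down, if anyone's curious (not my actual code, just the shape of it):

Code:
using UnityEngine;

// Fire-only 2D linecast: cast once on the frame the trigger is pulled,
// take the closest hit, and hand the collider + impact point onward.
public class HitscanGun : MonoBehaviour
{
    public float range = 20f;   // keep the cast bounded, not infinite
    public LayerMask hitMask;   // environment + enemy layers

    void Update()
    {
        if (Input.GetButtonDown("Fire1"))
        {
            Vector2 start = transform.position;
            Vector2 end = start + (Vector2)transform.right * range;

            RaycastHit2D hit = Physics2D.Linecast(start, end, hitMask);
            if (hit.collider != null)
            {
                // hit.collider -> what was struck (for the weapon controller)
                // hit.point    -> where the line met the collider (for impact effects)
                Debug.Log("Hit " + hit.collider.name + " at " + hit.point);
            }
        }
    }
}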
 
Roger that!

Right now I'm only drawing a line when firing a gun - using LineCast so it will only look at the closest object hit. If it hits something, I return the object that was hit to be used by the weapon controller, and I return the intersection of the line with the object's collider so I can use that info for effects like wall/ground hits, etc.

I do use RayCasts in Mainframe One on the main character but only ever 4 in the direction of movement. Just wasn't sure how LineCasts reacted to being drawn for one frame based on the fire rate - so a few times per second.

I might experiment with them a bit more for this side project - they are a real Swiss Army Knife of sorts :D

I'm pretty sure that's the correct way to do it. Theoretically, you could have it always checking for hits, but it seems super wasteful (I haven't checked, but I'm sure there's some kind of garbage hit for checking the LineCast). Using it on-demand is what I would do, if it were me.

If I had to have something with a bit of persistence (i.e. constant checking) I'd rather opt for a thin, but extra long Box Collider. The collision system is pretty heavily optimized (down to some kind of spatial hash/octree, if I remember correctly) so that might end up being cheaper.

Overall, though, go with what's easiest and what makes the most sense first. Only mega-optimize if you have a problem (and you're sure that the thing you're optimizing is actually the problem!).
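
And if you ever do want the persistent version, the trigger-collider route is something like this (just sketching the idea; remember that 2D trigger callbacks need a Rigidbody2D on at least one of the two objects):

Code:
using System.Collections.Generic;
using UnityEngine;

// A long, thin trigger collider that lets the physics broadphase do the
// tracking, instead of casting a line every frame. Query it only when needed.
[RequireComponent(typeof(BoxCollider2D))]
public class BeamTrigger : MonoBehaviour
{
    private readonly HashSet<Collider2D> inside = new HashSet<Collider2D>();

    void OnTriggerEnter2D(Collider2D other) { inside.Add(other); }
    void OnTriggerExit2D(Collider2D other)  { inside.Remove(other); }

    // Everything currently overlapping the beam, checked on demand.
    public IEnumerable<Collider2D> CurrentOverlaps()
    {
        return inside;
    }
}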
 

Jobbs

Banned
I wanted to post something new/substantial so I picked an area that's relatively self contained.

This is a 25 minute walkthrough of a particular area in my game, including a boss fight. I've added annotations that provide various insights.

I want to show COMPLETELY new areas but they're just not intact enough yet.

https://www.youtube.com/watch?v=Tc03CKJQGdQ

there's a boss fight, npc interactions, various things!
 
I'm pretty sure that's the correct way to do it. Theoretically, you could have it always checking for hits, but it seems super wasteful (I haven't checked, but I'm sure there's some kind of garbage hit for checking the LineCast). Using it on-demand is what I would do, if it were me.

If I had to have something with a bit of persistence (i.e. constant checking) I'd rather opt for a thin, but extra long Box Collider. The collision system is pretty heavily optimized (down to some kind of spatial hash/octree, if I remember correctly) so that might end up being cheaper.

Overall, though, go with what's easiest and what makes the most sense first. Only mega-optimize if you have a problem (and you're sure that the thing you're optimizing is actually the problem!).
Roger that.

First time I'm using casts for this so I'm a bit curious as to how expensive they can be. There are only a few characters that use these for firing, while others just stick to kinematic objects I move using translate.

I actually don't know if I can get intersection points between colliders. I never needed to in MF1, but I should check some docs. Probably by reading some bounds and doing some math, but it's not something I've ever needed yet.

I also may or may not use my previous method of raycasting for collision with the player controller. Playing about with the RigidBody I can get it to do what I need, so I'm not sure a robust system like MF1's character controller would be needed, since movement in this game isn't as robust as MF1's. There are instances when framerates go above about 800 where timing/distance gets a bit off, since movement is based on Time.deltaTime, even with my custom smooth delta math. With RigidBody2D I don't get that, even interpolated. I do appreciate the pixel-perfectness of rays, though.

Considering this side venture is all about that couch and online co-op, I'll have to see how a few toons using multiple rays for collision stack up against them using RigidBodies. I've never paid much attention to what eats up the frame time when MF1 runs 200-300 fps on PS4 with dips into the hundreds when SHTF, but I'm eager to expand my knowledge a bit.

Thanks for the help, guys!
 

Jobbs

Banned
Has Spine2D ever gone on sale?
I'm considering getting one (probably the pro version) but not sure if I should get it now or wait.

One thing I learned in my life is if you ask for things sometimes you get them. One of my favorite stories is the time I asked Sharpie for a bunch of markers and stuff for no particular reason and they sent them to me. XD

I asked Spine2D for a discount on their Pro license, I said what I wanted to pay, and they obliged. I have no idea if this is normal practice so obviously YMMV.
 

bumpkin

Member
I wanted to post something new/substantial so I picked an area that's relatively self contained.

This is a 25 minute walkthrough of a particular area in my game, including a boss fight. I've added annotations that provide various insights.

I want to show COMPLETELY new areas but they're just not intact enough yet.

https://www.youtube.com/watch?v=Tc03CKJQGdQ

there's a boss fight, npc interactions, various things!
Looking awesome as always, man. The special effects are especially impressive (like the energy stream/flow during the upgrade). How'd you do that stuff?
 

JulianImp

Member
So, there was this classicvania I was developing as freelance work on weekends. I approached the guy who was hiring me last weekend to see if he'd be interested in having me work on it on weekdays as well and he said yes, so I now have a short term source of income so that I can actually go to Japan and stay there for longer than I had originally planned, since I'll even be able to work on the project from there.

I'm really happy that this project is actually giving me some much-needed economic stability and that it's exactly the kind of game I've always wanted to make. Even my unfinished projects are paying off, since they're the reason I can now whip up player movement systems, enemies and so on in a timely manner, and that makes the artist who's hired me really happy since he can see decent progress being made on the game on a daily basis, all while I learn even more about making these kinds of games.
 

Jobbs

Banned
Looking awesome as always, man. The special effects are especially impressive (like the energy stream/flow during the upgrade). How'd you do that stuff?

thank you :)

particle effects are fun to toy with. it comes down to moving and bending a series of little graphics/animations, obviously, and in the case of the one you're referring to it's just a series of pink balls stretching and following a route to the player. they change color after a certain time to give the illusion of that unbroken line being different at the start and end. a shader helps blend them together.

it's a lot of tinkering.
 

missile

Member
... I might experiment with them a bit more for this side project - they are a real Swiss Army Knife of sorts :D
Swiss Army Knife for a reason.

Raycasting is essentially a brute-force method which is pretty easy to implement and virtually solves all problems (Swiss Army Knife), if only the complexity didn't rise by orders of magnitude rather quickly. However, there is nothing easier than probing (point-sampling) an environment and taking action on it. The main optimization is always to cut down on the number of rays fired. Second, to limit the extent of the rays. Third, to simplify the geometry of the objects the rays interact with. Fourth, to build spatial data structures (BVH etc.) to cull unnecessary ray-object intersection tests.

The problem with all the more advanced optimizations is that with each one the flexibility of the whole thing decreases, the worst case being when you need to make your geometry static to gain more optimization. Static scenes allow for the best optimization, no matter what.

So it's important to think about how flexible the engine/game should be (or stay) when major parts are based on raycasting to solve problems. It basically boils down to this: great flexibility requires you to limit the size of your world, or rather the number of objects in that world. Hence, there is a sweet spot somewhere. If the flexibility of the game should be great, then the game/gameplay needs to be realizable within a more limited world, i.e. with fewer objects. On the other hand, if there should be plenty of things to watch, like lots of eye-candy objects in a huge world, then this will restrict the flexibility of the game, considering you'd have to raycast against them all.

The above basically describes a method for creating games for indie developers running on a low budget. For one, you can't really create huge amounts of assets right from the get-go (not even speaking about optimizing them for different platforms). However, if your gameplay needs just a few objects, then you can basically raycast through the environment like crazy, solving a lot of problems and making the interaction very flexible. For example, you can keep the geometry of the objects very dynamic; it's no problem if intersecting them is more difficult, because there are only a few of them. And with objects being dynamic in shape, you can already build interesting gameplay from that alone.

Another interesting aspect of raycasting: the less sophisticated optimization you use, the better it can be parallelized over multiple cores. This basically rests on the fact that brute-force algorithms are usually trivial to parallelize (many independent sub-problems). Hence, instead of doing any heavy optimization (which will ultimately restrict the flexibility to some degree), you make up for it by distributing the rays over multiple cores, utilizing the many cores in a lot of machines these days. (Wasn't this the reason Pixar went away from REYES to a fully ray-based renderer?) This becomes even more feasible today than years ago due to unified memory: you don't have to partition the geometry, loading parts of it into the memory of a given core; in principle, you just partition the rays, with the only difficulty being balancing the load across the cores. If you have too many cores computing the hell out of it, the bottleneck may become memory contention, but looking at the high bandwidths of today and of the future, there should be enough room for some serious computation.

With the ever-increasing performance, such a solution becomes more and more feasible even at larger scales. I think this is a reason why many graphics engines now incorporate a lot more raycast/raytrace techniques for realizing otherwise difficult effects. I wouldn't be surprised if future consumer graphics accelerators feature ray-triangle intersection in hardware.

Anyhow, cool topic, lots of possibilities.
 
I understood some of that XD

My thing is they are very handy for getting the trajectory and impact point of a projectile where the projectile moves too fast for either Unity's rigidbody collision (even with continuous collision detection, objects can still pass right through others without registering) or translating vectors over time.

So for my use the line is only drawn once to hit either environment objects or enemy objects, and it can't be longer than the bounds of the screen, so I limit what the line intersects with. I'm not casting them out to infinity.

I do see people using them for enemy line of sight, which is fine, but they get used for activation, and I've always thought that to be limiting, since it's too easy for the player character to recognize that line of sight and avoid it due to it being a line. I prefer to use colliders as triggers, since the shape and placement can be tweaked for better results, then checking the overlap to see what's inside when need be.

I think the optimal use is for stuff like what I'm using it for. Knowing the start and end points, I can easily create a trajectory, and the intersection with a collider gives me the point of impact, which can be handy for detecting body, head or leg collision in case I want to use modifiers or mitigation for damage applied from the projectile or effect.

Sparingly for sure. Even though in MF1 I use 8 at any one time while moving. When idle, nothing is cast. When I move, I cast down and in the direction of movement while grounded. If in the air, the casts always follow the player's Y movement speed and are only as long as they need to be based on speed. Not much of a hit, but if I had every object on screen doing that then I'd be chugging, I'm sure.
 
I understood some of that XD

My thing is they are very handy for getting the trajectory and impact point of a projectile where the projectile moves too fast for either Unity's rigidbody collision (even with continuous collision detection, objects can still pass right through others without registering) or translating vectors over time.

Have you tried decreasing the physics timestep in the time manager?

Either way, it sounds like for your purposes what you're doing is fine, so I wouldn't worry about it too much.
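
(That's the Fixed Timestep value under Edit > Project Settings > Time, or from code - illustrative value only, tune it to your game:)

Code:
using UnityEngine;

// A smaller fixed timestep means more physics steps per second, which helps
// fast movers register collisions, at the cost of extra CPU time.
public class PhysicsStepTweak : MonoBehaviour
{
    void Awake()
    {
        Time.fixedDeltaTime = 0.01f; // Unity's default is 0.02 (50 steps/sec)
    }
}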
 