
Little Australian games company invents technology 100,000 times better

-SD-

Banned
ColonialRaptor said:
What happens when we can run atomic scale simulations and plug our brains into it. If the simulation is real enough, who's to say it's not real if it's using the same mathematical code as the universe?
This brings to mind a comment from Gabe Newell about extraterrestrial life and why we haven't seen any yet. He said something like: perhaps it's because they prefer to spend time inside their virtual worlds instead.

EDIT: Atomontage tech demo coming soon: http://www.youtube.com/user/AtomontageEngine

Atomontage said:
A tech demo should have come out already, but I stumbled upon serious issues with the generators - it looks like a single computer cannot produce this amount of content quickly enough to make the whole process usable in an interactive setup - so I've implemented a semi-automatic evolutionary "pipeline" to save time. Now it seems to work well, so early results should come out soon.
 

Tuck

Member
It looks cool, but as others have said, I want to see how it runs with physics and lighting and animation. The tech sounds neat though.

But my god the narrator has a really fucking annoying voice.
 
-SD- said:

I also like that the Atomontage engine is a hybrid (voxel and polygon) engine. They seem to be honest about the current limitations as well. Basically, they can have characters that are polygonal while the terrain is voxel-based.

http://atomontage.com/?id=dev_blog#jan03_2011
A hybrid solution
Atomontage Engine can mix polygon-based with voxel-based content and render both in real-time. Currently only static content can be voxel-based. The engine features an accelerated renderer, so it performs well in real-time on modern PCs as well as on older ones. It manages the LOD of the rendered data so that real-time performance can be achieved even in cases with very limited resources available for rendering.
 

-SD-

Banned
thehillissilent said:
I also like that the Atomontage engine is a hybrid (voxel and polygon) engine. [...] they can have characters that are polygonal and the terrain be voxel based.
Just like Outcast by Appeal, then.
 
mattp said:

Here's the article contents (basically Notch's (Minecraft creator) opinion on the Unlimited Detail tech):

Perhaps you’ve seen the videos about some groundbreaking “unlimited detail” rendering technology? If not, check it out here, then get back to this post: http://www.youtube.com/watch?v=00gAbgBu8R4

Well, it is a scam.

They made a voxel renderer, probably based on sparse voxel octrees. That’s cool and all, but… To quote the video, the island in the video is one km^2. Let’s assume a modest island height of just eight meters, and we end up with 0.008 km^3. At 64 atoms per cubic millimeter (four per millimeter), that is a total of 512 000 000 000 000 000 atoms. If each voxel is made up of one byte of data, that is a total of 512 petabytes of information, or about 170 000 three-terabyte hard drives full of information. In reality, you will need way more than just one byte of data per voxel to do colors and lighting, and the island is probably way taller than just eight meters, so that estimate is very optimistic.
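Notch's arithmetic is easy to verify. Here's a quick illustrative sketch (not part of his original post) that redoes the estimate step by step, assuming 1 PB = 10^15 bytes:

```python
# Sanity check of the storage estimate: a 1 km^2 island, 8 m tall,
# sampled at 64 voxels per cubic millimetre, 1 byte per voxel.

MM_PER_KM = 1_000_000

island_volume_mm3 = MM_PER_KM**2 * (8 * 1000)   # 1 km^2 footprint, 8 m (8000 mm) tall
voxels = island_volume_mm3 * 64                  # 4 x 4 x 4 samples per mm^3
raw_bytes = voxels * 1                           # 1 byte per voxel (optimistic)

petabytes = raw_bytes / 10**15
drives_3tb = raw_bytes / (3 * 10**12)

print(f"{voxels:.3e} voxels")            # ~5.12e+17
print(f"{petabytes:.0f} PB raw")         # 512 PB
print(f"{drives_3tb:,.0f} 3 TB drives")  # ~170,667 drives
```

So "about 170 000 three-terabyte hard drives" is, if anything, rounded down.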

So obviously, it’s not made up of that many unique voxels.

In the video, you can make out loads of repeated structures, all roughly the same size. Sparse voxel octrees work great for this, as you don’t need to have unique data in each leaf node, but can reference the same data repeatedly (at fixed intervals) with great speed and memory efficiency. This explains how they can have that much data, but it also shows one of the biggest weaknesses of their engine.
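The instancing trick Notch describes can be sketched in a few lines. This is purely illustrative (nothing to do with Euclideon's actual code): each child slot in an octree node holds a *reference*, so repeating an object costs one pointer rather than a copy of its voxel data.

```python
# Minimal sketch of subtree sharing in a sparse voxel octree:
# the same detailed subtree is referenced from many child slots.

class Node:
    __slots__ = ("children",)          # 8 child slots; None = empty space
    def __init__(self):
        self.children = [None] * 8

leaf = Node()                          # one detailed object, built once

root = Node()
for i in range(8):                     # "place" the object 8 times
    root.children[i] = leaf            # same subtree, shared by reference

# Logically the world contains 8 copies, but only 2 distinct nodes
# actually exist in memory.
distinct = {id(root)} | {id(c) for c in root.children}
print(len(distinct))  # 2
```

That's also why everything repeats at fixed grid intervals: the sharing falls out of the octree's cell structure.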

Another weakness is that voxels are horrible for doing animation, because there are currently no fast algorithms for deforming a voxel cloud based on a skeletal mesh, and if you do keyframe animation, you end up with a LOT of data. It’s possible to rotate, scale and translate individual chunks of voxel data to do simple animation (imagine one chunk for the upper arm, one for the lower, one for the torso, and so on), but it’s not going to look as nice as polygon-based animated characters do.
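The "one rigid transform per body part" idea Notch mentions can be sketched like this (an illustrative toy, with a made-up "lower arm" chunk): the voxel data stays fixed in chunk-local space, and animating only updates a per-chunk rotation and translation.

```python
# Rigid per-chunk animation sketch: each chunk keeps its local voxel
# coordinates; posing applies one rotation + translation per chunk.

import math

def rotate_z(p, angle):
    """Rotate point p = (x, y, z) about the z axis."""
    x, y, z = p
    c, s = math.cos(angle), math.sin(angle)
    return (x * c - y * s, x * s + y * c, z)

# a tiny "lower arm" chunk: voxel centres in chunk-local space
lower_arm = [(0, 0, 0), (1, 0, 0), (2, 0, 0)]

def place_chunk(voxels, angle, offset):
    """Pose a chunk: rotate every voxel centre, then translate it."""
    return [tuple(v + o for v, o in zip(rotate_z(p, angle), offset))
            for p in voxels]

# bend the "elbow" 90 degrees; the voxel data itself is untouched
posed = place_chunk(lower_arm, math.pi / 2, (5, 0, 0))
print(posed)
```

The catch is exactly what Notch says: rigid chunks give you robot-like joints, not the smooth skinned deformation polygon characters get.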

It’s a very pretty and very impressive piece of technology, but they’re carefully avoiding mentioning any of the drawbacks, and they’re pretending that what they’re doing is something new and impressive. In reality, it’s been done several times before.

There’s the very impressive looking Atomontage Engine: http://www.youtube.com/watch?v=Gshc8GMTa1Y

Ken Silverman (the guy who wrote the Build engine, used in Duke Nukem 3D) has been working on a voxel engine called Voxlap, which is the basis for Voxelstein 3d: http://www.youtube.com/watch?v=oB1eMC9Jdsw

And there’s more: http://www.youtube.com/watch?v=xUe4ofdz5oI http://www.youtube.com/watch?v=lEHIUC4LNFE http://www.youtube.com/watch?v=Zl9CiGJiZuc

They’re hyping this as something new and revolutionary because they want funding. It’s a scam. Don’t get excited.

Or, more correctly, get excited about voxels, but not about the snake oil salesmen.
 
Here's another blog from Notch, which addresses comments that he received after he posted his previous article, which was posted above in my previous post:

http://notch.tumblr.com/post/8423008802/but-notch-its-not-a-scam
“But Notch, it’s NOT a scam!”
I’ve been getting a bunch of feedback that my last blog post is wrong for various reasons, and I’d just like to say that I would absolutely LOVE to be proven wrong. Being wrong is awesome, that’s how you learn.

If you want to read my reasoning behind various assumptions, click “read more”.

Why I assume it’s voxels and not point clouds:

* Voxels store only the information about each point, and their positions are implicit in the location of where the voxel is stored. Point cloud data stores both the information about each point and the position of each point.
* They mention “64 atoms per cubic millimeter”, which is 4*4*4 points per mm^3. While it’s possible they only refer to the sampling frequency for turning polygonal structures into point data, the numbers are just too round for me to ignore as a programmer.
* All repeated structures in the world are all facing the same direction. To me, that means they aren’t able to easily rotate them arbitrarily.
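The first bullet, voxel positions being implicit while point clouds store them explicitly, is worth a quick illustrative sketch (byte sizes here are my own assumptions, not from Notch's post):

```python
# Voxel grid vs point cloud for the same 4 x 4 x 4 region:
# in a dense grid the position is implied by the array index,
# while a point cloud stores x, y, z per point.

N = 4                                    # 4 x 4 x 4 region = 64 cells

# voxel grid: one byte of material per cell, position = index
voxel_bytes = N * N * N * 1

# point cloud: same 64 points, but three 4-byte floats for position
# plus the same 1-byte attribute
point_bytes = N * N * N * (3 * 4 + 1)

print(voxel_bytes, point_bytes)  # 64 vs 832
```

The per-point position overhead is why "unlimited point cloud data" claims invite scrutiny of the storage math.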

About the size calculation:

* I was trying to show that there was no way there was that much UNIQUE data in the world, and that everything had to be made up of repeated chunks.
* One byte per voxel is way lower than the raw data you’d need. In reality, you’d probably want to track at least 24 bits of color and eight bits of normal vector data per voxel. That’s four times as much data. It’s quite possible you’d want to track even more data.
* If the data compresses down to 1%, it would still be 1 700 three-terabyte hard drives of data at one byte of raw data per voxel.
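That 1% figure can be checked the same way as the earlier estimate (again assuming 1 PB = 10^15 bytes):

```python
# Even with a very optimistic 1% compression ratio, the raw estimate
# still fills on the order of 1 700 three-terabyte drives.

raw_bytes = 512 * 10**15           # 512 PB raw, from the earlier estimate
compressed = raw_bytes * 0.01      # optimistic 1% compression ratio
drives = compressed / (3 * 10**12)
print(round(drives))               # ~1707
```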

Animated voxels:

* Holy crap, people sent me videos of this actually being done!
* I was wrong! :D
* http://www.youtube.com/watch?v=tkn6ubbp1SE
* (But please note that just that single animated character runs at 36 fps)

Why it’s a scam:

* They pretend like they’re doing something new and unique, but in reality a lot of people are researching this. There are a lot of known drawbacks to doing this.
* They refuse to address the known flaws. They don’t show non-repeated architecture, they don’t show animation, they don’t show rotated geometry, and they don’t show dynamic lighting.
* They invent new terminology and use superlatives and plenty of unverifiable claims.
* They say it’s a “search algorithm”. That’s just semantics to confuse the issue. A sparse voxel octree is a search algorithm for doing very fast ray casting in a voxel space.
* They seem to be doing some very impressive voxel rendering stuff, which could absolutely be used to make very interesting games, but it’s not as great as they claim it is. The only reason I can see for them misrepresenting it this badly is that I assume they’re looking for funding and/or to get bought up.
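The "search algorithm" point deserves a concrete illustration. A toy sketch (my own, not Euclideon's or Notch's code): locating the leaf containing a point in an octree is a logarithmic descent, picking one of eight children per level, which is exactly the kind of "search" that makes octree ray casting fast.

```python
# Toy octree point lookup: descend one of 8 octants per level until
# hitting a leaf (solid material) or empty space (None).

def locate(node, x, y, z, size):
    """Find the leaf containing (x, y, z) in a cube of side `size`.
    Nodes are dicts; inner nodes carry a 'children' list of 8."""
    while node.get("children"):
        half = size / 2
        # pack the three "which half?" tests into an octant index 0..7
        octant = (x >= half) | ((y >= half) << 1) | ((z >= half) << 2)
        node = node["children"][octant]
        if node is None:
            return None          # empty space: nothing to hit here
        x, y, z = x % half, y % half, z % half
        size = half
    return node

solid = {"material": 1}
root = {"children": [solid] + [None] * 7}   # only octant 0 is filled
print(locate(root, 0.25, 0.25, 0.25, 1.0))  # {'material': 1}
print(locate(root, 0.75, 0.25, 0.25, 1.0))  # None
```

A ray caster repeats this kind of descent along the ray, skipping empty octants wholesale, so calling it a "search algorithm" is accurate but not novel.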

If these guys were being honest with the drawbacks and weaknesses of their system, I’d be their biggest fan. As it is now, it’s almost like they’re trying NOT to be trustworthy.

All this said, voxels are amazing. So is raytracing and raycasting. As computers get more powerful, and storage gets faster and cheaper, we will see amazing things happen.

And a final word to the engineers who worked on this: Great job, I am impressed! But please tell your marketing department to stop lying. ;)
 

Zaptruder

Banned
Giving the guy the benefit of the doubt doesn't hurt us much - but he is doing well at ticking all the shyster boxes.

If it pans out; fantastic. If it doesn't, well, didn't expect it to anyway!

If they've already got the Aussie government helping to fund them, it doesn't sound like they're in need of investors.
 
Zaptruder said:
Giving the guy the benefit of the doubt doesn't hurt us much - but he is doing well at ticking all the shyster boxes.

If it pans out; fantastic. If it doesn't, well, didn't expect it to anyway!

If they've already got the Aussie government helping to fund them, it doesn't sound like they're in need of investors.
The Australian government is investing heavily in games at the moment.
 
Ydahs said:
This is a different video. A followup to the one you both linked.

As they've mentioned several times in that video, there's still a long way to go. But the improvement between the old video and the new one shows great progression. Two key points which haven't been demonstrated, though, are animation and collision. If those get nailed, exciting times await us.
I'm going to guess that things like collision boxes will still be polygonal but invisible, while the visual side of things can use this new tech.

Still a long way to go but I do hope something cool comes out of this, real or not.
 

Utako

Banned
I'm really impressed, but based on the progress between the 2010 demo and this one, it looks like they have about 3 or 4 more years to go. Maybe for PS5!

Don't expect established gamemakers to fall in love with new paradigms. It's like Flash developers who scoff at HTML5 - they don't like having their milkshake drunk, so they will do anything to ostracize this thing that they don't want to think about.
 