
Media Molecule Announces New IP, Dreams

Stampy

Member
I'm on an ultra shitty connection on vacation, so I can't check the new site, but just wondering: what's new compared to the one published when they revealed the game at E3?
 

Eggbok

Member
dreams-ps4-screenshot-05.jpg


If I can make something this huge and open, good god.
Hogwarts, here we come!

tumblr_mubhedqrAL1sj57joo9_500.gif
 

bock

Member
There's a talk by mmalex tomorrow at siggraph
http://advances.realtimerendering.com/s2015/index.html

"Abstract: Over the last 4 years, MediaMolecule has been hard at work to evolve its brand of ‘creative gaming’. Dreams has a unique rendering engine that runs almost entirely on the PS4’s compute unit (no triangles!); it builds on scenes described through Operationally Transformed CSG trees, which are evaluated on-the-fly to high resolution signed distance fields, from which we generate dense multi-resolution point clouds. In this talk we will cover our process of exploring new techniques, and the interesting failures that resulted. The hope is that they provide inspiration to the audience to pursue unusual techniques for real-time image formation. We will chart a series of different algorithms we wrote to try to render ‘Dreams’, even as its look and art direction evolved. The talk will also cover the renderer we finally settled on, motivated as much by aesthetic choices as technical ones, and discuss some of the current choices we are still exploring for lighting, anti-aliasing and optimization."
 

Shin-Ra

Junior Member
There's a talk by mmalex tomorrow at siggraph
http://advances.realtimerendering.com/s2015/index.html
Unfortunately the new pic's a bit nondescript.

Drxn5j.jpg


Maybe it's one of the failed experiments!

The Horizon one's a bit more exciting.

vZJ3HQ.png

Abstract: Real-time volumetric clouds in games usually pay for fast performance with a reduction in quality. The most successful approaches are limited to low altitude fluffy and translucent stratus-type clouds. For Horizon: Zero Dawn, Guerrilla need a solution that can fill a sky with evolving and realistic results that closely match highly detailed reference images which represent high altitude cirrus clouds and all of the major low level cloud types, including thick billowy cumulus clouds. These clouds need to light correctly according to the time of day and other cloud-specific lighting effects. Additionally, we are targeting GPU performance of 2ms. Our solution is a volumetric cloud shader which handles the aspects of modeling, animation and lighting logically without sacrificing quality or draw time. Special emphasis will be placed on our solutions for direct-ability of cloud shapes and formations as well as on our lighting model and optimizations.
 

Chobel

Member
There's a talk by mmalex tomorrow at siggraph
http://advances.realtimerendering.com/s2015/index.html

"Abstract: Over the last 4 years, MediaMolecule has been hard at work to evolve its brand of ‘creative gaming’. Dreams has a unique rendering engine that runs almost entirely on the PS4’s compute unit (no triangles!); it builds on scenes described through Operationally Transformed CSG trees, which are evaluated on-the-fly to high resolution signed distance fields, from which we generate dense multi-resolution point clouds. In this talk we will cover our process of exploring new techniques, and the interesting failures that resulted. The hope is that they provide inspiration to the audience to pursue unusual techniques for real-time image formation. We will chart a series of different algorithms we wrote to try to render ‘Dreams’, even as its look and art direction evolved. The talk will also cover the renderer we finally settled on, motivated as much by aesthetic choices as technical ones, and discuss some of the current choices we are still exploring for lighting, anti-aliasing and optimization."

OK, that's creative alright. I would like to know how the assets/models are presented in the game.

And I also didn't see the squatting human at first. I thought it was some monster opening its mouth.
 

GribbleGrunger

Dreams in Digital
OK, that's creative alright. I would like to know how the assets/models are presented in the game.

And I also didn't see the squatting human at first. I thought it was some monster opening its mouth.

It's an emaciated young girl leant up against the corner of a room.
 

Shin-Ra

Junior Member
OK, that's creative alright. I would like to know how the assets/models are presented in the game.

And I also didn't see the squatting human at first. I thought it was some monster opening its mouth.
I like to look at it as millions of differently sized, shaped and coloured particles, all arranged to make up the underlying scene, with more important foreground objects made up of more densely packed particles.
 

Pie and Beans

Look for me on the local news, I'll be the guy arrested for trying to burn down a Nintendo exec's house.
Loving the sound of that rendering approach. By not having clinically determined models, you get the wispy "it's forming from dense clouds" feel to the game's graphics, and as such the physicality of that sells it.

I'll be interested in what frame-rate hit that's going to have and also how suitable it'll be for VR. It's one thing to have that approach for rendering to your 2D screen, possibly another in stereoscopic?
 

Eggbok

Member
There's a talk by mmalex tomorrow at siggraph
http://advances.realtimerendering.com/s2015/index.html

"Abstract: Over the last 4 years, MediaMolecule has been hard at work to evolve its brand of ‘creative gaming’. Dreams has a unique rendering engine that runs almost entirely on the PS4’s compute unit (no triangles!); it builds on scenes described through Operationally Transformed CSG trees, which are evaluated on-the-fly to high resolution signed distance fields, from which we generate dense multi-resolution point clouds. In this talk we will cover our process of exploring new techniques, and the interesting failures that resulted. The hope is that they provide inspiration to the audience to pursue unusual techniques for real-time image formation. We will chart a series of different algorithms we wrote to try to render ‘Dreams’, even as its look and art direction evolved. The talk will also cover the renderer we finally settled on, motivated as much by aesthetic choices as technical ones, and discuss some of the current choices we are still exploring for lighting, anti-aliasing and optimization."

Is this streamed?
 

kyser73

Member
In graphics, you're going to have some kind of geometry representation. The typical one is a mesh of triangles approximating a shape. So typical that it's universal in games.

But you can also use mathematical equations that describe a shape - let's say a sphere.

One such equation is one that takes a point in 3D space, and returns the shortest distance from that point to the surface of the shape.

You can use that function in rendering - in a ray tracer, for example, to figure out the point on the sphere that a camera's pixel should be rendering.

So instead of putting a bunch of polygons representing a sphere down a rendering pipeline, you can trace a ray per pixel and evaluate precisely what point on the sphere that pixel should be looking at. You've probably heard of 'per pixel' effects in other contexts - this would be like 'per pixel geometry'.
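
To make that concrete, here's a minimal sketch in plain Python (purely illustrative, not anything from Media Molecule's code) of a sphere distance function and the ray-marching loop a ray tracer could use to find the surface point for one pixel's ray:

```python
import math

def sdf_sphere(p, center=(0.0, 0.0, 5.0), radius=1.0):
    """Signed distance from point p to a sphere's surface (negative inside)."""
    dx, dy, dz = p[0] - center[0], p[1] - center[1], p[2] - center[2]
    return math.sqrt(dx*dx + dy*dy + dz*dz) - radius

def sphere_trace(origin, direction, sdf, max_steps=64, eps=1e-4, max_dist=100.0):
    """March a ray (direction assumed normalised): repeatedly step forward by the
    distance the SDF reports. Returns the hit point, or None if the ray misses."""
    t = 0.0
    for _ in range(max_steps):
        p = (origin[0] + direction[0]*t,
             origin[1] + direction[1]*t,
             origin[2] + direction[2]*t)
        d = sdf(p)
        if d < eps:          # close enough to the surface: report a hit
            return p
        t += d               # safe step: nothing can be closer than d
        if t > max_dist:
            break
    return None

# One ray straight down the camera's z axis hits the sphere at z = 4.
print(sphere_trace((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), sdf_sphere))
```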

You can do interesting things with these functions. You can very simply blend shapes together with a mathematical operation between two shapes' functions. You can subtract one shape from the other, with another operation. You can deform shapes in lots of interesting ways. Add noise, twist them. For example, here's a shape with a little deformation on the surface described by a function using 'sin':

Bo6IkBiIQAA8Hfm.png:large


This render was produced with a tracing of the function - at every pixel the function was evaluated multiple times to figure out the point the pixel was looking at. Notice how smooth it is - you're not seeing any polygonal edges or the like here. It's a very precise way of rendering geometry.
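
For a concrete flavour of those blend/subtract/deform operations, here's a hedged sketch using the standard min/max tricks on distance functions (the exact operators and noise functions Dreams uses aren't public from the abstract alone):

```python
import math

def sphere(p, r=1.0):
    return math.sqrt(p[0]**2 + p[1]**2 + p[2]**2) - r

def box(p, half=(0.8, 0.8, 0.8)):
    # Common box SDF: distance outside plus (negative) distance inside
    qx, qy, qz = abs(p[0]) - half[0], abs(p[1]) - half[1], abs(p[2]) - half[2]
    outside = math.sqrt(max(qx, 0.0)**2 + max(qy, 0.0)**2 + max(qz, 0.0)**2)
    inside = min(max(qx, max(qy, qz)), 0.0)
    return outside + inside

def union(d1, d2):     return min(d1, d2)    # blend two shapes together
def subtract(d1, d2):  return max(d1, -d2)   # carve shape 2 out of shape 1
def intersect(d1, d2): return max(d1, d2)

def wobbly_sphere(p, amp=0.1, freq=10.0):
    # Deform the surface with a sin-based displacement, as in the picture above
    return sphere(p) + amp * math.sin(freq*p[0]) * math.sin(freq*p[1]) * math.sin(freq*p[2])

p = (0.5, 0.0, 0.0)   # inside the box, but inside the carved-out sphere too
print(subtract(box(p), sphere(p, 0.9)))   # positive: the sphere removed this region
print(wobbly_sphere((0.3, 0.4, 0.2)))     # negative: inside the displaced sphere
```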

Now this isn't exactly what Media Molecule is doing. And here's where I diverge into speculation based on tweets and stuff.

(I think) MM is taking mathematical functions mentioned above and evaluating them at points in a 3D texture. So turning them into something they can look up at discrete points - which is a lot faster than calculating the functions from scratch. So, when you're sculpting in the game, it'll be baking the results of these geometry defining mathematical functions and operations into a 3D texture. So that's turning the distance functions into a more explicit representation which you might see referred to as a distance field.
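
A rough sketch of that baking idea follows; the grid resolution, bounds and nearest-sample lookup here are illustrative guesses, not MM's actual numbers or texture format:

```python
import math

def sdf_sphere(p, radius=1.0):
    return math.sqrt(p[0]**2 + p[1]**2 + p[2]**2) - radius

N = 32                # grid resolution per axis (illustrative)
LO, HI = -2.0, 2.0    # world-space bounds the grid covers

def bake_distance_field(sdf):
    """Evaluate the SDF once per grid cell; a sculpt edit would re-bake the affected cells."""
    field = [[[0.0]*N for _ in range(N)] for _ in range(N)]
    for i in range(N):
        for j in range(N):
            for k in range(N):
                p = tuple(LO + (HI - LO) * (c + 0.5) / N for c in (i, j, k))
                field[i][j][k] = sdf(p)
    return field

def lookup(field, p):
    """Cheap nearest-sample lookup instead of re-evaluating the functions from scratch.
    (A real renderer would use trilinear filtering in an actual 3D texture.)"""
    idx = [min(N - 1, max(0, int((c - LO) / (HI - LO) * N))) for c in p]
    return field[idx[0]][idx[1]][idx[2]]

field = bake_distance_field(sdf_sphere)
print(lookup(field, (0.0, 0.0, 1.05)))   # roughly 0: near the sphere's surface
```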

To render the object represented by that texture, they have a couple of options. They could trace a ray and look up this 3D texture as necessary to figure out the point on the surface to be shaded - which is what I thought they were doing previously. But a more recent tweet suggests they are 'splatting' the distance field to the screen, which is sort of a reverse way of doing things. They'll be explaining this at Siggraph.

The advantages are the ease of deforming geometry with relatively simple operations. Doing robust deformation and boolean (addition/subtraction/intersection etc.) operations with polygonal meshes is really hard. Knowledge of 'the shortest distance from a given point to that surface' can also be applied in lots of other areas - ambient occlusion, shadowing, lighting, physics (collision detection). It's a handy representation to have for doing things that are trickier with polygons. UE4 has recently added the option to represent geometry with distance fields for high quality shadowing.

The disadvantage of this - and of using it from top to toe in your pipeline! - is that it's tricky to do fast, and obviously GPUs and content pipelines etc. are so based around the idea of triangle rasterisation. But GPUs have gotten a lot more flexible lately, so maybe as time wears on we'll see even more non-traditional 'software rendering' on the GPU.

Requoting from earlier in the thread. For those wondering how the 'no polys' thing works, search for gofreak & Crispy75 on thread tools. I would quote all of them but am on mobile.
 

gofreak

GAF's Bob Woodward
From the Siggraph abstract, their approach is a little different from what I anticipated in the post quoted above.

At the base would still be these signed distance functions: each leaf in the CSG tree would be an SDF, and their parents would be some boolean operation.

They're still filling out an explicit representation, a volume texture (signed distance field), from those functions.

But unlike the presumption in the above post, I don't think they're casting a ray through the signed distance fields. They're generating point clouds at various resolutions and then rendering that. They could represent and render the point cloud with triangles and rasterisation, but since they say there are no triangles, they are perhaps doing something else, like maybe splatting the points from a different representation in GPU memory.

I'm kind of curious why there's the intermediate step of generating signed distance fields, though - why not generate the point clouds directly from the CSG tree and its functions? Maybe the distance fields serve other purposes (e.g. in physics) that the point cloud wouldn't be so suitable for, but the point cloud is more efficient for rendering.
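
For intuition on that point-generation step, here's a toy sketch that pulls a point cloud out of a baked distance field by keeping cells that straddle the surface; MM's real point generation and multi-resolution clustering is surely more sophisticated:

```python
import math

def sdf_sphere(p, radius=1.0):
    return math.sqrt(p[0]**2 + p[1]**2 + p[2]**2) - radius

N, LO, HI = 48, -1.5, 1.5
CELL = (HI - LO) / N

def surface_points(sdf):
    """Collect one point per grid cell that lies on the surface (|distance| < half a cell).
    Doing this per SDF mip level would give multi-resolution point clouds."""
    points = []
    for i in range(N):
        for j in range(N):
            for k in range(N):
                p = tuple(LO + CELL * (c + 0.5) for c in (i, j, k))
                if abs(sdf(p)) < CELL * 0.5:
                    points.append(p)
    return points

cloud = surface_points(sdf_sphere)
print(len(cloud), "surface points at this resolution")
```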

Anyways, we'll find out for sure shortly! Interested also to see the approaches they abandoned.
 

pswii60

Member
Really intrigued by this. It could be absolutely amazing or terrible. But I love the risk-taking. Kudos to Sony for still green-lighting daring projects like this; it's how incredible things can happen.
 

spekkeh

Banned
But unlike the presumption in the above post, I don't think they're casting a ray through the signed distance fields. They're generating point clouds at various resolutions and then rendering that. They could represent and render the point cloud with triangles and rasterisation, but since they say there are no triangles, they are perhaps doing something else, like maybe splatting the points from a different representation in GPU memory.

I'm kind of curious why there's the intermediate step of generating signed distance fields, though, why they don't generate the point clouds directly from the csg tree and its functions... Maybe the distance fields serve other purposes (e.g. in physics) that the point cloud wouldn't be so suitable for, but the point cloud is more efficient for rendering.

Anyways, we'll find out for sure shortly! Interested also to see the approaches they abandoned.
Wouldn't this have to do with being able to draw with the PlayStation Move? So it creates point clouds as you move the controller and turns these into 3D blobs, which would be more efficient than creating meshes? Or do you just mean the rendering and not the creation of the geometry?
 

Stampy

Member
From the posted slides:

"Amusingly, the new CS based splatter beats the rasterizer due to not wasting time on all the alpha=0 pixels. That also means our ‘splats’ need not be planar any more,
however, we don’t yet have an art pipe for non-planar splats so for now the artists don’t know this! Wooahaha!"

What does this quote mean?
 

gofreak

GAF's Bob Woodward
That's a pretty frightening amount of dev and experimentation they went through. Great to have it compiled together for people to get inspiration from. Seems like it was a lot of blood, sweat and tears for the team though.

There's some stuff at the end of the presentation, some screens, but not sure if they're concept artwork or captures from scenes in the game.
 

Shin-Ra

Junior Member
I understood a little of that... ;¿

TAA everything!


Here's the most recent splat-based surface rendering approach, which is a bit easier to understand with the accompanying pictures.
19905610623_b1efcaa4c3_o.jpg


I can’t find many good screenshots but here’s an example of the density, turned down by a factor of 2x to see what’s going on.

My initial tests here were all PS/VS using the PS4 equivalent of glPoint. It wasn't fast, but it showed the potential. I was using russian roulette to do 'perfect' stochastic LOD, targeting a 1 splat to 1 screen pixel rate, or just under.

At this point we embraced TAA *bigtime* and went with ‘stochastic all the things, all the time!’. Our current frame, before TAA, is essentially verging on white noise. It’s terrifying. But I digress!

Level of Detail scaling
For rendering, we arranged the clusters for each model into a BVH. We also computed a separate point cloud, clustering and BVH for each mipmap (LOD) of the filtered SDF. To smooth the LOD transitions, we use russian roulette to adapt the number of points in each cluster from 256 smoothly down to 25%, i.e. 256 down to 64 points per cluster, then drop to the next LOD.

Simon wrote some amazingly nicely balanced CS splatters that hierarchically cull and refine the precomputed clusters of points, compute bounds on the russian roulette rates, and then pack reduced cluster sets into groups of ~64 splats.

So in this screenshot the color cycling you can see is visualizing the steps through the different degrees of decimation, from <25%, <50%, <75%, then switching to a completely different power of 2 point cloud;

19905613883_ba7d718f52_o.jpg
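
A toy sketch of that russian-roulette LOD idea, using the numbers from the slide text above (256-point clusters kept smoothly down to 25% before dropping to the next LOD); everything else here is made up for illustration:

```python
import random

def decimate_cluster(points, keep_rate):
    """Russian roulette: keep each point independently with probability keep_rate,
    giving a smoothly varying subset of the cluster."""
    return [p for p in points if random.random() < keep_rate]

def points_for_model(lods, target_splats):
    """Pick the LOD whose clusters can be rouletted between 100% and 25% to hit
    roughly target_splats (about one splat per covered screen pixel), then
    decimate every cluster at that rate."""
    for level, clusters in enumerate(lods):
        full_count = sum(len(c) for c in clusters)
        keep_rate = target_splats / max(full_count, 1)
        if keep_rate >= 0.25 or level == len(lods) - 1:
            keep_rate = min(1.0, keep_rate)
            return [p for c in clusters for p in decimate_cluster(c, keep_rate)]

# Three mock LODs of one model: 16, 4 and 1 clusters of 256 points each.
lods = [[list(range(256)) for _ in range(n)] for n in (16, 4, 1)]
print(len(points_for_model(lods, target_splats=600)))   # roughly 600 points survive
```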

Splats into Strokes
20500285056_e0f1c6b548_o.jpg


What you see is the ‘tight’ end of our spectrum. i.e. the point clouds are dense enough that you see sub pixel splats everywhere. The artist can also ‘turn down’ the density of points, at which point each point becomes a ‘seed’ for a traditional 2d textured quad splat. Giving you this sort of thing:

19905616693_a8704f173f_o.jpg
We use pure stochastic transparency, that is, we just randomly discard pixels based on the alpha of the splat, and let TAA sort it out. It works great in static scenes. However the traditional ‘bounding box in color space’ to find valid history pixels starts breaking down horribly with stochastic alpha, and we have yet to fully solve that. So we are still in fairly noisy/ghosty place. TODO!
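
A minimal sketch of that stochastic-alpha idea for a single pixel (the real thing happens per splat in the shader and relies on TAA to average the noise across frames):

```python
import random

def shade_pixel_stochastic(splat_color, splat_alpha, background):
    """Instead of blending, keep the splat's colour with probability alpha and
    discard it otherwise; averaged over many frames (TAA) this converges to
    ordinary alpha blending without any sorting."""
    return splat_color if random.random() < splat_alpha else background

# Average 1000 'frames' of one pixel: converges toward 0.3*1.0 + 0.7*0.0 = 0.3
samples = [shade_pixel_stochastic(1.0, 0.3, 0.0) for _ in range(1000)]
print(sum(samples) / len(samples))
```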

We started by rendering the larger strokes - we call them megasplats - as flat quads with the rasterizer. That's what you see here, and in the E3 trailer.
20526550285_1ba66a4d2c_o.jpg


Interestingly, Simon tried making a pure CS ‘splatting shader’, that takes the large splats, and instead of rasterizing a quad, we actually precompute a ‘mini point cloud’ for the splat texture, and blast it to the screen using atomics, just like the main point cloud when it’s in ‘microsplat’ (tight) mode.

20500308176_13d5c66546_o.jpg
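
A CPU-side sketch of how that kind of atomic splatting can work: depth is packed into the high bits of a 64-bit word so a single atomic min both depth-tests and writes. The packing and formats here are assumptions, not the slide's actual code:

```python
FAR = (0xFFFFFFFF << 32) | 0xFFFFFFFF   # 'empty' pixel: max depth, no colour

def splat(framebuffer, width, x, y, depth, rgba):
    """Pack (depth, colour) into one 64-bit word and keep the minimum per pixel.
    On the GPU this min would be a 64-bit atomic, so many compute threads can
    splat points to the same pixel without a separate z-buffer pass."""
    packed = (int(depth * 0xFFFFFFFF) << 32) | rgba
    idx = y * width + x
    framebuffer[idx] = min(framebuffer[idx], packed)

W, H = 4, 4
fb = [FAR] * (W * H)
splat(fb, W, 1, 1, 0.50, 0x00FF00FF)    # a farther green splat
splat(fb, W, 1, 1, 0.25, 0xFF0000FF)    # a nearer red splat wins the pixel
print(hex(fb[1 * W + 1] & 0xFFFFFFFF))  # -> 0xff0000ff
```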

CLOUDS³
So now we have a scene made up of a whole cloud of sculpts...

which are point clouds, and each point is itself, when it gets close enough to the camera, an (LOD adapted) ‘mini’ point cloud. Close up, these mini point clouds representing a single splat get ‘expanded’ to a few thousand points (conversely, in the distance or for ‘tight’ objects, the mini point clouds degenerate to single pixels).
Amusingly, the new CS based splatter beats the rasterizer due to not wasting time on all the alpha=0 pixels. That also means our ‘splats’ need not be planar any more, however, we don’t yet have an art pipe for non-planar splats so for now the artists don’t know this! Wooahaha!

20533430441_a318a9f3bd_o.jpg

That means that if I were to describe what the current engine is, I’d say it’s a cloud of clouds of point clouds. :)

Depth-of-Field-like blurring
20338472560_96108acf66_o.jpg


Incidentally, this atomic based approach means you can do some pretty insane things to get DOF like effects: instead of post blurring, this was a quick test where we simply jittered the splats in a screenspace disc based on COC, and again let the TAA sort it all out.
It doesn't quite look like blur, because it isn't - it's literally the objects exploding a little bit - but it's cool and has none of the usual occlusion artefacts :)

We’ve left it in for now as our only DOF.
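
A toy sketch of that jitter-based DOF; the circle-of-confusion formula and radii are placeholders, the point is just that each splat is offset randomly within its CoC disc every frame and TAA accumulates it into a blur:

```python
import math, random

def circle_of_confusion(depth, focus_depth, strength=8.0):
    """Hypothetical CoC in pixels: grows as the splat moves away from the focal plane."""
    return strength * abs(depth - focus_depth) / max(depth, 1e-3)

def jitter_splat(screen_x, screen_y, depth, focus_depth):
    """Offset the splat's screen position by a random point inside its CoC disc.
    Over many TAA frames the accumulated image reads as depth-of-field blur."""
    radius = circle_of_confusion(depth, focus_depth) * math.sqrt(random.random())
    angle = random.uniform(0.0, 2.0 * math.pi)
    return screen_x + radius * math.cos(angle), screen_y + radius * math.sin(angle)

print(jitter_splat(320.0, 240.0, depth=5.0, focus_depth=2.0))   # out of focus: big jitter
print(jitter_splat(320.0, 240.0, depth=2.0, focus_depth=2.0))   # in focus: no jitter
```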

There's loads more from the years of work leading up to the splat solution, and more on shadows and lighting.

20517695142_dd37388fdf_o.jpg


19904022784_ed643b187c_o.jpg


19906914743_21b5a525c4_o.jpg
 

Samemind

Member
I don't know nearly enough about any of that to "get it", but it's still mind-blowing to me that there's 3D graphics without vertices or voxels (I think these aren't voxels?). Am I crazy, or is this hugely innovative? I'm super excited at the prospect of being fully freed from all the limitations that LBP had, at least as far as visual assets go. Top of my most anticipated, for sure.
 