
Unreal Engine 5: Edge Magazine

[blurred scan of the Edge article pages]



The new Edge magazine (#347) is out, and it covers the technology with interviews with key players.
It doesn't go too deep, but there are some interesting comments on how it works.

One of the big revelations, to anyone who did any background digging on Brian Karis, the Technical Director of Graphics at Epic Games, is that while he once talked about geometry textures as a way to visualize geometry, he later moved on to something else after being "inspired" by a talk John Carmack gave over a decade ago, an idea I myself have clung onto for far too long. John talked about representing data in Sparse Voxel Octrees, which are somewhat more complex than (but not entirely a sea change from) traditional mip-mapped textures. I had presumed that Brian had delved into this tech in order to create an easy traversal model to pull data from. Indeed, this is essentially what we call a BVH, or bounding volume hierarchy, used for ray tracing today.

This is basically a lot of words for what amounts to a bounding box in the game engine, except that instead of registering a hit on an opponent, a collision, or a ray, hitting the box tells us it's possibly an entry point to more data. So if we enter the box, we can quickly check whether there are sub-boxes within it, and so on until we reach some data. A hierarchy! It's sparse because most of these structures contain no data themselves, just pointers to sub-data.
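A minimal sketch of that idea (all names here are my own toy code, not anything from Epic): a node either holds payload data at a leaf or pointers to child boxes, and most interior nodes store nothing themselves.

```python
class Node:
    def __init__(self, bounds, children=None, data=None):
        self.bounds = bounds            # (min_xyz, max_xyz) axis-aligned box
        self.children = children or []  # sub-boxes; sparse, empty cells omitted
        self.data = data                # payload only at the leaves

def contains(bounds, point):
    lo, hi = bounds
    return all(lo[i] <= point[i] <= hi[i] for i in range(3))

def query(node, point):
    """Descend the hierarchy: hitting a box is just an entry point to more boxes."""
    if not contains(node.bounds, point):
        return None
    for child in node.children:
        hit = query(child, point)
        if hit is not None:
            return hit
    return node.data  # a leaf's payload, or None for an empty interior node
```

The traversal never touches cells that were never created, which is the whole point of the sparseness.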

Look at a massive world and imagine it's made of large LEGO blocks, except the closer you get to them, the more sub-LEGO blocks appear, giving more detail. This sounds great but doesn't look fantastic in practice, as you literally end up with blocks. Instead, we use the blocky structure as an invisible overlay: rather than rendering the blocks themselves, we store polygonal meshes in the nodes, which may be subdivided, or just render simple data structures as polygons (Minecraft). This structure is essentially a database optimized for quick lookup of meshes or their sub-meshes. We don't even need high throughput for the source of this data, just low latency and sufficient bandwidth for the most part.
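The "more sub-blocks as you get closer" part can be sketched as a distance-based level-of-detail pick (my own toy heuristic, not Nanite's actual metric): halve the block size per level until its projected size on screen fits an error budget.

```python
def select_lod(distance, node_size, max_depth, pixel_error=1.0):
    """Choose how deep to descend the hierarchy for a block at `distance`.

    Each level subdivides blocks in half; we stop once the block's rough
    projected size (size / distance) is within the error budget.
    """
    depth = 0
    size = node_size
    while depth < max_depth and size / max(distance, 1e-6) > pixel_error:
        size /= 2.0   # descend one level: blocks become sub-blocks
        depth += 1
    return depth
```

Closer cameras get a larger depth, i.e. finer sub-meshes pulled from deeper nodes; far-away terrain is served straight from coarse nodes.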

Virtual Texturing does nearly the same thing, except with less complex data structures. It boiled down to mapping terrain to texture IDs, loading the right texture/mip, depending on camera properties, from a page that was either already in RAM or could be swapped into RAM, and keeping a buffer telling the engine what was loaded and at what level, so the engine could determine what to load when you move the camera, etc. They did some horrible tricks, such as overloading pages with duplicate data, to speed things up, but it worked. I find this much easier to visualize than storing an octree of meshes with different detail levels in each node. I deal with database optimizations all day, but damn does this sound hard.
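The paging bookkeeping described above can be sketched like this (a hedged toy, not id's implementation; all names are made up): a page table maps (page, mip) to a slot in RAM, misses fall back to a coarser resident mip, and every miss is recorded so the streamer knows what to fetch next.

```python
class PageTable:
    def __init__(self):
        self.resident = {}      # (x, y, mip) -> physical cache slot in RAM
        self.requests = set()   # feedback: pages wanted but not resident yet

    def load(self, x, y, mip, slot):
        """Mark a page as swapped into RAM at the given cache slot."""
        self.resident[(x, y, mip)] = slot

    def lookup(self, x, y, mip, max_mip):
        """Return the best resident page, requesting the exact one on a miss."""
        if (x, y, mip) not in self.resident:
            self.requests.add((x, y, mip))   # tell the streamer what we wanted
        # fall back toward coarser mips (each level halves the page coords)
        while mip < max_mip and (x, y, mip) not in self.resident:
            x, y, mip = x // 2, y // 2, mip + 1
        return self.resident.get((x, y, mip))
```

The `requests` set is the "buffer that tells the engine what to load" from the paragraph above: the renderer never stalls, it just draws the coarse fallback this frame and gets the sharp page a few frames later.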

So back to Edge.

Brian Karis said:
The tech goes far beyond backface culling (which detects which polygons are facing away from the view and doesn't draw them, saving on processing power).
"It's in the form of textures," Karis explains. It's actually like, what are the texels of the texture that are actually landing on pixels in your view? So, it's in the frustum....


...It's a very accurate algorithm, because when you're asking for it, it's requesting it. But because it's in that sort of paradigm, that means as soon as you request it, we need to get that data in very quickly."
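One way to read that quote, purely my own illustration and not Epic's algorithm: for each visible sample, work out which texel, at which mip, actually lands on that pixel; the union over the frame is exactly the data being requested, and anything outside the frustum is never asked for at all.

```python
import math

def required_mip(texture_size, uv_per_pixel):
    """Aim for roughly one texel per pixel: coarser mip when texels are denser."""
    texels_per_pixel = uv_per_pixel * texture_size
    return max(0, math.floor(math.log2(max(texels_per_pixel, 1.0))))

def feedback_pass(samples, texture_size):
    """samples: (u, v, uv_per_pixel) for pixels inside the frustum only.

    Returns the set of (texel_x, texel_y, mip) the view is requesting.
    """
    needed = set()
    for u, v, uv_per_pixel in samples:
        mip = required_mip(texture_size, uv_per_pixel)
        size = texture_size >> mip          # texture shrinks per mip level
        needed.add((int(u * size), int(v * size), mip))
    return needed
```

The "very accurate" part is that nothing speculative gets loaded: the request set is driven entirely by what landed on screen this frame.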

So maybe we can throw away the assumptions that I, and possibly others, made: that the tech, while inspired by Carmack and Olick, is built on those voxel structures. Instead, maybe they are somehow using the UV map in reverse, or something like it, to derive the mesh to draw, perhaps in combination with the above?

It threw me, and maybe Edge merged the talk of Virtual Textures into Virtual Geometry, but somehow using the UV map to project expected texel usage would be something! I still need a data structure like the mesh to know the angles of what I am going to draw, but mapping Voxel -> Texel -> UV map?
 

Bogey

That is... quite surprising to me, actually.

BVHs aren't exactly new, nor particularly complex. So how come something Nanite-like hasn't been used before? Simply a question of (lack of) available memory size/throughput/latency?
 

D.Final


Definitely interesting article
 
The reason the article is blurred is that I didn't want to steal the content. It is available for a few dollars, and I'm not sure of the mod policy on sharing it. I will say it's light enough, and the quote I pulled is, for me, the most telling.
 
Can you post some more?

Interesting snippet
That's my own interpretation. I don't want to stoke any fires, but generally this tech thrives on fetching as little data as possible in the deltas, with low latency. The tech doesn't do this for you; you need an engine built to take advantage of it.
 