
The power of Ray Tracing

mansoor1980

Gold Member
The year is 2150. Nvidia has just released a patch for Tetris to show the beauty of true RT technology. Stay tuned, Pac-Man in RT is coming shortly!

P.S.: A million presentations, months of waiting for a patch (Metro Exodus). And? Where is that RT? I see shit. Maybe it only appears during the night. Must be my low-contrast IPS monitor!
Need a lot of rain and puddles too.
 

martino

Member
HFW is an educated guess considering the string of Sony exclusives with RT launching next-gen. It's rare for them not to have it compared to how common it is for them to have it, aka the probability is very high. You're not playing the Vegas odds right to claim otherwise. As for Demon's Souls, it's said to have it in fidelity mode - case closed for me. You can die on that stake if you want until "you see it" - which is more a projection of principled posturing to avoid looking totally dumb on yet another point within that stupid post. That Ratchet burn however... amateur hour. There is no way to downplay that foolishness of a post... no matter your spin.

I'm not an authority on the subject nor do I claim to be. Just your regular Joe with a good pair of eyes to spot BS and trolls. And I'm usually right on the money.

 

VFXVeteran

Banned
HFW is an educated guess considering the string of Sony exclusives with RT launching next-gen.

Educated guess or wishful thinking? I don't care if every single PS5 game comes out with RT. It's not going to look better with or without it. RT is too expensive to use in the right way at 4K. Get over it. This thread is about whether RT will make a difference, and it will... just not much of one for the consoles.

As for Demon's Souls, it's said to have it in fidelity mode - case closed for me.

Again, my point still stands... it hasn't been shown. You know? Like the XSX games haven't been shown? You can't go after some people who claim what you do but then crucify them because it hasn't been shown, and then try to come at me when I say the same thing.

You can die on that stake if you want until "you see it" - which is more a projection of principled posturing to avoid looking totally dumb on yet another point within that stupid post. That Ratchet burn however... amateur hour. There is no way to downplay that foolishness of a post... no matter your spin.

Did that make you feel better?

I'm not an authority on the subject nor do I claim to be. Just your regular Joe with a good pair of eyes to spot BS and trolls. And I'm usually right on the money.

There are a lot of us on these boards and evidently that seems to be OK.
 
Last edited:
Higher-res textures and normal maps will always be discernible. Go take a look at Crysis Remake's 8K textures and compare them to any other game's 4K textures.

Moving from one mode to the next is supposed to be gradual. It's called having a very scalable graphics engine. The excessive rendering you speak of doesn't exist. The cost of rendering does. Getting anything to have a better approximation is going to cost compute power.
 
The fallacy is that Hardware Based Raytracing will be the only Raytracing ever seen in games - Software Based Simulated Raytracing will eventually be a thing.
 
Educated guess or wishful thinking?

Educated guess:

Spider-Man: Miles Morales - RT
Godfall - RT
Demon's Souls - RT
Gran Turismo 7 - RT
Sackboy: A Big Adventure - RT (ask NXGamer).
COD Cold War - RT
Watch Dogs - RT
+ More

That is a variety of games and genres (including open-world games) with RT implementation on PS5. Considering Guerrilla Games is one of the most technically proficient devs within WWS, I think it's highly probable to be the case. It's more reasonable to say it's wishful thinking to hope it doesn't have it for the sake of not looking dumb.

I don't care if every single PS5 game comes out with RT. It's not going to look better with or without it.

Here is where your "industry knowledge" gets overwritten by plain PC fanboyism or who knows what (and that's just me speculating and being nice). RT implementation in games is done for the sole purpose of increasing the visual fidelity and accuracy of bounce lighting on reflecting surfaces etc. In other words it improves the realism of the image/game. What sort of dumb claim is that?

RT is too expensive to use in the right way at 4K. Get over it. This thread is about whether RT will make a difference, and it will... just not much of one for the consoles.

It's being done at 4K/30 in limited fashion. No big dev making AAAs is gonna be dumb and gimp their games to look like PS3 knock-offs just so they can be fully path-traced (unless your game already looks like crap, aka an indie or the Minecrafts of the world).

This thread is about whether RT will make a difference, and it will... just not much of one for the consoles.

It is, and on consoles it's already making a difference. Even on something like Minecraft or in Spider-Man: Miles Morales.

Again, my point still stands... it hasn't been shown. You know?

Your claim originally was that none of the games mentioned:

None of the games you mentioned have announced support for ray-tracing except MM.

Demon's Souls was announced to have ray tracing. Ratchet and Clank was obvious in both the fine print and the eye test. You should stay consistent instead of trying to spin your way out of it. Your initial recourse should have been to admit you were wrong and be done with it. But no... you're choosing to embarrass yourself.

Like the XSX games haven't been shown? You can't go after some people who claim what you do but then crucify them because it hasn't been shown, and then try to come at me when I say the same thing.

That's a fair point only when devoid of any context, aka a disingenuous claim. Sony has shown next-gen games with ray tracing running on its hardware - not on PC or behind a labyrinth of PR. Maybe those discussions revolve around the competing company not having shown visual proof of next-gen games running with RT on the hardware, to alleviate any potential questions with 3 weeks till launch? Maybe that's the argument some are making out there. I find that to be reasonable. But like I said, if you want to posture with Demon's Souls you can. However, you claimed something totally different in your initial post, and then pivoted to a "show, don't tell" argument when realizing the obvious.

Did that make you feel better?

No. Why would it?

There are a lot of us on these boards and evidently that seems to be OK.

There are.
 
Last edited:

VFXVeteran

Banned
It's more reasonable to say it's wishful thinking to hope it doesn't have it for the sake of not looking dumb.

Like I said, I don't care if it comes with RT. It will be in limited capacity.

Here is where your "industry knowledge" gets overwritten by plain PC fanboyism or who knows what (and that's just me speculating and being nice). RT implementation in games is done for the sole purpose of increasing the visual fidelity and accuracy of bounce lighting on reflecting surfaces etc. In other words it improves the realism of the image/game. What sort of dumb claim is that?

Assume I have no knowledge. I don't need you or anyone else throwing that around like it matters. It doesn't. So just treat me like the rest of the 'bots' on these forums. You don't need to go there.

The RT as shown on consoles so far leaves a lot to be desired. The difference is minute when you can easily get away with SSR and the game will look overall the same. That's what I meant. You'll need more powerful GPU hardware to get the more advanced RT features like GI, AO, area lights, etc., and the consoles don't have such power.
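Just to put a rough number on why it's so expensive, here's a back-of-the-envelope ray budget at 4K/60 (the per-pixel figures are my own illustrative assumptions, not measured data):

```python
# Back-of-the-envelope ray budget at 4K/60 (illustrative assumptions, not measured data).
width, height = 3840, 2160
fps = 60
rays_per_pixel = 3        # e.g. one reflection ray + one GI ray + one shadow ray
path_segments = 2         # allow each ray one bounce

ray_segments_per_second = width * height * fps * rays_per_pixel * path_segments
print(f"{ray_segments_per_second / 1e9:.1f} billion ray segments per second")  # ~3.0
# Every one of those segments needs BVH traversal, intersection tests and shading,
# which is why console implementations tend to pick a single effect and cut the
# ray count or resolution.
```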

It's being done at 4K/30 in limited fashion. No big dev making AAAs is gonna be dumb and gimp their games to look like PS3 knock-offs just so they can be fully path-traced (unless your game already looks like crap, aka an indie or the Minecrafts of the world).

Hence my claim it won't make much of a difference whether it's there or not.


It is, and on consoles it's already making a difference. Even on something like Minecraft or in Spider-Man: Miles Morales.

Spider-Man MM is using only reflections, and weak ones at that. It's dropping so much detail from the scene in the reflections that it's not even worth the trouble.

Your claim originally was that none of the games mentioned:

I have no interest in revisiting dragged up comments made several weeks ago and derailing this thread. Sorry, not going there -- again -- to tickle your fancy. Try baiting someone else.
 

VFXVeteran

Banned
The fallacy is that Hardware Based Raytracing will be the only Raytracing ever seen in games - Software Based Simulated Raytracing will eventually be a thing.

You need even more power to make it even playable. We have only seen ray tracing used in small segments of games, using just the SMUs. Horizon-based ambient occlusion (using world space instead of screen space), parallax occlusion mapping (ray-casting in screen space) and some other forms of ray-casting (even some rare ray-marching techniques with volumetric smoke). All of these forms of RT were rarely seen in games because of their cost. Not sure why people think we have enough power in the current next-gen consoles to implement this software-based RT when they can't even render native 4K/60FPS with regular games.
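If anyone's curious what "ray-casting in screen space" boils down to, here's a bare-bones toy sketch (my own illustration, not any engine's actual code): march the ray through a depth buffer in fixed steps and return the first pixel the ray ends up behind.

```python
import numpy as np

def screen_space_ray_march(depth, origin, direction, step=1.0, max_steps=64):
    """Toy screen-space ray march against a depth buffer (the SSR/POM family of tricks).

    depth:     2D array of view-space depths, one per pixel
    origin:    (x, y, z) ray start in screen space (pixel x, pixel y, depth z)
    direction: (dx, dy, dz) march direction in the same space
    Returns the (x, y) pixel the ray hits, or None.
    """
    pos = np.array(origin, dtype=float)
    d = np.array(direction, dtype=float)
    d /= np.linalg.norm(d)
    h, w = depth.shape
    for _ in range(max_steps):
        pos += d * step
        x, y = int(pos[0]), int(pos[1])
        if not (0 <= x < w and 0 <= y < h):
            return None                      # ray left the screen: no information, no hit
        if pos[2] >= depth[y, x]:            # ray went behind the stored surface
            return (x, y)                    # treat that pixel as the hit point
    return None

# Toy scene: flat background at depth 10 with one closer "wall" column at depth 1.5.
depth = np.full((8, 8), 10.0)
depth[:, 5] = 1.5
print(screen_space_ray_march(depth, origin=(0, 4, 1.0), direction=(1, 0, 0.1)))  # (5, 4)
```

The limitation is the same one SSR has in shipping games: anything that isn't visible in the depth buffer simply can't be hit.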
 
Like I said, I don't care if it comes with RT. It will be in limited capacity.

Assume I have no knowledge. I don't need you or anyone else throwing that around like it matters. It doesn't. So just treat me like the rest of the 'bots' on these forums. You don't need to go there.

The RT as shown on consoles so far leaves a lot to be desired. The difference is minute when you can easily get away with SSR and the game will look overall the same. That's what I meant. You'll need more powerful GPU hardware to get the more advanced RT features like GI, AO, area lights, etc., and the consoles don't have such power.

Hence my claim it won't make much of a difference whether it's there or not.

Spider-Man MM is using only reflections, and weak ones at that. It's dropping so much detail from the scene in the reflections that it's not even worth the trouble.

I have no interest in revisiting dragged up comments made several weeks ago and derailing this thread. Sorry, not going there -- again -- to tickle your fancy. Try baiting someone else.

Well, this post above illustrates the reason I quoted that older post. It's just an effort to illustrate why some people may give your opinion on this subject a pass rather than place objective weight on it due to your background. If I had posted that without any backup it would look out of place. There is a lot of opinionated "I don't care this", "doesn't matter that", "won't make much difference this", "not worth the trouble that". In some circles that is called downplaying... with some of the opinion that it's done in bad faith.
 
Last edited:

VFXVeteran

Banned
Well, that was the reason I quoted that post: to illustrate why some may simply give your opinion on this subject a pass rather than place objective weight on it considering your background. If I had posted that without any backup it would look out of place. There is a lot of opinionated "I don't care this", "doesn't matter that", "won't make much difference this", "not worth the trouble that". In some circles that is called downplaying... with some of the opinion that it's done in bad faith.

Dude, do you know how many Sony warriors do the same downplaying shit with the PC and Xbox! People aren't talking rationally here, and when their expectations are so high and their dreams pop like a bubble, they move the goalposts to the exclusive titles looking the best anyway... that's not conversation done in good faith.

It has already been proven that my background doesn't mean shit on these boards.. so why should I even worry about that?
 
Last edited:
You need even more power to make it even playable. We have only seen ray tracing used in small segments of games, using just the SMUs. Horizon-based ambient occlusion (using world space instead of screen space), parallax occlusion mapping (ray-casting in screen space) and some other forms of ray-casting (even some rare ray-marching techniques with volumetric smoke). All of these forms of RT were rarely seen in games because of their cost. Not sure why people think we have enough power in the current next-gen consoles to implement this software-based RT when they can't even render native 4K/60FPS with regular games.
Yes, let's all ignore the fact that Software Based Ray Tracing will be a thing even when faced with the limitations of today's hardware - even if demos of Software Based Ray Tracing have been showcased - on last-gen hardware.

Let's ignore that plain FACT and that will make it alllll better.

That makes sense here :rolleyes:

The software optimization haters are out in full force here at GAF.


Meanwhile, here in reality - the Neon Noir demo is host to Software Based Raytracing on last-gen DX11.

Again proving that Software Based Raytracing will eventually - as I stated previously - be a thing.

 

VFXVeteran

Banned
Yes, let's all ignore the fact that Software Based Ray Tracing will be a thing even when faced with the limitations of today's hardware - even if demos of Software Based Ray Tracing have been showcased - on last-gen hardware.

Meanwhile, here in reality - the Neon Noir demo is host to Software Based Raytracing on last-gen DX11.

Again proving that Software Based Raytracing will eventually - as I stated previously - be a thing.

Well, hell, if it didn't come out on last gen hardware -- except for some edge cases like Kingdom Come (which never received any awards for best graphics) -- I guess we can hope it eventually resurfaces this generation eh?

I ignore the word DEMOS. We've seen countless of them in the past and none ever made it into even last-gen games... remember all those sweet UE demos, Panta Rhei demos, Quantic Dream demos, SW1313, Deep Down, etc. at the START of last generation? What happened to the games showing those graphics? I guess this next-gen will be different though, right?
 
Last edited:
Well, hell, if it didn't come out on last gen hardware -- except for some edge cases like Kingdom Come (which never received any awards for best graphics) -- I guess we can hope it eventually resurfaces this generation eh?

I ignore the word DEMOS. We've seen countless of them in the past and none ever made it into even last-gen games... remember all those sweet UE demos, Panta Rhei demos, Quantic Dream demos, SW1313, Deep Down, etc. at the START of last generation? What happened to the games showing those graphics? I guess this next-gen will be different though, right?

I'm a masterful artist, a stride made simply to figure out how far we are in actuality from photorealism.

All of these demo features could be done on last-gen hardware; that's the problem you are in fact ignoring.

Why do I say this? Because I myself noticed developers were in fact IGNORING a myriad of material techniques that bolster photorealism during this LAST generation - the question is why?


The answer? So games have room to grow visually in the future, because most devs are not masterful CGI artists, and because devs can simply get away with delivering games that are technically subpar, as many... (yourself included, as is obvious considering you claim last-gen DX11 techniques cannot be applied to DX12 Ultimate, which is laughable)... are not aware of how easily these techniques can be implemented nor how often they are ignored. The average consumer, in other words, does not know that graphics techniques which would help bolster image quality and are simple to implement are in fact not being utilized, due in most cases to unwilling dev teams looking to cut corners.


If a demonstration of a technique runs on last-gen DX11, then that same technique can be applied to games today.

"But that's a demo" - no, those are legitimate techniques being ignored at a loss to the consumer. Consumers like yourself should be outraged. But keep citing that tech demos from yesteryear are incapable of running on today's platforms.
 

regawdless

Banned
Ray tracing is useless, just like 8K. It takes too many resources and the difference is minor. HDR is the biggest step.

8K being useless, I agree.
Ray tracing though... I'm currently playing Quake 2 RTX. The lighting is just bonkers. You may check out some screenshots that I took in the PC screenshots thread.
The ray-traced lighting and reflections make the world feel incredibly believable despite it being old-ass Quake 2.
 

RJMacready73

Simps for Amouranth
Jesus, Rockstar are pretty much untouchable when it comes to world building. GTA5 was on another level above everyone else; even to this day the feel of driving around an actual city and its surrounding suburbs is unmatched, and as pretty as that game now looks running modded, it needs more pedestrians/vehicles... I want to see them really pack it out.
 
Yes, let's all ignore the fact that Software Based Ray Tracing will be a thing even when faced with the limitations of today's hardware - even if demos of Software Based Ray Tracing have been showcased - on last-gen hardware.

Again proving that Software Based Raytracing will eventually - as I stated previously - be a thing.

Apples and oranges.
You should probably check out how Crytek did what they did, or to be more specific, which corners they had to cut here......
There is no magic algorithm that makes the RT workload disappear or run anywhere near the performance of dedicated hardware; there's just the reduction of data/accuracy and means to mask that as well as possible.
 
Last edited:
Apples and oranges.
You should probably check out how Crytek did what they did, or to be more specific, which corners they had to cut here......
There is no magic algorithm that makes the RT workload disappear or run anywhere near the performance of dedicated hardware; there's just the reduction of data/accuracy and means to mask that as well as possible.

Yes, and all I've ever cited is that gamers will have both apples and oranges, with software-based variants of ray tracing if the developer so chooses - which is a simple thing to apply if the performance benefits justify it. To imply we won't have both techniques available is to imply the Crytek demo demonstrating software-based ray tracing is fictitious.

But in fact, a dev team with artists intent on simulating nuanced full ray tracing - through better refinement of material shaders, highly refined cube maps, animated reflection maps and even light sources painted/curated to match ray tracing 100%, given enough samples from renders (which are preferable when matching assets in a game engine, as they offer far superior visual fidelity when maxed out) - could essentially fake ray tracing in lieu of the performance cost. A technique that is time consuming, but wholly underutilized, extremely achievable, and one that would alleviate the current performance issues.

I imagine in the future, ray tracing will also be something machine learning is able to apply and fake if needed - simply by taking curated samples from sources, as it does now to bolster 720p resolution to 4K and 8K. Except those samples will be curated from light sources, shadows and reflections.
 

VFXVeteran

Banned
But in fact, a dev team with artists intent on simulating nuanced full ray tracing - through better refinement of material shaders, highly refined cube maps, animated reflection maps and even light sources painted/curated to match ray tracing 100%, given enough samples from renders (which are preferable when matching assets in a game engine, as they offer far superior visual fidelity when maxed out) - could essentially fake ray tracing in lieu of the performance cost. A technique that is time consuming, but wholly underutilized, extremely achievable, and one that would alleviate the current performance issues.

In all seriousness, I'm not saying that what you are proposing is ridiculous. Not at all. You can fake a lot and get some really good results with a lot of work from artists, but you have to realize that memory and bandwidth will become a big problem doing it this way. AC: Unity is an example of scattering highly detailed light probes in a scene, and the hardware (including PCs) just wasn't there to support its memory demands. You'd run into this same exact wall, as hardware is limited (which a lot of people on these boards fail to accept). It's way too costly to propose this over saving a lot of money on artist time and getting better results with less time at the wheel. I can't count how many times our artists complained because they had to tweak and tweak shots from director's notes in order to get something to look good (i.e. baking point clouds for GI, creating multiple high-res shadow maps, etc.). Literally every single shot was custom tailored. When RT came along, the artists could get much closer to target with very few tools required, and ultimately that's where the gaming industry is going.
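To put a very rough number on that memory wall (the figures below are my own illustrative assumptions, not AC: Unity's actual data):

```python
# Rough memory estimate for a dense irradiance probe grid (illustrative numbers only).
probes_x, probes_y, probes_z = 256, 32, 256   # probe grid covering one level
sh_coefficients = 9                           # 3rd-order spherical harmonics per probe
channels = 3                                  # RGB
bytes_per_value = 2                           # half-precision floats

total_bytes = probes_x * probes_y * probes_z * sh_coefficients * channels * bytes_per_value
print(f"{total_bytes / 2**20:.0f} MiB for one static probe grid")   # ~108 MiB
# And that's before shadow maps, reflection captures, streaming copies or any of the
# per-shot tweaking mentioned above -- the budget disappears fast.
```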
 
Last edited:
In all seriousness, I'm not saying that what you are proposing is ridiculous. Not at all. You can fake a lot and get some really good results

Typically, every single shot is custom tailored in most games we see. Personally, having come to terms with this years ago, I take no displeasure in alerting the standard consumer that the artists making their most sought-after triple-A video game title - even if they have created a graphical marvel - still did not utilize all the tools and/or techniques at their disposal to deliver the best possible result. And that is not a case of optimization; it is an issue typically constrained by the need to minimize time, effort and cost.

In all seriousness, as the industry continues to move forward, the techniques leveraged will in fact become less reliant on hardware-accelerated features as time goes on.

We are probably one generation away from machine learning replacing all art rendered on screen with more highly optimized variants of assets, including ray-traced variants.

DirectML, as cited by Microsoft, may end up doing that for this current generation. We have not yet seen what this technique can deliver.

But currently there is a myriad of techniques the artist does not worry about that would accommodate performance AND visuals if applied directly.

Custom culling of 3D assets on screen can yield remarkable benefits, because 3D engines currently only apply culling on the fly with less-than-accurate fuzzy math - unless more coding is added during the development cycle... and even then those results would not be nearly as optimal as an asset culled/crafted by hand by the artist. Meaning that asset would need a new LOD option simply to swap out culled animation models.

Custom culling of assets in game isn't really a technique artists or devs consider, nor is it something the industry utilizes. It's a very time-consuming way to budget for performance. But I can see machine learning applying a variation of culling to assets to gain performance that in turn exceeds what engines do currently. Software optimizations like this will come and then persist.

For the uninformed: culling is a method of taking a 3D object and slicing away the geometry you cannot physically see. Example - take a character and cut away literally all polygons not seen.

The back of the character's shirt, the back of the character's legs, everything that is not visible from the camera. Do it again for the front, side, top view... every angle.

Engines actually do this now, poorly.

Unless more coding is added to help alleviate the fact that your dev team may not be using standard creation methods when building a game to begin with.

And even then... the results aren't terribly great. Machine learning applied to this technique will leverage far more performance. And when that option arrives I will, personally, consider it a software fix.
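To make the idea concrete, the simplest form of it is plain back-face culling: drop every triangle whose face normal points away from the camera. A toy sketch of my own (not how any shipping engine or ML system would actually do it):

```python
import numpy as np

def backface_cull(vertices, triangles, camera_pos):
    """Keep only triangles whose front face points toward camera_pos.

    vertices:  (N, 3) array of points
    triangles: list of (i, j, k) vertex indices with counter-clockwise winding
    """
    visible = []
    for i, j, k in triangles:
        a, b, c = vertices[i], vertices[j], vertices[k]
        normal = np.cross(b - a, c - a)      # face normal from the winding order
        to_camera = camera_pos - a           # direction from the face toward the camera
        if np.dot(normal, to_camera) > 0:    # facing the camera -> keep it
            visible.append((i, j, k))
    return visible

# Two windings of the same quad half: one faces the camera, the other faces away.
verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float)
tris = [(0, 1, 2), (0, 2, 1)]
print(backface_cull(verts, tris, camera_pos=np.array([0.5, 0.5, 5.0])))  # [(0, 1, 2)]
```

Hand-authored or ML-driven culling would go further than this - removing geometry that is occluded or never visible from any gameplay angle - but the principle is the same.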

We will still have plenty of options as artists to utilize software- or hardware-defined methods. What you are saying in fact only correlates with what I cited in my latest post - artists are capable of delivering results that match ray tracing 100% through meticulous and nuanced refinement of source maps/cubemaps/shadow maps/spec maps and 3D assets. That those techniques are time consuming or the development tools cost more does not mean these techniques could not be applied to deliver large performance gains, with matching results, where ray tracing might otherwise be enabled.

But yes, ray tracing in hardware is certainly nothing to scoff at. Just remember, consumer: when a game offers visuals with a reduced ray tracing budget (some games may offer only ray-traced lighting, others may offer a full suite of hardware-accelerated ray tracing but at modest application), there were methods to deliver a full-fledged, insanely ray-traced-looking product without sacrificing performance or visuals - had the artists involved only invested enough time to deliver it - and you would never have known which parts had ray tracing disabled.

That is where you, as the standard consumer wanting the best visuals possible, are at a loss. Particularly when you add in the fact that there are many examples of photorealism techniques, which I put on display in this thread, that have often been ignored for nearly a decade now in most games seeking to deliver realism. The worst culprit being the underutilization of high-resolution, photo-accurate textures.

Something that's been possible for nearly two decades now. But we still see artists offering painterly or simply blurred, less accurate texture variations in games.
 
Last edited:
artists are capable of delivering results that match ray tracing 100% through meticulous and nuanced refinement of source maps/cubemaps/shadow maps/spec maps and 3D assets. That those techniques are time consuming or the development tools cost more does not mean these techniques could not be applied to deliver large performance gains, with matching results, where ray tracing might otherwise be enabled.
1. Still images or mostly static scenery, yes, dynamic scenery, no.
2. The amount of manual labour it would involve to even get close is absolutely ridiculous.

Sorry, but imho you are treading firmly in fantasy land here.
Just because you can do something theoretically doesn't mean that it's actually viable in practice, and ML absolutely is not the end-all solution you make it out to be here, especially for something as complex as real-time dynamic lighting.
 
Last edited:
Typically, every single shot is custom tailored in most games we see. Personally, having come to terms with this years ago, I take no displeasure in alerting the standard consumer that the artists making their most sought-after triple-A video game title - even if they have created a graphical marvel - still did not utilize all the tools and/or techniques at their disposal to deliver the best possible result. And that is not a case of optimization; it is an issue typically constrained by the need to minimize time, effort and cost.

In all seriousness, as the industry continues to move forward, the techniques leveraged will in fact become less reliant on hardware-accelerated features as time goes on.

We are probably one generation away from machine learning replacing all art rendered on screen with more highly optimized variants of assets, including ray-traced variants.

DirectML, as cited by Microsoft, may end up doing that for this current generation. We have not yet seen what this technique can deliver.

But currently there is a myriad of techniques the artist does not worry about that would accommodate performance AND visuals if applied directly.

Custom culling of 3D assets on screen can yield remarkable benefits, because 3D engines currently only apply culling on the fly with less-than-accurate fuzzy math - unless more coding is added during the development cycle... and even then those results would not be nearly as optimal as an asset culled/crafted by hand by the artist. Meaning that asset would need a new LOD option simply to swap out culled animation models.

Custom culling of assets in game isn't really a technique artists or devs consider, nor is it something the industry utilizes. It's a very time-consuming way to budget for performance. But I can see machine learning applying a variation of culling to assets to gain performance that in turn exceeds what engines do currently. Software optimizations like this will come and then persist.

For the uninformed: culling is a method of taking a 3D object and slicing away the geometry you cannot physically see. Example - take a character and cut away literally all polygons not seen.

The back of the character's shirt, the back of the character's legs, everything that is not visible from the camera. Do it again for the front, side, top view... every angle.

Engines actually do this now, poorly.

Unless more coding is added to help alleviate the fact that your dev team may not be using standard creation methods when building a game to begin with.

And even then... the results aren't terribly great. Machine learning applied to this technique will leverage far more performance. And when that option arrives I will, personally, consider it a software fix.

We will still have plenty of options as artists to utilize software- or hardware-defined methods. What you are saying in fact only correlates with what I cited in my latest post - artists are capable of delivering results that match ray tracing 100% through meticulous and nuanced refinement of source maps/cubemaps/shadow maps/spec maps and 3D assets. That those techniques are time consuming or the development tools cost more does not mean these techniques could not be applied to deliver large performance gains, with matching results, where ray tracing might otherwise be enabled.

But yes, ray tracing in hardware is certainly nothing to scoff at. Just remember, consumer: when a game offers visuals with a reduced ray tracing budget (some games may offer only ray-traced lighting, others may offer a full suite of hardware-accelerated ray tracing but at modest application), there were methods to deliver a full-fledged, insanely ray-traced-looking product without sacrificing performance or visuals - had the artists involved only invested enough time to deliver it - and you would never have known which parts had ray tracing disabled.

That is where you, as the standard consumer wanting the best visuals possible, are at a loss. Particularly when you add in the fact that there are many examples of photorealism techniques, which I put on display in this thread, that have often been ignored for nearly a decade now in most games seeking to deliver realism. The worst culprit being the underutilization of high-resolution, photo-accurate textures.

Something that's been possible for nearly two decades now. But we still see artists offering painterly or simply blurred, less accurate texture variations in games.

I think once the AI scene matures, it will be capable of a lot of things.

Just like DLSS lets you upscale, MS has Auto HDR on the Series X for old-gen titles. Ray tracing might also see a cutback in actual use and be replaced by something AI-oriented where possible.
 

llien

Member
I wish people let this gimmick be.
The tech to use it effectively is not there today, nor will it be in the foreseeable future.
As is, developers are better off not using it at all.
 

acm2000

Member
The power of ray tracing is still a couple of generations away; till then it's gonna be basic implementations and picking and choosing which to use (Control being a bit of an outlier and destroying hardware in the process).
 

ripeavocado

Banned
When not used to create ponds only.




And this is a mod. Imagine what new games could achieve with proper implementation of complex lighting and shading.

Let's hope RT doesn't become a simple reflection gimmick; it has the potential for much, much more.


It's a mod that makes the graphics unrealistic and has nothing to do with ray tracing.
 
I wish people let this gimmick be.
The tech to use it effectively is not there today, nor will it be in the foreseeable future.
As is, developers are better off not using it at all.


It's not a gimmick and we can use it right now, with a 3080 and 3090. They're strong enough that we can have games with a significant change in the image they put out with this. Quake 2 RTX can even be played at 1440p at 60 frames, it's amazing. This of course is not gonna be the case with these consoles.
 

regawdless

Banned
It's not a gimmick and we can use it right now, with a 3080 and 3090. They're strong enough that we can have games with a significant change in the image they put out with this. Quake 2 RTX can even be played at 1440p at 60 frames, it's amazing. This of course is not gonna be the case with these consoles.

This. I can't stress enough how much of a game changer it is when a game offers a good combination of RT techniques, like Quake 2 that you've already mentioned. It makes this old-ass game look so believable. Playing through the full game right now and it's great. I can't wait for more games to use these effects.

I can understand the criticism of some current games though. If, for example, a game only uses RT reflections and nothing else, it feels gimmicky and is not worth the performance cut.
 

llien

Member
They're strong enough
They are (in this context, given how badly even 2080Ti fared with it enabled) barely faster than Turing at RT.

we can have games with a significant change in the image they put out with this
Lies.
People watch games and need to explicitly ask whether the game has used DXR.
That is exactly the opposite of "you need RT to achieve such effects".

 
They are (in this context, given how badly even 2080Ti fared with it enabled) barely faster than Turing at RT.


Lies.
People watch games and need to explicitly ask whether the game has used DXR.
That is exactly the opposite of "you need RT to achieve such effects".






The mighty 3080 is quite literally twice as fast as the regular 2080 - as in, two times faster. And 50% faster than the 2080 Ti. The 3080 is a monstrously powerful card. Much faster than the previous gen.

You need the right implementation, and most of all you need to play the actual games with RTX enabled, not look at videos. It's different when you're controlling the game.

I don't see what an engine tech demo has to do with actual existing games.
 

llien

Member

Uh, dude, please don't bring upscaled resolution when talking to me, be so kind..

Whatever that actual resolution is, 1.3-1.5 times faster is a far cry from expected doubling of the performance.
(and god knows how much of it is actually faster upscaling)

Very kind of you to focus on secondary points and skip the main one.

I hope you've still enjoyed Unreal 5 demo, which for some reason doesn't bother using RT, even though even UE 4 supports it.
 
Last edited:
Uh, dude, please don't bring upscaled resolution when talking to me, be so kind..

Whatever that actual resolution is, 1.3-1.5 times faster is a far cry from expected doubling of the performance.
(and god knows how much of it is actually faster upscaling)

Very kind of you to focus on secondary points and skip the main one.

I hope you've still enjoyed Unreal 5 demo, which for some reason doesn't bother using RT, even though even UE 4 supports it.


There is no upscale involved. It actually explicitly states on both pics: native 4K and 1440p. The claim from Nvidia was up to 2x performance compared to a regular 2080, which is delivered in ray-traced games. And even in regular rasterisation it's 70-80% faster than a 2080. Not that far from double even there.

What was your main point again? That people can't see the differences? Well, if you don't want to see them I suppose you won't see them. As I've said, you need to actually play the games so you can best appreciate the differences. You need to move through the game environment to absorb the effects. YouTube videos or still pics don't do it justice.

The Unreal demo is just that, a tech demo. It's not a game, it's nothing at all. Nothing will look like it through the entire console gen that will follow. When you make an actual game with all the assets and AI and interactions you're not gonna get that fidelity.
 
Last edited:
There is no upscale involved. It actually explicitly states on both pics: native 4K and 1440p. The claim from Nvidia was up to 2x performance compared to a regular 2080, which is delivered in ray-traced games. And even in regular rasterisation it's 70-80% faster than a 2080. Not that far from double even there.

What was your main point again? That people can't see the differences? Well, if you don't want to see them I suppose you won't see them. As I've said, you need to actually play the games so you can best appreciate the differences. You need to move through the game environment to absorb the effects. YouTube videos or still pics don't do it justice.
Amen to this! It's like trying to understand how 144fps is better than 60fps from a YouTube video playing at 30fps. Imagine if lighting IRL didn't bounce, refract, or reflect accurately. It would be a shit show, and that's why ray tracing in gaming is so important.
 

llien

Member
The claim from Nvidia was up to 2x performance compared to a regular 2080
I was referring to what the average GAFer expected from next-gen RT, not to Huang's lies at the Ampere announcement.

What was your main point again?
Two core promises of RT are not there today and won't be there any time soon:

1) "it is easier to develop" (it is advisable to look at what an actual RT-rendered frame of RT Quake looks like)
2) "it brings yet unseen effects" (an obvious lie)

The Unreal demo is just that, a tech demo. It's not a game,
The Unreal demo shows that "we need RT for cool shadows/light effects" is BS.
And it also runs on a 36 CU GPU.


Lumen uses software RT against signed distance fields for indirect lighting.....
Nice try.
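(For anyone wondering what "software RT against signed distance fields" means in practice: it's essentially sphere tracing - you step along the ray by whatever distance the field reports, so you can never step through a surface. A toy single-sphere sketch, my own illustration rather than Lumen's actual code:)

```python
import math

def sphere_sdf(p, center=(0.0, 0.0, 5.0), radius=1.0):
    """Signed distance from point p to a sphere: negative inside, positive outside."""
    return math.dist(p, center) - radius

def sphere_trace(origin, direction, sdf, max_steps=64, eps=1e-4, max_dist=100.0):
    """March along the ray by the SDF value each step (classic sphere tracing)."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        dist = sdf(p)
        if dist < eps:              # close enough to the surface: call it a hit
            return t
        t += dist                   # safe step: no surface closer than `dist`
        if t > max_dist:
            break
    return None                     # ray escaped the scene

# A ray fired down +z from the origin hits the sphere at distance 4.
print(sphere_trace((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), sphere_sdf))  # 4.0
```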
 
Last edited:

llien

Member
One could summarize the entirety of your postings concerning Nvidia and anything RT under that term. :messenger_grinning_smiling:

I recall you admitted along the lines of "I am just excited because it's a cool tech", I wish you'd leave it at that.
That "gotcha, some rays are cast in software" was one funny comment.
 
Last edited:

scalman

Member
Don't care what you call it... don't care what it will be named, name it shit tracing, I just don't care. If it looks OK then it's good. Mafia remastered has good reflections and I don't care how they made them... so I won't care how the RT in new games is made... just give me damn good new games... a shit game with RT won't make a good game anyway...
 
I recall you admitted along the lines of "I am just excited because it's a cool tech", I wish you'd leave it at that.
That "gotcha, some rays are cast in software" was one funny comment.
Every simple fact that refutes your arguments is funny to you, right?
Well, if we've moved on to fairy tale territory now, I recall you called yourself Marilyn and were looking for a vent to stand on.
 
Last edited:

regawdless

Banned
2) "it brings yet unseen effects" (an obvious lie)

I dunno. What I'm seeing while playing Quake 2 RTX right now, I haven't seen in any other game so far. Such natural, dynamic lighting and accurate reflections are nowhere to be found in games that don't offer raytracing.
Actually playing it, you can feel the difference compared to other games.

Screenshots don't do it justice, but let's give it a try:


 
Last edited:
1. Still images or mostly static scenery, yes, dynamic scenery, no.
2. The amount of manual labour it would involve to even get close is absolutely ridiculous.

Sorry, but imho you are treading firmly in fantasy land here.
Just because you can do something theoretically doesn't mean that it's actually viable in practice, and ML absolutely is not the end-all solution you make it out to be here, especially for something as complex as real-time dynamic lighting.

You cite that signed distance fields are an issue, but this is where multiple dynamic light maps can be painted. Nuanced light sources could be painted utilizing as few as 35 passes on a dynamic light map. Yes, far more than the perhaps 3-8 (and 8 is generous) in total that artists are used to painting. But this technique, combined with Lumen, would take no more than 35 dynamic light maps to deliver the same results as RTX.

The difference is - it is work, clearly a labor of love - but more work than the artist is capable of? I personally could paint out 33 light maps in around 3 days, particularly if I have RTX renders to match. The real problem may in fact be that artists aren't, in some cases, up to the task.

But while Lumen boasts that it erases the need for the artist to even bake light maps, the solution is Lumen plus nuanced, authored lightmaps. 35 of them. (And actually, simply painting far more than a few light maps without Lumen would render acceptable results.) So Lumen is clearly a solution you are happy to overlook to make this argument, and it is exactly the type of in-engine solution an artist would utilize to ultimately match RTX samples with 100% cadence.

DirectML was also a pie-in-the-sky technology at its original outset, and now variations are used to upscale 720p to 8K with absolutely massive performance gains.
AI software will supplant performance issues - perhaps not within this current next generation, but certainly on PC within 5 years and within the next, next console gen - and that is not pie in the sky.



Lumen is a fully dynamic global illumination solution that immediately reacts to scene and light changes. The system renders diffuse interreflection with infinite bounces and indirect specular reflections in huge, detailed environments, at scales ranging from kilometers to millimeters. Artists and designers can create more dynamic scenes using Lumen, for example, changing the sun angle for time of day, turning on a flashlight, or blowing a hole in the ceiling, and indirect lighting will adapt accordingly. Lumen erases the need to wait for lightmap bakes to finish and to author light map UVs—a huge time savings when an artist can move a light inside the Unreal Editor and lighting looks the same as when the game is run on console.
 
Last edited:

VFXVeteran

Banned
You cite that signed distance fields are an issue, but this is where multiple dynamic light maps can be painted. Nuanced light sources could be painted utilizing as few as 35 passes on a dynamic light map. Yes, far more than the perhaps 3-8 (and 8 is generous) in total that artists are used to painting. But this technique, combined with Lumen, would take no more than 35 dynamic light maps to deliver the same results as RTX.

The difference is - it is work, clearly a labor of love - but more work than the artist is capable of? I personally could paint out 35 light maps in around 3 days, particularly if I have RTX renders to match. The real problem may in fact be that artists aren't, in some cases, up to the task.

But while Lumen boasts that it erases the need for the artist to even bake light maps, the solution is Lumen plus nuanced, authored lightmaps. 35 of them. (And actually, simply painting far more than a few light maps without Lumen would render acceptable results.) So Lumen is clearly a solution you are happy to overlook to make this argument, and it is exactly the type of in-engine solution an artist would utilize to ultimately match RTX samples with 100% cadence.

DirectML was also a pie-in-the-sky technology at its original outset, and now variations are used to upscale 720p to 8K with absolutely massive performance gains.
AI software will supplant performance issues - perhaps not within this current next generation, but certainly on PC within 5 years and within the next, next console gen - and that is not pie in the sky.


I agree with Phrixotrichus here. You are going way overboard for no reason. No company is going to do what you are proposing. You should drop it.
 