
Would this generation of consoles have been better if Sony/Microsoft hadn't tried to shoehorn ray tracing into the hardware and had reallocated that tech budget elsewhere?

FireFly

Member
Microsoft have said that the die space cost of RT acceleration was minimal. Even on Nvidia, RT cores were only estimated to add an extra 10%.

For sure. RT is a scam. Raster techniques caught up years ago for cheaper. Enough to compete, at least.
FSR2 was also a mistake.
Long live ugly cube maps and SSR artifacts.
 
Last edited:

rofif

Can’t Git Gud
Microsoft have said that the die space cost of RT acceleration was minimal. Even on Nvidia, RT cores were only estimated to add an extra 10%.


Long live ugly cube maps and SSR artifacts.
Cube maps, SSR and other stuff look better than the super low-res, pixelated reflections RT gives in games like Alan Wake 2.
You can do planar reflections like in games from 20 years ago.
It's funny that Half-Life 2 got perfect reflections in every puddle of water compared to new games.
 

FireFly

Member
Cube maps, SSR and other stuff look better than the super low-res, pixelated reflections RT gives in games like Alan Wake 2.
You can do planar reflections like in games from 20 years ago.
It's funny that Half-Life 2 got perfect reflections in every puddle of water compared to new games.
But SSR also often looks pixelated, and games in any case often rely on SSR first and ray tracing only when the reflected objects are out of the field of view. (Alan Wake 2 doesn't use ray tracing at all on consoles). Cube maps are typically static and don't line up correctly with the environment. Planar reflections are extremely expensive because they require rendering the entire scene twice, and they also only work on flat surfaces.

A "fair" comparison would be to compare a game like Spider-Man in the RT and non-RT modes. Are the reflections on the cars and buildings just as good in each mode?
 

makaveli60

Member
Yep, so tired of this bullshit performance-wasting ray tracing gimmick, and of people who can't even tell whether a game is using it yet drool over it at the same time.
 

DaGwaphics

Member
The way that RT was worked into RDNA2 didn't really add a lot of die area specifically for that feature. It's not an Nvidia/Intel situation on that front. Plus, I'd much rather have it in there for those rare games that require the tech to function. At least these consoles can run those games.
 

rofif

Can’t Git Gud
But SSR also often looks pixelated, and games in any case often rely on SSR first and ray tracing only when the reflected objects are out of the field of view. (Alan Wake 2 doesn't use ray tracing at all on consoles). Cube maps are typically static and don't line up correctly with the environment. Planar reflections are extremely expensive because they require rendering the entire scene twice, and they also only work on flat surfaces.

A "fair" comparison would be to compare a game like Spider-Man in the RT and non-RT modes. Are the reflections on the cars and buildings just as good in each mode?
Exactly. AW2 doesn't use any RT on consoles, but they didn't bother to put a good old-school technique in its place.
 

SlimySnake

Flashless at the Golden Globes
Could they have put that focus elsewhere and given us better-performing games?
RT hardware in the PS5 isn't taking up extra space on the die. It's built into the compute units. And devs are not forced to use it.

The PS5 already goes up to 230 watts. That's the bottleneck. If they add a bigger GPU, you start approaching 300 watts, and no one wants that in their console.
 

Audiophile

Member
For the consoles in and of themselves and during this gen, perhaps, but Ray & Path Tracing will be a key component for the future. Having the consoles make the switch sooner rather than later so that some RT functionality is available as a baseline to more games is important.

It's something the industry needed to do and doing it without the consoles would have dragged out the transition.

I honestly see this as an almost sacrificial, transitional gen from a technical standpoint, where it's just a whole load of teething pains as we see some major paradigm shifts. That kinda worked out well, given how underwhelming the line-up has been.

The good news going into PS6 is that we'll have gone through the worst of those teething pains for RT & ML/AI, as well as having mature RT hardware and mature ML/AI functionality. Not to mention that by then devs may have gotten more accustomed to utilising the SSD+I/O in more novel ways. Then add in [hopefully] a lack of pandemic. The cherry on top would be a 2028 launch, allowing qualification for the N2P process node with GAAFET transistors & backside power delivery, and a shift to chiplets to drastically reduce cost; in turn allowing for better hardware.

We'll probably be starting off with the ability to do 2-4x resolution scaling with native-like results, the ability to frame-gen 2x with only a tiny latency bump, have a far greater quantity of far more efficient RT acceleration hardware and have an architecture far more conducive to ML/AI acceleration. There also won't be a resolution jump going into the next gen as far as I can tell; which frees up a big chunk of resources that in all previous gens was spent on just adding more pixels. Nanite/Microgeometry will hopefully be much more performant too given both optimisation and much more powerful hardware.
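To put rough numbers on that last point: 720p is ~0.92 million pixels, 1080p is ~2.07 million, and 4K is ~8.3 million, so the 1080p-to-4K jump alone was roughly a 4x increase in shaded pixels per frame. If next gen stays at a 4K output target, that whole multiplier is freed up for RT, ML and geometry instead of raw resolution.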

We've seen the introduction/popularisation of 5 or so major technologies this gen, next-gen they'll all actually be possible, performant, effective and worth the cost in real-world scenarios.
 
Last edited:

Zathalus

Member
RT/PT is the endgame of graphical rendering (other than some hypothetical AI model), so you've got to start somewhere. The hardware cost is negligible. It's going to pay off big time with the PS6.
 

winjer

Gold Member
The RT units in RDNA2 are very simple and probably don't use much die space.
I doubt we could get even one extra CU by replacing the RT parts.
 

Hunnybun

Member
At the level the consoles can do it, I honestly don't find it very impressive. The best I've seen is Spider-Man 2 where it definitely looks good overall and adds to the graphics, but even there it's pretty low res and doesn't stand up to a lot of scrutiny. Games are packed with advanced graphical features and I wouldn't say console RT adds any more than the average one of those.

That said, I have no idea what, if any, the trade-off was in including the hardware acceleration. Could they have given us 50% more tflops? If so, they did the wrong thing imo. But I doubt that's the case anyway.
 

daninthemix

Member
For sure. RT is a scam. Raster techniques caught up years ago for cheaper. Enough to compete, at least.
FSR2 was also a mistake.
They were both afraid of looking positively ancient compared to Nvidia / PC tech but they couldn't afford to do it properly.

So they still ended up looking positively ancient compared to Nvidia.
 

killatopak

Member
They have to start somewhere.

The PS3 had a weird configuration and devs had to figure it out, which in turn prepared them for some next-gen features like GPGPU. The constraints helped them optimize stuff. Limitation fosters creativity. Once they have sufficient knowledge and actually get tech that can run more, it will be a huge efficiency leap.
 
Could they have put that focus elsewhere and given us better-performing games?
Wouldn't happen, because they are using AMD hardware; specifically RDNA2, which was built to enable AMD to support RT. Even the Series S with its tiny cut-down RDNA2 GPU supports RT; even the handheld Steam Deck supports RT.
 

jroc74

Phone reception is more important to me than human rights
I remember some people thinking RT wouldn't even be possible until later in the gen...

Sony and MS did fine to get it included.
 