
Next-Gen PS5 & XSX |OT| Console Tech Thread

Status
Not open for further replies.

Rudius

Member
Isn't one of MLID's whiteboard topics "AMD vs MS"? I'm really curious about that, regardless of how much of it turns out to be true.



This is kind of an inaccurate perspective considering where the companies are in modern times. They're both hardware & software companies; you can't have one without the other in the fields these companies do business in. Otherwise you oversimplify their fields of R&D, expertise, etc.

As well, if (and that's a very big if) the pattern between MS's and Sony's platforms in terms of 3P performance holds out for the rest of the year, or even just until summer or fall, and it turns out it really is 100% down to root hardware differences, it means both you and Empire Strikes Back could be right; it wouldn't have to be one or the other.

All of that said, we still need more time before making such long-term definitive statements. Maybe things shake out on MS's end and we start to see 3P games taking leads (however large or small) on Series X more regularly. Or all the same, maybe Sony maintains the lead there or that lead even grows. No one can actually say for certain at this time how that will play out, we'll need at least a few more months of 3P releases before establishing a good basis there.

One thing I will say is that we should see some big advancements in next-gen games this year across both platforms as developers gradually loosen the shackles of last-gen requirements, meaning they can target the next-gen hardware more predominantly.



MLID is not 100% right about that equivalence. It's not just about how fast the storage is in terms of bandwidth. Yes, systems like the PS5 are "mimicking" parallelized random access by having more channels (12 channels in Sony's case) and probably by specifically choosing NAND with upper-class latency figures (Toshiba NAND is usually pretty good for that), etc., but NAND is never going to have the low latency that actual DDR2 RAM does.

There's a lot more to RAM than just the bandwidth; accessing it in the first place always incurs a cycle cost, and then there are other factors like bank activation timing, etc. NAND simply can't compete with DDR RAM on any of that; I'd even say DDR1 beats any NAND device on that point, even if its actual bandwidth is low by today's standards.

Also, we should keep in mind that decompression isn't "free". Offloading it from the CPU (wholly or mostly) is a massive benefit the consoles have which PCs don't (though they can brute-force it with more system RAM and good-enough SSDs), but it still costs some cycles to read in and decompress that data, and a few more cycles to write it into system GDDR6 memory. So in those areas, actual RAM, be it DDR2 or whatever, is always going to have the real-time advantage in terms of cycle-cost savings.

That said, the SSD I/O in the next-gen consoles is, again, a massive step up from 8th-gen systems and prior, and comes pretty close to cartridge-based systems like the SNES, Mega Drive, PC Engine and Neo Geo. We'll just need to wait for a new generation of NAND modules with even better latency figures and random-access timings (and better interconnects/processing elements, with lower latency and more power for quicker decompression and DMA write accesses) to get SSD I/O with not only bandwidth that can match or exceed older DDR memories, but real-world performance that actually cuts into that volatile memory space as well.

And I think that'll eventually happen in a couple of years, even without NVRAM (ReRAM, Optane, MRAM etc.).
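To make the channel-parallelism point above concrete, here's a toy Python model. The 12-channel figure is Sony's stated design; the per-channel bandwidth, compression ratio and latency numbers are illustrative assumptions for the sketch, not measured specs:

```python
# Toy model: more NAND channels raise effective throughput, while
# per-access latency stays orders of magnitude above DRAM.
# All numbers below are illustrative assumptions, not measured specs.

NAND_CHANNELS = 12            # PS5-style 12-channel flash controller
PER_CHANNEL_GBPS = 0.458      # assumed: ~5.5 GB/s raw split over 12 channels
NAND_READ_LATENCY_US = 50.0   # assumed typical NAND page-read latency
DRAM_LATENCY_US = 0.1         # assumed ~100 ns DRAM random access

raw_bandwidth = NAND_CHANNELS * PER_CHANNEL_GBPS          # GB/s off the flash
compression_ratio = 1.6                                   # assumed average
effective_bandwidth = raw_bandwidth * compression_ratio   # after decompression

print(f"raw: {raw_bandwidth:.2f} GB/s, effective: {effective_bandwidth:.2f} GB/s")
print(f"NAND per-access latency ~{NAND_READ_LATENCY_US / DRAM_LATENCY_US:.0f}x DRAM")
```

The point of the sketch: channels multiply bandwidth linearly, but no amount of channel parallelism changes the per-access latency gap, which is the poster's argument above.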
According to NX Gamer, Spider-Man on PS5 loads faster than Sonic on Mega Drive and Wipeout on N64.
 

geordiemp

Member
This is kind of an inaccurate perspective considering where the companies are in modern times. They're both hardware & software companies; you can't have one without the other in the fields these companies do business in. Otherwise you oversimplify their fields of R&D, expertise, etc.

As well, if (and that's a very big if) the pattern between MS's and Sony's platforms in terms of 3P performance holds out for the rest of the year, or even just until summer or fall, and it turns out it really is 100% down to root hardware differences, it means both you and Empire Strikes Back could be right; it wouldn't have to be one or the other.

All of that said, we still need more time before making such long-term definitive statements. Maybe things shake out on MS's end and we start to see 3P games taking leads (however large or small) on Series X more regularly. Or all the same, maybe Sony maintains the lead there or that lead even grows. No one can actually say for certain at this time how that will play out, we'll need at least a few more months of 3P releases before establishing a good basis there.

One thing I will say is that we should see some big advancements in next-gen games this year across both platforms as developers gradually loosen the shackles of last-gen requirements, meaning they can target the next-gen hardware more predominantly.

I agree MS and Sony are both hardware and software companies; it's not a correct analysis to say one is this and the other is that.

I just think Sony went out of their way to optimise a 36 CU system; we can see that in their patents, which are optimisations on top of RDNA2. Maybe PC parts have some of them; we await the RDNA2 white paper, which is taking its time. The interesting patent (or application) is the pixel/vertices processing step, which hints at much more efficient post-process effects. I think we see that in some PS5 performance, and those effects seem to dip on XSX, but who knows yet.

At the other end, I think MS designed the best system for running 4 games as cost-effectively as possible; PS5 silicon could not do that. The bandwidth spread over the 20 GB, with all the RAM getting 560 GB/s, means each game gets decent bandwidth; then there's the server-class CPU, the low (for RDNA2) frequencies suited to a server application, and 4 shader arrays for 4 games. And why have clock gating and variable frequencies when 4 games are in different stages of a clock cycle? It is not required. Nothing wrong with the design.

The mistake, if any, was Microsoft trying to make a hybrid die for server and console to save costs. It makes sense, but it's clear to me, anyway, that the console application took the back seat. It has nothing to do with competence; it's simply priorities.

MS know exactly what they did and why, and I'd bet any money that if MS engineers had designed a console-only die, it would not look like the XSX does today. Last gen, Microsoft were seduced by TV and the living room; this gen, they have been seduced by the cloud, and only the future will tell us what was right or wrong. If MS get a few hundred million users in the Far East on Game Pass, streaming to tablets and phones, Phil will be smiling. It's not about us.
 
Last edited:

onesvenus

Member
Yes, there is literally no LOD system; it's working on a per-frame polygon budget instead, copying the Sony Atom View tech from 2017, probably due to collaboration. It's the new method that will squeeze more out of current GPUs, but it needs pretty fast data streaming. Even by the time PCIe 5.0 arrives with 32 GB/s speeds, the "base" PS5 will still be keeping up with its 17-22 GB/s throughput.



I know you are a Big Shot here and know much more than I do. But according to what I've watched already from Atom View and UE5, it's a frame-budget system instead, with one asset version.
I know you have been corrected already, but there is a LOD system. It might be a continuous one that requires no LOD authoring, but saying there's no LOD system implies that everything is rendered at the same quality, and that's neither reasonable nor performant.

And no, it's not copying "Sony's Atom View due to collaboration" 🤦‍♂️. There has been too much research done before and after Sony's Atom View to think that. Brian Karis has been publishing articles on his blog about what Nanite ended up being for more than 10 years.

You are probably correct, but...

What if they've found a way to encode a model, as a 2D signal at each unique quaternion position (for each channel/characteristic), into an equation? They could then take the four fragment-corner positions of the pixel they want to render and integrate against those equations to get the channel value for each pixel.

Or at least, having watched Sony's Atom View tech video, that's the type of solution I'd be looking at for the rendering, so that each frame's background render budget was roughly constant and independent of model count or complexity.
Having read some of Brian Karis's papers on their previous research, I'm sure it's a signed distance function representation of the mesh which is then raymarched per pixel. By increasing or decreasing the raymarch precision, you can have one single model rendered at different levels of detail without having to author them individually.
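As a rough sketch of what per-pixel SDF raymarching with a precision knob could look like (this is speculative, not Nanite's actual implementation; the sphere scene, the ray and the tolerances are made up purely for illustration):

```python
import math

def sphere_sdf(p, center=(0.0, 0.0, 3.0), radius=1.0):
    """Signed distance from point p to a sphere (toy scene for the sketch)."""
    dx, dy, dz = (p[i] - center[i] for i in range(3))
    return math.sqrt(dx * dx + dy * dy + dz * dz) - radius

def raymarch(origin, direction, tolerance, max_steps=256, max_dist=20.0):
    """Sphere-trace along a ray. A larger tolerance acts as a coarser 'LOD':
    the march stops sooner and takes fewer steps per pixel."""
    t = 0.0
    for steps in range(1, max_steps + 1):
        p = tuple(origin[i] + direction[i] * t for i in range(3))
        d = sphere_sdf(p)
        if d < tolerance:          # close enough to the surface: hit
            return t, steps
        t += d                     # SDF guarantees no surface within distance d
        if t > max_dist:
            return None, steps     # ray escaped the scene
    return None, max_steps

# Oblique ray so convergence is gradual rather than one exact step.
n = math.sqrt(0.3 * 0.3 + 1.0)
ray = (0.3 / n, 0.0, 1.0 / n)

t_fine, steps_fine = raymarch((0.0, 0.0, 0.0), ray, tolerance=1e-4)
t_coarse, steps_coarse = raymarch((0.0, 0.0, 0.0), ray, tolerance=1e-1)
# Same single model, same ray: the coarse march hits in fewer steps.
```

This is the property the post describes: one model representation, with the per-pixel precision setting doing the work that separately authored LOD meshes would otherwise do.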
 
MS know exactly what they did and why, and I'd bet any money that if MS engineers had designed a console-only die, it would not look like the XSX does today. Last gen, Microsoft were seduced by TV and the living room; this gen, they have been seduced by the cloud, and only the future will tell us what was right or wrong. If MS get a few hundred million users in the Far East on Game Pass, streaming to tablets and phones, Phil will be smiling. It's not about us.
Well, if it's not about us, then why the hell is anyone here defending it? It would be no different from how Konami abandoned AAA gaming to focus on casino gambling. Sure, Konami makes money from the gambling, but no one had any reason to cheer for it.

In the same way, Sony made a lot of money on Fate/Grand Order, a mobile game, but whatever else I say about Sony I would never say I LIKE it, because I don't. I don't play mobile games, and the fact that Sony makes money from them is no reason for me to be happy.

As consumers, it's our job to demand what we want. We might not get it, but you'll never get anything if you don't ask. I don't care for game streaming and don't mind people knowing it. And if Xbox goes all in on streaming, then I will be against it just because it is streaming.
 

geordiemp

Member
Well, if it's not about us, then why the hell is anyone here defending it? It would be no different from how Konami abandoned AAA gaming to focus on casino gambling. Sure, Konami makes money from the gambling, but no one had any reason to cheer for it.

In the same way, Sony made a lot of money on Fate/Grand Order, a mobile game, but whatever else I say about Sony I would never say I LIKE it, because I don't. I don't play mobile games, and the fact that Sony makes money from them is no reason for me to be happy.

As consumers, it's our job to demand what we want. We might not get it, but you'll never get anything if you don't ask. I don't care for game streaming and don't mind people knowing it. And if Xbox goes all in on streaming, then I will be against it just because it is streaming.

I was not defending streaming; I can't stand lag, so I have no personal interest. I am just pointing out that MS's design engineers likely made what they were told to; the fault, if any, lies with Phil's market vision.
 

Bo_Hazem

Banned
I know you have been corrected already, but there is a LOD system. It might be a continuous one that requires no LOD authoring, but saying there's no LOD system implies that everything is rendered at the same quality, and that's neither reasonable nor performant.

And no, it's not copying "Sony's Atom View due to collaboration" 🤦‍♂️. There has been too much research done before and after Sony's Atom View to think that. Brian Karis has been publishing articles on his blog about what Nanite ended up being for more than 10 years.

I understand what you said there, but you understand what I meant. You are more accurate because you have more experience, so pardon my ignorance. It's like me saying "no loading times" when it's 0.8 seconds at Demon's Souls checkpoints. :messenger_winking_tongue:

How can one then use the same system? Doesn't that get patented? Also, from my understanding, Atom View mostly uses voxels, which sounds like something different from polygons, and of course you understand it better. So the two could be the same overall but differ in that respect; otherwise someone must license it, or it's a shared license, like the mirrorless camera tech between Sony and Panasonic, I believe.

This video is timestamped to a recent discussion of the subject:

 

sircaw

Banned
In order for someone to be defensive they have to have been attacked.

It's really hilarious.

Sony fan: PS5 has xx
Xbox fan: no it doesn't, there is no evidence that it has

Or:

Sony fan: the yy in the PS5 is custom
Xbox fan: lol, a yy has existed for ages, there's nothing special about it in the PS5!

When a Sony fan then replies to correct the misinformation, they are belittled for being defensive...

Again, in order to need to be defensive, someone has to have said or done something to warrant a reaction. Telling people they shouldn't be so defensive, after you said something that is perceived as incorrect, is an act meant to annoy someone into reacting; to then mount your high horse and shout "don't be so defensive" is trollish behaviour...
Damn girl, go easy on the old fellah. :messenger_smiling_horns: You're a cat, not a Doberman.
 

yewles1

Member
Damn girl, go easy on the old fellah. :messenger_smiling_horns: You're a cat, not a Doberman.
liz lemon meow GIF
 

Fafalada

Fafracer forever
But if it was equations they were recovering from disk, and they were regenerating the data on the fly by supplying coordinates (as integration limits for each viewport pixel), then the performance would be constant IMO.
If I'm reading you right, this pretty much assumes we'd have an analytical way to represent arbitrary data sets. If that existed, we could throw away all existing compression methods today.
Last I checked, the best numerical methods start to seriously break down at around 1 bit per 2D coordinate + value.

Anyway, Epic's own data around this (what has been pieced together so far, anyway) suggests they haven't broken any compression barriers here, but have a method that allows a fast search through the data set to what's relevant for a given view (and is presumably temporally coherent, so per-frame changes stay small).
Note that they still store explicit textures for all the data too, something that should also be unnecessary with the encoding you propose.
 

Bo_Hazem

Banned
If I'm reading you right, this pretty much assumes we'd have an analytical way to represent arbitrary data sets. If that existed, we could throw away all existing compression methods today.
Last I checked, the best numerical methods start to seriously break down at around 1 bit per 2D coordinate + value.

Anyway, Epic's own data around this (what has been pieced together so far, anyway) suggests they haven't broken any compression barriers here, but have a method that allows a fast search through the data set to what's relevant for a given view (and is presumably temporally coherent, so per-frame changes stay small).
Note that they still store explicit textures for all the data too, something that should also be unnecessary with the encoding you propose.

I believe the PS5 fetches the data automatically without devs intervening. You just point to where you want your data and the PS5 takes care of it, as Mark Cerny said in the Road to PS5. Of course there should be a set of compressed assets assigned within a level, ready for streaming, right? I think part of the PS5 being easy to develop for comes down to its smart API.
 

SlimySnake

Flashless at the Golden Globes
So many times it has already been said and written that variable frequencies are not there to let values float, nor because it is impossible to run stable ones, but in order to maximize peak hardware capability and the efficiency of each component when necessary. Learn the technical part, please, because the Xbox will have even more problems: its SoC also has a power limit, but it has no smart power management like the PS5's. The bottleneck will be exactly the power limit, when the computational loads on the chip are unusually high. That's why Cerny emphasized the need to move away from the old paradigm, due to its low efficiency, to the new one.
Think twice before you write another cool story about the issues of variable clocks.

You sound like those Xbox fanboys who were convinced that extra teraflops would result in better performance. I prefer to keep an open mind and wait for the benchmarks. The certainty with which you speak of this new tech, without any next-gen benchmarks, makes you come off as an arrogant prick.
 

SSfox

Member
Guerrilla may not be interested in making more Killzone games (I'm not a super big fan of the franchise tbh, but yeah, it sucks for fans), but maybe Insomniac is interested in making a new Resistance game?



I'm not a Resistance fan either, but I'm always open-minded; it could be interesting.
 

Riky

$MSFT
Guerrilla may not be interested in making more Killzone games (I'm not a super big fan of the franchise tbh, but yeah, it sucks for fans), but maybe Insomniac is interested in making a new Resistance game?



I'm not a Resistance fan either, but I'm always open-minded; it could be interesting.


I hope so. I'm really annoyed by Killzone probably being killed off, so Resistance would soften the blow.
 

kyliethicc

Member
Guerrilla may not be interested in making more Killzone games (I'm not a super big fan of the franchise tbh, but yeah, it sucks for fans), but maybe Insomniac is interested in making a new Resistance game?



I'm not a Resistance fan either, but I'm always open-minded; it could be interesting.

They might just port over the 3 Resistance games from PS3: a remaster collection to test interest in a new game.
 

roops67

Member
You sound like those Xbox fanboys who were convinced that extra teraflops would result in better performance. I prefer to keep an open mind and wait for the benchmarks. The certainty with which you speak of this new tech, without any next-gen benchmarks, makes you come off as an arrogant prick.
I believe you got the wrong end of the stick
 
You sound like those Xbox fanboys who were convinced that extra teraflops would result in better performance. I prefer to keep an open mind and wait for the benchmarks. The certainty with which you speak of this new tech, without any next-gen benchmarks, makes you come off as an arrogant prick.
Prick? OK. But then you look pretty feeble-minded, because after such a long period even a blockhead would understand the point of variable frequency (with a fixed power limit for the SoC) and not argue. If you still haven't figured it out: the PS5 always has the maximum supply voltage on the SoC, and this was done to avoid the unknowns at full load. Just open your mind.
You've already been hysterical about the "weak-ass PS5", and where was your open mind then? So please think three times now before you write me another load of BS.
 
Prick? OK. But then you look pretty feeble-minded, because after such a long period even a blockhead would understand the point of variable frequency (with a fixed power limit for the SoC) and not argue. If you still haven't figured it out: the PS5 always has the maximum supply voltage on the SoC, and this was done to avoid the unknowns at full load. Just open your mind.
You've already been hysterical about the "weak-ass PS5", and where was your open mind then? So please think three times now before you write me another load of BS.

Is the downclocking only for situations where the PS5 doesn't need maximum frequencies?

Like, if you're looking at a map screen, it downclocks so the cooling system doesn't have to work as hard.
 
Guerrilla may not be interested in making more Killzone games (I'm not a super big fan of the franchise tbh, but yeah, it sucks for fans), but maybe Insomniac is interested in making a new Resistance game?



I'm not a Resistance fan either, but I'm always open-minded; it could be interesting.

I haven't played any Resistance games, but they seem pretty damn good, especially Resistance 3. I could play them on PS Now, but I don't like playing FPS games at 30 fps.
Knowing Insomniac's greatness at making weapons, I would like a fresh start for the series on PS5.

Also Ratchet will be so good.
 

geordiemp

Member
But the power budget is enough to run both clocks at maximum, which is why I'm confused about when it has to downclock them.

It's because people do not think in terms of fractions of a frame; imagine you could downclock for 1 or 2 ms segments.

At the start of a frame the CPU is busy: it's working out all the logic and what is happening in your game, what to draw, where you move, what to load.

The GPU can have a nap and put its feet up; there is not much going on.

Then the GPU has to draw and shade a lot of things and the CPU is not so busy; time for one to get all the power and the other to have a nap.

It saves power. Add in fine-grained clock gating, variable frequencies and power management by load, and it's efficient.

In what I described above, there is no loss of gaming performance; things are fully powered when they are needed. That's the bit people miss.

All this sustained-downclocking crap comes from people who have no fucking idea what they are talking about; it's an efficiency thing for high clocks. Note PC parts do 2.5 GHz, so 2.23 GHz is fine; the PS5 and RDNA2 PC parts are designed to do this.

Why would anyone want to run the CPU and GPU at full power for a full frame? It's not a game anymore.

Who cares anyway? The PS5 performs just sweet, so it's too late for FUD. I used to post the activity in one frame of Spider-Man to try to explain...
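The per-millisecond power shifting described above can be sketched as a toy allocator. The 200 W budget and the load profile are invented numbers purely for illustration, not PS5 measurements:

```python
# Toy sketch: a fixed SoC power budget shared per-millisecond between CPU
# and GPU, shifted to whichever side is busy at that point in the frame.
# All numbers are made up for illustration.

POWER_BUDGET_W = 200.0

# Hypothetical load profile across a frame (fraction of peak demand).
frame_slices = [
    {"cpu": 0.9, "gpu": 0.2},   # start of frame: game logic, culling
    {"cpu": 0.3, "gpu": 0.95},  # mid frame: draw calls and shading dominate
    {"cpu": 0.2, "gpu": 0.9},   # end of frame: post-processing
]

def allocate(cpu_demand, gpu_demand, budget=POWER_BUDGET_W):
    """Split the budget in proportion to instantaneous demand."""
    total = cpu_demand + gpu_demand
    return budget * cpu_demand / total, budget * gpu_demand / total

for s in frame_slices:
    cpu_w, gpu_w = allocate(s["cpu"], s["gpu"])
    assert abs(cpu_w + gpu_w - POWER_BUDGET_W) < 1e-9  # budget never exceeded
    print(f"CPU {cpu_w:5.1f} W | GPU {gpu_w:5.1f} W")
```

The property the post is arguing for falls out directly: whichever unit is busy in a given slice gets most of the power, and the total never exceeds the fixed budget.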
 
It's because people do not think in terms of fractions of a frame; imagine you could downclock for 1 or 2 ms segments.

At the start of a frame the CPU is busy: it's working out all the logic and what is happening in your game, what to draw, where you move, what to load.

The GPU can have a nap and put its feet up; there is not much going on.

Then the GPU has to draw and shade a lot of things and the CPU is not so busy; time for one to get all the power and the other to have a nap.

It saves power. Add in fine-grained clock gating, variable frequencies and power management by load, and it's efficient.

In what I described above, there is no loss of gaming performance; things are fully powered when they are needed. That's the bit people miss.

All this sustained-downclocking crap comes from people who have no fucking idea what they are talking about; it's an efficiency thing for high clocks. Note PC parts do 2.5 GHz, so 2.23 GHz is fine; the PS5 and RDNA2 PC parts are designed to do this.

Why would anyone want to run the CPU and GPU at full power for a full frame? It's not a game anymore.

Who cares anyway? The PS5 performs just sweet, so it's too late for FUD. I used to post the activity in one frame of Spider-Man to try to explain...

I just wanted to understand it a little better, and your explanation pretty much nails it.
 

Riky

$MSFT
But the power budget is enough to run both clocks at maximum, which is why I'm confused about when it has to downclock them.

"When that worst case game arrives, it will run at a lower clock speed. But not too much lower; to reduce power by 10% it only takes a couple of percent reduction in frequency, so I'd expect any downclocking to be pretty minor," Cerny told Digital Foundry.

That's from the horse's mouth.
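Cerny's "couple of percent of frequency for ~10% of power" ballpark is consistent with the common first-order DVFS model in which voltage tracks frequency, so power scales roughly with the cube of the clock. A quick check under that assumed model (the cubic scaling is the assumption here, not something Cerny stated):

```python
# First-order DVFS model: P ∝ C·V²·f, and V roughly tracks f, so P ∝ f³.
# Under that assumption, a small clock cut buys a disproportionate power cut.

def power_fraction(freq_fraction):
    """Relative power at a given relative frequency, assuming P ∝ f³."""
    return freq_fraction ** 3

for cut in (0.01, 0.02, 0.035):
    f = 1.0 - cut
    print(f"{cut:.1%} clock cut -> {1 - power_fraction(f):.1%} power saved")
```

Under this model a roughly 3.5% frequency reduction already saves about 10% of power, which matches the quote's order of magnitude.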
 
"When that worst case game arrives, it will run at a lower clock speed. But not too much lower; to reduce power by 10% it only takes a couple of percent reduction in frequency, so I'd expect any downclocking to be pretty minor," Cerny told Digital Foundry.

That's from the horse's mouth.

I'm not seeing that in the comparisons though.
 

geordiemp

Member
I'm not seeing that in the comparisons though.

That was when Richard was pushing sustained clocks and TF at Cerny. In theory, if you gave the PS5 lots of AVX code on the CPU and ran FurMark at 2.23 GHz, it would downclock, as Cerny explained to Richard; the XSX would likely thermal throttle, as would a 6800.

Richard was struggling to understand; you could see Cerny's frustration in the exchange, and funnily enough Sony have ignored them since.

DF are not hardware engineers, do not have a fucking clue, and should stick to commenting on what games look like.

But let Riky be happy.
 

Duchess

Member
Knowing Insomniac's greatness at making weapons, I would like a fresh start for the series on PS5.

There was a weapon in Resistance called the Bullseye, where you could lock an enemy with a tag, and then every bullet you fired from then on, no matter which way you were facing, would hit them.

It was like that bit in The Fifth Element with the homing gun. Like so:

(GIF: the homing-gun scene from The Fifth Element)
 
That was when Richard was pushing sustained clocks and TF at Cerny. In theory, if you gave the PS5 lots of AVX code on the CPU and ran FurMark at 2.23 GHz, it would downclock, as Cerny explained to Richard; the XSX would likely thermal throttle, as would a 6800.

Richard was struggling to understand; you could see Cerny's frustration in the exchange, and funnily enough Sony have ignored them since.

DF are not hardware engineers, do not have a fucking clue, and should stick to commenting on what games look like.

But let Riky be happy.

Pretty much. If the PS5 were doing that in the comparisons, like Riky implied with his emote, it should be immediately visible. Take into account that the theoretical maximum is 10.28 TF; if it dropped well below that, the gap in the multiplatform comparisons should be a lot bigger. It doesn't take much to go from 10.28 TF to something like 8 TF, yet the system is on par with a 12 TF one. If variable clocks were such a big issue we would have seen it by now. But instead a variable 10.28 TF system is on par with a fixed 12 TF one.

Edit: So Riky, would you like to give a rebuttal, or are emotes the best you can do?
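For reference, the headline teraflop figures both sides keep citing fall straight out of CU count and clock; peak FP32 throughput for these GPUs is CUs x 64 lanes x 2 ops per FMA x clock. The CU counts and clocks below are the public specs:

```python
# Peak FP32 throughput from public specs: CUs x 64 lanes x 2 ops (FMA) x clock.

def peak_tflops(cus, clock_ghz):
    """Peak FP32 TFLOPS for an RDNA-style GPU with 64 lanes per CU."""
    return cus * 64 * 2 * clock_ghz / 1000.0

PS5 = peak_tflops(36, 2.23)    # 36 CUs at the 2.23 GHz max boost clock
XSX = peak_tflops(52, 1.825)   # 52 CUs at the 1.825 GHz fixed clock

print(f"PS5: {PS5:.2f} TF, XSX: {XSX:.2f} TF")
```

That yields the familiar ~10.28 TF and ~12.15 TF figures; the argument in the thread is about how much real-world performance that peak-rate gap actually buys.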
 

Riky

$MSFT
Pretty much. If the PS5 were doing that in the comparisons, like Riky implied with his emote, it should be immediately visible. Take into account that the theoretical maximum is 10.28 TF; if it dropped well below that, the gap in the multiplatform comparisons should be a lot bigger. It doesn't take much to go from 10.28 TF to something like 8 TF, yet the system is on par with a 12 TF one. If variable clocks were such a big issue we would have seen it by now. But instead a variable 10.28 TF system is on par with a fixed 12 TF one.

Edit: So Riky, would you like to give a rebuttal, or are emotes the best you can do?

I could, but I've been over this several times already and I'll just end up getting thread-banned at minimum. So read again what Cerny says and take particular notice of the first sentence; then it becomes very obvious what the deal is.
 
I could, but I've been over this several times already and I'll just end up getting thread-banned at minimum. So read again what Cerny says and take particular notice of the first sentence; then it becomes very obvious what the deal is.

So what is it, if you know?

You won't get thread-banned if you can prove what you say.
 

THEAP99

Banned
Games this gen should have a theater mode along with photo mode.

Here's an example of what can be created in a photo mode:

Imagine a game like TLOU2 allowing theater mode for encounters; how sick would that be? Multiplayer games should have it as well. No excuse not to.
 

geordiemp

Member
So what is it, if you know?

You won't get thread-banned if you can prove what you say.
Riky does not understand hardware, and neither did DF; it was during Richard's sustained-teraflop phase. At least Richard is learning and does not mention TF anymore; he calls it console horsepower now, lol.

Personally I think Cerny over-engineered the solution. You can see this easily from when someone unplugged the cooler on a PS5 while it was running a PS5 game at full tilt, and the PS5 kept going at full performance for a long time as the die heated up.

They could have skipped SmartShift, I think; it wasn't even needed. But that is conjecture. At the end of the day it does not matter anymore; the PS5 does not give a shit and tears through games anyway...
 

roops67

Member
But the power budget is enough to run both clocks at maximum, which is why I'm confused about when it has to downclock them.
The busier the CPU or GPU gets, the more power it draws, which in turn creates more heat. Now, you could downclock the busy part, which would be throttling, or be smart and downclock the less busy part that is not in a rush, letting the busy one finish its task quicker while still staying within the total power and thermal budget. This is one of the features of Sony's implementation of SmartShift.

At the other end, if you just have fixed frequencies like the XSX, the power drawn varies with the workload and can go over the power/thermal budget. It can't throttle, so it has to keep its workload light to stay within its limits.
 
Riky does not understand hardware, and neither did DF; it was during Richard's sustained-teraflop phase. At least Richard is learning and does not mention TF anymore; he calls it console horsepower now, lol.

Personally I think Cerny over-engineered the solution. You can see this easily from when someone unplugged the cooler on a PS5 while it was running a PS5 game at full tilt, and the PS5 kept going at full performance for a long time as the die heated up.

They could have skipped SmartShift, I think; it wasn't even needed. But that is conjecture. At the end of the day it does not matter anymore; the PS5 does not give a shit and tears through games anyway...

Honestly, if someone says they are worried about getting thread-banned, it's probably because what they think is fake in the first place. I've seen people speculate all sorts of things in this thread and they didn't get banned for it. The ones that do usually are the ones spreading Mister X-level FUD.

I like hearing what others say, but if they are worried about getting thread-banned, it's probably misinformation.
 

IntentionalPun

Ask me about my wife's perfect butthole
But the power budget is enough to run both clocks at maximum, which is why I'm confused about when it has to downclock them.

You are just missing workload from your math here.

A CPU running at 1 GHz doesn't use a constant amount of energy. It could use more or less energy depending on the work given to it.

Think of the CPU like a motorcycle and workloads as various-sized people.

A motorcycle running at 500 RPM with a hot spinner chick on it will use half a gallon of gas per mile.

The same motorcycle running at 500 RPM with a fat dude on it will use a full gallon of gas per mile.
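The analogy maps onto the standard dynamic-power formula, where the workload shows up as the activity factor (how much of the chip the work actually toggles each cycle). A tiny sketch with made-up units; none of these numbers are PS5/XSX measurements:

```python
# The motorcycle analogy in formula form: dynamic power P = a * C * V^2 * f,
# where 'a' is the activity factor set by the workload, not by the clock.
# All values are illustrative, arbitrary units.

def dynamic_power(capacitance, voltage, freq_ghz, activity):
    """Activity-weighted switched-capacitance power model."""
    return activity * capacitance * voltage ** 2 * freq_ghz

LIGHT_WORK = dynamic_power(1.0, 1.0, 2.23, activity=0.3)  # e.g. a map screen
HEAVY_WORK = dynamic_power(1.0, 1.0, 2.23, activity=0.9)  # e.g. an AVX-heavy scene

# Same 2.23 GHz clock, roughly 3x the power draw:
print(HEAVY_WORK / LIGHT_WORK)
```

So two workloads at an identical fixed clock can sit at very different points in the power budget, which is exactly the gap in the "why downclock at all" question above.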
 
Because there are inevitably going to be moments that would exceed that power budget if both clocks stayed at full.

I'm guessing that in theory a game could have some really bad code that causes that to happen. It's why I believe Cerny mentioned that "worst case game". A title that's properly optimized shouldn't have those issues.
 

IntentionalPun

Ask me about my wife's perfect butthole
I'm guessing that in theory a game could have some really bad code that causes that to happen. It's why I believe Cerny mentioned that "worst case game". A title that's properly optimized shouldn't have those issues.
Not really true; it's simply tuned to allow GPU/CPU frequencies that are not possible to fully sustain across all workloads.

Code that is "optimized" would actually push those limits; that's really just a separate topic.
 

PaintTinJr

Member
....


Having read some of Brian Karis papers on their previous research I'm sure it's a signed distance function representation of the mesh which is then being raymarched per pixel. By increasing/decreasing the raymarch precision you can have one single model rendered with different level of detail without having to author them individually.
I'll happily take a link to a few of those papers, if you can dig them out, as that sounds really interesting.
 