
Next-Gen PS5 & XSX |OT| Console tEch threaD

Status
Not open for further replies.
By the way, that demo (crocodile only) was barely running on an i7 with 50GB of RAM.
He did say there's a lot of optimization left to do (they're probably holding the full-detail model in RAM, likely uncompressed). The animation file should compress by ~98%.

If UE5 is doing something similar, the full-detail model might be stored compressed on the SSD, with only part of it streamed into RAM.

edit: also, it was unlikely to be using the full 50GB if the volumetric video is only 5GB.
 


I saw on Era someone pointing out that it may be Bluepoint's game, because of the colour blue and the tip (point) of the ice. But I don't know how reliable the person who tweeted that is, so we'll see.

Didn't Bluepoint already say they'd reveal their PS5 game, in the Wired article last year? So I feel Bluepoint is a given.
 

silent head

Member


Brief impressions from Mike Bithell on DualSense


Troy Baker :messenger_tears_of_joy:

 

Larryfox

Member
One interesting thing to me was Sony deciding to add a bay so people can use off-the-market SSDs, which have to be certified. I wonder why they did that, because an expansion drive isn't going to use all the I/O processing the internal one has the benefit of. They could've done what Microsoft is doing and just made a custom drive that has the same benefits as the internal one. Both should be pretty expensive. Sony has the benefit of market prices getting cheaper over time, but they're pretty high when first introduced, and Microsoft could make theirs cheaper if they open it up to more manufacturers or decide to just make it cheap themselves. I wonder if that's going to affect their console design, because of people's reluctance to open their console up. I'm pretty sure Sony patched in the option to use external storage on the PS4 because a lot of people just didn't want to open their console.
 

Handy Fake

Member
One interesting thing to me was Sony deciding to add a bay so people can use off-the-market SSDs, which have to be certified. I wonder why they did that, because an expansion drive isn't going to use all the I/O processing the internal one has the benefit of. They could've done what Microsoft is doing and just made a custom drive that has the same benefits as the internal one. Both should be pretty expensive. Sony has the benefit of market prices getting cheaper over time, but they're pretty high when first introduced, and Microsoft could make theirs cheaper if they open it up to more manufacturers or decide to just make it cheap themselves. I wonder if that's going to affect their console design, because of people's reluctance to open their console up. I'm pretty sure Sony patched in the option to use external storage on the PS4 because a lot of people just didn't want to open their console.
The information on the net is vague about this but, as I understand it, the PS5 has an expansion slot for another SSD. It needs to be certified by Sony, though, and it's unclear whether you can play games directly from it or whether you'd be required to move the game from the expansion SSD to the internal one to play it.

I may be wrong though, as there are conflicting accounts.
 

Thirty7ven

Banned
That Dictator though. He can like MS or hate Sony, that's fine; nobody should judge him for his own likes and tastes. But someone with a job like his should at least try to stay focused and neutral.

That's what bothers me. Sure be biased, we all are. But when your job is doing tech analysis? Leave the bias at home, always come with an open mind, be professional. Oh and don't do the whole "Oh my console of choice ain't ahead on this? Let me go back to being a PC only fanboy"
 

Radical_3d

Member
The information on the net is vague about this but, as I understand it, the PS5 has an expansion slot for another SSD. It needs to be certified by Sony, though, and it's unclear whether you can play games directly from it or whether you'd be required to move the game from the expansion SSD to the internal one to play it.

I may be wrong though, as there are conflicting accounts.
The thing is... if they don't duplicate all the silicon for the expansion bay it won't do it for games. If they duplicate it, how would that work with a PCIe with just 1/3 of the original lanes? I hope this matter gets cleared in June.
 

Handy Fake

Member
The thing is... if they don't duplicate all the silicon for the expansion bay it won't do it for games. If they duplicate it, how would that work with a PCIe with just 1/3 of the original lanes? I hope this matter gets cleared in June.
Aye, there was a wee discussion about this a good few hundred pages back. I'm honestly not sure.
I do wonder if perhaps there's architecture in place for the very fast shifting of full games from external to internal SSD to make up for the lack of IO seen within the architecture.
Bo_Hazem Bo_Hazem postulated that there could possibly be some on-chip cache RAM to mitigate any slowdown of streaming from a slower SSD IO but as you say, we'd need to see when the details emerge.
 
That's what bothers me. Sure be biased, we all are. But when your job is doing tech analysis? Leave the bias at home, always come with an open mind, be professional. Oh and don't do the whole "Oh my console of choice ain't ahead on this? Let me go back to being a PC only fanboy"
Yeah, it's quite exasperating, especially given how much I enjoy the stuff DF does. Whenever he's on and there's console talk, I just can't watch it. He always makes cheeky remarks or emphasizes how PCs do everything better. Dude, you like PC, we get it, but let's just look at consoles and what they bring to the table in a positive manner. Sure, criticize what they're not so good at, but take into account that they're a closed ecosystem, so you can't really compare them to PC and talk all the time about how PC does this or that better. You could just see the pain in his face talking about the UE5 demo running on the PS5. That's sad; he should be above that when talking in DF videos. In his spare time, vulturing around Era and certain Discord servers? Sure, talk about whatever you want and be an asshole. But on the job? I don't exactly think it's a positive thing.
 
Teraflops aren’t “fixed” on XSX. They are a theoretical peak on both systems.

The GPU frequency is fixed on XSX, but just frequency alone means nothing.

Any PC gamer will know that to utilise a GPU 100% you pretty much need a “burn test” benchmark running. Something pretty much designed to flip every transistor every clock-cycle, usually rendering nonsense or something pointless and mostly static.

Any PC gamer will also know that it’s easy to have a fixed high clock on something idling.
It’s only when a CPU/GPU is loaded up with real work to do that watts are consumed and turned into heat that needs to be dealt with.

For example, I can overclock my i7 to 5.0Ghz and it will sit there just fine. I can even run some games and it will be happy. However, if I run a stress/burn test like Prime95 or Linpack, within seconds it’s spiking in temperature and throttling back to deal with the heat. With the ability to throttle back disabled it instead crashes the entire system as the thermal protections kick in.
It only even gets close to its theoretical maximum FLOPS during a burn test, not when doing the tasks it generally does.

The quoted teraflop numbers for both systems are literally a hand calculation of a theoretical maximum that doesn't consider thermal constraints, because the cooling capability is unknown. The figures quoted for both are basically what you'd get if every single transistor was flipped every single clock tick, like the world's worst burn test.

There’s no such thing as a “fixed” 12 teraflops. A flop is a measurement of one kind of calculation a GPU does, relating to the programmable side rather than the fixed function parts. It depends on work-load, not on frequency.
To repeat, if it didn’t vary by work-load then a 5Ghz CPU would be the same temperature whether it was idling on the desktop or running Prime95. That is not the case. That is not how it works.
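That "hand calculation" is straightforward to reproduce. A minimal sketch of the theoretical-peak arithmetic, using the publicly announced CU counts and clocks (these are the marketing figures, not measured throughput):

```python
def peak_tflops(compute_units: int, clock_ghz: float, lanes_per_cu: int = 64) -> float:
    """Theoretical peak FP32 TFLOPS: shader lanes * 2 ops per cycle (a fused
    multiply-add counts as two FLOPs) * clock. This assumes every ALU does
    useful work every single cycle -- a burn-test condition that real game
    workloads never sustain."""
    return compute_units * lanes_per_cu * 2 * clock_ghz / 1000.0

xsx = peak_tflops(52, 1.825)  # 52 CUs at a fixed 1.825 GHz -> ~12.15 TF
ps5 = peak_tflops(36, 2.23)   # 36 CUs at a peak 2.23 GHz   -> ~10.28 TF
print(f"XSX: {xsx:.2f} TF, PS5: {ps5:.2f} TF")
```

Note that the clock enters the formula directly, but nothing in it says how often the ALUs are actually fed work; that is the workload-dependence the post is describing.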

The actual peak performance either console will be able to sustain is actually calculated differently.
Microsoft has taken the traditional route: set the clocks to match what they estimate to be a worst-case scenario in actual game code (not a 12TF burn test), a worst-case ambient temperature, and what their cooling system is capable of dissipating.

Game developers will have access to profiling and telemetry tools to see how close they are to the designed limits.
Any condition where this estimated peak is breached, like a clogged heatsink, a broken fan or crazy desert heat, would cause the game to crash if it happened during a computationally intense task.

PS5 on the other hand encourages developers to work to a maximum wattage, and sizes the fan to match that known maximum under the expected extreme of ambient heat they expect it to reasonably encounter.
Wattage is drawn by the actual calculation work being done, not by frequency alone, and wattage creates heat.
A worst-case burn test would cause it to drop frequency a "few" percent (to apparently reduce wattage 10%), and the game wouldn't crash. But this would only happen if the developer programmed the game in this way, and it's not determined by ambient temperature, as all PS5s need to be equal and deterministic.
This means it's technically easier to get the actual (unknown) maximum out of a PS5 at design time.

Both systems vary in the actual amount of work they do, neither will reach burn-test peaks.

Xbox’s power consumption (heat) varies as the work load increases at a fixed frequency, up to a point it crashes because of the heat, with an expected margin or headroom the developer works within to make sure it doesn’t happen in any reasonable scenario.

PS5's frequency varies as workload increases at a fixed power consumption (heat), up to the point where it reduces frequency to keep power consumption within the limit. The developer works within that limit to make sure this drop in work capability doesn't happen in any reasonable scenario, although if it does, it's less critical: there's no margin to be as concerned with, as merely peaking won't crash the game.

Both systems will be able to throttle back on frequency and power respectively when idling outside of a game.

In a Eurogamer article Cerny insisted it would take the likes of an intentionally programmed burn test that flips all transistors every clock tick to cause the GPU to downclock a few percent. He further suggested that, from what they're seeing, even when the GPU spends an entire 33ms frame doing actual work with no idle, it's not reducing its clocks.
He clarified that a race to idle condition wasn’t being used to keep the clocks artificially high, and that even under constant work they stayed high.

Remember, heat is generated by calculation work, not clock speed alone.

PS5’s targeting of watts instead of an estimated peak is about cooling and efficiency. It’s not about “boosting” up in the same way a mobile or PC CPU or GPU does. The people still insisting and implying PS5 works that way have failed to understand what is going on, and are likely intentionally trying to get people to believe it’s working as a mobile or PC boost clock does, for whatever strange reason.

Tables that compare the two GPUs, like the last one posted here with TF in one column and ROPs in another, are also disingenuous.

For the TF column, that number is a performance metric that has been calculated by multiplying some figures by the GPU core clock-speed.
For the ROP column the clock-speed is then disregarded, even though ROP performance is one of those fixed function parts of the GPU that scales with clock-speed.
The two numbers being provided aren’t comparable if the clock-speeds are different.
The same applies to the cache figures. A cache of size N at 2Ghz has twice the bandwidth of a cache at size N at 1Ghz.
This is what Cerny meant by a rising tide lifting all boats.
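The clock-scaling point can be made concrete with pixel fill rate. A rough sketch, assuming 64 ROPs on each GPU (the commonly reported figure for both, not an official spec):

```python
def pixel_fill_gpix(rops: int, clock_ghz: float) -> float:
    """Peak pixel fill rate in Gpixels/s: one pixel per ROP per cycle.
    Fixed-function throughput like this scales directly with clock."""
    return rops * clock_ghz

xsx_fill = pixel_fill_gpix(64, 1.825)  # ~116.8 Gpixels/s
ps5_fill = pixel_fill_gpix(64, 2.23)   # ~142.7 Gpixels/s
print(xsx_fill, ps5_fill)
```

Same ROP count, but the higher clock gives roughly 22% more fill rate; a table listing "64 ROPs vs 64 ROPs" with the clock stripped out hides exactly this, which is the "rising tide" effect described above. The same scaling applies to cache bandwidth.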

If you want to compare teraflops, then you need to compare the other areas of the GPU also with the clock-speed in mind. Failing to do that is to be misleading. Maybe even intentionally so, although I’m more inclined to believe the people making these kinds of comparisons just don’t understand what’s going on and how things really work.

Both the PS5 and XSX GPUs will be better than each other at different workloads. The XSX clearly has an advantage in outright theoretical compute ability, and therefore ray tracing ability, but how the rest of the system feeds these machines, how efficiently they're used, how well the different APIs are written, and how skilled the game and engine developers are will have far more of an impact on the end result.

For clarity, and it’s worth repeating, I’m impressed and really happy with the job both companies are doing. Microsoft is definitely back on the right track and fans of that system should be very happy. I’m personally more excited about PS5 and what its IO might bring to the table and will be buying that first, but I’ll also be very happy to spend my money on an XSX and will be doing exactly that, too. Microsoft deserve to see that focusing the device as a gaming machine makes them money again.
Both of these babies are impressive. They'll be closer in multi-platform performance than some people are suggesting (hoping?). It's going to be a great ~7 years.
Even PC is going to take a quantum leap in response to this. Everyone wins. Be happy!
 
Just to clarify in my previous wall of text that I can’t edit any more:

A worst-case burn test would cause it to drop frequency a "few" percent (to apparently reduce wattage 10%), and the game wouldn't crash.

This wasn’t very clear. It would only need to marginally reduce clocks to stay at the target wattage. It wouldn’t need to suddenly go to under the power target by 10%, but just maintain it.
Cerny insisted that reducing the clocks by a “few” percent equates to a drop in power consumption by 10%.
This seems about right from GPU overclocking. It can take a lot of extra power to maintain the highest attainable clock.

The way PS5 works is more analogous to modern GPU clocking where you target your overclock to sit at peak power draw but have a cooler sufficient that it never thermally throttles. It’s what you want to aim for. Power draw is actual useful work done.
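The "few percent of clock buys ~10% of power" claim is consistent with the usual dynamic-power rule of thumb, P ∝ f·V², with voltage scaled roughly in proportion to frequency (so power goes roughly with the cube of clock). A sketch under that assumption; the exponent is the textbook approximation, not a measured PS5 curve:

```python
def relative_power(clock_fraction: float, exponent: float = 3.0) -> float:
    """Dynamic power relative to peak, modeling P ~ f * V^2 with voltage
    scaling linearly with frequency. A rule-of-thumb model only."""
    return clock_fraction ** exponent

# A ~3.5% downclock under this model:
print(relative_power(0.965))  # ~0.90, i.e. roughly a 10% power saving
```

This also explains the overclocking observation: the last few percent of clock are the most power-expensive, so giving them up buys a disproportionate amount of headroom.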
 

DaGwaphics

Member
Yeah, I figured as much from my experience building PCs and watching benchmarking and overclocking videos from LTT, GN and the like. Even under heavy cooling, at a certain point clocks drop under thermal throttling. That's why I'm curious about the "fixed" claim: either they have some kind of crazy cooling, or the chip they're using is more efficient / underutilized so it won't thermal throttle. And it feels wasteful for a system to run at sustained clocks capable of producing 12 TF of graphical performance even if you're just watching YouTube. Just my 2 cents.

Consoles traditionally provide a cooling solution capable of sustaining core clocks in all situations. This isn't new; the PS4 has locked clocks in game mode. Thermal throttling is unacceptable in a console format: the point of a console is to guarantee baseline performance. You can't do that if the system is responding to ambient temps (which is why Sony made a point that the variable frequency on PS5 is not temp-based). This isn't a new feature for XSX; this is the way every console has been built, from the 60s onward.

Put a console in a really bad spot (a tight shelf with no airflow) and the system will shut down before it throttles.
 
The thing is... if they don't duplicate all the silicon for the expansion bay it won't do it for games. If they duplicate it, how would that work with a PCIe with just 1/3 of the original lanes? I hope this matter gets cleared in June.

Why (and how?) would the m.2 slot be using a third of the custom flash controller's 4x PCIe 4.0 lanes?

If you’re referring to the levels of priority, this was explained in Cerny’s ASMR sermon.
The expansion NVMe gets routed through the same I/O complex on the main chip, which can act as a go-between so the NVMe can be seen as something with the extra levels of priority. Doing this with fewer actual levels of priority requires the NVMe drive to effectively saturate the PCIe 4.0 bandwidth and run at around 7.5GB/s, instead of the 5.5GB/s the built-in flash storage and custom controller can get away with.
 

Handy Fake

Member
Why (and how?) would the m.2 slot be using a third of the custom flash controller's 4x PCIe 4.0 lanes?

If you’re referring to the levels of priority, this was explained in Cerny’s ASMR sermon.
The expansion NVMe gets routed through the same I/O complex on the main chip, which can act as a go-between so the NVMe can be seen as something with the extra levels of priority. Doing this with fewer actual levels of priority requires the NVMe drive to effectively saturate the PCIe 4.0 bandwidth and run at around 7.5GB/s, instead of the 5.5GB/s the built-in flash storage and custom controller can get away with.
So the expansion "slot" (as much as we know of it, or if it exists) has a direct connection to the I/O? I think folk are assuming that it will be slowed by the interface, unless the interface is also proprietary?
I'm really interested to see what they've done with it.
 

DaGwaphics

Member
You saw how they changed the translation to fit their narrative?



Basically it was said that if we drop the fidelity, it can run on a slower SSD no problem. Meaning the engine is scalable. Who would have thought UE5 is scalable?

I'm shocked 😂😂

Keep in mind that they are saying 2 pixels per triangle is just MB/s. I don't think the math there flows the way you'd like. Not that I'd pay much attention to this type of "leak". We'll see where the systems stack up in head-to-heads soon enough.
 
Consoles traditionally provide a cooling solution capable of sustaining core clocks in all situations. This isn't new; the PS4 has locked clocks in game mode. Thermal throttling is unacceptable in a console format: the point of a console is to guarantee baseline performance. You can't do that if the system is responding to ambient temps (which is why Sony made a point that the variable frequency on PS5 is not temp-based). This isn't a new feature for XSX; this is the way every console has been built, from the 60s onward.

Put a console in a really bad spot (a tight shelf with no airflow) and the system will shut down before it throttles.

This is a naive or partial view. Core clock speed isn’t a measure of actual work done. You could program a game for PS4 that causes the power draw/wattage/work done to exceed design limitations, overheat and crash. Even if the console was somewhere cool. Even with fixed clocks.

Fixed clocks do not mean fixed maximum work load. Anyone that has overclocked a modern GPU/CPU knows that you should target maximum power draw with enough cooling to keep it there.
 

pawel86ck

Banned
I wonder how big this UE5 tech demo is🤔. If it's over 100 GB, can we really expect developers to want to use such detailed assets in real games? Some games are already over 100 GB, and the PS5 SSD will be only 800 GB.
 

Handy Fake

Member
This is a naive or partial view. Core clock speed isn’t a measure of actual work done. You could program a game for PS4 that causes the power draw/wattage/work done to exceed design limitations, overheat and crash. Even if the console was somewhere cool. Even with fixed clocks.

Fixed clocks do not mean fixed maximum work load. Anyone that has overclocked a modern GPU/CPU knows that you should target maximum power draw with enough cooling to keep it there.
That's what Linus TT does with that Cinebench test isn't it? Max out the cores to gauge temperature and throttling?
 
Keep in mind that they are saying 2 pixels per triangle is just MB/s. I don't think the math there flows the way you'd like. Not that I'd pay much attention to this type of "leak". We'll see where the systems stack up in head-to-heads soon enough.
And PS5 would need GB/s to draw 1 pixel per triangle and stream 8K assets on top of it all.
It scales with SSD speed.
 
I wonder how big this UE5 tech demo is🤔. If it's over 100 GB, can we really expect developers to want to use such detailed assets in real games? Some games are already over 100 GB, and the PS5 SSD will be only 800 GB.

It's not like you're getting a lot more from the competition's SSD either, so it will be a problem for both. But this does make Sony's claims of no load times more viable.
 
What I think is that both PC and Xbox are going to anchor multiplats. PCs might surpass that 4-5 years later, closing on or surpassing PS5's intelligent architecture. Either way, the PS5 is going to benefit from its efficient SSD, I/O and GPU cache scrubbers to make the most of it, while slower systems are still going to deal with bottlenecks and with precautionary work beyond what's necessary in the player's view, like keeping extra assets loaded and wasting power budget to avoid going totally black/empty/bland when you turn around fast.

Not so sure about that timeframe. I'd say within a year or two after next-gen launch you'll be able to build a PC that's better than these consoles in all regards.





I'm not sure what they're doing there. Closest thing to a paper is this


Below's a timestamp to where they talk about animation; they say the crocodile is sourced from a low-poly model, iirc. But it seems they're able to handle animations.


edit:
Watched it; he says the uncompressed video is around 5GB, but that once compression is used in the future it should fall by 85-98%.

98% would give 100MB for the 20min animation.
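The arithmetic for those quoted figures checks out; a quick sketch covering both ends of the stated 85-98% range:

```python
uncompressed_gb = 5.0       # stated size of the uncompressed volumetric video
for ratio in (0.85, 0.98):  # the quoted compression range
    remaining_mb = uncompressed_gb * (1 - ratio) * 1000
    print(f"{ratio:.0%} compression -> {remaining_mb:.0f} MB")
# 85% leaves ~750 MB; 98% leaves ~100 MB for the 20-minute animation.
```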


I hope we will get there some day. However, I haven't seen any structural integrity/physics there: things that collapse due to missing parts and such.
So even though they might get animations done, there's still a lot of stuff missing to compare it to current polygon-based engines.
 

DaGwaphics

Member
This is a naive or partial view. Core clock speed isn’t a measure of actual work done. You could program a game for PS4 that causes the power draw/wattage/work done to exceed design limitations, overheat and crash. Even if the console was somewhere cool. Even with fixed clocks.

Fixed clocks do not mean fixed maximum work load. Anyone that has overclocked a modern GPU/CPU knows that you should target maximum power draw with enough cooling to keep it there.

You missed the point of my post completely. A bad coder could absolutely crash a console in a minute or two; the point was that, to guarantee performance, the system won't thermal throttle (it'll just shut down to save itself).

If a console were thermally sensitive, the devs could never utilize the system to the fullest; substantial thermal headroom would need to be preserved at all times. It's very basic. The max power envelope is known to developers, so that's not an unknown for them to deal with (if they're crashing dev kits, they will scale back).
 
So the expansion "slot" (as much as we know of it or if it exists) has direct connection to the IO? I think folk are assuming that it will be slowed by the interface, unless the interface is also proprietary?
I'm really interested to see what they've done with it.

It will just be connected over a pretty standard PCIe4.0 bus with enough lanes to support 7.5GB/s to the main chip.

I know the point you were making, though. The proprietary secret sauce in the custom flash controller can’t be used by the expandable m.2 slot. That’s why you need a 7.5GB/s NVMe there, instead of just a 5.5GB/s one. You need an extra 2GB/s to close the gap to the custom flash controller in their testing presumably.
 

Uzupedro

Member
A lot of things (if not everything) in that stream were obvious, but here we are, watching Sweeney try to explain this stuff to everyone every day (even some Sony fans are saying some bullshit about the tech demo; what is wrong with these people?).
What? It's that obvious?
 

J_Gamer.exe

Member
So as the post by insane metal a good lot of pages ago... from a third party dev..

It is easy. It is useless to have 12 boxes if they do not fit through the door all together.

You have 12 boxes to fill. So you can't pass all the boxes at once. You must decide which boxes will pass and which will not. That is handled by a coordinator. And the coordinator tells the delivery man which boxes to take.

Mrs. XSX wants to make the move as soon as possible, but it turns out that only 8 boxes can fit on the door at a time. The coordinator is fast, and also uses a box compressor so that 10 boxes can go through instead of 8, but there are several drawbacks. The compressor can only compress the red boxes, and the coordinator also has to coordinate many other things, street traffic, people passing through the door, the space in the room where the boxes are stored, the noise of neighbors who distract the delivery man, search and select what the boxes are filled with, etc. Also, the delivery man is not so fast and is very distracted filling and transporting boxes. So it passes the 10 boxes (not 12) at a certain speed "1x". The lady demands that the boxes arrive, but they do not arrive as quickly as the lady would like, since although she has many boxes, the system is not capable of managing all of them properly.

On the other hand we have Mrs. PS5. You only have 10 boxes to fill. But its door is twice as big, enough for all its boxes to enter at once and there is room for people to also enter and exit through the door. Furthermore, the coordinator has the ability to automatically discard unnecessary boxes, so he doesn't waste time checking boxes that are not going to be used. In addition, anyone in the environment can do the job of the coordinator or the delivery man (even at the same time). The compressor is not that new, but it can compress all boxes, whether they are red or blue. All. And the delivery man is more than twice as fast and manages to pass the boxes at the speed of "2.5x" in the worst case, and "5x" on many occasions. In addition, if someone is left free or without work, they can help to distribute boxes with the delivery man or coordinate work with the coordinator. All this makes this removal company the most efficient ever seen and that the number of boxes available is irrelevant. For that moving system, 12 boxes are not needed, with 10 you can do the same job (and more or better in some cases). Having more boxes would only make the price of the move more expensive without needing any of it.

Of course, having more boxes available always helps to advertise yourself as a top removal company compared to the competition, even if your removal company is normal and ordinary. But it is only that, a smokescreen.

That does not mean that XSX is bad, far from it, it is an extraordinary machine. But PS5 has an efficiency NEVER seen before.

It is true that on PC there are more powerful cards or more powerful systems, but you know that these cards are never used properly, they draw raw power, but they are never used. It is the scourge of PC, an ecosystem that is too varied and unusable. In addition to exorbitant prices.

And I've always been a PCLover, but things as they are, what I've seen on PS5 I only remember something similar when 3DFX and its Glide came out. Its astonishing speed leaves you speechless.

This lines up with what we have heard from a lot of whispers coming out.

The Crytek dev didn't just say a few words, he gave massive in-depth answers, and he was dismissed by a lot of hardcore Xbox fans when the interview was pulled for 'personal reasons'...

It's looking like this is the picture coming out. The Xbox is very powerful, but the PS5 is revolutionary: it has made I/O breakthroughs and has gone for insane speed.



Xbox has been going with the most powerful console narrative, and yes, technically it is.

I wonder if Sony has been happy for them to push this and let it be absorbed by all?

Let's face it, it has whipped the hardcore up into a frenzy, and when you read the Twitter and YouTube comments, a lot were believing the Dealers of this world and thinking the XSX should be WAY more powerful.


I wonder if Sony was OK with this happening, if they knew what they had in their locker and that real-world performance would be close, equal, or in some areas even a lot in their favour. When that reality drops, it could be very bad PR for Xbox, with the ultra fans having meltdowns.

After all, the expectation built up now is that the world's most powerful console has to run better, or there will be chaos. I do wonder if Xbox has built up too much hype on this, riding the most-powerful wave, but now has to back it up, and the reality is not quite what a lot were told. In some areas, like a bit more resolution and a bit stronger ray tracing, yes, it can have a probably small lead, but that's nothing massive or that noticeable.

The PS5 seems like it should easily show how much faster it is at loading and streaming, which seems critical for the UE5 demo graphics and the flying section, for example. If the Xbox can't run things at the same settings, that's terrible optics for the world's most powerful console.

The quoted dev opinion says 2.5-5x the speed. I'm not sure if that's overall I/O, but I'd expect it to be, as the SSD alone is 2x, and factoring in the dedicated hardware it should be more, IMO.

We have seen some unbelievable attempts at damage control, the best being that laptop. Turns out it's not running the demo but playing the damn video; you couldn't make this stuff up. 😃

I think Xbox has built up an unrealistic power angle that is going to be hard to live up to, with expected gaps that are completely unrealistic. Add in the PS5 being ahead in some key areas and I'd expect a lot of meltdowns to come.
 

Shmunter

Member
This is a naive or partial view. Core clock speed isn’t a measure of actual work done. You could program a game for PS4 that causes the power draw/wattage/work done to exceed design limitations, overheat and crash. Even if the console was somewhere cool. Even with fixed clocks.

Fixed clocks do not mean fixed maximum work load. Anyone that has overclocked a modern GPU/CPU knows that you should target maximum power draw with enough cooling to keep it there.
True. The fan speed has been the end user's indicator of load. Loud fan? Shit's being hammered.

From my limited understanding, this is what Sony is attempting to mitigate with the PS5. The fan in the PS5 should spin consistently, with only imperceptible variance due to ambient conditions. They aim to keep the power constant and adjust the clock, instead of locking the clocks and letting the power fluctuate. The end result is the same for software.
 

Felix Argyle

Neo Member
Well yeah, we will be seeing many 30fps games next gen, as devs will want to push visuals hard, I assume. It's always the dev's choice.

If we're going by PC High vs Ultra settings, the visual difference is minimal but the fps boost is massive. The devs are pretty weird if they choose a small, almost unnoticeable visual difference over 60fps.
 
You missed the point of my post completely. A bad coder could absolutely crash a console in a minute or two; the point was that, to guarantee performance, the system won't thermal throttle (it'll just shut down to save itself).

If a console were thermally sensitive, the devs could never utilize the system to the fullest; substantial thermal headroom would need to be preserved at all times. It's very basic. The max power envelope is known to developers, so that's not an unknown for them to deal with (if they're crashing dev kits, they will scale back).

Yes, you're right, this is correct. My point is that "sustaining clocks" is kind of meaningless in the context of actual work done.

Calculation costs watts, watts produce heat. By targeting to run at maximum watts, you’re targeting to run at maximum calculation rate, regardless of clock speed.

This is how modern GPU overclocking is done by people that want to eliminate boost and have a repeatable stable level of performance.
You want to be power draw limited (maximum calculation per unit of time) while having a good enough cooling solution to never thermally throttle.

Clock speed doesn’t equal calculation done per unit of time either. Just as a GPU can idle at its peak clock and generate no heat/do no actual work.

Extreme examples help. You could have a fixed 2Ghz clock doing a very computationally cheap job that draws a small amount of power and makes a small amount of heat.
You could then have a variable 2Ghz peak clock that drops to 1.5Ghz to keep at maximum power draw while doing a computationally very expensive job, and the second one would be crunching more FLOPS (per unit of time, which is what FLOPS is) than the first example. Despite the clocks not only dropping but being lower.
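The extreme example above, in numbers: effective throughput is clock times the fraction of cycles the ALUs actually do math, so a lower, power-limited clock on a dense workload can crunch more FLOPS than a higher clock on a light one. A toy illustration (the utilization figures here are made up purely for the example):

```python
def effective_gflops_per_lane(clock_ghz: float, alu_utilization: float) -> float:
    """Work actually done per shader lane: 2 ops per cycle (FMA) * clock *
    fraction of cycles doing useful math. Utilization values are illustrative."""
    return 2 * clock_ghz * alu_utilization

cheap_job = effective_gflops_per_lane(2.0, 0.20)  # fixed 2 GHz, light workload
heavy_job = effective_gflops_per_lane(1.5, 0.90)  # downclocked, dense workload
print(cheap_job, heavy_job)  # 0.8 vs 2.7: the lower clock does more work
```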

Useful work done is watts of power draw, which then becomes heat. It’s not clock-speed which says nothing about calculation being done.

Some people clearly think clock speed is a measure of work done, or that TF can be “fixed”. My only point is that it’s demonstrably wrong, as anyone used to modern CPU/GPU overclocking will know.

It’s all about power draw. That’s the measure of numbers crunched.
 

Andodalf

Banned
It will just be connected over a pretty standard PCIe4.0 bus with enough lanes to support 7.5GB/s to the main chip.

I know the point you were making, though. The proprietary secret sauce in the custom flash controller can’t be used by the expandable m.2 slot. That’s why you need a 7.5GB/s NVMe there, instead of just a 5.5GB/s one. You need an extra 2GB/s to close the gap to the custom flash controller in their testing presumably.

I don't think you'll need 7.5 at all. First off, I think 7.0 is the max; in the presentation, 7.0 was the number Cerny showed for a completely saturated PCIe 4.0, which he contrasted with a maxed-out PCIe 3.0 and the PS5. It doesn't sound like 7.0 is needed either, nor any specific number; instead they're going to verify individual drives.
 
After all, the expectation built up now is that the world's most powerful console has to run better, or there will be chaos.

The big issue that I'm seeing is Dealer's comments. He's basically making the claim that the XSX should run games at double the framerate of the PS5.

It's very dangerous for him to set such high expectations since he runs a YouTube tech channel similar to NX gamer.

If the results are nowhere near that, there will be a lot of meltdowns. And I mean really, really bad ones.

In my opinion he should put his bias aside and look at the situation objectively instead of letting his preference cloud his judgement.
 