
Xbox Velocity Architecture - 100 GB is instantly accessible by the developer through a custom hardware decompression block

Deto

Banned
The Xbox One had a BIOS update after launch. No way the PS5 is increasing any clocks. They already need to use AMD SmartShift to hit 2.23GHz on the GPU.


It will not pass 2.23GHz, regardless of the cooling used.
Cerny himself said that in the presentation.

This is likely to be the new maximum clock for RDNA 2

With that, it's not very difficult to have a hunch about the top RDNA 2 part; here it goes:

~52 CUs @ ~2.2GHz, ~15-16TF RDNA 2

Above that, only if Nvidia is doing very well with Ampere.
 
Thank you for humoring me with my questions. I think I see now where you and I diverge in our thinking...

1) I have no doubt that SFS provides an improvement, but this isn't something exclusive to DX; other APIs have it under a different name/method. Slightly more or less efficient maybe, but 2x or 3x? Not a chance in hell. When the same guy says "having said that, PRT has existed before", you can bet your bottom dollar that's what's being referred to in that multiplier.

I think this just boils down to an opinion, and we'll just have to see as time goes on if Sampler Feedback will have a big impact or not.

3) Yes, I suspect that those games were loading things into memory that weren't needed all the time. In fact, few games make use of it, especially with the old HDDs, because you can run into problems loading data into memory with the GPU stalling (framerate issues). Just like the 'silly' example given for the 10x multiplier, this is just a case of how well they handled data management to begin with.
RAGE made good use of it in its own way, Doom 4 did in some way, but a large majority of Xbox One games didn't. Doesn't mean it wasn't possible in some capacity though.

I think you might be confusing "megatextures" with texture mipmap streaming. They are related technologies that I believe both make use of PRT hardware, but they have somewhat different goals. One is designed to cover the whole of your terrain with a massive texture; the other is to save memory by evicting unneeded MIP levels for all your textures from memory.

The "megatextures" concept I think had some minor success outside of idtech engines but not much. You're right in that it is not that common. On the other hand, texture streaming I think is near ubiquitous in modern game engines. Both Unreal and Unity make use of texture streaming, letting you pretty easily adjust the size of your streaming pool in the editor. So the idea that "few games make use of it" does not ring true.

Just quickly skimming their docs, Unreal's example shows that texture streaming saves about 50% of memory and Unity's example about 20-30%. Also, the video on Sampler Feedback from earlier in this thread sounds a lot like SF is targeted at enhancing texture streaming (referred to as a "typical thing to do" in the video).

This difference is very clarifying for our disagreement. You think that PRT and texture streaming are broadly available but seldom used. I believe they are frequently used (and I'm pretty sure that's correct). So you and I both see "2-3X better than an Xbox One X game" and you think "...without texture streaming" while I think "...with texture streaming".

I guess we'll find out who is right! I think if either one of us is right, it means that SF will enable one or both consoles to have 2-3X the bandwidth/memory efficiency that we typically see today. Pretty cool stuff!
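To put some rough numbers on the memory-saving side of this (my own back-of-the-envelope, not taken from the Unreal or Unity docs): each mip level down has a quarter of the texels, so evicting just the top mip of a 4K texture frees roughly three quarters of its memory. A minimal sketch, assuming a BC1-style 0.5 bytes per texel:

```python
# Rough sketch with made-up but representative numbers (4K texture, BC1-style
# 0.5 bytes/texel): how much a mip chain shrinks when the largest mips are evicted.
def mip_chain_bytes(width, height, bytes_per_texel=0.5, dropped_top_mips=0):
    """Total memory for a mip chain, optionally with the largest mips evicted."""
    total = 0.0
    level = 0
    while width >= 1 and height >= 1:
        if level >= dropped_top_mips:
            total += width * height * bytes_per_texel
        width //= 2
        height //= 2
        level += 1
    return total

full = mip_chain_bytes(4096, 4096)
streamed = mip_chain_bytes(4096, 4096, dropped_top_mips=1)  # top mip not resident
print(f"full chain:  {full / 2**20:.1f} MiB")
print(f"minus mip 0: {streamed / 2**20:.1f} MiB ({streamed / full:.0%} of full)")
```

That's why streaming pools can report such large savings: most of a texture's footprint is in its top one or two mip levels, which are only needed up close.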
 

Ascend

Member
When an engine already handles indexing the assets in a scene or precomputes the atlas, SFS will not improve it by much.
Indexing assets in a scene means you're pre-loading assets in RAM before they are required for rendering. The same applies to a texture atlas with tiles. SFS is loading in the assets after confirmation that they will be rendered immediately, not that they might be rendered in a few seconds.
 
Indexing assets in a scene means you're pre-loading assets in RAM before they are required for rendering. The same applies to a texture atlas with tiles. SFS is loading in the assets after confirmation that they will be rendered immediately, not that they might be rendered in a few seconds.
It's pulling what's in the view frustum, ready to be rendered. I probably shouldn't have said "scene" but "frame".
 

NullZ3r0

Banned
No desire for a debate. But this post makes you look like you don't know what you're talking about because you're too caught up in fanboy wars.
Quick history lesson (minus the fanboy nonsense)

A. There was a whole DRM fiasco that Microsoft was dealing with.

B. When announced, I believe the Xbox One was rated at 1.23 TFLOPS (then upclocked to 853MHz for 1.31 TFLOPS).

C. Was $100 more expensive.

So if you do simple math, at the time the specs were announced the PS4 had roughly a 50% faster GPU with 8 ACEs (later reduced to ~40% after Microsoft's clock increase), plus a better memory/bandwidth solution (8GB GDDR5), for $100 less.

It isn't just the .5 TFLOPS, it doesn't work that way. For your analogy to work, it would mean that at announcement the XSX would be ~15.4 TFLOPS with a far better memory/bandwidth solution to take advantage of it.
None of this is the case; next gen is faaaaaar more competitive. IMHO next gen isn't about console wars, it's about content wars: exclusives and ecosystem are what matter.
It does work that way. Percentages are just a statistical measurement and you know what they say about statistics...

A 2 TFLOP difference is greater than a .5 TFLOP difference. It will show in the games.
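For what it's worth, here is the percentage math both sides keep gesturing at, using the commonly quoted TFLOPS figures; which console you treat as the baseline changes the headline number, which is half the argument:

```python
# Percentage gaps using the commonly quoted figures:
# Xbox One 1.31 (post-upclock) vs PS4 1.84, and PS5 10.28 vs Series X 12.15.
def gap(low, high):
    return {
        "absolute TF": round(high - low, 2),
        "advantage of the faster box": f"+{(high / low - 1) * 100:.0f}%",
        "deficit of the slower box": f"-{(1 - low / high) * 100:.0f}%",
    }

print("Last gen:", gap(1.31, 1.84))    # ~+40% for PS4 (or ~-29% for Xbox One)
print("Next gen:", gap(10.28, 12.15))  # ~+18% for XSX (or ~-15% for PS5)
```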
 
They already need to use AMD SmartShift to hit 2.23GHz on the GPU.

That's not the purpose of AMD SmartShift. They can already hit those clocks without SmartShift. What SmartShift does is, if the GPU is being taxed and requires more power, shift some power from the CPU to the GPU.

[Image: sony1.png]

[Image: AMD-SmartShift-explainer.jpg]


I also believe that Digital Foundry released some information stating that both the CPU and the GPU can run at their maximum clocks simultaneously as long as the fixed power budget is being met.
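A toy sketch of how a fixed power budget with SmartShift-style shifting behaves; all the wattage numbers are invented for illustration, since Sony hasn't published the real budget, and giving the GPU priority is my assumption for the example (Cerny only described shifting within a fixed budget):

```python
# Toy model of a fixed-power-budget console with SmartShift-style shifting.
# All numbers are invented; this is only the general shape of the idea.
TOTAL_BUDGET_W = 200.0              # hypothetical SoC power budget
CPU_MAX_W, GPU_MAX_W = 60.0, 160.0  # hypothetical power draw at max clocks

def allocate(cpu_demand_w, gpu_demand_w):
    """Return (cpu_w, gpu_w) after shifting power within the fixed budget.
    The GPU is given priority here purely for the sake of the example."""
    if cpu_demand_w + gpu_demand_w <= TOTAL_BUDGET_W:
        return cpu_demand_w, gpu_demand_w      # both run at their demanded clocks
    gpu_w = min(gpu_demand_w, GPU_MAX_W)
    cpu_w = max(TOTAL_BUDGET_W - gpu_w, 0.0)   # CPU gets whatever is left
    return cpu_w, gpu_w

print(allocate(50, 140))   # light combined load: no shifting needed -> (50, 140)
print(allocate(60, 160))   # both maxed: CPU power is clawed back  -> (40.0, 160.0)
```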
 

Ascend

Member
It's pulling what's in the view frustum, ready to be rendered. I probably shouldn't have said "scene" but "frame".
Are you assuming that all that data is already in RAM for the GPU to render it? Because that's where the main difference is.
With HDDs, you had to keep all the data you might need for the next 30 seconds in RAM, because the HDD was too slow to load it the moment it entered the view frustum.
With the SSD, you don't need to keep as much data in RAM, because you can load it so much faster. Rather than the HDD's 30 seconds' worth of data sitting in RAM, you can keep one, or three, or anyway less than 10 seconds' worth (hard to know exactly how much).
But MS basically flipped this on its head, saying only data that's in the view frustum will be loaded from the SSD. I'm sure it's not exactly like that, because you wouldn't need RAM in that case, but it makes the point of how SFS is supposed to work.
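Rough back-of-the-envelope on that point; the asset-consumption rate and the prefetch windows below are my own illustrative guesses, not figures from either vendor:

```python
# How much RAM goes to "just in case" data when storage forces you to prefetch
# far ahead. Both the consumption rate and the windows are illustrative guesses.
def resident_gb(new_asset_gb_per_s, prefetch_window_s):
    """RAM spent holding assets you *might* need within the prefetch window."""
    return new_asset_gb_per_s * prefetch_window_s

scene_consumption = 0.3  # GB of genuinely new assets a scene pulls in per second (guess)
print("HDD (~30 s prefetch window):", resident_gb(scene_consumption, 30), "GB resident")
print("SSD (~2 s prefetch window): ", resident_gb(scene_consumption, 2), "GB resident")
```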
 
It will show in the games.

I think the real question is if people will even notice the difference without Digital Foundry comparison.

Like, for example, if I game at 1080P 30FPS on my monitor, I definitely notice the upgrade to 1440P 60FPS. These consoles will output at a much higher resolution than 1080P. I don't know if the extra 2 TFs is enough to make the difference noticeable.
 
Are you assuming that all that data is already in RAM for the GPU to render it? Because that's where the main difference is.
With HDDs, you had to keep all the data you might need for the next 30 seconds in RAM, because the HDD was too slow to load it the moment it entered the view frustum.
With the SSD, you don't need to keep as much data in RAM, because you can load it so much faster. Rather than the HDD's 30 seconds' worth of data sitting in RAM, you can keep one, or three, or anyway less than 10 seconds' worth (hard to know exactly how much).
But MS basically flipped this on its head, saying only data that's in the view frustum will be loaded from the SSD. I'm sure it's not exactly like that, because you wouldn't need RAM in that case, but it makes the point of how SFS is supposed to work.

No, I expect them to be on disk. For the caller of the texture, they read through RAM, then pull from disk (they do not know whether RAM already contains the data). In the worst-case scenario you need to pull in all pages from disk, and the way an engine will typically handle this is by mapping texture indexes to pages on disk. The page size is usually tuned to limit excessive overfetching, and maps/atlases can be optimized so that textures commonly rendered together are stored in the same page. There can still be overfetching where we draw a partial page at, say, an edge, but this is nowhere near a 50-75% miss if your tech / pre-optimization is working correctly.

SFS will reduce the need for an engine developer to care about this stuff, but it's not going to magically improve performance in all cases. Remember, a lot of pages are in RAM anyway on the next frame update. You always still need RAM though! PS5 can write directly to VRAM, vs a PC bouncing through system RAM first. Remember the XSX has VRAM all the way (well, except for a small bit...) but needs to engage the CPU more vs dedicated hardware.
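A minimal sketch of that "texture index -> pages on disk" bookkeeping with a read-through RAM cache. The page size, ids, and the disk-read stub are all hypothetical; this is the general shape of the scheme, not any particular engine's code:

```python
# Read-through page cache: callers ask for a texture's pages and never need to
# know whether those pages were already resident in RAM or had to hit the disk.
PAGE_SIZE = 64 * 1024  # 64 KiB, the classic PRT tile size

disk_page_table = {            # texture id -> pages holding its tiles on disk
    "rock_albedo": [101, 102, 103, 104],
    "rock_normal": [105, 106, 107, 108],
}
ram_cache = {}                 # page id -> bytes already resident in RAM

def read_page_from_disk(page_id):
    # Placeholder for the actual streaming I/O.
    return bytes(PAGE_SIZE)

def fetch_texture_pages(texture_id):
    """Return the pages for a texture, pulling from disk only on a cache miss."""
    resident = []
    for page_id in disk_page_table[texture_id]:
        if page_id not in ram_cache:              # miss -> stream the page in
            ram_cache[page_id] = read_page_from_disk(page_id)
        resident.append(ram_cache[page_id])
    return resident

fetch_texture_pages("rock_albedo")   # first frame: 4 disk reads
fetch_texture_pages("rock_albedo")   # next frame: served entirely from RAM
```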
 

phil_t98

#SonyToo
That's not the purpose of AMD SmartShift. They can already hit those clocks without SmartShift. What SmartShift does is, if the GPU is being taxed and requires more power, shift some power from the CPU to the GPU.

[Image: sony1.png]

[Image: AMD-SmartShift-explainer.jpg]


I also believe that Digital Foundry released some information stating that both the CPU and the GPU can run at their maximum clocks simultaneously as long as the fixed power budget is being met.
Why would they claw back power from one when the other is being taxed? I mean, what if games are heavily CPU and GPU intensive? An example from this gen was Assassin's Creed Unity (I think it was Unity), which heavily taxed both GPU and CPU. Would games suffer from power being cut from the GPU, sacrificing itself for the CPU, or vice versa?
 
I think the real question is if people will even notice the difference without Digital Foundry comparison.

Like, for example, if I game at 1080P 30FPS on my monitor, I definitely notice the upgrade to 1440P 60FPS. These consoles will output at a much higher resolution than 1080P. I don't know if the extra 2 TFs is enough to make the difference noticeable.

The difference between RDR2 on Pro vs One X was very noticeable to me. I know that's not a perfect comparison, as 4.2 vs 6 is more of a gap, percentage-wise, than 10.28 vs 12.1.

But I think it will be a case of a lot of people saying there is a difference vs a lot of people saying there isn't one, regardless of whether one truly exists.
 
No, I expect them to be on disk. For the caller of the texture, they read through RAM, then pull from disk (they do not know whether RAM already contains the data). In the worst-case scenario you need to pull in all pages from disk, and the way an engine will typically handle this is by mapping texture indexes to pages on disk. The page size is usually tuned to limit excessive overfetching, and maps/atlases can be optimized so that textures commonly rendered together are stored in the same page. There can still be overfetching where we draw a partial page at, say, an edge, but this is nowhere near a 50-75% miss if your tech / pre-optimization is working correctly.

How does the index inform you which mipmap level is needed?
 
How does the index inform you which mipmap level is needed?
It's still the same index; you have a different set of pages for lower-res tiles. The mipmap is chosen by the engine for that index (based on whatever it wants: view distance, or on initial load to speed things up).

Here's a decent article on a similar (much older) variation. (It has a feedback buffer in order to determine what to load).
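For illustration, a sketch of that "engine picks the mip for the index" approach using a plain view-distance heuristic; the constants are arbitrary, the point is that the choice happens before sampling, with no feedback from the GPU about what was actually used:

```python
# Heuristic mip selection: pick a mip from view distance alone.
# The distance-to-mip mapping is arbitrary; real engines tune this per project.
import math

def heuristic_mip(distance_m, texture_size=4096):
    """Higher mip number = lower resolution; mip 0 is the full-res level."""
    max_mip = int(math.log2(texture_size))
    # Assume full resolution is only wanted when the camera is within ~1 m.
    mip = max(0, int(math.log2(max(distance_m, 1.0))))
    return min(mip, max_mip)

for d in (0.5, 2, 8, 50, 400):
    print(f"distance {d:>5} m -> request mip {heuristic_mip(d)}")
```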

 

FireFly

Member
It will not pass 2.23GHz, regardless of the cooling used.
Cerny himself said that in the presentation.

This is likely to be the new maximum clock for RDNA 2

With that, it's not very difficult to have a hunch about the top RDNA 2 part; here it goes:

~52 CUs @ ~2.2GHz, ~15-16TF RDNA 2

Above that, only if Nvidia is doing very well with Ampere.
In that case it would only be about 300 mm^2 in size, so wouldn't exactly qualify as "big navi". I think we should expect 500 mm^2.
 
The mipmap is chosen by the engine for that index (based on whatever it wants: view distance, or on initial load to speed things up)

This is the only part that Sampler Feedback addresses. Microsoft's claim is that, with feedback from texture sampling, your decisions about which mipmap level to load improve enough to cut the memory and bandwidth impact of streaming by 2-3X. The rest of what you're talking about isn't really what SF is trying to solve.
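A minimal sketch of what a feedback-driven streamer would do with that sampled-mip information. The feedback map here is faked by hand; on real hardware it would be written back by the texture units, which is the new bit being claimed. Mip numbering follows the usual convention (0 = finest):

```python
# Feedback-driven residency: the streamer only loads what was actually sampled
# last frame, and can evict tiles that are resident at a much finer mip than
# anything the GPU touched. The feedback contents below are invented by hand.
feedback_map = {                    # (texture, tile) -> finest mip actually sampled
    ("rock_albedo", (0, 0)): 0,     # right in front of the camera
    ("rock_albedo", (3, 1)): 4,     # glancing angle / mid distance
    ("rock_albedo", (7, 7)): 9,     # barely visible
}
resident_mips = {                   # (texture, tile) -> finest mip currently in RAM
    ("rock_albedo", (0, 0)): 2,     # too coarse for what was sampled
    ("rock_albedo", (7, 7)): 1,     # far finer than the mip 9 actually sampled
}

def update_residency(feedback, resident):
    """Stream in tiles sampled finer than what's resident; evict tiles resident
    at a much finer mip than anything sampled."""
    load_requests, evictions = [], []
    for key, needed_mip in feedback.items():
        have = resident.get(key)
        if have is None or have > needed_mip:   # absent or too coarse: stream in
            load_requests.append((key, needed_mip))
        elif have < needed_mip - 1:             # way finer than needed: give it back
            evictions.append(key)
    return load_requests, evictions

loads, drops = update_residency(feedback_map, resident_mips)
print("stream in:", loads)
print("evict    :", drops)
```

The claimed win is that decisions are driven by what was actually sampled rather than by a conservative heuristic, so fewer tiles sit resident "just in case".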
 
This is the only part that Sampler Feedback addresses. Microsoft's claim is that, with feedback from texture sampling, your decisions about which mipmap level to load improve enough to cut the memory and bandwidth impact of streaming by 2-3X. The rest of what you're talking about isn't really what SF is trying to solve.
RAGE has its own feedback buffer for mipmaps.
 
The difference between RDR2 on Pro vs One X was very noticeable to me. I know that's not a perfect comparison, as 4.2 vs 6 is more of a gap, percentage-wise, than 10.28 vs 12.1.

But I think it will be a case of a lot of people saying there is a difference vs a lot of people saying there isn't one, regardless of whether one truly exists.

It wasn't on my monitor, but maybe I need a much larger screen to appreciate the differences. It's why I game at 1440P instead of 4K on my 4K monitor.
 
Why would they claw back power from one when the other is being taxed? I mean, what if games are heavily CPU and GPU intensive? An example from this gen was Assassin's Creed Unity (I think it was Unity), which heavily taxed both GPU and CPU. Would games suffer from power being cut from the GPU, sacrificing itself for the CPU, or vice versa?

Because Sony's power budget is fixed.
 

rntongo

Banned
That's not the purpose of AMD SmartShift. They can already hit those clocks without SmartShift. What SmartShift does is, if the GPU is being taxed and requires more power, shift some power from the CPU to the GPU.

[Image: sony1.png]

[Image: AMD-SmartShift-explainer.jpg]


I also believe that Digital Foundry released some information stating that both the CPU and the GPU can run at their maximum clocks simultaneously as long as the fixed power budget is being met.

Look at the third bar! You realize your diagram shows a fixed power budget and variable processor performance? You realize that in order to hit the GPU's max performance (2.23GHz) you have to draw power and performance from the CPU?
 

jimbojim

Banned
Care to correct me then?
I mean did I make any mistake with these calculations?
Are they wrong?
Sure, the compression rates are assumptions; I already said that.


Just so you know, based on my example:
7.87 = 100%
5.99 = 76%
Delta = 24%, aka the XSX is 24% slower than the PS5.

Please note that while the XSX has nearly 6.0GB/s of compressed data in this case, the PS5 has only 7.9GB/s, which is still not what Cerny said.
I mean, this is simple math; you can probably tell me if I've made any mistakes.

Are you comparing textures only?
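Laying out the arithmetic from the quoted post so the assumptions are explicit; the raw rates are the announced figures, while the effective rates (and therefore the implied compression ratios) are the poster's assumptions, not anything Sony or Microsoft have confirmed:

```python
# Raw SSD rates are the announced figures; the effective rates are the
# assumptions made in the quoted post, so the compression ratios are implied.
ps5_raw, xsx_raw = 5.5, 2.4      # GB/s, announced raw throughput
ps5_eff, xsx_eff = 7.87, 5.99    # GB/s, the quoted post's assumed effective rates

print(f"implied compression: PS5 ~{ps5_eff / ps5_raw:.2f}x, XSX ~{xsx_eff / xsx_raw:.2f}x")
print(f"XSX effective rate is {xsx_eff / ps5_eff:.0%} of PS5's, "
      f"i.e. {1 - xsx_eff / ps5_eff:.0%} slower")
```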
 

rntongo

Banned
How so? Are they using variable clocks?

In order to maintain the high GPU clockspeed Sony used a fixed power budget per workload. So in order to hit 2.23GHz, power is drawn from the CPU and a reduction in the CPU clockspeed occurs.

So fixed power budget per workload with variable processor clocks.
 

phil_t98

#SonyToo
Look at the third bar! You realize your diagram shows a fixed power budget and variable processor performance? You realize that in order to hit the GPU's max performance (2.23GHz) you have to draw power and performance from the CPU?
So does that mean that if it's a heavy CPU environment the GPU will be held back, or vice versa?
 

phil_t98

#SonyToo
In order to maintain the high GPU clockspeed Sony used a fixed power budget per workload. So in order to hit 2.23GHz, power is drawn from the CPU and a reduction in the CPU clockspeed occurs.

So fixed power budget per workload with variable processor clocks.
How does that not affect games then?
 

jimbojim

Banned
To be very honest, the only numbers we should be going by are 8-9GB/s for the PS5 and 4.8GB/s for the XSX. The 22GB/s from Cerny was a theoretical max; he said up to 22GB/s. How often it happens, we'll have to wait and see. On the other hand, the over 6GB/s for the XSX that Andrew Goossen mentioned, he'll have to explain further as well.

He won't explain anything further because it won't and can't go over 7. It's at 6, maybe 6.3; otherwise he would have mentioned it going over 7 in the Eurogamer interview if that were the case. Explaining it further and bombastically announcing it can go over 7 is funny as shit. And bad PR. Fuck it, let's go public and say it can go 1GB/s more. :D
 

Three

Member
Thank you for humoring me with my questions. I think I see now where you and I diverge in our thinking...



I think this just boils down to an opinion, and we'll just have to see as time goes on if Sampler Feedback will have a big impact or not.



I think you might be confusing "megatextures" with texture mipmap streaming. They are related technologies that I believe both make use of PRT hardware, but they have somewhat different goals. One is designed to cover the whole of your terrain with a massive texture; the other is to save memory by evicting unneeded MIP levels for all your textures from memory.

The "megatextures" concept I think had some minor success outside of idtech engines but not much. You're right in that it is not that common. On the other hand, texture streaming I think is near ubiquitous in modern game engines. Both Unreal and Unity make use of texture streaming, letting you pretty easily adjust the size of your streaming pool in the editor. So the idea that "few games make use of it" does not ring true.

Just quickly skimming their docs, Unreal's example shows that texture streaming saves about 50% of memory and Unity's example about 20-30%. Also, the video on Sampler Feedback from earlier in this thread sounds a lot like SF is targeted at enhancing texture streaming (referred to as a "typical thing to do" in the video).

This difference is very clarifying for our disagreement. You think that PRT and texture streaming are broadly available but seldom used. I believe they are frequently used (and I'm pretty sure that's correct). So you and I both see "2-3X better than an Xbox One X game" and you think "...without texture streaming" while I think "...with texture streaming".

I guess we'll find out who is right! I think if either one of us is right, it means that SF will enable one or both consoles to have 2-3X the bandwidth/memory efficiency that we typically see today. Pretty cool stuff!

Our disagreement is whether this is a hardware-specific feature that allows a 2x multiplier when compared to a game that does not use PRT and efficient streaming but is still tailored to an HDD.

I mentioned this is just fancy streaming, PRT+. There are certainly things that become more common or plausible in next-gen engines with faster storage, like being able to switch textures much more quickly when the player turns around, and therefore not needing as many unused textures in memory, without stalling the GPU or killing the framerate. This itself increases the efficiency of memory use.

Where I disagree, though, is the idea that this is actually some hardware-specific secret sauce. The reason what is described in SFS is seldom used, or offers less efficiency, in Xbox One games is the storage, and hence the current engines being used. You would still need the textures not seen in the scene, because the engine would perform poorly streaming them in when needed.

If they tried to offer the same efficiency by streaming more textures in and out of memory, the GPU would stall waiting for the streamed assets, not because it would be impossible to do due to a lack of GPU secret sauce.

So let me ask my questions to you now
1) What hardware feature is required for a GPU to support SFS or 'PRT+' ? What does the GPU do differently for a more than double boost in performance?
2) How would this hardware feature not show on new hardware but be on 2018 cards?

As for megatextures, its goal wasn't just to "cover the terrain with a massive texture". Its aim was the same: loading into memory only the parts of a texture you need, through streaming.


Virtual Textures became big.
 

rntongo

Banned
So does that mean that if it's a heavy CPU environment the GPU will be held back, or vice versa?
Cerny used unclear words, but what I got from it is that the CPU maintains 3.5GHz, while he didn't pin down the GPU clock speed. When a game requires the GPU to hit 2.23GHz, power is drawn from the CPU to the GPU. The APU has power profiles, so it will know when it needs to shift power depending on the workload. But it seems for most games they will be "near" 2.23GHz, and for AAA games like GOW they'll hit 2.23GHz on the GPU.
 
The reason what is described in SFS is seldom used, or offers less efficiency, in Xbox One games is the storage, and hence the current engines being used.

As I pointed out, texture streaming is not seldom used. It is common. Just google “[engine name] texture streaming”. The streaming is the same as what you’re talking about - moving mip levels or tiles in and out of memory.

So let me ask my questions to you now
1) What hardware feature is required for a GPU to support SFS or 'PRT+' ? What does the GPU do differently.
2) How would this hardware feature not show on new hardware but be on 2018 cards?

1) Texture sampling hardware was extended to allow writing back what was sampled. Previously texture sampling hardware could not do this. This was clearly stated in the video I linked.

2) Not sure what “not show on new hardware” means. Do you mean why it might not be on the PS5? Because it’s a brand new feature for AMD and it’s not out of the question that it isn’t in their RDNA implementation (And other stated reasons).

As for megatextures, its goal wasn't just to "cover the terrain with a massive texture". Its aim was the same: loading into memory only the parts of a texture you need, through streaming.

Yes, in order to cover terrain with a massive texture:


”The MegaTexture technology tackled this issue by introducing a means to create expansive outdoor scenes. By painting a single massive texture (32,768×32,768 pixels, though it has been extended to larger dimensions in recent versions of the MegaTexture technology) covering the entire polygon map and highly detailed terrain, the desired effects can be achieved.”
 

jimbojim

Banned
Look at the third bar! You realize your diagram shows a fixed power budget and variable processor performance? You realize that in order to hit the GPU's max performance (2.23GHz) you have to draw power and performance from the CPU?
In order to maintain the high GPU clockspeed Sony used a fixed power budget per workload. So in order to hit 2.23GHz, power is drawn from the CPU and a reduction in the CPU clockspeed occurs.

So fixed power budget per workload with variable processor clocks.


Cerny used unclear words, but what I got from it is that the CPU maintains 3.5GHz, while he didn't pin down the GPU clock speed. When a game requires the GPU to hit 2.23GHz, power is drawn from the CPU to the GPU. The APU has power profiles, so it will know when it needs to shift power depending on the workload. But it seems for most games they will be "near" 2.23GHz, and for AAA games like GOW they'll hit 2.23GHz on the GPU.

Maybe you should watch Cerny's presentation again or read his interview with Eurogamer?

"There's enough power that both CPU and GPU can potentially run at their limits of 3.5GHz and 2.23GHz, it isn't the case that the developer has to choose to run one of them slower."

Put simply, with race to idle out of the equation and both CPU and GPU fully used, the boost clock system should still see both components running near to or at peak frequency most of the time.


Devs know that if they keep the workload optimized and within the power limit, the system won't downclock.
 
No, I expect them to be on disk. For the caller of the texture, they read through RAM, then pull from disk (they do not know whether RAM already contains the data). In the worst-case scenario you need to pull in all pages from disk, and the way an engine will typically handle this is by mapping texture indexes to pages on disk. The page size is usually tuned to limit excessive overfetching, and maps/atlases can be optimized so that textures commonly rendered together are stored in the same page. There can still be overfetching where we draw a partial page at, say, an edge, but this is nowhere near a 50-75% miss if your tech / pre-optimization is working correctly. SFS will reduce the need for an engine developer to care about this stuff, but it's not going to magically improve performance in all cases. Remember, a lot of pages are in RAM anyway on the next frame update. You always still need RAM though! PS5 can write directly to VRAM, vs a PC bouncing through system RAM first. Remember the XSX has VRAM all the way (well, except for a small bit...) but needs to engage the CPU more vs dedicated hardware.

The speculation is that both systems through APIs and HW can have the GPU address the SSD independently.

Apparently this is the basis of the 100GB instant partition. I have seen speculation that it may be SSG-related, or a full-on software-defined flash implementation where there is an independent PCIe bus fit for purpose.

Supposedly this is an SSD GPU cache request and bypasses even VRAM.

I don't know enough about either technology, but I have read people surmise about those implementations.

Sony has been a bit more forthcoming on their I/O solutions and architecture in this respect by outlining, in picture form, the I/O subsystem capabilities.

I love all of this talk.
 

Panajev2001a

GAF's Pleasant Genius
Microsoft's claim is that, with feedback from texture sampling, your decisions about which mipmap level to load improve enough to cut the memory and bandwidth impact of streaming by 2-3X

No, I do not think they are claiming that, but maybe they are trying to imply it.
I do not think they are claiming a 2-3x general improvement over titles implementing texture streaming with PRT and/or custom solutions, that would be gigantic and you would hear devs chanting in the streets ;).
 
No, I do not think they are claiming that, but maybe they are trying to imply it.
I do not think they are claiming a 2-3x general improvement over titles implementing texture streaming with PRT and/or custom solutions, that would be gigantic and you would hear devs chanting in the streets ;).

Honestly this is the only reasonable argument against this being their claim with the information we have. I’m still inclined to think this is the case. We’ll find out soon enough!
 
Maybe you should watch Cerny's presentation again or read his interview with Eurogamer?






Devs know that if they keep the workload optimized and within the power limit, the system won't downclock.

That's the interview that I was making a reference to. Hopefully people learn from this.
 
I guess it's from both sides. Because any time someone brings up SFS as a talking point for the Xbox, people say that the same thing has been done since 2013 and that it will thus give zero advantage. Which is really absurd. Microsoft is not so stupid as to advertise 2013 tech as a new feature.
They are not stupid; they know people like you will defend that number on the internet.
 

rntongo

Banned
Maybe you should watch Cerny's presentation again or read his interview with Eurogamer?






Devs know that if they keep the workload optimized and within the power limit, the system won't downclock.


These statements were filled with qualifiers such as potentially and near that obfuscate the actual processor speeds.

"enough power that both CPU and GPU can potentially run at their limits of 3.5GHz and 2.23GHz"

"the boost clock system should still see both components running near to or at peak frequency most of the time."

In any case Rich from DigitalFoundry/Eurogamer had this to say in the video accompanying the eurogamer article:



On the devkits, which are using fixed power profiles:
"More than one dev has told us they are running the CPU throttled back, allowing for excess power to the GPU in order to ensure a consistently locked 2.23GHz"; you can hear it all at minute 4:35-4:50.
He does clarify later on that the CPU is still powerful enough, although with this it's clear that you cannot realistically have both running at their max clocks.

Cerny told Rich that the actual PS5 will have different power profiles, i.e. variable ones, in order to do the above more efficiently. It's at 5:43-6:04.

So the PS5 has variable clock speeds that will hit 2.23GHz based on the workload. Devs will most likely prefer to get close to 2.23GHz and throttle the CPU performance. If they could both hit max clock speeds, you would only need to reduce the amount of power to one processor, not divert it to another.

Notice Cerny doesn't mention the percentage drop in clock speeds?

"Cerny also stresses that power consumption and clock speeds don't have a linear relationship. Dropping frequency by 10 per cent reduces power consumption by around 27 per cent. 'In general, a 10 per cent power reduction is just a few per cent reduction in frequency.'"

What is the percent reduction in frequency?
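You can sanity-check those numbers with the usual crude model: dynamic power scales roughly with frequency times voltage squared, and voltage tends to track frequency, so power goes roughly with the cube of frequency. That's a textbook approximation, not Sony's actual curve, but it lands close to Cerny's figures:

```python
# Crude approximation: dynamic power ~ f * V^2 with V tracking f, so P ~ f^3.
def power_factor(freq_factor):
    return freq_factor ** 3

print(f"-10% frequency -> {1 - power_factor(0.90):.0%} less power")  # ~27%, matching Cerny
print(f"-10% power     -> {1 - 0.90 ** (1 / 3):.1%} less frequency") # only a few per cent
```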
 
"More than one dev has told us they are running the CPU throttled back allowing for excess power to the CPU in order to ensure a consistently locked 2.23GHz" you can hear it all at minute 4:35-4:50. He does clarify later on that the CPU is still powerful enough although with this it's clear that you cannot have both realistically running at their max clocks

Yeah, this is the quote that had me a little "skeptical," for lack of a better word.

I am sure that the CPU will always be running at a relatively high clock, but I have to say that, ideally, you'd probably want both your CPU and GPU to be able to sustain max clocks.
 
It’s just a basic ad hominem attack. It’s used to try and discredit an argument by vilifying the one who makes it. I say this person is bad, so their argument is bad, so if you have the same argument you must be bad.

Lol yeah, that's basically what it seems like at this point. They're arguing over semantics, basically. I've got no problem with people wanting me to clarify certain statements to avoid confusion about what's directly said, or about any intention behind it that they might perceive negatively, but at least pull up the quotes in question xD.

I can't speak on anything if I have no idea what they're specifically talking about, it's impossible!

And you're still not able to understand the point of my post.




Want to point me where I said it was your responsibility?


Right, I never said it.



It has nothing to do with my post.

I'm not going to repeat myself. Either you're trolling or your reading is just that bad. lol

Go over my post again before replying to me.

Like I said dude, you do you. But I don't think there's anything on this particular topic needing further discussion. Let's just agree to move forward with the discourse of the thread from here on.

Yeah, this is the quote that had me a little "skeptical," for lack of a better word.

I am sure that the CPU will always be running at a relatively high clock, but I have to say that, ideally, you'd probably want both your CPU and GPU to be able to sustain max clocks.

This is where the variable frequency approach, with SmartShift and the cooling system, hopefully comes into play. Game workloads probably won't always require the CPU at maximum speed (same with the GPU), but with those components performing as they should, it means the CPU and GPU can hit those peaks if a game workload demands it, for however long it demands it.

And, I'm assuming, in any case where a game workload either has unoptimized code, or clocks are being sustained for a prolonged period where the power budget gets too stressed, the system's automation processes will either shift some of the power budget around to components that need some extra juice, or lower some of the power load to a particularly power-hungry component (like the GPU).

...or something like that.

It will not pass 2.23GHz, regardless of the cooling used.
Cerny himself said that in the presentation.

This is likely to be the new maximum clock for RDNA 2

With that, it's not very difficult to have a hunch about the top RDNA 2 part; here it goes:

~52 CUs @ ~2.2GHz, ~15-16TF RDNA 2

Above that, only if Nvidia is doing very well with Ampere.

I"m honestly still curious if PS5's GPU clock is reflective of saturating the top end of RDNA2's sweetspot range, or if it's a result of pushing beyond that with their custom cooling system combined with their specific implementation of Smartshift.

IIRC, the sweetspot for RDNA1 was 1700 MHz to 1800 MHz. RDNA2 has a 50% PPW improvement over RDNA1, but I doubt that translates to 50% IPC. I'd wager it's closer to 10% to 20% more IPC over RDNA1 (perhaps 25% if sticking with rate of RDNA1 IPC gains over GCN).

Still trying to figure where the new sweetspot would fall at based off of that; however I don't feel the GPU clock on PS5 is within the sweetspot range but rather it's beyond it. That's partially based on XSX's GPU clock, which I assume is on the lower end of the new sweetspot since it's a larger chip. So it might be possible the new sweetspot on DUV enhanced is 1800 MHz to 2000 MHz.
 

Dodkrake

Banned
Please provide proof of your statement
PS fanboys made a big deal over a .5 TFLOPS difference last gen, but for whatever reason a 2 TFLOP advantage is "insignificant". Yet that .5 TFLOP difference manifested itself in framerate and resolution. So I suspect that a bigger gap this gen will show at least a similar difference. I think it will be greater, because I'm not buying Sony's clock speed claims. If the damn thing spent "most of its time" at 2.23 GHz, then just say that's the clock speed and call it a day. PC manufacturers do that all the time. But I digress, back to storage...

There's nothing disingenuous about my post. The real world difference between Xbox Series X and PS5 storage solutions will be measured in milliseconds and not seconds. The end user won't notice the difference in game. There's nothing in that UE5 demo that couldn't be done in the Xbox Velocity Architecture. You're believing in a fantasy world where only Sony listened to developers and put significant investment in their storage solution while MS went to Best Buy and just slapped a random SSD in the box and called it a day.

Both companies focused significantly on storage and asset streaming. However, Microsoft didn't skimp on GPU to get there.

1. That .5 difference was ~40%; the current 1.8 is ~15%, if math serves me right (quick check below).

2. More FUD. This has been explained tons of times by people who have way more insight into the tech behind the clock speed.

3. If you're talking about current game design, I wholeheartedly agree. But if you start leveraging the SSD for new ways of designing your games, you will absolutely see a difference (in first-party games).

4. Nobody skimped on the GPU. The sooner Xbox knuckleheads understand that the two consoles were designed differently, the sooner they'll realize that both will perform similarly in third-party games.
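Quick check on point 1, using the usual figures; which box you divide by is what moves the number between ~15% and ~18%:

```python
# Last-gen gap vs this-gen gap, with the commonly quoted TFLOPS figures.
ps4, xb1 = 1.84, 1.31            # post-upclock Xbox One figure
ps5, xsx = 10.28, 12.15

print(f"Last gen: {ps4 - xb1:.2f} TF gap = {ps4 / xb1 - 1:.0%} PS4 advantage")
print(f"This gen: {xsx - ps5:.2f} TF gap = {xsx / ps5 - 1:.0%} XSX advantage "
      f"(or {1 - ps5 / xsx:.0%} measured against the XSX number)")
```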
 

rntongo

Banned
Yeah, this is the quote that had me a little "skeptical," for lack of a better word.

I am sure that the CPU will always be running at a relatively high clock, but I have to say that, ideally, you'd probably want both your CPU and GPU to be able to sustain max clocks.

Yes, I think the default is for the CPU to run at 3.5GHz, and when the GPU needs to hit 2.23GHz, SmartShift kicks in. So basically it really is a 10.28 TFLOP machine, but at the expense of a consistent 3.5GHz CPU clock.
 