DynamiteCop!
"And yet the pro still runs better"
You're not exactly grasping the concept here, bud.
"You're not exactly grasping the concept here, bud."
Neither are you. It's a fact that the Pro version has a higher framerate than the X, which is the more powerful system. Most people care more about frames than resolution, which is why devs and companies like MS and Sony should offer a choice between resolution and performance (framerate) going forward.
"You know what is the most annoying thing from all those threads... basically since 2013... every Sony fanboy choking on Cerny's di&k and thinking he is god. Is it really that far-fetched that another engineer or group of engineers came up with a better design?"
To them it's impossible, and they think they can wordsmith their way out of it. It's insanity.
"Ah, more funny numbers that don't take into account situational performance."
It did, actually: 17-21% -> 10-10.27 TF. Cerny did state it will remain at peak frequency, or close to it, most of the time.
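For reference, the percentages being argued about fall out of simple arithmetic on the publicly reported specs (36 CUs at up to 2.23GHz for PS5, 52 CUs at a fixed 1.825GHz for Series X); a quick sketch, with the PS5's actual clock floor unknown and the lower bound purely illustrative:

```python
# FP32 TFLOPS = CUs * 64 shaders/CU * 2 FLOPs/clock * clock (GHz) / 1000
def tflops(cus: int, ghz: float) -> float:
    return cus * 64 * 2 * ghz / 1000

ps5_peak = tflops(36, 2.23)    # ~10.28 TF at the 2.23 GHz boost ceiling
xsx      = tflops(52, 1.825)   # ~12.15 TF at a fixed clock

gap = xsx / ps5_peak - 1       # ~18% with the PS5 at its peak clock;
                               # any downclock widens it (10.0 TF -> ~21%)
```

So the 17-21% range corresponds to the PS5 sitting somewhere between roughly 10.0 and 10.28 TF.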
*Checks username*
Spreewaldgurke, some choice quotes you can use for the OP summary:
SSD
- There's low level and high level access - it's the new I/O API that allows developers to tap into the extreme speed of the new hardware.
- The concept of filenames and paths is gone in favour of an ID-based system which tells the system exactly where to find the data they need as quickly as possible.
- With latency of just a few milliseconds, data can be requested and delivered within the processing time of a single frame, or at worst for the next frame. 'Just in time' approach to data delivery with less caching (RAM usage).
- Developers can easily tap into the speed of the SSD without requiring bespoke code to get the best out of the solid-state solution.
- A significant silicon investment in the flash controller ensures top performance: the developer simply needs to use the new API. Technology should deliver instant benefits, and won't require extensive developer buy-in to utilise it.
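A rough illustration of the ID-based idea described above: instead of resolving a filename and path, the runtime maps a numeric asset ID straight to a location on the SSD. All names and numbers below are hypothetical; this is not the actual console API, just a sketch of what replacing path lookups with a direct ID-to-location table looks like:

```python
# Hypothetical sketch: instead of parsing "assets/textures/wall_01.dds",
# the runtime maps a numeric asset ID directly to where the bytes live
# on the SSD, so a read can be issued with no filesystem traversal.

ASSET_TABLE: dict[int, tuple[int, int]] = {
    0x0001: (0,     65536),    # asset ID -> (byte offset, length)
    0x0002: (65536, 131072),
}

def locate_asset(asset_id: int) -> tuple[int, int]:
    """One table lookup replaces path resolution; the (offset, length)
    pair can go straight to the flash controller's request queue."""
    return ASSET_TABLE[asset_id]
```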
"I can picture the next GTA using a 3D scan of a whole city with a dataset too big to fit into 16GB of RAM, but because PS5 & Xbox SX will have SSDs they can stream it."
Indeed, remember the crazy streaming techniques GTA5 used on PS360, streaming simultaneously from disc and HDD to maximize bandwidth.
God, I wish I could skip right to the fall. It's going to be glorious. So many mouths writing checks their respective asses can't cover.
"I don't think you understood what it wrote in that quote. Basically the CPU is not being used, so it stays at a lower frequency..."
That's what he said. The CPU is not being used as much, by design, in order to keep the GPU at top speed.
The next R* next-gen-only game will be insane; GTA6 might be cross-gen :/
"That's what he said. The CPU is not being used as much, by design, in order to keep the GPU at top speed."
By choice.
PC boost clocks on the 5700 XT and similar aren't just temperature-based either... power delivery and compute load are also factors... so it looks like they've removed the temperature factor, based on the chips and cooling solution having a consistent profile.
At the right voltage, my 5700 XT clocks never vary more than 60 MHz or so from game to game, and are always between 1980MHz and about 2040MHz.
If they’ve tuned it properly and know the profile of the cooling and APU as well as Cerny says they do they won’t have any trouble being consistent.
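That consistency is easier to picture as a toy model in which frequency is a pure function of the measured workload against a fixed power budget, rather than of temperature: the same scene then always produces the same clock on every unit. Everything below (wattages, the linear power model, the clock floor) is made up for illustration and is not Sony's actual algorithm:

```python
# Toy model of activity-based (not temperature-based) frequency selection.
# All constants are invented for illustration only.
POWER_BUDGET_W = 200.0   # fixed per-console power budget
F_MAX_GHZ = 2.23         # boost ceiling
F_MIN_GHZ = 2.0          # illustrative clock floor

def gpu_clock(activity: float) -> float:
    """activity: 0.0 (idle) .. 1.0 (pathological worst-case load).
    Deterministic: the clock depends only on the workload, never on
    which unit is running it or how hot the room is."""
    # Assume power at max clock grows linearly with activity (toy model).
    power_at_fmax = 150.0 + 100.0 * activity
    if power_at_fmax <= POWER_BUDGET_W:
        return F_MAX_GHZ
    # Scale the clock down just enough to fit the budget.
    return max(F_MIN_GHZ, F_MAX_GHZ * POWER_BUDGET_W / power_at_fmax)
```

Under a model like this, typical workloads sit at the ceiling and only pathological loads pull the clock down, which is roughly what Cerny described.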
"NXGamer estimates that the PS5 will vary by 50 MHz or so. He said this weeks ago, so your findings and opinion are SPOT ON!"
Who cares what he estimates? No one has the knowledge to even make that assertion. The frequency swing is completely unknown...
"By choice."
That's what "by design" implies.
The dev chose a debug/profiling profile that lowers the CPU specs.
Dev kits don't allow automatic clocks.
Really wondering what will happen on PS5 in a similar scenario.
Also, will games' hunger for high main-thread IPC stop at some point?
Who cares what he estimates? No one has the knowledge to even make that assertion. The frequency swing is completely unknown...
It's getting really old seeing everyone run to an appeal to authority about the completely unknown, when in reality these people are not an authority of any kind.
That "18%" (minimum btw) translates to a PlayStation 4 and a half of GPU compute on top of the PlayStation 5's GPU.
It's not just teraflops, Microsoft's GPU goes places which the PlayStation 5's cannot follow.
First off, my assertions are completely accurate, and 18% is the minimum because the PlayStation 5 clocks downward. You charlatans seem to really grasp at that 10.28 figure like it's the result of a fixed frequency. You just run with it like it's business as usual, like you're talking to people who don't understand that's a peak boost clock which goes down.
Secondly, the Series X will undoubtedly have more ROPs; the likely scenario is 72 or 80 vs. 64. Microsoft is running a considerably larger GPU at conservative speeds; a comparable retail unit would undoubtedly be running around 2GHz and sit in the upper range of the stack. 72 to 80 ROPs makes much more sense for their system than 64 does. Sony's configuration is a smaller GPU that would also be in the neighborhood of 2GHz but falls more into the middle, and given the workload capability, 64 would be the peak to avoid bottlenecking on output.
More TMUs is also a given.
"Who cares what he estimates? No one has the knowledge to even make that assertion. The frequency swing is completely unknown."
One question: why are you in this thread? You clearly have an agenda, and it gets old...
"nope"
It all kinda seems like double speak back and forth.
"Developers don't need to optimise in any way; if necessary, the frequency will adjust to whatever actions the CPU and GPU are performing... Right now, it's still difficult to get a grip on boost and the extent to which clocks may vary."
"Several developers speaking to Digital Foundry have stated that their current PS5 work sees them throttling back the CPU in order to ensure a sustained 2.23GHz clock on the graphics core."
CPU load is pretty much always less than 40%. Only for a few seconds did it reach 61%, and ironically the GPU load then dropped to 91%. What was most interesting is that this game is using 10 gigs of VRAM and 13 gigs of main RAM. This just indicates that the RAM upgrade this gen is pretty disappointing.
"It all kinda seems like double speak back and forth."
Devs don't need to optimize for the variable frequencies to work, and that is true.
It all kinda seems like double speak back and forth.
On one hand, they say developers don’t need to optimize, but then in the same set of comments they say they expect devs to find new ways to optimize their code.
On one hand they say there’s enough power to run both at max, then on the other hand they say they have different power profiles for devs to use depending on their power needs.
It all kinda seems like purposeful obfuscation, instead of giving straight answers.
The one real takeaway for me was the CU count comparison: a really good piece of investigating there by DF.
While the higher clocks can produce equal performance on fewer compute units, the ceiling seems rather low for the trade-offs involved. 36 CUs compared to 52 in the Series X is now a major point of interest for me. Parallelism is a major focus for modern engines; the PS5 potentially has a real, tangible disadvantage in this area.
"Devs don't need to optimize for the variable frequencies to work, and that is true. But if you optimize you can control the workload and so the variation in frequency... that is true too. The profiles are only for test, debug and profiling. Retail machines won't have fixed profiles."
I mean, I guess you could say devs don't "need" to do anything.
"I mean, I guess you could say devs don't 'need' to do anything. But in reality it seems devs will have to optimize their code in very specific ways to get the most out of the PS5."
Well, that is true for any hardware, even the actual PS4.
It is a biased test.
The RX 5700 doesn't run at 2100MHz, so the performance shown in the video is with the GPU throttling.
"Well, that is true for any hardware, even the actual PS4."
Yes and no. This is a bit more specific to the PS5 and something devs won't have to consider on other platforms. Kinda like X1 ESRAM.
"In the DF video didn't they get vastly diminishing returns on FPS as the clock went higher and higher?"
Yeap.
"Yes and no. This is a bit more specific to the PS5 and something devs won't have to consider on other platforms. Kinda like X1 ESRAM."
IMO, optimization of the workload is already done in all games for any platform... so it is basically the same.
Actually, SmartShift is a nice piece of technology; the problem is that Sony presented it in the worst way possible. It is useful to squeeze every little bit of power out of your chip if you already have a comfortable thermal/power budget, something I'm not so sure Sony has right now.
I firmly believe that the console was planned for 2019, and then RDNA2 wasn't ready and neither was Sony, so they postponed to 2020. It will for sure be a nice console and first parties will be able to make some awesome games on it, but from a design and technological standpoint I don't like it; it doesn't seem to me like a balanced system, and Sony still has to prove they know how to make a silent gaming box. Being a sucker for gaming I will buy both, but the more I hear, the more I think Sony changed their plans mid-year.
"This is exactly what I was thinking: in open-world games you can turn a calm scene into a giant mess just by playing the damn game and creating disasters. Just look at how RDR2 slows down during scenes with fire, explosions, or a shitload of NPCs on screen fighting."
Exactly. I really hope they know what they are doing with this PS5, but day after day it seems to me more and more that it can easily be a disaster in the end. And I really hope I'm not right, because I want the best possible PS5. If that means delaying and redesigning the thing, then so be it, but unfortunately, after announcing all this, there is almost no chance of that.
"In the DF video didn't they get vastly diminishing returns on FPS as the clock went higher and higher?"
It isn't an appropriate comparison, and it's a bit misleading to present it without a disclaimer on the potential variables that make such a comparison futile.
I see. Shame; the worst-case scenario always has to be taken into account, hindering what could be possible if the machine worked like the previous PlayStations. I don't like this; it's similar to the concept of Lockhart hindering next-gen, which I was also bitching about. It's clear now that this was designed as a less powerful machine. I'd rather they had left the GPU at 9.2 TF and had the machine work similarly to the PS4. I suppose this will be the first machine I buy anyway, but I'm quite disappointed. Thanks for clarifying.
"You clearly do not understand what people are writing to you. The PS5 is not hindering anything of what's possible on the console. You just aren't understanding how it works. If the GPU was always a constant 9.2 TF, it would be a weaker console. How can you not understand this? The design is more exotic and different, but that doesn't mean it's worse than if it was the GitHub GPU. The only thing you need to worry about is this: did Cerny create a good enough cooling system for the PS5? If he did, then the PS5 will blow you away more than any other PlayStation ever."
It's the best PlayStation ever made!
It seems that you are the one who doesn't understand what I wrote. Of course it would be weaker, but they could design their games better if they always knew their budget. With what we have now, they could squeeze in more graphics at max GPU usage, but then that might mean the CPU can't keep up and the framerate tanks. Please read again and try to understand what I say before accusing me of not understanding something.
With a 17-21% gap in GPU performance and roughly the same amount of bandwidth in proportion to their GPUs' computational power, I'm curious what makes you think Sony will have to resort to parity policies?
Games will just use dynamic resolution; at worst the PS5 CPU throttles to 3GHz, but I can't see that being an issue for cross-gen games designed around Jaguar cores, as Richard pointed out.
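For what it's worth, the bandwidth-proportionality point roughly checks out on the public figures (PS5 ~448 GB/s unified; Series X ~560 GB/s on its fast 10 GB pool, less on the remaining 6 GB); a back-of-envelope sketch:

```python
# Public figures: PS5 ~448 GB/s unified; Series X ~560 GB/s on the fast
# 10 GB pool. TFLOPS: ~10.28 (PS5 peak) vs ~12.15 (Series X).
ps5_bw_per_tf = 448 / 10.28    # ~43.6 GB/s of bandwidth per TF
xsx_bw_per_tf = 560 / 12.15    # ~46.1 GB/s of bandwidth per TF
```

So each console feeds its GPU at a similar bandwidth-to-compute ratio, at least while the Series X is working out of its fast pool.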
People can try to sugarcoat the bad design all day long, but what they should do is give Sony lots of shit so they can still make changes by redesigning their box.
"The CPU frequency probably doesn't even have to go down that much. I'd be surprised if it goes below 3.2GHz."
Agree; I was just making up a hypothetical worst-case scenario where a launch game didn't bother to optimize for power consumption. Even then it wouldn't be an issue, because as Richard said, cross-gen games will hardly tax the CPU (unless they aim for 120fps), and also because the power control unit would automatically redirect power to the CPU when available.