
12.15 - 10.28 = 1.87 Teraflops difference between the XSX and PS5 (52 CUs vs. 36 CUs)

Lol, it's not enough that they have a 12.1 vs 10.3 scenario, Xbox fans have to spread even more FUD. Are you guys still that worried about the PS5? Cerny said that the system will be running constantly in boost mode, due in large part to their cooling system being able to handle that kind of heat. 10.3 TFLOPs isn't just a rare occurrence. That is what it will run at the vast majority of the time, and it will only downclock slightly if the CPU is demanding more.
Ah.... the boost mode koolaid. Drink!!
 
The Github leak was true.

PS5 was originally at 9.2 TFLOPs; they've increased the clock speed to the max, which gives 10.28 TFLOPs, and raised the CPU clock as well. When playing games on PS5 you won't be getting exactly 10.28 TFLOPs; it'll be somewhere in the 9.2 to 10.28 TFLOPs range.

Also, on PS5 you'll probably only be getting basic ray tracing since it relies on a higher CU count, while Xbox Series X hasn't boosted any clock speeds; it has 12 TFLOPs and 52 CUs. On Xbox Series X you can expect more advanced ray tracing.
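For reference, both headline numbers fall straight out of the usual shader-throughput formula: CUs x 64 lanes x 2 FLOPs per clock x frequency. A quick sketch in Python, assuming the commonly reported ~1.825GHz game clock for Series X (that clock isn't stated anywhere in this thread):

def peak_tflops(cus, clock_ghz):
    # peak FP32 = CUs x 64 shader lanes x 2 FLOPs per clock x clock (GHz) -> TFLOPs
    return cus * 64 * 2 * clock_ghz / 1000.0

ps5 = peak_tflops(36, 2.23)    # ~10.28 TF at the max boost clock
xsx = peak_tflops(52, 1.825)   # ~12.15 TF (1.825GHz is my assumption, not from this thread)
print(f"PS5 {ps5:.2f} TF, XSX {xsx:.2f} TF, difference {xsx - ps5:.2f} TF")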

I wouldn't expect anything special with RT on the Xbox either; the Minecraft demo kind of already shows this. Dropped down to 1080p and running between 30 and 60fps. In this case they are path tracing the entire scene instead of just some select effects, but still.

Quality ray tracing across the board is still 10 years away.
 

Shmunter

Member
It wasn't a joke though... You sincerely look like that type of guy. Your logic and rationale seem to be completely non-existent.
You seem to know a lot about those types of guys. Is it something you think about a lot? When you're alone, maybe?

By the way, don't address me again. I don't mean to imply anything, but you're obviously very young and I don't like being the mean bad man upsetting children.
 

DForce

NaughtyDog Defense Force
The problem is that it's not consistent, and there's no data right now that tells us what the average TF number will be in the real world for PS5. Xbox Series X is ALWAYS running at 12.155 TF without breaking a sweat, while PS5 can only achieve its max 10.28 at its peak frequency, which it has already been said won't be the case all the time. So the average TF number for PS5 might be 9.5 TF, while the average TF for XSX remains at 12.155 since it never changes.


When DF or others run the numbers, we could be looking at something like this:

PS5 average TF performance = 9.2 TF with an occasional spike going up to 10.2
XSX average TF performance = 12.155 consistently.


It's odd that people automatically turn into tech experts now that specs are out. lol.

It was clear in Cerny's presentation that 10.28TF is consistent whenever running a game. The frequency changes with the workload. He ONLY talks about a drop from 10.28TF during worst case scenarios, where the CPU and GPU will see a minor drop in performance.

Sony's customised version of the AMD RDNA 2 GPU features 36 compute units running at frequencies that are capped at 2.23GHz, effectively delivering 10.28TF of peak compute performance. However, again, while 2.23GHz is the limit and also the typical speed, it can drop lower based on the workloads being demanded of it. PS5 uses a boost clock then - and we'll explain that presently - but equally importantly, it's important to remember that performance from an RDNA compute unit far outstrips a PS4 or PS4 Pro equivalent, based on an older architecture.

He says nothing about occasional spikes. :messenger_grinning_sweat:

On the face of it, PlayStation 5 delivers a ton of power, but there does seem to be an extra onus on developers to optimise to these new characteristics. The question is, what happens when the processor does hit its power limit and components down-clock? In his presentation, Mark Cerny freely admits that CPU and GPU won't always be running at 3.5GHz and 2.23GHz respectively.

"When that worst case game arrives, it will run at a lower clock speed. But not too much lower, to reduce power by 10 per cent it only takes a couple of percent reduction in frequency, so I'd expect any downclocking to be pretty minor," he explains. "All things considered, the change to a variable frequency approach will show significant gains for PlayStation gamers."

The only drop from the target happens in a worst case scenario, and even then the frequency reduction needed to cut power by 10 percent is only a couple of percent.


So, from the looks of things, a consistent 10.28TF is possible with no issues.
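Cerny's "couple of per cent" line also roughly matches the usual rule of thumb that dynamic power scales with frequency x voltage squared, with voltage dropping alongside the clock. A rough sketch; the 4% voltage reduction below is my own illustrative assumption, not a Sony figure:

def relative_power(freq_scale, volt_scale):
    # dynamic power is roughly proportional to frequency x voltage^2 (rule of thumb)
    return freq_scale * volt_scale ** 2

# a 2% clock reduction that also allows roughly 4% lower voltage
p = relative_power(0.98, 0.96)
print(f"~{(1 - p) * 100:.0f}% less power for a ~2% lower clock")   # prints ~10%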
 
 

Knch

Member
What is the key factor behind implementing only 36 CUs? Why not use, like, 48, or whatever?

Cost? power consumption?
Better yield and thus lower cost. The larger the chip you're "stamping" out of a silicon wafer, the higher the chance of something going wrong, and the more stamped GPUs/CPUs/APUs you lose.
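The die-size point can be made concrete with the classic Poisson yield approximation, yield ≈ e^(−defect density x die area). The defect density and die areas below are illustrative guesses on my part, not real TSMC or console figures:

import math

def die_yield(defects_per_cm2, area_mm2):
    # Poisson approximation: probability that a die has zero killer defects
    return math.exp(-defects_per_cm2 * area_mm2 / 100.0)

d0 = 0.1  # illustrative defect density, defects per cm^2
print(f"~300 mm^2 die: {die_yield(d0, 300):.0%} good dies")   # ~74%
print(f"~360 mm^2 die: {die_yield(d0, 360):.0%} good dies")   # ~70%

Bigger dies also mean fewer candidates per wafer, so the cost penalty compounds.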
 

sinnergy

Member
Here's my take: the PS5 brings more to the next-gen table based on what's been presented.

The extra 2 TF on the XSX does not make up for the innovations and potential that the PS5 has for future gaming. The XSX is not much more than a nice graphics card and will have trouble keeping up with games that are built around the PS5 architecture, but not the other way around. There is also the matter of VR as a point of difference, which I won't go into.

Any PlayStation fans initially disappointed by the specs will soon realise that not only should they not be disappointed, but they should rejoice at what is undoubtedly a landmark console that will smash all records next gen and provide the best gaming experiences they have ever imagined. Analysts will soon catch on.

So to answer your question, I will not pay $399 for XSX, but I will easily pay $499 for a PS5.
For me it's the other way around: the so-called innovations won't make up for the power gap. 3D audio? Nice, but almost everyone is playing on shitty TV speakers. SSD? Everyone has had that for ages in the PC.
 

sinnergy

Member
I wouldn't expect anything special with RT on the Xbox either; the Minecraft demo kind of already shows this. Dropped down to 1080p and running between 30 and 60fps. In this case they are path tracing the entire scene instead of just some select effects, but still.

Quality ray tracing across the board is still 10 years away.
Yet it runs the same as a high-specced PC which costs at least five times as much 🤣
 

Neur4lN01s3

Neophyte
The funny thing is:

If Microsoft raised the clock "dynamically" to 2.23GHz, XSX would have ~14.8 TFLOPs.

Maybe that's the easy way to do an XSX Pro (with a sustained ~14.8, not dynamic) once the process node shrinks.
 

Jigsaah

Gold Member
4k is 4x 1080.

If you have a 1080p30 game running base ps4, you would need 7.5 teraflops to run it at 4k native and double that (15 teraflops) to make it 4k60.

Granted it's a newer and more efficient architecture for this generation but still, I doubt we'll see 4k60 anywhere soon. I think the realistic scenario will be "next gen" graphics at native 4k30 on the xbox and upscaled to 4k30 on the ps5. At the end of the day, tweaking the resolution is the easiest way for a dev to upscale/downscale the graphics to a different spec.
That's bullshit and you know it. Fortnite originally ran at 1080p 30fps when it launched. Through Xbox One X enhancements it now runs at 4K 60fps... on a 6 teraflop machine. There's more to this than just the TF count. Doing simple math leaves out the biggest factor for frame rate, the CPU, which is massively better than the Jaguar chips in the Xbox One and PS4.
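For what it's worth, the napkin math in the quoted post is just linear pixel/framerate scaling from the base PS4 (the ~1.84TF baseline below is my assumption), which is exactly the oversimplification the reply is pushing back on:

# naive scaling: required TF ~ pixels x framerate (ignores architecture, CPU, settings)
base_tf = 1.84                                 # base PS4, assumed figure
pixel_ratio = (3840 * 2160) / (1920 * 1080)    # = 4.0, hence "4K is 4x 1080p"

tf_4k30 = base_tf * pixel_ratio                # ~7.4 TF (the quoted "7.5")
tf_4k60 = tf_4k30 * 2                          # ~14.7 TF (the quoted "15")
print(f"naive estimate: 4K30 ~{tf_4k30:.1f} TF, 4K60 ~{tf_4k60:.1f} TF")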
 

martino

Member
Cerny's presentation felt like Orwell.
Better yield and thus lower cost. The larger the chip you're "stamping" out of a silicon wafer, the higher the chance of something going wrong, and the more stamped GPUs/CPUs/APUs you lose.

And the higher the frequency, the more chips you lose because they can't reach it.
 
The XSX didn't achieve 12 teraflops efficiently, since they needed 52 CUs to get a mere 1.87 teraflop difference.

Again, this thread is to start a conversation about whether the XSX's 52 CUs were an efficient way to achieve that additional 1.87 teraflop gain.

Running at higher frequencies results in lower efficiency. So more CUs at a lower clock rate is more efficient.
 

Trogdor1123

Gold Member
Can someone with knowledge of yields weigh in? Is a high performance die harder to do than a large die? Which will impact yields more?
 

lynux3

Member
2.23 GHz cannot be sustained. Why? Common sense tells me this.

If it could be sustained, then why isn't the base clock speed 2.23GHz? Why is there even a consideration to mention "up to"?

Why didn't he mention the base clock speed? I would love to know what percentage of the time it actually runs at 2.23GHz. I say less than 40% of the time, but I'll be guessing.
Indeed. You'll be guessing for the rest of this generation. Like Cerny said, he "expects" the clock to stay at 2.23GHz most of the time until SmartShift rears its head. There's nothing to suggest it won't consistently be there, and you've yet to provide evidence otherwise.

However, based on your posting history I don't expect you ever will.
 

Marlenus

Member
Can someone with knowledge of yields weigh in? Is a high performance die harder to do than a large die? Which will impact yields more?

No idea to be honest because we have no clue on the defect density of the node or the voltage/frequency curve of the architecture on the node.

Maybe yields for the Xbox Soc are higher because lower binned chips will still pass validation and maybe yields for the PS5 Soc will be higher because of fewer defects. Can't say at the moment.
 
Elaborate more?
Here is an example for the Ryzen 3700X. Power and heat are closely related, so you can see how heat dissipation rises dramatically once you pass the sweet spot (around 3.3GHz).

[Image: Ryzen 3700X power draw vs. clock frequency]
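Since the chart itself doesn't come through here, the shape it shows can be approximated with the same frequency x voltage^2 rule of thumb; the voltage values below are made up purely to illustrate how power climbs steeply once you're past the efficient part of the curve:

# illustrative V/f points for a Ryzen-like part (invented voltages, not measured data)
points = [(3.0, 0.95), (3.3, 1.00), (3.9, 1.20), (4.3, 1.40)]   # (GHz, volts)

for freq, volts in points:
    power = freq * volts ** 2   # relative dynamic power ~ f x V^2
    print(f"{freq:.1f} GHz @ {volts:.2f} V -> relative power {power:.2f}")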


When the console launches, and all the tech heads and tech sites start doing their tests, I won't be surprised if PS5 is operating around the 9.5TF mark in graphically heavy games, especially the longer you play.
The thing is, in the example Cerny gave (the map screen of Horizon Zero Dawn making his PS4 Pro sound like a jet), the console was working flat out for no reason. That's why there is a 9.2TF mode: so there's less power draw and strain when there's no reason for it.

Obviously it will boost the CPU or GPU when they are under stress (though he implied they would not both boost at the same time).
 

wintersouls

Member
Oh, you read this elsewhere? (Couldn't figure it out on your own, huh?)

Please, tell me more.


It's a simple quote from something I agree with. Do you have a problem with that? You might try not insulting others. It's free; I encourage you to try it. Bye.
 

martino

Member
No idea to be honest because we have no clue on the defect density of the node or the voltage/frequency curve of the architecture on the node.

Maybe yields for the Xbox Soc are higher because lower binned chips will still pass validation and maybe yields for the PS5 Soc will be higher because of fewer defects. Can't say at the moment.
You need to take the other console into consideration in that equation; some chips that don't make it into XSX can go there.
 

Ribi

Member
You Sony and Xbox fans crack me up. The real winner here is Nintendo, revolutionizing the game by just breaking 30fps on most ports. You can have all your flops; I'll take my Joy-Cons.
 

CatLady

Selfishly plays on Xbox Purr-ies X
The lead system architect says the PS5 will run in boost mode the majority of the time, but I guess he was lying and these people who aren't working on the PS5 are right. 🤔

Oh hell yeah, Aaron Greenberg just wishes he was half as good as Cerny.
 

ThatGamingDude

I am a virgin
This thread is to start a conversation about whether the XSX's 52 CUs were an efficient way to achieve that additional 1.87 teraflop gain.
The only way to make that determination is to get in at the processor level and see exactly how the CUs work compared to the PS5's.
We'll know after product release, once people start being able to get chip readers and such going on them.
I'll need someone like a tech-vetted dev on GAF to elaborate more on this.
I'm not vetted, and really just an ass hat in an office with a big mouth, but here goes:

I'm not explaining shit, since it's implied from your posts that you get it.

Teraflops are great and all for measuring computation at a hardware level, but they're REALLY only part of the equation.

The software running on the hardware is going to be a major factor, as is how the operating system runs and talks to the hardware.
For example, you could have a fully 3D-rendered game, exceptionally coded, with great graphics, and because the code is so exceptional and utilizes the hardware in particular ways, the hardware never even bats an eye. Put an equally well-coded bitcoin miner on it? Gonna crank out that heat, son.

Then you have to take into account that, over the lifespan of the product, earlier games will not utilize the hardware as well as later ones, because knowledge of how the hardware works is still being built up.

You can argue this and that about hardware and teraflops and whatever; a huge component is still that you have to have firmware, an OS, and software running to utilize that hardware, and until we get live units, there's really not much to discuss

Software + firmware + OS + Hardware = System; Software, firmware, and OS are going to play a role too, so ignoring them in discussion would be silly

Even when we can do benchmarks, it's not really something that can be answered until the product comes out and has been under study for some time.

tl;dr Don't use teraflops strictly as a basis for how well a system will perform; it's silly and ignores a bunch of factors in the equation.
 

wintersouls

Member
Since yesterday I have not stopped reading people pontificating without a clue what they are talking about, because they do not really understand what Cerny explained yesterday; the talk was aimed at programmers and studios, not players. When you actually see the PS5 games next to the Xbox Series X, more than one person will be surprised to see that they look the same, or even better in the first-party titles. I have no doubt.

The games are going to speak for themselves. Give it time.
 

Mynd

Neo Member
12tf Vs 10tf

Both are theoretical maximums, and nothing is ever 100% efficient.

I think it's power Vs speed.

(Bad analogy) if it's a 100 meter race:
+ PS5 starts behind XsX but is faster
+ XsX starts *15 meters ahead of PS5 but is slower.

* A random value.

I have no idea what the end result will be, if I were to guess:
+ 3rd party devs will have an easier time with XsX (achieving more without as much effort?)

And another guess: game performance could depend on how a game is built, i.e. the game engine.

I’ll repeat what I’ve already said, go look at the original launch specs for the consoles this gen. There’s your answer.

we already know how this works out.
 

StreetsofBeige

Gold Member
12tf Vs 10tf

Both are theoretical maximums, and nothing is ever 100% efficient.

I think it's power Vs speed.

(Bad analogy) if it's a 100 meter race:
+ PS5 starts behind XsX but is faster
+ XsX starts *15 meters ahead of PS5 but is slower.

* A random value.

I have no idea what the end result will be, if I were to guess:
+ 3rd party devs will have an easier time with XsX (achieving more without as much effort?)

And another guess: game performance could depend on how a game is built, i.e. the game engine.
And SSD speed is a theoretical maximum too. Who knows how often and how efficiently PS5 can transfer data at 5.5GB/s.

For PS5 there's already a theoretical "max" on the CPU/GPU. Both can't run at their max 3.5GHz and 2.23GHz for long, since they have issues when maxed out. If they didn't, they'd have base clocks of 3.5 and 2.23 and wouldn't have to worry about one component maxing out and the other needing to clock down.
 

scalman

Member
Yes, they are both "up to". No one said or believes that the Xbox figure is a solid number; they are both doing power saving and such, and it's "up to". Everyone just loves those maxed numbers. It doesn't even mean that devs could use any of those extra TFLOPs. But some people are just here to make threads and talk; they don't even play games anyway.
 
What is the key factor behind implementing only 36 CUs? Why not use, like, 48, or whatever?

Cost? power consumption?

I assume it's for backwards compatibility: since the PS4 Pro also had 36 CUs, it probably helps them keep BC without asking developers to revisit their old games.

It's probably also tied to the AMD architecture and its memory bus; the PS4, Pro and PS5 are all on a 256-bit bus. To add more CUs they would have to increase the memory bus to 320 or 384 bits.
That in turn would complicate the motherboard, as you would need more traces and more memory chips, and that would increase production costs.
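The bus-width point maps directly onto bandwidth and chip count. A quick sketch assuming 14Gbps GDDR6 and 32-bit-wide memory chips (common figures for this generation, but assumptions on my part):

def gddr6_bandwidth_gbs(bus_bits, gbps_per_pin=14.0):
    # bandwidth in GB/s = bus width (bits) x data rate per pin (Gbps) / 8
    return bus_bits * gbps_per_pin / 8.0

for bus in (256, 320, 384):
    chips = bus // 32   # one 32-bit GDDR6 chip per 32 bits of bus width
    print(f"{bus}-bit bus: {gddr6_bandwidth_gbs(bus):.0f} GB/s, {chips} memory chips")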
 