
NXGamer - The TFlops are a lie



A) Let's say PS5 is 13 TFLOPS and Xsex is 12.2 TFLOPS, but Xsex has MORE pixel fillrate, texture fillrate, slightly more CPU speed, and slightly lower GPU speed; which console would be better?
B) Let's say PS5 is 9.2 TFLOPS and Xsex is 12.2 TFLOPS, but PS5 has MORE pixel fillrate, texture fillrate, slightly more CPU speed, and slightly higher GPU speed; which console would be better?
 
Last edited:
If 10% more TFs didn't matter, MS would never have boosted the XB1's clock before launch after Activision asked for more power to make CoD for their console... ~10% more TFs (1.19 TFs to 1.31 TFs) was enough for Activision to deliver the game.
I would have liked to be invited to that meeting!
 
A) Let's say PS5 is 13 TFLOPS and Xsex is 12.2 TFLOPS, but Xsex has MORE pixel fillrate, texture fillrate, slightly more CPU speed, and slightly lower GPU speed; which console would be better?
B) Let's say PS5 is 9.2 TFLOPS and Xsex is 12.2 TFLOPS, but PS5 has MORE pixel fillrate, texture fillrate, slightly more CPU speed, and slightly higher GPU speed; which console would be better?
These are not the only factors. The PS systems have an ARM CPU to run background tasks and I/O, so the main CPU is actually dedicated to gaming, and other bespoke hardware will be used by each machine for sound and other specific tasks (maybe ray tracing; Sony has an AI accelerator if some rumors are true). But if the variables you give are all there is to it, the answers are:

A) Series X will win
B) It depends.

This is why we need to wait; all those incomplete rumors are borderline useless, and some of them could be misleading.
 

NXGamer

Member
So why the false title?
It's not false, it's attention grabbing. If I put "The Tflops are not all you should worry about when comparing hardware potential" it just does not flow as much.

The video explains all this and stresses that Tflops ARE important but not the ONLY thing you need to concentrate on; I have to assume people will watch it AND then comment. The sad story is, without a 4-word title the majority of the audience just skip over the video.
 

ethomaz

Banned
People should say something like: They don't manifest in the same manner in all architectures....

My assumption is that this is way too subtle for most people to understand the nuance.
They do manifest in the same way in all architectures.
1 flops means you can do one FP32 operation in one second.
That is identical across all processing chips on the market (CPU, GPU, etc.).

Now, FP32 operations are not everything that's needed for 3D graphics rendering... operations that are not FP32 can take longer to process, or (what happens a lot in practice) the FP32 units sit waiting for something else to be executed before they can do their job. If an FP32 unit has to wait 3 seconds before it can process its operation, the whole thing takes 4 s, so that unit is not delivering 1 flops but, because of the wait, only 0.25 flops... and the wait can come from anything, from the GPU's own scheduling to external memory.
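A back-of-the-envelope sketch of that arithmetic (the 3-second stall is just the hypothetical number from the post, not a measured figure):

```python
# Effective throughput of a single FP32 unit that stalls before each operation.
# The numbers are purely illustrative, taken from the hypothetical above.

peak_flops = 1.0        # 1 flops: one FP32 operation per second at peak
work_seconds = 1.0      # time spent actually executing the operation
stall_seconds = 3.0     # time spent waiting (scheduling, memory, etc.)

effective_flops = peak_flops * work_seconds / (work_seconds + stall_seconds)
print(effective_flops)  # 0.25 -> the unit delivers a quarter of its peak rate
```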

In short, some hardware is more efficient at using these flops than other hardware.
The biggest trick with GPU parallel compute is precisely to use these flops for something else while they would otherwise sit idle waiting on a workload.

With GPU parallel compute you increase how efficiently the GPU hardware uses these flops.

Efficiency is the key here.
 

ethomaz

Banned
It's not false, it's attention grabbing. If I put "The Tflops are not all you should worry about when comparing hardware potential" it just does not flow as much.

The video explains all this and stresses that Tflops ARE important but not the ONLY thing you need to concentrate on; I have to assume people will watch it AND then comment. The sad story is, without a 4-word title the majority of the audience just skip over the video.
I believe people won't watch videos with titles they know are making a false claim.
I did not watch.
 
Last edited:

ethomaz

Banned
Then your conversation is uninformed and based on assumption, not good for someone quoting "technical truth" tut tut!
Yeap, the title being a false claim is just me being uninformed and basing things on assumptions lol

What a joke :messenger_tears_of_joy: :messenger_tears_of_joy: :messenger_tears_of_joy:

You can make a video that is reasonable and on point, but a title like this will still be a false claim.... a lie.
Nothing you show in your video will make TFlops a lie.
 
Last edited:

NXGamer

Member
One thing you should have done when overclocking the 750 Ti would have been to overclock only the GPU, so that the TF value is better isolated; as it is, it kind of muddies the water.

- So overclock the GPU only
- Then put the GPU back to regular speed and overclock only the RAM
- Overclock both GPU and memory

That would be a lot of work, but it would show how each element affects the final result.
I did, I even show the difference in the video with Rise of the TR: 23 to 28 from OCing the core; the RAM OC does nothing.
 


A) Let's say PS5 is 13 TFLOPS and Xsex is 12.2 TFLOPS, but Xsex has MORE pixel fillrate, texture fillrate, slightly more CPU speed, and slightly lower GPU speed; which console would be better?
B) Let's say PS5 is 9.2 TFLOPS and Xsex is 12.2 TFLOPS, but PS5 has MORE pixel fillrate, texture fillrate, slightly more CPU speed, and slightly higher GPU speed; which console would be better?
A.) Xbox would win, since CPU power is what will give the biggest performance advantage.
B.) Xbox would still win; PS5 would bottleneck if the GPU ran way higher than the CPU, and that's before we take the RAM into account.
 

NXGamer

Member
Yeap, the title being a false claim is just me being uninformed and basing things on assumptions lol

What a joke :messenger_tears_of_joy: :messenger_tears_of_joy: :messenger_tears_of_joy:

You can make a video that is reasonable and on point, but a title like this will still be a false claim.... a lie.
Your comment "I did not watch. "

Then states "
The video title just tries to spread the bullshit claim that "AMD flops are different from nVidia flops".
Flops are the same for AMD and nVidia... it is a metric, it is a fixed metric, it is not 1 for AMD and 2 for nVidia... it is equal for both.

How the hardware efficiently uses these flops, and that includes all the parts of the hardware, is what delivers the final performance. "

The very definition of assumption, bias or whatever else you want to call it.

If you won't even listen to an argument or another person's opinion, how will you learn...
 

ethomaz

Banned
A) Let's say PS5 is 13 TFLOPS and Xsex is 12.2 TFLOPS, but Xsex has MORE pixel fillrate, texture fillrate, slightly more CPU speed, and slightly lower GPU speed; which console would be better?
B) Let's say PS5 is 9.2 TFLOPS and Xsex is 12.2 TFLOPS, but PS5 has MORE pixel fillrate, texture fillrate, slightly more CPU speed, and slightly higher GPU speed; which console would be better?
In both cases Xtower will probably have the advantage.
 

ethomaz

Banned
Your comment "I did not watch. "

Then states "
The video title just tries to spread the bullshit claim that "AMD flops are different from nVidia flops".
Flops are the same for AMD and nVidia... it is a metric, it is a fixed metric, it is not 1 for AMD and 2 for nVidia... it is equal for both.

How the hardware efficiently uses these flops, and that includes all the parts of the hardware, is what delivers the final performance. "

The very definition of assumption, bias or whatever else you want to call it.

If you won't even listen to an argument or another person's opinion, how will you learn...
The video title does exactly that.
A lie to spread misinformation.

Make an accurate title next time ;)

Edit - It is not personal to you... the same will happen to anybody who makes that claim, because the claim is just a lie... it is unsustainable.
 
Last edited:

NXGamer

Member
NXGamer does great work but I disagree

Power is quantifiable, measurable

Power is power. You want the most you can get, period
And that is fine, the whole point is discussion, so thanks.

Ok, so answer me this, if you were stuck on a snow filled road, uphill and windy and you had 2 options to get home as fast as you could.
1) A Mountain Bike with spiked wheels, 20 gears and ABS.
2) A Dodge viper, rear wheel drive, slick tyres and 350 BHP.

Which would you choose?
 

ethomaz

Banned
Ask an informed question and watch the video next time ;-)
I can ask a lot....

Why did you lie in the title?

I can watch any video as long as the title is not a lie.... which brings me to my second question... why should I watch a video with a lie in the title?
 
Last edited:

sinnergy

Member
He says higher pixel fill rate doesn't matter that much. Isn't that what the PS5 GitHub leak showed: higher clocks, higher pixel fill rate? That's what I got from the video... right? If the other areas are better on the side of the competition...
 
Last edited:

NXGamer

Member


A) Let's say PS5 is 13 TFLOPS and Xsex is 12.2 TFLOPS, but Xsex has MORE pixel fillrate, texture fillrate, slightly more CPU speed, and slightly lower GPU speed; which console would be better?
B) Let's say PS5 is 9.2 TFLOPS and Xsex is 12.2 TFLOPS, but PS5 has MORE pixel fillrate, texture fillrate, slightly more CPU speed, and slightly higher GPU speed; which console would be better?
A would be Xbox.
B: PS5 would likely have better IQ, resolution and details, and potentially a more consistent frame-rate. Xbox would be able to do more at a lower resolution or slower rate.
 
These are not the only factors. The PS systems have an ARM CPU to run background tasks and I/O, so the main CPU is actually dedicated to gaming, and other bespoke hardware will be used by each machine for sound and other specific tasks (maybe ray tracing; Sony has an AI accelerator if some rumors are true). But if the variables you give are all there is to it, the answers are:

A) Series X will win
B) It depends.

This is why we need to wait; all those incomplete rumors are borderline useless, and some of them could be misleading.

In both cases Xtower will probably have the advantage.

A.) Xbox would win, since CPU power is what will give the biggest performance advantage.
B.) Xbox would still win; PS5 would bottleneck if the GPU ran way higher than the CPU, and that's before we take the RAM into account.

Hmm.... :pie_thinking:

The insiders are stating that PS5 has a slight edge in TFLOPS and other things, versus the leaks stating Xbox is double-digit TFLOPS while Sony's is not, but has a better SSD implementation and other things.

Fuck, so hard to tell; it's like they are both close to the finish line, and one is slightly closer/further depending on the camera angle and perspective.
 
Last edited:

ethomaz

Banned
He says higher pixel fill rate doesn't matter that much. Isn't that what the PS5 GitHub leak showed: higher clocks, higher pixel fill rate? That's what I got from the video... right? If the other areas are better on the side of the competition...
Pixel fill rate is the number of ROPs x clock, right? So high clocks alone won't give a higher pixel fill rate... how many ROPs does the PS5 Github leak have?

Hmm.... :pie_thinking:

The insiders are stating that PS5 has a slight edge in TFLOPS and other things, versus the leaks stating Xbox is double-digit TFLOPS while Sony's is not, but has a better SSD implementation and other things.

Fuck, so hard to tell; it's like they are both close to the finish line, and one is slightly closer/further depending on the camera angle and perspective.
Well, you need to choose one leak, because if you try to make sense of all of them you will have a headache before you reach a conclusion :D
 
Last edited:

Spukc

always chasing the next thrill
And that is fine, the whole point is discussion, so thanks.

Ok, so answer me this, if you were stuck on a snow filled road, uphill and windy and you had 2 options to get home as fast as you could.
1) A Mountain Bike with spiked wheels, 20 gears and ABS.
2) A Dodge viper, rear wheel drive, slick tyres and 350 BHP.

Which would you choose?
Option 2, only pussies cycle on mountain bikes.

Also, Tflops do matter, but somehow they magically don't anymore every time another console seems to have less of them.
 

sinnergy

Member
Pixel fill rate is the number of ROPs x clock, right? So high clocks alone won't give a higher pixel fill rate... how many ROPs does the PS5 Github have?

But it was my understanding PS5 had Series X beat.

Looked it up: PS5 has 64 ROPs at 911 MHz in PS4 Pro mode. How many ROPs would PS5 mode have? With 40 CUs?

Hmm, the leak says 36 CUs at 2 GHz native, which would mean 64 ROPs, right?
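For what it's worth, a quick sketch of how those leaked figures turn into TFLOPS and pixel fill rate, assuming the usual formulas (64 shaders per CU, 2 FP32 ops per shader per clock) and the unconfirmed 64-ROP guess from the post:

```python
# Rough numbers from the GitHub-leak figures quoted above. Assumes 64 shaders
# per CU and 2 FP32 ops per shader per clock (FMA); the 64 ROPs are only a
# guess from the discussion, not a confirmed spec.

cus = 36
clock_ghz = 2.0
rops = 64

shaders = cus * 64
tflops = shaders * 2 * clock_ghz / 1000   # ~9.2 TFLOPS
pixel_fill = rops * clock_ghz             # ~128 Gpixels/s

print(f"~{tflops:.1f} TFLOPS, ~{pixel_fill:.0f} Gpixel/s")
```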
 
Last edited:

NXGamer

Member
Option 2, only pussies cycle on mountain bikes.

Also, Tflops do matter, but somehow they magically don't anymore every time another console seems to have less of them.
Ha ha, the right answer.

I will say it here as it seems to have been missed by many commenting: I am NOT saying TFLOPS do not matter, I even say they are important. ALL I am saying AND demonstrating here is that they are only part of the conversation; you have to take all of the related elements into account.

Even this generation we had the "but dah CPUs are poor lol" argument; although extreme, it demonstrates the balance of hardware in a machine.

An 8 Tflop GPU paired with a 256-bit bus, 32 ROPs, 56 TMUs and/or 200 GB/s of bandwidth will, most of the time, be much slower and perform worse than a 6 Tflop GPU with a 384-bit bus, 64 ROPs, 96 TMUs and 512 GB/s of bandwidth. It is all about each part not limiting the others, or doing so to the least amount.
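To make that concrete, here is a rough sketch of the headline rates for those two hypothetical GPUs. The shared 1.5 GHz clock is an arbitrary value picked purely for illustration; fill and texture rates just use the usual ROPs x clock and TMUs x clock formulas:

```python
# Headline rates for the two hypothetical GPUs described in the post above.
# The shared 1.5 GHz clock is arbitrary and only there to turn ROP/TMU counts
# into fill rates; none of these figures describe a real card.

clock_ghz = 1.5

gpus = {
    "8 Tflop GPU": {"tflops": 8, "rops": 32, "tmus": 56, "bandwidth_gbs": 200},
    "6 Tflop GPU": {"tflops": 6, "rops": 64, "tmus": 96, "bandwidth_gbs": 512},
}

for name, g in gpus.items():
    pixel_fill = g["rops"] * clock_ghz   # Gpixels/s
    texel_fill = g["tmus"] * clock_ghz   # Gtexels/s
    print(f"{name}: {g['tflops']} TF, {pixel_fill:.0f} Gpix/s, "
          f"{texel_fill:.0f} Gtex/s, {g['bandwidth_gbs']} GB/s")
```

On paper the 6 Tflop card wins every column except compute, which is exactly the balance point being made here.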
 

sinnergy

Member
Do ROPs scale with CU count? If Series X has 56 CUs, for example, wouldn't it also have more ROPs? If, for example, PS5 has 36 CUs and 64 ROPs... no idea...
 
Last edited:
Ha ha, the right answer.

I will say it here as it seems to have been missed by many commenting: I am NOT saying TFLOPS do not matter, I even say they are important. ALL I am saying AND demonstrating here is that they are only part of the conversation; you have to take all of the related elements into account.

Even this generation we had the "but dah CPUs are poor lol" argument; although extreme, it demonstrates the balance of hardware in a machine.

An 8 Tflop GPU paired with a 256-bit bus, 32 ROPs, 56 TMUs and/or 200 GB/s of bandwidth will, most of the time, be much slower and perform worse than a 6 Tflop GPU with a 384-bit bus, 64 ROPs, 96 TMUs and 512 GB/s of bandwidth. It is all about each part not limiting the others, or doing so to the least amount.

I guess what I'm asking is what would be the tipping point, or what would make one cross the finish line with all other things being equal for both consoles, assuming PS5 is 9.2 TFLOPS and Xsex is 12 TFLOPS, because it has been somewhat established that TFLOPS are not the only thing that matters. Since you created a good chart in your video, I was playing around with the variables and asking members of NeoGAF, and you, who have a better understanding of this.

Thanks guys for answering my noob questions. lol.
 

GHG

Gold Member
And that is fine, the whole point is discussion, so thanks.

Ok, so answer me this, if you were stuck on a snow filled road, uphill and windy and you had 2 options to get home as fast as you could.
1) A Mountain Bike with spiked wheels, 20 gears and ABS.
2) A Dodge viper, rear wheel drive, slick tyres and 350 BHP.

Which would you choose?

I'd go for the Dodge Viper. Even if I couldn't go anywhere at least I'd be warm and comfortable.

(Greta Thunberg "how dare you" gif)
 

thelastword

Banned
Tflops are Tflops, it's math, it's not wrong... Some architectures are just more efficient than others, so some pull more performance at the same TFLOP count... Efficiency can be hardware based and software based... NV had more performance in the past because it did things like lowering texture detail and color samples in its games to boost performance, sometimes imperceptibly to a player in motion... If one GPU supports VRS and another does not, you will see improved perf on the one with VRS....

As an example: Vega had lots of raw performance, and it did none of the things NV did to push perf, so it worked harder and all its TFLOPS went into raw rendering... Vega had the raw throughput, but it never really got the most out of its architecture because devs favored Nvidia; if Vega had the efficiency of Pascal it would run circles around it. I think some games on Vega give credence to that fact: some games performed better on Vega, being more suited to the architecture, but that still does not negate the fact that AMD was not as efficient... Look at this then: in the latest 4000 APUs, AMD was able to pull 59% more efficiency out of the old Vega arch, so it goes to show, a higher TFLOP count is always better, it all depends on the engineering relative to efficiency and how easy it is to utilize 100% of the architecture...

From the rumormill, Sony seems to be winning on both power and ease of development, so it's really those two combined that will make what you see on your screen all the more impressive... Even the Black Tiger devs should put out something not stuck in 1994...
 

ethomaz

Banned
But it was my understanding PS5 had Series X beat.

Looked it up: PS5 has 64 ROPs at 911 MHz in PS4 Pro mode. How many ROPs would PS5 mode have? With 40 CUs?

Hmm, the leak says 36 CUs at 2 GHz native, which would mean 64 ROPs, right?
I don't know but if you take RDNA cards we have:

40CUs / 64ROPs
36CUs / 64ROPs
32CUs / 64ROPs
24CUs / 32ROPs
22CUs / 32ROPs
20CUs / 32ROPs

I don't think CUs and ROPs are related at all.
Seems like AMD can put whatever amount of ROPs they want in RDNA.

That said, for 8K/4K output 64 ROPs seems low.
Maybe PS5 / Xtower have 96 or 128 ROPs.
 
Last edited:
1 flops means you can do one operation of FP32 in one second.
The point is how useful the metric is when taken at face value.

It has some use, but it is far from the whole picture.

It's like me asking how long it will take me to get to the grocery store when the only information you have is how much horsepower my car's motor can generate - sure, it's a fixed measure, but it's not very useful.
 
Last edited:
Tflops are Tflops, it's math, it's not wrong... Some architectures are just more efficient than others, so some pull more performance at the same TFLOP count... Efficiency can be hardware based and software based... NV had more performance in the past because it did things like lowering texture detail and color samples in its games to boost performance, sometimes imperceptibly to a player in motion... If one GPU supports VRS and another does not, you will see improved perf on the one with VRS....

As an example: Vega had lots of raw performance, and it did none of the things NV did to push perf, so it worked harder and all its TFLOPS went into raw rendering... Vega had the raw throughput, but it never really got the most out of its architecture because devs favored Nvidia; if Vega had the efficiency of Pascal it would run circles around it. I think some games on Vega give credence to that fact: some games performed better on Vega, being more suited to the architecture, but that still does not negate the fact that AMD was not as efficient... Look at this then: in the latest 4000 APUs, AMD was able to pull 59% more efficiency out of the old Vega arch, so it goes to show, a higher TFLOP count is always better, it all depends on the engineering relative to efficiency and how easy it is to utilize 100% of the architecture...

From the rumormill, Sony seems to be winning on both power and ease of development, so it's really those two combined that will make what you see on your screen all the more impressive... Even the Black Tiger devs should put out something not stuck in 1994...

I'm confused. Does Vega have more TFLOPS than Nvidia Pascal GPUs?
 

ethomaz

Banned
I'm confused. Does Vega have more TFLOPS than Nvidia Pascal GPUs?
Pascal has fewer TFLOPS.
Vega indeed takes the lead in compute tasks, like he said, because it has more power.
nVidia hardware is more efficient in its use of the flops for render tasks... nVidia does more with less when it comes to rendering 3D.

Edit - Fixed because "it" was confusing.
 
Last edited:
Well, you need to choose one leak, because if you try to make sense of all of them you will have a headache before you reach a conclusion :D
I have a headache.
I'm confused. Does Vega have more TFLOPS than Nvidia Pascal GPUs?
They are a literal flop, so they win all the time!

I don't think CUs and ROPs are related at all.
Not directly; those who design the cards try to balance them for a target resolution/silicon budget... the PS4 has a "disproportionate" amount of ROPs compared to its CU count (when put against what AMD offered at the time in cards with similar computing power), which probably helps it run games at 1080p. Obviously MS did not agree, but given how the Xbox One performs at 1080p, it's telling.
 
Last edited:

sinnergy

Member
I don't know but if you take RDNA cards we have:

40CUs / 64ROPs
36CUs / 64ROPs
32CUs / 64ROPs
24CUs / 32ROPs
22CUs / 32ROPs
20CUs / 32ROPs

I don't think CUs and ROPs are related at all.
Seems like AMD can put whatever amount of ROPs they want in RDNA.

That said, for 8K/4K output 64 ROPs seems low.
Maybe PS5 / Xtower have 96 or 128 ROPs.
Hmm, interesting. Let's assume the leak is right: Sony could still beat Series X in pixel fill rate if Series X also has 64 ROPs,

unless 54 CUs and up have 128 ROPs (doubled), or 96 ROPs if you add another 32; maybe that's the stepping.
 
Last edited:

Nikana

Go Go Neo Rangers!
And that is fine, the whole point is discussion, so thanks.

Ok, so answer me this, if you were stuck on a snow filled road, uphill and windy and you had 2 options to get home as fast as you could.
1) A Mountain Bike with spiked wheels, 20 gears and ABS.
2) A Dodge viper, rear wheel drive, slick tyres and 350 BHP.

Which would you choose?
Viper. Crank the heat, wait for the snow to melt, sell the Viper, profit.
 
It has less TFLOPS.
Vega indeed takes the lead in compute tasks, like he said, because it has more power.

So Vega has less TFLOPS and is less efficient than NVIDIA Pascal - makes sense, but then how does it have more power? Because of its compute units and HBM2 memory as opposed to GDDR? From my understanding it's good for video editing and graphics rendering.
 

ethomaz

Banned
So Vega has less TFLOPS and is less efficient than NVIDIA Pascal - makes sense, but then how does it have more power? Because of its compute units and HBM2 memory as opposed to GDDR? From my understanding it's good for video editing and graphics rendering.
Pascal has less TFs... way less.

TFs: Vega > Pascal
Compute tasks: Vega > Pascal
Render 3D tasks: Pascal > Vega
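For a rough sense of scale, a sketch using the usual shaders x 2 ops x clock formula with approximate reference boost clocks (ballpark figures from memory, not exact spec-sheet values):

```python
# Approximate TFLOPS for a Vega and a Pascal card: shaders * 2 ops/clock * clock.
# Boost clocks are rounded reference values, so treat the results as ballpark.

cards = {
    "Radeon RX Vega 64": (4096, 1.55),   # (shaders, boost clock in GHz)
    "GeForce GTX 1080":  (2560, 1.73),
}

for name, (shaders, clock_ghz) in cards.items():
    tflops = shaders * 2 * clock_ghz / 1000
    print(f"{name}: ~{tflops:.1f} TFLOPS")

# Vega 64 is roughly 40% ahead on paper (~12.7 vs ~8.9 TFLOPS), yet the two
# cards trade blows in most games, which is the efficiency point above.
```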
 
Last edited:

Journey

Banned
Where was his analysis when it was rumored the PS5 had more teraflops? Then he tries to claim he's not biased :messenger_grinning_sweat:
2nd topic about Teraflops not mattering after the rumors of 9.2 vs 12 surfaced. You have to admit the timing is sketchy even if intents are pure.
 
Last edited:
TLDW please.
For those allergic to clickbaity-looking videos.
He's simply making a comparison with PC parts: games made specifically for the console hardware vs. games made for general hardware. PC counterparts usually don't perform as well as console parts even with better specs in general. I mean, supposedly everyone knows this, but still. He has a point.
 

ethomaz

Banned
Do ROPs scale with CU count? If Series X has 56 CUs, for example, wouldn't it also have more ROPs? If, for example, PS5 has 36 CUs and 64 ROPs... no idea...
I don't think it is tied in RDNA.
It was in GCN... a big issue, because you had big GPUs with a ROP limitation.

GCN has 16 ROPs per cluster of CUs.... GCN can have a max of 4 clusters of CUs... so you get 64 ROPs only on cards like Vega... and only 16 ROPs on smaller chips like the Xbox One.
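A tiny sketch of that ceiling, following the post's arithmetic (the per-cluster breakdown is just the figure stated above, not a spec-sheet citation):

```python
# GCN's ROP ceiling as described above: up to 4 clusters (shader engines),
# each contributing up to 16 ROPs. Figures follow the post, approximately.

rops_per_cluster = 16
max_clusters = 4

print(rops_per_cluster * max_clusters)  # 64 -> ceiling on big GCN cards like Vega
print(rops_per_cluster * 1)             # 16 -> small chips like the Xbox One GPU
```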
 
Last edited:
Pascal has less TFs... way less.

TFs: Vega > Pascal
Compute tasks: Vega > Pascal
Render 3D tasks: Pascal > Vega

OK, Vega has more TFLOPS, but then he is also stating that having more TFLOPS is always better:

"a higher TFLOP count is always better"

But then he's saying that Vega is not better because of its architecture and inefficiencies, which is what NXGamer is trying to say: a higher TFLOP count alone doesn't make a GPU better.
 

ethomaz

Banned
OK, Vega has more TFLOPS, but then he is also stating that having more TFLOPS is always better:

"a higher TFLOP count is always better"

But then he's saying that Vega is not better because of its architecture and inefficiencies, which is what NXGamer is trying to say: a higher TFLOP count alone doesn't make a GPU better.
Any card with more TFs will be better.
Pascal will be better with more TFs.
Vega will be better with more TFs.

Thelastword is biased toward AMD, so he always tries to make AMD cards out to be better than they are, but what he said I think is right.
Vega is more powerful than Pascal.
Vega has more TFs than Pascal.
Vega does Compute Tasks better than Pascal.
Vega is less efficient in 3D render than Pascal.
Pascal does 3D Render better than Vega.
 
Last edited:

sinnergy

Member
I don't think it is tied in RDNA.
It was in GCN... a big issue, because you had big GPUs with a ROP limitation.

GCN has 16 ROPs per cluster of CUs.... GCN can have a max of 4 clusters of CUs... so you get 64 ROPs only on cards like Vega... and only 16 ROPs on smaller chips like the Xbox One.
Could a Series X with 56 CUs have 96 or 128 ROPs, if you add another 32 like you see with the jump in ROPs from 24 CUs to 40 CUs? Otherwise you are limited...
 
Last edited: