
nVidia Teraflops vs. AMD Teraflops: What is the exchange rate these days?

Kleegamefan

K. LEE GAIDEN
So most (perhaps all?) of the impressive next-gen-ish looking E3 demos (Watch_Dogs, Star Wars 1313, Unreal Engine 4, Agni's Philosophy) seemed to be running on nVidia hardware... GTX 680 IIRC?


Epic is saying 2+ teraflops is needed for stuff like Samaritan, but again, they use nVidia hardware...

Problem is, rumor has it all of the next-gen consoles will be powered by AMD Radeon technology across the board...

So is it a 1:1 ratio when comparing 1TF of performance between nVidia and AMD Radeon cards?

I suspect not, but what is it exactly?
 

Htown

STOP SHITTING ON MY MOTHER'S HEADSTONE
First of all, console performance with a given graphics architecture and PC performance with a given architecture are apples and oranges.

Even when comparing on PC, though, it's not that simple.
 
That's like measuring whether people from different nations are on average better or worse at certain Olympic competitions in order to figure out which nation has the best breeds of dogs.

The difference between GPUs in PCs and in consoles >>>>>>> the difference between the two brands on PC.
 

bobbytkc

ADD New Gen Gamer
First of all, FLOPS are just the number of basic floating point operations per second. That roughly means the number of numbers you can crunch each second. Computing floating point numbers is not everything the GPU does, but if you are comparing the same operation, there is no such thing as an exchange rate, because you are comparing the same thing.
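
To make that concrete, here's a minimal Python sketch of what a FLOPS figure measures: count the floating point operations a kernel performs and divide by the time it takes. Purely illustrative; a Python loop on a CPU will report a tiny fraction of a GPU's rated figure, and this is not how vendors derive their ratings.

import time

def measure_gflops(n=1_000_000):
    a, b, c = 1.0001, 0.9999, 0.0
    start = time.perf_counter()
    for _ in range(n):
        c = c + a * b          # one multiply + one add = 2 floating point ops
    elapsed = time.perf_counter() - start
    total_flops = 2 * n        # total floating point operations performed
    return total_flops / elapsed / 1e9   # operations per second, in GFLOPS

print(f"achieved: {measure_gflops():.3f} GFLOPS")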
 

Kleegamefan

K. LEE GAIDEN
Hmm... so it's a meaningless comparison?

Wonder why Epic chose an arbitrary performance bar for the console vendors to hurdle? :/
 
So is it a 1:1 ratio when comparing 1TF of performance between nVidia and AMD Radeon cards?

Nope. Architectures are different.

I suspect not, but what is it exactly?

That's tough to say. IMO you'd have to look at benchmarks of various comparable AMD/nVidia cards and form a ratio from them to get an idea. Take 680 benchmarks vs. 7970 benchmarks. AMD/ATi seems to have closed the performance gap with comparable-class GPUs from nVidia, but their real-world results still lag behind what their FLOP ratings would suggest.

The 7970 is rated at ~3.8 TFLOPS and the 680 at ~3.1 TFLOPS. Yet in most benchmark comparisons I've seen (including the ones in the following link), the 680 still edges out the 7970 in most tests.

http://www.tomshardware.com/reviews/geforce-gtx-680-review-benchmark,3161-7.html
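
To make the "form a ratio" idea concrete, a minimal Python sketch: divide benchmark results by each card's rated TFLOPS to see how much real performance a "paper FLOP" buys on each architecture. The fps values below are placeholders, not real measurements; substitute the actual numbers from the linked benchmarks.

rated_tflops = {"GTX 680": 3.09, "HD 7970": 3.79}   # theoretical peaks
benchmark_fps = {"GTX 680": 60.0, "HD 7970": 55.0}  # placeholder results only

for card in rated_tflops:
    fps_per_tflop = benchmark_fps[card] / rated_tflops[card]
    print(f"{card}: {fps_per_tflop:.1f} fps per rated TFLOP")

# The ratio of the two fps-per-TFLOP figures is the "exchange rate"
# the thread is asking about (for whatever benchmark you plug in).
ratio = (benchmark_fps["GTX 680"] / rated_tflops["GTX 680"]) / \
        (benchmark_fps["HD 7970"] / rated_tflops["HD 7970"])
print(f"one NV FLOP ~= {ratio:.2f} AMD FLOPs in this comparison")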

And from what I've seen, nVidia's FLOP rating has been considered closer to "real world performance", i.e. more accurate.

Then of course the GPUs in the consoles won't be on par raw power-wise with a 7970 or 680.

We're left wondering how unoptimized these demos are when running on a 680, and, depending on what the console GPUs end up being (PS4's seems to be the best so far), how long it will take to fully optimize for them.
 

tokkun

Member
Hmm... so it's a meaningless comparison?

Wonder why Epic chose an arbitrary performance bar for the console vendors to hurdle? :/

I suppose they felt pressured to say something about performance, and the alternatives to FLOPs are as bad or worse in terms of being vague and misleading.
 

McHuj

Member
And from what I've seen, nVidia's FLOP rating has been considered closer to "real world performance", i.e. more accurate.

With Kepler, both Nvidia and AMD count FLOPS the same way: number of shaders x 2 x clock frequency.
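
As a quick sanity check of that formula, a short Python sketch using the commonly quoted shader counts and reference clocks for the two cards discussed above:

def peak_gflops(shaders, clock_mhz, ops_per_clock=2):
    # shaders x 2 (one multiply-add per shader per clock) x clock frequency
    return shaders * ops_per_clock * clock_mhz / 1000.0   # MHz in, GFLOPS out

print(f"GTX 680: {peak_gflops(1536, 1006):.0f} GFLOPS")   # ~3090, i.e. ~3.1 TFLOPS
print(f"HD 7970: {peak_gflops(2048, 925):.0f} GFLOPS")    # ~3789, i.e. ~3.8 TFLOPS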

The magic in why one is better than the other lies in how/why Nvidia can feed its shaders more efficiently. All the caches in a GPU, the thread schedulers, and other support functionality are just as important to the overall performance of the GPU.

That's why I think a modern GPU, even with a theoretical FLOP count similar to Xenos, could run circles around it.
 
The magic in why one is better than the other lies in how/why Nvidia can feed its shaders more efficiently. All the caches in a GPU, the thread schedulers, and other support functionality are just as important to the overall performance of the GPU.

Thanks; I was drawing a blank on the why and had planned to include it, but didn't bother looking it up.

But yeah, those things you listed play into why nVidia's GPU ratings are more accurate.
 

Durante

Member
I don't think it's a meaningless question; by averaging over a lot of games/graphics benchmarks you can at least arrive at a ballpark number.

Since the Kepler (the 600 series), I'd say the exchange ratio is about 4 : 3 or slightly better, in favor of NV. Before Kepler, it was closer to 5 : 3 in favor of NV.

As for recent developments, Kepler went broader in terms of SIMD (and thus lost some efficiency in exchange for more FLOPs per transistor count), while AMD ditched VLIW, which increased their resource utilization for most workloads.

This also means that a modern non-VLIW 2.4 TFLOP AMD GPU will be more than 10x as fast in most realistic scenarios as the 240 GFLOP Xenos in 360.
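
As a rough back-of-the-envelope in Python (the 4:3 figure is the estimate above, not a measurement):

NV_PER_AMD = 3.0 / 4.0          # ~4:3 in favor of NV, post-Kepler estimate

amd_tflops = 2.4
print(f"{amd_tflops} AMD TFLOPS ~= {amd_tflops * NV_PER_AMD:.1f} NV-equivalent TFLOPS")

xenos_tflops = 0.24
print(f"paper ratio vs Xenos: {amd_tflops / xenos_tflops:.0f}x")
# 10x on paper; better utilization (non-VLIW, bigger caches, better schedulers)
# is why the post expects more than 10x in realistic workloads.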
 