
what’s a teraflop

Mista

Banned
Ask your wife.

A teraflop refers to the capability of a processor to calculate one trillion floating-point operations per second. For example, a 5-teraflop processor setup is capable of handling 5 trillion floating-point calculations every second.

Flops is often used as a metric for computational power because hardware that can process more mathematical operations per second is better than hardware that can process fewer.
 
Last edited:

Papa

Banned
Ask your wife.

A teraflop refers to the capability of a processor to calculate one trillion floating-point operations per second. For example, a 5-teraflop processor setup is capable of handling 5 trillion floating-point calculations every second.


Flops is often used as a metric for computational power because hardware that can process more mathematical operations per second is better than hardware that can process fewer.

thx wifey
 

Papa

Banned
Wait Mista Mista I thought processing power was determined by the processor speed. What’s the difference between hertz and flopz?
 

McCheese

Member
hertz is clock speed, but modern cpus can perform multiple operations per cycle, and even speculatively execute operations that may occur next.

flops are floating-point operations: loading a number into a register, doing arithmetic with it and putting the result back into memory.

so the math is sockets * cores * clockspeed * number of flops it can perform per cycle
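That formula can be sketched in a few lines; the part specs below are made-up example numbers, not any real chip.

```python
# Theoretical peak FLOPS = sockets * cores * clock speed * FLOPs per cycle.
def peak_flops(sockets, cores, clock_hz, flops_per_cycle):
    return sockets * cores * clock_hz * flops_per_cycle

# Hypothetical part: 1 socket, 8 cores, 3.5 GHz, 16 FLOPs per cycle.
print(peak_flops(1, 8, 3.5e9, 16) / 1e12, "TFLOPS")  # 0.448 TFLOPS
```

Keep in mind this is the theoretical peak; real workloads rarely sustain it.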
 
A teraflop refers to the capability of a processor to calculate one trillion floating-point operations per second.


Technically it is the capability to complete up to one trillion calculations per second at peak output.

Processors are multi-stage pipelined, like an assembly line. And just like an assembly line, for cars for example:

Say a car takes 24hrs to assemble. Take that work, divide it into 24 stages, then man each stage, creating an assembly line. It is accurate to say that at the end of the line you are outputting 1 car per hour, but each car still took 24hrs to fully get through the pipeline.

Once a factory with an assembly line is fully running, it can output a fully assembled car every hour, or whatever its peak is. But for that first car to be completed there is significant start-up time. And of course if the line has to switch to building trucks or some other vehicle, you may need to "flush" much or all of the partially completed work in the pipeline. And that is where performance is lost.
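The latency-vs-throughput point in that analogy can be put into a toy model; the 24-stage depth just follows the car example above, it's not a real processor's pipeline depth.

```python
# Toy pipeline model: first item pays the full pipeline latency,
# then one item completes per stage-time (steady-state throughput).
STAGES = 24  # pipeline depth, one "hour" per stage, per the car analogy

def completion_time(n_items):
    return STAGES + (n_items - 1)

print(completion_time(1))    # 24: first car takes the whole pipeline latency
print(completion_time(100))  # 123: after warm-up, roughly one car per hour
```

A pipeline flush is what happens when you throw away those in-flight stages and pay the full start-up latency again.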
 
Last edited:

CuNi

Member
I feel like me going into all those threads and screaming "A FLOP IS A SCIENTIFIC MEASUREMENT WHICH IS NOT DIRECTLY TRANSLATABLE INTO GAMING PERFORMANCE" all led up to this moment in time.
 

Azelover

Titanic was called the Ship of Dreams, and it was. It really was.
I was thinking of a joke, but I better not.

A flop is a floating-point operation.
 

molasar

Banned
It happens when your snake has a malfunction. Then you can describe its level on a floposcale from a bitflop to a pebiflop. A teraflop is somewhere between. Probably Richard from DF can give you more tech details about it.
 
Last edited:

Aggelos

Member
In computing, floating point operations per second (FLOPS, flops or flop/s) is a measure of computer performance, useful in fields of scientific computations that require floating-point calculations. For such cases it is a more accurate measure than measuring instructions per second.
Floating-point arithmetic is needed for very large or very small real numbers, or computations that require a large dynamic range.





In computing, floating-point arithmetic (FP) is arithmetic using formulaic representation of real numbers as an approximation to support a trade-off between range and precision. For this reason, floating-point computation is often found in systems which include very small and very large real numbers, which require fast processing times.
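That range-versus-precision trade-off is easy to see from Python, whose floats are IEEE 754 doubles:

```python
import sys

# Huge range: the largest finite double is around 1.8e308.
print(sys.float_info.max)

# Limited precision: classic decimal rounding surprise.
print(0.1 + 0.2 == 0.3)      # False

# Above 2**53, doubles can no longer represent every integer exactly.
print(2**53 + 1.0 == 2**53)  # True
```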


 
Last edited:

Diddy X

Member
I guess a flop is like the smallest unit a 3D-capable processor can compute, so they use it to measure the power of videogame hardware.
 

TGO

Hype Train conductor. Works harder than it steams.
giphy.gif

Never gets old.:messenger_tears_of_joy:
 
Last edited:

Cato

Banned
I see it mentioned a lot but I have no idea what it is.

I know what a gigabyte is.

I know what a gigahertz is.

I don’t know what a teraflop is.

mods halp

The Cray X-MP 4/16 supercomputer (a.k.a. very expensive sofa for the computer lab) I used at uni did ~110 megaflops per core.
One class assignment was to write assembler (or Fortran) subroutines for matrix multiplications
and keep the pipelines from stalling. How to interleave your data across the memory banks was very important,
and to get a passing grade your code had to sustain >95% of theoretical peak performance for these matrix multiplications.
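For scale, the flop count of a matrix multiply is easy to work out; the 1000x1000 size below is a made-up example, not from the actual assignment.

```python
# A naive N x N matrix multiply does N multiplies and N adds
# for each of the N*N output elements: 2*N^3 flops in total.
def matmul_flops(n):
    return 2 * n ** 3

# Hypothetical 1000x1000 matrices: 2e9 flops, i.e. roughly
# 18 seconds on one 110-megaflop Cray core even at peak.
print(matmul_flops(1000))
```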


But real men don't measure performance in flops. Measuring it in DAXPY is where the real game is.
 

Hostile_18

Banned
This place has gotten really fucking weird lately. The endless name changes, the weird in-jokes, the frat atmosphere, the “yo mama” quips. Not saying it’s bad necessarily, it’s just.... interesting I guess?

For a while (late last year) the board was becoming slightly aggressive again, now it's like a group of friends rather than members. I'm loving it and hope it lasts ❤.

If you don't like the love I recommend the next gen speculation thread, they play by their own rules over there. 😉
 

Spukc

always chasing the next thrill
No idea OP
Only thing I know is that the new Xbox Series X has the most.
 
Last edited:

Cato

Banned

Back to the real question.
Flops, or floating-point operations per second, was used for measuring how fast you can do actual arithmetic
(for non-integers).
Hertz is the actual clock speed, and for integer operations you nowadays have an integer arithmetic unit that can do most operations in a single clock cycle. At least add and subtract. Multiplication is a little more expensive but not much, and division can be somewhat more expensive.

But real programs don't limit themselves to integer math; they need floating-point numbers.
Floating point is the name of a different representation for numbers, one that comes closer to describing rational numbers than integers do.
There used to be a whole lot of different representations for these "rational" numbers, but in the end IEEE 754 won out and everything today uses its sign/exponent/mantissa format.

That representation is a LOT more complex than the integer case, where you could just use a simple adder circuit to add two numbers together.
Here, before you can even add two numbers you first need to align them so that they use the same exponent, etc.
Even a "simple" thing like addition becomes very complex and can require many clock cycles to complete.
That is not even talking about things like multiplication, exponents, logarithms, trigonometric functions.
Some of the more complex operations on rational/floating-point numbers could take thousands or tens of thousands of clock cycles, even with dedicated hardware.

Since the cost in clock cycles spans several orders of magnitude between cheap and expensive operations, hertz is not a practical measure of floating-point performance. Hence flops are used instead, which is mostly a kind of average, middle-of-the-road cost of the more common operations. I think a flop maps closely to the cost of a multiplication, which is more expensive than an addition but less expensive than a division.

But even then, flops is too inexact for some fields. Linear algebra, matrix multiplication, vector analysis, etc. are all about a very specific operation where the atom is basically computing result = a * X + Y over and over and over.
Hence those folks measure performance in S/DAXPY: Single/Double precision A times X Plus Y.
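That AXPY kernel is short enough to write out in full; this is a plain-Python sketch of the operation, not the tuned BLAS routine itself.

```python
# AXPY: compute a*x + y elementwise, the atom of classic
# linear-algebra benchmarks (DAXPY in double precision).
def daxpy(a, x, y):
    return [a * xi + yi for xi, yi in zip(x, y)]

print(daxpy(2.0, [1.0, 2.0, 3.0], [10.0, 20.0, 30.0]))  # [12.0, 24.0, 36.0]
```

Each output element costs one multiply and one add, which is why sustained AXPY rate is such a clean measure of floating-point throughput.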


On top of this there is an entire field halfway between mathematics and computer science called Numerical Analysis that is all about how to do floating point computations.
 
Last edited:

Deleted member 779727

Unconfirmed Member
When your teraplop breaks the sound barrier and teraflops in the bowl.

That's a teraflop.
 