
VGLeaks Durango specs: x64 8-core CPU @1.6GHz, 8GB DDR3 + 32MB ESRAM, 50GB 6x BD...

Can someone kindly explain "in layman's terms" this whole "efficiency" when it comes to GPUs?

I think there are two different things here:

1.) Aegis was talking about how efficiently MS used the CUs in the Xbox 360 vs. how they intend to use them in the Xbox 3, which will be more efficient.

2.) People claiming those 12 CUs in the Xbox 3 can achieve the same thing as the 18 CUs in the PS4, which would make them (and probably the upcoming Radeon HD 8XXX) 50% more efficient (if I did not horribly fail at math), which I really doubt.
 
Can someone kindly explain "in layman's terms" this whole "efficiency" when it comes to GPUs?

It's the difference between an American sports car and a European sports car.

American sports cars have a ton of muscle, but at the first curve they are already down a couple of seconds in the race. The European sports car also has muscle, but it does curves too. It's wonderful.

(this is not a direct comparison to Orbis and Durango)
 

StevieP

Banned
StevieP
You mentioned Epic axing their lighting system for UE4 (I knew I heard someone credible mention it before, so it was you).

Do you have a source for this? I got called out for mentioning it earlier.
I'd love it if you made it up, but I'm a glass-half-empty kind of guy, so I don't doubt it to be true.

I'm not the right person to ask. Just relaying as I always have.
AlStrong should join this thread.
 

Durante

Member
Can someone kindly explain "in layman's terms" this whole "efficiency" when it comes to GPUs?
GPUs feature some maximum amount of floating point operations they can potentially execute per second. This is a function of their clock speed and degree of parallelism (which is in turn closely related to die size). These are the theoretical "TFLOP" numbers that get thrown around.

"Efficiency" characterizes how well applications are able to make use of that potential performance. For example, if you have a GPU providing 2 TFLOPs, and your application achieves a throughput of 1 TFLOP, you'll have 50% efficiency.

Different architectures make it easier or harder to achieve good efficiency. For example, currently on PC it's widely accepted that nVidia's FLOPs are "worth more" than AMD's FLOPs, since AMD GPUs require a higher potential maximum performance to provide framerates on par with NV GPUs.

Now, here we have the case that both GPUs are provided by the same manufacturer, at almost the same time. Thus, claims of large efficiency differentials seem hard to believe.
 
Gemüsepizza;46708422 said:
2.) People claiming those 12 CUs in the Xbox 3 can achieve the same thing as the 18 CUs in the PS4, which would make them (and probably the upcoming Radeon HD 8XXX) 66% more efficient (if I did not horribly fail at math), which I really doubt.
It would be 50%, by my math?
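(The arithmetic spelled out, for anyone following along: matching 18 CUs with 12 means each of the 12 must do 18 ÷ 12 = 1.5× the work, i.e. 50% more. The 66% figure would answer a different question, since 12 is roughly 66% of 18.)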
 

coldfoot

Banned
As someone who does not have a cable box and never will, this is disappointing.
Why won't they make it so that the consoles work AS an IPTV cable box, thus eliminating one extra box from the entertainment center? They could even be rented out as cable boxes for $10-20/month by the cable company, which would negate the upfront cost of these consoles and put them in every home, because Americans like installments even though they cost more in the long term.

Imagine the setup of someone who has a 720, a cable box, and an AVR. HDMI would originate at the cable box, pass through the Xbox, pass through the AVR, and go to the TV. Too long of a chain for my liking.
 
$600 PC = Xbox
Orbis = Gamecube
Durango = Playstation 2
Wii U = Dreamcast

Both in terms of raw power and how popularity will shake out.
Sony's Orbis will sell 20 million while Durango sells 150 million+? Durango will get more Japanese third-party support than Orbis? $600 gaming PCs will outsell Orbis? Lol, sounds legit.
 
Orbis = OG Xbox
Durango = Gamecube
Wii U = Dreamcast

Orbis is the most dev-friendly (unified, high-speed memory; more raw horsepower available), but the gap won't be massive, and optimizing how the ESRAM is fed (which MS's dev tools will make very easy) will help close the gap and get nice bang for the buck from exclusives.

I'd bet Durango winds up as the "target" platform, with Orbis having better framerates and a slightly higher rate of 1080p support (because I think every dev is still going to shoot for 720p/30fps; res and fps aren't nearly the wow factor that more polys and better textures can be).

It's going to come down to MSRP and exclusives. Sony will rely on their first parties, while MS will look to buy a few more original IPs from developers, which they need to hit on (think Gears and Mass Effect pre-BioWare buyout).

Sites like Digital Foundry existing, and those HILARIOUS NeoGAF threads of fanboys on both sides bragging about how their version of the game runs at 27 fps/704p while the other INFERIOR version only runs at 25 fps/698p, could mean that in that case MS will lose quite a few customers to Sony.
 

i-Lo

Member
Gemüsepizza;46708422 said:
I think there are two different things here:

1.) Aegis was talking about how efficiently MS used the CUs in the Xbox 360 vs. how they intend to use them in the Xbox 3.

2.) People claiming those 12 CUs in the Xbox 3 can achieve the same thing as the 18 CUs in the PS4, which would make them (and probably the upcoming Radeon HD 8XXX) 66% more efficient (if I did not horribly fail at math), which I really doubt.

Yeah, that makes more sense. Thanks. However, it is hard to believe that the GPU in Durango is as efficient as a Pitcairn, or rather that the GPU in Pitcairn is so inefficient that it matches something it should easily outclass.

It's the difference between an American sports car and a European sports car.

American sports cars have a ton of muscle, but at the first curve they are already down a couple of seconds in the race. The European sports car also has muscle, but it does curves too. It's wonderful.

(this is not a direct comparison to Orbis and Durango)

Err... Durango is the European sports car? And Orbis is the American equivalent? But Sony is Japanese, and they make cars specifically to go sideways ;P
 
Does this supposed efficiency of the GPU come from MS having a better development environment than Sony, or from the Xbox simply having more efficient hardware? You would expect them, hardware-wise, to be based off very similar architectures, so I don't see where MS would get efficiency that Sony can't.
 

Margalis

Banned
As someone who does not have a cable box and never will, this is disappointing.
Why won't they make it so that the consoles work AS an IPTV cable box, thus eliminating one extra box from the entertainment center?

Because Time Warner controls your cable and would never allow themselves to be cut out.
 

Kimawolf

Member
So, to be serious for a moment: could a system with this setup actually accomplish the Square Enix demo or the Watch Dogs demo, or were they just this year's Giant Dancing Robot or Sony's "old man face" demo?
 

Spongebob

Banned
As someone who does not have a cable box and never will, this is disappointing.
Why won't they make it so that the consoles work AS an IPTV cable box, thus eliminating one extra box from the entertainment center?

Imagine the setup of someone who has a 720, a cable box, and an AVR. HDMI would originate at the cable box, pass through the Xbox, pass through the AVR, and go to the TV. Too long of a chain for my liking.

Don't worry man, Durango will be a cable box/TV tuner through and through; you won't be disappointed.

I imagine that when MS unveils Durango they will unveil all the partnerships they have with cable providers and television stations.
 

i-Lo

Member
GPUs feature some maximum amount of floating point operations they can potentially execute per second. This is a function of their clock speed and degree of parallelism (which is in turn closely related to die size). These are the theoretical "TFLOP" numbers that get thrown around.

"Efficiency" characterizes how well applications are able to make use of that potential performance. For example, if you have a GPU providing 2 TFLOPs, and your application achieves a throughput of 1 TFLOP, you'll have 50% efficiency.

Different architectures make it easier or harder to achieve good efficiency. For example, currently on PC it's widely accepted that nVidia's FLOPs are "worth more" than AMD's FLOPs, since AMD GPUs require a higher potential maximum performance to provide framerates on par with NV GPUs.

Now, here we have the case that both GPUs are provided by the same manufacturer, at almost the same time. Thus, claims of large efficiency differentials seem hard to believe.

So isn't it also dependent on the software to ensure it can draw "peak" performance most if not all the time to hover around the magical mark of ~100% efficiency?
 

gofreak

GAF's Bob Woodward
Does this supposed efficiency of the GPU come from MS having a better development environment than Sony, or from the Xbox simply having more efficient hardware? You would expect them, hardware-wise, to be based off very similar architectures, so I don't see where MS would get efficiency that Sony can't.

People are confusing this 'efficiency' thing.

I think it turns out aegies was referring to Durango against 360 rather than Durango against Orbis.

In terms of the latter, it's possible it is using GCN2 CUs instead of GCN, but the efficiency or average performance gain from that kind of refresh is not going to be '66%'.
 

coldfoot

Banned
Because Time Warner controls your cable and would never allow themselves to be cut out.
They are being cut out, whether they like it or not. I haven't seen a single person who likes the UI on their cable box. People are simply cutting cable and moving to Netflix, Hulu, etc. The cable company could just make an app that would let their users watch their channels from the console.
I don't have cable and make do with the free channels that come through the coax cable my cable modem is attached to. It's enough; I get the news networks and TBS. As long as I can watch the news, the NFL playoffs and the Super Bowl, the Oscars, etc., that's all I need. All TV shows can be watched online.
 

Into

Member
[image: KYExloW.png]


For da belt'
 

Jadedx

Banned
One more question for you, aegies: does the secret sauce add any more FLOPs?


And some general questions for anyone else.

What about the other chips in the roadmap? The 360 CPU and the second always-on processor?
Does the Data Move Engine only help with RAM bandwidth, or does it do anything else?
What would the likely price be? Can they make a profit from launch?

I did not see an answer, so I will ask again.


Also, I just noticed it is Data Move Engines, plural, not just one. Is it multiple blitters?
 

Durante

Member
So isn't it also dependent on the software to ensure it can draw "peak" performance most if not all the time to hover around the magical mark of ~100% efficiency?
Totally. But when you talk about GPU efficiency, you are talking about how easy/hard the hardware makes attaining efficiency in a variety of scenarios.

Also, almost nothing in the real world "hovers around" 100% efficiency. Not even close.
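(One standard way to see why, using the textbook roofline model. The peak and bandwidth numbers below are made up for illustration, not Durango or Orbis figures: a kernel that does few FLOPs per byte fetched is capped by memory bandwidth long before it touches the compute peak.)

```python
# Roofline model: attainable throughput is the lower of raw compute and
# memory bandwidth x arithmetic intensity (FLOPs per byte of traffic).
def attainable_gflops(peak_gflops, bandwidth_gbs, flops_per_byte):
    return min(peak_gflops, bandwidth_gbs * flops_per_byte)

peak, bw = 1200.0, 170.0              # hypothetical 1.2 TFLOPS GPU, 170 GB/s
for intensity in (1.0, 4.0, 16.0):
    g = attainable_gflops(peak, bw, intensity)
    print(f"{intensity:5.1f} FLOPs/byte -> {g:7.1f} GFLOPS ({g / peak:.0%})")
# At 1 FLOP/byte the kernel sustains only ~14% of peak: memory-bound,
# no matter how well written the shader is.
```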
 

pixlexic

Banned
why do watts/voltage matter? who cares? also, what's WiFi Direct or whatever it's called

They matter a lot less than they did 3 years ago. The smaller you can make a circuit, the less power you need to get an electric pulse from point A to point B. But you can still brute-force it there fast with more wattage. But then you also have to worry more about heat dissipation.
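(The standard first-order relation behind this, for the curious — generic CMOS scaling, not anything Durango-specific: dynamic power ≈ α · C · V² · f, where α is switching activity, C is switched capacitance, V is supply voltage, and f is clock frequency. Power grows linearly with clock but with the square of voltage, and higher clocks usually require higher voltage too, which is why brute-forcing speed with wattage runs into heat so quickly.)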
 

Sid

Member
Don't worry man, Durango will be a cable box/tv tuner through and through, you won't be disappointed.

I imagine that when MS unveil Durango they will unveil all the partnerships they have with cable providers and television stations.
TVii 2?
 

Ashes

Banned
Hmm, interesting. I was just looking for an alternative metric to compare instead of FLOPS. Is there any good on-paper metric for CPU performance?

EDIT: people kicking the crap out of a dead horse with the analogy thing.

btw IPC is instructions per cycle, not instructions per core.

Generally speaking, anything over-generalised is going to skew results, but we're power-limited, so improving IPC is actually one of the best routes (bar more cores) to achieving greater results...

Without greater IPC, and if power were no problem, we could resort to more cores:

8 × 1.6 GHz
4 × 3.2 GHz
2 × 6.4 GHz
1 × 12.8 GHz

It's not equal though... you lose a bit the more cores you have... seems to be that way, anyway.

I've heard Bobcat is very efficient, so I've been thinking that this 8-core Jaguar would have about the same theoretical peak as the X360's CPU, about 100 GFLOPS, but do a lot better in IPC, so you'd get a lot closer to what that 100 GFLOPS figure indicates...
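(The back-of-envelope version of that figure, assuming the commonly cited 8 single-precision FLOPs per cycle for a Jaguar core — one 128-bit multiply pipe plus one 128-bit add pipe, four 32-bit lanes each:)

```python
# CPU peak: cores x clock x FLOPs-per-cycle.
def peak_gflops(cores, clock_ghz, flops_per_cycle):
    return cores * clock_ghz * flops_per_cycle

print(peak_gflops(8, 1.6, 8))   # 102.4 GFLOPS -- the "about 100" above
```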


http://en.wikipedia.org/wiki/Instructions_per_cycle#Calculation_of_IPC
 