
VGLeaks rumor: Durango CPU Overview

No, the 6T per cell is for SRAM. Durango will use eSRAM, which is just eDRAM. Same as Wii U.

Wii U uses eDRAM. The E in eSRAM stands for embedded; MS are said to be using embedded SRAM, though we aren't sure whether it's full-fat SRAM with 6T per bit or some diet version.
 

Durante

Member
In terms of raw power, how does it compare to high-end Ivy Bridge processors?
It's a competitor to Atom (a superior competitor, but in the same market nonetheless). How do you think?

To answer your question with less snark, not well. Ivy Bridge will offer far more performance per clock, on top of featuring more than twice the clock rate.
 
But it's only more efficient because the theoretical maximum is much lower. Put it this way: on AMD's old architecture, the theoretical max FLOPS of chip X sat at around 4.5 TF while real-world performance was 1.5 TF (33% efficiency). On the new architecture, the theoretical maximum is just 1.5 TF, which is now matched by real-world performance because of architectural changes; in practice it's basically the same as before. All AMD did was bring their max performance down from ridiculous levels under VLIW to more realistic levels for GCN. Basically aping Nvidia.

Yes and no, IMO... it's true FLOPS have seemed to flatline a bit lately, but I think they still do increase at the high end. It was more a move towards compute, IMO.

I'm not sure GAMES wouldn't still be better off with VLIW4 8-teraflop GPUs for the same die size as today's 3-4 TF GPUs. It's just that both Nvidia and AMD decided games weren't the be-all and end-all anymore.

And Nvidia went from an extremely low number of SPs, double pumped, with low FLOP counts, to greater numbers of SPs at single-pumped clocks, and Nvidia's teraflops got a lot closer to AMD's this gen (conversely, Nvidia's efficiency-per-FLOP edge decreased). That's why, IMO, Nvidia made a big move towards AMD's philosophy as much as vice versa.
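
For anyone wondering where those teraflop figures come from: peak FLOPS is just ALU count x 2 ops per cycle (one MUL + one ADD) x clock. A rough sketch is below; the card picks are my own, purely to illustrate the Nvidia/AMD convergence, and the 4.5 TF / 33% numbers above are the earlier poster's illustration, not a real chip.

Code:
/* Peak-FLOPS back-of-the-envelope behind the posts above.
   Card picks are illustrative, not from the leak. */
#include <stdio.h>

static double peak_gflops(int alus, double clock_ghz)
{
    return alus * 2.0 * clock_ghz;  /* 1 MUL + 1 ADD per ALU per cycle */
}

int main(void)
{
    printf("GTX 580 (Fermi, 512 ALUs double-pumped to 1.544 GHz): %.0f GFLOPS\n",
           peak_gflops(512, 1.544));
    printf("GTX 680 (Kepler, 1536 ALUs at 1.006 GHz):             %.0f GFLOPS\n",
           peak_gflops(1536, 1.006));
    printf("HD 6970 (VLIW4, 1536 ALUs at 0.880 GHz):               %.0f GFLOPS\n",
           peak_gflops(1536, 0.880));
    /* the efficiency point: 33% of a 4.5 TF peak and ~100% of a 1.5 TF peak
       are the same 1.5 TF of real work */
    return 0;
}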
 

Eideka

Banned
It's a competitor to Atom (a superior competitor, but in the same market nonetheless). How do you think?

To answer your question with less snark, not well. Ivy Bridge will offer far more performance per clock, on top of featuring more than twice the clock rate.

Well, it's disappointing that consoles are not a match for 2012 PC hardware. :/
 

jaosobno

Member
After reading the article, CPU looks good but it's far from amazing.

Seems most effort was put into increasing efficiency (branch prediction, out-of-order processing, etc.), but it's definitely nothing to write home about.

Since this CPU does 2 IPC, does anyone know how much Xenon can do? IIRC, Cell's SPUs have 2 IPC too; was this the case with the Xenon cores as well?
 
Yes and no, IMO... it's true FLOPS have seemed to flatline a bit lately, but I think they still do increase at the high end. It was more a move towards compute, IMO.

I'm not sure GAMES wouldn't still be better off with VLIW4 8-teraflop GPUs for the same die size as today's 3-4 TF GPUs. It's just that both Nvidia and AMD decided games weren't the be-all and end-all anymore.

And Nvidia went from an extremely low number of SPs, double pumped, with low FLOP counts, to greater numbers of SPs at single-pumped clocks, and Nvidia's teraflops got a lot closer to AMD's this gen (conversely, Nvidia's efficiency-per-FLOP edge decreased). That's why, IMO, Nvidia made a big move towards AMD's philosophy as much as vice versa.

I'm not disputing any of that; all I was saying is that GCN improves efficiency because they decreased the maximum theoretical performance. That's really all this "100% efficiency" crap is.
 

Biggzy

Member
Well, it's disappointing that consoles are not a match for 2012 PC hardware. :/

Both Sony and Microsoft have gone with architectures that allow tasks - like physics calculations - to be efficiently offloaded to the GPU. So as long as the CPU is good enough for developers to dump their game code in, I don't think it will be too much of a problem, at least initially.
 

Reiko

Banned
Both Sony and Microsoft have gone with architectures that allow tasks - like physics calculations - to be efficiently offloaded to the GPU. So as long as the CPU is good enough for developers to dump their game code in, I don't think it will be too much of a problem.

All that last gen Cell work thrown in the trash:/


Weird then, he gave info to VGLeaks and then tweeted that Durango's specs will be better than that?

Perhaps he knows something we don't. Maybe he's trolling.

But his past history checks out bigtime.
 

Eideka

Banned
Both Sony and Microsoft have gone with architectures that allow tasks - like physics calculations - to be efficiently offloaded to the GPU. So as long as the CPU is good enough for developers to dump their game code in, I don't think it will be too much of a problem.

Of course, I don't think it will be a bottleneck, but truth be told I expected much more from next-gen consoles spec-wise.
 

Biggzy

Member
All that last gen Cell work thrown in the trash:/

Yep. But Sony misread where the industry was going all those years ago.

Of course, I don't think it will be a bottleneck, but truth be told I expected much more from next-gen consoles spec-wise.

If it allowed Sony to put a more powerful GPU in the PS4, then I think it was the right call.
 
All that last gen Cell work thrown in the trash:/

Not really. Basically all of what first-party developers were doing with SPUs is now possible on the GPU; Sony's first party will be very quick to adapt to the new environment because they have been optimising for this type of architecture already.
 

Kaako

Felium Defensor
Wait, wasn't Durango supposed to have a customized, better CPU based off of this architecture? Or did that rumor go the way of the wizard-jizz-sauce as well aka down the drain.
 

Durante

Member
All that last gen Cell work thrown in the trash:/
Actually, that depends on your perspective. Cell was a single heterogeneous chip with high-bandwidth, low latency communication between a more traditional CPU part and separate, highly parallel processors with user-addressable local storage. Both of the new consoles' APUs are the same thing :p
 

Reiko

Banned
Wait, wasn't Durango supposed to have a customized, better CPU based off of this architecture? Or did that rumor go the way of the wizard-jizz-sauce as well aka down the drain.

Why don't you PM bgassassin to find out? He said he would risk a ban to prove his info was legit.

Not really. Basically all of what first-party developers were doing with SPUs is now possible on the GPU; Sony's first party will be very quick to adapt to the new environment because they have been optimising for this type of architecture already.


Actually, that depends on your perspective. Cell was a single heterogeneous chip with high-bandwidth, low latency communication between a more traditional CPU part and separate, highly parallel processors with user-addressable local storage. Both of the new consoles' APUs are the same thing :p

Oh. That's good then.
 

Biggzy

Member
Wait, wasn't Durango supposed to have a customized, better CPU based off of this architecture? Or did that rumor go the way of the wizard-jizz-sauce as well aka down the drain.

I think it originated from bgassassin on the Beyond3D forums, where he mentioned that Microsoft added VMX128 to the Xenon.
 

KidBeta

Junior Member
After reading the article, CPU looks good but it's far from amazing.

Seems most effort was put into increasing efficiency (branch prediction, out-of-order processing, etc.), but it's definitely nothing to write home about.

Since this CPU does 2 IPC, does anyone know how much Xenon can do? IIRC, Cell's SPUs have 2 IPC too; was this the case with the Xenon cores as well?

All 100% standard Jaguar :/
 

DieH@rd

Banned
Wait, wasn't Durango supposed to have a customized, better CPU based off of this architecture? Or did that rumor go the way of the wizard-jizz-sauce as well aka down the drain.

So far, the only wizard jizz that Durango has is the fancy DMA modules that try to fix the problem of slow DDR3 RAM... Nothing very special.
 
Weird then, he gave info to VGLeaks and then tweeted that Durango's specs will be better than that?

I don't think he even tweeted that Durango's specs will be better. Somebody said Orbis looks way better and he said "you have no idea" or something, IIRC. Combine that with his recent podcast comments (in which he attributed favoring Durango games to developers having had Durango dev kits longer), and I just don't think there's a ton there.

I like the idea of some secret Durango bump, but there are too many highly solid sources like bkilian and lherre who have said otherwise. And we also have timing issues; it would seem too late to add CUs or anything. And with so many of the Orbis rumors proving true, it's harder to imagine all the consistent Durango rumors being false.

I think the best hope for Durango power is that the ESRAM, used in clever ways, can really turbocharge the GPU's flops efficiency above and beyond normal GCN/GCN2.
 

Krilekk

Banned
Weird then, he gave info to VGLeaks and then tweeted that Durango's specs will be better than that?

Not weird, he always said they were outdated specs dating back to 02/12. But who knows; for all we know, Microsoft could be the one to release a console in 2014. We don't even know if their APU was finalized in 12/12 or not; they might intend to wait for stacked DDR3.
 

monome

Member
MS won't fool me into paying for Kinect 2 what they saved on the console architecture.

As far as I can tell, Kinect 2's only hardcore advantage is its implementation with Fortalezza.
And I don't even know how/if that would work well.
99% of Kinect 2 use will be for apps, and if MS is serious about making money on apps, I demand they give me a discount on the hardware before I consider paying for apps.

So Kinect 2, conceptually, should very much be a no-upfront-payment add-on, or a very cheap one.
Just as consoles are sold at a loss and recoup the money on game licensing fees.

No Halo 5 at launch, blocked used games, etc... either MS will price its console crazy low, or they really believe my (great) time with the 360 has made me their bitch.
 
I don't think he even tweeted that Durango's specs will be better. Somebody said Orbis looks way better and he said "you have no idea" or something, IIRC. Combine that with his recent podcast comments (in which he attributed favoring Durango games to developers having had Durango dev kits longer), and I just don't think there's a ton there.

I like the idea of some secret Durango bump, but there are too many highly solid sources like bkilian and lherre who have said otherwise.

I think the best hope for Durango power is that the ESRAM, used in clever ways, can really turbocharge the GPU's flops efficiency above and beyond normal GCN/GCN2.
That's what he tweeted:
https://twitter.com/superDaE/status/298904320550768640
https://twitter.com/superDaE/status/298786387098992640
Unless PS4 downgraded the spec (which is not possible).
 

Reiko

Banned
I don't think he even tweeted that Durango's specs will be better. Somebody said Orbis looks way better and he said "you have no idea" or something, IIRC. Combine that with his recent podcast comments (in which he attributed favoring Durango games to developers having had Durango dev kits longer), and I just don't think there's a ton there.

I like the idea of some secret Durango bump, but there are too many highly solid sources like bkilian and lherre who have said otherwise. And we also have timing issues; it would seem too late to add CUs or anything. And with so many of the Orbis rumors proving true, it's harder to imagine all the consistent Durango rumors being false.

I think the best hope for Durango power is that the ESRAM, used in clever ways, can really turbocharge the GPU's flops efficiency above and beyond normal GCN/GCN2.

If all that engineering and R&D led to a sub par product... Durango engineers would be in deep shit. I'm sure they knew what they were doing.
 

Yes, I'd forgotten the second tweet...

You still have bkilian and lherre saying 12 CUs is it...

If all that engineering and R&D led to a sub par product... Durango engineers would be in deep shit. I'm sure they knew what they were doing.

To be fair, they were probably given a budget and did the best they could with it, not aiming at a competitor's product.

Compared to Wii U, Durango will be awesome, sure, lol.

Basically if corporate on high said "we want a low BOM" then great engineers would still be handicapped.
 
I don't think he even tweeted that Durango's specs will be better. Somebody said Orbis looks way better and he said "you have no idea" or something, IIRC. Combine that with his recent podcast comments (in which he attributed favoring Durango games to developers having had Durango dev kits longer), and I just don't think there's a ton there.

I like the idea of some secret Durango bump, but there are too many highly solid sources like bkilian and lherre who have said otherwise. And we also have timing issues; it would seem too late to add CUs or anything. And with so many of the Orbis rumors proving true, it's harder to imagine all the consistent Durango rumors being false.

I think the best hope for Durango power is that the ESRAM, used in clever ways, can really turbocharge the GPU's flops efficiency above and beyond normal GCN/GCN2.

Special sauce. Derp.

Seriously though, that's not possible. The GPU power is the GPU power; no amount of extra on-die bandwidth is going to make it more powerful out of nothing.
 

Durante

Member
Now wait, it's totally possible for some algorithms to be much more efficient thanks to the on-die low latency memory. It's just unlikely that this would be enough to offset the overall rumored power difference, which would apply throughout all graphics processing.
 
Wasn't eDRAM only 32 GB/s?
And I don't think eSRAM = eDRAM.


Can anybody please shed some light on this?

How could Microsoft decide to put in 32mb eSRAM at 106gb/s when you have 10mb eDRAM at 256gb/s in the Xbox 360? IIRC, I've read that's because the eSRAM doesn't have ROPs inside or something like that, but I don't understand.

Moreover, it seems Durango's eSRAM can do a lot more things than the old 360 eDRAM. From another VGLeaks piece:

ESRAM

Durango has no video memory (VRAM) in the traditional sense, but the GPU does contain 32 MB of fast embedded SRAM (ESRAM). ESRAM on Durango is free from many of the restrictions that affect EDRAM on Xbox 360. Durango supports the following scenarios:

Texturing from ESRAM
Rendering to surfaces in main RAM
Read back from render targets without performing a resolve (in certain cases)
 

Fafalada

Fafracer forever
Durante said:
Depending on how cynically you chose to interpret it, it's either:
Dev documents highlighting differences from the previous architecture are fairly standard practice - fewer people in the profession spend their time studying hardware architecture trends (outside of their immediate scope of work) than you'd imagine.
But at the same time - are these actual dev docs or just VGLeaks essays?

No FMA either, I thought that might happen after the improved vector units rumor.
Is that really an improvement? They state separate MUL/ADD execution resources; ultimately I don't care whether I issue a MADD or a MUL+ADD if they both execute in 1 cycle on average (not to mention much of MADD's usage is interchangeable with DOT).
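
For anyone unclear on what the fused op actually buys you: a MADD/FMA and a separate MUL+ADD compute the same thing; the fused form just spends one issue slot instead of two and rounds once instead of twice. A minimal sketch of the rounding part (inputs hand-picked by me so the single-rounding difference is visible in float; compile without FP contraction):

Code:
/* FMA vs. separate MUL+ADD: same math, one fewer rounding step. */
#include <math.h>
#include <stdio.h>

#pragma STDC FP_CONTRACT OFF   /* keep the compiler from fusing a*b+c itself */

int main(void)
{
    float a = 1.000244140625f;    /* 1 + 2^-12 */
    float b = 1.000244140625f;    /* 1 + 2^-12 */
    float c = -1.00048828125f;    /* -(1 + 2^-11) */

    float separate = a * b + c;     /* rounds after the multiply, then after the add */
    float fused    = fmaf(a, b, c); /* single rounding at the end */

    printf("mul+add: %g\n", separate); /* 0        : the 2^-24 term is rounded away */
    printf("fma:     %g\n", fused);    /* ~5.96e-8 : the 2^-24 term survives */
    return 0;
}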
 

KidBeta

Junior Member
Weird then, he gave info to VGLeaks and then tweeted that Durango's specs will be better than that?

All that last gen Cell work thrown in the trash:/




Perhaps he knows something we don't. Maybe he's trolling.

But his past history checks out bigtime.

If that is the truth, then this makes SuperDaE's comments that the 720's specs have been upgraded more credible.


And yet on the 12th, seven days later, Kotaku posts everything superdae knows.

http://www.kotaku.com.au/2013/02/we...xt-xbox-from-someone-who-says-theyve-got-one/

Identical to the previous leaks; he was trolling.
 
Can anybody please shed some light on this?

How could Microsoft decide to put in 32mb eSRAM at 106gb/s when you have 10mb eDRAM at 256gb/s in the Xbox 360? IIRC, I've read that's because the eSRAM doesn't have ROPs inside or something like that, but I don't understand.

Moreover, it seems Durango's eSRAM can do a lot more things than the old 360 eDRAM. From another VGLeaks piece:

256 Gb/s = 32 GB/s; 106 GB/s = 106 GB/s.

Gb/8 = GB
 
Now wait, it's totally possible for some algorithms to be much more efficient thanks to the on-die low latency memory. It's just unlikely that this would be enough to offset the overall rumored power difference, which would apply throughout all graphics processing.

Isn't fetching data from main GDDR5 like a 200-300 cycle penalty for a wavefront, if I remember right?
But you can easily mask this penalty by having enough wavefronts running and doing calculations.
You can probably increase efficiency for a lot of algorithms if that initial penalty can be brought down to 50-100 cycles.
Because most of the time when I used OpenCL for image processing I was memory bound: read pixel in, write pixel out.
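
To put rough numbers on that latency-hiding point (the latency and ALU-work figures below are illustrative assumptions, not Orbis/Durango specs): the number of wavefronts you need in flight to cover a memory round trip is roughly the latency divided by the ALU work each wavefront has between loads.

Code:
/* Little's-law style estimate for the latency-hiding argument above.
   All numbers are illustrative assumptions. */
#include <stdio.h>

static double wavefronts_needed(double mem_latency_cycles, double alu_cycles_per_load)
{
    return mem_latency_cycles / alu_cycles_per_load;
}

int main(void)
{
    double alu_work = 8.0;  /* assumed cycles of math per memory access
                               (memory-bound image filter: pixel in, pixel out) */
    printf("~300-cycle GDDR5-like latency: need ~%.0f wavefronts in flight\n",
           wavefronts_needed(300.0, alu_work));
    printf("~75-cycle on-die-SRAM-like latency: need ~%.0f wavefronts in flight\n",
           wavefronts_needed(75.0, alu_work));
    return 0;
}

So a kernel that can't keep dozens of wavefronts resident (big register footprint, small dispatch) is exactly the case where lower-latency on-die memory would buy back efficiency.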
 

monome

Member
Basically if corporate on high said "we want a low BOM" then great engineers would still be handicapped.

Even though nothing is official, this is what makes the most sense.

MS may have chosen to build a candy box and fill it with crack (apps, microtransactions, lower retail-priced games due to no resales, etc...).
Since the box is a diversion, they may have focused on making it very small (wife acceptance factor), very cheap (so people have several candy boxes at home), etc...

And for most people, 720 vs 360 graphics will be mind blowing anyway.

MS is doing to consoles what dealers do with drugs: build a reputation, work on your customers' addiction, and then dilute your drugs and keep selling them at the same price.
 