
VGLeaks Durango specs: x64 8-core CPU @1.6GHz, 8GB DDR3 + 32MB ESRAM, 50GB 6x BD...

Orbis:
[image: Walter White]


Durango:
[image: Gustavo Fring]


Wii U:
[image: Gale Boetticher]

I would go with this tbh.
 

Durante

Member
GPUs are not the same.

There is a reason MS is calling their units shader cores and not compute units.

They're maximized for graphics.
Say, do you work for MS? I just saw your location in your profile.

Anyway, I'm getting more and more curious to see what this "special sauce" entails. It's in my nature to be exceedingly skeptical about such things though.


ROPs are not in ESRAM this time around.
But that would still make the combined BW of both pools inferior to the BW of Orbis's single pool, which just seems a bit weak in comparison. And needlessly limited for ESRAM.
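For reference, plugging in the leaked/rumored bandwidth figures (102GB/s ESRAM, ~68GB/s DDR3, 192GB/s GDDR5 for Orbis; all unconfirmed, and naively summing the two pools is already an optimistic reading):

```python
# Rumored bandwidth figures in GB/s -- unconfirmed, and simply adding the
# two Durango pools assumes both can be saturated at once.
durango_ddr3 = 68
durango_esram = 102
orbis_gddr5 = 192

print("Durango combined :", durango_ddr3 + durango_esram, "GB/s")  # 170 GB/s
print("Orbis single pool:", orbis_gddr5, "GB/s")                   # 192 GB/s
```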
 

Thraktor

Member
I agree. The quoted bandwidth seems far too low. One explanation attempt posted earlier is that it's the main memory <-> ESRAM BW, not the ESRAM <-> GPU BW.

My guess is the bandwidth figures are for the CPU, which would make sense if that traffic has to travel over the northbridge. I'd expect ~200GB/s minimum between GPU and eSRAM.
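One hedged guess at where a ~102GB/s ESRAM figure could come from, if you assume an 800MHz GPU clock and a 1024-bit (128-byte) port (both are assumptions, neither appears in the quoted leak):

```python
# bandwidth = bytes transferred per cycle * clock rate (assumed values)
gpu_clock_hz = 800e6        # assumed 800MHz GPU clock
port_bytes_per_cycle = 128  # assumed 1024-bit ESRAM port

bandwidth_gb_s = gpu_clock_hz * port_bytes_per_cycle / 1e9
print(f"{bandwidth_gb_s:.1f} GB/s")  # ~102.4 GB/s
```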
 

Pagusas

Elden Member
At least this time they can store 1280x720 with 4x MSAA in the embedded memory.
Still not enough to do 1080p with 4x MSAA though.

32MB is enough to do 1080p with 4xMSAA, in fact it's just about the exact amount needed if I added things up right. But let's be real here, the days of MSAA are gone; FXAA is where all devs are heading, as it's way cheaper and most people can't tell the difference (yes, all of us can, but FXAA is improving all the time, and it's more than good enough for most).
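For anyone who wants to run the numbers themselves, here's the back-of-envelope I'd use; whether 1080p with 4xMSAA "fits" depends a lot on what you count (color only vs. color plus depth, and any compression), so take it as a rough sketch:

```python
def framebuffer_mb(width, height, msaa, bytes_per_sample=8):
    # Assumes 4 bytes color + 4 bytes depth/stencil per sample, uncompressed.
    return width * height * msaa * bytes_per_sample / (1024 ** 2)

print(f"{framebuffer_mb(1280, 720, 4):.1f} MB")   # ~28.1 MB -> fits in 32MB
print(f"{framebuffer_mb(1920, 1080, 4):.1f} MB")  # ~63.3 MB -> needs tiling or compression
```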
 

i-Lo

Member
Gemüsepizza;46711044 said:
And CUs in cutting edge GPUs aren't "maximized for graphics"?

Apparently not. Guess the engineers at Sony went to AMD only to party and have orgies. They woke up with splitting headaches and special sauce on their bodies. And since time ran out and a report was due to Kaz Hirai in his ivory tower, these engineers just slapped some parts together and called it a day. Much like RSX.
 

Karak

Member
Gemüsepizza;46711044 said:
And CUs in cutting edge GPUs aren't "maximized for graphics"?

One of my big confusions on Friday as well. I could not grasp why CUs were not the "fix", and in addition that Move engine confused the shit out of me.

But for all intents and purposes MS outlined a specific desire for graphics to AMD, who found some custom ways to get MS what they wanted without going with a bigger GPU. We will see how that pans out, if MS wasted millions, and what MS's overall target was.
If it's a ton of wasted silicon, we will know :)
 
Something regarding the ESRAM doesn't add up.

So far as I know ESRAM is supposed to be better than EDRAM, but the 10MB of EDRAM on 360 provided a whopping 256GB/s of bandwidth to the GPU (daughter die straight to the ROPs).

[image: Xbox 360 GPU diagram]


So for 720 they are going with supposedly better ESRAM that has only 102GB/s bandwidth?

That's a pretty hefty bandwidth drop!
That diagram is slightly wrong. The main die of 360's GPU and the daughter die had a 32GB/s link.

The 256GB/s comes from the fact that the eDRAM was fast enough that every ROP (the logic with direct access to the eDRAM) could write everything it could every cycle. Probably for size restrictions they were very simple and didn't support any data compression. The 256 is actually only achievable when the ROPs write the biggest data they can (32-bit pixels with 4x MSAA, IIRC)... So don't read much into it; I'm sure that if Durango has logic attached to the ESRAM like the 360 had, it will have enough bandwidth to write as much as it possibly can.
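For the curious, the usual back-of-envelope behind that 256GB/s number looks roughly like this (the read-modify-write factor is my assumption about how the figure was counted):

```python
rops = 8                 # Xenos ROPs on the daughter die
samples_per_rop = 4      # 4x MSAA
bytes_per_sample = 8     # 4 bytes color + 4 bytes depth (assumed)
clock_hz = 500e6         # Xenos GPU clock
rw_factor = 2            # read-modify-write for blending/Z-test (assumed)

bw_gb_s = rops * samples_per_rop * bytes_per_sample * rw_factor * clock_hz / 1e9
print(f"{bw_gb_s:.0f} GB/s")  # ~256 GB/s, only reached at maximum-size writes
```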
 
It seems one thing Durango and Orbis fanboys agree on: let's be mean to Wii U.

But let's be honest, Wii U deserves it.

No one can say that Nintendo doesn't care about the die-hard Sony or Microsoft fans. They deliver consoles those guys can shit on endlessly as soon as they run out of reasonable arguments in their true next-gen discussions. That seems to happen quite quickly and often at the moment, and it seems to be very relieving.
 

McHuj

Member
Gemüsepizza;46711044 said:
And CUs in cutting edge GPUs aren't "maximized for graphics"?

Yes, but AMD put a lot of effort and tech into making the GCN architecture efficient at high-performance computing, stuff that's not necessarily needed in a console. I'm guessing they took that stuff out.
 
I also remember:

PS3: 60fps
360: 30fps

We are talking about a fanbase that could spend an afternoon arguing that the PS2 was just as capable as the Xbox, and even more so in some cases. (Personal experience)

Maybe I'm seeing things wrong, but even though I've been a PS gamer since the PSone days (yes, PSone) I've always noticed how vocal and passionate Sony fans are. Obviously each group of fans has its own thing going on, but maybe because there aren't that many Xbox or even Nintendo fans over here, I see it the way I see it.
 

aegies

Member
Apparently not. Guess the engineers at Sony went to AMD only to party and have orgies. They woke up with splitting headaches and special sauce on their bodies. And since time ran out and a report was due to Kaz Hirai in his ivory tower, these engineers just slapped some parts together and called it a day. Much like RSX.

The original 6670 next gen console rumor last year was accurate, and it was Orbis. That's changed. Microsoft had longer to work on their final design. It's more custom.

The comparisons to the PS3 aren't necessarily inaccurate, I think. Durango's architecture is more exotic than Orbis's by all indications. I just think, based on Microsoft's dev history and general developer sentiment, that Microsoft has Durango's development environment more locked down than Sony had the PS3's.
 
Also the ESRAM will be able to communicate more directly than the eDRAM on the 360. One of the major dev complaints.

So the eDRAM in 360 had that internal 256GB/s but it was a bit of a one-trick pony, and the ESRAM in Durango is more balanced and dev-friendly?

It does make sense that the 102 GB/s for Durango's ESRAM should be compared to the 32GB/s external bandwidth figure for Xenos.

Still quite a drop from the 192 GB/s of Orbis and its GDDR5 though.
 

kpjolee

Member
GPUs are not the same.

There is a reason MS is calling their units shader cores and not compute units.

They're maximized for graphics.

I don't think there is much of a difference in calling them shader cores or compute units. A compute unit is basically a group of shader cores.
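To put numbers on that: in GCN a compute unit is 64 stream processors ("shader cores"), so the two namings convert directly. Taking the rumored 12 CUs and an assumed 800MHz clock (neither confirmed):

```python
cus = 12            # rumored Durango CU count (unconfirmed)
alus_per_cu = 64    # GCN: 4x SIMD-16 per CU = 64 "shader cores"
clock_hz = 800e6    # assumed GPU clock
ops_per_alu = 2     # fused multiply-add counts as 2 FLOPs per cycle

shader_cores = cus * alus_per_cu
tflops = shader_cores * ops_per_alu * clock_hz / 1e12
print(shader_cores, f"{tflops:.2f} TFLOPS")  # 768 shader cores, ~1.23 TFLOPS
```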
 

Jadedx

Banned
You are a bit off, it costs about $10 for 1GB of GDDR5 at the consumer level (comparing 2GB cards vs 1GB cards of the same model), so 4GB total would probably be less than $30 for Sony to buy in bulk.

DDR3 is just about half that price.

Overall, 8GB of DDR3 should cost about the same as 4GB of GDDR5.

I've read that it costs less than 2 dollars for 1GB of DDR3, so 8GB would probably be ~9 dollars for MS.
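Taking those per-GB guesses at face value (they're forum estimates, not actual BOM figures, and bulk contract pricing would be lower):

```python
# Per-GB prices are the rough guesses from the posts above, not real BOM data.
gddr5_per_gb = 10   # ~$10/GB at consumer level (guess)
ddr3_per_gb = 2     # <$2/GB (guess); bulk contracts would be cheaper

print(f"4GB GDDR5 ~ ${4 * gddr5_per_gb}")  # ~$40 before bulk discounts
print(f"8GB DDR3  ~ ${8 * ddr3_per_gb}")   # ~$16 before bulk discounts
```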
 

Thraktor

Member
For those thinking that the GPUs will be fundamentally different; if AMD came up with a GPU architecture that's more efficient than GCN, then they'd be using it in their PC graphics cards. Sony and MS both want the most advanced AMD graphics tech available, and in both cases that's GCN. There might be small tweaks here and there, but the GPUs will be fundamentally the same, with ~30% more power on PS4's part.
 

JaggedSac

Member
One of my big confusions on Friday as well. I could not grasp why CUs were not the "fix", and in addition that Move engine confused the shit out of me.

But for all intents and purposes MS outlined a specific desire for graphics to AMD, who found some custom ways to get MS what they wanted without going with a bigger GPU. We will see how that pans out, if MS wasted millions, and what MS's overall target was.
If it's a ton of wasted silicon, we will know :)

Spill ALL the beans, man, for Christ's sake, lol.
 

Durante

Member
Yes, but AMD put a lot of effort and tech into making the GCN architecture efficient at high-performance computing, stuff that's not necessarily needed in a console. I'm guessing they took that stuff out.
Simply "taking stuff out" wouldn't make it more efficient per FLOP though, at best it would maintain the same efficiency for graphics while reducing the die size.

Also, considering the CPUs in these consoles I'd have thought that maintaining GPGPU performance would be a rather important goal.
 
At least this time they can store 1280x720 with 4x MSAA in the embedded memory.
Still not enough to do 1080p with 4x MSAA though.

Tiling was used on 360; we should expect the same here if the frame buffer doesn't fit. Of course the situation here is much better due to having triple the bandwidth.
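A rough sketch of what tiling buys you here, using the same uncompressed color+depth assumption as the framebuffer estimate earlier in the thread:

```python
import math

def tiles_needed(width, height, msaa, budget_mb=32, bytes_per_sample=8):
    # Split the frame into tiles so each tile's buffer fits in embedded memory.
    buffer_mb = width * height * msaa * bytes_per_sample / (1024 ** 2)
    return math.ceil(buffer_mb / budget_mb)

print(tiles_needed(1280, 720, 4))   # 1 tile  -> fits as-is
print(tiles_needed(1920, 1080, 4))  # 2 tiles -> render in two passes, as on 360
```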
 

P90

Member
This thread somehow just turned into Wii U bashing...

True to form for a vocal bloc at NeoGAF. In all honesty, I think Nintendo under-specced the Wii U and it should be reasonably criticized. Not that a console is all about polyflops and AA shader cores... I think the Wii U's GamePad is a brilliant idea.
 

Durante

Member
For those thinking that the GPUs will be fundamentally different; if AMD came up with a GPU architecture that's more efficient than GCN, then they'd be using it in their PC graphics cards. Sony and MS both want the most advanced AMD graphics tech available, and in both cases that's GCN. There might be small tweaks here and there, but the GPUs will be fundamentally the same, with ~30% more power on PS4's part.
That's also why I find it difficult to believe in large efficiency differentials.
 

aegies

Member
Can the eSRAM buffer be sideloaded into the main memory?
Guess not.

Let me see if this helps. For Durango:

Rendering into ESRAM: Yes.
Rendering into DRAM: Yes.
Texturing from ESRAM: Yes.
Texturing from DRAM: Yes.
Resolving into ESRAM: Yes.
Resolving into DRAM: Yes.

For the 360, that would be yes, no, no, yes, no, yes.
 

McHuj

Member
For those thinking that the GPUs will be fundamentally different; if AMD came up with a GPU architecture that's more efficient than GCN, then they'd be using it in their PC graphics cards.

How do you know they're not going to? Whatever tech AMD came up with can probably filter into PC GPUs as well. Consoles could just be the first products to introduce these features.
 
Specs we can compare between all three systems.

RAM amount: 720 = 2x PS4 = 2x Wii U.
RAM bandwidth: PS4 >>>> 720 (a move engine won't help too much since a certain amount of ESRAM will always be taken up by framebuffer data) >>> Wii U.

GPU: PS4 = 1.5x 720 = 2-3x Wii U.

All in all, the gap between the three systems isn't that big. Certainly there's nothing near the gap between Wii and the HD consoles. It really is more of a Dreamcast vs Xbox/GC situation.
 

gofreak

GAF's Bob Woodward
Let me see if this helps. For Durango:

Rendering into ESRAM: Yes.
Rendering into DRAM: Yes.
Texturing from ESRAM: Yes.
Texturing from DRAM: Yes.
Resolving into ESRAM: Yes.
Resolving into DRAM: Yes.

For the 360, that would be yes, no, no, yes, no, yes.

This is a big difference.

I mean, sampling from eSRAM. Interesting.
 
Gemüsepizza;46711044 said:
And CUs in cutting edge GPUs aren't "maximized for graphics"?

Not entirely for graphics; they also try to push general compute so GPUs can be used for tasks other than gaming...

Thing is, some of those non-graphical tasks are important in games, and developers are taking advantage of the extra flexibility this allows.
And also, GCN has given AMD GPUs enough flexibility to run those compute tasks alongside an increase in graphics efficiency, so I don't think there's a need to completely ignore flexibility unless it blows your budget...

based on estimated performance given what we know.

I updated the image, turns out PassMark is a terrible GPU metric:

[image: updated GPU comparison chart]

Those GPUs have entirely different specs. The 7850 has more FLOPS, but also more ROPs, more texture units, more bandwidth, more of everything...

Usually on PC that's how it goes: the more powerful models have everything better than the weaker ones. We don't know if that will hold true on these consoles.
 
Something regarding the ESRAM doesn't add up.

So far as I know ESRAM is supposed to be better than EDRAM, but the 10MB of EDRAM on 360 provided a whopping 256GB/s of bandwidth to the GPU (daughter die straight to the ROPs).

[image: Xbox 360 GPU diagram]


So for 720 they are going with supposedly better ESRAM that has only 102GB/s bandwidth?

That's a pretty hefty bandwidth drop!


Pretty sure the eDRAM provided 33GB/s to the GPU. It had 256GB/s internal bandwidth. Or not, I didn't even see the chart.
 