TRUTHFACT: MS having eSRAM yield problems on Xbox One


y'all should be ashamed
So this was in the previous Kinect demo thread, and the conversation shifted completely to a rumor that Xbox One was getting a huge GPU clock downgrade. While that still seems to be debated, all insiders do agree that MS is having problems with eSRAM yields.

Here's a small sample of where the conversation was going and where we ended up:


It's highly unlikely at this point that the downclock rumor is true. That part of the rumor can be laid to rest.

The low yields however appear to be legit, with confirmation via very low XB1 allocations to stores vs. PS4 allocations.


ucan RUMUR frmo hte thread tittle TEAMMOD. esram yieldls. are "troubling"toput it lightly.


btw youiw lil not njhear abouthing about tech atMS E3 presentation. tech tarbaby (notracistst!) IS TSTICKIER THAN YOU WIL EVER KNOW. I LOOK FORWARF TO READING THE BOOK BY VENTUREBEAT.

theyMSrushed. --caughtoutbysonychangs. leadershitfailure.p.
(if you need that translated: "They [MS] rushed. Caught out by Sony changes. Leadership failure.")
Hey, why does CBoaT talk like that?
But is he legit? Mods at GAF say he's legit. I'd say that's pretty damn legit!


I've heard the exact same thing as Thuway.

The GPU downclock is coming from 2 different sources: one that told me several weeks ago, and another that reconfirmed yesterday.
For everyone asking- this information is all pretty recent. Around the PlayStation Meeting the Xbox One was way behind (OS + hardware). Engineers were scrambling to get things sorted out.

It turns out, they didn't sort it out. The OS you saw was a complete and total lie. The current plan is to get the yields up, lower the clock rate, and to have enough units out for a sell out in the Fall.

For those asking how this affects performance, to be perfectly frank: it is nothing that turning down features won't solve. The mass market will never notice a difference between 1080p and 900p; nor will they care about dynamic shadows, global illumination, or tessellation. Go to your PC, turn shadows from Ultra to medium, disable tessellation, and lower the resolution to 900p, and you'll find games run totally fine.

Microsoft is simply behind and it's now time to make drastic decisions. I don't think anyone is happy about the lower clocks, but no one is depressed about it either. The Xbox One is an all-in-one device, and that's how it will be marketed.
Yeah, unlike some of the info we had that was quite old, all this stuff has been coming out only recently and keeps getting reconfirmed over and over.

People should have seen this coming, though, when MS didn't announce anything about the GPU and instead focused on the transistors, which was ridiculous. They are using misdirection and dropping random spec info in an attempt to make casuals think "Wow 5 Million Transistys is a lot!!" They very likely won't be revealing the specifics of the GPU even at E3, taking a page out of Nintendo's book and keeping that stuff under wraps.

It was a bet, like any other bet, but the downclock isn't what is worrisome, it's the half baked OS.
Unfortunately it's not. There are performance issues. In some cases, it's quite significant.
Yes Crazy [crazy buttocks on a train] has confirmed this off site. Also GopherD is as close to a nailed down confirmation that you could ask for.

Forget me, thuway, proelite, and the others; when Gopher talks, you should really pay attention.

Original Post:

I am not trying to bring the MS Defense Force down on me, but I've heard GPU clocks might be downgraded: 800-900 gigaflops for gaming. The APU is big. This isn't 100% confirmed though, and it's being done to improve yields.

CBoat do you mind adding anything?
Hmm, I've been hearing some of this myself. It'll be hilarious if MS tries to sell a more expensive gaming machine with 1/2 of the perf. Knowing people's buying habits, they still might beat Sony on the market though. :/
Also someone has suggested to me sony is doing the same, but I've heard from Sony devs the machine is very mature and not facing the issues Xbox One is. Microsoft started late.
I wouldn't worry about Sony. In fact I am hearing it's the eSRAM that's causing issues.
Nope. Clocks can only go down at this stage as they try to get acceptable yields.

Releasing useless info such as transistors count but not clock speeds should tell you all.
Power has never killed a system. Developers aren't upset by this, btw; they can easily scale down. 1080p might not be de facto for Xbone.
Perf tweaking is the easiest and last thing to do in development.

We're not talking about the ram or clocks being halved.
I heard this today from someone reliable, but I don't want to state this as fact unless I hear it from more than one source. ProElite is saying he's heard this too, but I want someone else to confirm before this turns out to be true or false. If it's false I'll be tarred and hung, but I take those risks when I hear things :/.
I've heard the same about the ESRAM (as I shared with another poster who mentioned as such in this thread), and truthfully I think that's where all of the yield issues are coming from. Whether they need to downclock something to improve yields is a different story, but MS' APU is much more complicated and required a lot more engineers at AMD/ATI for a reason.
downclocking the GPU wouldn't make any sense if the ESRAM is causing yield problems

and if the CPU or GPU sections of the chip are causing any issues then expect to see the same from sony, they're using the same architecture, chip foundry, and even roughly similar die sizes
Correct. However Sony's pretty much good to go at this point, pre-manufacture. (edit: and their APU is much less complicated - ie no ESRam). We'll see how their manufacture goes over the next couple months.
I've had things wrong before too. I was led to believe there would be "special sauce", when in reality all that was an audio processor, eSRAM, and move engines. I was also told sizes of the PS4 box that were totally false. Like I said, I'm not stating this as fact, but I've heard this issue from a source who is in the know, and I'm HAPPY to be wrong. I don't put myself out there for no reason.
Get used to it. Across all future platforms

Before you guys jump on thuway, keep this in mind: being a "third hand" source (or even a second-hand source), aka a "leak", is prone to being led astray sometimes, or to getting unclear or misunderstood information. What he says is not told to play with gaffers' minds (as some other posters attempt to) but to share as much as he can, right or wrong (and often right).
choppy UI = Lower clocked parts and beta code. That would make sense.

MS is most likely waiting on a respin to come back to see what's going on. Rumors of yield problems have existed since last year, so I'm sure they have something going on.

PS4 is doing just fine the last I heard. I wouldn't worry about Sony.

BTW - some other site is claiming -

Yeah, this is true.
(note: not sure which part he's referring to, but at least the yields part.)

Pay attention to this poster, folks. Not a third hand source like myself or thuway, but....

He'd know.


So what does this mean? This is a great explanation by SneakyStephan:

Basically they create a large circular silicon wafer and put the conducting pathways for the (processor or memory) chip on it.

On one of these wafers they can fit many chips which they then cut out (like baking a pizza and cutting it into identical slices)

There will however be defects on the wafer; the chips that occupy the areas with defects are either useless or can be used as lower-end parts by disabling part of the chip
(e.g. if a GPU has 18 compute units you could use only 16 if the defect hits one of the others; the PS4 already does this by using only 18 out of 20 CUs, keeping two spare for redundancy to improve yields)

Not all of them come out as well as the others and some can handle more voltage than others, so they can be higher clocked.

Now the problem is when you have really large chips.
The larger the chip, the bigger the chance that any given chip has a defect (exponentially so).
If you have 10 defects on your wafer but fit 100 small dies on it, then at most 10, so ten percent, are throwaway or salvaged as low-end parts; but if you only fit 10 large dies on there, it's likely that many of them will have one of those defects.
You could easily have to throw away 30-50 percent, or even all of them.

Xbox One uses an APU, which is a really big die that needs to house the CPU, the GPU, AND in Xbox One's case also the eSRAM, which takes up a huge number of transistors and therefore physical space on the chip.
They end up with little hardware power yet a huge (5 billion transistor) die, so they have low yields.
Normally low-end hardware only takes a tiny little die so yields are good, but for some reason that only MS engineers or suits/beancounters can know, they decided to go for this huge-ass APU with eSRAM.

From my limited knowledge Sony ended up with a lot more bang for their buck... they put their die space into a bit more GPU power and didn't design their APU around needing eSRAM (since they didn't cheap out on the VRAM, which is a collection of separate chips embedded on a PCB and connected to the APU through a memory bus).
Meanwhile MS seems to have thrown the baby out with the bathwater.
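The wafer math above can be sketched with the standard Poisson yield model. All the numbers below are made up for illustration; the die areas and defect density are assumptions, not known Xbox One figures:

```python
import math

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    """Rough gross die count on a circular wafer
    (ignores edge losses and scribe lines)."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    return int(wafer_area // die_area_mm2)

def poisson_yield(die_area_mm2, defects_per_mm2):
    """Poisson yield model: fraction of dies with zero defects."""
    return math.exp(-defects_per_mm2 * die_area_mm2)

# Hypothetical example: a 300 mm wafer at 0.002 defects/mm^2,
# comparing a small 100 mm^2 die against a big ~360 mm^2 APU.
for area in (100, 360):
    y = poisson_yield(area, 0.002)
    good = dies_per_wafer(300, area) * y
    print(f"{area} mm^2 die: yield {y:.1%}, ~{good:.0f} good dies/wafer")
```

The exponential term is why "exponentially much so" is literally true: doubling the die area squares the probability of a clean die, so a big APU with a large eSRAM block pays a steep yield penalty.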

Some good explanations regarding why eSRAM yields could lead to downclocking:


Should add "800-900 flops" to the title.
I think that's a bit less substantiated; I would rather not have this thread filled up with OH MAN ITS OVER posts but more interesting tech posts. :p

Mods can change it if they feel otherwise though.
It's an eSRAM issue, which could be caused by too much heat dissipating from the GPU.
The eSRAM has a clock of its own too. So if the issue is indeed with the eSRAM, then:

1. Downclock the eSRAM (bandwidth gets affected)
2. Downclock the GPU (raw graphics power gets affected)
3. Fuse off CUs in the GPU and maintain the same clock (raw graphics power gets affected)

None of the above is promising.

Every 100MHz drop in the eSRAM clock drops the eSRAM bandwidth by ~13GB/s. 10 CUs at 800MHz ≈ 12 CUs at 667MHz (~1TFLOP).
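Those figures check out if you assume the rumored GCN-style configuration (12 CUs of 64 shaders each, 2 ops per clock via fused multiply-add) and a 128-byte-per-cycle eSRAM interface. The interface width is my assumption, picked because it matches the ~13 GB/s-per-100MHz figure:

```python
def gpu_gflops(cus, clock_mhz, shaders_per_cu=64, ops_per_clock=2):
    """Peak single-precision GFLOPS for a GCN-style GPU:
    CUs * shaders * 2 FMA ops * clock."""
    return cus * shaders_per_cu * ops_per_clock * clock_mhz / 1000

def esram_bandwidth_gbps(clock_mhz, bytes_per_cycle=128):
    """Peak eSRAM bandwidth in GB/s, assuming a
    128-byte-per-cycle interface."""
    return clock_mhz * bytes_per_cycle / 1000

print(gpu_gflops(12, 800))   # rumored stock config, ~1.23 TFLOPS
print(gpu_gflops(12, 667))   # ~1 TFLOP after a downclock
print(gpu_gflops(10, 800))   # ~1 TFLOP after fusing off 2 CUs
# bandwidth lost per 100 MHz of eSRAM clock
print(esram_bandwidth_gbps(800) - esram_bandwidth_gbps(700))
```

Note how options 2 and 3 land in nearly the same place: 12 CUs at 667 MHz and 10 CUs at 800 MHz both come out around 1 TFLOP, which is exactly the equivalence the post states.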
I'll quote myself to this thread:

Dat Junior Member said:
I don't buy that whole 800-900 gflops rumour. That's barely 3 times the raw power of the PS3. I know the new architecture is also more efficient but still a very minor gap.
Edit: also, a perforated VHS-case-sized box that's half chipset, half A HUGE FAN, with no internal power supply, overheating? No fucking way. The Xbone must be cooler than a pair of Wayfarers.
What a clusterfuck if this is true. The non-gaming features were already eating up a whole lot of resources. Now they have to downgrade it even further.
That must be such a bitter pill to swallow.

They planned way ahead that 8GB would be needed, and the only viable solution at the time was the eSRAM + DDR3 combo.
Now they are stuck with an overly complicated APU that's causing yield problems and is majorly lacking in performance and elegance compared to their competitor.
Everything about the XB-one feels rushed.

The role reversal in terms of tech this generation is almost complete. Now it's the XB-one with the overly complicated architecture.
800-900 flops.

I have a real hard time believing that. Going from 50% faster to 200% faster for the PS4 GPU in comparison is just ridiculous.
Semantics, careful.

50% faster = 150% of the power
100% faster = 200% of the power
200% faster = 300% of the power
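For context, here is what that distinction does to the rumored numbers, using the commonly cited 1.84 TFLOPS for the PS4 GPU. The PS4 figure and the specific comparison points are mine, not from the thread:

```python
def percent_faster(a, b):
    """How much faster a is than b, expressed as 'X% faster'."""
    return (a / b - 1) * 100

PS4_TFLOPS = 1.84  # commonly cited 18 CU @ 800 MHz figure

# Rumored 800-900 GFLOPS range vs. the assumed 1.229 TFLOPS baseline
for xb1_tflops in (1.229, 0.9, 0.8):
    pct = percent_faster(PS4_TFLOPS, xb1_tflops)
    print(f"PS4 vs {xb1_tflops} TFLOPS: {pct:.0f}% faster "
          f"({100 + pct:.0f}% of the power)")
```

Against the 1.229 TFLOPS baseline the PS4 comes out roughly "50% faster"; against the rumored 800-900 GFLOPS range it comes out somewhere over "100% faster", i.e. more than double the power.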

Anyone want to vouch for the posters' position of credible knowledge before this turns into a 32 page thread about nothing?
He's pretty credible.
I'm certainly not an insider. I'm just using common sense . MS is running early code on early hardware.

I have a media center with a dual core e-350 which is a dual core bobcat at 1.6ghz . I have 8 gigs of ram in it and I'm able to run media center side by side with a browser in windows 8.

Jaguar is clock for clock faster than a bobcat and the xbox one will have 8 of them in it. I don't see the AI having problems at launch.

Currently we know that 10% of GPU resources are dedicated to the OS on the Xbox One. Even with a GPU downclock I can't see MS reducing the number of GPU FLOPS dedicated to the OS, so I think it's reasonable to expect that the # of FLOPS dedicated to the OS would stay fixed regardless of a downclock.

10% of 1.229TFLOPS = 0.123TFLOPS dedicated to the OS.

A 100-200MHz reduction in the GPU clock speed would bring it down to 0.922-1.075TFLOPS.

Subtract 0.123TFLOPS from that and you've got 0.799-0.952TFLOPS left for games.
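That calculation, as a quick sketch. The 1.229 TFLOPS baseline and the fixed 10% OS reservation are the thread's rumored figures, not confirmed specs:

```python
TOTAL_TFLOPS = 1.229               # rumored 12 CU @ 800 MHz figure
OS_RESERVE = 0.10 * TOTAL_TFLOPS   # assumed fixed in absolute terms

def game_tflops_after_downclock(clock_mhz, base_clock_mhz=800):
    """TFLOPS left for games if the OS keeps its absolute
    reservation while total throughput scales with clock."""
    total = TOTAL_TFLOPS * clock_mhz / base_clock_mhz
    return total - OS_RESERVE

for clock in (800, 700, 600):
    print(f"{clock} MHz: {game_tflops_after_downclock(clock):.3f} "
          f"TFLOPS for games")
```

Note the implicit assumption: because the OS slice stays fixed, a downclock costs games more than its face value. A 25% clock cut (800 to 600 MHz) takes game-available FLOPS from ~1.106 down to ~0.799, a ~28% drop.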
Anyone even considering getting an Xbone at launch after the whole RRoD fiasco is nuts, MS has proven they are willing to launch a broken console if it means launching asap.
So what does this mean for the next Xbone? Can they ship it like this and risk titles being decidedly inferior on their console, or would they delay it to fix the issues?
I'm certainly not an insider. I'm just using common sense . MS is running early code on early hardware.

I have a media center with a dual core e-350 which is a dual core bobcat at 1.6ghz . I have 8 gigs of ram in it and I'm able to run media center side by side with a browser in windows 8.

Jaguar is clock for clock faster than a bobcat and the xbox one will have 8 of them in it. I don't see the AI having problems at launch.
neither do i, but this isn't about the UI (i assume that's what you meant).

a drop in clock speed is significant, because it widens the gap between the Xbone and the PS4.
What's going on with Microsoft?

We had been hearing rumours about the next Xbox years before we heard anything about the PS4, to the point that "people in the know" started to get concerned that Sony was far behind and potentially missing 2013. Then BOOM! Things start getting announced, and it turns out MS is the one who is a mess and Sony has secretly had its shit together this whole time and was just waiting to pounce.

I mean, whoever launches the cheapest will probably still win in the end, so a slight difference in power won't really matter, but it's just weird that MS seems to be having so much trouble when they should be the ones who know what they're doing.
Serious question: why would MS go with such a complex design instead of just using better RAM like Sony? It seems like in trying to save money they are actually increasing costs and sacrificing yields and performance.
I've a hard time believing this. Just seems too big. That said, if it's a question of having sufficient supply for the holiday season, it's probably no choice at all.