
Microsoft claims 200GB/s of bandwidth, gets caught fudging numbers

E-Cat

Member
Xbox 360 has 278.4 GB/s of memory system bandwidth.
Xbox 360 > PS4 > Xbox One

 
This is why I love NeoGAF. Poking holes in the BS that companies spew out and helping people understand things they normally wouldn't. Thanks OP and other contributors
 
I find it interesting that in the past week or so maths have changed such that 102 + 68 = 182 now instead of 170. :)

Yea, that sticks out to me now that I look at it lol. Maybe I did make a mistake haha.

In any case the reasoning is there: if Microsoft's system can correctly know which items to have in ESRAM, then the performance hit won't be too too bad.

Unified 176GB/s is still obviously better of course :)
 
How much bandwidth do you actually need for a 1080p game at 30fps given that a lot of the textures and geometry are going to be exactly the same from frame to frame?

Even if the textures and bandwidth are exactly the same from frame to frame, the entire scene still needs to be streamed to the GPU from the memory it's rendering out of. For you to be able to render a scene in a certain amount of time (16ms for 60fps or 33ms for 30fps) you need to be able to supply all of the assets in a scene to the GPU from whatever memory it's rendering from.

Assume for this exercise you have unlimited GPU power (which compared to the 360 and PS3 GPUs might as well be true). We also make a lot of assumptions about latency which don't really apply in this scenario and assumptions about the GPU getting exclusive access to main memory which won't happen in the real world. This scenario is simplistic and a first approximation but basically works for illustrative purposes.

Let's assume we have 2GB of assets in a scene which is what developers are starting to push toward and will certainly hit this generation, at least on the PC side.

In the case of XB1, if you have 2GB of assets in a scene and the GPU can retrieve from main memory at 68GB/sec it's going to take 29ms for the bus to get the entire scene through the GPU. So you're going to be limited to 30fps by virtue of how long it takes the GPU to physically get the scene from memory to the rendering units.

What about the eSRAM? Well there's only 32MB of it, and nothing gets data into it faster than the 68GB/sec bus can feed it. Streaming data to the GPU is purely about how fast you can shove data down a pipe. Caches don't mean shit here. It's basically good for the frame buffer and AA and that's it.

The PS4 on the other hand is going to get the assets in the scene streamed to the GPU in 11ms. So you can target 60fps, or you can increase the amount of assets in the scene, make textures bigger, add more geometry, etc. Memory bandwidth is almost certainly going to be the limiting factor as scenes push past 1GB, stopping the XB1 from targeting 60fps while the PS4 will still be cruising along with bandwidth to spare.

It might come down to shader throughput at some point too but I think we're going to see more games this generation which are 30fps on XB1 and 60fps on PS4 just by virtue of their memory bandwidth.
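To put rough numbers on that back-of-the-envelope argument, here is a minimal Python sketch under the same assumptions (the whole 2GB scene crosses the bus every frame and the GPU has the bus to itself); the bandwidth figures are the commonly quoted 68GB/s DDR3 and 176GB/s GDDR5 peaks, and the helper name is just for illustration:

# Rough frame-time lower bound from memory bandwidth alone.
# Assumes, as above, that the whole scene crosses the bus every frame
# and the GPU has exclusive use of that bus (a best-case sketch).

def min_frame_time_ms(scene_gb, bandwidth_gb_per_s):
    """Milliseconds just to move scene_gb across the bus."""
    return scene_gb / bandwidth_gb_per_s * 1000.0

scene_gb = 2.0
for name, bw in [("XB1 DDR3", 68.0), ("PS4 GDDR5", 176.0)]:
    t = min_frame_time_ms(scene_gb, bw)
    print(f"{name}: {t:.1f} ms per frame, so at most ~{1000.0 / t:.0f} fps")

# XB1 DDR3: 29.4 ms per frame, so at most ~34 fps
# PS4 GDDR5: 11.4 ms per frame, so at most ~88 fps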
 

AgentP

Thinks mods influence posters politics. Promoted to QAnon Editor.
No.... No, you can't just add those two numbers together.

[Image: durango_memory.jpg, leaked Durango memory system diagram]


According to the leaked docs, the GPU can pull data from DDR3 and eSRAM in parallel for a theoretical max of 170GB/s.

But back to the OT, MS did the same fudging with the system bandwidth for the 360 in Major Nelson's 360 vs PS3 comparison article. They made the 360's bandwidth look massive by adding the eDRAM bandwidth (with compression) to the system RAM bandwidth.
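For reference, both headline figures come from the same "just add the buses" arithmetic. A small sketch, using the leaked Durango numbers and the commonly cited breakdown of the 360 figure (22.4GB/s GDDR3 plus 256GB/s internal eDRAM bandwidth); none of these component numbers are newly confirmed here:

# Summing per-bus peaks the way the marketing slides do.
# Component figures come from the vgleaks docs and the old 360
# marketing breakdown, not from anything Microsoft confirmed here.

xb1_ddr3  = 68.0    # GB/s, 256-bit DDR3-2133
xb1_esram = 102.0   # GB/s, embedded SRAM (leaked figure)
print("XB1 'combined' peak:", xb1_ddr3 + xb1_esram, "GB/s")        # 170.0

x360_gddr3 = 22.4   # GB/s, 360 main memory
x360_edram = 256.0  # GB/s, internal eDRAM bandwidth
print("360 'system bandwidth':", x360_gddr3 + x360_edram, "GB/s")  # 278.4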
 

OTIX

Member
They should have included the CPU caches as well, all of them separately of course. If you're gonna lie why not go all the way?
 

cyberheater

PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 Xbone PS4 PS4
Even if the textures and bandwidth are exactly the same from frame to frame, the entire scene still needs to be streamed to the GPU from the memory it's rendering out of. For you to be able to render a scene in a certain amount of time (16ms for 60fps or 33ms for 30fps) you need to be able to supply all of the assets in a scene to the GPU from whatever memory it's rendering from.

Assume for this exercise you have unlimited GPU power (which compared to the 360 and PS3 GPUs might as well be true). We also make a lot of assumptions about latency which don't really apply in this scenario and assumptions about the GPU getting exclusive access to main memory which won't happen in the real world. This scenario is simplistic and a first approximation but basically works for illustrative purposes.

Let's assume we have 2GB of assets in a scene which is what developers are starting to push toward and will certainly hit this generation, at least on the PC side.

In the case of XB1, if you have 2GB of assets in a scene and the GPU can retrieve from main memory at 68GB/sec it's going to take 29ms for the bus to get the entire scene through the GPU. So you're going to be limited to 30fps by virtue of how long it takes the GPU to physically get the scene from memory to the rendering units.

What about the eSRAM? Well there's only 32MB of it, and nothing gets data into it faster than the 68GB/sec bus can feed it. Streaming data to the GPU is purely about how fast you can shove data down a pipe. Caches don't mean shit here. It's basically good for the frame buffer and AA and that's it.

The PS4 on the other hand is going to get the assets in the scene streamed to the GPU in 11ms. So you can target 60fps, or you can increase the amount of assets in the scene, make textures bigger, add more geometry, etc. Memory bandwidth is almost certainly going to be the limiting factor as scenes push past 1GB, stopping the XB1 from targeting 60fps while the PS4 will still be cruising along with bandwidth to spare.

It might come down to shader throughput at some point too but I think we're going to see more games this generation which are 30fps on XB1 and 60fps on PS4 just by virtue of their memory bandwidth.

That assumes that there is a single pool of main memory.

This image seems to state that there is a separate memory pool for the GPU. I don't know how accurate this image is:-

[Image: durango_memory.jpg, leaked Durango memory system diagram]


EDIT. Nope. I'm wrong. Just the ability to pull from main memory and ESRAM at the same time. Your analysis is spot on.
 
A 53 MB file split between eSRAM and main memory can be accessed @182GB/s. That's what the eSRAM is here for.

32MB@102GB/s = 0.000292180s
21.329MB@68GB/s = 0.000292120s

The buses are separate so they work simultaneously. So the entire 53MB file is done in 0.000292180s (the larger of the two times) and that gives you a rate of 182.5GB/s.

My calculator seems fine from my standpoint.
I think you are doing a power of 2 to power of 10 conversion wrong somewhere. It is honestly as simple as 102+68 = 170.
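Spelled out: if the two buses really do run in parallel and the file is split so both finish at the same moment, the aggregate rate comes out at exactly 102 + 68 = 170GB/s, not 182.5. A small sketch of that arithmetic (decimal MB and GB throughout, so there is no power-of-2 vs power-of-10 mixing):

# Two independent buses working on one file in parallel.  Split the
# file so both buses finish at the same time; the aggregate rate is
# then simply the sum of the two bus rates.

esram_bw, ddr3_bw = 102.0, 68.0          # GB/s
esram_part_mb = 32.0                     # the piece held in eSRAM
t = esram_part_mb / 1000.0 / esram_bw    # seconds to move 32MB at 102GB/s
ddr3_part_mb = ddr3_bw * t * 1000.0      # what DDR3 moves in that same time

total_mb = esram_part_mb + ddr3_part_mb  # about 53.3 MB
rate = total_mb / 1000.0 / t             # GB/s
print(f"{total_mb:.1f} MB in {t * 1e6:.0f} us at {rate:.0f} GB/s")
# 53.3 MB in 314 us at 170 GB/s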
 

dr_rus

Member
durango_memory.jpg


According to the leaked docs, the GPU can pull data from DDR3 and eSRAM in parallel for a theoretical max of 170GB/s.
Yes, it can, but that gives you 170 GB/s for only 32 MBs of data, all the rest is just 68 GB/s. So you still cannot claim 170 GB/s for all of your RAM access.
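Put another way, the blended rate only looks anything like 170GB/s while the working set hovers around that ~53MB sweet spot; as it grows past the 32MB that fits in eSRAM it falls back toward 68GB/s. A rough best-case model (function name and the perfect-scheduling, parallel-bus assumptions are mine and generous):

# Best-case effective bandwidth when only 32MB of a working set can be
# served from eSRAM and the rest comes from DDR3, both buses in parallel.

def effective_bw(working_set_mb, esram_mb=32.0, esram_bw=102.0, ddr3_bw=68.0):
    in_esram = min(working_set_mb, esram_mb)
    in_ddr3 = working_set_mb - in_esram
    # Each bus needs this long for its share; the slower share gates the total.
    t = max(in_esram / 1000.0 / esram_bw, in_ddr3 / 1000.0 / ddr3_bw)
    return working_set_mb / 1000.0 / t

for s in (53, 256, 1024, 2048):
    print(f"{s:5d} MB working set: ~{effective_bw(s):5.1f} GB/s effective")

#    53 MB working set: ~168.9 GB/s effective
#   256 MB working set: ~ 77.7 GB/s effective
#  1024 MB working set: ~ 70.2 GB/s effective
#  2048 MB working set: ~ 69.1 GB/s effective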
 
you guys are assuming all the specs are exactly the same as the vgleaks posted to make this claim and none of you know that. microsoft did not reveal any clock speeds or any flop numbers to derive clock speeds from either. they did more or less confirm 768 shaders.

it's in the realm of possibility for example that a simple gpu upclock to 1ghz would boost the vgleaks diagram bw as is to 200gb/s with no other changes.

it's possible ms outright lied/fudged. it's also possible they did not.
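For what it's worth, the arithmetic behind that speculation looks like this, assuming (and it is only an assumption) that the leaked 102.4GB/s eSRAM figure scales linearly from an 800MHz GPU clock to 1GHz while the DDR3 stays at 68GB/s:

# The upclock speculation spelled out.  Purely hypothetical: assumes the
# leaked eSRAM bandwidth (102.4 GB/s at an assumed 800 MHz GPU clock)
# scales linearly with GPU clock while the DDR3 bus is unchanged.

esram_bw_leaked = 102.4   # GB/s at the assumed 800 MHz clock
ddr3_bw = 68.0            # GB/s, set by the memory, not the GPU clock

scaled_esram = esram_bw_leaked * (1000 / 800)                         # 128.0 at 1 GHz
print("hypothetical combined peak:", scaled_esram + ddr3_bw, "GB/s")  # 196.0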
 

Matt

Member
you guys are assuming all the specs are exactly the same as the vgleaks posted to make this claim and none of you know that. microsoft did not reveal any clock speeds or any flop numbers to derive clock speeds from either. they did more or less confirm 768 shaders.

it's in the realm of possibility for example that a simple gpu upclock to 1ghz would boost the vgleaks diagram bw as is to 200gb/s with no other changes.

it's possible ms outright lied/fudged. it's also possible they did not.

They didn't up anything.
 
you guys are assuming all the specs are exactly the same as the vgleaks posted to make this claim and none of you know that. microsoft did not reveal any clock speeds or any flop numbers to derive clock speeds from either. they did more or less confirm 768 shaders.

it's in the realm of possibility for example that a simple gpu upclock to 1ghz would boost the vgleaks diagram bw as is to 200gb/s with no other changes.

it's possible ms outright lied/fudged. it's also possible they did not.


GPU clock has NOTHING to do with memory bandwidth. You are confusing it with FLOPs.

Memory bandwidth depends on memory clock and memory type. And we already have very good photos of their RAM chips. They are using Micron DDR3-2133
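That pins the DDR3 side down, since peak bandwidth follows directly from the transfer rate and the bus width (the 256-bit interface here is the leaked figure, not something Microsoft stated):

# Peak DDR3 bandwidth from transfer rate and bus width.
# DDR3-2133 means 2133 MT/s; the 256-bit bus width is from the leaks.

transfers_per_s = 2133e6        # 2133 million transfers per second
bus_width_bytes = 256 / 8       # 256-bit interface, 32 bytes per transfer

peak_gb_per_s = transfers_per_s * bus_width_bytes / 1e9
print(f"peak DDR3 bandwidth: {peak_gb_per_s:.1f} GB/s")    # 68.3 GB/s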
 

joshcryer

it's ok, you're all right now
They are using Micron DDR3-2133

Was going to post that, from what we know the vgleaks specs are spot on...

I still say the big deal will be whether AMD can get hUMA adopted for PC. Otherwise X1 will be the developers' low end platform and they may have minor improvements with PS4, depending on adoption rate. If AMD is successful then hUMA-PC's will become standard and X1 will take a back seat.

In the end PC always wins; it's just a matter of whether PCs are the leader for graphical performance (as far as developers are concerned; i.e., whether they target PCs first and downgrade the graphics, or target consoles first and either upgrade the graphics or not bother upgrading them).

/drunk talking, grain of salt, corrections welcome, etc...
 

Durante

Member
When people ask why Nintendo doesn't release specs? This is why. Everyone just lies anyways.
Actually, the reason Nintendo hasn't released specs since the Wii is the exact same reason MS didn't release GPU specs this time around: because they are worse.
 
When people ask why Nintendo doesn't release specs? This is why. Everyone just lies anyways.

actually the reason why is the same reason microsoft won't say how many flops they have: it makes them look bad.

in Nintendo's case really really bad.

microsoft said 8 cores and 8gb ram, coincidentally the specs where they are equal to ps4.

edit:beaten

but then again in some video interview microsoft basically admitted 768 shaders i guess.
 
Are there not organisations that protect us from Major Nelson's BS? Advertising falsehoods is kinda scary. Also it seems the eSRAM exists only to pretend the system is faster than it really is. Yes, first party devs for the Xbone will be able to be creative with it, but it will not solve the bandwidth issues for multiplatform games.
 
Are there not organisations that protect us from Major Nelson's BS? Advertising falsehoods is kinda scary. Also it seems the eSRAM exists only to pretend the system is faster than it really is. Yes, first party devs for the Xbone will be able to be creative with it, but it will not solve the bandwidth issues for multiplatform games.

break

Vgleaks has a wealth of info, likely supplied from game developers with direct access to Xbox One specs, that looks to be very accurate at this point. According to their data, there’s roughly 50GB/s of bandwidth in each direction to the SoC’s embedded SRAM (102GB/s total bandwidth). The combination of the two plus the CPU-GPU connection at 30GB/s is how Microsoft arrives at its 200GB/s bandwidth figure, although in reality that’s not how any of this works. If it’s used as a cache, the embedded SRAM should significantly cut down on GPU memory bandwidth requests which will give the GPU much more bandwidth than the 256-bit DDR3-2133 memory interface would otherwise imply. Depending on how the eSRAM is managed, it’s very possible that the Xbox One could have comparable effective memory bandwidth to the PlayStation 4. If the eSRAM isn’t managed as a cache however, this all gets much more complicated.
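Going by that description, the 200GB/s headline is just every link in the diagram summed together; a minimal sketch of the arithmetic as the quoted article lays it out:

# The ~200 GB/s headline number as described in the quoted article:
# every bus in the diagram added together, meaningful or not.

ddr3    = 68.0    # GB/s, 256-bit DDR3-2133 main memory
esram   = 102.0   # GB/s total, roughly 50 GB/s each direction to the eSRAM
cpu_gpu = 30.0    # GB/s, coherent CPU-GPU connection

print("sum of the buses:", ddr3 + esram + cpu_gpu, "GB/s")   # 200.0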

Are there not organisations that protect us from Major Nelson's BS?

I don't even know what to say about this. Are you serious? You do realize every company puts forth "favorable" numbers that are arguably fudged. Go back to Sony claiming the PS3 had two teraflops... and making a bunch of CG vids at E3 2005 and saying they were gameplay.
 

KidBeta

Junior Member
break





I don't even know what to say about this. Are you serious? You do realize every company puts forth "favorable" numbers that are arguably fudged. Go back to Sony claiming the PS3 had two teraflops... and making a bunch of CG vids at E3 2005 and saying they were gameplay.

Did Sony have a major member of staff do a 4-part blog post on why they are better?
 

twobear

sputum-flecked apoplexy
Actually, the reason Nintendo hasn't released specs since the Wii is the exact same reason MS didn't release GPU specs this time around: because they are worse.
This is kind of disingenuous. It's probably true for MS but Nintendo didn't release specs because the Wii/U wasn't intended to be sold on the promise of ray traced reflections.

It's a bit like saying the reason Fiat don't tout the 500's 0-62 time is because it's worse than a BMW's. It's not at all; it's because it's not built for good acceleration and nobody buys it for that reason.
 

Durante

Member
This is kind of disingenuous. It's probably true for MS but Nintendo didn't release specs because the Wii/U wasn't intended to be sold on the promise of ray traced reflections.

It's a bit like saying the reason Fiat don't tout the 500's 0-62 time is because it's worse than a BMW's. It's not at all; it's because it's not built for good acceleration and nobody buys it for that reason.
Your comparison doesn't really work: the Fiat 500's 0-100 times are right there in its spec list. Sure, they don't advertise it, but they also aren't completely withholding important information.
 