
DigitalFoundry: X1 memory performance improved for production console (ESRAM 192 GB/s)

This seems reactionary (imo) to Mark Cerny's presentation yesterday. Nice to see MS still has DF (Eurogamer) in their back pocket for this gen.

DF's analysis has been spot on and the most reliable, imo. Just because the 360 won most of those head-to-heads doesn't mean they're in MS's back pocket; it's called reality.
 

Tripolygon

Banned
So games will basically look the same and xb1 will run at 30fps while the ps4 will run at 60? I'm ok with that.

People need to stop saying this. If a game runs at 60fps on PS4, developers will find ways to make it run at 60fps on Xbone. There are already games targeting 60fps on both Xbone and PS4.
 

WolvenOne

Member
You don't need to be an expert. I mean do you guys really think the Microsoft engineers are like this when working with the machine 5 months before release:

the-fuck-is-this-the-fuck-is-that.jpg


C'mon...

Oh, I'm sure Microsoft developers are figuring out various methods of punching above their weight class, so to speak. That's kind of the norm for consoles: as the devs get into the nitty-gritty of the hardware, they find various little loopholes to exploit to pull out more horsepower. The Sony devs will be discovering the same sort of loopholes about now, though they may not be memory-bandwidth specific in their case.

One or two years from now, we'll start seeing the benefits of all these discoveries, though I would expect them to be utilized more heavily in exclusives than on multiplats.

Still, having some expertise does matter, at least in relation to trying to explain/discern little issues like this one.
 

borghe

Loves the Greater Toronto Area
DF's analysis has been spot on and the most reliable, imo. Just because the 360 won most of those head-to-heads doesn't mean they're in MS's back pocket; it's called reality.

The only thing I'll say about their head-to-heads is that the MAJORITY of the time, when they'd essentially call the head-to-head "too close to call," they would almost always lean to the side of the 360. It would be like "IQ is slightly better on PS3, but frame rate is better on 360. 360!" and then on a different one, "IQ is slightly better on 360, but frame rate is slightly better on PS3. 360!"
 

ekim

Member
Don't take the "few hours" thing literally. Maybe you'll see in a few days, a week? I don't know.

The important thing is that Microsoft won't get away with this "math".



It's bullshit that just needs confirmation.

This explains nothing - so is a major leak going to happen? Or are you just guessing?
 

Vestal

Gold Member
Don't take the "few hours" thing literally. Maybe you'll see in a few days, a week? I don't know.

The important thing is that Microsoft won't get away with this "math".



It's bullshit that just needs confirmation.

Then you are attacking perceived BS with your own brand of BS??
 

Kinan

Member
88% increase over the design value? How in hell could that happen? Normally it goes in the other direction.

My imagination is failing me.

"oh, we have actually builit in a wider bus in there? niiiiiiice."

"look, I've tried higher clocks today and it worked!"
 

link1201

Member
Don't take the "few hours" thing literally. Maybe you'll see in a few days, a week? I don't know.

The important thing is that Microsoft won't get away with this "math".



It's bullshit that just needs confirmation.

You're acting as if this was a MS press release or something.
 

Gestault

Member
Do you have an actual counter argument to that interpretation?

Maybe I'm too dismissive of some comments here, but the content of the article seems to offer a reasonable "counterpoint" to the view that this is somehow a downgrade. The new info seems unequivocally good, and offers retorts to some of the unsubstantiated but assumed negative rumors about the hardware profile.
 
How many TFLOPs would Fulgore need to function in real life?


I just want to know: when did TFLOP performance become a relevant measuring factor again? lol


Nvidia's GTX 680 = 3.1 TFLOPS
AMD's 7970 = 3.8 TFLOPS

Whoa! Why is everyone flocking to GTX 680s then? The 7970 must be KILLING the GTX 680.

Difference between PS4 and Xbox One = 7970 vs GTX 680 confirmed! >D
 

-COOLIO-

The Everyman
are there any posts explaining how an esram setup works? is it 32 mb of super fast memory and 8gbs of slower memory? or does it somehow speed up all ram?
 
I just want to know: when did TFLOP performance become a relevant measuring factor again? lol


Nvidia's GTX 680 = 3.1 TFLOPS
AMD's 7970 = 3.8 TFLOPS

Whoa! Why is everyone flocking to GTX 680s then? The 7970 must be KILLING the GTX 680.

Difference between PS4 and Xbox One = 7970 vs GTX 680 confirmed! >D

That is an apples vs. oranges comparison.
In the PS4 and X1 case it's a smaller apple vs. bigger apple comparison.
 
I just want to know: when did TFLOP performance become a relevant measuring factor again? lol


Nvidia's GTX 680 = 3.1 TFLOPS
AMD's 7970 = 3.8 TFLOPS

Whoa! Why is everyone flocking to GTX 680s then? The 7970 must be KILLING the GTX 680.

Difference between PS4 and Xbox One = 7970 vs GTX 680 confirmed! >D

Nvidia flops =/= AMD flops

Both these consoles have AMD GPUs, so your Hail Mary of a post just fell flat on its tits.
 

Osiris

I permanently banned my 6 year old daughter from using the PS4 for mistakenly sending grief reports as it's too hard to watch or talk to her
Don't take the "few hours" thing literally. Maybe you'll see in a few days, a week? I don't know.

You know teasing like this is heavily frowned upon here, to the point of severe risk to your account.

If you know something, spill it or you should have kept quiet. If it was personal speculation, it should have been presented as such.
 

Hana-Bi

Member
Not necessarily, but possible. 750MHz would fit perfectly. But MS just doing some totally weird math to sugar-coat their spec sheet is in the race as well.

Let's just wait and see how things turn out. :)

You know that MS said this to devs and not to any news sites... It's not a PR stunt they're trying to pull here...
 
88% increase over the design value? How in hell could that happen? Normally it goes in the other direction.

My imagination is failing me.

"oh, we have actually builit in a wider bus in there? niiiiiiice."

"look, I've tried higher clocks today and it worked!
"

Not possible. Because that means the GPU clock rate is doubled (Calm your shit Kotaku, I'm using this as an example). They are saying they are doing a read AND write at the same time. So it "doubles" the bandwidth... (102x2 = 204GB/s)

Except they said 88% increase... = 192GB/s... but... where the fuck did that number come from? Why wouldn't it be a full 100% increase? Well... because you'd have things trying to write and read to conflicting spots at the same time.

So that's why they say "realistically" it's 133GB/s....

But wait... if the 88% increase doesn't account for those conflicts and data misses... then how did they make up that number?

So...currently.. they say:
128 (bytes per cycle) x 800 (clock in MHz) = 102.4GB/s
Well.. downclocking the GPU from 800 to 750 (going down 50) will get you...
128 (bytes per cycle) x 750 (clock in MHz) = 96GB/s
Double that since they are saying you can read/write at the same time...
96 x 2 = 192GB/s

So... several of us are thinking this 88% thing is just a bullshit number they came up with after changing around their GPU clock and "double accessing" the RAM.
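For anyone who wants to sanity-check that arithmetic, here's a quick sketch. It assumes, like the post above, a 128-byte-per-cycle ESRAM interface and that the quoted figures are just width x clock (doubled for simultaneous read/write); none of that has been confirmed by MS, so treat it as back-of-the-envelope only.

```python
# Back-of-the-envelope ESRAM bandwidth math from the post above.
# Assumption (not confirmed by MS): 128 bytes transferred per cycle,
# and the quoted figures are simply width x clock (x2 for read+write).

BYTES_PER_CYCLE = 128  # assumed 1024-bit ESRAM interface

def esram_bandwidth_gbs(clock_mhz, bidirectional=False):
    """Theoretical ESRAM bandwidth in GB/s at a given GPU clock."""
    one_way = BYTES_PER_CYCLE * clock_mhz / 1000.0  # bytes/cycle * MHz -> GB/s
    return one_way * (2 if bidirectional else 1)

print(esram_bandwidth_gbs(800))                      # 102.4 -> the original spec
print(esram_bandwidth_gbs(800, bidirectional=True))  # 204.8 -> a clean doubling
print(esram_bandwidth_gbs(750, bidirectional=True))  # 192.0 -> the figure MS quoted
print(192 / 2 / BYTES_PER_CYCLE * 1000)              # 750.0 -> clock implied by 192GB/s
```

Run it and the only clock that turns "read and write at the same time" into exactly 192GB/s is 750MHz, which is where the downclock speculation in this thread comes from.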
 
First, I would like to see your definition (and screenshots) of "games that look better".

Second, remember that a PC has two pools of RAM: 2GB on the video card, but the game is still accessing system RAM, which these days is easily 8-16GB or more. Even if the game is running x86 with 32-bit instructions, Win7 will still give that WoW64 instance the full 4GB of system memory if available.



This is what DF is saying. But MS's 192GB/s number doesn't make sense then. The entire basis behind their number is simultaneous reads and writes, which at 102GB/s each way would be a theoretical 204GB/s... but that's not what MS said. MS said 192GB/s. So cut that effective number in half and the real one-way number is 96GB/s. Divide that by 128 bytes per cycle and you get 750MHz.
I understand the math, but DF says MS is still telling developers that 102.4GB/s is the figure for a full single write or read operation, and that there was no downgrade.

As for 192GB/s, it doesn't make much sense to begin with... How do you design something with a fully bidirectional bus and only happen to find that out on the eve of the console launch?
 

ekim

Member
Oh, Digital Foundry... you were a reliable source once... this is more a Kotaku thing...

So now they're not reliable because this article doesn't fit your personal preference? Or do you know more about the matter than they do?
 

Vestal

Gold Member
I'm no insider! I never claimed to be an insider! All I'm saying is that Microsoft won't get away with this.

You made a post implying that something was awry, as if you had facts to back it up and everything would be cleared up in a few hours, even using a spoiler tag to black out part of the text saying MS would pay for something.

Making wild accusations like that with no actual facts or substance to your argument really derails a conversation, as you have seen over the past two pages.
 

borghe

Loves the Greater Toronto Area
I find it funny how positive news/rumors about the PS4 are all true and about Xbox One are all false.

or could it be

"positive reveals" about PS4 are in fact just full tech disclosures

"positive reveals" about XBONE are in fact just PR spin using behind the scenes numbers and fuzzy math

I would LOVE, as in genuinely applaud them, for MS to come out and say "these are the specs of our system". It would honestly end a lot of debate AND negativity. This spinning numbers in one breath and then downplaying comparisons in the next is just the same sort of PR failure that they've been exhibiting since the original presser.

Flops are flops, now they're supposed to be different??

Here's a good one:


DDR3 bandwidth =/= GDDR5 bandwidth, therefore cannot be compared either... amirite?

His statement was wrong... but his intent is still true. Nvidia compared to AMD is pointless in numbers like FLOPs because of architecture differences... but that's not what's happening here. This is basically comparing a 7770 to a 7850, i.e. the same basic architecture... and when comparing, say, the Radeon family (usually in a price-to-performance comparison), the FLOPs within the family are an easy "at-a-glance" comparison tool. Of course most sites at that point just use FPS regardless.
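For what it's worth, the headline TFLOP figures everyone is quoting come from the same simple formula on both vendors: shader ALUs x clock x 2 (one fused multiply-add counts as two operations). The catch is that a Kepler "flop" and a GCN "flop" don't buy you the same real-world performance, whereas within one GCN family the comparison is reasonably fair, which is the point above. A quick check, using commonly quoted desktop reference specs (the shader counts and clocks below are assumptions from public spec sheets, not measured numbers):

```python
# Theoretical FP32 throughput: shaders x clock x 2 ops (FMA) per cycle.
# Shader counts / clocks are commonly quoted desktop reference specs.

def tflops(shaders, clock_mhz):
    return shaders * clock_mhz * 2 / 1e6  # mega-ops/s -> TFLOPS

print(round(tflops(1536, 1006), 2))  # ~3.09  GTX 680 (Kepler)
print(round(tflops(2048,  925), 2))  # ~3.79  HD 7970 (GCN)
print(round(tflops( 640, 1000), 2))  # ~1.28  HD 7770 (GCN)
print(round(tflops(1024,  860), 2))  # ~1.76  HD 7850 (GCN)
```

The 680/7970 results match the numbers quoted earlier in the thread, and the 7770 vs 7850 gap is the kind of within-family comparison where FLOPs actually tell you something.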
 

maltrain

Junior Member
So now they're not reliable because this article doesn't fit your personal preference? Or do you know more about the matter than they do?

This "information" is pure PR bullshit. As simple as that. Leave that to Kotaku, Polygon or even GT, but a serious media like Digital Foundry isn't for this kind of shit.
 

Crisco

Banned
Sounds like it was downclocked; the math makes sense. 50MHz isn't really a tangible difference though. I remember overclocking GPUs by about that much and it never made much difference in benchmarks, maybe a frame or two.
 

Freki

Member
I understand the math, but DF says MS is still telling developers that 102.4GB/s is the figure for a full single write or read operation, and that there was no downgrade.

As for 192GB/s, it doesn't make much sense to begin with... How do you design something with a fully bidirectional bus and only happen to find that out on the eve of the console launch?

This.
It's not like realizing you can upclock your GPU by 50MHz because the yields are better than expected.

Bus Design, Pipeline Design, etc. were done a long time ago...
 

WolvenOne

Member
Not possible. Because that means the GPU clock rate is doubled (Calm your shit Kotaku, I'm using this as an example). They are saying they are doing a read AND write at the same time. So it "doubles" the bandwidth... (102x2 = 204GB/s)

Except they said 88% increase... = 192GB/s... but... where the fuck did that number come from? Why wouldn't it be a full 100% increase? Well... because you'd have things trying to write and read to conflicting spots at the same time.

So that's why they say "realistically" it's 133GB/s....

But wait... if the 88% increase doesn't account for those conflicts and data misses... then how did they make up that number?

So...currently.. they say:
128 (bytes per cycle) x 800 (clock in MHz) = 102.4GB/s
Well.. downclocking the GPU from 800 to 750 (going down 50) will get you...
128 (bytes per cycle) x 750 (clock in MHz) = 96GB/s
Double that since they are saying you can read/write at the same time...
96 x 2 = 192GB/s

So... several of us are thinking this 88% thing is just a bullshit number they came up with after changing around their GPU clock and "double accessing" the RAM.

Very plausible, though of course we won't know for certain for a while.
 

ekim

Member
This "information" is pure PR bullshit. As simple as that. Leave that to Kotaku, Polygon or even GT, but Digital Foundry isn't for this kind shit.

So MS is now trying to fool its own devs with some PR? Why can't you accept this information as a given? Where is your proof that this is pure PR bullshit?
 

allan-bh

Member
or could it be

"positive reveals" about PS4 are in fact just full tech disclosures

"positive reveals" about XBONE are in fact just PR spin using behind the scenes numbers and fuzzy math

Why is this spin? I don't see any concrete evidence to say that, just speculation.
 

lucius

Member
But since the cloud was going to make it 70 times more powerful, and they've been hyping that to death, why would they even need this?
 

Tripolygon

Banned
May 2013

Similarly, the company claimed that there was more than 200GB/s of bandwidth within the system. Again, the number had no context or clarification and, if rumors are to be believed, it suggests some rather creative accounting: 68GB/s main memory bandwidth, 102GB/s bandwidth to an embedded SRAM buffer for the GPU, and 30GB/s bandwidth between the CPU and GPU. While that does add up to 200GB/s, there are no two parts of the SoC that can communicate with each other at 200GB/s. The fastest link is believed to be the GPU read performance, which can aggregate across the main memory and SRAM buffer for 170GB/s total.

June 2013: with this new update, the Xbox One has

192GB/s eSRAM
68GB/s Main RAM
30GB/s bandwidth between the CPU and GPU

=290GB/s bandwidth

So the Xbox One has 290GB/s of bandwidth, just following their math, people.
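To make the "just following their math" point concrete, here are the sums both snippets describe, using only the figures quoted above (none of these totals correspond to any single physical link):

```python
# Adding up separately quoted bandwidths, the way the PR math does.
# Figures are the ones quoted in this thread, in GB/s.

main_ram   = 68    # DDR3 main memory
esram_1way = 102   # ESRAM, single direction (original figure)
esram_rw   = 192   # ESRAM, new simultaneous read+write figure
cpu_gpu    = 30    # CPU <-> GPU link

print(main_ram + esram_1way + cpu_gpu)  # 200 -> the ">200GB/s in the system" claim
print(main_ram + esram_rw + cpu_gpu)    # 290 -> the same math with the June figure
print(main_ram + esram_1way)            # 170 -> fastest real aggregate (GPU reads), per the May piece
```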
 