Albert Penello puts dGPU Xbox One rumor to rest

Well, the trick would be to not draw attention to the power difference at all. Saying the difference is 'greatly exaggerated' is still admitting that there is a difference at all.
PR should focus on aspects which sell the system (Kinect etc)
Well this is where they can't really win.

If they say something about the power difference, they're drawing attention to it.

If they say nothing about it, their silence is deafening and everyone begins to assume the difference is huge (remember how, before E3, most people were 100% positive Sony would have the same DRM/used games policies as Microsoft, partly because they wouldn't talk about it?).

There is literally no patch of ground here that doesn't have a land mine under it.
 

FINALBOSS

Banned
We need some tech wizzes to tear down Albert's post, because he reiterated that mythical ESRAM number again.

Apparently Albert has Sony's specs too...lol.
 

tmac25

Neo Member
How can you not understand the concern? No one wants to be playing a game that makes compromises when there's a version on a cheaper console that has all the bells and whistles. Really don't understand how anyone can't understand that.

And being more powerful doesn't have anything to do with the consoles doing well--so that entire point is moot.
Not sure if you have kids, but I can tell you the Kinect alone is worth the 100-dollar hike to my family and me, though I understand that's not for everyone. The price point has never been a concern of mine.

As for compromising, I'm very confident the X1 will be able to deliver fine visuals, regardless of whether the PS4 looks significantly better. I'm looking forward to gameplay, not visuals, but to each their own.

Precisely. Take State of Decay, for example: frame rate issues, tearing, glitches and all, but it was a fantastic game. It was first and foremost a fun game, and that's what matters to me. Something like Second Son looks really good visually, but if it isn't fun, then that's it, game over. For me, at least, it's "I had fun playing this game, it's a really good game" > "look how many particle effects we can put in at one time". Sometimes when I get on here I feel like I'm the only one that thinks along these lines.
I think we're alone in this thread, my friend.
 

velociraptor

Junior Member
I think it's a little comical to suggest, with this analogy, that the PS4 has no tuning.

Also, you'd need to use a car with 462HP engine instead of the M3 to be at the right power ratio.
What I was trying to suggest is that raw power will always beat the 'tuning/optimising' you put into a weaker car. Both the PS4 and Xbox are highly customised; it's just that one console is actually that much more powerful than the other.
 

Vizzeh

Banned
Let me give you an analogy. You can have two cars.

One car is a BMW 335i, a 330hp coupe. You have tuned the suspension settings. The gear ratios are perfect. The tyres have the 'right' pressure and width. The torque, camber, ride height, aerodynamics are all 'balanced' - you get the picture.

The other is a BMW M3. It has no such tuning. but is packing a 414HP V8 engine.

What will you place your bets on?
I believe you're assuming devs will spend more time tweaking an X1 system? I propose the opposite: ESRAM and the move engines will probably increase development time, and therefore cost, and it seems it will be much harder to programme for. The PS3 showed signs of this even last year, with COD at sub-par resolutions and frame rate.

That's 3rd party. At least Sony had 1st-party developers to pull them out of a technological hole; Microsoft do not have that luxury in quite the same abundance, if at all.
 

The Flash

Banned
I see my statements the other day caused more of a stir than I had intended. I saw threads locking down as fast as they pop up, so I apologize for the delayed response.

I was hoping my comments would lead the discussion to be more about the games (and the fact that games on both systems look great) as a sign of my point about performance, but unfortunately I saw more discussion of my credibility.

So I thought I would add more detail to what I said the other day, that perhaps people can debate those individual merits instead of making personal attacks. This should hopefully dismiss the notion I'm simply creating FUD or spin.

I do want to be super clear: I'm not disparaging Sony. I'm not trying to diminish them, or their launch or what they have said. But I do need to draw comparisons since I am trying to explain that the way people are calculating the differences between the two machines isn't completely accurate. I think I've been upfront I have nothing but respect for those guys, but I'm not a fan of the mis-information about our performance.

So, here are a couple of points about some of the individual parts for people to consider:

• 18 CU's vs. 12 CU's =/= 50% more performance. Multi-core processors have inherent inefficiency with more CU's, so it's simply incorrect to say 50% more GPU.
• Adding to that, each of our CU's is running 6% faster. It's not simply a 6% clock speed increase overall.
• We have more memory bandwidth. 176gb/sec is peak on paper for GDDR5. Our peak on paper is 272gb/sec. (68gb/sec DDR3 + 204gb/sec on ESRAM). ESRAM can do read/write cycles simultaneously so I see this number mis-quoted.
• We have at least 10% more CPU. Not only a faster processor, but a better audio chip also offloading CPU cycles.
• We understand GPGPU and its importance very well. Microsoft invented Direct Compute, and have been using GPGPU in a shipping product since 2010 - it's called Kinect.
• Speaking of GPGPU - we have 3X the coherent bandwidth for GPGPU at 30gb/sec which significantly improves our ability for the CPU to efficiently read data generated by the GPU.

Hopefully with some of those more specific points people will understand where we have reduced bottlenecks in the system. I'm sure this will get debated endlessly but at least you can see I'm backing up my points.

I still believe that we get little credit for the fact that, as a SW company, the people designing our system are some of the smartest graphics engineers around – they understand how to architect and balance a system for graphics performance. Each company has their strengths, and I feel that our strength is overlooked when evaluating both boxes.

Given this continued belief of a significant gap, we're working with our most senior graphics and silicon engineers to get into more depth on this topic. They will be more credible than I am, and can talk in detail about some of the benchmarking we've done and how we balanced our system.

Thanks again for letting me participate. Hope this gives people more background on my claims.
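For reference, the raw-FLOPS arithmetic behind the widely quoted "50% more GPU" figure can be sketched in a few lines. The clock speeds below (853 MHz for Xbox One, 800 MHz for PS4) and the GCN per-CU figures (64 lanes, 2 ops per cycle) are the numbers reported at the time, so treat this as a back-of-envelope check rather than an official benchmark:

```python
# Back-of-envelope peak-FLOPS comparison. Clock speeds and per-CU
# throughput are the figures reported at the time, not official specs.

def gcn_tflops(cus, clock_ghz, lanes_per_cu=64, ops_per_cycle=2):
    """Peak single-precision TFLOPS for a GCN-style GPU."""
    return cus * lanes_per_cu * ops_per_cycle * clock_ghz / 1000.0

xb1 = gcn_tflops(12, 0.853)  # ~1.31 TFLOPS
ps4 = gcn_tflops(18, 0.800)  # ~1.84 TFLOPS
print(f"XB1 {xb1:.2f} TF, PS4 {ps4:.2f} TF, ratio {ps4 / xb1:.2f}x")
```

On paper that ratio is about 1.41x, which is where the "40-50%" talking point comes from; whether games realize it is exactly what the thread is arguing about.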
 
I don't think the differences will be resolution, at least not to the level of 720p vs 1080p. It'll probably be true 1080p versus something close to it if it's resolution related at all.

What I do think will happen as the generation stretches is we'll see some framerate differences, we'll see games who want to keep the framerate the same in both but have to lower a good number of details or maybe ditch any AA they have planned to get the Xbox version to stay at 30. Maybe an open world game will have less detail in some areas or less particles. There's any number of things that can happen with the tech difference, it's up to the devs to decide what they're going to change/lower.
I agree with what you're saying. I used the resolution difference because it's an easy way to frame the discussion. Although I think resolution will be one of the easiest and cheapest things for a dev to tweak.

Regardless, I don't expect the differences between the consoles to be very significant, despite the PS4's obvious power advantage.
 

morpix

Member
(The BMW car analogy, quoted in full above.)
Bad analogy. The M3 is already tuned by the factory; M is BMW's line of performance-tuned vehicles (like Mercedes' AMG division).
 

Skeff

Member
(Albert Penello's post, quoted in full above.)
Sorry Albert, but no.

• We have more memory bandwidth. 176gb/sec is peak on paper for GDDR5. Our peak on paper is 272gb/sec. (68gb/sec DDR3 + 204gb/sec on ESRAM). ESRAM can do read/write cycles simultaneously so I see this number mis-quoted.
No, just no. You don't add bandwidths; that's not how hardware works.

• We have at least 10% more CPU. Not only a faster processor, but a better audio chip also offloading CPU cycles.
So, didn't you earlier state you don't know what Sony are doing with their console but now you know their unannounced CPU clocks?

• Speaking of GPGPU - we have 3X the coherent bandwidth for GPGPU at 30gb/sec which significantly improves our ability for the CPU to efficiently read data generated by the GPU
That's odd, because the PS4 does not have 10gb/sec bandwidth. I believe you are making a mistake regarding the Onion, Onion+ and Onion(+) buses.
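Skeff's "you don't add bandwidths" point can be illustrated with a toy model. The split fractions below are invented for illustration, and the model assumes ESRAM and DDR3 accesses serialize rather than overlap (so it understates what clever scheduling can achieve); the point is only that the achieved figure tracks where the data lives, not the sum of the two peaks:

```python
# Toy model: traffic split between a small fast pool (ESRAM) and a
# large slow pool (DDR3). Assumes accesses serialize (no overlap),
# so achieved bandwidth is a weighted harmonic mean of the peaks.

def effective_bw(f_esram, bw_esram=204.0, bw_ddr3=68.0):
    """GB/s achieved when a fraction f_esram of traffic hits ESRAM."""
    return 1.0 / (f_esram / bw_esram + (1.0 - f_esram) / bw_ddr3)

for f in (1.0, 0.5, 0.25):
    print(f"{f:.0%} ESRAM traffic -> {effective_bw(f):.0f} GB/s")
```

Even a 50/50 split lands around 102 GB/s under these assumptions, and no split ever reaches the 272 GB/s sum; only traffic that fits entirely in the 32 MB ESRAM sees the 204 figure.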
 

ElTorro

I wanted to dominate the living room. Then I took an ESRAM in the knee.
18 CU's vs. 12 CU's =/= 50% more performance. Multi-core processors have inherent inefficiency with more CU's, so it's simply incorrect to say 50% more GPU.
These inherent inefficiencies are a new phenomenon to me. What exactly causes them?

Adding to that, each of our CU's is running 6% faster. It's not simply a 6% clock speed increase overall.
I fail to see the difference between a global clock increase and a "local" (?) clock increase. What do you mean?
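On ElTorro's first question, the usual argument is Amdahl-style: extra CUs only speed up the parallel portion of a frame, while a clock bump speeds up everything uniformly. A sketch, where the 0.9 parallel fraction is an arbitrary illustrative value, not a measured one:

```python
# Amdahl-style sketch: p is the fraction of frame time that is
# perfectly parallel shader work (0.9 here is illustrative only).

def speedup_more_cus(p, cu_ratio):
    """Speedup from widening the GPU by cu_ratio (Amdahl's law)."""
    return 1.0 / ((1.0 - p) + p / cu_ratio)

def speedup_clock(clock_ratio):
    """A clock increase speeds up serial and parallel parts alike."""
    return clock_ratio

print(f"50% more CUs: {speedup_more_cus(0.9, 18 / 12):.2f}x")  # ~1.43x
print(f"6% clock bump: {speedup_clock(1.06):.2f}x")            # 1.06x
```

This is the charitable reading of the "inherent inefficiency" claim: it doesn't make 12 CUs catch 18, it only means the realized gap can be smaller than the raw CU ratio.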
 

TheGreyHulk

Member
Honestly, we need to just let the games do the talking.

The sad part is that we have to wait a long time till we get games that are really doing work with the hardware.

We all know for a fact the PS4 is stronger; it's in the hardware. We just need to let multi-platform games do the talking. I'm not expecting a big difference at launch, coming from the PS360 hardware.
 

Ricerocket

Member
(Albert Penello's post, quoted in full above.)
You're only speaking on one side... yet you keep bringing up the point as if the hardware in Sony's PS4 is just that, with nothing else to it.
 

benny_a

extra source of jiggaflops
• We have more memory bandwidth. 176gb/sec is peak on paper for GDDR5. Our peak on paper is 272gb/sec. (68gb/sec DDR3 + 204gb/sec on ESRAM). ESRAM can do read/write cycles simultaneously so I see this number mis-quoted.
I'm very disappointed in this magic math right there. I hope the upcoming post by your graphics people is a bit more honest with the real world numbers.
(I don't think saying "on paper" is a get-out-of-jail-free card.)
 

Derrick01

Banned
(Quoting FINALBOSS above.)
Yeah when I see "peak performance" I have to roll my eyes. I saw it constantly broken down as BS when all these debates were raging on here months ago.

We may have to summon Durante to shoot all this stuff down once again.
 

Wishmaster92

Member
(Albert Penello's post, quoted in full above.)
Thanks for the explanation.
 

FINALBOSS

Banned
(Quoting the replies above from Derrick01, benny_a, ElTorro, and Skeff.)
Thanks for the input guys. As I figured it's filled with mythical performance numbers and "magic".
 

pixlexic

Banned
(Albert Penello's post, quoted in full above.)
It's useless to argue; people here have their minds made up. Once the games start coming out, there will be no guessing.
 

LittleJohnny

Banned
Xbone has weaker hardware. The sooner you accept it, the sooner you can get on with your life.
The GameCube had weaker hardware than the Xbox, and GameCube games were pretty competitive throughout the generation, so it's not like this hasn't happened before. Based on AMD's description of next-gen console work, both were targeting the best possible visuals for a console budget; Sony just happened to bet on GDDR5, and we already know MS couldn't go that route from the start because they absolutely wanted 8GB and nothing less. Some good timing and a little luck helped Sony be where they are today.
 

SenkiDala

Member
First of all, I'll buy both the PS4 and X1, just so I'm not misunderstood.

Now, honestly, have you seen a game on PS4 that really seems to have 40% better graphics than X1 games?

Well, for now it's hard to find a good example. Dead Rising 3 can't be compared to inFamous 3, because DR3 is showing us 2,000-3,000 zombies on screen while inFamous is showing us 20 pedestrians at most. Knack doesn't have any equivalent on X1 for now, Killzone neither, and Titanfall is a multiplayer game, so it can't be compared. What we have left is Forza 5 vs. DriveClub; since they're two racing games, why not.

Forza 5 is running at 1080p/60fps while DC is running at 1080p/30fps. Also, do you feel DriveClub is REALLY that much more impressive than Forza 5? Do you?

Also, I don't think the "well, give devs some time to take advantage of it" argument works here, since I remember that if I compare the PS2 launch games and the original Xbox launch games, I can directly see a difference; I don't need to "wait". For example, take SSX (which is an awesome game) and Amped (which is nice too): I remember exactly that you could REALLY SEE the difference right away. The power advantage of the Xbox was obvious; no need to discuss it.

Well, I just wanted to say that... Dunno if it's relevant, but well. :)
 

benny_a

extra source of jiggaflops
It seems like every couple months that the "bandwidth" on the One is increasing.
Well the eSRAM bandwidth was legitimately increased when they increased the GPU clock speed.
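On the clock-bump point: peak ESRAM bandwidth is just bus width times clock, so it scales linearly with the GPU clock. The 128-bytes-per-cycle width and the writes-on-7-of-8-cycles duty factor below are assumptions reconstructed from the figures that circulated at the time, not official documentation:

```python
# Sketch of why the quoted ESRAM figure moved with the 800 -> 853 MHz
# clock bump. Width and read/write duty factor are assumptions based
# on numbers reported at the time, not official specs.

def esram_bw_gbs(clock_mhz, bytes_per_cycle=128, rw_factor=1 + 7 / 8):
    """Peak GB/s: reads every cycle, plus writes on 7 of 8 cycles."""
    return bytes_per_cycle * clock_mhz * 1e6 * rw_factor / 1e9

print(esram_bw_gbs(800))  # 192.0 (the originally quoted figure)
print(esram_bw_gbs(853))  # ~204.7 (the post-upclock figure)
```

Under these assumptions the formula reproduces both the 192 GB/s and ~204 GB/s figures that circulated, which fits the observation that the increase was a clock effect rather than a new capability.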

First of all I'll buy both PS4 and X1. Just to not be misunderstood.
When a post starts like that in a technical thread about facts, I know I can safely ignore it.
 

Duxxy3

Member
(Albert Penello's post, quoted in full above.)
And with ALL of that, the system is still seen by everybody, including developers, as significantly weaker.

It all comes down to the graphical abilities and right now there is no confusion as to which has the better card.

All the FUD in the world won't make the system more powerful than it actually is.
 

2dshmuplover

Banned
After seeing Deep Down and Drive, I agree with him that they won't be as far apart as people wish.
This.

Forget the on-paper specs; just look at the games we've seen so far for your indication of the power difference. For some reason, a lot of people don't appear to want to do that, though.
 

Vizzeh

Banned
Wish we had an official Sony guy on these boards, a little more official than our known sources. But then again, they would never enter a hardware-discussion war. Shots fired, I guess?
 

Klocker

Member
Hey Albert - can you please leak us the sony specs you so obviously have at hand? thanks
He's obviously using the same leaked and announced specs that everyone else here so vehemently claims a huge disparity with, so it's a more than fair comparison. He clearly stated he's not disparaging the PS4, just explaining his perspective and bursting the "40-50% better in games than XBO" bubble.
 

Finalizer

Member
Jul 6, 2013
3,080
0
0
Thanks again for letting me participate. Hope this gives people more background on my claims.
Euuugh... Seriously man, just stop this. It looks no better when you comment on how "we don't know the full story from MS' side" when the exact same thing can very well be said about your perspective on Sony's hardware and engineering. And again, you comment on this with an extremely obvious bias in the discussion. I don't think it's beneficial for you to participate in these discussions; I feel like it's going to do more damage to people's perception of your (and other MS representatives') participation on these forums when you come in here and start engaging in hardware comparisons with your direct competition.
 

USC-fan

Banned
Oct 9, 2005
7,115
1
0
I see my statements the other day caused more of a stir than I had intended. I saw threads locking down as fast as they pop up, so I apologize for the delayed response.

I was hoping my comments would lead the discussion to be more about the games (and the fact that games on both systems look great) as a sign of my point about performance, but unfortunately I saw more discussion of my credibility.

So I thought I would add more detail to what I said the other day, so that perhaps people can debate those individual merits instead of making personal attacks. This should hopefully dismiss the notion I'm simply creating FUD or spin.

I do want to be super clear: I'm not disparaging Sony. I'm not trying to diminish them, or their launch or what they have said. But I do need to draw comparisons since I am trying to explain that the way people are calculating the differences between the two machines isn't completely accurate. I think I've been upfront: I have nothing but respect for those guys, but I'm not a fan of the misinformation about our performance.

So, here are a couple of points about some of the individual parts for people to consider:

• 18 CU's vs. 12 CU's =/= 50% more performance. Multi-core processors have inherent inefficiency with more CU's, so it's simply incorrect to say 50% more GPU.
• Adding to that, each of our CU's is running 6% faster. It's not simply a 6% clock speed increase overall.
• We have more memory bandwidth. 176gb/sec is peak on paper for GDDR5. Our peak on paper is 272gb/sec. (68gb/sec DDR3 + 204gb/sec on ESRAM). ESRAM can do read/write cycles simultaneously so I see this number mis-quoted.
• We have at least 10% more CPU. Not only a faster processor, but a better audio chip also offloading CPU cycles.
• We understand GPGPU and its importance very well. Microsoft invented Direct Compute, and have been using GPGPU in a shipping product since 2010 - it's called Kinect.
• Speaking of GPGPU - we have 3X the coherent bandwidth for GPGPU at 30gb/sec which significantly improves our ability for the CPU to efficiently read data generated by the GPU.

Hopefully with some of those more specific points people will understand where we have reduced bottlenecks in the system. I'm sure this will get debated endlessly but at least you can see I'm backing up my points.

I still believe that we get little credit for the fact that, as a SW company, the people designing our system are some of the smartest graphics engineers around – they understand how to architect and balance a system for graphics performance. Each company has their strengths, and I feel that our strength is overlooked when evaluating both boxes.

Given this continued belief of a significant gap, we're working with our most senior graphics and silicon engineers to get into more depth on this topic. They will be more credible than I am, and can talk in detail about some of the benchmarking we've done and how we balanced our system.

Thanks again for letting me participate. Hope this gives people more background on my claims.
I'd really love to have one of your "tech guys" explain how ESRAM reads and writes at the same time.

GPUs have a lot more in them than just CUs. The PS4 has a 100% advantage in other key areas of the GPU that you did not even talk about.

I know you think you gain something by coming on here and giving us the PR message, but a lot of us know a little about what we're talking about here. Putting out the "peak bandwidth" nonsense just rubs me the wrong way.
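For reference, the raw-compute numbers behind the "40-50%" claims can be checked with some back-of-envelope arithmetic. This is only a sketch of paper-peak FLOPS using the publicly reported specs of the time (12 CUs at 853 MHz vs. 18 CUs at 800 MHz); it says nothing about real-world efficiency, which is exactly the part Penello is disputing:

```python
# Back-of-envelope peak-compute sketch for a GCN-style GPU.
# Each GCN CU has 64 shader lanes, and a fused multiply-add
# counts as 2 floating-point ops per lane per cycle.
def gcn_tflops(cus, clock_ghz, lanes=64, ops_per_lane=2):
    return cus * lanes * ops_per_lane * clock_ghz / 1000.0

xb1 = gcn_tflops(12, 0.853)  # ≈ 1.31 TFLOPS
ps4 = gcn_tflops(18, 0.800)  # ≈ 1.84 TFLOPS
print(f"XB1 {xb1:.2f} TF, PS4 {ps4:.2f} TF, ratio {ps4/xb1:.2f}")
```

On paper the ratio comes out to roughly 1.4x; whether that translates into 40% better-looking games is the whole argument of this thread.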
 

Iacobellis

Junior Member
Jun 25, 2012
10,069
0
400
Connecticut
First of all, I'll buy both the PS4 and X1, just so I'm not misunderstood.

Now, honestly, have you seen a game on PS4 that really seems to have 40% better graphics than X1 games?

Well, for now it's hard to find a good example. Dead Rising 3 can't be compared to inFamous 3, because DR3 is showing us 2,000-3,000 zombies on screen while inFamous 3 is showing us 20 pedestrians at most. Knack doesn't have any equivalent on X1 for now, Killzone likewise, and Titanfall is a multiplayer game, so it can't be compared. What we have left is Forza 5 vs. DriveClub; since they're both racing games, why not.

Forza 5 is running at 1080p/60fps while DC is running at 1080p/30fps. Also, do you feel DriveClub is REALLY so much more impressive than Forza 5? Do you?

Also, I don't think the "well, give devs some time to take advantage of it" argument works here, since I remember that if I compare the PS2 launch lineup with the launch lineup of the Xbox (the first one), I can directly see a difference; I don't need to "wait". For example, take SSX (which is an awesome game) and Amped (which is nice too): I remember exactly that you could REALLY SEE the difference right away. The power advantage of the Xbox was obvious; no need to discuss it.

Well, I just wanted to say that... Dunno if it's relevant, but well. :)
Third party vs. first party, with a year's difference. The Xbox was more powerful, but it's not the fairest comparison.
 

Fox_Mulder

Rockefellers. Skull and Bones. Microsoft. Al Qaeda. A Cabal of Bankers. The melting point of steel. What do these things have in common? Wake up sheeple, the landfill wasn't even REAL!
Jun 22, 2010
2,022
0
0
I see my statements the other day caused more of a stir than I had intended. [...] Thanks again for letting me participate. Hope this gives people more background on my claims.
Serious question: what about cloud?
 

Skeff

Member
Jun 2, 2013
5,813
0
0
Leeds, UK
It's useless to argue. People here have their minds made up. Once the games start coming out, there will be no guessing.
I think you'll find many people on here are simply asking follow-up questions, as there are plenty of holes in Mr. Penello's post.

For instance, he says 30 GB/s is 3x the PS4's coherent bandwidth, which would put the PS4 at 10 GB/s, and we already know that is not true.

He also states that their CPU is at least 10% faster, when we don't know the PS4's CPU clock speed; and even if it were 1.6 GHz, 1.75 GHz would be less than 10% faster.
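Skeff's clock arithmetic is easy to verify. A quick sanity check, assuming (hypothetically, since Sony had not confirmed it) a 1.6 GHz PS4 CPU against the announced 1.75 GHz Xbox One CPU:

```python
# The 1.6 GHz PS4 CPU clock is a rumor/assumption, not a confirmed spec.
xb1_clock, ps4_clock = 1.75, 1.6
gain = (xb1_clock / ps4_clock - 1) * 100
print(f"{gain:.1f}% faster")  # prints "9.4% faster", i.e. under 10%
```

So even under that assumption, the clock difference alone does not reach 10%; any "at least 10% more CPU" claim has to lean on the audio-offload argument as well.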
 

timlot

Banned
Dec 1, 2004
1,551
0
0
Hey Albert - can you please leak us the Sony specs you so obviously have at hand? Thanks.
Folks act as if MS and Sony don't have access to one another's development hardware, or as if Ubi, EA, or any other third party wouldn't share that info.
 

Curufinwe

Member
May 20, 2009
31,241
2
725
He's obviously using the same leaked and announced specs that everyone else here so vehemently claims a huge disparity with, so it's a more than fair comparison. He clearly stated he's not disparaging the PS4, just explaining his perspective and bursting the "40-50% better in games than XBO" bubble.
He's an official Microsoft spokesman who is adding bandwidth numbers together and pretending the total is a meaningful number. That was bad in 2005; in 2013, it's frankly unbelievable.
 

benny_a

extra source of jiggaflops
Apr 25, 2009
17,350
0
0
He's obviously using the same leaked and announced specs that everyone else here so vehemently claims a huge disparity with, so it's a more than fair comparison. He clearly stated he's not disparaging the PS4, just explaining his perspective and bursting the "40-50% better in games than XBO" bubble.
He has not burst a single bubble. He has even used arguments that TheKayle gets ridiculed for.
 

Spongebob

Banned
Jan 13, 2013
6,530
0
0
twitter.com
• Adding to that, each of our CU's is running 6% faster. It's not simply a 6% clock speed increase overall.
It's the same thing...
• We have more memory bandwidth. 176gb/sec is peak on paper for GDDR5. Our peak on paper is 272gb/sec. (68gb/sec DDR3 + 204gb/sec on ESRAM). ESRAM can do read/write cycles simultaneously so I see this number mis-quoted.
You can't add bandwidth like that...

Also, how exactly are you folks calculating the 204 GB/s value?
• We have at least 10% more CPU. Not only a faster processor, but a better audio chip also offloading CPU cycles.
I thought you weren't aware of Sony's final CPU clock speed...
• We understand GPGPU and its importance very well. Microsoft invented Direct Compute, and have been using GPGPU in a shipping product since 2010 - it's called Kinect.
What is this even supposed to mean?
• Speaking of GPGPU - we have 3X the coherent bandwidth for GPGPU at 30gb/sec which significantly improves our ability for the CPU to efficiently read data generated by the GPU.
The PS4 also has 30 GB/s with Onion and Onion+...
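On Spongebob's question about where 204 GB/s comes from: one commonly offered reconstruction (an assumption on my part, not an official Microsoft breakdown) is a 128-byte-wide ESRAM path at the original 800 MHz GPU clock, with a read and a write credited in the same cycle:

```python
# Assumed (not official) reconstruction of the 204 GB/s ESRAM figure:
# a 1024-bit (128-byte) interface at 800 MHz, counting a simultaneous
# read and write on every cycle.
bytes_per_cycle = 128
clock_hz = 800e6
one_way = bytes_per_cycle * clock_hz / 1e9  # 102.4 GB/s each direction
both_ways = one_way * 2                     # 204.8 GB/s combined peak
print(one_way, both_ways)
```

Whether real workloads can actually issue a read and a write every single cycle is precisely the part critics in this thread are skeptical of.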
 

TheCloser

Banned
Mar 8, 2013
1,974
0
0
I see my statements the other day caused more of a stir than I had intended. [...]

• We have more memory bandwidth. 176gb/sec is peak on paper for GDDR5. Our peak on paper is 272gb/sec. (68gb/sec DDR3 + 204gb/sec on ESRAM). ESRAM can do read/write cycles simultaneously so I see this number mis-quoted.

[...]
I do not mean to offend when I ask this, but who typed up this nonsense for you? A 272 GB/s peak on paper? What rubbish is this? I study computer science for a living, and this is garbage: creative engineering and FUD at its best. You do not have anywhere close to a 272 GB/s peak. It's one thing to beat around the bush, but it's another to straight up lie. The fact that the ESRAM can only hold 32 MB moving at the previously reported 194 GB/s peak should point to the kind of nonsense you are spreading; the majority of the data in RAM can only move at 68 GB/s. I'm not even going to bother with the other nonsense in this post. It's truly embarrassing that you can come on here and type this. You do not just add up bandwidth numbers like that. Shameful.
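The objection to simply adding the two peaks can be illustrated with a simple blended-bandwidth model. The split fractions below are invented purely for illustration; the real mix depends entirely on how much of a frame's working set fits in 32 MB of ESRAM:

```python
# Illustrative only: effective bandwidth when traffic is split between
# a small fast pool (ESRAM, 204.8 GB/s claimed peak) and a large slow
# pool (DDR3, 68 GB/s). The fractions passed in are made-up examples.
def effective_bw(frac_in_esram, esram_bw=204.8, ddr3_bw=68.0):
    # Time to move one unit of data, split across the two pools,
    # then inverted back into a rate (a harmonic-mean-style blend).
    time = frac_in_esram / esram_bw + (1 - frac_in_esram) / ddr3_bw
    return 1.0 / time

print(effective_bw(1.0))  # 204.8 only if *everything* fits in ESRAM
print(effective_bw(0.5))  # ≈ 102 GB/s with a 50/50 traffic split
```

Under this model the naive sum (272 GB/s) is never reached, and the blended figure is dragged toward the slower pool as soon as most data lives in DDR3.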
 

FINALBOSS

Banned
Apr 23, 2010
3,801
0
0
Paris
It's useless to argue. People here have their minds made up. Once the games start coming out, there will be no guessing.
It's not about arguing; it's about presenting facts. Tech-heads are genuinely curious, even ones who aren't really interested in console gaming. So stop acting like everyone is here to put down the Xbone because our minds are made up about the PS4 being the more capable machine.

We want explanations. This was not one of them.
 

Klocker

Member
Apr 1, 2006
8,144
0
1,145
Arizona
Euuugh... Seriously man, just stop this. It looks no better when you comment on how "we don't know the full story from MS' side" when the exact same thing can very well be said about your perspective on Sony's hardware and engineering. And again, you comment on this with an extremely obvious bias in the discussion. I don't think it's beneficial for you to participate in these discussions; again, I feel like it's going to do more damage to people's perspective of your (and other MS representatives') participation on these forums when you come in here and start engaging in hardware comparisons with your direct competition.
I don't understand why people can't appreciate his participation or perspective without trying to shut him up and keep him from giving the view from the other side.

Why try to shut down the conversation? He obviously would not be coming on here and putting it on the line if he were not confident he could back it up when the games come out over the next year or two.

The assumption that the PS4 is automatically 40% better in games deserves to be challenged.
 

statham

Banned
Dec 8, 2010
15,404
0
0
Florida
I see my statements the other day caused more of a stir than I had intended. [...] Thanks again for letting me participate. Hope this gives people more background on my claims.
Good post. I want another Kinect fitness game; please tell Phil.
 
Status
Not open for further replies.