
Albert Penello puts dGPU Xbox One rumor to rest

Status: Not open for further replies.

TheD

The Detective
So if you're saying you can't add bandwidth - you can.

NO, you cannot add them when one pool is much smaller and faster than the other!

Only things in the SRAM will have that higher bandwidth, everything else will be stuck with the much slower main RAM!

Please stop trying to insult our hardware knowledge.
 
So while I appreciate the technical acumen of folks on this board - you should know that every single thing I posted, I reviewed with him for accuracy. I wanted to make sure I was stating things factually, and accurately.

So if you're saying you can't add bandwidth - you can. If you want to dispute that ESRAM has simultaneous read/write cycles - it does.

I know this forum demands accuracy, which is why I fact checked my points with a guy who helped design the machine.
Clearly put, and doubled down. I like this.
 
Here's something I don't understand:

If the performance differences aren't as big as the specs would lead everyone to believe, why don't they simply explain why?

Is Microsoft holding themselves to an NDA? Are they trying to play hard to get?

Edit: Nevermind - saw the first post and didn't read the thread. The moral of the story? I'm an idiot.
 

Klocker

Member
These next couple of months are going to be irritatingly hard to get by if this kind of posting continues. Especially when MSFT guys are spouting whatever they want while Sony goes about their business.

Ignore MS threads... I ignore Sony ones since I'm not getting that machine... win/win :)
 

The Flash

Banned
[gif]

Glorious
 

Elios83

Member
I see my statements the other day caused more of a stir than I had intended. I saw threads locking down as fast as they popped up, so I apologize for the delayed response.

I was hoping my comments would lead the discussion to be more about the games (and the fact that games on both systems look great) as support for my point about performance, but unfortunately I saw more discussion of my credibility.

So I thought I would add more detail to what I said the other day, so that perhaps people can debate the individual merits instead of making personal attacks. This should hopefully dismiss the notion that I'm simply creating FUD or spin.

I do want to be super clear: I'm not disparaging Sony. I'm not trying to diminish them, or their launch, or what they have said. But I do need to draw comparisons, since I am trying to explain that the way people are calculating the differences between the two machines isn't completely accurate. I think I've been upfront that I have nothing but respect for those guys, but I'm not a fan of the misinformation about our performance.

So, here are a couple of points about some of the individual parts for people to consider:

• 18 CUs vs. 12 CUs =/= 50% more performance. Multi-core processors have inherent inefficiency with more CUs, so it's simply incorrect to say 50% more GPU.
• Adding to that, each of our CUs is running 6% faster. It's not simply a 6% clock speed increase overall.
• We have more memory bandwidth. 176 GB/s is peak on paper for GDDR5. Our peak on paper is 272 GB/s (68 GB/s DDR3 + 204 GB/s ESRAM). ESRAM can do read/write cycles simultaneously, so I see this number misquoted.
• We have at least 10% more CPU. Not only a faster processor, but a better audio chip also offloading CPU cycles.
• We understand GPGPU and its importance very well. Microsoft invented DirectCompute, and we have been using GPGPU in a shipping product since 2010 - it's called Kinect.
• Speaking of GPGPU: we have 3X the coherent bandwidth for GPGPU at 30 GB/s, which significantly improves the CPU's ability to efficiently read data generated by the GPU.
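
To put rough numbers on the CU bullet: theoretical shader throughput scales as CUs x clock, so the paper gap can be computed directly. A minimal sketch, assuming the clock speeds reported around this time (853 MHz for Xbox One; 800 MHz for PS4, which Sony had not confirmed):

# Theoretical GCN shader throughput: CUs x 64 ALUs x 2 FLOPs/cycle x clock.
# Clock assumptions: XB1 853 MHz (announced), PS4 800 MHz (reported, unconfirmed).
def peak_gflops(cus, clock_mhz, alus_per_cu=64, flops_per_alu=2):
    return cus * alus_per_cu * flops_per_alu * clock_mhz / 1000.0

xb1 = peak_gflops(12, 853)  # ~1310 GFLOPS
ps4 = peak_gflops(18, 800)  # ~1843 GFLOPS
print(f"paper gap: {(ps4 / xb1 - 1) * 100:.0f}%")  # ~41%, not 50%

Under those assumptions the paper gap works out to roughly 41% once the higher clock is counted, rather than a flat 50%; whether real workloads track either number is exactly what the rest of the thread argues about.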

Hopefully with some of those more specific points people will understand where we have reduced bottlenecks in the system. I'm sure this will get debated endlessly but at least you can see I'm backing up my points.

I still believe that we get little credit for the fact that, as a software company, the people designing our system are some of the smartest graphics engineers around – they understand how to architect and balance a system for graphics performance. Each company has its strengths, and I feel that our strength is overlooked when evaluating both boxes.

Given this continued belief in a significant gap, we're working with our most senior graphics and silicon engineers to get into more depth on this topic. They will be more credible than I am, and can talk in detail about some of the benchmarking we've done and how we balanced our system.

Thanks again for letting me participate. Hope this gives people more background on my claims.

But you don't have higher bandwidth. Memory bandwidth doesn't work like that, and most of your bandwidth comes from a really small buffer (32MB), which can't be assumed to be the same as having full access to 8GB of data at 176GB/s.
Also, it would be nice to know more about how this simultaneous read and write on your ESRAM works; it must be so tricky and far from real-world situations that the company didn't even know about it until after E3, when it stated that the ESRAM bandwidth was 102GB/s ;)
A GPU isn't just CUs. What about the ROPs, the pixel and texel fill rates? Those are very important for high frame rates at high resolutions, and you completely forget to mention them... yeah, because you're at a clear disadvantage there as well, having HALF of them.
You're talking about the optimizations on your side but you totally dismiss what was done on the other side. You say that your CPU is more powerful by 10% (which, btw, amounts to something irrelevant considering we're talking about a difference of a bunch of gigaflops between teraflops machines), but in doing so you're claiming to know something that Sony has never announced.

Just let the games talk. This way you're just giving more and more attention to an aspect where you can't win, because the Xbox One hardware is objectively inferior and the chance of backfiring is extremely high... in fact, it has already happened.
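
Elios83's objection can be made concrete with a toy model (entirely a simplification for illustration, not anything from either vendor): if the two pools transfer concurrently, the aggregate rate depends on what fraction of the traffic actually targets the 32 MB ESRAM.

# Toy model: fraction f of all bytes moved live in ESRAM, the rest in DDR3.
# If both pools run concurrently, the total rate R is capped by whichever
# pool exhausts its share first: R*f <= 204 and R*(1-f) <= 68.
DDR3_BW, ESRAM_BW = 68.0, 204.0  # GB/s, the peak paper figures from the thread

def aggregate_bw(f):
    if f <= 0.0:
        return DDR3_BW
    if f >= 1.0:
        return ESRAM_BW
    return min(ESRAM_BW / f, DDR3_BW / (1.0 - f))

for f in (0.25, 0.50, 0.75, 0.90):
    print(f"{f:.0%} of traffic in ESRAM -> {aggregate_bw(f):.0f} GB/s aggregate")

In this model the quoted 272 GB/s only appears when exactly 75% of all bytes moved fit inside the 32 MB pool; a DDR3-heavy workload sits much closer to 68 GB/s, which is Elios83's point.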
 
At Microsoft, we have a position called a "Technical Fellow." These are engineers across disciplines at Microsoft who are basically at the highest stage of technical knowledge. There are very few across the company, so it's a rare and respected position.

We are lucky to have a small handful working on Xbox.

I've spent several hours over the last few weeks with the Technical Fellow working on our graphics engines. He was also one of the guys that worked most closely with the silicon team developing the actual architecture of our machine, and knows how and why it works better than anyone.

So while I appreciate the technical acumen of folks on this board - you should know that every single thing I posted, I reviewed with him for accuracy. I wanted to make sure I was stating things factually, and accurately.

So if you're saying you can't add bandwidth - you can. If you want to dispute that ESRAM has simultaneous read/write cycles - it does.

I know this forum demands accuracy, which is why I fact checked my points with a guy who helped design the machine.

This is the same guy, by the way, who jumps on a plane when developers want more detail and a hands-on review of code and how to extract the maximum performance from our box. He has heard first-hand from developers exactly how our boxes compare, which has only confirmed our belief that they are nearly the same in real-world situations. If he wasn't coming back smiling, I certainly wouldn't be so bullish dismissing these claims.

I'm going to take his word (we just spoke this AM, so his data is about as fresh as possible) versus statements by developers speaking anonymously, and also potentially from several months ago before we had stable drivers and development environments.


I think it's time to hire a new fellow. Even Einstein can be wrong, just saying.
 

Spongebob

Banned
At Microsoft, we have a position called a "Technical Fellow." [...] So if you're saying you can't add bandwidth - you can. If you want to dispute that ESRAM has simultaneous read/write cycles - it does. [...] I'm going to take his word (we just spoke this AM, so his data is about as fresh as possible) versus statements by developers speaking anonymously. [...]
I've heard it all.
 

ElTorro

I wanted to dominate the living room. Then I took an ESRAM in the knee.
So if you're saying you can't add bandwidth - you can.

Sure, but not all bandwidths are equal. The bandwidth to a small pool of memory can only be saturated by access to data that actually fits into that pool, and when you use the ESRAM pool for render targets, as everybody did on the 360, textures will be fetched from main memory, so texture fetching will be limited by the DDR3 bandwidth. In addition, if your render targets don't fit into the ESRAM pool, which constrained deferred renderers (at 720p) on the 360 all the time and might constrain the XB1 (at 1080p) as well, then you lose the flexibility of a single fast pool of memory.

Of course, I do not know specifics beyond what is publicly available to me, but it is a fact nonetheless that not all bandwidths are equal.
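
ElTorro's sizing concern is easy to sanity-check. A rough sketch, assuming a common (but by no means universal) deferred layout of four 32-bit render targets plus a 32-bit depth buffer:

# Does a typical 1080p G-buffer fit in 32 MB? Back-of-the-envelope only;
# real engines vary widely in target count and bit depth.
W, H = 1920, 1080

def target_mb(bytes_per_pixel, count=1):
    return W * H * bytes_per_pixel * count / (1024 * 1024)

gbuffer = target_mb(4, count=4)  # ~31.6 MB across four targets
depth = target_mb(4)             # ~7.9 MB
print(f"total: {gbuffer + depth:.1f} MB vs. 32 MB of ESRAM")  # ~39.6 MB

Even this modest layout overflows the pool at 1080p, which is why the 360-era pattern of tiling or spilling some targets out to main memory keeps coming up in this discussion.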
 

Klocker

Member
At Microsoft, we have a position called a "Technical Fellow." [...] So if you're saying you can't add bandwidth - you can. If you want to dispute that ESRAM has simultaneous read/write cycles - it does. [...] I'm going to take his word (we just spoke this AM, so his data is about as fresh as possible) versus statements by developers speaking anonymously. [...]



Excellent...thanks for clarifying the facts
 

benny_a

extra source of jiggaflops
I'm bailing. Guys, even if you're emotional because you feel he is being disingenuous or dishonest or whatever - try to keep it respectful.

These threads can be a great opportunity to learn from the back and forth. We should try to not get it closed.
 

vpance

Member
He is spreading disinformation, but it's his job, so you can't really blame him. The problem starts when people like you blindly believe what he says when he is wrong, oh so very wrong.

Disinfo only sets out to mislead paying consumers. It may be his job but we should not just dismiss it out of hand either.

These next couple of months are going to be irritatingly hard to get by if this kind of posting continues. Especially when MSFT guys are spouting whatever they want while Sony goes about their business.

If some write up comes from the MSFT "fellows" you can bet it will be very pro Xbone.

Agreed. It really doesn't serve the community well at all at this point. It's just noise creation.
 

Curufinwe

Member
At Microsoft, we have a position called a "Technical Fellow." [...] So if you're saying you can't add bandwidth - you can. If you want to dispute that ESRAM has simultaneous read/write cycles - it does. [...]

You can add bandwidth numbers together. They just don't mean anything when one number is for a paltry 32 MB of ESRAM compared to 8 GB of DDR3. No one outside of the Microsoft engineers trying to justify their decisions thinks that setup is preferable to having 8 GB of GDDR5 RAM.

Where did you get Sony's final CPU clock speed from, and where did you get the information that the PS4 has a maximum of 10GB/s coherent bandwidth?
 

WinFonda

Member
At Microsoft, we have a position called a "Technical Fellow" These are engineers across disciplines at Microsoft that are basically at the highest stage of technical knowledge. There are very few across the company, so it's a rare and respected position.

I'm going to take his word (we just spoke this AM, so his data is about as fresh as possible) versus statements by developers speaking anonymously
Albert, the technical fellow you're referring to is much more anonymous to us than the sources we have telling us PS4 is a faster machine. You're basically saying "we've got the Wizard of Oz back here behind the curtain." He's faceless, nameless, and unaccountable. Oh, and all-powerful. Obviously.
 

Pain

Banned
At Microsoft, we have a position called a "Technical Fellow." [...] So if you're saying you can't add bandwidth - you can. If you want to dispute that ESRAM has simultaneous read/write cycles - it does. [...] I'm going to take his word (we just spoke this AM, so his data is about as fresh as possible) versus statements by developers speaking anonymously. [...]
So you know the PS4's specs, complete with final clock speeds?
 

ypo

Member
It's pretty obvious at this point that Penello is not interested in the *honest* conversations he proclaimed and wants people to believe he's having.
 

Skeff

Member
So the absolute maximum possible, counting every single cycle in the most ideal situation possible, based on that formula and with nothing to do with the real world... is what you call the "reasonable expectation"? Did I read that right? Serious question...

I compared it to the easily accessible 68GB/s of the DDR3 in the XB1; the maximum theoretical performance of both of those, as stated, is a reasonable representation. I suppose I should have said, though:

The ESRAM's simultaneous read+write is not always applicable.
 
This should hopefully dismiss the notion I'm simply creating FUD or spin.

You say the above and then proceed to post the following FUD.

So, here are a couple of points about some of the individual parts for people to consider: [...the CU, bandwidth, CPU, and GPGPU bullets quoted in full above...]

So how long have you had access to Sony's hardware?
Since you are clearly in possession of their specs and customizations.
These are exactly the kind of actions that damage the company. I like to give people the benefit of the doubt, but with each new post like this you get ever closer to my ignore list.
 

astraycat

Member
At Microsoft, we have a position called a "Technical Fellow." [...] So if you're saying you can't add bandwidth - you can. If you want to dispute that ESRAM has simultaneous read/write cycles - it does. [...] I'm going to take his word (we just spoke this AM, so his data is about as fresh as possible) versus statements by developers speaking anonymously. [...]

While this is great and all, what I (and I presume many others) want are numbers that square with the numbers previously given.

109 GB/s is easily attained with the following formula:

(1024 bits/cycle) * (1 byte / 8 bits) * (853 MHz) = 109 GB/s

However, we have no such formula for the oft-reported 204 GB/s number. Assuming simultaneous read/writes as a theoretical max for each cycle, we get the following:

(2048 bits/cycle) * (1 byte / 8 bits) * (853 MHz) = 218 GB/s

This is the main point of contention when talking about ESRAM bandwidth; if you could get it cleared up, it would calm the forum wars quite a bit.
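
For what it's worth, one candidate formula for the missing 204 GB/s was circulating at the time, attributed by Digital Foundry to Microsoft's own architects (treat it as reported, not confirmed): a read can issue every cycle, but a simultaneous write only on roughly 7 of every 8 cycles. A quick check against the quoted figures:

# Checking the reported 7-of-8-cycles explanation against the quoted numbers.
BYTES_PER_CYCLE = 1024 // 8   # 1024-bit path = 128 bytes each way
CLOCK_GHZ = 0.853

read_only = BYTES_PER_CYCLE * CLOCK_GHZ   # ~109.2 GB/s
dual_ideal = 2 * read_only                # ~218.4 GB/s, the naive ceiling
dual_7of8 = read_only * (1 + 7 / 8)       # ~204.7 GB/s

print(f"{read_only:.1f} / {dual_ideal:.1f} / {dual_7of8:.1f} GB/s")

109.2 x 1.875 comes to roughly 204.7 GB/s, which lands on the quoted 204 figure far better than the naive 218 GB/s ceiling does.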
 

Spongebob

Banned
I'm bailing. Guys, even if you're emotional because you feel he is being disingenuous or dishonest or whatever - try to keep it respectful.

These threads can be a great opportunity to learn from the back and forth. We should try to not get it closed.
Have you heard of the golden rule?
 

ElTorro

I wanted to dominate the living room. Then I took an ESRAM in the knee.
I think it's time to hire a new fellow. Even Einstein can be wrong, just saying.

As a side note, information can always suffer degraded accuracy when it is passed down through many different hands.
 

TheCloser

Banned
At Microsoft, we have a position called a "Technical Fellow." [...] So if you're saying you can't add bandwidth - you can. If you want to dispute that ESRAM has simultaneous read/write cycles - it does. [...] I'm going to take his word (we just spoke this AM, so his data is about as fresh as possible) versus statements by developers speaking anonymously. [...]


I'd suggest that you stop. What you posted is false. If he reviewed this and gave you the go-ahead, he should be let go immediately. Like I said earlier, your post is a combination of creative accounting and lies, a theme that resonates throughout the whole campaign leading up to release. Even the latest Xbox One commercial is designed to spread misinformation that will lead people to believe they can watch the info for free if they buy the Xbox One. You guys don't get it. Lying will get you nowhere, but if you want to keep at it, go ahead.
 

Finalizer

Member
I didn't really want to get into the meat of the arguments themselves... But fuck it, I think this one demands attention.

Argument from Authority.

How about we get another perspective on this one - someone wanna ask Carmack or some such on Twitter about adding bandwidths? If GAF engineers aren't good enough, surely someone of his caliber is sufficient to comment on the matter.
 
Alright, I get that how good a game looks is subjective from person to person. That's all fine and dandy and obvious. But saying a game looks significantly downgraded, from a CGI-versus-real-time-graphics POV, is ridiculous - especially comparing screens and videos from a bunch of off-screen footage vs. direct feed.

Deep Down does look a bit worse than its initial reveal, but nowhere near Motorstorm levels.

You asked for it lol!!

[Deep Down screenshots]

Reality

[Deep Down screenshot]


Shame on you Capcom!
 

StevieP

Banned
You can add bandwidth numbers together. They just don't mean anything when one number is for a paltry 32 MB of ESRAM compared to 8 GB of DDR3. [...]

Microsoft chose 8GB of DDR3 because it was the only way to have 8GB of guaranteed RAM in the design stages, and that was needed for the vision of the machine. Sony lucked out (their setup originally had *2*GB of GDDR5 when the first papers were sent out).
 
Saying "Are audio chip is better" without putting out data is straight up elementary. The PS4 and X1 have dedicated audio chips, which is good for both consoles. But what makes the X1 sound chip "better"? Last I remember, Mark said that there's a dedicated chat chip as well....
 
It's much easier to utilize GDDR5, though. That alone gives it an advantage, even if the XB1 has the ability to sometimes surpass the PS4's bandwidth.

I like Sony's memory setup better, and not having to deal with ESRAM to reach your goals is ideal as well...

But I'm not arguing that. He said that listing peak bandwidth makes his stomach turn when Sony is also listing peak bandwidth. I'm just calling it like it is.
 

Vizzeh

Banned
Albert, the technical fellow you're referring to is much more anonymous to us than the sources we have telling us PS4 is a faster machine. You're basically saying "we've got the Wizard of Oz back here behind the curtain." He's faceless, nameless, and unaccountable.
Wizard of oz lol :p
 

x-Lundz-x

Member
At Microsoft, we have a position called a "Technical Fellow." [...] So if you're saying you can't add bandwidth - you can. If you want to dispute that ESRAM has simultaneous read/write cycles - it does. [...] I'm going to take his word (we just spoke this AM, so his data is about as fresh as possible) versus statements by developers speaking anonymously. [...]

I don't disagree with anything you have posted, because I am certainly not an engineer, so I really can't comment on the memory or read/write cycles. What I can understand is that you (Xbox) have a much weaker GPU on paper, and that's the one thing you keep failing to address. Why is that, and why does this not matter? Having a dedicated audio chip is not going to make up this difference no matter how you spin it. However, I appreciate you posting, so thank you.
 

TheCloser

Banned
While this is great and all, what I (and I presume many others) want are numbers that square with the numbers previously given. [...] This is the main point of contention when talking about ESRAM bandwidth; if you could get it cleared up, it would calm the forum wars quite a bit.

The math doesn't work and it never will because it's a lie.
 

Bsigg12

Member
At Microsoft, we have a position called a "Technical Fellow." [...] So if you're saying you can't add bandwidth - you can. If you want to dispute that ESRAM has simultaneous read/write cycles - it does. [...] I'm going to take his word (we just spoke this AM, so his data is about as fresh as possible) versus statements by developers speaking anonymously. [...]

This is what we call sticking to your guns. It's reassuring when someone can really dig in. Solid post.
 
I remember when Major Nelson pulled the same "adding memory bandwidth" trick back during the 360 days to tout that system's superiority.

Can't believe it still goes on to this day.

I have no doubt the technical fellow knows his stuff, but that doesn't mean people with experience don't stretch the truth or obfuscate facts in order to make their designs look better. Appealing to authority isn't something I'm going to accept out of blind loyalty.
 

Skenzin

Banned
Maybe he's alluding to the idea that a GPU rated at 1.3TF that runs at 90-100% efficiency is on par with a 1.8TF GPU that runs at 65-80% efficiency. As in, it's able to reach its full throughput in normal situations rather than hypothetical lab situations. This is by far the best part of a console generation: the anticipation.
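
Skenzin's hypothetical is easy to put numbers on; the utilization percentages below are his speculation, nothing more.

# Paper TFLOPS x assumed sustained utilization (Skenzin's hypothetical figures).
def effective_range(paper_tf, util_lo, util_hi):
    return paper_tf * util_lo, paper_tf * util_hi

xb1_lo, xb1_hi = effective_range(1.31, 0.90, 1.00)  # 1.18 - 1.31 TF
ps4_lo, ps4_hi = effective_range(1.84, 0.65, 0.80)  # 1.20 - 1.47 TF
print(f"XB1 {xb1_lo:.2f}-{xb1_hi:.2f} TF vs PS4 {ps4_lo:.2f}-{ps4_hi:.2f} TF")

Under those assumed efficiencies the two ranges overlap, which is the entire substance of the "on par" argument; whether anything justifies those particular percentages is another matter.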
 
Have you heard of the golden rule?
I think it's a matter of giving a poster the benefit of the doubt unless you know otherwise.

In this case Albert has stated two things as facts: one regarding adding bandwidth and the other regarding a read/write cycle. It seems very unlike standard PR spin and FUD to hold fast to such statements. And if he's wrong, surely somebody can point that out with similar certainty?
 

Spongebob

Banned
I remember when Major Nelson pulled the same "adding memory bandwidth" trick back during the 360 days to tout that system's superiority. [...]

Never Forget.

[image]
 

Pain

Banned
It's funny how Albert's posts are only bringing more attention to the Xbox One's disadvantages. Talk about one step forward, two steps back.

I look forward to a thorough, unbiased comparison by a reputable website.
 

Vizzeh

Banned
I don't disagree with anything you have posted [...] What I can understand is that you (Xbox) have a much weaker GPU on paper, and that's the one thing you keep failing to address. Why is that, and why does this not matter? [...]

It's weaker because the space on the die that contains the CPU and GPU also has to accommodate the ESRAM and move engines, where Sony filled theirs with more CUs, etc.
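
A rough sense of scale for that die tradeoff (the six-transistors-per-bit figure is the standard 6T SRAM cell assumption, and the five-billion total is the SoC transistor count Microsoft quoted publicly):

# Transistor budget consumed by 32 MB of 6T ESRAM vs. the whole SoC.
ESRAM_BITS = 32 * 1024 * 1024 * 8
esram_transistors = ESRAM_BITS * 6   # ~1.6 billion
print(f"~{esram_transistors / 1e9:.1f}B of ~5B SoC transistors")

Transistor count isn't the same as die area (SRAM packs very densely), but it shows why a 32 MB scratchpad and extra CUs were competing for the same silicon budget.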
 