Damn. Amazing lol.
People have pointed out that MS (seemingly) only started gaining interest in GAF after the DRM fiasco. I wouldn't be at all surprised if they didn't know what kind of a den of truth wolves they were jumping into and were hoping that the spin would trickle down from the "hardcore" to their more casual friends.
I have to imagine it's a situation where someone who has been closely associated with the project for years doesn't want to downplay his creation. Albert's posts are usually honest and do away with the PR speak, but he's not going to admit there's a significant difference in power, especially if that difference has a tendency to be exaggerated.
Are you kidding? I mean really - are you kidding?
This is part and parcel of the territory here. You have to answer for your statements, especially if you're here in an official capacity. People get banned for being out of line, but poking holes in the arguments of other posters is well within the rules.
I've had my work here both praised and eviscerated, called out by numerous forum folks both publicly and via PM when I got stuff wrong, and I'm a goddamn admin. Guess what - I wouldn't have it any other way. That is what makes NeoGAF what it is.
There are many, many people who are more than capable of assessing, vetting and debunking technical claims and they have every right to do so. That's the price of doing business here. If we had official Nintendo or Sony reps on board, they would be subject to the same process.
If you're scared, buy a dog.
Sure. If you say so
Likely did the obvious. Probably asked and found out from developers personally.
Apparently he misunderstood what they were saying.
Holy crap this thread keeps giving. What a fantastic post.
The simultaneous read/write at 200GB/s is completely theoretical, whereas the 176GB/s in the PS4, just like the 109GB/s or the 68GB/s in the XB1, are reasonable expectations.
tl;dr
176 is the absolute maximum possible
68 is fine
109 is the guaranteed figure
200GB/s+ is not close.
Also, you don't just add them up.
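To make the "you don't just add them up" point concrete, here's a minimal back-of-envelope sketch (my own illustration, nothing official): the summed figure is only reachable if every bus is saturated in the same cycle, and the utilization numbers below are made up purely for demonstration.

```python
# Back-of-envelope sketch (my own illustration, not an official figure):
# the peak numbers quoted above, and why naively summing them is optimistic.
# The sum is only reachable if every bus is 100% busy in the same cycle.

DDR3_PEAK  = 68.0    # GB/s, XB1 main memory (figure quoted above)
ESRAM_PEAK = 109.0   # GB/s, XB1 ESRAM, one direction (figure quoted above)
GDDR5_PEAK = 176.0   # GB/s, PS4 theoretical maximum (figure quoted above)

def combined_xb1(ddr3_util, esram_util):
    """Effective XB1 bandwidth under assumed utilization fractions (0..1)."""
    return DDR3_PEAK * ddr3_util + ESRAM_PEAK * esram_util

print(combined_xb1(1.0, 1.0))  # 177.0 -> equals the sum only when both are saturated
print(combined_xb1(0.6, 0.8))  # 128.0 -> a more realistic mix (utilizations made up)
```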
I know the PS4 is more powerful, but I do think the XB1 has some advantages in some areas. Overall, though, the PS4 is more powerful, and I don't think any regular Xbox fan on this site doubts this. It hasn't swayed my purchasing decision; I'm excited to see what Kinect can bring to the table.
I'm surprised Albert even bothers to post on here with the way some of you speak to him...
Or maybe they just want to correct some misinformation
Penello is still adamant that the power disparity between the two consoles is not as huge as commonly believed. Based on the data we currently know, do you tech-savvy guys believe there is any feasible way that he could be correct, or are his arguments just damage control?
He claims that it's not a simple 6% increase.
It's a 6% increase per CU!...as though that makes a difference???
Using the logic he used for the memory bandwidth, that means 6% increase x 12 CUs = 72% increase, putting it well beyond the PS4!
Don't worry, he's not PR, he's an MS engineer, so I trust him.
And based on what I've seen, that's just backfiring spectacularly.
Because he's engaging in these discussions, it's so easy to see his participation on these forums as being designed to do just that - spin news in the hopes of changing the narrative. And that kind of image is not what MS needs to be painting themselves with right now, not when there are plenty of other things that Penello and co. could be doing around here instead, plenty of things that could be building genuine good will.
I haven't really been following this but why is MS making such a big deal about this now when they were proudly proclaiming back during the reveal that they purposefully did not target the high-end graphics?
Look at my posts. Albert clearly stated that at 30GB/s the XB1 has 3x more coherent bandwidth than the PS4. This means Albert is stating the PS4 has a maximum of 10GB/s coherent bandwidth, which we know is factually incorrect.
Depending on what other things you can do with the console, I think it could be quite useful. I'm mainly thinking of long loading times, or multiplayer lobbies where you're waiting for the next round.
Maybe even a fighting game lobby where it isn't your turn for a while.
The question we should all be asking Albert Penello is what the hell happened to cloud computing?
Like the rest of his post, there is nothing inherently wrong with his argument; it's just being incorrectly applied. His argument is simply that adding cores is not always beneficial over increasing the speed of the cores you actually use. This is true in general computing; the only problem is that it really isn't true of graphics programming and the GCN architecture, which are heavily parallel. It's a pretty easy mistake to make, and not at all crazy -- just wrong.
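To illustrate the parallelism point, here is a toy Amdahl-style model (a sketch only, not from anyone in the thread): the CU counts and clocks below are the commonly cited figures for the two machines, and the parallel fraction is an assumed knob. For work that is almost entirely parallel, as shader work tends to be, CU count swamps a small clock advantage.

```python
# Toy Amdahl-style model (my own sketch, not from anyone in this thread) of
# "fewer, faster cores" vs "more, slower cores". CU counts and clocks are the
# commonly cited figures for the two consoles; the parallel fraction is an
# assumed parameter, not a measurement.

def throughput(cus, clock_mhz, parallel_fraction):
    """Relative throughput: the serial part scales with clock only,
    the parallel part scales with clock * CU count."""
    serial = 1.0 - parallel_fraction
    return clock_mhz * (serial + parallel_fraction * cus)

xb1_cus, xb1_clock = 12, 853   # assumption: commonly cited XB1 GPU figures
ps4_cus, ps4_clock = 18, 800   # assumption: commonly cited PS4 GPU figures

for p in (0.0, 0.5, 0.99):     # shader work sits near the fully parallel end
    ratio = throughput(xb1_cus, xb1_clock, p) / throughput(ps4_cus, ps4_clock, p)
    print(f"parallel fraction {p}: XB1/PS4 ratio {ratio:.2f}")
# ~1.07 for purely serial work (clock wins), ~0.71 once the work is mostly parallel
```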
He isn't PR or an engineer.
At Microsoft, we have a position called a "Technical Fellow." These are engineers across disciplines at Microsoft that are basically at the highest stage of technical knowledge. There are very few across the company, so it's a rare and respected position.
We are lucky to have a small handful working on Xbox.
I've spent several hours over the last few weeks with the Technical Fellow working on our graphics engines. He was also one of the guys that worked most closely with the silicon team developing the actual architecture of our machine, and knows how and why it works better than anyone.
So while I appreciate the technical acumen of folks on this board - you should know that every single thing I posted, I reviewed with him for accuracy. I wanted to make sure I was stating things factually, and accurately.
So if you're saying you can't add bandwidth - you can. If you want to dispute that ESRAM has simultaneous read/write cycles - it does.
I know this forum demands accuracy, which is why I fact checked my points with a guy who helped design the machine.
This is the same guy, by the way, that jumps on a plane when developers want more detail and hands-on review of code and how to extract the maximum performance from our box. He has heard first-hand from developers exactly how our boxes compare, which has only proven our belief that they are nearly the same in real-world situations. If he wasn't coming back smiling, I certainly wouldn't be so bullish dismissing these claims.
I'm going to take his word (we just spoke this AM, so his data is about as fresh as possible) versus statements by developers speaking anonymously, and also potentially from several months ago before we had stable drivers and development environments.
Fixed that for you. And your statement is incorrect: the 176GB/s is the absolute maximum, NOT the reasonable expectation.
Here's where that number comes from:
The nominal speed of the RAM is 1,375MHz. GDDR5 makes it 5,500MHz effective.
Taking the effective speed (5,500) x the bus width (256-bit) gives you the maximum, aka theoretical, bandwidth:
5,500 x 32 bytes (256 bits / 8) = 176GB/s
That right there is an absolute maximum; it's not an average by any stretch of the word.
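For anyone who wants to check the arithmetic, here is the same calculation spelled out (a sketch of the math above, nothing more; the 4x multiplier is GDDR5's quad-pumped data rate):

```python
# The same arithmetic, spelled out (a sketch of the math above, nothing more;
# the 4x multiplier is GDDR5's quad-pumped data rate).

nominal_mhz    = 1375                     # GDDR5 command clock
effective_mts  = nominal_mhz * 4          # 5500 MT/s effective data rate
bus_width_bits = 256
bytes_per_transfer = bus_width_bits // 8  # 32 bytes move per transfer

peak_gb_per_s = effective_mts * 1e6 * bytes_per_transfer / 1e9
print(peak_gb_per_s)                      # 176.0 -- the theoretical ceiling, not an average
```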
Luckily GAF is viewed by many, but think of the poor souls on other, less informed forums, lapping up all this misinformation, doe-eyed, with silver spoons.
Albert came back to spread more misinformation around?
Unbelievable.
It's much easier to utilize GDDR5, though. That alone gives it an advantage even if the XB1 has the ability to sometimes surpass the PS4's bandwidth.
Fixed that for you. And your statement is incorrect: the 176GB/s is the absolute maximum, NOT the reasonable expectation.
Here's where that number comes from:
The nominal speed of the RAM is 1,375MHz. GDDR5 makes it 5,500MHz effective.
Taking the effective speed (5,500) x the bus width (256-bit) gives you the maximum, aka theoretical, bandwidth:
5,500 x 32 bytes (256 bits / 8) = 176GB/s
That right there is an absolute maximum; it's not an average or the reasonably attainable figure.
I deleted half of it. I think I need to do it again.
edit: Albert, I just don't understand how you have the details on Sony's hardware to make these statements. What am I missing?
No you didn't. Did I say "average", or did I say "reasonable expectation"? Please learn to read.
I don't understand how these two statements are logically consistent.
By the distributive property:
(X + X + ... + X) * 1.06 = 1.06*X + 1.06*X + ... + 1.06*X.
And that's assuming the CUs work linearly.
Unless I'm misunderstanding you, you said the exact opposite -- having additional CUs does not result in a linear increase in power.
Therefore, your statement is actually THE OPPOSITE of what you are pointing out:
(X + X + ... + X) * 1.06 > 1.06*X + 1.06*X + ... + 1.06*X.
But then, by your very own logic, having each CU running 6% faster is actually **worse** than a 6% increase overall.
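A quick numeric check of the distributive-property point, assuming (purely hypothetically) linear CU scaling; X and the CU count are arbitrary stand-ins:

```python
# Quick numeric check of the distributive-property point, under the purely
# hypothetical assumption that CUs scale linearly. X (one CU's throughput)
# and the CU count are arbitrary stand-ins; 1.06 is the ~6% clock bump
# discussed in the thread.

X, n, bump = 1.0, 12, 1.06

whole_gpu_bumped   = (X * n) * bump                    # "6% increase overall"
each_cu_bumped_sum = sum(X * bump for _ in range(n))   # "6% increase per CU"

print(f"{whole_gpu_bumped:.2f} vs {each_cu_bumped_sum:.2f}")  # 12.72 vs 12.72 -- identical under linear scaling
```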
Any comments on your bare-faced lie regarding the 3x coherent bandwidth?
Dude. Really? Challenging his point is one thing, but calling him a liar after he just explained where his facts came from?
Come on, man. :lol