
Xbox One has Blu-ray drive, 8GB RAM, 8-core CPU, 500GB HDD, 3 operating systems

Aaron

Member
Really depends. Weather physics calculations would certainly be something easily pushed to the cloud. Group AI is another. Take GTA, for example: the cloud could compute routines for all AI, and the only time AI would be calculated locally is when player interactions take place.
While that's certainly possible, it would mean a hell of a lot of work for the developers for background details that would be much, much easier to fake locally. I can't see it being practical outside of maybe an MMO with a massive number of AI routines, but that would be online anyway, with calculations for the whole server and not just a single user.

The hospital I work at tried to implement cloud computing and virtual machines. It was a disaster. While a neat idea, it's the sort of thing that just falls apart under real world conditions. And that's with servers on site, not several cities or even states away.
 

Klocker

Member
There are discussions at B3D that, to get to the announced 5 billion transistors and 200GB/s memory bandwidth, they may have doubled the ESRAM.
 
There are discussions at B3D that, to get to the announced 5 billion transistors and 200GB/s memory bandwidth, they may have doubled the ESRAM.

No, the idiots on B3D speculate that. Everyone else, including the former member of the Xbox team, believes MS just used some funny math that added the bandwidth between the CPU and GPU to the DDR3 and ESRAM figures to get over 200GB/s. 32MB has also been confirmed elsewhere.
 

JaggedSac

Member
While that's certainly possible, it would mean a hell of a lot of work for the developers for background details that would be much, much easier to fake locally. I can't see it being practical outside of maybe an MMO with a massive number of AI routines, but that would be online anyway, with calculations for the whole server and not just a single user.

That really depends on how good the toolset and documentation is for the cloud stuff.
 

Fi Fo Nye

Banned
You're using a COD game to gauge how powerful the console is? Really?

"Look at the next title in a series known for it's graphical prowess and made exclusively for one system, compare it to a game known to reuse the same engine for years making small improvements over titles, it is made for both systems."

Nice comparison brah. Quantum Break would be better, but let's wait till MS reveals a next-gen FPS of their own.

That's absolutely stupid. Call of Duty has NEVER been a good looking game. They've always scaled back graphics in favor of performance, while the Killzone games scale back performance in favor of graphics.

This is literally the worst comparison you could make.

Still, guys, MS had an extra two or three months to prepare... and all they could show for it were an iterative COD (same as an iterative Killzone) and an unprecedented partnership with EA (also iterative). That's the situation I asked to be assessed.

Look, I never said one way or the other that one console is more powerful, I only asked for a comparison between the two FPSes that were shown at each respective reveal. I know, it sucks that we have to wait until E3 while Sony has already shown off Infamous, Knack, Drive Club, Witness, and, I don't know, a vision for gaming.
 

KidBeta

Junior Member
There are discussions at B3D that, to get to the announced 5 billion transistors and 200GB/s memory bandwidth, they may have doubled the ESRAM.

If 32MB of eSRAM is ~2B transistors, and the entire console is 5B transistors, then it sure as hell didn't get doubled.
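As a rough sanity check, here's the back-of-envelope math, assuming a standard 6-transistor (6T) SRAM cell and ignoring redundancy/ECC overhead, so treat the exact totals loosely:

```python
# Rough transistor estimate for the eSRAM, assuming a standard 6T SRAM cell
# and ignoring redundancy/ECC overhead (real figures would be somewhat higher).
def sram_transistors(megabytes, transistors_per_bit=6):
    bits = megabytes * 1024 * 1024 * 8
    return bits * transistors_per_bit

print(f"32MB eSRAM: ~{sram_transistors(32) / 1e9:.1f}B transistors")  # ~1.6B
print(f"64MB eSRAM: ~{sram_transistors(64) / 1e9:.1f}B transistors")  # ~3.2B, which wouldn't fit a 5B budget
```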

I read the thread wrong, nm.
 

StevieP

Banned
While that's certainly possible, it would mean a hell of a lot of work for the developers for background details that would be much, much easier to fake locally. I can't see it being practical outside of maybe an MMO with a massive number of AI routines, but that would be online anyway, with calculations for the whole server and not just a single user.

The hospital I work at tried to implement cloud computing and virtual machines. It was a disaster. While a neat idea, it's the sort of thing that just falls apart under real world conditions. And that's with servers on site, not several cities or even states away.

I implemented a fully virtualized local cloud at my work years back. Works fine if the people know what they're doing. With all that said, it means little in terms of gaming
 

Klocker

Member
No, the idiots on B3D speculate that. Everyone else, including the former member of the Xbox team, believes MS just used some funny math that added the bandwidth between the CPU and GPU to the DDR3 and ESRAM figures to get over 200GB/s. 32MB has also been confirmed elsewhere.

you mean this?

bkilian

maybe they doubled the ESRAM to 64 MB? That would take care of the "missing transistors" and the memory bandwidth.

But later he said he was just guessing and still thinks there were probably no changes. Yes, but something makes up those 5 billion transistors.
 

Aaron

Member
That really depends on how good the toolset and documentation is for the cloud stuff.
Passing cert is already a nightmare. Developers aren't going to add what's essentially tinsel, which they'll have to test for latency, varying connections, and the result of disconnections. Sure, if they are making an online game they'll have to test for these things anyway, but are they going to want to spare the bandwidth knowing they still have to cater to people with crap connections? I just can't imagine this would ever be worth the effort it will take to implement.

I implemented a fully virtualized local cloud at my work years back. Works fine if the people know what they're doing. With all that said, it means little in terms of gaming
You're talking about a controlled environment though. Not a game a developer at least hopes will sell to millions.
 

Jburton

Banned
I implemented a fully virtualized local cloud at my work years back. Works fine if the people know what they're doing. With all that said, it means little in terms of gaming

Having to send data and await calculations from the cloud would only add more latency... I can see no positives from this.
 
The Kinect 2 sounds like it's a substantial upgrade. Which, if the first piece of shit Kinect sold for $150, means this must cost even more. So that's all money going to something that doesn't do fuckall for actual games.


Unless talking to your system like you did in SOCOM 1 on the PS2 is some sort of game changer.
 

Radec

Member
R6NC


Works for Google.

Does it now?
 

JaggedSac

Member
Passing cert is already a nightmare. Developers aren't going to add what's essentially tinsel, which they'll have to test for latency, varying connections, and the result of disconnections. Sure, if they are making an online game they'll have to test for these things anyway, but are they going to want to spare the bandwidth knowing they still have to cater to people with crap connections? I just can't imagine this would ever be worth the effort it will take to implement.

You are making cloud stuff out to be harder than it is.

Having to send data and await calculations from the cloud would only add more latency... I can see no positives from this.

Obviously you wouldn't want to do things that are latency sensitive. I will use my weather pattern example. The game makes a request to the cloud service for some cloud volume calculations, wind, rain, etc., passing in some data about its current state (perhaps this could be cached on the server; hell, perhaps all players at a given time are sharing the same weather patterns for a particular location, so the cloud service is actually calculating this stuff all the time and just dumping the current set to the game). The cloud service could return a data dump containing the calculations for a given time interval, say 10 seconds or something. There is basically a rolling update of future weather data that the game is using. Obviously there would need to be a degradation of this if the connection is lost, but the concept is there.
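Something like that rolling-update idea could look roughly like this; it's only a sketch, with the cloud call, the region name, and the fallback values all made up for illustration:

```python
import random, time
from collections import deque

def fetch_weather_from_cloud(region, horizon_s):
    # Hypothetical stand-in for the real cloud call; in practice this would be a
    # network request that can take a while or fail outright.
    if random.random() < 0.2:
        raise ConnectionError("cloud unreachable")
    return [{"t": time.time() + s, "wind": random.random(), "rain": random.random()}
            for s in range(0, horizon_s, 10)]          # one chunk per ~10s interval

def local_fallback():
    # Cheap degraded weather so the game keeps running if the connection drops.
    return {"t": time.time(), "wind": 0.1, "rain": 0.0}

buffer = deque(maxlen=6)                                # rolling window of future weather
for frame in range(5):                                  # stand-in for the game loop
    if len(buffer) < 3:                                 # top the buffer up ahead of time
        try:
            buffer.extend(fetch_weather_from_cloud("city_docks", horizon_s=60))
        except ConnectionError:
            buffer.append(local_fallback())             # graceful degradation
    current = buffer.popleft() if buffer else local_fallback()
    print(frame, current)
```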
 

Perkel

Banned
But later he said he was just guessing and still thinks there were probably no changes. Yes, but something makes up those 5 billion transistors.


CPU+GPU - 3 billion
32MB ESRAM - 2 billion

No one uses transistors to describe power anymore. They used that figure because it is the only thing that is "better" in comparison to the PS4.

They even mentioned ~200GB/s bandwidth "across the system", which completely confirms what we were speculating from the leaks. Which means in reality ~60-70GB/s for the DDR3 RAM and ~100GB/s for the ESRAM, and you can't just combine them.
 

Fi Fo Nye

Banned
The Kinect 2 sounds like it's a substantial upgrade. Which, if the first piece of shit Kinect sold for $150, means this must cost even more. So that's all money going to something that doesn't do fuckall for actual games.


Unless talking to your system like you did in SOCOM 1 on the PS2 is some sort of game changer.

Maybe others can corroborate, but the first Kinect was being sold for a substantial profit, was it not? That is only to say that maybe Kinect 2.0 doesn't even cost more than $100 to make, but even anywhere close to that could be close to 25% of the total cost of build nonetheless. I think what's really fucking them up isn't an upgraded Kinect, but all this esRAM and Move (the engines, not controllers) stuff.
 

abadguy

Banned
Still, guys, MS had an extra two or three months to prepare... and all they could show for it were an iterative COD (same as an iterative Killzone) and an unprecedented partnership with EA (also iterative). That's the situation I asked to be assessed.

Look, I never said one way or the other that one console is more powerful, I only asked for a comparison between the two FPSes that were shown at each respective reveal. I know, it sucks that we have to wait until E3 while Sony has already shown off Infamous, Knack, Drive Club, Witness, and, I don't know, a vision for gaming.

Yeah, and it was stated before the showing that E3 was for the games, and last I checked they are showing about 15 exclusives. Funny thing is, people would be bitching if they had all of this media focus shit at E3; instead they spare us sitting through it at E3 in favor of games, and people still bitch about it. (ThisIsNeogaf.gif) And yes, COD vs. Killzone is a dumb comparison to make; since when has a COD game ever pushed any console graphically? I don't remember people comparing it with Killzone or any other Sony first party titles this past gen, so why do it now?
 

Fi Fo Nye

Banned
Yeah, and it was stated before the showing that E3 was for the games, and last I checked they are showing about 15 exclusives. Funny thing is, people would be bitching if they had all of this media focus shit at E3; instead they spare us sitting through it at E3 in favor of games, and people still bitch about it. (ThisIsNeogaf.gif) And yes, COD vs. Killzone is a dumb comparison to make; since when has a COD game ever pushed any console graphically? I don't remember people comparing it with Killzone or any other Sony first party titles this past gen, so why do it now?

Ok, fine. PS4 is more powerful. There, I answered the man's question.
 

JaggedSac

Member
And just so people are aware, Halo 4 is using cloud computing:

http://www.zdnet.com/microsofts-orleans-cloud-programming-model-gets-a-halo-test-drive-7000009300/

"The cloud-systems team celebrated a year of successful deployment of its distributed cloud technology—Orleans—in production for Microsoft’s Halo team, and the team has scaled its system very significantly since then."

Another bit of info:

http://www.microsoft.com/enterprise...m-Big-Data-in-the-Cloud.aspx#fbid=bN36T7363UO
 
Maybe others can corroborate, but the first Kinect was being sold for a substantial profit, was it not? That is only to say that maybe Kinect 2.0 doesn't even cost more than $100 to make, but even anywhere close to that could be close to 25% of the total cost of build nonetheless. I think what's really fucking them up isn't an upgraded Kinect, but all this esRAM and Move (the engines, not controllers) stuff.


Even if it's $100, that's a lot of money that could be used to make it a more powerful system.


To me it speaks to the bigger problem that most of us have with it: they spent a lot of time and a lot of money on things that aren't for games. I fully concede that the mainstream may love it and it might sell a trillion units (or it may totally bomb... who knows). But when it comes to the hate it's getting here, on Twitter, on Reddit, or even on Xbox-specific forums... I think that's exactly it: money spent on non-gaming shit.
 

JAYSIMPLE

Banned
If E3 is just constant games it will amaze. It's just annoying waiting. After the Kinect crap, etc., I'm just not sold on trusting Microsoft.
 

ascii42

Member
Yes, we know. The guy was saying it's 50% more powerful. The other guy then said 33%. If he said 33% weaker he would be right too.
Started with "50% less" actually
"Move Engines" also, that make up for vastly lower memory bandwidth and 50% less GPU power.

Tripping balls.

33%

Math. It isn't that hard.

18 compute units versus 12... 50% more.


You should learn math.
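For what it's worth, both numbers describe the same gap, just measured from different baselines (using the 12 vs. 18 compute unit figures from the rumors):

```python
xbox_cu, ps4_cu = 12, 18
print(f"PS4 vs Xbox One: {(ps4_cu - xbox_cu) / xbox_cu:.0%} more CUs")   # 50% more
print(f"Xbox One vs PS4: {(ps4_cu - xbox_cu) / ps4_cu:.0%} fewer CUs")   # 33% fewer
```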
 

Fi Fo Nye

Banned
Even if it's $100, that's a lot of money that could be used to make it a more powerful system.


To me it speaks to the bigger problem that most of us have with it: they spent a lot of time and a lot of money on things that aren't for games. I fully concede that the mainstream may love it and it might sell a trillion units (or it may totally bomb... who knows). But when it comes to the hate it's getting here, on Twitter, on Reddit, or even on Xbox-specific forums... I think that's exactly it: money spent on non-gaming shit.

The thing that's really funny to me... is that when you look at the PS4's build, especially after reading the in-depth interviews given by Cerny et al, it is truly a system in harmony (with a bit of extra juice thrown in for good measure).

And then here you have the Xbox One... being hyped by its creators as perfectly balanced, when already DDR3 is the bottleneck (or else they wouldn't need 5 billion transistors in the first goddamn place).
 

abadguy

Banned
The thing that's really funny to me... is that when you look at the PS4's build, especially after reading the in-depth interviews given by Cerny et al, it is truly a system in harmony (with a bit of extra juice thrown in for good measure).

And then here you have the Xbox One... being hyped by its creators as perfectly balanced, when already DDR3 is the bottleneck (or else they wouldn't need 5 billion transistors in the first goddamn place).

You do good work, does Sony give you commission or what?
 

Fi Fo Nye

Banned
Hey bro, just because my avatar is only on PlayStation, it doesn't make me any less qualified to offer commentary than you.
 

JaggedSac

Member
The thing that's really funny to me... is that when you look at the PS4's build, especially after reading the in-depth interviews given by Cerny et al, it is truly a system in harmony (with a bit of extra juice thrown in for good measure).

And then here you have the Xbox One... being hyped by its creators as perfectly balanced, when already DDR3 is the bottleneck (or else they wouldn't need 5 billion transistors in the first goddamn place).

Cost and power constraints can certainly be considered things that need balancing.
 

G_Berry

Banned
lol poor DDR3, it's been perfectly fine in monster gaming rigs for years and as soon as it's in a Microsoft console it gets poo-pooed.
 

Piggus

Member
You do good work, does Sony give you commission or what?

Well, he's right... Using SRAM as a cache to try to make up for slow DDR3 isn't the most elegant solution. All it does is create a massive single die that puts out more heat.

lol poor DDR3, it's been perfectly fine in monster gaming rigs for years and as soon as it's in a Microsoft console it gets poo-pooed.

Monster gaming rigs have GDDR5 on the video card for graphics tasks. There's no modern high end (or even mid-range) GPU that just uses DDR3 for graphics-related tasks. Using a single pool of memory is a better solution, but that memory has to be fast.
 
CPU+GPU - 3 billion
32MB ESRAM - 2 billion

No one uses transistors to describe power anymore. They used that figure because it is the only thing that is "better" in comparison to the PS4.

They even mentioned ~200GB/s bandwidth "across the system", which completely confirms what we were speculating from the leaks. Which means in reality ~60-70GB/s for the DDR3 RAM and ~100GB/s for the ESRAM, and you can't just combine them.

Yes, and bkilian said they're counting the CPU-GPU interconnect as well, which is 30GB/s of bandwidth. So it's 102.4 + 68.2 + 30 = 200.6GB/s of bandwidth. Clearly more than 200... Sony could do the same thing in order to add an extra 30GB/s.
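For reference, here's how those figures fall out of the commonly cited bus widths and clocks; none of this is an official MS breakdown, so treat the exact bus assumptions as the speculated ones:

```python
# Back-of-envelope bandwidth math using the commonly speculated bus widths/clocks.
ddr3  = 2133e6 * 256 / 8 / 1e9     # DDR3-2133 on a 256-bit bus      -> ~68.3 GB/s
esram = 800e6 * 1024 / 8 / 1e9     # 32MB ESRAM, 1024-bit @ 800MHz   -> ~102.4 GB/s
link  = 30.0                       # quoted CPU<->GPU coherent bandwidth, GB/s
gddr5 = 5500e6 * 256 / 8 / 1e9     # PS4's GDDR5 on a 256-bit bus    -> ~176 GB/s, for comparison

print(f"DDR3 {ddr3:.1f} + ESRAM {esram:.1f} + link {link:.1f} = {ddr3 + esram + link:.1f} GB/s")  # ~200 GB/s "across the system"
print(f"PS4 unified GDDR5: {gddr5:.1f} GB/s")
```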
 

Jburton

Banned
Well, he's right... Using SRAM as a cache to try to make up for slow DDR3 isn't the most elegant solution. All it does is create a massive single die that puts out more heat.



Monster gaming rigs have GDDR5 on the video card for graphics tasks. There's no modern high end (or even mid-range) GPU that just uses DDR3 for graphics-related tasks. Using a single pool of memory is a better solution, but that memory has to be fast.

The SRAM is a solution to using low-end RAM, not a better architecture choice.
 

JaggedSac

Member
Yes, and Sony found a way to balance cost and power without Kinect and 5 billion transistors.

Yes, they decided the cost and power required to include GDDR5 was worth it to them.

EDIT: I am certainly not saying eSRAM was the better choice, just saying that for the platform goals, both companies had a BoM target, and what we see is the result of that tug of war.
 
Well, he's right... Using SRAM as a cache to try to make up for slow DDR3 isn't the most elegant solution. All it does is create a massive single die that puts out more heat.

Yep, which really adds to the BoM. ESRAM isn't cheap. Cerny's design for the PS4 is much better all around. MS is probably really pissed and upset that they built a more expensive architecture (as far as the memory setup is concerned) that's also needlessly more complex and inferior all at the same time. I'm sure this wasn't their original intention. The more expensive memory setup also took away from their budget for the GPU. In other words, the reason you probably only have a 1.2 TFLOP GPU in the Xbox One is because of the inclusion of the ESRAM. Sometimes it's just the way the cookie crumbles. Here's what I mean...

MS made the choice to have 8GB of DDR3 RAM from the get-go because of their focus on non-gaming-related features in the OS. So they designed their system architecture around this. They had no idea back then that GDDR5 would be ready in time, as far as chip densities go. So, knowing 8GB of RAM was a priority, they therefore had to go with the only sure way to achieve that at the time.

Whereas Sony had started out with 4GB of GDDR5 and designed their architecture around this. Then, probably 6 months ago, they realized they would be able to achieve 8GB because of the chip densities. It was now very simple for them to double the RAM.

Really, everything just worked out better for Sony; the stars aligned for them. You could almost say a little bit of luck was involved. MS was in no position to make a change to counter them.
 

Piggus

Member
The SRAM is a solution to using low-end RAM, not a better architecture choice.

Agreed. I'm saying that a single pool of memory is better for developers, but it's kind of pointless if that memory is really slow. I'm not saying the SRAM is a better choice (it isn't). I sort of consider the architecture a single pool with a cache rather than a split pool, though that could be misinterpreting it.

Yep, which really adds to the BoM. ESRAM isn't cheap. Cerny's design for the PS4 is much better all around. MS is probably really pissed and upset that they built a more expensive architecture that's also needlessly more complex and inferior at the same time. I'm sure this wasn't their original intention. Sometimes it's just the way the cookie crumbles. Here's what I mean...

MS made the choice to have 8GB of DDR3 RAM from the get-go because of their focus on non-gaming-related features in the OS. So they designed their system architecture around this. They had no idea back then that GDDR5 would be ready in time, as far as chip densities go. So, knowing 8GB of RAM was a priority, they therefore had to go with the only sure way to achieve that at the time.

Whereas Sony had started out with 4GB of GDDR5 and designed their architecture around this. Then, probably 6 months ago, they realized they would be able to achieve 8GB because of the chip densities. It was now very simple for them to double the RAM.

Really, everything just worked out better for Sony; the stars aligned for them. You could almost say a little bit of luck was involved. MS was in no position to make a change to counter them.

That's what a lot of people didn't realize in the leaks threads. A lot of people who were surprised by the PS4 having that much GDDR5 (it WAS pretty shocking and Sony definitely got really lucky with chip densities) assumed that it was equally likely for Microsoft to make last minute changes of their own. Unfortunately for them, doubling the amount of RAM is significantly easier than any kind of meaningful upgrade MS could have done. Sony didn't have to touch their system architecture to double their RAM, whereas MS would have had to scrap their entire DDR3+SRAM architecture to level the playing field. Something like that just isn't going to happen this late in the game. The only thing they could have done was adjust clock speeds or MAYBE use a more powerful GPU, but it doesn't even sound like they're doing that.
 

Man

Member
Agree with JohnnySasaki's theory. In the end I don't believe Microsoft's 5 billion transistor ESRAM/DDR3 solution is significantly cheaper to produce than Sony's 3 billion transistor GDDR5 solution. MS even had to resort to 40nm for launch [edit: Apparently Wired.com made a mistake; it is 28nm.]. I'm expecting Sony to announce 28nm at E3.
 

Fi Fo Nye

Banned
Yes, they decided the cost and power required to include GDDR5 was worth it to them.

EDIT: I am certainly not saying eSRAM was the better choice, just saying that for the platform goals, both companies had a BoM target, and what we see is the result of that tug of war.

Wrong. Sony decided that the cost and power to include 176GB/s of unified bandwidth (8GB's worth) and 50% more GPU power were worth it to them.
 
I said it all along that the Xbox One's design was an ASSURED way of getting 8 GB of RAM in the system to support their non-gaming agenda.

This could possibly bite them in the ass big time. It's such a needlessly more complex solution.

Sony lucked out big time here.
 

JaggedSac

Member
Wrong. Sony decided that the cost and power to include 176GB/s of unified bandwidth (8GB's worth) and 50% more GPU power were worth it to them.

Yes, sorry, worded it backwards. They decided the extra cost and power required were worth the extra perf.
 

artist

Banned
Yep, which really adds to the BoM. ESRAM isn't cheap. Cerny's design for the PS4 is much better all around. MS is probably really pissed and upset that they built a more expensive architecture (as far as the memory setup is concerned) that's also needlessly more complex and inferior all at the same time. I'm sure this wasn't their original intention. The more expensive memory setup also took away from their budget for the GPU. In other words, the reason you probably only have a 1.2 TFLOP GPU in the Xbox One is because of the inclusion of the ESRAM. Sometimes it's just the way the cookie crumbles. Here's what I mean...

MS made the choice to have 8GB of DDR3 RAM from the get-go because of their focus on non-gaming-related features in the OS. So they designed their system architecture around this. They had no idea back then that GDDR5 would be ready in time, as far as chip densities go. So, knowing 8GB of RAM was a priority, they therefore had to go with the only sure way to achieve that at the time.

Whereas Sony had started out with 4GB of GDDR5 and designed their architecture around this. Then, probably 6 months ago, they realized they would be able to achieve 8GB because of the chip densities. It was now very simple for them to double the RAM.

Really, everything just worked out better for Sony; the stars aligned for them. You could almost say a little bit of luck was involved. MS was in no position to make a change to counter them.
To add to that: both decided to bet on two different approaches. Sony bet on using a smaller but faster pool of RAM (2-4GB GDDR5) and eating the (higher) RAM cost upfront, whereas Microsoft decided they would bet on a bigger pool of slower RAM (8GB DDR3) and on process nodes to drive the cost of the ESRAM down.

Also keep in mind that the work on both consoles would have started years ago. If the PS4 began taking shape in '07/08, then GDDR5 was only just making its way to market. To bet on cutting-edge RAM of that time, and lots of it (2GB), required taking huge risks. Not saying that Microsoft haven't taken risks; they have with a big APU. Unfortunately for Microsoft, the process node shrinks have slowed down past 40nm, and it doesn't seem like a big cost advantage now.

It's easy to say all this now, but personally I don't know which approach I would have picked 5-6 years ago.
 

KidBeta

Junior Member
Agree with JohnnySasaki's theory. In the end I don't believe Microsoft's 5 billion transistor ESRAM/DDR3 solution is significantly cheaper to produce than Sony's 3 billion transistor GDDR5 solution. MS even had to resort to 40nm for launch. I'm expecting Sony to announce 28nm at E3.

Can you give us a link to the 40nm rumour?
 
To add to that: both decided to bet on two different approaches. Sony bet on using a smaller but faster pool of RAM (2-4GB GDDR5) and eating the (higher) RAM cost upfront, whereas Microsoft decided they would bet on a bigger pool of slower RAM (8GB DDR3) and on process nodes to drive the cost of the ESRAM down.

Also keep in mind that the work on both consoles would have started years ago. If the PS4 began taking shape in '07/08, then GDDR5 was only just making its way to market. To bet on cutting-edge RAM of that time, and lots of it (2GB), required taking huge risks. Not saying that Microsoft haven't taken risks; they have with a big APU. Unfortunately for Microsoft, the process node shrinks have slowed down past 40nm, and it doesn't seem like a big cost advantage now.

It's easy to say all this now, but personally I don't know which approach I would have picked 5-6 years ago.

It's a little strange that Microsoft didn't try and go with a split pool architecture.

Say, 2-4 GB of GDDR4 + 4 GB of DDR3 for their non-gaming applications.
 
Agree with JohnnySasaki's theory. In the end I don't believe Microsoft's 5 billion transistor ESRAM/DDR3 solution is significantly cheaper to produce than Sony's 3 billion transistor GDDR5 solution. MS even had to resort to 40nm for launch. I'm expecting Sony to announce 28nm at E3.

I would actually say they both cost about the same. I would say MS's memory situation and added motherboard complexity are more expensive, but that's mostly canceled out by Sony's beefier GPU. I wouldn't be surprised if MS's solution is a little bit more expensive. If it is indeed 40nm, MS would be the more expensive of the two. Looking at the cooling (the whole right side is vents) and the size of this machine, it might make some sense that it is indeed 40nm...

edit: It's going to be interesting to see how Sony's machine compares size-wise. We haven't had any confirmation that it's 28nm yet, have we? Is 32nm a possibility?
 