
Xbox One has Blu-ray drive, 8GB RAM, 8-core CPU, 500GB HDD, 3 operating systems

snack

Member
Was there any doubt that the Xbox One was going to have Blu-ray? HD DVD is dead, and has been for a while now.
 
AnandTech breakdown

Yesterday Microsoft finally took the covers off the new Xbox, which it hopes will last for many years to come. At a high level, here’s what we’re dealing with:

- 8-core AMD Jaguar CPU
- 12 CU/768 SP AMD GCN GPU
- 8GB DDR3 system memory
- 500GB HDD
- Blu-ray drive
- 2.4/5.0GHz 802.11 a/b/g/n, multiple radios with WiFi Direct support
- 4K HDMI in/out (for cable TV passthrough)
- USB 3.0
- Available later this year

Although Microsoft did its best to minimize AMD’s role in all of this, the Xbox One features a semi-custom 28nm APU designed with AMD. If this sounds familiar it’s because the strategy is very similar to what Sony employed for the PS4’s silicon.

The phrase semi-custom comes from the fact that AMD is leveraging much of its already developed IP for the SoC. On the CPU front we have two Jaguar compute units, each one with four independent processor cores and a shared 2MB L2 cache. The combination of the two gives the Xbox One its 8-core CPU. This is the same basic layout as the PS4’s SoC.
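(A minimal sketch of that layout and the totals it implies, using only the figures quoted above:)

```python
# Two Jaguar compute units, four cores each, each unit sharing a 2MB L2.
modules = 2
cores_per_module = 4
l2_per_module_mb = 2

total_cores = modules * cores_per_module   # 8 cores in total
total_l2_mb = modules * l2_per_module_mb   # 4MB of L2, split as two 2MB pools

print(f"{total_cores} cores, {total_l2_mb}MB of L2 across {modules} modules")
```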

A look at Wired’s excellent high-res teardown photo of the motherboard reveals Micron DDR3-2133 DRAM on board (16 x 16-bit DDR3 devices to be exact). A little math gives us 68.3GB/s of bandwidth to system memory.
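(The math in question, sketched out: sixteen 16-bit devices make a 256-bit bus, and DDR3-2133 moves 2133 million transfers per second.)

```python
# Back-of-the-envelope check of the 68.3GB/s system memory bandwidth figure.
transfers_per_sec = 2133e6            # DDR3-2133: 2133 MT/s
bus_width_bits = 16 * 16              # sixteen 16-bit DRAM devices = 256-bit bus
bytes_per_transfer = bus_width_bits / 8

bandwidth_gb_s = transfers_per_sec * bytes_per_transfer / 1e9
print(f"{bandwidth_gb_s:.1f} GB/s")   # ~68.3 GB/s
```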

To make up for the gap, Microsoft added embedded SRAM on die (not eDRAM, less area efficient but lower latency and doesn't need refreshing). All information points to 32MB of 6T-SRAM, or roughly 1.6 billion transistors for this memory. It’s not immediately clear whether or not this is a true cache or software managed memory. I’d hope for the former but it’s quite possible that it isn’t. At 32MB the ESRAM is more than enough for frame buffer storage, indicating that Microsoft expects developers to use it to offload requests from the system memory bus. Game console makers (Microsoft included) have often used large high speed memories to get around memory bandwidth limitations, so this is no different. Although 32MB doesn’t sound like much, if it is indeed used as a cache (with the frame buffer kept in main memory) it’s actually enough to have a substantial hit rate in current workloads (although there’s not much room for growth).
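(A quick check of the two sizing claims in that paragraph; the 1080p, 32-bits-per-pixel render target used to illustrate frame buffer fit is an assumption, not something the article specifies:)

```python
# 6T-SRAM uses six transistors per bit, so 32MB implies ~1.6B transistors.
esram_bytes = 32 * 1024**2
transistors = esram_bytes * 8 * 6
print(f"{transistors / 1e9:.2f}B transistors")             # ~1.61B

# Illustrative frame buffer: one 1920x1080 target at 32 bits per pixel.
frame_buffer_bytes = 1920 * 1080 * 4
print(f"{frame_buffer_bytes / 1024**2:.1f}MB per target")  # ~7.9MB; four fit in 32MB
```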

There are merits to both approaches. Sony has the most present-day-GPU-centric approach to its memory subsystem: give the GPU a wide and fast GDDR5 interface and call it a day. It’s well understood and simple to manage. The downsides? High speed GDDR5 isn’t the most power efficient, and Sony is now married to a more costly memory technology for the life of the PlayStation 4.

Microsoft’s approach leaves some questions about implementation, and is potentially more complex to deal with depending on that implementation. Microsoft specifically called out its 8GB of memory as being “power friendly”, a nod to the lower power operation of DDR3-2133 compared to 5.5GHz GDDR5 used in the PS4. There are also cost benefits. DDR3 is presently cheaper than GDDR5 and that gap should remain over time (although 2133MHz DDR3 is by no means the cheapest available). The 32MB of embedded SRAM is costly, but SRAM scales well with smaller processes. Microsoft probably figures it can significantly cut down the die area of the eSRAM at 20nm and by 14/16nm it shouldn’t be a problem at all.

Even if Microsoft can’t deliver the same effective memory bandwidth as Sony, it also has fewer GPU execution resources - it’s entirely possible that the Xbox One’s memory bandwidth demands will be inherently lower to begin with.
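(To put rough numbers on that: a main-memory-bandwidth-per-CU comparison. The PS4 figures assume its commonly reported 18-CU GPU and 256-bit GDDR5 interface at 5.5GT/s, neither of which is stated above; the Xbox One number deliberately ignores the ESRAM, whose bandwidth the article doesn't quote.)

```python
# Main memory bandwidth divided across GPU compute units, for both consoles.
xb1_bw_gb_s, xb1_cus = 68.3, 12   # DDR3-2133 on a 256-bit bus (derived above)
ps4_bw_gb_s = 5.5e9 * 32 / 1e9    # 5.5GT/s x 32 bytes = 176 GB/s (assumed 256-bit bus)
ps4_cus = 18                      # commonly reported PS4 CU count

print(f"Xbox One: {xb1_bw_gb_s / xb1_cus:.1f} GB/s per CU")  # ~5.7
print(f"PS4:      {ps4_bw_gb_s / ps4_cus:.1f} GB/s per CU")  # ~9.8
```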
 
Just one graph to illustrate why Sony will wipe the floor with this piece of crap from Microsoft...

Game designers are shifting traditional CPU tasks over to the GPU. In this graph you can see that even low-end budget CPUs perform nearly the same as $300 CPUs when they have a decent GPU to take the strain.

The PS4 GPU has 50% more power than the 'One'.

[Image: AnandTech Metro 2033 CPU scaling graph, http://images.anandtech.com/graphs/graph6934/54503.png]


Conclusion... the CPUs in both the PS4 and 'One' are relatively weak, which means most developers will program their games to use the GPU. Two or three years from now, the difference in power between the two consoles will be all too apparent as Microsoft's cheapskate approach to the GPU comes home to roost.
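(For what it's worth, the "50% more power" figure falls straight out of the shader counts: 12 CUs/768 shaders above versus the PS4's commonly reported 18 CUs/1152 shaders. A sketch, with an assumed 800MHz clock for both since final clocks weren't confirmed at the time; the clock cancels out of the ratio anyway:)

```python
# Peak FP32 throughput: each GCN shader does 2 FLOPs per cycle (one FMA).
def peak_tflops(shaders: int, clock_ghz: float) -> float:
    return shaders * 2 * clock_ghz / 1000

xb1 = peak_tflops(768, 0.8)    # 12 CUs x 64 shaders each
ps4 = peak_tflops(1152, 0.8)   # 18 CUs x 64 shaders each (reported spec)
print(f"PS4/XB1 ratio: {ps4 / xb1:.2f}x")   # 1.50x, i.e. 50% more
```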
 

kharma45

Member
Just one graph to illustrate why Sony will wipe the floor with this piece of crap from Microsoft...

Game designers are shifting traditional CPU tasks over to the GPU. In this graph you can see that even low-end budget CPUs perform nearly the same as $300 CPUs when they have a decent GPU to take the strain.

The PS4 GPU has 50% more power than the 'One'.

[Image: AnandTech Metro 2033 CPU scaling graph, http://images.anandtech.com/graphs/graph6934/54503.png]

Conclusion... the CPUs in both the PS4 and 'One' are relatively weak, which means most developers will program their games to use the GPU. Two or three years from now, the difference in power between the two consoles will be all too apparent as Microsoft's cheapskate approach to the GPU comes home to roost.

Yeah, it shows that Metro 2033 is very GPU dependent, but what of a more modern game like Crysis 3? It's very CPU dependent, as are titles like Battlefield 3 and Far Cry 3.

[Image: Crysis 3 CPU scaling graph, http://i.imgur.com/WGuy37a.png]
 
Yeah, it shows that Metro 2033 is very GPU dependent, but what of a more modern game like Crysis 3? It's very CPU dependent, as are titles like Battlefield 3 and Far Cry 3.

But devs won't have the luxury of a powerful CPU on the Xbox One or PS4... so they will transition to the GPU, where the PS4 has a 50% power advantage.

Either way you cut it, the GPU will count for more than the CPU in games for the next 5 years.
 

mr2xxx

Banned
Yeah, it shows that Metro 2033 is very GPU dependent, but what of a more modern game like Crysis 3? It's very CPU dependent, as are titles like Battlefield 3 and Far Cry 3.

Big-budget titles will be developed around the limited CPU. PCs have had to compensate for that fact, and they have been doing so. Intel CPUs have not had a large leap in performance since the Core i7-920; compare that to how much more powerful a current-gen GPU is than a 9800 GTX from the same year as the 920.
 

Panajev2001a

GAF's Pleasant Genius
To add to that: the two companies bet on different approaches. Sony bet on using a smaller but faster pool of RAM (2-4GB GDDR5), eating the (higher) RAM cost upfront, whereas Microsoft bet on a bigger pool of slower RAM (8GB DDR3) and on process node shrinks to drive the cost of the ESRAM down.

Also keep in mind that the work on both consoles would have started years ago. If the PS4 began taking shape in '07/08, GDDR5 was only just making its way to market at the time. Betting on the cutting-edge RAM of that era, and lots of it (2GB), required taking huge risks. That's not to say Microsoft haven't taken risks; they have, with a big APU. Unfortunately for Microsoft, process node shrinks have slowed down past 40nm, and that bet doesn't look like a big cost advantage now.

It's easy to say all this now, but personally I don't know which approach I would have picked 5-6 years ago.

Betting that heavily on process node scaling has been a dangerous bet for quite a few years now, unless your manufacturing partner is called Intel or IBM, two of the few well-funded players with the whole vertical processor design and manufacturing stack in house.

For all the things people like to criticize Kutaragi for, betting on heavy manufacturing scaling mostly stopped with the PS2 (the PS3's chips at launch were not pushing their manufacturing process as hard as the PS2's EE and GS did when the PS2 launched).

Let's see if Sony will try to convert the design to stacked RAM during the life of the PS4. It is good that they can launch with GDDR5 without breaking the bank, though, and they still have various options to reduce cost over time.
 