
The NEW Xbox one thread of Hot Chips and No Pix

Satchel

Banned
The flash will be incredibly slow compared to the DDR3; it will not be used for anything which requires performance.




Which is very little (<10%) of a Jaguar core. MS needed the extra horsepower for Kinect; otherwise they could have saved the time, money, and die space.

Really? Even for the OS? I thought flash storage was quick. Hence why so many use SSDs for their OS partitions.
 

Trogdor1123

Member
Can someone use puppets to explain this to me... I clearly need it dumbed down that much, as I am lost. Is this all good news? Bad news?
 

USC-fan

Banned
So I think the big question is, with all this new info, have we learned anything that would make us think the gap between the two consoles is lessened?

Really the only new info is on the Kinect stuff. It would be an advantage in games that have high-end audio IF it's not reserved for Kinect. If it is reserved, the PS4 has the advantage, since it has a DSP audio chip.

The flash could lead to a fast OS even if it's only 200MB/s, since its seek times should be great compared to the 2.5" HDD. Unless you replace the HDD with an SSHD or SSD in your PS4.

We are just going to have to wait and see the games. The gap is only going to widen over the course of the gen, imo.
 

Klocker

Member
The flash will be incredibly slow compared to the DDR3; it will not be used for anything which requires performance.




Which is very little (<10%) of a Jaguar core. MS needed the extra horsepower for Kinect; otherwise they could have saved the time, money, and die space.

Yes, but as noted in my examples above, for sound, allegedly they are similar... Also, should we not expect much more from our audio this gen? I know I do. 😊


Edit... Oh, and the same audio engineer who worked on the chip says that the majority of the chip is for powering game sound completely unrelated to Kinect, so that is a myth
 
Really? Even for the OS? I thought flash storage was quick. Hence why so many use SSDs for their OS partitions.

Yes, flash storage is slow compared to RAM. People with a stupid amount of RAM can use things called RAM disks, and they're miles faster than flash memory.

Flash storage is useful because, unlike RAM, it's not volatile memory. But even then, you will still need RAM even if you have a 1TB SSD. For instance, the snap feature will not work if you're constantly reading/writing from the flash memory. It's just not fast enough.
 
So I think the big question is, with all this new info, have we learned anything that would make us think the gap between the two consoles is lessened?

Probably not, but it tells us that the Xbox One isn't quite as badly put together or designed as some were led to believe. It seems like a pretty rock solid design, so if you thought the system was crap before without a bunch of considerations for how to best maximize performance, then I guess you could say this lessens the gap? If you were already convinced that the console was quite capable, but just not packing the performance heft of the PS4, then nothing changes. You still have a more nuanced appreciation of what's inside the system, however.
 

Phawx

Member
Do you think it's possible that the X1 has a hibernation mode that transfers the full DDR3 contents into this flash memory, or would this operation cause the same problem?

MS 'could' do that, but they won't. The Xbone won't power down; it will only ever go into a standby mode, power-gating all parts of the chip not necessary for watching TV. That's going to be necessary anyway if you want to have a STB going through your Xbone. People would be pretty upset if the TV didn't work any time the Xbone was 'off', right?

I'd bet that it would be used as a block-level cache, just letting the most-used LBAs on the HDD govern its use. There could also be a small slice for developers, but most likely block-level is going to be easier and far less of a headache, as any time you switch games you'll be thrashing different data into the flash.

Basically, let the user decide, based on how they use the Xbone, what gets put into the cache.
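To make the block-level idea concrete, here's a toy C sketch of a frequency-based cache keyed by LBA. Everything in it (the slot count, the promote threshold, the round-robin eviction) is made up for illustration; it's just the general shape of "let the most-used blocks win", not anything MS has described.

Code:
/* Toy sketch: hot HDD blocks (identified by LBA) get promoted
   into a small flash cache once they're accessed often enough.
   All names, sizes, and thresholds are hypothetical. */
#include <stdint.h>
#include <stdio.h>

#define CACHE_SLOTS   8        /* pretend flash holds 8 blocks */
#define PROMOTE_AFTER 3        /* promote after 3 accesses     */
#define TRACKED       64       /* LBAs we keep counters for    */

struct slot    { uint64_t lba; int valid; };
struct counter { uint64_t lba; int hits; };

static struct slot    cache[CACHE_SLOTS];
static struct counter heat[TRACKED];
static int next_victim = 0;    /* round-robin eviction cursor */

static int in_cache(uint64_t lba) {
    for (int i = 0; i < CACHE_SLOTS; i++)
        if (cache[i].valid && cache[i].lba == lba) return 1;
    return 0;
}

static void promote(uint64_t lba) {
    cache[next_victim].lba = lba;     /* evict whatever was here */
    cache[next_victim].valid = 1;
    next_victim = (next_victim + 1) % CACHE_SLOTS;
}

/* Called on every HDD read; returns 1 on flash hit, 0 on miss. */
static int access_block(uint64_t lba) {
    if (in_cache(lba)) return 1;
    struct counter *c = &heat[lba % TRACKED];
    if (c->lba != lba) { c->lba = lba; c->hits = 0; }
    if (++c->hits >= PROMOTE_AFTER) promote(lba);
    return 0;
}

int main(void) {
    uint64_t trace[] = { 10, 20, 10, 30, 10, 10, 20, 10 };
    for (size_t i = 0; i < sizeof trace / sizeof *trace; i++)
        printf("LBA %llu: %s\n", (unsigned long long)trace[i],
               access_block(trace[i]) ? "flash hit" : "HDD read");
    return 0;
}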
 

AgentP

Thinks mods influence posters politics. Promoted to QAnon Editor.
Really? Even for the OS? I thought flash storage was quick. Hence why so many use SSDs for their OS partitions.

It is a term used for all kinds of solid-state storage. Thumb drives and SD cards are also flash, but most of the time they are slower than HDDs. The 12GB PS3 uses flash, but performs poorly in tests.
 

tokkun

Member
Well, based on an internal dev document that Microsoft gave to developers, the ESRAM is most definitely a generic scratchpad, but I don't know if that automatically means it has its own address space or not. Does being a scratchpad automatically suggest it must have its own address space?

Yes, the term "scratchpad" usually implies a separate address space from main memory. Any coherence must be software-managed, rather than hardware managed. For this reason they are sometimes referred to as "software-managed caches".

Some examples of scratchpads in gaming are the shared memory in Nvidia GPUs and the local stores in Cell's SPEs.

The advantages of a scratchpad are that it can be more efficiently used in regular and predictable code and that it always has deterministic performance (it is impossible to 'miss' in a scratchpad). Also, not implementing hardware-managed coherence may allow them to run at a higher speed.

The disadvantage is that they are harder to program for.

(By the way, the points mentioned above are the things I think people should care about, not whether this implementation gets labeled as 'hUMA' or not).
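To picture what "software-managed cache" means in practice, here's a minimal C sketch: a plain array stands in for the scratchpad, and memcpy stands in for a DMA/move-engine transfer. The buffer size and the doubling kernel are invented for illustration.

Code:
/* Minimal sketch of scratchpad usage: the program, not the
   hardware, decides what lives in the on-chip memory, and is
   itself responsible for coherence with main memory. */
#include <stdio.h>
#include <string.h>

#define SCRATCH_WORDS 1024              /* pretend on-chip capacity  */
static float scratch[SCRATCH_WORDS];    /* separate "address space"  */

static void process_tiles(float *main_mem, size_t n) {
    for (size_t off = 0; off < n; off += SCRATCH_WORDS) {
        size_t len = (n - off < SCRATCH_WORDS) ? n - off : SCRATCH_WORDS;
        /* 1. "DMA in": software explicitly stages the data       */
        memcpy(scratch, main_mem + off, len * sizeof(float));
        /* 2. compute entirely out of the scratchpad: no misses,
              deterministic latency                               */
        for (size_t i = 0; i < len; i++) scratch[i] *= 2.0f;
        /* 3. "DMA out": software writes results back itself      */
        memcpy(main_mem + off, scratch, len * sizeof(float));
    }
}

int main(void) {
    static float data[4096];
    for (int i = 0; i < 4096; i++) data[i] = (float)i;
    process_tiles(data, 4096);
    printf("data[3] = %.1f\n", data[3]);   /* prints 6.0 */
    return 0;
}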
 
What does this all mean?
Can someone explain it in layman's terms, or at least DBZ terms.

DBZ terms/pics not allowed in this thread.

Also, I don't mean to sound... mean... but layman's terms for this aren't really necessary. This is only really relevant to tech heads and maybe programmers. Other than that, it's just neat facts.
 
I'm guessing the Xbox One doesn't have hUMA according to the second pic.

The second pic shows that the coherent memory is just "cache" (not really) for the CPU.

I don't know, but this seems to suggest otherwise unless I'm misinterpreting the meaning. He was one of the presenters, and is a Microsoft hardware architect.

http://www.itworld.com/hardware/370538/xbox-one-will-have-high-performance-custom-chip?page=0,0

One unique aspect of the chip is a shared memory pool that can be accessed by CPUs, GPUs and other processors in the system. Typically, GPUs and CPUs have different memory systems, but the new features increase the overall addressable memory in the Xbox One. The GPUs and CPUs have also been modified to enable shared memory. Shared memory is also part of a specification being pushed by the HSA Foundation, which wants to blur the line between GPU and CPU memory to make programming easier. AMD is one of the founding members of the HSA Foundation, though Sony is also a member, which suggests that shared memory may also be part of PlayStation 4.

Yes, the term "scratchpad" usually implies a separate address space from main memory. Any coherence must be software-managed, rather than hardware managed. For this reason they are sometimes referred to as "software-managed caches".

Some examples of scratchpads in gaming are the shared memory in Nvidia GPUs and the local stores in Cell's SPEs.

The advantages of a scratchpad are that it can be more efficiently used in regular and predictable code and that it always has deterministic performance (it is impossible to 'miss' in a scratchpad). Also, not implementing hardware-managed coherence may allow them to run at a higher speed.

The disadvantage is that they are harder to program for.

(By the way, the points mentioned above are the things I think people should care about, not whether this implementation gets labeled as 'hUMA' or not).

Thanks, pretty helpful info. And especially agree on the 'hUMA' stuff. It's literally beginning to lose its meaning. A quick look at some of the Xbox One games shown so far should probably indicate at a bare minimum that with or without 'hUMA' the Xbox One is not desperately lacking graphics performance. I also recall what Dave Baumann said about what he believes the Xbox One GPU will be capable of once devs make proper use of the ESRAM.
 

Satchel

Banned
Yes, flash storage is slow compared to RAM. People with a stupid amount of RAM can use things called RAM disks, and they're miles faster than flash memory.

Flash storage is useful because, unlike RAM, it's not volatile memory. But even then, you will still need RAM even if you have a 1TB SSD. For instance, the snap feature will not work if you're constantly reading/writing from the flash memory. It's just not fast enough.

So having the OS based on the flash, and then using, say, 1GB of the RAM wouldn't make it a quick OS?
 
I guess I could take partial responsibility for what happened to the other thread. lol

But on topic, some interesting new info came out of this. Wonder if we'll see noticeable differences in multiplats.
 
Yes, flash storage is slow compared to RAM. People with a stupid amount of RAM can use things called RAM disks, and they're miles faster than flash memory.

Flash storage is useful because, unlike RAM, it's not volatile memory. For most things, you will still need RAM.

Flash also has a really low write lifespan: something like 10,000 program/erase cycles for cheap, dense flash and 100,000 for the more durable kind. Once that's hit, the flash memory is dead.

Don't expect the flash to be used for anything other than storing user data.
 

AgentP

Thinks mods influence posters politics. Promoted to QAnon Editor.
Yes, but as noted in my examples above, for sound, allegedly they are similar... Also, should we not expect much more from our audio this gen? I know I do. 😊


Edit... Oh, and the same audio engineer who worked on the chip says that the majority of the chip is for powering game sound completely unrelated to Kinect, so that is a myth

Most people can't hear the results of more expensive algorithms; humans are not particularly adept at picking up subtle audio differences. We are a visual species.

I know what bkilian has said; he also said it would not exist if not for Kinect.

bkilian said:
The Audio processor was originally devised to be able to offload Kinect Audio processing, and the chip designers came to the audio team and said "We have a bunch of extra transistors we can throw in for free, what would you like them to do?" or something close to that. The SHAPE block was the result of that conversation.

http://forum.beyond3d.com/showpost.php?p=1765802&postcount=277

From the same thread:

Relab said:
An environmental simulation takes 2% of a Harpertown/Penryn core (2008) at 2.8GHz. That translates to around 6% on a Jaguar core (estimation, on the safe side).

http://forum.beyond3d.com/showpost.php?p=1766028&postcount=292
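For what it's worth, the 2% to ~6% jump is roughly what you get from scaling by clock speed and an assumed per-clock gap. Here's the back-of-envelope version; both ratios below are my assumptions, not figures from the post:

Code:
/* Rough scaling behind the quoted 2% -> ~6% estimate.
   Clock and per-clock-throughput ratios are assumptions. */
#include <stdio.h>

int main(void) {
    double penryn_load = 0.02;        /* 2% of a 2.8GHz Penryn core */
    double clock_ratio = 2.8 / 1.6;   /* assumed Penryn vs Jaguar clock */
    double ipc_ratio   = 1.7;         /* assumed per-clock gap      */
    printf("Jaguar load: ~%.0f%%\n",
           penryn_load * clock_ratio * ipc_ratio * 100.0);
    return 0;
}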
 
So... 8GB of flash memory and 8GB of DDR3 RAM?

What does the flash RAM do?

It is not RAM. Flash memory is just storage. It will probably be used to tombstone background apps similar to how iOS and Android will write a save state of an app when you are multitasking.
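As a rough illustration of tombstoning, here's a toy C sketch that snapshots an app-state struct to a file on suspend and reloads it on resume. Plain file I/O stands in for the OS-managed flash, and the struct is entirely hypothetical.

Code:
/* Toy "tombstone" sketch: dump an app's state on suspend,
   reload it on resume. Nothing here reflects the actual OS. */
#include <stdio.h>

struct app_state { int level; float pos_x, pos_y; };

static int suspend(const struct app_state *s, const char *path) {
    FILE *f = fopen(path, "wb");
    if (!f) return -1;
    size_t ok = fwrite(s, sizeof *s, 1, f);   /* snapshot RAM state */
    fclose(f);
    return ok == 1 ? 0 : -1;
}

static int resume(struct app_state *s, const char *path) {
    FILE *f = fopen(path, "rb");
    if (!f) return -1;
    size_t ok = fread(s, sizeof *s, 1, f);    /* restore on app switch */
    fclose(f);
    return ok == 1 ? 0 : -1;
}

int main(void) {
    struct app_state a = { 3, 1.5f, -2.0f }, b;
    if (suspend(&a, "tombstone.bin") == 0 &&
        resume(&b, "tombstone.bin") == 0)
        printf("resumed at level %d (%.1f, %.1f)\n",
               b.level, b.pos_x, b.pos_y);
    return 0;
}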

The PS4 does not. It will have to both stream data and cache on the same HDD, which cannot be done at the same time.

We have no confirmation that the PS4 has no flash. And in fact it was rumored to have 16GB of flash built in. Sony haven't publicized this, but they never said the Vita has 4GB of flash built in for OS use either, and that was discovered in teardowns post release. Based on how Vita works and the multitasking features expected on the PS4 it is highly likely there is some amount of OS managed flash storage in the PS4.

I would not expect game developers to have direct access to the flash memory on either platform.
 

chadskin

Member
I don't know, but this seems to suggest otherwise. He was one of the presenters, and is a Microsoft hardware architect.

http://www.itworld.com/hardware/370538/xbox-one-will-have-high-performance-custom-chip?page=0,0

That piece reads to me like they're talking about the 360's UMA, so it doesn't really help in that regard. From what I understand, due to the eSRAM, the Xbone is not going to use hUMA (hence the AMD rep saying only the PS4 uses hUMA) but a similar, comparable architecture.
 
So having the OS based on the flash, and then using, say, 1GB of the RAM wouldn't make it a quick OS?

For what Microsoft wants to do with it, probably not. Remember what the Xbox One's OS is attempting to achieve with seamless multitasking. It can't constantly be swapping data from RAM <-> flash memory to achieve stutter-less performance.
 

Phawx

Member
Flash also has a really low write lifespan: something like 10,000 program/erase cycles for cheap, dense flash and 100,000 for the more durable kind. Once that's hit, the flash memory is dead.

Don't expect the flash to be used for anything other than storing user data.

Are we sure it isn't 16 or 32GB of flash provisioned down to 8GB? The Xbone might have an 8-year life, so I'm sure MS would plan for that.
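The over-provisioning math works out pretty comfortably even with conservative numbers. A back-of-envelope sketch, where every figure (raw capacity, P/E cycles, daily write volume) is an assumption rather than a known spec:

Code:
/* Back-of-envelope endurance math for the over-provisioning idea.
   All figures are assumptions for illustration, not known specs. */
#include <stdio.h>

int main(void) {
    double raw_gb            = 16.0;    /* hypothetical raw flash    */
    double pe_cycles         = 10000.0; /* assumed program/erase limit */
    double writes_per_day_gb = 4.0;     /* assumed daily write volume */

    /* wear-leveling spreads writes over the RAW capacity,
       even though only 8GB would be exposed */
    double total_writable_gb = raw_gb * pe_cycles;
    double lifetime_years    = total_writable_gb /
                               (writes_per_day_gb * 365.0);
    printf("Total writable: %.0f TB\n", total_writable_gb / 1024.0);
    printf("Estimated life: %.0f years\n", lifetime_years);
    return 0;
}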
 

Satchel

Banned
It is a term used for all kinds of solid-state storage. Thumb drives and SD cards are also flash, but most of the time they are slower than HDDs. The 12GB PS3 uses flash, but performs poorly in tests.

Ok, fair enough. I assure you I'm no expert (duh); I was just basing my speculation off the common beliefs (i.e., misconceptions, obviously) about how these things work.
 

ElTorro

I wanted to dominate the living room. Then I took an ESRAM in the knee.
I agree that the hUMA term is not very helpful, since none of us seem to know the exact feature set that constitutes a hUMA architecture. In general, the slides align nicely with the leaked documents (after adapting the numbers to the new 853MHz clock). My take is:

  • In both XB1/PS4 the GPU can access main memory by probing the CPU's L2 caches (cache-coherency)
  • In both XB1/PS4 the GPU can (and must, to achieve peak bandwidth) bypass the CPU's caches at will (no cache-coherency)
  • ESRAM seems to be a GPU-only scratchpad with a dedicated address space and DMA support via the 4 move engines
  • The PS4 seems to have a fine-grained mechanism to bypass GPU cache lines (volatile tag) while the XB1 needs to flush the entire GPU cache (see the sketch below)
  • XB1 has 2 GFX and 2 ACE processors, the PS4 has 2 GFX and 8 ACE processors, which, in combination with the above-mentioned volatile tag, shows a bigger emphasis on GPGPU/HSA in general.
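Here's a rough C sketch of that fourth bullet, modeling cache lines as structs: a full flush writes back every dirty line, while the volatile-tag path touches only the tagged ones. Purely illustrative; real GCN hardware obviously doesn't look like this.

Code:
/* Toy model of the coherency difference: full GPU cache flush
   vs. selective writeback of lines carrying a 'volatile' tag. */
#include <stdio.h>

#define LINES 8

struct line { int dirty; int vol; };   /* vol = 'volatile' tag */
static struct line l2[LINES];

static int flush_all(void) {           /* XB1-style: everything */
    int written = 0;
    for (int i = 0; i < LINES; i++)
        if (l2[i].dirty) { l2[i].dirty = 0; written++; }
    return written;
}

static int flush_volatile(void) {      /* PS4-style: tagged only */
    int written = 0;
    for (int i = 0; i < LINES; i++)
        if (l2[i].dirty && l2[i].vol) { l2[i].dirty = 0; written++; }
    return written;
}

int main(void) {
    /* all 8 lines dirty, but only the first 2 tagged volatile */
    for (int i = 0; i < LINES; i++)
        l2[i] = (struct line){ .dirty = 1, .vol = (i < 2) };
    printf("volatile-only flush wrote %d lines\n", flush_volatile());
    printf("full flush wrote %d more lines\n", flush_all());
    return 0;
}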
 
I don't know, but this seems to suggest otherwise unless I'm misinterpreting the meaning. He was one of the presenters, and is a Microsoft hardware architect.

http://www.itworld.com/hardware/370538/xbox-one-will-have-high-performance-custom-chip?page=0,0

That says the shared memory pool for the CPU and GPU is actually a separate memory from the main DDR3 memory pool (a much, much smaller memory pool as well, I'm assuming).

This implies that the system will need to pull data from the main DDR3 memory pool into this separate pool so the GPU and CPU can work on the same data, and keep doing this swap in and swap out for new data structures.

It's not as elegant as the PS4 solution. The shared memory size of the PS4 is basically the main 8GB of GDDR5 memory (assumption on my part).
 

Sounddeli

Banned
Looking at it, I can see they will treat it like a turbocharged 360: ESRAM used as a framebuffer and for compute temp storage.

The one advantage this ESRAM has over the 360's is that it's addressable, which means your post-processing is going to be slicker than the PS4's by about 30GB/s, without stalls. That's a fair boost.

Man, this is going to be so much more even than the raw numbers can show, I think.
 

Klocker

Member
Most people can't hear the results of more expensive algorithms; humans are not particularly adept at picking up subtle audio differences. We are a visual species.

I know what bkilian has said; he also said it would not exist if not for Kinect.



http://forum.beyond3d.com/showpost.php?p=1765802&postcount=277



Perhaps that is how it started, but certainly not how it ended. 😊

And yes, I agree higher fidelity alone is not going to change the perception, especially not for me with tinnitus, but I would love more diverse and plentiful sounds with richer sound fields. I thought BF3 sounded amazing, but it could sound so much more like a movie.
 

Metfanant

Member
So having the OS based on the flash, and then using, say, 1GB of the RAM wouldn't make it a quick OS?

I might be reading too much into your post, but it seems like you're trying to get at the point that maybe this flash memory will allow MS to dedicate less RAM to the OS (3GB vs 1GB)?...

If that is where you're trying to get, the answer is no...
 
This is interesting:

http://venturebeat.com/2013/08/26/m...se-details-are-critical-for-the-kind-of-expe/



Looks like the processing is what will be causing most of the latency with Kinect now.

I was really excited to see this. Latency is that low on the thing now? That's pretty awesome.

That says the shared memory pool for the CPU and GPU is actually a separate memory from the main DDR3 memory pool (a much, much smaller memory pool as well, I'm assuming).

This implies that the system will need to pull data from the main DDR3 memory pool into this separate pool so the GPU and CPU can work on the same data, and keep doing this swap in and swap out for new data structures.

It's not as elegant as the PS4 solution. The shared memory size of the PS4 is basically the main 8GB of GDDR5 memory (assumption on my part).

Hmm, the way I understood it, in context with the diagrams, is that the CPU and GPU can both work together on the same data while using either pool of memory; only in the case of the ESRAM, the Xbox One will rely on the Guest Host GPU MMU to serve as a pathway to bring the necessary data from the ESRAM up to the CPU's cache-coherent memory access. If you look at one of the diagrams, you see quite clearly a blue coherent access pipeline, or pathway, feeding directly into the Memory Management Unit that the ESRAM is directly fed into. The ESRAM just appears to have one step more than the DDR3 does in order to achieve coherent memory access with the CPU cache.

I'll find the diagram that I believe shows this.

This is the one I was referring to.

[Image: Xbox One SoC block diagram]


Notice the blue coherency pathway leading into that Guest Host GPU MMU that the ESRAM is also piped into? It looks like one extra step for the ESRAM, which likely breaks the strict definition of 'hUMA', but I assume a nice capability to have nonetheless.

And to give credit to the site where I found the picture.

http://www.eetimes.com/document.asp?page_number=3&doc_id=1319316&image_number=3
 

ElTorro

I wanted to dominate the living room. Then I took an ESRAM in the knee.
Probably not, but it tells us that the Xbox One isn't quite as badly put together or designed as some were led to believe.

Discrediting the Microsoft engineers was always wrong. (I also find the jokes about the XB1's box size ridiculous.) The engineers certainly did a good job of implementing the requirements, which most noticeably included the guarantee of 8GB of RAM, from which everything else radiated. One can criticize those requirements, but that is not the engineers' fault.
 

chadskin

Member
Probably not, but it tells us that the Xbox One isn't quite as badly put together or designed as some were led to believe. It seems like a pretty rock solid design.

I can't shake the feeling it's just a bit too complicated, like they started out with an easy design and ended up just adding more and more layers where they saw fit to achieve their goal.
 
I can't shake the feeling it's just a bit too complicated, like they started out with an easy design and ended up just adding more and more layers where they saw fit to achieve their goal.

Well, they say it themselves. It's a complex design, but they had some obviously amazingly talented people working on it, so they probably did about as good a job as they could given the priorities of the system.

Discrediting the Microsoft engineers was always wrong. (I also find the jokes about the XB1's box size ridiculous.) The engineers certainly did a good job of implementing the requirements, which most noticeably included the guarantee of 8GB of RAM, from which everything else radiated. One can criticize those requirements, but that is not the engineers' fault.

Very true. A task was placed before them, and they saw it through to completion likely about as well as you could hope.
 

USC-fan

Banned
I agree that the hUMA term is not very helpful, since none of us seem to know the exact feature set that constitutes a hUMA architecture. In general, the slides align nicely with the leaked documents (after adapting the numbers to the new 853MHz clock). My take is:

  • In both XB1/PS4 the GPU can access main memory by probing the CPU's L2 caches (cache-coherency)
  • In both XB1/PS4 the GPU can (and must, to achieve peak bandwidth) bypass the CPU's caches at will (no cache-coherency)
  • ESRAM seems to be a GPU-only scratchpad with a dedicated address space and DMA support via the 4 move engines
  • The PS4 seems to have a fine-grained mechanism to bypass GPU cache lines (volatile tag) while the XB1 needs to flush the entire GPU cache
  • XB1 has 2 GFX and 2 ACE processors, the PS4 has 2 GFX and 8 ACE processors, which, in combination with the above-mentioned volatile tag, shows a bigger emphasis on GPGPU/HSA in general.

You are missing the Onion+ bus, which bypasses the GPU cache, for the PS4.
 
I don't know, but this seems to suggest otherwise unless I'm misinterpreting the meaning. He was one of the presenters, and is a Microsoft hardware architect.
Actually, the quotation you gave is not from one of the Microsoft architects; it's a paraphrase by the IDG News writer. But here it is:
Agam Shah said:
One unique aspect of the chip is a shared memory pool that can be accessed by CPUs, GPUs and other processors in the system. Typically, GPUs and CPUs have different memory systems, but the new features increase the overall addressable memory in the Xbox One. The GPUs and CPUs have also been modified to enable shared memory. Shared memory is also part of a specification being pushed by the HSA Foundation, which wants to blur the line between GPU and CPU memory to make programming easier.
Despite the callout to HSA at the end, this quote actually isn't very clear. The first two sentences could be about the Xbox 360, which obviously wasn't hUMA. The second two sentences seem to be about hUMA, but that depends on if "shared memory" means something different than "shared memory pool". The bit about "part of a specification" also casts doubt.

The One may well have hUMA capabilities, but this quote doesn't prove that at all.

A quick look at some of the Xbox One games shown so far should probably indicate at a bare minimum that with or without 'hUMA' the Xbox One is not desperately lacking graphics performance.
Of course not, because no one would be using extensive hUMA techniques right now. No matter which console(s) have the feature, it probably won't be visible for several years. It's just another possible contribution to the improvement all generations see over time.
 

USC-fan

Banned
That's the implementation of the volatile tag.

Cerny: The GPGPU for us is a feature that is of utmost importance. For that purpose, we've customized the existing technologies in many ways.

Just as an example: when the CPU and GPU exchange information in a generic PC, the CPU inputs information, and the GPU needs to read the information and clear the cache, initially. When returning the results, the GPU needs to clear the cache, then return the result to the CPU. We've created a cache bypass. The GPU can return the result using this bypass directly. By using this design, we can send data directly from the main memory to the GPU shader core. Essentially, we can bypass the GPU L1 and L2 cache. Of course, this isn't just for data read, but also for write. Because of this, we have an extremely high bandwidth of 10GB/sec.

Also, we've added a little tag to the L2 cache. We call this the VOLATILE tag. We are able to control data in the cache based on whether the data is marked with VOLATILE or not. If this tag is used, this data can be written directly to the memory. As a result, the entirety of the cache can be used efficiently for graphics processing.

This function allows for harmonization of graphics processing and computing, and allows for efficient functioning of both. Essentially, 'harmony' in Japanese. We're trying to replicate the SPU Runtime System (SPURS) of the PS3 by heavily customizing the cache and bus. SPURS is designed to virtualize and independently manage SPU resources. For the PS4 hardware, the GPU can also be used in an analogous manner as x86-64 to use resources at various levels. This idea has 8 pipes and each pipe(?) has 8 computation queues. Each queue can execute things such as physics computation middleware, and other proprietarily designed workflows. This, while simultaneously handling graphics processing.

This type of functionality isn't used widely in the launch titles. However, I expect this to be used widely in many games throughout the life of the console, and I see this becoming an extremely important feature.

Getting a little off topic. VOLATILE tags were added to GCN and deal with the L2 cache; the Onion+ bus bypasses the GPU cache altogether.
 
I agree that the hUMA term is not very helpful, since none of us seem to know the exact feature set that constitutes a hUMA architecture. In general, the slides align nicely with the leaked documents (after adapting the numbers to the new 853MHz clock). My take is:

  • In both XB1/PS4 the GPU can access main memory by probing the CPU's L2 caches (cache-coherency)
  • In both XB1/PS4 the GPU can (and must, to achieve peak bandwidth) bypass the CPU's caches at will (no cache-coherency)
  • ESRAM seems to be a GPU-only scratchpad with a dedicated address space and DMA support via the 4 move engines
  • The PS4 seems to have a fine-grained mechanism to bypass GPU cache lines (volatile tag) while the XB1 needs to flush the entire GPU cache
  • XB1 has 2 GFX and 2 ACE processors, the PS4 has 2 GFX and 8 ACE processors, which, in combination with the above-mentioned volatile tag, shows a bigger emphasis on GPGPU/HSA in general.

Fantastic post. This seems to be the most concise explanation of the differences yet. Although, this is the first time we've had this much info on both consoles. Good job. Saving this one for future reference. Gotta say this thread has been pretty kickass so far. A lot of exchanging of info. Learning a few new things, and nobody (yet) is talking like they want to kill the other guy :p
 

ElTorro

I wanted to dominate the living room. Then I took an ESRAM in the knee.
Onion+ and the volatile tag are different things.

From my understanding of this figure:


Onion is the cache-coherent access to main memory via the CPU's L1/L2 caches, while Onion+ bypasses the GPU caches. The difference between the two is whether the volatile tag is set or not.

Next, to support the case where you want to use the GPU L2 cache simultaneously for both graphics processing and asynchronous compute, we have added a bit in the tags of the cache lines, we call it the 'volatile' bit. You can then selectively mark all accesses by compute as 'volatile,' and when it's time for compute to read from system memory, it can invalidate, selectively, the lines it uses in the L2. When it comes time to write back the results, it can write back selectively the lines that it uses. This innovation allows compute to use the GPU L2 cache and perform the required operations without significantly impacting the graphics operations going on at the same time -- in other words, it radically reduces the overhead of running compute and graphics together on the GPU.

http://www.gamasutra.com/view/feature/191007/inside_the_playstation_4_with_mark_.php?page=2
 

chadskin

Member
While googling around, I found a nice quote from 2009.

"We don't provide the 'easy to program for' console that (developers) want, because 'easy to program for' means that anybody will be able to take advantage of pretty much what the hardware can do, so then the question is, what do you do for the rest of the nine-and-a-half years?"
--Kaz Hirai, CEO, Sony Computer Entertainment

Seems like the tables have turned, with the PS4 being the (presumably) "easier to program for" console.
 

USC-fan

Banned
From my understanding of this figure:



Onion is the cache-coherent access to main memory via the CPU's L1/L2 caches, while Onion+ bypasses the GPU caches. The difference between the two is whether the volatile tag is set or not.



http://www.gamasutra.com/view/feature/191007/inside_the_playstation_4_with_mark_.php?page=2

Volatile tags deal with the L2 cache; the data is still stored in the L2 cache. The Onion bus bypasses the L2/L1 cache. Onion+ has not been added to any AMD product, but Onion and volatile tags have.
 
Actually, the quotation you gave is not from one of the Microsoft architects, it's a paraphrase by the IDG News writer. But here it is:

Despite the callout to HSA at the end, this quote actually isn't very clear. The first two sentences could be about the Xbox 360, which obviously wasn't hUMA. The second two sentences seem to be about hUMA, but that depends on if "shared memory" means something different than "shared memory pool". The bit about "part of a specification" also casts doubt.

The One may well have hUMA capabilities, but this quote doesn't prove that at all.


Of course not, because no one would be using extensive hUMA techniques right now. No matter which console(s) have the feature, it probably won't be visible for several years. It's just another possible contribution to the improvement all generations see over time.

All good points, especially regarding the paraphrasing of the HSA callout. It's hard to know for certain, but I've more or less come to the conclusion from some reading that the XB1 maybe doesn't include 'hUMA', but perhaps has its own solution that accounts for their ESRAM implementation, which obviously seems like it may end up proving quite useful for the system.

Not sure I'd discount MS's ability to write tools to take advantage of all of this, as they always have

This. If anybody can make the perfect tools to get the most out of this, it's probably Microsoft, as devs seem to historically praise their tools.
 
While googling around, I found a nice quote from 2009.



Seems like the tables have turned, with the PS4 being the (presumably) "easier to program for" console.
Sony has stated that they made custom modifications to exploit later on, so best of both worlds. The PS3's approach provided a lot of hardships, so now there's a chance for the PS2 days of glory to return.
 

ElTorro

I wanted to dominate the living room. Then I took an ESRAM in the knee.
Seems like the tables have turned, with the PS4 being the (presumably) "easier to program for" console.

The PS4 will be easier to program for, since you don't have to think about what (and how much of it) fits into the ESRAM. The ESRAM will have to contain framebuffers for sure, since performance would go down the toilet otherwise. Framebuffers, however, can easily take more than 32MB at 1080p if you are employing deferred (two-pass) rendering. So that will be a limitation and a puzzle, since you have to evaluate multiple trade-offs to find the best one. (A quick back-of-envelope calculation is below.)

However, it won't require you to turn over your entire engine. The difficulties should be confined to rendering. The XB1 is not as "hard" to understand as the console architectures of the past.
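The quick back-of-envelope version of the 32MB point, assuming four 32-bit render targets plus a 32-bit depth/stencil buffer (the target count and formats are my assumptions, just a plausible deferred setup):

Code:
/* Quick arithmetic for a deferred G-buffer at 1080p vs. 32MB
   of ESRAM. Target count and formats are assumptions. */
#include <stdio.h>

int main(void) {
    const double mb = 1024.0 * 1024.0;
    int w = 1920, h = 1080;
    /* assume four 32-bit render targets + 32-bit depth/stencil */
    int targets = 4, bytes_per_px = 4, depth_bytes = 4;
    double gbuffer = (double)w * h *
                     (targets * bytes_per_px + depth_bytes) / mb;
    printf("G-buffer: %.1f MB vs 32 MB of ESRAM\n", gbuffer);
    return 0;   /* prints ~39.6 MB, i.e. it doesn't fit */
}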
 

Sounddeli

Banned
The PS4 will be easier to program for, since you don't have to think about what (and how much of it) fits into the ESRAM. The ESRAM will have to contain framebuffers for sure, since performance would go down the toilet otherwise. Framebuffers, however, can easily take more than 32MB at 1080p if you are employing deferred (two-pass) rendering. So that will be a limitation and a puzzle, since you have to evaluate multiple trade-offs to find the best one.

However, it won't require you to turn over your entire engine. The difficulties should be confined to rendering. The XB1 is not as "hard" to understand as the console architectures of the past.

Devs have been developing for the Xbox 360 for the past 8 years. I don't think the ESRAM will be that complicated.
 

badb0y

Member
While googling around, I found a nice quote from 2009.



Seems like the tables have turned, with the PS4 being the (presumably) "easier to program for" console.
Correct. That strategy clearly didn't work well for Sony, since developers complained about their hardware for some 7 years or so.
 