
AMD: PlayStation 4 supports hUMA, Xbox One does not

This is how AMD explains hUMA.



I have no clue; in my simple mind, I don't see what changed from the unified memory perspective. Isn't this how it was always meant to be for both consoles?

Edit: OK, I didn't read the added description.

One big thing people might be missing, not only is processing power being saved with hUMA, but PS4 effectively has even more memory now since data isn't duplicated in memory for the GPU and CPU. A single pointer can now be used by both the GPU and CPU.
 

RoboPlato

I'd be in the dick
I expected this result but it's shocking that AMD themselves came out and said this. I guess they're really going to be pushing hUMA APUs in the next few years and want to talk it up.
 

twobear

sputum-flecked apoplexy
We're not going to know from these launch titles, because most of them are being rushed. Also, weird example, because at E3 DriveClub < Forza 5 = GT6.
Since they're all being rushed, though, I'm assuming that the detriment to each is the same. I'm not assuming that they're all indicative of what visuals will look like in three years, only that if the gap were as big as the one between Xbox and Dreamcast (or even PS2), we'd already see it. Unless you think Forza 5 is as good as Xbone will ever look and Driveclub is only utilising ~20% of the PS4's power, of course. Which is possible but seems unlikely to me.

[edit] I picked DC and F5 because they're both racers. I think the point remains valid if you pick, say, Fable Heroes (Legends?) and Knack, though. The visuals in Knack are noticeably better but they're not Dreamcast to Xbox level.
 
Interesting.

Well it matches up with CBOAT saying that PS4 games are ready to master while Xbox is stuck in development hell with the latest SDK dropping the performance by 20%. I'm sure once it is optimised over the next couple of months the performance gap will shrink. I'll get him to ask the third parties what they think will happen.
 
Good. GOOD.

Also here is Yung Humma!
yung-humma-o.gif

YES I was waiting for some Turquoise Jeep.
 
It's better to start at the top and scale down than it is to start at the bottom and scale up, because once you have your lead platform -- the platform that has the most to offer -- it's simply a matter of reducing the likes of internal rendering resolution, texture resolution, particle effects, etc. to accommodate the weaker hardware. Doing things the other way around cripples the higher-end platform/s needlessly with no practical benefit to the developer.

Welp, I learned something new today. I just assumed it was easier the other way around since that's how it was done with 360 and PS3. But given that they had different internal architectures, I suppose that makes sense.

And Soul Destroyer said the PC will be the lead platform this gen, so it sounds like the PC version will look best, followed by PS4 and Xbone. Hopefully that means no gimped PC versions this time around (I'm looking at you, Dark Souls)
 

Pug

Member
Well it matches up with CBOAT saying that PS4 games are ready to master while Xbox is stuck in development hell with the latest SDK dropping the performance by 20%. I'm sure once it is optimised over the next couple of months the performance gap will shrink. I'll get him to ask the third parties what they think will happen.

Yeah, also ask him when the PS4 is launching and about MS's indie plans!
 
Whatever, we will see when the machines launch and we actually get our hands on the games. If there are any major differences (like this article suggests) then I will be very surprised.

Great counter argument there. Totally invalidates what AMD said. Time to lock up the thread I guess.
 

Cornbread78

Member
GDDR5 only makes a small difference in latency; I doubt it'll affect gameplay that much. Maybe right out of the gate, but in the long run the systems will have pretty much the same performance in games. Not gonna argue for either one here, though. Remember all those graphics comparisons between the 360 and the PS3, and how everybody made it a huge deal, but when it came down to it, it didn't really make a difference? I mean, TLOU looks amazing, as does Halo 4.

I don't think anyone will doubt they will both look awesome; however, I believe people can easily point out that the difference in architecture this gen is much greater than last gen. With the gap growing every day, we should expect to see a difference if the same game is booted up on both machines side by side and played.
 

vg260

Member
Too hUMAn a PS4 exclusive?

Anyway, for those potentially interested in both systems, this certainly seems significant. For example, I'm pretty much going to make one system my go-to for 3rd party games, even if I might consider the other for exclusives.
 
But wouldn't it make sense for developers to develop games for the weaker hardware? If the architectures are identical it should be easy to port games, right? So I would imagine they would develop for the xbone and port. So for third party games at least I don't imagine seeing significant improvements on the ps4 version. With development costs the way they are I can't see developers spending significantly more time on the ps4 version to improve visuals.
The way I've heard it, devs will have an easier time getting better performance due to the ease of development for the PS4. While not exactly free, there's less overhead to deal with, so if the power is there it will be put to use.
 
Someone explain to me what this means in Dragonball Z terms!

Fusion?Kaioken?Babidi Mindcontrol?


In all seriousness: copying data around between memory pools instead of passing memory addresses: is this something that makes programming much easier and improves performance that much? How much latency does the copying add to the processing of a single frame?
 

cyberheater

PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 Xbone PS4 PS4
From the AMD site:

AMD demonstrates the technology's functionality with the following example: without hUMA, the CPU must first explicitly copy data to GPU memory; the GPU completes the computation, and then the CPU must explicitly copy the result back to CPU memory in order for it to be read. With hUMA, the CPU can simply pass a pointer to the GPU, which completes the computation and produces a result that the CPU can directly read without any copying required.

Okay. This sounds good but it's comparing a standard PC architecture with separate memory pools for CPU memory and GPU memory to a unified memory architecture like PS4.

I don't see anything here that can't be done on XBO.
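The copy-vs-pointer distinction in the AMD quote can be sketched in plain C. This is a toy simulation only: the local array just stands in for a separate GPU memory pool, and the function names are invented for illustration.

```c
#include <string.h>

#define N 4

/* Discrete-memory model: the CPU stages data into a separate "GPU"
 * buffer, the kernel runs there, and the result is copied back. */
static int sum_with_copies(const int *cpu_data) {
    int gpu_buf[N];
    memcpy(gpu_buf, cpu_data, sizeof gpu_buf);      /* CPU -> GPU copy */

    int gpu_result = 0;                             /* "GPU" kernel */
    for (int i = 0; i < N; i++)
        gpu_result += gpu_buf[i];

    int cpu_result;
    memcpy(&cpu_result, &gpu_result, sizeof cpu_result); /* GPU -> CPU copy */
    return cpu_result;
}

/* hUMA-style model: the "GPU" reads the CPU's data through the very
 * same pointer, so both staging copies disappear. */
static int sum_shared(const int *shared_data) {
    int result = 0;
    for (int i = 0; i < N; i++)
        result += shared_data[i];
    return result;
}
```

Both versions compute the same answer; the entire difference is the two staging copies, which is exactly the traffic hUMA claims to eliminate.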
 

orioto

Good Art™
Since they're all being rushed, though, I'm assuming that the detriment to each is the same. I'm not assuming that they're all indicative of what visuals will look like in three years, only that if the gap were as big as the one between Xbox and Dreamcast (or even PS2), we'd already see it. Unless you think Forza 5 is as good as Xbone will ever look and Driveclub is only utilising ~20% of the PS4's power, of course. Which is possible but seems unlikely to me.

[edit] I picked DC and F5 because they're both racers. I think the point remains valid if you pick, say, Fable Heroes (Legends?) and Knack, though. The visuals in Knack are noticeably better but they're not Dreamcast to Xbox level.

But those games take really different approaches, which is illustrative of the first-party games on both consoles.

Forza plays it safe, avoiding overly taxing techniques while still offering nice eye candy, while DC tries to be a next-gen showcase of every costly thing imaginable.
 

Dragon

Banned
At this point I think some here would probably commit suicide if they couldn't go another day without dick waving. :) But the games on both machines tell a different story. There's no huge jump from one to the other. Both machines look like they could easily pull off the same visuals should their exclusives ever trade places.

I've seen it all. Someone being insecure about a video game console's performance.
 

PJV3

Member
On a classical system you have a RAM pool and a VRAM pool that are physically separated. Copying data from one pool to the other creates latency. The GPU is very good at hiding latency; what it needs most is high bandwidth. The CPU, on the other hand, is extremely sensitive to latency and needs extremely low latency to work efficiently. Copying data from the RAM (CPU) to the VRAM (GPU) creates latency, but that's okay for the GPU. Copying data from RAM (CPU) to VRAM (GPU) and back to the RAM (CPU) creates even more latency. It's too much for the CPU. The copying alone takes longer than the computation, which makes this roundtrip highly inefficient.

Xbox 360 and older APUs have unified RAM. This means that the RAM is no longer physically separated, but even though it's the same RAM chips, the system still distinguishes between memory partitions for the different processors. You still need to copy the data between the CPU partition and the GPU partition, but this is much more efficient than copying it between physically separated pools. It's still too much latency for a CPU-GPU-CPU roundtrip, though.

PS4 will have hUMA, which means that you no longer need a distinction between a CPU partition and a GPU partition. Both processors can use the same pieces of data at the same time. You don't need to copy stuff, and this allows for completely new algorithms that utilize the CPU and GPU at the same time. This is interesting since a GPU is very strong but extremely dumb, while a CPU is extremely smart but very weak. Since you can utilize both processors at the same time for a single task, you have a system that is extremely smart and extremely strong at the same time.

It will allow for an extreme boost for many, many algorithms and parts of algorithms. On top of that it will allow for completely new classes of algorithms. This is a game changer.

I don't understand really, but is it like Batman and Superman having a baby?

Actually you explained it very well, thanks.
 
Welp, I learned something new today. I just assumed it was easier the other way around since that's how it was done with 360 and PS3. But given that they had different internal architectures, I suppose that makes sense.

And Soul Destroyer said the PC will be the lead platform this gen, so it sounds like the PC version will look best, followed by PS4 and Xbone. Hopefully that means no gimped PC versions this time around (I'm looking at you, Dark Souls)

It was done that way because the 360 was MUCH easier for developers to use than the PS3 for most of that system's lifespan. So the PS3 was more powerful, but only in theory; in a practical sense, most devs could get more out of the 360 than they could from the PS3 for many years. On top of that, the 360 was selling much better and had a larger install base, thanks to a lower price point and an additional year on the market. Making PS3 the lead didn't really make sense, outside of titles that sold heavily in JP.

PC will be the lead platform initially, but I expect that to change as the install base of both consoles increases. Power gaps aside, PS4/Xbone are a lot more similar to each other than they are to the PC, and as the userbase increases you'll likely see developers target those first and then port over to PC.
 

ElTorro

I wanted to dominate the living room. Then I took an ESRAM in the knee.
I don't believe that; it sounds like press release talk about the new technology.

Still "funny" that an AMD representative is so public about the performance differences of its customer's competing products.
 

Panajev2001a

GAF's Pleasant Genius
There's nothing in the hUMA outlines I've seen that indicates any sort of 3D performance increase. It's all to do with making GPGPU calculations easier.

... and it could be argued that more and more calculations for graphics-related tasks would also move to compute shaders/GPGPU.
 

RoboPlato

I'd be in the dick
yay! another thread about how the ps4 is so superior to the xbox one and we should all buy ps4

/sarcasm
Really? This is a tech-based statement from the company supplying the tech for both consoles. Making this statement publicly is pretty surprising and worth a thread and discussion.
 
DerZuhälter;77502041 said:
Someone explain to me what this means in Dragonball Z terms!

Fusion?Kaioken?Babidi Mindcontrol?

PS4 CPU = Majin Vegeta
PS4 GPU = SSJ3 Goku
PS4 gddr5 memory = Mystic Gohan

hUMA is the fusion of vegeta, goku, and Gohan.
 

RetroStu

Banned
Great counter argument there. Totally invalidates what AMD said. Time to lock up the thread I guess.

My 'counter argument' is that we will see when the machines launch and we can actually play finished games. I'm getting both consoles so it matters little to me, and games from this generation still impress me, so I'm not going to worry about a machine with upwards of 10 times the power of current gen, whether this rumour is true or not.
 
Absolute bollocks.

Although his wording/phrasing isn't quite right, it's easy to see the point he is trying to put across (which is that changes/calculations are seen by both the CPU and GPU in memory in real time, without the need to copy or flush caches to update), so dismissing his post as "Absolute bollocks" is not needed. He clearly said at the start of the post that it was not a tech interpretation; it was what he thought with his knowledge.

If you are not happy with the way the console of your choice is shaping up, please refrain from throwing your toys out of the pram.
 

twobear

sputum-flecked apoplexy
But those games have a really different approach, that illustrates most first party games for both consoles.

Forza play the safe card without using too taxing techs and having nice eye candy, while DC try to be a next gen showcase of every costly things imaginable.
...and Driveclub looks better as a result. Not sure what your point is here? You think that DC is 4.5x the computational workload of F5? Okay...
 

Hana-Bi

Member
AMD even spoke, at one point, about the idea of using an embedded eDRAM chip as a cache for GPU memory — essentially speaking to the Xbox Durango’s expected memory structure. The following quote comes from AMD’s HSA briefing/seminar:


“Game developers and other 3D rendering programs have wanted to use extremely large textures for a number of years and they’ve had to go through a lot of tricks to pack pieces of textures into smaller textures, or split the textures into smaller textures, because of problems with the legacy memory model… Today, a whole texture has to be locked down in physical memory before the GPU is allowed to touch any part of it. If the GPU is only going to touch a small part of it, you’d like to only bring those pages into physical memory and therefore be able to accommodate other large textures.

With a hUMA approach to 3D rendering, applications will be able to code much more naturally with large textures and yet not run out of physical memory, because only the real working set will be brought into physical memory.”

This is broadly analogous to hardware support for the MegaTexturing technology that John Carmack debuted in Rage.

http://www.extremetech.com/gaming/1...u-memory-should-appear-in-kaveri-xbox-720-ps4

This is from April. Don't see a reason why the X1 shouldn't have hUMA.
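The "only bring the touched pages into physical memory" idea from the quote is easy to make concrete. A minimal sketch below maps a texel coordinate to the page that contains it; only sampled pages would need to be resident. The 128x128 page dimension is an assumption for illustration (real GCN PRT tile sizes depend on the texel format), and `page_index` is an invented helper, not a real API.

```c
#define PAGE_DIM 128  /* texels per page side; illustrative only */

/* Map a texel coordinate to the index of the page containing it,
 * for a texture tex_width texels wide. A streaming renderer would
 * only page in the indices its frames actually produce. */
static int page_index(int x, int y, int tex_width) {
    int pages_per_row = tex_width / PAGE_DIM;
    return (y / PAGE_DIM) * pages_per_row + (x / PAGE_DIM);
}
```

For a 1024-texel-wide texture there are 8 pages per row, so texel (130, 0) lands in page 1 and (0, 130) in page 8; a scene touching a handful of such pages leaves the rest of a huge texture out of physical memory entirely.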
 

joshcryer

it's ok, you're all right now
It will allow for an extreme boost for many, many algorithms and parts of algorithms. On top of that it will allow for completely new classes of algorithms. This is a game changer.

This is at least what AMD is hoping for. We'll have to see what happens and if it's really scalable. From what I know of architecture and software the whole unified architecture is the future, you basically want one address space, in the end. It minimizes IPC which is the biggest cause of latency in any software system. (I'm not saying that PS4 is a single address space system that does that, mind you, I'm saying PC architecture would benefit greatly if we had such a system.)

This is from April. Don't see a reason why the X1 shouldn't have hUMA.

The ESRAM rules out hUMA because they need to have a very fast cache to put stuff in when they're copying it in and out of GPU/CPU space. Yes, it has unified RAM, but the CPU allots how much virtual memory both the CPU and GPU get access to at a given time. It's all virtual, which is why they need the ESRAM to speed up the bottlenecks.
 

FeiRR

Banned
Can someone from TechGAF explain what the true advantage of this technology is?

What is the difference between hUMA and normal unified memory? And how does this impact the PS4 and X1 architectures?

Imagine you had two fridges at home: one normal fridge and a small, convenient fridge just within the reach of your arm, at your couch. Neat idea, isn't it? But you'd have to remember that your small couch fridge needs to be refilled every time you want to play and have a beer. This means carrying all the beers from the big fridge to the small fridge before playing. Now, if you have a PS4, that small fridge magically fills with beer from the big fridge! The beers teleport there at will!

Now imagine that your big fridge is an X1. The beers there don't cool down as fast as you'd like them to so you have a very small freezer under the stairs. It's very efficient but it's nowhere near the couch and, sorry, no magic of teleportation!
 

TronLight

Everybody is Mikkelsexual
This is how AMD explains hUMA.



I have no clue, so for me in my simple mind, I don't see what changed from the unified memory perspective. Isn't this how it always was meant to be for both consoles?

Basically it's something like this:

Without hUMA: 8GB of RAM can be accessed by both the GPU and CPU, but if your GPU needs 4GB for graphics and your CPU needs 2GB for logic, it works as if you have a split pool for the GPU and CPU, because they cannot communicate directly.
It's different from a physically split pool because, like on PS3, with a physically split pool you have, say, 4GB for the GPU and 4GB for the CPU, full stop; you can't go overboard. With a unified pool you can give the GPU 6GB and the CPU 2, or 7/1, or 5/3, however you like. But those two pools cannot communicate directly.
Say the GPU needs 1MB of data that the CPU is using. The GPU needs to ask the CPU first, then the CPU will copy the 1MB and the GPU will use it. This takes time.


With hUMA: Everything is shared. Both the GPU and CPU can use the same 8GB of data in the RAM, and they can communicate directly. So if the GPU needs that 1MB of data, it just takes it, without having to tell the CPU to copy it over to its RAM pool.

At least, that's what I understood.

Edit: Beaten by Wicked. :lol
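The "flexible split, but still two pools" situation described above can be sketched as a toy allocator. All names and the whole-GB granularity are invented for illustration: the point is that even though the CPU/GPU split of the unified pool is adjustable, each side can only allocate within its own partition, so a request can fail while the other side still has free space.

```c
#define POOL_GB 8  /* total unified pool, in illustrative GB units */

/* A unified-but-partitioned pool: cpu_limit GB are reserved for the
 * CPU and the GPU gets the remainder. The boundary is enforced. */
typedef struct {
    int cpu_limit;
    int cpu_used;
    int gpu_used;
} pool_t;

static int alloc_cpu(pool_t *p, int gb) {  /* returns 1 on success */
    if (p->cpu_used + gb > p->cpu_limit) return 0;
    p->cpu_used += gb;
    return 1;
}

static int alloc_gpu(pool_t *p, int gb) {
    if (p->gpu_used + gb > POOL_GB - p->cpu_limit) return 0;
    p->gpu_used += gb;
    return 1;
}
```

With a 2/6 split, a third CPU gigabyte is refused even while GPU space sits free; under hUMA-style sharing there is no such boundary to hit, since both processors address one pool.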
 

strata8

Member
... and it could be argued that more and more calculations for graphics-related tasks would also move to compute shaders/GPGPU.

It might allow you to be more flexible with what you do, but you won't magically unlock higher performance by moving to software-based rendering.
 

ElTorro

I wanted to dominate the living room. Then I took an ESRAM in the knee.
But PRT is GCN technology, even PC 7xxx cards have it already.

The point of PRT is also that the texture tiles of large textures can fit into the GPU's cache. It's not just about VRAM and main memory.
 

jaosobno

Member
On a classical system you have a RAM pool and a VRAM pool that are physically separated. Copying data from one pool to the other creates latency. The GPU is very good at hiding latency; what it needs most is high bandwidth. The CPU, on the other hand, is extremely sensitive to latency and needs extremely low latency to work efficiently. Copying data from the RAM (CPU) to the VRAM (GPU) creates latency, but that's okay for the GPU. Copying data from RAM (CPU) to VRAM (GPU) and back to the RAM (CPU) creates even more latency. It's too much for the CPU. The copying alone takes longer than the computation, which makes this roundtrip highly inefficient.

Xbox 360 and older APUs have unified RAM. This means that the RAM is no longer physically separated, but even though it's the same RAM chips, the system still distinguishes between memory partitions for the different processors. You still need to copy the data between the CPU partition and the GPU partition, but this is much more efficient than copying it between physically separated pools. It's still too much latency for a CPU-GPU-CPU roundtrip, though.

PS4 will have hUMA, which means that you no longer need a distinction between a CPU partition and a GPU partition. Both processors can use the same pieces of data at the same time. You don't need to copy stuff, and this allows for completely new algorithms that utilize the CPU and GPU at the same time. This is interesting since a GPU is very strong but extremely dumb, while a CPU is extremely smart but very weak. Since you can utilize both processors at the same time for a single task, you have a system that is extremely smart and extremely strong at the same time.

It will allow for an extreme boost for many, many algorithms and parts of algorithms. On top of that it will allow for completely new classes of algorithms. This is a game changer.

Based W!CKED.
 

TheHater

Member
At this point I think some here would probably commit suicide if they couldn't go another day without dick waving. :) But the games on both machines tell a different story. There's no huge jump from one to the other. Both machines look like they could easily pull off the same visuals should their exclusives ever trade places.

It's kinda hard to compare games between Xbox One and PS4 when the Xbox One games are running on PC and the PS4 games are running on PS4 hardware.
 
Aren't both MS and Sony customers of AMD? Weird to make one of them look bad.

MS spent a lot on their own design IIRC, and Sony worked with AMD to push toward the HSA vision that AMD has wanted for so long. It's beneficial to them to promote the power increases that it has over other designs.
 

cyberheater

PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 Xbone PS4 PS4
Although his wording/phrasing isn't quite right, it's easy to see the point he is trying to put across (which is that changes/calculations are seen by both the CPU and GPU in memory in real time, without the need to copy or flush caches to update), so dismissing his post as "Absolute bollocks" is not needed. He clearly said at the start of the post that it was not a tech interpretation; it was what he thought with his knowledge.

If you are not happy with the way the console of your choice is shaping up, please refrain from throwing your toys out of the pram.

Hey, my preferred next-gen console (PS4) is shaping up just nicely, thank you. I just don't take the OP's word at face value and try to apply some common sense to the OP's implication.

Also, for the GPU or CPU to 'see' the updated data in real time, the respective caches still need to complete a read from the main memory pool (I think).
I still don't see why there is an advantage over what XBO does. I'm missing a piece of the puzzle.
 