
Ubisoft GDC EU Presentation Shows Playstation 4 & Xbox One CPU & GPU Performance- RGT

Actually the business side had the last call; cost is everything to these companies now.

It's also for consumers mostly. A $499 or $599 PS4 would sell a lot less than the PS4 with its current specs and price. Most consumers couldn't care less about that added horsepower. They want an affordable console with a clear upgrade over the previous gen. The PS4 is exactly that.
 
Doesn't this fly in the face of what this thread is about? They're saying that a task that last generation could have been done on the CPU can this time be offloaded to the GPU. So on the 360 you had to use the CPU to make 38 dancers, whereas on the PS4 you can use the GPU to do it and make 1600 of them.

I'm sure that some tasks can't be offloaded to the GPU, of course.

My point is that it is not an offload where all of a sudden you have 5 ms freed up in the rendering pipeline. That is not how the "offload" works out, even in the best circumstances.

Also, not all work moves over to the GPU... otherwise GPUs wouldn't be the specialized beasts that they are.
 

vpance

Member
Current GPU compute is more efficient than CELL:

CELL ~200 Gflops for 105 dancers = 0.52 dancers / Gflop
GPU Compute 1840 Gflops for 1600 dancers = 0.87 dancers / Gflop
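For anyone who wants to sanity-check that, here's a trivial sketch of the division (the GFLOPS figures and dancer counts are just the numbers quoted above, so treat them as ballpark):

```python
# Dancers-per-GFLOP comparison using the figures quoted above.
# Peak GFLOPS are theoretical maxima, so this is only a rough efficiency proxy.
platforms = {
    "CELL (PS3)":        {"gflops": 200.0,  "dancers": 105},
    "GPU compute (PS4)": {"gflops": 1840.0, "dancers": 1600},
}

for name, p in platforms.items():
    print(f"{name}: {p['dancers'] / p['gflops']:.2f} dancers per GFLOP")
# Prints roughly 0.52-0.53 and 0.87, matching the numbers above.
```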
 

maneil99

Member
Intel's chips are much, much more efficient than AMD's: they use less power and produce less heat, all while being more powerful. The decision to go with AMD chips will have been a cost-reduction one, as Intel is not cheap. But Intel is better than AMD in every metric for CPUs.
Not for APUs. Going Intel would have required a dedicated GPU.
 
I'm really glad to see some solid numbers and have an idea of what these systems are capable of. This is the fuel of fanboys but, as a consumer, I like to know what I'm buying. This is informative as hell.
 

twobear

sputum-flecked apoplexy
My point is that it is not an offload where all of a sudden you have 5 ms freed up in the rendering pipeline. That is not how the "offload" works out, even in the best circumstances.

Also, not all work moves over to the GPU... otherwise GPUs wouldn't be the specialized beasts that they are.
Yes, I realise, GPGPU stuff isn't free.

But my understanding of these slides is that Ubisoft are saying that just looking at the CPU benchmarks and assuming that there's an issue isn't the right way of thinking about it, because tasks that could only be done on the CPU last generation can be done by the GPU this generation.
 
When games look like this,

[Uncharted 4 vertical slice screenshots]
I don't really care about the CPU. The GPU is where you're going to see a lot of tasks offloaded, and that will have a bigger effect on graphics.
While last-gen consoles had great CPUs, their memory and GPUs are far weaker than what's in these consoles.


So you're just going to ignore the rest of the console?

Posting images of Uncharted 4's vertical slice means nothing. If you are impressed by that, just imagine what ND could have achieved with better hardware.
 

mrklaw

MrArseFace
There are always going to be bottlenecks somewhere. Last gen, developers had to start using multithreaded code to get the most out of the 360, and especially the PS3. That is even more the case with current-gen consoles, as each core is relatively weak. Look how many PC games don't do that properly (perhaps because they don't need to, since PC CPUs are relatively high-powered).

This gen will be all about using GPGPU efficiently, and I think those developers that pushed the PS3 will have a potential head start; being used to applying non-standard approaches to graphics rendering and using the SPEs will help them move to GPGPU.
 

hodgy100

Member
Not for APUs. Going Intel would have required a dedicated GPU.
Yeah, Intel's iGPUs are pants. But then that's talking about GPUs, not CPUs. I just found the claim that an Intel chip would be more power hungry and produce more heat, making it a "brute force" method, rather amusing.
 
Why is Ubisoft using DirectX 11 on the PS4? It is my understanding that while the PS4 has a DirectX 11 compatibility wrapper around its own API, it is not the most efficient way to access the hardware. Is this a correct interpretation?
playstation-4-directx-11-HLSL.jpg
 

i-Lo

Member
Posting images of Uncharted 4's vertical slice means nothing. If you are impressed by that, just imagine what ND could have achieved with better hardware.

That's like saying the same about what any developer could achieve on PC.

This is an assessment of a closed system and the limitations within which developers have to work; progressive optimization is the key to iterative improvements, as in any gen.
 
Current GPU compute is more efficient than CELL:

CELL ~200 Gflops for 105 dancers = 0.52 dancers / Gflop
GPU Compute 1840 Gflops for 1600 dancers = 0.87 dancers / Gflop

The CELL as we know it is also several years old. AFAIK no CPU manufacturer kept looking in that direction, but I may be wrong.
 
Not really; it seems like having unnecessarily high GPU spec requirements, because the GPUs are being used to do work that should be done by the CPU, is going to be a thing that continues.

That's what I'm scared about too.
I'm already holding off on upgrading my GPU, as I'm not sure if I want anything below 6 GB for the next 2-3 years.
 

Shpeshal Nick

aka Collingwood
If I'm reading all that info correctly (probably not), I think I can see why last gen lasted so long and produced some amazing results visually.
 

Avtomat

Member
Maybe one day, when the CPU bottleneck ends up impeding more than just having a bazillion AI characters on screen at once. Until then, I don't know what bottom-line difference it makes, or will make moving forward. Don't both consoles offload a bunch of CPU processing to dedicated audio chips and to the GPU as well?

Exactly.

I personally would have wanted higher performing consoles but with the benefit of hindsight both Sony and MS (Sony especially) made the right choices.
 

Gamer345

Banned
Pretty sure the PS4 CPU could be overclocked to about 1.8 or 2.0GHz.

Don't think Sony ever revealed the final CPU clock speed.
 

Melchiah

Member
Sure, you have underutilized components. But if you feed those underutilized components with info to get them moving... you are taking away from bandwidth. Even in the best scenario you are not just getting "free performance" due to clever programming. It is give and take... like all things on a closed platform.

Also, please be aware that there are a lot of "words" thrown about by hardware makers about their products. Even Cerny's words should be looked on with scrutiny. "Supercharged", "power of the Cell", "hUMA", etc... all the acronyms, PowerPoint presentations, etc. coming direct from manufacturers have to be looked upon with said scrutiny. In the end they are propagandizing a product.

Good thing the PS4 was built with general-purpose computing on graphics processing units in mind.

gXZEf3kl.png (diagram of the PS4's memory buses and bandwidth)
 
Doesn't GPGPU avoid working against image quality? I'm pretty sure I read somewhere Cerny saying devs can up performance without sacrificing graphical quality. I imagine there will be tradeoffs, but GPGPU doesn't take away from rendering and vice versa.

There is no magic. If you use it for one thing, you don't have it at the same time for another.

Ehhh, kind of. There are actually a lot of times where GPU utilization, even when just rendering stuff, is actually fairly poor. You can just be transforming and shading triangles, and have the GPU working a whole bunch, but in practice, only half of the GPU is being used, because of...reasons. The whole idea behind "asynchronous compute" on the GPU is to fill up that other half with stuff when some bits are underutilized, and up to a point/in some regards, it actually IS a free lunch. The whole deal behind having multiple "async compute contexts" on the PS4 and XB1 (and other GCN-based GPUs, via Mantle) is to give the GPU the opportunity to ignite that dark silicon with some other tasks when it's available.

These two presentations, on the implementation of tessellation/subdivision in Call of Duty: Ghosts and on voxel cone tracing for The Tomorrow Children, briefly touch on the subject at the end:
http://advances.realtimerendering.c..._2014_tessellation_in_call_of_duty_ghosts.pdf
http://fumufumu.q-games.com/archives/Cascaded_Voxel_Cone_Tracing_final_speaker_notes.pdf
If you're a real tech hound, you're going to hear a lot more about "async compute" in the coming years.
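To make the "fill the idle half of the GPU" idea concrete, here is a deliberately crude toy model (not real GPU code; the utilization numbers are invented) of why async compute can look close to free when the graphics workload leaves execution units idle:

```python
# Toy model: a frame is a list of time slices, and the graphics work only keeps
# some fraction of the GPU busy in each slice. Async compute soaks up the rest.
# All numbers are invented for illustration.

def frame_cost(graphics_busy, compute_work, async_enabled):
    """graphics_busy: per-slice GPU utilization of graphics work (0..1).
    compute_work: total GPU-slices of compute needed this frame."""
    if not async_enabled:
        # Serial: render everything, then run the compute jobs afterwards.
        return len(graphics_busy) + compute_work
    # Async: compute fills whatever each slice leaves idle; leftovers spill over.
    remaining = compute_work
    for busy in graphics_busy:
        remaining = max(0.0, remaining - (1.0 - busy))
    return len(graphics_busy) + remaining

graphics = [0.5, 0.7, 0.4, 0.9, 0.6]  # e.g. shadows, g-buffer, lighting, post...
compute  = 1.5                        # GPGPU work: culling, particles, physics...

print("serial:", frame_cost(graphics, compute, async_enabled=False))  # 6.5
print("async :", frame_cost(graphics, compute, async_enabled=True))   # 5.0
```

In this toy case the compute work disappears entirely into the idle cycles; in a real frame the win depends on what the graphics passes actually leave free.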
 
Pretty sure the PS4 CPU could be overclocked to about 1.8 or 2.0GHz.

Don't think Sony ever revealed the final CPU clock speed.

With the shouting from the rooftops that Microsoft did (OK, maybe Twitter and a bunch of articles on the topic) when they increased the CPU clock by approximately 10%, if Sony went to 1.8 or even 2 GHz I think someone would have said something, even if just to overshadow what Microsoft did (a 12.5% or 25% CPU clock improvement versus Microsoft's 10%).
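For reference, here's a quick sketch of where those percentages come from, assuming the commonly cited 1.6 GHz baseline for the PS4's Jaguar cores (the baseline itself is an assumption, since Sony never confirmed the final clock):

```python
# Clock bumps relative to an assumed 1.6 GHz baseline.
base_ghz = 1.6
for clock in (1.75, 1.8, 2.0):
    print(f"{clock} GHz -> +{(clock / base_ghz - 1) * 100:.1f}%")
# 1.75 GHz (the Xbox One bump) is +9.4%, i.e. the "approximately 10%";
# 1.8 GHz would be +12.5% and 2.0 GHz would be +25.0%.
```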

As for the Ubisoft presentation, I don't understand the parity given these results. Ubisoft clearly says there is an insignificant difference in the CPU capabilities and a drastic difference in the GPU capabilities. They also mention that in order to get around CPU limitations they are offloading to the GPU. So, why again is Assassin's Creed: Unity set at 900p30 on both systems? This GDC presentation seems to imply that there should be a potentially significant difference between Ubisoft titles on each platform. Or am I misunderstanding?
 

Smokey

Member
It's the best they could do while keeping the consoles affordable and not melting. I really don't know why so many users pretend to know these things better than the engineers at Sony and MS. Both Sony and MS lost billions early last gen because they shot for the moon with the specs.

I'm not sure how getting a CPU with a frequency greater than 1.65GHz in 2014 is shooting for the moon, but... OK.
 
So, why again is Assassin's Creed: Unity set at 900p30 on both systems? This GDC presentation seems to imply that there should be a potentially significant difference between Ubisoft titles on each platform. Or am I misunderstanding?

Perhaps one of them has shitty frame-rate, shitty AA, lower draw distance and doesn't look as good?
 

Vizzeh

Banned
Personal opinion of course, but I get the feeling Ubi spent more time tweaking the absolute balls off the X1's memory bandwidth (ESRAM + DDR3) and a lot less time tweaking the PS4's in combination with compute; I highly doubt it was anywhere near equal time. According to Ubi's own presentation, the PS4 has 100% more GPGPU compute power, so if they did more optimizing with asynchronous compute, AI and units on screen may not be a problem, with any issue with the CPU being patched over or negated.

Still of the opinion that what they're talking about with the AI shouldn't have any bearing on "resolution" etc. anyway.
 

Fafalada

Fafracer forever
Dictator93 said:
That shows the PS3 CPU being better than the new jaguars at a certain task. Yes.
PS3 CPU is better at certain tasks than equivalently clocked I7s as well. The question is more whether those tasks need to be done on CPUs (the answer is obvious for some, but not for all of them).
 

Melchiah

Member
Yes, you get higher GPU utilization (which is the whole point), but to feed the utilization you have to take away from the bandwidth.

Which is what I have been saying, this entire thread.

I'd wager that's why there are several buses built into the system.
 

mrklaw

MrArseFace
Yes, you get higher GPU utilization (which is the whole point), but to feed the utilization you have to take away from the bandwidth.

Which is what I have been saying, this entire thread.

On a unified memory setup, doing it on the CPU would use up bandwidth too
 

Gurish

Member
Posting images of Uncharted 4's vertical slice means nothing. If you are impressed by that, just imagine what ND could have achieved with better hardware.
If it will actually look like that, it's more than enough. In fact, it's a bigger jump than I ever imagined for next gen, and a more impressive one in my opinion than PS2 to PS3. We can always say "imagine what they could do with...", but there is no end to that. If they achieve this target it will be amazing, and you can stop the graphics progression for the rest of the gen for all I care; U4's teaser visuals are good enough until the PS5.
 

RoboPlato

I'd be in the dick
Pretty sure the PS4 CPU could be overclocked to about 1.8 or 2.0GHz.

Don't think Sony ever revealed the final CPU clock speed.
It's pretty clear that it's 1.6GHz. There are several tech presentations that point to that. I am curious whether Sony could do a small upclock like MS did pre-launch. Nothing major, just a little boost without having much of an impact on heat output and power usage.
 

Durante

Member
PS3 CPU higher than PS4 CPU?
For this type of workload that's not surprising. (The Cell in PS3 peaks at ~180 SP GFLOPs, the PS4 CPU cores sum up to ~100)
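A back-of-the-envelope reconstruction of those peaks, assuming the commonly quoted figures of 8 single-precision FLOPs per cycle per SPE (seven SPEs at 3.2 GHz) and per Jaguar core (eight cores at 1.6 GHz); real code won't reach either number:

```python
# Theoretical peak single-precision GFLOPS from clocks and FLOPs/cycle.
def peak_gflops(cores, ghz, flops_per_cycle):
    return cores * ghz * flops_per_cycle

cell_spes = peak_gflops(cores=7, ghz=3.2, flops_per_cycle=8)  # 179.2 -> "~180"
ps4_cpu   = peak_gflops(cores=8, ghz=1.6, flops_per_cycle=8)  # 102.4 -> "~100"
print(f"Cell SPEs: ~{cell_spes:.0f} GFLOPS, PS4 CPU: ~{ps4_cpu:.0f} GFLOPS")
```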

Doesn't GPGPU avoid working against image quality? I'm pretty sure I read somewhere Cerny saying devs can up performance without sacrificing graphical quality. I imagine there will be tradeoffs, but GPGPU doesn't take away from rendering and vice versa.
That's just wrong. You have a set amount of computing resources in hardware, and if they are doing compute they are not doing graphics and vice versa. (Of course, you might find times during your frame where there are some idling resources, but then that also means you could have used those for either graphics or compute)

Good thing the PS4 was built with general-purpose computing on graphics processing units in mind. [snip]
This has nothing to do with what Dictator wrote. How is the caching and memory architecture of the PS4 SoC relevant?
 

Vizzeh

Banned
I'd wager that's why there are several buses built into the system.

This quote from the RGT article may be of interest:
Quite simply speaking, not only do you need to make sure the correct data is being worked on by either the CPU or GPU, but that data is being processed in the correct order. As we have said in our Sucker Punch Second Son Post Mortem “Sucker Punch point out that it’s sometimes better to avoid using the cache (the Garlic bus of the PS4 actually bypasses the caches of the GPU). Sometimes the only way around this is to issue a “sync” command to the GPU’s buffer – but it’s not something you wish to constantly issue.”
 
Yes, you get higher GPU utilization (which is the whole point), but to feed the utilization you have to take away from the bandwidth.

Which is what I have been saying, this entire thread.

That memory bandwidth is actually another one of the things that is seldom 100% utilized. If only 10% of the lifetime of a specific shader execution is spent waiting on off-chip data, it's possible for something else to take advantage of that other 90%. Or, vice versa: if something is spending almost all of its time waiting for off-chip data, you can execute some other compute operations that are very light on memory usage.
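As a purely illustrative sketch of that overlap (made-up timings, ideal scheduling):

```python
# One kernel spends most of its time stalled on off-chip memory; the other is
# almost pure ALU work. Run back-to-back they cost the sum of their times;
# overlapped, the ALU-heavy kernel can execute inside the other one's stalls.
t_memory_bound = 4.0  # ms, mostly waiting on memory
t_alu_bound    = 3.0  # ms, barely touches memory

serial     = t_memory_bound + t_alu_bound
overlapped = max(t_memory_bound, t_alu_bound)  # ideal case: full overlap

print(f"serial: {serial} ms, ideally overlapped: {overlapped} ms")
# Real hardware never hits the ideal, but this is the intuition behind pairing
# bandwidth-heavy and ALU-heavy work on the same GPU.
```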
 

Duxxy3

Member
I'm sure everyone will look at the GPUs, but my god... these new CPUs are complete and utter trash. I hope that Sony and Microsoft are getting these for free with the purchase of the GPU.
 

Gamer345

Banned
With the shouting from the rooftops that Microsoft did (OK, maybe Twitter and a bunch of articles on the topic) when they increased the CPU clock by approximately 10%, if Sony went to 1.8 or even 2 GHz I think someone would have said something, even if just to overshadow what Microsoft did (a 12.5% or 25% CPU clock improvement versus Microsoft's 10%).

As for the Ubisoft presentation, I don't understand the parity given these results. Ubisoft clearly says there is an insignificant difference in the CPU capabilities and a drastic difference in the GPU capabilities. They also mention that in order to get around CPU limitations they are offloading to the GPU. So, why again is Assassin's Creed: Unity set at 900p30 on both systems? This GDC presentation seems to imply that there should be a potentially significant difference between Ubisoft titles on each platform. Or am I misunderstanding?

Well, someone did say something, because there are rumours about it, and as I said, Sony never revealed their final CPU clock speed.

Sony don't really shout from the rooftops like MS does with the cloud or DX12, but we clearly know the Sony ICE Team does optimizations on the hardware and provides improved SDKs for Sony consoles as time passes.
 

Vizzeh

Banned
I'm sure everyone will look at the GPUs, but my god... these new CPUs are complete and utter trash. I hope that Sony and Microsoft are getting these for free with the purchase of the GPU.

At least we have the extra ARM processor for a bit of extra processing lol :D
 

Bgamer90

Banned
It's also for consumers mostly. A $499 or $599 PS4 would sell a lot less than the PS4 with its current specs and price. Most consumers couldn't care less about that added horsepower. They want an affordable console with a clear upgrade over the previous gen. The PS4 is exactly that.

Yep. Spot on.
 

Gamer345

Banned
It's pretty clear that it's 1.6GHz. There are several tech presentations that point to that. I am curious whether Sony could do a small upclock like MS did pre-launch. Nothing major, just a little boost without having much of an impact on heat output and power usage.

Not really, as Sony never revealed the final clock speed, as I said. And don't some other benchmarks show the PS4 CPU is faster,

like the Substance Engine benchmark?

http://www.neogaf.com/forum/showthread.php?t=737629

Now, according to details revealed in the benchmark, the PS4 is actually running at 2GHz in order to produce 14 MB/s of textures versus 12 MB/s of textures for the Xbox One.
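For what it's worth, that 2GHz figure presumably comes from a straight linear-scaling assumption on the throughput numbers (which is exactly what the reply further down disputes):

```python
# If texture throughput scaled linearly with CPU clock (a big "if"),
# the quoted Substance numbers would back out a PS4 clock like this:
xb1_clock_ghz = 1.75  # Xbox One's announced CPU clock
xb1_mb_per_s  = 12.0  # Xbox One texture throughput in the benchmark
ps4_mb_per_s  = 14.0  # PS4 texture throughput in the benchmark

implied_ps4_clock = xb1_clock_ghz * (ps4_mb_per_s / xb1_mb_per_s)
print(f"implied PS4 clock: ~{implied_ps4_clock:.2f} GHz")  # ~2.04 GHz
# Faster memory or other platform differences would break the linear assumption.
```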
 

Melchiah

Member
You are not magically getting more than 176GB/s. The hardware does not go beyond using that.

No, but from what I understand the buses work in tandem when needed.

Ehhh, kind of. There are actually a lot of times where GPU utilization, even when just rendering stuff, is actually fairly poor. You can just be transforming and shading triangles, and have the GPU working a whole bunch, but in practice, only half of the GPU is being used, because of...reasons. The whole idea behind "asynchronous compute" on the GPU is to fill up that other half with stuff when some bits are underutilized, and up to a point/in some regards, it actually IS a free lunch. The whole deal behind having multiple "async compute contexts" on the PS4 and XB1 (and other GCN-based GPUs, via Mantle) is to give the GPU the opportunity to ignite that dark silicon with some other tasks when it's available.

These two presentations, on the implementation of tessellation/subdivision in Call of Duty: Ghosts and on voxel cone tracing for The Tomorrow Children, briefly touch on the subject at the end:
http://advances.realtimerendering.c..._2014_tessellation_in_call_of_duty_ghosts.pdf
http://fumufumu.q-games.com/archives/Cascaded_Voxel_Cone_Tracing_final_speaker_notes.pdf
If you're a real tech hound, you're going to hear a lot more about "async compute" in the coming years.



EDIT:
This has nothing to do with what Dictator wrote. How is the caching and memory architecture of the PS4 SoC relevant?

As he was talking about the lack of bandwidth, I posted the image as it shows the system bandwidth across the buses.


This quote from the RGT article may be of interest:

Quite simply speaking, not only do you need to make sure the correct data is being worked on by either the CPU or GPU, but that data is being processed in the correct order. As we have said in our Sucker Punch Second Son Post Mortem “Sucker Punch point out that it’s sometimes better to avoid using the cache (the Garlic bus of the PS4 actually bypasses the caches of the GPU). Sometimes the only way around this is to issue a “sync” command to the GPU’s buffer – but it’s not something you wish to constantly issue.”

Interesting. So I may have understood it wrong.
 

Durante

Member
I'm sure everyone will look at the GPUs, but my god... these new CPUs are complete and utter trash. I hope that Sony and Microsoft are getting these for free with the purchase of the GPU.
Well, if you look at AMD's recent financials you'll see that they aren't paying all that much for these chip designs.

Well, someone did say something, because there are rumours about it, and as I said, Sony never revealed their final CPU clock speed.
I think the likely reason Sony never revealed the clock on their CPU cores is the same one as why MS never revealed an official TFlop number (unlike Sony): You don't post numbers where yours is smaller than your competitor's.
 

Atilac

Member
It is really irritating to me just how low MS and Sony went on these CPUs. Just terrible.
Inflation + stagnant wages = the current batch of systems, which aren't as bad as snobby PC elitists will have you believe.
Edit: in addition, last-gen systems were sold at a loss; this gen they are being sold at a profit.
 

Vizzeh

Banned
Not really, as Sony never revealed the final clock speed, as I said. And don't some other benchmarks show the PS4 CPU is faster,

like the Substance Engine benchmark?

http://www.neogaf.com/forum/showthread.php?t=737629

Now, according to details revealed in the benchmark, the PS4 is actually running at 2GHz in order to produce 14 MB/s of textures versus 12 MB/s of textures for the Xbox One.

I think that was due to the quicker memory reads of the PS4, not the higher clock speeds.

Sony pretty much confirmed it here anyway:

(https://plus.google.com/+sonyuk/posts/eiA6sDQvWwQ)

I'm sure Sony are using the X1 as a guinea pig at the moment anyway, with its clock speed at 1.75GHz; they can sit back and check for RRODs past the opening years while testing their own cooling system.
I'm curious if they will ever bump it up. Both systems are likely close anyway, since the X1 needs the extra juice reserved for the snap features + Kinect etc., so it's negligible in that regard.

More benefit will likely be obtained by Sony having developers utilize that GPGPU power, rather than an upclock.
 