
AMD: PlayStation 4 supports hUMA, Xbox One does not

TheD

The Detective
Frankly, AMD is in the best position ever. Soon, in AAA gaming, having more cores will be better than having fewer, faster cores. Right now it is the opposite: games use one core for all their logic and the rest as helpers instead of loading all cores properly.

No, it is always better to have fewer cores with the same total power!
 

Perkel

Banned
Aside from the fact that it is incorrect to label all games that way (especially more recent stuff), having more, weaker cores with lower IPC is never preferable to having higher IPC.

Sure thing (for now). With next-gen AAA gaming, developers will need to use Jaguar cores, which themselves aren't that fast (compared to PC Intels or the better AMD cores). They will split up their main logic to fill the rest of the cores and move a ton of expensive things like physics to the GPU. That favors more cores, not single-core performance.

No, it is always better to have fewer cores with the same total power!

For now, for PC gaming, yes. That will change in the future.
 

driver116

Member
Haha. Yeah, according to Mr. Leadbetter PS4 is an unbalanced piece of hardware, anyway. :D

The guy has shown unprecedented bias toward Microsoft/Xbox this past gen with his DF articles, so I imagine this won't be changing. If a 360 version was superior he would make it clear throughout the article and in the conclusion; however, if a PS3 version was superior, the article would downplay it. There was a clear distinction between Leadbetter's articles and those by Bierton and Morgan.
 

TheD

The Detective
Sure thing (for now). With next-gen AAA gaming, developers will need to use Jaguar cores, which themselves aren't that fast (compared to PC Intels or the better AMD cores). They will split up their main logic to fill the rest of the cores and move a ton of expensive things like physics to the GPU. That favors more cores, not single-core performance.



For now, for PC gaming, yes. That will change in the future.


No!

It is always better to have the same processing power in the fewest processing elements!
Not only is there overhead in passing data around between cores, but not all problems can be split, so the whole system ends up waiting for the one core that was given the unsplittable problem to finish (Amdahl's law).

Anyone with knowledge of how computers work would know that!
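(To put a rough number on that, here is a minimal sketch of Amdahl's law; the 80% parallel fraction below is a made-up figure purely for illustration.)

```cpp
#include <cstdio>

// Toy illustration of Amdahl's law: speedup of a workload where a fraction
// `parallel` of the work can be split across `cores` and the rest stays serial.
double amdahlSpeedup(double parallel, int cores) {
    return 1.0 / ((1.0 - parallel) + parallel / cores);
}

int main() {
    // Assume (purely for illustration) that 80% of a frame's work parallelises.
    std::printf("4 cores: %.2fx speedup\n", amdahlSpeedup(0.8, 4)); // ~2.50x
    std::printf("8 cores: %.2fx speedup\n", amdahlSpeedup(0.8, 8)); // ~3.33x
    // Doubling the core count far from doubles throughput once the serial
    // part dominates, which is the point about the unsplittable work.
    return 0;
}
```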
 

Perkel

Banned
No!

It is always better to have the same processing power in the fewest processing elements!

Anyone with knowledge of how computers work would know that!

Let me ask you a question: why do we use multicore CPUs? Because there is a limit to how far you can increase single-core performance.

My point was strictly about AMD's 8-core struggling to beat i5s in games, when in properly written multicore applications it beats the i5 like a child. Games currently favor single-core performance over n+ cores.

Of course everything would be better with just one super-fast core, but we can't have that and hardware manufacturers can't build it.
 

TheD

The Detective
Let me ask you a question: why do we use multicore CPUs? Because there is a limit to how far you can increase single-core performance.

My point was strictly about AMD's 8-core struggling to beat i5s in games, when in properly written multicore applications it beats the i5 like a child. Games currently favor single-core performance over n+ cores.

Of course everything would be better with just one super-fast core, but we can't have that and hardware manufacturers can't build it.

We use multi-core CPUs because of how hard it is to keep scaling CPU clock speed; that does not mean it would be advantageous if we had the choice!

8-core AMD CPUs are not even close to "beating an i5 like a child"; Haswell i5s keep up with the 8350 in most heavily multi-threaded loads http://www.anandtech.com/bench/product/697?vs=837

And even if they did, you are forgetting the fact that console games do not have access to all 8 cores, and thus 8-core CPUs would lose their advantage.

Oles Shishkovstov, Chief Technology Officer at 4A Games (Metro 2033, Last Light), said that he prefers many low-power cores to few high-performance cores.

"I always wanted a lot of relatively-low-power cores instead of single super-high-performance one, because it's easier to simply parallelise something instead of changing core-algorithms or chasing every cycle inside critical code segment (not that we don't do that, but very often we can avoid it)." source

Whether you have eight Jaguars or four Steamrollers is very often a matter of personal taste from a programmer's point of view, but since PS4 is designed as an APU with GPGPU in mind, it definitely makes sense to go with low-power cores: eight Jaguars allow for a bigger integrated GPU.

Facts are facts: at the same total power, fewer cores beats more cores.

And that quote is flawed: if one chip had 1 core and another had 2 cores, but the single core was 2x as fast as each of the 2 cores, you would not have to do anything to get that speed, whereas you would have to parallelise to get the same speed out of the slower pair!
 

Perkel

Banned
Oles Shishkovstov, Chief Technology Officer at 4A Games (Metro 2033, Last Light), said that he prefers many low-power cores to few high-performance cores.

"I always wanted a lot of relatively-low-power cores instead of single super-high-performance one, because it's easier to simply parallelise something instead of changing core-algorithms or chasing every cycle inside critical code segment (not that we don't do that, but very often we can avoid it)." source

Whether you have eight Jaguars or four Steamrollers is very often a matter of personal taste from a programmer's point of view, but since PS4 is designed as an APU with GPGPU in mind, it definitely makes sense to go with low-power cores: eight Jaguars allow for a bigger integrated GPU.

The same dev also "jobified" a lot of things for multicore, like AI, which often lives on the single main logic core. His approach will IMO become the standard next gen, since devs won't be able to lean on a single core for logic much (because it is slow).
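(Loosely, "jobifying" something like AI just means cutting the per-agent work into independent jobs that any core can grab. A minimal sketch, not how 4A actually structures it:)

```cpp
#include <algorithm>
#include <cstddef>
#include <future>
#include <vector>

// Hypothetical sketch of "jobifying" AI: per-agent updates are cut into
// chunks that any free core can pick up, instead of one logic thread doing
// them all. `Agent` and `updateAgent` are stand-ins, not any engine's real API.
struct Agent { float x = 0.0f, y = 0.0f; };

void updateAgent(Agent& a, float dt) {
    // ...pathfinding, steering, decision making...
    a.x += dt;
}

void updateAllAgents(std::vector<Agent>& agents, float dt, std::size_t workers) {
    const std::size_t chunk = (agents.size() + workers - 1) / workers;
    std::vector<std::future<void>> jobs;
    for (std::size_t begin = 0; begin < agents.size(); begin += chunk) {
        const std::size_t end = std::min(begin + chunk, agents.size());
        // Each chunk of agents becomes one independent job.
        jobs.push_back(std::async(std::launch::async, [&agents, begin, end, dt] {
            for (std::size_t i = begin; i < end; ++i)
                updateAgent(agents[i], dt);
        }));
    }
    for (auto& job : jobs)
        job.get(); // the frame still waits for every AI job to finish
}
```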
 

madmackem

Member
So when will we actually SEE all these graphical advantages the PS4 has? I keep hearing that the console is noticeably more powerful, and yet when looking at the games, not ONE has looked graphically impossible to mimic on the competitor's console. "Just wait a couple years and the differences will show themselves." If the graphical hardware differences are so drastic, why is it taking several years to finally see differences? I feel like Sony has the best engineered (and more powerful) console, but for some reason they simply REFUSE to show it. SHOW me why you have the most powerful console Sony. SHOW me that your console is capable of things the competition can only dream of doing.

Dude, the consoles aren't even out yet. Do you expect launch titles that have been in development for years to harness all the power on day one? The fact that a lot of them were working with 2GB of GDDR5 for a period of time shows how you can't put a lot into launch-day software for either console.
 

Perkel

Banned
We use multi-core CPUs because of how hard it is to keep scaling CPU clock speed; that does not mean it would be advantageous if we had the choice!

8-core AMD CPUs are not even close to "beating an i5 like a child"; Haswell i5s keep up with the 8350 in most heavily multi-threaded loads http://www.anandtech.com/bench/product/697?vs=837

And even if they did, you are forgetting the fact that console games do not have access to all 8 cores, and thus 8-core CPUs would lose their advantage.

Facts are facts: at the same total power, fewer cores beats more cores.

But there is no "same power". AMD and Intel build multicore chips because they can get more power at a cheaper price by making a multicore CPU rather than a single super-fast core.

"Beating it like a child" is maybe too much, but the point is that the 8-core is cheaper than an i5 and, in your own benchmarks, beats the i5 in multicore apps. In games that 8-core is worse, because games are still not properly multithreaded; SC, for example, uses one core for all its logic and the rest as helpers.

So when will we actually SEE all these graphical advantages the PS4 has? I keep hearing that the console is noticeably more powerful, and yet when looking at the games, not ONE has looked graphically impossible to mimic on the competitor's console. "Just wait a couple years and the differences will show themselves." If the graphical hardware differences are so drastic, why is it taking several years to finally see differences? I feel like Sony has the best engineered (and more powerful) console, but for some reason they simply REFUSE to show it. SHOW me why you have the most powerful console Sony. SHOW me that your console is capable of things the competition can only dream of doing.

We will know that on day one, when the first Digital Foundry comparisons hit. There is a reason we rarely see multiplatform games running on actual XBONE consoles; they are mostly PCs, like at E3.
 

filopilo

Member
We will know that on day one, when the first Digital Foundry comparisons hit.

No doubt. If they are being honest and fair, we'll see the disparities between versions on day one.
Now, for the developers, there can be some 'political' reasons to restrain one version in order to minimize the disparities and find a middle ground.
 

TheD

The Detective
But there is no "same power". AMD and Intel build multicore chips because they can get more power at a cheaper price by making a multicore CPU rather than a single super-fast core.

"Beating it like a child" is maybe too much, but the point is that the 8-core is cheaper than an i5 and, in your own benchmarks, beats the i5 in multicore apps. In games that 8-core is worse, because games are still not properly multithreaded; SC, for example, uses one core for all its logic and the rest as helpers.



We will know that on day one, when the first Digital Foundry comparisons hit. There is a reason we rarely see multiplatform games running on actual XBONE consoles; they are mostly PCs, like at E3.


But an i5 and an 8350 are about the same power (at the clock speeds they ship at); it is just that the i5 has it tied up in fewer cores!
Only 7-Zip shows a large difference, and that just happens to go away with the hyperthreaded i7.

SC2 is an outlier; lots of other games are much more multithreaded than it is.
 

FranXico

Member
So when will we actually SEE all these graphical advantages the PS4 has? I keep hearing that the console is noticeably more powerful, and yet when looking at the games, not ONE has looked graphically impossible to mimic on the competitor's console. "Just wait a couple years and the differences will show themselves." If the graphical hardware differences are so drastic, why is it taking several years to finally see differences? I feel like Sony has the best engineered (and more powerful) console, but for some reason they simply REFUSE to show it. SHOW me why you have the most powerful console Sony. SHOW me that your console is capable of things the competition can only dream of doing.

Both consoles were designed to scale their capabilities over time. There literally are software "switches" that control the CPU and GPU clock speed, etc. just waiting for a firmware update to unlock them.

The recent 53 MHz upclock in the XBox One was software enabled. Expect this kind of "upgrade" to happen again.
 

Perkel

Banned
No doubt. If they are being honest and fair, we'll see the disparities between versions on day one.
Now, for the developers, there can be some 'political' reasons to restrain one version in order to minimize the disparities and find a middle ground.

More like don't show their Xbone version at all.
 
So basically with PS4 you can put a texture, GPGPU code or other GPU data somewhere inside those 8GB and say "there it is, use it", while with Xbox One you have to "upload" it into GPU RAM space (a mem->mem copy).
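(In rough pseudo-code terms, that's the difference being claimed; the function names below are invented for illustration and are not any real console SDK call.)

```cpp
#include <cstddef>
#include <cstring>

// Hypothetical sketch of the two memory models being contrasted.
struct GpuBuffer { void* ptr; std::size_t size; };

// hUMA-style: CPU and GPU share one coherent address space, so handing data
// to the GPU amounts to passing a pointer.
GpuBuffer shareWithGpu(void* data, std::size_t size) {
    return GpuBuffer{ data, size }; // no copy; the GPU reads the same memory
}

// Split-pool style: the data first has to be copied into GPU-visible memory.
GpuBuffer uploadToGpu(const void* data, std::size_t size, void* gpuVisiblePool) {
    std::memcpy(gpuVisiblePool, data, size); // the mem->mem copy
    return GpuBuffer{ gpuVisiblePool, size };
}
```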
 

TheD

The Detective
So much butt hurt over DF.
Sure, their estimates of the Bone's power are off and some other articles they have posted about it are a bit bad, but claiming they will sabotage the faceoffs is dumb and childish (and don't get me started on the whining about the results of old faceoffs and the witch-hunting of their wording), and it ignores all the positive interviews Leadbetter has done with Sony devs about the tech of their games.
 

dr_rus

Member
Frankly, AMD is in the best position ever. Soon, in AAA gaming, having more cores will be better than having fewer, faster cores. Right now it is the opposite: games use one core for all their logic and the rest as helpers instead of loading all cores properly.
4 cores of Haswell are faster than 8 cores of Jaguar in multithreaded workloads. Having fewer cores which are faster in both single- and multithreaded workloads will always be preferable to having more cores which are slower everywhere. If anything, Intel CPUs will get the same boost from the new consoles thanks to higher parallelization.
 

Thrakier

Member
This is something we keep forgetting... Most people won't tell the difference. Developers know what's going on, so they can really say one version is better than the other, but the differences will be minute.

Don't count on Richard Leadbetter to expose them this time, though ;)

Why do you hope so? And who are "most people", and how are most people related to your personal experience?
 

whome0

Member
Oles Shishkovstov, Chief Technology Officer at 4A Games (Metro 2033, Last Light), said that he prefers many low-power cores to few high-performance cores.

And that quote is flawed, if something had 1 core and something else has 2 cores but the single core was 2x as fast as each of the 2 cores you would not have to do anything to get a speed up compared to having to parallelise to get the same speed out of the slower pair!

J.C. once made a Twitter comment about the same question. It all comes down to how much more powerful the fewer cores are. I guess he'd prefer the traditional "single thread" game-loop approach if that were still a feasible option. The answer is not a simple yes/no debate.
John Carmack on few vs. many cores:
Q: Is 4 cores at a high clock rate or 8 cores at half the speed better from a game development perspective?
A: All apps are better with half the cores at twice the performance, the question gets more nuanced if the trade is at 1.5x perf
 

driver116

Member
So much butt hurt over DF.
Sure, their estimates of the Bone's power are off and some other articles they have posted about it are a bit bad, but claiming they will sabotage the faceoffs is dumb and childish (and don't get me started on the whining about the results of old faceoffs and the witch-hunting of their wording).

Key word there. Besides we don't need to read the articles as we can simply look at the screenshot comparisons and FPS test videos to form conclusions.
 

Perkel

Banned
4 cores of Haswell are faster than 8 cores of Jaguar in multithreaded workloads. Having fewer cores which are faster in both single- and multithreaded workloads will always be preferable to having more cores which are slower everywhere. If anything, Intel CPUs will get the same boost from the new consoles thanks to higher parallelization.

Meanwhile, another poster linked an AnandTech comparison where the 8-core from AMD beats the more expensive, "faster" 4-core Intel CPU in almost all multicore apps, with one or two of them beating it hard.

But my point wasn't about apps, where it is known that multicore will be better. My point was about gaming: games on PC today do not scale well with cores and often don't load the other cores evenly, and in that case Intel will always be the winner, since they prioritize single-core performance. And since games do not scale properly to n+ cores, the 8-core from AMD can't "win".
 

antic604

Banned
Ok, sorry if I might be repeating something someone else said in last 12 pages (I'm catching up on 11th right now...), but to me it seems like:

Microsoft explicitly wanted 8GB of memory, to ensure there's room for all the non-gaming stuff (Kinect, TV, Skype, apps, etc). As a result of the fixed price point, they needed to go the DDR3 route + embedded RAM (to compensate for the lower bandwidth), which - to the best of my understanding - excludes full hUMA compatibility. To make up for that shortcoming, namely to make the copying of memory between the GPU and CPU pools faster and 'cheaper' (in terms of CPU performance), they added the Move Engines.

Sony on the other hand wanted unified memory (with high bandwidth), so they chose GDDR5, even if that meant they'd only have 4GB of it. Still, that setup allowed for full hUMA compliance and was easy to develop for. Sony got lucky that 8GB was possible by launch time, and only now are they adding stuff to the OS to rival MS's non-gaming features (that's my impression at least).

In the end I expect both Xbone and PS4 to perform similarly in that regard, i.e. memory operations for GPGPU (I'm not touching the 1.8TF vs. 1.3TF GPU difference here, additional CUs, ROPs, ACEs, etc), only they've chosen different routes to achieve that. It looks like MS's memory setup is more convoluted, but I'm sure their SDK will hide a lot of it for the devs to not worry about.

So, my interpretation is that PS4 is "full hUMA", but Xbone is a "hUMA derivative", in a sense that it can achieve similar results (again, talking only about memory) with similar performance. If that's the case, no wonder AMD will hype Sony's solution, because this is something they can sell to PC market - a clean, elegant and efficient solution. Even if MS's approach will yield similar results, it looks much more complicated and most likely is more expensive and definitely more difficult to manufacture (i.e. all the APU yields rumors).
 

Perkel

Banned
J.C. once made a Twitter comment about the same question. It all comes down to how much more powerful the fewer cores are. I guess he'd prefer the traditional "single thread" game-loop approach if that were still a feasible option. The answer is not a simple yes/no debate.
John Carmack on few vs. many cores:

Why do people always frame this multicore vs. single-core performance question that way, i.e. "which is better: 4 cores at 2GHz or 8 cores at 1GHz?"

The reality is that you have 4 cores at 2GHz versus 8 cores at 1.6-1.8GHz, where the extra cores are slower but not half as slow, and the chip can do 8 jobs at the same time instead of 4 (one of the reasons the i7 is virtually 8 cores).

edit:

Like I said, everything would be amazing if we had one super-fast core, but we don't and we won't.
 

TheD

The Detective
Meanwhile, another poster linked an AnandTech comparison where the 8-core from AMD beats the more expensive, "faster" 4-core Intel CPU in almost all multicore apps, with one or two of them beating it hard.

But my point wasn't about apps, where it is known that multicore will be better. My point was about gaming: games on PC today do not scale well with cores and often don't load the other cores evenly, and in that case Intel will always be the winner, since they prioritize single-core performance. And since games do not scale properly to n+ cores, the 8-core from AMD can't "win".

Not only do the benchmarks not show that at all, but he said Haswell and Jaguar, not Haswell i5 vs Piledriver!
 

RedAssedApe

Banned
So when will we actually SEE all these graphical advantages the PS4 has? I keep hearing that the console is noticeably more powerful, and yet when looking at the games, not ONE has looked graphically impossible to mimic on the competitor's console. "Just wait a couple years and the differences will show themselves." If the graphical hardware differences are so drastic, why is it taking several years to finally see differences? I feel like Sony has the best engineered (and more powerful) console, but for some reason they simply REFUSE to show it. SHOW me why you have the most powerful console Sony. SHOW me that your console is capable of things the competition can only dream of doing.

Strange, ranty post. The Xbone isn't the Wii U; no one rational who isn't a fanboy is posting that the PS4 is going to make the Xbone look like one. The subtle differences will most likely manifest themselves in performance: FPS, AA and the like.

The system hasn't even been released, the SDKs were just finalized, and we have yet to see any real cross-platform comparisons or a Naughty Dog game. Stop acting like the PS4 and Xbone have been out for several years already. I'm assuming you are buying an Xbone, but in the event that's not the case, be happy that Infamous and Killzone already look pretty awesome. :p

Both consoles were designed to scale their capabilities over time. There literally are software "switches" that control the CPU and GPU clock speed, etc. just waiting for a firmware update to unlock them.

The recent 53 MHz upclock in the XBox One was software enabled. Expect this kind of "upgrade" to happen again.

I doubt this would happen. There's no reason to underclock a CPU in a non-mobile device when you know its theoretical limits for that form factor and cooling requirements. The PSP was an anomaly where the CPU was specced at a higher clock rate but underclocked for battery life reasons.
 

Perkel

Banned
Not only do the benchmarks not show that at all, but he said Haswell and Jaguar, not Haswell i5 vs piledriver!

Sorry, I didn't notice it was Jaguar being compared; I was talking earlier about the FX series (in PC space), not the console one.

As for comparing Haswell to Jaguar, it's a no-brainer that Haswell will beat Jaguar in CPU tasks, since Jaguar in the consoles is supposed to be helped by GPU compute in the first place, offloading the hard work like physics.

My point of view is that people programming for low-power Jaguar cores will change how multithreading in games is done. Like the 4A dev, for example. Right now in PC space multithreading is not n+ but fixed, and then additional cores are used to "boost" work, like in SimCity.
 

Horp

Member
Sorry, I didn't notice it was Jaguar being compared; I was talking earlier about the FX series (in PC space), not the console one.

As for comparing Haswell to Jaguar, it's a no-brainer that Haswell will beat Jaguar in CPU tasks, since Jaguar in the consoles is supposed to be helped by GPU compute in the first place, offloading the hard work like physics.

My point of view is that people programming for low-power Jaguar cores will change how multithreading in games is done. Like the 4A dev, for example. Right now in PC space multithreading is not n+ but fixed, and then additional cores are used to "boost" work, like in SimCity.

Multithreading is awesome, it really is. But since I'm a senior programmer, I can tell you that not everything can be solved by multithreading. Some tasks are linear by nature, for example repeated tasks in which each repetition relies on the result of the previous repetition. Thus, from a programming standpoint, fewer, stronger cores are always better than more, weaker cores. If we somehow had a GPU with one or two cores that could give the same performance as all those shader cores combined, it would be so much easier to work with, and you could do things that are impossible today. There is no such GPU, so graphics programming has had to adjust accordingly. Some things are possible, others aren't. But forcing normal CPU programming to be scalable to n+ cores (or threads, really) is almost impossible, and certainly not a benefit to the developers. It consumes less power and is cheaper, however.
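(A toy illustration of the two kinds of task, with made-up loops rather than anything from a real engine:)

```cpp
#include <cstddef>
#include <vector>

// Inherently serial: each step needs the previous result, so extra cores
// cannot help no matter how the work is split up.
double iterateSerially(double x, std::size_t steps) {
    for (std::size_t i = 0; i < steps; ++i)
        x = x * 0.5 + 1.0; // step i depends on step i-1
    return x;
}

// Trivially parallel: every element is independent, so this scales with cores.
void scaleAll(std::vector<double>& values, double factor) {
    for (double& v : values)
        v *= factor; // no iteration depends on any other
}
```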
 

Horp

Member
Sure, but it's the other way round as well: modern CPUs are currently doing tasks that are better suited to the massively parallel arithmetic performance of a GPU. And that's where HSA comes into play.

Indeed. But that doesn't make Jaguar a stronger CPU than it is. Jaguar is a weak CPU and I believe this will be something many developers struggle with.
 

strata8

Member
Indeed. But that doesn't make Jaguar a stronger CPU than it is. Jaguar is a weak CPU and I believe this will be something many developers struggle with.

Guerilla certainly seems to be having no trouble with it at least, looking at their profiling.

[Image: Guerilla's CPU profiling capture of a single frame]
 

FranXico

Member
Multithreading is awesome, it really is. But since I'm a senior programmer, I can tell you that not everything can be solved by multithreading. Some tasks are linear by nature, for example repeated tasks in which each repetition relies on the result of the previous repetition. Thus, from a programming standpoint, fewer, stronger cores are always better than more, weaker cores. If we somehow had a GPU with one or two cores that could give the same performance as all those shader cores combined, it would be so much easier to work with, and you could do things that are impossible today. There is no such GPU, so graphics programming has had to adjust accordingly. Some things are possible, others aren't. But forcing normal CPU programming to be scalable to n+ cores (or threads, really) is almost impossible, and certainly not a benefit to the developers. It consumes less power and is cheaper, however.

If by multithreading you are referring to parallelization, I agree in full.
Otherwise, when we have repetitive tasks that need to be executed synchronously, we assign them dedicated threads that wait and process the task(s) repeatedly. Multithreading is good precisely to free up the main thread.
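(A minimal sketch of that dedicated-worker pattern in C++, assuming a single worker thread; a real engine's job system is obviously more involved.)

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

// Minimal sketch of the "dedicated thread that waits and processes tasks"
// pattern described above. Error handling is omitted; this is illustrative only.
class TaskWorker {
public:
    TaskWorker() : worker_([this] { run(); }) {}

    ~TaskWorker() {
        { std::lock_guard<std::mutex> lock(mutex_); stop_ = true; }
        wake_.notify_one();
        worker_.join();
    }

    // Called from the main thread: queue the task and return immediately.
    void submit(std::function<void()> task) {
        { std::lock_guard<std::mutex> lock(mutex_); tasks_.push(std::move(task)); }
        wake_.notify_one();
    }

private:
    void run() {
        for (;;) {
            std::unique_lock<std::mutex> lock(mutex_);
            wake_.wait(lock, [this] { return stop_ || !tasks_.empty(); });
            if (stop_ && tasks_.empty())
                return;
            std::function<void()> task = std::move(tasks_.front());
            tasks_.pop();
            lock.unlock();
            task(); // runs off the main thread
        }
    }

    std::mutex mutex_;
    std::condition_variable wake_;
    std::queue<std::function<void()>> tasks_;
    bool stop_ = false;
    std::thread worker_; // declared last so the queue exists before run() starts
};
```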
 

stryke

Member
Can certainly appreciate this hUMA stuff on a technical discussion level. Looking forward to a visual demonstration of it down the road.
 

Robbok

Member
More information from a user of the German ComputerBase site:

He is at Gamescom right now and was able to play FIFA 14, which is being demonstrated on PS4. He said it looks amazing. He had a little chat with the devs and they assured him that the PS4 version of FIFA 14 will look better than the Xbox One version. The PC version will be based on the PS3/360 version. He's now standing in line at Sony's PS4 booth, which has the longest waiting time of the whole exhibition. source

This is not official so expect this information to contain a huge amount of subjectivity.

He has updated his post. He has now seen the Xbox One version of FIFA and can't see a difference from the PS4 version himself.
 

Perkel

Banned
Multithreading is awesome, it really is. But since I'm a senior programmer, I can tell you that not everything can be solved by multithreading. Some tasks are linear by nature, for example repeated tasks in which each repetition relies on the result of the previous repetition. Thus, from a programming standpoint, fewer, stronger cores are always better than more, weaker cores. If we somehow had a GPU with one or two cores that could give the same performance as all those shader cores combined, it would be so much easier to work with, and you could do things that are impossible today. There is no such GPU, so graphics programming has had to adjust accordingly. Some things are possible, others aren't. But forcing normal CPU programming to be scalable to n+ cores (or threads, really) is almost impossible, and certainly not a benefit to the developers. It consumes less power and is cheaper, however.

You are perfectly right, but one of the things traditionally kept on the logic thread is AI. Most games do not move AI off that thread; the 4A dev did move it to another thread.

Naturally, as you said, there are things that can't simply be spread across other cores and merged after calculation; that is the nature of some CPU tasks.

The point is that devs should move as much as they can, leaving on the logic thread only what can't be moved, especially looking at a future where multicore CPUs are the standard, single-core performance improvements stall, and manufacturers move to even more cores.
 

ElTorro

I wanted to dominate the living room. Then I took an ESRAM in the knee.
Guerilla certainly seems to be having no trouble with it at least, looking at their profiling.

Modern game engine architectures are generally striving to decouple their components and orchestrate them asynchronously. While that task is not trivial, since many subsystems have delicate interdependencies, it is generally possible. Sony teams in particular seem to have moved rapidly from a thread-per-task to a jobs-and-dispatcher architecture supported by frameworks like SPURS, since this architectural style helps in maxing out the Cell's SPUs. It should be very straightforward to port those architectures to Jaguar and GPGPU, since those target architectures are much less limited by local storage, cache sizes, and memory access penalties than Cell is.
 

Horp

Member
Guerilla certainly seems to be having no trouble with it at least, looking at their profiling.

As if one screenshot of the profiling of a few seconds of one single game is any proof of anything.
You don't normally have tons of CPU load running all the time. CPU load often comes in spikes, when you have to run a calculation for something. You want a strong CPU to handle those spikes without stuttering.

If by multithreading you are referring to parallelization, I agree in full.
Otherwise, when we have repetitive tasks that need to be executed synchronously, we assign them dedicated threads that wait and process the task(s) repeatedly. Multithreading is good precisely to free up the main thread.

That isn't always possible. Imagine having to parse a big binary file. That is really hard to divide across multiple threads. But maybe you have to do it right now, and the rest of the program must wait. In that case you need one strong core to do all that parsing. You could assign it to some other thread, but it wouldn't "free up" the main thread if the continuation of the program relies on the result of that file.
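(For example, under the made-up assumption of a level loader that needs the parsed file before it can continue; the names here are purely illustrative.)

```cpp
#include <future>
#include <string>
#include <vector>

// Toy version of the example above. parseBigFile stands in for a long,
// inherently serial parse; "level.bin" is a made-up file name.
std::vector<int> parseBigFile(const std::string& path) {
    std::vector<int> parsed;
    // ...read and decode `path` sequentially; each record depends on the last...
    (void)path;
    return parsed;
}

void loadLevel() {
    // Pushing the parse onto another thread...
    auto pending = std::async(std::launch::async, parseBigFile, std::string("level.bin"));

    // ...doesn't "free up" the main thread if the very next thing it needs is
    // the result: it just sits here blocked until the parse is done.
    std::vector<int> data = pending.get();

    // Only one core was doing useful work the whole time, so what matters is
    // how fast that single core is.
    (void)data;
}
```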
 
Digital Foundry should simply shut down at this point, all comparisons will be a wipewash.

Dream on. MS is trying hard to confuse people into thinking there isn't that much difference.

If the specs are true then we will see differences between multiplatform games day one.

pretty much.
 

strata8

Member
As if one screenshot of the profiling of a few seconds of one single game is any proof of anything.
You don't normally have tons of CPU load running all the time. CPU load often comes in spikes, when you have to run a calculation for something. You want a strong CPU to handle those spikes without stuttering.

That image is representing a single 33ms frame, not a few seconds of game time.

Can you give some examples of game scenarios that can't be easily multithreaded?
 

cebri.one

Member
BTW, how do we know that support ticket is real? Customer service talking about what an exec said in Germany? Seems quite odd.
 

Durante

Member
It should also be noted that some of the most effective solutions to fully multithreading game code (e.g. software pipelining processing across multiple frames) come with some disadvantages -- like increased input lag in this case.
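(Sketched with made-up names, the software-pipelining idea looks roughly like this; the extra frame of buffering is where the added input lag comes from.)

```cpp
#include <future>

// Hypothetical two-stage pipeline: frame N+1 is simulated on the CPU while
// frame N is still being rendered. Input sampled in simulate() therefore
// reaches the screen one frame later than in a purely sequential loop.
struct FrameData { int frame = 0; /* snapshot of game state for the renderer */ };

FrameData simulate(int frameIndex) {
    // ...read input, run game logic, produce a snapshot...
    return FrameData{ frameIndex };
}

void render(const FrameData& f) {
    // ...turn the snapshot into draw calls...
    (void)f;
}

void runPipelined(int frames) {
    FrameData inFlight = simulate(0);
    for (int n = 1; n < frames; ++n) {
        // Overlap: simulate frame n while frame n-1 is rendering.
        auto next = std::async(std::launch::async, simulate, n);
        render(inFlight);
        inFlight = next.get(); // this extra frame of buffering is the added latency
    }
    render(inFlight);
}
```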

Soon, in AAA gaming, having more cores will be better than having fewer, faster cores.
If the total performance is equivalent, this will never be the case, regardless of application area. It simply doesn't make sense from a software development perspective.
 

benny_a

extra source of jiggaflops
It should also be noted that some of the most effective solutions to fully multithreading game code (e.g. software pipelining processing across multiple frames) come with some disadvantages -- like increased input lag in this case.
Oh no. Killzone: Shadow Fall had a significant "jobification" increase compared to their previous PS3 games.

200ms input lag confirmed. :-D
 

ElTorro

I wanted to dominate the living room. Then I took an ESRAM in the knee.
Can you give some examples of game scenarios that can't be easily multithreaded?

Depends on what you mean by "easily". Parallelizing an architecture on a large scale is almost always complicated, especially if you are working with a legacy code base. I have a nice example from Jason Gregory's (Naughty Dog developer) book that showcases the dependencies between several components when using the more traditional thread-per-task model.

[Diagram from Jason Gregory's book: task dependencies within a frame under a thread-per-task model]


There is apparently a lot of unused potential due to dependencies between the results and parameters of different phases.

/edit: by the way, the "Submit Primitives" task is one example of CPU/GPU interaction where information has to be copied in memory on a non-hUMA architecture.
 

Durante

Member
Oh no. Killzone: Shadow Fall had a significant "jobification" increase compared to their previous PS3 games.

200ms input lag confirmed. :-D
That doesn't necessarily mean it uses more software pipelining stages though. There are many ways to achieve higher parallelism -- but in a non-trivial application none of them are perfect.
 

dr_rus

Member
Meanwhile, another poster linked an AnandTech comparison where the 8-core from AMD beats the more expensive, "faster" 4-core Intel CPU in almost all multicore apps, with one or two of them beating it hard.
Riiiiiiight.

[Benchmark charts: Dell XPS 12 (i7-3517U) vs. Kabini A4-5000]


The XPS 12 uses an i7-3517U - that's a dual-core Ivy Bridge at 1.9 GHz.
Kabini A4-5000 is a 4-core Jaguar @ 1.5 GHz.
You can pretty much double the single-threading result to see how the PS4/XBO CPUs will compare to a 4-core Haswell. Basically - a 4-core Haswell is 6+ times faster on a single thread and ~2 times faster in multithreading than an 8-core Jaguar.

But my point wasn't about apps, where it is known that multicore will be better. My point was about gaming: games on PC today do not scale well with cores and often don't load the other cores evenly, and in that case Intel will always be the winner, since they prioritize single-core performance. And since games do not scale properly to n+ cores, the 8-core from AMD can't "win".
Games do not scale on faster CPUs basically because the GPU is the bottleneck for like 99% of the time. This won't change in any way with the coming of the new console generation.
 
Update 4 seems to tell us everything we already knew. Anyone who thinks the PS4 won't be more powerful is nuts, but as for the extent of it... I haven't seen it yet. I will abandon the Xbox One immediately if there is a clear multiplatform advantage for PS4 that makes the Xbox One titles night-and-day different. I suspect at HDTV resolutions that difference won't be that great. I haven't seen it yet, but I've also not seen many comparisons between the two platforms on games I want, like AC4 or Watch Dogs.
 

Perkel

Banned
If the total performance is equivalent, this will never be the case, regardless of application area. It simply doesn't make sense from a software development perspective.

If.

Single-core performance is much harder to improve than just adding another core, and it will only get harder in the future.

The i7 is a thing because Intel can't create a 4-core i5 that is as fast as an 8-thread i7, the same way Intel can't create one core that is as fast as a 4-core i5.

We started with 1 core, then around 2005 it was 2 cores, around 2010 the standard was 4 cores, and by 2014-2015 it will be 8 cores, and that process will go on unless there is some major breakthrough in CPU design.

Single-core performance will improve, but as I said, it will get harder with every new generation. Moving from single-core performance to n+ multicore is the only option for the future, since soon single-core improvements will be small compared to the total power provided by more cores, and compared to the tasks that will become more expensive in the future.
 

astraycat

Member
That image is representing a single 33ms frame, not a few seconds of game time.

Can you give some examples of game scenarios that can't be easily multithreaded?
Multithreading is only easy in the most trivial of cases. Sure, it may look relatively simple on paper, but once you actually get to implementing any sort of parallel system you'll find that the devil is in the details.

Some of the hardest-to-find bugs are those where slightly incorrect assumptions were made about exactly which parts should be protected by critical sections.
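(A small, made-up example of how that goes wrong: each individual access is locked, yet the check-then-insert as a whole is not.)

```cpp
#include <mutex>
#include <string>
#include <unordered_map>

// Toy example of that kind of bug. The cache itself is "protected", but the
// critical section is drawn around the wrong boundary. Names are invented.
std::mutex cacheMutex;
std::unordered_map<std::string, int> cache;

bool containsLocked(const std::string& key) {
    std::lock_guard<std::mutex> lock(cacheMutex);
    return cache.count(key) != 0;
}

// Buggy: between the check and the insert the lock is dropped, so two threads
// can both decide the key is missing and both do the (possibly expensive) work.
void insertIfMissingBuggy(const std::string& key, int value) {
    if (!containsLocked(key)) {
        std::lock_guard<std::mutex> lock(cacheMutex);
        cache[key] = value;
    }
}

// Fixed: the check and the insert form one atomic critical section.
void insertIfMissing(const std::string& key, int value) {
    std::lock_guard<std::mutex> lock(cacheMutex);
    if (cache.count(key) == 0)
        cache[key] = value;
}
```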
 

strata8

Member
Right, but you just didn't mention that i7 3517U has two cores with four threads and a turbo of 3GHz. On top of that it's super expensive (like most of the Intel CPUs).

Obviously Jaguar is far cheaper and more efficient than the i5. He was just dispelling the notion that an 8 core CPU is automatically faster than a 4 core CPU.

Multithreading is only easy in the most trivial of cases. Sure, it may look relatively simple on paper, but once you actually get to implementing any sort of parallel system you'll find that the devil is in the details.

Some of the hardest-to-find bugs are those where slightly incorrect assumptions were made about exactly which parts should be protected by critical sections.

Worded better: prohibitively difficult. I was wondering what routines in games can't be threaded under any circumstances to the point that they can cause stuttering.
 

ElTorro

I wanted to dominate the living room. Then I took an ESRAM in the knee.
Some of the hardest-to-find bugs are those where slightly incorrect assumptions were made about exactly which parts should be protected by critical sections.

Absolutely. I just spent half a day tracking down a race condition and spent the other half of the day finding the bug fix with the smallest performance impact. That race condition occurred in less than 0.05% of all cases, and since purposefully provoking it in simulated test runs is almost impossible, testing and debugging is a b*tch.
 