
Recent PS4 SDK update unlocked 7th CPU core for gaming

DonMigs85

Member
The PS4 menu can get choppy at times if you access it while playing a game, and I assume that's with 2 cores handling it, so I hope this doesn't make it even worse with future games that support the 7th core.
 

yurinka

Member
The PS4 menu can get choppy at times if you access it while playing a game, and I assume that's with 2 cores handling it, so I hope this doesn't make it even worse with future games that support the 7th core.
If it's choppy it's because the menu is poorly optimized, not because of lack of horsepower. It's a really simple menu, they can optimize it to run properly with a single core.
 
Only in multiples of one.
So, any non-zero number of cores?
 

Jhn

Member
If it's choppy it's because the menu is poorly optimized, not because of lack of horsepower. It's a really simple menu, they can optimize it to run properly with a single core.

I'd wager that almost all of the menu choppiness people experience is due more to HDD reading speed than anything CPU bound. Bad menu performance pretty much always correlates with loading screens and things of that nature for me.

Which makes sense, in a way. The menu needs to read its content while the game is doing its own thing.
 

mrklaw

MrArseFace
CPU- and RAM-wise, the PS4, not having to worry about Snap-like features, can assume the game is either backgrounded or fullscreen using the full system resources minus a known set of system services (notifications, custom background music, PS Store downloads and uploads, etc.), some of which are backed by a secondary CPU with its own private memory pool (background video recording, streaming, etc.). Although it is a quick transition when you go, for example, to the Share screen, the game is paused and control is given to the OS, including the bulk of system resources, CPU cores included.

Essentially, the OS likely has at least two modes of operation: shrinking when the game is fullscreen and only game-companion services, so to speak, are needed (streaming a game while keeping the chat comments and watcher notifications coming, etc.), and reclaiming resources whenever it is brought into full control again. The system never runs complex, resource-hogging operations while the game is active, and vice versa.

Nice summary. It does appear to have a simpler set of situations to cover off compared to X1. And a lot of those background tasks are video encode/upload based (streaming, Remote Play and Share Play all use that model). Streaming with a PS Camera feed and chat feed overlaid is probably the most intensive thing the OS will ever have to do, and that is already covered, so they should have a pretty clear and high-confidence view of future RAM/CPU needs for the OS for the rest of the generation.
 

TimFL

Member
I'd wager that almost all of the menu choppiness people experience is due more to HDD reading speed than anything CPU bound. Bad menu performance pretty much always correlates with loading screens and things of that nature for me.

Which makes sense, in a way. The menu needs to read its content while the game is doing its own thing.

Don't forget that the OS heavily relies on a network connection at times. If your connection drops, it takes the fluidity with it (at least it does here; I've had the OS lock up for a few seconds when my internet connection was down).
 

Javin98

Banned
Pretty funny that people scoffed at the idea of the Xbox One getting an advantage from an extra CPU core and a slight overclock. Now, all of a sudden, the PS4 might have an extra core available and it's "release the Kraken".
You do realize that most of those Dragon Ball and Release the Kraken GIFs are all jokes, right? No one seriously thinks that freeing a core will result in substantially better visuals on its own.

Also, what's with the shitposts that games can now run at a solid 30FPS with the 7th core freed? Most games on PS4 already run at a solid 30FPS, and several even at a solid 60FPS. I wonder how these people played any games last gen. As a matter of fact, 60FPS was rarer last gen than this gen.
 

Inuhanyou

Believes Dragon Quest is a franchise managed by Sony
I'm stupid when it comes to this shit so as a gamer, what kind of improvements do you think we will see in future games on the ps4?

Marginally better FPS. Of course, when you're looking at the difference between a stable 30 and an unstable 30, it could be quite important.
 

LordOfChaos

Member
Two of the bases for the A9X "miracle" are the Geekbench number, which is oriented around mobile, plus our conjecture that it somehow scales very linearly to desktop workloads if Apple simply wants it to.

Explain what those words mean? Because a lot of people who say it isn't comparable don't know of, or ignore, Primate Labs' own statement on it. They use smaller working sets on mobile only because the tests used to take so long, but the end scores are comparable within 1-4%. In Geekbench 4 they're going to make the working sets the same size so that people can stop making a fake fuss about that, and because phones have gotten so fast they finish very quickly even on the desktop working set.

The only valid criticism of comparing them is one I already brought up in my post: the encryption score dragging the total up. But as a geometric mean across so many subtests, one outlier doesn't drag it up much.
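For what it's worth, the damping effect of a geometric mean is easy to demonstrate with made-up numbers (the scores below are illustrative, not real Geekbench subtest results):

```python
from math import prod

def geometric_mean(scores):
    """Geometric mean: the n-th root of the product of n scores."""
    return prod(scores) ** (1 / len(scores))

# Hypothetical subtest scores: nine at 1000, one outlier at 4000
# (think of a hardware-accelerated encryption subtest).
scores = [1000] * 9 + [4000]

arithmetic = sum(scores) / len(scores)  # 1300.0 -- the outlier adds 30%
geometric = geometric_mean(scores)      # ~1148.7 -- the outlier adds ~15%
```

With an arithmetic mean the outlier inflates the total twice as much, which is the point being made above.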

Anyways. Most other tests fall in line with the relative placements in Geekbench anyways. Don't trust GB? Use it in conjunction with a dozen other tests and kick out the outliers. You still find A9 dominating the smartphone camp, dominating Atom, and by extension being ahead of Jaguar per core.

And I wasn't talking about any upwards clock scaling for desktop, I was talking about the cores as they already perform.
 

Man

Member
Uncharted 4 and other upcoming games using jobs to multi-thread their workload (but specifically UC4, as the whole engine is parallelized, without even a main orchestrator thread) should benefit from this quite a bit.
 

RedSonja

Banned
Perhaps Fallout 4 won't now randomly drop to 10 fps...hang on... that would require Bethesda to optimize their engine...haha
 

JaseC

gave away the keys to the kingdom.
Is the RAM that the OS is eating up gonna get lowered too?

Sure, eventually. It has long been rumoured that RAM issues prevented Sony from implementing cross-party chat on the PS3 (put simply, the X360 has a unified 512MB pool that can be utilised as needed, whereas the PS3 has 256MB of system RAM and 256MB of VRAM, each of which has to be micro-managed), and so the common -- and, quite likely, accurate -- assumption is that reserving 3GB of RAM allows Sony (and MS) to implement OS-level features without needlessly inconveniencing developers. "Better safe than sorry" is, when you're dealing with closed systems, the best approach to take. Better to over-extend your reach than to have to later fight for every megabyte.
 

vpance

Member
How much difference would it make if they freed up 1GB of RAM? Less pop in maybe? Could we start to see more med/high textures being used or are they still limited by bandwidth?
 

Inuhanyou

Believes Dragon Quest is a franchise managed by Sony
How much difference would it make if they freed up 1GB of RAM? Less pop in maybe? Could we start to see more med/high textures being used or are they still limited by bandwidth?

High and ultra textures on PS4 and XB1 are already 99% in line with PC multiplat titles. There's no more need for RAM for that. And world size/complexity is already pretty big.
 

zsynqx

Member
Uncharted 4 and other upcoming games using jobs to multi-thread their workload (but specifically UC4, as the whole engine is parallelized, without even a main orchestrator thread) should benefit from this quite a bit.

Could you explain this to me as if I was a 5 year old?
 

Vashetti

Banned
High and ultra textures on PS4 and XB1 are already 99% in line with PC multiplat titles. There's no more need for RAM for that. And world size/complexity is already pretty big.

Yeah as a few have already said, reducing the RAM footprint can't be a huge priority right now.
 

vpance

Member
High and ultra textures on PS4 and XB1 are already 99% in line with PC multiplat titles. There's no more need for RAM for that. And world size/complexity is already pretty big.

Really? I thought a lot of the big titles still had some low to med texture settings on console.

I guess at the very least load times might improve.
 

Inuhanyou

Believes Dragon Quest is a franchise managed by Sony
Really? I thought a lot of the big titles still had some low to med texture settings on console.

I guess at the very least load times might improve.

Not the case. AF might be spotty in some games, and AA may be of a lower quality, but actual texture resolution is always consistently high or ultra
 

kitch9

Banned
Whatever they can use to try shitting over the current gen consoles. Which is hardly necessary really, since we already know the CPU in the current gen consoles is probably a weak point.

Mobile phones don't have a GPU with large amounts of compute or hardware tessellation available either.
 

Renekton

Member
Explain what those words mean? Because a lot of people who say it isn't comparable don't know of, or ignore, Primate Labs' own statement on it. They use smaller working sets on mobile only because the tests used to take so long, but the end scores are comparable within 1-4%. In Geekbench 4 they're going to make the working sets the same size so that people can stop making a fake fuss about that, and because phones have gotten so fast they finish very quickly even on the desktop working set.

The only valid criticism of comparing them is one I already brought up in my post: the encryption score dragging the total up. But as a geometric mean across so many subtests, one outlier doesn't drag it up much.

Anyways. Most other tests fall in line with the relative placements in Geekbench anyways. Don't trust GB? Use it in conjunction with a dozen other tests and kick out the outliers. You still find A9 dominating the smartphone camp, dominating Atom, and by extension being ahead of Jaguar per core.

And I wasn't talking about any upwards clock scaling for desktop, I was talking about the cores as they already perform.
http://www.pcworld.com/article/3006...-pro-really-isnt-as-fast-a-laptop.html?page=3

Linus Torvalds said:
Wilco, Geek Bench has apparently replaced dhrystone as your favourite useless benchmark. Geekbench is SH*T.
 

"D"

I'm extremely insecure with how much f2p mobile games are encroaching on Nintendo
Sooo, if the PS4 opens the 8th gate, er....core, would it simply turn to ash after performing Evening Elephant and Night Guy?
 

Heidern98

Member
Not sure if this is related or my memory has gone bad, but I could have sworn I used to be able to use voice commands after starting a game. I would use them to initiate broadcasting. Now I can't get it to work with the latest games. Could that be a feature affected by the most recent changes to the OS?
 
High and ultra textures on PS4 and XB1 are already 99% in line with PC multiplat titles. There's no more need for RAM for that. And world size/complexity is already pretty big.

You're thinking backwards. Most textures are made with the ~3.5GB of memory most games probably use (out of the available 5.5GB) in mind, and then the Ultra settings on PC include the original textures that were dumbed down on consoles for going over budget. If you give another gigabyte to games and average RAM consumption stays the same, you've increased the texture budget a whole lot.
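To put rough numbers on that (the 3.5/5.5 GB figures are from the post above; the 1.5 GB texture share is purely my assumption for illustration):

```python
# Figures from the post: games get ~5.5 GB total, a typical game uses ~3.5 GB.
# Assumption (mine, not from the thread): textures take roughly 1.5 GB of that.
texture_budget = 1.5  # GB spent on textures today
freed = 1.0           # GB newly released by the OS

# If all of the freed memory went to textures, the texture budget grows by:
growth = freed / texture_budget
print(f"{growth:.0%}")  # 67%
```

Even if only part of the freed gigabyte went to textures, the relative increase would be large because textures are only a slice of a game's total memory use.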

Wilco, Geek Bench has apparently replaced dhrystone as your favourite useless benchmark. Geekbench is SH*T.

I wouldn't take anything Torvalds says seriously. He's too fond of hyperbole for the sake of it.
 

LordOfChaos

Member

You left out the part where the GB people specifically address Linus's criticism.

but he also disagrees with Torvalds.

“We have a lot of respect for him,” Poole said. “I think he’s wrong in this case.”

Torvalds argues against the value of small code loops in measuring performance, but Poole said the future is mostly about smaller loops. Poole said moving a window around a screen or opening a window is mostly a solved problem for CPUs.

“What happens when we get to games or applications like Photoshop? Then you see the movement to smaller, hotter loops. You’re going to see things where you’re running the core loop of a physics engine or the core loop of a rendering engine or the core loop of a Javascript interpreter,” Poole said. “You’re talking about these much smaller, much hotter loops, and I think Geek Bench measures this quite nicely.”

Poole said they’ve been very transparent with what the test measures and have provided extensive documentation as well. In order to measure the chip performance, Geek Bench tries to execute the same code on every platform, Poole said.

Poole claims the question of whether the A9X is faster than, say, a Core m3 is beside the point. Today, the software that you can run on a laptop just isn’t available on the iPad Pro, rendering Apple’s productivity tablet mostly a curiosity until software changes that.

Linus gets mad at a lot of things. I have mad respect for him, but he's not always right. And his criticism is quite weird since he must know what a geometric mean is. And as much as I respect his work, I kind of think at this point in his life he mouths off more than he should to grab some headlines.

Remember this bit of maturity?
[image: Linus Torvalds giving Nvidia the middle finger]


And nowhere does he say anything about your initial claim of it being "oriented around Mobile".
 

ElTorro

I wanted to dominate the living room. Then I took an ESRAM in the knee.
The PS4 menu can get choppy at times if you access it while playing a game, and I assume that's with 2 cores handling it, so I hope this doesn't make it even worse with future games that support the 7th core.

What annoys me a bit is that voice chat stutters when you are using the menu with a game running in the background.

But I'll take more resources for games over stuff like that.
 

Man

Member
Could you explain this to me as if I was a 5 year old?
Jobs: It is a way to encapsulate/fine-grain every little computation the game engine needs to do (usually called jobs/tasks) so these can be freely distributed to any available CPU core (each core just picks the next queued job once it finishes the current one). So if your console and/or computer receives more cores overnight, not much work is needed (if any) to put them to use.

More and more game engines have switched over to this method in the last few years as a great means to harness multi-core CPUs and to easily scale as more advanced CPUs with more cores are introduced. This was especially useful on the PS3's Cell, as it had one master core and six slave cores (SPUs).

Naughty Dog has taken it a step further with their PS4 engine, starting with TLOU: Remastered. Usually you have one CPU core dedicated to orchestrating/managing the whole jobs system, but ND's PS4 engine is fully parallelized in that it doesn't need this at all. All six cores are happily computing jobs (and with this new SDK they have seven). This is thanks to the magic fairy dust of 'fibers' (jobs-within-jobs) that can cooperatively yield to each other and resume their work where they left off later on, combined with core-locked worker threads... Basically: Their engine is more or less perfectly scalable. Enabling and fully utilizing a 7th core (or in theory: a hundred more cores) probably took them 30mins of coding.

They have a really cool technical talk here (one hour long): http://www.gdcvault.com/play/1022186/Parallelizing-the-Naughty-Dog-Engine
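For the curious, the job-queue idea above can be sketched in a few lines (a toy illustration, not Naughty Dog's actual code; a real engine would do this in C++ with lock-free queues and fibers):

```python
import queue
import threading

jobs = queue.Queue()  # shared queue every worker pulls from
results = []
lock = threading.Lock()

def worker():
    """Pull the next queued job as soon as the current one finishes."""
    while True:
        job = jobs.get()
        if job is None:  # sentinel: no more work for this worker
            return
        out = job()
        with lock:
            results.append(out)

def run_jobs(job_list, num_workers):
    workers = [threading.Thread(target=worker) for _ in range(num_workers)]
    for w in workers:
        w.start()
    for job in job_list:
        jobs.put(job)
    for _ in workers:
        jobs.put(None)  # one sentinel per worker
    for w in workers:
        w.join()

# The same job list runs unchanged on 6 or 7 workers -- adding a core
# just means spawning one more worker thread.
run_jobs([lambda i=i: i * i for i in range(100)], num_workers=7)
print(sum(results))  # 328350, regardless of worker count
```

The key property is that the jobs, not the threads, carry the work: the worker count is a single parameter, which is why "enable the 7th core" can be a trivial change in an engine built this way.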
 

zsynqx

Member
Jobs: It is a way to encapsulate/fine-grain every little computation the game engine needs to do (usually called jobs/tasks) so these can be freely distributed to any available CPU core (each core just picks the next queued job once it finishes the current one). So if your console and/or computer receives more cores overnight, not much work is needed (if any) to put them to use.

More and more game engines have switched over to this method in the last few years as a great means to harness multi-core CPUs and to easily scale as more advanced CPUs with more cores are introduced. This was especially useful on the PS3's Cell, as it had one master core and six slave cores (SPUs).

Naughty Dog has taken it a step further with their PS4 engine, starting with TLOU: Remastered. Usually you have one CPU core dedicated to orchestrating/managing the whole jobs system, but ND's PS4 engine is fully parallelized in that it doesn't need this at all. All six cores are happily computing jobs (and with this new SDK they have seven). This is thanks to the magic fairy dust of 'fibers' (jobs-within-jobs) that can cooperatively yield to each other and resume their work where they left off later on, combined with core-locked worker threads... Basically: Their engine is more or less perfectly scalable. Enabling and fully utilizing a 7th core (or in theory: a hundred more cores) probably took them 30mins of coding.

They have a really cool technical talk here (one hour long): http://www.gdcvault.com/play/1022186/Parallelizing-the-Naughty-Dog-Engine

Wow thanks a lot. Watching the talk now.
 

DeepEnigma

Gold Member
Jobs: It is a way to encapsulate/fine-grain every little computation the game engine needs to do (usually called jobs/tasks) so these can be freely distributed to any available CPU core (each core just picks the next queued job once it finishes the current one). So if your console and/or computer receives more cores overnight, not much work is needed (if any) to put them to use.

More and more game engines have switched over to this method in the last few years as a great means to harness multi-core CPUs and to easily scale as more advanced CPUs with more cores are introduced. This was especially useful on the PS3's Cell, as it had one master core and six slave cores (SPUs).

Naughty Dog has taken it a step further with their PS4 engine, starting with TLOU: Remastered. Usually you have one CPU core dedicated to orchestrating/managing the whole jobs system, but ND's PS4 engine is fully parallelized in that it doesn't need this at all. All six cores are happily computing jobs (and with this new SDK they have seven). This is thanks to the magic fairy dust of 'fibers' (jobs-within-jobs) that can cooperatively yield to each other and resume their work where they left off later on, combined with core-locked worker threads... Basically: Their engine is more or less perfectly scalable. Enabling and fully utilizing a 7th core (or in theory: a hundred more cores) probably took them 30mins of coding.

They have a really cool technical talk here (one hour long): http://www.gdcvault.com/play/1022186/Parallelizing-the-Naughty-Dog-Engine

Thanks for this as well. Watching now.
 
Oh, that's good to know. Thanks, guys.
Basically, it's the same as the upgrade to the Retina display. Non-Retina content will look a bit nicer on the new display simply because the pixels are smaller, but it still won't look nearly as nice as the real Retina content.


Jobs: It is a way to encapsulate/fine-grain every little computation the game engine needs to do (usually called jobs/tasks) so these can be freely distributed to any available CPU core (each core just picks the next queued job once it finishes the current one). So if your console and/or computer receives more cores overnight, not much work is needed (if any) to put them to use.

More and more game engines have switched over to this method in the last few years as a great means to harness multi-core CPUs and to easily scale as more advanced CPUs with more cores are introduced. This was especially useful on the PS3's Cell, as it had one master core and six slave cores (SPUs).

Naughty Dog has taken it a step further with their PS4 engine, starting with TLOU: Remastered. Usually you have one CPU core dedicated to orchestrating/managing the whole jobs system, but ND's PS4 engine is fully parallelized in that it doesn't need this at all. All six cores are happily computing jobs (and with this new SDK they have seven). This is thanks to the magic fairy dust of 'fibers' (jobs-within-jobs) that can cooperatively yield to each other and resume their work where they left off later on, combined with core-locked worker threads... Basically: Their engine is more or less perfectly scalable. Enabling and fully utilizing a 7th core (or in theory: a hundred more cores) probably took them 30mins of coding.

They have a really cool technical talk here (one hour long): http://www.gdcvault.com/play/1022186/Parallelizing-the-Naughty-Dog-Engine
Great breakdown and great news. This is the future of coding, and will be especially useful in VR, I suspect.

Thanks for the link. <3
 

tuxfool

Banned
Their engine is more or less perfectly scalable. Enabling and fully utilizing a 7th core (or in theory: a hundred more cores) probably took them 30mins of coding.

Not in theory. If the overhead has not been accounted for, cooperatively communicating across 100 cores could potentially saturate them with traffic. With that many cores you'd almost need some sort of orchestration.

There is no such thing as perfectly scalable.
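A toy model makes the point (the per-core overhead figure below is an arbitrary assumption for illustration, not a measured number):

```python
# Even a "fully parallel" job system pays some coordination cost per
# extra core, e.g. contention on the shared job queue. Model each
# additional core as adding a fixed overhead fraction and watch the
# speedup fall away from ideal as the core count grows.

def speedup(cores, overhead_per_core=0.005):
    # Ideal speedup is `cores`; the denominator grows with core count.
    return cores / (1 + overhead_per_core * (cores - 1))

print(round(speedup(7), 2))    # ~6.8x  -- near-ideal at console scale
print(round(speedup(100), 2))  # ~66.9x -- well short of the ideal 100x
```

At 7 cores the overhead is in the noise, which is why the "30 minutes of coding" claim is plausible for a console; at 100 cores the same design would leave a third of the ideal speedup on the table under these assumed numbers.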
 

Shin-Ra

Junior Member
Basically, it's the same as the upgrade to the Retina display. Non-Retina content will look a bit nicer on the new display simply because the pixels are smaller, but it still won't look nearly as nice as the real Retina content.
Apple actually did a piss-poor job upscaling low ppi content on retina displays.
 