family_guy
Member
I'm glad the consoles reserve so much RAM.
A smooth and fully featured OS is the most important feature of these consoles.
Right now it doesn't even do as much as the PS3 OS, which got by on much less RAM.
First of all, check my first post in this thread. Second, if Crytek is rendering the same target at lower precision without losing quality, that means they spend less bandwidth for the same effect, so their solution is more efficient by definition.
And finally, The Order is using 4x RGBA8; I'm not sure about one of them, since we have only one inconclusive slide about it [in the material properties section].
I don't know about Killzone: SF, because in the demo post-mortem slides they were still working out the best precision setup, but they used 5 buffers + 32-bit depth/stencil, and even this unoptimized setup is smaller than Infamous's.
I'm using Crytek's data because they gave the most insight into development of both Crysis 3 and Ryse, and they have actually researched G-buffer packing methods and their efficiency and quality in the past.
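For a rough sense of the scale being argued about, here is a sketch comparing the 1080p footprint of a compact 4x RGBA8 G-buffer against a fatter setup with some 64-bit targets. The layouts are illustrative assumptions only; the exact shipping formats of these games aren't fully documented in public slides.

```python
# Rough G-buffer footprint comparison at 1080p.
# The two layouts below are illustrative, not the actual games' formats.
PIXELS_1080P = 1920 * 1080  # 2,073,600 pixels

def footprint_mib(bytes_per_pixel_per_target):
    """Total size in MiB of a set of full-screen render targets at 1080p."""
    total_bpp = sum(bytes_per_pixel_per_target)
    return total_bpp * PIXELS_1080P / (1024 * 1024)

# 4 x RGBA8 (4 bytes each) plus a 32-bit depth/stencil target
compact = footprint_mib([4, 4, 4, 4, 4])
# 5 render targets plus depth/stencil, assuming two are 64-bit formats
fat = footprint_mib([8, 8, 4, 4, 4, 4])

print(f"compact: {compact:.1f} MiB, fat: {fat:.1f} MiB")
# compact: 39.6 MiB, fat: 63.3 MiB
```

The memory delta itself is modest; the bandwidth cost of reading and writing those targets every frame is where the per-pixel size difference really shows up.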
You're assuming all of the data accessed in a given frame is unique, but it typically isn't.
2,073,600 / 1,440,000 = 1.44, i.e. 44% more. Unless something's off on my end.

Ryse uses a 900p buffer... a 1080p buffer almost doubles the size of a 900p buffer
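The pixel-count arithmetic above can be checked in a few lines (a quick sketch, taking 1080p as 1920×1080 and 900p as 1600×900):

```python
# Pixel counts for 1080p vs 900p render targets (area only, same format).
w1080, h1080 = 1920, 1080
w900, h900 = 1600, 900

px_1080 = w1080 * h1080  # 2,073,600
px_900 = w900 * h900     # 1,440,000

ratio = px_1080 / px_900
print(px_1080, px_900, round(ratio, 2))  # 2073600 1440000 1.44
```

So a 1080p target is 44% larger than a 900p target of the same format, not "almost double".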
Fully featured, huh?
I wish that for one game, Naughty Dog or Sucker Punch would have unlimited power to use. Just imagine the quality of that game.
2,073,600 / 1,440,000 = 1.44, i.e. 44% more. Unless something's off on my end.
I have a 5-year-old FiOS DVR that constantly buffers 20 minutes of live HD footage and can record from two separate HD streams at the same time, all while offering the standard FiOS service OS, TV guides, etc. As OSes go it's nothing special, but the basic HD video recording has been nothing but rock solid. The DVR has a total of 256 MB of RAM.
Personally, I also think 3.5 GB is super bloat for the OS. But then I don't care about any of the other functionality really. I hope they get it down.
I'm confused: they said the CPU was being used at 100% all the time?
Poor resource management (or rather, pretty good resource management that could still be better), or am I lost?
How is a game using up all RAM a good thing in any way? It's not a fantastically beautiful game, it's not an incredibly big or detailed world, character models are good but not great and there is no AI to speak of so that would mean that it's just poorly optimized?
Why brag about yourself being bad at your job?
The PS3 had the same thing, but its memory footprint was reduced during its lifetime. I guess it's fair to expect the same from the PS4? Also, there's 3.5 GB of RAM dedicated to the OS, yet there are times when the OS is laggy and icons take time to load. Does the RAM have anything to do with these issues?
I hope they give more of that RAM back to developers soon.
There is nothing in the OS, even running a game and a video service at the same time, that takes anywhere near 3.5 GB. The original PS4 OS only had a 1 GB maximum RAM allocation (back when there was only 4 GB of RAM total), and even then there were plans to reduce that footprint.
A bit OT, but is anyone else's PS4 getting loud when playing this game? I thought the PS4 was quiet until this. The game is digital, btw.
I disagree. It's not a desktop operating system. They should be able to do a lot with 2 GB or less. It's a games console, and the games should be the focus, not the OS. Taking available game RAM away for OS stuff is not a good idea. Android doesn't need that and does more than the PS4's OS. I'll be honest and say that Sony is terrible at OS development when it comes to efficiency compared to MS and other companies.
Eh, I did. And I came away with the same conclusion. You, who couldn't recall one piece of the game's source code and know nothing about why they made certain design decisions, can make only flawed comparisons to other games.
Not games, but the engine. CryEngine is a general-purpose, multiplatform engine [and even used in the movie industry], so their solution must work for every case, not just a selected one, so it must be the most optimal.
Now find me any source claiming that you need buffers that big for anything, or that they increase performance or precision to a noticeable degree.
Also explain why both KZ:SF and The Order have smaller G-buffer setups.
Have you even read those slides about G-buffer packing I posted? Or are you just arguing semantics because, hey, it's a different dev and they must know best?
And sure, better to have the same pointless discussion about RAM allocation for the 30th time than an actual tech discussion here.
I'd wager that an SCEI-backed studio has a firmer grip on PS4 optimization than Crytek. I don't need to find you anything, or read anything from whichever random developer you post. Unless Sucker Punch details their design choices, we won't know why they chose the paths they did; instead we can only make the same baseless conjectures and critiques that you are making now. Which I'm not interested in.
Their G-buffer seems large for what it does, too. Pretty sure anyone can see that without calling their competence into question.
Because of some arbitrary framebuffer numbers from random games?
My theory is that I don't have access to anything internal to Sucker Punch. So making arbitrary implications based on random other titles and the decisions they make is flawed and will lead to fallacies.

Yeah, your theory is better: "it's big because it must be."
Give one reason why it should be, one with any scientific basis. If you don't have one, just stop arguing semantics.
Why are you getting so incensed about this? I wasn't trying to make an absolute 1:1 comparison, just providing perspective to define a reasonable upper bound on the RAM requirements for what is very well-established tech at this point. Besides, this may well be a moot point if the game DVR really is implemented entirely on the low-power ARM chip with its own embedded RAM, as others have speculated.

And does it run a game like Second Son in real time alongside? Didn't think so.
And you have no idea how either of the two "DVR" solutions work on an OS level. Stop trying to make bizarre comparisons that have nothing to do with each other to build this ridiculous narrative that Sony dropped the ball with memory allocation based on nothing but pure conjecture and armchair analysis.
"Bu-bu-bu- some DVR kinda does something like this other OS does! They're like the same thing or something!" Give me a break.
A breakdown wouldn't be very interesting. The OS was originally designed to run on 512 MB, and it probably still could. Right now, that extra memory is mostly being used to improve performance. Think of it like upgrading the RAM in your computer: you can do everything you do now with 1 GB, but everything just runs nicer with 4 GB; you don't have to wait through a bunch of paging when you switch apps, you don't have to reload every time you hit Back in the browser, etc. Your workflow hasn't really changed; everything just goes more smoothly. You may have access to a few apps you couldn't run before, but the main advantage is being able to run the new stuff alongside the old stuff without hampering overall system performance.

Very interesting and impressive read, OP, thank you.
By the way, the first two slides in your post are identical (titled "what to do with 8GB").
(I agree with others that the implied 3.5GB for OS is hard to understand - perhaps we can get a breakdown of that from Sony some day.)
My theory is that I don't have access to anything internal to Sucker Punch. So making arbitrary implications based on random other titles and the decisions they make is flawed and will lead to fallacies.
The fact that you keep questioning me on why they did what they did shows just how much this simple theory is beyond you. It's a dead cause at this point; believe what you want. I don't care enough to keep wasting energy typing responses to you.
Even if you're right and they could've made an effort to shrink them without affecting performance, wouldn't that just leave them more time to optimize elsewhere? Meaning, there is an advantage to the larger targets, right? Plus, it means there are still gains to be made later.

I question you because you assume there is some higher purpose to wasting bandwidth and memory on those buffers. It does not give them meaningfully higher precision or performance that is known.
The thing is that SP probably didn't need smaller ones and just didn't care, or they didn't have time to research it.
But hey, you can believe what you want. You can believe they are using a hidden dGPU for all I care.
That's probably a huge part of it. You usually don't want to be totally rethinking your buffering scheme per platform, and using an extremely fat G-buffer format on PS360 is more in the realm of cartoonish supervillainy than game development. So the newer platforms also get slim G-buffers.

You also seem to be ignoring one of the golden rules of optimization: don't optimize things that don't need to be optimized. It could be that for Crysis 3, since they were also targeting the PS3/360, they had to really squeeze down in order to get it to work well on those platforms.
It's more about wasting BW.

You know, it could just be that they found the performance hit to not be all that high when exporting to that many render targets. For one thing, I think GCN cards export pixels at full rate at up to 64 bits at a time (the console variants, that is; I think the big variants like the 290X can actually export 128 bits at full rate), so if they can get away with exporting a 64-bit buffer (for their normals, for example) without a performance hit, then why not?
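To make that export-rate reasoning concrete, here is a sketch tabulating bits per pixel for some common render-target formats against a 64-bit full-rate export threshold. The 64-bit figure is the poster's claim about console GCN parts, not something confirmed here, and the format list is my own illustration:

```python
# Bits per pixel for common render-target formats, compared against an
# assumed 64-bit full-rate export limit (the claim above about console
# GCN parts; treat the threshold as an assumption, not a spec value).
FULL_RATE_EXPORT_BITS = 64

FORMAT_BITS = {
    "RGBA8":    32,   # 4 x 8-bit unorm
    "RGB10A2":  32,   # packed 10/10/10/2
    "RGBA16F":  64,   # 4 x 16-bit float, common for HDR or normals
    "RG32F":    64,   # 2 x 32-bit float
    "RGBA32F": 128,   # 4 x 32-bit float, rarely used in a G-buffer
}

for fmt, bits in FORMAT_BITS.items():
    full_rate = bits <= FULL_RATE_EXPORT_BITS
    print(f"{fmt}: {bits} bits/pixel, full-rate export: {full_rate}")
```

Under that assumption, a 64-bit target for normals would export at the same rate as an RGBA8 one, which would support the "why not?" argument, while only 128-bit formats would pay an export-rate penalty.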
I didn't ignore that; I included it in one of my posts.

You also seem to be ignoring one of the golden rules of optimization: don't optimize things that don't need to be optimized.
While I can't find a good reason to think you're wrong in this buffer-size assessment, the position you're taking on CryEngine is one of blind faith. You can see this just by reading about the SMAA implementation in AC4, and how much extra work, or deviation from Crytek's suggested solution, they had to do. And that was work that went just into AA, something someone else had seemingly already figured out and documented well, yet it still turned out to have a number of problems specific to what their game was doing. Or even reading Crytek's own Ryse presentation, where it's clear they were thinking up solutions to problems as they went along, inventing things that weren't already available (or done well) in their engine, just because the new game had special requirements.

Not games, but the engine. CryEngine is a general-purpose and multiplatform engine [and even used in the movie industry], so their solution must work for every case, not a selected one, so it must be the most optimal.