
Guerrilla Games: Regarding Killzone Shadow Fall and 1080p

Serious questions.

In general, whenever an image is upscaled (or downscaled, for that matter), be it by a hardware upscaler or a software solution, does this happen after the image comes out of the framebuffer?

In the case of KZ MP, is the image coming out of the framebuffer 1080p?

When it comes to the true definition of calling a frame native 1080p in game development, is this applied to the image that comes out of the framebuffer?


Thanks in advance.
Good question. I'd like an answer to this also.
 

sono

Member
I like how they've taken the time to explain the technique, but with all this detail...



...I have to wonder if just rendering at a slightly lower resolution and upscaling would have been more efficient? This sounds like an awful lot of work to be doing every frame.


Yes, I was thinking about that. It made me wonder if there is anything in the PS4 hardware that supports this algorithm - compare, for example, how some TVs have a sports mode that interpolates frames to give a higher frame rate than the source.
 

sono

Member
Serious questions.

In general, whenever an image is upscaled (or downscaled, for that matter), be it by a hardware upscaler or a software solution, does this happen after the image comes out of the framebuffer?

In the case of KZ MP, is the image coming out of the framebuffer 1080p?

When it comes to the true definition of calling a frame native 1080p in game development, is this applied to the image that comes out of the framebuffer?


Thanks in advance.

My reading of this is that yes, it is 1080p/60fps coming out of the framebuffer; what is being described is how the framebuffer is built up before then.
 

Horp

Member
One thing is for sure: it's a very cool new technique. Glad they shared this. I would like to implement this myself and try it out, but I imagine that the algorithms for blending based on motion vectors are quite intricate. Guerrilla Games have some crazy good graphics engineers. I've seen a couple of presentations by them on other topics, and the stuff they do is just crazily impressive.
 

SappYoda

Member
I'm loving all these developers that use video compression techniques to optimize the performance.

If the end result were nearly perfect image quality (not the case here), nobody would deem it cheating.

I hope they keep improving them, as in the end it will mean we get better-looking games.
 

DGaio

Member
From the info provided by GG, I would say only the first frame isn't 1920x1080; the ones coming after are, interpolating the missing data from the previous ones. It's a nifty way of achieving a native framebuffer to be sent to the TV/monitor.

I think where people are hung up is on the concepts of native and scaled (up or down).

The usual pipeline to get an image onto a monitor is:

Game engine computing > FrameBuffer(s) > Frame > Monitor Buffer > Monitor Image (the image you see on the screen)

With the advent of fixed-resolution monitors (LCDs and the like) there was a need for a new step/component - a scaler - to get a properly formatted frame on the screen when the incoming frame isn't the same resolution as the monitor, so the pipeline these days translates to:

Game Engine computing > FrameBuffer(s) > Frame > Scaler (if it's being handled by the machine's hardware) > Monitor Buffer > Scaler (if it's handled by the monitor's hardware; a frame usually always passes through this one - if it's native it's a simple pass-through, if not it's processed accordingly) > Monitor Image (the image you see on the screen)

To put it very simply: all the visual information computed by the machine is stored in a framebuffer that is then translated into a frame so it can be sent through the normal pipeline. This information can be used in many different ways, and the framebuffer isn't a fixed size or number (you can have multiple framebuffers for different things to create the final frame).

Usually a frame (created from framebuffer data) is created at a fixed resolution, like 1280x720 or 1920x1080, and when it hits a scaler it is treated accordingly: if it's native to the screen resolution it passes through untouched; if not, it's scaled to the screen resolution.

What I gather from the GG explanation is that they reuse frame data to create the missing data and patch the gaps that are not being produced by the usual game engine rendering pipeline (meaning that in a 1920x1080 frame, half of the horizontal resolution is being 'guessed' based on information from the previous frame). So the actual frame being sent to the monitor is still native (meaning a 1:1 mapping to the screen resolution), apart from the first one. This means there's no upscale of the frame and therefore it can be considered native.
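To make that 'guessing' step a bit more concrete, here's a rough Python/NumPy sketch of how a 1920x1080 frame could be completed from 960 freshly rendered columns plus a motion-vector reprojection of the previous frame. This is purely illustrative and not Guerrilla's actual code; the function and parameter names are made up, and a real implementation would blend the reprojected and freshly shaded samples far more carefully.

```python
import numpy as np

def complete_frame(fresh_half, prev_full, motion, fresh_offset):
    """Build a full 1920x1080 frame from 960 freshly rendered columns.

    fresh_half  : (H, W/2, 3)  columns rendered this frame
    prev_full   : (H, W, 3)    previous completed frame
    motion      : (H, W, 2)    per-pixel motion vectors (dy, dx) in pixels,
                               assumed here to point back to the previous frame
    fresh_offset: 0 or 1       which column parity was rendered fresh this frame
    """
    H, W = prev_full.shape[:2]
    out = np.empty_like(prev_full)

    # 1. Drop the freshly shaded columns straight into place.
    out[:, fresh_offset::2] = fresh_half

    # 2. Fill the remaining columns by reprojecting the previous frame
    #    along the motion vectors (nearest-neighbour fetch for simplicity).
    missing = np.arange((fresh_offset + 1) % 2, W, 2)
    ys, xs = np.meshgrid(np.arange(H), missing, indexing="ij")
    src_y = np.clip(np.round(ys + motion[ys, xs, 0]).astype(int), 0, H - 1)
    src_x = np.clip(np.round(xs + motion[ys, xs, 1]).astype(int), 0, W - 1)
    out[:, missing] = prev_full[src_y, src_x]
    return out

# Toy usage: a static scene (zero motion), so reprojection is an exact copy.
H, W = 1080, 1920
prev_full = np.random.rand(H, W, 3)
fresh_half = prev_full[:, 0::2]        # pretend we re-rendered the even columns
motion = np.zeros((H, W, 2))
frame = complete_frame(fresh_half, prev_full, motion, fresh_offset=0)
assert frame.shape == (1080, 1920, 3)  # the output is a full native-resolution frame
```

The point of the sketch is simply that the final buffer handed to the display is 1920x1080, even though only half of its columns were shaded this frame.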
 
I'm loving all these developers that use video compression techniques to optimize the performance.

That's a pretty good analogy actually. It's like video compression in the sense that there is no need to render image information that either a) doesn't change or b) is easy to predict.

For video compression it reduces the size of the file; for real-time rendering it reduces the processor load.
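To illustrate the analogy (this is just codec-style motion-compensated prediction in miniature, not anything GG has published), the snippet below predicts the current frame by shifting the previous one along a motion vector and measures the residual. When the motion is predicted correctly the residual is zero - exactly the information a codec doesn't need to store, and that a renderer using this idea doesn't need to re-shade.

```python
import numpy as np

def motion_compensated_residual(prev_frame, cur_frame, dx, dy):
    """Predict cur_frame by shifting prev_frame by (dx, dy) pixels and
    return the mean absolute prediction error (the 'residual')."""
    predicted = np.roll(prev_frame, shift=(dy, dx), axis=(0, 1))
    return np.mean(np.abs(cur_frame - predicted))

# Toy example: a bright square that moves 2 pixels to the right between frames.
prev_frame = np.zeros((64, 64))
prev_frame[20:30, 20:30] = 1.0
cur_frame = np.zeros((64, 64))
cur_frame[20:30, 22:32] = 1.0

print(motion_compensated_residual(prev_frame, cur_frame, dx=0, dy=0))  # naive reuse: noticeable error
print(motion_compensated_residual(prev_frame, cur_frame, dx=2, dy=0))  # correct motion: error is 0.0
```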
 
From the info provided by GG, I would say only the first frame isn't 1920x1080; the ones coming after are, interpolating the missing data from the previous ones. It's a nifty way of achieving a native framebuffer to be sent to the TV/monitor.

I think where people are hung up is on the concepts of native and scaled (up or down).

The usual pipeline to get an image onto a monitor is:

Game engine computing > FrameBuffer(s) > Frame > Monitor Buffer > Monitor Image (the image you see on the screen)

With the advent of fixed-resolution monitors (LCDs and the like) there was a need for a new step/component - a scaler - to get a properly formatted frame on the screen when the incoming frame isn't the same resolution as the monitor, so the pipeline these days translates to:

Game Engine computing > FrameBuffer(s) > Frame > Scaler (if it's being handled by the machine's hardware) > Monitor Buffer > Scaler (if it's handled by the monitor's hardware; a frame usually always passes through this one - if it's native it's a simple pass-through, if not it's processed accordingly) > Monitor Image (the image you see on the screen)

To put it very simply: all the visual information computed by the machine is stored in a framebuffer that is then translated into a frame so it can be sent through the normal pipeline. This information can be used in many different ways, and the framebuffer isn't a fixed size or number (you can have multiple framebuffers for different things to create the final frame).

Usually a frame (created from framebuffer data) is created at a fixed resolution, like 1280x720 or 1920x1080, and when it hits a scaler it is treated accordingly: if it's native to the screen resolution it passes through untouched; if not, it's scaled to the screen resolution.

What I gather from the GG explanation is that they reuse frame data to create the missing data and patch the gaps that are not being produced by the usual game engine rendering pipeline (meaning that in a 1920x1080 frame, half of the horizontal resolution is being 'guessed' based on information from the previous frame). So the actual frame being sent to the monitor is still native (meaning a 1:1 mapping to the screen resolution), apart from the first one. This means there's no upscale of the frame and therefore it can be considered native.
This is how I read the situation. It's not like 960 x 1080 is being sent to a scaler to produce the 1080p output.
 
I believe Gran Turismo 5 used a much simpler implementation of this...
No, the GT5 approach is traditional scaling and very different from this.
So the truth is it's 1080p rendered over every two frames.
so basically 1080i double-pumped? nice!
No, not this either. They're rendering all pixels in every frame. It has nothing to do with interlacing.
I wonder if the type of temporal reprojection seen here can be compiled with AA samples from something like T1x SMAA for better results.
That would actually give worse results. By definition AA samples are blur; using them as the base of your render pipeline (rather than at the end) would degrade the quality.
 

BigTnaples

Todd Howard's Secret GAF Account
Okay, Guerrilla, please don't use this technique again, or improve it, as it makes the MP portion of the game a blurry mess. Battlefield 4, being 900p, has much more pleasant IQ.
Thanks.


As someone who has played 150+ hours of BF4 and only a couple of hours of KZ MP, the very first thing that was apparent to me was the increase in IQ, the cleanness of the image, and then the glorious graphics.
 
This sounds like some technical wizardry rather than the simple "dumbed down from 1080p" that certain agendas want to spin it as. They quite clearly developed the game with this in mind; it is not a case of the game simply having had to drop resolution due to issues with the hardware.
 
lol wow Guerrilla Games is too kind to some people. That's a good explanation and the stuff they are doing is crazy. Still not enough for some people though: 'yeah, but it ain't that native shit'.
 

pottuvoi

Banned
People should expect to see a lot of similar methods implemented in future titles. Meaning that there are ways to avoid unnecessary work and still get pretty much perfect edges at full resolution.

Also, if games blur most of the screen to hell, why would you want to shade every pixel at full quality/resolution?
 

Vire

Member
I'm kind of blown away they took the time to explain their technology in such detail.

Kudos.
 

McFadge

Member
No, not this either. They're rendering all pixels in every frame. It has nothing to do with interlacing.

Well, they're compositing all the pixels every frame. I don't think it's quite honest to say they're rendering 1920 x 1080 pixels per frame, as there are 960 x 1080 old pixels being 'recycled'. In the same way Guerrilla's post discusses the use of the word native, you might think differently based on your definition of the word 'render'. Sure, it's not just a slap-bang copy-paste procedure, but it's also not a 'fresh' 1920x1080 render - to me.
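For what it's worth, the accounting behind that point is simple; a quick sketch using only the 960x1080 figure quoted in this thread:

```python
# Per displayed 1920x1080 frame, using the 960x1080 figure quoted in this thread.
total_pixels = 1920 * 1080        # 2,073,600 pixels composited and sent to the display
fresh_pixels = 960 * 1080         # 1,036,800 pixels actually shaded this frame
recycled_pixels = total_pixels - fresh_pixels

print(fresh_pixels / total_pixels)  # 0.5 -> only half the pixels are newly rendered
print(recycled_pixels)              # 1,036,800 pixels reprojected from the previous frame
```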

I haven't been keeping track of this thread, but I'm sure it's probably going through similar cycles to the last one, which must be painful for everyone. The way I see it, this technique is an impressive alternative to scaling, but not native 1080p.

Can't wait to see their GDC presentation; Guerrilla always has great stuff to show off.
 

Truespeed

Member
Much ado about nothing.

In the single-player mode, the game runs at full 1080p with an unlocked frame-rate (though a 30fps cap has been introduced as an option in a recent patch), but it's a different story altogether with multiplayer. Here Guerrilla Games has opted for a 960x1080 framebuffer, in pursuit of a 60fps refresh.

SP is a full 1920x1080 at 30fps while MP is 960x1080 and averages 50fps. GG went for FPS in MP and sacrifices were made.
 

BONKERS

Member
This is generally still called native

No, it's called undersampling, which results in ugliness, especially if you can't contain the resulting temporal issues from said undersampling (particularly apparent in light buffers and things like DoF).

Much ado about nothing.



SP is a full 1920x1080 at 30fps while MP is 960x1080 and averages 50fps. GG went for FPS in MP and sacrifices were made.



IMHO, they should've just pared back the graphics even more and even rendered sub-1080p (vertically as well) to obtain a truly stable 60fps. (Unless it's the CPU holding them back, which for an online game I could believe, considering Jaguar.)

An unstable framerate in an MP game is not cool.
 

Bluenova

Neo Member
I think this is a very cool and collected technical explanation from one of my favorite studios. So they do predict pixel motion; that is pretty cool, imho.

It's a good alternative to typical upscaling.
 

Nags

Banned
I'm kind of blown away they took the time to explain their technology in such detail.

Kudos.

 

JAYSIMPLE

Banned
At the end of the day they couldn't hit true 1080p at 60fps and they made cutbacks. The same decisions as why Ryse is 900p. It's all just trickery and decisions to get the best-looking game possible at an intended frame rate. It nullifies all the stupid talk from when these consoles launched. Neither of these consoles is a powerhouse that can throw out 1080p60 easily. Hopefully the resolution talk can be dropped and we can enjoy the games. I prefer the PS4 too.
 

mrklaw

MrArseFace
I'm assuming this was done entirely by Guerrilla - they've always played around with rendering techniques. But I wonder if there is an opportunity for Sony's TV engineers to help out here? Their Motionflow is generally recognised as one of the best implementations on TVs; perhaps they could share their techniques or algorithms and see if they could be adapted for the PS4?
 
So they actually tried to explain what they did and why, something that rarely, if ever, happens, and people are still bitching?
 

Zephyx

Member
Good and satisfying explanation. I'm not a graphics programmer but I was able to understand the logic behind such implementation.
 
I'm sure lots of games use prediction elements all over the place, not just for drawing a frame either. And there's no upscaling. 1080P is drawn per frame. I'm fine with calling this a 1080P image.
 

Tulerian

Member
So, does this technique mean support for Sony's new VR Headset will be able to run in a higher res/fr than conventional methods given the specs of the PS4?

It seems to be a difficult tech to have implemented just for one game, or series, so I was wondering if there were other benefits. Or perhaps we will see more internal studios using this?
 

Fezan

Member
So, does this technique mean support for Sony's new VR Headset will be able to run in a higher res/fr than conventional methods given the specs of the PS4?

It seems to be a difficult tech to have implemented just for one game, or series, so I was wondering if there were other benefits. Or perhaps we will see more internal studios using this?
Now that you have mentioned it, can this technique be used for VR without sacrificing much? It almost increased the fps by 60-70% for multiplayer.
 

dr guildo

Member
Still calling it native 1080p; did GG take some lessons from MS in spinning?

Beware of the mistake: 1080p (####x1080) doesn't necessarily mean Full HD, which always means 1920x1080. Just look at GT5: in-game it runs at 1440x1080, which isn't Full HD, but it's still 1080p.
 

hawk2025

Member
So, does this technique mean support for Sony's new VR Headset will be able to run in a higher res/fr than conventional methods given the specs of the PS4?

It seems to be a difficult tech to have implemented just for one game, or series, so I was wondering if there were other benefits. Or perhaps we will see more internal studios using this?



...huh.

That could be a good point. Perhaps this also doubled as a tech-developing endeavor?
 
I see people giving GG a lot of pats on the back, and in some cases it's deserved (it is very interesting tech, and their eventual response to this issue was at least a full one).

But I think people are still ignoring a big part of this... And that is that this technique STILL results in a loss of quality to the final image, and they knew this ahead of time (which is why they only use it in MP). They then said it ran in full 1080p, which they had to know was, at the very least, a bit misleading.
 

hawk2025

Member
I see people giving GG a lot of pats on the back, and in some cases it's deserved (it is very interesting tech, and their eventual response to this issue was at least a full one).

But I think people are still ignoring a big part of this... And that is that this technique STILL results in a loss of quality to the final image, and they knew this ahead of time (which is why they only use it in MP). They then said it ran in full 1080p, which they had to know was, at the very least, a bit misleading.



I don't think anyone is ignoring that at all. In fact, that aspect has completely dominated the discussion.
 
Wow!! I was just waiting to hear this. As others said, this is all very interesting on the technical side. Pretty damn cool. This is one way to work with what you've got by making the most of time management. Very neat; I'd like to know more.
 
I see people giving GG a lot of pats on the back, and in some cases it's deserved (it is very interesting tech, and their eventual response to this issue was at least a full one).

But I think people are still ignoring a big part of this... And that is that this technique STILL results in a loss of quality to the final image, and they knew this ahead of time (which is why they only use it in MP). They then said it ran in full 1080p, which they had to know was, at the very least, a bit misleading.

I concur. Going forward we cannot take at face value the resolution a developer states the game is running at. GG conveniently forgot to mention some VERY important details regarding the resolution. It seems a little slimy.
 

gtj1092

Member
People keep saying they were misleading, but they apparently told Digital Foundry this a long time ago. Unless they asked DF not to publish the info, I don't see what else they need to do. They divulged the tech details to the website people go to for tech info.



It's more suspect that DF released this info on the eve of Titanfall dropping, to lessen criticism of its resolution. /tinfoil hat
 

borius

Neo Member
It's fantastic to see Guerrilla explaining themselves. This is part of Sony's new course, which I hope will be adopted widely in the future.

Their solution works pretty well; the image quality isn't the same, but it's pretty close.
 