
Killzone: Shadow Fall Multiplayer Runs at 960x1080 vertically interlaced

Oh it does actually seem to be true. 1408x792 vs apparently 960x1080
Though Titanfall is indeed rendering more pixels per frame in the traditional sense, the result on a 1080p display is overall worse IQ than Shadow Fall's approach. That's because, after rendering what pixels it does, Killzone then calculates an approximation of what the rest would have looked like if rendered. Titanfall, on the other hand, simply enlarges all its pixels to cover the holes. That's why KZ MP, though blurrier than SP, is still sharper than Titanfall.

Some players might prefer heavy blur spread across the entire screen as opposed to light blur and localized artifacts. But Guerilla's method is factually a better approximation of native 1080p rendering.
 
1080i is 'essentially' 60 complete separate images per second @ 960x540 layered as 1920x1080. This doesn't sound like what they are doing.

information from previously rendered frames is used to plug the gaps

Would explain why it looks like blur
 

HTupolev

Member
1080i is 'essentially' 60 complete separate images per second @ 960x540 layered as 1920x1080. This doesn't sound like what they are doing.
I'd personally be pretty hesitant to call alternating 960x540 fields "native 1080i."

1440x540 maybe, if we're going by common CRT line pixel counts. 1920x540, I'd feel comfortable with.
 

Sean*O

Member
1080i is 'essentially' 60 complete separate images per second @ 960x540 layered as 1920x1080. This doesn't sound like what they are doing.



Would explain why it looks like blur

I don't think that's right, I believe it's 1920 always and it interleaves fields of 540 to make 1080i.

This one is a head scratcher because it seems they would still need to render the graphics at 50-60FPS even if they were interleaving the output for some reason. How can they just split up the geometry like that and render every other scan line every other frame? I can see how it outputs that way, but rendering that way seems totally different.
 

thuway

Member
This is an interesting development. So the solution in use is a strange and esoteric trade-off that produces a faux-1080p image. Interesting.
 

HTupolev

Member
This one is a head scratcher because it seems they would still need to render the graphics at 50-60FPS even if they were interleaving the output for some reason. How can they just split up the geometry like that and render every other scan line every other frame? I can see how it outputs that way, but rendering that way seems totally different.
They don't, the output is a 1920x1080 progressive signal.

They render the fresh "960x1080" component of the current field. This data fills half the columns in the 1920x1080 buffer for the current frame. Then they fill in the gaps in the 1920x1080 buffer by projecting data from the previous frame into the missing pixels.
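If it helps to picture it, here's a rough numpy sketch of that assembly step. The names and the naive copy are mine, just a guess at the mechanics; the real filler reprojects the stale pixels rather than copying them straight over.

import numpy as np

def assemble_frame(fresh_half, prev_full, frame_index):
    # fresh_half:  the 1080 x 960 x 3 block of columns actually rendered this frame
    # prev_full:   the previous assembled 1080 x 1920 x 3 frame
    # frame_index: decides whether the even or the odd columns are fresh this time
    out = prev_full.copy()             # stale pixels carried over from last frame
    offset = frame_index % 2           # alternate which column set is fresh each frame
    out[:, offset::2, :] = fresh_half  # slot the freshly rendered columns into place
    # A straight copy of the stale columns is the naive version; the real
    # technique would reproject them along motion vectors first.
    return out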
 
There are definitely ghosting artifacts.

[screenshot showing the ghosting artifacts]
Yes, but only for stuff with high parallax; the concertina further on doesn't have them. Overall it's an interesting choice. There's that ghosting on thin, fast-moving objects, and textures are blurred, but edge aliasing is reduced. Perhaps that's why Guerilla misleadingly referred to this technique as a "different type of AA in multiplayer"?

As for people saying this approach is 1080i, it absolutely isn't. Nor is it "1920i", even though that's closer because the render is skipping columns, not rows. But it's not an interlaced output of any type; it does send 1920x1080 pixels in every frame (i.e. there are no fields).

And it's not a horizontal-only upscale either. That approach was used relatively frequently on PS3 last gen: take the pixels you've rendered, and stretch them wider to fill the screen. But Killzone's method doesn't do that; it fills the missing pixels with different information.

And that different information is not just the same pixels from the last frame. If it was, there would be vertical combing artifacts on every single object in motion, not just thin stuff that's crossing quickly, as seen above. So there's some algorithm in place which tries to interpolate things in an intelligent way (and fails on certain stuff in the ~20ms window it has).
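For the curious, the difference between a dumb fill and a motion-compensated fill looks roughly like this in code. This is purely my own sketch (the buffer names and the nearest-neighbour lookup are assumptions, not anything Guerilla has described):

def fill_gap_pixel(prev_frame, motion, x, y, naive=False):
    # prev_frame: the last assembled 1080p frame, a numpy array of shape (H, W, 3)
    # motion:     per-pixel screen-space motion in pixels, shape (H, W, 2),
    #             i.e. how far each pixel's content has moved since last frame
    if naive:
        # Plain repeat of the old pixel: fine for static scenes, but it's
        # what would produce vertical combing on anything in motion.
        return prev_frame[y, x]
    # Motion-compensated: fetch the old pixel from wherever this surface was last frame.
    dx, dy = motion[y, x]
    src_x = min(max(int(round(x - dx)), 0), prev_frame.shape[1] - 1)
    src_y = min(max(int(round(y - dy)), 0), prev_frame.shape[0] - 1)
    return prev_frame[src_y, src_x]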

All in all, there's no existing jargon to describe what's going on. The only accurate approach is a full description, like "It's 960x1080 temporally reprojected to 1080p." I know that's wordy and unsatisfying, but it's the best way. Talking about interlacing or upscaling is just not correct. (Though it's hard to get away from those words; I believe I may have used "interlaced" somewhere earlier in the thread myself.)
 

Sean*O

Member
They don't, the output is a 1920x1080 progressive signal.

They render the fresh "960x1080" component of the current field. This data fills half the columns in the 1920x1080 buffer for the current frame. Then they fill in the gaps in the 1920x1080 buffer by projecting data from the previous frame into the missing pixels.

OK that makes sense.
 

Capella

Member
I hope we get 1080p for 30fps games and 900p for 60fps games. It seems realistic enough to expect.
I hope so too but you know, depends on the game and the dev team.

There are four words you're going to hear a lot of when it comes to the PS4, and together they change the game.

better graphics
bigger worlds



Give me 960x1080 or give me death.
lol. Though, I'm genuinely excited about those two things in next gen games. I guess my expectations for us getting 1080p all of the time are just really low.
 

HTupolev

Member
There's that ghosting on thin, fast-moving objects, and textures are blurred, but edge aliasing is reduced.
It shouldn't be, not compared to native 1920x1080, since they aren't actually increasing sampling. Maybe it's being turned into noise because of reprojection inaccuracies making things a bit more "stochastic," but any overall reduction in edge garbage should be a result of actual AA and not simply reprojecting things.
 

Orayn

Member
Yes, but only for stuff with high parallax; the concertina further on doesn't have them. Overall it's an interesting choice. There's that ghosting on thin, fast-moving objects, and textures are blurred, but edge aliasing is reduced. Perhaps that's why Guerilla misleadingly referred to this technique as a "different type of AA in multiplayer"?

As for people saying this approach is 1080i, it absolutely isn't. Nor is it "1920i", even though that's closer because the render is skipping columns, not rows. But it's not an interlaced output of any type; it does send 1920x1080 pixels in every frame (i.e. there are no fields).

And it's not a horizontal-only upscale either. That approach was used relatively frequently on PS3 last gen: take the pixels you've rendered, and stretch them wider to fill the screen. But Killzone's method doesn't do that; it fills the missing pixels with different information.

And that different information is not just the same pixels from the last frame. If it was, there would be vertical combing artifacts on every single object in motion, not just thin stuff that's crossing quickly, as seen above. So there's some algorithm in place which tries to interpolate things in an intelligent way (and fails on certain stuff in the ~20ms window it has).

All in all, there's no existing jargon to describe what's going on. The only accurate approach is a full description, like "It's 960x1080 temporally reprojected to 1080p." I know that's wordy and unsatisfying, but it's the best way. Talking about interlacing or upscaling is just not correct. (Though it's hard to get away from those words; I believe I may have used "interlaced" somewhere in here myself.)

I think calling it an "internal horizontal interlace with blending" is fair.
 

belmonkey

Member
I always thought the multiplayer looked off for some reason, and I also assumed it was bad FXAA, since we were told it would be 1080p already (although I always had an inkling that that would only mean 1080 vertical pixels so it could be called "1080p"). Is the game still going to be an example of "getting more out of closed hardware"?
 
Multiplayer is always a step down from single player. Killzone already had the best looking MP I've ever seen, and I have to disagree with Robo about it looking ugly and blurry.

BF4's MP, in comparison, looks a lot worse to me.

So whatever method GG used to make it look like 1080p (which it does; it certainly looks higher resolution than this thread title suggests), hats off to them. I'm sure with more time they could have optimized even more.

Also, the article suggests that the technique they are using, while certainly allowing them to achieve a higher framerate, does not come cheap. That's also why many people were surprised by the interlacing technique: the result looks native.

"we can almost certainly assume that this effect is not cheap from a computational perspective."
 

RoboPlato

I'd be in the dick
I always thought the multiplayer looked off for some reason, and I also assumed it was bad FXAA, since we were told it would be 1080p already (although I always had an inkling that that would only mean 1080 vertical pixels so it could be called "1080p"). Is the game still going to be an example of "getting more out of closed hardware"?

The campaign is still really impressive.
 
I thought I'd repost my explanation of the resolution since it has gotten kinda buried.

The grey lines are dead space where nothing is being rendered on this frame.

So technically the game is rendering at 1920x1080, just not on each frame.

The last sentence makes no sense. If you never render a frame natively in 1920x1080 then you are never rendering natively in 1920x1080...

You could make that claim if the game rendered at twice the target framerate and combined the two frames or something like that (which is similar to what some passive 3d sets do).
 

Orayn

Member
"Internal horizontal interlace with reprojection" might be more accurate.

"Blending" makes more sense when temporal sampling is used for "supersampling."

Right. I had assumed that they were doing some form of the temporal AA that's used in the campaign, but I'm just now scrolling back and reading that it doesn't seem to use much/any interpolation.
 

Kuro

Member
I always thought the multiplayer looked off for some reason, and I also assumed it was bad FXAA, since we were told it would be 1080p already (although I always had an inkling that that would only mean 1080 vertical pixels so it could be called "1080p"). Is the game still going to be an example of "getting more out of closed hardware"?

I really hope not. I'm chalking it up to it being a launch game.
 

angrygnat

Member
I totally respect that some people are bean counters. We need people like that. Especially us lazy people who don't have the desire to count beans. I tried to get all upset about the resolutions being sub-1080p. Then I started seeing the games and they look damn good, 1080p or not. I hope it doesn't become a trend where devs are so afraid to release a sub-1080p or sub-60fps game that they chase those numbers to the detriment of the game itself. (Thief?) We might be our own worst enemy here.
 
The fact that no one knew or cared about this shows just how little all this shit matters outside of petty console wars on either side. Always built up into the worst news ever when publicized, but if no one bothers to check, no one knows or cares at all.

Agreed.
 

Orayn

Member
The last sentence makes no sense. If you never render a frame natively in 1920x1080 then you are never rendering natively in 1920x1080...

You could make that claim if the game rendered at twice the target framerate and combined the two frames or something like that (which is similar to what some passive 3d sets do).

Well, each frame sent to the display is 1920x1080, but half of the pixels are recycled from the last frame. I think ArchedThunder is trying to say that it's not using an actual interlaced video mode as far as your screen can tell.
 

KiraXD

Member
so... the parallax frame buffer only renders FXAA when Tessellation is shown in 1080 lines while the Textures only render at half the current resolution at 60 frames versus 30 frames as long as Blast Processing is enabled?
 
"Internal horizontal interlace with reprojection" might be more accurate.

"Blending" makes sense when temporal sampling is used for "supersampling."
Okay, perhaps I misunderstood the terminology you're using. By "reprojection", do you mean simply repeating the identical pixels from the previous frame? Because that definitely doesn't seem like what they're doing. If it were, every object in motion should have vertical combing, but they certainly don't.

Or does "reprojection" subsume some kind of interpolation, differing from "blending" only in the number of samples?
 
I don't think that's right, I believe it's 1920 always and it interleaves fields of 540 to make 1080i.

This one is a head scratcher because it seems they would still need to render the graphics at 50-60FPS even if they were interleaving the output for some reason. How can they just split up the geometry like that and render every other scan line every other frame? I can see how it outputs that way, but rendering that way seems totally different.

Yeah, that would make more sense, and... I have no idea.
 

Havel

Member
I always thought the multiplayer looked off for some reason, and I also assumed it was bad FXAA, since we were told it would be 1080p already (although I always had an inkling that that would only mean 1080 vertical pixels so it could be called "1080p"). Is the game still going to be an example of "getting more out of closed hardware"?

I'm fairly certain the 60fps target for MP was an afterthought.

The singleplayer looks and runs leaps and bounds better, all the while running at native 1920x1080.
 
The last sentence makes no sense. If you never render a frame natively in 1920x1080 then you are never rendering natively in 1920x1080...

You could make that claim if the game rendered at twice the target framerate and combined the two frames or something like that (which is similar to what some passive 3d sets do).

The native output is still 1920x1080, every other column is just not being rendered at the exact same time.
 

HTupolev

Member
Okay, perhaps I misunderstood the terminology you're using. By "reprojection", do you mean simply repeating the identical pixels from the previous frame? Because that definitely doesn't seem like what they're doing. If it were, every object in motion should have vertical combing, but they certainly don't.

Or does "reprojection" subsume some kind of interpolation, differing from "blending" only in the number of samples?
"Reprojection" in this case means they're likely shifting the positions of things based on a motion buffer, so that the content from the previous frame lines up better when placed alongside pixels from the current field. The most naive form of reprojection would indeed be to just repeat the pixels, which as you note would create more combing artifacts than we see.

"Blending" sounds more like combining the new pixels with ones from the previous frame, which is what you do with temporal "supersampling." (To combine the ideas, developers messing around with TAA will reproject the pixels from the previous frame onto the new one, and then blend them in.)
 
The native output is still 1920x1080, every other column is just not being rendered at the exact same time.

So does this essentially trick the brain into feeling 1080p @ 60 fps?

It's certainly sharp and fluid, it's just that it introduces dithering and other artifacts when the interpolation doesn't quite work.

There's so much action in the game that you tend not to notice.

I tried going back and forth between MP and SP and they both looked just as sharp, but the SP had better picture quality and a lower framerate. I'm pretty good at noticing sub-native resolution, so this really isn't a case of sub-1080p... it's a 1080p image built over two frames @ ~60 fps, which would be similar to 1080p without interpolation @ 30 fps?
 

SatansReverence

Hipster Princess
The native output is still 1920x1080, every other column is just not being rendered at the exact same time.

So every game on the xbone is "native" 1080p then, because by the time the image reaches the screen, it's displaying 1920x1080 pixels per frame?

It's quite poor form that the devs and Sony lied about the resolution. It's like Crytek claiming their internal upscaling meant Ryse was "native 1080p", but at least they were up front about the internal rendering resolution.
 

Havel

Member
So does this essentially trick the brain into feeling 1080p @ 60 fps?

It's certainly sharp and fluid, it's just that it introduces dithering and other artifacts when the interpolation doesn't quite work.

There's so much action in the game that you tend not to notice.

I tried going back and forth between MP and SP and they both looked just as sharp, but the SP had better picture quality and a lower framerate.

The SP looks noticeably sharper than MP on my U3011.
 