
Why 792p on XBO?

Can anyone explain why respawn and ubisoft both decided to go with 792p?

My guess would be that it either has something to do with how the eSRAM is being allocated/used or relating to the properties of the ROPs/upscaler but I can't really figure out how they might be related.

1408 * 792 is far too specific of a value to have come from two different studios working on two massively different games with different resources/priorities/technologies independently.

There must be something deeper going on.

EDIT: solved

If we do the math, 792p fills the eSRAM about as fully as possible.

(32 MB * 1024 KB/MB * 1024 B/KB) / (1408 * 792 pixels) = 30.09 bytes per pixel

Going for 720p results in 36.4 bytes per pixel
Going for 900p results in 23.3 bytes per pixel
Going for 1080p results in 16.18 bytes per pixel
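
For anyone who wants to double-check that arithmetic, here's a quick Python sketch of the same numbers (the widths are just the usual 16:9 frame sizes):

# Bytes of ESRAM available per pixel at each common resolution.
ESRAM_BYTES = 32 * 1024 * 1024  # 32 MB of ESRAM

resolutions = {
    "720p":  (1280, 720),
    "792p":  (1408, 792),
    "900p":  (1600, 900),
    "1080p": (1920, 1080),
}

for name, (w, h) in resolutions.items():
    print(f"{name}: {ESRAM_BYTES / (w * h):.2f} bytes per pixel")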


Deferred shading uses a G-Buffer that is 20 bytes per pixel and a radiance target that is 8 bytes per pixel for a total of 28 bytes per pixel.
source

deferred shading vs deferred lighting



We also know Infamous SS uses 40 bytes per pixel, so the extra 2 bytes are definitely useful.

[Image: Infamous: Second Son G-buffer breakdown]



That was my suspicion as to the reason.
Modern g-buffers* in deferred renderers can get pretty damn huge, especially when they haven't been built with bandwidth in mind (i.e. they just keep getting bigger as people dump more stuff in them).
* ('geometry buffer' - basically writing out things like normal, diffuse color, etc to a set of intermediate render target buffers)

For comparison, UE4's gbuffer has four 32-bit RGBA render targets and a 64-bit FP16 RGBA target for emissive contribution. When adding depth + stencil on top of that you're looking at a similar size per pixel.
Needless to say, smaller is better for bandwidth.

For a more extreme example, Infamous: SS has a very large g-buffer, at well over 40 bytes per pixel. That's over 80 MB at FHD...


Based on this I'm all but certain 792p will become a recurring theme on the XBO going forward.
 

Foffy

Banned
To be above baseline HD, I would guess. 720p is heavily associated with last generation. There's also probably a more technical reason.

Then again, sub-1080p is too....
 

sjay1994

Member
Well, you have 72 more p.... so there is that.

I don't know jack shit about resolution, framerate or graphics, and frankly I can't say I have ever cared about it
 

RedAssedApe

Banned
wouldn't doing something like 768p look better since it's closer to a standard resolution (i.e. scaling and stuff)? or am i talking outta my ass
 

Jobbs

Banned
because it isn't 720P

I think this is the main reason, honestly. No one on earth can tell the difference with the naked eye between 720p and 792p. It's pure marketing, "not 1080p" is a strike no matter what, sure, but 720p has become a dirty word and they're avoiding the dirty word.
 
Yeah it's funny I've been gaming for 20+ years and never even seen 792p as an option before. Now we have 2 X1 games using it.

Threw me off too seeing that. I immediately thought that it avoids 720p headlines. Who knows!
 

Durante

Member
I don't think it's directly ESRAM size related, at least not to the extent common sub-720p resolutions were on xbox 360.
(32 MB) / (1408 * 792 * (4 byte)) = 7.5224977
So not particularly well rounded.

Also, 792/1080 is 11/15, so I really don't see why it would scale better than 720/1080 (2/3).

It may just work out that way for both titles before they are GPU bound.
 
Guys I realize 792 is just 1.1 * 720 but why such a specific ratio?

IIRC nice and simple fractions were the best for upscaling

900/1080 = 5/6
720/1080 = 2/3
540/1080 = 1/2

792/1080 = 11/15?!


WHY?!
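
For anyone who wants to check those reductions, Python's fractions module does it in a couple of lines:

# Reduce each vertical resolution against 1080 to its simplest ratio.
from fractions import Fraction

for height in (540, 720, 792, 900):
    print(f"{height}/1080 = {Fraction(height, 1080)}")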
 

tuna_love

Banned
I think this is the main reason, honestly. No one on earth can tell the difference with the naked eye between 720p and 792p. It's pure marketing, "not 1080p" is a strike no matter what, sure, but 720p has become a dirty word and they're avoiding the dirty word.

How far away do you sit from your TV?
 
actually (32 * 1024^2) / (1408*792) = 30.09 bytes per pixel


that might be the answer... but 30 Bytes per pixel seems a bit big for a pixel buffer ... no?
 

d0nnie

Banned
because it isn't 720P

There's a marketing/PR angle to it, I'd bet. Plus, they're both highly anticipated titles.

It doesn't bother me as much since I can get Watch_Dogs for the PS4. It bugged me for Titanfall, since it's probably why the performance is lacking.
 

Stare-Bear

Banned
Because it's not 720p.

I wonder when that patch making Titanfall 1080p is coming out... is Respawn even still working on that?
 

BubbaMc

Member
Why 900p on a PS4?

Should have been 720p on both, would give us half a chance to run the games at a native resolution (720p display required of course, but at least they exist).
 

nib95

Banned
Is it not to do with the maximum frame buffer size they could fit into the ESRAM?

I remember Respawn mentioning it when quizzed by Eurogamer about their choice with the resolution.

Eurogamer said:
We asked Baker whether we can expect to see any post-beta resolution shifts, bearing in mind comments attributed to Respawn community manager Abbie Heppe, who indicated that final resolution may be in the region of 900p.

"We've been experimenting with making it higher and lower. One of the big tricks is how much ESRAM we're going to use, so we're thinking of not using hardware MSAA and instead using FXAA to make it so we don't have to have this larger render target," Baker told us.

"We're going to experiment. The target is either 1080p non-anti-aliased or 900p with FXAA. We're trying to optimise... we don't want to give up anything for higher res. So far we're not 100 per cent happy with any of the options, we're still working on it. For day one it's not going to change. We're still looking at it for post-day one. We're likely to increase resolution after we ship."

http://www.eurogamer.net/articles/digitalfoundry-2014-titanfall-ships-at-792p
 
You may be onto something with this. Could be about some bandwidth/framebuffer/etc. bottleneck.

actually that's exactly the bottleneck

First let’s consider the memory requirements of the two techniques. Deferred shading uses a G-Buffer that is 20 bytes per pixel and a radiance target that is 8 bytes per pixel for a total of 28 bytes per pixel.

source

Ubisoft and Respawn must have found some use for the extra two bytes - probably related to some advanced rendering trick. Anybody want to hazard a guess?

I'm not too familiar with deferred methods.
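
If that 28 bytes per pixel figure is in the right ballpark, the leftover room at 792p is easy to put a number on. This is just back-of-the-envelope Python, not anything confirmed by either studio:

# Headroom in 32 MB of ESRAM if the deferred setup needs ~28 bytes per pixel.
ESRAM_BYTES = 32 * 1024 * 1024
PIXELS_792P = 1408 * 792

budget = ESRAM_BYTES / PIXELS_792P   # ~30.09 bytes per pixel available at 792p
headroom = budget - 28               # ~2.09 bytes per pixel of slack
print(f"budget: {budget:.2f} B/px, headroom: {headroom:.2f} B/px "
      f"(~{headroom * PIXELS_792P / 2**20:.1f} MB in total)")

So the "extra two bytes" amount to a bit over 2 MB of working room at 792p.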
 

KMS

Member
My guess would be maximizing the resolution that fits in the ESRAM given the framebuffer and multiple render targets. Will be interesting to see if AMD's Forward+ rendering pans out, as it would allow them to have the dynamic lighting of deferred rendering engines while fitting in the ESRAM at 1080p. The only problem then is being ROP limited with the Xbox's 16 ROPs, when AMD themselves recommend 32+ ROPs for 1080p graphics.

/straight from where the sun don't shine
 

foxbeldin

Member
M°°nblade said:
But why 792p and not 768 or 802?

Titanfall was 792p as well so it's not some random number.

Is it for optimal pixel scaling on 16:9 TVs?

Actually I think 720p scales better.
 

Eusis

Member
Actually, now that I think about it, it probably DOES look significantly better than 720p on any TV that'll downscale from 1080p well but is still a 720p TV... because it's probably actually 1366x768 or whatever, and so you DO get more detail out of it, just past native resolution in theory.
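
For what it's worth, the scale factors involved, assuming the panel really is 1366x768 (quick Python, nothing official):

# Scale factors from a 1408x792 render to two common panel resolutions.
render = (1408, 792)
panels = {"1366x768 panel": (1366, 768), "1920x1080 panel": (1920, 1080)}
for name, (w, h) in panels.items():
    print(f"{name}: {w / render[0]:.3f}x horizontal, {h / render[1]:.3f}x vertical")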
 

Tagg9

Member
actually (32 * 1024^2) / (1408*792) = 30.09 bytes per pixel


that might be the answer... but 30 Bytes per pixel seems a bit big for a pixel buffer ... no?

I believe this is it. They probably can't go above 32 bytes per pixel without seriously lowering the render quality (per pixel).
 

lord pie

Member
actually (32 * 1024^2) / (1408*792) = 30.09 bytes per pixel


that might be the answer... but 30 Bytes per pixel seems a bit big for a pixel buffer ... no?

That was my suspicion as to the reason.
Modern g-buffers* in deferred renderers can get pretty damn huge, especially when they haven't been built with bandwidth in mind (i.e. they just keep getting bigger as people dump more stuff in them).
* ('geometry buffer' - basically writing out things like normal, diffuse color, etc to a set of intermediate render target buffers)

For comparison, UE4's gbuffer has four 32-bit RGBA render targets and a 64-bit FP16 RGBA target for emissive contribution. When adding depth + stencil on top of that you're looking at a similar size per pixel.
Needless to say, smaller is better for bandwidth.

For a more extreme example, Infamous: SS has a very large g-buffer, at well over 40 bytes per pixel. That's over 80 MB at FHD...
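
Putting rough numbers on that UE4-style layout (quick Python; the depth/stencil size is an assumption, I've used a 32-bit D24S8 target):

# Rough per-pixel tally of a UE4-style g-buffer as described above.
targets = {
    "4x RGBA8 render targets": 4 * 4,  # four 32-bit targets
    "FP16 RGBA (emissive)":    8,      # one 64-bit target
    "depth + stencil":         4,      # assumed D24S8
}
bpp = sum(targets.values())  # 28 bytes per pixel

for name, (w, h) in {"792p": (1408, 792), "1080p (FHD)": (1920, 1080)}.items():
    print(f"{name}: {bpp} B/px -> {bpp * w * h / 2**20:.1f} MB")

So a 28 B/px layout squeaks under 32 MB at 792p but would need roughly 55 MB at 1080p.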
 

GeometryHead

Neo Member
If they want a pixel ratio of exactly 16:9, they would have to be able to divide (in this instance) 792 by 9 and multiply it by 16 and still get an integer. Which they can. But that doesn't work with 791p or 793p, so you would have to either increase the resolution to 801p or decrease it to 783p. Maybe 792p was the sweet spot for both games?
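
A quick Python sketch of that constraint, listing the heights near 792 whose exact 16:9 width comes out as a whole number:

# Heights near 792 with an integer 16:9 width (i.e. height divisible by 9).
for h in range(780, 805):
    if (h * 16) % 9 == 0:
        print(f"{h}p -> {h * 16 // 9} x {h}")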
 
Actually, now that I think about it, it probably DOES look significantly better than 720p on any TV that'll downscale from 1080p well but is still a 720p TV... because it's probably actually 1366x768 or whatever, and so you DO get more detail out of it, just past native resolution in theory.


Doesn't the image still get sent as 720p if the TV reports as 720p?

You definitely lose some sharpness whenever you scale, especially at non-integer ratios, since you're losing some of that data whenever you mix pixels.
 