
Digital Foundry: Neo GPU is a point-for-point match for RX 480

Panajev2001a

GAF's Pleasant Genius
I only glanced over said patent but the base principle seems to be very simple.

Instead of increasing the rendering and frame buffer resolution, as is common, the method just renders at the original native resolution (something like 640×448 for PS2 games, as quoted above).
It's not rendered just once though, but multiple times, with the only difference being shifted sample positions.* Then, in order to create the final larger-resolution image, all of the smaller images are combined.

So, for example, for a 4x increase in resolution (2x along each axis), the same scene is rendered 4 times at native resolution. In the end you have 4 color values per pixel, one from each of the 4 images. These 4 values are then spread out over the final image, so that e.g. the 2×2-pixel square in the upper-left corner of the final image consists of the 4 top-left pixels of the smaller images.

This method is a bit more expensive than rendering the scene at 4x the resolution right away, but it can help eliminate errors that would otherwise occur if the application was meant to be rendered at a single fixed resolution. It should be noted that there are no disadvantages in IQ: the result is a true high-resolution image.

* A sample position is a point within each pixel at which the geometry is evaluated, e.g. whether a triangle is hit (and at which coordinate). Usually, when not using any traditional anti-aliasing, the pixel center is taken as the default sample position. The sample pattern examples at Wikipedia may explain it better; in this case the top-left grid is used.
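To make the interleaving concrete, here is a minimal Python sketch of the principle described above (not Sony's actual implementation; the "scene" is stood in for by an arbitrary function evaluated at each sample position, and the function names are made up for the example):

```python
def render(width, height, sample, sx, sy):
    """'Render' at native resolution: evaluate the scene function once per
    pixel, at sub-pixel position (sx, sy) within the pixel (0.5 = center)."""
    return [[sample(x + sx, y + sy) for x in range(width)]
            for y in range(height)]

def uprender(width, height, sample, factor=2):
    """Combine factor*factor shifted native-resolution renders into one
    image that is `factor` times larger along each axis."""
    # Sub-pixel sample positions: for factor=2 these are 0.25 and 0.75,
    # i.e. the positions a true 2x render would have sampled anyway.
    offsets = [(i + 0.5) / factor for i in range(factor)]
    out = [[0.0] * (width * factor) for _ in range(height * factor)]
    for j, sy in enumerate(offsets):
        for i, sx in enumerate(offsets):
            img = render(width, height, sample, sx, sy)
            # Spread this render's pixels into slot (i, j) of each
            # factor-by-factor block of the final image.
            for y in range(height):
                for x in range(width):
                    out[y * factor + j][x * factor + i] = img[y][x]
    return out
```

Because every output pixel was produced by a real evaluation of the scene at the correct high-resolution sample position, the interleaved result matches what a single render at the higher resolution would have produced — which is why there is no IQ disadvantage.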



The patent isn't about temporal reconstruction (unless we're not talking about the same one).



I think there might be a general misconception about 2D elements. Elements like HUD or text are just textures slapped onto simple polygons (usually a quad) and are rendered the same as everything else. The GPU doesn't know about 2D, there's no explicit upscaling, just the usual texture sampling that is used for every other texture too. Of course the texture resolution doesn't increase when the rendering resolution is increased, so at some point every texture will look blurry (as soon as they are magnified).
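As a toy illustration of the magnification point (assuming bilinear filtering, the common case; `sample_bilinear_1d` and the 4-texel "texture" are made up for this example): stretching a texture over more output pixels only interpolates between the same texel values, so no new detail appears.

```python
def sample_bilinear_1d(texels, u):
    """Sample a 1D 'texture' at normalized coordinate u in [0, 1]
    with linear (bilinear, in 1D) filtering."""
    pos = u * (len(texels) - 1)
    i = int(pos)
    nxt = min(i + 1, len(texels) - 1)
    frac = pos - i
    return texels[i] * (1 - frac) + texels[nxt] * frac

texture = [0.0, 1.0, 0.0, 1.0]  # a tiny 4-texel texture
# Magnified to 8 pixels: twice the samples, but every value is just a
# blend of the original 4 texels -- smoother, not sharper.
magnified = [sample_bilinear_1d(texture, x / 7) for x in range(8)]
```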


edit:
Seems like I took too long to type this post. Panajev2001a already explained.

I really liked your clear explanation, thanks for taking the time and posting this :).
 

Fafalada

Fafracer forever
ethomaz said:
The method used in the patent tries to recreate a higher resolution using lower resolution rendering...
Other posts have already explained at length what the patent actually does, and the misconceptions around it, so I won't repeat that.

But one of the big clues to what the PS4 emulator is doing is the very article from DF you quoted. The resolution of emulated PS2 games is an exact multiple (4x in the cases tested) of the original game's resolution (1280x896 in common cases, though it varies per game), and NOT 1080p.
 

THE:MILKMAN

Member
There have been two Neos floated as being in development for a long time, depending on what Sony wants to go with.

This is the initial rumor on that.

None of the big 3 of Kotaku, DF and GB have talked about two different-spec Neos being in development. VR World have claimed that the retail console will have Zen Lite LP cores, which would be a replacement for the cat CPU family.

According to DF, devs had the Jaguar-spec dev kit in April, then soon after would receive a test kit/debug station in a non-final chassis which Sony asked devs not to show (this sounds to me like an early version of a retail-shell test kit). Then in June a second-gen test kit went out to devs.

I may be wrong, but aren't test kits final retail consoles with 'TEST' written on them and debug software installed? Usually distributed to devs once everything is final?

The only hope for a better CPU in Neo would be if, all along, the "Jaguar" cores in the dev kit were actually "Zen Lite" cores as VR World claim. Even if that turns out to be true, it would still mean there was only ever one spec all along, not two.
 

ethomaz

Banned
The patent I saw was talking about essentially generating 4+ frames in which the pixels of each were unique (sub-pixel camera projection changes would produce just that; a kind of weird way to do SSAA and sell the output unmerged :)), but the point was compatibility with the way games used to treat the GS and its eDRAM, which would break many of them if you simply forcefully increased the back buffer and front buffer resolution. Given the number of hacks necessary in PS2 emulators on PC to achieve it with a decent degree of compatibility, it is understandable that Sony went a different way to minimise compatibility issues.

Their approach is clever if you think about it: instead of messing with how PS2 games made direct and exact use of GS eDRAM (depending on its size and the exact locations and dimensions of various buffers), they are using 4+ parallel virtual PS2 consoles, each running the same code except for a very, very minor change in how the scene is initially projected.
I did not think about 4 virtual PS2s for emulation... yeah, it can work, with each one rendering one part of the final scene.

In any case the result will be the same; only how it gets there is different.

Other posts have already explained at length what the patent actually does, and the misconceptions around it, so I won't repeat that.

But one of the big clues to what the PS4 emulator is doing is the very article from DF you quoted. The resolution of emulated PS2 games is an exact multiple (4x in the cases tested) of the original game's resolution (1280x896 in common cases, though it varies per game), and NOT 1080p.
Because of the proportions.

Original = 640x448
2x each axis = 1280x896
3x each axis = 1920x1344 (that's over 1080)

And the resolution of PS2 emulation varies based on the original game... not all PS2 games had a 640x448 resolution.
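The arithmetic above is easy to check; a quick sketch:

```python
# Integer multiples of a common 640x448 PS2 framebuffer. Only whole-number
# multiples work with the patent's interleaving, which is why the output is
# a multiple of the native resolution rather than exactly 1080p.
native_w, native_h = 640, 448

for factor in (2, 3, 4):
    w, h = native_w * factor, native_h * factor
    fits_1080p = w <= 1920 and h <= 1080
    print(f"{factor}x per axis: {w}x{h} (fits in a 1080p frame: {fits_1080p})")
```

2x lands on 1280x896, which fits inside a 1080p frame, while 3x gives 1920x1344, already taller than 1080.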

I think there might be a general misconception about 2D elements. Elements like HUD or text are just textures slapped onto simple polygons (usually a quad) and are rendered the same as everything else. The GPU doesn't know about 2D, there's no explicit upscaling, just the usual texture sampling that is used for every other texture too. Of course the texture resolution doesn't increase when the rendering resolution is increased, so at some point every texture will look blurry (as soon as they are magnified).


edit:
Seems like I took too long to type this post. Panajev2001a already explained.
Yeap... I added the bit about textures later (I forgot that small textures are tiled until they fill the area rather than being upscaled), but my original comment was more about artwork, like the images shown when you load a save in Wild Arms 3, for example... that is clearly upscaled... you can think of the 2D backgrounds in fighting games too.

My girl is sleeping in my room right now, but when she wakes up I will take a picture.
 

dogen

Member
I did not think about 4 virtual PS2s for emulation... yeah, it can work, with each one rendering one part of the final scene.

In any case the result will be the same; only how it gets there is different.

Right, but the reason they're doing it this way is that they can avoid all the problems that happen when you just set a resolution and hope it works (trust me, it really doesn't).

It's the same reason pcsx2 is going to be removing custom resolutions soon. Sure, we don't use the method in the patent (I guess we can't now), which would probably solve most or all of our resolution problems, but it'll still be nice to get rid of the completely broken one.


Yeap... I added the bit about textures later (I forgot that small textures are tiled until they fill the area rather than being upscaled), but my original comment was more about artwork, like the images shown when you load a save in Wild Arms 3, for example... that is clearly upscaled... you can think of the 2D backgrounds in fighting games too.

Oh, yeah. 2D stuff is definitely upscaled. I don't think anyone said otherwise.
 
Nope. This simply isn't true. You could easily get there or almost there with dynamic resolutions. Staying at 1080p is a horrible waste. At least go to 1440p.
Well, if we're counting dynamic resolution as the real thing, why not just render at a hojillion peez? ;)


Why? Until 4K adoption becomes widespread there's no need to go higher than 1080p.
We were discussing Phil's claim that waiting until holiday 2017 to release XBox Two would make it a good box for 4k, though to be fair, he also referred to the Bone as "a great box for a 1080p television," so, grain of salt, I guess.


The way Sony is approaching higher resolution rendering is more accurate without requiring hacks, by virtue of how it renders: you are essentially rendering the same scene 4+ times, only changing the camera projection very slightly each time, then accumulating the frames once you collect them.
Instead of increasing the rendering and frame buffer resolution, as is common, the method just renders at the original native resolution (something like 640×448 for PS2 games, as quoted above).
It's not rendered just once though, but multiple times, with the only difference being shifted sample positions.* Then, in order to create the final larger-resolution image, all of the smaller images are combined.
Other posts have already explained at length what the patent actually does, and the misconceptions around it, so I won't repeat that.
Can any of y'all point me to where it says the frame is being rendered multiple times? My reading of the patent says they're taking a single source image and then producing "shifted" copies, which are then recombined into the higher-res output. If you look at Fig. 3, it looks like the Emulator is only responsible for producing the "Original" image, and the sub-pixel shifts are all produced from that original. Fig. 4 describes some "boundary check" to be performed after the shift, but I'm not totally clear on how that works.

In the Abstract, they also talk about doing this for "each source image of the multimedia content" (emphasis mine), which again sounds like they're producing a set of these "shifts" for, well, each image in the source feed, rather than combining four source images into a single output image.

So what am I missing here? =/
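For what it's worth, that alternative reading (one source image, shifted copies, recombine) can be sketched roughly like this; `shift_half` and `combine_2x` are hypothetical names, and the border clamping merely stands in for whatever the patent's boundary check actually does:

```python
def shift_half(img, dx, dy):
    """Shift img by (dx, dy) half-pixels (dx, dy in {0, 1}) by averaging
    neighbors, clamping at the border (a guess at the 'boundary check')."""
    h, w = len(img), len(img[0])
    def px(x, y):
        return img[min(y, h - 1)][min(x, w - 1)]
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = [px(x + i, y + j)
                   for i in range(dx + 1) for j in range(dy + 1)]
            out[y][x] = sum(acc) / len(acc)
    return out

def combine_2x(img):
    """Interleave the original and its three half-pixel shifts into a 2x image."""
    h, w = len(img), len(img[0])
    out = [[0.0] * (2 * w) for _ in range(2 * h)]
    for i in (0, 1):
        for j in (0, 1):
            s = shift_half(img, i, j)
            for y in range(h):
                for x in range(w):
                    out[2 * y + j][2 * x + i] = s[y][x]
    return out
```

Note that under this reading every output value is an average of existing pixels, so it adds no information beyond a plain interpolating upscale — which is part of why the multiple-renders reading is the interesting one.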
 

pixelbox

Member
Other posts have already explained at length what the patent actually does, and the misconceptions around it, so I won't repeat that.

But one of the big clues to what the PS4 emulator is doing is the very article from DF you quoted. The resolution of emulated PS2 games is an exact multiple (4x in the cases tested) of the original game's resolution (1280x896 in common cases, though it varies per game), and NOT 1080p.

The emulator is not part of the PS4's OS either. GTA:SA recently had a patch to "fix bugs", indicating the emulator is wrapped around the EXE.
 

geordiemp

Member
My thought now is that I hope Sony is using TSMC to make the APU, and that just maybe the 16 nm FinFET process gives off less heat than GloFo's 14 nm in the RX 480, so Sony can push things a little more.

I know there are posters in this thread stating Sony will be with TSMC, but it's impossible to find any confirmation at all.

If Sony's APU generates the same heat as the AMD RX 480 on GloFo's 14 nm, that would be kinda worrying.
 

gatti-man

Member
Well, if we're counting dynamic resolution as the real thing, why not just render at a hojillion peez? ;)

=/

You know what I mean lol. I mean you could have a game that renders at 4K, then drops its resolution a bit when things get too hectic.

I hope they try to give us higher resolutions. When I upgraded my PC way back when and ditched 1080p, that was the biggest difference: aliasing smoothed out, textures had more detail; it was my favorite thing about having a high-end PC. I really don't want 1080p to be a thing on Neo or Scorpio if I can help it.
 
Edit: ^^^ But that's not "real 4k," IMHO.

Oops, missed this one…
None of the big 3 of Kotaku, DF and GB have talked about two different-spec Neos being in development. VR World have claimed that the retail console will have Zen Lite LP cores, which would be a replacement for the cat CPU family.

According to DF, devs had the Jaguar-spec dev kit in April, then soon after would receive a test kit/debug station in a non-final chassis which Sony asked devs not to show (this sounds to me like an early version of a retail-shell test kit). Then in June a second-gen test kit went out to devs.

I may be wrong, but aren't test kits final retail consoles with 'TEST' written on them and debug software installed? Usually distributed to devs once everything is final?
No, because as you said just before that, they just sent out a v2 test kit in June, so obviously not every test kit contains the final hardware (or v1 would be the only version needed). The "final" of three different Orbis test kits went out in January 2013 and had four dual-core Piledriver CPUs, 8GB RAM, and 2.2GB VRAM; so although it was the "final" test kit, it was still almost entirely unlike what the console itself turned out to be. It didn't even have unified memory.
 

wachie

Member
My thought now is that I hope Sony is using TSMC to make the APU, and that just maybe the 16 nm FinFET process gives off less heat than GloFo's 14 nm in the RX 480, so Sony can push things a little more.

I know there are posters in this thread stating Sony will be with TSMC, but it's impossible to find any confirmation at all.

If Sony's APU generates the same heat as the AMD RX 480 on GloFo's 14 nm, that would be kinda worrying.
As far as I know or have read, Sony doesn't pick the foundry. AMD does.

Sony and AMD collaborate on the requirements, design, and spec, but once the chip is taped out and validated, AMD sells the APUs to Sony for a fixed price.
 

gatti-man

Member
Edit: ^^^ But that's not "real 4k," IMHO.

Oops, missed this one…

No, because as you said just before that, they just sent out a v2 test kit in June, so obviously not every test kit contains the final hardware (or v1 would be the only version needed). The "final" of three different Orbis test kits went out in January 2013 and had four dual-core Piledriver CPUs, 8GB RAM, and 2.2GB VRAM; so although it was the "final" test kit, it was still almost entirely unlike what the console itself turned out to be. It didn't even have unified memory.

Yeah it won't be real 4K but still far better than 1080p. I'd be happy with a rock solid 1440p too.
 
Yeah it won't be real 4K but still far better than 1080p. I'd be happy with a rock solid 1440p too.
I'm not sure, but I suspect 1080p on a 4k display would look nearly as clean as 1440p-on-4k, if not better, just because of the even scaling. Plus, I think that same power would be better spent on frame rates and splosions. Plus, I don't even have a 4k TV. ;p
 
Can any of y'all point me to where it says the frame is being rendered multiple times? My reading of the patent says they're taking a single source image and then producing "shifted" copies, which are then recombined into the higher-res output. If you look at Fig. 3, it looks like the Emulator is only responsible for producing the "Original" image, and the sub-pixel shifts are all produced from that original. Fig. 4 describes some "boundary check" to be performed after the shift, but I'm not totally clear on how that works.

In the Abstract, they also talk about doing this for "each source image of the multimedia content" (emphasis mine), which again sounds like they're producing a set of these "shifts" for, well, each image in the source feed, rather than combining four source images into a single output image.

So what am I missing here? =/

A lot of the talk doesn't make sense otherwise.

Still, I did a bit of reading to find a passage where it's explicitly stated:

[0038]
[...]
Combination of multiple rendered images with sub-pixel offsets and coalescing of these images results in a single uprendered result.
[...]
 

mrklaw

MrArseFace
I'm not sure, but I suspect 1080p on a 4k display would look nearly as clean as 1440p-on-4k, if not better, just because of the even scaling. Plus, I think that same power would be better spent on frame rates and splosions. Plus, I don't even have a 4k TV. ;p

1440p is about 3.7m pixels - not far off double the pixel count of 1080p. With consoles having decent onboard scalers, 1440p would look way better at 4K than 1080p.

As for not having a 4K TV... well, 1440p will also look great scaled down to 1080p, improving the image quality a lot. And you'd still have capacity left over for some nice added details.
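The pixel counts behind that "about 3.7m" figure are easy to verify:

```python
# Pixel counts for the resolutions discussed in the thread.
resolutions = {"1080p": (1920, 1080), "1440p": (2560, 1440), "2160p": (3840, 2160)}
pixels = {name: w * h for name, (w, h) in resolutions.items()}

print(pixels["1080p"])                    # 2073600
print(pixels["1440p"])                    # 3686400
print(pixels["1440p"] / pixels["1080p"])  # ~1.78x -- "not far off double"
```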
 

DonMigs85

Member
1440p is about 3.7m pixels - not far off double the pixel count of 1080p. With consoles having decent onboard scalers, 1440p would look way better at 4K than 1080p.

As for not having a 4K TV... well, 1440p will also look great scaled down to 1080p, improving the image quality a lot. And you'd still have capacity left over for some nice added details.

Chances are they'll use weird non-standard resolutions or dynamic ones, though
 

THE:MILKMAN

Member
Edit: ^^^ But that's not "real 4k," IMHO.

Oops, missed this one…

No, because as you said just before that, they just sent out a v2 test kit in June, so obviously not every test kit contains the final hardware (or v1 would be the only version needed). The "final" of three different Orbis test kits went out in January 2013 and had four dual-core Piledriver CPUs, 8GB RAM, and 2.2GB VRAM; so although it was the "final" test kit, it was still almost entirely unlike what the console itself turned out to be. It didn't even have unified memory.

Dev kits and test kits are different. If you check this link out it gives a full overview of the history of PS4 proto, dev and test kits: http://www.psdevwiki.com/ps4/

"Final" dev kit (submitted to FCC in July 2013):

800px-DUH-D1000AA_-_front_lateral.png


Dev kit Motherboard:

800px-CVN-K12_0-000-000-15_DUH-D1000AA_board.png


Test kit (submitted to FCC in July 2013):

Testing_Kit_DUH-T1000AA_-_with_TEST_marking.jpg


All I'm trying to get across here is that the time from hardware being locked down (the main APU not changing specs outside tweaks to clocks) to launch of the console is much greater than a lot of people think. Talk of Sony still making up their mind what spec to go with in July 2016 for a console launching before the end of the year or even March 2017 just doesn't fit when you look at lead times for FCC testing and the like.

Of course if all these leaks turn out to be a big fat ruse by Sony to fool everybody then I'll take it all back.....
 
A lot of the talk doesn't make sense otherwise.
There was a talk? ^.^

Still, I did a bit of reading to find a passage where it's explicitly stated:
It was actually paragraph 38, but thanks! <3 So, can it also work like I thought, and upscale content that's not being rendered multiple times, like a 1080p feed from a Blu-ray or a game? I was actually thinking this patent described the upscaler in their 4k TVs, but perhaps it can be used for both purposes?


1440p is about 3.7m pixels - not far off double the pixel count of 1080p. With consoles having decent onboard scalers, 1440p would look way better at 4K than 1080p.

As for not having a 4K TV... well, 1440p will also look great scaled down to 1080p, improving the image quality a lot. And you'd still have capacity left over for some nice added details.
Has any of this been tested? Even if downsampling from 1440p30 to 1080p30 does improve the image quality substantially, would it really be better than simply running the game at 1080p60 instead? What about running at 1080p on a 4k TV with Sony's magic upscaler? How would that compare to 1440p-native quality-wise, especially if we're getting the former for free from some DSP? Wouldn't that same GPU power be better spent simply sprucing up a 1080p render for the magical scaler?
 
Dev kits and test kits are different.
Oh, now I see what you mean. Sorry, I guess I wasn't up on the lingo.

All I'm trying to get across here is that the time from hardware being locked down (the main APU not changing specs outside tweaks to clocks) to launch of the console is much greater than a lot of people think. Talk of Sony still making up their mind what spec to go with in July 2016 for a console launching before the end of the year or even March 2017 just doesn't fit when you look at lead times for FCC testing and the like.
But didn't you just say the final kits were submitted to the FCC ~4 months before launch? July 2013 -> November 2013, right? Has Neo been submitted yet?

Of course if all these leaks turn out to be a big fat ruse by Sony to fool everybody then I'll take it all back.....
That's my theory, given the conflicting information we've gotten. Sounds like Sony let leak that they were targeting either a modest increase for $399 or a more substantial power increase for $499, to launch sometime next spring. Then they let leak they might be launching this year, with the weaker spec and the higher price point. As you say, I think this was a big, fat ruse by Sony to tease out some sense of what MS were planning. MS responded by announcing a vague performance target for well over a year from now, coyly explaining they are waiting so they wouldn't be forced to release a mere "XBox One and a Half," so it seems their pre-announcement was indeed a direct response to the rumors that Sony would be releasing a ~4TF Neo this year.

But now that MS have "detailed" their own plans, I see no reason for Sony to not continue with their original plans of a 1Q17 launch, especially if that makes Option B available to them. I also don't really see how moving from one x86 CPU to another and/or from one GCN4 GPU to another effectively constitutes more than an upclock when it's all said and done. We're talking about the same architectures here.
 
There was a talk? ^.^

The patent talks. :p

It was actually paragraph 38, but thanks! <3 So, can it also work like I thought, and upscale content that's not being rendered multiple times, like a 1080p feed from a Blu-ray or a game? I was actually thinking this patent described the upscaler in their 4k TVs, but perhaps it can be used for both purposes?

Seems like it, though obviously there will be no additional image information if the shifted images are created just by interpolating between pixel values. I also fail to see why it would be better than applying a Gaussian filter (or similar) the usual way. The patent's main focus seems to be rendering, or even emulation specifically, though (with all the talk about rasterization, UV (texture) coordinates, reduction of rendering artifacts, etc.; it also explicitly mentions PlayStation).
 

El_Chino

Member
Because people buying premium-SKU consoles probably have premium TVs. It's not a stretch. I own a 75" 4K TV and have had 4K for 3 years now. 1440p minimum. 1080p is really ass once you've had 1440 or 4K. I've been gaming at 1440 on PC for 5-6 years now; it's great.
Sounds like "I have the best thing so I want things for it."

Not everyone will have such a massive 4K TV, and most gamers probably don't know what resolution their games are running at. Just as long as it looks pretty.

You're assuming that every gamer who buys a Neo or Scorpio knows all the technical details about the games that will be running on it.

In the end, it'll be up to the developers and if they think it will be worth it.
 

ethomaz

Banned
As far as I know or have read, Sony doesn't pick the foundry. AMD does.

Sony and AMD collaborate on the requirements, design, and spec, but once the chip is taped out and validated, AMD sells the APUs to Sony for a fixed price.
You are right.

Edit - AMD really does sell the APU to Sony... Sony just helped design the chip.
 

mrklaw

MrArseFace
Has any of this been tested? Even if downsampling from 1440p30 to 1080p30 does improve the image quality substantially, would it really be better than simply running the game at 1080p60 instead? What about running at 1080p on a 4k TV with Sony's magic upscaler? How would that compare to 1440p-native quality-wise, especially if we're getting the former for free from some DSP? Wouldn't that same GPU power be better spent simply sprucing up a 1080p render for the magical scaler?


Well, for some game types they'll be CPU-limited, so no amount of extra GPU power will let you go from 30 to 60fps. But you can render at a higher resolution at the same refresh rate, or add detail like fancier shadows, etc.
 

THE:MILKMAN

Member
But didn't you just say the final kits were submitted to the FCC ~4 months before launch? July 2013 -> November 2013, right? Has Neo been submitted yet?

Well, in inverted commas! The problem with all these kits is there are so many of them over a relatively short period, and with devs getting different iterations at various times it becomes a confusing mess to work out.

The dev kit I pictured was the one submitted by Sony to the FCC but there were earlier ones that also had, IMO, the "final" SoC/APU on the motherboard.

The simplest way I can state my position is this: whatever spec Neo ends up launching with, the main specs of the APU/RAM would have been effectively locked down and unchanged for a year-plus, going by the PS4's timeline as a reference. So all the talk from late March about two specs still being on the table just never made sense to me.

If Neo is to launch in November this year then I would expect a submission at the FCC this month, so hopefully not long to wait. Looking at the FCC, I see a wireless controller was submitted by Sony last month: https://fccid.io/AK8CUHZCT2 Something or nothing?
 

Lady Gaia

Member
Yeah, definitely a cool trick.

It's a nice trick, but it's only helpful if you're trying to deal with legacy code designed around a fixed framebuffer resolution and strict memory limits. It's also more computationally intensive, doesn't take advantage of cache as effectively, and is less accurate than just rendering at your desired resolution in the first place. If you're dealing with someone else's code and raw performance isn't a concern, though, it's relatively easy and not very labor intensive.
 

ps3ud0

Member
Has any of this been tested? Even if downsampling from 1440p30 to 1080p30 does improve the image quality substantially, would it really be better than simply running the game at 1080p60 instead? What about running at 1080p on a 4k TV with Sony's magic upscaler? How would that compare to 1440p-native quality-wise, especially if we're getting the former for free from some DSP? Wouldn't that same GPU power be better spent simply sprucing up a 1080p render for the magical scaler?
I'd love to see this tested properly; when I've asked, it seems that because 1080p divides into UHD so nicely, the benefit of scaling from a higher internal resolution like 1440p might be somewhat diminished.

If that means a game runs 60fps rather than 30fps then it seems a useful 'compromise' in some cases.

I believe DF haven't released all the Sony documentation on how their upscaling techniques will work...

ps3ud0 8)
 

onQ123

Member
I'll say this again: PS4 Neo is just a PS4 to the devs, but Sony will use the extra GPU processing power for co-processing/accelerators.


Devs will not have to do much different from what they are doing with the normal PS4 games.
 
Oh, now I see what you mean. Sorry, I guess I wasn't up on the lingo.


But didn't you just say the final kits were submitted to the FCC ~4 months before launch? July 2013 -> November 2013, right? Has Neo been submitted yet?


That's my theory, given the conflicting information we've gotten. Sounds like Sony let leak that they were targeting either a modest increase for $399 or a more substantial power increase for $499, to launch sometime next spring. Then they let leak they might be launching this year, with the weaker spec and the higher price point. As you say, I think this was a big, fat ruse by Sony to tease out some sense of what MS were planning. MS responded by announcing a vague performance target for well over a year from now, coyly explaining they are waiting so they wouldn't be forced to release a mere "XBox One and a Half," so it seems their pre-announcement was indeed a direct response to the rumors that Sony would be releasing a ~4TF Neo this year.

But now that MS have "detailed" their own plans, I see no reason for Sony to not continue with their original plans of a 1Q17 launch, especially if that makes Option B available to them. I also don't really see how moving from one x86 CPU to another and/or from one GCN4 GPU to another effectively constitutes more than an upclock when it's all said and done. We're talking about the same architectures here.

You know there are outlets that have the spec sheets, and have outed them, right?
 
The patent talks. :p
lol Ah, right on.

Seems like it, though obviously there will be no additional image information if the shifted images are created just by interpolating between pixel values. I also fail to see why it would be better than applying a Gaussian filter (or similar) the usual way. The patent's main focus seems to be rendering, or even emulation specifically, though (with all the talk about rasterization, UV (texture) coordinates, reduction of rendering artifacts, etc.; it also explicitly mentions PlayStation).
Yeah, my math is sorta weak, so I only have a vague idea of how upscaling is normally done. Any chance you could explain what's going on in Figs. 3-4? It looks to me like all of the work is being done by the scaler from a single image provided by the Emulator, and I'm not clear on what the Boundary Detection is all about if the emulator is really rendering 25%x4 like you guys say. It sounds to me like they use the combination of this half-pixel shift followed by boundary detection to reduce jaggies at the higher res.

But again, I barely understand this. lol


Well, for some game types they'll be CPU-limited, so no amount of extra GPU power will let you go from 30 to 60fps. But you can render at a higher resolution at the same refresh rate, or add detail like fancier shadows, etc.
That's what I was getting at, though. Even if you're CPU-bound, wouldn't that budget be better spent on more and nicer effects than a simple res bump, especially if you'll never reach native res anyway? Rather than bumping to 1440p, wouldn't it be better to use the super-nice PBR shaders that were simply too expensive to run on the 2013 hardware?


Well, in inverted commas!
Err, wut? =/

The simplest way I can state my position is this: whatever spec Neo ends up launching with, the main specs of the APU/RAM would have been effectively locked down and unchanged for a year-plus, going by the PS4's timeline as a reference. So all the talk from late March about two specs still being on the table just never made sense to me.
What do we consider "the main specs," and what do we consider "locked down"? Again, less than a year before the PS4 launched, they were distributing kits with hardware nothing like the final hardware, and simply told devs it was representative of minimum performance. My understanding was they said exactly the same about the "Super Jag" kits they sent out in April.

I'm also not clear on why having two options using basically the same architectures seems so outlandish to people.

If Neo is to launch in November this year then I would expect a submission at the FCC this month, so hopefully not long to wait. Looking at the FCC, I see a wireless controller was submitted by Sony last month: https://fccid.io/AK8CUHZCT2 Something or nothing?
I'm not convinced Neo was ever planned for a November launch. The original leak said Q1, I've never heard a good argument for pushing it forward from there, and tons of great arguments for leaving it be. ;p


You know there are outlets that have the spec sheets, and have outed them, right?
Just the specs of the dev kits, right? Can you link me to the final hardware specs? Jim Ryan seemed to think we hadn't seen them yet.
 

ZoyosJD

Member
People not understanding the difference between up-scaling & up-rendering is not my problem.

The new Xbox One will upscale all games to 4K & your 4K TV can upscale all games to 4K, but PS4 Neo & Xbox Scorpio will most likely up-render games to 4K.

Still going on about this BS, eh?

The fact you are stretching your expectations so far that Xbox Scorpio would use the Sony patented "up-rendering" technique in future hopes to save grace is hilarious.

Sony won't even wind up using the technique with Neo because it is inefficient and specialized for PS2-to-PS4 games. Take a look at the situations "uprendering" is used in currently.

PS2 was something like 6 GFLOPS while the PS4 is ~300x the flop count at 1.82 TFLOPS.

It is not some magic 4k bullet. Anyone expecting a 2-3x increase in FLOPS to scale the same as a 300x jump is deluding themselves even if the architecture is near identical.

The bottom line is that technique makes no sense to use when they could just increase the native resolution if that is what they decide to do.
 

onQ123

Member
Still going on about this BS, eh?

The fact you are stretching your expectations so far that Xbox Scorpio would use the Sony patented "up-rendering" technique in future hopes to save grace is hilarious.

Sony won't even wind up using the technique with Neo because it is inefficient and specialized for PS2-to-PS4 games. Take a look at the situations "uprendering" is used in currently.

PS2 was something like 6 GFLOPS while the PS4 is ~300x the flop count at 1.82 TFLOPS.

It is not some magic 4k bullet. Anyone expecting a 2-3x increase in FLOPS to scale the same as a 300x jump is deluding themselves even if the architecture is near identical.

The bottom line is that technique makes no sense to use when they could just increase the native resolution if that is what they decide to do.

Sony didn't patent up-rendering; they patented a method of up-rendering, & yes, I believe Xbox Scorpio will use a form of up-rendering for 4K games.
 

dogen

Member
It's a nice trick, but it's only helpful if you're trying to deal with legacy code designed around a fixed framebuffer resolution and strict memory limits. It's also more computationally intensive, doesn't take advantage of cache as effectively, and is less accurate than just rendering at your desired resolution in the first place. If you're dealing with someone else's code and raw performance isn't a concern, though, it's relatively easy and not very labor intensive.

Yeah, I mean for PS2 emulation.
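To make the idea concrete, here's a rough sketch of that combine step (Python, purely illustrative; `render_pass` is a stand-in for the emulator's renderer, not any real API, and the patent itself isn't this code):

```python
def uprender_2x(render_pass, width, height):
    # Sketch of the shifted-sample idea: render the scene 4 times at
    # native res, each pass with the sample point nudged by half a
    # pixel, then interleave the 4 small images into one image at
    # twice the width and height.
    big = [[None] * (width * 2) for _ in range(height * 2)]
    for sy in (0, 1):                  # vertical half-pixel shift
        for sx in (0, 1):              # horizontal half-pixel shift
            small = render_pass(sx * 0.5, sy * 0.5)  # one native-res pass
            for y in range(height):
                for x in range(width):
                    big[y * 2 + sy][x * 2 + sx] = small[y][x]
    return big
```

So you pay for four native-res passes instead of one big one, but the legacy code never sees a framebuffer bigger than the one it was written for.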
 

THE:MILKMAN

Member
Err, wut? =/

In the description I typed above the image of the dev kit I posted, I put the word final in inverted commas, i.e. "Final", to emphasise the ever-changing nature of these kits.

serversurfer said:
What do we consider "the main specs," and what do we consider "locked down"? Again, less than a year before the PS4 launch they were distributing kits with hardware nothing like the final hardware and simply told devs it was representative of a minimum performance. My understanding was they said exactly the same about the "Super Jag" kits they sent out in April.

I'm also not clear on why having two options using basically the same architectures seems so outlandish to people.

For me the main aspects of the specs are the APU/SoC and RAM, and I have to disagree with you when you say the specs of PS4 dev kits less than a year before release were nowhere near final. The pictures I've posted show a dev kit with an SoC that looks identical to the retail one, along with even the motherboard as a whole, dating back as far as January 2013 (11 months before release) or earlier. Now, some devs that weren't working on a launch/launch-window game may still have had an older kit even near PS4 launch, and that could confuse things as far as specs were concerned.


serversurfer said:
I'm not convinced Neo was ever planned for a November launch. The original leak said Q1, I've never heard a good argument for pushing it forward from there, and tons of great arguments for leaving it be. ;p

Could be Q1, but I doubt it changes anything as far as specs are concerned. If the CPU upgrade is Zen rather than Jaguar then that is a big change, and I can't see how that squares with devs having Jaguar dev kits less than a year before launch, kits that some (according to the leaked docs) will be submitting game code for in August.

Like I said before if Sony have played a blinder here and fooled DF, GB and game retail (Osiris) then well done them. I doubt all three of the aforementioned have been fooled, though.
 

ZoyosJD

Member
Sony didn't patent up-rendering they patent a method of up-rendering & yes I believe Xbox Scorpio will use a form of up-rendering for 4K games.

There is no other way to perform "uprendering" without stepping all over that patent.

If you think it is possible then provide an explanation of how to achieve the same results without infringing.

It's absurd you recognize "uprendering" as a non-optimal technique and still believe it to be the saving grace of hardware under-powered for 4k via "smart decisions".
 

onQ123

Member
There is no other way to perform "uprendering" without stepping all over that patent.

If you think it is possible then provide an explanation of how to achieve the same results without infringing.

It's absurd you recognize "uprendering" as a non-optimal technique and still believe it to be the saving grace of hardware under-powered for 4k via "smart decisions".

People have been up-rendering since way before that patent came out.
 

Metfanant

Member
There is no other way to perform "uprendering" without stepping all over that patent.

If you think it is possible then provide an explanation of how to achieve the same results without infringing.

It's absurd you recognize "uprendering" as a non-optimal technique and still believe it to be the saving grace of hardware under-powered for 4k via "smart decisions".

Don't forget, at the end of the day... you can either render at a higher resolution natively... or you're just talking about a fancy way of saying you're upscaling the image.
 

ZoyosJD

Member

WRONG!!!

ppsspp uses resolution scaling: the original resolution a game is running at is intercepted and multiplied by a value from 1 to 10, then the screen is rendered in a single pass at that multiplied resolution.

You can find the code for this on their github since the project is open source:

https://github.com/hrydgard/ppsspp

That is not "uprendering".


edit: More specifically, lines 647-657 on this page show they are setting the internal rendering resolution by each of those multiples. https://github.com/hrydgard/ppsspp/...b594e00663d5f0d426/Windows/MainWindowMenu.cpp
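In sketch form (Python instead of their actual C++, hypothetical function name), the whole thing amounts to:

```python
def internal_resolution(native_w, native_h, scale_factor):
    # Resolution scaling, conceptually: the game still targets its
    # native framebuffer size, but the backend allocates a buffer
    # scale_factor times larger and renders everything ONCE at that
    # size. One pass, no shifted re-renders, no recombining step.
    assert 1 <= scale_factor <= 10  # the PPSSPP menu exposes 1x-10x
    return native_w * scale_factor, native_h * scale_factor
```

e.g. the PSP's 480x272 at 4x comes out as a single 1920x1088 render.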


How ironic.
 

onQ123

Member
WRONG!!!

ppsspp uses resolution scaling: the original resolution a game is running at is intercepted and multiplied by a value from 1 to 10, then the screen is rendered in a single pass at that multiplied resolution.

You can find the code for this on their github since the project is open source:

https://github.com/hrydgard/ppsspp

That is not "uprendering".


edit: More specifically, lines 647-657 on this page show they are setting the internal rendering resolution by each of those multiples. https://github.com/hrydgard/ppsspp/...b594e00663d5f0d426/Windows/MainWindowMenu.cpp



How ironic.


If you render a game at 2X - 10X its native resolution, what did you just do?

 

Lady Gaia

Member
How ironic.

Actually, this is one of the few times onQ123 has been right in this whole crusade. It isn't upscaling. It's a half-baked approach to rendering a higher resolution image when dealing with code that only expects to see a fixed resolution. It's not any of the other things he wants it to be, though. It's both less efficient and less accurate than simply rendering at the target resolution to begin with – so nobody in their right mind would use the strategy for anything but emulating older software.
 

onQ123

Member
Actually, this is one of the few times onQ123 has been right in this whole crusade. It isn't upscaling. It's a half-baked approach to rendering a higher resolution image when dealing with code that only expects to see a fixed resolution. It's not any of the other things he wants it to be, though. It's both less efficient and less accurate than simply rendering at the target resolution to begin with – so nobody in their right mind would use the strategy for anything but emulating older software.

The same way they will be able to use compute to reproject 60fps to 120fps with just a small amount of compute power from the PS4's 1.84TF GPU, what makes you think they can't uprender from 1080p to 4K using an extra 2TF of processing power?
 

ZoyosJD

Member
If you render a game at 2X - 10X its native resolution, what did you just do?

I said it in my last post. It's called "resolution scaling" aka setting the internal resolution. DICE uses this technique in their Frostbite engine with a modified sampling form. It is heavily documented and is not "uprendering".

Otherwise it may be referenced generally as SSAA (Super Sampling Anti Aliasing). Go read one of Durante's many posts on the topic. Again not "uprendering".
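If it helps, SSAA boils down to something like this (toy Python sketch over a grayscale grid, not real engine code):

```python
def ssaa_downsample(hi_res, factor):
    # Supersampling: render at factor x the target resolution first
    # (hi_res is that big image), then average each factor-by-factor
    # block down to a single output pixel. Every output pixel gets the
    # mean of several real samples, which is what kills the jaggies.
    h, w = len(hi_res), len(hi_res[0])
    out = []
    for y in range(0, h, factor):
        row = []
        for x in range(0, w, factor):
            block = [hi_res[y + j][x + i]
                     for j in range(factor)
                     for i in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out
```

Note the difference from "uprendering": here the expensive render happens once at the big resolution, and the extra samples are spent on smoothing rather than on extra output pixels.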

Actually, this is one of the few times onQ123 has been right in this whole crusade. It isn't upscaling. It's a half-baked approach to rendering a higher resolution image when dealing with code that only expects to see a fixed resolution. It's not any of the other things he wants it to be, though. It's both less efficient and less accurate than simply rendering at the target resolution to begin with – so nobody in their right mind would use the strategy for anything but emulating older software.

He's not right. These techniques have been long established and have very specific meanings and differences in performance impact. He is continually trying to claim there will somehow be 4K native output at the cost of an upscaled image, through whatever terminology he can screenshot. He simply doesn't understand what he is talking about and is spreading misinformation.

The same way they will be able to use compute to reproject 60fps to 120fps with just a small amount of compute power from the PS4's 1.84TF GPU, what makes you think they can't uprender from 1080p to 4K using an extra 2TF of processing power?

Reprojection is nothing like increasing resolution. It is merely updating the player's HMD position at twice the rate of the game's logic.
 

Lady Gaia

Member
The same way they will be able to use compute to reproject 60fps to 120fps with just a small amount of compute power from the PS4's 1.84TF GPU, what makes you think they can't uprender from 1080p to 4K using an extra 2TF of processing power?

The two are completely unrelated. Reprojection has nothing to do with improving the quality of an image and everything to do with minimizing latency for head tracking, in order to combat one of the predictable and avoidable causes of nausea. It just amounts to panning the image.
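A toy sketch of the idea (Python, grossly simplified; real reprojection warps per-eye images using the full head pose, not just yaw):

```python
def reproject_pan(frame, delta_yaw_deg, px_per_degree=10):
    # Between two real 60fps renders, synthesize the in-between 120Hz
    # frame by shifting the last image to match the newest head
    # rotation. No new detail is created; this toy version just
    # rotates each row sideways (real edge handling differs).
    shift = int(delta_yaw_deg * px_per_degree) % len(frame[0])
    return [row[shift:] + row[:shift] for row in frame]
```

The cost is one cheap image warp per synthesized frame, which is why it fits in a sliver of compute; it buys you nothing toward a higher-resolution image.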

He's not right.

Not about much, no, but it's not just upscaling from the original resolution, either.

He is continually trying to claim there will somehow be 4K native output at the cost of an upscaled image, through whatever terminology he can screenshot. He simply doesn't understand what he is talking about and is spreading misinformation.

No disagreement there.
 

onQ123

Member
I said it in my last post. It's called "resolution scaling" aka setting the internal resolution. DICE uses this technique in their Frostbite engine with a modified sampling form. It is heavily documented and is not "uprendering".

Otherwise it may be referenced generally as SSAA (Super Sampling Anti Aliasing). Go read one of Durante's many posts on the topic. Again not "uprendering".



He's not right. These techniques have been long established and have very specific meanings and differences in performance impact. He is continually trying to claim there will somehow be 4K native output at the cost of an upscaled image, through whatever terminology he can screenshot. He simply doesn't understand what he is talking about and is spreading misinformation.



Reprojection is nothing like increasing resolution. It is merely updating the player's HMD position at twice the rate of the game's logic.

The two are completely unrelated. Reprojection has nothing to do with improving the quality of an image and everything to do with minimizing latency for head tracking, in order to combat one of the predictable and avoidable causes of nausea. It just amounts to panning the image.



Not about much, no, but it's not just upscaling from the original resolution, either.



No disagreement there.


How does me saying they can use the extra compute for up-rendering from 1080p to 4K, the same way they use a small amount of compute to reproject 60fps to 120fps, equal me saying reprojection & up-rendering are alike?
 

Lord Error

Insane For Sony
How does me saying they can use the extra compute for up-rendering from 1080p to 4K, the same way they use a small amount of compute to reproject 60fps to 120fps, equal me saying reprojection & up-rendering are alike?
Maybe I'm missing something in the larger discussion, but you cannot just reproject a 60fps game to 120fps when you don't have some kind of new data being sampled at at least 120Hz (in the case of PSVR, that being the position and rotation of the helmet). VR is practically the only scenario where reprojection as a concept makes sense.

Actually, this is one of the few times onQ123 has been right in this whole crusade. It isn't upscaling. It's a half-baked approach to rendering a higher resolution image when dealing with code that only expects to see a fixed resolution..
It's actually a pretty brilliant approach for that particular scenario. Everything else usually amounts to a pile of rendering hacks where the resulting image doesn't look the same as the original (in ways other than resolution I mean). This allows for increasing the resolution while preserving everything else about the original look.
 

ZoyosJD

Member
The two are completely unrelated. Reprojection has nothing to do with improving the quality of an image and everything to do with minimizing latency for head tracking, in order to combat one of the predictable and avoidable causes of nausea. It just amounts to panning the image.

Not about much, no, but it's not just upscaling from the original resolution, either.

No disagreement there.

Also a good, clear explanation.

Depends on what you assume he's blabbing about at any given time. He can't seem to separate the concepts.

Yeah, I just get tired of shit like this:

The same way they will be able to use compute to reproject 60fps to 120fps with just a small amount of compute power from the PS4's 1.84TF GPU, what makes you think they can't uprender from 1080p to 4K using an extra 2TF of processing power?

How does me saying they can use the extra compute for up-rendering from 1080p to 4K, the same way they use a small amount of compute to reproject 60fps to 120fps, equal me saying reprojection & up-rendering are alike?

Yo, WTF? How does me telling you that the processes are different not explain that you can't compare the usage of compute under these circumstances?
 

onQ123

Member
Maybe I'm missing something in the larger discussion, but you cannot just reproject a 60fps game to 120fps when you don't have some kind of new data being sampled at at least 120Hz (in the case of PSVR, that being the position and rotation of the helmet). VR is practically the only scenario where reprojection as a concept makes sense.

It would also make sense if you were trying to create 4 shifted frames that you can use in 1 final frame.
 

onQ123

Member
Also a good, clear explanation.

Depends on what you assume he's blabbing about at any given time. He can't seem to separate the concepts.

Yeah, I just get tired of shit like this:





Yo, WTF? How does me telling you that the processes are different not explain that you can't compare the usage of compute under these circumstances?

One would be using a small amount of compute that's already there on the PS4, & the other would use an extra 2TF that's coming with Neo.
 