
Intel beats AMD and Nvidia to crowd-pleasing graphics feature: integer scaling

LordOfChaos

Member
Enthusiasts have been calling out for the functionality for quite some time, even petitioning AMD and Nvidia for driver support. Why, you ask? Essentially, integer scaling is an upscaling technique that multiplies each pixel at, let's say, 1080p, by four – a whole number. The resulting 4K pixel values are identical to their 1080p originals, so the final image retains its clarity and sharpness.

Current upscaling techniques, such as bicubic or bilinear, interpolate colour values for pixels, which often renders lines, details, and text blurry in games. This is particularly noticeable in pixel-art games, whose art style relies on that sharp, blocky image. Other techniques, such as nearest-neighbour interpolation, carry out a similar task to integer scaling but at arbitrary, non-integer ratios, which can similarly cause image-quality loss.
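To make the difference concrete, here's a minimal sketch of the idea in Python (function and variable names are mine, not Intel's actual driver code): each source pixel is simply copied into a k x k block, so the output never contains a colour value that wasn't already in the input.

# Minimal integer-scaling sketch: every source pixel becomes a k x k
# block of identical copies - no interpolation, so no blur.
def integer_scale(src, k):
    # src: 2D list of pixel values (e.g. RGB tuples); k: a whole number
    out = []
    for row in src:
        scaled_row = []
        for px in row:
            scaled_row.extend([px] * k)      # repeat horizontally
        for _ in range(k):
            out.append(list(scaled_row))     # repeat vertically
    return out

# 1080p -> 4K is exactly k = 2: 1920x1080 becomes 3840x2160.
frame = integer_scale([[(255, 0, 0), (0, 0, 255)]], 2)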



Hopefully a third horse in the graphics race kicks things into a higher gear; Nvidia/AMD copying this would be a win for everyone, whether you ever get an Intel GPU or not

Gen 11 graphics only tho


Old low res pixel perfect LCD:

Typical (bilinear?) GPU scaling:

Pixel perfect integer scaling:
 

iconmaster

Banned
Interesting. I didn't understand how it differed from nearest-neighbor, but found an explanation here:

The “Nearest Neighbour” interpolation is lossless only at integer ratios of the resulting and original images, but results in distortion at fractional ratios. Integer-ratio scaling is always lossless.
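A quick sketch of why the ratio matters (illustrative Python, names mine): nearest-neighbour maps each output pixel back to the closest source pixel, so at an integer ratio every source pixel is picked exactly k times, while at a fractional ratio the counts come out uneven – and that unevenness is the distortion.

# Which source column does each output column sample from?
def nn_source_indices(src_width, dst_width):
    return [int(x * src_width / dst_width) for x in range(dst_width)]

print(nn_source_indices(4, 8))  # 2.0x: [0, 0, 1, 1, 2, 2, 3, 3] - every
                                # source pixel used exactly twice
print(nn_source_indices(4, 6))  # 1.5x: [0, 0, 1, 2, 2, 3] - pixels 0 and 2
                                # doubled, 1 and 3 not: uneven widths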
 

JohnnyFootball

GerAlt-Right. Ciriously.
Hopefully a third horse in the graphics race kicks things into a higher gear; them copying this would be a win already

Gen 11 graphics only tho
Uhhh no. It's not that simple. A "feature" does not put Intel anywhere within a light year of a "win."

In order for Intel to get a win, they would have to have a GPU that is competitive in price and performance with whatever Nvidia or AMD has on the market.
 

LordOfChaos

Member
Uhhh no. It's not that simple. A "feature" does not put Intel anywhere within a light year of a "win."

In order for Intel to get a win, they would have to have a GPU that is competitive in price and performance with whatever Nvidia or AMD has on the market.

"Hopefully a third horse in the graphics race kicks things into a higher gear, them copying this would be a win already", as in, we'd all benefit just by AMD and Nvidia copying the feature thanks to a third entry into the market...Not a win for Intel specifically.

Who do you think doesn't know that this single feature wouldn't win them the overall market, lol?
 

Clear

CliffyB's Cock Holster
Yeah... no. Not impressed. Integer upscaling is basically putting a beginner-grade pixel-shader program into a driver function and calling it an innovation.
 

Clear

CliffyB's Cock Holster
Why haven't the others done so yet?

You do realize how simple a process this is, yeah? It's actually the most computationally simple way of enlarging an image. Interpolating to smooth the image is where all the cost comes in.
 

LordOfChaos

Member
You do realize how simple a process this is, yeah? It's actually the most computationally simple way of enlarging an image. Interpolating to smooth the image is where all the cost comes in.

I'm literally just asking why it hasn't been done in AMD and Nvidia drivers if it's simple; it's not a trick question. Is it power consumption? They're limiting it to Gen11 and said it wouldn't work well on Gen9. People requested it of AMD and Nvidia for years and got nothing.

Here's a 91-page thread on the Nvidia forums asking for this feature, but no other driver has implemented it yet
A petition with over 2,300 signatures


>Interpolating to smooth the image is where all the cost comes in.

Sounds like this nullifies the previous statement, yeah? It's easy except for the step that makes it workable, which is costly?
 

Clear

CliffyB's Cock Holster
I'm literally just asking why it hasn't been done in AMD and Nvidia drivers if it's simple; it's not a trick question. Is it power consumption? They're limiting it to Gen11 and said it wouldn't work well on Gen9.

>Interpolating to smooth the image is where all the cost comes in.

Sounds like this nullifies the previous statement, yeah? It's easy except for the step that makes it workable, which is costly?

I can only assume it's because it's such a trivial and simplistic function that they figured if someone wanted to do it, they'd simply write their own.

I mean, literally what we're talking about is reading one pixel from a source buffer and writing it, unmodified, four times into the destination buffer. Interpolation requires many times the number of reads, as it needs to examine adjacent pixels and modify the colour values of the pixels being written accordingly.

It is computationally impossible for interpolation not to be more expensive: the ratio of input to output pixels is identical either way, and simply replicating the input data requires no additional processing whatsoever.
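To put that in code, here's roughly the per-pixel work being compared (an illustrative Python sketch under my own naming, not anyone's actual driver code): 2x integer scaling does one read and four unmodified writes per source pixel, while a single bilinear output sample reads four neighbours and blends them.

# Integer 2x: one source read, four identical writes, no arithmetic
# on the colour values at all.
def scale_2x_integer(src):
    out = []
    for row in src:
        doubled = [v for px in row for v in (px, px)]  # copy each pixel twice
        out.append(doubled)
        out.append(list(doubled))                      # and each row twice
    return out

# Bilinear: four reads plus blending arithmetic to produce ONE output
# pixel (src here holds single-channel values for brevity).
def bilinear_sample(src, fx, fy):
    x0, y0 = int(fx), int(fy)
    x1, y1 = min(x0 + 1, len(src[0]) - 1), min(y0 + 1, len(src) - 1)
    tx, ty = fx - x0, fy - y0
    top = src[y0][x0] * (1 - tx) + src[y0][x1] * tx
    bot = src[y1][x0] * (1 - tx) + src[y1][x1] * tx
    return top * (1 - ty) + bot * ty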
 