
Anyone else having color banding issues with various games lately?

nkarafo

Member
I'm seeing this in many games, usually in dark scenes. Rise of the Tomb Raider is the latest game where I noticed it.

This is color banding:

hqdefault.jpg


Instead of a smooth color transition like in the right part of the picture, I usually get those stripes of color (not quite as severe, but close).

I'm not sure whether this affects all games, but some seem to have severe banding issues.

I have an Nvidia card (a 960). Naturally, I use 32-bit color, and I also use "full" dynamic range. There's also a "12 bpc" option that I've enabled. Alien Isolation supposedly supports this option for smoother colors, but I don't think I've seen a difference. Then again, I don't remember that game suffering from banding issues, and I don't have it installed at the moment, so I can't test it.

Is this a game issue? A card/driver issue? I thought 32-bit color fixed this back in the mid-'90s.
 

On Demand

Banned
I see it in a few games when looking at the sky. Games are still made in 8-bit color, aren't they? Then there's the display you're viewing it on.

Those settings you mention are similar to upscaling: they'll help, but if the source isn't natively made at that color depth, they won't get rid of it completely.
 
Samurai Warriors 4-II on PC suffers horribly. For some reason the banding is greatly diminished by setting the game to windowed mode (even borderless windowed), and pretty much eliminated if you then force a limited RGB range (either on your TV or through SweetFX). Dunno what's up with the windowed stuff, but I think the game just doesn't support full RGB on PC for some reason.

None of these issues happen on PS4, but they persist across nVidia/AMD on PC.
 

lazygecko

Member
This has been a major annoyance for me for as long as I can remember. The animated smoke in the Skyrim loading screens looks fucking terrible, for example. I've heard people say that your monitor's specs are related, though to what extent they can mitigate the effect I have no clue.

What's worse is how this issue is ignored in product specifications. Last year I got a G-Sync monitor. Problem is, while G-Sync is active, it actually uses a much lower color depth, adding so much dithering to the screen that I feel like I'm back in the 16-bit era. And there was absolutely no information about this from the retailer, or even in the product description on its official website.
 

nkarafo

Member
Last year I got a G-Sync monitor. Problem is, while G-Sync is active, it actually uses a much lower color depth, adding so much dithering to the screen that I feel like I'm back in the 16-bit era. And there was absolutely no information about this from the retailer, or even in the product description on its official website.
Holy shit, that's awful. Did you return it?

Do you have a PSVR? The processor unit will cause this.
No, I'm gaming on PC with no VR.
 

nkarafo

Member
Here are some notable examples:

s8Wv5t1.jpg


9jG4XjS.jpg


88vuw6vdvt97vaxzg.jpg


And that reminds me, Dying Light also had horrible color banding issues for me.
 
Here are some notable examples:

s8Wv5t1.jpg


9jG4XjS.jpg


88vuw6vdvt97vaxzg.jpg


And that reminds me, Dying Light also had horrible color banding issues for me.

Probably a game engine issue if I'm also seeing it here. Even though my monitor is technically 6-bit-per-channel, it does FRC and dithering, so actual gradients look smooth. These, though? Visible banding.

Afraid not a lot can be done about renderer imprecision, though. It's just inherent to how they blend things.
 

nkarafo

Member
Is there a way to test whether a TV supports full-range RGB, or, to put it more simply, to rule out the TV itself as the issue here?
 

taoofjord

Member
I have this issue and I think it could be game-engine specific. It happens both on the PS4 Pro and on my PC (GTX 1080) with my Samsung KS8000. I did hear that certain recent NVIDIA cards, at some point, were having this issue; I don't think it was ever addressed.

Also, HDR has not fixed this issue for me and neither has RGB vs. YCbCr or Full vs. Limited.
 

J4g3r

Member
I have this issue and I think it could be game-engine specific. It happens both on the PS4 Pro and on my PC (GTX 1080) with my Samsung KS8000. I did hear that certain recent NVIDIA cards, at some point, were having this issue; I don't think it was ever addressed.

Also, HDR has not fixed this issue for me and neither has RGB vs. YCbCr or Full vs. Limited.

I noticed colour banding in some lighting in FFXV; turning on HDR got rid of it, though.
 

tonypark

Member
I do, big time, when I enable HDR. The worst offender is The Last Guardian.
FFXV had some but I barely noticed it.
Rise of the Tomb Raider too, in dark areas. It's a shame really... (TV is a KS8000.)
I wonder if it's the games' fault or my TV...
 

taoofjord

Member
I do, big time, when I enable HDR. The worst offender is The Last Guardian.
FFXV had some but I barely noticed it.
Rise of the Tomb Raider too, in dark areas. It's a shame really... (TV is a KS8000.)
I wonder if it's the games' fault or my TV...

Did you try TLG with HDR off? It had the same amount of color banding for me. Especially noticeable in the first foggy room you enter.
 

missile

Member
Leaving 6- or 8-bit monitors aside for the moment (they produce banding on their own), the banding showing up lately comes from quantizing HDR buffers down to 8 or 10 bits. The main problem is that the eye is much more sensitive to changes in shade at low/dark intensities. The brighter a shade gets, the less you will notice a jump to a nearby shade. For the eye not to be able to discern any jumps in shade (especially in the blacks), you need about 16 bits per channel. However, with some special techniques it is possible to push that limit down to 12 bits without many objectionable artifacts (that's why the HDR standard is set at 12 bits for Dolby Vision, even though most HDR TVs only run at 10 bits). But with fewer than 12 bits you will get banding in the darker shades, and this is where the lack of knowledge among game developers shows. Unfortunately, quantizing a high bit depth in such a way that the bands are minimal or even hidden is quite a difficult problem; it has existed since the '90s for producing high-quality GIFs from true-color images, and it's the same problem still visible today in most animated GIFs.
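As a rough sketch of that quantization step (assuming a numpy environment; purely illustrative, not code from any engine discussed here), counting how many output codes a dark linear-light ramp actually receives makes the point concrete:

    import numpy as np

    # A dark linear-light ramp spanning two orders of magnitude (0.0001 .. 0.01).
    ramp = np.linspace(1e-4, 1e-2, 4096)

    def distinct_codes(signal, bits):
        """Quantize linearly to `bits` per channel and count the output codes actually used."""
        levels = 2 ** bits - 1
        return len(np.unique(np.round(signal * levels)))

    for bits in (8, 10, 12, 16):
        print(bits, "bits ->", distinct_codes(ramp, bits), "distinct steps")
    # 8 bits  -> about 4 steps over the whole ramp: wide, clearly visible bands.
    # 16 bits -> several hundred steps; each jump falls below the visibility threshold.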


For those who are interested:
I'm working on a 3D retro graphics engine and have faced many of these problems again and again over the years, and have developed my own solutions over time, to the point where I can say (as of late) that I'm about to solve the banding problem not only for quantizing HDR images down to any bit depth but also for GIFs, making them almost banding-free.

Here is some of my work posted over in the indie gamedev thread leading up to my solution, which now produces even better results.

Here is a little something from the tonemapper I am currently working on:

XwKFMNP.png


iz8fesB.png


The mapper still produces high-quality shades at the lowest possible luminance levels mapped to 8-bit output, with the help of my new perceptual quantizer, which quantizes the levels according to human perception. The dithering mechanism that covers the excessive banding at low luminance levels is also a specially developed one: it's neither random nor ordered. It produces no flickering and no distracting pattern (when in motion), yet covers the bands very nicely (and can be adjusted), as can be seen on the ground, which consists of only three true shades (incl. black). Together, this combination produces an almost banding-free rendering no matter how dark the image becomes. The dithering itself is imperceptible at normal resolution and viewing distance, and gradually disappears at higher luminance levels (not much dithering is needed there due to the threshold-vs-intensity characteristic of the human eye at higher background luminance levels, i.e. the Weber/Fechner law).

The image above was modified to show the effect more clearly, i.e. it was
brightened up artificially. The maximum luminance of the original image is
about 10^-5 (mapped to RGB(4,4,4) in the 8-bit RGB output image) of a current
usable range of 6 to 7 orders of magnitude realized by imitating the
auto-exposure (gain control) of the human eye and some other stuff. The mapper
is a global one for fast execution. I want to have at least one with said
features running fast. But I will also spend some time on a local one to
better retain the relative contrast of the HDR image within the LDR image.

But first I plan to add some more perceptual adaptation effects of the eye
like modeling the adaptation speeds, acuity issues, etc. and stuff like
chromatic adaptation, loss of color sensation at scotopic levels and so on
which may make you feel more embedded in an environment. Well, we'll see what comes out of it and whether it works the way I think.


yNNE3FF.gif

(upscaled, banding)

VP8ztDm.gif

(upscaled, PQ)

Here is an animation of dimming the lights down. The luminance is scaled
down from 10^2 to 10^-5. There is no objectionable banding at low light in the second animation. (There is a slight one, but that's due to GifCam.) At low light levels you can see how the dithering is working hard to cover the long band. But notice, the pattern is not flickering. ...

So for example I'm able to eliminate the banding resulting from displaying an
8-bit (per channel) image on a 6-bit monitor with ease.
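For comparison, the most bare-bones form of that idea (plain 1-LSB random dither, emphatically not the special pattern described above; the function name and test values are just for illustration) looks like this:

    import numpy as np

    def requantize_8_to_6(img8, rng=np.random.default_rng(0)):
        """Prepare 8-bit values for a 6-bit panel: add +/-0.5 LSB of uniform noise
        before rounding so the truncation error turns into fine grain, not bands."""
        noise = rng.uniform(-0.5, 0.5, img8.shape)
        img6 = np.clip(np.round(img8 / 255.0 * 63 + noise), 0, 63)
        return np.round(img6 / 63.0 * 255).astype(np.uint8)  # back on the 8-bit scale

    # A shallow 8-bit ramp: straight rounding leaves ~16 wide bands, while the
    # dithered version averages back to the original ramp at the cost of slight noise.
    ramp8 = np.tile(np.arange(0, 64, dtype=np.float64), (32, 1))
    banded   = np.round(ramp8 / 255.0 * 63) / 63.0 * 255.0
    dithered = requantize_8_to_6(ramp8)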

I want to apply this technique to retro graphics in rendering some effects in
HDR and quantizing down to very low bit depths afterwards.
 

Paragon

Member
Are you using a custom ICC profile or another tool to modify the GPU LUT?
NVIDIA doesn't handle this properly, and any changes you make will introduce banding - though if you're outputting 12-bit, the impact should be minimal compared to an 8-bit output.
In-game brightness/gamma settings can also cause this to appear if they use the GPU LUT instead of processing it in the engine.

Even without that, it's just generally been a problem in games for a long time, and something that I'm surprised more developers haven't worked on fixing. A lot of the time they're using enough precision that banding shouldn't be a problem, but then they don't process the image properly to prevent it. (not using dithering etc.)
Fortunately, HDR is likely to make more developers aware of this problem.

Alien Isolation definitely shows less banding with the deep color option enabled on a 10-bit display, but you probably won't notice it unless you disable the film grain. Film grain/noise tends to mask banding problems really well.
Tomb Raider 2013 had a "high precision" mode which helped reduce banding. Does Rise of the Tomb Raider have the same option?

What's worse is how this issue is ignored in product specifications. Last year I got a G-Sync monitor. Problem is, while G-Sync is active, it actually uses a much lower color depth, adding so much dithering to the screen that I feel like I'm back in the 16-bit era. And there was absolutely no information about this from the retailer, or even in the product description on its official website.
What monitor is it?
That problem is not exclusive to G-Sync though. My Sony TV noticeably drops the color depth when switching to 3D mode.
2D mode: nice smooth 10-bit gradients. 3D mode: ugly banding everywhere, like it's less than 8-bit.

The dithering mechanism that covers the excessive banding at low luminance levels is also a specially developed one: it's neither random nor ordered.
Your posts are always so interesting.
I've always found that random dithering is best for hiding banding.
When you use non-random dithering like ordered or error diffusion, you are left with bands of solid color in between the dithered transitions, because in those areas there was no error that required dithering.
Uniform noise across the image is preferable to having areas that are noise-free.
 

hiryu64

Member
What's worse is how this issue is ignored in product specifications. Last year I got a G-Sync monitor. Problem is, while G-Sync is active, it actually uses a much lower color depth, adding so much dithering to the screen that I feel like I'm back in the 16-bit era. And there was absolutely no information about this from the retailer, or even in the product description on its official website.
Is this a common issue with G-Sync monitors? I haven't heard of G-Sync kicking down color depth, but that's interesting if true.
 

Izuna

Banned
I always thought it was just intentional, to save memory/resources, etc.

I've noticed it more these days. Took me by surprise when I went back to Blue Dragon.
 

EvB

Member
Is this a game issue? A card/driver issue? I thought 32-bit color fixed this back in the mid-'90s.

Nope. 32-bit colour (8 bits per channel) is gearing up to be superseded by 10-bit colour, but you need a display that supports it.
Games are only made with 8-bit in mind; there is some slight computational overhead with greater bit depth, so it hasn't happened yet, as display uptake is limited.
Even in 8-bit games, I suspect the banding you see in certain circumstances is the result of using lower-quality, lower-precision settings in non-essential areas in the name of optimisation.

TVs typically try to deal with banding/posterisation in their standard picture modes; in game mode that processing is bypassed, which is where some of the degradation in image quality comes from, as you see the native image, banding and all.
 

EvB

Member
Did you try TLG with HDR off? It had the same amount of color banding for me. Especially noticeable in the first foggy room you enter.

^ That will be one of the following:

A: the game doesn't output any 10-bit color (no PS4 game is confirmed to)

B: you have an 8-bit panel

C: the fog is a low-resolution alpha effect
 

horkrux

Member
I started noticing this a few years ago in GT5, and subsequently in other games right up until now. It's really annoying, but strangely it's an issue that isn't talked about a lot.
 

nkarafo

Member
I started noticing this a few years ago in GT5, and subsequently in other games right up until now. It's really annoying, but strangely it's an issue that isn't talked about a lot.
I still remember when I upgraded my Voodoo 3 card to a 32-bit-capable one. That was the day I said goodbye to ugly color banding like this. I had no idea games would still have this issue 20 years later.

So many people complain about other little things like IQ, AA and SSAO issues, but why not this one? Isn't Dying Light the same for everyone, regardless of card/console or monitor/TV?
 
I have been noticing it in some games lately, most recently in Dead Rising 4. I have a native 8-bit VA display, so definitely not the display's fault.
 

Paragon

Member
It's the same fault we've had for a long time: 8 bits per channel is not enough.
8-bits is enough if it's sufficiently dithered.
If you're dithering correctly, bit-depth should only affect the amount of noise in the image, not the amount of banding.
10 or even 12 bit displays would be ideal though.
 

missile

Member
Posts like these are why I love GAF and its community.
... Your posts are always so interesting. ...
Thanks a lot! Glad it's of some use.

... I've always found that random dithering is best for hiding banding. ...
Seems like it, but it isn't. There is dithering that produces less objectionable artifacts than random or Bayer. The optimum lies between those two techniques, i.e. pattern (dispersed-dot ordered dithering) and random, and follows a sort of pattern-diffusion process. Floyd-Steinberg is one such technique but has many other flaws. Producing good diffusion patterns is hard and computationally much more complex than any other technique. The optimum looks like an image shaded using pointillism: if you compare such an image against a Bayer-dithered or random one, you will see that pointillism produces the least artifacts and the most pleasing image to the eye.

The question is: how do you produce limited-size pointillism/diffusion patterns which are stable, do not flicker, do not alias, have minimal objectionable structure, etc.?

And this is where the following paper is a bit lacking:

There was a nice presentation on the subject a while back.
http://loopit.dk/banding_in_games.pdf ...
This paper focuses just on random dithering and skips the diffusion techniques right away by saying that it (Floyd-Steinberg) doesn't map well to the GPU, which is correct; but this class of dithering (not the specific Floyd-Steinberg algorithm) is able to produce the best patterns with respect to human perception/vision, and as such can produce better pictures (with less objectionable artifacts) than random.

... When you use non-random dithering like ordered or error diffusion, you are left with bands of solid color in between the dithered transitions, because in those areas there was no error that required dithering.
Uniform noise across the image is preferable to having areas that are noise-free.
I faced this problem as well: I had done some cool dithering yet still got the bands of solid color in between. I've solved that problem. My dithering algorithm is able to blend the halftones together as much as needed, as seen here, no matter what pattern/technique I use:

iz8fesB.png


I can reduce the width of the halftones down to just the "border" between two true shades, or spread them out as needed.
 

Paragon

Member
As I said, very impressive work.
I'm still a bit skeptical about there being no banding at all in motion, but perhaps that is just GifCam as you say. Your dither technique is a massive improvement regardless.
When working at very low resolutions and bit-depths, I can see where you would want something that is say 95% banding-free and low noise, vs 2LSB TPDF dither which is 100% banding free but high noise.
When dealing with high resolutions and in a demanding 3D game, I would think that a relatively simple TPDF dither solution might be preferable to a computationally expensive error diffusion technique.
At the same time, I don't know if the noise which is added by TPDF dither is necessarily a bad thing either. A lot of developers add a "film grain" filter on top of their image, and TPDF dither would typically show much less noise than that when you're dealing with converting say a 16-bit buffer to an 8-bit or 10-bit output. The higher the bit-depth, the less visible the noise is.
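For reference, 2-LSB TPDF dither itself is only a couple of lines; here is a minimal numpy sketch (purely illustrative, not tied to any particular engine) of quantizing a high-precision [0,1] buffer down to 8 bits with triangular-PDF noise:

    import numpy as np

    def quantize_tpdf(buf, bits=8, rng=np.random.default_rng(0)):
        """Quantize a [0,1] float buffer to `bits`, using 2-LSB peak-to-peak TPDF dither:
        triangular noise built as the sum of two independent uniform(-0.5, 0.5) LSB draws."""
        levels = 2 ** bits - 1
        tpdf = rng.uniform(-0.5, 0.5, buf.shape) + rng.uniform(-0.5, 0.5, buf.shape)
        return np.clip(np.round(buf * levels + tpdf), 0, levels) / levels

    # A shallow float gradient that bands badly when rounded straight to 8 bits:
    grad = np.tile(np.linspace(0.10, 0.12, 1024), (64, 1))
    banded   = np.round(grad * 255) / 255       # plain rounding: roughly 6 wide bands
    dithered = quantize_tpdf(grad, bits=8)      # the error becomes uniform grain instead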
I think we can all agree that this is something which many developers have been overlooking and it results in very ugly artifacts which shouldn't be there - especially when their rendering pipeline is using buffers with high internal precision to do things like HDR lighting. (and now HDR output)
 

Auctopus

Member
The mission where you lead the tank through the fog in the forest in BF1 has such bad banding on PS4.

I wouldn't describe it as a "problem", though, or something I've seen frequently recently.
 

Ardenyal

Member
I have pretty much always had color banding issues. I don't know if it's because I'm involved in graphic design or what, but I have always noticed the gross artifacts in "smooth" gradients.
 

missile

Member
As I said, very impressive work.
I'm still a bit skeptical about there being no banding at all in motion, but perhaps that is just GifCam as you say. ...
Technically, the bands will always be there unless you add way too much noise to the signal. But since the eye has finite resolving power, it suffices to add only a certain amount of specifically shaped noise to make the image banding-free while keeping the noise least objectionable. Well, that's at least the goal.

The banding problem is actually two-fold. If the bit depth is very low, then one can see the bands produced by the pattern itself, if the pattern is small, because any pattern can only reproduce so many shades. As you know, a 2x2 Bayer pattern can only produce 5 halftones (shades). The jumps between these halftones can't be covered unless you introduce some sort of randomness (breaking the pattern). At high bit depths these halftones (a 4x4 pattern is used a lot) may already suffice to hide the quantisation bands, and also to hide the bands of the pattern itself, taking human perception (integration) into account.
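The five-halftone claim for a 2x2 Bayer pattern is easy to verify with a throwaway sketch (illustrative only; the thresholds are the standard normalized ones):

    import numpy as np

    # Normalized 2x2 Bayer thresholds: 0.125, 0.375, 0.625, 0.875.
    BAYER2 = np.array([[0, 2],
                       [3, 1]]) / 4.0 + 1.0 / 8.0

    def halftone(level):
        """Render a constant grey `level` in [0,1] to 1-bit output through the 2x2 pattern."""
        return (level > BAYER2).astype(int)

    # Average coverage as the input level sweeps from 0 to 1:
    print(sorted({float(halftone(l).mean()) for l in np.linspace(0, 1, 101)}))
    # -> [0.0, 0.25, 0.5, 0.75, 1.0]  (exactly the five halftones mentioned above)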

... When working at very low resolutions and bit-depths, I can see where you would want something that is say 95% banding-free and low noise, vs 2LSB TPDF dither which is 100% banding free but high noise. ...
Indeed. By adding noise to the signal everything can be hidden, even the signal! xD The only question, after having done all the other stuff, is: how do you hide the damn noise?

... When dealing with high resolutions and in a demanding 3D game, I would think that a relatively simple TPDF dither solution might be preferable to a computationally expensive error diffusion technique. ...
Sure. Error diffusion is out of the question: it produces quite a few artifacts, flickers when used in animations (see many of the (ugly) animated GIFs), and is computationally cumbersome (though various optimizations exist). But, well, error diffusion isn't the end of the story as far as diffusion techniques go. As I wrote in a previous post, pointillism (another diffusion technique) is far superior. The question is: how to make it fast?

... At the same time, I don't know if the noise which is added by TPDF dither is necessarily a bad thing either. A lot of developers add a "film grain" filter on top of their image, and TPDF dither would typically show much less noise than that when you're dealing with converting say a 16-bit buffer to an 8-bit or 10-bit output. The higher the bit-depth, the less visible the noise is. ...
When going for a grainy look, TPDF may do the job, especially at higher bit depths. But it looks odd when doing color dithering.

However, most of the issues reoccur when the HDR shades get very dark. It's in the dark region that you see the noise/bands much more again, which is basically the problem in this thread. For one, you need a good noise function to hide the quantisation, but that noise function also needs to stay less visible on its own, which can only be realized sufficiently by taking human vision into account: the eye's spatial frequency response isn't uniform across the retina, and it detects higher-frequency components along the diagonal to a much lesser degree. That is also the reason why most patterns (see offset printing (the black screen), the Bayer patterns >= 4x4, etc.) are rotated by 45 degrees, making the pattern less visible to the eye. Doing so basically amounts to noise shaping (like in music), but done here in 2D: the noise (or whatever signal) is shaped in such a way that it produces fewer artifacts. For the classic patterns, i.e. the Bayer pattern (class: dispersed-dot ordered dithering), it was proven by Robert Ulichney (see: Digital Halftoning) that these patterns are optimal for the given class, producing the least amount of artifacts for the eye. That's the reason why the Bayer patterns have all these crosses (x, 45 degrees) in them: a cross is less objectionable to the eye, whereas a plus (+) is more so. Any other pattern within that class will produce more objectionable artifacts.

The same principle applies to any other method. TPDF is just one way to shape the noise (away from white noise), and there are a million ways to shape noise, or any other signal for that matter. However, the eyes (like the ears) have limits, and it is perhaps best to shape the noise in such a way that it becomes less objectionable to the eye, producing better dithering if done right. Doing so involves a 2D Fourier analysis of the pattern/signal and matching it to the eye's response. The work of Robert Ulichney (Digital Halftoning) shows how all this works. And if you look closely, you will see that patterns produced via diffusion, and patterns on hexagonal grids, have some very interesting power spectra.

Second, another issue usually not addressed is the Weber/Fechner law (TVI, threshold vs. intensity). That is to say, with respect to human vision, uniform quantisation makes no sense: you waste a lot of bits on shades you can't really distinguish. Hence, it's better to spend more bits on the dark shades than on the bright ones, because jumps in bright shades aren't that objectionable to the eye, due to the Weber/Fechner law underlying human vision. That's also the reason why we don't need 16-bit HDR TVs: 12 bits (see: Dolby Vision) suffices for the human eye, given that the quantizer is based on human vision.
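A crude way to see that effect in code (a simple 2.4 power law stands in here for a true perceptual quantizer such as SMPTE PQ; the numbers are only indicative):

    import numpy as np

    dark = np.linspace(1e-4, 1e-2, 4096)   # the darkest slice of a linear-light signal

    def codes_used(signal, bits, encode=lambda x: x):
        """Count how many output codes a quantizer actually spends on `signal`."""
        levels = 2 ** bits - 1
        return len(np.unique(np.round(encode(signal) * levels)))

    print(codes_used(dark, 10))                          # uniform quantizer: ~11 codes
    print(codes_used(dark, 10, lambda x: x ** (1/2.4)))  # power-law coded:   ~130 codes
    # The nonuniform quantizer spends an order of magnitude more of its 10-bit budget
    # on the dark shades, which is the idea PQ / Dolby Vision formalizes properly.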

None of this was addressed in the paper posted above, so there is a lot left on the table to further improve/suppress banding in games.

... I think we can all agree that this is something which many developers have been overlooking and it results in very ugly artifacts which shouldn't be there - especially when their rendering pipeline is using buffers with high internal precision to do things like HDR lighting. (and now HDR output)
Overlooked? Nope. Today's programmers were raised on a 24-bit framebuffer! ;)
 

Paragon

Member
I don't really have anything to add other than saying thanks again for taking the time to write up that detailed post because I agree with just about everything that you wrote.
 

missile

Member
^ You are welcome!


I'm currently working on all this stuff on a daily basis. Here are some partial results as of late (work in progress). The following images were quantized down from an HDR buffer to 1 bit per color channel, hence 8 colors only!

K7u0MUJ.png

TPDF

bl71tOw.png

Bayer 8x8

B1N1d5T.png

Pointillism approximation (work in progress)

Look how smooth the shades are in the last image. 1 bit per color channel. xD
 

Paragon

Member
There's no question about it, when you're working at low bit-depths and low resolutions with flat-shaded objects, TPDF is not the best solution - though I will say that it should look a bit better in motion rather than a static image since it's randomized.
Some diffusion/pattern dither techniques can actually look a bit strange in motion where the position of an object is moving but the dither doesn't change in areas which remain the same color. Depends how it's implemented.
As a solution for modern 3D engines that work with high-bit-depth buffers for HDR, outputting to a 10-bit display, I think TPDF would be acceptable.
Pointillism may produce better results, but when you factor in the performance cost, and compare TPDF against the results that many games produce now, it would be a big improvement.

I haven't done much with 3D rendering, but when working with video for example, converting a 16-bit gradient to a 10-bit output at high resolutions looks better using TPDF dither than error diffusion - at least with the options I have available to me - and it runs a lot quicker too.
With error diffusion there's still the appearance of some very subtle banding resulting from the areas which are free of dither, compared to the very low level of randomized noise across the entire image with TPDF.
The noise it produces on a 10-bit display is very minimal too - especially when looking at actual video rather than test images.

But I think we're on the same page.
For the type of image that you're using for your examples, and where the performance impact is negligible, that Pointillism technique is definitely the way to go.
 

missile

Member
There's no question about it, when you're working at low bit-depths and low resolutions with flat-shaded objects, TPDF is not the best solution - though I will say that it should look a bit better in motion rather than a static image since it's randomized.
Some diffusion/pattern dither techniques can actually look a bit strange in motion where the position of an object is moving but the dither doesn't change in areas which remain the same color. Depends how it's implemented.
As a solution for modern 3D engines that work with high-bit-depth buffers for HDR, outputting to a 10-bit display, I think TPDF would be acceptable.
Pointillism may produce better results, but when you factor in the performance cost, and compare TPDF against the results that many games produce now, it would be a big improvement.

I haven't done much with 3D rendering, but when working with video for example, converting a 16-bit gradient to a 10-bit output at high resolutions looks better using TPDF dither than error diffusion - at least with the options I have available to me - and it runs a lot quicker too.
With error diffusion there's still the appearance of some very subtle banding resulting from the areas which are free of dither, compared to the very low level of randomized noise across the entire image with TPDF.
The noise it produces on a 10-bit display is very minimal too - especially when looking at actual video rather than test images.

But I think we're on the same page.
For the type of image that you're using for your examples, and where the performance impact is negligible, that Pointillism technique is definitely the way to go.
Same page, opposite ends of the spectrum, I guess. Well, indeed, video is a different thing altogether. High resolution, high bit depths, large viewing distance, etc. all work in favor of hiding quantization artifacts, in contrast to low-res (visible pixels, small viewing distance), low-bit-depth graphics, where the requirements on the quantizer are much more demanding. For video I also think that TPDF and friends are good enough; for low-res graphics, however, it doesn't work, not even in motion (though it looks better in motion than static, as you've indicated).

My approach to dither is quite different from yours, I think. I want to mimic and improve the limited graphics style of old video games, using similar and modern techniques to produce shades and colors in some interesting ways based on the same principles, yet also in 3D. Looking at old shaded graphics, it can be seen that only two techniques were used (neglecting those painted by hand), i.e. random and the Bayer pattern (modified variants exist to make them CRT-safe). Floyd-Steinberg and friends weren't used that much due to computational issues. Random never looked any good at low resolution/bit depth, yet has a certain appeal of its own with all those fragments. Bayer faces a similar problem (the crosses are just too big at low res) but overall produces a more coherent picture than random.

For 3D I also tried to counter the static behavior of a pattern (and of random noise made static) when in motion while displaying a constant shade/color. Instead of tagging the pattern to the screen, you can tag the pattern to the polygon so that it moves with it: the origin of the pattern is a little different every time, and so the dither dots move as well. This already improves the whole thing. More could be done using advanced texture mapping.

Here is an old demo of mine showing the pattern being tagged to the polygon:

27991861.gif

(DCPU-16)

Without it, it would look like the cube was rotating against a static background whenever the shades remain constant, which is indeed annoying.
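A toy version of the difference between screen-tagged and polygon-tagged thresholds might look like this (a plain 4x4 Bayer matrix and made-up coordinate names; missile's actual pattern and mapping are something else entirely):

    import numpy as np

    # A normalized 4x4 Bayer threshold matrix (thresholds in [0, 1)).
    BAYER4 = np.array([[ 0,  8,  2, 10],
                       [12,  4, 14,  6],
                       [ 3, 11,  1,  9],
                       [15,  7, 13,  5]]) / 16.0

    def screen_tagged_threshold(px, py):
        # Indexed by the screen pixel: the dots stay put while the object moves,
        # so a constant shade looks like it slides "under" a fixed raster.
        return BAYER4[py % 4, px % 4]

    def polygon_tagged_threshold(px, py, origin_x, origin_y):
        # Indexed relative to the polygon's projected origin: the dither dots
        # travel with the surface instead of staying glued to the screen.
        return BAYER4[(py - origin_y) % 4, (px - origin_x) % 4]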

Another issue I faced a couple of months ago, while diving deeper into the art of retro graphics, was trying to build a pixelized defocusing blur without using any blending (so it is also usable with fixed color palettes), with the effect being static, producing the blur solely by dotting/weaving pixels together. Ideally, on a high-res/bit/refresh display, random would be an easy way out, and easy to implement as well, but it looks horrible at low res. In motion it looks a bit better, but then another problem comes with it: with the pixels jumping around, it becomes impossible to see whether the object in defocus is moving/rotating slightly. Standard patterns are also doomed to failure. And given the eye's ability to lock onto a raster, it was clear that something very different was needed. Weeks later, I came up with this:

Ie2tq8F.gif

(work in progress)

I'm pretty sure you've never seen something like that. ;)


With all the knowledge gained on low-res/retro graphics over the years, it was just an exercise for me to apply these techniques to suppressing the HDR banding. But I understand why the problem is difficult for many developers, and I think we will keep seeing banding in the years to come unless the standard engines (Unity, Unreal) ship a fix by default. But it seems that the HDR TVs of the future may solve this problem altogether, because they will quantize the HDR image down on their own, after applying all the tone-mapping and correction to it. We'll see.
 

Kyrios

Member
Oh wow, I thought it was an effect of my TV possibly having the brightness setting too high. Didn't know this was an actual thing; the more you know, I guess.
 