Displaying Images in 16 Colours

As part of my OS development, one of the design tasks is to figure out what hardware to support out of the box.  In terms of video hardware, the default target I will support is VGA.

VGA has been around for many years, and other graphics technologies have been developed to supersede it.  VGA still has a lot of benefits though, not least of which is that it's one of the few technologies almost every working graphics card supports out of the box without drivers.  It can do a 640x480 screen mode at an impressive 16 colours, and it has some hardware features which can be used to get reasonable performance.

The 640x480x16 colour screen (also known as mode 12h) is going to be my default and minimum screen mode.  Some people would say that a text-based console should be the minimum, but my intention is for the OS to drop the user straight into a window system (like a certain 1980s operating system to which modern systems pale in comparison).

Aside from the limited spatial resolution of mode 12h, those 16 colours tend to be a bit of a problem, so we have to try to get the most out of them.  I considered supporting a dynamic palette, but I think that's unnecessary.  We're not looking to play movies on a 16 colour screen, and device-specific drivers can be made available later to give us greater colour depth if needed.

So instead I've gone with a fixed palette.  This palette consists of white, black, two greys, and then a light and a dark shade for each of Red, Yellow, Green, Cyan, Blue, and Magenta.  Specifically:

0x000000 // Black
0x0000A8 // Dark Blue
0x00A800 // Dark Green
0xA800A8 // Dark Magenta
0xA8A800 // Dark Yellow
0x555555 // Dark Grey
0xAAAAAA // Light Grey
0x5656FF // Light Blue
0x56FF56 // Light Green
0x56FFFF // Light Cyan
0xFF5656 // Light Red
0xFF56FF // Light Magenta
0xFFFF56 // Light Yellow
0x00A8A8 // Dark Cyan
0xA80000 // Dark Red
0xFFFFFF // White
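
For the sketches that follow, I'll assume this palette lives in a simple C array of packed 0xRRGGBB values, in the order listed above (the name palette[] is just illustrative):

static const unsigned int palette[16] = {
    0x000000, 0x0000A8, 0x00A800, 0xA800A8,  // black, dark blue/green/magenta
    0xA8A800, 0x555555, 0xAAAAAA, 0x5656FF,  // dark yellow, the greys, light blue
    0x56FF56, 0x56FFFF, 0xFF5656, 0xFF56FF,  // light green/cyan/red/magenta
    0xFFFF56, 0x00A8A8, 0xA80000, 0xFFFFFF,  // light yellow, dark cyan/red, white
};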

Now that that's set up, we'll try displaying some bitmap data, such as a nice colourful parrot:

<A Colourful Parrot>

Now unfortunately, this parrot uses rather more than 16 colours, so we need a way to map a 24-bit colour to one of our palette entries.  A naive way would be to take each pixel in turn, work out the Euclidean distance between that pixel's colour and each palette colour, then paint the pixel with the closest one.  This turns out to be slow and bad, and is exactly what I implemented first.
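
For reference, a minimal sketch of that naive matcher in C, using the palette[] array above (comparing squared distances means we never need a square root):

static int nearest_palette_index(int r, int g, int b)
{
    int best = 0;
    long best_dist = 3L * 255 * 255 + 1;  // larger than any possible distance
    for (int i = 0; i < 16; i++) {
        long dr = r - (long)((palette[i] >> 16) & 0xFF);
        long dg = g - (long)((palette[i] >>  8) & 0xFF);
        long db = b - (long)( palette[i]        & 0xFF);
        long dist = dr * dr + dg * dg + db * db;  // squared Euclidean distance
        if (dist < best_dist) {
            best_dist = dist;
            best = i;
        }
    }
    return best;
}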

Rather than do that, we'll pre-calculate all of those Euclidean-distance comparisons into a lookup table, after which we'll be able to map any colour a LOT faster.  We'll just take the red, green and blue components of the colour, use them as indices into an array, and the value at that location will be the palette index to use.

We could make the array map every possible 24-bit colour to a palette entry.  The mapping table would be 256 by 256 by 256, take up 16MB, and would be very accurate.  However, as it turns out, not only can we get away with a much, much smaller mapping table, the accuracy we lose isn't actually that significant.  We're displaying a 16 million colour image on a 16 colour display; it's never going to be pixel-perfect.

So with a bit of working out, I settled on a 512-byte mapping table: 8 by 8 by 8.  It may not sound like much, but 512 entries split between 16 colours gives us an average of 32 entries mapping to each colour, and it doesn't take up a whole lot of space.

As we only have 8 values for each of the red, green and blue components, we'll need a way to reduce an 8-bit colour component to a 3-bit array index.  This is really easy, as we can just take the 3 most significant bits.  We could also look at the fourth bit to round the value rather than truncate it, if we wanted to be a little more accurate.
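
As a tiny sketch, the truncating version is a single shift, with the rounding variant in the comment:

static unsigned component_to_index(unsigned component)
{
    return component >> 5;  // keep the top 3 bits: 0..255 -> 0..7
    // Rounding variant, using the fourth bit to round up:
    //   unsigned i = (component + 16) >> 5;
    //   return i > 7 ? 7 : i;
}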

So we define the array, then populate it by looping through all the array entries, converting each set of array indices back into a 24-bit colour by bit shifting, working out the Euclidean distance to each palette entry, and storing the index of the nearest colour.  Having done this (which we only need to do when we change the palette) we stash it away somewhere until it's needed.
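
A sketch of that set-up step, reusing the naive matcher from earlier; it now runs 512 times in total rather than once per pixel:

static unsigned char colour_map[8][8][8];  // 512 bytes, one palette index per cell

static void build_colour_map(void)
{
    for (int r = 0; r < 8; r++)
        for (int g = 0; g < 8; g++)
            for (int b = 0; b < 8; b++)
                // Shift each 3-bit index back up to the top of an
                // 8-bit component, then find its nearest palette entry.
                colour_map[r][g][b] = (unsigned char)
                    nearest_palette_index(r << 5, g << 5, b << 5);
}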

When we need to render an image, such as a Parrot, onto our screen, we process each pixel one at a time, take the three most significant bits of each component, use them to look up a palette index in the array, and plot the pixel with that colour.  As it's all bit shifting and masking, it's all very fast and very good.
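
Putting it together, the inner loop looks something like this sketch; put_pixel() stands in for whatever hypothetical mode-12h plotting routine the OS provides, and the image is assumed to be packed 24-bit RGB, row-major:

extern void put_pixel(int x, int y, unsigned char colour);  // hypothetical plot routine

static void draw_image(const unsigned char *rgb, int w, int h)
{
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            const unsigned char *p = &rgb[(y * w + x) * 3];
            // The top three bits of each component index the 8x8x8 table.
            put_pixel(x, y, colour_map[p[0] >> 5][p[1] >> 5][p[2] >> 5]);
        }
    }
}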

Finally, the Parrot in all of its glory.

<A 16-colour Parrot>

Ok, so it's not great.

We can see some very obvious artefacts, large blocks of colour, and the general aura of looking-a-bit-rubbish.  So can we improve on this?  Obviously yes.

We're going to implement possibly the simplest form of error-diffusion dithering.  As we process each pixel and work out which palette colour to draw it with, we calculate the difference between the colour we wanted and the colour we got (the "error" part).  When we move to the next pixel, we add the error from the previous pixel onto this pixel's colour before we map it (the "diffusion" part).  For example, if we want to draw a 70% red, but the nearest we can get is a 60% red (less red than we wanted), we carry an error of +10% to try to make the next pixel more red in order to compensate.  This error can accumulate across multiple pixels until it is significant enough to tip the scales and make the mapping choose a different palette entry.

For the implementation side, all we need is to carry the error from one pixel to the next (one value per colour channel), remembering to reset it at the start of each row so we don't carry the error from the right edge of one row to the left edge of the next.
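
A sketch of that modification; the error is a colour difference, so it's carried per channel here, and the clamp stops an accumulated error pushing a component outside 0..255:

static int clamp255(int v)
{
    return v < 0 ? 0 : (v > 255 ? 255 : v);
}

static void draw_image_dithered(const unsigned char *rgb, int w, int h)
{
    for (int y = 0; y < h; y++) {
        int er = 0, eg = 0, eb = 0;  // reset the carried error on each new row
        for (int x = 0; x < w; x++) {
            const unsigned char *p = &rgb[(y * w + x) * 3];
            // Add the error carried from the previous pixel before mapping.
            int r = clamp255(p[0] + er);
            int g = clamp255(p[1] + eg);
            int b = clamp255(p[2] + eb);
            unsigned char idx = colour_map[r >> 5][g >> 5][b >> 5];
            // The new error is the colour we wanted minus the colour we drew.
            er = r - (int)((palette[idx] >> 16) & 0xFF);
            eg = g - (int)((palette[idx] >>  8) & 0xFF);
            eb = b - (int)( palette[idx]        & 0xFF);
            put_pixel(x, y, idx);
        }
    }
}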

After this simple modification, we'll have another go at this parrot...

<A Dithered Parrot>

Comparing this to the previous one, there are some very obvious improvements.  For one thing, the blue tinge on the perch has returned and it looks all manner of awesome.  Ok, awesome is probably a strong word, but it's very good considering the limitations of the technology and the simplicity of the algorithm.

We could make this better and use any number of different error-diffusion dithering methods, such as Floyd-Steinberg, but these will require us to buffer at least one entire row of error values so that the error can be diffused to pixels below the current one, not just the neighbours to the right.

This "one-dimensional" error diffusion has at least one significant drawback.  The image above is a photo, so each row of pixels is different from the one preceding it, so it looks good.  However, if you have an image where rows are repeated, this gives some horrible vertical line artefacts as each dithered output is the same.  This makes sense given that nothing is carried from one row to the next to alter the input to get a different output.

But that's a problem for another day.

<Side by Side Parrots for Comparison>

<Zoomed Dithered Parrot>

The parrot image is from the Wikipedia article on Error Diffusion and has been placed in the public domain by its author, Ricardo Cancho Niemietz.
