Image Dithering: Eleven Algorithms and Source Code (2012)
(tannerhelland.com) | 110 points by Bogdanp | 24 October 2025 | 29 comments

Comments

Does anyone know of any application/tool that can perform palette dithering? The idea is "here is the n-color palette specified by its RGB values, here is the full-color RGB image, give me the best possible dithered image using the provided palette". The tools that I've used were underwhelming and produced results full of banding and artifacts.
Basically, great dithering in color instead of B/W.
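For what it's worth, the core of such a tool is usually just error diffusion where the black/white threshold is replaced by a nearest-palette-color search, and the leftover error becomes a 3-vector. A rough Floyd-Steinberg sketch of the idea (Pillow and NumPy assumed; the palette and filenames are only placeholders):

    import numpy as np
    from PIL import Image

    def dither_to_palette(img, palette):
        # img: HxWx3 floats in 0..255, palette: Nx3 RGB entries.
        h, w, _ = img.shape
        pal = np.asarray(palette, dtype=np.float64)
        work = img.astype(np.float64).copy()
        out = np.zeros((h, w), dtype=np.intp)
        for y in range(h):
            for x in range(w):
                old = work[y, x]
                i = np.argmin(((pal - old) ** 2).sum(axis=1))  # nearest palette color
                out[y, x] = i
                err = old - pal[i]
                # Push the quantization error onto unvisited neighbors (Floyd-Steinberg weights).
                if x + 1 < w:
                    work[y, x + 1] += err * 7 / 16
                if y + 1 < h:
                    if x > 0:
                        work[y + 1, x - 1] += err * 3 / 16
                    work[y + 1, x] += err * 5 / 16
                    if x + 1 < w:
                        work[y + 1, x + 1] += err * 1 / 16
        return pal[out].astype(np.uint8)

    palette = [(0, 0, 0), (255, 255, 255), (136, 0, 21), (0, 162, 232)]  # example 4-color palette
    src = np.asarray(Image.open("input.png").convert("RGB"), dtype=np.float64)
    Image.fromarray(dither_to_palette(src, palette)).save("dithered.png")

Swapping the plain sRGB distance for linear RGB or a perceptual color distance tends to help noticeably with the muddy, banded look you describe.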
The only catch is that generating blue noise is a roughly O(n^2) algorithm. It's not feasible to generate on the fly, so in practice you just pregenerate a bunch of blue-noise textures and tile them.
It is surprisingly difficult to get really crisp dithering on modern displays; you have to do it on the client to match the user's display 1:1. Notice that the pre-rendered examples on this page actually look a little blurry if you magnify them. This is not really a problem unless you really want the crispness of the original Mac screen.
A few years ago I got annoyed with this and made a little web component that attempts to make really sharp 1-bit dithered images by rendering the image on the client to match whatever display device the user has: https://sheep.horse/2023/1/improved_web_component_for_pixel-...
Dithering has similar importance in digital audio. Dithered 8-bit audio sounds way better than non-dithered (harsh artifacts are replaced with tolerable white noise, and quiet details are preserved). Higher end digital equipment even applies dithering to high-bit samples, as do plug-ins in digital audio workstations.
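For anyone curious what that looks like in practice, the whole trick is adding roughly one LSB of triangular (TPDF) noise before rounding to the target bit depth. A small NumPy sketch (the tone and bit depth are just for illustration):

    import numpy as np

    def quantize_8bit(samples, dither=True):
        # samples: floats in -1..1; returns signed 8-bit integers.
        x = np.asarray(samples, dtype=np.float64) * 127.0
        if dither:
            # TPDF dither: sum of two uniform variables, about +/-1 LSB peak to peak.
            rng = np.random.default_rng()
            x = x + rng.uniform(-0.5, 0.5, x.shape) + rng.uniform(-0.5, 0.5, x.shape)
        return np.clip(np.round(x), -128, 127).astype(np.int8)

    # A very quiet 440 Hz tone: undithered it collapses into a crude 3-level staircase,
    # dithered it survives as a recognizable tone under a bed of low-level white noise.
    t = np.linspace(0, 1, 44100, endpoint=False)
    quiet = 0.01 * np.sin(2 * np.pi * 440 * t)
    harsh = quantize_8bit(quiet, dither=False)
    better = quantize_8bit(quiet, dither=True)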
Bresenham's line-drawing algorithm is another error-diffusion algorithm, except its goal is not to approximate colors that don't exist but to approximate lines at angles that are not multiples of 45 degrees on a pixel grid.
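The analogy shows up clearly in the code: the running error term tracks how far the rasterized line has drifted from the ideal one, and a step on the minor axis "pays back" that error, much like diffused quantization error does in dithering. The standard integer form, for reference:

    def bresenham(x0, y0, x1, y1):
        # err tracks the accumulated drift from the ideal line;
        # each step in x or y pays part of that error back.
        points = []
        dx, dy = abs(x1 - x0), -abs(y1 - y0)
        sx = 1 if x0 < x1 else -1
        sy = 1 if y0 < y1 else -1
        err = dx + dy
        x, y = x0, y0
        while True:
            points.append((x, y))
            if x == x1 and y == y1:
                break
            e2 = 2 * err
            if e2 >= dy:
                err += dy
                x += sx
            if e2 <= dx:
                err += dx
                y += sy
        return points

    print(bresenham(0, 0, 7, 3))  # the 3/7 slope quantized onto the pixel grid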
I use a Photoshop plugin for complex dithering (DITHERTONE Pro [0] -- this is NOT AN AD lol, I'm not the creator, just a happy customer and visual nerd)
I'm only dropping it in here because the marketing site for the plugin demonstrates a really interesting, full-color, wide spectrum of use cases for different types of dithering, beyond what we normally assume is dithering.

[0] https://www.doronsupply.com/product/dithertone-pro
Something I think about sometimes is how it's usually more important to maintain shape rather than color. For example, in the first 2 images on the page, quantizing the pink hearts results in some pink, white, and grey. An error diffusion alg will result in pink speckled with white and grey, whereas it might be preferable to have a solid pink that's slightly off color but has no speckles.
Are there existing techniques that do this sort of thing? I'm imagining something like running a median filter on the image, clustering the pixels in the colorspace, and then shifting/smudging clusters towards "convenient" points in the colorspace, e.g. the N points of the quantized palette and the N^2 points halfway between each pair. Then a partial-error-diffusion alg like Atkinson smooths out the final result.
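The clustering-and-snapping part of that idea is easy to prototype. Here's a rough sketch (SciPy's k-means assumed; no median filter or Atkinson pass, so it's only half the pipeline) that recolors each whole cluster with its nearest palette entry, giving a solid slightly-off pink instead of a speckle:

    import numpy as np
    from scipy.cluster.vq import kmeans2

    def snap_clusters(img, palette, k=16):
        # img: HxWx3 uint8; palette: Nx3. Returns an image using only palette colors,
        # one solid color per cluster.
        h, w, _ = img.shape
        pixels = img.reshape(-1, 3).astype(np.float64)
        centroids, labels = kmeans2(pixels, k, minit="++")
        pal = np.asarray(palette, dtype=np.float64)
        # Snap every cluster centroid to its nearest palette entry...
        nearest = np.argmin(((centroids[:, None, :] - pal[None, :, :]) ** 2).sum(-1), axis=1)
        # ...then recolor each pixel with its cluster's snapped color.
        return pal[nearest][labels].reshape(h, w, 3).astype(np.uint8)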
It's also worth mentioning noise-based dithering, where a noise pattern is added on top of the image and then rounding is performed. Usually some sort of blue noise is used for this approach.
The major dithering algorithm that's missing from this list is blue-noise dithering. This is very similar to "ordered dithering"; you can think of ordered dithering as either thresholding the pixel values with a different threshold value on each pixel, following a regular pattern, or as adding a different offset value to each pixel, following a regular pattern, and thresholding the result with a constant threshold. Blue-noise dithering replaces the regular pattern with a random pattern that's been high-pass filtered. This has all the advantages of ordered dithering, in particular avoiding "crawling" patterns during animation, but avoids the repetitive patterns and line artifacts it introduces.
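Both views boil down to the same couple of lines: tile a threshold texture over the image and compare per pixel; ordered vs. blue-noise dithering is just a different texture. A small grayscale sketch (NumPy and Pillow assumed; "bluenoise.png" stands in for any pregenerated blue-noise tile, e.g. the ones at momentsingraphics.de):

    import numpy as np
    from PIL import Image

    BAYER_4 = (np.array([[ 0,  8,  2, 10],
                         [12,  4, 14,  6],
                         [ 3, 11,  1,  9],
                         [15,  7, 13,  5]]) + 0.5) / 16.0

    def threshold_dither(gray, threshold_map):
        # gray: HxW floats in 0..1; threshold_map: any 2D tile of thresholds in 0..1.
        h, w = gray.shape
        th, tw = threshold_map.shape
        tiled = np.tile(threshold_map, (h // th + 1, w // tw + 1))[:h, :w]
        return (gray > tiled).astype(np.uint8) * 255

    gray = np.asarray(Image.open("input.png").convert("L"), dtype=np.float64) / 255.0
    ordered = threshold_dither(gray, BAYER_4)
    blue = threshold_dither(gray, np.asarray(Image.open("bluenoise.png").convert("L"),
                                             dtype=np.float64) / 255.0)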
Ulichney introduced blue noise to dithering in 01988 as a refinement of "white-noise dithering", also known as "random dithering", where you just add white noise before thresholding: https://cv.ulichney.com/papers/1988-blue-noise.pdf. Ulichney's paper is also a pretty comprehensive overview of dithering algorithms at the time, and he also makes some interesting observations about high-pass prefiltering ("sharpening", for example with Laplacians). Error-diffusion dithering necessarily introduces some low-pass filtering into your image, because the error that was diffused is no longer in the same place, and high-pass prefiltering can help. He also talks about the continuum between error-diffusion and non-error-diffusion dithering, for example adding a little bit of noise to your error-diffusion algorithm.
But Ulichney is really considering blue noise as an output of conventional error-diffusion algorithms; as far as I can tell from a quick skim, nowhere in his paper does he propose using a precomputed blue-noise pattern in place of the white-noise pattern for "random dithering". That approach has really only come into its own in recent years with real-time raytracing on the GPU.
An interesting side quest is Georgiev and Fajardo's abstract "Blue-Noise Dithered Sampling" from SIGGRAPH '16 http://web.archive.org/web/20170606222238/https://www.solida..., sadly now memory-holed by Autodesk. Georgiev and Fajardo attribute the technique to the 02008 second edition of Lau and Arce's book "Modern Digital Halftoning", and what they were interested in was actually improving the sampling locations for antialiased raytracing, which they found improved significantly when they used a blue-noise pattern to perturb the ray locations instead of the traditional white noise. This has a visual effect similar to the switch from white to blue noise for random dithering. They also reference a Ulichney paper from 01993, "The void-and-cluster method for dither array generation," which I haven't read yet, but which certainly sounds like it's generating a blue-noise pattern for thresholding images.
Lau, Arce, and Bacca Rodriguez also wrote a paper I haven't read about blue-noise dithering in 02008, saying, "The introduction of the blue-noise spectra—high-frequency white noise with minimal energy at low frequencies—has had a profound impact on digital halftoning for binary display devices, such as inkjet printers, because it represents an optimal distribution of black and white pixels producing the illusion of a given shade of gray," suggesting that blue-noise dithering was already well established in inkjet-land long before it became a thing on GPUs.
Maxime Heckel has a nice interactive WebGL demo of different dithering algorithms at https://blog.maximeheckel.com/posts/the-art-of-dithering-and..., with mouse-drag orbit controls, including white-noise dithering, ordered dithering, and blue-noise dithering. Some of her examples are broken for me.
Does anyone have a primer on multi-color dithering? I made a fun dither like program for monotone style dithering, but I'm not really sure how to adapt it to color palettes with more than two tones.
Ulichney (who wrote the book on halftoning) came up with ordered dithering matrices that give much nicer results than Bayer's, are as good as error diffusion, and are parallelizable. Look up "void and cluster".
If you google 'pregenerated blue noise' you find plenty of them: https://momentsingraphics.de/BlueNoise.html
https://doodad.dev/dither-me-this/
2016 (199 points, 61 comments) https://news.ycombinator.com/item?id=11886318
2017 (125 points, 53 comments) https://news.ycombinator.com/item?id=15413377
https://nelari.us/post/quick_and_dirty_dithering/ is the best quick introduction to the technique that I've seen. There's a more comprehensive introduction at https://momentsingraphics.de/BlueNoise.html. https://bartwronski.com/2016/10/30/dithering-part-three-real... also demonstrates it, comparing it to other dithering algorithms.
It's probably worth mentioning the redoubtable https://surma.dev/things/ditherpunk/ and the previous discussion here: https://news.ycombinator.com/item?id=25633483.
Important for lo-fi displays and printing etc
I do think that well-dithered images looked better in some texts than colour images, which had more wow but were more distracting.
on topic: https://surma.dev/things/ditherpunk/ is a great companion read to the subject