Dither is random or pseudo-random noise added to a signal in order to mask quantization distortion and/or extend dynamic range. The simplest dither is low-level white noise, but more sophisticated forms use noise shaping to push the added noise into frequency ranges where hearing is least sensitive, and they can even be effectively inaudible.
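As an illustration of noise-shaped dither, here is a minimal Python sketch (the function name and conventions are illustrative, not from any particular product): TPDF dither is applied inside a first-order error-feedback loop, which pushes the quantization error toward high frequencies.

```python
import numpy as np

def noise_shaped_quantize(x, rng=None):
    """Quantize samples (in units of one quantization step) to integers
    using TPDF dither plus first-order error feedback.

    Illustrative sketch: subtracting the previous sample's quantization
    error before rounding gives the error a high-pass (1 - z^-1) shape.
    """
    if rng is None:
        rng = np.random.default_rng()
    out = np.empty(len(x), dtype=np.int64)
    err = 0.0  # quantization error of the previous sample
    for i, s in enumerate(x):
        # TPDF dither: sum of two uniform variables, 2 steps peak-to-peak
        d = rng.uniform(-0.5, 0.5) + rng.uniform(-0.5, 0.5)
        w = s - err               # error feedback (first-order shaping)
        out[i] = int(np.rint(w + d))
        err = out[i] - w          # error carried into the next sample
    return out
```

Quantizing silence with this sketch and inspecting the spectrum of the output shows the noise energy concentrated near the top of the band, which is why noise-shaped dither can be far less audible than flat white-noise dither at the same total power.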
Bit depth reduction
A common use for dither is to improve the perceived audio quality when converting a digital signal from a higher bit depth to a lower one, e.g. from 24-bit to 16-bit.
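A minimal Python sketch of such a conversion (function name and scaling conventions are my own assumptions): TPDF dither with a peak amplitude of one 16-bit LSB is added before rounding away the low 8 bits.

```python
import numpy as np

def dither_to_16bit(samples_24bit, rng=None):
    """Reduce signed 24-bit integer samples to 16-bit with TPDF dither.

    Illustrative sketch: one 16-bit LSB equals 256 steps of the 24-bit
    input, so the dither spans +/- 128 of those steps (2 LSB total,
    triangular PDF: the sum of two independent uniform variables).
    """
    if rng is None:
        rng = np.random.default_rng()
    step = 256  # one 16-bit LSB expressed in 24-bit units (2**8)
    tpdf = (rng.uniform(-step / 2, step / 2, samples_24bit.shape)
            + rng.uniform(-step / 2, step / 2, samples_24bit.shape))
    dithered = samples_24bit.astype(np.float64) + tpdf
    # Quantize to the 16-bit grid, then clip to the valid 16-bit range.
    q = np.round(dithered / step)
    return np.clip(q, -32768, 32767).astype(np.int16)
```

The dither randomizes the rounding decision, so the total error averages to zero and is uncorrelated with the program material, at the cost of a slightly higher (but benign) noise floor.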
From a HydrogenAudio forum post by AndyH-ha:
"When you reduce bit depth there will be resulting distortion. This has two aspects, the change (error) in the waveform itself and an unpleasant addition to the sound because of the error. The waveform error is expressed in the output signal as noise, but noise related to (correlated to) the audio from which it is derived. This is a quality of noise that is generally found to be unpleasant.
"Adding dither before reducing the bit depth randomizes the error. This completely eliminates the unpleasant sound aspect of the error. Instead it will result in a benign white noise kind of background sound. There is thus new noise from two sources in the bit depth reduced audio. The first is the dither noise added prior to bit reduction. The second is the error noise (quantization error) of the bit reduction process. Without the first, randomizing noise, the second noises add an unpleasant aspect to the audio. With the dither, the total noise is much less objectionable than the quantization noise alone.
"This is mathematically correct, and observable with the proper equipment. It is doctrinally correct in the church of quality audio: always add dither when reducing the bit depth. However, I challenge you or anyone else to be able to ABX any real 16 bit music, dithered against non-dithered — unless perhaps some really bad software is used to do the bit reduction. Going to lower bit depths, such as to 8 bit, frequently makes the dither vs non-dither difference obvious and so this is the way it is demonstrated."
Is dither really necessary?
Going from 24-bit to 16-bit, the quantization error is very small and the resulting distortion/noise is extremely unlikely to be heard in any real music at usable listening levels. Since the error is inaudible in practice, whether dither must be used is more a matter of doctrine than of audible benefit.
Many sources (e.g. cassettes and LPs) already carry considerable noise, such as tape hiss. Even the best live recordings pick up some noise from the equipment, especially microphone preamplifiers. This noise might not make ideal dither, but it acts in the same way, largely de-correlating the quantization error from the signal.
Anyone who believes dither is always necessary for conversions to 16-bit should submit, on the HydrogenAudio forums, samples that can be discriminated in blind testing.