Huffman coding

Most [[lossy]] audio encoders use the classic Huffman algorithm. It is applied at the end of compression to encode the information (entropy coding); it is therefore not a compression algorithm itself but a coding method. It creates variable-length codes on a whole number of bits: higher-probability symbols get shorter codes. Huffman codes have the prefix property, meaning no code is the prefix of another, so they can be decoded unambiguously despite their variable length. Decoding is very fast (via a lookup table). On average, this kind of coding saves a bit less than 20% of space.
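
As an illustration, here is a minimal sketch of Huffman code construction in Python. It is a generic textbook implementation written for this article, not code from any particular encoder; the function name huffman_codes is ours.

 import heapq
 from collections import Counter
 
 def huffman_codes(data):
     # Count symbol frequencies and start with one leaf per symbol.
     # Each heap entry is (frequency, tiebreak, {symbol: code-so-far});
     # the tiebreak keeps tuple comparison away from the dicts.
     heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(Counter(data).items())]
     heapq.heapify(heap)
     tiebreak = len(heap)
     if len(heap) == 1:  # degenerate case: only one distinct symbol
         return {s: "0" for s in heap[0][2]}
     while len(heap) > 1:
         # Merge the two least frequent subtrees; their codes grow by
         # one leading bit, so rarer symbols end up with longer codes.
         f1, _, left = heapq.heappop(heap)
         f2, _, right = heapq.heappop(heap)
         merged = {s: "0" + c for s, c in left.items()}
         merged.update({s: "1" + c for s, c in right.items()})
         heapq.heappush(heap, (f1 + f2, tiebreak, merged))
         tiebreak += 1
     return heap[0][2]
 
 codes = huffman_codes("abracadabra")
 print(codes)  # e.g. {'a': '0', 'c': '100', 'd': '101', 'b': '110', 'r': '111'}
 bits = sum(len(codes[s]) for s in "abracadabra")
 print(bits, "bits instead of", 8 * len("abracadabra"))  # 23 bits instead of 88

Note that no code in the table is a prefix of another, which is what lets the decoder read the bit stream unambiguously without separators.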

It is an ideal complement to perceptual coding: during dense polyphonic passages, perceptual coding is very efficient because many sounds are masked or attenuated, but little of the remaining information is repetitive, so the Huffman algorithm gains little there. During "pure" sounds there are few [[masking]] effects, but Huffman coding is then very efficient, because the digitized sound contains many repetitive byte patterns that can be replaced by shorter codes.
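
This effect can be seen by feeding the sketch above two symbol streams, one with the skewed distribution typical of a "pure" sound and one with a nearly flat distribution (toy strings, not real audio data):

 # Reuses huffman_codes() from the sketch above.
 skewed = "aaaaaaaabbbbcc"  # repetitive, like a "pure" sound
 flat = "abcdefgabcdefg"    # nearly uniform distribution
 for name, data in (("skewed", skewed), ("flat", flat)):
     codes = huffman_codes(data)
     bits = sum(len(codes[s]) for s in data)
     print(name, bits, "bits for", len(data), "symbols")
 # The skewed stream compresses to 20 bits, the flat one needs 40:
 # Huffman coding only pays off when some symbols are much more common.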

text © Gabriel Bouvigne - http://mp3-tech.org/