Data Compression Codes, Lossy

Introduction. In many data compression applications, one does not insist that the reconstructed data (= decoder output) be absolutely identical to the original source data (= encoder input). Such applications typically involve image or audio compression, or the compression of various empirical measurements.

Small reconstruction errors may be indistinguishable to a human observer, or small compression artifacts may be tolerable to the application. For example, the human eye and ear are not equally sensitive to all frequencies, so information can be removed from the less important frequencies without visible or audible effect.

The advantage of lossy compression stems from the fact that allowing reconstruction errors typically yields much higher compression rates than lossless methods admit. This higher compression rate is understandable because several similar inputs are now represented by the same compressed file; in other words, the encoding process is not one-to-one. Hence, fewer distinct compressed files are needed, which reduces the number of bits required to specify each one.

It is also natural that a tradeoff exists between the amount of reconstruction error and the compression rate: Allowing a larger error results in higher compression. This tradeoff is studied by a subfield of information theory called rate-distortion theory.

Information theory and rate-distortion theory can be applied to general types of input data. In the most important lossy data compression applications, the input data consists of numerical values, for example, audio samples or pixel intensities in images. In these cases, a typical lossy compression step is the quantization of numerical values.

Quantization refers to the process of mapping numerical values into a smaller range of values. Rounding an integer to the closest multiple of 10 is a typical example of quantization. Quantization can be performed on each number separately (scalar quantization), or several numbers can be quantized together (vector quantization). Quantization is a lossy operation, as several numbers are represented by the same value in the reduced range.
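
To make this concrete, the following Python sketch (an illustration with invented sample values, not taken from the article) rounds each input to the nearest multiple of a step size; note how several distinct inputs collapse to the same quantized value:

    import numpy as np

    def quantize(values, step):
        """Scalar quantization: round each value to the nearest multiple of step."""
        return step * np.round(np.asarray(values, dtype=float) / step)

    samples = np.array([17.0, 23.0, 41.0, 44.0, 58.0])
    print(quantize(samples, 10))   # -> [20. 20. 40. 40. 60.]  (17 and 23 both map to 20)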

In compression, quantization is not performed directly on the original data values. Rather, the data values are first transformed in such a way that quantization provides the maximal compression gain. Commonly used transformations are orthogonal linear transformations (e.g., the discrete cosine transform, DCT).
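
As an illustration of why such transforms help (a minimal sketch under invented data, not the article's own example), the following code builds the orthonormal DCT-II matrix and applies it to a slowly varying signal; the signal energy concentrates in the first, low-frequency coefficients, which later quantize well:

    import numpy as np

    def dct_matrix(n):
        """Orthonormal DCT-II matrix; rows are the cosine basis vectors."""
        k = np.arange(n).reshape(-1, 1)
        i = np.arange(n).reshape(1, -1)
        m = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
        m[0] /= np.sqrt(2.0)
        return m

    C = dct_matrix(8)
    x = np.array([12.0, 12, 13, 14, 14, 13, 12, 12])  # slowly varying samples
    print(np.round(C @ x, 2))                # energy sits in the first coefficients
    print(np.allclose(C @ C.T, np.eye(8)))   # orthogonality check -> True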

Prediction-based transformations are also common: Previous data values are used to form a prediction for the next data value, and the difference between the actual value and the predicted value is the transformed number that is quantized and then included in the compressed file. The purpose of these transformations is to create numerical representations in which many data values are close to zero and, hence, will be quantized to zero.
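
The sketch below shows one simple prediction-based scheme, differential coding with a quantized residual (the step size and data are invented for the example). The encoder predicts each sample by the previous reconstructed value, so it stays synchronized with the decoder:

    def dpcm_encode(samples, step):
        """Quantize the difference between each sample and its prediction."""
        residuals = []
        prediction = 0.0
        for s in samples:
            r = step * round((s - prediction) / step)  # quantized residual
            residuals.append(r)
            prediction += r            # the decoder will form the same value
        return residuals

    def dpcm_decode(residuals):
        values, prediction = [], 0.0
        for r in residuals:
            prediction += r
            values.append(prediction)
        return values

    data = [100, 102, 104, 103, 101, 100]
    res = dpcm_encode(data, step=2)
    print(res)               # after the first sample, the residuals are small
    print(dpcm_decode(res))  # reconstruction is close to the original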

The quantized data sets have highly uneven distributions, including many zeros, and can therefore be compressed effectively using entropy coding methods such as Huffman or arithmetic coding. Notice that quantization is the only operation that destroys information in these algorithms; otherwise, they consist of lossless transformations and entropy coders.
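
As a small demonstration (invented data and a minimal construction, not a production coder), the following sketch builds a Huffman code for a run of quantized coefficients dominated by zeros; the frequent zero symbol receives the shortest codeword, which is where the compression gain comes from:

    import heapq
    from collections import Counter

    def huffman_codes(symbols):
        """Build a Huffman code table from symbol frequencies."""
        heap = [[freq, idx, {sym: ""}]
                for idx, (sym, freq) in enumerate(Counter(symbols).items())]
        heapq.heapify(heap)
        while len(heap) > 1:
            lo = heapq.heappop(heap)   # the two least frequent subtrees
            hi = heapq.heappop(heap)
            for sym in lo[2]:
                lo[2][sym] = "0" + lo[2][sym]
            for sym in hi[2]:
                hi[2][sym] = "1" + hi[2][sym]
            heapq.heappush(heap, [lo[0] + hi[0], lo[1], {**lo[2], **hi[2]}])
        return heap[0][2]

    data = [0, 0, 0, 0, 0, 0, 3, 0, 0, -2, 0, 0, 0, 1, 0, 0]
    codes = huffman_codes(data)
    print(codes)   # the zero symbol gets a 1-bit codeword
    print(sum(len(codes[s]) for s in data), "bits vs", 2 * len(data), "fixed-length bits")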

For example, the widely used lossy image compression algorithm JPEG is based on a DCT transformation of 8 x 8 image blocks and scalar quantization of the transformed values. The DCT separates the different frequencies, so the transformed values are the frequency components of the image blocks.

Taking advantage of the fact that the human visual system is more sensitive to low frequencies than to high ones, the high-frequency components are quantized more heavily than the low-frequency ones. Combined with the fact that typical images contain more low-frequency than high-frequency information, the quantization results in very few nonzero high-frequency components. The components are ordered according to frequency and entropy coded using Huffman coding.
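
The sketch below imitates this pipeline on a single 8 x 8 block (the quantization matrix Q is a made-up example rather than a standard JPEG table, and steps such as the level shift and zigzag ordering are omitted). A smooth block yields only a handful of nonzero quantized coefficients:

    import numpy as np

    def dct_matrix(n):
        """Orthonormal DCT-II matrix; rows are the cosine basis vectors."""
        k = np.arange(n).reshape(-1, 1)
        i = np.arange(n).reshape(1, -1)
        m = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
        m[0] /= np.sqrt(2.0)
        return m

    C = dct_matrix(8)
    # A smooth 8 x 8 block (a diagonal brightness ramp), as in natural images.
    block = 100.0 + 4.0 * np.add.outer(np.arange(8), np.arange(8))
    coeffs = C @ block @ C.T    # 2-D DCT: transform the rows, then the columns
    # Made-up quantization matrix: step sizes grow with frequency, so the
    # high-frequency components are quantized more coarsely than the low ones.
    Q = 8 + 4 * np.add.outer(np.arange(8), np.arange(8))
    quantized = np.round(coeffs / Q).astype(int)
    print(np.count_nonzero(quantized), "of 64 quantized coefficients are nonzero")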

The following sections discuss in more detail the basic ideas of lossy information theory, rate-distortion theory, scalar and vector quantization, prediction-based coding, and transform coding. The image compression algorithm JPEG is also presented.
