MXPA99005602A - Pixel block compression apparatus in an image processing system - Google Patents

Pixel block compression apparatus in an image processing system

Info

Publication number
MXPA99005602A
MXPA99005602A · MXPA/A/1999/005602A · MX9905602A
Authority
MX
Mexico
Prior art keywords
pixel
data
block
pixels
compressed
Application number
MXPA/A/1999/005602A
Other languages
Spanish (es)
Inventor
Barth Alan Canfield
Waiman Lam
Billy Wesley Beyers Jr.
Haoping Yu
Original Assignee
Thomson Licensing Sa
Application filed by Thomson Licensing S.A.
Publication of MXPA99005602A

Abstract

A memory efficient image processor (20) receives DPCM prediction error values from decompressed MPEG coded digital video signals in the form of pixel blocks containing luminance and chrominance data in a 4:2:2 or 4:2:0 format and recompresses the pixel blocks to a predetermined resolution. Luminance and chrominance data are processed with different compression laws during recompression. Luminance data are recompressed to an average of six bits per pixel, and only a reference pixel and one other pixel are processed separately from all other luminance pixels in a block. Chrominance data are recompressed to an average of four bits per pixel. Each pixel block is stored with overhead information facilitating efficient and accurate reconstruction. Accurate pixel reconstruction is facilitated by processing a reference pixel accurately (31); scaling the pixel block (28); employing quantization tables (28) which are symmetrical and fitted to the domain of the pixel block; biasing negative prediction error values (27) to positive values; using short codewords in quantization tables (28) at levels which are most likely to occur statistically; and processing each pixel with three-, four- or five-bit quantization (28) to ensure maximum resolution and an overall four-bit average for the pixel block.

Description

PIXEL BLOCK COMPRESSION APPARATUS IN AN IMAGE PROCESSING SYSTEM

This invention concerns apparatus for reducing the memory requirements of a digital video processor. In particular, the invention describes apparatus for accurately compressing pixel (picture element) information before it is stored in memory. Efficient use of memory in the design and operation of image processors is important. For example, consumer products such as television systems can use image processors that include MPEG-2 signal processing. The MPEG (Moving Picture Experts Group) signal compression standard (ISO/IEC 13818-2, May 10, 1994) is a widely accepted image processing standard that is particularly attractive for use with satellite, cable and terrestrial transmission systems employing high definition television (HDTV) processing, among other forms of image processing. Products that use high definition displays require 96 Mbits or more of memory to temporarily store decoded MPEG frames before display. An MPEG processor requires these frames for motion compensation calculations to reconstruct accurate images for display.
Systems that reconstruct images from decoded MPEG picture elements (pixels, or pels) use Differential Pulse Code Modulation (DPCM). In DPCM processing, a generator produces a prediction value that anticipates the next pixel value. A summing network subtracts the prediction from the current pixel value, producing a difference that is used to represent the video data. This difference, known as a prediction error, is generally smaller than the data values, so processing the difference rather than the original pixel value reduces the bandwidth requirements of the system. The prediction error can be a positive or a negative value. Ang et al., "Video Compression Makes Big Gains," IEEE Spectrum, October 1991, describe an MPEG encoder and decoder.

Memory-efficient image processors use less memory to store image frames by re-encoding (recompressing) the block data before storage. In the spatial domain, reducing the number of bits per pixel used to store image frames adversely affects image quality if the pixels cannot be reconstructed exactly to their original values. Artifacts can occur, especially in flat areas of the image. Memory reduction image processors should quantize and dequantize the decoded MPEG signal accurately, as efficiently and economically as possible. It is known to take advantage of the limitations of human visual perception and to process luminance and chrominance data differently. In U.S. Patent No. 4,575,749, Acampora et al. describe optimizing the compression laws for each type of data to account for the energy and frequency components in the data, as well as for what the human eye can perceive. Acampora is directed to amplitude compression for reducing noise in television signals prior to transmission. Display formats such as 4:2:2 and 4:2:0 also describe compression of video data in which luminance and chrominance data have been processed differently. The 4:2:2 and 4:2:0 format ratios indicate that a chroma data block contains one-half or one-fourth, respectively, of the amount of information contained in a luminance data block. However, once the video data is received in a display processor, the data is represented as n-bit pixel data. The known compression techniques above do not address compression within the display processor.
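As an aid to the discussion, the short Python sketch below illustrates the DPCM principle just described in its simplest, lossless form: each pixel is predicted from its predecessor and only the prediction error is passed on. Names and structure are illustrative only; in the apparatus described later, the predictions are formed from reconstructed (quantized) values rather than from the original pixels.

# Minimal DPCM sketch (illustrative; not taken from the patented apparatus).
def dpcm_encode(pixels, first_prediction):
    """Return the prediction errors for a sequence of pixel values."""
    errors, prediction = [], first_prediction
    for x in pixels:
        errors.append(x - prediction)   # prediction error; may be negative
        prediction = x                  # lossless case: next prediction is the pixel itself
    return errors

def dpcm_decode(errors, first_prediction):
    """Rebuild the pixel values by accumulating the prediction errors."""
    pixels, prediction = [], first_prediction
    for e in errors:
        prediction += e
        pixels.append(prediction)
    return pixels

block = [100, 102, 101, 99, 98]
errs = dpcm_encode(block, first_prediction=100)      # [0, 2, -1, -2, -1]
assert dpcm_decode(errs, first_prediction=100) == block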
Within a display processor, luminance and chrominance data may be processed separately, but not with respect to recompression. An example of a display processor processing luminance and chrominance data differently would be converting 4:2:2 or 4:2:0 data to line data of a predetermined scanning pattern, in which not every pixel is defined with chrominance information. However, this has nothing to do with compressing or recompressing the data. Until the MPEG format became available, there was little concern about memory allocation for a display processor, because there was no need to compute an image frame from motion vectors or motion compensation information. With the advent of the MPEG format, multiple frames of pixel data associated with the reconstruction of image frames must be stored in the display processor. Copending application Serial No. 08/579,129 discloses recompression of pixel data, as received by the display processor, before storage in the frame memory. More specifically, because chrominance data is commonly defined by fewer pixels (bit limited) compared to luminance data (e.g., in the 4:2:2 or 4:2:0 formats), further compression or recompression of the chrominance data is contraindicated. Compressing or recompressing the chrominance data, such as by quantization, seriously compromises the ability to accurately reconstruct the original chrominance data for display, resulting in reduced image quality. The memory reduction requirements of display processors, such as can be achieved by recompressing luminance and chrominance pixel data before storage in the frame memory, and the need to reconstruct image data exactly for display, are competing interests. This is particularly true in the case of a high definition system, such as HDTV, where details are clearly visible in the display. The present invention recognizes the desirability of providing an efficient data reduction system, employing minimal hardware and software, that saves memory and reduces the physical size of the processor while minimizing artifacts introduced into the reconstructed image. The system described here addresses these problems by processing luminance and chrominance data differently, in accordance with the principles of the present invention.
A memory-efficient image processor in accordance with the present invention receives a digital data stream of MPEG-formatted video data. The MPEG data is decoded and decompressed and presented to a processor as pixel blocks of luminance and chrominance image data. The luminance and chrominance data are recompressed to a predetermined number of bits per pixel block, where each pixel representation is allocated an average number of bits for storage in the frame memory. The average number of bits per pixel representation is at least one bit less for the chrominance data than for the luminance data.
Brief Description of the Drawings

Figure 1 is a block diagram of a pixel block processor including a system in accordance with the principles of the present invention. Figure 2 shows details of the compression portion of the system of Figure 1. Figure 3 depicts a packet data format suitable for use in a system including the present invention. Figure 4 shows details of the decompression portion of the system of Figure 1.
Figure 5A shows details of the quantization mapping portion of Figure 2. Figure 5B is a truth table for the selection block of Figure 5A. Figures 6A, 6B and 6C are three-bit, four-bit and five-bit quantization/dequantization tables, respectively. Figure 7 shows apparatus for producing the symmetrical dequantization tables. Figure 8 is a table showing quantization overhead bits. Figures 9A, 9B and 9C depict a flow chart of an encoding controller in accordance with the principles of the present invention. Figure 10 is a block diagram of an MPEG-compatible television system employing the present invention.

As an introduction, an exemplary embodiment of the invention is briefly described before its elements are detailed. The exemplary embodiment compresses image element (pixel) data from eight-bit values to four-bit values for the chrominance data. This is a lossy 16-to-1 decrease in resolution, which would ordinarily result in severe degradation of video image quality. The techniques included in this embodiment permit accurate reconstruction of the data. A memory-efficient image processor quantizes the DPCM prediction error values for the chrominance data components of the pixel blocks. The luminance data is compressed with a six-bit, 64-level quantization table, while the chrominance data is compressed with a set of three-, four- and five-bit quantization tables constructed for, and accessed by, a range selected from a set of predetermined ranges. A reference pixel from each pixel block is compressed differently from the other pixels to achieve initial accuracy in the prediction network. Block parameters are determined, encoded and stored with the compressed pixel block to facilitate reconstruction. The quantization tables output short codeword symbols at the levels most likely to be accessed statistically, thereby compensating for the physical memory space used to store the block parameters. Pixels are processed individually to ensure maximum resolution and an overall average of four bits per pixel, including the block parameters. Prior to quantization, negative prediction error values are biased to positive values within the span of the quantizer. In this way the quantizer receives only positive values and the tables include only positive decision points. Symmetrical tables allow the table midpoints and one half of each table to reside in ROM, while the other half is mapped by circuitry.

In practice, a television receiver can include an MPEG decoder. A data reduction network quantizes a decoded and decompressed MPEG signal representing image blocks prior to storage in the frame memory, and the blocks are reconstructed as needed for image display. A display device displays image data derived from the frame memory. The data received and processed by the network is a 1920x1080 pixel, 4:2:2 or 4:2:0 high definition video signal. The luminance data is divided into 8x8 pixel blocks in the spatial domain, with the chroma data divided according to the particular format. The network processes the pixel block data as described above. For each luminance pixel block, the first pixel is shifted to seven bits, discarding the least significant bit.
The last pixel is quantized with the five-bit quantization table provided for the range of 256. All other pixels are quantized with a six-bit quantization table. The overall result is recompression to six bits per pixel. For the chrominance data, the network scans a pixel block and determines the range and the maximum and minimum pixel values for the block. Predetermined representative values are substituted for the range and minimum pixel values and are stored with the reference pixel value as a header to the data. The reference pixel can be the first pixel of the block, for example. A controller uses registers for each chrominance pixel block and selects a three-, four- or five-bit quantization table to process each pixel and maintain an average of four bits per pixel after compression. Three-bit symbols at selected levels of the four- and five-bit tables compensate for the bits needed to store the header. The three-bit symbols reside at the levels where the input data is most likely to occur, minimally affecting the compression resolution. If excess bits are saved by outputting three-bit symbols, higher-resolution five-bit symbols are output. Similarly, if not enough bits are saved, a three-bit table is accessed to maintain an average of four bits per pixel, including the header. The luminance (luma) data is reduced by 25 percent and the chrominance (chroma) data by 50 percent relative to the pixel block available after the received transport stream data is decompressed. With the described system, the bit-limited chroma data can be recompressed with fewer bits than the recompressed luminance data without adversely affecting image quality. The foregoing system facilitates accuracy during pixel reconstruction.

The following description is an example in the context of an MPEG-2 compatible high definition television receiver, to aid explanation of the invention. The described system permits fixed-length compression on a data-block-by-data-block basis for a given or selected compression ratio. Storing a fixed-length data block allows random access to the block. The fixed-length data block is obtained using a combination of features described in this document. The system may be employed in accordance with the principles of the present invention to effectively compress and decompress block data from any source and is not limited to television receivers.
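To make the storage budget implied by these figures concrete, the following sketch computes the bits reserved for one block under the stated averages. The block sizes and bit averages come from the description; the helper function itself is purely illustrative.

# Illustrative storage-budget arithmetic: luma averages 6 bits/pixel after
# recompression, chroma averages 4 bits/pixel including the block overhead (header).
def block_budget_bits(width, height, avg_bits_per_pixel):
    return width * height * avg_bits_per_pixel

luma_bits  = block_budget_bits(8, 8, 6)   # 8x8 luma block -> 384 bits (512 bits at 8 bpp)
chroma_422 = block_budget_bits(8, 4, 4)   # 8x4 chroma block (4:2:2) -> 128 bits, header included
chroma_420 = block_budget_bits(4, 4, 4)   # 4x4 chroma block (4:2:0) ->  64 bits, header included
print(luma_bits, chroma_422, chroma_420)  # 384 128 64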
In Figure 1, a decoder, e.g., an MPEG decoder (not shown), provides a decoded MPEG pixel data block to input 10 of a memory reduction processor that includes compressor 12. Compressor 12 includes a predictor 18, a quantizer 20 and a combiner 22. Predictor 18 employs well-known principles and may be of the type described by Jain, "Fundamentals of Digital Image Processing," Prentice-Hall, p. 484 (1989), for example. Quantizer 20 provides a reduced pixel data block to memory 14. When a display processor (not shown) accesses the reduced data block in memory 14 to display an image, decompressor 16 reconstructs the original data block. Decompressor 16 includes predictor 24 and dequantizer 26 for retrieving the reduced data from memory 14 and reconstructing the reduced data block. Quantizer 20 and dequantizer 26 are configured in accordance with the principles of the present invention, as will be discussed below. Predictor 24 is similar to predictor 18.

Input 10 of compressor 12 receives a block from an MPEG decoder, which will be discussed in association with Figure 10. The pixel block is in the spatial domain and comprises an 8x8 image pixel block representing luminance (luma) data, for example. In a system processing 4:2:2 data, the chrominance (chroma) data comprises an 8x4 image pixel block, and in a system processing 4:2:0 data the chrominance data comprises a 4x4 image pixel block, for example. Input 10 supplies the pixel block data to a non-inverting input of combiner 22 and to quantizer 20. Predictor 18 supplies pixel prediction data to an inverting input of combiner 22 and to quantizer 20. Combiner 22 combines the signals at its non-inverting and inverting inputs and provides the difference to quantizer 20. Quantizer 20 outputs quantized image values to predictor 18 and quantized prediction error values to memory 14 for storage.

The luminance data is processed differently from the chrominance data. Each pixel of the luminance pixel block is allocated six bits of storage space in memory 14, on average. Quantizer 20 selects a reference pixel from a received pixel block. The reference pixel can be the first pixel of the block, for example. The reference pixel is bit-shifted to the right and stored in a predetermined location with the remaining quantized pixels of the block in memory 14. One other pixel of the block is processed differently from both the reference pixel and all remaining pixels of the pixel block. This other pixel can be the last pixel of the pixel block, for example. Compressing this pixel with five bits compensates for the processing of the reference pixel: saving one bit at the last pixel position maintains the six-bit average, since the first pixel was compressed using seven bits. If the level accessed in the five-bit table contains a short codeword, the codeword is padded with zeros to five bits. All other pixels of the luminance pixel block are compressed using a 64-level, six-bit quantization table. The quantization table is designed to accept only positive DPCM prediction error values. The design details of this table are the same as for all quantization tables in this system, and will be discussed later.
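A minimal sketch of the luminance bit allocation just described follows. The two quantizer functions are placeholders standing in for the 64-level six-bit table and the five-bit table (their contents are not reproduced here), and in the real apparatus they operate on DPCM prediction errors rather than raw pixel values; only the bit bookkeeping is shown.

# Sketch of the luma bit allocation: 7 bits for the reference pixel, 5 bits for
# the last pixel, 6 bits for every other pixel -> a six-bit average per block.
def quantize_6bit(value):   # placeholder for the 64-level, six-bit table lookup
    return 0

def quantize_5bit(value):   # placeholder for the five-bit table (range 256)
    return 0

def compress_luma_block(pixels_8x8):
    codewords, lengths = [], []
    codewords.append(pixels_8x8[0] >> 1); lengths.append(7)     # reference pixel, LSB discarded
    for p in pixels_8x8[1:-1]:
        codewords.append(quantize_6bit(p)); lengths.append(6)   # interior pixels
    codewords.append(quantize_5bit(pixels_8x8[-1])); lengths.append(5)  # last pixel
    assert sum(lengths) == 6 * len(pixels_8x8)                  # six-bit average maintained
    return codewords, lengths

compress_luma_block(list(range(64)))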
The biasing of negative prediction error values to ensure positive input values to the quantization tables, which is the same for both luma and chroma data, will also be discussed below. Chroma data is processed and compressed differently from luma data. Figure 2 illustrates quantizer 20 in greater detail as it pertains to the chroma data. The same reference numbers identify common elements in Figures 1 and 2. Specifically, quantizer 20 includes prediction error processor 27, quantization mapper 28, coding controller 29, minimum-maximum range processor (MMRP) 30, first pixel processor 31 and multiplexer 32. Input 10 provides the pixel block data to MMRP 30, which scans the pixel block and determines the minimum pixel value, the maximum pixel value and the range for the block. MMRP 30 selects a predetermined range from a set of predetermined ranges as a function of the actual range, and substitutes the selected predetermined range for the actual range for subsequent use within the network. MMRP 30 assembles the minimum, maximum and predetermined range block parameter values and transfers them to multiplexer 32. The minimum pixel value and the range are also transferred to first pixel processor 31, and the predetermined range is transferred to prediction error processor 27, as will be discussed later.

Prediction error processor 27 receives the prediction error data from combiner 22 and biases the negative values with the selected predetermined range. Quantization mapper 28 receives the biased and unbiased prediction error values from prediction error processor 27, quantizes them, and sends them to multiplexer 32. Quantization mapper 28 also outputs the quantized prediction error values to predictor 18, which uses them to calculate prediction data. Multiplexer 32 sends the block parameters and the quantized data to memory 14 under timing and control to be discussed later. The block parameters represent overhead data, which is stored in memory 14 within a parameter field associated with the quantized pixel block. The parameter field and the quantized data together form a packet that consolidates all the information decompressor 16 needs to access the appropriate dequantization tables and reconstruct the pixel block. Coding controller 29 monitors the transfer of the block parameters and the compressed data, as well as the selection of the quantization tables for the individual pixel blocks, as will be discussed later.

First pixel processor 31 receives the pixel block from input 10 and identifies a predetermined reference pixel value. The minimum pixel value of the block, received from MMRP 30, facilitates compressing the reference pixel independently of the other pixels of the block. The compressed reference pixel is represented with enough bits for dequantizer 26 to reconstruct its original value in a lossless or nearly lossless manner. First pixel processor 31 passes the compressed reference pixel value as a block parameter to multiplexer 32, which transfers the block parameters, including the reference pixel value, and the quantized data to memory 14. Dequantizer 26 uses the reference pixel as a prediction value for the pixels of the quantized block during pixel decompression.
Because the first value used in the prediction network during decompression (the reference pixel value) is independent, a given pixel block can be decompressed without information from other pixel blocks. This value is also accurate, which eliminates propagating prediction error in the reconstructed data. The reference pixel is compressed using the minimum value of the pixel block as a predictor to derive the compressed value. The minimum value is subtracted from the reference value and the difference is divided by two. The result is stored in memory 14 with one bit fewer than is necessary for a binary representation of the predetermined range. The predetermined range predefines the number of bits used to store the compressed reference pixel value because, when pixel values of the block are used as predictors for other values in the same pixel block, the difference between any two pixel values of the block, such as the reference and minimum pixel values, falls within the domain of the range. The compressed reference value uses one bit fewer than necessary to represent the range because the difference is divided by two, which reduces the number of bits required for a binary representation by one.

Quantizer 20 and dequantizer 26 access quantization and dequantization tables, respectively, that are optimized for each block. The quantization and dequantization tables include values based on an approximate range of the pixel block. MMRP 30 receives a block of input data and scans it to determine the minimum and maximum pixel values. MMRP 30 then subtracts the minimum pixel value from the maximum pixel value and adds one (max - min + 1) to calculate the range for the pixel block. Quantizer 20 compares the calculated range with a set of predetermined ranges, at least one of which is greater than or equal to the calculated range, selects a predetermined range, and accesses quantization tables derived from the selected predetermined range. The predetermined range is selected by a best-fit analysis that identifies the predetermined range which is the smallest value of the set that is greater than or equal to the calculated actual range. The quantization and dequantization tables are constructed to include values within the domain of the selected predetermined range, and thus to include the values of the entire actual range. Quantizer 20 employs DPCM processing and produces difference values, which are prediction errors. These prediction errors lie within the domain of the actual range if the pixel values supplied by predictor 18 come from the same pixel block as the pixel for which predictor 18 is currently generating a prediction value. Compressor 12 tracks and maintains this condition. The actual range of a given pixel block is usually significantly less than 256 (the maximum value of an 8-bit pixel), and table levels derived from the predetermined range produce better resolution than table levels derived from 256, because the selected predetermined range is generally close in value to the actual range. In this way, the accuracy and efficiency of the system are increased by tailoring the table levels to the range.
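The sketch below illustrates the range fitting and reference-pixel compression just described. The set of predetermined ranges is taken from the description; the function names, the bit-width computation and the example block are illustrative assumptions.

PREDETERMINED_RANGES = [16, 32, 64, 96, 128, 192, 256]

def fit_range(block):
    """Return (actual range, selected predetermined range) for a pixel block."""
    xmin, xmax = min(block), max(block)
    actual = xmax - xmin + 1                                         # range = max - min + 1
    selected = next(r for r in PREDETERMINED_RANGES if r >= actual)  # best fit
    return actual, selected

def compress_reference_pixel(reference, block_min, selected_range):
    """Store (reference - min) / 2 with one bit fewer than the range representation needs."""
    value = (reference - block_min) // 2
    bits = (selected_range - 1).bit_length() - 1   # e.g. range 64 -> 6 bits - 1 = 5 bits
    return value, bits

block = [100, 112, 105, 140, 101, 120]
actual, selected = fit_range(block)                       # actual 41 -> predetermined 64
ref_code, ref_bits = compress_reference_pixel(block[0], min(block), selected)  # (0, 5)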
To reconstruct the input data block, dequantizer 26 must know the predetermined range quantizer 20 used to access the quantization table when the pixel block was quantized. A representation of the range and of the other pixel block parameters is stored in memory 14 within a parameter field with the quantized pixel block. By storing a representation of the block parameters in memory 14 together with the quantized pixel block, decompressor 16 can access the appropriate dequantization table and reconstruct the pixel block efficiently and accurately. Other pixel block parameters included in the parameter field may include, for example, the minimum pixel value of the block or the reference pixel value of the block. Figure 3 illustrates a possible configuration of a parameter field and the compressed data. The parameter field consists of those block parameters contained within the outlined box of Figure 3. In this embodiment, the parameter field is configured as a header of a data packet containing a compressed data payload.

To maximize frame memory reduction without significantly degrading the displayed image, the overhead information represented by the block parameters in the parameter field is stored in memory 14 as compactly as possible. Each bit used to store the parameter field decreases the memory available for the compressed pixel data, so two of the block parameters, namely the range and minimum values, are reduced from eight bits to three bits each in most cases. This process works as follows. The actual range is compared with a set of predetermined ranges to determine a best fit. The predetermined range becomes the value used to represent the range for the pixel block currently being processed. The predetermined range is at least as large as the actual range, to ensure that all pixel values within the pixel block are represented. The set of predetermined ranges includes seven values, which are 16, 32, 64, 96, 128, 192 and 256. Because the set is available to both quantizer 20 and dequantizer 26, the predetermined range can be represented in the parameter field by an index value. The index requires only three bits for a binary representation because there are only seven predetermined ranges to represent.

The system handles the minimum pixel value in a similar way. For five of the seven predetermined ranges, the system accesses a predetermined set of eight minimum pixel values unique to the selected predetermined range. Quantizer 20 compares the actual minimum pixel value with the predetermined set and selects the largest predetermined minimum value that is less than or equal to the actual minimum value. The predetermined minimum then becomes the value used to represent the minimum pixel for the pixel block being processed. The set is available to both quantizer 20 and dequantizer 26, so the predetermined minimum can be represented in the parameter field by an index value. This index also requires three bits for a binary representation, because there are only eight predetermined minimum pixel values to represent. The set of eight predetermined minimum pixel values for five of the seven ranges is defined by equation (1a) below. The five ranges to which equation (1) applies are 32, 64, 96, 128 and 192.
The equation provides a constant linear step for each range's minimum values, starting with zero. Equation (1) selects the predetermined minimum pixel value from the set Qmin(Rs,i) (equation (1a)) that is substituted for the actual minimum pixel value of the block. MAXi{f(x)} indicates that the maximum value of i satisfying the condition within the braces should be used to generate Qmin.
Qmin = MAXi{ Qmin(Rs,i) | Qmin(Rs,i) <= Xmin; 0 <= i <= 7 },     (1)

where: Qmin(Rs,i) = INT{ i * ((256 - Rs)/7) }, 0 <= i <= 7.     (1a)

In these equations, i is the index value represented by three bits in the overhead parameter field. INT{f(x)} indicates that only the integer portion of the resulting value is used. The expression f(x) within the braces represents any expression, such as the one in equation (1a), on which the INT function operates. For the predetermined range of 256, no minimum value is stored because the minimum value for 256 is zero (0) for an eight-bit word. For the predetermined range of 16, the original eight-bit minimum value is used, because the resolution for this range is so small relative to a predetermined minimum for the range of 16 that actual pixel values might fall outside the reconstructed data after reconstruction. This minimum value is an offset that represents the distance between zero and the minimum block pixel value.

Equation (1) can select a predetermined minimum which, together with the predetermined range, is not sufficient to cover the actual pixel block values when the quantized pixel block is reconstructed, because the predetermined minimum values are less than or equal to the actual minimum value. For example, if in a given pixel block the minimum pixel value is 100 and the maximum pixel value is 140, then the predetermined range is 64. The predetermined minimum pixel value determined from equation (1) is 82. The result of adding the selected minimum to the selected range is 146, which is greater than the actual maximum pixel value. Therefore, all values of the pixel block are represented by the selected predetermined values. However, if the maximum pixel value of the block is instead 160, the selected predetermined values remain the same but do not completely represent the domain of the pixel block (160 > 146). In this case the next larger predetermined range of 96 is selected, and the selected predetermined minimum value is 91. The sum of 91 and the predetermined range of 96 is 187, which is greater than the actual maximum pixel value of 160. In this way, the quantization and dequantization tables selected from this range provide levels for all pixels in the block. Quantization mapper 28 performs the analysis described above to determine whether the first selected predetermined range and minimum pixel value are valid, or whether the next larger predetermined range is necessary.

As stated above, if the prediction network derives its prediction values from pixel values within the same block, then the difference (E) between an actual pixel value and the predicted pixel value lies within the following limits:

-Range < E < Range,     (2)

where Range = Xmax - Xmin + 1.     (3)

In equation (2), E is the prediction error. In equation (3), Xmax and Xmin are the actual maximum and minimum block pixel values, respectively. In this way, the range of pixel data in the block defines the values that the quantization and dequantization tables will receive, and the limits for which the tables for that particular block must provide. If the range is smaller than the maximum value of the word size (256 for an 8-bit word), then the resolution of the quantization and dequantization tables can be increased.
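A short sketch of equations (1) and (1a), reproducing the worked example given above, follows. Only the equations and the example values come from the description; the function names are illustrative.

def qmin_set(rs):
    """Equation (1a): the eight predetermined minimum values for predetermined range rs."""
    return [int(i * (256 - rs) / 7) for i in range(8)]

def select_qmin(rs, xmin):
    """Equation (1): the largest predetermined minimum that does not exceed the actual minimum."""
    return max(q for q in qmin_set(rs) if q <= xmin)

# Worked example: block minimum 100, maximum 140 -> actual range 41 -> predetermined range 64.
assert select_qmin(64, 100) == 82          # i = 3 gives INT(3 * 192/7) = 82
assert 82 + 64 >= 140                      # 146 covers the actual maximum of 140

# Second example: a maximum of 160 is not covered (82 + 64 = 146 < 160),
# so the next larger predetermined range, 96, is used instead.
assert select_qmin(96, 100) == 91          # i = 4 gives INT(4 * 160/7) = 91
assert 91 + 96 >= 160                      # 187 covers the actual maximum of 160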
Both luma and chroma data are processed with the negative prediction errors biased. Therefore, the quantization and dequantization tables for both luma and chroma data are designed to accept only positive input values. Quantization and dequantization tables that use biased prediction error values have twice the resolution of tables designed for both the negative and positive values of the pixel block range. The resolution is doubled because the tables need cover only values from zero to the positive range value, rather than all values between the negative and positive range values. Figures 6A, 6B and 6C show three-bit, four-bit and five-bit tables, respectively, for the predetermined range of 64.

Prior to quantization, prediction error processor 27 (Figure 2) detects whether the prediction error from combiner 22 is positive or negative. If the value is positive, it passes unchanged to quantization mapper 28. If the value is negative, prediction error processor 27 adds the predetermined range to the negative prediction error value before the value passes to quantization mapper 28. Since a negative prediction error value lies within the domain of the negative range value, adding the positive range value to the negative prediction error value produces a biased error value. This biased error value is positive (greater than zero) and less than the positive range value. Quantization mapper 28 receives both biased and unbiased prediction error values and quantizes them with a quantization table fitted to the domain of the positive predetermined range. The quantized error values are passed to multiplexer 32 and then stored in memory 14 under the control of a system controller (not shown). Because the table quantizes only values from zero to the range minus one, instead of from the negative range value to the positive range value, the resolution of the table is doubled.

Figure 4 is a block diagram of dequantizer 26 of Figure 1. Under the control of a system microprocessor, demultiplexer 34 receives a data packet containing a parameter field and the quantized data. Demultiplexer 34 sends the minimum pixel value index and the predetermined range index to minimum-maximum range decoder (MMRD) 38. Demultiplexer 34 sends the first compressed pixel value to first pixel decoder 37, which also receives the reconstructed predetermined range and minimum pixel values from MMRD 38. First pixel decoder 37 uses these three values to reconstruct the reference pixel and send it to predictor 24. For dequantization, demultiplexer 34 sends the quantized values to dequantization mapper 36, which dequantizes the prediction error values and passes them to adder 39. Adder 39 adds the predicted value to the dequantized error value and passes the result to prediction error processor 35, which compares the result with the reconstructed maximum pixel value of the block. If the error value was biased to translate a negative value into a positive value before quantization, the result will be greater than the reconstructed maximum pixel value. If not, the result will be less than or equal to the reconstructed maximum pixel value.
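A minimal sketch of this biasing and its reversal follows; prediction error processors 27 and 35 are represented only by two small functions with illustrative names, and the quantization step between them is omitted.

def bias_error(error, predetermined_range):
    """Compression side: negative prediction errors are biased into the positive domain."""
    return error + predetermined_range if error < 0 else error

def unbias_result(reconstructed, qmax, predetermined_range):
    """Decompression side: a result above Qmax indicates the error was biased."""
    return reconstructed - predetermined_range if reconstructed > qmax else reconstructed

# Example: prediction 100, actual pixel 90, predetermined range 64, Qmax 140.
biased = bias_error(90 - 100, 64)                  # -10 -> 54
assert unbias_result(100 + biased, 140, 64) == 90  # 154 > Qmax, so subtract the range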
If prediction error processor 35 determines that the error value was biased, the predetermined range value is subtracted from the result, thereby correcting the bias introduced on the quantization side of the network. Prediction error processor 35 and first pixel decoder 37 pass the reconstructed data, including the reference pixel, in the proper order to an output network (not shown). The values available to dequantizer 26 are quantized and/or encoded values. The reconstructed quantized minimum pixel value (Qmin) must be less than or equal to the actual minimum pixel value, and the reconstructed quantized maximum pixel value (Qmax) and the reconstructed quantized range value must be greater than or equal to their actual values. MMRP 30 ensures that these requirements are met, as previously discussed. Since any pixel value must be greater than or equal to Qmin, adding the predetermined range to any reconstructed pixel value that includes the bias generally results in a value greater than Qmax by at least one. However, quantization noise may cause an incorrect determination of whether quantizer 20 detected a negative prediction error value and biased it. Quantization noise is the difference between the actual pixel value and the reconstructed value, caused by the limited resolution of the lossy quantization tables. Prediction error processor 35 compares the reconstructed result with Qmax; if the result is greater than Qmax, the predetermined range is subtracted from the result to obtain the correct reconstructed pixel value. However, if the quantization noise Nq is positive, it could cause the result to be greater than Qmax, and prediction error processor 35 would falsely identify a biased prediction error. Similarly, if Nq is negative, it could cause the result to be less than Qmax, and prediction error processor 35 would falsely identify an unbiased prediction error.

Figure 5A illustrates how quantization mapper 28 (Figure 2) ensures that its output will not be misinterpreted because of quantization noise. Quantizer 80 provides three outputs for each quantized pixel value. The three values are the best reconstruction level for the quantization table decision point (I), and the reconstruction levels on either side of the best level (I+1, I-1). Combiner 84 calculates the reconstructed pixel value for the best reconstruction level, and combiner 86 compares the result with Qmax. If the prediction error is biased (S2 is negative) and the result of combiner 86 is less than Qmax (S1 is negative), it is possible that after reconstruction prediction error processor 35 will incorrectly determine that the unquantized prediction error value was not biased. To avoid this problem, the codeword corresponding to the next larger reconstruction level for the prediction error is sent to multiplexer 32. If the prediction error value was not biased (S2 is positive) and the result of combiner 86 is greater than Qmax (S1 is positive), it is possible that after reconstruction prediction error processor 35 will incorrectly determine that the unquantized prediction error value was biased. To avoid this problem, the codeword corresponding to the next smaller reconstruction level for the prediction error is sent to multiplexer 32.
In all other cases, the best level is selected and sent to multiplexer 32. When the first or last level in a quantization table is best, only the next higher or lower quantization level, respectively, is provided along with the best level. Figure 5B gives a truth table illustrating the quantizer 80 selections available for output by quantization mapper 28, and when selection unit 82 uses each of the selections. Because quantization noise could cause the bias determination to be incorrect, choosing a quantization level that offsets the quantization noise ensures that the noise will not influence the relationship between Qmax and the reconstructed pixel value. Because the absolute value of Nq is generally not large, quantization mapper 28 will normally choose the best quantization level. When quantization mapper 28 chooses the next larger or smaller level, the selection introduces an additional error into the reconstructed pixel. However, the error is minimized by selecting the closest level that corrects the problem, in a table whose resolution is much better than that of known DPCM quantization tables. Usually this correction does not cause significant degradation of displayed image quality.

The quantization resolution is often increased by more than just the factor of two that results from biasing negative prediction errors to produce positive values. The selection of a predetermined range also increases quantization resolution. For example, if for a given pixel block the selected predetermined range is 16, then the four-bit table quantizes the prediction error exactly, in a lossless manner. The resolution increases by a factor of 16, from a range of 256 (for 8 bits) to 16 (256/16). By requiring only positive values in the quantization/dequantization table for the same positive range value, the resolution is increased by another factor of 2, to an overall factor of 32. This process can be used to calculate the resolution increase for the quantization/dequantization tables derived from any of the predetermined range values.

When MMRP 30 (Figure 2) scans a block of chroma pixels and selects a predetermined range from the set of seven, the data can be scaled to reduce the number of actual quantization tables required to compress the data. For the predetermined ranges of 64, 192 and 256 there is a fabricated set of three tables each. These are the three-, four- and five-bit tables discussed above. Data fitting three of the other predetermined ranges are scaled up by a factor of two. Pixel blocks that fit the predetermined range of 32 are scaled to use the quantization tables for the range of 64. For pixel blocks that fit the predetermined range of 96, the quantization tables for 192 are used. Likewise, for pixel blocks that fit the predetermined range of 128, the quantization tables for 256 are used. Scaling the data reduces hardware and software complexity and reduces the amount of read-only memory (ROM) required within the network. After reconstruction, the scaled pixel blocks are divided by two to maintain data accuracy. All quantization tables for both luma and chroma data are constructed to be symmetrical around their midpoint.
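The selection safeguard of Figures 5A and 5B described above can be sketched as follows. The list of reconstruction levels, the bias flag and the function name are illustrative; the actual tables and the S1/S2 signal details are not reproduced.

def choose_level(levels, best_index, was_biased, prediction, qmax):
    """Pick level I or a neighbor (I+1 / I-1) so that quantization noise cannot
    fool the decompressor's Qmax test for bias detection."""
    i = best_index
    reconstructed = prediction + levels[i]
    if was_biased and reconstructed <= qmax and i + 1 < len(levels):
        i += 1      # push the reconstruction above Qmax so the bias is detected
    elif not was_biased and reconstructed > qmax and i - 1 >= 0:
        i -= 1      # keep the reconstruction at or below Qmax
    return i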
Figures 6A, 6B and 6C show the three-, four- and five-bit quantization tables for the predetermined range of 64. The symmetry allows only half of each table to be stored in ROM, while a simple hardware circuit implements the other half of each table. This reduces the size of the ROM, thereby reducing production costs. The quantization tables are designed according to a simple set of relationships, given below, which creates the symmetry around the table midpoint. In these relationships, I is the quantization level index; D(I) is the Ith decision point; Q(I) is the Ith reconstruction level; M is the total number of levels in a table; and Rd is the quantization range. A decision point is the value at which an input pixel value moves from one level to another within the quantization table. The relationships are as follows:

M is an even number;     (4)
D(I) + D(M-1-I) = Rd - 1 for 0 <= I <= (M/2) - 1;     (5)
Q(0) = 0;     (6)
Q(M/2) = Rd/2;     (7)
Q(I) + Q(M-I) = Rd for 1 <= I <= M/2.     (8)

The tables in Figures 6A, 6B and 6C conform to these relationships, as do all quantization tables used in the network. If the tables accepted unbiased prediction error values, in other words both positive and negative values, then M would be an odd number. The codeword symbols output from the tables also obey relationships that advantageously ensure symmetry, allowing only half of each table to be stored in ROM. All symbols in these relationships are the same as in the previous relationships. The only additions are C(I), which is the codeword for the Ith level, and n, which is the number of bits in the codeword. These relationships are as follows:

C(1) = 0 and C(M-1) = 1 for the short codewords;     (9)
C(0) = (2^n) - 2;     (10)
C(M/2) = (2^n) - 1;     (11)
C(I) = 2I for 2 <= I <= (M/2) - 1;     (12)
C((M/2)+I) = C((M/2)-I) + 1 for 1 <= I <= (M/2) - 1.     (13)

Relationships (9) to (13) are represented in the tables in binary form. If the tables accepted unbiased prediction error values, in other words both positive and negative values, then there would also be an odd number of three-bit codewords. Relationship (9) defines the statistically optimal placement of the three-bit codewords for quantization tables that receive DPCM prediction error values. By placing the short codewords at the reconstruction levels most likely to be accessed, the system is optimized to save storage bits for other purposes. The use of short codewords decreases the number of available levels in a quantization table, but the bit savings outweigh the lost resolution, because generally enough bits are saved to accommodate the overhead information and maintain the average of four bits per codeword. Because the tables are symmetrical around their midpoint, only half of the table levels, including the midpoint level, actually need to be stored in memory. The remaining levels can be mapped by circuitry. Figure 7 shows a simple hardware implementation that maps the remaining reconstruction values of the table after dequantization and outputs all eight-bit reconstructed pixel values for both halves of the tables. Coding controller 29 (Figure 2) and decoding controller 33 (Figure 4) perform mutually similar but inverse operations on the chroma pixel blocks.
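The following sketch checks that relationships (9) to (13) produce a consistent, symmetrical codeword assignment. The parameters n = 4 and M = 14 (a four-bit table with two short, three-bit codewords) are assumed for illustration; the actual table contents of Figures 6A to 6C are not reproduced here.

# Codeword assignment per relationships (9)-(13), for an assumed n = 4, M = 14 table.
n, M = 4, 14
C = [None] * M
C[1], C[M - 1] = 0, 1                     # (9)  short, three-bit codewords at levels 1 and M-1
C[0] = (1 << n) - 2                       # (10)
C[M // 2] = (1 << n) - 1                  # (11)
for I in range(2, M // 2):                # (12)
    C[I] = 2 * I
for I in range(1, M // 2):                # (13) mirror the lower half into the upper half
    C[M // 2 + I] = C[M // 2 - I] + 1

assert len(set(C)) == M                   # every level receives a distinct codeword
assert C[M - 1] == C[1] + 1               # the mirrored half ends on the other short codeword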
Controllers 29 and 33 both include four registers that count the number of pixels processed for each block and the number of bits saved or needed for the overhead information. One register, the range register, is a flag register that identifies which predetermined range represents the pixel block currently being processed. Using the registers, controllers 29 and 33 select, for each pixel processed, either the three-, four- or five-bit quantization table and ensure that the pixel block, including all the overhead information, is compressed to a predetermined size for storage in memory 14 and then decompressed and reconstructed to the original pixel block. The overhead information included for the chroma data requires a predetermined number of bits depending on the block parameters to be stored. Figure 8 shows the number of overhead bits required for each block parameter for each predetermined range. Each pixel, including the reference pixel, has an average of four bits reserved in memory 14. Controller 29 compensates, within that four-bit average, for the number of bits used for the overhead. The last row of Figure 8 shows the number of bits needed as compensation for the overhead bits for each predetermined range.

The main objective of controller 29 is to encode each pixel with either the four- or five-bit tables, and to use the three-bit table only when necessary to ensure that all pixels of the block fit in the reserved space. The short, three-bit codewords in the four- and five-bit tables provide the best opportunity to accomplish this goal. Since the short codewords are placed statistically within the tables at the levels most likely to be accessed for DPCM data, each block will frequently be compressed without using the three-bit quantization table. In addition, several pixels within any given pixel block will usually be quantized with five-bit codewords, thereby increasing the resolution and quality of the display. However, if the pixel block does not access the short codewords in the four- or five-bit quantization tables frequently enough to make up the required number of overhead bits, controller 29 will access the three-bit quantization table. Controller 29 identifies the last N pixels in each pixel block as low priority pixels (LPPs), where N is the number of overhead bits to be compensated for that pixel block. Based on counters that identify when an LPP is being processed and how many overhead bits remain uncompensated, controller 29 selects the three-bit quantization table for the LPPs. Controller 29 will not select the three-bit quantization table until the number of pixels remaining equals the number of overhead bits that remain uncompensated.

Figures 9A, 9B and 9C depict a flow chart for controllers 29 and 33. The two controllers operate in the same manner and perform the same steps to either compress or decompress a pixel value. To simplify the discussion of controllers 29 and 33, only compression controller 29 will be explained. At the start, four registers are initialized at the beginning of each chroma pixel block. The range register is set according to the predetermined range for the current pixel block.
The overhead register is set to the number of overhead bits for which controller 29 must compensate, as shown in Figure 8. This register is decremented by one each time an LPP is processed. The bit savings register is initialized to the negative of the overhead register value, and is incremented each time a short codeword is used. The pixel count register is initialized to the negative of the number of pixels in the current pixel block. The register is incremented each time a pixel is processed. The pixel count is used to identify whether the pixel being processed is an LPP.

At Start in Figure 9A, an eight-bit pixel value passes to step 100, which identifies whether the pixel is a low priority pixel (LPP). If so, step 102 adds the bit savings and overhead register values and compares the result with zero. If the result is not greater than zero, not enough bits have been saved to this point and there are only as many LPPs remaining as there are uncompensated overhead bits, so the three-bit quantization table is accessed and a three-bit codeword is used to compress the pixel in step 104. Since the current pixel is an LPP, the previously processed pixels cannot have been compressed with the short, three-bit codewords in the four-bit table a sufficient number of times to compensate for all the overhead bits in this pixel block; a bit must therefore be saved here, and the lower-resolution, three-bit table is used to compress the pixel. At the same time, step 104 increments the bit savings and pixel count registers, decrements the overhead register, and outputs the three-bit codeword from the three-bit table. At this point the pixel is compressed and the next pixel is processed from Start. If in step 102 the result is greater than zero, step 106 determines whether the bit savings are greater than zero. If the bit savings are not greater than zero in step 106, step 108 checks for the unique circumstance of bit savings = 0 and range = 16. If this occurs, step 110 accesses the sixteen-level, four-bit table without the short codewords, because there is no need to save a bit on this pixel. The pixel count is incremented, the overhead is decremented, and the next pixel is taken for compression. If the result of step 108 is no, the four-bit table for the current predetermined range is accessed in step 112. Step 114 checks whether the pixel value falls at a short-codeword level of the four-bit table. If so, step 116 increments the bit savings and pixel count, decrements the overhead, and outputs the three-bit codeword. If not, step 118 increments the pixel count, decrements the overhead, and outputs the four-bit codeword. After steps 116 and 118, the next pixel is taken for processing starting at Start.

Returning to step 106, and recalling that the pixel was determined to be an LPP in step 100: if the bit savings are greater than zero, the process continues in Figure 9B. Because the bit savings are greater than zero, more than a sufficient number of bits has been saved at that point in the pixel block. Therefore, the high-resolution five-bit table is used in step 120.
The five-bit table contains the short codewords, and step 122 determines whether the pixel value falls at a short-codeword level of the table. If not, step 132 increments the pixel count, decrements the bit savings and the overhead, and outputs a five-bit codeword. If so, step 124 adds the bit savings and pixel count to determine whether too many bits have been saved. If the number of saved bits becomes too large, system synchronization is not maintained. To avoid this, and to avoid the need for a buffer to hold the compressed data until the system catches up, the three-bit codewords can be padded with zeros. The result of the comparison in step 124 determines the path the compression of the pixel will follow. If the result is less than zero, too many bits have not yet been saved and no zero padding occurs in step 126; the bit savings and pixel count are incremented, the overhead is decremented, and a three-bit codeword is output. If the result is equal to zero, the three-bit codeword would save one bit too many. Therefore, in step 128 the codeword is padded with one zero, the pixel count is incremented, the overhead is decremented, and the padded four-bit codeword is output. If the result of step 124 is equal to one, the three-bit codeword would save two extra bits. Therefore, in step 130 the codeword is padded with two zeros, the pixel count is incremented, the bit savings and the overhead are decremented, and the padded five-bit codeword is output. After steps 126, 128, 130 and 132, compression of the pixel is complete and the next pixel is taken for processing at Start.

If in step 100 the pixel is identified as not being an LPP, the process goes to step 134, where the bit savings are compared with zero. If the bit savings are not greater than zero, steps 136 to 146 repeat steps 108 to 118, with one difference. Steps 110, 116 and 118 decrement the overhead because in that path the pixel being processed is an LPP, whereas in steps 138, 144 and 146 the pixel being processed is not an LPP and the overhead is not decremented. If the bit savings are greater than zero in step 134, steps 148 to 160 repeat steps 120 to 132, using the five-bit table and the zero-padding analysis. Again, because the pixel being processed by steps 148 to 160 is not an LPP, the overhead is not decremented in steps 154, 156, 158 and 160. After any of steps 138, 144, 146, 154, 156, 158 and 160, compression is complete and the next pixel is taken for processing at Start.

The goal of controller 29 is to process each chroma pixel with the highest possible resolution quantization table. Because the pixel data values are spatial representations, each pixel value contains information important for the display, unlike transform-domain block data, which concentrates most of its information at the beginning of the block. This is why each pixel is processed individually, considering its relative position in the pixel block and the number of bits for which controller 29 must compensate. Referring again to Figure 1, memory 14 stores the quantized pixel block and the parameter field until they are no longer needed for pixel reconstruction and display. While the data resides in memory 14, it can be accessed and decoded for a subsequent display processor by decompressor 16, under the control of a microcontroller, using a common data bus.
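As a recap of the controller behavior described above, the sketch below reduces the table choice to its core rule. It omits the zero-padding path, the range-16 special case and the exact register arithmetic of Figures 9A to 9C, so the names and conditions are simplifications rather than the patented flow chart.

def choose_table_bits(pixels_left, uncompensated_overhead, bits_saved):
    """Simplified choice of quantization table width (3, 4 or 5 bits) for the next chroma pixel."""
    if pixels_left == uncompensated_overhead:
        return 3    # forced: every remaining pixel must save one bit to cover the header
    if bits_saved > 0:
        return 5    # more than enough bits already saved: use the high-resolution table
    return 4        # default table; its short-codeword levels may still save a bit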
The compressor 12 and the decompressor 16 reside in a common integrated circuit and exhibit a similar design and construction, which simplifies the integrated circuit. The memory 14 advantageously resides outside the integrated circuit, thereby allowing the size of memory 14 to be selected as necessary to accommodate the signal processing requirements of a particular system. This results in manufacturing cost savings, for example, in the case of a low cost consumer television receiver using a reduced resolution display, which requires less frame memory for the MPEG decoder. Further, although the memory 14 may reside outside the integrated circuit, a unified memory architecture, as known in the art, allows other components of the system to use any storage area within memory 14 that is not otherwise used, which further reduces the total cost of the system and increases its overall capacity.

Figure 10 shows portions of a practical digital signal processing system in a television receiver, including apparatus according to the present invention as discussed above. The digital television receiver system of Figure 10 is simplified so as not to burden the drawing with excessive detail. For example, not shown are the FIFO input and output buffers associated with various elements, read/write controls, clock generator networks, and control signals for interfacing with external memories, which may be of the extended data output type (EDO), synchronous type (SDRAM), Rambus DRAM (RDRAM) or any other type of RAM. Elements common to Figure 1 and Figure 10 have the same identifier. The elements in the signal processor 72, except for unit 70, correspond to elements found in the STi 3500A MPEG-2/CCIR 601 video decoder integrated circuit, commercially available from SGS-Thomson Microelectronics.

Briefly, the system of Figure 10 includes the microprocessor 40, bus interface unit 42 and controller 44, coupled to an internal control bus 46. In this example, the microprocessor 40 is located external to the integrated circuit containing the MPEG decoder 72. A 192-bit wide internal memory bus 48 is a conduit for data to and from the compressor 12, the similar decompressors 16 and 50, and the external frame memory 14. Units 12, 16 and 50 receive compression and decompression factor control signals from the microprocessor 40, via the controller 44, together with enable control signals. A local memory control unit 52 is also included, which receives Request inputs and provides Acknowledge outputs, as well as Memory Address, Read Enable and Write Enable outputs. The memory control unit 52 also provides Clock Out signals in response to Clock In signals from a local clock generator (not shown). The microprocessor 40 partitions the memory 14 into bit buffers, video frame storage sections and frame storage buffers for MPEG decoding, display processing and on-screen display maps.
The display processor 54 includes horizontal and vertical filters for resampling as necessary, to convert a decompressed image format to a common format predetermined for display by an image reproducing display device 56. For example, the system may receive and decode image sequences corresponding to formats such as 525-line interlaced, 1125-line interlaced or 720-line progressive scan. A television receiver will likely use a common display format for all formats it receives.

External interface networks 58 convey control and configuration information between the MPEG decoder and the external microprocessor 40, in addition to the input compressed video data for processing by the MPEG decoder. The MPEG decoding system operates as a coprocessor for the microprocessor 40. For example, the microprocessor 40 issues a decode command to the MPEG decoder for each frame to be decoded. The decoder locates the associated header information, which the microprocessor 40 then reads. With this information, the microprocessor 40 issues data for configuring the decoder, for example, with respect to the frame type, quantization matrices and so on, after which the decoder issues the appropriate decoding commands. The technical specification materials for the SGS-Thomson STi 3500A integrated circuit device noted above provide additional information regarding the operating mode of the MPEG decoder.

The microprocessor 40 conveys mode control data, programmed by the receiver manufacturer, to the memory controller 52 to control the operation of the multiplexer 32 (Figure 2) and the demultiplexer 34 (Figure 4), and to establish compression/decompression factors for units 12, 16 and 50, as required. The system described can be used with all Profiles and all Levels of the MPEG specification, in the context of various digital data processing schemes, such as those that may be associated with terrestrial, cable and satellite transmission systems, for example.

Figure 10 also shows a portion of a digital video signal processor 72 such as may be found in a television receiver for processing a high definition input video signal. The signal processor 72 may be included in an integrated circuit that also includes provision for receiving and processing standard definition video signals via an analog channel (not shown). Signal processor 72 includes a conventional MPEG decoder constituted by blocks 60, 62, 64, 66, 68 and 70, including the frame memory 14. Ang et al., "Video Compression Makes Big Gains," IEEE Spectrum, October 1991, describes the operation of an MPEG encoder and decoder, for example. The signal processor 72 receives a controlled data stream of MPEG coded data from a preceding input processor (not shown), for example, a transport decoder that separates data packets after demodulation of the input signal. In this example, the received input data stream represents high definition image material (1920 x 1088 pixels), as specified in the Grand Alliance specification for terrestrial high definition television broadcasting in the United States. The data stream periodically contains data blocks representing compressed, coded intraframe and interframe information. The intraframe information includes I-frame anchor frames.
In general, the interframe information comprises motion-predictive coded residual information representing the image difference between adjacent image frames. Frame motion coding includes generating motion vectors that represent the displacement between a current block being processed and a block in a previously reconstructed image. The motion vector representing the best match between the current and previous blocks is coded and transmitted. Also, the difference (residual) between each motion compensated 8x8 block and the previously reconstructed block is discrete cosine transformed (DCT), quantized and variable length coded (VLC) before being transmitted. Various publications, including the Ang et al. article noted above, describe the motion compensated coding processes in greater detail.

The buffer 60 accepts the compressed input pixel data blocks before they are variable length decoded by the variable length decoder (VLD) 62. The buffer 60 exhibits a storage capacity of 1.75 Mbits in the case of a main profile, main level MPEG data stream. The inverse quantizer 64 and the inverse discrete cosine transformer (IDCT) 66 decompress the decoded compressed data from the VLD 62. The output data from the IDCT 66 are coupled to an input of the adder 68. A signal from buffer 60 controls the quantization step size of the inverse quantizer 64 to ensure continuous data flow. The VLD 62 provides the decoded motion vectors to the motion compensation unit 70, as will be discussed later. The VLD 62 also produces an inter/intra frame mode selection control signal, as known (not shown, to simplify the drawing). The operations performed by units 62, 64 and 66 are the inverses of corresponding operations of an encoder located at the transmitter. By summing the residual image data from unit 66 with the predicted image data provided at the output of unit 70, the adder 68 provides reconstructed pixels based on the contents of the video frame memory 14. When the signal processor 72 has processed an entire frame of pixel blocks, the frame memory 14 stores the resulting reconstructed image. In the interframe mode, the motion vectors obtained from the VLD 62 provide the location of the predicted blocks from unit 70.

The image reconstruction process including the adder 68, the memory 14 and the motion compensation unit 70 advantageously exhibits significantly reduced memory requirements due to the use of the block compressor 12 before data storage in the frame memory 14. The size of the frame memory 14 can be reduced by up to fifty percent (50%), for example, when a 50 percent compression factor is used. The unit 50 performs the inverse function of unit 12 and is similar to the decompressor 16 described above. The decompressor 50 reconstructs the image block so that the motion compensator 70 can operate as described above. The compressor 12 and the decompressors 16 and 50 are constructed in accordance with the principles of the present invention. Figures 1, 2, 4, 5A and 7 illustrate the details of units 12, 16 and 50.
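The reduced-memory reconstruction path around adder 68 can be summarized in the following sketch. The function names, the packed block size and the clamping are illustrative assumptions: compress_block() and decompress_block() stand in for units 12 and 50 (or 16), and the frame_memory_* calls stand in for accesses to memory 14.

```c
/* Sketch of the reduced-memory reconstruction path around adder 68.
 * Function names, the packed block size and the clamping are assumptions:
 * compress_block() and decompress_block() stand in for units 12 and 50,
 * and the frame_memory_* calls stand in for accesses to memory 14.      */
#include <stdint.h>

#define BLK 64                                    /* one 8x8 pixel block          */

typedef struct { uint8_t bytes[48]; } packed_blk; /* e.g. 6 bits/pixel on average */

extern packed_blk frame_memory_read(int blk_addr);                          /* memory 14 */
extern void       frame_memory_write(int blk_addr, packed_blk b);
extern void       decompress_block(const packed_blk *in, uint8_t out[BLK]); /* unit 50   */
extern packed_blk compress_block(const uint8_t in[BLK]);                    /* unit 12   */

void reconstruct_block(const int16_t residual[BLK], int pred_addr, int dst_addr)
{
    uint8_t pred[BLK], recon[BLK];

    packed_blk ref = frame_memory_read(pred_addr);   /* motion-compensated reference */
    decompress_block(&ref, pred);

    for (int i = 0; i < BLK; i++) {                  /* adder 68: prediction + residual */
        int v = pred[i] + residual[i];
        recon[i] = (uint8_t)(v < 0 ? 0 : v > 255 ? 255 : v);
    }

    frame_memory_write(dst_addr, compress_block(recon)); /* store recompressed block */
}
```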

Claims (31)

1. In a digital image processing system for receiving a data stream of blocks of compressed image pixels, the apparatus comprising: a decompressor (72) for decompressing a block of compressed pixels; a processor (20) for recompressing the data of the decompressed pixel block and for encoding the data that was recompressed into a data format that includes the range information of the pixel block representing a range of pixel values present in the pixel block, where the data format uses a predetermined number of bits to accommodate the data that was recompressed and the range information; and a memory (14) for storing the pixel block that was recompressed.
2. The apparatus according to claim 1, wherein the processor recompresses the data of the decompressed pixel block using a set of quantization codewords selected from a plurality of sets of quantization codewords.
3. The apparatus according to claim 2, wherein the set of quantization codewords is adaptively selected from the plurality of sets of quantization codewords, in response to the range information.
4. The apparatus according to claim 1, wherein, when coding the individual pixels in the block on a pixel-by-pixel basis, a codeword is adaptively selected from a set of quantization codewords, in response to an indication of the remaining bits not occupied in the data format of the predetermined number of bits.
5. The apparatus according to claim 1, wherein, when coding the individual pixels in the block, a codeword is adaptively selected from a set of quantization codewords to code a codeword representative of the pixel with a codeword length longer or shorter than an average codeword length, to adjust the pixel data quality on a pixel-by-pixel basis, in response to an indication of the remaining bits not occupied in the data format of the predetermined number of bits.
6. The apparatus according to claim 1, wherein, when coding individual pixels in the block on a pixel-by-pixel basis, the processor encodes the codeword representative of the individual pixel with offset data, in response to an indication of the remaining bits not occupied in the data format, to fill the data format to the predetermined number of bits.
7. The apparatus according to claim 1, wherein the processor encodes the range information in compressed form.
8. The apparatus according to claim 1, wherein the range information is incorporated into a first data field of a data packet, and the recompressed pixel block is incorporated into a second data field of the data packet.
9. The apparatus according to claim 8, wherein the first data field is a header of the data packet, and the second data field is a payload of the data packet.
10. The apparatus according to claim 8, wherein, for individual input pixel blocks, the processor provides a data packet of predetermined, fixed bit length, which represents a recompressed pixel block.
11. The apparatus according to claim 1, wherein the processor stores the recompressed data and the range information in equidistant, fixed-length locations in the memory.
12. The apparatus according to claim 11, wherein the processor addresses the equidistant, fixed-length locations using a predetermined base address value and fixed-length offset values, the offset values corresponding to the predetermined bit length.
13. The apparatus according to claim 1, characterized in that it includes a decoder for decompressing the recompressed data, to provide reconstructed pixels for use by at least one of (a) a display processor and (b) a motion compensation network.
14. The apparatus according to claim 1, wherein the plurality of sets of quantization codewords includes a first set of codewords and a second set of codewords, and the codewords in the first set are of different lengths from the codewords in the second set.
15. The apparatus according to claim 1, wherein the processor encodes the recompressed data in a data format which includes at least one of (a) a minimum pixel value, (b) a maximum pixel value, and (c) a reference pixel value.
16. The apparatus according to claim 15, wherein the processor encodes the reference pixel value differently than the other pixel values in the block.
17. In a digital image processing system for receiving a data stream of blocks of compressed image pixels, the apparatus comprising: a memory (14) for storing a compressed pixel block; a first processor (16) for receiving range information of the pixel block representing a range of pixel values present in the compressed pixel block in the memory; and a second processor (16) for decompressing and decoding the compressed pixel block received from the memory, by adaptively selecting a set of dequantization codewords from a plurality of sets of dequantization codewords, in response to the received range information of the pixel block.
18. The apparatus according to claim 17, wherein, when decoding the individual pixels on a pixel-by-pixel basis, the second processor removes the offset data from codewords representative of the individual pixel.
19. The apparatus according to claim 17, wherein the first processor decompresses the range data to provide the range information.
20. The apparatus according to claim 17, wherein the range information is derived from a first data field of a data packet and the compressed pixel block is derived from a second data field of the data packet.
21. The apparatus according to claim 17, wherein the second processor acquires the compressed pixel block from one of a plurality of equidistant locations in the memory, each of the plurality of locations storing one of a plurality of compressed pixel blocks representative of the image.
22. The apparatus according to claim 17, wherein the second processor acquires the compressed pixel block from the memory using a predetermined base address value and fixed-length offset values, the offset values corresponding to the predetermined bit length.
23. The apparatus according to claim 17, characterized in that it further includes: a prediction network for processing the prediction error values, and wherein the compressed pixel data represents the prediction error values.
24. The apparatus according to claim 18, wherein the prediction error values are differential pulse code modulation values.
25. In a digital image processing system for receiving a data stream of blocks of compressed image pixels, a method comprising the steps of: decompressing a compressed pixel block; recompressing the data of the decompressed pixel block; encoding the recompressed data into a data format that includes range information of the pixel block, which represents a range of pixel values present in the pixel block, wherein the data format uses a predetermined number of bits to accommodate the recompressed data and the range information; and storing the recompressed pixel block.
26. A method according to claim 25, characterized in that it includes the step of: selecting a set of quantization codewords from a plurality of sets of quantization codewords, in response to the range information.
27. A method according to claim 25, characterized in that it includes the step of encoding individual pixels in the block on a pixel-by-pixel basis by adaptively selecting a codeword from a set of quantization codewords, in response to an indication of the remaining bits not occupied in the data format of the predetermined number of bits.
28. In a digital image processing system for receiving a data stream of blocks of compressed image pixels, a method comprising the steps of: storing a compressed pixel block; receiving range information of the pixel block representing a range of pixel values present in the compressed pixel block in the memory; and decompressing and decoding the compressed pixel block received from the memory, by adaptively selecting a set of quantization codewords from a plurality of sets of quantization codewords, in response to the range information of the compressed pixel block.
29. A method according to claim 28, characterized in that it includes the step of acquiring the compressed pixel block from one of a plurality of equidistant locations in a memory, each of the plurality of locations storing one of a plurality of compressed pixel blocks representative of the image.
30. A method according to claim 28, characterized in that it includes the step of decoding the individual pixels on a pixel-by-pixel basis by removing the offset data from codewords representative of the individual pixel.
31. A method according to claim 28, characterized in that it includes the step of decoding a value representative of the reference pixel differently from other values representative of pixels in the compressed pixel block.
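As an illustration of the fixed-length storage addressing recited in claims 11, 12, 21 and 22, the sketch below derives the location of a compressed pixel block from a base address and a fixed packet length. The constants and names are assumptions introduced only for illustration; they are not specified by the claims.

```c
/* Illustrative addressing of equidistant, fixed-length compressed-block
 * packets (cf. claims 11, 12, 21 and 22). PACKET_BITS and the field
 * layout are assumed values, not taken from the patent.               */
#include <stdint.h>

#define PACKET_BITS  416u                 /* assumed fixed packet length   */
#define PACKET_BYTES (PACKET_BITS / 8u)   /* fixed-length offset per block */

/* Byte address of the Nth compressed pixel block in the frame memory. */
static inline uint32_t block_address(uint32_t base, uint32_t block_index)
{
    return base + block_index * PACKET_BYTES;   /* base address + fixed offset */
}
```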
MXPA/A/1999/005602A 1996-12-17 1999-06-16 Pixel block compression apparatus in an image processing system MXPA99005602A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US60/033,608 1996-12-17
US08911525 1997-08-12

Publications (1)

Publication Number Publication Date
MXPA99005602A true MXPA99005602A (en) 2000-04-24


Similar Documents

Publication Publication Date Title
US6256347B1 (en) Pixel block compression apparatus in an image processing system
US5847762A (en) MPEG system which decompresses and then recompresses MPEG video data before storing said recompressed MPEG video data into memory
US5844608A (en) Picture element processor for a memory management system
US5838597A (en) MPEG-2 decoding with a reduced RAM requisite by ADPCM recompression before storing MPEG-2 decompressed data
EP0947102B1 (en) Memory efficient compression apparatus and quantizer in an image processing system
US5757967A (en) Digital video decoder and deinterlacer, format/frame rate converter with common memory
EP0782341A2 (en) Image data compression system
US5946421A (en) Method and apparatus for compensating quantization errors of a decoded video image by using an adaptive filter
EP0956704B1 (en) Overhead data processor in a memory efficient image processing system
US5614953A (en) Image signal decoding apparatus having an encoding error compensation
US6205250B1 (en) System and method for minimizing clock cycles lost to overhead data in a video decoder
MXPA99005602A (en) Pixel block compression apparatus in an image processing system
JP2698641B2 (en) Color image data encoding method and decoding method
MXPA99005592A (en) Memory efficient compression apparatus and quantizer in an image processing system
KR19980030711A (en) Data Deformatting Circuit of Image Decoder
KR19980030712A (en) Data Formatting Circuit of Image Encoder
MXPA96006644A Image data compression system