WO1998028917A1 - Improved estimator for recovering high frequency components from compressed image data - Google Patents


Info

Publication number
WO1998028917A1
WO1998028917A1 (PCT/US1997/022685)
Authority
WO
WIPO (PCT)
Application number
PCT/US1997/022685
Other languages
French (fr)
Other versions
WO1998028917A9 (en)
Inventor
Angél L. DECEGAMA
Original Assignee
Westford Technology Corporation
Application filed by Westford Technology Corporation
Priority to RU99116256/09A (published as RU99116256A)
Priority to BR9714419-3A (published as BR9714419A)
Priority to IL13050697A (published as IL130506A0)
Priority to CA002275320A (published as CA2275320A1)
Priority to AU53794/98A (published as AU5379498A)
Priority to EP97950916A (published as EP0947101A1)
Priority to JP52881898A (published as JP2001507193A)
Publication of WO1998028917A1
Publication of WO1998028917A9

Classifications

    • H ELECTRICITY; H04 ELECTRIC COMMUNICATION TECHNIQUE; H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Adaptive coding
    • H04N19/115 Selection of the code volume for a coding unit prior to coding
    • H04N19/146 Data rate or code amount at the encoder output
    • H04N19/186 Adaptive coding characterised by the coding unit, the unit being a colour or a chrominance component
    • H04N19/61 Transform coding in combination with predictive coding
    • H04N19/63 Transform coding using sub-band based transform, e.g. wavelets
    • H04N19/80 Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation

Definitions

  • This invention relates to digital signal processing techniques in general and in particular to the use of digital signal processing techniques for compression and decompression of data and for the reliable recovery of high frequency components discarded during compression of the data.
  • the last step of conventional image compression schemes is to apply a lossless coding technique such as Huffman coding or arithmetic coding.
  • the invention comprises a wavelet transform based method and system for estimating missing high frequency components of an image based on those frequency components which are present in the image. As mentioned above, these components may be missing because they were discarded, typically by being set to zero, during compression.
  • the method of the invention can also be used where the high frequency coefficients were never present in the image to begin with.
  • the method of the invention can also estimate these values based on existing pixel values, thereby enhancing the quality of the enlarged image.
  • Application of the method of the invention to the enlargement of an original image starts with the assumption that the original image includes the low frequency components of the wavelet transform of an enlarged image which is four times larger than the original image.
  • the high frequency coefficients of the enlarged image are then estimated from the low frequency coefficients in the original image. This is followed by the application of the inverse wavelet transform.
  • the result is an image which is not only four times larger but is also of enhanced quality because the resolution has been doubled.
  • This process can be performed repeatedly to obtain successively larger images.
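The enlargement step above can be made concrete with a deliberately simplified sketch: treat the image as the low-frequency sub-band of an image twice as large per axis and apply one inverse Haar step along each axis. The invention's estimator would supply the missing high-frequency coefficients; here they are zeroed purely for illustration, in which case the inverse 0.5/±0.5 Haar step degenerates to pixel replication. The function name and the use of NumPy are assumptions, not taken from the patent.

```python
import numpy as np

def enlarge_once(image):
    """Treat `image` as the low-frequency sub-band of an image that is
    twice as large along each axis (four times larger in area) and
    apply one inverse Haar step per axis.

    The missing high-frequency coefficients are zeroed here; in the
    invention they would instead be estimated from the low frequencies.
    With zero high bands, the inverse 0.5/+-0.5 Haar step reduces to
    pixel replication: each pair becomes (l + 0, l - 0).
    """
    img = np.asarray(image, dtype=float)
    img = np.repeat(img, 2, axis=1)  # inverse step along rows
    img = np.repeat(img, 2, axis=0)  # inverse step along columns
    return img
```

Applying `enlarge_once` repeatedly yields successively larger images, mirroring the repeated application described above.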
  • Application of the method of the invention to the compression of an image and to the subsequent estimation of the discarded high frequency coefficients begins with applying the wavelet transform to the original image and discarding some or all of the high frequency coefficients.
  • the wavelet transform is then applied to the remaining coefficients of the wavelet transform of the original image.
  • this process optionally continues for three additional levels of transformation. At this level, only those high frequency coefficients most important for perception are kept. These must then be efficiently encoded, typically by a lossless arithmetic encoding algorithm.
  • the compressed file is decoded to recreate the wavelet transform up to the number of levels performed in the compression step described above.
  • the inverse wavelet transform produces an image which is, for example, ¼ of the original image size.
  • This image is then enlarged using the method of the invention as described above in connection with image enlargement. This results in a full size reproduced image.
  • the high frequency coefficients of the first two levels of the wavelet transform of each frame are discarded resulting in frames that are 1/16 of the original size.
  • the amount of data to be processed per second is 480 times smaller than with state-of-the-art approaches (e.g., MPEG-1, MPEG-2, H.231, H.234).
  • the two-level expansion of each reproduced frame at 1/16 the original size produces a full-size video sequence of high quality.
  • the ability to estimate the high frequency coefficients of the wavelet transform of the one-dimensional sound signal from the low frequency coefficients also results in high levels of compression and improved signal quality.
  • processing speed is critical and compression ratio is of secondary importance.
  • the ability to perform compression in real-time is important.
  • the Haar wavelet transform is used to compress the image on a line-by-line or block-by-block basis. Because of the simplification introduced by the use of the Haar wavelet transform, convolution of the input signal by the filter coefficients reduces to the multiplication of the sum of two adjacent pixel values by the filter coefficients followed by a one pixel shift. This procedure is one that can readily be implemented in hardware with its concomitant increase in performance.
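The Haar convolution just described can be sketched in a few lines. With analysis coefficients 0.5, 0.5 (low-pass) and 0.5, −0.5 (high-pass), each output sample is half the sum or difference of two adjacent pixels; the sketch below uses the two-pixel shift of the decimated transform described elsewhere in the document. This is an illustrative sketch, not the patent's hardware implementation.

```python
def haar_analysis(row):
    """One level of the Haar wavelet analysis of a scan line.

    Low-pass taps are (0.5, 0.5) and high-pass taps are (0.5, -0.5),
    so each output sample is half the sum (or difference) of two
    adjacent pixels, after which the window advances by two pixels.
    """
    low = [0.5 * (row[i] + row[i + 1]) for i in range(0, len(row) - 1, 2)]
    high = [0.5 * (row[i] - row[i + 1]) for i in range(0, len(row) - 1, 2)]
    return low, high
```

Because only additions, a sign flip, and a halving are involved, the same procedure maps naturally onto simple hardware, as the text notes.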
  • the system according to the invention optionally passes the retained frequency components to a low-pass synthesis filter which is biorthogonal to the low-pass analysis filter used to generate the wavelet transform of the image. This results in the generation of the low-frequency sub-band of the original image.
  • the retained frequency components are also passed to an estimation system which generates an estimate of the high-frequency sub-band of the image.
  • the low frequency sub-band and the high frequency sub-band are then combined at a combining stage to form the original image.
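For the Haar case, the combining of the low and high frequency sub-bands is easy to make concrete: with analysis filters (0.5, 0.5) and (0.5, −0.5), the corresponding synthesis step reconstructs each pixel pair as (l + h, l − h). A minimal sketch with illustrative names, not code from the patent:

```python
def haar_synthesis(low, high):
    """Combine Haar low- and high-frequency coefficients.

    Inverting the (0.5, 0.5) / (0.5, -0.5) analysis filters, each pair
    of reconstructed pixels is (l + h, l - h).
    """
    row = []
    for l, h in zip(low, high):
        row.extend([l + h, l - h])
    return row
```

When the high coefficients are exact, this combining stage reproduces the original scan line exactly, which is why good estimates of the missing high frequencies translate directly into reconstruction quality.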
  • the estimation system includes an estimation filter having a transfer function derived from the wavelet transform's high and low frequency analysis filters and their corresponding biorthogonal synthesis filters. The output of this estimation filter is then filtered by the wavelet transform's high-frequency synthesis filter before being combined, at the combining stage, with the output of the low-frequency synthesis filter.
  • the output of the estimation filter is used as a starting estimate which is iteratively refined at a refining stage.
  • the preferred iterative method is a conjugate gradient method.
  • those high frequency coefficients which were retained rather than discarded are clamped at their known values during successive iterations of the process executed by the refining stage.
  • the output of the refining stage is then filtered by the wavelet transform's high-frequency synthesis filter before being combined, at the combining stage, with the output of the low-frequency synthesis filter.
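The clamping idea behind the refining stage can be sketched as follows. The patent's preferred solver is a conjugate gradient method; the fragment below substitutes plain gradient descent on a generic least-squares objective to keep the sketch short, but the clamping step, which re-imposes the retained coefficient values after every iteration, is the same in either solver. The names and the objective ||Ac − b||² are illustrative assumptions.

```python
import numpy as np

def refine_with_clamping(A, b, c0, known, n_iter=200, step=0.1):
    """Iteratively refine an estimate c of high-frequency coefficients
    by minimising ||A c - b||^2.

    `known` maps coefficient index -> retained (known) value; those
    entries are clamped after every iteration, as in the patent's
    refining stage. Plain gradient descent stands in here for the
    conjugate gradient method.
    """
    c = np.asarray(c0, dtype=float).copy()
    for idx, val in known.items():
        c[idx] = val
    for _ in range(n_iter):
        grad = 2.0 * A.T @ (A @ c - b)   # gradient of the objective
        c -= step * grad
        for idx, val in known.items():   # clamp retained coefficients
            c[idx] = val
    return c
```

Clamping restricts the search to estimates consistent with the coefficients that survived thresholding, which is what steers the iteration toward the intended minimum.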
  • the estimation system in yet another embodiment of the invention, used in the case in which there are no known high frequency coefficients, includes an estimation filter having a transfer function derived from the wavelet transform's high and low frequency analysis filters and their corresponding biorthogonal synthesis filters. The output of this estimation filter is then combined, at the combining stage, with the output of the low-frequency synthesis filter. In this case, there is no need to filter the output of the estimation filter with the wavelet transform's high-frequency synthesis filter before passing it to the combining stage.
  • FIG. 1 shows a system for evaluating the wavelet transform of an original image, compressing the wavelet transform after discarding its high frequency components, and then recovering the discarded high frequency components to generate a reconstructed image;
  • FIG. 2 shows the data recovery stage of FIG. 1 ;
  • FIG. 3 shows an embodiment of the data recovery stage shown in FIG. 1 in which the estimation system processes only the low frequency coefficients to obtain an estimate of the original high frequency coefficients;
  • FIG. 4 shows a prior art process for reconstructing an image based on the high frequency and low frequency components of its wavelet transform
  • FIG. 5 shows the data recovery stage of FIG. 1 in which the estimation system relies on known high frequency coefficients to iteratively converge on an estimate of the missing high frequency coefficient;
  • FIG. 6 shows the data recovery stage of FIG. 1 which uses a single filter to estimate high frequency components from low frequency components
  • FIG. 7 shows an image to be processed by the system of FIG. 1
  • FIG. 8 shows the image of FIG. 7 after application of the high and low frequency analysis filters to the rows of the image
  • FIG. 9 shows the image of FIG. 8 after application of the high and low frequency analysis filters to the columns of the image
  • FIG. 10 shows the image of FIG. 7 after three levels of the transformation described in FIGS. 8 and 9;
  • FIG. 11a shows an image such as that shown in FIG. 7 with its wavelet transform coefficients separated into three sub-bands;
  • FIG. 11b shows two alternative paths to reconstructing the left image L from quadrants a and b;
  • FIG. 11c shows two alternative paths to reconstructing the complete image I using the two sub-bands L and R.
  • a system 29 incorporating the invention includes a wavelet transform stage 17 having a low frequency analysis filter 10 and a high frequency analysis filter 14 for evaluating the wavelet transform of an original image 11.
  • the original image 11 can be provided by a scanner, a digital camera, a photocopy machine or any other device generating a digital signal representative of an image.
  • the source of the original image 11 can also be another wavelet transform stage having a pair of analysis filters identical to the low and high frequency analysis filters 10, 14 shown in FIG. 1.
  • the wavelet transform generated by the wavelet transform stage 17 includes a low frequency portion 12 generated by convolving the original image with the low frequency analysis filter 10 and a high frequency portion 18a generated by convolving the original image with the high frequency analysis filter 14.
  • a thresholding stage 16 which compares the high frequency portion 18a of the original image 11 with a preselected threshold value. If the wavelet transform coefficient associated with a portion of the image falls on one side of the threshold, that coefficient is disregarded, typically by setting it to zero. If it falls on the other side, that coefficient is retained, typically by passing it through unchanged. This thresholding process results in a diminished high frequency portion 18.
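The thresholding operation itself is trivial to express. In the sketch below (illustrative, with "falls on one side of the threshold" taken to mean magnitude below the threshold), small coefficients are disregarded by setting them to zero and the rest pass through unchanged:

```python
def threshold_coefficients(coeffs, threshold):
    """Keep only coefficients whose magnitude reaches the threshold;
    the rest are disregarded by setting them to zero, producing the
    diminished high frequency portion."""
    return [c if abs(c) >= threshold else 0.0 for c in coeffs]
```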
  • the diminished high-frequency portion 18 and the low frequency portion 12 of the original image 11 are compressed in a conventional manner at a compression stage 13.
  • the compressed image can then be stored or transmitted in its compressed form.
  • the compressed image must be decompressed and the disregarded high frequency portions of the original image must be estimated.
  • the decompression step is performed in a conventional fashion and is not shown in FIG. 1.
  • the process of estimating the disregarded high frequency portions to generate a faithful reproduction of the original image 11a takes place in the data recovery stage 19 which is described below in connection with FIGS. 2-6.
  • the wavelet transform of an original image 11 is obtained by first convolving each row of the image with two orthogonal filters: a high-pass filter 14 and a low-pass filter 10 as depicted in FIG. 1. These filters, which are also referred to as high and low frequency analysis filters respectively, are obtained from the coefficients of the scaling function defining the wavelet basis used in the transformation. Thus, the effect of the wavelet transform is to determine the high frequency and low frequency spatial energy distribution of the image.
  • the low frequency spatial distribution of the original image 11 is represented by the low-pass filtered image 12 and the high frequency spatial distribution is represented by the high-pass filtered image 18.
  • FIG. 1 illustrates a wavelet transform stage 17 for performing one step in the performance of successive wavelet transforms.
  • Each row of the image 11 is convolved with a low-pass filter 10. However, the convolution proceeds by shifting by two pixels rather than by a single pixel. This results in a low-pass filtered image 12 half as wide as the original image 11.
  • Each row of the image 11 is also convolved with a high pass filter 14 biorthogonal to the low-pass filter 10.
  • the convolution again proceeds by shifting two pixels at a time rather than by a single pixel. This shift results in a high-pass filtered image 18a that is likewise half as wide as the original image.
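A shift-by-two convolution of this kind, producing an output half as wide as the input, might be sketched with NumPy as follows. The function name is an assumption; note that `np.convolve` reverses the filter taps, which makes no difference for the symmetric low-pass Haar taps used in the example.

```python
import numpy as np

def convolve_stride2(row, taps):
    """Convolve a scan line with filter taps, advancing two pixels per
    output sample, so the filtered row is half as wide as the input.

    np.convolve reverses `taps`; for the symmetric low-pass Haar taps
    (0.5, 0.5) this has no effect.
    """
    full = np.convolve(np.asarray(row, dtype=float),
                       np.asarray(taps, dtype=float), mode="valid")
    return full[::2]  # keep every second sample: the two-pixel shift
```

Running the same routine on the rows and then on the columns of an image yields the four sub-band images described in connection with FIG. 9.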
  • the high-frequency wavelet transform coefficients generating the high-pass filtered image 18 are then compared with a preselected threshold at a thresholding stage 16. Those coefficients falling to one side of this threshold are set to zero. The remaining coefficients are left unchanged.
  • the reduced set of high frequency wavelet transform coefficients thus formed generates a post-threshold high-pass filtered image 18 half as wide as the original image 11 and containing those high frequency components that are important for human perception.
  • the low-pass filtered image shown on the left hand side of FIG. 8 is clearly recognizable as a distorted version of the image in FIG. 7.
  • the high-pass filtered image, shown on the right-hand side of FIG. 8, is, however, only barely recognizable. This is because, as mentioned above, it is the low frequency components of an image that are most important for human visual perception.
  • the wavelet transform stage 17 convolves the low and high frequency analysis filters 10, 14 with the columns of the original image 11.
  • the resulting four images, shown in FIG. 9, represent the wavelet transform coefficients present in four frequency bands. The lowest frequencies are in the upper-left image, the highest are in the lower-right image, and intermediate frequencies are in the remaining two images. That the upper-left image is the most recognizable is also no coincidence since, as stated above, it is the low-frequency components of the image that are most important for human perception.
  • FIG. 7 shows an original image
  • FIG. 10 shows its wavelet transform carried out to four levels of transformation.
  • the many black areas in FIG. 10 represent wavelet transform coefficients that are either zero or very small. The abundance of such coefficients is useful in providing a more compact representation of the original image 11.
  • the choice of filter coefficients for the analysis filters 10, 14 determines the wavelet basis used in the transform. This basis must be well localized in both spatial and frequency domains and, in order to avoid redundancy that hinders compression, it must constitute a biorthogonal set.
  • the filter coefficients for the analysis filters 10, 14 are chosen to implement the Haar wavelet transform. This is accomplished by choosing the filter coefficients for the high pass filter 14 to be 0.5 and -0.5 and the filter coefficients for the low pass filter 10 to be 0.5 and 0.5.
  • convolution of the original image 11 can be accomplished efficiently on a scan line-by-scan line basis by halving the sum (or the difference in the case of the high-pass filter), of two adjacent pixel values, shifting one pixel, and repeating the process.
  • the foregoing convolution algorithm is amenable to economical implementation in hardware. For applications in which real-time compression is critical, for example in photocopiers, scanners, digital cameras, or printers, the loss in compression ratio is more than offset by the increased throughput resulting from a hardware implementation of the convolution step.
  • the foregoing hardware implementation can occur either in the course of compression or decompression.
  • a scanner it may be more convenient to implement the convolution in hardware at the scanner and to decompress the resulting image at the host in software.
  • a printer it may be more convenient to perform the compression in software at the host and to implement the decompression step in hardware at the printer.
  • a photocopier which can be thought of as a scanner and printer working together, one can save memory and improve performance by implementing both the compression and the decompression in hardware.
  • the threshold stage 16 incorporates a preselected threshold for determining whether or not a particular high frequency component is to be kept.
  • the selection of this threshold requires consideration of the frequency dependent characteristics of human perception to determine what transform coefficients to keep in order to achieve a particular compression ratio.
  • the per-level threshold fractions, expressed relative to the maximum absolute coefficient value at each level, are:
    Level 4: 0.01
    Level 3: 0.04
    Level 2: 0.16
    Level 1: 0.64
  • the method discards all high-pass coefficients of level 4 that are below 1% of the maximum absolute value of the coefficients of level 4.
  • the other thresholds are 4% of the maximum for level 3, 16% of the maximum for level 2, and 64% of the maximum for level 1.
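These per-level thresholds are simple to compute once the maximum absolute coefficient of each level is known. A sketch with assumed names, not code from the patent:

```python
def level_thresholds(max_abs_by_level):
    """Threshold for each level as a fraction of that level's maximum
    absolute coefficient: 1% (level 4), 4% (level 3), 16% (level 2),
    64% (level 1), reflecting the frequency sensitivity of human
    perception."""
    fractions = {4: 0.01, 3: 0.04, 2: 0.16, 1: 0.64}
    return {level: fractions[level] * max_abs
            for level, max_abs in max_abs_by_level.items()}
```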
  • the compression stage 13 encodes, using the fewest possible bits, the remaining wavelet transform coefficients associated with each sub-band. Two values must be encoded: the location within the sub-band, and the value (including the sign) of each wavelet transform coefficient. A coefficient's location within the sub-band is expressed as the distance in rows to either the previous non-zero coefficient or, in the case of the first non-zero coefficient of the sub-band, the distance to the upper left corner of the sub-band. These location values must be encoded exactly.
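The location coding described above can be illustrated on a sub-band flattened to one dimension, where the distance to the upper left corner becomes the offset of the first non-zero coefficient from the start. The pair layout and function name are illustrative assumptions, not the patent's exact format:

```python
def encode_locations(subband):
    """Emit (distance, value) pairs for the non-zero coefficients of a
    flattened sub-band: distance from the previous non-zero
    coefficient, or from the start of the sub-band for the first one.
    Distances are encoded exactly; only coefficient values are later
    quantized."""
    pairs, prev = [], 0
    for i, v in enumerate(subband):
        if v != 0:
            pairs.append((i - prev, v))
            prev = i
    return pairs
```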
  • a coefficient's values can be encoded efficiently by dividing the interval between the maximum and minimum threshold values into quantization bins. If the number of quantization bins is large enough, given the difference between the maximum and minimum absolute values at each level, the quantization error will not be noticeable.
  • a preferred value for the number of quantization bins in this embodiment of the invention is 32.
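Uniform quantization into 32 bins might be sketched as below; the interval endpoints, the centre-of-bin reconstruction, and the names are assumptions for illustration:

```python
def quantize(value, vmin, vmax, nbins=32):
    """Map a coefficient value into one of `nbins` equal-width
    quantization bins spanning [vmin, vmax]."""
    width = (vmax - vmin) / nbins
    return min(int((value - vmin) / width), nbins - 1)

def dequantize(bin_index, vmin, vmax, nbins=32):
    """Reconstruct a coefficient as the centre of its bin; with enough
    bins the resulting quantization error is not noticeable."""
    width = (vmax - vmin) / nbins
    return vmin + (bin_index + 0.5) * width
```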
  • the next step is to apply a lossless coding scheme such as arithmetic coding to obtain the final compressed binary file.
  • the first step in decompression of the compressed image is to arithmetically decode the binary compressed file. Then the coefficient values and locations are calculated and the wavelet transform of the original data, in which most if not all coefficients of the higher frequency sub-bands are zero, is recreated.
  • FIG. 2 shows a data recovery stage 19 according to the invention having a low frequency synthesis filter 22 which corresponds to the low frequency analysis filter 10 of the wavelet transform stage 17 and an estimation system 28.
  • the low frequency wavelet transform coefficients 12 are convolved with the low frequency synthesis filter 22 and the result of the convolution is passed to a combining stage 23.
  • the low frequency wavelet transform coefficients 12 are also passed to an estimation system 28 which estimates the discarded wavelet transform coefficients from the low frequency wavelet transform coefficients.
  • the estimation system also uses the high frequency wavelet transform coefficients 18 that exceeded the threshold at the thresholding stage 16.
  • Wavelets are functions generated from a single function ψ by dilations and translations.
  • the basic idea of the wavelet transform is to represent an arbitrary function f as a superposition of wavelets.
  • ψ̃ is a function associated with the corresponding synthesis filter coefficients defined below.
  • I is the identity matrix.
  • let x^(J+1) be a vector of low frequency wavelet transform coefficients at scale J+1.
  • FIG. 4 shows low-pass filtered wavelet transform coefficients representative of the low frequency image 12 being passed through a low-frequency synthesis filter 22 corresponding to the low-pass filter 10. Similarly, high-pass filtered wavelet transform coefficients representative of the high-frequency image 18 are passed through a high-frequency synthesis filter 21. The outputs of both synthesis filters 21, 22 are combined at a combining stage 23 to generate the original image 11. However, if some of the high frequency coefficients c^(J+1) have been discarded, then the reconstructed x^J will lack details that would have been provided by the missing coefficients.
  • Equation (13a) can be written
  • G^J is the regularization operator and λ is a positive scalar such that λ → 0 as the accuracy of x^(J+1) increases.
  • G^J must be a high-pass filter.
  • since H^J is the low-pass filter matrix of the biorthogonal wavelet transform, G^J must be the corresponding high-pass filter matrix.
  • Equation (15) may also be written with respect to the estimated wavelet transform coefficients c^(J+1) and x^(J+1).
  • the superscript T refers to the matrix transpose
  • a data recovery stage 19 for implementing equations (18) and (19), shown in FIG. 3, provides a way to estimate the high frequency components c^(J+1) of the image using only the low frequency components x^(J+1) and matrices derived from the known properties of the biorthogonal filters G, G̃, H, and H̃.
  • FIG. 3 shows the wavelet transform coefficients representative of the low-frequency image 12 being passed through a low-frequency synthesis filter 22 as they were in FIG. 4. However, unlike the system of FIG. 4, these same wavelet transform coefficients are used to estimate the high-frequency coefficients representative of the high-frequency image 18. This is accomplished by passing the low-frequency coefficients to an estimation system 28 consisting of an estimation filter 24 followed by a high-frequency synthesis filter 21.
  • the matrix M associated with the estimation filter 24 can be precalculated for the selected biorthogonal wavelet set.
  • the output of the estimation filter 24 is then filtered by the high-frequency synthesis filter 21.
  • the outputs of both synthesis filters 21, 22 are then combined at the combining stage 23 as was the case in FIG. 4.
  • the combining stage output represents an estimate of the original image 11a.
  • the estimation system 28 of FIG. 3 provides a good initial estimate of c^(J+1), the missing wavelet transform high-frequency coefficients.
  • this estimate can be further refined by an iterative conjugate gradient algorithm using the above initial estimate of c^(J+1) and an initial search direction given by the gradient vector ∇J(c^(J+1)).
  • the search for the global minimum of J is greatly helped by clamping the known values of the vector c^(J+1).
  • FIG. 5 shows a data recovery stage 19 incorporating this clamping function.
  • the illustrated data recovery stage 19 is similar to that depicted in FIG. 3 with the exception that the estimation system 28 includes a refinement stage 25 implementing the conjugate gradient method interposed between the estimation filter 24 and the high- frequency synthesis filter 21.
  • the refinement stage 25 accepts known values of the high-frequency wavelet transform coefficients 18 and clamps them at those values throughout the iterations of the conjugate gradient algorithm.
  • the actual values of the inverse wavelet transform, i.e., the x^J, can be calculated directly without first calculating the c^(J+1)
  • T = G̃^J ( λI + G̃^(JT) H̃^(JT) H̃^J G̃^J )^(−1) G̃^(JT) H̃^(JT)
  • FIG. 6 depicts a data recovery stage 19 for implementing the foregoing method.
  • the illustrated data recovery stage 19 is similar to that shown in FIG. 3 with the exceptions that the high-pass synthesis filter 21 is no longer necessary and that the estimation filter 26 incorporates the matrix T given in Equation (22). To reduce computation time, T is precalculated for a given biorthogonal wavelet set. In the system of FIG. 6, the output of the estimation filter 26 is passed directly to the combining stage 23.
  • the data recovery stage 19 of FIG. 6 is particularly useful for enlarging an image.
  • enlarging an image can be accomplished by inserting additional wavelet transform coefficients between known wavelet transform coefficients. The values of these additional wavelet transform coefficients can be estimated using the data recovery stage 19 of FIG. 6.
  • The decompression procedure is illustrated in FIG. 11 for one level of the wavelet transform of data representing an image.
  • quadrant a represents the low frequency sub-band and quadrant b and half R represent the higher frequency sub-bands in increasing order.
  • FIG. 11b shows the process of recovering the left side L of a given transform level. If b is empty, i.e., if there are no known high frequency coefficients, matrices T and H are used to compute the columns of L directly, one by one. If the most important coefficients of b are known, then matrix M is used to compute an initial estimate of a given column. This estimate is refined by the conjugate gradient method with clamping of the known coefficients to obtain a complete set b* of high frequency coefficients.
  • the inverse wavelet transform on a and b* gives the left side L.
  • combining L and R, respectively, by rows, we obtain the reproduction of the entire level, which is either the low frequency component of the next level or the final decompressed image I if the level is 1, as shown in FIG. 11c.
  • This reconstruction process is applied to the luminance and chrominance components, the only difference being that no clamping is normally required for the chrominance components since adequate estimates of the high frequency coefficients can be obtained from the low frequency coefficients alone.
  • This approach results in higher compression and higher quality reproduced images than any other known method. It will thus be seen that the invention efficiently attains the objects set forth above.

Abstract

A method of compressing and decompressing digitally encoded data resulting in improved compression ratios is disclosed. The method utilizes the wavelet transform and the frequency response of human perception to determine which transform coefficients are important for perception. The method estimates the wavelet transform's discarded high frequency coefficients. At each level of the inverse transform, the method estimates the missing high frequency coefficients based on the complete set of low frequency coefficients and the filter coefficients. The resulting inverse wavelet transform is a high quality reproduction of the original image.

Description

IMPROVED ESTIMATOR FOR RECOVERING HIGH FREQUENCY COMPONENTS FROM COMPRESSED IMAGE DATA
FIELD OF THE INVENTION
This invention relates to digital signal processing techniques in general and in particular to the use of digital signal processing techniques for compression and decompression of data and for the reliable recovery of high frequency components discarded during compression of the data.
BACKGROUND
The wavelet transform has been shown to be better than traditional Fourier methods for representing the different spatial frequencies (edges and textures) in images. This is important for image compression since it is well known that humans are more sensitive to low and medium frequencies than they are to high frequencies. As a result, many compression schemes quantize lower frequency coefficients more finely than they do higher frequency coefficients.
It is also well known that detail barely perceptible in a given frequency sub-band must be at least four times as intense in the next higher frequency sub-band to be perceivable. One conventional compression approach, vector quantization, divides the transformed image into contiguous blocks of coefficients, e.g., 4 x 4, 8 x 8, etc., and matches each block against a dictionary or code book of blocks belonging to a sufficiently large training set. A pointer to the closest match in the dictionary is saved in place of a transform block, thus achieving compression. This method requires a complex algorithm to build an efficient dictionary for describing an ensemble of data. The disadvantage in this approach is that it is not general enough for any type of data. After quantization of the transform coefficients, the last step of conventional image compression schemes is to apply a lossless coding technique such as Huffman coding or arithmetic coding. The disadvantage of conventional approaches is that quantization results in artifacts in the decompressed images that are very visible at medium to high compression ratios. As a result, compression ratios resulting in reproduced images of acceptable quality are rather modest.
Broadly, it is an object of the present invention to overcome the most serious limitations of current approaches to data compression by taking advantage of the mathematics of wavelet theory and of the frequency sensitive characteristics of human perception. This and other objects of the present invention will become apparent to those skilled in the art from the following description of the invention.
SUMMARY OF THE INVENTION
It is well known that human visual perception relies most heavily on the low frequency components of an image. Consequently, it is advantageous, when compressing an image, to discard those portions of an image which contain high frequencies and which are therefore less important for human perception. This permits a file representative of the image to require less storage space.
Although the high frequency coefficients are less important for human visual perception, their presence nevertheless enhances the overall clarity of the image. High frequency components are associated with edges in an image, hence their presence tends to sharpen edges and delineate boundaries more precisely. For this reason, it is advantageous to incorporate the high frequency components back into the image before displaying it. However, since the high frequency components were presumably discarded in the interest of reducing storage requirements for the image, they must somehow be estimated. The invention comprises a wavelet transform based method and system for estimating missing high frequency components of an image based on those frequency components which are present in the image. As mentioned above, these components may be missing because they were discarded, typically by being set to zero, during compression. However, the method of the invention can also be used where the high frequency coefficients were never present in the image to begin with. For example, in the course of image enlargement, it is desirable to assign a pixel value to the gaps that form between pixels as the image grows. The method of the invention can also estimate these values based on existing pixel values, thereby enhancing the quality of the enlarged image. Application of the method of the invention to the enlargement of an original image starts with the assumption that the original image includes the low frequency components of the wavelet transform of an enlarged image which is four times larger than the original image. The high frequency coefficients of the enlarged image are then estimated from the low frequency coefficients in the original image. This is followed by the application of the inverse wavelet transform. The result is an image which is not only four times larger but is also of enhanced quality because the resolution has been doubled. 
This process can be performed repeatedly to obtain successively larger images.

Application of the method of the invention to the compression of an image and to the subsequent estimation of the discarded high frequency coefficients begins with applying the wavelet transform to the original image and discarding some or all of the high frequency coefficients. The wavelet transform is then applied to the remaining coefficients of the wavelet transform of the original image. In a preferred practice of the invention, this process optionally continues for three additional levels of transformation. At this level, only those high frequency coefficients most important for perception are kept. These must then be efficiently encoded, typically by a lossless arithmetic encoding algorithm. For decompression, the compressed file is decoded to recreate the wavelet transform up to the number of levels performed in the compression step described above. The inverse wavelet transform produces an image which is, for example, 1/4 of the original image size. This image is then enlarged using the method of the invention as described above in connection with image enlargement. This results in a full size reproduced image.
This approach results in higher levels of compression and reproduced image quality than is possible without the use of the novel expansion technique.
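The enlargement procedure described above can be sketched in a few lines. In the sketch below, a minimal illustration under our own assumptions, the three high frequency quadrants are simply zero-filled as a placeholder for the estimates that the invention would supply; with the (0.5, 0.5) Haar pair described later, zero-filled high bands make the inverse transform degenerate to 2x2 pixel replication, which is precisely the blur the estimated coefficients are meant to remove:

```python
def enlarge_once(image):
    """Double an image's width and height by treating it as the low-frequency
    quadrant of a one-level Haar wavelet transform and applying the inverse
    transform.  The three high-frequency quadrants are zero-filled here; in
    the invention they would instead be estimated from the low band.  With
    zeroed high bands and the (0.5, 0.5) Haar pair, the inverse transform
    reduces to 2x2 pixel replication.
    """
    out = []
    for row in image:
        wide = [v for v in row for _ in (0, 1)]  # inverse transform along rows
        out.append(wide)                          # inverse transform along columns
        out.append(list(wide))
    return out
```

Calling this function repeatedly yields successive doublings, mirroring the repeated application of the method described above.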
For data representing a video signal, the high frequency coefficients of the first two levels of the wavelet transform of each frame are discarded, resulting in frames that are 1/16 of the original size. Thus, at 30 frames per second, the amount of data to be processed per second is 480 times smaller than with state-of-the-art approaches (e.g., MPEG-1, MPEG-2, H.231, H.234). This results in extremely high compression ratios of up to 10,000:1, enough to make possible the real-time transmission of full-size, full-color, full-motion video through telephone lines using 28.8 Kbps modems. For decompression, the two-level expansion of each reproduced frame at 1/16 the original size produces a full-size video sequence of high quality.
For data representing an acoustic signal, the ability to estimate the high frequency coefficients of the wavelet transform of the one-dimensional sound signal from the low frequency coefficients also results in high levels of compression and improved signal quality.
In some applications, processing speed is critical and compression ratio is of secondary importance. For example, in scanners, printers and photocopiers, the ability to perform compression in real-time is important. These applications call for an embodiment of the invention in which the Haar wavelet transform is used to compress the image on a line-by-line or block-by-block basis. Because of the simplification introduced by the use of the Haar wavelet transform, convolution of the input signal by the filter coefficients reduces to the multiplication of the sum of two adjacent pixel values by the filter coefficients followed by a one pixel shift. This procedure is one that can readily be implemented in hardware with its concomitant increase in performance.

To use the retained low frequency components of the wavelet transform to recover the missing high frequency components, the system according to the invention optionally passes the retained frequency components to a low-pass synthesis filter which is biorthogonal to the low-pass analysis filter used to generate the wavelet transform of the image. This results in the generation of the low-frequency sub-band of the original image. The retained frequency components are also passed to an estimation system which generates an estimate of the high-frequency sub-band of the image. The low frequency sub-band and the high frequency sub-band are then combined at a combining stage to form the original image.
In one optional embodiment, the estimation system includes an estimation filter having a transfer function derived from the wavelet transform's high and low frequency analysis filters and their corresponding biorthogonal synthesis filters. The output of this estimation filter is then filtered by the wavelet transform's high-frequency synthesis filter before being combined, at the combining stage, with the output of the low-frequency synthesis filter.
In another optional embodiment of the invention, the output of the estimation filter is used as a starting estimate which is iteratively refined at a refining stage. The preferred iterative method is a conjugate gradient method. Preferably, those high frequency coefficients which were retained rather than discarded are clamped at their known values during successive iterations of the process executed by the refining stage. The output of the refining stage is then filtered by the wavelet transform's high-frequency synthesis filter before being combined, at the combining stage, with the output of the low-frequency synthesis filter.
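The refinement-with-clamping idea can be sketched, in outline, as a least squares solve over only the unknown coefficients while the retained coefficients are held at their known values. In the sketch below, which is our own illustration and not the patent's implementation, the matrix A stands in for the relevant filter operator and all names are assumptions:

```python
import numpy as np

def conjugate_gradient(A, b, iters=100, tol=1e-12):
    """Plain conjugate gradient for a symmetric positive-definite matrix A."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        if np.sqrt(rs) < tol:
            break
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def refine_with_clamping(A, b, c0, known):
    """Refine an initial coefficient estimate c0 by least squares,
    min ||A c - b||^2, while clamping the entries whose indices appear in
    `known` at their retained values: only the free entries are solved for,
    via conjugate gradient on the normal equations."""
    idx = sorted(known)
    free = [i for i in range(len(c0)) if i not in known]
    Af, Ak = A[:, free], A[:, idx]
    rhs = b - Ak @ c0[idx]          # move the clamped contribution to the right side
    c = c0.copy()
    c[free] = conjugate_gradient(Af.T @ Af, Af.T @ rhs)
    return c
```

Solving only for the free entries guarantees the clamped coefficients survive every iteration, which is the point of the clamping described above.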
In yet another embodiment of the invention, used in the case in which there are no known high frequency coefficients, the estimation system includes an estimation filter having a transfer function derived from the wavelet transform's high and low frequency analysis filters and their corresponding biorthogonal synthesis filters. The output of this estimation filter is then combined, at the combining stage, with the output of the low-frequency synthesis filter. In this case, there is no need to filter the output of the estimation filter with the wavelet transform's high-frequency synthesis filter before passing it to the combining stage.
BRIEF DESCRIPTION OF THE DRAWINGS
The foregoing and other objects, features and advantages of the invention will be apparent from the following description and the accompanying drawings, in which like reference characters refer to the same parts throughout the different views.
FIG. 1 shows a system for evaluating the wavelet transform of an original image, compressing the wavelet transform after discarding its high frequency components, and then recovering the discarded high frequency components to generate a reconstructed image;
FIG. 2 shows the data recovery stage of FIG. 1 ;
FIG. 3 shows an embodiment of the data recovery stage shown in FIG. 1 in which the estimation system processes only the low frequency coefficients to obtain an estimate of the original high frequency coefficients;
FIG. 4 shows a prior art process for reconstructing an image based on the high frequency and low frequency components of its wavelet transform;
FIG. 5 shows the data recovery stage of FIG. 1 in which the estimation system relies on known high frequency coefficients to iteratively converge on an estimate of the missing high frequency coefficients;
FIG. 6 shows the data recovery stage of FIG. 1 in which a single filter is used to estimate high frequency components from low frequency components;
FIG. 7 shows an image to be processed by the system of FIG. 1;
FIG. 8 shows the image of FIG. 7 after application of the high and low frequency analysis filters to the rows of the image;
FIG. 9 shows the image of FIG. 8 after application of the high and low frequency analysis filters to the columns of the image;
FIG. 10 shows the image of FIG. 7 after three levels of the transformation described in FIGS. 8 and 9;
FIG. 11a shows an image such as that shown in FIG. 7 with its wavelet transform coefficients separated into three sub-bands;
FIG. 11b shows two alternative paths to reconstructing the left image L from quadrants a and b; and
FIG. 11c shows two alternative paths to reconstructing the complete image I using the two sub-bands L and R.
DETAILED DESCRIPTION
System Overview
Referring to FIG. 1, a system 29 incorporating the invention includes a wavelet transform stage 17 having a low frequency analysis filter 10 and a high frequency analysis filter 14 for evaluating the wavelet transform of an original image 11. The original image 11 can be provided by a scanner, a digital camera, a photocopy machine or any other device generating a digital signal representative of an image. The source of the original image 11 can also be another wavelet transform stage having a pair of analysis filters identical to the low and high frequency analysis filters 10, 14 shown in FIG. 1. The wavelet transform generated by the wavelet transform stage 17 includes a low frequency portion 12 generated by convolving the original image with the low frequency analysis filter 10 and a high frequency portion 18a generated by convolving the original image with the high frequency analysis filter 14.
Because human visual perception depends primarily on the low frequency components of the image, it is desirable, when compressing an image, to disregard selected high frequency components of the image. This is accomplished by a thresholding stage 16 which compares the high frequency portion 18a of the original image 11 with a preselected threshold value. If the wavelet transform coefficient associated with a portion of the image falls on one side of the threshold, that coefficient is disregarded, typically by setting it to zero. If it falls on the other side, that coefficient is retained, typically by passing it through unchanged. This thresholding process results in a diminished high frequency portion 18.
The diminished high-frequency portion 18 and the low frequency portion 12 of the original image 11 are compressed in a conventional manner at a compression stage 13. The compressed image can then be stored or transmitted in its compressed form.
To recover the original image, the compressed image must be decompressed and the disregarded high frequency portions of the original image must be estimated. The decompression step is performed in a conventional fashion and is not shown in FIG. 1. The process of estimating the disregarded high frequency portions to generate a faithful reproduction 11a of the original image takes place in the data recovery stage 19, which is described below in connection with FIGS. 2-6.
Evaluation of wavelet transform
The wavelet transform of an original image 11 is obtained by first convolving each row of the image with two orthogonal filters: a high-pass filter 14 and a low-pass filter 10, as depicted in FIG. 1. These filters, which are also referred to as high and low frequency analysis filters respectively, are obtained from the coefficients of the scaling function defining the wavelet basis used in the transformation. Thus, the effect of the wavelet transform is to determine the high frequency and low frequency spatial energy distribution of the image. The low frequency spatial distribution of the original image 11 is represented by the low-pass filtered image 12 and the high frequency spatial distribution is represented by the high-pass filtered image 18a.
FIG. 1 illustrates a wavelet transform stage 17 for performing one level of the wavelet transform. Each row of the image 11 is convolved with a low-pass filter 10. However, the convolution proceeds by shifting by two pixels rather than by a single pixel. This results in a low-pass filtered image 12 half as wide as the original image 11.
Each row of the image 11 is also convolved with a high pass filter 14 biorthogonal to the low-pass filter 10. The convolution again proceeds by shifting two pixels at a time rather than by a single pixel. This shift results in a high-pass filtered image 18a that is likewise half as wide as the original image.
The high-frequency wavelet transform coefficients generating the high-pass filtered image 18a are then compared with a preselected threshold at a thresholding stage 16. Those coefficients falling to one side of this threshold are set to zero. The remaining coefficients are left unchanged. The reduced set of high frequency wavelet transform coefficients thus formed generates a post-threshold high-pass filtered image 18 half as wide as the original image 11 and containing those high frequency components that are important for human perception.
Application of the above wavelet transform to the rows of the original image 11 shown in FIG. 7 results in the two images shown in FIG. 8. The low-pass filtered image, shown on the left hand side of FIG. 8 is clearly recognizable as a distorted version of the image in FIG. 7. The high-pass filtered image, shown on the right-hand side of FIG. 8, is, however, only barely recognizable. This is because, as mentioned above, it is the low frequency components of an image that are most important for human visual perception.
To complete the first level of the wavelet transform of the original image 11, the wavelet transform stage 17 convolves the high and low frequency analysis filters 10, 14 with the columns of the original image 11. The resulting four images, shown in FIG. 9, represent the wavelet transform coefficients present in four frequency bands. The lowest frequencies are in the upper-left image, the highest are in the lower-right image, and intermediate frequencies are in the remaining two images. That the upper-left image is the most recognizable is no coincidence since, as stated above, it is the low-frequency components of the image that are most important for human perception.
Although not explicitly shown in FIG. 1, it is readily apparent that the foregoing process can be repeated several times, resulting in a multiple frequency sub-band decomposition. FIG. 7 shows an original image and FIG. 10 shows its wavelet transform carried out to several levels of transformation. The many black areas in FIG. 10 represent wavelet transform coefficients that are either zero or very small. The abundance of such coefficients is useful in providing a more compact representation of the original image 11.
The significance of such a transformation from the standpoint of compression lies in the fact that information important for human perception is concentrated in the frequency sub-bands near the upper left corner of FIG. 10. The other coefficients can be discarded without a major impact on the perceived quality of the reproduced image obtained by applying the inverse wavelet transform to the remaining coefficients.
The choice of filter coefficients for the analysis filters 10, 14 determines the wavelet basis used in the transform. This basis must be well localized in both spatial and frequency domains and, in order to avoid redundancy that hinders compression, it must constitute a biorthogonal set.
In one aspect of the invention, the filter coefficients for the analysis filters 10, 14 are chosen to implement the Haar wavelet transform. This is accomplished by choosing the filter coefficients for the high pass filter 14 to be 0.5 and -0.5 and the filter coefficients for the low pass filter 10 to be 0.5 and 0.5. In this embodiment, convolution of the original image 11 can be accomplished efficiently on a scan line-by-scan line basis by halving the sum (or the difference in the case of the high-pass filter), of two adjacent pixel values, shifting one pixel, and repeating the process. Although the compression ultimately achieved by the use of the Haar wavelet transform is not optimal, the foregoing convolution algorithm is amenable to economical implementation in hardware. For applications in which real-time compression is critical, for example in photocopiers, scanners, digital cameras, or printers, the loss in compression ratio is more than offset by the increased throughput resulting from a hardware implementation of the convolution step.
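The scan-line Haar convolution described above can be sketched in software as follows. This is a minimal illustration under the filter coefficients named in the text; the function names are ours, not the patent's:

```python
def haar_analyze_line(pixels):
    """One level of the Haar wavelet transform of a scan line, using the
    filter coefficients named in the text: (0.5, 0.5) for the low-pass
    filter and (0.5, -0.5) for the high-pass filter, applied with a shift
    of two pixels so each output is half as wide as the input."""
    assert len(pixels) % 2 == 0, "scan line length must be even"
    low = [0.5 * (a + b) for a, b in zip(pixels[0::2], pixels[1::2])]
    high = [0.5 * (a - b) for a, b in zip(pixels[0::2], pixels[1::2])]
    return low, high

def haar_synthesize_line(low, high):
    """Invert haar_analyze_line exactly: a = l + h, b = l - h."""
    out = []
    for l, h in zip(low, high):
        out.extend((l + h, l - h))
    return out
```

Because each output pair depends only on two adjacent input pixels, exactly this sum-halve-shift pattern is what lends itself to the hardware implementation discussed above.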
The foregoing hardware implementation can occur either in the course of compression or decompression. For example, in a scanner, it may be more convenient to implement the convolution in hardware at the scanner and to decompress the resulting image at the host in software. In a printer, it may be more convenient to perform the compression in software at the host and to implement the decompression step in hardware at the printer. In a photocopier, which can be thought of as a scanner and printer working together, one can save memory and improve performance by implementing both the compression and the decompression in hardware.
Establishing the threshold for coefficients
The threshold stage 16 incorporates a preselected threshold for determining whether or not a particular high frequency component is to be kept. The selection of this threshold requires consideration of the frequency dependent characteristics of human perception to determine what transform coefficients to keep in order to achieve a particular compression ratio.
It is well known that human vision is more sensitive to low and medium spatial frequencies than it is to high frequencies. This is why the coefficients associated with the upper left image of FIG. 9 namely the low frequency coefficients, are the most important.
It is also well known that if a detail (edge) is barely visible at a particular frequency, its contrast with respect to grey (intensity) must be quadrupled when the frequency is doubled for that detail to remain distinguishable. Since in the dyadic wavelet transform each frequency sub-band width is one half the width of the next higher frequency sub-band, we can ignore those coefficients of the higher frequency sub- band having absolute values less than four times the maximum absolute value of the coefficients that were discarded in the next lower frequency sub-band. This establishes the proper relationship between the thresholds 16 used to discard coefficients in all the frequency sub-bands.
The combination of these two principles leads to the following scheme to achieve high compression with minimal loss of important observable information:
(1) Keep all the coefficients of the lowest frequency sub-band.
(2) Establish a threshold for the next sub-band as a fraction of the absolute value of the largest coefficient of the sub-band.
(3) Multiply this threshold by 4 for each higher sub-band.
(4) Discard coefficients having absolute values below the corresponding thresholds.
The following values are used to compute the actual threshold values using the maximum coefficient values in each sub-band. These values are for four levels of transformation:
Level 4 : 0.01
Level 3 : 0.04
Level 2 : 0.16
Level 1 : 0.64
Using the above values, the method discards all high-pass coefficients of level 4 that are below 1% of the maximum absolute value of the coefficients of level 4. The other thresholds are 4% of the maximum for level 3, 16% of the maximum for level 2, and 64% of the maximum for level 1.
The above values normally result in low to moderate compression, i.e. 30:1 to 40:1, with the quality of the reconstructed image being almost identical to that of the original. For higher compression, the above values are increased proportionately subject to the constraint that the ratio between consecutive levels should be 4 and that no value can be greater than 1.0. At high compression ratios, this results in the elimination of all the coefficients of levels 1 and 2 but with a very slow and graceful degradation of the quality of the reconstructed image after decompression.
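The four-step scheme with the level fractions listed above might be implemented as in the following sketch. The function and constant names, and the single `scale` knob for raising all thresholds proportionately, are illustrative assumptions:

```python
# Fractions from the text, one per transform level (level 4 is the coarsest
# kept sub-band); each level's fraction is 4x that of the next higher level.
BASE_FRACTIONS = {4: 0.01, 3: 0.04, 2: 0.16, 1: 0.64}

def threshold_subband(coeffs, level, scale=1.0):
    """Zero out coefficients whose absolute value falls below the level's
    threshold, computed as a fraction of the largest absolute coefficient
    in the sub-band.  A `scale` greater than 1 raises all thresholds
    proportionately for higher compression, capped at 1.0 as in the text."""
    fraction = min(BASE_FRACTIONS[level] * scale, 1.0)
    cutoff = fraction * max(abs(c) for c in coeffs)
    return [c if abs(c) >= cutoff else 0.0 for c in coeffs]
```

With `scale` large enough that levels 1 and 2 reach a fraction of 1.0, essentially all of their coefficients are discarded, matching the graceful degradation described above.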
Encoding the Coefficients
Once the wavelet transform of the original image 11 has been decomposed into the desired number of sub-bands as described above, the compression stage 13 encodes, using the fewest number of bits, the remaining wavelet transform coefficients associated with each sub-band. Two values must be encoded: the location within the sub-band, and the value (including the sign) of each wavelet transform coefficient. A coefficient's location within the sub-band is expressed as the distance in rows to either the previous non-zero coefficient or, in the case of the first non-zero coefficient of the sub-band, the distance to the upper left corner of the sub-band. These location values must be encoded exactly.
A coefficient's value can be encoded efficiently by dividing the interval between the maximum and minimum threshold values into quantization bins. If the number of quantization bins is large enough, given the difference between the maximum and minimum absolute values at each level, the quantization error will not be noticeable. A preferred value for the number of quantization bins in this embodiment of the invention is 32.

The above discussion applies to both the luminance and chrominance components of an image. However, humans are much less sensitive to changes in chrominance than they are to changes in luminance. As a result, only the lowest frequency coefficients of the chrominance components need be kept. This permits very high compression of the chrominance components. This will only work, i.e., give good quality colors after decompression, when it is combined with the unique wavelet transform reconstruction procedure described below.
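The 32-bin quantization of a coefficient's magnitude can be sketched as follows. Reconstructing at the bin centre, and the names themselves, are our assumptions rather than details stated in the text:

```python
NUM_BINS = 32

def quantize(value, lo, hi):
    """Map |value| in the interval [lo, hi] (the minimum and maximum
    magnitudes kept at this level, lo < hi) to a bin index 0..31 plus a
    sign flag, giving a 5-bit magnitude plus 1-bit sign code."""
    step = (hi - lo) / NUM_BINS
    index = max(0, min(int((abs(value) - lo) / step), NUM_BINS - 1))
    return index, value < 0

def dequantize(index, negative, lo, hi):
    """Reconstruct a value at the centre of its quantization bin."""
    step = (hi - lo) / NUM_BINS
    value = lo + (index + 0.5) * step
    return -value if negative else value
```

The round-trip error is bounded by half a bin width, which is the sense in which a sufficiently large bin count makes the quantization error unnoticeable.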
Once all the coefficients are encoded into a binary file, the next step is to apply a lossless coding scheme such as arithmetic coding to obtain the final compressed binary file. Note that if no coefficients from the highest levels are kept, the size of the compressed file can be reduced significantly. No codebook or dictionary is involved in the above scheme. Consequently the scheme is general and applicable to any kind of data.
Decompression
The first step in decompression of the compressed image is to arithmetically decode the binary compressed file. Then the coefficient values and locations are calculated and the wavelet transform of the original data, in which most if not all coefficients of the higher frequency sub-bands are zero, is recreated.
Recovery of discarded coefficients
In order to improve the quality of the reconstructed data, all the coefficients that were discarded for compression must be estimated. The coefficients to be estimated include the high frequency coefficients that were disregarded at the thresholding stage 16 as well as low frequency coefficients generated by the convolution of an analysis filter 10, 14 with one or more such coefficients. The remainder of this section describes the data recovery stage 19 for accomplishing this task.

FIG. 2 shows a data recovery stage 19 according to the invention having a low frequency synthesis filter 22, which corresponds to the low frequency analysis filter 10 of the wavelet transform stage 17, and an estimation system 28. The low frequency wavelet transform coefficients 12 are convolved with the low frequency synthesis filter 22 and the result of the convolution is passed to a combining stage 23. The low frequency wavelet transform coefficients 12 are also passed to the estimation system 28, which estimates the discarded wavelet transform coefficients from the low frequency wavelet transform coefficients. In one embodiment of the invention, to be described in connection with FIG. 5, the estimation system also uses the high frequency wavelet transform coefficients 18 that exceeded the threshold at the thresholding stage 16.
Wavelets are functions generated from a single function Ψ by dilations and translations.
(1) Ψ_n^j(x) = 2^(−j/2) Ψ(2^(−j)x − n)
where j corresponds to the level of the transform, and hence governs the dilation, and n governs the translation.
The basic idea of the wavelet transform is to represent an arbitrary function f as a superposition of wavelets:

(2) f = Σ_{j,n} a_n^j Ψ_n^j

Since the Ψ_n^j constitute an orthonormal basis, the wavelet transform coefficients are given by the inner product of the arbitrary function f and the wavelet basis functions:

(3) a_n^j(f) = ⟨Ψ_n^j, f⟩
In a multiresolution analysis, one really has two functions: a mother wavelet Ψ and a scaling function φ. Like the mother wavelet, the scaling function φ generates a family of dilated and translated versions of itself:
(4) φ_n^j(x) = 2^(−j/2) φ(2^(−j)x − n)
When compressing data files representative of images, it is important to preserve symmetry. As a result, the requirement of an orthonormal basis is relaxed and biorthogonal wavelet sets are used. In this case, the Ψ_n^j no longer constitute an orthonormal basis, hence the computation of the coefficients a_n^j is carried out via the dual basis Ψ̃_n^j:

(5) a_n^j(f) = ⟨Ψ̃_n^j, f⟩
where Ψ̃ is the dual wavelet associated with the corresponding synthesis filter coefficients defined below.
When f is given in sampled form, one can take these samples as the coefficients x_k^j for sub-band j. The coefficients for sub-band j+1 are then given by the convolution sums:
(6a) x_n^(j+1) = Σ_k h_{2n−k} x_k^j

(6b) c_n^(j+1) = Σ_k g_{2n−k} x_k^j
This describes a sub-band algorithm with:
(7a) h_n = 2 ∫ φ(x − n) ψ(x) dx
representing a low pass filter and
(7b) g_ℓ = (−1)^ℓ h_{−ℓ+1}

representing a high pass filter. Consequently, the exact reconstruction is given by:

(8) x_k^j = Σ_n ( h̃_{2n−k} x_n^(j+1) + g̃_{2n−k} c_n^(j+1) )
The relation between the different biorthogonal filters is given by:
(9a) g_n = (−1)^n h̃_{−n+1}

(9b) g̃_n = (−1)^n h_{−n+1}
where h_n and g_n represent the low-pass analysis filter and the high-pass analysis filter respectively, and h̃_n and g̃_n represent the corresponding synthesis filters.

We now turn to a matrix formulation of the one-dimensional biorthogonal wavelet transform. Using the above impulse responses h_n, g_n, h̃_n and g̃_n, we can define the circular convolution operators at resolution 2^(−j): H^j, G^j, H̃^j and G̃^j. These four matrices are circulant and symmetric.
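The filter relations (9a)/(9b) and the analysis and synthesis equations (6a), (6b) and (8) can be checked numerically. The sketch below assumes the orthonormal Haar pair, for which h = h̃, purely for illustration; the patent's preferred biorthogonal filters differ, and the names are ours:

```python
import math

# Orthonormal Haar pair used as a stand-in (an assumption, not the patent's
# preferred normalization): the analysis filters are self-dual, so the
# synthesis filters h~, g~ equal their analysis counterparts.
h = {0: math.sqrt(0.5), 1: math.sqrt(0.5)}      # low-pass analysis h_n
ht = dict(h)                                    # low-pass synthesis h~_n

# High-pass filters from relations (9a) and (9b):
#   g_n = (-1)^n h~_(-n+1)   and   g~_n = (-1)^n h_(-n+1)
g = {n: (-1) ** n * ht[1 - n] for n in (0, 1)}  # high-pass analysis g_n
gt = {n: (-1) ** n * h[1 - n] for n in (0, 1)}  # high-pass synthesis g~_n

def analyze(x):
    """Equations (6a)/(6b) with circular indexing: convolve, keep every
    second sample."""
    N = len(x)
    lo = [sum(c * x[(2 * n - m) % N] for m, c in h.items()) for n in range(N // 2)]
    hi = [sum(c * x[(2 * n - m) % N] for m, c in g.items()) for n in range(N // 2)]
    return lo, hi

def synthesize(lo, hi):
    """Equation (8): x_k = sum_n (h~_(2n-k) lo_n + g~_(2n-k) hi_n)."""
    N = 2 * len(lo)
    x = [0.0] * N
    for n in range(len(lo)):
        for m, c in ht.items():
            x[(2 * n - m) % N] += c * lo[n]
        for m, c in gt.items():
            x[(2 * n - m) % N] += c * hi[n]
    return x
```

Applying `analyze` followed by `synthesize` reproduces the input exactly, confirming the exact reconstruction of equation (8) for this filter set.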
The fundamental matrix relation for exactly reconstructing the data at resolution 2^(−j) is

(10) H̃^jT H^j + G̃^jT G^j = I
where I is the identity matrix.

Let x^(j+1) be the vector of low frequency wavelet transform coefficients at scale 2^(−(j+1)) and let c^(j+1) be the vector of associated wavelet coefficients. We have, in augmented vector form:

(11) [ x^(j+1) ; c^(j+1) ] = [ H^j ; G^j ] x^j

where x^(j+1) is the smoothed vector obtained from x^j and the wavelet coefficients c^(j+1) contain the information lost in the transition between scales 2^(−j) and 2^(−(j+1)). The exact reconstruction of x^j is then

(12) x^j = H̃^jT x^(j+1) + G̃^jT c^(j+1)
A prior art system for implementing equation (12), depicted schematically in FIG. 4, provides a way to recover x^j given x^(j+1) and c^(j+1). FIG. 4 shows low-pass filtered wavelet transform coefficients representative of the low frequency image 12 being passed through a low-frequency synthesis filter 22 corresponding to the low-pass filter 10. Similarly, high-pass filtered wavelet transform coefficients representative of the high-frequency image 18 are passed through a high-frequency synthesis filter 21. The outputs of both synthesis filters 21, 22 are combined at a combining stage 23 to generate the original image 11. However, if some of the high frequency coefficients c^(j+1) have been discarded, then x^j will lack details that would have been provided by the missing coefficients.
Since, from equation (11), x^(j+1) = H^j x^j, we can, in principle, recover x^j from x^(j+1) merely by inverting H^j. However, this is generally not practical, both because of the presence of inaccuracies in x^(j+1) and because H^j is generally an ill-conditioned matrix. As a result, the above problem is ill-posed and there is, in general, no unique solution.
If we discard the high frequency coefficients c^(j+1), then equation (12) reduces to x^j = H̃^jT x^(j+1). This results in y^j, a blurred approximation of x^j. From equation (12), we have:

(13) y^j = H̃^jT x^(j+1)

which, when combined with equation (11), reduces to

(13a) H̃^jT x^(j+1) = H̃^jT H^j x^j

During decompression, the x^(j+1) (transformed rows or columns of level j+1) are known and the problem is to determine the x^j of the next higher level. Equation (13a) can be written

(14) y^j = H̃^jT H^j x^j
This can be thought of as an image restoration problem in which the image defined by the vector x^j has been blurred by the operator H̃^jT H^j, which, due to its low pass nature, is an ill-conditioned matrix.
Regularization, as disclosed in "Méthodes de résolution des problèmes mal-posés" by A.N. Tikhonov and V.Y. Arsenin, Moscow, Edition MIR, herein incorporated by reference, is a method used to solve ill-posed problems of this type. This method is analogous to a constrained least squares minimization technique. A solution for this type of problem is found by minimizing the following Lagrangian function:
(15) J(x^j, α) = ||x^(j+1) − H^j x^j||^2 + α ||G^j x^j||^2
where G^j is the regularization operator and α is a positive scalar such that α → 0 as the accuracy of x^(j+1) increases.
It is also known from regularization theory that if H^j acts as a low-pass filter, G^j must be a high-pass filter. In other words, since H^j is the low-pass filter matrix of the biorthogonal wavelet transform, G^j must be the corresponding high-pass filter matrix.
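A minimal numerical sketch of the regularized minimization of equation (15), under the same illustrative Haar assumption as above: the closed-form minimizer is x = (H^T H + α G^T G)^(-1) H^T x^(j+1), and the example shows why the α-term is needed, since H^T H alone is rank deficient (n/2 equations, n unknowns).

```python
import numpy as np

# Sketch of the Tikhonov-regularized minimization of equation (15):
# minimize ||x^(j+1) - H x||^2 + alpha * ||G x||^2, with closed form
# x = (H^T H + alpha * G^T G)^(-1) H^T x^(j+1).  Haar-style matrices
# are an illustrative assumption.

def analysis_matrix(taps, n):
    m = np.zeros((n // 2, n))
    for r in range(n // 2):
        for k, t in enumerate(taps):
            m[r, (2 * r + k) % n] = t
    return m

n = 8
H = analysis_matrix([0.5, 0.5], n)    # low-pass operator H^j
G = analysis_matrix([0.5, -0.5], n)   # high-pass regularization operator G^j

x_true = np.sin(np.linspace(0.0, np.pi, n))
x1 = H @ x_true                       # observed low frequency data x^(j+1)

# Without the penalty term the normal matrix is singular: ill-posed
assert np.linalg.matrix_rank(H.T @ H) == n // 2

alpha = 1e-3
x_est = np.linalg.solve(H.T @ H + alpha * (G.T @ G), H.T @ x1)
print(np.max(np.abs(H @ x_est - x1)))  # regularized solution fits the data
```

The penalty α ||G x||^2 makes the normal matrix invertible for any α > 0 while biasing the solution toward small high-frequency content, which is exactly the role the text assigns to the regularization operator.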
Equation (15) may also be written with respect to the estimated wavelet transform coefficients ĉ^(j+1) and x̂^(j+1):
(16) J(ĉ^(j+1), α) = ||x̂^(j+1) − x^(j+1)||^2 + α ||ĉ^(j+1)||^2
Using the exact reconstruction matrix relation shown in Equation (10), we get:
(16a) x̂^(j+1) = H^j H̃^j x^(j+1) + H^j G̃^j ĉ^(j+1)
Also, we can write
(16b) x^(j+1) = H^j x^j = H^j (H̃^j x^(j+1) + G̃^j c^(j+1))
Then, subtracting (16b) from (16a), and using (16b) together with the exact reconstruction identity H^j H̃^j + G^j G̃^j = I to express the unknown c^(j+1) term in terms of the known x^(j+1), gives:
(16c) x̂^(j+1) − x^(j+1) = H^j G̃^j ĉ^(j+1) − G^j G̃^j x^(j+1)
Substituting (16c) into (16) results in:
(17) J(ĉ^(j+1), α) = ||H^j G̃^j ĉ^(j+1) − G^j G̃^j x^(j+1)||^2 + α ||ĉ^(j+1)||^2
By setting the derivative of J with respect to ĉ^(j+1) equal to zero, we obtain the following estimate for the high frequency coefficients ĉ^(j+1):
(18) ĉ^(j+1) = M x^(j+1)
where the estimation matrix M is given by
(19) M = (αI + (G̃^j)^T (H^j)^T H^j G̃^j)^(-1) (G̃^j)^T (H^j)^T G^j G̃^j
in which "T" denotes the matrix transpose.
A data recovery stage 19 for implementing equations (18) and (19), shown in FIG. 3, provides a way to estimate the high frequency components c^(j+1) of the image using only the low frequency components x^(j+1) and matrices derived from the known properties of the biorthogonal filters G^j, G̃^j, H^j, and H̃^j. FIG. 3 shows the wavelet transform coefficients representative of the low-frequency image 12 being passed through a low-frequency synthesis filter 22 as they were in FIG. 4. However, unlike the system of FIG. 4, these same wavelet transform coefficients are used to estimate the high-frequency coefficients representative of the high-frequency image 18. This is accomplished by passing the low-frequency coefficients to an estimation system 28 consisting of an estimation filter 24 followed by a high-frequency synthesis filter 21. The matrix M associated with the estimation filter 24 can be precalculated for the selected biorthogonal wavelet set. The output of the estimation filter 24 is then filtered by the high-frequency synthesis filter 21. The outputs of both synthesis filters 21, 22 are then combined at the combining stage 23, as was the case in FIG. 4. The combining stage output represents an estimate 11a of the original image 11.
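Equations (18) and (19) can be implemented mechanically, as sketched below with the illustrative Haar matrices (not the patent's filters). One caveat the sketch makes visible: for an exactly biorthogonal pair applied to periodized data, the product H^j G̃^j is identically zero, so M, and with it the estimate, vanishes; non-trivial estimates therefore depend on filter matrices that are boundary-truncated or only approximately biorthogonal.

```python
import numpy as np

# Mechanical sketch of equations (18)-(19).  Haar matrices are an
# illustrative assumption; for this exactly biorthogonal periodized
# pair H @ Gt == 0, so M comes out identically zero (see lead-in).

def analysis_matrix(taps, n):
    m = np.zeros((n // 2, n))
    for r in range(n // 2):
        for k, t in enumerate(taps):
            m[r, (2 * r + k) % n] = t
    return m

def estimation_matrix(H, G, Gt, alpha):
    """Equation (19): M = (alpha*I + Gt^T H^T H Gt)^(-1) Gt^T H^T G Gt."""
    n2 = H.shape[0]
    A = alpha * np.eye(n2) + Gt.T @ H.T @ H @ Gt
    B = Gt.T @ H.T @ G @ Gt
    return np.linalg.solve(A, B)      # invertible for any alpha > 0

n = 8
H = analysis_matrix([0.5, 0.5], n)    # low-pass analysis H^j
G = analysis_matrix([0.5, -0.5], n)   # high-pass analysis G^j
Gt = 2 * G.T                          # high-pass synthesis G~^j

M = estimation_matrix(H, G, Gt, alpha=1e-3)
x1 = H @ np.arange(n, dtype=float)    # low frequency coefficients x^(j+1)
c_hat = M @ x1                        # equation (18): estimated c^(j+1)
print(np.abs(M).max())                # 0.0 for this degenerate Haar case
```

The matrix inverse exists for any α > 0 because the penalized term is positive semidefinite, so M can be precalculated once per wavelet set, as the text notes.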
Refining the estimate of the high-frequency coefficients
The estimation system 28 of FIG. 3 provides a good initial estimate of ĉ^(j+1), the missing wavelet transform high-frequency coefficients. In another aspect of the invention, this estimate can be further refined by an iterative conjugate gradient algorithm using the above initial estimate of ĉ^(j+1) and an initial search direction given by the gradient vector ∇J(ĉ^(j+1), α). The search for the global minimum of J is greatly helped by clamping the known values of the vector c^(j+1). FIG. 5 shows a data recovery stage 19 incorporating this clamping function.
The illustrated data recovery stage 19 is similar to that depicted in FIG. 3 with the exception that the estimation system 28 includes a refinement stage 25, implementing the conjugate gradient method, interposed between the estimation filter 24 and the high-frequency synthesis filter 21. The refinement stage 25 accepts known values of the high-frequency wavelet transform coefficients 18 and clamps them at those values throughout the iterations of the conjugate gradient algorithm.
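The clamped conjugate gradient refinement can be sketched on a generic quadratic of the same form as the regularized functional J of equation (16). Everything below is a synthetic illustration rather than the patent's implementation: the quadratic (random A and b, so Q = A^T A + αI is its Hessian) stands in for the wavelet matrices, and clamping is realized by eliminating the known coordinates from the system, which for a quadratic is equivalent to holding them fixed at every iteration.

```python
import numpy as np

# Generic clamped conjugate gradient sketch: minimize
# ||A c - b||^2 + alpha * ||c||^2 with some entries of c held fixed.
# A, b, the clamped indices, and their values are synthetic assumptions.

rng = np.random.default_rng(0)

def cg(Q, b, x0, iters=200, tol=1e-10):
    """Plain conjugate gradient for a symmetric positive definite Q."""
    x = x0.copy()
    r = b - Q @ x
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        if np.sqrt(rs) < tol:
            break
        Qp = Q @ p
        a = rs / (p @ Qp)
        x = x + a * p
        r = r - a * Qp
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

n = 8
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)
alpha = 0.1
Q = A.T @ A + alpha * np.eye(n)   # SPD Hessian of the quadratic
g = A.T @ b                       # linear term

known = np.array([0, 3])          # positions of known coefficients
c_known = np.array([1.0, -2.0])   # their clamped values
free = np.setdiff1d(np.arange(n), known)

# Eliminate the clamped coordinates and solve the free sub-system by CG
rhs = g[free] - Q[np.ix_(free, known)] @ c_known
f = cg(Q[np.ix_(free, free)], rhs, np.zeros(free.size))

c = np.zeros(n)
c[known] = c_known                # the clamped values survive unchanged
c[free] = f
print(c[known])
```

Eliminating the clamped coordinates keeps the reduced system symmetric positive definite, so plain CG applies; re-projecting each iterate onto the clamping constraints, as the text describes, reaches the same minimizer for a quadratic.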
If there are no known values of c^(j+1) at a given row or column at a given wavelet transform level, then the actual values of the inverse wavelet transform, i.e., the x^j, can be calculated directly without first calculating the ĉ^(j+1).
Since
(20) x^j = H̃^j x^(j+1) + G̃^j ĉ^(j+1)
we can rewrite equation (20), using the estimate ĉ^(j+1) = M x^(j+1) of equation (18), as
(21) x^j = H̃^j x^(j+1) + T x^(j+1)
where the matrix T is given by
(22) T = G̃^j (αI + (G̃^j)^T (H^j)^T H^j G̃^j)^(-1) (G̃^j)^T (H^j)^T G^j G̃^j
FIG. 6 depicts a data recovery stage 19 for implementing the foregoing method. The illustrated data recovery stage 19 is similar to that shown in FIG. 3 with the exceptions that the high-pass synthesis filter 21 is no longer necessary and that the estimation filter 26 incorporates the matrix T given in Equation (22). To reduce computation time, T is precalculated for a given biorthogonal wavelet set. In the system of FIG. 6, the output of the estimation filter 26 is passed directly to the combining stage 23.
The data recovery stage 19 of FIG. 6 is particularly useful for enlarging an image. In such a case, enlargement is accomplished by inserting additional wavelet transform coefficients between the known wavelet transform coefficients; the values of these additional coefficients can be estimated using the data recovery stage 19 of FIG. 6.
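Numerically, the T of equation (22) is G̃^j applied to the estimation matrix M, so the data recovery stage of FIG. 6 amounts to a single precalculated matrix applied to the low-frequency coefficients. A sketch under the same illustrative Haar assumption, for which T degenerates to zero and the enlargement reduces to sample duplication (the Haar interpolant):

```python
import numpy as np

# Sketch of equation (22): T folds the estimator and the high-frequency
# synthesis filter into one matrix.  Haar matrices are an illustrative
# assumption; for this exactly biorthogonal pair T comes out zero, so
# the "enlargement" is plain sample duplication.

def analysis_matrix(taps, n):
    m = np.zeros((n // 2, n))
    for r in range(n // 2):
        for k, t in enumerate(taps):
            m[r, (2 * r + k) % n] = t
    return m

n = 8
H = analysis_matrix([0.5, 0.5], n)
G = analysis_matrix([0.5, -0.5], n)
Ht, Gt = 2 * H.T, 2 * G.T

def matrix_T(H, G, Gt, alpha):
    """Equation (22): T = Gt (alpha*I + Gt^T H^T H Gt)^(-1) Gt^T H^T G Gt."""
    n2 = H.shape[0]
    A = alpha * np.eye(n2) + Gt.T @ H.T @ H @ Gt
    B = Gt.T @ H.T @ G @ Gt
    return Gt @ np.linalg.solve(A, B)

T = matrix_T(H, G, Gt, alpha=1e-3)
x1 = np.array([1.0, 2.0, 3.0, 4.0])   # known low-frequency coefficients
x_big = Ht @ x1 + T @ x1              # reconstructed full-length x^j
print(x_big)
```

Like M, the matrix T depends only on the chosen wavelet set and α, so it can be precalculated once, as the text notes.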
The above expression for x^j is an initial estimate because of the parameter α. The actual vector (row or column) is again obtained by the conjugate gradient algorithm as described in connection with FIG. 5.
The decompression procedure is illustrated in FIG. 11 for one level of the wavelet transform of data representing an image. In FIG. 11a, quadrant a represents the low frequency sub-band, and quadrant b and half R represent the higher frequency sub-bands in increasing order. FIG. 11b shows the process of recovering the left side L of a given transform level. If b is empty, i.e., if there are no known high frequency coefficients, matrices T and H̃ are used to compute the columns of L directly, one by one. If the most important coefficients of b are known, then matrix M is used to compute an initial estimate of a given column. This estimate is refined by the conjugate gradient method with clamping of the known coefficients to obtain a complete set b* of high frequency coefficients. The inverse wavelet transform on a and b* gives the left side L. By processing the left and right sides, L and R, respectively, by rows, we obtain the reproduction of the entire level, which is either the low frequency component of the next level or the final decompressed image I if the level is 1, as shown in FIG. 11c. This reconstruction process is applied to the luminance and chrominance components, the only difference being that no clamping is normally required for the chrominance components, since adequate estimates of the high frequency coefficients can be obtained from the low frequency coefficients alone. This approach results in higher compression and higher quality reproduced images than any other known method.

It will thus be seen that the invention efficiently attains the objects set forth above. Since certain changes may be made in the above constructions without departing from the scope of the invention, it is intended that all matter contained in the above description or shown in the accompanying drawings be interpreted as illustrative and not in a limiting sense.
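When every coefficient of a level is available, the procedure of FIG. 11 reduces to plain inverse transformation, and its column-then-row ordering can be sketched directly. Quadrant names follow the text; the Haar matrices and the 8×8 test image are illustrative assumptions, and the estimation and clamping steps are omitted here because b and R are fully known.

```python
import numpy as np

# One-level sketch of the FIG. 11 procedure with all coefficients known.
# Haar matrices and the test image are illustrative assumptions.

def analysis_matrix(taps, n):
    m = np.zeros((n // 2, n))
    for r in range(n // 2):
        for k, t in enumerate(taps):
            m[r, (2 * r + k) % n] = t
    return m

n = 8
H = analysis_matrix([0.5, 0.5], n)
G = analysis_matrix([0.5, -0.5], n)
Ht, Gt = 2 * H.T, 2 * G.T

X = np.arange(n * n, dtype=float).reshape(n, n)   # one level's data

# Forward transform: filter columns, then rows
a  = H @ X @ H.T     # low/low quadrant "a"
bl = G @ X @ H.T     # quadrant "b" below it (high-pass along columns)
tr = H @ X @ G.T     # right-half sub-bands ("R")
br = G @ X @ G.T

# FIG. 11b: recover the columns of the left side L from a and b,
# and likewise the right side R from its two quadrants
L = Ht @ a + Gt @ bl
R = Ht @ tr + Gt @ br

# FIG. 11c: process L and R by rows to reproduce the entire level
X_rec = L @ Ht.T + R @ Gt.T
print(np.allclose(X_rec, X))   # True: the level is reproduced exactly
```

Left-multiplication by the synthesis matrices reconstructs columns; right-multiplication by their transposes reconstructs rows, matching the L-then-rows ordering of the text.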
It is also to be understood that the following claims are intended to cover all generic and specific features of the invention described herein, and all statements of the scope of the invention which, as a matter of language, might be said to fall therebetween.
Having described the invention, what is claimed as new and secured by Letters Patent is:

Claims

1. A method for recovering information missing from a digital signal generated by evaluating the wavelet transform of a source image, said digital signal having a low frequency portion generated by filtering said source image with a low-frequency analysis filter and a high frequency portion, said method comprising the steps of: filtering said low-frequency portion with a low-frequency synthesis filter corresponding to said low-frequency analysis filter, thereby generating a low-frequency sub-band of said source image; processing said low-frequency portion with an estimation system, thereby generating an estimation system output representative of a high-frequency sub-band of said source image; and combining said high-frequency sub-band with said low-frequency sub-band, thereby generating an estimate of said source image.
2. The method of claim 1 wherein said high frequency portion of said digital signal is generated by filtering said source image through a high-frequency analysis filter biorthogonal to said low-frequency analysis filter, and a portion of said high frequency portion is disregarded, thereby generating a diminished high-frequency portion and wherein said processing step further comprises the steps of: filtering said low frequency portion with an estimation filter, thereby generating an estimation filter output; and filtering said estimation filter output through a high-frequency synthesis filter corresponding to a high-frequency analysis filter biorthogonal to said low frequency analysis filter.
3. The method of claim 2 wherein said first filtering step comprises the step of filtering said low frequency portion with a filter having a matrix transfer function M given by
M = (αI^j + (G̃^j)^T (H^j)^T H^j G̃^j)^(-1) (G̃^j)^T (H^j)^T G^j G̃^j
wherein G^j represents said high-frequency analysis filter, G̃^j represents said high-frequency synthesis filter, H^j represents a low-frequency analysis filter, H̃^j represents said corresponding low-frequency synthesis filter, I^j represents an identity matrix, and α is a number.
4. The method of claim 1 wherein said processing step comprises the step of filtering said low frequency portion with a filter having a matrix transfer function T given by
T = G̃^j (αI^j + (G̃^j)^T (H^j)^T H^j G̃^j)^(-1) (G̃^j)^T (H^j)^T G^j G̃^j
wherein H^j represents said low-frequency analysis filter, H̃^j represents said low-frequency synthesis filter, G^j represents a high-frequency analysis filter corresponding to said low frequency analysis filter, G̃^j represents a synthesis filter corresponding to G^j, I^j represents an identity matrix, and α is a number.
5. The method of claim 1 further comprising the step of iteratively refining said estimate of said source image.
6. The method of claim 5 wherein said refining step comprises the steps of supplying an initial estimate for initiating a conjugate gradient search; and initiating a conjugate gradient search for an optimal estimate of said high frequency portion of said digital signal.
7. The method of claim 6 further comprising the step of clamping said estimation system output at values determined by said high frequency portion of said digital signal.
8. A system for recovering information missing from a digital signal generated by evaluating the wavelet transform of a source image, said digital signal having a low frequency portion generated by filtering said source image with a low-frequency analysis filter and a high frequency portion, said system comprising: a low- frequency synthesis filter corresponding to said low-frequency analysis filter for filtering said low-frequency portion, to generate a low-frequency sub-band of said source image; an estimation system for processing said low-frequency portion, thereby generating an estimation system output representative of a high-frequency sub-band of said source image; and means for combining said high-frequency sub-band with said low-frequency sub-band, thereby generating an estimate of said source image.
9. The system of claim 8 wherein said high frequency portion of said digital signal is generated by filtering said source signal through a high-frequency analysis filter biorthogonal to said low-frequency analysis filter, and a portion of said high frequency portion is disregarded, thereby generating a diminished high-frequency portion and wherein said estimation system further comprises: an estimation filter for filtering said low frequency portion, thereby generating an estimation filter output; and a high-frequency synthesis filter for filtering said estimation filter output, said high-frequency synthesis filter corresponding to a high-frequency analysis filter biorthogonal to said low frequency analysis filter.
10. The system of claim 9 wherein said estimation filter comprises means for filtering said low frequency portion with a filter having a matrix transfer function M given by
M = (αI^j + (G̃^j)^T (H^j)^T H^j G̃^j)^(-1) (G̃^j)^T (H^j)^T G^j G̃^j
wherein G^j represents said high-frequency analysis filter, G̃^j represents said high-frequency synthesis filter, H^j represents a low-frequency analysis filter, H̃^j represents said corresponding low-frequency synthesis filter, I^j represents an identity matrix, and α is a number.
11. The system of claim 8 wherein said estimation system comprises means for filtering said low frequency portion with a filter having a matrix transfer function T given by
T = G̃^j (αI^j + (G̃^j)^T (H^j)^T H^j G̃^j)^(-1) (G̃^j)^T (H^j)^T G^j G̃^j
wherein H^j represents said low-frequency analysis filter, H̃^j represents said low-frequency synthesis filter, G^j represents a high-frequency analysis filter corresponding to said low frequency analysis filter, G̃^j represents a synthesis filter corresponding to G^j, I^j represents an identity matrix, and α is a number.
12. The system of claim 8 further comprising means for iteratively refining said estimate of said source image.
13. The system of claim 12 wherein said refining means comprises: means for supplying an initial estimate for initiating a conjugate gradient search; and means for initiating a conjugate gradient search for an optimal estimate of said high frequency portion of said digital signal.
14. The system of claim 13 further comprising means for clamping said estimation system output at values determined by said high frequency portion of said digital signal.
15. A method of decompressing a compressed image, said compressed image containing both high frequency components and low frequency components, to obtain a high integrity reproduction of an original image, said method comprising the steps of: performing a wavelet transform operation on the original image to generate a wavelet transformed image, said wavelet transformed image containing both high frequency components and low frequency components, comparing the high frequency components to a threshold value to disregard selected high frequency components, compressing the wavelet transformed image to generate the compressed image, and decompressing the compressed image by filtering the low frequency components of the compressed image, estimating the selected high frequency components disregarded in the comparing step, and combining the filtered low frequency components and the estimated high frequency components to construct the decompressed reproduction of the original image with high integrity.
16. The method of claim 15 wherein the original image includes high frequency components and low frequency components, and wherein said step of performing a wavelet transform operation on the original image comprises the steps of filtering the high frequency components of the original image, and filtering the low frequency components of the original image.
17. The method of claim 16 further comprising the step of performing the high frequency filtering step and the low frequency filtering step in parallel.
18. The method of claim 15 wherein said step of comparing the high frequency components comprises the step of retaining those components having values above the threshold value and disregarding those components having values below the threshold value.
19. The method of claim 15, wherein said estimating step comprises the steps of performing a matrix operation on the low frequency components to obtain an estimation filter output, and performing a filtering operation on the estimation filter output to generate an estimate of the high frequency components.
20. The method of claim 15 wherein said estimating step comprises the step of performing a matrix operation on the low frequency components to generate the corresponding high frequency components.
21. The method of claim 15 wherein said comparing step comprises the step of retaining selected high frequency components that meet the threshold value.
22. The method of claim 21 further comprising the step of combining the low frequency components with the retained high frequency components to generate the wavelet transformed image.
23. The method of claim 16 wherein said step of filtering the high frequency components of the original image comprises the steps of performing a convolving filtering operation on the original image wherein during the convolution, the pixels of the image are shifted by two to produce a filtered image half the size of the original image.
24. The method of claim 23 wherein said step of filtering the low frequency components of the original image comprises the steps of performing a convolving filtering operation on the original image, wherein during the convolution, the pixels of the image are shifted by two to produce a filtered image half the size of the original image.
25. The method of claim 16 wherein said low frequency filtering step and said high frequency filtering step use mutually biorthogonal filters.
26. The method of claim 15 wherein said step of comparing the high frequency components comprises the step of setting the value of those components below the threshold value generally to zero.
27. The method of claim 15 wherein said step of performing a wavelet transform operation comprises the step of performing a Haar wavelet transform by setting a first high frequency filtering coefficient to about 0.5, setting a second high frequency filtering coefficient to about -0.5, and setting first and second low frequency filtering coefficients to be about 0.5.
28. The method of claim 15 wherein said comparing step comprises the step of selecting the threshold value as a function of the number of iterations of the wavelet transform.
29. The method of claim 15 wherein said step of estimating the disregarded high frequency components of the image comprises the step of employing a regularization operator.
30. The method of claim 15 wherein said step of estimating the disregarded high frequency components of the image comprises the step of using an estimator filter to estimate the high frequency components using only the low frequency components.
31. The method of claim 15 wherein said step of estimating the disregarded high frequency components of the image comprises the step of refining an initial estimate of the high frequency components.
32. The method of claim 15 wherein said step of estimating the disregarded high frequency components of the image comprises the step of refining the estimated high frequency components with an iterative conjugate gradient.
33. The method of claim 32 wherein said step of refining further comprises the steps of providing an initial estimate of the disregarded high frequency components, clamping known values of the high frequency components, and iteratively refining the initial estimate.
34. A method of decompressing a compressed image, said compressed image containing both high frequency components and low frequency components, to obtain the original image, wherein the original image is compressed by a wavelet transform operation to generate a wavelet transformed image containing both high frequency components, at least a portion of which are discarded, and low frequency components, said method comprising the steps of providing the compressed image, and decompressing the compressed image by filtering the low frequency components of the compressed image, estimating the discarded high frequency components, and combining the filtered low frequency components and the estimated high frequency components to construct a decompressed reproduction of the original image with high integrity and accuracy.
35. A system for decompressing a compressed image, said compressed image containing both high frequency components and low frequency components, to obtain a high integrity reproduction of an original image, said system comprising: means for performing a wavelet transform operation on the original image to generate a wavelet transformed image, said wavelet transformed image containing both high frequency components and low frequency components, means for comparing the high frequency components to a threshold value to disregard selected high frequency components, means for compressing the wavelet transformed image to generate the compressed image, and means for decompressing the compressed image, said decompressing means including means for filtering the low frequency components of the compressed image, means for estimating the selected high frequency components disregarded by the comparing means, and means for combining the filtered low frequency components and the estimated high frequency components to construct the decompressed reproduction of the original image with high integrity.
36. The system of claim 35 wherein the original image includes high frequency components and low frequency components, and wherein said means for performing a wavelet transform operation on the original image comprises: means for filtering the high frequency components of the original image, and means for filtering the low frequency components of the original image.
37. The system of claim 36 further comprising parallel filtering means for performing the high frequency filtering step and the low frequency filtering step in parallel.
38. The system of claim 35 wherein said high frequency comparing means comprises means for retaining those components having values above the selected threshold and disregarding those components having values below the threshold value.
39. The system of claim 35, wherein said estimating means comprises means for performing a matrix operation on the low frequency components to obtain an estimation output, and means for performing a filtering operation on the estimation filter output to generate an estimate of the high frequency components.
40. The system of claim 35 wherein said estimating means comprises means for performing a matrix operation on the low frequency components to generate the corresponding high frequency components.
41. The system of claim 35 wherein said comparing means comprises means for retaining selected high frequency components that meet the threshold value.
42. The system of claim 41 further comprising means for combining the low frequency components with the retained high frequency components to generate the wavelet transformed image.
43. The system of claim 36 wherein said high frequency filtering means comprises means for performing a convolving filtering operation on the high frequency components of the original image, said convolving means including means for shifting pixels of the image by two to produce a filtered image half the size of the original image.
44. The system of claim 43 wherein said low frequency filtering means comprises means for performing a convolving filtering operation on the low frequency components of the original image, said convolving means including means for shifting pixels of the image by two to produce a filtered image half the size of the original image.
45. The system of claim 36 wherein said low frequency filtering means and said high frequency filtering means include mutually biorthogonal filters.
46. The system of claim 35 wherein said threshold comparing means comprises means for setting the value of those components below the threshold value generally to zero.
47. The system of claim 35 wherein said means for performing a wavelet transform operation comprises: means for performing a Haar wavelet transform, said Haar transform means including high frequency filtering means having a first high frequency filtering coefficient set to about 0.5 and a second high frequency filtering coefficient set to about -0.5, and low frequency filtering means having first and second low frequency filtering coefficients set to about 0.5.
48. The system of claim 35 wherein said comparing means comprises means for selecting the threshold value as a function of the number of iterations of the wavelet transform.
49. The system of claim 35 wherein said estimating means comprises means for employing a regularization operator.
50. The system of claim 35 wherein said estimating means comprises means for estimating the high frequency components using only the low frequency components.
51. The system of claim 35 wherein said estimating means comprises means for refining the estimated high frequency components.
52. The system of claim 35 wherein said estimating means comprises means for refining the estimated high frequency components with an iterative conjugate gradient.
53. The system of claim 52 wherein said refining means further comprises: means for providing an initial estimate of the disregarded high frequency components, and means for clamping known values of the high frequency components, and means for iteratively refining the initial estimate.
54. A system for decompressing a compressed image, said compressed image containing both high frequency components and low frequency components, to obtain the original image, wherein the original image is compressed by a wavelet transform operation to generate a wavelet transformed image containing both high frequency components, at least a portion of which are discarded, and low frequency components, said system comprising: means for providing the compressed image, and means for decompressing the compressed image, said decompressing means including means for filtering the low frequency components of the compressed image, means for estimating the discarded high frequency components, and means for combining the filtered low frequency components and the estimated high frequency components to construct a decompressed reproduction of the original image with high integrity and accuracy.
55. An apparatus for recovering discarded high frequency wavelet transform coefficients from a digital signal representative of a complete set of low frequency wavelet transform coefficients and an incomplete set of high frequency wavelet transform coefficients, both sets being derived from a biorthogonal wavelet transform consisting of a high pass filter and a low pass filter, said apparatus comprising: means for selecting an operator to correspond to said high pass filter and to said low pass filter of said biorthogonal wavelet transform; and means for applying said operator to said complete set of low frequency wavelet transform coefficients.
56. The apparatus of claim 55 wherein said complete set of low frequency wavelet transform coefficients and said incomplete set of high frequency wavelet transform coefficients are derived from application of a biorthogonal wavelet transform to an image obtained from a scanner.
57. A method for estimating the values of a first digital signal from the values of a second digital signal, said second digital signal obtained by filtering said first digital signal with a low-pass filter to obtain its low frequency components, filtering said first digital signal with a high-pass filter orthogonal to said low-pass filter to obtain its high frequency components, and discarding all high frequency components having an amplitude smaller than a threshold, said method comprising the steps of: creating a first data vector, x^(j+1), by filtering said first digital signal with a low-pass filter and discarding every other point, creating a second data vector, c^(j+1), by filtering said first digital signal with a high-pass filter and discarding every other point, creating a unit vector having as many elements as there are values in said first digital signal, each element being unity, creating a first matrix, H^j, having half as many rows as there are values in said first digital signal, a first row corresponding to the circular convolution of said low-pass filter with said unit vector, and in which each row corresponds to the previous row shifted two elements to the right, creating a second matrix, G^j, having half as many rows as there are values in said first digital signal, a first row corresponding to the circular convolution of said high-pass filter with said unit vector, and in which each row corresponds to the previous row shifted two elements to the right, creating a third matrix, G̃^j, to reverse the operation of said second matrix G^j, creating a fourth matrix, H̃^j, to reverse the operation of said first matrix H^j, determining a third data vector, ĉ^(j+1), and a positive scalar α to minimize
||G^j G̃^j x^(j+1) − H^j G̃^j ĉ^(j+1)||^2 + α ||ĉ^(j+1)||^2
whereby said third data vector corresponds to an estimate of the high frequency components of said first digital signal, premultiplying said first data vector by said fourth matrix to create a first product, premultiplying said third data vector by said third matrix to create a second product, and adding together said first and second products to create a vector x^j, whereby said vector x^j is an estimate of the values in said first digital signal.
PCT/US1997/022685 1996-12-20 1997-12-16 Improved estimator for recovering high frequency components from compressed image data WO1998028917A1 (en)

Priority Applications (7)

Application Number Priority Date Filing Date Title
RU99116256/09A RU99116256A (en) 1996-12-20 1997-12-12 METHOD (OPTIONS) AND SYSTEM FOR EVALUATING THE SOURCE SIGNAL
BR9714419-3A BR9714419A (en) 1996-12-20 1997-12-16 Enhanced evaluation device to recover high frequency components of compressed image data
IL13050697A IL130506A0 (en) 1996-12-20 1997-12-16 Improved estimator for recovering high frequency components from compressed data
CA002275320A CA2275320A1 (en) 1996-12-20 1997-12-16 Improved estimator for recovering high frequency components from compressed image data
AU53794/98A AU5379498A (en) 1996-12-20 1997-12-16 Improved estimator for recovering high frequency components from compressed ima ge data
EP97950916A EP0947101A1 (en) 1996-12-20 1997-12-16 Improved estimator for recovering high frequency components from compressed image data
JP52881898A JP2001507193A (en) 1996-12-20 1997-12-16 Improved estimator for recovering high frequency components from compressed data

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US3370696P 1996-12-20 1996-12-20
US60/033,706 1996-12-20
US6663797P 1997-11-14 1997-11-14
US60/066,637 1997-11-14

Publications (2)

Publication Number Publication Date
WO1998028917A1 true WO1998028917A1 (en) 1998-07-02
WO1998028917A9 WO1998028917A9 (en) 1998-11-12

Family

ID=26710032

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1997/022685 WO1998028917A1 (en) 1996-12-20 1997-12-16 Improved estimator for recovering high frequency components from compressed image data

Country Status (11)

Country Link
EP (1) EP0947101A1 (en)
JP (1) JP2001507193A (en)
KR (1) KR20000062277A (en)
CN (1) CN1246242A (en)
AU (1) AU5379498A (en)
BR (1) BR9714419A (en)
CA (1) CA2275320A1 (en)
ID (1) ID19225A (en)
IL (1) IL130506A0 (en)
RU (1) RU99116256A (en)
WO (1) WO1998028917A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4990924B2 (en) * 2009-01-29 2012-08-01 日本電信電話株式会社 Decoding device, encoding / decoding system, decoding method, program
CN102378011B (en) * 2010-08-12 2014-04-02 华为技术有限公司 Method, device and system for up-sampling image
US9282328B2 (en) 2012-02-10 2016-03-08 Broadcom Corporation Sample adaptive offset (SAO) in accordance with video coding
US9380320B2 (en) 2012-02-10 2016-06-28 Broadcom Corporation Frequency domain sample adaptive offset (SAO)
CN105427247B (en) * 2015-11-26 2018-08-24 努比亚技术有限公司 A kind of mobile terminal and image processing method of image procossing

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0679032A2 (en) * 1994-04-20 1995-10-25 Oki Electric Industry Co., Ltd. Image encoding and decoding method and apparatus using edge systhesis and inverse wavelet transform

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
ATSUMI EIJI ET AL: "Image data compression with selective preservation of wavelet coefficients", VISUAL COMMUNICATIONS AND IMAGE PROCESSING '95, TAIPEI, TAIWAN, 24-26 MAY 1995, vol. 2501, pt.1, ISSN 0277-786X, PROCEEDINGS OF THE SPIE - THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING, 1995, USA, pages 545 - 554, XP002060387 *
BRUNEAU J M ET AL: "Image restoration using biorthogonal wavelet transform", VISUAL COMMUNICATIONS AND IMAGE PROCESSING '90, LAUSANNE, SWITZERLAND, 1-4 OCT. 1990, vol. 1360, pt.3, ISSN 0277-786X, PROCEEDINGS OF THE SPIE - THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING, 1990, USA, pages 1404 - 1415, XP002060377 *
KUROKI N ET AL: "HAAR WAVELET TRANSFORM WITH INTERBAND PREDICTION AND ITS APPLICATION TO IMAGE CODING", ELECTRONICS & COMMUNICATIONS IN JAPAN, PART III - FUNDAMENTAL ELECTRONIC SCIENCE, vol. 78, no. 4, 1 April 1995 (1995-04-01), pages 103 - 114, XP000549679 *
VAISEY J: "SUBBAND PREDICTION USING LEAKAGE INFORMATION IN IMAGE CODING", IEEE TRANSACTIONS ON COMMUNICATIONS, vol. 43, no. 2/04, PART 01, 1 February 1995 (1995-02-01), pages 216 - 221, XP000506549 *

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2349022A (en) * 1998-10-30 2000-10-18 Caterpillar Inc Automatic wavelet generation system and method
GB2349022B (en) * 1998-10-30 2004-03-03 Caterpillar Inc Automatic wavelet generation system and method
WO2000026809A1 (en) * 1998-10-30 2000-05-11 Caterpillar Inc. Automatic wavelet generation system and method
US6539319B1 (en) * 1998-10-30 2003-03-25 Caterpillar Inc Automatic wavelet generation system and method
WO2001037575A2 (en) * 1999-11-18 2001-05-25 Quikcat.Com, Inc. Method and apparatus for digital image compression using a dynamical system
WO2001037575A3 (en) * 1999-11-18 2002-01-17 Quikcat Com Inc Method and apparatus for digital image compression using a dynamical system
US6393154B1 (en) 1999-11-18 2002-05-21 Quikcat.Com, Inc. Method and apparatus for digital image compression using a dynamical system
US6456744B1 (en) 1999-12-30 2002-09-24 Quikcat.Com, Inc. Method and apparatus for video compression using sequential frame cellular automata transforms
US6678421B1 (en) * 2000-06-09 2004-01-13 Hrl Laboratories, Llc Subband coefficient prediction with pattern recognition techniques
WO2001097528A2 (en) * 2000-06-09 2001-12-20 Hrl Laboratories, Llc Subband coefficient prediction with pattern recognition techniques
WO2001097528A3 (en) * 2000-06-09 2002-04-11 Hrl Lab Llc Subband coefficient prediction with pattern recognition techniques
EP1185107A2 (en) * 2000-08-11 2002-03-06 Thomson Licensing S.A. Process for the colour format conversion of an image sequence
FR2813001A1 (en) * 2000-08-11 2002-02-15 Thomson Multimedia Sa Image display/composition having detection unit copying preceding inter type without residue pixel group or movement compensating using preceding converted image
US7415068B2 (en) 2000-08-11 2008-08-19 Thomson Licensing Process for the format conversion of an image sequence
EP1185107A3 (en) * 2000-08-11 2011-12-21 Thomson Licensing Process for the colour format conversion of an image sequence
US7792390B2 (en) * 2000-12-19 2010-09-07 Altera Corporation Adaptive transforms
WO2005022463A1 (en) * 2003-08-28 2005-03-10 Koninklijke Philips Electronics N.V. Method for spatial up-scaling of video frames
EP3407604A4 (en) * 2016-03-09 2019-05-15 Huawei Technologies Co., Ltd. Method and device for processing high dynamic range image
US10515440B2 (en) 2017-08-30 2019-12-24 Samsung Electronics Co., Ltd. Display apparatus and image processing method thereof
US11062430B2 (en) 2017-08-30 2021-07-13 Samsung Electronics Co., Ltd. Display apparatus and image processing method thereof
US11532075B2 (en) 2017-08-30 2022-12-20 Samsung Electronics Co., Ltd. Display apparatus for restoring high-frequency component of input image and image processing method thereof
CN110874581A (en) * 2019-11-18 2020-03-10 长春理工大学 Image fusion method for bioreactor of cell factory
CN110874581B (en) * 2019-11-18 2023-08-01 长春理工大学 Image fusion method for bioreactor of cell factory
CN115712154A (en) * 2022-11-02 2023-02-24 中国人民解放军92859部队 Displacement double-wavelet iteration method for detecting shipborne gravity measurement gross error
CN115712154B (en) * 2022-11-02 2023-11-03 中国人民解放军92859部队 Shifting double wavelet iteration method for detecting on-board gravity measurement rough difference

Also Published As

Publication number Publication date
AU5379498A (en) 1998-07-17
CA2275320A1 (en) 1998-07-02
CN1246242A (en) 2000-03-01
JP2001507193A (en) 2001-05-29
EP0947101A1 (en) 1999-10-06
BR9714419A (en) 2000-05-02
RU99116256A (en) 2001-05-10
IL130506A0 (en) 2000-06-01
ID19225A (en) 1998-06-28
KR20000062277A (en) 2000-10-25

Similar Documents

Publication Publication Date Title
US5379122A (en) Decompression of standard ADCT-compressed images
WO1998028917A1 (en) Improved estimator for recovering high frequency components from compressed image data
US8068683B2 (en) Video/audio transmission and display
US5432870A (en) Method and apparatus for compressing and decompressing images of documents
US6389176B1 (en) System, method and medium for increasing compression of an image while minimizing image degradation
WO1998028917A9 (en) Improved estimator for recovering high frequency components from compressed image data
US7454080B1 (en) Methods and apparatus for improving quality of block-transform coded images
JP4987480B2 (en) Conversion to remove image noise
AU6347890A (en) Improved image compression method and apparatus
EP0859517A2 (en) Video coder employing pixel transposition
CA3108454A1 (en) Transformations for signal enhancement coding
CA2476904C (en) Methods for real-time software video/audio compression, transmission, decompression and display
WO2006036796A1 (en) Processing video frames
EP2370934A1 (en) Systems and methods for compression transmission and decompression of video codecs
US6633679B1 (en) Visually lossless still image compression for CMYK, CMY and Postscript formats
Xiong et al. Wavelet-based approach to inverse halftoning
KR100982835B1 (en) Deblocking method and equipment for image data
MXPA99005731A (en) Improved estimator for recovering high frequency components from compressed image data
Mallikarjuna et al. Compression of noisy images based on sparsification using discrete rajan transform
Florea et al. Computationally efficient formulation of sparse color image recovery in the JPEG compressed domain
Pogrebnyak et al. Fast algorithm of byte-to-byte wavelet transform for image compression applications
Asai et al. A study of convex coders with an application to image coding
Al-Ghaib Lossy image compression using wavelet transform
Kiselyov Multiresolutional/fractal compression of still and moving pictures
JPH10294942A (en) Image data encoding device

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 97181833.9

Country of ref document: CN

AK Designated states

Kind code of ref document: A1

Designated state(s): AL AM AT AU AZ BA BB BG BR BY CA CH CN CZ DE DK EE ES FI GB GE GH HU ID IL IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT UA UG UZ VN YU ZW AM AZ BY KG KZ MD RU TJ TM

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW SD SZ UG ZW AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
COP Corrected version of pamphlet

Free format text: PAGES 1/8-8/8, DRAWINGS, REPLACED BY CORRECT PAGES 1/11-11/11

121 Ep: the epo has been informed by wipo that ep was designated in this application
ENP Entry into the national phase

Ref document number: 2275320

Country of ref document: CA

Ref document number: 2275320

Country of ref document: CA

Kind code of ref document: A

WWE Wipo information: entry into national phase

Ref document number: PA/a/1999/005731

Country of ref document: MX

ENP Entry into the national phase

Ref document number: 1998 528818

Country of ref document: JP

Kind code of ref document: A

WWE Wipo information: entry into national phase

Ref document number: 1019997005668

Country of ref document: KR

WWE Wipo information: entry into national phase

Ref document number: 1997950916

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 1997950916

Country of ref document: EP

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

WWP Wipo information: published in national office

Ref document number: 1019997005668

Country of ref document: KR

WWW Wipo information: withdrawn in national office

Ref document number: 1997950916

Country of ref document: EP

WWR Wipo information: refused in national office

Ref document number: 1019997005668

Country of ref document: KR