WO1991019272A1 - Improved image compression system - Google Patents

Improved image compression system

Info

Publication number
WO1991019272A1
WO1991019272A1 PCT/US1991/003802
Authority
WO
WIPO (PCT)
Prior art keywords
image
component
component images
generating
pixel
Prior art date
Application number
PCT/US1991/003802
Other languages
English (en)
Inventor
Howard Leonard Resnikoff
David Pollen
David C. Plummer Linden
Original Assignee
Aware, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US07/531,468 (US5101446A)
Application filed by Aware, Inc.
Publication of WO1991019272A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/60: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N 19/649: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding, the transform being applied to non-rectangular image segments
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/50: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N 19/503: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N 19/51: Motion estimation or motion compensation
    • H04N 19/527: Global motion vector estimation
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/60: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N 19/63: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using sub-band based transform, e.g. wavelets
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/85: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
    • H04N 19/86: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving reduction of coding artifacts, e.g. of blockiness

Definitions

  • the present invention relates to methods and apparatuses for transforming images and, more specifically, to a method and apparatus for reducing the amount of data needed to store an image.
  • Images are conventionally represented by a two-dimensional array of values in which each value represents a property of the image at a corresponding point on the image.
  • For gray-scale images, a single number representing the gradations of intensity from white to black, referred to as the gray scale, is stored.
  • For color images, each "value" is a vector whose components represent the gradations in intensity of the various primary colors, or some alternative color code, at the corresponding point in the image.
  • This representation of an image corresponds to the output of a typical image-sensing device such as a television camera. Such a representation is convenient in that it is easily regenerated on a display device such as a CRT tube.
  • However, this representation has at least two shortcomings. First, the number of bits needed to represent the data is prohibitively large for many applications. Second, if the image is to be processed to extract features that are arranged in the same order of importance as that perceived by a person viewing the image, the amount of processing needed can be prohibitively large.
  • the number of bits needed to store a typical image is sufficiently large to limit the use of images in data processing and communication systems.
  • a single 512x512 gray-scale image with 256 gray levels requires in excess of 256,000 bytes.
  • a small-scale computer user is limited to disk storage systems having a capacity of typically 300 Mbytes. Hence, less than 1200 images can be stored without utilizing some form of image compression.
  • the transmission of images over conventional telephone circuitry is limited by the high number of bits needed to represent the image. If an 8x11 inch image were digitized to 256 gray levels at 200 dots per inch (the resolution utilized in typical FAX transmissions) in excess of 28 million bits would be required. Normal consumer-quality analog telephone lines are limited to a digital communication rate of 9600 bits per second. Hence, the transmission of the image would require in excess of 45 minutes in the absence of some form of image compression.
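The arithmetic behind this FAX example can be checked directly; the 200 dots per inch, 256 gray levels (8 bits), and 9600 bits per second figures are taken from the passage above:

```python
# Back-of-the-envelope check of the FAX-transmission example.
width_px = 8 * 200    # 8-inch width at 200 dots per inch
height_px = 11 * 200  # 11-inch height at 200 dots per inch
bits = width_px * height_px * 8  # 8 bits per pixel for 256 gray levels
print(bits)  # 28_160_000 -> "in excess of 28 million bits"

seconds = bits / 9600  # 9600 bits per second on an analog phone line
print(seconds / 60)    # roughly 49 minutes -> "in excess of 45 minutes"
```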
  • image compression methods can be conveniently divided into two classes, invertible and non-invertible methods.
  • the invertible methods reduce redundancy but do not destroy any of the information present in the image.
  • These methods transform the two-dimensional array representing the image into a form requiring fewer bits to store.
  • the original two-dimensional array is generated by the inverse transformation prior to display.
  • the regenerated image is identical to the original image.
  • the image consists of a two-dimensional array of bits.
  • A one-dimensional list of bits can be generated from the two-dimensional array by copying, in order, each row of the two-dimensional array into the one-dimensional array. It has been observed that the resultant one-dimensional array has long runs of ones or zeros.
  • Consider a run of 100 ones. One hundred bits are required to represent the run in the one-dimensional array.
  • The same 100 bits could be represented by a 7-bit counter value specifying the length of the run and the value "one" specifying the repeated bit value.
  • Hence the 100 bits can be reduced to 8 bits. This is the basis of an invertible transformation of the one-dimensional array in which the transformed image consists of a sequence of paired values, each pair consisting of a count and a bit value.
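A minimal sketch of this count-plus-value run-length scheme (an illustration, not the patent's encoder; the 7-bit counter limit is taken from the passage above):

```python
def rle_encode(bits):
    """Run-length encode a list of 0/1 bits as (count, value) pairs.

    Runs longer than 127 are split so each count fits in the 7-bit
    counter described above (7-bit count + 1-bit value = 8 bits per pair)."""
    pairs = []
    i = 0
    while i < len(bits):
        j = i
        while j < len(bits) and bits[j] == bits[i] and j - i < 127:
            j += 1
        pairs.append((j - i, bits[i]))
        i = j
    return pairs

def rle_decode(pairs):
    """Inverse transformation: expand (count, value) pairs back to bits."""
    out = []
    for count, value in pairs:
        out.extend([value] * count)
    return out

run = [1] * 100                 # a run of 100 ones: 100 bits raw
encoded = rle_encode(run)
print(encoded)                  # [(100, 1)] -> one 8-bit pair
assert rle_decode(encoded) == run  # invertible: no information destroyed
```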
  • the success of any image compression scheme may be viewed in terms of the compression ratio obtained with the method in question.
  • the compression ratio is the ratio of the number of bits in the original two-dimensional array to the number of bits needed to store the transformed image.
  • compression ratios of the order of 5 to 10 can be obtained utilizing these methods.
  • the gains obtained decrease rapidly as the number of gray levels is increased.
  • the probability of finding repeated runs of the same gray level decreases.
  • Each time the gray level changes a new pair of values must be entered into the file. As a result, compression ratios exceeding 3 are seldom obtained utilizing invertible compression methods for gray-level images.
  • quantization replaces each pixel by an integer having a finite precision.
  • Each of the pixel values is to be replaced by an integer having a predetermined number of bits. The number of bits will be denoted by P.
  • the integers in question are then transmitted in place of the individual pixel values.
  • the inverse of the mapping used to assign the integer values to the pixel values is used to produce numbers that are used in place of the original pixel values.
  • the overall error in approximating an image will depend on the statistical distribution of the intensity values in the image and the degree of image compression needed.
  • Once an image compression ratio is set, the total number of bits available for all of the pixels of the quantized image is determined.
  • the optimum assignment of the available bits is determined from the statistical properties of the image. It has been found experimentally that the statistical distributions of a large class of images are approximated by Laplacian distributions. Hence, the optimum allocation of the bits may be made from a knowledge of the variance of the pixel values in the image.
  • a quantization scheme will be defined to be "optimum" if it provides the lowest distortion in the reconstructed image for a given compression ratio.
  • the distortion is measured in terms of mean-squared-error between the image prior to compression and image obtained after compressing and decompressing the original image. It will be apparent to those skilled in the art that other statistical measures of distortion between the original and reconstructed images may be used.
  • The transformation is chosen such that the variance of the first component image is approximately the same as that of the original image while the variance of the second component image is significantly less than that of the original image. Since there are one half the number of pixels in the first image, the first image can be quantized with one half the number of bits needed to quantize the original image to the same precision. However, since the second component image has a smaller variance than the original image, the number of bits needed to quantize the second component image will be less than one half the number needed to quantize the original image. Thus, a net reduction in the number of bits needed to represent the image in the quantized form is achieved.
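The saving described above can be illustrated with the standard high-rate quantization approximation, in which the bits needed per pixel grow as half the log of the variance-to-distortion ratio. This is a toy model, not the patent's allocation procedure, and the specific variances below are assumptions:

```python
import math

def bits_per_pixel(variance, distortion):
    """High-rate approximation: bits per pixel needed to quantize a
    source of the given variance down to a target mean-squared error."""
    return max(0.0, 0.5 * math.log2(variance / distortion))

orig_var, n_pixels, target_mse = 1000.0, 1 << 16, 1.0
full = n_pixels * bits_per_pixel(orig_var, target_mse)

# Split into two half-size component images: the first (low-frequency)
# half keeps roughly the original variance; the second has far less.
low = (n_pixels // 2) * bits_per_pixel(orig_var, target_mse)
high = (n_pixels // 2) * bits_per_pixel(10.0, target_mse)
print(full, low + high)  # the split needs fewer total bits
```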
  • transformations having the desired property can be constructed. It is known that the information in images tends to be concentrated in the lower spatial frequencies. Hence, transformations that divide the image into component images having different spatial frequency content provide the desired property. Such transformations can be implemented utilizing perfect reconstruction filter banks. By appropriate application of such filter banks to the image, a number of component images are generated. In general, a low-frequency image having a small fraction of the original image size is generated together with a number of high-frequency images. The low- frequency image has a pixel intensity variance of the same order as the original image and is quantized accordingly. The high-frequency component images have much smaller variances, and hence, are quantized with fewer bits per pixel.
  • The image is expanded in the form

      I(x,y) = Σ_k p_k F_k(x,y) + Σ_k q_k G_k(x,y)    (1)

  • The sets of functions {F_k(x,y)} and {G_k(x,y)} are the basis functions for the transformation.
  • The coefficient sets {p_k} and {q_k} are determined by fitting the observed image values using the basis functions. These coefficient sets are the "pixels" of the component images referred to above.
  • The basis functions {F_k} and {G_k} are chosen such that the most "important" information contained in the image is represented by the p's and the least important information is represented by the q's.
  • The basis functions must be chosen such that {F_k} will extract the low spatial frequency information in the image, and {G_k} will extract the high spatial frequency information. This condition guarantees that the transformation will lead to component images that differ in spatial frequency content.
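As a concrete illustration of such a low/high split, consider the two-tap Haar pair (an assumption for illustration, not the patent's preferred basis): the low-pass coefficients capture local averages and most of the variance, while the high-pass coefficients capture local differences:

```python
import math

signal = [10, 12, 11, 13, 50, 52, 51, 53]  # smooth except for one jump

# One-level split with the Haar analysis pair (assumed for illustration).
low = [(signal[2*i] + signal[2*i + 1]) / math.sqrt(2) for i in range(4)]
high = [(signal[2*i] - signal[2*i + 1]) / math.sqrt(2) for i in range(4)]

def variance(xs):
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / len(xs)

# The low band carries the signal's gross structure and large variance;
# the high band, holding only local differences, has far smaller variance.
print(variance(low), variance(high))
```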
  • A second problem occurs with basis functions having large support. Images tend to contain structures whose spatial extent is small compared to the size of the image. To represent such a structure with basis functions whose support is much larger than the structure in question often requires the superposition of many such basis functions. Hence, the number of coefficients which contain useful information is likely to be larger if basis functions having support which is much larger than the objects found in the image are used.
  • a third problem with transformations that utilize basis functions whose support is large is the computation cost.
  • the computational workload inherent in fitting the image to the expansion discussed above is related to the support of the basis functions. If the support is large, the computational workload becomes correspondingly large.
  • one prior art image compression system utilizes a two-dimensional Fourier transform.
  • the Fourier basis functions have the same support as the size of the image.
  • the transformation in question requires that one compute the two-dimensional Fourier transform of the entire image.
  • The computational workload is of the order of N²[log(N)]² operations. This workload is too great to allow a practical image compression apparatus to be constructed that operates on an entire image.
  • the image is typically broken into sub-images that are individually compressed. The decompressed sub-images are then recombined to form the decompressed image. At large compression ratios, the boundaries of the sub-regions appear as artifacts in the reconstructed image.
  • The human visual system is very sensitive to linear boundaries as is indicated by the ability of the visual system to detect such boundaries even when the difference in intensity is only a few percent across the boundary in question. Hence, these boundaries give rise to highly objectionable artifacts. Individuals viewing highly compressed images from this type of compression system often report a "blocky" picture.
  • The variance of the coefficient sets {p_k} and {q_k} is related to the basis functions selected.
  • the ability of the quantization scheme to compress the image is related to the variance of the coefficient sets.
  • basis functions which are orthogonal to each other give rise to coefficients sets having lower variance than the coefficient sets obtained with non-orthogonal basis functions.
  • Prior art systems with small support have not utilized transformations having orthogonal basis functions.
  • Although Adelson, et al. refer to their QMF method as being equivalent to expanding the image in an orthonormal basis set, the method does not provide the claimed orthonormal expansion.
  • The basis functions corresponding to a QMF are, by definition, symmetric. Using this property of the QMF basis functions, it can be shown that the QMF basis functions cannot form an orthonormal set. Hence, this method does not provide the advantages of an orthonormal transformation of the image.
  • the present invention comprises an image compression/decompression system together with recordings of images made thereby.
  • An image compression apparatus according to the present invention generates a plurality of component images from an input image.
  • the pixel intensity values of the component images are approximated by integers of a predetermined precision, the precision depending on the information content of the component image in question.
  • the pixel intensity values of the component images are the coefficients of an expansion of the input image in a two-dimensional irreducible basis.
  • a decompression apparatus reverses the compression process.
  • Such a decompression apparatus regenerates an approximation to the original component images from the integer values used to replace the original pixel intensity values of the component images.
  • the reconstructed component images are then recombined to generate an approximation to the original image.
  • The decomposition of the input image into the component images is carried out in the preferred embodiment of the present invention with the aid of a special class of two-dimensional finite impulse response filters.
  • An analogous class of filters is used to reconstruct the image.
  • FIG. 1 is a block diagram of an image compression apparatus 10 according to the present invention.
  • Figure 2 is a block diagram of a decompression apparatus according to the present invention.
  • Figure 3 is a block diagram of one embodiment of a two-dimensional FIR according to the present invention together with frame and component buffers.
  • Figure 4 is a block diagram of an analyzer according to the present invention.
  • Figure 6 is a block diagram of an injector operating on a component image stored in a buffer to generate an expanded image.
  • Figure 7 is a block diagram of a synthesizer according to the present invention.
  • Figure 8 illustrates the shape of the support area for one set of basis functions according to the present invention.
  • Figure 9 illustrates the shape of the support area for a second set of basis functions according to the present invention.
  • Figure 10 illustrates the shape of the support area for a third set of basis functions according to the present invention.
  • Figure 11 is a table of possible values for some of the parameters defining some of the low multiplier bases according to the present invention.
  • Figure 12 is a block diagram for a pipelined implementation of an analyzer system according to the present invention.
  • Figure 12 is a block diagram for an image reconstruction and display system according to the present invention.
  • FIG. 1 is a block diagram of an image compression apparatus 10 according to the present invention.
  • the image to be compressed is stored initially in a frame buffer 12.
  • a filter bank 14 is used to transform the image in frame buffer 12 into a plurality of component images comprising at least one component image representing the low spatial frequency information and one component image representing the high spatial frequency information in the image.
  • the component images are stored in a buffer 16.
  • the filter bank may be iteratively applied to the component images stored in buffer 16 to produce additional component images.
  • buffer 16 may include all of frame buffer 12.
  • the component images stored in buffer 16 are then quantized by quantizer 18 to generate the compressed image 20.
  • Quantizer 18 replaces each pixel in a component image by an integer having some predetermined precision. In general, the precision in question will vary with the component image being quantized and will depend on the statistical distribution, or an approximation thereto, of the pixels in the component image.
  • Compressed image 20 comprises the quantized pixel values and information specifying the quantization applied to each component image.
  • some of the component images will not be quantized.
  • the pixels of these images will be replaced by zeros.
  • the information in question includes the identity of the component images that were quantized and information specifying an inverse mapping which allows an approximation to each of the quantized component images to be generated by a decompression apparatus.
  • A block diagram of a decompression apparatus according to the present invention is shown in Figure 2 at 22.
  • the compressed image is received by an inverse quantizer 24 which generates the relevant component images which are stored in buffer 25.
  • Inverse quantizer 24 utilizes the inverse mapping information and the quantized component image values to generate approximations to the original component images. Those component images that were quantized to zero intensity pixels are replaced by component images of the same size having pixel values of zero.
  • the component images are then input to an inverse filter bank 26 which combines the component images to generate an approximation to the original image which is stored in a frame buffer 27.
  • component image buffer 25 may include frame buffer 27.
  • The filter bank and inverse filter bank described above are two-dimensional analogs of the one-dimensional perfect reconstruction filter banks utilized in a number of signal processing applications.
  • The filter bank that decomposes the image into the component images will be referred to as an analyzer, and the filter bank that reconstructs an image from its component images will be referred to as a synthesizer in the following discussion.
  • each filter bank includes a plurality of two-dimensional finite impulse response filters (FIRs). The number of FIRs in each filter bank will be denoted by M.
  • a block diagram of one embodiment of a two-dimensional FIR according to the present invention is shown at 30 in Figure 3 together with a frame buffer 32 and a component buffer 34.
  • The image to be processed comprises a two-dimensional array of pixel values which are stored in frame buffer 32.
  • the pixels of the component image are stored in component buffer 34.
  • the number of pixels in the component image will be approximately 1/M times the number of pixels in the image.
  • FIR 30 includes a controller 37 whose operations are synchronized by clock 38.
  • Clock 38 defines two time cycles, a major cycle and a minor cycle. During each major cycle, one pixel of the component image is generated. A typical pixel is shown at 35.
  • Each pixel of the component image is generated by performing a two-dimensional convolution operation on the pixels in a window 40 whose location is determined by the location of the pixel in question in the component image.
  • vector notation will be used.
  • vectors and matrices will be written in bold print.
  • a location in either of buffers 32 and 34 is identified by two integer index values.
  • a location is addressed by inserting the index values into an address register.
  • the address register for buffer 34 is shown at 33, and the address register for buffer 32 is shown at 31.
  • The index values will be denoted by an ordered integer pair (k_x, k_y).
  • A pixel of the component image stored at a location (p_x, p_y) in buffer 34 will be denoted by C_p, where the vector p denotes the location (p_x, p_y) in component buffer 34.
  • The location of window 40 in buffer 32 will be denoted by w.
  • The coordinates specified by w will be assumed to be those of the lower left-hand corner; however, other labeling schemes will also function adequately as will be apparent to those skilled in the art.
  • The intensity value of a pixel of the image stored in buffer 32 will be denoted by I_k, where the vector k denotes the location (k_x, k_y) in frame buffer 32.
  • The weights used in the convolution operation are stored in memory 39.
  • The weights may be regarded as a two-dimensional array g_k, where k denotes the location (k_x, k_y) in memory 39.
  • For each pixel C_p in the component image, FIR 30 performs the following calculation:

      C_p = Σ_k g_k I_{w(p)+k}    (2)
  • The summation is carried out for those k values corresponding to locations in window 40 for which the corresponding weights are non-zero.
  • window 40 and memory 39 have corresponding locations which may be specified by a two-dimensional pointer k.
  • controller 37 causes multiply and add circuit 36 to compute the weighted sum of all pixels in window 40.
  • One product is calculated each minor clock cycle in the preferred embodiment of the present invention.
  • the sum is carried out by selecting the weight corresponding to the pixel in question and multiplying it by the pixel value.
  • the products may be accumulated in multiply and add circuit 36 or stored in component buffer 34.
  • the location of window 40 depends on the specific pixel in the component image that is being calculated and other parameters relating to the specific transformation that is being implemented by the FIR. For the moment, it is sufficient to note that the location of the window is a function of p as shown in Eq. (2).
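A sketch of this windowed computation, assuming a decimation-by-two window placement w(p) = 2p and a simple 2x2 averaging kernel (both are assumptions for illustration; the actual window stride and weights depend on the transformation being implemented):

```python
def component_pixel(image, weights, p):
    """Compute one component-image pixel C_p as the weighted sum of the
    image pixels inside a window whose location w(p) depends on p."""
    wx, wy = 2 * p[0], 2 * p[1]  # assumed window location w(p) = 2p
    total = 0.0
    for kx in range(len(weights)):
        for ky in range(len(weights[0])):
            total += weights[kx][ky] * image[wx + kx][wy + ky]
    return total

image = [[float(x + y) for y in range(8)] for x in range(8)]
weights = [[0.25, 0.25], [0.25, 0.25]]  # a simple 2x2 averaging kernel
print(component_pixel(image, weights, (0, 0)))  # 1.0: mean of 0, 1, 1, 2
```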
  • Two or more FIR filters may be combined to form an analyzer.
  • a block diagram of an analyzer 50 according to the present invention is shown in Figure 4.
  • In general, there are M FIR filters in an analyzer. The significance of this number will be discussed in detail below.
  • an analyzer having a complete set of FIR filters has M such filters which will be labeled from 0 to (M-l).
  • Representative filters are shown at 59-61 in Figure 4.
  • Each FIR filter has an associated set of weights. The weight sets are preferably stored in separate buffers, one such buffer being associated with each FIR. Representative buffers are shown at 62-64.
  • the FIR filters are under the control of controller 56 which includes a clock 58 for synchronizing the various operations.
  • controller 56 also includes a master weight library from which the various weight sets are loaded into their respective buffers prior to processing an image.
  • the image to be processed is preferably stored in a buffer 52.
  • Image buffer 52 is shared by the various FIR filters.
  • Controller 56 uses clock 58 to divide the image processing into a number of time cycles. During each time cycle, one pixel is calculated in each of the component images. Controller 56 specifies the pixel to be calculated by appropriate signals to registers in the various component image buffers. Controller 56 also specifies the location of the window in image buffer 52.
  • all of the FIR filters are synchronized such that all of the FIR filters process the same point in image buffer 52.
  • The m-th FIR calculates the pixels of the m-th component image according to the formula

      ᵐC_p = Σ_k ᵐg_k I_{w(p)+k}

  • Here {ᵐg_k} denotes the m-th set of weights. The summation is carried out over all k values for which ᵐg_k is non-zero.
  • In the preferred embodiment of the present invention, component image ⁰C_p represents the low spatial frequency information in the image and the remaining component images represent higher spatial frequency information.
  • The number of pixels in each of the component images will be approximately N²/M, where the image being analyzed is assumed to be N×N pixels in size.
  • An analyzer may be recursively applied to an image to further resolve the image into different component images. Any of the component images may be used as an input image to the same analyzer or another analyzer that utilizes the same, or different, filter weights. In image compression applications, this type of processing is usually limited to the low-frequency component image.
  • Each application of the analyzer generates two component images, a low-frequency component image and one high-frequency component image.
  • the application of the analyzer to image 70 yields two component images 71 and 72.
  • the analyzer has been set to operate on a periodic image; hence, the number of pixels in each of the component images is one half that in image 70.
  • Low-frequency image 71 is then analyzed as if it were an image having one half the number of pixels of the original image.
  • the output of the analyzer for this image is a low-frequency component image 73 and a high-frequency component image 74.
  • Component images 73 and 74 will each have one quarter the number of pixels of image 70.
  • Low-frequency image 73 may again be analyzed to generate a still smaller low- frequency image 75 and a high-frequency image 76, each of these component images being one eighth the size of the original image.
  • High-frequency component image 72 will represent generally higher spatial frequency information than high-frequency component image 74, which, in turn, will represent generally higher frequency information than high-frequency component image 76.
  • the total number of pixels in the aggregated component images will be the same as that in the original image if the original image is assumed to be periodic.
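The recursive decomposition can be sketched in one dimension (the Haar analysis pair is assumed for illustration; the input is treated as periodic so each step exactly halves the component size and the total pixel count is preserved):

```python
import math

def analyze(signal):
    """One 1-D analysis step (Haar pair, assumed for illustration):
    returns (low, high) component signals, each half the input length."""
    n = len(signal) // 2
    low = [(signal[2*i] + signal[2*i + 1]) / math.sqrt(2) for i in range(n)]
    high = [(signal[2*i] - signal[2*i + 1]) / math.sqrt(2) for i in range(n)]
    return low, high

def decompose(signal, levels):
    """Recursively re-analyze only the low-frequency component, collecting
    the high bands at each level plus the final low band."""
    bands = []
    low = signal
    for _ in range(levels):
        low, high = analyze(low)
        bands.append(high)
    bands.append(low)
    return bands

bands = decompose(list(range(16)), levels=3)
print([len(b) for b in bands])  # [8, 4, 2, 2]: halves at each level
```

The band sizes sum to the original signal length, mirroring the statement above that the aggregated component images contain the same number of pixels as a periodic original image.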
  • the component images generated by each decomposition may be stored back in the image buffer in the analyzer.
  • the operation of the analyzer is restricted to the portion of the image buffer that is used to store the current low-frequency component image.
  • The statistical distributions of the pixel values in the component images are calculated by a quantizing circuit.
  • the number of bits to be allocated to the pixels of each component image is determined by the overall compression ratio selected for the image and the calculated statistical distributions of pixel intensities for the component images.
  • methods for performing such bit allocations are known to the art.
  • the number of bits allocated to the various component images may differ somewhat from the statistical optimum. These differences reflect computational efficiency and the manner in which the subjective image quality differs from quantitative measures of image distortion.
  • the computation of the statistical distributions of all the component images for each image that is to be compressed may exceed the available processing time in applications in which the image is being compressed in real time.
  • the form of the statistical distribution is usually assumed. For example, it has been found experimentally that large classes of images generate component images with Laplacian statistical distributions. Hence, it is assumed that the various component images have Laplacian statistical distributions. This reduces the bit allocation problem to determining the variances of the pixel distributions in the various component images.
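Under an assumed fixed-shape distribution such as the Laplacian, allocation reduces to the band variances. A sketch of the classical high-rate allocation rule, in which each band receives the average rate plus half the log-ratio of its variance to the geometric mean of all band variances (the variances below are assumptions, not measured values):

```python
import math

def allocate_bits(variances, avg_bits):
    """Classical high-rate bit allocation: band i receives
    avg_bits + 0.5 * log2(var_i / geometric_mean(variances))."""
    geo_mean = math.exp(sum(math.log(v) for v in variances) / len(variances))
    return [avg_bits + 0.5 * math.log2(v / geo_mean) for v in variances]

# Assumed variances: one high-variance low-frequency band plus three
# progressively lower-variance high-frequency bands.
variances = [900.0, 25.0, 9.0, 4.0]
bits = allocate_bits(variances, avg_bits=4.0)
print([round(b, 2) for b in bits])  # low-frequency band gets the most bits
```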
  • The above example considered the unusual case of a decomposition into a single low-frequency component image and a single high-frequency component image of the same size and variance.
  • An improvement in image quality, or in image compression for a given subjective image quality, can be obtained by increasing the number of bits per pixel used to quantize the low-frequency component image at the expense of the high-frequency component images.
  • The cost of increasing the number of bits allocated to the low-frequency component image is relatively small in most cases, since the low-frequency component image has a small fraction of the total pixels in the component images.
  • The highest frequency component images, on the other hand, have a significant fraction of the total number of pixels in the component images.
  • These component images may therefore be quantized with reduced precision relative to the statistical optimum, thereby generating a significant saving in the total number of bits needed to quantize the component images.
  • the highest frequency component images may be discarded in many cases. That is, zero bits are allocated per pixel.
  • the corresponding FIR filters in the analyzer may be omitted, thereby reducing the hardware cost of an analyzer.
  • an analyzer While the above described embodiments of an analyzer according to the present invention utilized multiple FIR filters, it will be apparent to those skilled in the art that the decompositions could be carried out in a serial fashion with a single FIR filter. That is, the pixels of each component image would be calculated before going on to calculate the pixels of the next component image. In such an embodiment, the coefficients used in the convolution operation would be changed between the component image calculations. Further, it will be apparent to those skilled in the art that the functions carried out by an analyzer may be simulated on a general purpose computer.
  • the component images generated by an analyzer may be recombined to recover the original image.
  • An apparatus for recombining the M component images generated by an analyzer will be referred to as a synthesizer.
  • a synthesizer is preferably constructed from M FIR filters that will be referred to as injectors in the following discussion.
  • each injector operates on one component image to generate an expanded image having the spatial frequency information represented by the component image in question.
  • the expanded image has the same number of pixels as the original image from which the component images were generated.
  • the reconstructed image is obtained by summing corresponding pixels in each of the expanded images.
  • A block diagram of an injector 80 operating on a component image stored in a buffer 84 to generate an expanded image stored in a buffer 82 is shown in Figure 6.
  • the component image is approximately 1/M the size of the expanded image.
  • Injector 80 includes a controller 87 whose operations are synchronized by clock 88.
  • Clock 88 defines two time cycles, a major cycle and a minor cycle. During each major cycle, one pixel of the expanded image is generated. A typical pixel is shown at 79. The minor timing cycles are used to synchronize the individual arithmetic operations used in the generation of each pixel.
  • Each pixel of the expanded image is generated by performing a two- dimensional convolution operation on the pixels in a window 85 whose location is determined by the location of the pixel in question in the component image.
  • vector notation will be used.
  • a pixel of the reconstructed image stored at a location (px, py) in buffer 82 will be denoted by I'p, where the vector p denotes the location (px, py) in frame buffer 82.
  • the location of window 85 in buffer 84 will be denoted by w.
  • the coordinates specified by w will be assumed to be those of the lower left-hand corner.
  • a pixel of the image stored in buffer 84 will be denoted by C k, where the vector k denotes the location in buffer 84.
  • the weights used in the convolution operation are stored in memory 89.
  • the weights are a two-dimensional array g' k , where k denotes the location in memory 89.
  • the weights g' k used in an injector are related to the weights g k used by the FIR filter that generated the component image in question.
  • For each pixel in the expanded image, injector 80 performs the following calculation:
  • window 85 and memory 89 have corresponding locations which may be specified by a two-dimensional pointer k.
  • controller 87 causes multiply and add circuit 86 to compute the weighted sum of all pixels in window 85.
  • the sum is carried out by selecting the weight corresponding to the pixel in question and multiplying it by the pixel value.
  • the products may be accumulated in multiply and add circuit 86 or stored in frame buffer 82.
  • the location of window 85 depends on the specific pixel in the component image that is being calculated and other parameters relating to the specific transformation that is being implemented by the injector. For the moment, it is sufficient to note that the location of the window is a function of p as shown in Eq. (4).
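A minimal simulation of the injector computation, assuming a one-dimensional Haar-style weight set with M = 2; the window location is here a simple function of the expanded-image pixel index p, standing in for the window placement rule of Eq. (4).

```python
import math

def inject(component, weights, M=2):
    """Expand a component image back to full size: each expanded
    pixel is a weighted sum over a window of component pixels
    (1-D sketch; here the window holds a single component pixel)."""
    n = len(component) * M
    expanded = [0.0] * n
    for p in range(n):
        w = p // M                      # window location for pixel p
        expanded[p] = weights[p % M] * component[w]
    return expanded

s = 1 / math.sqrt(2)
low = [8 * s, 16 * s]                   # a low-frequency component image
exp_low = inject(low, [s, s])           # injector weight set g'
```

The expanded image has the same number of pixels as the original image, as required; summing such expanded images from all injectors yields the reconstruction.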
  • A block diagram of a synthesizer 100 according to the present invention is shown in Figure 7.
  • there will be M injectors in a synthesizer.
  • Each injector corresponds to a FIR filter in the analyzer used to decompose the original image into the component images.
  • a synthesizer having a complete set of injectors has M injectors which will be labeled from 0 to (M-1).
  • Representative injectors are shown at 93, 96, and 99 in Figure 7.
  • Each injector has an associated set of weights which will be referred to as weight sets in the following discussion.
  • the weight sets are preferably stored in separate buffers, one such buffer being associated with each injector.
  • Representative buffers are shown at 91, 94, and 97.
  • Each weight set is the same as that used by the corresponding FIR filter in the decomposition of the original image into component images.
  • synthesizer 100 is under the control of a controller 104 which includes a clock 105 for synchronizing the various operations.
  • controller 104 also includes a master weight library from which the various weight sets are loaded into their respective buffers prior to processing the component images.
  • the reconstructed image is preferably stored in a buffer 102 which is shared by the various injectors.
  • Controller 104 uses clock 105 to divide the image processing into a number of time cycles. During each time cycle, one pixel in the reconstructed image is generated by combining M pixels generated by the injectors. The M pixels in question are combined by adder 100 and stored at the relevant location in the reconstructed image. Controller 104 specifies the pixel to be calculated by appropriate signals to registers in the various component image buffers.
  • Controller 104 also specifies the location of the window in the various component image buffers.
  • all of the injectors are synchronized such that all of the injector windows are positioned at the location corresponding to the reconstructed image pixel currently being processed. That is, all of the injectors use the same value for w.
  • each injector computes the pixel of its expanded image corresponding to the pixel in question in the reconstructed image.
  • Adder 100 then adds the results from the various injectors and controller 104 causes the sum to be stored at the correct location in the reconstructed image.
  • the m-th injector calculates the pixels of the m-th expanded image according to the formula
  • the corresponding synthesizer may be used to reconstruct the image by iteratively applying the synthesizer.
  • the synthesizer is applied to the component images in the reverse of the order used by the analyzer. Referring again to Figure 5:
  • Component images 75 and 76 would first be recombined to regenerate component image 73.
  • Component images 73 and 74 would then be recombined to regenerate component image 71.
  • component images 71 and 72 would be recombined to regenerate the original image 70.
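The reverse-order recombination can be sketched as follows, again using a one-dimensional Haar-style example (an illustrative assumption): the component images of the deepest level are recombined first, and the result is recombined with the next level's high-frequency component image to regenerate the original row.

```python
import math

s = 1 / math.sqrt(2)

def inject(component, weights, M=2):
    # expand one component image to full size (1-D Haar sketch)
    out = [0.0] * (len(component) * M)
    for p in range(len(out)):
        out[p] = weights[p % M] * component[p // M]
    return out

def synthesize(low, high):
    """Sum corresponding pixels of the two expanded images
    (the adder stage of the synthesizer, 1-D sketch)."""
    e_low, e_high = inject(low, [s, s]), inject(high, [s, -s])
    return [a + b for a, b in zip(e_low, e_high)]

# reverse of the analysis order: deepest level first
lowest   = [12.0]          # level-2 low-frequency component image
detail_2 = [-4.0]          # level-2 high-frequency component image
detail_1 = [0.0, 0.0]      # level-1 high-frequency component image

level_1 = synthesize(lowest, detail_2)   # regenerate level-1 low band
image   = synthesize(level_1, detail_1)  # regenerate original row
```

This mirrors the Figure 5 discussion: each synthesizer pass regenerates the low-frequency component image one level down, until the original image is recovered.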
  • the convolution operations described above with reference to the injectors and synthesizers are equivalent to computing the correlation of the portion of the image, or component image, defined by the window with a second "image" whose pixels are the corresponding weight set. While these operations are preferably carried out in a digital manner, it will be apparent to those skilled in the art that analog circuitry for generating the correlation values may be used. For example, a series of masks, light sources, and light detectors may be used with a transparency of the image to be decomposed to generate the component images. Similarly, the convolution operations described with reference to the analyzers and the filters comprising the same are equivalent to computing the correlation of the portion of the image defined by the window with a second "image" whose pixels are the corresponding weight set.
  • each function m F k [l](x,y) in the present invention is non-zero over a small region of the original image.
  • the region in question will, of course, differ for different functions. If all of the support areas of all basis functions at a given level in the decomposition are combined, the combined area must include the entire area covered by the image. If this is not the case, there will be some images which can not be represented in this manner.
  • the basis functions m F k [l](x,y) inherent in the present invention are translates of one another for fixed m and l. That is, there exists a vector p such that m F k [l](x,y) is identical to m F k'+p [l](x,y) for k different from k'. This condition guarantees that there is no preferred region in the image. This condition is reflected in the hardware of the present invention in that the size of the window and weight sets do not change when the pixel being calculated changes.
  • One set of basis functions which satisfies both of these constraints would be a set of functions whose supports are a set of areas which exactly tile the xy-plane.
  • the size of the support for the basis functions increases with the level of the decomposition in a multi-level decomposition. That is, m F k [l](x,y) has a support which is M times larger than that of m F k [l-1](x,y). If the same weight sets are used at each level of the decomposition, the support area of m F k [l](x,y) will be the same as the sum of the support areas of M of the basis functions at the (l-1)-th level.
  • the errors in the reconstructed image are a function of the basis functions used in the decomposition of the image and reflect both the shape and support of the basis functions.
  • the support area at each higher level in the decomposition includes M of the support areas in the next lowest level.
  • the boundary of the support area at the higher level coincides with a portion of the boundary of each of the support areas at the lower level contained therein.
  • the various quantization errors may thus add or subtract along these common boundaries either enhancing the artifact or reducing the same.
  • the effect of the enhancements is more noticeable than the improvements obtained when two errors cancel.
  • the human eye is very adept at following even a broken line.
  • the length of the maximum edge that can be generated is the size of the largest support area, i.e., the support areas of the basis functions at Level L. Hence, the linear artifacts can be quite large.
  • reducible basis functions have other linear artifacts. If one were to view a reducible basis function as a "mountain range", the ridges would be aligned with the x-axis or the y-axis.
  • quantization errors introduce an error over the basis function support area that has the appearance of a texture artifact.
  • the texture is related to the ridges and valleys of the basis function.
  • the resulting texture artifacts will generally be composed of lines.
  • the present invention utilizes filters in which the underlying functional expansion is in terms of an irreducible basis. That is, the basis functions cannot be written as a product of two one-dimensional functions.
  • the linearly aligned ridges described above can be avoided in an irreducible basis.
  • the present invention also avoids the texture artifacts inherent in a system based on a reducible basis.
  • the ability of the quantizer to compress the component image depends on the form of the underlying basis functions used by the analyzer.
  • the analyzer concentrates all of the "power" in the image into the low-frequency component image at the highest level and a few of the high- frequency component images at the levels just below the highest level.
  • the variances of the high-frequency component images at the lower levels will be very small and these high-frequency component images will require very few bits to adequately preserve their portion of the image information.
  • the number of pixels in lower frequency component images is much smaller than the number in the high-frequency component images at the lower levels.
  • the quantizer will need to allocate bits needed by the upper levels to these lower levels, thereby increasing the quantization errors.
  • the ability of the analyzer to concentrate information in the low-frequency component image depends on the ability of the basis functions {0 F k [L](x,y)} and {m F k [l](x,y)} for l close to L to represent the structures normally found in images. Suppose that these functions alone could represent all of the structures in an image. Then the pixels of all of the component images for the smaller values of l would be zero and could, therefore, be discarded. That is, there would be no information in the highest frequency component images that would be needed in the image reconstruction. In essence, the highest frequency component images represent the inability of the analyzer to concentrate information in the lower frequency component images.
  • the ability of the analyzer to concentrate information in the lower frequency component images also depends on the ability of the analyzer to set the ratio of the sizes of the component images at the various levels. As noted above, the size of the component images changes by a factor of M from one level to the next. In general, there will be some optimum value for M which depends on the objects in the image being compressed. In principle, the image compression can be optimized for particular images in terms of the specific coefficient sets. As will be explained in more detail below, M is dependent on the coefficient sets. Such an optimization would examine the distortion in the reconstructed image as a function of the available weight sets and choose the weight set providing the minimum distortion. The ability to optimize in this manner depends on being able to vary M. Prior art analyzers do not provide the ability to use M values other than 4.
  • the present invention provides the ability to choose any integer value for M which is greater than 1.
  • the manner in which the weight sets utilized in the present invention are calculated will now be discussed in detail.
  • the simplest weight sets according to the present invention correspond to basis functions having the following properties. All of the basis functions at any given level l for l > 0 have the same shape, up to a dilation. Each of the support areas forms a tile. The set of tiles completely covers the portion of the xy-plane over which the image to be compressed is defined.
  • the support area of a basis function at level l+1 is M times the area of the support of a basis function at level l and includes M of the support areas of the basis functions at level l.
  • the shape of the support area at level l+1 is the same as that at level l; however, the support area will, in general, be rotated relative to the support areas at level l. Since the support areas at any given level are congruent and tile the plane, a grid is defined in the plane by selecting one specific point on each of the support areas such that the selected points occupy congruent positions in the different support areas. The location of the corresponding point on the support areas at the next highest level will also be on one of the grid points. Since the support at the next highest level is larger than that at the lower level, not all grid points defined with reference to the support regions at level l will have a support region for a level l+1 basis function located thereon. The location of the grid point k' at level l+1 corresponding to a grid point k at level l is given by the transformation
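A sketch of such a grid-point transformation, assuming it takes the form k' = Sk for a 2x2 integer matrix S whose determinant equals the multiplier M. The matrix shown is a twindragon-style example for M = 2 (an assumption for illustration, consistent with the 45° rotation of the twindragon case discussed with Figure 9).

```python
import math

def level_up(k, S):
    """Map a grid point k at level l to its grid point k' at level
    l+1 via the 2x2 integer transformation matrix S."""
    (a, b), (c, d) = S
    return (a * k[0] + b * k[1], c * k[0] + d * k[1])

S = ((1, -1), (1, 1))       # twindragon-style matrix, determinant = M = 2
det = S[0][0] * S[1][1] - S[0][1] * S[1][0]

# rotation between levels = angle of the image of (1, 0), i.e. the
# first column of S; for this matrix the rotation is 45 degrees
angle = math.degrees(math.atan2(S[1][0], S[0][0]))
k_prime = level_up((2, 3), S)
```

Because |det S| = M, only one grid point in M at level l carries a support region at level l+1, matching the statement above.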
  • the simplest support shape is a rectangle having sides in the ratio of √2:1. Two such support areas placed side by side have the same shape as a single support area rotated by 90 degrees.
  • Figure 8 illustrates a number of rectangular support areas having sides in the ratio of √2:1. These dimensions are shown explicitly on support area 111. Each support area can be identified by an index associated with its lower left-hand corner. Support area 112 is associated with grid point (0,0); support area 113 is associated with grid point (0,1), and so on.
  • Support area 116 is obtained by combining support areas 113 and 115. This support has one side with length √2 and one with length 2. Hence, the sides are in the ratio √2:1. If the short side is defined to be the base, support area 116 can be viewed as being a support area of twice the area which has been rotated by 90 degrees relative to its two constituent support areas. Its lower left-hand corner is located at grid point (0,2); hence it satisfies the condition that it also lies on a grid point.
  • Figure 9 illustrates a surface 200 which is tiled with twindragon-shaped support areas of which support areas 201 and 202 are typical. It will be seen that the larger support area 203 obtained by combining support areas 201 and 202 is itself a twindragon-shaped support area located at the same grid point as support area 201. The combined support area is rotated by an angle of 45° relative to the single support areas.
  • Figure 10 illustrates a surface 300 which is tiled with novon shaped support areas of which support areas 301 and 302 are typical.
  • the larger support area 303 obtained by combining support areas 301 and 302 is itself a novon shaped support area located at the same grid point as support area 301.
  • the combined support area is rotated by an angle of arctangent(√7), which is approximately equal to 69.295°, relative to the single support areas.
  • novon and twindragon support areas have boundaries that do not contain any linear segments. Hence, quantization errors in the corresponding component image pixels do not give rise to linear artifacts in the reconstructed images.
  • a quantization error in a system utilizing basis functions defined on these support areas gives rise to an error over an irregularly shaped area which does not tend to reinforce quantization errors on the support areas at the next highest level since the support areas at each higher level are rotated with respect to the those at the lower levels.
  • the parameter λ may have the values 0, ±1, and ±2, which correspond to the rectangle, novon, and twindragon cases discussed above.
  • the associated support area is the same as support area 110 shown in Figure 8.
  • the more general case in which the transformation has M greater than or equal to two will now be discussed.
  • the parameter M will be referred to as the multiplier of the transformation.
  • the multiplier of the transformation can be any integer which is greater than one.
  • the number of possible values of ⁇ in Eq. (9) depends on M. It can be shown that ⁇ can take on those integer values for which
  • ⁇ and M determine the shape of the support area and the grid on which the support areas are located.
  • a grid is shown in Figure 11 at 800 together with a table of the permissible ⁇ values for M less than 6.
  • the grid is a parallelogram as shown at 801.
  • the grid points are labeled with integer indices.
  • the grid has been drawn with the x-axis as one of the grid directions.
  • the other grid direction is specified by the values of dx and dy.
  • the angle of rotation between the support areas of successive levels is given by the arctangent (dy/dx).
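Assuming the constraint of Eq. (9) takes the form λ² < 4M, and that the rotation between successive levels is the argument of the complex dilation (λ + i·sqrt(4M - λ²))/2 (assumptions chosen to be consistent with the rectangle, novon, and twindragon angles of 90°, ≈69.295°, and 45° quoted above for M = 2), the permissible values and angles can be tabulated:

```python
import math

def permissible_lambdas(M):
    # integer λ with λ² < 4M (assumed form of the Eq. (9) constraint)
    return [lam for lam in range(-2 * M, 2 * M + 1) if lam * lam < 4 * M]

def rotation_deg(M, lam):
    """Rotation between successive support-area levels, taken as the
    argument of the complex dilation (λ + i*sqrt(4M - λ²)) / 2."""
    dy = math.sqrt(4 * M - lam * lam)       # the grid offset dy
    return math.degrees(math.atan2(dy, lam))

# M = 2: λ = ±2 -> twindragon (45°), λ = ±1 -> novon (≈ 69.295°),
#        λ = 0  -> rectangle (90°)
lams = permissible_lambdas(2)
```

Note that arctangent(dy/dx) in the line above is recovered here with dx = λ and dy = sqrt(4M - λ²), and that arctan(√7) ≈ 69.295° reproduces the novon rotation.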
  • sets of coefficients ⁇ m a k ⁇ are chosen.
  • the weight sets m g k and m g' k are related to these coefficient sets.
  • the coefficient values are complex numbers.
  • the minimum number of coefficients in the low-frequency filter coefficient set {0 a k} which can be non-zero is M.
  • the coefficient sets for which there are M values of k for which 0 a k is different from zero, and for which those M values are all equal to one, correspond to the case in which the support areas do not overlap.
  • the remaining cases correspond to the cases in which the support areas overlap.
  • q is any vector connecting two grid points
  • (*) denotes complex conjugation
  • the component images may be generated from the coefficient sets by the following formula
  • the relationship between the coefficients m a k and the weight sets m g k can be ascertained by comparing Eqs. (3) and (14). Assume that m a k and m g k are defined on the same grid, i.e., the two-dimensional coordinates in the relevant weight buffer. As will be discussed in more detail below, not all of the points in this grid will have a corresponding m a k.
  • the weight sets can be divided into two groups, those that are zero and those that have values related to the m a k as follows.
  • the size of the window is specified by the extreme value of Sp+k.
  • the convolution can be carried out directly utilizing the coefficient sets {m a k}.
  • the coefficient sets would be stored in the memory arrays used to store the weights.
  • the controller would then select the points to be convolved with the coefficients by performing the above described matrix multiplication and vector additions. This is equivalent to defining a window in the image or component image buffers and convolving selected pixels in that window with the coefficients.
  • the present invention differs from prior art coding schemes such as that taught by Adelson, et al. in the above-mentioned United States Patent in the type of basis functions and the multipliers of the corresponding transformations.
  • prior art image coding schemes only transformations having multipliers of 4 and two-dimensional basis functions which could be written as the product of two one-dimensional basis functions are utilized. Such functions are usually referred to as "reducible".
  • the multiplier of the corresponding one-dimensional transformation is 2.
  • the relationship between the weighting coefficients used in the filters and injectors and the coefficient sets m a k, and the manner in which the coefficients are calculated, will now be explained in more detail.
  • the low-frequency coefficients 0 a k completely determine the remaining high-frequency coefficients 1 a k up to a common factor that can be selected to be either +1 or -1.
  • the relationship between the low and high-frequency coefficients may be more easily written without vector notation. The relationship is as follows:
  • the high-frequency coefficients are not completely determined by the low-frequency coefficients.
  • a computer search utilizing the constraint equations described above may be used to find coefficient sets.
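For the one-dimensional M = 2 case, the standard quadrature-mirror construction (reverse the low-frequency set and alternate signs) illustrates how the high-frequency coefficients are fixed up to an overall sign; the Daubechies-4 set below is used only as a familiar example of an orthonormal low-frequency set, not as a set taught by the patent.

```python
import math

r3, r2 = math.sqrt(3), math.sqrt(2)
# a known one-dimensional orthonormal low-frequency set with M = 2
b = [(1 + r3) / (4 * r2), (3 + r3) / (4 * r2),
     (3 - r3) / (4 * r2), (1 - r3) / (4 * r2)]

# high-frequency set, determined up to an overall +/-1 factor:
# reverse the order and alternate the signs
b_hi = [(-1) ** j * b[len(b) - 1 - j] for j in range(len(b))]

dot      = sum(x * y for x, y in zip(b, b_hi))     # cross-orthogonality
norm_lo  = sum(x * x for x in b)                   # unit norm
shift_ok = sum(b[j] * b[j + 2] for j in range(2))  # shift-by-M orthogonality
```

The three checks confirm that the constructed high-frequency set is orthogonal to the low-frequency set, and that the low-frequency set satisfies the shift-by-M orthonormality constraints a computer search would impose.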
  • additional constraints may be imposed on the m a k to provide additional advantageous properties to the transformation.
  • the analyzer and synthesizer store several sets. The analyzer can then test several sets against a given image to determine which provides the lowest distortion for a given compression ratio.
  • One method for calculating a two-dimensional low-frequency set of coefficients 0 a k utilizes a one-dimensional set of low-frequency coefficients.
  • a set of one-dimensional orthonormal coefficients {b j}, j running from 0 to N c - 1, has multiplier M if
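A sketch of a numerical check of this multiplier-M condition, assuming it takes the usual shift-orthonormality form (the sum of products of coefficients offset by multiples of M equals 1 for zero offset and 0 otherwise); the length-3 box set is the simplest multiplier-3 example.

```python
import math

def is_orthonormal_with_multiplier(b, M, tol=1e-12):
    """Check the assumed multiplier-M condition on a 1-D set:
    sum_j b[j] * b[j + M*q] = 1 if q == 0 else 0 (where defined)."""
    n = len(b)
    for q in range(-(n // M), n // M + 1):
        s = sum(b[j] * b[j + M * q]
                for j in range(n) if 0 <= j + M * q < n)
        target = 1.0 if q == 0 else 0.0
        if abs(s - target) > tol:
            return False
    return True

# simplest multiplier-3 example: a length-3 "box" set
b3 = [1 / math.sqrt(3)] * 3
ok = is_orthonormal_with_multiplier(b3, 3)
```

A set that fails the check, such as [0.5, 0.5] with M = 2, is not normalized and would not yield a perfectly reconstructible decomposition.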
  • the two-dimensional coefficient sets are obtained by deploying a one- dimensional set of coefficients in two-dimensions. There are an infinite number of two-dimensional deployments of the b j which satisfy the equations in question.
  • the preferred method of deploying a one-dimensional set of coefficients utilizes a 2x2 integer-valued matrix G having a determinant with absolute value equal to 1.
  • r = (1,0).
  • the general two-dimensional deployment of b j is given by
  • one advantageous feature of the present invention lies in the variety of basis functions provided thereby.
  • Each transformation provides different basis functions.
  • the transformation in which the corresponding basis functions most closely represents the image features may be selected.
  • the number of possible permutations of multipliers and ⁇ m a k ⁇ is too large to exhaustively search.
  • Basis functions which have this property may be selected by utilizing the sets of coefficients {m a k} in which the corresponding coefficients
  • the undesirable subjective effects resulting from quantization errors can be further reduced by using different transformations at each level of the decomposition.
  • different transformations correspond to different basis functions. If the same weight sets are used at each level of the transformation, the basis functions corresponding to a given level will have a portion of the support area of their boundary which is coincident with portions of the boundaries of the basis functions at the next lowest level. Hence, quantization errors in pixels at the two levels can add, thereby enhancing the edge of the support area and generating an artifact.
  • This situation can be avoided by utilizing weight sets corresponding to transformations having different shaped support areas at the different levels of the transformation. In this case, the basis functions at any given level will not have support areas which are coincident with the boundary of a support area at another level.
  • An image recording according to the present invention comprises a low-frequency component image and one or more high-frequency component images.
  • the pixels of the component images are approximations to the coefficients of an orthonormal expansion of the image in an irreducible basis in which the basis functions have compact support. That is, each basis function is non-zero over a small region of the image.
  • the recorded image also includes sufficient information specifying the basis functions in question to allow an approximation to the image to be constructed. In general, this information includes information specifying the coefficients sets used to generate the component images and the identity of any component images that were discarded.
  • there are a large number of weight sets that may be utilized in compressing an image in the manner taught by the present invention. Different sets may be better suited to specific classes of images. Hence, provided computation time is not limiting in the compression step of making a recording, it is advantageous to compress an image with a number of different weight sets and then measure the distortion in the decompressed image.
  • the component images generated by the compression giving the minimum distortion can then be used to generate the recorded image.
  • the component images can then be recorded on a suitable recording medium such as optical or magnetic disks or tapes.
  • This approach is particularly relevant in applications in which images are distributed in libraries to end users.
  • the number of images that can be stored on a given medium is dependent on the degree of compression that can be obtained at an acceptable image quality. Since the cost of optimizing the compression can be spread over a number of different end users, more computational time can be devoted to compressing the images.
  • Image compression and de-compression systems may be implemented in a pipelined fashion in those applications in which real time compression and decompression are needed. For example, in FAX transmission, both the compression and de-compression systems must operate in real time. Furthermore, it will generally be advantageous to provide both a compression apparatus and a decompression apparatus in the same instrument. In contrast, image display systems operating from libraries of compressed images may require only a decompression system that must operate at real-time speeds.
  • Analyzer 500 includes L stages, one such stage corresponding to each level of the multi-level decomposition of the images input to analyzer 500.
  • the first, second, and Lth stages are shown in Figure 12.
  • Each stage includes an analyzer such as that described above with reference to Figure 4 and two output buffers.
  • One of the output buffers is used to store the low-frequency component image generated by the analyzer and one is used to store the high-frequency component images generated by the analyzer.
  • the first stage comprises analyzer 502 and buffers 503-504.
  • the second stage comprises analyzer 506 and buffers 507-508, and the Lth stage comprises analyzer 510 and buffers 512 and 514.
  • the first stage also includes an image buffer 501 which receives the incoming images to be analyzed.
  • Analyzer system 500 is under the control of a controller 520 which includes a clock 522 for defining a series of time periods. During each time period, one image is input to analyzer system 500, and the component images corresponding to the image input L time periods earlier are output by controller 520.
  • each of the analyzers operates on the low-frequency component image stored in the low-frequency buffer in the stage ahead of it.
  • the component images generated by each analyzer are stored in the buffers associated with that analyzer.
  • each analyzer includes internal buffers used by that analyzer during the generation of the component images.
  • controller 520 collects the high-frequency component images generated by the various stages and the low-frequency component image generated by the last stage. Controller 520 includes sufficient memory to store the various high-frequency component images generated in L time cycles.
  • the time savings provided by a pipelined analyzer implementation will depend on the multiplier M of the transformation being implemented.
  • the computational workload of each of the successive stages decreases by a factor of M.
  • the time needed to analyze an image can be reduced by approximately a factor of two.
  • in systems with larger M, the number of levels in the decomposition will be less than in systems with smaller M.
  • while the computational workload per level is greater in systems with larger M, the parallel processing provided by the M FIR filters in each stage provides the necessary computational speed.
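The workload argument can be made concrete: since each stage performs 1/M the work of the stage before it, the serial work per image is proportional to a geometric sum, while a deep pipeline is limited only by the first stage. The resulting speedup approaches M/(M - 1), i.e., roughly the factor of two cited above for M = 2.

```python
def pipeline_speedup(M, levels):
    """Ratio of serial analysis time to pipelined time per image.
    Work is measured in units of the first stage; the pipelined
    time per image is bounded by that first (largest) stage."""
    serial_work = sum(M ** -l for l in range(levels))
    pipelined_time = 1.0
    return serial_work / pipelined_time

s2 = pipeline_speedup(2, 10)    # approaches 2 for M = 2
s4 = pipeline_speedup(4, 10)    # approaches 4/3 for M = 4
```

This also shows why the gain from pipelining shrinks as M grows: for large M the first stage already dominates the total workload.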
  • Image synthesis systems employing a similar pipelined architecture will be apparent to those skilled in the art from the foregoing discussion. However, one class of image synthesis system merits additional comment.
  • the present invention is particularly useful in displaying images that have been compressed and stored in their compressed form.
  • One problem inherent in high resolution graphics systems is the bottleneck presented by the communication bus that connects the image storage device and the graphics interface.
  • typically, the image storage device is a magnetic disk or the like. The time needed to transfer the data comprising a high resolution image is sufficiently large that a human operator who is browsing through the images in a library finds the delay annoying if not unacceptable.
  • the browsing operation can be carried out more efficiently for two reasons.
  • First, the amount of data that must be read from the disk and transferred to the graphics interface is reduced by the compression ratio. This significantly reduces the transfer time.
  • Second, a low-frequency approximation to the image can be displayed prior to receiving all of the component images. It is known from studies of the human visual system that an observer cannot appreciate the high resolution detail in a motion picture in those parts of the picture that are changing rapidly in time. Hence, when a new image is placed on a display screen, it takes some time for the viewer to see the high resolution detail in the image. This observation may be used to provide an image display system in which the viewer perceives that he or she is seeing the image at high resolution without the delays inherent in transferring all of the component images. In such a system, the component images are transferred in the order of their frequency content.
  • a screen image is generated based on the data received as of that time, the missing component images being initially replaced by component images having zeros for all of their pixels.
  • the image sharpens over time. Provided the sharpening occurs in a time consistent with the viewer's ability to appreciate the finer detail, the viewer will have the perception of viewing a high resolution image that was generated in a time much smaller than that actually used by the system.
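The progressive-display idea can be sketched with a one-dimensional Haar-style example (an illustrative assumption): missing high-frequency component images are replaced by all-zero images for the first approximation, and the display sharpens as they arrive.

```python
import math

s = 1 / math.sqrt(2)

def inject(component, weights):
    # expand a 1-D component image by a factor of 2 (Haar sketch)
    out = [0.0] * (2 * len(component))
    for p in range(len(out)):
        out[p] = weights[p % 2] * component[p // 2]
    return out

def synthesize(low, high):
    # sum the two expanded images, as in the synthesizer's adder
    return [a + b for a, b in zip(inject(low, [s, s]),
                                  inject(high, [s, -s]))]

# component images in order of frequency content
lowest, detail_2, detail_1 = [12.0], [-4.0], [0.0, 2 * s]

# first approximation: the missing level-1 high-frequency component
# image is replaced by an all-zero component image
coarse = synthesize(synthesize(lowest, detail_2), [0.0, 0.0])

# once detail_1 arrives, the display sharpens to the full image
final = synthesize(synthesize(lowest, detail_2), detail_1)
```

The coarse result is a low-pass version of the final row; only the pixels carrying high-frequency detail change when the last component image arrives.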
  • a display device according to this aspect of the present invention is shown in Figure 13 at 600.
  • Display device 600 has L stages for reconstructing an image that has been compressed to at most L levels. Each stage includes a synthesizer and two buffers.
  • the first, second, and Lth stages are shown in Figure 13.
  • the first stage comprises synthesizer 604 and buffers 602-603.
  • the second stage comprises synthesizer 608 and buffers 605-606, and the Lth stage comprises synthesizer 614 and buffers 610-611. If the image to be reconstructed was decomposed to fewer than L levels, controller 630 causes the low-frequency component image to be stored in the intermediate buffer of the appropriate stage. In the following discussion, it will be assumed that L stages are used.
  • the component images are input on a bus 601.
  • the information specifying the transformation actually used on the image to be reconstructed is received first by controller 630 which loads the appropriate weight sets into the various synthesizers.
  • the component images for each level of the decomposition are received.
  • one set of component images is received.
  • the low-frequency component image and the high-frequency component images corresponding to the highest level of the decomposition are stored in buffers 602 and 603, respectively.
  • controller 630 causes synthesizer 604 to reconstruct a low-frequency image corresponding to level (L-1) and to store that image in intermediate buffer 605.
  • controller 630 causes synthesizer 608 to generate a low-frequency component image corresponding to level (L-2) from the component image stored in intermediate buffer 605.
  • This low-frequency component image is stored in the intermediate buffer in the next stage, and so on.
  • synthesizer 614 constructs the final reconstructed image which is stored in video RAM 620 for display on screen 622.
  • This first approximation to the reconstructed image is based only on component images from the highest level of the decomposition. Since it lacks the information in the lower levels of the decomposition, i.e., the high-frequency information, this approximation will resemble a low-pass filtered image. Since the lower-frequency component images require only a small fraction of the component image data, they can be transmitted in a time that is short compared with that needed to transmit the entire compressed image.
  • the high-frequency component images of the next level of the decomposition are received on bus 601 and stored in buffer 606. It should be noted that each buffer is used for only a portion of the cycle; hence, component images may be stored in the buffers while the first approximation is being calculated.
  • After the first approximation is calculated, controller 630 causes the next approximation to be calculated in a similar manner. This time, however, additional high-frequency component images will have been stored in the relevant buffers; hence, the new approximation will have greater detail than the first. This process is repeated until the reconstructed image includes the information in all of the component images.
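The refinement loop described above can be sketched in a few lines. The following is a minimal one-dimensional illustration that replaces missing high-frequency component images with zeros, exactly as the display device's intermediate buffers do; it assumes a simple Haar basis purely for illustration (the patent's preferred bases differ), and all function names are hypothetical stand-ins for the synthesizer stages:

```python
# Progressive reconstruction sketch: the deepest low-frequency band arrives
# first; any high-frequency band not yet received is replaced by zeros,
# yielding a low-pass approximation that sharpens as more bands arrive.
# Haar filters are an illustrative assumption, not the patent's basis.

def haar_analyze(signal):
    """One level of Haar analysis: returns (low, high) component signals."""
    low = [(signal[2*i] + signal[2*i+1]) / 2 for i in range(len(signal)//2)]
    high = [(signal[2*i] - signal[2*i+1]) / 2 for i in range(len(signal)//2)]
    return low, high

def haar_synthesize(low, high):
    """One synthesizer stage: rebuilds the next-lower-level low-frequency signal."""
    out = []
    for l, h in zip(low, high):
        out.extend([l + h, l - h])
    return out

def decompose(signal, levels):
    """Full L-level decomposition into one low band and L high bands."""
    bands = []
    low = signal
    for _ in range(levels):
        low, high = haar_analyze(low)
        bands.append(high)
    return low, bands  # bands[0] is the finest level, bands[-1] the coarsest

def progressive_reconstruct(low, bands, received):
    """Reconstruct using only the `received` coarsest high bands;
    the remaining bands are zero-filled, as in the intermediate buffers."""
    levels = len(bands)
    approx = low
    for lvl in range(levels - 1, -1, -1):
        if levels - lvl <= received:
            high = bands[lvl]
        else:
            high = [0.0] * len(bands[lvl])   # missing band -> zeros
        approx = haar_synthesize(approx, high)
    return approx

signal = [4.0, 6.0, 10.0, 12.0, 8.0, 8.0, 2.0, 0.0]
low, bands = decompose(signal, 3)
blurred = progressive_reconstruct(low, bands, 0)   # coarse first screen
final = progressive_reconstruct(low, bands, 3)     # all bands received
print(blurred)  # every sample equals the global mean, 6.25
print(final)    # exact reconstruction of the input
```

With zero bands received, every output sample is the global mean (a maximally blurred screen); each additional band received doubles the effective resolution, matching the perceived sharpening described above.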

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Of Band Width Or Redundancy In Fax (AREA)
  • Image Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

An image compression system in which the image to be compressed is expanded on an irreducible two-dimensional basis. The input image (52) is transformed into a plurality of component images (65, 67, ...) having different spatial-frequency contents. The pixels of the component images are the coefficients of an expansion of the input image on an irreducible two-dimensional basis. The preferred basis functions have compact regions of support that are small compared with the input image and whose boundaries contain no straight-line segments. This choice of basis significantly reduces artifacts in reconstructed images produced from heavily compressed input images.
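The decomposition described in the abstract, splitting an input image into component images of differing spatial-frequency content whose pixels are expansion coefficients, can be illustrated with a single-level separable transform. This is only a sketch under an assumed Haar basis: the patent's preferred bases are irreducible, with compactly supported regions whose boundaries contain no straight-line segments, which a separable Haar basis does not satisfy. All names are hypothetical:

```python
# One-level 2D decomposition of an image into four component images:
# ll (low-frequency approximation) and lh, hl, hh (high-frequency detail).
# Each output pixel is an expansion coefficient over one 2x2 block.
# The separable Haar basis here is an illustrative assumption only.

def decompose_2x2(image):
    rows, cols = len(image), len(image[0])
    ll = [[0.0] * (cols // 2) for _ in range(rows // 2)]
    lh = [[0.0] * (cols // 2) for _ in range(rows // 2)]
    hl = [[0.0] * (cols // 2) for _ in range(rows // 2)]
    hh = [[0.0] * (cols // 2) for _ in range(rows // 2)]
    for i in range(rows // 2):
        for j in range(cols // 2):
            a = image[2*i][2*j];   b = image[2*i][2*j+1]
            c = image[2*i+1][2*j]; d = image[2*i+1][2*j+1]
            ll[i][j] = (a + b + c + d) / 4   # local average
            lh[i][j] = (a - b + c - d) / 4   # variation across columns
            hl[i][j] = (a + b - c - d) / 4   # variation across rows
            hh[i][j] = (a - b - c + d) / 4   # diagonal variation
    return ll, lh, hl, hh

# A flat image with one vertical edge inside the left 2x2 blocks:
image = [[10.0, 0.0, 0.0, 0.0] for _ in range(4)]
ll, lh, hl, hh = decompose_2x2(image)
print(ll)  # [[5.0, 0.0], [5.0, 0.0]] -- low-frequency approximation
print(lh)  # [[5.0, 0.0], [5.0, 0.0]] -- the edge appears as column detail
```

Iterating the same split on the `ll` output yields the multi-level pyramid of component images that the reconstruction stages later recombine.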
PCT/US1991/003802 1990-05-31 1991-05-29 Systeme ameliore de compression d'images WO1991019272A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US07/531,468 US5101446A (en) 1990-05-31 1990-05-31 Method and apparatus for coding an image
US531,468 1990-05-31
US70604291A 1991-05-28 1991-05-28
US706,042 1991-05-28

Publications (1)

Publication Number Publication Date
WO1991019272A1 true WO1991019272A1 (fr) 1991-12-12

Family

ID=27063553

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1991/003802 WO1991019272A1 (fr) 1990-05-31 1991-05-29 Systeme ameliore de compression d'images

Country Status (2)

Country Link
AU (1) AU8091491A (fr)
WO (1) WO1991019272A1 (fr)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4802110A (en) * 1985-12-17 1989-01-31 Sony Corporation Two-dimensional finite impulse response filter arrangements
US4805129A (en) * 1986-11-17 1989-02-14 Sony Corporation Two-dimensional finite impulse response filter arrangements
US4817182A (en) * 1987-05-04 1989-03-28 General Electric Company Truncated subband coding of images


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
STEPHEN J. LEON, "Linear Algebra with Applications", Third Edition, pp. 71-74 & 85-87, published 1980, 1986, 1990, MACMILLAN PUBLISHING CO. N.Y., N.Y. *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5481275A (en) * 1992-11-02 1996-01-02 The 3Do Company Resolution enhancement for video display using multi-line interpolation
US5572235A (en) * 1992-11-02 1996-11-05 The 3Do Company Method and apparatus for processing image data
US5596693A (en) * 1992-11-02 1997-01-21 The 3Do Company Method for controlling a spryte rendering processor
US5838389A (en) * 1992-11-02 1998-11-17 The 3Do Company Apparatus and method for updating a CLUT during horizontal blanking
US6191772B1 (en) 1992-11-02 2001-02-20 Cagent Technologies, Inc. Resolution enhancement for video display using multi-line interpolation
US5752073A (en) * 1993-01-06 1998-05-12 Cagent Technologies, Inc. Digital signal processor architecture
US6330362B1 (en) 1996-11-12 2001-12-11 Texas Instruments Incorporated Compression for multi-level screened images

Also Published As

Publication number Publication date
AU8091491A (en) 1991-12-31

Similar Documents

Publication Publication Date Title
US5101446A (en) Method and apparatus for coding an image
US5583952A (en) Method and apparatus for representing an image by iteratively synthesizing high and low frequency components
US5148498A (en) Image coding apparatus and method utilizing separable transformations
US5546477A (en) Data compression and decompression
US5453844A (en) Image data coding and compression system utilizing controlled blurring
US5661822A (en) Data compression and decompression
AU637020B2 (en) Improved image compression method and apparatus
Watson Image compression using the discrete cosine transform
Anderson et al. Image restoration based on a subjective criterion
US5572236A (en) Digital image processor for color image compression
JP4465112B2 (ja) DWT-based upsampling algorithm suited to image display on an LCD panel
US6002809A (en) Digital image processor for image scaling
US5293434A (en) Technique for use in a transform coder for imparting robustness to compressed image data through use of global block transformations
US5523847A (en) Digital image processor for color image compression
JPH0591331A Image processing method and halftone image printing system
WO1997037324A1 (fr) Apparatus for assessing the visibility of differences between two image sequences
US5729484A (en) Processes, apparatuses, and systems of encoding and decoding signals using transforms
US5995990A (en) Integrated circuit discrete integral transform implementation
WO1991019272A1 (fr) Systeme ameliore de compression d'images
US6934420B1 (en) Wave image compression
Press Wavelet-based compression software for FITS images
US6633679B1 (en) Visually lossless still image compression for CMYK, CMY and Postscript formats
Kumar et al. An image compression algorithm for gray scale image
Kumar et al. Aggrandize Bit Plane Coding using Gray Code Method
KR20010077752A Image compression method and apparatus using a discrete wavelet transform with fuzzy logic reflecting characteristics of the human visual system

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AU BR JP KR SU

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FR GB GR IT LU NL SE