US20010019630A1: Method for transferring and displaying compressed images
 Publication number
 US20010019630A1 (application US09/283,017)
 Authority
 US
 United States
 Prior art keywords
 image
 miniature
 compressed
 data
 information
 Prior art date
 Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
 Granted
Classifications
 H—ELECTRICITY
 H04—ELECTRIC COMMUNICATION TECHNIQUE
 H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
 H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
 H04N19/90—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
 H04N19/94—Vector quantisation
 H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
 H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
 H04N19/124—Quantisation
 H04N19/126—Details of normalisation or weighting functions, e.g. normalisation matrices or variable uniform quantisers
 H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
 H04N19/186—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being a colour or a chrominance component
 H04N19/30—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
 H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
 H04N19/63—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using subband based transform, e.g. wavelets
 H04N19/80—Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
Abstract
An image with resolution higher than can be delivered in a single transmission over a finite bandwidth channel is obtained by transferring a progressively rendered, compressed image. Initially, a low quality image is compressed and transmitted over the finite bandwidth channel. Then, successively higher resolution image information is compressed at the source and transmitted. The successively higher resolution image information received at the destination end is used to display a higher resolution image at the destination end.
Description
 1. Field of the Invention
 This invention relates to the compression and decompression of digital data and, more particularly, to the reduction in the amount of digital data necessary to store and transmit images.
 2. Background of the Invention
 Image compression systems are commonly used in computers to reduce the storage space and transmittal times associated with storing, transferring and retrieving images. Due to increased use of images in computer applications, and the increase in the transfer of images, a variety of image compression techniques have attempted to solve the problems associated with the large amounts of storage space (i.e., hard disks, tapes or other devices) needed to store images.
 Conventional devices store an image as a two-dimensional array of picture elements, or pixels. The number of pixels determines the resolution of an image. Typically, the resolution is stated as the number of horizontal and vertical pixels contained in the two-dimensional image array. For example, a 640 by 480 image has 640 pixels across and 480 pixels from top to bottom, for a total of 307,200 pixels.
 While the number of pixels represents the image resolution, the number of bits assigned to each pixel represents the number of available intensity levels of each pixel. For example, if a pixel is only assigned one bit, the pixel can represent a maximum of two values. Thus the range of colors which can be assigned to that pixel is limited to two (typically black and white). In color images, the bits assigned to each pixel represent the intensity values of the three primary colors of red, green and blue. In present “true color” applications, each pixel is normally represented by 24 bits where 8 bits are assigned to each primary color allowing the encoding of 16.8 million (2^{8}×2^{8}×2^{8}) different colors.
 Consequently, color images require large amounts of storage capacity. For example, a typical color (24 bits per pixel) image with a resolution of 640 by 480 requires approximately 922,000 bytes of storage. A larger 24-bit color image with a 2000 by 2000 pixel resolution requires approximately twelve million bytes of storage. As a result, image-based applications such as interactive shopping, multimedia products, electronic games and other image-based presentations require large amounts of storage space to display high quality color images.
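The storage figures above follow directly from width times height times bytes per pixel. As a short illustrative sketch (not part of the patent text):

```python
def raw_image_bytes(width, height, bits_per_pixel=24):
    """Uncompressed size of a width x height image at the given bit depth."""
    return width * height * bits_per_pixel // 8

# A 24-bit 640x480 image: 921,600 bytes (roughly the 922,000 cited above).
print(raw_image_bytes(640, 480))    # 921600
# A 24-bit 2000x2000 image: 12,000,000 bytes.
print(raw_image_bytes(2000, 2000))  # 12000000
```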
 In order to reduce storage requirements, an image is compressed (encoded) and stored as a smaller file which requires less storage space. In order to retrieve and view the compressed image, the compressed image file is expanded (decoded) to its original size. The decoded (or “reconstructed”) image is usually an imperfect or “lossy” representation of the original image because some information may be lost in the compression process. Normally, the greater the amount of compression the greater the divergence between the original image and the reconstructed image. The amount of compression is often referred to as the compression ratio. The compression ratio is the amount of storage space needed to store the original (uncompressed) digitized image file divided by the amount of storage space needed to store the corresponding compressed image file.
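The compression ratio defined above is simply the uncompressed size divided by the compressed size; a brief sketch for concreteness:

```python
def compression_ratio(original_bytes, compressed_bytes):
    """Ratio of uncompressed file size to compressed file size, as defined above."""
    return original_bytes / compressed_bytes

# A 921,600-byte raw image compressed to a 46,080-byte file gives a 20:1 ratio.
print(compression_ratio(921600, 46080))  # 20.0
```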
 By reducing the amount of storage space needed to store an image, compression is also used to reduce the time needed to transfer and communicate images to other locations. In order to transfer an image, the data bits that represent the image are sent via a data channel to another location. The sequence of transmitted bytes is called the data stream. Generally, the image data is encoded and the compressed image data stream is sent over a data channel and when received, the compressed image data is decoded to recreate the original image. Thus, compression speeds the transmission of image files by reducing their size.
 Several processes have been developed for compressing the data required to represent an image. Generally, the processes rely on two methods: 1) spatial or time domain compression, and 2) frequency domain compression. In frequency domain compression, the binary data representing each pixel in the space or time domain are mapped into a new coordinate system in the frequency domain.
 In general, the mathematical transforms, such as the discrete cosine transform (DCT), are chosen so that the signal energy of the original image is preserved, but the energy is concentrated in a relatively few transform coefficients. Once transformed, the data is compressed by quantization and encoding of the transform coefficients.
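The energy-compaction property of the DCT can be seen numerically: for a smooth signal, nearly all of the signal energy lands in the first few transform coefficients. The sketch below uses a plain orthonormal DCT-II (the general transform family named above, not the patent's particular optimized DCT):

```python
import math

def dct2(signal):
    """Orthonormal 1-D DCT-II; preserves total signal energy."""
    n = len(signal)
    coeffs = []
    for k in range(n):
        s = sum(x * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                for i, x in enumerate(signal))
        scale = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
        coeffs.append(scale * s)
    return coeffs

# A smooth 8-sample ramp: the first two coefficients hold over 99% of the energy,
# so the remaining six can be quantized coarsely or dropped.
x = [10, 12, 14, 16, 18, 20, 22, 24]
c = dct2(x)
total = sum(v * v for v in c)
low = sum(v * v for v in c[:2])
print(low / total)  # greater than 0.99
```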
 Optimization of the process of compressing an image includes increasing the compression ratio while maintaining the quality of the original image, reducing the time to encode an image, and reducing the time to decode a compressed image. In general, a process that increases the compression ratio or decreases the time to compress an image results in a loss of image quality. A process that increases the compression ratio and maintains a high quality image often results in longer encoding and decoding times. Accordingly, it would be advantageous to increase the compression ratio and reduce the time needed to encode and decode an image while maintaining a high quality image.
 It is well known that image encoders can be optimized for specific image types. For example, different types of images may include graphical, photographic, or typographic information, or combinations thereof. As discussed in more detail below, the encoding of an image can be viewed as a multi-step process that uses a variety of compression methods, including filters, mathematical transformations, quantization techniques, etc. In general, each compression method will compress different image types with varying comparative efficiency. These compression methods can be selectively applied to optimize an encoder with respect to a certain type of image. In addition to selectively applying various compression methods, it is also possible to optimize an encoder by varying the parameters (e.g., quantization tables) of a particular compression method.
 Broadly speaking, however, the prior art does not provide an adaptive encoder that automatically decomposes a source image, classifies its parts, and selects the optimal compression methods and the optimal parameters of the selected compression methods resulting in an optimized encoder that increases relative compression rates.
 Once an image is optimally compressed with an encoder, the set of compressed data are stored in a file. The structure of the compressed file is referred to as the file format. The file format can be fairly simple and common, or the format can be quite complex and include a particular sequence of compressed data or various types of control instructions and codes.
 The file format (the structure of the data in the file) is especially important when compressed data in the file will be read and processed sequentially and when the user desires to view or transmit only part of a compressed image file. Accordingly, it would be advantageous to provide a file format that “layers” the compressed image components, arranging those of greatest visual importance first, those of secondary visual importance second, and so on. Layering the compressed file format in such a way allows the first segment of the compressed image file to be decoded prior to the remainder of the file being received or read by the decoder. The decoder can display the first segment (layer) as a miniature version of the entire image or can enlarge the miniature to display a coarse or “splash” quality rendition of the original image. As each successive file segment or layer is received, the decoder enhances the quality of the displayed picture by selectively adding detail and correcting pixel values.
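One way to picture such a layered format is a file of length-prefixed layers, where a decoder can stop after any prefix and still render an image. This is a hypothetical sketch (the names and the 4-byte length prefix are illustrative; the patent's actual segment layout is described later with the channel encoder):

```python
import struct

def pack_layers(layers):
    """Concatenate layers, each prefixed with a 4-byte big-endian length."""
    return b"".join(struct.pack(">I", len(p)) + p for p in layers)

def unpack_prefix(data, max_layers):
    """Decode only the first max_layers layers; later bytes need not be present."""
    out, pos = [], 0
    while len(out) < max_layers and pos + 4 <= len(data):
        (n,) = struct.unpack(">I", data[pos:pos + 4])
        out.append(data[pos + 4:pos + 4 + n])
        pos += 4 + n
    return out

blob = pack_layers([b"miniature", b"splash-detail", b"enhancement"])
print(unpack_prefix(blob, 1))  # [b'miniature']
```

Because layers of greatest visual importance come first, reading just the first layer already yields the miniature, and each further layer only adds detail.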
 Like the encoding process, the decoding of an image can be viewed as a multi-step process that uses a variety of decoding methods, including inverse mathematical transformations, inverse quantization techniques, etc. Conventional decoders are designed to have an inverse function relative to the encoding system. These inverse decoding methods must match the encoding process used to encode the image. In addition, where an encoder makes content-sensitive adaptations to the compression algorithm, the decoder must apply a matching content-sensitive decoding process.
 Generally, a decoder is designed to match a specific encoding process. Prior art compression systems exist that allow the decoder to adjust particular parameters, but the prior art encoders must also transmit accompanying tables and other information. In addition, many conventional decoders are limited to specific decoding methods that do not accommodate content-sensitive adaptations.
 The problems outlined above are solved by the method and apparatus of the present invention. That is, the computer-based image compression system of the present invention includes a unique encoder which compresses images and a unique decoder which decompresses images. The unique compression system obtains high compression ratios at all image quality levels while achieving relatively quick encoding and decoding times.
 A high compression ratio enables faster image transmission and reduces the amount of storage space required to store an image. When compared with conventional compression techniques, such as the Joint Photographic Experts Group (JPEG), the present invention significantly increases the compression ratio for color images which, when decompressed, are of comparable quality to the JPEG images. The exact improvement over JPEG will depend on image content, resolution, and other factors.
 Smaller image files translate into direct storage and transmission time savings. In addition, the present invention reduces the number of operations to encode and decode an image when compared to JPEG and other compression methods of a similar nature. Reducing the number of operations reduces the amount of time and computing resources needed to encode and decode an image, and thus improves computer system response times.
 Furthermore, the image compression system of the present invention optimizes the encoding process to accommodate different image types. As explained below, the present invention uses fuzzy logic techniques to automatically analyze and decompose a source image, classify its components, select the optimal compression method for each component, and determine the optimal content-sensitive parameters of the selected compression methods. The encoder does not need prior information regarding the type of image or information regarding which compression methods to apply. Thus, a user does not need to provide compression system customization or set the parameters of the compression methods.
 The present invention is designed with the goal of providing an image compression system that reliably compresses any type of image with the highest achievable efficiency, while maintaining a consistent range of viewing qualities. Automating the system's adaptivity to varied image types allows for a minimum of human intervention in the encoding process and results in a system where the compression and decompression process are virtually transparent to the users.
 The encoder and decoder of the present invention contain a library of encoding methods that are treated as a “toolbox.” The toolbox allows the encoder to selectively apply particular encoding methods or tools that optimize the compression ratio for a particular image component. The toolbox approach allows the encoder to support many different encoding methods in one program, and accommodates the invention of new encoding methods without invalidating existing decoders. The toolbox approach thus allows upgradeability for future improvements in compression methods and adaptation to new technologies.
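The toolbox idea can be sketched as a registry of encoding methods keyed by tool identifier. The registry API and the two toy methods below are hypothetical (the patent names the concept but defines no concrete interface); the point is that a decoder keyed the same way can skip identifiers it does not recognize, which is what preserves compatibility as new methods are added:

```python
# Hypothetical sketch: tool ids and method names are illustrative, not the patent's.
ENCODER_TOOLBOX = {}

def register_tool(tool_id):
    """Register an encoding method under a numeric tool id."""
    def wrap(fn):
        ENCODER_TOOLBOX[tool_id] = fn
        return fn
    return wrap

@register_tool(1)
def identity(data):
    # Trivial pass-through "method", e.g. for already-compact components.
    return data

@register_tool(2)
def run_length(data):
    # Toy run-length coder, suited to flat graphical regions.
    out, i = [], 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1
        out.append((j - i, data[i]))
        i = j
    return out

def encode(tool_id, data):
    """Dispatch to whichever registered tool the classifier selected."""
    return ENCODER_TOOLBOX[tool_id](data)

print(encode(2, "aaab"))  # [(3, 'a'), (1, 'b')]
```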
 A further feature of the present invention is that the encoder creates a file format that segments or “layers” the compressed image. The layering of the compressed image allows the decoder to display image file segments, beginning with the data at the front of the file, in a coherent sequence which begins with the decoding and display of the information that constitutes the core of the image as defined by human perception. This core information can appear as a good quality miniature of the image and/or as a full sized “splash” or coarse quality version of the image. Both the miniature and splash image enable the user to view the essence of an image from a relatively small amount of encoded data. In applications where the image file is being transmitted over a data channel, such as a telephone line or limited bandwidth wireless channel, display of the miniature and/or splash image occurs as soon as the first segment or layer of the file is received. This allows users to view the image quickly and to see detail being added to the image as subsequent layers are received, decoded, and added to the core image.
 The decoder decompresses the miniature and the full sized splash quality image from the same information. User specified preferences and the application determine whether the miniature and/or the full sized splash quality image are displayed for any given image.
 Whether the first layer is displayed as a miniature or a splash quality full size image, the receipt of each successive layer allows the decoder to add additional image detail and sharpness. Information from the previous layer is supplemented, not discarded, so that the image is built layer by layer. Thus a single compressed file with a layered file format can store both a thumbnail and a full size version of the image and can store the full size version at various quality levels without storing any redundant information.
 The layered approach of the present invention allows the transmission or decoding of only the part of the compressed file which is necessary to display a desired image quality. Thus, a single compressed file can generate a thumbnail and different quality full size images without the need to recompress the file to a smaller size and lesser quality, or store multiple files compressed to different file sizes and quality levels.
 This feature is particularly advantageous for online service applications, such as shopping or other applications where the user or the application developer may want several thumbnail images downloaded and presented before the user chooses to receive the entire full size, high quality image. In addition to conserving the time and transmission costs associated with viewing a variety of high quality images that may not be of interest, the user need subsequently download only the remainder of each image file to view the higher detail versions of the image.
 The layered format also allows the storage of different layers of the compressed data file separate from one another. Thus, the core image data (miniature) can be stored locally (e.g., in fast RAM memory for fast access), and the higher quality “enhancement” layers can be stored remotely in lower cost bulk storage.
 A further feature of the layered file format of the present invention allows the addition of other compressed data information. The layered and segmented file format is extendable so that new layers of compressed information such as sound, text and video can be added to the compressed image data file. The extendable file format allows the compression system to adapt to new image types and to combine compressed image data with sound, text and video.
 Like the encoder, the decoder of the present invention includes a toolbox of decoding methods. The decoding process can begin with the decoder first determining the encoding methods used to encode each data segment. The decoder determines the encoding methods from instructions the encoder inserts into the compressed data file.
 Adding decoder instructions to the compressed image data provides several advantages. A decoder that recognizes the instructions can decode files from a variety of different encoders, accommodate content-sensitive encoding methods, and adjust to user specific needs. The decoder of the present invention also skips parts of the data stream that contain data unnecessary for a given rendition of the image, or ignores parts of the data stream that are in an unknown format. The ability to ignore unknown formats allows future file layers to be added while maintaining compatibility with older decoders.
 In a preferred embodiment of the present invention, the encoder compresses an image using a first Reed Spline Filter, an image classifier, a discrete cosine transform, a second and third Reed Spline Filter, a differential pulse code modulator, an enhancement analyzer, and an adaptive vector quantizer to generate a plurality of data segments that contain the compressed image. The plurality of data segments are further compressed with a channel encoder.
 The Reed Spline Filter includes a color space conversion transform, a decimation step and a least mean squared error (LMSE) spline fitting step. The output of the first Reed Spline Filter is then analyzed to determine an image type for optimal compression. The first Reed Spline Filter outputs three components which are analyzed by the image classifier. The image classifier uses fuzzy logic techniques to classify the image type. Once the image type is determined, the first component is separated from the second and third components and further compressed with an optimized discrete cosine transform and an adaptive vector quantizer. The second and third components are further compressed with a second and third Reed Spline Filter, the adaptive vector quantizer, and a differential pulse code modulator.
 The enhancement analyzer enhances areas of an image determined to be the most visually important, such as text or edges. The enhancement analyzer determines the visual priority of pixel blocks. The pixel block dimensions typically correspond to 16×16 pixel blocks in the source image. In addition, the enhancement analyzer prioritizes each pixel block so that the most important enhancement information is placed in the earliest enhancement layers so that it can be decoded first. The output of the enhancement analyzer is compressed with the adaptive vector quantizer.
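Block prioritization of this kind can be sketched by scoring each 16x16 block and sorting. The patent does not specify the importance measure, so the sketch below uses intensity variance purely as a stand-in proxy for visual importance (edges and text score high, flat regions low):

```python
def block_priority(image, block=16):
    """Rank (row, col) block origins by intensity variance, highest first.

    Variance is a hypothetical stand-in for the patent's visual-priority measure.
    """
    h, w = len(image), len(image[0])
    scores = []
    for by in range(0, h, block):
        for bx in range(0, w, block):
            vals = [image[y][x]
                    for y in range(by, min(by + block, h))
                    for x in range(bx, min(bx + block, w))]
            mean = sum(vals) / len(vals)
            var = sum((v - mean) ** 2 for v in vals) / len(vals)
            scores.append((var, (by, bx)))
    scores.sort(reverse=True)
    return [pos for _, pos in scores]

# A 16x32 image: the left block is flat, the right block holds a sharp
# alternating pattern, so the right block is enhanced first.
img = [[0] * 16 + [255 if x % 2 else 0 for x in range(16)] for _ in range(16)]
print(block_priority(img)[0])  # (0, 16)
```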
 A user may set the encoder to compute a color palette optimized to the color image. The color palette is combined with the output of the discrete cosine transform, the adaptive vector quantizer, the differential pulse code modulator, and the enhancement analyzer to create a plurality of data segments. The channel encoder then interleaves and compresses the plurality of data segments.
 These and other aspects, advantages, and novel features of the invention will become apparent upon reading the following detailed description and upon reference to accompanying drawings in which:
 FIG. 1 is a block diagram of an image compression system that encodes, transfers and decodes an image and includes a source image, an encoder, a compressed file, a first storage device, a data channel, a data stream, a decoder, a display, a second storage device, and a printer;
 FIG. 2 illustrates the multi-step decoding process and includes the source image, the encoder, the compressed file, the data channel, the data stream, the decoder, a thumbnail image, a splash image, a panellized standard image, and the final representation of the source image;
 FIG. 3 is a block diagram of the encoder showing the four stages of the encoding process;
 FIG. 4 is a block diagram of the encoder showing a first Reed Spline Filter, a color space conversion transform, a Y miniature, a U miniature, an X miniature, an image classifier, an optimized discrete cosine transform, a discrete cosine transform residual calculator, an adaptive vector quantizer, a second and third Reed Spline Filter, a Reed Spline residual calculator, a differential pulse code modulator, an enhancement analyzer, a high resolution residual calculator, a palette selector, a plurality of data segments and a channel encoder;
 FIG. 5 is a block diagram of the image formatter;
 FIG. 6 is a block diagram of the Reed Spline Filter;
 FIG. 7 is a block diagram of the color space conversion transform;
 FIG. 8 is a block diagram of the image classifier;
 FIG. 9 is a block diagram of the optimized discrete cosine transform;
 FIG. 10 is a block diagram of the DCT residual calculator;
 FIG. 11 is a block diagram of the adaptive vector quantizer;
 FIG. 12 is a block diagram of the second and third Reed Spline Filters;
 FIG. 13 is a block diagram of the Reed Spline residual calculator;
 FIG. 14 is a block diagram of the differential pulse code modulator;
 FIG. 15 is a block diagram of the enhancement analyzer;
 FIG. 16 is a block diagram of the high resolution residual calculator;
 FIG. 17 is a block diagram of the palette selector;
 FIG. 18 is a block diagram of the channel encoder;
 FIG. 19 is a block diagram of the vector quantization process;
 FIGS. 20a and 20b show the segmented architecture of the data stream;
 FIG. 21 illustrates the normal segment;
 FIGS. 22a, 22b, 22c and 22d illustrate the layering and interleaving of the plurality of data segments;
 FIG. 23 is a block diagram of the decoder of the present invention;
 FIG. 24 illustrates the multi-step decoding process and includes a Ym miniature, a Um miniature, an Xm miniature, the thumbnail miniature, the splash image and the standard image, and the enhanced image;
 FIG. 25 is a block diagram of the decoder and includes an inverse Huffman encoder, an inverse DPCM, a dequantizer, a combiner, an inverse DCT, a demultiplexer, and an adder;
 FIG. 26 is a block diagram of the decoder and includes the interpolator, interpolation factors, a scaler, scale factors, a replicator, and an inverse color converter;
 FIG. 27 is a block diagram of the decoder that includes the inverse Huffman encoder, the combiner, the dequantizer, the inverse DCT, a pattern matcher, the adder, the interpolator, and an enhancement overlay builder;
 FIG. 28 is a block diagram of the scaler with an input-to-output ratio of five-to-three in the one-dimensional case;
 FIG. 29 illustrates the process of bilinear interpolation;
 FIG. 30 is a block diagram of the process of optimizing the compression methods with the image classifier, the enhancement analyzer, the optimized DCT, the AVQ, and the channel encoder;
 FIG. 31 is a block diagram of the image classifier;
 FIG. 32 is a flow chart of the process of creating an adaptive uniform DCT quantization table;
 FIG. 33 illustrates a table of several examples showing the mapping from input measurements to input sets to output sets;
 FIG. 34 is a block diagram of image data compression;
 FIG. 35 is a block diagram of a spline decimation/interpolation filter;
 FIG. 36 is a block diagram of an optimal spline filter;
 FIG. 37 is a vector representation of the image, processed image, and residual image;
 FIG. 38 is a block diagram showing a basic optimization block of the present invention;
FIG. 39 is a graphical illustration of a one-dimensional bilinear spline projection;
FIG. 40 is a schematic view showing periodic replication of a two-dimensional image;
FIGS. 41a, 41b and 41c are perspective and plan views of a two-dimensional planar spline basis;
 FIG. 42 is a diagram showing representations of the hexagonal tent function;
 FIG. 43 is a flow diagram of compression and reconstruction of image data;
FIG. 44 is a graphical representation of a normalized frequency response of a one-dimensional bilinear spline basis;
FIG. 45 is a graphical representation of a one-dimensional eigenfilter frequency response;
FIG. 46 is a perspective view of a two-dimensional eigenfilter frequency response;
FIG. 47 is a plot of standard error as a function of frequency for a one-dimensional cosinusoidal image;
FIG. 48 is a plot of original and reconstructed one-dimensional images and a plot of standard error;
FIG. 49 is a first two-dimensional image reconstruction for different compression factors;
FIG. 50 is a second two-dimensional image reconstruction for different compression factors;
FIG. 51 shows plots of standard error for representative images 1 and 2;
FIG. 52 is a compressed tau2 miniature using the optimized decomposition weights;
FIG. 53 is a block diagram of a preferred adaptive compression scheme to which the method of the present invention is particularly suited;
 FIG. 54 is a block diagram showing a combined sublevel and optimalspline compression arrangement;
 FIG. 55 is a block diagram showing a combined sublevel and optimalspline reconstruction arrangement;
 FIG. 56 is a block diagram showing a multiresolution optimized interpolation arrangement; and
 FIG. 57 is a block diagram showing an embodiment of the optimizing process in the image domain.
FIG. 1 illustrates a block diagram of an image compression system that includes a source image 100, an encoder 102, a compressed file 104, a first storage device 106, a communication data channel 108, a decoder 110, a display 112, a second storage device 114, and a printer 116. The source image 100 is represented as a two-dimensional image array of picture elements, or pixels. The number of pixels determines the resolution of the source image 100, which is typically measured by the number of horizontal and vertical pixels contained in the two-dimensional image array.
Each pixel is assigned a number of bits that represent the intensity level of the three primary colors: red, green, and blue. In the preferred embodiment, the full-color source image 100 is represented with 24 bits, where 8 bits are assigned to each primary color. Thus, the total storage required for an uncompressed image is computed as the number of pixels in the image times the number of bits used to represent each pixel (referred to as bits per pixel).
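The storage computation described above can be sketched as follows; the helper name `uncompressed_bytes` is an illustrative assumption, not a term from the patent:

```python
def uncompressed_bytes(width, height, bits_per_pixel):
    # Total storage = number of pixels times bits per pixel,
    # converted from bits to bytes.
    return width * height * bits_per_pixel // 8

# The 640x480, 24-bit example discussed below works out to roughly
# 922,000 bytes (exactly 921,600).
size_640x480 = uncompressed_bytes(640, 480, 24)
```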
As discussed in more detail below, the encoder 102 uses decimation, filtering, mathematical transforms, and quantization techniques to concentrate the image into fewer data samples, representing the image with fewer bits per pixel than the original format. Once the source image 100 is compressed with the encoder 102, the set of compressed data are assembled in the compressed file 104. The compressed file 104 is stored in the first storage device 106 or transmitted to another location via the data channel 108. If the compressed file 104 is transmitted to another location, the data stored in the compressed file 104 is transmitted sequentially via the data channel 108. The sequence of bits in the compressed file 104 that are transmitted via the data channel 108 is referred to as a data stream 118.
The decoder 110 expands the compressed file 104 to the original source image size. During the process of decoding the compressed file 104, the decoder 110 displays the expanded source image 100 on the display 112. In addition, the decoder 110 may store the expanded compressed file 104 in the second storage device 114 or print the expanded compressed file 104 on the printer 116.
For example, if the source image 100 comprises a 640×480, 24-bit color image, the amount of memory needed to store and display the source image 100 is approximately 922,000 bytes. In the preferred embodiment, the encoder 102 computes the highest compression ratio for a given decoding quality and playback model. The playback model allows a user to select the decoding mode as is discussed in more detail below. The compressed data are then assembled in the compressed file 104 for transmittal via the data channel 108 or stored in the first storage device 106. For example, at a 92-to-1 compression ratio, the 922,000 bytes that represent the source image 100 are compressed into approximately 10,000 bytes. In addition, the encoder 102 arranges the compressed data into layers in the compressed file 104.
Referring to FIG. 2, it can be seen that the layering of the compressed file 104 allows the decoder 110 to display a thumbnail image and progressively improving quality versions of the source image 100 before the decoder 110 receives the entire compressed file 104. The first data expanded by the decoder 110 can be viewed as a thumbnail miniature 120 of the original image or as a coarse quality “splash” image 122 with the same dimensions as the original image. The splash image 122 is a result of interpolating the thumbnail miniature to the dimensions of the original image. As the decoder 110 continues to receive data from the data stream 118, the decoder 110 creates a standard image 124 by decoding the second layer of information and adding it to the splash image 122 data to create a higher quality image. The encoder 102 can create a user-specified number of layers in which each layer is decoded and added to the displayed image as data is received. Upon receiving the entire compressed file 104 via the data stream 118, the decoder 110 displays an enhanced image 105 that is the highest quality reconstructed image that can be obtained from the compressed data stream 118.
FIG. 3 illustrates a block diagram of the encoder 102 constructed in accordance with the present invention. The encoder 102 compresses the source image 100 in four main stages. In a first stage 126, the source image 100 is formatted, processed by a Reed Spline Filter, and color converted. In a second stage 128, the encoder 102 classifies the source image 100 in blocks. In a third stage 130, the encoder 102 selectively applies particular encoding methods that optimize the compression ratio. Finally, the compressed data are interleaved and channel encoded in a fourth stage 132.
The encoder 102 contains a library of encoding methods that are treated as a toolbox. The toolbox allows the encoder 102 to selectively apply particular encoding methods that optimize the compression ratio for a particular image type. In the preferred embodiment, the encoder 102 includes at least one of the following: an adaptive vector quantizer (AVQ 134), an optimized discrete cosine transform (optimized DCT 136), a Reed Spline Filter 138 (RSF), a differential pulse code modulator (DPCM 140), a run length encoder (RLE 142), and an enhancement analyzer 144.
FIG. 4 illustrates a more detailed block diagram of the encoder 102. The first stage 126 of the encoder 102 includes a formatter 146, a first Reed Spline Filter 148 and a color space converter 150 which produces Y data 186, and U and X data 188. The second stage 128 includes an image classifier 152. The third stage includes an optimized discrete cosine transform and adaptive DCT quantization (optimized DCT 136), a DCT residual calculator 154, the adaptive vector quantizer (AVQ 134), a second and a third Reed Spline Filter 156, a Reed Spline residual calculator 158, the differential pulse code modulator (DPCM 140), a resource file 160, the enhancement analyzer 144, a high resolution residual calculator 162, and a palette selector 164. The fourth stage includes a plurality of data segments 166 and a channel encoder 168. The output of the channel encoder 168 is stored in the compressed file 104.
The formatter 146, as shown in more detail in FIG. 5, converts the source image 100 from its native format to a 24-bit red, green and blue pixel array. For example, if the source image 100 is an 8-bit palettized image, the formatter converts the 8-bit palettized image to a 24-bit red, green, and blue equivalent.
The first Reed Spline Filter 148, illustrated in more detail in FIG. 6, uses a two-step process to compress the formatted source image 100. The two-step process comprises a decimation step performed in a block 170 and a spline fitting step performed in a block 172. As explained in more detail below, the decimation step in the block 170 decimates each color component of red, green, and blue by a factor of two along the vertical and horizontal dimensions using a Reed Spline decimation kernel. The decimation factor is called “tau.” The R_tau2′ decimated data 174 corresponds to the red component decimated by a factor of 2. The G_tau2′ decimated data 176 corresponds to the green component decimated by a factor of 2. The B_tau2′ decimated data 178 corresponds to the blue component decimated by a factor of 2.
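The factor-of-two decimation can be illustrated with a short sketch. The patent's actual Reed Spline decimation kernel is not reproduced in this excerpt, so a plain 2×2 block average stands in for it here; only the tau = 2 structure matches the text:

```python
def decimate_2x(channel):
    # Decimate a 2-D color channel by a factor of two ("tau" = 2) in
    # both the vertical and horizontal dimensions. A 2x2 block average
    # is an illustrative stand-in for the Reed Spline kernel.
    h, w = len(channel), len(channel[0])
    out = []
    for y in range(0, h - 1, 2):
        row = []
        for x in range(0, w - 1, 2):
            total = (channel[y][x] + channel[y][x + 1] +
                     channel[y + 1][x] + channel[y + 1][x + 1])
            row.append(total / 4.0)
        out.append(row)
    return out
```

Applying such a decimation to each of the red, green, and blue components yields data at one-quarter the original resolution, matching the tau = 2 case described below.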
In the spline fitting step in block 172, the first Reed Spline Filter 148 partially restores the source image detail lost by the decimation in block 170. The spline fitting step in block 172 processes the R_tau2′ decimated data 174, the G_tau2′ decimated data 176, and the B_tau2′ decimated data 178 to calculate optimal reconstruction weights.
As explained in more detail below, the decoder 110 will interpolate the decimated data into a full sized image. In this interpolation, the decoder 110 uses the reconstruction weights, which have been calculated by the Reed Spline Filter in such a way as to minimize the mean squared error between the original image components and the interpolated image components. Accordingly, the Reed Spline Filter 148 causes the interpolated image to match the original image more closely and increases the overall sharpness of the interpolated picture. In addition, reducing the error arising from the decimation step in block 170 reduces the amount of data needed to represent the residual image. The residual image is the difference between the reconstructed image and the original image.
The reconstruction weights output from the Reed Spline Filter 148 form a “miniature” of the original source image 100 for each primary color of red, green, and blue, wherein each red, green, and blue miniature is one-quarter the resolution of the original source image 100 when a tau of 2 is used.
More specifically, the preferred color space converter 150 transforms the R_tau2 miniature 180, the G_tau2 miniature 182 and the B_tau2 miniature 184 output by the first Reed Spline Filter 148 into a different color coordinate system in which one component is the luminance Y data 186 and the other two components are related to the chrominance U and X data 188. The color space converter 150 transforms the RGB to the YUX color space according to the following formulas:
Y = 0.29900R + 0.58700G + 0.11400B

U = −0.16870R − 0.33120G + 0.50000B

X = 0.50000R − 1.08216G + 0.91869B

Referring to FIG. 6, it can be seen that an R_tau2 miniature 180 corresponds to a red miniature that is decimated and spline fitted by a factor of 2. A G_tau2 miniature 182 corresponds to a green miniature that is decimated and spline fitted by a factor of 2. A B_tau2 miniature 184 corresponds to a blue miniature that is decimated and spline fitted by a factor of 2.
FIG. 7 illustrates the color space converter 150 of FIG. 4. The color space converter 150 transforms the R_tau2 miniature 180, the G_tau2 miniature 182 and the B_tau2 miniature 184 output by the first Reed Spline Filter 148 into a different color coordinate system in which one component is the luminance Y data 186 and the other two components are related to the chrominance U and X data 188 as shown in FIG. 4. Thus the color space converter 150 transforms the R_tau2 miniature 180, the G_tau2 miniature 182 and the B_tau2 miniature 184 into a Y_tau2 miniature 190, a U_tau2 miniature 192 and an X_tau2 miniature 194.
Referring to FIG. 8, it can be seen that the second stage 128 of the encoder 102 includes an image classifier 152 that determines the image type by analyzing the Y_tau2 miniature 190, the U_tau2 miniature 192 and the X_tau2 miniature 194. The image classifier 152 uses a fuzzy logic rule base to classify an image into one or more of its known classes. In the preferred embodiment, these classes include gray scale, graphics, text, photographs, high activity and low activity images. The image classifier 152 also decomposes the source image 100 into block units and classifies each block. Since the source image 100 includes a combination of different image types, the image classifier 152 subdivides the source image 100 into distinct regions. The image classifier 152 then outputs the control script 196 that specifies the correct compression methods for each region. The control script 196 specifies which compression methods to apply in the third stage 130, and specifies the channel encoding methods to apply in the fourth stage 132.
As shown in FIG. 4, during the third stage 130, the encoder 102 uses the control script 196 to select the optimal compression methods from its compression toolbox. The encoder 102 separates the Y data 186 from the U and X data 188. Thus, the encoder 102 separates the Y_tau2 miniature 190 from the U_tau2 miniature 192 and the X_tau2 miniature 194, and passes the Y_tau2 miniature 190 to the optimized DCT 136, and passes the U_tau2 miniature 192 and the X_tau2 miniature 194 to a second and third Reed Spline Filter 156.
As illustrated in FIG. 9, the optimized DCT 136 subdivides the Y_tau2 miniature 190 into a set of 8×8 pixel blocks and transforms each 8×8 pixel block into sixty-four DCT coefficients 198. The DCT coefficients include the AC terms 200 and the DC terms 201. The DCT coefficients 198 are analyzed by the optimized DCT 136 to determine optimal quantization step sizes and reconstruction values. The optimized DCT 136 stores the optimal quantization step sizes (uniform or non-uniform) in a quantization table Q 202 and outputs the reconstruction values to the CS data segment 204. The optimized DCT 136 then quantizes the DCT coefficients 198 according to the quantization table Q 202. Once quantized, the optimized DCT 136 outputs the DCT quantized values 206 to the DCT data segment 208.
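A minimal sketch of this stage, assuming the textbook 2-D DCT-II and a uniform quantization table; the patent's adaptive derivation of the table Q is not reproduced here:

```python
import math

def dct_8x8(block):
    # 2-D DCT-II of an 8x8 pixel block, producing 64 coefficients.
    # Coefficient [0][0] is the DC term; the remainder are AC terms.
    N = 8
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = 0.0
            for y in range(N):
                for x in range(N):
                    s += (block[y][x]
                          * math.cos((2 * y + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * x + 1) * v * math.pi / (2 * N)))
            cu = math.sqrt(1.0 / N) if u == 0 else math.sqrt(2.0 / N)
            cv = math.sqrt(1.0 / N) if v == 0 else math.sqrt(2.0 / N)
            out[u][v] = cu * cv * s
    return out

def quantize(coeffs, q_table):
    # Quantize each coefficient by its step size from the table Q.
    return [[round(c / q) for c, q in zip(row, qrow)]
            for row, qrow in zip(coeffs, q_table)]
```

A flat block produces a single DC coefficient and zero AC terms, which is why smooth image regions quantize to very little data.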
In order to preserve the image information lost by the optimized DCT 136, the DCT residual calculator 154 (shown in FIG. 10) computes and compresses the DCT residual. The DCT residual calculator 154 uses a dequantizer 209 to dequantize the DCT quantized values 206 stored in the DCT data segment 208 by multiplying the reconstruction values in the CS data segment 204 with the DCT quantized values 206. The DCT residual calculator 154 then reconstructs the dequantized DCT components with an inverse DCT 210 to generate a reconstructed dY_tau2 miniature 211. The reconstructed dY_tau2 miniature 211 is subtracted from the original Y_tau2 miniature 190 to create an rY_tau2 residual 212.
Referring to FIG. 11, it can be seen that the rY_tau2 residual 212 is further compressed with the AVQ 134. The technique of vector quantization is used to represent a block of information as a single index that requires fewer bits of storage. As explained in more detail below, the AVQ 134 maintains a group of commonly occurring block patterns in a set of codebooks 214 stored in the resource file 160. The index references a particular block pattern within a particular codebook 214. The AVQ 134 compares the input block with the block patterns in the set of codebooks 214. If a block pattern in the set of codebooks 214 matches or closely approximates the input block, the AVQ 134 replaces the input block pattern with the index.
Thus, the AVQ 134 compresses the input block information into a list of indexes. The indexes are decompressed by replacing each index with the block pattern each index references in the set of codebooks 214. The decoder 110, as explained in more detail below, also has a set of the codebooks 214. During the decoding process the decoder 110 uses the list of indexes to reference block patterns stored in a particular codebook 214. The original source cannot be precisely recovered from the compressed representation since the indexed patterns in the codebook will not match the input block exactly. The degree of loss will depend on how well the codebook matches the input block.
As shown in FIG. 11, the AVQ 134 compresses the rY_tau2 residual 212 by subdividing the rY_tau2 residual 212 into 4×4 residual blocks and comparing the residual blocks with codebook patterns as explained above. The AVQ 134 replaces the residual blocks with the codebook indexes that minimize the squared error. The AVQ 134 outputs the list of codebook indexes to the VQ1 data segment 224. Thus, the VQ1 data segment 224 is a list of codebook indexes that identify block patterns in the codebook. As explained in more detail below, the AVQ 134 of the preferred embodiment also generates new codebook patterns that the AVQ 134 outputs to the set of codebooks 214. The added codebook patterns are stored in the VQCB data segment 223.
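The index-for-block substitution can be sketched as follows, with each 4×4 block flattened into a 16-element list. The codebook contents here are placeholders for illustration, not the patent's trained patterns:

```python
def vq_encode(blocks, codebook):
    # Replace each flattened 4x4 block with the index of the codebook
    # pattern having the minimum squared error against it.
    indexes = []
    for block in blocks:
        best_i, best_err = 0, float("inf")
        for i, pattern in enumerate(codebook):
            err = sum((x - p) ** 2 for x, p in zip(block, pattern))
            if err < best_err:
                best_i, best_err = i, err
        indexes.append(best_i)
    return indexes

def vq_decode(indexes, codebook):
    # Reconstruct blocks by looking each index up in the codebook.
    return [list(codebook[i]) for i in indexes]
```

Decoding is therefore just a table lookup, which is why the decoder only needs the index list plus a copy of the codebook.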
FIG. 12 illustrates a block diagram of the second Reed Spline Filter 225 and the third Reed Spline Filter 227. Once the image classifier 152 determines the particular image type, the U_tau2 miniature 192 and the X_tau2 miniature 194 are further decimated and filtered by the second Reed Spline Filter 225. Like the first Reed Spline Filter 148 shown in FIG. 6, the second Reed Spline Filter 225 compresses the U_tau2 miniature 192 and the X_tau2 miniature 194 in a two-step process. First, the U_tau2 miniature 192 and the X_tau2 miniature 194 are vertically and horizontally decimated by a factor of two. The decimated data are then spline fitted to determine optimal reconstruction weights that will minimize the mean square error of the reconstructed decimated miniatures. Once complete, the second Reed Spline Filter 225 outputs the optimal reconstruction values to create a U_tau4 miniature 226 and an X_tau4 miniature 228.
The third Reed Spline Filter 227 decimates the U_tau4 miniature 226 and the X_tau4 miniature 228 vertically and horizontally by a factor of four. The decimated image data are again spline fitted to create a U_tau16 miniature 230 and an X_tau16 miniature 232.
In FIG. 13 the Reed Spline residual calculator 158 preserves the image information lost by the second Reed Spline Filter 225 and the third Reed Spline Filter 227 by computing and compressing the Reed Spline Filter residual. The Reed Spline residual calculator 158 reconstructs the U_tau4 miniature 226 and X_tau4 miniature 228 by interpolating the U_tau16 miniature 230 and the X_tau16 miniature 232. The interpolated U_tau16 miniature 230 is referred to as a dU_tau4 miniature 234. The interpolated X_tau16 miniature 232 is referred to as a dX_tau4 miniature 236. The dU_tau4 miniature 234 and dX_tau4 miniature 236 are subtracted from the actual U_tau4 miniature 226 and X_tau4 miniature 228 to create an rU_tau4 residual 238 and an rX_tau4 residual 240.
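The interpolate-and-subtract residual computation follows the same pattern for each chrominance component. A one-dimensional sketch, with plain linear interpolation standing in for the spline interpolation (the actual Reed Spline reconstruction weights are not reproduced in this excerpt):

```python
def upsample_linear(coarse, factor):
    # Linearly interpolate a 1-D signal up by `factor` -- an
    # illustrative stand-in for the spline interpolation in the text.
    out = []
    for i in range(len(coarse) - 1):
        a, b = coarse[i], coarse[i + 1]
        for k in range(factor):
            out.append(a + (b - a) * k / factor)
    out.append(coarse[-1])
    return out

def residual(actual, reconstructed):
    # Residual = actual data minus its reconstruction; only this
    # difference needs further encoding.
    return [a - r for a, r in zip(actual, reconstructed)]
```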
As illustrated in FIG. 11, the rU_tau4 residual 238 and the rX_tau4 residual 240 are further compressed with the AVQ 134. The AVQ 134 subdivides the rU_tau4 residual 238 and the rX_tau4 residual 240 into 4×4 residual blocks. The residual blocks are compared with blocks in the set of codebooks 214 to find the codebook patterns that minimize the squared error. The AVQ 134 compresses the residual block by assigning an index that identifies the corresponding block pattern in the set of codebooks 214. Once complete, the AVQ 134 outputs the compressed residual as the VQ3 data segment 242 and the VQ4 data segment 244.
The U_tau16 miniature 230 and the X_tau16 miniature 232 are also compressed with the DPCM 140 as shown in FIG. 14. The DPCM 140 outputs the low-detail color components as the URCA data segment 246 and the XRCA data segment 248. The URCA data segment 246 and the XRCA data segment 248 form the low-detail color components that the decoder 110 uses to create the color thumbnail miniature 120 if this is included as a playback option in the compressed data stream 118.
FIG. 15 illustrates the enhancement analyzer 144 of the preferred embodiment. The Y_tau2 miniature 190, the U_tau4 miniature 226, and the X_tau4 miniature 228 are analyzed to determine an enhancement list 250 that specifies the visual priority of every 16×16 image block. The enhancement analyzer 144 determines the visual priority of each 16×16 image block by convolving the Y_tau2 miniature 190, the U_tau4 miniature 226, and the X_tau4 miniature 228 and comparing the result of the convolution to a threshold value E 252. The threshold value E 252 is user defined. The user can set the threshold value E 252 from zero to 200. The threshold value E 252 determines how much enhancement information the encoder 102 adds to the compressed file 104. Thus, setting the threshold value E 252 to zero will suppress any image enhancement information.
If the result of convolving a particular 16×16 high resolution block is greater than the threshold value E 252, the 16×16 high resolution block is prioritized and added to the enhancement list 250. Thus, the enhancement list 250 identifies which 16×16 blocks are coded and prioritizes how the 16×16 coded blocks are listed.
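The thresholding step can be sketched as follows. How the activity value is derived from the convolution is not reproduced in this excerpt, so the function below takes a precomputed activity measurement per 16×16 block; the helper name and its inputs are assumptions:

```python
def build_enhancement_list(block_activity, threshold_e):
    # Keep only the blocks whose measured activity exceeds the
    # user-defined threshold E, ordered highest priority first.
    # E = 0 suppresses all enhancement information, per the text.
    if threshold_e == 0:
        return []
    keep = [(activity, idx) for idx, activity in enumerate(block_activity)
            if activity > threshold_e]
    keep.sort(reverse=True)
    return [idx for activity, idx in keep]
```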
The high resolution residual calculator 162, as shown in FIG. 16, determines the high resolution residual for each 16×16 high resolution block identified in the enhancement list 250. The high resolution residual calculator 162 translates the VQ1 data segment 224 from the AVQ 134 into a reconstructed rY_tau2 residual 212 by mapping the indexes in the VQ1 data segment 224 to the patterns in the codebook. The reconstructed rY_tau2 residual is added to the dY_tau2 miniature 254 (dequantized DCT components). The result is interpolated by a factor of two in the vertical and horizontal dimensions and is subtracted from the original Y_tau2 miniature 190 to form the high resolution residual.
The high resolution residual calculator 162 then extracts high resolution 16×16 blocks from the high resolution residual according to the priorities in the enhancement list 250. As will be explained in more detail below, the high resolution residual calculator 162 outputs the highest priority blocks in the first enhancement layer, the next-highest priority blocks in the second enhancement layer, etc. The high resolution residual blocks are referred to as the xr_Y residual 256.
The xr_Y residual 256 is further compressed with the AVQ 134. The AVQ 134 subdivides the xr_Y residual 256 into 4×4 residual blocks. The residual blocks are compared with blocks in the codebook. If a residual block corresponds to a block pattern in the codebook, the AVQ 134 compresses the 4×4 residual block by assigning an index that identifies the corresponding block pattern in the codebook. Once complete, the AVQ 134 outputs the compressed high resolution residual to the VQ2 data segment 258.
FIG. 17 illustrates a block diagram of the palette selector 164. The palette selector 164 computes a “best-fit” 24-bit color palette 260 for the decoder 110. The palette selector 164 is optional and is user defined. The palette selector 164 computes the color palette 260 from the Y_tau2 miniature 190, the U_tau2 miniature 192 and the X_tau2 miniature 194. The user can select a number of palette entries N 262 to range from 0 to 255 entries. If the user selects zero, no palette is computed. If enabled, the palette selector 164 adds the color palette 260 to the plurality of data segments 166.
The channel encoder 168, as shown in FIG. 18, interleaves and channel encodes the plurality of data segments 166. Based on the user defined playback model 261, the plurality of data segments 166 are interleaved as follows: 1) as a single layer, single pass comprising the entire image, 2) as two layers comprising the thumbnail miniature 120 and the remainder of the image 122 with enhancement information interleaved into each data block (panel) in the second layer, and 3) as multiple layers comprising the thumbnail miniature 120, the standard image 124, the sharp image 105, and additional layers as specified by the user. For each playback model an option exists to interleave the data for panellized or non-panellized display. The user defined playback model 261 is described in more detail below.
After interleaving the plurality of data segments 166, the channel encoder 168 compresses the plurality of data segments 166 in response to the control script 196. In the preferred embodiment, the channel encoder 168 compresses the plurality of data segments 166 with: 1) a Huffman encoding process that uses fixed tables, 2) a Huffman process that uses adaptive tables, 3) a conventional LZ1 coding technique, or 4) a run-length encoding process. The channel encoder 168 chooses the optimal compression method based on the image type identified in the control script 196.
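Of the four channel-coding options, run-length encoding is the simplest to illustrate; a minimal sketch that emits (value, count) pairs:

```python
def run_length_encode(data):
    # Run-length encode a byte sequence as (value, count) pairs,
    # capping each run at 255 so the count fits in one byte.
    out = []
    for b in data:
        if out and out[-1][0] == b and out[-1][1] < 255:
            out[-1] = (b, out[-1][1] + 1)
        else:
            out.append((b, 1))
    return out
```

Runs of identical values, which are common in quantized residual data, collapse to a single pair each.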
The Adaptive Vector Quantizer
The preferred embodiment of the AVQ 134 is illustrated in FIG. 19. More specifically, the AVQ 134 optimizes the vector quantization techniques described above. The AVQ 134 subdivides the image data into a set of 4×4 pixel blocks 216. The 4×4 pixel blocks 216 include sixteen (16) elements X_1, X_2, X_3, . . . , X_16 218, that start at the upper left-hand corner and move left to right on every row to the bottom right-hand corner.
The codebook 214 of the present invention comprises M predetermined sixteen-element vectors, P_1, P_2, P_3, . . . , P_M 220, that correspond to common patterns found in the population of images. The indexes I_1, I_2, I_3, . . . , I_M 222 refer respectively to the patterns P_1, P_2, P_3, . . . , P_M 220.

The AVQ 134 selects the codebook pattern P_k that minimizes the mean squared error

E_k = (1/16) Σ_{i=1 to 16} (X_i − P_ik)²

where X_i is the ith element of the input vector X, and P_ik is the ith element of the VQ pattern P_k.
The comparison equation finds the best match by selecting the minimum error term that results from comparing the input block with the codebook patterns. In other words, the AVQ 134 calculates the mean squared error term associated with each pattern in the codebook 214 in order to determine which pattern in the codebook 214 has the minimum squared error (also referred to as the minimum error). The error term is the mean square error produced by subtracting each pattern element P_ik from the corresponding input block element X_i, squaring the result, summing over the sixteen elements, and dividing by sixteen (16).
The process of searching for a matching pattern in the codebook 214 is time-consuming. The AVQ 134 of the preferred embodiment accelerates the pattern matching process with a variety of techniques.
First, in order to find the optimal codebook pattern, the AVQ 134 compares each input block term X_i to the corresponding term in the first codebook pattern P_1 and calculates the total squared error for the first codebook pattern. This value is stored as the initial minimum error. For each of the other patterns P_j = P_2, P_3, . . . , P_M, the AVQ 134 subtracts the X_1 and P_1j terms and squares the result. The AVQ 134 compares the resulting squared error to the minimum error. If the squared error value is less than the minimum error, the AVQ 134 continues with the next input term X_2 and computes the squared error associated with X_2 and P_2j. The AVQ 134 adds the result to the squared error of the first term. The AVQ 134 then compares the accumulated squared error for X_1 and X_2 to the minimum error. If the accumulated squared error is less than the minimum error, the squared error calculation continues until the AVQ 134 has evaluated all 16 terms.
If at any time in the comparison, the accumulated squared error for the new pattern is greater than the minimum squared error, the current pattern is immediately rejected and the AVQ 134 discontinues calculating the squared error for the remaining input block terms for that pattern. If the total squared error for the new pattern is less than the minimum error, the AVQ 134 replaces the minimum error with the squared error from the new pattern before making the comparisons for the remaining patterns.
Also, if the accumulated squared error for a particular codebook pattern is less than a predetermined threshold, the codebook pattern is immediately accepted and the AVQ 134 quits testing other codebook patterns. Furthermore, the codebook patterns in the present invention are ordered according to the frequency of matches. Thus, the AVQ 134 begins by comparing the input block with patterns in the codebook 214 that are most likely to match. Still further, the codebook patterns are grouped by the sum of their squared amplitudes. Thus, the AVQ 134 selects a group of similar codebook patterns by summing the squared amplitude of an input block in order to determine which group of codebook patterns to search.
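The early rejection and early acceptance rules above can be combined in one search routine. This is a sketch of the logic as described, not the patent's implementation; the frequency ordering and amplitude grouping of the codebook are assumed to have been applied to the `codebook` list already:

```python
def best_pattern(block, codebook, accept_threshold=0.0):
    # Find the codebook pattern of minimum squared error. A pattern is
    # rejected as soon as its accumulated squared error exceeds the
    # current minimum (early rejection), and accepted immediately if
    # its total error falls below `accept_threshold` (early acceptance).
    best_i = 0
    best_err = sum((x - p) ** 2 for x, p in zip(block, codebook[0]))
    for i in range(1, len(codebook)):
        acc = 0.0
        for x, p in zip(block, codebook[i]):
            acc += (x - p) ** 2
            if acc > best_err:          # early rejection
                break
        else:                           # all 16 terms evaluated
            best_i, best_err = i, acc
            if acc <= accept_threshold:  # early acceptance
                return best_i
    return best_i
```

The early-rejection test means most candidate patterns are discarded after only a few of the sixteen term comparisons.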
In addition to improving the time it takes for the AVQ 134 to find an optimal codebook pattern, the AVQ 134 includes a set of codebooks 214 that are adapted to the input blocks (i.e., codebooks 214 that are optimized for input blocks that contain DCT residual values, high resolution residual values, etc.). Finally, the AVQ 134 of the preferred embodiment adapts a codebook 214 to the source image 100 by devising a set of new patterns to add to a codebook 214.
Therefore, the AVQ 134 of the preferred embodiment has three modes of operation: 1) the AVQ 134 uses a specified codebook 214, 2) the AVQ 134 selects the best-fit codebook 214, or 3) the AVQ 134 uses a combination of existing codebooks 214 and new patterns that the AVQ 134 creates. If the AVQ 134 creates new patterns, the AVQ 134 stores the new patterns in the VQCB data segment 223.
 The Compressed File Format
FIGS. 20a and 20b illustrate the segmented architecture of the data stream 118 that results from transmitting the compressed file 104. The segmented architecture of the compressed file 104 in the preferred embodiment allows layering of the compressed image data. Referring to FIG. 2, the layering of the compressed file 104 allows the decoder 110 to display the thumbnail miniature 120, the splash image 122 and the standard image 124 before the entire compressed file 104 is transferred. As the decoder 110 receives each successive layer of components, the decoder 110 adds additional detail to the displayed image.
In addition to layering the compressed data, the segmented architecture allows the decoder 110 of the preferred embodiment: 1) to move from one segment to the next in the stream without fully decoding segments of data, 2) to skip parts of the data stream 118 that contain data that is unnecessary for a given rendition of the image, 3) to ignore parts of the data stream 118 that are in an unknown format, 4) to process the data in an order that is configurable on the fly if the entire data stream 118 is stored locally, and 5) to store different layers of the compressed file 104 separately from one another.
 As shown in FIG. 20a, the byte arrangement of the data stream 118 and the compressed file 104 includes a header segment 400 and a normal segment 402. The header segment 400 contains header information, and the normal segment 402 contains data. The header segment 400 is the first segment in the compressed file 104 and is the first segment transmitted with the data stream 118. In the preferred embodiment, the header segment 400 is eight bytes long.
 As shown in FIG. 20b, the byte arrangement of the header segment 400 includes a byte 0 406 and a byte 1 408 of the header segment 400. Byte 0 406 and byte 1 408 of the header segment 400 identify the data stream 118. Byte 1 408 also indicates if the data stream 118 contains image data (indicated by a “G”) or if it contains resource data (indicated by a “C”). Resource data includes color lookup tables, font information, and vector quantization tables.
Byte 2 410, byte 3 412, byte 4 414, byte 5 416, byte 6 418 and byte 7 420 of the header segment 400 specify which encoder 102 created the data stream 118. As new encoding methods are added to the encoder 102, new versions of the encoder 102 will be sold and distributed to decode the data encoded by the new methods. Thus, to remain compatible with prior encoders 102, the decoder 110 needs to identify which encoder 102 generated the compressed data. In the preferred embodiment, byte 7 420 identifies the encoder 102 and byte 2 410, byte 3 412, byte 4 414, byte 5 416, and byte 6 418 are reserved for future enhancements to the encoder 102.
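The eight-byte header layout described above can be parsed with a short routine. This is a minimal sketch rather than the patent's implementation; the exact value of byte 0 is not specified in the text, so it is carried through unchecked, and the function name is illustrative.

```python
def parse_header(data: bytes) -> dict:
    """Parse the 8-byte header segment described in the text.

    Byte layout (byte 0's exact value is not given in the text, so it
    is passed through unchecked):
      bytes 0-1: stream identifier; byte 1 is 'G' (image data)
                 or 'C' (resource data)
      bytes 2-6: reserved for future encoder enhancements
      byte 7:    identifies the encoder that created the stream
    """
    if len(data) < 8:
        raise ValueError("the header segment is eight bytes long")
    kind = chr(data[1])
    if kind not in ("G", "C"):
        raise ValueError("byte 1 must be 'G' (image) or 'C' (resource)")
    return {
        "identifier": data[0:2],
        "is_image": kind == "G",
        "reserved": data[2:7],
        "encoder_version": data[7],
    }
```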
FIG. 21 illustrates the normal segment 402 as a sequence of bytes that are logically separated into two sections: an identifier section 422 and a data section 424. The identifier section 422 precedes the data section 424. The identifier section 422 specifies the size of the normal segment 402 and identifies a segment type. The data section 424 contains information about the source image 100.
The identifier section 422 is a sequence of one, two, or three bytes that identifies the length of the normal segment 402 and the segment type. The segment type is an integer number that specifies the method of data encoding. The compressed file 104 supports 256 possible segment types. The data in the normal segment 402 is formatted according to the segment type. In the preferred embodiment, the normal segments 402 are optimally formatted for the color palette, the Huffman bitstreams, the Huffman tables, the image panels, the codebook information, the vector dequantization tables, etc.
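The identifier/data layout lends itself to a segment walker that moves from segment to segment without decoding payloads, one of the properties listed earlier. Because the exact one-to-three byte identifier encoding is not specified in the text, the sketch below assumes a hypothetical layout: one type byte followed by a one-byte length for short segments, or a two-byte length when the high bit of the first length byte is set.

```python
def walk_segments(stream: bytes, offset: int = 0):
    """Iterate over normal segments without decoding their payloads.

    Assumed (hypothetical) identifier encoding, since the text does
    not give one: a type byte (one of 256 segment types), then either
    a one-byte length (< 128) or, when the high bit of the first
    length byte is set, a 15-bit big-endian length.
    """
    segments = []
    while offset < len(stream):
        seg_type = stream[offset]
        first = stream[offset + 1]
        if first < 0x80:                     # short form: 1-byte length
            length, header_len = first, 2
        else:                                # long form: 15-bit length
            length = ((first & 0x7F) << 8) | stream[offset + 2]
            header_len = 3
        data_start = offset + header_len
        segments.append((seg_type, stream[data_start:data_start + length]))
        offset = data_start + length         # skip to the next segment
    return segments
```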
For example, the file format of the preferred embodiment allows the use of different Huffman bitstreams such as an 8-bit Huffman stream, a 10-bit Huffman stream, and a DCT Huffman stream. The encoder 102 uses each Huffman bitstream to optimize the compressed file 104 in response to different image types. The identifier section 422 identifies which Huffman encoder was used and the normal segment 402 contains the compressed data.
FIGS. 22a, 22b, 22c, and 22d illustrate the layering and interleaving of the plurality of data segments 166 in the compressed file 104 of the preferred embodiment. The plurality of data segments 166 in the compressed file 104 are interleaved based on the user-defined playback model 261 as follows: 1) as a single-pass, non-panellized image (FIG. 22a), 2) as a single-pass, panellized image (FIG. 22b), 3) as two layers comprising the thumbnail miniature 120 and the sharp image 125 (FIG. 22c), and 4) as multiple layers comprising the thumbnail miniature 120, the standard image 124, and the sharp image 125 (FIG. 22d).
Block diagram 426 in FIG. 22a shows the compressed file format for the single-pass, non-panellized image. The compressed file 104 begins with the header, the optional color palette, and the resource data such as the tables and Huffman encoding information. The plurality of data segments 166 are not interleaved or layered. Thus, the decoder 110 must receive the entire compressed file 104 before any part of the source image 100 can be displayed.
Block diagram 428 in FIG. 22b shows the compressed file 104 for the single-pass, panellized image. The plurality of data segments 166 are interleaved panel by panel, so that all of the segments for each panel are contiguously transmitted. The decoder 110 can expand and display a panel at a time until the entire compressed file 104 is expanded.
Block diagram 430 in FIG. 22c shows the compressed file format of the thumbnail miniature 120, the splash image 122, and the final or sharp image 125. The plurality of data segments 166 are interleaved panel by panel: the resolution components for the thumbnail miniature 120 and the splash image 122 exist in the first layer, while the panels for the final image exist in the second layer. The first layer includes the selected portions of the plurality of data segments 166 that are needed to decode the panels of the thumbnail miniature 120 and the splash image 122. Thus, the compressed file 104 stores only the low detail color components (the URCA data segment 246 and the XRCA data segment 248), the DC terms 201, and as many as the first five AC terms 200 in the first layer. The number of AC terms 200 depends on the user-selected quality of the thumbnail miniature 120.
The plurality of data segments 166 in the first layer are also interleaved panel by panel to allow the thumbnail miniature 120 and splash image 122 to be decoded a panel at a time. The second layer contains the remaining plurality of data segments 166 needed to expand the compressed file 104 into the final image. The plurality of data segments 166 in the second layer are also interleaved panel by panel.
Block 432 in FIG. 22d shows the compressed file format of the thumbnail miniature 120, the splash image 122, the layered standard image 124, and the sharp image 125. The thumbnail miniature 120 and splash image 122 are arranged in the first layer as described above. The remaining data segments 166 are layered at different quality levels. The multi-layering is accomplished by layering and interleaving the panel information associated with the VQ2 data segment 258 (the high resolution residual). The multiple layers allow the display of all the panels at a particular level of detail before decoding the panels in the next layer.
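The two-layer, panel-interleaved ordering of FIG. 22c can be sketched as a transmission order of segment labels. The labels below are illustrative names for the segments named in the text, not the patent's binary format.

```python
def layered_order(num_panels: int) -> list:
    """Sketch of the two-layer, panel-interleaved ordering of FIG. 22c.

    Layer 1 interleaves, panel by panel, the segments needed for the
    thumbnail and splash images (the URCA and XRCA color components,
    the DC terms, and the first AC terms); layer 2 interleaves the
    remaining segments for the final image, also panel by panel.
    """
    order = ["header", "color_palette", "resources"]
    for p in range(num_panels):              # layer 1: thumbnail/splash
        order += [f"panel{p}/URCA", f"panel{p}/XRCA",
                  f"panel{p}/DC", f"panel{p}/AC_first"]
    for p in range(num_panels):              # layer 2: final image
        order += [f"panel{p}/AC_rest", f"panel{p}/VQ_residuals"]
    return order
```

Every layer-1 segment precedes every layer-2 segment, so the decoder can display the thumbnail and splash images before the final-image data arrives.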
 The Decoder
FIG. 23 illustrates the decoder 110 of the present invention. The decoder 110 takes as input the compressed data stream 118 and expands or decodes it into an image for viewing on the display 112. As explained above, the compressed file 104 and the transmitted data stream 118 include image components that are layered with a plurality of panels 433. The decoder 110 expands the plurality of panels 433 one at a time.
As illustrated in FIG. 24, the decoder 110 expands the compressed file 104 in four steps. In a first step 434, the decoder 110 expands the first layer of image data in the compressed file 104 or the data stream 118 into a Ym miniature 436, a Um miniature 438, and an Xm miniature 440. In a second step 442, the decoder 110 uses the Ym miniature 436, the Um miniature 438, and the Xm miniature 440 to generate the thumbnail miniature 120 and the splash image 122. In a third step 444, the decoder 110 receives a second layer of image data and generates the higher detail panels 445 needed to expand the thumbnail miniature 120 into a standard image 124. In a fourth step 446, the decoder 110 receives a third layer of image data and generates higher detail panels that enhance the detail of the standard image in order to create an enhanced image 105 that corresponds to the source image 100.
FIG. 25 illustrates the elements of the first step 434 in which the decoder 110 expands the AC terms 200, the DC terms 201, the URCA data segment 246, and the XRCA data segment 248 into the Ym miniature 436, the Um miniature 438, and the Xm miniature 440. The first step 434 includes an inverse Huffman encoder 458, an inverse DPCM 476, a dequantizer 450, a combiner 452, an inverse DCT 476, a demultiplexer 454, and an adder 456.
The decoder 110 then separates the DC terms 201 and the AC terms 200 from the URCA data segment 246 and the XRCA data segment 248. The inverse Huffman encoder 458 decompresses the first layer of the data stream 118, which includes the AC terms 200, the URCA data segment 246, and the XRCA data segment 248. The inverse DPCM 476 further expands the DC terms 201 to output DC terms 201′. The dequantizer 450 further expands the AC terms 200 into output AC terms 200′ by multiplying them with the quantization factors 478 in the quantization table Q 202, and outputs 8×8 DCT coefficient blocks 482. The quantization table Q 202 is stored in the CS data segment 204 (not shown).
The combiner 452 combines the output DC terms 201′ with the 8×8 DCT coefficient blocks 482. The decoder 110 sets the inverse DCT factor 480, and the inverse DCT 476 outputs the DCT coefficient blocks 482 that correspond to the Ym miniature 436, which is 1/256th the size of the original image.
The demultiplexer 454 separates the inverse Huffman encoded URCA data segment 246 from the XRCA data segment 248. The inverse DPCM 476 then expands the URCA data segment 246 and the XRCA data segment 248 to generate the blocks that correspond to the Um miniature 438 and the Xm miniature 440.
The adder 456 translates the blocks corresponding to the Um miniature 438 and the Xm miniature 440 into blocks that correspond to an Xm miniature 460.
FIG. 26 illustrates the second step 442 in which the decoder 110 expands the Ym miniature 436, the Um miniature 438, and the Xm miniature 460. The decoder 110 further includes the interpolator 462, which operates on the Ym miniature 436, the Um miniature 438, and the Xm miniature 460. The interpolator 462 is controlled by a Ym interpolation factor 484, a Um interpolation factor 486, and an Xm interpolation factor 496. A scaler 466 is controlled by a Ym scale factor 490, a Um scale factor 492, and an Xm scale factor 494. The decoder 110 further includes the replicator 464 and the inverse color converter 468. The interpolator 462 uses a linear interpolation process to enlarge the Ym miniature 436, the Um miniature 438, and the Xm miniature 460 by a factor of one, two, or four in both the horizontal and vertical directions.
The Ym interpolation factor 484, the Um interpolation factor 486, and the Xm interpolation factor 488 control the amount of interpolation. The size of the source image 100 in the compressed file 104 is fixed; thus, the decoder 110 may need to enlarge or reduce the expanded image before display. The decoder 110 sets the Ym interpolation factor 484 to a power of 2 (i.e., 1, 2, 4, etc.) in order to optimize the decoding process. However, in order to display an expanded image at the proper size, the scaler 466 scales the interpolated image to accommodate different display formats.
The interpolator 462 also expands the Um miniature 438 and the Xm miniature 440. Like the Ym interpolation factor 484, the decoder 110 sets the Um interpolation factor 486 and the Xm interpolation factor 496 to a power of two. The decoder 110 sets the Um interpolation factor 486 and the Xm interpolation factor 496 so that the Um miniature 438 and the Xm miniature 460 approximate the size of the interpolated and scaled Ym miniature 436.
After interpolation, the scaler 466 enlarges or reduces the interpolated Ym miniature based on the Ym scale factor 490. In the preferred embodiment, the decoder 110 sets the Ym interpolation factor 484 so that the interpolated Ym miniature 436 is nearly twice the size of the thumbnail miniature 120. The decoder 110 then sets the Ym scale factor 490 to reduce the interpolated Ym miniature 436 to the display size of the thumbnail miniature 120. The scaler 466 scales the Um miniature 438 and the Xm miniature 460 with the Um scale factor 492 and the Xm scale factor 494. The decoder 110 sets the Xm scale factor 494 and the Um scale factor 492 as necessary to scale the image to the display size.
The inverse color converter 468 transforms the interpolated and scaled miniatures into a red, green, and blue pixel array or a palettized image as required by the display 112. When converting to a palettized image, the inverse color converter 468 also dithers the converted image. The decoder 110 displays the interpolated, scaled, and color converted miniatures as the thumbnail miniature 120.
In order to create the splash image 122, the decoder 110 expands the interpolated Ym miniature 436, the interpolated Um miniature 438 and the interpolated Xm miniature 440 with a second interpolation process that uses a Ym splash interpolation factor 498, a Um splash interpolation factor 500, and an Xm splash interpolation factor 502. Like the thumbnail miniature 120, the decoder 110 also sets the splash interpolation factors to a power of two.
The interpolated data are then expanded with the replicator 464. The replicator 464 enlarges the interpolated data by a factor of one or two by replicating the pixel information. The replicator 464 enlarges the interpolated data based on a Ym replication factor 504, a Um replication factor 506, and an Xm replication factor 508. The decoder 110 sets the Ym replication factor 504, the Um replication factor 506, and the Xm replication factor 508 so that the replicated image is one-fourth of the display size.
The inverse color converter 468 transforms the replicated image data into red, green and blue image data. The replicator 464 then again replicates the red, green, and blue image data to match the display size. The decoder 110 displays the resulting splash image 122 on the display 112.
FIG. 27 illustrates the third step 444 in which the decoder 110 generates the higher detail panels to expand the thumbnail miniature 120 into a standard image 124. FIG. 27 also illustrates the fourth step 446 in which the decoder 110 generates higher detail panels to enhance the detail of the standard image in order to create an enhanced image 105 that corresponds to the source image 100.
The decoding of the standard image 124 and the enhanced image 105 requires the inverse Huffman encoder 458, the combiner 452, the dequantizer 450, the inverse DCT 476, a pattern matcher 524, the adder 456, the interpolator 462, and an edge overlay builder 516. The decoder 110 adds additional detail to the displayed image as the decoder 110 receives new layers of compressed data. The additional layers include new panels of the DCT data segment 208 (containing the remaining AC terms 200′), the VQ1 data segment 224, the VQ2 data segment 258, the enhancement location data segment 510, the VQ3 data segment 242, and the VQ4 data segment 244.
The decoder 110 builds upon the Ym miniature 436, the Um miniature 438 and the Xm miniature 440 calculated for the thumbnail miniature 120 by expanding the next layer of image detail. The next layer contains a portion of the DCT data segment 208, the VQ1 data segment 224, the VQ2 data segment 258, the enhancement location data segment 510, the VQ3 data segment 242, and the VQ4 data segment 244 that correspond to the standard image.
The inverse Huffman encoder 458 decompresses the DCT data segment 208 and the VQ1 data segment 224 (the DCT residual). The combiner 452 combines the DCT information from the inverse Huffman encoder 458 with the AC terms 200 and the DC terms 201. The dequantizer 450 reverses the quantization process by multiplying the DCT quantized values 206 with the quantization factors 478. The dequantizer 450 obtains the correct quantization factors 478 from the quantization table Q 202. The dequantizer 450 outputs 8×8 DCT coefficient blocks 482 to the inverse DCT 476. The inverse DCT 476, in turn, outputs the 8×8 DCT coefficient blocks 482 that correspond to a Y image 509 that is one-quarter the size of the original image.
The pattern matcher 524 reconstructs the DCT residual blocks 512 by using each transmitted index to retrieve the matching pattern block in the codebook 214. The adder 456 adds the DCT residual blocks 512 to the DCT coefficient blocks 482 on a pixel-by-pixel basis. The interpolator 462 interpolates the output of the adder 456 by a factor of four to create a full size Y image 520. The interpolator 462 performs bilinear interpolation to enlarge the Y image 520 horizontally and vertically.
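The decode-side use of the codebook 214 can be sketched as follows. The function, its argument names, and the raster-order block layout are illustrative assumptions, not the patent's implementation.

```python
def apply_vq_residuals(base, indexes, codebook, block_size=8):
    """Sketch of decode-side pattern matching: each transmitted index
    selects a residual pattern block from the codebook, which is added
    pixel by pixel to the corresponding block of the base image.
    Blocks are assumed to be listed in raster order.
    """
    width = len(base[0])
    blocks_per_row = width // block_size
    out = [row[:] for row in base]           # copy the base image
    for n, idx in enumerate(indexes):
        by = (n // blocks_per_row) * block_size
        bx = (n % blocks_per_row) * block_size
        pattern = codebook[idx]
        for dy in range(block_size):
            for dx in range(block_size):
                out[by + dy][bx + dx] += pattern[dy][dx]
    return out
```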
The inverse Huffman encoder 458 decompresses the VQ2 data segment 258 (the high resolution residual) and the enhancement location data segment 510. The pattern matcher 524 uses the codebook indexes to retrieve the matching pattern blocks stored in the codebook 214 to expand the VQ2 data segment 258 into 16×16 high resolution residual blocks 514. An enhancement overlay builder 516 inserts the 16×16 high resolution residual blocks 514 into a Y image overlay 518 at the locations specified by the enhancement location data segment 510. The Y image overlay 518 is the size of the original image. The adder 456 adds the Y image overlay 518 to the full sized Y image 520.
To calculate the full sized U image 522, the inverse Huffman encoder 458 expands the VQ3 data segment 242. The pattern matcher 524 uses the codebook indexes to retrieve the matching pattern blocks stored in the codebook 214 to expand the VQ3 data segment 242 into 4×4 rU_tau4 residual blocks 526. The interpolator 462 interpolates the Um miniature 438 by a factor of four and the adder 456 adds the 4×4 rU_tau4 residual blocks 526 to the interpolated Um miniature 438 in order to create a Um+r miniature 528. The interpolator 462 interpolates the Um+r miniature 528 by a factor of four to create the full sized U image 522.
To calculate the full sized X image 530, the inverse Huffman encoder 458 expands the VQ4 data segment 244. The pattern matcher 524 uses the codebook indexes to retrieve the matching pattern blocks stored in the codebook 214 to expand the VQ4 data segment 244 into 4×4 rX_tau4 residual blocks. The decoder 110 then translates the 4×4 rX_tau4 residual blocks 532 into 4×4 rV_tau4 residual blocks 534. The interpolator 462 interpolates the Xm miniature 460 by a factor of four, and the adder 456 adds the 4×4 rV_tau4 residual blocks 534 to the interpolated Xm miniature 460 in order to create an Xm+r miniature 536. The interpolator 462 interpolates the Xm+r miniature 536 by a factor of four to create the full sized X image 530.
The decoder stores the full sized Y image 520, the full sized U image 522, and the full sized X image 530 in local memory. The inverse color converter 468 then converts the full sized Y image 520, the full sized U image 522, and the full sized X image 530 into a full sized red, green, and blue image. The panel is then added to the displayed image. This process is completed for each panel until the entire source image 100 is expanded.
In the fourth step 446, the decoder 110 receives the third image layer and builds upon the full sized Y image 520, the full sized U image 522, and the full sized X image 530 stored in local memory to generate the enhanced image 105. The third image data layer contains the remaining portion of the DCT data segment 208, the VQ1 data segment 224, the VQ2 data segment 258, the enhancement location data segment 510, the VQ3 data segment 242, and the VQ4 data segment 244 that correspond to the enhanced image 105.
The decoder 110 repeats the process illustrated in FIG. 27 to generate a new full sized Y image 520, a new full sized U image 522, and a new full sized X image 530. The new full sized Y image 520 is added to the full sized Y image generated in the third step 444. The new full sized U image 522 is added to the full sized U image 522 generated in the third step 444. The new full sized X image 530 is added to the full sized X image generated in the third step 444.
The inverse color converter 468 converts the full sized Y image 520, the full sized U image 522, and the full sized X image 530 into a full sized red, green, and blue image. The panel is then added to the displayed image. This process is completed for each panel until the entire enhanced image 105 is expanded.
The inverse DCT 476 of the preferred embodiment is a mathematical transformation for mapping data in the time (or spatial) domain to the frequency domain, based on the “cosine” kernel. The two dimensional version operates on a block of 8×8 elements.
Referring to FIG. 9, the compressed DCT coefficients 198 are stored as DC terms 201 and AC terms 200. In the preferred embodiment, the inverse DCT 476 as shown in FIGS. 25 and 27 combines the process of transformation and decimation in the frequency and spatial domains (frequency and then spatial) into a single operation in the frequency domain. The inverse DCT 476 of the present invention provides at least a factor of 2 in implementation efficiency and is utilized by the decoder 110 to expand the thumbnail miniature 120 and splash image 122.
The inverse DCT 476 receives a sequence of DC terms 201 and AC terms 200, which are frequency coefficients. The high frequency terms are arbitrarily discarded at a predefined frequency to prevent aliasing. The discarding of the high frequency terms is equivalent to a low pass filter which passes everything below a predefined frequency while attenuating all the high frequencies to zero.
 The equation for an inverse DCT is:
$f_{y,x} := \frac{1}{4}\sum_{u}\sum_{v} C_{u}\cdot C_{v}\cdot F_{v,u}\cdot\cos\left(\frac{(2\cdot x+1)\cdot u\cdot\pi}{16}\right)\cdot\cos\left(\frac{(2\cdot y+1)\cdot v\cdot\pi}{16}\right)$  where

u := 0 . . . 7  v := 0 . . . 7

x := 0 . . . 7  y := 0 . . . 7
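The equation above can be evaluated directly. The sketch below is a naive reference implementation for illustration, not the optimized combined transform the patent describes; it assumes the usual DCT normalization C(0) = 1/√2 and C(k) = 1 otherwise.

```python
import math

def inverse_dct_8x8(F):
    """Direct evaluation of the 8x8 inverse DCT equation above.

    F[v][u] holds the frequency coefficients (the DC term at F[0][0]);
    C(0) = 1/sqrt(2) and C(k) = 1 otherwise.
    """
    C = lambda k: 1.0 / math.sqrt(2.0) if k == 0 else 1.0
    f = [[0.0] * 8 for _ in range(8)]
    for y in range(8):
        for x in range(8):
            s = 0.0
            for v in range(8):
                for u in range(8):
                    s += (C(u) * C(v) * F[v][u]
                          * math.cos((2 * x + 1) * u * math.pi / 16)
                          * math.cos((2 * y + 1) * v * math.pi / 16))
            f[y][x] = s / 4.0
    return f
```

A block containing only a DC term expands to a constant block, which is why the DC terms alone suffice for a very low resolution rendition.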

The inverse DCT 476 generates an 8×8 output matrix that is decimated to a 4×4 matrix or to a 2×2 matrix. The inverse DCT 476 then decimates the output matrix by subsampling with a filter. After subsampling, an averaging filter smooths the output. Smoothing is accomplished by using a running average of the adjacent elements to form the output.
For example, for a 4×4 output matrix, the 8×8 matrix from the inverse DCT 476 is subdivided into sixteen 2×2 regions, and the adjacent elements within each 2×2 region are averaged to form the output. Thus the sixteen regions form a 4×4 matrix output.
For a 2×2 output matrix, the 8×8 matrix from the inverse DCT 476 is subdivided into four 4×4 regions. The adjacent elements within each 4×4 matrix region are averaged to form the output. Thus, the four regions form a 2×2 matrix output.
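The region averaging described in the two paragraphs above can be sketched directly; the function name and interface are illustrative.

```python
def decimate_by_averaging(block, out_size):
    """Average non-overlapping regions of an 8x8 block down to an
    out_size x out_size matrix (out_size of 4 averages 2x2 regions;
    out_size of 2 averages 4x4 regions), as described above."""
    step = 8 // out_size
    return [[sum(block[y * step + dy][x * step + dx]
                 for dy in range(step) for dx in range(step))
             / (step * step)
             for x in range(out_size)]
            for y in range(out_size)]
```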
In addition, since most of the AC coefficients are zero, the inverse DCT 476 is simplified by combining the inverse DCT equations with the averaging and the decimation equations. Thus, the creation of a 2×2 output matrix, where a given X is an 8×8 input matrix that consists of DC terms 201 and AC terms 200, is stated formally as:
$X:=\left[\begin{array}{cccccccc}{X}_{0,0}& {X}_{0,1}& 0& 0& 0& 0& 0& 0\\ {X}_{1,0}& {X}_{1,1}& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0\end{array}\right]$  All elements with i or j greater than 1 are set to zero. The setting of the high frequency index to zero is equivalent to filtering out the high frequency coefficients from the signal.
Assigning Y as the 2×2 output matrix, the decimated output is thus equal to:
Y_{0,0} := X_{0,0} + k_1·X_{0,1} + k_1·X_{1,0} + k_2·X_{1,1}

Y_{0,1} := X_{0,0} − k_1·X_{0,1} + k_1·X_{1,0} − k_2·X_{1,1}

Y_{1,0} := X_{0,0} + k_1·X_{0,1} − k_1·X_{1,0} − k_2·X_{1,1}

Y_{1,1} := X_{0,0} − k_1·X_{0,1} − k_1·X_{1,0} + k_2·X_{1,1}

where k_2 := (k_1)^2.
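The surviving text does not preserve the definition of k_1. Assuming the standard basis constants c(k) = cos(k·π/16), averaging the full inverse DCT over its four 4×4 quadrants yields k_1 = (c(1)+c(3)+c(5)+c(7))/(2·√2), up to an overall factor of 1/8 that can be absorbed elsewhere in the decoder. The sketch below uses that derived value and should be read as a reconstruction, not the patent's stated constant.

```python
import math

def decimated_2x2(X00, X01, X10, X11):
    """Closed-form 2x2 decimated inverse DCT from the equations above.

    k1 is reconstructed (the text omits it): assuming
    c(k) = cos(k*pi/16), averaging the full IDCT over 4x4 quadrants
    gives k1 = (c(1)+c(3)+c(5)+c(7)) / (2*sqrt(2)), with an overall
    1/8 scale factor left to be absorbed elsewhere.
    """
    c = lambda k: math.cos(k * math.pi / 16)
    k1 = (c(1) + c(3) + c(5) + c(7)) / (2 * math.sqrt(2))
    k2 = k1 * k1
    return [
        [X00 + k1 * X01 + k1 * X10 + k2 * X11,
         X00 - k1 * X01 + k1 * X10 - k2 * X11],
        [X00 + k1 * X01 - k1 * X10 - k2 * X11,
         X00 - k1 * X01 - k1 * X10 + k2 * X11],
    ]
```

Averaging the brute-force 8×8 inverse DCT over its 4×4 quadrants and multiplying by 8 reproduces these values exactly, which is how the constant was derived.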
The creation of a 4×4 output matrix, where a given X is an 8×8 input matrix that consists of DC terms 201 and AC terms 200, is stated formally as:

$X:=\left[\begin{array}{cccccccc}{X}_{0,0}& {X}_{0,1}& {X}_{0,2}& {X}_{0,3}& 0& 0& 0& 0\\ {X}_{1,0}& {X}_{1,1}& {X}_{1,2}& {X}_{1,3}& 0& 0& 0& 0\\ {X}_{2,0}& {X}_{2,1}& {X}_{2,2}& {X}_{2,3}& 0& 0& 0& 0\\ {X}_{3,0}& {X}_{3,1}& {X}_{3,2}& {X}_{3,3}& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0\end{array}\right]$

All elements with i or j greater than 3 are set to zero.

It is possible to implement the calculations as in the 2×2 case, where the two dimensional equation is evaluated directly; however, performing the one dimensional approach twice reduces complexity and decreases the calculation time. In the preferred embodiment, the inverse DCT 476 computes a one dimensional row inverse DCT, and then a one dimensional column inverse DCT.
The equation for the one dimensional case is as follows (1dout_x are the elements of the one dimensional output):

1dout_0 := in_0 + k_1·in_1 + k_2·in_2 + k_3·in_3

1dout_1 := in_0 + k_4·in_1 − k_2·in_2 − k_5·in_3

1dout_2 := in_0 − k_4·in_1 − k_2·in_2 + k_5·in_3

1dout_3 := in_0 − k_1·in_1 + k_2·in_2 − k_3·in_3

where

k_1 := (c(1)+c(3))/√2  k_2 := (c(2)+c(6))/√2  k_3 := (c(3)−c(7))/√2

k_4 := (c(5)+c(7))/√2  k_5 := (c(1)+c(5))/√2

and c(k) := cos(k·π/16).
The scaler 466 of the preferred embodiment is also shown in FIG. 27. More specifically, the scaler 466 utilizes a generalized routine that scales the image up or down while reducing aliasing and reconstruction noise. Scaling can be described as a combination of decimation and interpolation. The decimation step consists of downsampling with an anti-aliasing filter; the interpolation step consists of pixel filling with a reconstruction filter for any scale factor that can be represented by a rational number P/Q, where P and Q are integers associated with the interpolation and decimation ratios.
The scaler 466 decimates the input data by dividing the source image into the desired number of output pixels and then radiometrically weighting the input data to form the necessary output. FIG. 28 illustrates the scaler 466 with an input to output ratio of five-to-three in the one dimensional case. Input pixel P_1 538, pixel P_2 540, pixel P_3 542, pixel P_4 544, and pixel P_5 546 contain different data values. The output pixel X_1 548, pixel X_2 550, and pixel X_3 552 are computed as follows:
X_1 = P_1 + (0.67)·P_2

X_2 = (0.33)·P_2 + P_3 + (0.33)·P_4

X_3 = (0.67)·P_4 + P_5
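The weighting above follows from letting each output pixel integrate the span of input pixels it covers. A sketch using exact rational arithmetic is below; the function name and interface are assumptions, but for five inputs and three outputs the overlap weights reproduce the X_1 through X_3 equations (with 0.67 and 0.33 as rounded forms of 2/3 and 1/3).

```python
from fractions import Fraction

def area_decimate(pixels, p, q):
    """Radiometrically weighted decimation of p input pixels to q
    output pixels (five-to-three in FIG. 28).  Each output pixel
    integrates the span of p/q input pixels it covers; the weights
    are the overlap lengths, so each output sums weight p/q, not 1,
    matching the X1..X3 equations above.
    """
    span = Fraction(p, q)                  # input pixels per output
    out = []
    for j in range(q):
        start, end = j * span, (j + 1) * span
        total = Fraction(0)
        for i in range(len(pixels)):
            overlap = min(end, i + 1) - max(start, i)
            if overlap > 0:
                total += overlap * pixels[i]
        out.append(total)
    return out
```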
 The decimated data is then filtered with a reconstruction filter and an area average filter. The reconstruction filter interpolates the input data by replicating the pixel data. The area average filter then area averages by integrating the area covered by the output pixel.
If the output ratio is less than 1 (i.e., interpolation is necessary), the interpolator 462 utilizes bilinear interpolation. FIG. 29 illustrates the operation of the bilinear interpolation. Input pixel A 554, input pixel B 556, input pixel C 558, input pixel D 560, and reference point X 562 are interpolated to create output 564. For this example, reference point X 562 is α to the right of pixel A 554 and 1−α to the left of pixel C 558, and reference point X 562 is β down from pixel A 554 and 1−β up from pixel B 556. Reference point X 562 is stated formally as:
X = (1−α)*((1−β)*A + β*B) + α*((1−β)*C + β*D).
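The equation can be written directly as a function; the name and argument order are illustrative.

```python
def bilerp(A, B, C, D, alpha, beta):
    """Bilinear interpolation per the equation above: alpha moves
    horizontally from the A/B column toward the C/D column, and beta
    moves vertically from A toward B (and from C toward D)."""
    return ((1 - alpha) * ((1 - beta) * A + beta * B)
            + alpha * ((1 - beta) * C + beta * D))
```

At the corners the output equals the corresponding input pixel, so the interpolated image passes through the original samples.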
 The Image Classifier
The preferred embodiment of the image classifier 152 is illustrated in FIG. 8. More specifically, the image classifier 152 uses fuzzy logic techniques to determine which compression methods will optimize the compression of various regions of the source image 100. The image classifier 152 adds intelligence to the encoder 102 by providing the means to decide, based on statistical characteristics of the image, what “tools” (combinations of compression methods) will best compress the image.
The source image 100 may include a combination of different image types. For example, a photograph could show a person framed in a graphical border, wherein the person is wearing a shirt that contains printed text. In order to optimize the compression ratio for the regions of the image that contain different image types, the image classifier 152 subdivides the source image 100 and then outputs the control script 196 that specifies the correct compression methods for each region. Thus, the image classifier 152 provides a customized, “most efficient” compression ratio for multiple image types.
The image classifier 152 uses fuzzy logic to infer the correct compression steps from the image content. Image content is inherently “fuzzy” and is not amenable to simple discrete classification. Images will thus tend to belong to several “classes.” For example, a classification scheme might include one class for textual images and a second class for photographic images. Since an image may comprise a photograph of a person wearing a shirt containing printed text, the image will belong to both classes to varying degrees. Likewise, the same image may be high contrast, “grainy,” black and white and/or high activity.
Fuzzy logic is a set-theoretic approach to classification of objects that assigns degrees of membership in a particular class. In classical set theory, an object either belongs to a set or it does not; membership is either 100% or 0%. In fuzzy set theory, an object can be partly in one set and partly in another. The fuzziness is of greater significance when the content must be categorized for the purpose of applying appropriate compression techniques. Relevant categories in image compression include photographic, graphical, noisy, and high-energy. Clearly the boundaries of these sets are not sharp. A scheme that matches appropriate compression tools to image content must reliably distinguish between content types that require different compression techniques, and must also be able to judge how to blend tools when types requiring different tools overlap.
FIG. 30 illustrates the optimization of the compression process. The optimization process analyzes the input image 600 at different levels. In the top level analysis 602, the image classifier 152 decomposes the image into a plurality of sub-images 604 (regions) of relatively homogeneous content as defined by a classification map 606. The image classifier 152 then outputs the control script 196 that specifies which compression methods or “tools” to employ in compressing each region. The compression methods are further optimized in the second level analysis 608 by the enhancement analyzer 144, which determines which areas of an image are the most visually important (for example, text and strong luminance edges). The compression methods are then further optimized in the third level analysis 610 with the optimized DCT 156, AVQ 134, and adaptive methods in the channel encoder 168. The second level analysis 608 and the third level analysis 610 determine how to adapt parameters and tables to a particular image.
The fuzzy logic image classifier 152 provides adaptive “intelligent” branching to appropriate compression methods with a high degree of computational simplicity. It is not feasible to provide the encoder 102 with an exhaustive mapping of all possible combinations of inherently nonlinear, discontinuous, multidimensional inputs (image measurements) onto desired control scripts 196. The fuzzy logic image classifier 152 reduces the complexity of such an analysis.
Furthermore, the fuzzy logic image classifier 152 ensures that the encoder 102 makes a smooth transition from one compression method (as defined by the control script 196) to another compression method. As image content becomes “more like” one class than another, the fuzzy controller avoids discrete switching from one compression method to another compression method.
The fuzzy logic image classifier 152 receives the image data and determines a set of image measurements which are mapped onto one or more input sets. The image classifier 152 in turn maps the input sets to corresponding output sets that identify which compression methods to apply. The output sets are then blended (“defuzzified”) to generate a control script 196. The process of mapping the input image to a particular control script 196 thus requires three sets of rules: 1) rules for mapping input measurements onto input sets (e.g., degree of membership in the “high activity” input set = F[average of AC coefficients 56-63]); 2) rules for mapping input sets onto output sets (e.g., if graphical image, use DCT quantization table 5); and 3) rules for defuzzification that mediate between memberships in several output sets, i.e., that determine how membership in more than one output set should be blended to generate the single control script 196 that controls the compression process.
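The three rule sets can be sketched as a small pipeline. The class names, rule functions, and weighting scheme below are hypothetical illustrations, not the patent's actual rule base.

```python
def fuzzy_control(measurements, input_rules, output_rules):
    """Minimal sketch of the three rule sets described above, with
    hypothetical class names and rules.
    """
    # 1) map input measurements onto input sets (fuzzification)
    memberships = {cls: rule(measurements)
                   for cls, rule in input_rules.items()}
    # 2) map input sets onto output sets: each compression method is
    #    activated to the degree of its associated input class
    activations = {method: memberships[cls]
                   for cls, method in output_rules.items()}
    # 3) defuzzify: blend the activated outputs into normalized
    #    weights for building a single control script
    total = sum(activations.values())
    return {m: w / total for m, w in activations.items()} if total else {}
```

For example, with a single "edge_energy" measurement, a hypothetical "graphical" class fading out as edge energy rises and a "photographic" class fading in produce blended method weights rather than a discrete switch.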
 Still further, the fuzzy logic rule base is easily maintained. The rules are modular. Thus, the rules can be understood, researched, and modified independently of one another. In addition, the rule bases are easily modified allowing new rules to make the image classifier 152 more sensitive to different types of image content. Furthermore, the fuzzy logic rule base is extendable to include additional image types specified by the user or learned using neural network or genetic programming methods.
 FIG. 31 illustrates a block diagram of the image classifier 152. In block 612 the image classifier 152 determines a set of input measurements 614 that correspond to the source image 100. In order to determine the input measurements 614, the image classifier 152 subdivides the source image 100 into a plurality of blocks. To conserve computations, the user can enable the image classifier 152 to select a random sample of the plurality of blocks to use as the basis of the input measurements 614.
 The image classifier 152 determines the set of input measurements 614 from the plurality of blocks using a variety of methods. The image classifier 152 calculates the mean, the variance, and a histogram of all three color components. The image classifier 152 performs a discrete cosine transform of the image blocks to derive a set of DCT components wherein each DCT coefficient is histogrammed to provide a frequency domain profile of the input image. The image classifier 152 performs special convolutions to gather information about edge content, texture content, and the efficacy of the Reed Spline Filter. The image classifier 152 derives spatial domain blocks and matches the spatial domain blocks with a special VQ-like pattern list to provide information about the types of activity contained in the picture. Finally, the image classifier scans the image for common and possibly localized features that bear on the compressibility of the image (such as typed text or scanning artifacts).
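 The first group of per-block measurements (mean, variance, and histogram of a color component) can be sketched as follows; the DCT coefficient histograms, special convolutions, and VQ-like pattern matching described above are omitted from this minimal sketch.

```python
# Minimal sketch of per-block input measurements for one color component.
# Only the mean/variance/histogram measurements are shown.

def block_measurements(block):
    """block: flat list of 8-bit samples of one color component of a block."""
    n = len(block)
    mean = sum(block) / n
    variance = sum((p - mean) ** 2 for p in block) / n  # population variance
    histogram = [0] * 256
    for p in block:
        histogram[p] += 1
    return {"mean": mean, "variance": variance, "histogram": histogram}
```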
 In block 616 the image classifier 152 analyzes the input measurements 614 generated in block 612 to determine the extent to which the source image 100 belongs to one of the fuzzy input sets 618 within the input rule base 620. The input rule base 620 identifies the list of image types. In the preferred embodiment, the image classifier 152 contains input sets 618 for the following image types: scale, text, graphics, photographic, color depth, degree of activity, and special features.
 Membership in the activity input set and the scale image input set is determined by the input measurements 614 for the DCT coefficient histogram, the spatial statistics, and the convolutions. Membership in the text image input set and the graphic input set corresponds to the input measurements 614 for a linear combination of high frequency DCT coefficients and gaps in the luminance histogram. The photographic input set is the complement of the graphic input set.
 The color depth input set includes four classifications: gray scale images, 4-bit images, 8-bit images and 24-bit images. The color depth input corresponds to the input measurements 614 for the Y, U and X color components. A small dynamic range in the U and X color components indicates that the picture is likely to be a gray scale image, while gaps in the Y component histogram reveal whether the image was once a palettized 4-bit or 8-bit image.
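 The color depth test can be sketched as follows; the chroma-range threshold and the histogram bin counts used to detect a palettized source are hypothetical assumptions, not values taken from the specification.

```python
# Hypothetical sketch of the color-depth classification described above.
# Thresholds are illustrative assumptions.

def classify_color_depth(y_hist, u_range, x_range, chroma_threshold=8):
    """y_hist: 256-bin luminance histogram; u_range/x_range: dynamic range
    (max - min) of the U and X chrominance components."""
    if u_range < chroma_threshold and x_range < chroma_threshold:
        return "gray scale"          # little chrominance energy
    used_bins = sum(1 for count in y_hist if count > 0)
    if used_bins <= 16:
        return "4-bit"               # large gaps suggest a 16-entry palette
    if used_bins <= 64:
        return "8-bit"               # hypothetical gap threshold
    return "24-bit"
```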
 The special feature input set corresponds to the input measurements 614 for the common or localized features that bear on the compressibility of the image. Thus the special feature input set identifies such artifacts as black borders caused by inaccurate scanning and graphical titling on a photographic image.
 In block 622 the image classifier 152 maps the input sets 618 onto output sets 624 according to the output rule base 626. The image classifier 152 applies the output rule base 626 to map each input set 618 onto membership of each fuzzy output set 624. The output sets 624 determine, for example, how many CS terms are stored in the CS data segment 204 and the optimization of the VQ1 data segment 224, the VQ2 data segment 258, the VQ3 data segment 242, the VQ4 data segment 244, and the number of VQ patterns to use. The output sets also determine whether the encoder 102 performs an optimized DCT 136 and which quantization tables Q 202 to apply.
 For the second Reed Spline Filter 225 and the third Reed Spline Filter 227, the output sets 624 adjust the decimation factor tau and the orientation of the kernel function. Finally, the output sets determine whether the channel encoder 168 utilizes a fixed Huffman encoder, an adaptive Huffman encoder, or an LZ1 encoder. FIG. 33 illustrates several examples of mapping from input measurements 614 to input sets 618 to output sets 624.
 Referring to FIG. 31, in block 626 the image classifier constructs a classification map 628 based upon membership within the output sets. The classification map 628 identifies independent regions in the source image 100 that are independently compressed. Thus the image classifier 152 identifies the regions of the image that belong to compatible output sets 624. These are regions that contain relatively homogeneous image content and call for one method or set of complementary methods to be applied to the entire region.
 In block 630 the image classifier 152 converts (defuzzifies), based on the defuzzification rule base 632, the membership of the fuzzy output sets 624 of each independent region in order to generate the control script 196. The control script 196 contains instructions for which compression methods to perform and what parameters, tables, and optimization levels to employ for a particular region of the source image 100.
 The Enhancement Analyzer
 The preferred embodiment of the enhancement analyzer 144 is illustrated in FIGS. 4, 15 and 30. More specifically, the enhancement analyzer 144 examines the Y_tau2 miniature 190, the U_tau2 miniature 192, and the X_tau4 miniature 228 to determine the enhancement priority of image blocks that correspond to 16×16 blocks in the original source image 100. The enhancement analyzer 144 prioritizes the image blocks by 1) calculating the mean of the Y_tau2 miniature 190, the U_tau2 miniature 192, and the X_tau4 miniature 228, and 2) testing every color block against a normalized threshold value E 252 for the Y_tau2 miniature 190, the U_tau2 miniature 192, and the X_tau4 miniature 228. Blocks that exceed the threshold value E 252 are added to the enhancement list 250.
 The enhancement analyzer 144 determines a threshold value E_{Y }for the Y_tau2 miniature 190, a threshold value E_{U }for the U_tau2 miniature 192, and a threshold value E_{X }for the X_tau4 miniature 228. Once the enhancement analyzer 144 computes the threshold value E_{Y}, the threshold value E_{U }and the threshold value E_{X}, the enhancement analyzer 144 tests each 8×8 Y_tau2 block, each 4×4 U_tau4 block and each 4×4 X_tau4 block (each block corresponds to a 16×16 block in the source image 100) as follows:
 Every pixel in the test block is convolved with the following filter masks:
 M_{1}={−1,−2,−1,0,0,0,1,2,1}
 M_{2}={1,0,−1,2,0,−2,1,0,−1}
 to compute two statistics S_{1 }and S_{2}.
 Masks M_{1 }and M_{2 }are convolved with a three by three block of pixels centered on the pixel being tested. The three by three block of pixels is represented as:
 x_{11}x_{12}x_{13 }
 x_{21}x_{22}x_{23 }
 x_{31}x_{32}x_{33 }
 where the pixel x_{22 }is the pixel being tested. Thus the statistics are calculated with the following equations:
 S _{1}=(−1·x _{11})−(2·x _{12})−(1·x _{13})+(1·x _{31})+(2·x _{32})+(1·x _{33})
 S_{2}=(1·x _{11})−(1·x _{13})+(2·x _{21})−(2·x _{23})+(1·x _{31})−(1·x _{33})
 If S_{1 }plus S_{2 }is greater than the threshold value E_{Y }for a particular 8×8 Y_tau2 block, the enhancement analyzer 144 adds the 8×8 Y_tau2 block to the enhancement list 250. If S_{1 }plus S_{2 }is greater than the threshold value E_{U }for a particular 4×4 U_tau4 block, the enhancement analyzer 144 adds the 4×4 U_tau4 block to the enhancement list 250. If S_{1 }plus S_{2 }is greater than the threshold value E_{X }for a particular 4×4 X_tau4 block the enhancement analyzer 144 adds the 4×4 X_tau4 block to the enhancement list 250.
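 The mask test above can be transcribed directly; only the wiring into the miniatures and the enhancement list 250 is simplified away in this sketch.

```python
# Direct transcription of the mask test: convolve the 3x3 neighborhood of a
# pixel with M1 and M2, then compare S1 + S2 against a threshold E.

M1 = [-1, -2, -1, 0, 0, 0, 1, 2, 1]   # horizontal-edge mask
M2 = [1, 0, -1, 2, 0, -2, 1, 0, -1]   # vertical-edge mask

def edge_statistics(neighborhood):
    """neighborhood: the nine pixels x11..x33 in row-major order."""
    s1 = sum(m * x for m, x in zip(M1, neighborhood))
    s2 = sum(m * x for m, x in zip(M2, neighborhood))
    return s1, s2

def exceeds_threshold(neighborhood, e):
    """True if the block containing this pixel should join the list."""
    s1, s2 = edge_statistics(neighborhood)
    return s1 + s2 > e
```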
 In addition to the enhancement list 250, the enhancement analyzer 144 also uses the DCT coefficients 198 to identify visually unimportant “texture” regions where the compression ratio can be increased without significant loss to the image quality.
 Optimized DCT
 The preferred embodiment of the optimized DCT 136 is illustrated in FIG. 9. More specifically, the optimized DCT 136 uses the quantization table Q 202 to assign the DCT coefficients (DC terms 200 and AC terms 201) quantization step values. In addition, the quantization step values in the quantization table Q 202 vary depending on the optimized DCT 136 operation mode. The optimized DCT 136 operates in four DCT modes as follows: 1) switched fixed uniform DCT quantization tables that correspond to image classification, 2) optimal reconstruction values, 3) adaptive uniform DCT quantization tables, and 4) adaptive nonuniform DCT quantization tables.
 The fixed DCT quantization tables are tuned to different image types, including eight standard tables corresponding to images differing along three dimensions: photographic versus graphic, small-scale versus large-scale, and high-activity versus low-activity. In the preferred embodiment, additional tables can be added to the resource file 160 (not shown).
 The control script 196 defines which standard table the optimized DCT 136 uses in the fixed-table DCT mode. In the fixed-table mode, the quantized value for each DCT coefficient is obtained by linearly quantizing each DCT coefficient x_{i }with the quantization value q_{i }in quantization table Q. The mathematical relationship for the quantization procedure is:
 for i=0, 1, . . . , 63

 c _{i}=round(x _{i} /q _{i})
 Reconstruction is also linear unless reconstruction values have been computed and stored in the CS data segment 204. Letting r denote the dequantized DCT coefficients, the linear dequantization formula is:
 for i=0, 1, . . . , 63
 r _{i} =c _{i} ·q _{i }
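 The linear quantization and dequantization pair can be sketched as follows, assuming round-to-nearest quantization (the exact rounding rule is an assumption; the specification states only that quantization is linear in q_{i}).

```python
# Sketch of linear quantization/dequantization against a step table Q.
# Round-to-nearest is an assumption (Python's round() rounds halves to even).

def quantize(coeffs, q_table):
    """coeffs: DCT coefficients x_i; q_table: step values q_i."""
    return [round(x / q) for x, q in zip(coeffs, q_table)]

def dequantize(levels, q_table):
    """Linear reconstruction r_i = c_i * q_i."""
    return [c * q for c, q in zip(levels, q_table)]
```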
 In the fixed-table DCT mode, the optimized DCT 136 can also compute the optimal reconstruction values stored in the CS data segment 204. While the DC term 200 is always calculated linearly, the CS reconstruction values represent the conditional expected value of each quantized level of each AC term 201. The CS reconstruction values are calculated for each AC term 201 by first calculating an absolute value frequency histogram H_{i }for the ith coefficient (for i=1, 2, . . . , 63) over all N DCT blocks in the source image as follows:
 for j=0, 1, . . . , N
 H_{i}(k)=frequency (abs(x_{ij})=k)
 where x_{ij}=the value of the ith coefficient in the jth DCT block.
 Second, the centroid of coefficient values is calculated between each quantization step. The formula for the centroid of the ith coefficient in the kth quantization interval is:
 $\mathrm{CS}_{i}(k)=\sum_{j=kq-\frac{q}{2}}^{kq+\frac{q}{2}}j\,\frac{H_{i}(j)}{T_{i}(k)},$ where T_{i}(k) denotes the total frequency of the ith coefficient over the same quantization interval.
 This provides a nonlinear mapping of quantized coefficients onto reconstructed values as follows:
 r_{i}=CS_{i}(c_{i}) for i=1, 2, . . . , 63
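 The centroid (CS) computation can be sketched for a single coefficient as follows; the dictionary-based histogram, the half-open interval bounds, and the fallback value for empty intervals are implementation assumptions.

```python
# Sketch of CS reconstruction values for one AC coefficient: histogram the
# absolute coefficient values, then take the centroid within each
# quantization interval. Interval handling is an assumption.

def cs_reconstruction(abs_values, q, num_levels):
    """abs_values: |x_ij| over all DCT blocks for one coefficient i;
    q: quantization step; returns one centroid per quantized level k."""
    hist = {}
    for v in abs_values:
        hist[v] = hist.get(v, 0) + 1
    cs = []
    for k in range(num_levels):
        lo, hi = k * q - q / 2, k * q + q / 2
        members = [(j, c) for j, c in hist.items() if lo <= j < hi]
        total = sum(c for _, c in members)
        # Fall back to the linear value k*q when no samples hit the interval.
        cs.append(sum(j * c for j, c in members) / total if total else k * q)
    return cs
```

Centroids pull the reconstructed value toward where the coefficients actually cluster inside each interval, which is what makes this reconstruction nonlinear.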
 In the adaptive uniform DCT quantization mode, the image classifier 152 outputs the control script 196 that directs the optimized DCT 136 to adjust a given DCT uniform quantization table Q 202 to provide more efficient compression while holding the visual quality constant. This method adjusts the DCT quantization step sizes such that the compressed bit rate (entropy) after quantizing the DCT coefficients is minimized subject to the constraint that the visually-weighted mean squared error arising from the DCT quantization is held constant with respect to the base quantization table and the user-supplied quantization parameter L.
 The optimized DCT 136 uses marginal analysis to adjust the DCT quantization step sizes. A “marginal rate of transformation (MRT)” is computed for each DCT coefficient. The MRT represents the rate at which bits are “transformed” into (a reduction of) the visually weighted mean squared error (VMSE). The MRT of a coefficient is defined as the ratio of 1) the marginal change in the encoded bit rate with respect to a quantization step value q to 2) the marginal change in the visual mean square error with respect to the quantization step value q.
 MRT (bits/VMSE) ratio is calculated as follows:
 MRT (bits/VMSE)=(Δbits/Δq)/(ΔVMSE/Δq).
 Decreasing the quantization step value q will add more bits to the representation of the corresponding DCT coefficient. Adding more bits to the representation of a DCT coefficient will, in turn, reduce the VMSE. Since bits added by reducing the step value q are usually transformed into a VMSE reduction, the MRT is generally negative.
 The MRT is calculated for all of the DCT coefficients. The adaptive method utilized by the optimized DCT 136 adjusts the quantization step values q of the quantization table Q 202 by reducing the quantization step value q corresponding to the maximum MRT and increasing the quantization step value q corresponding to the minimum MRT. The optimized DCT 136 repeats the process until the MRT is equalized across all of the DCT coefficients while holding the VMSE constant.
 FIG. 32 shows a flow chart of the process of creating an adaptive uniform DCT quantization table. In step 700 the optimized DCT 136 computes the MRT values for all DCT coefficients i. In step 702 the optimized DCT 136 compares the MRT values. If the MRT values are equal, the optimized DCT 136 uses the resulting quantization table Q 202. If the MRT values are not equal, the optimized DCT 136 finds the minimum MRT value and the maximum MRT value for the DCT coefficients i in step 706.
 In step 708, the optimized DCT 136 increases the quantization step value q_{low }corresponding to the minimum MRT value and decreases the quantization step value q_{high }associated with the maximum MRT value. Increasing q_{low }reduces the number of bits devoted to the corresponding DCT coefficient but does not increase the VMSE appreciably. Reducing the quantization step value q_{high }increases the number of bits devoted to the corresponding DCT coefficient and reduces the VMSE significantly. The optimized DCT 136 offsets the adjustments for the quantization step values q_{low }and q_{high }in order to keep the VMSE constant.
 The optimized DCT 136 returns to step 700, where the process is repeated until all MRT values are equal. Once all of the quantization step values q are determined the resulting quantization table Q 202 is complete.
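 The marginal-analysis iteration can be sketched as follows. The logarithmic rate model and q²/12 distortion model below are illustrative stand-ins for the measured bit rate and VMSE of an actual image, so only the structure of the adjustment step mirrors the process described above; the step size and finite-difference width are also assumptions.

```python
# Sketch of one marginal-analysis adjustment step using toy rate/distortion
# models (assumptions, not the patent's image-derived measurements).
import math

def bits(q, sigma):
    return max(0.0, math.log2(sigma / q))   # toy entropy model: fewer bits as q grows

def vmse(q, weight):
    return weight * q * q / 12.0            # toy visually weighted MSE of uniform quantization

def mrt(q, sigma, weight, dq=1e-4):
    """Marginal rate of transformation: (d bits / dq) / (d VMSE / dq)."""
    dbits = bits(q + dq, sigma) - bits(q - dq, sigma)
    dvmse = vmse(q + dq, weight) - vmse(q - dq, weight)
    return dbits / dvmse

def adjust_step(q_table, sigmas, weights, step=0.01):
    """One iteration: raise q where MRT is minimal, lower q where maximal."""
    mrts = [mrt(q, s, w) for q, s, w in zip(q_table, sigmas, weights)]
    lo, hi = mrts.index(min(mrts)), mrts.index(max(mrts))
    q_table[lo] += step   # fewer bits, little VMSE gained
    q_table[hi] -= step   # more bits, large VMSE reduction
    return q_table, mrts
```

Iterating `adjust_step` until the MRT values converge mirrors the loop of FIG. 32; a faithful implementation would additionally rescale the offsets so the total VMSE stays constant.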
 The Reed Spline Filter
 FIGS. 34-57 illustrate a preferred embodiment of the Reed Spline Filter 138, which is advantageously used for the first, second and third Reed Spline Filters 148, 225, and 227. The Reed Spline Filter described in FIGS. 34-57 is presented in terms of a generic image format. In particular, the image input data comprises Y image input, which corresponds, for example, to the red, green and blue image data in the first Reed Spline Filter 148 in the foregoing discussion. In like manner, the outputs of the Reed Spline Filter 138 described as reconstruction values should be understood to correspond, for example, to the R_tau2 miniature 180, the G_tau2 miniature 182 and the B_tau2 miniature 184 of the first Reed Spline Filter 148.
 The Reed Spline Filter is based on a least-mean-square-error (LMS-error) spline approach, which is extendable to N dimensions. One- and two-dimensional image data compression utilizing linear and planar splines, respectively, are shown to have compact, closed-form optimal solutions for convenient, effective compression. The computational efficiency of this new method is of special interest, because the compression/reconstruction algorithms proposed herein involve only the Fast Fourier Transform (FFT) and inverse FFT types of processors or other high-speed direct convolution algorithms. Thus, the compression and reconstruction from the compressed image can be extremely fast and realized in existing hardware and software. Even with this high computational efficiency, good image quality is obtained upon reconstruction. An important and practical consequence of the disclosed method is the convenience and versatility with which it is integrated into a variety of hybrid digital data compression systems.
 I. SPLINE FILTER OVERVIEW
 The basic process of digital image coding entails transforming a source image X into a “compressed” image Y such that the signal energy of Y is concentrated into fewer elements than the signal energy of X, with some provisions regarding error. As depicted in FIG. 34, digital source image data 1002 represented by an appropriate N-dimensional array X is supplied to compression block 1004, whereupon image data X is transformed to compressed data Y′ via a first generalized process represented here as G(X)=Y′. Compressed data may be stored or transmitted (process block 1006) to a “remote” reconstruction block 1008, whereupon a second generalized process, G′(Y′)=X′, operates to transform compressed data Y′ into a reconstructed image X′.
 G and G′ are not necessarily processes of mutual inversion, and the processes may not conserve the full information content of image data X. Consequently, X′ will, in general, differ from X, and information is lost through the coding/reconstruction process. The residual image or so-called residue is generated by supplying compressed data Y′ to a “local” reconstruction process 1005 followed by a difference process 1010 which computes the residue ΔX=X−X′ 1012. Preferably, X and X′ are sufficiently close, so that the residue ΔX 1012 is small and may be transmitted, stored along with the compressed data Y′, or discarded. Subsequent to the remote reconstruction process 1008, the residue ΔX 1012 and reconstructed image X′ are supplied to adding process 1007 to generate a restored image X′+ΔX=X″ 1003.
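 The compress/reconstruct/residue/restore loop of FIG. 34 can be sketched in one dimension, with G modeled as simple decimation and G′ as linear interpolation. These are illustrative stand-ins for the generalized processes G and G′, chosen only to show that X″ = X′ + ΔX recovers X exactly.

```python
# Sketch of the FIG. 34 pipeline with decimation standing in for G and
# linear interpolation standing in for G' (both are assumptions).

def compress(x, tau):
    return x[::tau]                           # Y' = G(X): keep every tau-th sample

def reconstruct(y, tau, length):
    out = []
    for i in range(length):
        j, frac = divmod(i, tau)
        a = y[min(j, len(y) - 1)]
        b = y[min(j + 1, len(y) - 1)]
        out.append(a + (b - a) * frac / tau)  # X' = G'(Y'): interpolate
    return out

def residue(x, x_rec):
    return [a - b for a, b in zip(x, x_rec)]  # dX = X - X'

def restore(x_rec, dx):
    return [a + b for a, b in zip(x_rec, dx)] # X'' = X' + dX
```

When the residue is retained, the restoration is exact; discarding it trades fidelity for compression, exactly the trade-off the text describes.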
 In practice, to reduce computational overhead associated with large images during compression, a decimating or subsampling process may be performed to reduce the number of samples. Decimation is commonly characterized by a reduction factor τ (tau), which indicates the ratio of image data elements to compressed data elements. However, one skilled in the art will appreciate that image data X must be filtered in conjunction with decimation to avoid aliasing. As shown in FIG. 35, a low-pass input filter may take the form of a pointwise convolution of image data X with a suitable convolution filter 1014, preferably implemented using a matrix filter kernel. A decimation process 1016 then produces compressed data Y′, which is substantially free of aliasing prior to subsequent process steps. While the convolution or decimation filter 1014 attenuates aliasing effects, it also reduces the number of bits required to represent the signal. It is “low-pass” in nature, reducing the information content of the reconstructed image X′. Consequently, the residue ΔX 1012 will be larger and, in part, will offset the compression attained through decimation.
 The present invention disclosed herein solves this problem by providing a method of optimizing the compressed data such that the mean-square residue <ΔX^{2}> is minimized, where “< >” shall herein denote an averaging process. As shown in FIG. 36, compressed data Y′, generated in a manner similar to that shown in FIG. 35, is further processed by an optimization process 1018. Accordingly, the optimization process 1018 is dependent upon the properties of the convolution filter 1014 and is constrained such that the variation of the mean-square residue is zero, δ<ΔX^{2}>=0. The disclosed method of filter optimization “matches” the filter response to the image data, thereby minimizing the residue. Since the decimation filter 1014 is low-pass in nature, the optimization process 1018, in part, compensates by effectively acting as a “self-tuned” high-pass filter. A brief descriptive overview of the optimization procedure is provided in the following sections.
 A. Image Approximation by Spline Functions
 According to the method, the image data is approximated by a weighted sum of spline basis functions {ψ_{k}}:

 $X'=\sum_{k}\chi_{k}\psi_{k}(x),$
 where X′ is the reconstructed image vector and χ_{k }is the decomposition weight. The image data vector X is thus approximated by an array of preferably computationally simple, continuous functions, such as lines or planes, allowing also an efficient reconstruction of the original image.
 According to the method, the basis functions need not be orthogonal and are preferably chosen to overlap in order to provide a continuous approximation to image data, thereby rendering a nondiagonal basis correlation matrix:
 A _{jk}=ψ_{j}(x)·ψ_{k}(x).
 This property is exploited by the method of the present invention, since it allows the user to “adapt” the response of the filter by the nature and degree of cross-correlation. Furthermore, the basis of spline functions need not be complete in the sense of spanning the space of all image data, but preferably generates a close approximation to image X. It is known that the decomposition of image vector X into components of differing spline basis functions {ψ_{k}(X)} is not unique. The method herein disclosed optimizes the projection by adjusting the weights χ_{k }such that the differential variations of the average residue vanish, δ<ΔX^{2}>=0, or equivalently <ΔX^{2}>=min. In general, it is expected that a more complete basis set will provide a smaller residue and better compression, which, however, requires greater computational overhead. Accordingly, it is preferable to utilize a computationally simple basis set, which is easy to manipulate in closed form and which renders a small residual image. This residual image or residue ΔX is preferably retained for subsequent processing or reconstruction. In this respect there is a compromise between computational complexity, compression, and the magnitude of the residue.

 The “best” X′ is determined by the constraint that ΔX=X−X′ is minimized with respect to variations in the weights χ_{j}:
 $\frac{\partial}{\partial\chi_{j}}\langle\Delta X^{2}\rangle=\frac{\partial}{\partial\chi_{j}}\left\langle\left(X-\sum_{k}\chi_{k}\Psi_{k}(x)\right)^{2}\right\rangle=0,$ which, by analogy to FIG. 37, describes an orthogonal projection of X onto S′.
 Generally, the above system of equations which determines the optimal χ_{k }may be regarded as a linear transformation, which maps X onto S′ optimally, represented here by:
 A(χ_{k})=X*ψ_{k}(x)
 where A_{ij}=ψ_{i}*ψ_{j }is a transformation matrix having elements representing the correlation between bases vectors ψ_{i }and ψ_{j}. The optimal weights χ_{k }are determined by the inverse operation A^{−1}:
 χ_{k}=A^{−1}(X*ψ_{k }(x)),
 rendering compression with the least residue. One skilled in the art of LMS criteria will know how to express the processes given here in the geometry of multiple dimensions. Hence, the processes described herein are applicable to a variety of image data types.
 The present brief and general description has direct processing counterparts depicted in FIG. 36. The operation
 X*ψ_{k}(x)
 represents a convolution filtering process 1014, and
 A^{−1}(X*ψ_{k}(x))
 represents the optimizing process 1018.
 Because the correlation matrix A is circulant (as shown in Section II), it is diagonalized by the DFT, so the optimal weights may be computed as

 $\chi=\mathrm{DFT}^{-1}\left[\frac{\mathrm{DFT}\left(X*\psi_{k}(x)\right)}{\lambda_{m}}\right],$

 where DFT is the familiar discrete Fourier transform (DFT) and λ_{m }are the eigenvalues of A. The equivalent optimization block 1018, shown in FIG. 38, comprises three steps: (1) a discrete Fourier transformation (DFT) 1020; (2) inverse eigenfiltering 1022; and (3) an inverse discrete Fourier transformation (DFT^{−1}) 1024. The advantages of this embodiment, in part, rely on the fast coding/reconstruction speed, since only the DFT and DFT^{−1 }are the primary computations, where now the optimization is a simple division. Greater elaboration of the principles of the method is provided in Section II, where also the presently contemplated preferred embodiments are derived as closed-form solutions for a one-dimensional linear spline basis and two-dimensional planar spline bases. Section III provides an operational description for the preferred method of compression and reconstruction utilizing the optimal procedure disclosed in Section II. Section IV discloses results of a reduction to practice of the preferred embodiments applied to one- and two-dimensional images. Finally, Section V discloses a preferred method of the filter optimizing process implemented in the image domain.
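 The three-step optimization (DFT, inverse eigenfiltering, inverse DFT) can be sketched for the circulant correlation matrix derived in Section II, whose first row is α, β, 0, …, 0, β and whose eigenvalues are λ_{m}=α+2β cos(2πm/n). The direct O(n²) DFT below is an illustrative stand-in for an FFT.

```python
# Sketch of inverse eigenfiltering for the circulant tridiagonal correlation
# matrix A (first row: alpha, beta, 0, ..., 0, beta). A direct DFT is used
# here in place of an FFT for brevity.
import cmath, math

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * m * t / n) for t in range(n))
            for m in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[m] * cmath.exp(2j * math.pi * m * t / n) for m in range(n)) / n
            for t in range(n)]

def optimal_weights(y, alpha, beta):
    """Solve A chi = y via DFT, division by the eigenvalues, inverse DFT."""
    n = len(y)
    lam = [alpha + 2 * beta * math.cos(2 * math.pi * m / n) for m in range(n)]
    Y = dft(y)
    return [c.real for c in idft([Ym / lm for Ym, lm in zip(Y, lam)])]
```

For n=4, α=4, β=1 the matrix is [[4,1,0,1],[1,4,1,0],[0,1,4,1],[1,0,1,4]], and applying it to the recovered weights reproduces the input correlations, confirming that the optimization reduces to a simple division in the Fourier domain.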
 II. IMAGE DATA COMPRESSION BY OPTIMAL SPLINE INTERPOLATION
 A. One-Dimensional Data Compression by LMS-Error Linear Splines
 For one-dimensional image data, bilinear spline functions are combined to approximate the image data with a resultant linear interpolation, as shown in FIG. 39. The resultant closed-form approximating and optimizing process has a significant advantage in computational simplicity and speed.
 Letting the decimation index τ and image sampling period t be fixed, positive integers τ, t=1,2, . . . , and letting X(t) be a periodic sequence of data of period nτ, where n is also an integer, consider a periodic, linear spline 1014 of period nτ of the type,
 F(t)=F(t+nτ), (1)
 where

 $F(t)=\begin{cases}1-\frac{|t|}{\tau}, & |t|\leq\tau\\ 0, & \text{otherwise,}\end{cases}\qquad(2)$
 as shown by the functions ψ_{k}(t) 1014 of FIG. 39.
 The family of shifted linear splines F(t) is defined as follows:
 ψ_{k}(t)=F(t−kτ) for (k=0,1,2, . . . , (n−1)). (3)
 The family of shifted splines is used to approximate the image data X(t) by the spline sum

 $S(t)=\sum_{k=0}^{n-1}X_{k}\Psi_{k}(t)\qquad(4)$

 in a least-mean-squares fashion, where X_{0}, . . . , X_{n−1 }are n reconstruction weights. Observe that the two-point sum in the interval 0≦t≦τ is:

 $X_{0}\Psi_{0}(t)+X_{1}\Psi_{1}(t)=X_{0}\left(1-\frac{t}{\tau}\right)+X_{1}\left(1-\frac{|t-\tau|}{\tau}\right)=X_{0}+(X_{1}-X_{0})\frac{t}{\tau}.\qquad(5)$

 Hence, S(t) 1030 in Equation 4 represents a linear interpolation of the original waveform X(t) 1002, as shown in FIG. 39.

 To find the “best” weights X_{0}, . . . , X_{n−1}, the quantity

 $L(X_{0},X_{1},\ldots,X_{n-1})=\sum_{t=-\tau}^{n\tau}\left\langle\left[X(t)-\sum_{k=0}^{n-1}X_{k}\Psi_{k}(t)\right]^{2}\right\rangle\qquad(6)$

 is minimized, where the sum has been taken over one period plus τ of the data. L is minimized with respect to each X_{j }by differentiating as follows:

 $\frac{\partial L}{\partial X_{j}}=-\sum_{t=-\tau}^{n\tau}2\left[X(t)-\sum_{k=0}^{n-1}X_{k}\Psi_{k}(t)\right]\Psi_{j}(t)=-2\left\langle\sum_{t=-\tau}^{n\tau}X(t)\Psi_{j}(t)-\sum_{k=0}^{n-1}X_{k}\sum_{t=-\tau}^{n\tau}\Psi_{k}(t)\Psi_{j}(t)\right\rangle\equiv 0.\qquad(7)$
 Setting Equation 7 to zero yields the linear system

 $\sum_{k=0}^{n-1}A_{jk}X_{k}=Y_{j}\quad\text{for }(j=0,1,\ldots,n-1),\qquad(8)$

 where

 $A_{jk}=\sum_{t=-\tau}^{n\tau}\Psi_{k}(t)\Psi_{j}(t)\qquad(9)$

 and

 $Y_{j}=\sum_{t=-\tau}^{n\tau}X(t)\Psi_{j}(t).\qquad(10)$
 The term Y_{j }in Equation 10 is reducible as follows:
 $Y_{j}=\sum_{t=-\tau}^{n\tau}X(t)F(t-j\tau)=\sum_{t=(j-1)\tau}^{(j+1)\tau}X(t)F(t-j\tau).\qquad(11)$

 Letting (t−jτ)=m, then:

 $Y_{j}=\sum_{m=-\tau+1}^{\tau-1}X(m+j\tau)F(m)\quad\text{for }(j=0,1,2,\ldots,n-1).\qquad(12)$

 The Y_{j}'s in Equation 12 represent the compressed data to be transmitted or stored. Note that this encoding scheme involves n correlation operations on only 2τ−1 points.
 Since F(t) is assumed to be periodic with period nτ, the matrix form of A_{jk }in Equation 9 can be reduced by substituting Equation 3 into Equation 9 to obtain:

 $A_{jk}=\sum_{m=-\tau+1}^{\tau-1}F(m+(j-k)\tau)F(m)=\begin{cases}\sum_{m=-\tau+1}^{\tau-1}\left(F(m)\right)^{2}\triangleq\alpha & \text{if }j-k\equiv 0\ \mathrm{mod}\ n\\ \sum_{m=-\tau+1}^{\tau-1}F(m\pm\tau)F(m)\triangleq\beta & \text{if }j-k\equiv\pm 1\ \mathrm{mod}\ n\\ 0 & \text{otherwise.}\end{cases}\qquad(13)$

 By Equation 13, A_{jk }can be expressed also in circulant form in the following manner:
 A_{jk}=a_{(k−j)_{n}}, (14)
 where (k−j)_{n }denotes (k−j) mod n, and
 a_{0}=α, a_{1}=β, a_{2}=0, . . . , a_{n−1}=β. (15)
 Therefore, A_{jk }in Equations 14 and 15 has explicitly the following equivalent circulant matrix representations:
$\begin{array}{cc}\begin{array}{c}\left[{A}_{\mathrm{jk}}\right]\ue89e\text{\hspace{1em}}\ue89e\stackrel{\Delta}{=}\ue89e\left[\begin{array}{cccc}{A}_{0,0}& {A}_{0,1}& \cdots & {A}_{o,n1}\\ {A}_{1,0}& {A}_{1,1}& \cdots & {A}_{1,n1}\\ \vdots & \vdots & \u22f0& \vdots \\ {A}_{n1,0}& {A}_{n1,1}& \cdots & {A}_{n1,n1}\end{array}\right]\\ \text{\hspace{1em}}\ue89e=\left[\left\{{a}_{{\left(kj\right)}_{a}}\right\}\right]\\ \text{\hspace{1em}}\ue89e\stackrel{\Delta}{=}\ue89e\left[\begin{array}{ccccc}{a}_{0}& {a}_{1}& {a}_{2}& \cdots & {a}_{n1}\\ {a}_{n1}& {a}_{0}& {a}_{1}& \cdots & {a}_{n2}\\ {a}_{n2}& {a}_{n1}\ue89e{a}_{0}& \cdots & {a}_{n3}& \text{\hspace{1em}}\\ \vdots & \vdots & \vdots & \u22f0& \vdots \\ {a}_{1}& {a}_{2}& {a}_{3}& \cdots & {a}_{0}\end{array}\right]\\ \text{\hspace{1em}}\ue89e=\left[\begin{array}{ccccc}\alpha & \beta & 0& \cdots & \beta \\ \beta & \alpha & \beta & \cdots & 0\\ 0& \beta & \alpha & \cdots & 0\\ \vdots & \vdots & \vdots & \u22f0& \vdots \\ \beta & 0& 0& \cdots & \alpha .\end{array}\right]\end{array}& \left(16\right)\end{array}$  One skilled in the art of matrix and filter analysis will appreciate that the periodic boundary conditions imposed on the data lie outside the window of observation and may be defined in a variety of ways. Nevertheless, periodic boundary conditions serve to simplify the process implementation by insuring that the correlation matrix [A_{jk}] has a calculable inverse. Thus, the optimization process involves an inversion of [A_{jk}], of which the periodic boundary conditions and consequent circulant character play a preferred role. It is also recognized that for certain spline functions, symmetry rendered in the correlation matrix allows inversion in the absence of periodic image boundary conditions.
 B. Two-Dimensional Data Compression by Planar Splines
 For two-dimensional image data, multiplanar spline functions are combined to approximate the image data with a resultant planar interpolation. In FIG. 40, X(t_{1},t_{2}) is a doubly periodic array of image data (e.g., still image) of periods n_{1}τ and n_{2}τ, with respect to the integer variables t_{1 }and t_{2}, where τ is a multiple of both t_{1 }and t_{2}. The actual image 1002 to be compressed can be viewed as being repeated periodically throughout the plane as shown in FIG. 40. Each subimage of the extended picture is separated by a border 1032 (or gutter) of zero intensity of width τ. This border is one of several possible preferred “boundary conditions” to achieve a doubly-periodic image.
Consider now a doubly periodic planar spline F(t_1,t_2), which has the form of a six-sided pyramid or tent, is centered at the origin, and is repeated periodically with periods n_1·τ and n_2·τ with respect to the integer variables t_1 and t_2, respectively. A perspective view of such a planar spline function 1034 is shown in FIG. 41a; it may hereinafter be referred to as a "hexagonal tent." Following the one-dimensional case by analogy, letting:
$$\psi_{k_1 k_2}(t_1,t_2) = F(t_1 - k_1\tau,\; t_2 - k_2\tau) \qquad (17)$$
for $(k_1=0,1,\ldots,n_1-1)$ and $(k_2=0,1,\ldots,n_2-1)$, the "best" weights $X_{k_1 k_2}$ are found such that:
$$L(X_{k_1 k_2}) = \sum_{t_1,t_2=\tau}^{n_1\tau,\,n_2\tau} \left[ X(t_1,t_2) - \sum_{k_1,k_2=0}^{n_1-1,\,n_2-1} X_{k_1 k_2}\,\Psi_{k_1 k_2}(t_1,t_2) \right]^2 \qquad (18)$$
is a minimum.
 A condition for L to be a minimum is
$$\begin{aligned} \frac{\partial L}{\partial X_{j_1 j_2}} &= -2 \sum_{t_1,t_2=\tau}^{n_1\tau,\,n_2\tau} \left[ X(t_1,t_2) - \sum_{k_1,k_2=0}^{n_1-1,\,n_2-1} X_{k_1 k_2}\,\Psi_{k_1 k_2}(t_1,t_2) \right] \Psi_{j_1 j_2}(t_1,t_2)\\ &= -2 \left[ \sum_{t_1,t_2=\tau}^{n_1\tau,\,n_2\tau} X(t_1,t_2)\,\Psi_{j_1 j_2}(t_1,t_2) - \sum_{k_1,k_2=0}^{n_1-1,\,n_2-1} X_{k_1 k_2} \sum_{t_1,t_2=\tau}^{n_1\tau,\,n_2\tau} \Psi_{j_1 j_2}(t_1,t_2)\,\Psi_{k_1 k_2}(t_1,t_2) \right]\\ &\equiv 0. \end{aligned} \qquad (19)$$
The best coefficients $X_{k_1 k_2}$ are the solution of the 2nd-order tensor equation,
$$A_{j_1 j_2 k_1 k_2}\,X_{k_1 k_2} = Y_{j_1 j_2}, \qquad (20)$$
where the summation is on $k_1$ and $k_2$,
$$A_{j_1 j_2 k_1 k_2} = \sum_{t_1,t_2=\tau}^{n_1\tau,\,n_2\tau} \Psi_{j_1 j_2}(t_1,t_2)\,\Psi_{k_1 k_2}(t_1,t_2). \qquad (21)$$
With the visual aid of FIG. 41a, the tensor $Y_{j_1 j_2}$ reduces as follows:
$$\begin{aligned} Y_{j_1 j_2} &= \sum_{t_1,t_2=\tau}^{n_1\tau,\,n_2\tau} X(t_1,t_2)\,\Psi_{j_1 j_2}(t_1,t_2)\\ &= \sum_{t_1,t_2=\tau}^{n_1\tau,\,n_2\tau} X(t_1,t_2)\,F(t_1-j_1\tau,\,t_2-j_2\tau)\\ &= \sum_{t_1=(j_1-1)\tau}^{(j_1+1)\tau}\; \sum_{t_2=(j_2-1)\tau}^{(j_2+1)\tau} X(t_1,t_2)\,F(t_1-j_1\tau,\,t_2-j_2\tau) \end{aligned} \qquad (23)$$
for $(j_1=0,1,\ldots,n_1-1)$ and $(j_2=0,1,\ldots,n_2-1)$, where $F(m_1,m_2)$ is the doubly periodic, six-sided pyramidal function shown in FIG. 41a. The tensor transform in Equation 21 is treated in a similar fashion to obtain
$$\begin{aligned} A_{j_1 j_2 k_1 k_2} &= \sum_{t_1,t_2=\tau}^{n_1\tau,\,n_2\tau} \Psi_{j_1 j_2}(t_1,t_2)\,\Psi_{k_1 k_2}(t_1,t_2)\\ &= \sum_{m_1,m_2=-\tau+1}^{\tau-1} F\big(m_1+(j_1-k_1)\tau,\; m_2+(j_2-k_2)\tau\big)\,F(m_1,m_2)\\ &= \begin{cases} \displaystyle\sum_{m_1,m_2=-\tau+1}^{\tau-1} \left[F(m_1,m_2)\right]^2 \triangleq \alpha, & (j_1-k_1)\equiv 0 \bmod n_1 \;\wedge\; (j_2-k_2)\equiv 0 \bmod n_2\\[1ex] \displaystyle\sum_{m_1,m_2=-\tau+1}^{\tau-1} F(m_1\pm\tau,\,m_2)\,F(m_1,m_2) \triangleq \beta, & (j_1-k_1)\equiv \pm 1 \bmod n_1 \;\wedge\; (j_2-k_2)\equiv 0 \bmod n_2\\[1ex] \displaystyle\sum_{m_1,m_2=-\tau+1}^{\tau-1} F(m_1,\,m_2\pm\tau)\,F(m_1,m_2) \triangleq \gamma, & (j_1-k_1)\equiv 0 \bmod n_1 \;\wedge\; (j_2-k_2)\equiv \pm 1 \bmod n_2\\[1ex] \displaystyle\sum_{m_1,m_2=-\tau+1}^{\tau-1} F(m_1\pm\tau,\,m_2\pm\tau)\,F(m_1,m_2) \triangleq \xi, & (j_1-k_1)\equiv \pm 1 \bmod n_1 \;\wedge\; (j_2-k_2)\equiv \pm 1 \bmod n_2\\[1ex] \displaystyle\sum_{m_1,m_2=-\tau+1}^{\tau-1} F(m_1\mp\tau,\,m_2\pm\tau)\,F(m_1,m_2) \triangleq \eta, & (j_1-k_1)\equiv \mp 1 \bmod n_1 \;\wedge\; (j_2-k_2)\equiv \pm 1 \bmod n_2 \end{cases} \end{aligned} \qquad (25)$$
The values of α, β, γ, ξ, and η depend on τ and on the shape and orientation of the hexagonal tent with respect to the image domain, where, for example, $m_1$ and $m_2$ represent row and column indices. For greater flexibility in tailoring the hexagonal tent function, it is possible to utilize all parameters of $[A_{j_1 j_2 k_1 k_2}]$. However, to minimize calculational overhead it is preferable to employ symmetric hexagons, disposed over the image domain with a bidirectional period τ. Under these conditions, β=γ=ξ and η=0, simplifying $[A_{j_1 j_2 k_1 k_2}]$ considerably. Specifically, the hexagonal tent depicted in FIG. 41a, with the orientation depicted in FIG. 41b, is described by the preferred case in which β=γ=ξ and η=0. It will be appreciated that other orientations and shapes of the hexagonal tent are possible, as depicted, for example, in FIG. 41c. Combinations of hexagonal tents are also possible and embody specific preferable attributes.
For example, a superposition of the hexagonal tents shown in FIGS. 41b and 41c effectively "symmetrizes" the compression process.
 From Equation 25 above, A_{j1j2k1k2 }can be expressed in circulant form by the following expression:
$$A_{j_1 j_2 k_1 k_2} = a_{(k_1-j_1)_{n_1},\,(k_2-j_2)_{n_2}}, \qquad (26)$$
where $(k_l - j_l)_{n_l}$ denotes $(k_l - j_l) \bmod n_l$, $l=1,2$, and
$$[a_{s_1 s_2}] = \begin{bmatrix} a_{00} & a_{01} & a_{02} & \cdots & a_{0,n_2-1}\\ a_{10} & a_{11} & a_{12} & \cdots & a_{1,n_2-1}\\ a_{20} & a_{21} & a_{22} & \cdots & a_{2,n_2-1}\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ a_{n_1-1,0} & a_{n_1-1,1} & a_{n_1-1,2} & \cdots & a_{n_1-1,n_2-1} \end{bmatrix} = \begin{bmatrix} \alpha & \beta & 0 & \cdots & 0 & \beta\\ \beta & \beta & 0 & \cdots & 0 & 0\\ 0 & 0 & 0 & \cdots & 0 & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\ 0 & 0 & 0 & \cdots & 0 & 0\\ \beta & 0 & 0 & \cdots & 0 & \beta \end{bmatrix}, \qquad (27)$$
where $(s_1=0,1,2,\ldots,n_1-1)$ and $(s_2=0,1,2,\ldots,n_2-1)$. Note that when $[a_{s_1 s_2}]$ is represented in matrix form, it is "block circulant."
C. Compression-Reconstruction Algorithms
Because the objective is to apply the above-disclosed LMS-error linear spline interpolation techniques to image sequence coding, it is advantageous to utilize the tensor formalism during the course of the analysis in order to readily solve the linear systems in Equations 8 and 20. Here, the tensor summation convention is used in the analysis for one and two dimensions. It will be appreciated that such convention may readily apply to the general case of N dimensions.
 1. Linear Transformation of Tensors
 A linear transformation of a 1storder tensor is written as
$$Y_r = A_{rs}X_s \;\text{(sum on } s\text{)}, \qquad (28)$$
where $A_{rs}$ is a linear transformation, and $Y_r$, $X_s$ are 1st-order tensors. Similarly, a linear transformation of a second-order tensor is written as:
$$Y_{r_1 r_2} = A_{r_1 r_2 s_1 s_2}\,X_{s_1 s_2} \;\text{(sum on } s_1, s_2\text{)}. \qquad (29)$$
 The product or composition of linear transformations is defined as follows. When the above Equation 29 holds, and
$$Z_{q_1 q_2} = B_{q_1 q_2 r_1 r_2}\,Y_{r_1 r_2}, \qquad (30)$$
 then
$$Z_{q_1 q_2} = B_{q_1 q_2 r_1 r_2}\,A_{r_1 r_2 s_1 s_2}\,X_{s_1 s_2}. \qquad (31)$$
 Hence,
$$C_{q_1 q_2 s_1 s_2} = B_{q_1 q_2 r_1 r_2}\,A_{r_1 r_2 s_1 s_2} \qquad (32)$$
 is the composition or product of two linear transformations.
2. Circulant Transformation of 1st-Order Tensors
The tensor method for solving Equations 8 and 20 is illustrated below for the one-dimensional case. Letting $A_{rs}$ represent a circulant tensor of the form:
$$A_{rs} = a_{(s-r)\bmod n} \quad \text{for } (r,s=0,1,2,\ldots,n-1), \qquad (33)$$
and considering the n special 1st-order tensors as
$$W_s^{(l)} \equiv (\omega^l)^s \quad \text{for } (l=0,1,2,\ldots,n-1), \qquad (34)$$
 where ω is the nth root of unity, then
$$A_{rs}\,W_s^{(l)} = \lambda(l)\,W_r^{(l)}, \qquad (35)$$
where

$$\lambda(l) = \sum_{s=0}^{n-1} a_s\,\omega^{ls} \qquad (36)$$

are the distinct eigenvalues of $A_{rs}$. The terms $W_s^{(l)}$ are orthogonal:
$$W_s^{(l)}\,W_s^{(j)*} = \begin{cases} 0 & \text{for } l\neq j\\ n & \text{for } l=j. \end{cases} \qquad (37)$$
At this point it is convenient to normalize these tensors as follows:
$$\varphi_s^{(l)} \triangleq \frac{1}{\sqrt{n}}\,W_s^{(l)} \quad \text{for } (l=0,1,2,\ldots,n-1). \qquad (38)$$
$\varphi_s^{(l)}$ evidently also satisfies the orthonormal property, i.e.,
$$\varphi_s^{(l)}\,\varphi_s^{(j)*} = \delta_{lj}, \qquad (39)$$
 where δ_{lj }is the Kronecker delta function and * represents complex conjugation.
A linear transformation is formed by summing the n dyads $\varphi_r^{(l)}\varphi_s^{(l)*}$, weighted by $\lambda(l)$, for $l=0,1,\ldots,n-1$ as follows:
$$\tilde{A}_{rs} = \sum_{l=0}^{n-1} \lambda(l)\,\varphi_r^{(l)}\varphi_s^{(l)*}. \qquad (40)$$
Then
$$\begin{aligned} \tilde{A}_{rs}\,\varphi_s^{(j)} &= \sum_{l=0}^{n-1} \lambda(l)\,\varphi_r^{(l)}\varphi_s^{(l)*}\varphi_s^{(j)}\\ &= \sum_{l=0}^{n-1} \lambda(l)\,\varphi_r^{(l)}\,\delta_{lj}\\ &= \lambda(j)\,\varphi_r^{(j)}. \end{aligned} \qquad (41)$$
Since $\tilde{A}_{rs}$ has, by simple verification, the same eigenvectors and eigenvalues as the transformation $A_{rs}$ of Equations 9 and 33, the transformations $\tilde{A}_{rs}$ and $A_{rs}$ are equal.
3. Inverse Transformation of 1st-Order Tensors
The inverse transformation is given by

$$A_{st}^{-1} = \sum_{l=0}^{n-1} \frac{1}{\lambda(l)}\,\varphi_s^{(l)}\varphi_t^{(l)*}. \qquad (42)$$
 This is proven easily, as shown below:
$$\begin{aligned} A_{rs}\,A_{st}^{-1} &= \sum_{l=0}^{n-1}\sum_{l'=0}^{n-1} \lambda(l)\,\frac{1}{\lambda(l')}\,\varphi_r^{(l)}\varphi_s^{(l)*}\varphi_s^{(l')}\varphi_t^{(l')*}\\ &= \sum_{l=0}^{n-1}\sum_{l'=0}^{n-1} \lambda(l)\,\frac{1}{\lambda(l')}\,\varphi_r^{(l)}\,\delta_{ll'}\,\varphi_t^{(l')*} = \sum_{l=0}^{n-1} \varphi_r^{(l)}\varphi_t^{(l)*}\\ &= \sum_{l=0}^{n-1} \frac{1}{n}\,(\omega^{l})^{r-t} = \sum_{l=0}^{n-1} \frac{1}{n}\,(\omega^{r-t})^{l} = \delta_{rt}. \end{aligned} \qquad (43)$$
4. Solving 1st-Order Tensor Equations
The solution of a 1st-order tensor equation $Y_r = A_{rs}X_s$ is given by
$$A_{qr}^{-1}Y_r = A_{qr}^{-1}A_{rs}X_s = \delta_{qs}X_s = X_q, \qquad (44)$$
 so that
$$\begin{aligned} X_r &= A_{rs}^{-1}Y_s = \sum_{l=0}^{n-1} \frac{1}{\lambda(l)}\,\varphi_r^{(l)}\varphi_s^{(l)*}\,Y_s\\ &= \sum_{l=0}^{n-1} \left[\frac{\varphi_s^{(l)*}Y_s}{\lambda(l)}\right]\varphi_r^{(l)} = \sum_{l=0}^{n-1} \left[\frac{1}{\lambda(l)}\left[\frac{1}{n}\sum_{k=0}^{n-1} Y_k\,\omega^{-lk}\right]\right]\omega^{lr}\\ &= \mathrm{DFT}\left[\frac{1}{\lambda(l)}\,\mathrm{DFT}^{-1}(Y_k)\right], \end{aligned} \qquad (45)$$
where DFT denotes the discrete Fourier transform and DFT⁻¹ denotes the inverse discrete Fourier transform.
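Equation 45 can be sketched numerically as follows. The mapping of DFT/DFT⁻¹ onto NumPy's `fft`/`ifft` is an assumption (for the symmetric generator used here both orderings agree), and the values of n, α, and β are illustrative:

```python
import numpy as np

# Solve the circulant system A x = y by pointwise division in the
# conjugate domain: lambda(l) = DFT(a_j), x = IDFT(DFT(y) / lambda).
def solve_circulant_system(gen, y):
    lam = np.fft.fft(gen)                     # eigenvalues of the circulant A
    return np.real(np.fft.ifft(np.fft.fft(y) / lam))

n, alpha, beta = 16, 4.0, 1.0                 # illustrative filter parameters
gen = np.zeros(n)
gen[0], gen[1], gen[-1] = alpha, beta, beta   # generator (alpha, beta, 0, ..., beta)
A = np.array([np.roll(gen, k) for k in range(n)])
y = np.cos(2 * np.pi * np.arange(n) / n)      # sample right-hand-side data
x = solve_circulant_system(gen, y)
```

Multiplying the result back through the full circulant matrix recovers the original data, confirming the inversion.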
 An alternative view of the above solution method is derived below for one dimension using standard matrix methods. A linear transformation of a 1storder tensor can be represented by a matrix. For example, let A denote A_{rs }in matrix form. If A_{rs }is a circulant transformation, then A is also a circulant matrix. From matrix theory it is known that every circulant matrix is “similar” to a DFT matrix. If Q denotes the DFT matrix of dimension (n×n), and Q^{t }the complex conjugate of the DFT matrix, and Λ is defined to be the eigenmatrix of A, then:
 A=QΛQ^{t}. (46)
 The solution to y=Ax is then
 x=A^{−1}y=QΛ^{−1}(Q^{t}y).
For the one-dimensional process described above, the eigenvalues of the transformation operators are:
$$\lambda(l) = \sum_{j=0}^{n-1} a_j\,(\omega^{l})^{j} = \mathrm{DFT}(a_j), \qquad (47)$$
where $a_0=\alpha$, $a_1=\beta$, …, $a_{n-2}=0$, $a_{n-1}=\beta$, and $\omega^n=1$. Hence:
$$\lambda(l) = \alpha + \beta\omega^{l} + \beta\omega^{(n-1)l} = \alpha + \beta(\omega^{l} + \omega^{-l}). \qquad (48)$$
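Equation 48 can be verified numerically in a few lines (α, β, and n are illustrative values, and NumPy's FFT sign convention is assumed): the DFT of the generator (α, β, 0, …, 0, β) is real and equals α + 2β·cos(2πl/n) for every l.

```python
import numpy as np

# Generator of the circulant correlation operator.
n, alpha, beta = 8, 6.0, 1.0
a = np.zeros(n)
a[0], a[1], a[-1] = alpha, beta, beta

lam = np.fft.fft(a)                       # lambda(l) = DFT(a_j), Equation 47
l = np.arange(n)
expected = alpha + 2 * beta * np.cos(2 * np.pi * l / n)   # Equation 48
assert np.allclose(lam.real, expected) and np.allclose(lam.imag, 0.0)
```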
A direct extension of the 1st-order tensor concept to the 2nd-order tensor will be apparent to those skilled in the art. By solving the 2nd-order tensor equations, the results are extended to compress a 2-D image. FIG. 42 depicts three possible hexagonal tent functions for two-dimensional image compression, with decimation indices τ = 2, 3, 4. The following table exemplifies the relevant parameters for implementing the hexagonal tent functions:
Decimation index (τ)       τ = 2        τ = 3               τ = 4
Compression ratio (τ^2)    4            9                   16
α                          a^2 + 6b^2   a^2 + 6b^2 + 12c^2  a^2 + 6b^2 + 12c^2 + 18d^2
β                          b^2          2(c^2 + bc)         2d^2 + 2db + 4dc + c^2
gain                       a + 6b       a + 6b + 12c        a + 6b + 12c + 18d

The algorithms for compressing and reconstructing a still image are explained in the succeeding sections.
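The τ = 2 column of the table can be reproduced directly from the tent samples. The sample heights a (center) and b (the six surrounding nonzero samples) are assumed placeholder values here; the actual heights depend on the chosen tent:

```python
# alpha is the tent's autocorrelation at zero offset, beta the
# autocorrelation at a neighboring offset, and the gain is the sum of
# all tent samples (the DC response).
a, b = 1.0, 0.5                 # illustrative tent sample heights
alpha = a**2 + 6 * b**2
beta = b**2
gain = a + 6 * b
assert (alpha, beta, gain) == (2.5, 0.25, 4.0)
```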
III. OVERVIEW OF CODING-RECONSTRUCTION SCHEME
A block diagram of the compression/reconstruction scheme is shown in FIG. 43. The signal source 1002, which can have dimension up to N, is first passed through a lowpass filter (LPF). This lowpass filter is implemented by convolving (in a process block 1014) a chosen spline filter 1013 with the input source 1002. For example, the normalized frequency response 1046 of a one-dimensional linear spline is shown in FIG. 44. Referring again to FIG. 43, it can be seen that immediately following the LPF, a subsampling procedure is used to reduce the signal size 1016 by a factor τ. The information contained in the subsampled source is not optimized in the least-mean-square sense. Thus, an optimization procedure is needed to obtain the best reconstruction weights. The optimization process can be divided into three consecutive parts. A DFT 1020 maps the non-optimized weights into the image conjugate domain. Thereafter, an inverse eigenfilter process 1022 optimizes the compressed data. The frequency response plots for some typical eigenfilters and inverse eigenfilters are shown in FIGS. 45 and 46. After the inverse eigenfilter 1022, a DFT⁻¹ process block 1024 maps its input back to the original image domain. When the optimized weights are derived, reconstruction can proceed. The reconstruction can be viewed as oversampling followed by a reconstruction lowpass filter.

 which improves with the size of the image.
 A. The Compression Method
 The coding method is specified in the following steps:
1. A suitable value of τ (an integer) is chosen. The compression ratio is τ^2 for two-dimensional images.
 2. Equation 23 is applied to find Y_{j1,j2}, which is the compressed data to be transmitted or stored:
$$\begin{aligned} Y_{j_1 j_2} &= \sum_{t_1,t_2=\tau}^{n_1\tau,\,n_2\tau} X(t_1,t_2)\,\Psi_{j_1 j_2}(t_1,t_2)\\ &= \sum_{t_1,t_2=\tau}^{n_1\tau,\,n_2\tau} X(t_1,t_2)\,F(t_1-j_1\tau,\,t_2-j_2\tau)\\ &= \sum_{t_1=(j_1-1)\tau}^{(j_1+1)\tau}\; \sum_{t_2=(j_2-1)\tau}^{(j_2+1)\tau} X(t_1,t_2)\,F(t_1-j_1\tau,\,t_2-j_2\tau). \end{aligned}$$
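The compression step above can be sketched as a tent-weighted local sum followed by τ-fold subsampling. The sketch assumes τ = 2 and an illustrative 3×3 hexagonal tent with center height a and neighbor height b (zeros at two opposite corners give the six-sided support); it is not the patent's exact kernel:

```python
import numpy as np

def compress(X, F, tau):
    # Each compressed sample Y[j1, j2] is the correlation of the tent F
    # with the image patch centered on the subsampling grid point, using
    # wrap-around padding for the doubly periodic boundary condition.
    r = F.shape[0] // 2
    Xp = np.pad(X, r, mode="wrap")
    n1, n2 = X.shape
    Y = np.zeros((n1 // tau, n2 // tau))
    for j1 in range(Y.shape[0]):
        for j2 in range(Y.shape[1]):
            c1, c2 = j1 * tau + r, j2 * tau + r
            Y[j1, j2] = np.sum(Xp[c1 - r:c1 + r + 1, c2 - r:c2 + r + 1] * F)
    return Y

a, b = 1.0, 0.5                     # illustrative tent heights
F = np.array([[0, b, b],
              [b, a, b],
              [b, b, 0]])           # six-sided support for tau = 2
Y = compress(np.ones((8, 8)), F, tau=2)
```

For a constant input image every compressed sample equals the kernel's gain a + 6b, a quick sanity check of the weighting.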
 The reconstruction method is shown below in the following steps:
 1. Find the FFT^{−1 }of Y_{j1,j2 }(the compressed data).
 2. The results of step 1 are divided by the eigenvalues λ(l, m) set forth below. The eigenvalues λ(l,m) are found by extending Equation 48 to the twodimensional case to obtain:
$$\lambda(l,m) = \alpha + \beta\left(\omega_1^{l} + \omega_1^{-l} + \omega_2^{m} + \omega_2^{-m} + \omega_1^{l}\omega_2^{-m} + \omega_1^{-l}\omega_2^{m}\right), \qquad (49)$$
 where ω_{1 }is the n_{1}th root of unity and ω_{2 }is the n_{2}th root of unity.
 3. The FFT of the results from step 2 is then taken. After computing the FFT, X_{k} _{ 1 } _{k} _{ 2 }(the optimized weights) are obtained.
4. The approximated image $S(t_1,t_2)$ is formed by interpolating the optimized weights with the spline basis: $S(t_1,t_2) = \sum_{k_1,k_2=0}^{n_1-1,\,n_2-1} X_{k_1 k_2}\,\Psi_{k_1 k_2}(t_1,t_2)$.
 5. Preferably, the residue is computed and retained with the optimized weights:
 ΔX(t_{1}, t_{2})=X(t_{1},t_{2})−S(t_{1}, t_{2}).
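Steps 1-3 of the reconstruction can be sketched as a single deconvolution by the eigenvalue array of Equation 49, here written in its equivalent cosine form. The mapping onto NumPy's `fft2`/`ifft2` convention and the values of α and β are assumptions for illustration:

```python
import numpy as np

def optimize_weights(Y, alpha, beta):
    # lambda(l, m) = alpha + 2*beta*[cos th1 + cos th2 + cos(th1 - th2)],
    # the 2-D extension of Equation 48; dividing by it in the conjugate
    # domain is the inverse eigenfilter.
    n1, n2 = Y.shape
    th1 = 2 * np.pi * np.arange(n1)[:, None] / n1
    th2 = 2 * np.pi * np.arange(n2)[None, :] / n2
    lam = alpha + 2 * beta * (np.cos(th1) + np.cos(th2) + np.cos(th1 - th2))
    return np.real(np.fft.fft2(np.fft.ifft2(Y) / lam))
```

Circularly convolving the result with the generator kernel (α at the origin, β at the six hexagonal neighbors) recovers the compressed data, which is the defining property of the optimized weights.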
 Although the optimizing procedure outlined above appears to be associated with an image reconstruction process, it may be implemented at any stage between the aforementioned compression and reconstruction. It is preferable to implement the optimizing process immediately after the initial compression so as to minimize the residual image. The preferred order has an advantage with regard to storage, transmission and the incorporation of subsequent image processes.
 C. Response Considerations
The optimization process applied in the image conjugate domain is the inverse eigenfilter

$$H(i,j) = \frac{1}{\lambda(i,j)},$$
 where λ(i,j) can be considered as an estimation of the frequency response of the combined decimation and interpolation filters. The optimization process H(i,j) attempts to “undo” what is done in the combined decimation/interpolation process. Thus, H(i,j) tends to restore the original signal bandwidth. For example, for τ=2, the decimation/interpolation combination is described as having an impulse response resembling that of the following 3×3 kernel:
$$R = \begin{pmatrix} 0 & \beta & \beta\\ \beta & \alpha & \beta\\ \beta & \beta & 0 \end{pmatrix}. \qquad (52)$$
Then, its conjugate domain counterpart, $\lambda(i,j)\big|_{\alpha,\beta,N}$, will be
$$\lambda(i,j)\big|_{\alpha,\beta,N} \equiv \alpha + 2\beta\left[\cos\left(\frac{2\pi i}{N}\right) + \cos\left(\frac{2\pi j}{N}\right) + \cos\left(2\pi\left(\frac{i}{N}-\frac{j}{N}\right)\right)\right], \qquad (53)$$
where i, j are frequency indices and N represents the number of frequency terms. Hence, the implementation accomplished in the image conjugate domain is the conjugate equivalent of the inverse of the above 3×3 kernel. This relationship will be utilized more explicitly for the embodiment disclosed in Section V.
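The relationship between the 3×3 kernel R and Equation 53 can be checked numerically: embedding R periodically in an N×N array and taking its 2-D DFT reproduces the cosine expression (N, α, and β are illustrative; NumPy's DFT convention is assumed):

```python
import numpy as np

N, alpha, beta = 8, 4.0, 1.0
# Periodic embedding of R: alpha at the origin, beta at the six
# hexagonal neighbor offsets, matching the zeros in R's two corners.
K = np.zeros((N, N))
K[0, 0] = alpha
for u, v in [(-1, 0), (1, 0), (0, -1), (0, 1), (-1, 1), (1, -1)]:
    K[u % N, v % N] = beta

i = np.arange(N)[:, None]
j = np.arange(N)[None, :]
lam = alpha + 2 * beta * (np.cos(2 * np.pi * i / N) + np.cos(2 * np.pi * j / N)
                          + np.cos(2 * np.pi * (i - j) / N))
assert np.allclose(np.fft.fft2(K).real, lam)   # Equation 53 is the DFT of R
```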
 IV. NUMERICAL SIMULATIONS
A. One-Dimensional Case
For a one-dimensional implementation, two types of signals are demonstrated. A first test is a cosine signal, which is useful for observing the relationship between the standard error, the size of τ, and the signal frequency. The standard error is defined herein to be the square root of the average squared error:
$$\left[\frac{1}{N}\sum_{t}\big(\Delta X(t)\big)^2\right]^{1/2}.$$
A second one-dimensional signal is taken from one line of a greyscale still image, which is considered to be realistic data for practical image compression.
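The standard error defined above is simply the root-mean-square of the residue, computed as:

```python
import numpy as np

def standard_error(dx):
    # Square root of the average squared residue Delta X(t).
    return np.sqrt(np.mean(np.square(dx)))

err = standard_error(np.array([3.0, -4.0]))   # mean of (9, 16) is 12.5
assert err == np.sqrt(12.5)
```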
FIG. 47 shows the plots of standard error versus frequency of the cosine signal for different degrees of decimation τ 1056. The general trend is that as the input signal frequency becomes higher, the standard error increases. In the low-frequency range, smaller values of τ yield better performance. One anomalous case occurs for τ=2 and a normalized input frequency of 0.25: for this particular situation, the linear spline and the cosine signal at discrete grid points match perfectly, so the standard error is substantially equal to 0.
Another test example comes from one line of realistic still image data. FIGS. 48a and 48b show the reconstructed signal waveform 1060 for τ=2 and τ=4, respectively, superimposed on the original image data 1058. FIG. 48a shows a good quality of reconstruction for τ=2. For τ=4, in FIG. 48b, some of the high frequency components are lost due to the combined decimation/interpolation procedure. FIG. 48c presents the error plot 1062 for this particular test example. It will be appreciated that the nonlinear error accumulation versus decimation parameter τ may be exploited to minimize the combination of optimized weights and image residue.
B. Two-Dimensional Case
For the two-dimensional case, realistic still image data are used as the test. FIGS. 49 and 50 show the original and reconstructed images for τ=2 and τ=4. For τ=2, the reconstructed image 1066, 1072 is substantially similar to the original. However, for τ=4, there are zigzag patterns along specific edges in the images, because the interpolation less accurately tracks the high frequency components. As described earlier, substantially complete reconstruction is achieved by retaining the minimized residue ΔX and adding it back to the approximated image. In the next section, several methods are proposed for implementing this process. FIG. 51 shows the error plots as functions of τ for both images.
An additional aspect of interest is to look at the optimized weights directly. When these optimal weights are viewed in picture form, high-quality miniatures 1080, 1082 of the original image are obtained, as shown in FIG. 52. Hence, the present embodiment is a very powerful and accurate method for creating a "thumbnail" reproduction of the original image.
 V. ALTERNATIVE EMBODIMENTS
Video compression is a major component of high-definition television (HDTV). According to the present invention, video compression is formulated as an equivalent three-dimensional approximation problem, and is amenable to the technique of optimum linear or, more generally, hyperplanar spline interpolation. The main advantages of this approach are its fast coding/reconstruction speed, its suitability for a VLSI hardware implementation, and a variable compression ratio. A principal advantage of the present invention is the versatility with which it is incorporated into other compression systems. The invention can serve as a "front-end" compression platform from which other signal processes are applied. Moreover, the invention can be applied iteratively, in multiple dimensions, and in either the image or image conjugate domain. The optimizing method can, for example, be applied to a compressed image and further applied to a corresponding compressed residual image. Due to the inherent lowpass filtering nature of the interpolation process, some edges and other high-frequency features may not be preserved in the reconstructed images, but they are retained through the residue. To address this problem, the following procedures are set forth:
 Procedure (a)
Since the theoretical formulation, derivation, and implementation of the disclosed compression method do not depend strongly on the choice of the interpolation kernel function, other kernel functions can be applied and their performances compared. So far, due to its simplicity and excellent performance, only the linear spline function has been applied. Higher-order splines, such as the quadratic or cubic spline, could also be employed. Aside from the polynomial spline functions, other more complicated function forms can be used.
 Procedure (b)
Another way to improve the compression method is to apply certain adaptive techniques. FIG. 53 illustrates such an adaptive scheme. For a 2-D image 1002, the whole image can be divided into subimages of smaller size 1084. Since different subimages have different local features and statistics, different compression schemes can be applied to these different subimages. An error criterion is evaluated in a process step 1086. If the error is below a certain threshold determined in a process step 1088, a higher compression ratio is chosen for that subimage. If the error goes above this threshold, then a lower compression ratio is chosen in a step 1092 for that subimage. Both multi-kernel functions 1090 and multiple local compression ratios provide good adaptive modification.
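The threshold logic of FIG. 53 can be sketched as follows. The block size, threshold, candidate ratios, and the `compress`/`reconstruct` callables are all illustrative placeholders, not defined by the text:

```python
import numpy as np

def choose_tau(block, threshold, compress, reconstruct, taus=(4, 3, 2)):
    # Try the highest compression ratio first; accept it if the
    # reconstruction error for this sub-image stays below the threshold.
    for tau in taus:
        approx = reconstruct(compress(block, tau), tau)
        err = np.sqrt(np.mean((block - approx) ** 2))
        if err < threshold:
            return tau          # highest ratio meeting the error bound
    return 1                    # fall back to no decimation
```

With lossless placeholder codecs the error is zero and the first (highest) candidate is accepted; a real codec would fall through to lower ratios on busy sub-images.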
 Procedure (c)
Subband coding techniques have been widely used in digital speech coding. Recently, subband coding has also been applied to digital image data compression. The basic approach of subband coding is to split the signal into a set of frequency bands, and then to compress each subband with an efficient compression algorithm which matches the statistics of that band. The subband coding techniques divide the whole frequency band into smaller frequency subbands. Then, when these subbands are demodulated into the baseband, the resulting equivalent bandwidths are greatly reduced. Since the subbands have only low frequency components, one can use the above-described linear or planar spline data compression technique for coding these data. A 16-band filter compression system is shown in FIG. 54, and the corresponding reconstruction system in FIG. 55. There are, of course, many ways to implement this filter bank, as will be appreciated by those skilled in the art. For example, a common method is to exploit the Quadrature Mirror Filter structure.
 V. IMAGE DOMAIN IMPLEMENTATION
 The embodiments described earlier utilize a spline filter optimization process in the image conjugate domain using an FFT processor or equivalent thereof. The present invention also provides an equivalent image domain implementation of a spline filter optimization process which presents distinct advantages with regard to speed, memory and process application.
Referring back to Equation 45, it will be appreciated that the transform processes DFT and DFT⁻¹ may be subsumed into an equivalent image domain convolution, shown here briefly:
$$\begin{aligned} X_j &= \mathrm{DFT}\left[\frac{1}{\lambda_m}\,\mathrm{DFT}^{-1}(Y_k)\right]\\ &= \mathrm{DFT}\left[\mathrm{DFT}^{-1}\left[\mathrm{DFT}\left(\frac{1}{\lambda_m}\right)\right]\,\mathrm{DFT}^{-1}(Y_k)\right]. \end{aligned} \qquad (54)$$
If $\Omega = \mathrm{DFT}(1/\lambda_m)$, then:
 X_{j}=DFT[DFT^{−1}(Ω) DFT^{−1}(Y_{k})]=Ω*Y_{k}.
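The Ω * Y identity can be checked numerically in one dimension (NumPy's transform convention and the values of n, α, and β are assumptions for illustration): applying the inverse eigenfilter in the conjugate domain gives the same result as circular convolution with the precomputed image-domain kernel.

```python
import numpy as np

n, alpha, beta = 16, 4.0, 1.0
lam = alpha + 2 * beta * np.cos(2 * np.pi * np.arange(n) / n)  # Equation 48
omega = np.fft.ifft(1.0 / lam).real       # image-domain optimizer kernel
y = np.sin(2 * np.pi * np.arange(n) * 3 / n)   # sample compressed data

# Route 1: inverse eigenfilter in the conjugate domain.
via_eigenfilter = np.real(np.fft.ifft(np.fft.fft(y) / lam))
# Route 2: direct circular convolution with omega in the image domain.
via_convolution = np.array(
    [sum(omega[j] * y[(k - j) % n] for j in range(n)) for k in range(n)])
assert np.allclose(via_eigenfilter, via_convolution)
```

Since Ω depends only on the spline filter, it can be computed once offline, which is the basis of the image-domain implementation described below.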
 Furthermore, with λ_{m}=DFT(a_{j}), the optimization process may be completely carried over to an image domain implementation knowing only the form of the input spline filter function. The transform processes can be performed in advance to generate the image domain equivalent of the inverse eigenfilter. As shown in FIG. 57, the image domain spline optimizer Ω operates on compressed image data Y′ generated by a first convolution process 1014 followed by a decimation process 1016, as previously described. Offline or perhaps adaptively, the tensor transformation A (as shown for example in Equation 25 above) is supplied to an FFT type processor 1032, which computes the transformation eigenvalues λ. The tensor of eigenvalues is then inverted at process block 1034, followed by FFT^{−1 }process block 1036, generating the image domain tensor Ω. The tensor Ω is supplied to a second convolution process 1038, whereupon Ω is convolved with the nonoptimized compressed image data Y′ to yield optimized compressed image data Y″.
The kernel Ω should be large enough to represent the inverse eigenfilter accurately.
On the other hand, the term Ω should be small enough to be computationally tractable for the online convolution process 1038. It has been found that two-dimensional image compression using the preferred hexagonal tent spline is adequately optimized by a 5×5 matrix, and preferably a 7×7 matrix, for example, with the following form:
$$\Omega = \begin{bmatrix} 0 & h & g & g & e & e & g\\ h & f & e & d & c & d & e\\ g & e & c & b & b & c & e\\ g & d & b & a & b & d & g\\ e & c & b & b & c & e & g\\ e & d & c & d & e & f & h\\ g & e & e & g & g & h & 0 \end{bmatrix}.$$
Additionally, to reduce computational overhead, the smallest elements (i.e., the elements near the perimeter) such as f, g, and h may be set to zero with little noticeable effect in the reconstruction.
The principal advantages of the present preferred embodiment are the computational savings above and beyond those of the previously described conjugate domain inverse eigenfilter process (FIG. 38, 1018). For example, a two-dimensional FFT process may typically require about N^2 log_2 N complex operations, or equivalently 6N^2 log_2 N multiplications. The total number of image conjugate filter operations is of order 10N^2 log_2 N. On the other hand, the presently described (7×7) kernel with 5 distinct operations per image element will require only 5N^2 operations, lower by an important factor of log_2 N. Hence, even for reasonably small images, there is significant improvement in computation time.
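The operation-count comparison above is easy to verify arithmetically (the image size N is an illustrative choice):

```python
import math

# Conjugate-domain filtering: ~10 * N^2 * log2(N) multiplications.
# Image-domain 7x7 kernel with 5 distinct coefficients: ~5 * N^2.
N = 512
conjugate_ops = 10 * N**2 * math.log2(N)
kernel_ops = 5 * N**2

assert kernel_ops < conjugate_ops
assert conjugate_ops / kernel_ops == 2 * math.log2(N)  # advantage grows as log2(N)
```

For N = 512 the image-domain kernel is 18× cheaper, and the ratio grows with the image size.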
 Additionally, there is a substantial reduction in buffer demands because the image-domain process 1038 requires only a 7×7 image block at a given time, in contrast to the conjugate process, which requires a full-frame buffer before processing. In addition to the lower computational demands of the image-domain process 1038, there is virtually no latency in transmission because the process is performed in a pipeline. Finally, the “power of 2” constraints desirable for efficient FFT processing are eliminated, allowing convenient application to a wider range of image dimensions.
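The buffering point can be illustrated with a generator that holds only a seven-row sliding window and emits each output row as soon as its supporting input rows have arrived. This is a hypothetical sketch of the pipeline idea, not the patent's hardware; for brevity it skips edge rows and produces only the "valid" region.

```python
import numpy as np

def pipelined_optimize(rows, kernel):
    """Apply a 7x7 kernel to an image delivered one row at a time.

    Only a 7-row window is held in memory, illustrating why the
    image-domain process needs no full-frame buffer: each output row is
    emitted as soon as the 7 input rows covering it are available."""
    window = []
    for row in rows:                      # rows arrive from the channel
        window.append(np.asarray(row, dtype=float))
        if len(window) > 7:
            window.pop(0)                 # discard the oldest row
        if len(window) == 7:
            block = np.stack(window)      # current 7-row strip
            w = block.shape[1]
            out = [float(np.sum(kernel * block[:, j:j + 7]))
                   for j in range(w - 6)]
            yield out                     # one optimized output row
```

With a delta kernel (1 at the center, zeros elsewhere) the generator simply reproduces the interior of the input with a 3-row/3-column latency, which makes the pipelining behavior easy to verify.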
 The above detailed description is intended to be exemplary and not limiting. From this detailed description, taken in conjunction with the appended drawings, the advantages of the present invention will be readily understood by one skilled in the relevant technology. The present apparatus and method provide a unique encoder, compressed file format, and decoder for compressing images and decoding compressed images. The unique compression system increases compression ratios for comparable image quality while achieving relatively quick encoding and decoding times, optimizes the encoding process to accommodate different image types, selectively applies particular encoding methods to a particular image type, layers the image quality components in the compressed image, and generates a file format that allows the addition of other compressed data information.
 While the above detailed description has shown, described and pointed out the fundamental novel features of the invention as applied to various embodiments, it will be understood that various omissions and substitutions and changes in the form and details of the illustrated device may be made by those skilled in the art, without departing from the spirit of the invention.
Claims (17)
1. A method of transferring a progressively-rendered, compressed image over a finite bandwidth channel, comprising:
producing a coarse quality compressed image at a source and transmitting said coarse quality compressed image over a channel as a first part of a transmission to a destination end;
receiving the coarse quality compressed image at a receiver at the destination end at a first time and displaying an image based on said coarse quality compressed image on a display system of the receiver when received at said first time;
creating additional information about the image, at the source end, from which a standard quality image can be displayed, said standard quality image being of a higher quality than said coarse quality image, and sending compressed information over said channel indicative of information for said standard quality image, said sending said standard quality image information occurring subsequent in time to said sending of all of said information for said coarse quality image;
receiving said standard quality information at the receiver at a second time, subsequent to the first time, and decompressing said standard quality image information, to improve the quality of the image displayed on said display system, and to display said standard quality image;
obtaining further information about the image beyond the information in said standard quality image, to provide an enhanced quality image, and compressing said information for said enhanced quality image, said enhanced quality image having more image details than said standard quality image;
transmitting said information for said enhanced quality image, at a time subsequent to transmitting said information for said coarse quality image and said standard quality image; and
receiving said enhanced quality image information at said receiver, at a third time subsequent to said first and second times, and updating a display on said display system to display the additional enhanced quality image.
2. A method as in claim 1, wherein said producing the coarse quality image uses a different compression technique than said creating additional information indicative of the standard quality image.
3. A method as in claim 1, wherein said coarse quality image includes information indicative of a miniature version of an original image, and said displaying the coarse quality image comprises interpolating said miniature to a size of the original image and displaying said image.
4. A method as in claim 2, wherein said creating additional information comprises determining a characteristic of the image; determining which of a plurality of different compression techniques will best compress the characteristic determined; and compressing said image using the determined technique.
5. A method as in claim 4, further comprising determining a plurality of areas in said image, and determining, for each area, which of the plurality of different compression techniques will optimize the compression ratio.
6. A method as in claim 5, further comprising interleaving and channel encoding different portions of the compressed image.
7. A method as in claim 5, wherein said compression techniques include vector quantization and discrete cosine transform.
8. A method as in claim 3, wherein said obtaining a miniature comprises decimating along vertical and horizontal axes.
9. A method of transmitting and displaying a compressed image comprising:
first obtaining and sending a first layer of information indicative of a compressed miniature image at a first time;
first receiving said first layer at said decoder end and decompressing and displaying a first coarse image indicative thereof;
second obtaining and sending information indicative of a compressed improved resolution image having more details than said first coarse image, and transmitting said information at a second time subsequent to said first time; and
second receiving and decompressing said improved resolution image information to provide an updated display which improves the resolution of said first coarse image.
10. A method as in claim 9, wherein said obtaining coarse information comprises:
transmitting information indicative of a compressed miniature of the image;
receiving the compressed miniature of the image;
interpolating the compressed miniature of the image into a full sized image; and
displaying the full sized image.
11. A method as in claim 10, wherein the first coarse image is compressed using a first compression technique and the second image is compressed using a second compression technique which is different from the first compression technique.
12. A method as in claim 11, further comprising determining which of a plurality of different image compression techniques will most efficiently code information indicative of said image.
13. A method as in claim 12, wherein said determining uses a fuzzy logic technique.
14. A method as in claim 11, wherein said first obtaining comprises decimating data on the image to form a reduced quality image, fitting the decimated data to a first model which partially restores source image detail lost by decimation, and calculating reconstruction values from the fitting.
15. A method as in claim 14, further comprising using said reconstruction weights to interpolate the decimated data into a full sized image while minimizing a mean squared error between original image components and interpolated image components.
16. A method as in claim 11, wherein said first step comprises forming miniature versions of the original source image for each of a plurality of primary colors.
17. A method as in claim 9, wherein said first obtaining comprises obtaining a miniature image, and further comprising analyzing the miniature image to classify the image into one of a plurality of classes indicative of which of a plurality of compression techniques will best compress said image.
Priority Applications (3)
Application Number  Priority Date  Filing Date  Title 

US27616194A true  19940714  19940714  
US08/636,170 US5892847A (en)  19940714  19960422  Method and apparatus for compressing images 
US09/283,017 US6453073B2 (en)  19940714  19990331  Method for transferring and displaying compressed images 
Applications Claiming Priority (1)
Application Number  Priority Date  Filing Date  Title 

US09/283,017 US6453073B2 (en)  19940714  19990331  Method for transferring and displaying compressed images 
Related Parent Applications (1)
Application Number  Title  Priority Date  Filing Date  

US08/636,170 Division US5892847A (en)  19940714  19960422  Method and apparatus for compressing images 
Publications (2)
Publication Number  Publication Date 

US20010019630A1 true US20010019630A1 (en)  20010906 
US6453073B2 US6453073B2 (en)  20020917 
Family
ID=23055447
Family Applications (2)
Application Number  Title  Priority Date  Filing Date 

US08/636,170 Expired  Lifetime US5892847A (en)  19940714  19960422  Method and apparatus for compressing images 
US09/283,017 Expired  Lifetime US6453073B2 (en)  19940714  19990331  Method for transferring and displaying compressed images 
Family Applications Before (1)
Application Number  Title  Priority Date  Filing Date 

US08/636,170 Expired  Lifetime US5892847A (en)  19940714  19960422  Method and apparatus for compressing images 
Country Status (8)
Country  Link 

US (2)  US5892847A (en) 
EP (1)  EP0770246A4 (en) 
JP (1)  JP2000511363A (en) 
AU (1)  AU698055B2 (en) 
BR (1)  BR9508403A (en) 
CA (1)  CA2195110A1 (en) 
MX (1)  MX9700385A (en) 
WO (1)  WO1996002895A1 (en) 
Cited By (88)
Publication number  Priority date  Publication date  Assignee  Title 

US20020122067A1 (en) *  20001229  20020905  Geigel Joseph M.  System and method for automatic layout of images in digital albums 
US20030028359A1 (en) *  20010315  20030206  Julian Eggert  Simulation of convolutional network behavior and visualizing internal states of a network 
WO2003036984A1 (en) *  20011026  20030501  Koninklijke Philips Electronics N.V.  Spatial scalable compression 
US6625308B1 (en) *  19990910  20030923  Intel Corporation  Fuzzy distinction based thresholding technique for image segmentation 
US6628827B1 (en)  19991214  20030930  Intel Corporation  Method of upscaling a color image 
US6658399B1 (en) *  19990910  20031202  Intel Corporation  Fuzzy based thresholding technique for image segmentation 
US20040114813A1 (en) *  20021213  20040617  Martin Boliek  Compression for segmented images and other types of sideband information 
US20040114814A1 (en) *  20021213  20040617  Martin Boliek  Layout objects as image layers 
US20050111741A1 (en) *  20031126  20050526  Samsung Electronics Co., Ltd.  Color image residue transformation and/or inverse transformation method and apparatus, and color image encoding and/or decoding method and apparatus using the same 
US6937362B1 (en) *  20000405  20050830  Eastman Kodak Company  Method for providing access to an extended color gamut digital image and providing payment therefor 
US20050196052A1 (en) *  20040302  20050908  Jun Xin  System and method for joint deinterlacing and downsampling using adaptive frame and field filtering 
US20060031554A1 (en) *  20021029  20060209  Lopez Ricardo J  Service diversity for communication system 
US7053944B1 (en)  19991001  20060530  Intel Corporation  Method of using hue to interpolate color pixel signals 
US20060161959A1 (en) *  20050114  20060720  Citrix Systems, Inc.  Method and system for realtime seeking during playback of remote presentation protocols 
US7127525B2 (en) *  20000526  20061024  Citrix Systems, Inc.  Reducing the amount of graphical line data transmitted via a low bandwidth transport protocol mechanism 
US20060269147A1 (en) *  20050531  20061130  Microsoft Corporation  Accelerated image rendering 
US20060274951A1 (en) *  20030227  20061207  TMobile Deutschland Gmbh  Method for the compressed transmission of image data for threedimensional representation of scenes and objects 
US7158178B1 (en)  19991214  20070102  Intel Corporation  Method of converting a subsampled color image 
US20070014478A1 (en) *  20050715  20070118  Samsung Electronics Co., Ltd.  Apparatus, method, and medium for encoding/decoding of color image and video using intercolorcomponent prediction according to coding modes 
US20070030816A1 (en) *  20050808  20070208  Honeywell International Inc.  Data compression and abnormal situation detection in a wireless sensor network 
US20070171490A1 (en) *  20050722  20070726  Samsung Electronics Co., Ltd.  Sensor image encoding and/or decoding system, medium, and method 
US20080008402A1 (en) *  20060710  20080110  Aten International Co., Ltd.  Method and apparatus of removing opaque area as rescaling an image 
US20080046616A1 (en) *  20060821  20080221  Citrix Systems, Inc.  Systems and Methods of Symmetric Transport Control Protocol Compression 
US20080228933A1 (en) *  20070312  20080918  Robert Plamondon  Systems and methods for identifying long matches of data in a compression history 
US20080282298A1 (en) *  20050309  20081113  Prasanna Ganesan  Method and apparatus for supporting file sharing in a distributed network 
US20090007196A1 (en) *  20050309  20090101  Vudu, Inc.  Method and apparatus for sharing media files among network nodes with respect to available bandwidths 
US20090025046A1 (en) *  20050309  20090122  Wond, Llc  Hybrid architecture for media services 
US20090179913A1 (en) *  20080110  20090716  Ali Corporation  Apparatus for image reduction and method thereof 
US20090190848A1 (en) *  20080129  20090730  Seiko Epson Corporation  Image Processing Device and Method for Image Processing 
US20090232393A1 (en) *  20080312  20090917  Megachips Corporation  Image processor 
US20090238477A1 (en) *  20080324  20090924  Megachips Corporation  Image processor 
US20090278916A1 (en) *  20051214  20091112  Masahiro Ito  Image display device 
US7664763B1 (en) *  20031217  20100216  Symantec Operating Corporation  System and method for determining whether performing a particular process on a file will be useful 
US20100086026A1 (en) *  20081003  20100408  Marco Paniconi  Adaptive decimation filter 
US20100095021A1 (en) *  20081008  20100415  Samuels Allen R  Systems and methods for allocating bandwidth by an intermediary for flow control 
US20100124380A1 (en) *  20081120  20100520  Canon Kabushiki Kaisha  Image encoding apparatus and method of controlling the same 
US20100142840A1 (en) *  20081210  20100610  Canon Kabushiki Kaisha  Image encoding apparatus and method of controlling the same 
US20100148483A1 (en) *  20061204  20100617  Ralf Kopp  Sports Equipment and Method for Designing its Visual Appearance 
US7756826B2 (en)  20060630  20100713  Citrix Systems, Inc.  Method and systems for efficient delivery of previously stored content 
US20100239226A1 (en) *  20090319  20100923  Eldon Technology Limited  Archiving broadcast programs 
US20100254675A1 (en) *  20050309  20101007  Prasanna Ganesan  Method and apparatus for instant playback of a movie title 
US7831728B2 (en)  20050114  20101109  Citrix Systems, Inc.  Methods and systems for realtime seeking during realtime playback of a presentation layer protocol data stream 
US7865585B2 (en)  20070312  20110104  Citrix Systems, Inc.  Systems and methods for providing dynamic ad hoc proxycache hierarchies 
US7865027B2 (en)  20041109  20110104  Samsung Electronics Co., Ltd.  Method and apparatus for encoding and decoding image data 
US7872597B2 (en)  20070312  20110118  Citrix Systems, Inc.  Systems and methods of using application and protocol specific parsing for compression 
US7916047B2 (en)  20070312  20110329  Citrix Systems, Inc.  Systems and methods of clustered sharing of compression histories 
WO2011042898A1 (en) *  20091005  20110414  I.C.V.T Ltd.  Apparatus and methods for recompression of digital images 
US7937379B2 (en) *  20050309  20110503  Vudu, Inc.  Fragmentation of a file for instant access 
US20110135284A1 (en) *  20091208  20110609  Echostar Technologies L.L.C.  Systems and methods for selective archival of media content 
US20110208833A1 (en) *  19990311  20110825  Realtime Data LLC DBA IXO  System and Methods For Accelerated Data Storage And Retrieval 
US8063799B2 (en)  20070312  20111122  Citrix Systems, Inc.  Systems and methods for sharing compression histories between multiple devices 
CN102263868A (en) *  20100525  20111130  富士施乐株式会社  An image processing apparatus, image processing method and an image transmitting apparatus 
US8099511B1 (en)  20050611  20120117  Vudu, Inc.  Instantaneous mediaondemand 
US8191008B2 (en)  20051003  20120529  Citrix Systems, Inc.  Simulating multimonitor functionality in a single monitor environment 
US8200828B2 (en)  20050114  20120612  Citrix Systems, Inc.  Systems and methods for single stack shadowing 
US8219635B2 (en)  20050309  20120710  Vudu, Inc.  Continuous data feeding in a distributed environment 
US8230096B2 (en)  20050114  20120724  Citrix Systems, Inc.  Methods and systems for generating playback instructions for playback of a recorded computer session 
US8255570B2 (en)  20070312  20120828  Citrix Systems, Inc.  Systems and methods of compression history expiration and synchronization 
US8296812B1 (en)  20060901  20121023  Vudu, Inc.  Streaming video using erasure encoding 
US8296441B2 (en)  20050114  20121023  Citrix Systems, Inc.  Methods and systems for joining a realtime session of presentation layer protocol data 
US8340130B2 (en)  20050114  20121225  Citrix Systems, Inc.  Methods and systems for generating playback instructions for rendering of a recorded computer session 
US8422851B2 (en)  20050114  20130416  Citrix Systems, Inc.  System and methods for automatic timewarped playback in rendering a recorded computer session 
US8589579B2 (en)  20081008  20131119  Citrix Systems, Inc.  Systems and methods for realtime endpoint application flow control with network structure component 
US8615159B2 (en)  20110920  20131224  Citrix Systems, Inc.  Methods and systems for cataloging text in a recorded session 
WO2014004486A2 (en) *  20120626  20140103  Dunling Li  Low delay low complexity lossless compression system 
US8692695B2 (en)  20001003  20140408  Realtime Data, Llc  Methods for encoding and decoding data 
US20140104289A1 (en) *  20121011  20140417  Samsung Display Co., Ltd.  Compressor, driving device, display device, and compression method 
US8717203B2 (en) *  19981211  20140506  Realtime Data, Llc  Data compression systems and methods 
US8745675B2 (en)  20050309  20140603  Vudu, Inc.  Multiple audio streams 
US8805109B2 (en)  20100429  20140812  I.C.V.T. Ltd.  Apparatus and methods for recompression having a monotonic relationship between extent of compression and quality of compressed image 
US20140286525A1 (en) *  20130325  20140925  Xerox Corporation  Systems and methods for segmenting an image 
US8867610B2 (en)  20010213  20141021  Realtime Data Llc  System and methods for video and audio data distribution 
US8880862B2 (en)  20000203  20141104  Realtime Data, Llc  Systems and methods for accelerated loading of operating systems and application programs 
US8904463B2 (en)  20050309  20141202  Vudu, Inc.  Live video broadcasting on distributed networks 
US8935316B2 (en)  20050114  20150113  Citrix Systems, Inc.  Methods and systems for insession playback on a local machine of remotelystored and real time presentation layer protocol data 
US20150016501A1 (en) *  20130712  20150115  Qualcomm Incorporated  Palette prediction in palettebased video coding 
US8943304B2 (en)  20060803  20150127  Citrix Systems, Inc.  Systems and methods for using an HTTPaware client agent 
WO2015050774A1 (en) *  20131001  20150409  Gopro, Inc.  Image capture accelerator 
US9014471B2 (en)  20100917  20150421  I.C.V.T. Ltd.  Method of classifying a chroma downsampling error 
US9042670B2 (en)  20100917  20150526  Beamr Imaging Ltd  Downsizing an encoded image 
US9143546B2 (en)  20001003  20150922  Realtime Data Llc  System and method for data feed acceleration and encryption 
US9407608B2 (en)  20050526  20160802  Citrix Systems, Inc.  Systems and methods for enhanced client side policy 
US20160241627A1 (en) *  20020129  20160818  FiveOpenBooks, LLC  Method and System for Delivering Media Data 
US9621666B2 (en)  20050526  20170411  Citrix Systems, Inc.  Systems and methods for enhanced delta compression 
US9654777B2 (en)  20130405  20170516  Qualcomm Incorporated  Determining palette indices in palettebased video coding 
US9692725B2 (en)  20050526  20170627  Citrix Systems, Inc.  Systems and methods for using an HTTPaware client agent 
US9953436B2 (en)  20120626  20180424  BTS Software Solutions, LLC  Low delay low complexity lossless compression system 
US10362309B2 (en)  20171204  20190723  Beamr Imaging Ltd  Apparatus and methods for recompression of digital images 
Families Citing this family (161)
Publication number  Priority date  Publication date  Assignee  Title 

US6169820B1 (en) *  19950912  20010102  Tecomac Ag  Data compression process and system for compressing data 
DE19518705C1 (en) *  19950522  19961121  Siemens Ag  Method for encoding image sequences in a transmitter unit 
JP3516534B2 (en) *  19950831  20040405  シャープ株式会社  Video information encoding apparatus and a video information decoding apparatus 
US5987181A (en)  19951012  19991116  Sharp Kabushiki Kaisha  Coding and decoding apparatus which transmits and receives tool information for constructing decoding scheme 
US5764921A (en) *  19951026  19980609  Motorola  Method, device and microprocessor for selectively compressing video frames of a motion compensated predictionbased video codec 
EP0886968A4 (en) *  19960214  19990922  Olivr Corp Ltd  Method and systems for progressive asynchronous transmission of multimedia data 
EP0817121A3 (en) *  19960606  19991222  Matsushita Electric Industrial Co., Ltd.  Image coding method and system 
GB9613039D0 (en) *  19960621  19960828  Philips Electronics Nv  Image data compression for interactive applications 
JP3864400B2 (en) *  19961004  20061227  ソニー株式会社  Image processing apparatus and image processing method 
JP3408094B2 (en) *  19970205  20030519  キヤノン株式会社  Image processing apparatus and method 
US7284187B1 (en) *  19970530  20071016  Aol Llc, A Delaware Limited Liability Company  Encapsulated document and format system 
US5973734A (en)  19970709  19991026  Flashpoint Technology, Inc.  Method and apparatus for correcting aspect ratio in a camera graphical user interface 
EP0899960A3 (en) *  19970829  19990609  Canon Kabushiki Kaisha  Digital signal coding and decoding 
US6281873B1 (en) *  19971009  20010828  Fairchild Semiconductor Corporation  Video line rate vertical scaler 
US6216119B1 (en) *  19971119  20010410  Netuitive, Inc.  Multikernel neural network concurrent learning, monitoring, and forecasting system 
US6141017A (en) *  19980123  20001031  Iterated Systems, Inc.  Method and apparatus for scaling an array of digital data using fractal transform 
US6400776B1 (en) *  19980210  20020604  At&T Corp.  Method and apparatus for high speed data transmission by spectral decomposition of the signaling space 
EP0936813A1 (en) *  19980216  19990818  CANAL+ Société Anonyme  Processing of digital picture data in a decoder 
US7453498B2 (en) *  19980326  20081118  Eastman Kodak Company  Electronic image capture device and image file format providing raw and processed image data 
US6567119B1 (en) *  19980326  20030520  Eastman Kodak Company  Digital imaging system and file format for storage and selective transmission of processed and unprocessed image data 
US6377706B1 (en) *  19980512  20020423  Xerox Corporation  Compression framework incorporating decoding commands 
US6256415B1 (en)  19980610  20010703  Seiko Epson Corporation  Two row buffer image compression (TROBIC) 
US6275614B1 (en) *  19980626  20010814  Sarnoff Corporation  Method and apparatus for block classification and adaptive bit allocation 
US6148106A (en) *  19980630  20001114  The United States Of America As Represented By The Secretary Of The Navy  Classification of images using a dictionary of compressed timefrequency atoms 
US6583890B1 (en) *  19980630  20030624  International Business Machines Corporation  Method and apparatus for improving page description language (PDL) efficiency by recognition and removal of redundant constructs 
US6646761B1 (en) *  19980812  20031111  Texas Instruments Incorporated  Efficient under color removal 
US6266442B1 (en) *  19981023  20010724  Facet Technology Corp.  Method and apparatus for identifying objects depicted in a videostream 
US6298169B1 (en) *  19981027  20011002  Microsoft Corporation  Residual vector quantization for texture pattern compression and decompression 
US6260031B1 (en) *  19981221  20010710  Philips Electronics North America Corp.  Code compaction by evolutionary algorithm 
US6577772B1 (en) *  19981223  20030610  Lg Electronics Inc.  Pipelined discrete cosine transform apparatus 
US6317141B1 (en)  19981231  20011113  Flashpoint Technology, Inc.  Method and apparatus for editing heterogeneous media objects in a digital imaging device 
JP3496559B2 (en) *  19990106  20040216  日本電気株式会社  Image feature value generating device, and image feature amount generating method 
US6604158B1 (en) *  19990311  20030805  Realtime Data, Llc  System and methods for accelerated data storage and retrieval 
FR2792151B1 (en) *  19990408  20040430  Canon Kk  Processes and devices for encoding and decoding digital signals, and systems implementing them 
US6788811B1 (en) *  19990510  20040907  Ricoh Company, Ltd.  Coding apparatus, decoding apparatus, coding method, decoding method, amd computerreadable recording medium for executing the methods 
US6421467B1 (en) *  19990528  20020716  Texas Tech University  Adaptive vector quantization/quantizer 
WO2001001349A1 (en) *  19990628  20010104  Iterated Systems, Inc.  Jpeg image artifacts reduction 
JP3784583B2 (en)  19990813  20060614  沖電気工業株式会社  Voice storage device 
US6779040B1 (en) *  19990827  20040817  HewlettPackard Development Company, L.P.  Method and system for serving data files compressed in accordance with tunable parameters 
US6768817B1 (en)  19990903  20040727  Truong, T.K./ Chen, T.C.  Fast and efficient computation of cubicspline interpolation for data compression 
WO2001018743A1 (en) *  19990903  20010315  Cheng T C  Fast and efficient computation of cubicspline interpolation for data compression 
EP1091576A1 (en)  19990930  20010411  Philips Electronics N.V.  Picture signal processing 
US6393154B1 (en)  19991118  20020521  Quikcat.Com, Inc.  Method and apparatus for digital image compression using a dynamical system 
US6992671B1 (en) *  19991209  20060131  Monotype Imaging, Inc.  Method and apparatus for compressing Bezier descriptions of letterforms in outline fonts using vector quantization techniques 
US7068854B1 (en) *  19991229  20060627  Ge Medical Systems Global Technology Company, Llc  Correction of defective pixels in a detector 
US6600495B1 (en) *  20000110  20030729  Koninklijke Philips Electronics N.V.  Image interpolation and decimation using a continuously variable delay filter and combined with a polyphase filter 
US6700589B1 (en) *  20000217  20040302  International Business Machines Corporation  Method, system, and program for magnifying content downloaded from a server over a network 
AU4566201A (en) *  20000313  20010924  Point Cloud Inc  Twodimensional image compression method 
US6938024B1 (en) *  20000504  20050830  Microsoft Corporation  Transmitting information given constrained resources 
US6781608B1 (en) *  20000630  20040824  America Online, Inc.  Gradual image display 
AU7687101A (en) *  20000711  20020121  Mediaflow Llc  Video compression using adaptive selection of groups of frames, adaptive bit allocation, and adaptive replenishment 
GB2364843A (en)  20000714  20020206  Sony Uk Ltd  Data encoding based on data quantity and data quality 
US7194128B1 (en)  20000726  20070320  Lockheed Martin Corporation  Data compression using principal components transformation 
US6754383B1 (en) *  20000726  20040622  Lockheed Martin Corporation  Lossy JPEG compression/reconstruction using principal components transformation 
US6891960B2 (en)  20000812  20050510  Facet Technology  System for road sign sheeting classification 
US20020035566A1 (en) *  20000920  20020321  Choicepoint, Inc.  Method and system for the wireless delivery of images 
US7417568B2 (en)  20001003  20080826  Realtime Data Llc  System and method for data feed acceleration and encryption 
US6549674B1 (en) *  20001012  20030415  Picsurf, Inc.  Image compression based on tiled waveletlike transform using edge and nonedge filters 
JP4627110B2 (en) *  20001016  20110209  富士通株式会社  Data storage device 
JP3893474B2 (en) *  20001110  20070314  富士フイルム株式会社  Image data forming method and an image data recording device 
US6870961B2 (en) *  20001110  20050322  Ricoh Company, Ltd.  Image decompression from transform coefficients 
US7162080B2 (en) *  20010223  20070109  Zoran Corporation  Graphic image reencoding and distribution system and method 
US6624769B2 (en) *  20010427  20030923  Nokia Corporation  Apparatus, and associated method, for communicating content in a bandwidthconstrained communication system 
US6944357B2 (en) *  20010524  20050913  Microsoft Corporation  System and process for automatically determining optimal image compression methods for reducing file size 
US6832006B2 (en) *  20010723  20041214  Eastman Kodak Company  System and method for controlling image compression based on image emphasis 
US7453936B2 (en) *  20011109  20081118  Sony Corporation  Transmitting apparatus and method, receiving apparatus and method, program and recording medium, and transmitting/receiving system 
NZ515527A (en) *  20011115  20030829  Auckland Uniservices Ltd  Method, apparatus and software for lossy data compression and function estimation 
US7206457B2 (en) *  20011127  20070417  Samsung Electronics Co., Ltd.  Method and apparatus for encoding and decoding key value data of coordinate interpolator 
US7042942B2 (en) *  20011221  20060509  Intel Corporation  Zigzag inorder for image/video encoder and decoder 
US7085422B2 (en) *  20020220  20060801  International Business Machines Corporation  Layer based compression of digital images 
US6976026B1 (en) *  20020314  20051213  Microsoft Corporation  Distributing limited storage among a collection of media objects 
US7180943B1 (en) *  20020326  20070220  The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration  Compression of a data stream by selection among a set of compression tools 
EP1359773B1 (en) *  20020415  20160824  Microsoft Technology Licensing, LLC  Facilitating interaction between video renderers and graphics device drivers 
US7219352B2 (en)  20020415  20070515  Microsoft Corporation  Methods and apparatuses for facilitating processing of interlaced video images for progressive video displays 
US7451457B2 (en) *  20020415  20081111  Microsoft Corporation  Facilitating interaction between video renderers and graphics device drivers 
JP2003319184A (en) *  20020418  20031107  Toshiba Tec Corp  Image processing apparatus, image processing method, image display method, and image storage method 
AU2003238771A1 (en) *  20020529  20031219  Simon Butler  Predictive interpolation of a video signal 
US6928065B2 (en) *  20020611  20050809  Motorola, Inc.  Methods of addressing and signaling a plurality of subscriber units in a single slot 
AU2003285891A1 (en) *  20021015  20040504  Digimarc Corporation  Identification document and related methods 
US7113185B2 (en) *  20021114  20060926  Microsoft Corporation  System and method for automatically learning flexible sprites in video layers 
US7212676B2 (en) *  20021230  20070501  Intel Corporation  Match MSB digital image compression 
FR2850512B1 (en) *  20030128  20050311  Medialive  Method and system automatic and adaptive analysis and scrambling for digital video stream 
US20040150840A1 (en) *  20030130  20040805  Farrell Michael E.  Methods and systems for structuring a raster image file for parallel streaming rendering by multiple processors 
JP2004304469A (en) *  20030331  20041028  Minolta Co Ltd  Regional division of original image, compression program, and regional division and compression method 
US7212666B2 (en) *  20030401  20070501  Microsoft Corporation  Generating visually representative video thumbnails 
US20040202326A1 (en) *  20030410  20041014  Guanrong Chen  System and methods for real-time encryption of digital images based on 2D and 3D multi-parametric chaotic maps 
US7420625B1 (en) *  20030520  20080902  Pixelworks, Inc.  Fuzzy logic based adaptive Y/C separation system and method 
US7110606B2 (en) *  20030612  20060919  Trieu-Kien Truong  System and method for a direct computation of cubic spline interpolation for real-time image codec 
US7158668B2 (en)  20030801  20070102  Microsoft Corporation  Image processing using linear light values and other image processing improvements 
US7643675B2 (en) *  20030801  20100105  Microsoft Corporation  Strategies for processing image information using a color information data structure 
US7657102B2 (en) *  20030827  20100202  Microsoft Corp.  System and method for fast online learning of transformed hidden Markov models 
US9614772B1 (en)  20031020  20170404  F5 Networks, Inc.  System and method for directing network traffic in tunneling applications 
US7302475B2 (en) *  20040220  20071127  Harris Interactive, Inc.  System and method for measuring reactions to product packaging, advertising, or product features over a computer-based network 
US7948501B2 (en) *  20040309  20110524  Olympus Corporation  Display control apparatus and method under plural different color spaces 
US7590310B2 (en)  20040505  20090915  Facet Technology Corp.  Methods and apparatus for automated true object-based image analysis and retrieval 
US7664173B2 (en) *  20040607  20100216  Nahava Inc.  Method and apparatus for cached adaptive transforms for compressing data streams, computing similarity, and recognizing patterns 
US7492953B2 (en) *  20040617  20090217  Smith Micro Software, Inc.  Efficient method and system for reducing update requirements for a compressed binary image 
US7764844B2 (en) *  20040910  20100727  Eastman Kodak Company  Determining sharpness predictors for a digital image 
US8024483B1 (en)  20041001  20110920  F5 Networks, Inc.  Selective compression for network connections 
US7450723B2 (en) *  20041112  20081111  International Business Machines Corporation  Method and system for providing for security in communication 
US20060117268A1 (en) *  20041130  20060601  Micheal Talley  System and method for graphical element selection for region of interest compression 
EP1844612B1 (en) *  20050204  20170510  Barco NV  Method and device for image and video transmission over low-bandwidth and high-latency transmission channels 
JP2006236475A (en) *  20050224  20060907  Toshiba Corp  Coded data reproduction apparatus 
US7583844B2 (en) *  20050311  20090901  Nokia Corporation  Method, device, and system for processing of still images in the compressed domain 
JP2006279850A (en) *  20050330  20061012  Sanyo Electric Co Ltd  Image processing apparatus 
JP4687216B2 (en) *  20050418  20110525  ソニー株式会社  Image signal processing apparatus, camera system, and image signal processing method 
US7451041B2 (en) *  20050506  20081111  Facet Technology Corporation  Network-based navigation system having virtual drive-thru advertisements integrated with actual imagery from along a physical route 
JP4321496B2 (en) *  20050616  20090826  ソニー株式会社  Image data processing apparatus, image data processing method, and program 
KR101045205B1 (en) *  20050712  20110630  삼성전자주식회사  Apparatus and method for encoding and decoding of image data 
KR100813258B1 (en)  20050712  20080313  삼성전자주식회사  Apparatus and method for encoding and decoding of image data 
US7783781B1 (en)  20050805  20100824  F5 Networks, Inc.  Adaptive compression 
US8533308B1 (en)  20050812  20130910  F5 Networks, Inc.  Network traffic management through protocolconfigurable transaction processing 
US20070074007A1 (en) *  20050928  20070329  Arc International (UK) Limited  Parameterizable clip instruction and method of performing a clip operation using the same 
WO2007044556A2 (en) *  20051007  20070419  Innovation Management Sciences, L.L.C.  Method and apparatus for scalable video decoder using an enhancement stream 
US8275909B1 (en)  20051207  20120925  F5 Networks, Inc.  Adaptive compression 
TWI261975B (en) *  20051213  20060911  Inventec Corp  File compression system and method 
US7882084B1 (en)  20051230  20110201  F5 Networks, Inc.  Compression of data transmitted over a network 
US8565088B1 (en)  20060201  20131022  F5 Networks, Inc.  Selectively enabling packet concatenation based on a transaction boundary 
US7873065B1 (en)  20060201  20110118  F5 Networks, Inc.  Selectively enabling network packet concatenation based on metrics 
CN100536523C (en) *  20060209  20090902  佳能株式会社  Method, device and storage medium for image classification 
WO2007099327A2 (en) *  20060301  20070907  Symbian Software Limited  Data compression 
US20080037880A1 (en) *  20060811  20080214  Lcj Enterprises Llc  Scalable, progressive image compression and archiving system over a low bit rate internet protocol network 
US9224145B1 (en)  20060830  20151229  Qurio Holdings, Inc.  Venue based digital rights using capture device with digital watermarking capability 
KR100820678B1 (en) *  20060928  20080411  엘지전자 주식회사  Apparatus and method for transmitting data in digital video recorder 
US9356824B1 (en)  20060929  20160531  F5 Networks, Inc.  Transparently cached network resources 
KR100867131B1 (en) *  20061110  20081106  삼성전자주식회사  Apparatus and method for image displaying in portable terminal 
US8417833B1 (en)  20061129  20130409  F5 Networks, Inc.  Metacodec for optimizing network data compression based on comparison of write and read rates 
US8155462B2 (en) *  20061229  20120410  FastVDO, LLC  System of master reconstruction schemes for pyramid decomposition 
US9106606B1 (en)  20070205  20150811  F5 Networks, Inc.  Method, intermediate device and computer program code for maintaining persistency 
US7477192B1 (en)  20070222  20090113  L-3 Communications Titan Corporation  Direction finding system and method 
US7813588B2 (en) *  20070427  20101012  HewlettPackard Development Company, L.P.  Adjusting source image data prior to compressing the source image data 
WO2008147565A2 (en) *  20070525  20081204  Arc International, Plc  Adaptive video encoding apparatus and methods 
CN100573385C (en) *  20070601  20091223  华南理工大学  Distributed dual real-time compression method and system 
JP4930462B2 (en) *  20070629  20120516  ブラザー工業株式会社  Data transmission device 
KR101345287B1 (en) *  20071012  20131227  삼성전자주식회사  Scalable video encoding method and apparatus, and image decoding method and apparatus 
US20090110313A1 (en) *  20071025  20090430  Canon Kabushiki Kaisha  Device for performing image processing based on image attribute 
US8064408B2 (en)  20080220  20111122  Hobbit Wave  Beamforming devices and methods 
US20090290326A1 (en) *  20080522  20091126  Kevin Mark Tiedje  Color selection interface for ambient lighting 
WO2010050152A1 (en) *  20081027  20100506  日本電信電話株式会社  Pixel prediction value generation procedure automatic generation method, image encoding method, image decoding method, devices using these methods, programs for these methods, and recording medium on which these programs are recorded 
US20100257174A1 (en) *  20090402  20101007  Matthew Dino Minuti  Method for data compression utilizing pattern-analysis and matching means such as neural networks 
US20110122224A1 (en) *  20091120  20110526  WangHe Lou  Adaptive compression of background image (ACBI) based on segmentation of three-dimensional objects 
US8774267B2 (en) *  20100707  20140708  Spinella IP Holdings, Inc.  System and method for transmission, processing, and rendering of stereoscopic and multiview images 
US20130188878A1 (en) *  20100720  20130725  Lockheed Martin Corporation  Image analysis systems having image sharpening capabilities and methods using same 
US8705880B2 (en) *  20110713  20140422  Panasonic Corporation  Image compression device, image expansion device, and image processing apparatus 
WO2013134506A2 (en)  20120307  20130912  Hobbit Wave, Inc.  Devices and methods using the hermetic transform 
US9154353B2 (en)  20120307  20151006  Hobbit Wave, Inc.  Devices and methods using the hermetic transform for transmitting and receiving signals using OFDM 
KR101367777B1 (en) *  20120822  20140306  주식회사 핀그램  Adaptive predictive image compression system and method thereof 
US9245714B2 (en) *  20121001  20160126  KLA-Tencor Corporation  System and method for compressed data transmission in a maskless lithography system 
US8824812B2 (en) *  20121002  20140902  Mediatek Inc  Method and apparatus for data compression using error plane coding 
TWI502999B (en) *  20121207  20151001  Acer Inc  Image processing method and electronic apparatus using the same 
US9405015B2 (en)  20121218  20160802  Subcarrier Systems Corporation  Method and apparatus for modeling of GNSS pseudorange measurements for interpolation, extrapolation, reduction of measurement errors, and data compression 
US9250327B2 (en) *  20130305  20160202  Subcarrier Systems Corporation  Method and apparatus for reducing satellite position message payload by adaptive data compression techniques 
US10165227B2 (en) *  20130312  20181225  Futurewei Technologies, Inc.  Context based video distribution and storage 
JP6143521B2 (en) *  20130401  20170607  キヤノン株式会社  Information processing apparatus, information processing method, and program 
US20150006390A1 (en) *  20130626  20150101  Visa International Service Association  Using steganography to perform payment transactions through insecure channels 
US9531431B2 (en)  20131025  20161227  Hobbit Wave, Inc.  Devices and methods employing hermetic transforms for encoding and decoding digital information in spread-spectrum communications systems 
WO2015105592A2 (en)  20131122  20150716  Hobbit Wave  Radar using hermetic transforms 
US9877035B2 (en) *  20140317  20180123  Qualcomm Incorporated  Quantization processes for residue differential pulse code modulation 
US9871684B2 (en)  20141117  20180116  VertoCOMM, Inc.  Devices and methods for hermetic transform filters 
KR20170030968A (en) *  20150910  20170320  삼성전자주식회사  Method and apparatus for processing image 
US10305717B2 (en)  20160226  20190528  VertoCOMM, Inc.  Devices and methods using the hermetic transform for transmitting and receiving signals using multi-channel signaling 
Family Cites Families (37)
Publication number  Priority date  Publication date  Assignee  Title 

US4122440A (en) *  19770304  19781024  International Business Machines Corporation  Method and means for arithmetic string coding 
US4222076A (en) *  19780915  19800909  Bell Telephone Laboratories, Incorporated  Progressive image transmission 
US4467317A (en) *  19810330  19840821  International Business Machines Corporation  High-speed arithmetic compression coding using concurrent value updating 
US4414580A (en)  19810601  19831108  Bell Telephone Laboratories, Incorporated  Progressive transmission of two-tone facsimile 
US4654484A (en) *  19830721  19870331  Interand Corporation  Video compression/expansion system 
US4903317A (en) *  19860624  19900220  Kabushiki Kaisha Toshiba  Image processing apparatus 
US4905297A (en) *  19860915  19900227  International Business Machines Corporation  Arithmetic coding encoder and decoder system 
US4891643A (en) *  19860915  19900102  International Business Machines Corporation  Arithmetic coding data compression/decompression by selectively employed, diverse arithmetic coding encoders and decoders 
US4849810A (en) *  19870602  19890718  Picturetel Corporation  Hierarchical encoding method and apparatus for efficiently communicating image sequences 
US4764805A (en) *  19870602  19880816  Eastman Kodak Company  Image transmission system with line averaging preview mode using two-pass block-edge interpolation 
US4774562A (en) *  19870602  19880927  Eastman Kodak Company  Image transmission system with preview mode 
DE3883701T2 (en) *  19871030  19940210  Nippon Telegraph & Telephone  Method and apparatus for multiplexed vector quantization. 
US4897717A (en) *  19880330  19900130  Starsignal, Inc.  Computer-based video compression system 
US4847677A (en) *  19880427  19890711  Universal Video Communications Corp.  Video telecommunication system and method for compressing and decompressing digital color video data 
US5187755A (en) *  19880630  19930216  Dainippon Screen Mfg. Co., Ltd.  Method of and apparatus for compressing image data 
US5353132A (en)  19890206  19941004  Canon Kabushiki Kaisha  Image processing device 
US5031053A (en) *  19890601  19910709  At&T Bell Laboratories  Efficient encoding/decoding in the decomposition and recomposition of a high resolution image utilizing pixel clusters 
US5299025A (en) *  19891018  19940329  Ricoh Company, Ltd.  Method of coding two-dimensional data by fast cosine transform and method of decoding compressed data by inverse fast cosine transform 
US5270832A (en) *  19900314  19931214  C-Cube Microsystems  System for compression and decompression of video data using discrete cosine transform and coding techniques 
US5196946A (en) *  19900314  19930323  C-Cube Microsystems  System for compression and decompression of video data using discrete cosine transform and coding techniques 
US5150209A (en) *  19900511  19920922  Picturetel Corporation  Hierarchical entropy coded lattice threshold quantization encoding method and apparatus for image and video compression 
US5189526A (en) *  19900921  19930223  Eastman Kodak Company  Method and apparatus for performing image compression using discrete cosine transform 
US5070532A (en) *  19900926  19911203  Radius Inc.  Method for encoding color images 
US5249053A (en) *  19910205  19930928  Dycam Inc.  Filmless digital camera with selective image compression 
US5148272A (en) *  19910227  19920915  Rca Thomson Licensing Corporation  Apparatus for recombining prioritized video data 
US5111292A (en) *  19910227  19920505  General Electric Company  Priority selection apparatus as for a video signal processor 
US5262958A (en) *  19910405  19931116  Texas Instruments Incorporated  Spline-wavelet signal analyzers and methods for processing signals 
US5157488A (en) *  19910517  19921020  International Business Machines Corporation  Adaptive quantization within the JPEG sequential mode 
US5262878A (en) *  19910614  19931116  General Instrument Corporation  Method and apparatus for compressing digital still picture signals 
DE69227217T2 (en) *  19910715  19990325  Canon KK  Image coding 
JP3116967B2 (en) *  19910716  20001211  ソニー株式会社  Image processing apparatus and image processing method 
US5168375A (en) *  19910918  19921201  Polaroid Corporation  Image reconstruction by use of discrete cosine and related transforms 
US5335088A (en) *  19920401  19940802  Xerox Corporation  Apparatus and method for encoding halftone images 
US5325125A (en) *  19920924  19940628  Matsushita Electric Corporation Of America  Intra-frame filter for video compression systems 
JP2621747B2 (en) *  19921006  19970618  富士ゼロックス株式会社  Image processing apparatus 
JPH06180948A (en) *  19921211  19940628  Sony Corp  Method and unit for processing digital signal and recording medium 
US5586200A (en) *  19940107  19961217  Panasonic Technologies, Inc.  Segmentation based image compression system 

1995
 1995-07-14 AU AU30979/95A patent/AU698055B2/en not_active Ceased
 1995-07-14 CA CA 2195110 patent/CA2195110A1/en not_active Abandoned
 1995-07-14 JP JP50515796A patent/JP2000511363A/en active Pending
 1995-07-14 WO PCT/US1995/008827 patent/WO1996002895A1/en not_active Application Discontinuation
 1995-07-14 BR BR9508403A patent/BR9508403A/en not_active IP Right Cessation
 1995-07-14 EP EP95926684A patent/EP0770246A4/en not_active Withdrawn
 1995-07-14 MX MX9700385A patent/MX9700385A/en not_active IP Right Cessation

1996
 1996-04-22 US US08/636,170 patent/US5892847A/en not_active Expired - Lifetime

1999
 1999-03-31 US US09/283,017 patent/US6453073B2/en not_active Expired - Lifetime
Cited By (178)
Publication number  Priority date  Publication date  Assignee  Title 

US9054728B2 (en)  19981211  20150609  Realtime Data, Llc  Data compression systems and methods 
US8717203B2 (en) *  19981211  20140506  Realtime Data, Llc  Data compression systems and methods 
US10033405B2 (en)  19981211  20180724  Realtime Data Llc  Data compression systems and method 
US8933825B2 (en)  19981211  20150113  Realtime Data Llc  Data compression systems and methods 
US8719438B2 (en)  19990311  20140506  Realtime Data Llc  System and methods for accelerated data storage and retrieval 
US20110208833A1 (en) *  19990311  20110825  Realtime Data LLC DBA IXO  System and Methods For Accelerated Data Storage And Retrieval 
US10019458B2 (en)  19990311  20180710  Realtime Data Llc  System and methods for accelerated data storage and retrieval 
US9116908B2 (en)  19990311  20150825  Realtime Data Llc  System and methods for accelerated data storage and retrieval 
US6625308B1 (en) *  19990910  20030923  Intel Corporation  Fuzzy distinction based thresholding technique for image segmentation 
US6658399B1 (en) *  19990910  20031202  Intel Corporation  Fuzzy based thresholding technique for image segmentation 
US7053944B1 (en)  19991001  20060530  Intel Corporation  Method of using hue to interpolate color pixel signals 
US6628827B1 (en)  19991214  20030930  Intel Corporation  Method of upscaling a color image 
US7158178B1 (en)  19991214  20070102  Intel Corporation  Method of converting a subsampled color image 
US9792128B2 (en)  20000203  20171017  Realtime Data, Llc  System and method for electrical boot-device-reset signals 
US8880862B2 (en)  20000203  20141104  Realtime Data, Llc  Systems and methods for accelerated loading of operating systems and application programs 
US6937362B1 (en) *  20000405  20050830  Eastman Kodak Company  Method for providing access to an extended color gamut digital image and providing payment therefor 
US7127525B2 (en) *  20000526  20061024  Citrix Systems, Inc.  Reducing the amount of graphical line data transmitted via a low bandwidth transport protocol mechanism 
US8290907B2 (en)  20000526  20121016  Citrix Systems, Inc.  Method and system for efficiently reducing graphical display data for transmission over a low bandwidth transport protocol mechanism 
US8099389B2 (en)  20000526  20120117  Citrix Systems, Inc.  Method and system for efficiently reducing graphical display data for transmission over a low bandwidth transport protocol mechanism 
US9667751B2 (en)  20001003  20170530  Realtime Data, Llc  Data feed acceleration 
US8692695B2 (en)  20001003  20140408  Realtime Data, Llc  Methods for encoding and decoding data 
US8717204B2 (en)  20001003  20140506  Realtime Data Llc  Methods for encoding and decoding data 
US9859919B2 (en)  20001003  20180102  Realtime Data Llc  System and method for data compression 
US9141992B2 (en)  20001003  20150922  Realtime Data Llc  Data feed acceleration 
US9143546B2 (en)  20001003  20150922  Realtime Data Llc  System and method for data feed acceleration and encryption 
US8723701B2 (en)  20001003  20140513  Realtime Data Llc  Methods for encoding and decoding data 
US9967368B2 (en)  20001003  20180508  Realtime Data Llc  Systems and methods for data block decompression 
US8742958B2 (en)  20001003  20140603  Realtime Data Llc  Methods for encoding and decoding data 
US10284225B2 (en)  20001003  20190507  Realtime Data, Llc  Systems and methods for data compression 
US7340676B2 (en) *  20001229  20080304  Eastman Kodak Company  System and method for automatic layout of images in digital albums 
US20020122067A1 (en) *  20001229  20020905  Geigel Joseph M.  System and method for automatic layout of images in digital albums 
US8867610B2 (en)  20010213  20141021  Realtime Data Llc  System and methods for video and audio data distribution 
US10212417B2 (en)  20010213  20190219  Realtime Adaptive Streaming Llc  Asymmetric data decompression systems 
US8934535B2 (en)  20010213  20150113  Realtime Data Llc  Systems and methods for video and audio data storage and distribution 
US9762907B2 (en)  20010213  20170912  Realtime Adaptive Streaming, LLC  System and methods for video and audio data distribution 
US9769477B2 (en)  20010213  20170919  Realtime Adaptive Streaming, LLC  Video data compression systems 
US8929442B2 (en)  20010213  20150106  Realtime Data, Llc  System and methods for video and audio data distribution 
US20030028359A1 (en) *  20010315  20030206  Julian Eggert  Simulation of convolutional network behavior and visualizing internal states of a network 
US7236961B2 (en) *  20010315  20070626  Honda Research Institute Europe Gmbh  Simulation of convolutional network behavior and visualizing internal states of a network 
WO2003036984A1 (en) *  20011026  20030501  Koninklijke Philips Electronics N.V.  Spatial scalable compression 
US10277656B2 (en) *  20020129  20190430  FiveOpenBooks, LLC  Method and system for delivering media data 
US20160241627A1 (en) *  20020129  20160818  FiveOpenBooks, LLC  Method and System for Delivering Media Data 
US7698378B2 (en) *  20021029  20100413  Qualcomm Incorporated  Service diversity for communication system 
US20060031554A1 (en) *  20021029  20060209  Lopez Ricardo J  Service diversity for communication system 
US20040114813A1 (en) *  20021213  20040617  Martin Boliek  Compression for segmented images and other types of sideband information 
US8769395B2 (en) *  20021213  20140701  Ricoh Co., Ltd.  Layout objects as image layers 
US20040114814A1 (en) *  20021213  20040617  Martin Boliek  Layout objects as image layers 
US8036475B2 (en)  20021213  20111011  Ricoh Co., Ltd.  Compression for segmented images and other types of sideband information 
US7212662B2 (en) *  20030227  20070501  T-Mobile Deutschland Gmbh  Method for the compressed transmission of image data for 3-dimensional representation of scenes and objects 
US20060274951A1 (en) *  20030227  20061207  T-Mobile Deutschland Gmbh  Method for the compressed transmission of image data for three-dimensional representation of scenes and objects 
US8036478B2 (en) *  20031126  20111011  Samsung Electronics Co., Ltd.  Color image residue transformation and/or inverse transformation method and apparatus, and color image encoding and/or decoding method and apparatus using the same 
US20050111741A1 (en) *  20031126  20050526  Samsung Electronics Co., Ltd.  Color image residue transformation and/or inverse transformation method and apparatus, and color image encoding and/or decoding method and apparatus using the same 
US7664763B1 (en) *  20031217  20100216  Symantec Operating Corporation  System and method for determining whether performing a particular process on a file will be useful 
US7483577B2 (en) *  20040302  20090127  Mitsubishi Electric Research Laboratories, Inc.  System and method for joint de-interlacing and down-sampling using adaptive frame and field filtering 
US20050196052A1 (en) *  20040302  20050908  Jun Xin  System and method for joint de-interlacing and down-sampling using adaptive frame and field filtering 
US20110081078A1 (en) *  20041109  20110407  Samsung Electronics Co., Ltd.  Method and apparatus for encoding and decoding image data 
US8326065B2 (en)  20041109  20121204  Samsung Electronics Co., Ltd.  Method and apparatus for encoding image data including generation of bit streams 
US7865027B2 (en)  20041109  20110104  Samsung Electronics Co., Ltd.  Method and apparatus for encoding and decoding image data 
US8296441B2 (en)  20050114  20121023  Citrix Systems, Inc.  Methods and systems for joining a real-time session of presentation layer protocol data 
US8422851B2 (en)  20050114  20130416  Citrix Systems, Inc.  System and methods for automatic time-warped playback in rendering a recorded computer session 
US20060161959A1 (en) *  20050114  20060720  Citrix Systems, Inc.  Method and system for real-time seeking during playback of remote presentation protocols 
US7831728B2 (en)  20050114  20101109  Citrix Systems, Inc.  Methods and systems for real-time seeking during real-time playback of a presentation layer protocol data stream 
US8935316B2 (en)  20050114  20150113  Citrix Systems, Inc.  Methods and systems for in-session playback on a local machine of remotely-stored and real-time presentation layer protocol data 
US8230096B2 (en)  20050114  20120724  Citrix Systems, Inc.  Methods and systems for generating playback instructions for playback of a recorded computer session 
US8200828B2 (en)  20050114  20120612  Citrix Systems, Inc.  Systems and methods for single stack shadowing 
US8145777B2 (en)  20050114  20120327  Citrix Systems, Inc.  Method and system for real-time seeking during playback of remote presentation protocols 
US8340130B2 (en)  20050114  20121225  Citrix Systems, Inc.  Methods and systems for generating playback instructions for rendering of a recorded computer session 
US9635318B2 (en)  20050309  20170425  Vudu, Inc.  Live video broadcasting on distributed networks 
US8745675B2 (en)  20050309  20140603  Vudu, Inc.  Multiple audio streams 
US8312161B2 (en)  20050309  20121113  Vudu, Inc.  Method and apparatus for instant playback of a movie title 
US9176955B2 (en)  20050309  20151103  Vvond, Inc.  Method and apparatus for sharing media files among network nodes 
US9705951B2 (en)  20050309  20170711  Vudu, Inc.  Method and apparatus for instant playback of a movie 
US20100254675A1 (en) *  20050309  20101007  Prasanna Ganesan  Method and apparatus for instant playback of a movie title 
US20090025046A1 (en) *  20050309  20090122  Wond, Llc  Hybrid architecture for media services 
US20090007196A1 (en) *  20050309  20090101  Vudu, Inc.  Method and apparatus for sharing media files among network nodes with respect to available bandwidths 
US8219635B2 (en)  20050309  20120710  Vudu, Inc.  Continuous data feeding in a distributed environment 
US20080282298A1 (en) *  20050309  20081113  Prasanna Ganesan  Method and apparatus for supporting file sharing in a distributed network 
US7810647B2 (en)  20050309  20101012  Vudu, Inc.  Method and apparatus for assembling portions of a data file received from multiple devices 
US20110179449A1 (en) *  20050309  20110721  Prasanna Ganesan  Fragmentation of a file for instant access 
US8539536B2 (en) *  20050309  20130917  Vudu, Inc.  Fragmentation of a file for instant access 
US8904463B2 (en)  20050309  20141202  Vudu, Inc.  Live video broadcasting on distributed networks 
US7937379B2 (en) *  20050309  20110503  Vudu, Inc.  Fragmentation of a file for instant access 
US9621666B2 (en)  20050526  20170411  Citrix Systems, Inc.  Systems and methods for enhanced delta compression 
US9692725B2 (en)  20050526  20170627  Citrix Systems, Inc.  Systems and methods for using an HTTP-aware client agent 
US9407608B2 (en)  20050526  20160802  Citrix Systems, Inc.  Systems and methods for enhanced client side policy 
US20060269147A1 (en) *  20050531  20061130  Microsoft Corporation  Accelerated image rendering 
US8121428B2 (en) *  20050531  20120221  Microsoft Corporation  Accelerated image rendering 
US8099511B1 (en)  20050611  20120117  Vudu, Inc.  Instantaneous media-on-demand 
US20070014478A1 (en) *  20050715  20070118  Samsung Electronics Co., Ltd.  Apparatus, method, and medium for encoding/decoding of color image and video using inter-color-component prediction according to coding modes 
US8107749B2 (en) *  20050715  20120131  Samsung Electronics Co., Ltd.  Apparatus, method, and medium for encoding/decoding of color image and video using inter-color-component prediction according to coding modes 
US20070171490A1 (en) *  20050722  20070726  Samsung Electronics Co., Ltd.  Sensor image encoding and/or decoding system, medium, and method 
US7903306B2 (en) *  20050722  20110308  Samsung Electronics Co., Ltd.  Sensor image encoding and/or decoding system, medium, and method 
US20070030816A1 (en) *  20050808  20070208  Honeywell International Inc.  Data compression and abnormal situation detection in a wireless sensor network 
US8191008B2 (en)  20051003  20120529  Citrix Systems, Inc.  Simulating multi-monitor functionality in a single monitor environment 
US20090278916A1 (en) *  20051214  20091112  Masahiro Ito  Image display device 
US7756826B2 (en)  20060630  20100713  Citrix Systems, Inc.  Method and systems for efficient delivery of previously stored content 
US8838630B2 (en)  20060630  20140916  Citrix Systems, Inc.  Method and systems for efficient delivery of previously stored content 
US20080008402A1 (en) *  20060710  20080110  Aten International Co., Ltd.  Method and apparatus of removing opaque area as rescaling an image 
US7660486B2 (en) *  20060710  20100209  Aten International Co., Ltd.  Method and apparatus of removing opaque area as rescaling an image 
US8943304B2 (en)  20060803  20150127  Citrix Systems, Inc.  Systems and methods for using an HTTP-aware client agent 
US9948608B2 (en)  20060803  20180417  Citrix Systems, Inc.  Systems and methods for using an HTTP-aware client agent 
US8694684B2 (en)  20060821  20140408  Citrix Systems, Inc.  Systems and methods of symmetric transport control protocol compression 
US20080046616A1 (en) *  20060821  20080221  Citrix Systems, Inc.  Systems and Methods of Symmetric Transport Control Protocol Compression 
US8296812B1 (en)  20060901  20121023  Vudu, Inc.  Streaming video using erasure encoding 
US20100148483A1 (en) *  20061204  20100617  Ralf Kopp  Sports Equipment and Method for Designing its Visual Appearance 
US8051127B2 (en)  20070312  20111101  Citrix Systems, Inc.  Systems and methods for identifying long matches of data in a compression history 
US8352605B2 (en)  20070312  20130108  Citrix Systems, Inc.  Systems and methods for providing dynamic ad hoc proxy-cache hierarchies 
US7872597B2 (en)  20070312  20110118  Citrix Systems, Inc.  Systems and methods of using application and protocol specific parsing for compression 
US8786473B2 (en)  20070312  20140722  Citrix Systems, Inc.  Systems and methods for sharing compression histories between multiple devices 
US8255570B2 (en)  20070312  20120828  Citrix Systems, Inc.  Systems and methods of compression history expiration and synchronization 
US8832300B2 (en)  20070312  20140909  Citrix Systems, Inc.  Systems and methods for identifying long matches of data in a compression history 
US7916047B2 (en)  20070312  20110329  Citrix Systems, Inc.  Systems and methods of clustered sharing of compression histories 
US20080228933A1 (en) *  20070312  20080918  Robert Plamondon  Systems and methods for identifying long matches of data in a compression history 
US8063799B2 (en)  20070312  20111122  Citrix Systems, Inc.  Systems and methods for sharing compression histories between multiple devices 
US7827237B2 (en)  20070312  20101102  Citrix Systems, Inc.  Systems and methods for identifying long matches of data in a compression history 
US7865585B2 (en)  20070312  20110104  Citrix Systems, Inc.  Systems and methods for providing dynamic ad hoc proxy-cache hierarchies 
US20090179913A1 (en) *  2008-01-10  2009-07-16  Ali Corporation  Apparatus for image reduction and method thereof 
US20090190848A1 (en) *  2008-01-29  2009-07-30  Seiko Epson Corporation  Image Processing Device and Method for Image Processing 
US8295616B2 (en) *  2008-01-29  2012-10-23  Seiko Epson Corporation  Image processing device and method for image processing 
US20090232393A1 (en) *  2008-03-12  2009-09-17  Megachips Corporation  Image processor 
US8224081B2 (en) *  2008-03-12  2012-07-17  Megachips Corporation  Image processor 
US20090238477A1 (en) *  2008-03-24  2009-09-24  Megachips Corporation  Image processor 
US8571336B2 (en) *  2008-03-24  2013-10-29  Megachips Corporation  Image processor for inhibiting noise 
US8270466B2 (en) *  2008-10-03  2012-09-18  Sony Corporation  Adaptive decimation filter 
US20100086026A1 (en) *  2008-10-03  2010-04-08  Marco Paniconi  Adaptive decimation filter 
US8589579B2 (en)  2008-10-08  2013-11-19  Citrix Systems, Inc.  Systems and methods for real-time endpoint application flow control with network structure component 
US20100095021A1 (en) *  2008-10-08  2010-04-15  Samuels Allen R  Systems and methods for allocating bandwidth by an intermediary for flow control 
US8504716B2 (en)  2008-10-08  2013-08-06  Citrix Systems, Inc.  Systems and methods for allocating bandwidth by an intermediary for flow control 
US9479447B2 (en)  2008-10-08  2016-10-25  Citrix Systems, Inc.  Systems and methods for real-time endpoint application flow control with network structure component 
US20100124380A1 (en) *  2008-11-20  2010-05-20  Canon Kabushiki Kaisha  Image encoding apparatus and method of controlling the same 
US8331705B2 (en) *  2008-11-20  2012-12-11  Canon Kabushiki Kaisha  Image encoding apparatus and method of controlling the same 
US8396308B2 (en) *  2008-12-10  2013-03-12  Canon Kabushiki Kaisha  Image coding based on interpolation information 
US20100142840A1 (en) *  2008-12-10  2010-06-10  Canon Kabushiki Kaisha  Image encoding apparatus and method of controlling the same 
US20100239226A1 (en) *  2009-03-19  2010-09-23  Eldon Technology Limited  Archiving broadcast programs 
US9723249B2 (en)  2009-03-19  2017-08-01  Echostar Holdings Limited  Archiving broadcast programs 
US9503738B2 (en)  2009-10-05  2016-11-22  Beamr Imaging Ltd  Apparatus and methods for recompression of digital images 
WO2011042898A1 (en) *  2009-10-05  2011-04-14  I.C.V.T. Ltd.  Apparatus and methods for recompression of digital images 
US9866837B2 (en)  2009-10-05  2018-01-09  Beamr Imaging Ltd  Apparatus and methods for recompression of digital images 
US8908984B2 (en)  2009-10-05  2014-12-09  I.C.V.T. Ltd.  Apparatus and methods for recompression of digital images 
US8452110B2 (en)  2009-10-05  2013-05-28  I.C.V.T. Ltd.  Classifying an image's compression level 
US20110135284A1 (en) *  2009-12-08  2011-06-09  Echostar Technologies L.L.C.  Systems and methods for selective archival of media content 
US8873927B2 (en)  2009-12-08  2014-10-28  Echostar Technologies L.L.C.  Systems and methods for selective archival of media content 
US8315502B2 (en)  2009-12-08  2012-11-20  Echostar Technologies L.L.C.  Systems and methods for selective archival of media content 
US8805109B2 (en)  2010-04-29  2014-08-12  I.C.V.T. Ltd.  Apparatus and methods for recompression having a monotonic relationship between extent of compression and quality of compressed image 
CN102263868A (en) *  2010-05-25  2011-11-30  富士施乐株式会社  An image processing apparatus, image processing method and an image transmitting apparatus 
US9042670B2 (en)  2010-09-17  2015-05-26  Beamr Imaging Ltd  Downsizing an encoded image 
US9014471B2 (en)  2010-09-17  2015-04-21  I.C.V.T. Ltd.  Method of classifying a chroma downsampling error 
US8615159B2 (en)  2011-09-20  2013-12-24  Citrix Systems, Inc.  Methods and systems for cataloging text in a recorded session 
US9953436B2 (en)  2012-06-26  2018-04-24  BTS Software Solutions, LLC  Low delay low complexity lossless compression system 
US10349150B2 (en) *  2012-06-26  2019-07-09  BTS Software Solutions, LLC  Low delay low complexity lossless compression system 
WO2014004486A3 (en) *  2012-06-26  2014-04-03  Dunling Li  Low delay low complexity lossless compression system 
US9542839B2 (en)  2012-06-26  2017-01-10  BTS Software Solutions, LLC  Low delay low complexity lossless compression system 
WO2014004486A2 (en) *  2012-06-26  2014-01-03  Dunling Li  Low delay low complexity lossless compression system 
US20180213303A1 (en) *  2012-06-26  2018-07-26  BTS Software Solutions, LLC  Low Delay Low Complexity Lossless Compression System 
US20140104289A1 (en) *  2012-10-11  2014-04-17  Samsung Display Co., Ltd.  Compressor, driving device, display device, and compression method 
US20140286525A1 (en) *  2013-03-25  2014-09-25  Xerox Corporation  Systems and methods for segmenting an image 
US9123087B2 (en) *  2013-03-25  2015-09-01  Xerox Corporation  Systems and methods for segmenting an image 
US9654777B2 (en)  2013-04-05  2017-05-16  Qualcomm Incorporated  Determining palette indices in palette-based video coding 
US9558567B2 (en) *  2013-07-12  2017-01-31  Qualcomm Incorporated  Palette prediction in palette-based video coding 
US20150016501A1 (en) *  2013-07-12  2015-01-15  Qualcomm Incorporated  Palette prediction in palette-based video coding 
US9485419B2 (en)  2013-10-01  2016-11-01  Gopro, Inc.  Camera system encoder/decoder architecture 
US9684949B2 (en)  2013-10-01  2017-06-20  Gopro, Inc.  Camera system encoder/decoder architecture 
US9628718B2 (en)  2013-10-01  2017-04-18  Gopro, Inc.  Image sensor alignment in a multi-camera system accelerator architecture 
US9818169B2 (en)  2013-10-01  2017-11-14  Gopro, Inc.  On-chip upscaling and downscaling in a camera architecture 
WO2015050774A1 (en) *  2013-10-01  2015-04-09  Gopro, Inc.  Image capture accelerator 
US9635262B2 (en)  2013-10-01  2017-04-25  Gopro, Inc.  Motion estimation and detection in a camera system accelerator architecture 
US9628704B2 (en)  2013-10-01  2017-04-18  Gopro, Inc.  Camera configuration in a camera system accelerator architecture 
US9485418B2 (en)  2013-10-01  2016-11-01  Gopro, Inc.  Camera system transmission in bandwidth constrained environments 
US9591217B2 (en)  2013-10-01  2017-03-07  Gopro, Inc.  Camera system encoder/decoder architecture 
US9584720B2 (en)  2013-10-01  2017-02-28  Gopro, Inc.  Camera system dual-encoder architecture 
US9420173B2 (en)  2013-10-01  2016-08-16  Gopro, Inc.  Camera system dual-encoder architecture 
US9491356B2 (en)  2013-10-01  2016-11-08  Gopro, Inc.  Motion estimation and detection in a camera system accelerator architecture 
US9485417B2 (en)  2013-10-01  2016-11-01  Gopro, Inc.  Image sensor alignment in a multi-camera system accelerator architecture 
US9485422B2 (en)  2013-10-01  2016-11-01  Gopro, Inc.  Image capture accelerator 
US9420182B2 (en)  2013-10-01  2016-08-16  Gopro, Inc.  Camera system dual-encoder architecture 
US9420174B2 (en)  2013-10-01  2016-08-16  Gopro, Inc.  Camera system dual-encoder architecture 
US10096082B2 (en)  2013-10-01  2018-10-09  Gopro, Inc.  Upscaling and downscaling in a camera architecture 
US10362309B2 (en)  2017-12-04  2019-07-23  Beamr Imaging Ltd  Apparatus and methods for recompression of digital images 
Also Published As
Publication number  Publication date 

EP0770246A1 (en)  1997-05-02 
BR9508403A (en)  1997-11-11 
MX9700385A (en)  1998-05-31 
JP2000511363A (en)  2000-08-29 
CA2195110A1 (en)  1996-02-01 
AU3097995A (en)  1996-02-16 
US6453073B2 (en)  2002-09-17 
EP0770246A4 (en)  1998-01-14 
WO1996002895A1 (en)  1996-02-01 
US5892847A (en)  1999-04-06 
AU698055B2 (en)  1998-10-22 
Similar Documents
Publication  Publication Date  Title 

Strobach  Tree-structured scene adaptive coder  
CA1333501C (en)  Hierarchical encoding method and apparatus for efficiently communicating image sequences  
US8989267B2 (en)  High dynamic range codecs  
US5544284A (en)  Sequential product code quantization of digital color image  
US6002794A (en)  Encoding and decoding of color digital image using wavelet and fractal encoding  
US5289548A (en)  Compression and reconstruction of radiological images  
EP0864135B1 (en)  Storage and retrieval of large digital images  
US5694484A (en)  System and method for automatically processing image data to provide images of optimal perceptual quality  
US4780761A (en)  Digital image compression and transmission system visually weighted transform coefficients  
US6233279B1 (en)  Image processing method, image processing apparatus, and data storage media  
EP1347411B1 (en)  Fractal image enlargement  
US5414527A (en)  Image encoding apparatus sensitive to tone variations  
US5629778A (en)  Method and apparatus for reduction of image data compression noise  
US5754698A (en)  Image signal encoding device having first and second encoding means  
US5835627A (en)  System and method for automatically optimizing image quality and processing time  
US5903676A (en)  Context-based, adaptive, lossless image codec  
US6125201A (en)  Method, apparatus and system for compressing data  
US5790269A (en)  Method and apparatus for compressing and decompressing a video image  
US20020172418A1 (en)  Method of compressing digital images  
US5936669A (en)  Method and system for three-dimensional compression of digital video signals  
US7003168B1 (en)  Image compression and decompression based on an integer wavelet transform using a lifting scheme and a correction method  
EP0562672A2 (en)  Process of picture representation by data compression  
Schuster et al.  Rate-Distortion based video compression: optimal video frame compression and object boundary encoding  
US6865291B1 (en)  Method apparatus and system for compressing data that wavelet decomposes by color plane and then divides by magnitude range non-dc terms between a scalar quantizer and a vector quantizer  
US6574372B2 (en)  Wavelet transform coding technique 
Legal Events
Date  Code  Title  Description 

AS  Assignment 
Owner name: AMERICA ONLINE, INC., VIRGINIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:JOHNSON, STEPHEN G.;REEL/FRAME:010021/0422 Effective date: 1999-05-27 

STCF  Information on status: patent grant 
Free format text: PATENTED CASE 

FPAY  Fee payment 