USRE42257E1 - Computationally efficient modeling of imagery using scaled, extracted principal components - Google Patents

Computationally efficient modeling of imagery using scaled, extracted principal components Download PDF

Info

Publication number
USRE42257E1
USRE42257E1 (Application No. US 12/238,031)
Authority
US
United States
Prior art keywords
image
tiles
principal component
size
reduced
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
US12/238,031
Inventor
Leonard E. Russo
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
OL Security LLC
Original Assignee
Frantorf Investments GmbH LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Frantorf Investments GmbH LLC filed Critical Frantorf Investments GmbH LLC
Priority to US12/238,031 priority Critical patent/USRE42257E1/en
Assigned to FRANTORF INVESTMENTS GMBH, LLC reassignment FRANTORF INVESTMENTS GMBH, LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: BAE SYSTEMS INFORMATION AND ELECTRONIC SYSTEMS INTEGRATION, INC.
Application granted granted Critical
Publication of USRE42257E1 publication Critical patent/USRE42257E1/en
Assigned to OL SECURITY LIMITED LIABILITY COMPANY reassignment OL SECURITY LIMITED LIABILITY COMPANY MERGER (SEE DOCUMENT FOR DETAILS). Assignors: FRANTORF INVESTMENTS GMBH, LLC
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Links

Images

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/59 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/115 Selection of the code volume for a coding unit prior to coding
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136 Incoming video signal characteristics or properties
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/187 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a scalable video layer

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)
  • Image Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Image Analysis (AREA)

Abstract

A computationally efficient modeling system for imagery scales both the original image and corresponding principal component tiles in the same proportion to be able to extract scaled principal components. The system includes recovery of feature weights for the image model by extracting the weights from the reduced size principal component tiles. The use of the reduced size tiles to derive weights dramatically reduces computer overhead both in the generation of the tiles and in the generation of the weights, and is made possible by the fact that the weights from the scaled down tiles are nearly equal to the weights of the tiles associated with the full size image. The subject system thus reduces computation and the number of bits required to represent features by first scaling the image and then tiling the image in the same proportion. In one embodiment, the scaled down tiles are used as training exemplars to generate the principal components.

Description

CROSS REFERENCE TO RELATED APPLICATIONS
This application claims rights under U.S. Provisional Application Ser. No. 60/353,476, filed Jan. 31, 2002.
This application is a Reissue application of U.S. Ser. No. 10/334,816, filed Dec. 31, 2002, now U.S. Pat. No. 7,113,654, granted Sep. 26, 2006, which claims the benefit of Provisional Application No. 60/353,476, filed Jan. 31, 2002.
STATEMENT OF GOVERNMENT INTEREST
This invention was made with U.S. Government support under Contract No. DAAL01-96-2-0002 with the Army Research Laboratory, and the U.S. Government has certain rights in the invention.
FIELD OF INVENTION
This invention relates to image processing and more particularly to an efficient system for image modeling and compression.
BACKGROUND OF THE INVENTION
The extraction of principal components from images is well known, with one extraction technique using neural networks as described in U.S. Pat. No. 5,377,305. Principal components are those which have self-same characteristics or features from one section of an image to another. This self-same characteristic or feature is encoded in principal component tiles, in which the image is first subdivided into rectilinear subsections or tiles. A transform is then applied to the tiles which results in a small number of principal component tiles. The dot product of the principal component tiles with the original image results in a set of weights which, when transmitted with the principal component tiles, permit reconstruction of the image. Mathematically speaking, the principal components are the basis of a matrix analysis where one is looking for orthogonal tiles ordered by energy.
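In symbols, as a hedged restatement of the passage above, with the m-th image segment S_m and the k-th principal component tile T_k each treated as a vector of pixel values,

$$\omega_{mk} = S_m \cdot T_k, \qquad \hat{S}_m = \sum_{k=1}^{K} \omega_{mk}\, T_k,$$

so that each segment is approximated by a weighted sum of the K principal component tiles, and the reconstructed image is assembled by placing each approximated segment back in its original position.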
Thus the original image is modeled through extraction of principal components. The modeling at least in one instance permits compression so that the transmission of the image can be accomplished on a reduced time scale.
By way of background, as to standard compression methods, first, there is the process of compaction. This is done for conventional applications by some suitable transformation which provides an initial compact representation. In the case of JPEG, for example, the discrete cosine transformation (DCT) provides compaction. Associated with each transformation is a basis. The bases may be of fixed scale as with the JPEG-DCT, or may vary in scale motivated by the prospect for very low bit rate transmission as with current wavelet techniques.
Up until recently, standard compression has not been thought suitable for principal component image modeling and compression, which can involve temporal characteristics and other characteristics beyond the spatial. Standard compression methods such as JPEG or wavelet transforms focus only on the spatial characteristics of the image, with JPEG and wavelet transforms being described in U.S. Pat. Nos. 6,347,157; 6,343,155; 6,343,154; 6,229,926; 6,157,414; 6,249,614; 6,137,914; 6,292,591, and 6,298,162.
Standard image compression uses fixed bases. The results are good for standard imagery and are oriented to same. However, for more exotic imagery, e.g., hyperspectral imagery, there is a need for new modeling and compression techniques.
More specifically, in hyperspectral imagery the number of features used to characterize an image is multiplied. For instance, non-spatial features such as heat, hardness, texture, and color are oftentimes used in image presentation. The fixed basis of JPEG and others cannot handle the expanded feature set associated with hyperspectral imagery. Nor can these techniques handle voxels, which are used to encode numbers of additional features of an image. Transmission of voxel images is computationally intense, and less computationally intense compression techniques are required for their transmission.
In the past, principal component analysis has been used to indicate what features or characteristics of an image are to be utilized in a compression process. Such characteristics can be spatial or temporal or indeed any of a wide variety of characteristics such as for instance color, heat, or other hyperspectral components. In order to achieve modeling or indeed compression, it is important to identify correlations in an image. How to do this in a computationally efficient manner and one which is universal across all platforms is a challenge.
By way of further background, there are currently two main compression techniques and both are dependent on fixed bases. One, the JPEG standard, is based on the DCT transform to provide compaction. The essence of this technique is based on two factors: the approximation of the Karhunen-Loeve (KL) transform by the DCT, and the extent of the autocorrelation function, which for most images seems to be optimized by 8×8 tiles. Using these factors, the JPEG compression standard made a compromise decision omitting the use of scale. The initial DCT transform on 8×8 tiles provides compaction, which is then further compressed using zigzag scanning followed by run length and Huffman coders. JPEG produces good images at moderate compression.
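For reference, a minimal sketch of this fixed-basis compaction step, assuming SciPy is available; the quantization tables, zigzag scan, run length and Huffman stages of a real JPEG codec are omitted:

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_compact(block, keep=10):
    """Illustrative compaction: 2-D DCT of an 8x8 tile, keeping only the 'keep'
    largest-magnitude coefficients (a stand-in for JPEG's quantization step)."""
    coeffs = dctn(block, norm='ortho')             # fixed DCT basis at the 8x8 scale
    thresh = np.sort(np.abs(coeffs).ravel())[-keep]
    sparse = np.where(np.abs(coeffs) >= thresh, coeffs, 0.0)
    return idctn(sparse, norm='ortho')             # reconstruction from few coefficients

approx_block = dct_compact(np.random.rand(8, 8))   # stand-in for one image tile
```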
The other relevant technique is wavelet compression. Wavelet technology has challenged assumptions in the JPEG standard on several fronts. Most important, scale is implicit to wavelet techniques. Scale allows ordered extraction of fine and coarse features. Use of scale, from fine to coarse, means that subsequent decomposition will be on a decimated image. As a result, wavelet decomposition offers control over computation, since the work at each level is limited by decimation. Each level has ¼ the points of the previous level, so computation is about 1.33N²k, where k is the size of the wavelet filter and N is the image size in one dimension. Second, wavelets are usually applied to images on a separable though fixed basis. Thus, wavelet decomposition is applied in the x and y dimensions separately. This seems to fit well with human visual perception, which is oriented to horizontal and vertical detail. Two dimensional bases are implicit in this decomposition. Third, a particularly good scheme for quantizing wavelet coefficients, Zero Tree Encoding, has significantly advanced the state of the art in wavelet image compression. The combination of scale, compaction and quantization made wavelets the likely candidate for future generation JPEG compression standards.
SUMMARY OF THE INVENTION
As will be seen, in the subject invention a method is described which makes feasible a complete principal component analysis of an image (whether standard or hyperspectral). This is because the subject system includes a method which significantly reduces computation. Moreover, the features derived are image adaptive, unlike fixed basis methods, with the adaptability allowing the possibility of better representation, especially for non-standard imagery.
In one embodiment, the subject system allows extraction of principal components from any kind of image in a computationally efficient manner. The method is based on self-similarity in the same way the wavelet methods described above are based on self-similarity. However, in the subject invention the goal is to introduce scale not just for its own sake but also to reduce computation and the overhead of using data adaptive features. While there are methods for image compression and methods for principal component extraction, the combination of using principal component features to represent imagery while extracting them in a computationally efficient way is unique.
In the subject invention, a computationally efficient modeling system for imagery scales both the original image and corresponding principal component tiles in the same proportion to be able to extract scaled principal components. The system includes recovery of feature weights for the image model by extracting the weights from the reduced size principal component tiles. The use of the reduced size tiles to derive weights dramatically reduces computer overhead, and is made possible by the finding that the weights from the scaled down tiles are nearly equal to the weights of the tiles associated with the full size image. In short, not only are the scaled down images self similar, the scaled down tiles are self similar. This permits the scaled down tiles to be used to generate weights. Using scaled down tiles dramatically reduces computation and the number of bits required to represent features. First scaling the image and then tiling the image in the same proportion provides reduced size tiles which, when dot multiplied by the original image, produce the required weights. Image transmission involves transmitting only the principal component tiles and the weights, which effects the compression. The computational savings from using the scaled down tiles come both from generating the tiles and from generating the weights. In one embodiment, the scaled down tiles are used as training exemplars to generate the principal components.
Departure from prior scaling techniques results in a system in which not only is the image scaled, but so are the tiles. Since the tiles associated with a scaled down image are similar to tiles extracted from the full size image, the scaled tiles can be used to generate the weights for creating an image model. The subject invention rests on the finding that 1) for principal component extraction the full scale image may be scaled down and 2) the image can be decomposed into a number of smaller sized tiles. It is a finding of the subject invention that these smaller tiles will in fact be similar to the larger tiles extracted from the full size image. It is also the finding of the subject invention that principal component tile weights computed from the reduced size and full size images will be almost identical. This permits interpolation between the smaller and larger sized tiles, so that the principal component features can be weighted with the weights extracted from the reduced size tiles, with the reduced size tiles themselves being interpolated into full size tiles utilized to reconstruct the original image.
In summary, a computationally efficient modeling system for imagery scales both the original image and corresponding principal component tiles in the same proportion to be able to extract scaled principal components. The system includes recovery of feature weights for the image model by extracting the weights from the reduced size principal component tiles. The use of the reduced size tiles to derive weights dramatically reduces computer overhead both in the generation of the tiles and in the generation of the weights, and is made possible by the fact that the weights from the scaled down tiles are nearly equal to the weights of the tiles associated with the full size image. The subject system thus reduces computation and the number of bits required to represent features by first scaling the image and then tiling the image in the same proportion. In one embodiment, the scaled down tiles are used as training exemplars to generate the principal components.
BRIEF DESCRIPTION OF THE DRAWINGS
These and other features of the subject invention will be better understood in connection with the Detailed Description in conjunction with the Drawings, of which:
FIG. 1 is a diagrammatic illustration of the modeling of an image utilizing principal component feature tiles, with reconstruction of the original image through the utilization of the transmission of the principal component feature tiles and the weights associated therewith;
FIG. 2 is a diagrammatic illustration of the modeling of an image utilizing a scaled original image, scaled principal component feature tiles, interpolation of the scaled principal component feature tiles to full sized principal component feature tiles and the utilization of the weights associated with the scaled principal component feature tiles in combination with the reconstructed full size principal component feature tiles to reconstruct an approximation of the original image;
FIG. 3 is a diagrammatic illustration of the refinement of the image associated with FIG. 2 in which a residual approximation is added to the rough approximation;
FIG. 4 is a reconstruction of an original image containing a model, Lena, in which the reconstructed image was derived from extracted principal component feature tiles scaled identically to the original image;
FIG. 5 is a diagrammatic representation of the extracted principal component tiles used for the reconstruction of FIG. 4;
FIG. 6 is a series of reconstructed images, the first of the images reconstructed from full scale extracted principal component tiles and the second image constructed from scaled down principal component tiles, with images and the features associated with the two sets of principal component tiles being quite similar, thus leading to the ability to utilize smaller scale principal component tiles to reduce computational load;
FIG. 7 is a rendering of the reconstructed original image utilizing scaled principal component feature tiles, indicating very little difference in this reconstruction from the reconstruction of FIG. 4;
FIG. 8 is a table showing rate distortion using scale which indicates compression at PSNR for extraction using scale; and,
FIG. 9 is a table showing the result of using scaled feature extraction, with computation for the scaled feature extraction being about ⅝ of the original scheme.
DETAILED DESCRIPTION
Referring to FIG. 1, modeling and compression of an original image 10 is illustrated in which, after the subject process is performed, an approximation 12 of the original image is generated. The original image is divided up into segments 1 through M, with each of the segments being reflected in a different tile 14, with the tiles being shown as stacked. These tiles are of the same scale as the original image.
In order to extract principal components relating to features of the image, a transform 16 is applied to tiles 14 which results in a reduced set of tiles 18 referred herein as principal component feature tiles. These tiles are utilized to characterize features in the original image with the transform being one of a number of transforms which extract principal components. As mentioned hereinbefore U.S. Pat. No. 5,377,305 incorporated herein by reference and assigned to the assignee hereof describes a neural network technique for deriving principal components.
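One conventional way to realize such a transform is sketched below in Python with NumPy. The patent points to the neural-network technique of U.S. Pat. No. 5,377,305, so the singular value decomposition used here is an assumed stand-in that likewise yields orthogonal tiles ordered by energy; the function names are illustrative only:

```python
import numpy as np

def tile_image(img, t):
    """Cut an image into non-overlapping t x t tiles, stacked as rows of a data matrix.
    Assumes the image dimensions are multiples of t."""
    h, w = img.shape
    rows = [img[y:y + t, x:x + t].ravel()
            for y in range(0, h, t)
            for x in range(0, w, t)]
    return np.stack(rows)                          # shape: (number of segments, t*t)

def principal_component_tiles(img, t, k):
    """Return the k leading principal component tiles of the tiled image."""
    data = tile_image(img, t)                      # data matrix built from the image segments
    _, _, vt = np.linalg.svd(data, full_matrices=False)
    return vt[:k].reshape(k, t, t)                 # orthogonal tiles ordered by energy
```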
The principal component feature tiles 18 are utilized in generating weights which are to be transmitted along with the principal component feature tiles to generate a rough approximation of the original image as illustrated at 12. As can be seen at 19, a principal component feature tile T1 is dot multiplied by a segment S1 from the original image to form a weight ω11. This is done for all principal component feature tiles and for all image segments. Each segment may be approximated by the appropriate sum of the weighted principal component feature tiles. The image may be reconstructed from the appropriately positioned segment tile approximations, e.g., the sum of the principal component feature tiles weighted by the weights for that segment. The image is then reconstructed using all of the segments.
In the generation of the weights, the principal component feature tiles are multiplied with the segment of the original image to which they apply such that a dot product results. This dot product results in a weight for each of the segments of the original image. These weights, herein illustrated at 20, are utilized in combination with the principal component feature tiles to arrive at the approximation of the original image. The approximation of the original image is a reconstructed image utilizing only the weights and the principal component tiles, it being understood that the transmission of the weights and the principal component tiles involves the transmission of much less data than would be necessary in transmitting the original image. As such, tiling the image and deriving weights is one way to compress the image for transmission.
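A minimal sketch of this weight generation and reconstruction, reusing tile_image and principal_component_tiles from the sketch above; the structure is an illustrative assumption rather than the patent's own code:

```python
def model_weights(img, pc_tiles, t):
    """Dot-multiply each principal component tile by each segment of the image."""
    segments = tile_image(img, t)                  # (M, t*t) image segments
    basis = pc_tiles.reshape(len(pc_tiles), -1)    # (K, t*t) principal component tiles
    return segments @ basis.T                      # (M, K) weights, one row per segment

def reconstruct(weights, pc_tiles, img_shape, t):
    """Rebuild an approximation from the weights and principal component tiles only."""
    basis = pc_tiles.reshape(len(pc_tiles), -1)
    segs = weights @ basis                         # each segment as a weighted sum of tiles
    h, w = img_shape
    out = np.empty((h, w))
    i = 0
    for y in range(0, h, t):                       # place each approximated segment back
        for x in range(0, w, t):
            out[y:y + t, x:x + t] = segs[i].reshape(t, t)
            i += 1
    return out
```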
It will be appreciated that if, for instance, the original image were 512×512, then in one embodiment the principal component feature tiles would be a stack of 16×16 tiles. Thus, while there would be significant compaction in this compression process, easily a 20 to 1 reduction in transmitted data, the computational load for generating the tiles using transform 16 and for generating the weights is excessive.
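To make the "easily a 20 to 1" figure concrete, a back-of-the-envelope count follows; the choice of ten principal component tiles is an assumption for illustration, comparable to the ten feature tiles used in the FIG. 4 example discussed later:

```python
t, K = 16, 10                                      # tile size and assumed number of principal tiles
segments = (512 // t) ** 2                         # 1,024 segments in a 512x512 image
raw_values = 512 * 512                             # 262,144 pixel values in the original image
model_values = segments * K + K * t * t            # coefficient weights plus the tiles themselves
print(raw_values / model_values)                   # about 20, i.e. roughly a 20 to 1 reduction
```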
Referring to FIG. 2, assuming that one scales the original image so as to reduce it by half as illustrated at 10′, this results in scaled down tiles 14′ which also are one half the size of the original tiles associated with the system of FIG. 1. The scaled image, if it is half sized, would be a 256×256 image in which the scaled tiles would be an array of 8×8 tiles. It will be appreciated that the computation and the number of bits required to represent the features of the image are cut by a factor of 4, assuming the scaled tiles are transformed as illustrated at 16′. The result is a set of scaled principal component feature tiles 18′ which are used to generate appropriate weights.
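A hedged sketch of this scaled variant, reusing the helpers above; the 2 × 2 averaging used to halve the image is an assumed downscaling method, consistent with the "averaging adjacent points" described later in the description:

```python
def halve(img):
    """Scale an image down by half by averaging each 2 x 2 block of pixels."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

img = np.random.rand(512, 512)                     # stand-in for the original 512x512 image
small = halve(img)                                 # 256x256 scaled image 10'
pc_small = principal_component_tiles(small, t=8, k=10)   # scaled 8x8 feature tiles 18'
w_small = model_weights(small, pc_small, t=8)      # weights from the scaled decomposition
# Each tile now holds 64 values rather than 256 while the number of segments stays the
# same, so tile generation and weight generation are both cut by roughly a factor of 4.
```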
It is the finding of this invention that such scaled principal component feature tiles in fact result in appropriate weights such that the reconstruction can take place utilizing the weights generated and the scaled principal component feature tiles.
In order to reconstruct the full size approximation of the original image as shown at 12′, after generation of weights 20′, one optionally needs to interpolate the scaled principal component feature tiles to increase their scale to the original size through a simple interpolation scheme here illustrated at 26. This results in reconstructed full size principal component feature tiles 18 which are then used in the reconstruction of the approximation of the original image. Alternatively no interpolation may be necessary and the scaled tiles can be used in the reconstruction.
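If full size tiles are wanted for the reconstruction, the scaled principal component tiles can be brought back to the original tile size by a simple interpolation. The nearest-neighbor doubling below is an assumed, minimal scheme (any standard interpolator would serve), again reusing the earlier helpers:

```python
def interpolate_tiles(pc_tiles):
    """Double the linear size of each principal component tile (e.g. 8x8 -> 16x16)."""
    return pc_tiles.repeat(2, axis=1).repeat(2, axis=2)   # nearest-neighbor upsampling

pc_full = interpolate_tiles(pc_small)              # reconstructed full size feature tiles
rough = reconstruct(w_small, pc_full, img.shape, t=16)    # rough approximation 12'
# No renormalization is applied here: the larger norm of the repeated tiles roughly offsets
# the smaller norm of the half-size segments against which the weights were measured
# (a simplifying assumption; the patent leaves the interpolation details open).
```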
As will be seen, an original 512×512 image is scaled down to a 256×256 image which results in scaled extraction feature tiles going from 16×16 to 8×8.
It is a finding of the subject invention that the weights associated with the dot product of the scaled principal component feature tiles with a scaled image and the full size principal component feature tiles multiplied with the full scale image are nearly equal. The result is that one may train on a scaled image with scaled features and recover feature weights which constitute the image model. By utilizing scaled images and scaled feature tiles one can reduce the computation load by a factor of 4. This factor may be increased for multiple levels of decomposition.
Referring now to FIG. 3, one can reconstruct a rough approximation of the original image in the above manner. Thus a scaled down image 30 is utilized to generate scaled down principal component feature tiles 32 which are in turn utilized to obtain feature weights 34 that are transmitted at 36 along with the scaled feature tiles to obtain the aforementioned rough approximation, here illustrated at 38. Note as illustrated at 40 the scaled feature tiles are transmitted along with the associated weights shown at 42.
As illustrated by dotted line 50, the process can continue by subtracting the rough image 51, reconstructed from tiles, from the original image, here illustrated at 52, to obtain a residual image 54. One then scales down the residual image by changing the tile size to a smaller tile size as illustrated at 56, where again one obtains weights as illustrated at 58, which are transmitted along with the smaller tiles to obtain a residual approximation 60. When the residual approximation is added to the rough approximation, the result is a reconstructed image 62 with finer detail than is possible with the rough approximation. For even further refinement, the process may be iteratively applied, with new residual approximations being added to the next previous reconstruction for even further fineness of detail.
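A hedged sketch of this residual refinement step, again with the helpers from the earlier sketches; the finer 4 × 4 tile size and five residual feature tiles are assumptions chosen to match the FIG. 4 example discussed below:

```python
def refine(original, rough, t_fine=4, k=5):
    """Model the residual at a finer tile scale and add it back onto the rough image."""
    residual = original - rough                    # what the rough approximation missed
    pc_res = principal_component_tiles(residual, t_fine, k)   # finer residual feature tiles
    w_res = model_weights(residual, pc_res, t_fine)           # weights for the residual tiles
    return rough + reconstruct(w_res, pc_res, original.shape, t_fine)

better = refine(img, rough)                        # may be iterated on the new residual
```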
Referring to FIG. 4, what is depicted is a reconstruction of a model, here Lena, using extracted principal component tiles as illustrated in FIG. 5.
Note that with respect to FIG. 3, since the residual and rough approximation or model images are orthogonal, the residual image may be further decomposed and additional features extracted. Furthermore, these features need not be at the same scale as the features extracted to create the original image model. That is, one may retile the residual image at a different scale and train on the resultant tile set. After training, each tile in the image will have a weight for each principal component feature at each scale. The weights yield a compressed model for the image. The extracted principal feature tiles as well as the coefficient weights for each feature for all image tiles must be transmitted to the receiver. The reconstruction of FIG. 4 uses five 8×8 and five 4×4 principal feature tiles. The result is at or near current state of the art compression: about 32 dB at 0.14 bpp. To this one adds another 12% for principal component feature tiles. Note that the principal component feature tiles are shown in FIG. 5.
In obtaining the result of FIG. 4, one avails oneself of a scanning method which benefits from residual correlation in the image. The Hilbert scan is utilized for scanning the image, with the image result being delta coded. The Hilbert scan ensures that component weights in the x and y dimensions will be scanned in close two-dimensional proximity. The correlations in these weights combined with delta coding contribute to an entropy reduction improving the potential rate. This, in effect, allows one to exploit local correlation in the image at the next higher scale. Other schemes for encoding could be used, which may yield improved results.
Since the Hilbert scan is fractal in nature, the first 8×8 tile contains the first four 4×4 tiles, the second 8×8 tile contains the next four 4×4 tiles and so forth. This allows scaling without reconstituting the image while maintaining the Hilbert scanned order in all scales.
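A minimal sketch of the scanning idea: the Hilbert distance-to-(x, y) conversion below is the standard bit-manipulation algorithm rather than anything taken from the patent, and the delta coding is applied to one principal component's weight across tiles visited in Hilbert order:

```python
def hilbert_xy(order, d):
    """Map a distance d along the Hilbert curve to (x, y) on a 2**order x 2**order grid."""
    x = y = 0
    t, s = d, 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                                # rotate the quadrant when needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x, y = x + s * rx, y + s * ry
        t //= 4
        s *= 2
    return x, y

def delta_code(weight_grid):
    """Scan a square grid of tile weights in Hilbert order and delta code the result."""
    n = weight_grid.shape[0]                       # grid side, assumed to be a power of two
    order = n.bit_length() - 1
    scan = [weight_grid[y, x] for x, y in (hilbert_xy(order, d) for d in range(n * n))]
    return [scan[0]] + [b - a for a, b in zip(scan, scan[1:])]   # differences between neighbors
```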
Although the results in FIG. 4 match the state of the art in PSNR vs. rate, one has expended much more computation to achieve them.
In the subject invention it is the finding that one can use scale to limit the computation that is done.
How this is done is as follows: suppose one scales an image by averaging adjacent points. Then, for example, a 512×512 image could become a 256×256 image. Looking at the two images does not reveal much difference; they appear to be quite similar. The question then becomes whether similarly scaled extraction of tiles would yield similar principal component features. It is the finding of the subject invention that the answer to this question is yes. This is especially true for simple features, where averaging and aliasing typically do not have a large effect.
FIG. 6 shows the scaled images and corresponding principal component feature tiles. Note the similarity in the extracted principal component feature tiles. However, one can go one step further. The tile coefficient weights will be nearly identical for the two decompositions. Therefore one can train on the quarter size image with quarter size tiles and derive principal component tiles which are similar to those of the full scale extraction and which yield coefficient weights nearly identical to those of the full scale extraction.
The net result is that one can have reduced computation by a factor proportional to the square of the scaling. Moreover, one may reconstruct the full scale principal component feature tiles using interpolation, although one only needs to send the scaled principal component feature tiles. Therefore, one can also reduce the overhead in transmitting the principal component feature tiles by a factor proportional to scaling squared. What this means, referring back to FIG. 2, is that it is not necessary in generating the approximation of the original image to use the reconstructed full size principal component feature tiles. The approximation of the original image 12′ may in fact be generated utilizing the scaled principal component feature tiles 18′, thus reducing the overhead as described above.
It will be noted that if one has multiple levels of decomposition, the above savings will be increased, albeit with a minor loss in PSNR or rate. This is because of the averaging of the image and the coarseness of the scaled tiles. However, this may not be noticeable.
In order to practice the subject invention one first scales the image to the appropriate level. Then one scales the image tiles. Then, one trains on the scaled image tiles and transmits the scaled principal component tiles and the coefficient weights to a receiver.
One then optionally interpolates the scaled principal component feature tiles and uses them with the weights to construct an image model.
Finally one repeats the process on the current image residual for all scales taking direct sum of image models to obtain the final model.
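Putting the steps just listed together as a single driver, with the helper functions from the earlier sketches; the tile sizes, numbers of principal tiles, and number of scales are illustrative assumptions rather than values fixed by the patent:

```python
def model_image(img, scales=((16, 8, 10), (8, 4, 5))):
    """Multi-scale principal component model: each entry of 'scales' gives the full tile
    size, the scaled tile size, and the number of principal component tiles for one level."""
    approx = np.zeros_like(img, dtype=float)
    transmitted = []                               # what would be sent to the receiver
    for t_full, t_small, k in scales:
        residual = img - approx                    # current image residual
        small = halve(residual)                    # scale the residual to the next level
        pc_small = principal_component_tiles(small, t_small, k)   # train on scaled tiles
        w = model_weights(small, pc_small, t_small)
        transmitted.append((pc_small, w))          # scaled principal tiles + coefficient weights
        pc_full = interpolate_tiles(pc_small)      # optional interpolation back to full size
        approx = approx + reconstruct(w, pc_full, img.shape, t_full)
    return transmitted, approx
```

The direct sum of the per-scale image models is taken here simply by accumulating each level's reconstruction into the running approximation.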
This process yields an image model of good fidelity requiring much less computation and fewer bits transmitted for the principal component tiles. The downside is some small loss in PSNR or, correspondingly, an increase in rate for the same PSNR. However, as can be seen in FIG. 7, there is hardly any difference between the scaled feature Lena reconstruction and the Lena reconstruction of FIG. 4 utilizing full size tiles.
As can be seen from FIG. 8, the table indicates compression and PSNR for extraction using scale, whereas as shown in FIG. 9, the table shows the result of using scaled feature extraction. Note that the PSNR and the rate are approximately equivalent.
As expected, computation for the scaled feature extraction is about ⅝ of the original scheme, with the advantage improving dramatically as one adds more levels of processing.
Having now described a few embodiments of the invention, and some modifications and variations thereto, it should be apparent to those skilled in the art that the foregoing is merely illustrative and not limiting, having been presented by way of example only. Numerous modifications and other embodiments are within the scope of one of ordinary skill in the art and are contemplated as falling within the scope of the invention as limited only by the appended claims and equivalents thereto.

Claims (20)

1. A method for modeling an image comprising the steps of:
tiling an image at a predetermined scale to form small tile segments of the image;
combining the small segments of the image into a data matrix;
extracting principal components of the data matrix in terms of principal component feature tiles;
generating a set of coefficient weights corresponding to the principal component tiles;
scaling the principal component tiles to reduce the data therewith;
transmitting from a transmitting side the scaled principal component tiles and the weights associated with each image segment to a remote location;
interpolating the principal component tiles at the remote location to obtain full-scale principal component tiles;
computing a weighted sum of full-scale principal component tiles for each segment to obtain a coarse image at full scale;
constructing a coarse image at the transmitting side;
obtaining the difference between the original image and the coarse image at the transmitting side to obtain a residual image;
selecting a finer scale for the residual image;
producing finer scale residual image tiles from the finer scale residual image;
obtaining from the finer scale residual image tiles a finer set of principal component tiles;
forming a weighted sum of the finer-scaled principal component tiles to represent each residual image segment;
transmitting to the remote location the newly-obtained finer principal component tiles and the new weights associated with each residual image segment;
reconstructing the residual image at the remote location from the transmitted new, finer principal component tiles and the new weights associated therewith; and,
at the remote location summing the coarse and residual images to obtain an improved image representation.
2. A method of modeling an image, the method comprising:
generating reduced-size image tiles from an original image in a same proportion as a scaled image of the original image;
transforming the reduced-size image tiles into corresponding reduced-size principal component tiles;
extracting a set of weights corresponding to the reduced-size image tiles from the reduced-size principal component tiles; and
generating an image approximation of the original image from the reduced-size principal component tiles and the extracted weights.
3. The method of claim 2, further comprising scaling the original image before generating reduced-size image tiles.
4. The method of claim 3, wherein the scaling reduces the original image by half.
5. The method of claim 2, further comprising combining the reduced-size image tiles into a data matrix.
6. The method of claim 2, wherein extracting the set of weights corresponding to the reduced-size image tiles from the reduced-size principal component tiles comprises multiplying reduced-size principal component tiles and a corresponding segment of the original image.
7. The method of claim 2, further comprising communicating the reduced-size principal component tiles and the extracted weights.
8. The method of claim 2, further comprising obtaining from a finer scale residual image tiles a finer set of principal component tiles.
9. The method of claim 8, further comprising forming a weighted sum of the finer set of principal component tiles to represent each residual image segment.
10. The method of claim 9, further comprising communicating the finer set of principal component tiles and the new weights associated with each residual image segment.
11. The method of claim 9, further comprising reconstructing the residual image from the finer set of principal component tiles and the new weights associated therewith.
12. The method of claim 11, further comprising summing the coarse and residual images to obtain an improved image representation.
13. A method of modeling an image, the method comprising:
obtaining a difference between an original image and a coarse image, the difference defining a residual image;
producing finer scale residual image tiles from a finer scale residual image of the residual image;
obtaining from the finer scale residual image tiles a finer set of principal component tiles;
forming a weighted sum of the finer set of principal component tiles to represent each residual image segment;
constructing a reconstructed image from the finer set of principal component tiles and associated weights; and
summing the coarse image and reconstructed image to obtain an improved image representation.
14. The method of claim 13, further comprising scaling the original image before obtaining the difference between the original image and a coarse image.
15. The method of claim 14, wherein the scaling reduces the original image by half.
16. The method of claim 13, further comprising communicating the finer set of principal component tiles and the new weights associated with each residual image segment.
17. The method of claim 16, wherein constructing the reconstructed image and summing the coarse image and reconstructed image are done at a remote location.
18. A system for modeling an image, the system comprising:
an interface configured to receive an original image; and
a processor with programmed instructions to:
generate reduced-size image tiles from the original image in a same proportion as a scaled image of the original image;
transform the reduced-size image tiles into corresponding reduced-size principal component tiles;
extract a set of weights corresponding to the reduced-size image tiles from the reduced-size principal component tiles; and
generate an image approximation of the original image from the reduced-size principal component tiles and the extracted weights.
19. The system of claim 18, wherein the interface is configured to communicate the reduced-size principal component tiles and the extracted weights.
20. A computer program product including a computer readable medium having instructions stored thereon that when carried out by a computer cause the computer to perform the steps comprising:
generating reduced-size image tiles from an original image in a same proportion as a scaled image of the original image;
transforming the reduced-size image tiles into corresponding reduced-size principal component tiles;
extracting a set of weights corresponding to the reduced-size image tiles from the reduced-size principal component tiles; and
generating an image approximation of the original image from the reduced-size principal component tiles and the extracted weights.
US12/238,031 2002-01-31 2008-09-25 Computationally efficient modeling of imagery using scaled, extracted principal components Expired - Lifetime USRE42257E1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/238,031 USRE42257E1 (en) 2002-01-31 2008-09-25 Computationally efficient modeling of imagery using scaled, extracted principal components

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US35347602P 2002-01-31 2002-01-31
US10/334,816 US7113654B2 (en) 2002-01-31 2002-12-31 Computationally efficient modeling of imagery using scaled, extracted principal components
US12/238,031 USRE42257E1 (en) 2002-01-31 2008-09-25 Computationally efficient modeling of imagery using scaled, extracted principal components

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US10/334,816 Reissue US7113654B2 (en) 2002-01-31 2002-12-31 Computationally efficient modeling of imagery using scaled, extracted principal components

Publications (1)

Publication Number Publication Date
USRE42257E1 true USRE42257E1 (en) 2011-03-29

Family

ID=27668871

Family Applications (2)

Application Number Title Priority Date Filing Date
US10/334,816 Ceased US7113654B2 (en) 2002-01-31 2002-12-31 Computationally efficient modeling of imagery using scaled, extracted principal components
US12/238,031 Expired - Lifetime USRE42257E1 (en) 2002-01-31 2008-09-25 Computationally efficient modeling of imagery using scaled, extracted principal components

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US10/334,816 Ceased US7113654B2 (en) 2002-01-31 2002-12-31 Computationally efficient modeling of imagery using scaled, extracted principal components

Country Status (3)

Country Link
US (2) US7113654B2 (en)
AU (1) AU2003216129A1 (en)
WO (1) WO2003065306A2 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7995652B2 (en) * 2003-03-20 2011-08-09 UTC Fire & Security Americas Corporation, Inc. Systems and methods for multi-stream image processing
EP1703513A1 (en) * 2005-03-15 2006-09-20 Deutsche Thomson-Brandt GmbH Method and apparatus for encoding plural video signals as a single encoded video signal, method and apparatus for decoding such an encoded video signal
US7742660B2 (en) * 2005-03-31 2010-06-22 Hewlett-Packard Development Company, L.P. Scale-space self-similarity image processing

Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5377305A (en) 1991-10-01 1994-12-27 Lockheed Sanders, Inc. Outer product neural network
US5398121A (en) * 1990-04-23 1995-03-14 Linotype-Hell AG Method and device for generating a digital lookup table for printing inks in image reproduction equipment
US5488422A (en) * 1991-03-19 1996-01-30 Yves C. Faroudja Video scan converter including the modification of spatially interpolated pixels as a function of temporal detail and motion
US5500744A (en) * 1994-08-05 1996-03-19 Miles Inc. Method and apparatus for image scaling using parallel incremental interpolation
US5703965A (en) * 1992-06-05 1997-12-30 The Regents Of The University Of California Image compression/decompression based on mathematical transform, reduction/expansion, and image sharpening
US5838840A (en) * 1996-08-29 1998-11-17 Bst/Pro Mark Inspection device using a field mode video camera with interpolation to replace missing pixels
US6111988A (en) * 1994-07-01 2000-08-29 Commonwealth Scientific And Industrial Research Organisation Fractal representation of data
US6137914A (en) 1995-11-08 2000-10-24 Storm Software, Inc. Method and format for storing and selectively retrieving image data
US6157414A (en) 1997-08-25 2000-12-05 Nec Corporation Image display apparatus for enlargement or reduction of an image using an interpolation process
US6229926B1 (en) 1998-07-24 2001-05-08 Picsurf, Inc. Memory saving wavelet-like image transform system and method for digital camera and other memory conservative applications
US6249614B1 (en) 1998-03-06 2001-06-19 Alaris, Inc. Video compression and decompression using dynamic quantization and/or encoding
US6266452B1 (en) * 1999-03-18 2001-07-24 Nec Research Institute, Inc. Image registration method
US6292591B1 (en) 1996-07-17 2001-09-18 Sony Corporation Image coding and decoding using mapping coefficients corresponding to class information of pixel blocks
US6298162B1 (en) 1992-12-23 2001-10-02 Lockheed Martin Corporation Image compression/expansion using parallel decomposition/recomposition
US6343154B1 (en) 1998-01-20 2002-01-29 At&T Corp. Compression of partially-masked image data
US6347157B2 (en) 1998-07-24 2002-02-12 Picsurf, Inc. System and method for encoding a video sequence using spatial and temporal transforms
US6510254B1 (en) 1998-04-06 2003-01-21 Seiko Epson Corporation Apparatus and method for image data interpolation and medium on which image data interpolation program is recorded
US6879716B1 (en) * 1999-10-20 2005-04-12 Fuji Photo Film Co., Ltd. Method and apparatus for compressing multispectral images
US7035457B2 (en) * 2000-03-06 2006-04-25 Fuji Photo Film Co., Ltd. Method and apparatus for compressing multispectral images

Patent Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5398121A (en) * 1990-04-23 1995-03-14 Linotype-Hell AG Method and device for generating a digital lookup table for printing inks in image reproduction equipment
US5488422A (en) * 1991-03-19 1996-01-30 Yves C. Faroudja Video scan converter including the modification of spatially interpolated pixels as a function of temporal detail and motion
US5377305A (en) 1991-10-01 1994-12-27 Lockheed Sanders, Inc. Outer product neural network
US5703965A (en) * 1992-06-05 1997-12-30 The Regents Of The University Of California Image compression/decompression based on mathematical transform, reduction/expansion, and image sharpening
US6298162B1 (en) 1992-12-23 2001-10-02 Lockheed Martin Corporation Image compression/expansion using parallel decomposition/recomposition
US6111988A (en) * 1994-07-01 2000-08-29 Commonwealth Scientific And Industrial Research Organisation Fractal representation of data
US5500744A (en) * 1994-08-05 1996-03-19 Miles Inc. Method and apparatus for image scaling using parallel incremental interpolation
US6137914A (en) 1995-11-08 2000-10-24 Storm Software, Inc. Method and format for storing and selectively retrieving image data
US6292591B1 (en) 1996-07-17 2001-09-18 Sony Corporation Image coding and decoding using mapping coefficients corresponding to class information of pixel blocks
US5838840A (en) * 1996-08-29 1998-11-17 Bst/Pro Mark Inspection device using a field mode video camera with interpolation to replace missing pixels
US6157414A (en) 1997-08-25 2000-12-05 Nec Corporation Image display apparatus for enlargement or reduction of an image using an interpolation process
US6343154B1 (en) 1998-01-20 2002-01-29 At&T Corp. Compression of partially-masked image data
US6249614B1 (en) 1998-03-06 2001-06-19 Alaris, Inc. Video compression and decompression using dynamic quantization and/or encoding
US6510254B1 (en) 1998-04-06 2003-01-21 Seiko Epson Corporation Apparatus and method for image data interpolation and medium on which image data interpolation program is recorded
US6343155B1 (en) 1998-07-24 2002-01-29 Picsurf, Inc. Memory saving wavelet-like image transform system and method for digital camera and other memory conservative applications
US6229926B1 (en) 1998-07-24 2001-05-08 Picsurf, Inc. Memory saving wavelet-like image transform system and method for digital camera and other memory conservative applications
US6347157B2 (en) 1998-07-24 2002-02-12 Picsurf, Inc. System and method for encoding a video sequence using spatial and temporal transforms
US6266452B1 (en) * 1999-03-18 2001-07-24 Nec Research Institute, Inc. Image registration method
US6879716B1 (en) * 1999-10-20 2005-04-12 Fuji Photo Film Co., Ltd. Method and apparatus for compressing multispectral images
US7035457B2 (en) * 2000-03-06 2006-04-25 Fuji Photo Film Co., Ltd. Method and apparatus for compressing multispectral images

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Leonard E. Russo, "Prospects For Adaptive Principal Component Image Compression"; ARL Federated Laboratory 3rd Annual Symposium, Feb. 2-4, 1999; College Park, MD.

Also Published As

Publication number Publication date
US7113654B2 (en) 2006-09-26
WO2003065306A2 (en) 2003-08-07
AU2003216129A1 (en) 2003-09-02
WO2003065306A3 (en) 2004-03-11
US20030156764A1 (en) 2003-08-21

Similar Documents

Publication Publication Date Title
Hussain et al. Image compression techniques: A survey in lossless and lossy algorithms
Kaur et al. A review of image compression techniques
US7489827B2 (en) Scaling of multi-dimensional data in a hybrid domain
Meyer-Bäse et al. Medical image compression using topology-preserving neural networks
Latha et al. Collective compression of images using averaging and transform coding
Singh et al. Novel adaptive color space transform and application to image compression
Saenz et al. Evaluation of color-embedded wavelet image compression techniques
USRE42257E1 (en) Computationally efficient modeling of imagery using scaled, extracted principal components
US6760479B1 (en) Super predictive-transform coding
Kountchev et al. Inverse pyramidal decomposition with multiple DCT
Wang et al. Three-dimensional medical image compression using a wavelet transform with parallel computing
Baviskar et al. Performance evaluation of high quality image compression techniques
Arya et al. Medical image compression using two dimensional discrete cosine transform
Lin et al. The application of multiwavelet transform to image coding
He Peak transform for efficient image representation and coding
CN105872536B (en) A kind of method for compressing image based on dual coding pattern
US6633679B1 (en) Visually lossless still image compression for CMYK, CMY and Postscript formats
Zemliachenko et al. Peculiarities of hyperspectral image lossy compression for sub-band groups
Keser An Image Compression Method Based on Subspace and Downsampling
Ranjan et al. An Efficient Compression of Gray Scale Images Using Wavelet Transform
Yeo et al. A feedforward neural network compression with near to lossless image quality and lossy compression ratio
Fazli et al. JPEG2000 image compression using SVM and DWT
Wang et al. Image compression using wavelet transform and self-development neural network
Hachemi et al. Enhancement of DCT-Based Image Compression Using Trigonometric Functions
Singh et al. Comparative studies of various techniques for image compression algorithm

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: FRANTORF INVESTMENTS GMBH, LLC, DELAWARE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BAE SYSTEMS INFORMATION AND ELECTRONIC SYSTEMS INTEGRATION, INC.;REEL/FRAME:025577/0962

Effective date: 20080826

CC Certificate of correction
FPAY Fee payment

Year of fee payment: 8

AS Assignment

Owner name: OL SECURITY LIMITED LIABILITY COMPANY, DELAWARE

Free format text: MERGER;ASSIGNOR:FRANTORF INVESTMENTS GMBH, LLC;REEL/FRAME:037564/0078

Effective date: 20150826

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553)

Year of fee payment: 12