GB2305277A - A lossy data compression method - Google Patents
A lossy data compression method
- Publication number
- GB2305277A (application GB9617347A)
- Authority
- GB
- United Kingdom
- Prior art keywords
- image
- bit
- lossy
- data
- compressing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T9/00—Image coding
- G06T9/005—Statistical coding, e.g. Huffman, run length coding
-
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M7/00—Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
- H03M7/30—Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction
- H03M7/3084—Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction using adaptive string matching, e.g. the Lempel-Ziv method
- H03M7/3088—Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction using adaptive string matching, e.g. the Lempel-Ziv method employing the use of a dictionary, e.g. LZ78
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/41—Bandwidth or redundancy reduction
- H04N1/411—Bandwidth or redundancy reduction for the transmission or storage or reproduction of two-tone pictures, e.g. black and white pictures
Abstract
In a method for data compressing a raster-arranged pixel image (101), data segments of the image are compared to data segment patterns in a table to determine a classification of the image, where the classification is either clustered or dispersed. After the image is classified, a lossy compressing procedure (102) is performed by reducing each n x n-bit block of the pixel image to an m-bit segment (a count of black dots). The lossy compressed image (103) is accompanied by a classification indicator. Next, the lossy compressed image (103) is compressed with a lossless compressor (104) such as LZS8. Decompression is accomplished by decompressing (104) the compressed lossy compressed image (105) with a lossless decompressor (104) to recover the lossy compressed image (103). A lossy decompressing procedure then operates on the lossy compressed image (103) by employing the m-bit data segment to access a stored n x n-bit matrix from a pair of matrix tables in accordance with the classification indicator: one table contains n x n-bit patterns for images classified as clustered, and the other contains n x n-bit patterns for images classified as dispersed.
Description
A LOSSY DATA COMPRESSION METHOD
Cross-Reference To Related Application
The present application is related to the following co-pending U.S. patent application, assigned to the same assignee, entitled "AN OPTIMIZED HARDWARE COMPRESSION AND DECOMPRESSION ARCHITECT FOR USE BY AN IMAGE PROCESSOR IN A LASER PRINTER", Attorney Docket Number 10950615-1, incorporated herein by reference.
Technical Field
This invention relates to page printers, and more particularly, to a page printer having data compression facilities to accommodate a buffer memory size that is less than that required for a full page of print data.
Background of the Invention
Prior art page printers typically capture an entire page before any image is placed on paper. In such printers, formatting is performed either on the host computer or on a formatter within the printer. Since a laser printer engine operates at a constant speed, if new rasterized data is not available at a rate that keeps up with the engine's operation, a page "overrun" (also referred to as a "punt") occurs and the page is not printable.
To prevent print overruns, a variety of techniques are used. At first, a full raster bit map of an entire page was stored so that the print mechanism always had rasterized data awaiting action. Early laser printers, which printed at 300 dots per inch resolution, could use this technique because only approximately 1 megabyte of raster memory is required for each page. However, with a 600 dot per inch printer, approximately 4 megabytes of memory are required. Additionally, because laser printers achieve their rated speeds by pipelining raster data, additional raster memory is needed to run the printer at its rated speed. Without the additional memory, composition of the subsequent page cannot begin until the present page has been printed. To remain cost competitive, substantial effort has been directed to reducing the amount of memory required in a laser printer.
One technique for memory reduction involves the construction of a page description language. A page description language is built in two steps: during formatting, data received from the host computer is converted into a list of simple commands, called display commands, which describe what must be printed. The second step prepares the display command list for printing and entails a parsing of the display commands and a rendering of the described objects into a raster bit map. This procedure requires a full page raster bit map memory because the same memory is used for succeeding pages.
A further refinement to the page description language helps reduce the amount of required memory by sorting the display command list according to each command's vertical position on the page. The page is then divided into sections called page strips or "page intermediates", and each page strip is passed, sequentially, to the print engine for printing. When display commands within a page strip are rendered into raster data at a fast enough pace, the same memory used to store a first page strip can be reused for subsequent page strips further down the page.
Presently, printer resolution has reached 600 dots per inch and beyond. Such printers handle not only text but also line art and various types of images. Many of these printers use general purpose data compression techniques to minimize the amount of memory required by the printer.
Data compression systems, which are known in the art, encode a stream of digital data signals into compressed digital code signals and decode the compressed digital code signals back into the original data. Data compression refers to any process that attempts to convert data in a given format into an alternative format requiring less space than the original. The objective of data compression systems is to effect a savings in the amount of storage required to hold a given body of digital information. When that digital information is a digital representation of an image or text, data compression systems are divided into two general types: lossy and non-lossy (lossless).
The non-lossy systems have what is referred to as reciprocity. In order for a data compression system to possess the property of reciprocity, it must be possible to re-expand or decode the compressed data back into its original form without any alteration or loss of information. The decoded and original data must be identical to and indistinguishable from each other.
Thus, the property of reciprocity is synonymous with that of strict noiselessness used in information theory.
Some applications do not require strict adherence to the property of reciprocity. As stated above, one such application in particular is dealing with graphical data. Because the human eye is relatively insensitive to noise, some alteration or loss of information during the compression/decompression process is acceptable. This loss of information gives lossy data compression systems their name.
An important criterion in the design of data compression systems is compression effectiveness, which is characterized by the compression ratio. The compression ratio is the size of the data in uncompressed form divided by its size in compressed form. In order for data to be compressible, the data must contain redundancy. Compression effectiveness is determined by how effectively the compression procedure uses the redundancy in the input data. In typical computer-stored data, redundancy occurs both in the non-uniform usage of individual symbols (for example, digits, bytes, or characters) and in the frequent recurrence of symbol sequences, such as common words, blank record fields, and the like.
The data compression system should provide sufficient performance with respect to the data rates provided by and accepted by the printer. The rate at which data can be compressed is determined by the input data processing rate of the compression system. Sufficient performance is necessary to maintain the required data rates, thereby preventing a page "punt" caused by the laser being starved for data. Thus, the data compression and decompression system must have enough data bandwidth so as not to adversely affect the overall system.
Typically, the performance of data compression and decompression systems is limited by the computations necessary to compress and decompress, and by the speed of the system components, such as random access memory and the like, utilized to store statistical data and guide the compression process. This is particularly true when the compression and decompression systems are implemented in firmware, wherein firmware guides a general purpose central processing unit to perform the data compression/decompression process. In such a system, performance of a compression device is characterized by the number of processor cycles required per input character during compression. The fewer the number of cycles, the higher the performance. Firmware solutions are limited by the speed of the firmware compression/decompression, because firmware takes several central processing unit cycles to decompress each byte. Thus, firmware processes generally were tailored to decrease compression ratios in order to increase decompression speed.
General purpose data compression procedures are known in the prior art; three relevant procedures being the Huffman method, the Tunstall method, and the Lempel-Ziv method. One of the first general purpose data compression procedures developed is the Huffman method. Briefly described, the Huffman method maps fixed length segments of symbols into variable length words. The Tunstall method, which maps variable length segments of symbols into fixed length binary words, is complementary to the Huffman procedure.
Like the Huffman procedure, the Tunstall procedure requires a foreknowledge of the source data probabilities. Again, this foreknowledge requirement can be satisfied to some degree by utilizing an adaptive version which accumulates the statistics during processing of the data.
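For illustration only and not part of the patent text, the following minimal sketch builds a Huffman code table from symbol frequencies, showing fixed length symbols being mapped onto variable length bit strings; the function name and sample input are hypothetical.

```python
import heapq
from collections import Counter

def huffman_code(data: bytes) -> dict:
    """Map each byte value in the input to a variable-length bit string."""
    freq = Counter(data)
    if len(freq) == 1:                        # degenerate single-symbol input
        return {s: "0" for s in freq}
    # Heap entries: (frequency, tie-breaker, {symbol: partial code}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)       # two least frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

table = huffman_code(b"aaaaabbbcd")
print(table)  # frequent symbol 97 ('a') receives a shorter code than rare 100 ('d')
```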
The Lempel-Ziv procedure maps variable length segments of symbols into variable length binary words. It is asymptotically optimal when there are no constraints on the input or output segments. In this procedure the input data string is parsed into adaptively grown segments, each segment consisting of an exact copy of an earlier portion of the input string suffixed by one new symbol from the input data. The copy which is to be made is the longest possible and is not constrained to coincide with an earlier parsed segment. The code word which represents the segment in the output contains information consisting of a pointer to where the earlier copied portion begins, the length of the copy, and the new symbol. Additional teaching for the Lempel-Ziv data compression technique can be found in U.S. Patent Number 4,558,302, incorporated herein by reference.
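For illustration only and not part of the patent text, the following sketch parses an input string into (pointer, copy length, new symbol) triples in the manner just described. It is a simplified Lempel-Ziv style parser with hypothetical names, not the LZW or LZS8 implementation referenced elsewhere in this document.

```python
def lz_compress(data: bytes, window: int = 4096):
    """Parse the input into (offset, length, next_symbol) triples; offset 0 means no match."""
    i, out = 0, []
    while i < len(data):
        best_len, best_off = 0, 0
        for j in range(max(0, i - window), i):         # find the longest earlier copy
            length = 0
            while (i + length < len(data) - 1
                   and data[j + length] == data[i + length]):
                length += 1
            if length > best_len:
                best_len, best_off = length, i - j
        out.append((best_off, best_len, data[i + best_len]))
        i += best_len + 1
    return out

def lz_decompress(tokens):
    out = bytearray()
    for off, length, sym in tokens:
        for _ in range(length):                        # the copy may overlap the output
            out.append(out[-off])
        out.append(sym)
    return bytes(out)

sample = b"abababababc"
assert lz_decompress(lz_compress(sample)) == sample
```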
While the aforementioned data compression procedures are good general purpose lossless procedures, some specific types of redundancy may be compressed using other methods. One such lossless method, commonly known as run length encoding (RLE), is well suited for graphical image data.
With RLE, sequences of identical characters can be encoded as a count field plus an identifier of the repeated character. Typically, two characters are needed to mark each character run, so this encoding would not be used for runs of two or fewer characters. However, when dealing with a graphical image represented in digital data form, there can be large runs of the same character in any given line, making RLE an effective compression procedure for such information.
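For illustration only and not part of the patent text, a minimal run length encoder and decoder along these lines; the 255-run cap and the names are assumptions made for the sketch.

```python
def rle_encode(data: bytes):
    """Encode runs as (count, byte) pairs; worthwhile only when runs are long."""
    out, i = [], 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out.append((run, data[i]))
        i += run
    return out

def rle_decode(pairs):
    return b"".join(bytes([b]) * n for n, b in pairs)

row = b"\x00" * 40 + b"\xff" * 8 + b"\x00" * 16      # a mostly white raster row
assert rle_decode(rle_encode(row)) == row
```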
All of the aforementioned data compression procedures are highly dependent upon redundancy in the data to achieve significant compression ratios. One significant disadvantage of these procedures is that, for certain types of data, the compressed output may actually be larger than the input because the input data lacks sufficient redundancy. In the art of printing, such "incompressible" data is easily generated. Certain types of images are classified as either "ordered dither" or "error diffused". An ordered dither image (also called "clustered") is a half-tone image that includes half-tone gray representations throughout the page. Such images generally reflect substantial data redundancy and lend themselves to lossless techniques of data encoding such as those described above. However, error diffused images (also called "dispersed") exhibit little redundancy in their data and require different methods of compression. As a result, the use of a single data compression scheme in a page printer no longer enables such a printer to handle all types of image data.
In U.S. Patent Application Serial No. 07/940,111, filed September 3, 1992, entitled "Page Printer Having Adaptive Data Compression For Memory Minimization", assigned to the same assignee as this application and incorporated herein by reference, a page printer steps through various compression techniques in an attempt to accommodate a limited memory size that is less than that required for a full page of print data. In that application, when an image is unprintable because of low memory conditions, a "mode-M" compression technique is used first. Using this technique, an attempt is made to compress each block using run length encoding for each row and by encoding delta changes that occur from row to row within the block. If the "mode-M" compression technique is unsuccessful in providing enough of a compression ratio to allow printing of the page, a second attempt is made using an LZW type compression. Finally, if the LZW based compression technique is unsuccessful in obtaining a high enough compression ratio to allow printing of the page, a lossy compression procedure is used.
As shown in Fig. 1, the lossy compression technique commences by first sorting images on the page based upon levels of data compression.
The method described here is also known as a "cell-based" compression method. The examination of cells starts from the top right corner of the image and ends after processing the bottom left corner cell. The algorithm looks at each row of cells before proceeding to the next row of cells. During compression, the number of black dots in a cell is counted. The number is then saved in the same order in which the cells were read. During decompression, the number of black dots in a cell is read and mapped into a dither pattern. The dither pattern represents the final output for the cell. The final image is a close approximation of the original image.
In detail, turning to FIGs. 1, 2, and 3, the "cell based" lossy compression method will be described in greater detail. First, each 4x4-bit block of the image is accessed 201, the number of "on" bits in the block is counted 202 and stored 203. A count of 16 is stored as 15. In this manner, each 4x4-bit cell of the image is represented by a 4-bit value and a further assigned value that indicates whether the image was clustered or dispersed.
The 4-bit compressed values are used to represent the original black coverage of the original cells. Decompression will be able to maintain that coverage even though the black data will not necessarily be in the same positions.
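For illustration only and not part of the patent text, a minimal sketch of this counting step; the image layout (a list of rows of 0/1 pixels) and the function name are assumptions. Each 4x4 cell is reduced to a 4-bit count of black pixels, with a full cell of 16 stored as 15.

```python
def lossy_compress_4x4(image, width, height):
    """image: list of rows, each a list of 0/1 pixels; width and height are
    assumed to be multiples of 4. Returns one 4-bit token per cell."""
    tokens = []
    for cy in range(0, height, 4):                     # row of cells
        for cx in range(0, width, 4):                  # cell within the row
            black = sum(image[cy + dy][cx + dx]
                        for dy in range(4) for dx in range(4))
            tokens.append(min(black, 15))              # a count of 16 is stored as 15
    return tokens
```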
When the lossy compressed image is decompressed back to 4x4 cells, it is initially determined whether the block had a clustered or dispersed value assigned to it. If it was assigned a clustered value, a 4x4 matrix from Fig. 2 is chosen in accordance with the 4-bit value of the block.
For instance, if the 4-bit value indicated that there were 11 black dots, matrix 252 is the one chosen from those shown in Fig. 2. By contrast, if the block is indicated as dispersed and the 4-bit value is again equal to 11, matrix 254 in Fig. 3 is the one chosen. Thus, the clustered or dispersed indicator determines which group of matrices is employed to perform the decompression and the 4-bit value determines which matrix in the group of matrices is the one chosen. The patterns in the matrices of Figs. 2 and 3 are empirically derived and provide for recovery of some of the lost information that is not otherwise directly recoverable from the 4-bit compression value.
These tables are representative of a general case. Tables may vary from device to device, as is known by one skilled in the art.
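For illustration only and not part of the patent text, a sketch of the corresponding decompression step. The empirically derived matrices of Figs. 2 and 3 are not reproduced here; table_clustered and table_dispersed are hypothetical stand-ins, each mapping a 4-bit count (0 to 15) to a 4x4 matrix of 0/1 pixels.

```python
def lossy_decompress_4x4(tokens, width, height, clustered,
                         table_clustered, table_dispersed):
    """Rebuild an approximate image by pasting one dither matrix per 4x4 cell."""
    table = table_clustered if clustered else table_dispersed
    image = [[0] * width for _ in range(height)]
    it = iter(tokens)                              # same cell order as compression
    for cy in range(0, height, 4):
        for cx in range(0, width, 4):
            matrix = table[next(it)]               # the 4-bit count selects the pattern
            for dy in range(4):
                for dx in range(4):
                    image[cy + dy][cx + dx] = matrix[dy][dx]
    return image
```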
Summary of the Invention
In order to accomplish the present invention there is provided a method for data compressing a raster-arranged pixel image. To perform the method, first a first table of data segment patterns indicative of an image of clustered bit patterns and of an image comprised of dispersed bit patterns is provided. Next, data segments of an image are compared to the data segment patterns in the first table to determine a classification of the image, where the classification is either clustered or dispersed. After the image is classified, a lossy compressing procedure is performed by reducing each n x n-bit block of the pixel image to an m-bit segment. The lossy compressed image is accompanied by a classification indicator. Next, the lossy compressed image is compressed with a lossless compressor. This twice compressed image must be decompressed to regain a reasonable facsimile of the original image.
Decompression is accomplished by decompressing the compressed lossy compressed image with a lossless decompressor to recover the lossy compressed image. A lossy decompressing procedure then operates on the lossy compressed image by employing the m-bit data segment to access a stored n x n pixel matrix from a pair of matrix tables in accordance with the classification indicator. One table contains n x n-bit images classified as clustered and the other table contains n x n-bit images classified as dispersed.
Brief Description of the Drawings
A better understanding of the invention may be had from the consideration of the following detailed description taken in conjunction with the accompanying drawings in which:
FIG. 1 is a logical flow diagram showing the "cell-based" lossy compression procedure.
FIG. 2 illustrates decompression matrices employed for clustered images.
FIG. 3 illustrates decompression matrices employed for dispersed images.
FIG. 4 provides a graphical representation of the preferred embodiment in accordance with the present invention.
Detailed Description Of The Preferred Embodiments
The present invention is not limited to the specific embodiment illustrated herein. Although not discussed in the aforementioned application, it was discovered that the lossy data has a unique property that allows further compression to achieve a higher compression ratio during a second pass of compression. The data stream produced by the cell-based lossy compression can be thought of as a half-tone representation of the image. The lossy data, therefore, closely resembles the original image data. Most compression methods produce a data stream that does not resemble the original data stream and is not compressible by other compression methods. The two most popular kinds of lossless compression algorithms are either dictionary based (Huffman, LZW) or arithmetically based (Q-Coder and skew coder). The compressed data streams from these algorithms are very different from the data they represent and do not compress well using a second pass compressor. In contrast, the lossy data stream is a close approximation of the original data. In fact, the lossy compressor filters out the random placement of dots, replacing the actual dot placement with a token representing the dither pattern used during decompression. The lossy data, therefore, has noticeable patterns and can be compressed further by one of the dictionary based or arithmetically based compression techniques.
A second property of the lossy data, the token size, increases the compression ratio during the second pass. The lossy 4x4 cell uses a 4-bit token, thus two neighboring cells are placed in every byte of lossy data. Using a dictionary based compression algorithm with a byte as the base size, such as LZS8, additional compression of the lossy data can be achieved. The commonality of the two token sizes increases the compression ratio for the LZS8. The LZS8 looks for repeated strings of bytes; having the lossy 4x4 data divisible into bytes increases the likelihood of repeated strings in the lossy data. A 4-bit token size for the lossy 4x4 cells typically increases the compression ratio of the LZS8. By mapping tokens whose matrix patterns are identical onto a single value, the LZS8 compression ratio for the lossy data may increase further. For example, in FIG. 2, 9 black dots and 10 black dots use the same pattern. Thus, if all cells that contain 10 black dots are mapped into the 9 black dot pattern, the lossy data may exhibit an increased redundancy that the LZS8 can compress.
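For illustration only and not part of the patent text, a sketch of packing two 4-bit cell tokens into each byte, with an optional remapping of token values that share a dither pattern (the 10-to-9 mapping is used as the example); the names are assumptions. In a software experiment, a general purpose byte-oriented compressor such as zlib could stand in for the LZS8 hardware unit on the packed output.

```python
def pack_tokens(tokens, merge_equivalent=False):
    """Pack two 4-bit cell tokens into each byte of the lossy data stream.
    merge_equivalent optionally maps token values that decompress to the same
    dither pattern onto one value (e.g. 10 -> 9), increasing byte-level
    repetition for the dictionary based second pass."""
    remap = {10: 9} if merge_equivalent else {}
    t = [remap.get(v, v) for v in tokens]
    if len(t) % 2:                                 # pad an odd token count
        t.append(0)
    return bytes((t[i] << 4) | t[i + 1] for i in range(0, len(t), 2))
```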
In the preferred embodiment of the lossy compression method described above, there are two cell sizes: 4x4 and 8x8. A lossy cell size of 4x4 guarantees a compression ratio of 4 to 1, with each 4x4 cell represented by a 4-bit token. A lossy cell size of 8x8 guarantees a 10.66 to 1 compression ratio, with each 8x8 cell represented by a 6-bit token.
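The arithmetic behind these guaranteed ratios is simply the bits in an uncompressed cell divided by the bits in its token:

```python
ratio_4x4 = (4 * 4) / 4    # 16 bits reduced to a 4-bit token:  4.0, i.e. 4 to 1
ratio_8x8 = (8 * 8) / 6    # 64 bits reduced to a 6-bit token: ~10.67, i.e. 10.66 to 1
```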
With this observation in mind, the present invention compresses images first using a lossy algorithm and then further compresses the output from the lossy algorithm with the LZS8 compressor. In the preferred embodiment, to meet real time constraints, the LZS8 compressor is implemented by a hardware compression unit described in co-pending application Attorney Docket Number 10950633 entitled "An Optimized Hardware Compression And Decompression Architect For Use By An Image Processor In A Laser Printer". However, the use of a hardware compression unit is not necessary for the present invention. The image data is first compressed using the lossy compression method and then compressed using the hardware compression unit. The hardware compressor compresses the lossy data well, improving the overall compression ratio without additional loss in image quality.
During the compression process, after the first pass through the lossy compressor, a second compression pass is made through the hardware compression unit. The lossy compressed data is compressed an average of between 1.5 to 1 and 4 to 1 when using the hardware compressor unit, for an overall compression ratio between 6 to 1 and 16 to 1. If, on the other hand, the hardware compressor unit expands the lossy data, then the worst case compression ratio is 4 to 1, which is accomplished by saving the lossy data without hardware compression applied to it.
Decompression operates in the reverse order. Namely, the image data is first decompressed with the hardware compression unit using the LZS8 type dictionary based technique. Output of the hardware compressor unit is then passed through the lossy image decompression process, thereby producing a reasonable facsimile of the original image data. Referring now to
Fig. 4, the lossy compression technique described above is shown in a graphical representation. Original image data 101 is first passed through the cell-based lossy compressor 102. This process may be implemented either in firmware or in a dedicated hardware unit. Output of the lossy procedure 102 is temporarily stored as lossy image data 103. Next, image processor 106 instructs the hardware compression unit 104 to determine if it can further reduce the size of lossy image data 103. As described in more detail in co-pending application 10950633, the hardware compression unit can be configured to quickly determine whether it can further compress the input data. Assuming the hardware compression unit 104 is successful in further compressing the lossy image data 103, the output of the hardware compression unit is stored as hardware compressed lossy image data 105. Decompression operates in the opposite direction; namely, if the image data passed through the hardware compression unit during the compression process, it must now first pass through the hardware compression unit 104 for decompression. Decompressed data is then stored as lossy image data 103. Finally, the lossy image data 103 is processed by the lossy decompressor as described above.
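For illustration only and not part of the patent text, an end-to-end sketch of the flow of Fig. 4, reusing the hypothetical helpers sketched earlier and using zlib as a software stand-in for the LZS8 hardware compression unit; the fallback branch keeps the worst case at the 4 to 1 lossy ratio when the second pass would expand the data.

```python
import zlib

def compress_page(image, width, height, clustered):
    tokens = lossy_compress_4x4(image, width, height)    # first pass (102)
    lossy_data = pack_tokens(tokens)                      # lossy image data (103)
    second = zlib.compress(lossy_data)                    # stand-in for LZS8 unit (104)
    if len(second) < len(lossy_data):
        return ("lossy+lossless", clustered, second)      # compressed data (105)
    return ("lossy-only", clustered, lossy_data)          # worst case stays at 4 to 1

def decompress_page(blob, width, height, table_clustered, table_dispersed):
    kind, clustered, payload = blob
    lossy_data = zlib.decompress(payload) if kind == "lossy+lossless" else payload
    tokens = []
    for byte in lossy_data:                               # two cells per byte
        tokens.extend((byte >> 4, byte & 0x0F))
    return lossy_decompress_4x4(tokens, width, height, clustered,
                                table_clustered, table_dispersed)
```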
This new compression method increases the ability to print complex pages in a low memory printer. For example, with a 1 megabyte RAM, 600 DPI printer, compression must frequently be used to print complex pages.
For such a printer to print a letter size raster image (roughly a 4 megabyte raster image), the compression method needs to achieve an overall compression ratio greater than 6 to 1. While the hardware compression can print most pages without using lossy compression, there exist pages for which the hardware compression unit cannot achieve the necessary compression ratio.
For these pages to be printable on a low memory printer, a lossy compression technique is used. However, the 4x4 cell based lossy compression technique guarantees only a 4 to 1 compression ratio. Thus, it does not quite provide the necessary compression ratio to fit a 600 DPI letter sized raster image into 1 megabyte of RAM. By passing the cell based lossy compressed data through a dictionary based compressor, the necessary compression ratios can generally be achieved.
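As a rough check of the figures above (simple arithmetic on the stated page size and resolution, not taken from the patent text):

```python
pixels = (8.5 * 600) * (11 * 600)   # letter page at 600 DPI: 5100 x 6600 pixels
megabytes = pixels / 8 / 2**20      # at 1 bit per pixel, about 4.0 MB of raster data
# Only part of the printer's 1 megabyte of RAM is free for the compressed page,
# so the required overall ratio is greater than 6 to 1 rather than the naive 4 to 1.
print(round(megabytes, 2))
```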
Although the preferred embodiment of the invention has been illustrated and described, it is readily apparent to those skilled in the art that various modifications may be made therein without departing from the spirit of the invention or from the scope of the appended claims.
Claims (10)
1. A method for data compressing a raster-arranged pixel image (101), comprising the steps of:
providing a first table of data segment patterns indicative of an image of clustered bit patterns (Fig. 2) and of an image comprised of dispersed bit patterns (Fig. 3);
comparing (200) data segments (201) of an image to said data segment patterns in said first table to determine a classification of said image, said classification comprising clustered or dispersed;
lossy compressing (202, 102) said image by reducing each n x n-bit block of said pixel image to an m-bit segment and accompanying (203) said lossy compressed image by a classification indicator;
compressing (104) said lossy compressed image (103) with a lossless compressor (104);
decompressing (104) said compressed lossy compressed image (105) with a lossless decompressor (104) to recover said lossy compressed image (103); and
lossy decompressing (102) said lossy compressed image (103) by employing said m-bit data segment to access a stored n x n pixel matrix from a pair of matrix tables (Fig. 2, Fig. 3) in accordance with said classification indicator, one said table including n x n-bit images classified as clustered (Fig. 2) and another said table including n x n-bit images classified as dispersed (Fig. 3), each said table addresses in accordance with said n-bit data segment.
2. The method of claim 1 wherein said step of lossy compressing further including the steps of:
counting (202) a number of pixels in said n x n-bit block that are set to a first value; and
equating (203) said m-bit segment to said number.
3. A method for data compressing a raster-arranged pixel image (101), comprising the steps of:
providing a table of data segment patterns indicative of an image of bit patterns;
lossy compressing (202, 102) said image by reducing each n x n-bit block of said pixel image to an m-bit segment;
compressing (104) said lossy compressed image (103) with a lossless compressor (104);
decompressing (104) said compressed lossy compressed image (105) with a lossless decompressor (104) to recover said lossy compressed image (103); and
lossy decompressing (102) said lossy compressed image (103) by employing said m-bit data segment to access a stored n x n pixel matrix from a matrix table (Fig. 2, Fig. 3), said matrix table including n x n-bit images, said table addressed in accordance with said m-bit data segment.
4. The method of claim 3 wherein said lossless compressor (104) and said lossless decompressor (104) are Lempel-Ziv-Welch (LZW) based procedures.
5. The method of claim 3 wherein said step of lossy compressing (102) further including the steps of:
counting (202) a number of pixels in said n x n-bit block that are set to a first value; and
equating (203) said m-bit segment to said number.
6. A method for data compressing a raster-arranged pixel image, comprising the steps of:
providing a table of data segment patterns (Fig. 2, Fig. 3) indicative of an image of bit patterns;
lossy compressing (202, 102) said image by reducing each n x n bit block of said pixel image to an m-bit segment; and
compressing (104) said lossy compressed image (103) with a lossless compressor (104).
7. The method of claim 6 wherein said step of lossy compressing (202, 102) further including the steps of:
counting (202) a number of pixels in said n x n-bit block that are set to a first value; and
equating (203) said m-bit segment to said number.
8. The method of claim 6 further comprising the steps of:
decompressing (104) said compressed lossy compressed image (105) with a lossless decompressor (104) to recover said lossy compressed image (103); and
lossy decompressing (102) said lossy compressed image (103) by employing said m-bit data segment to access a stored n x n pixel matrix from a matrix table (Fig. 2, Fig. 3), said matrix table including n x n-bit images, said table addressed in accordance with said m-bit data segment.
9. The method of claim 8 wherein said table of data segments including patterns indicative of an image of clustered bit patterns (Fig. 2) and of an image comprised of dispersed bit patterns (Fig. 3), the method comprising the additional steps of:
comparing (200) data segments (201) of said image to said data segment patterns in said first table to determine a classification of said image, said classification comprising clustered or dispersed; and
associating a classification indicator with said lossy compressed image (103).
10. The method of claim 9 wherein said step of lossy decompression (102) further employing said m-bit data segment to access a stored n x n pixel matrix from a pair of matrix tables in accordance with said classification indicator, one said table including n x n-bit images classified as clustered (Fig. 2) and another said table including n x n-bit images classified as dispersed (Fig. 3), each said table addresses in accordance with said n-bit data segment.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US52858295A | 1995-09-15 | 1995-09-15 |
Publications (2)
Publication Number | Publication Date |
---|---|
GB9617347D0 (en) | 1996-10-02 |
GB2305277A (en) | 1997-04-02 |
Family
ID=24106288
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB9617347A (published as GB2305277A, Withdrawn) | A lossy data compression method | 1995-09-15 | 1996-08-19 |
Country Status (1)
Country | Link |
---|---|
GB (1) | GB2305277A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5757852A (en) * | 1997-01-24 | 1998-05-26 | Western Atlas International, Inc. | Method for compression of high resolution seismic data |
WO1999053677A3 (en) * | 1998-04-09 | 2000-01-06 | Koninkl Philips Electronics Nv | Lossless encoding/decoding in a transmission system |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111327327A (en) * | 2020-03-20 | 2020-06-23 | 许昌泛网信通科技有限公司 | Data compression and recovery method |
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4903317A (en) * | 1986-06-24 | 1990-02-20 | Kabushiki Kaisha Toshiba | Image processing apparatus |
Also Published As
Publication number | Publication date |
---|---|
GB9617347D0 (en) | 1996-10-02 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WAP | Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1) |