AU2005202481A1 - Compression of image data - Google Patents


Info

Publication number
AU2005202481A1
Authority
AU
Australia
Prior art keywords
image
resolution
computer program
edge
pixels
Prior art date
Legal status
Abandoned
Application number
AU2005202481A
Inventor
James Philip Andrew
Michael Jan Lawther
Timothy Merrick Long
Current Assignee
Canon Inc
Original Assignee
Canon Inc
Priority date
Filing date
Publication date
Priority claimed from AU2004903604A
Application filed by Canon Inc filed Critical Canon Inc
Priority to AU2005202481A
Publication of AU2005202481A1

Description

S&F Ref: 717643
AUSTRALIA
PATENTS ACT 1990
COMPLETE SPECIFICATION FOR A STANDARD PATENT

Name and Address of Applicant: Canon Kabushiki Kaisha, of 30-2, Shimomaruko 3-chome, Ohta-ku, Tokyo, 146, Japan
Actual Inventor(s): Timothy Merrick Long, Michael Jan Lawther, James Philip Andrew
Address for Service: Spruson & Ferguson, St Martins Tower, Level 31 Market Street, Sydney NSW 2000 (CCN 3710000177)
Invention Title: Compression of image data
Associated Provisional Application Details: [33] Country: AU [31] Appl'n No(s): 2004903604 [32] Application Date: 30 Jun 2004

The following statement is a full description of this invention, including the best method of performing it known to me/us:

COMPRESSION OF IMAGE DATA

COPYRIGHT NOTICE

This patent specification contains material that is subject to copyright protection. The copyright owner has no objection to the reproduction of this patent specification or related materials from associated patent office files for the purposes of review, but otherwise reserves all copyright whatsoever.
FIELD OF INVENTION

The present invention relates generally to the compression of image data, and in particular to the compression of such data in a printing system.
BACKGROUND
Image compression is widely used in color raster-image-processing systems, such as those found in printers. These systems use image compression to reduce the amount of memory required to store raster images in the stages between pixel generation and the final output printed by a printer. Reducing the amount of memory reduces the total system cost.
In color printing environments, the total uncompressed size of a generated pixel image is often large, and it is advantageous to avoid storing the entire image.
Such pixel images are typically generated in raster or band order. In particular, pixels, scanlines, groups of scanlines, or tiles are emitted in a stream from a raster image processor (RIP) that has as input an object graphic description of the page to be printed. In some simple cases, this raster image data stream may be fed directly to a print engine, but in many circumstances this is not possible. This may be because the print engine requires real-time data delivery, because the image must be kept in case error recovery is required, or because the image must be kept for printing later in a duplexing system, amongst other reasons. In systems where the raster image data must be kept, the raster image data is often fed to a compressor in a pipeline fashion such that intermediate storage between the RIP and the compressor is small compared with the total uncompressed image size. The output of the compressor is also considerably smaller than the uncompressed image size.
Many compression schemes can be used in this environment. One widely used method for color images is the baseline method of the Joint Photographic Experts Group (JPEG) standard. However, compression using the baseline JPEG standard suffers from a number of flaws that make its use difficult in such printing systems. Pages to be printed can comprise text, graphics, and natural image data.
JPEG compresses natural image data well, but typically does not compress text and graphics well. In particular, significant artefacts result as the amount of compression used increases.
The so-called mixed-raster-content imaging model attempts to overcome this problem by using a foreground, background, and mask raster image to represent a page. The binary mask image, typically at the resolution of an original raster image and at a higher resolution than the foreground and background images, selects for each pixel whether the pixel is a foreground or background pixel. Each image is compressed separately with an appropriate compression technique. While facilitating better compression for mixed content (text, graphics, and images), there remain problems with this method. Firstly, there are three separate images (layers) that require compression. Secondly, the segmentation into foreground and background images from a given raster image can be complex. Thirdly, this simple three-layer model is often unable to represent accurately the intersection of more than two objects in a local region at a high resolution.
Other existing compression methods target compressing raster images that are a mix of text, graphics and natural images. However, these methods suffer from various disadvantages. In many cases, segmentation of the page is required, which is difficult to perform accurately in the general case. In other cases, classification of blocks of pixel data is required, which also can be difficult to perform accurately in the general case. Inaccurate classification or segmentation can result in a degraded image quality.
These problems may also occur in many other related raster-image-processing systems, such as digital cameras and scanners. Further, images generated on a general-purpose computer and then compressed can suffer from these problems.
SUMMARY
In accordance with an aspect of the invention, there is provided a method of encoding a high-resolution image. The method comprises: finding at least one edge in the high-resolution image; encoding the found edges in raster format; and encoding an alternate representation of the high-resolution image.
The method may further comprise the step of generating at least one edge mask comprising information representing the found edges from the high-resolution image. Two 1-bit edge masks or one 2-bit edge mask may be created.
The encoding of found edges may comprise compressing the found edges.
The encoding of found edges may comprise losslessly compressing the edge mask.
Encoding the alternate representation may comprise reducing the resolution of the high-resolution image. Encoding of the alternate representation may comprise lossy compressing the reduced resolution image.
In accordance with another aspect of the invention, there is provided a method of compressing an input raster image. The method comprises the steps of: generating one or more mask images from the input raster image, the one or more masks being representative of edges in the input raster image; compressing the one or more mask images; generating a lower resolution image from the input raster image; and compressing the lower resolution image.
The one or more mask images may be compressed using LZW compression. The lower resolution image may be compressed using JPEG compression.
In accordance with still another aspect of the invention, there is provided a method of forming a high-resolution image. The method comprises forming the high-resolution image from a low resolution image and high-resolution edge information stored in raster format, the high-resolution edge information comprising information about at least one edge found in the high-resolution image.
The forming step may comprise the step of selecting between at least two pixels of the low-resolution image dependent upon the high-resolution edge information to form at least one pixel of the high-resolution image.
In accordance with a further aspect of the invention, there is provided a method of generating a high-resolution image from a low resolution image and high-resolution edge information, the method comprising determining colors of pixels of the high-resolution image from colors of pixels of the low resolution image, wherein the color of at least one high-resolution image pixel is determined from the color of at least one low resolution image pixel dependent on the high-resolution edge information.
The high-resolution edge information may control selection between at least two low-resolution image pixels in the determination of color of the at least one high-resolution image pixel.
In accordance with yet another aspect of the invention, there is provided a method of forming an output image on a raster output device. The method comprises: receiving one or more compressed edge images of an input image representation and a compressed lower resolution raster image of the input image representation, the input image representation having a maximum resolution; decompressing the compressed raster image to an uncompressed raster image; decompressing the compressed edge images; and forming the output image on the raster output device dependent upon the uncompressed raster image and the decompressed edge images.
The method may further comprise the steps of: determining the one or more edge images representative of edges in the input image representation; compressing the edge images; determining a raster image representative of the input image representation at a resolution lower than the maximum resolution; and compressing the raster image.
The method may further comprise the step of repairing one or more pixels of the uncompressed raster image using the one or more decompressed edge images to form the output image.
In accordance with yet another aspect of the invention, there is provided a method of determining the approximate color of a group of unknown high-resolution pixels. The method comprises calculating an approximate average color of the group of unknown high-resolution pixels from colors of a group of known high-resolution pixels and a color of a known low resolution pixel, wherein an area of the low resolution pixel covers areas of the known and unknown high-resolution pixels and approximates an average color of the known and unknown high-resolution pixels.
In accordance with further aspects of the invention, there are provided apparatuses and computer program products for implementing the foregoing methods.
BRIEF DESCRIPTION OF THE DRAWINGS

Embodiments of the invention are described hereinafter with reference to the drawings, in which:

Fig. 1 is a high-level flowchart providing an overview of the compression part of the compression system in accordance with an embodiment of the invention;
Fig. 2 is a flowchart illustrating details of a mask creation step of Fig. 1;
Fig. 3 is a flowchart illustrating details of a step of finding edges and updating masks in Fig. 2;
Fig. 4 is a high-level flowchart providing an overview of the decompression part of the compression system in accordance with the embodiment of the invention;
Fig. 5 is a flowchart illustrating details of a step of Fig. 4 for finding damaged pixels using horizontal edges;
Fig. 6 is a flowchart illustrating details of a step of Fig. 4 for finding damaged pixels using vertical edges;
Fig. 7 is a flowchart illustrating details of a step of Fig. 4 for searching for pixels indicating damaged blocks;
Fig. 8 is a flowchart illustrating details of a step of Fig. 7 for repairing all the pixels in a block individually;
Fig. 9 is a flowchart illustrating details of a step of Fig. 8 for repairing a single pixel;
Fig. 10 is a flowchart illustrating details of a step of Fig. 9 for adding surrounding pixels to the queue;
Fig. 11 illustrates examples of an original input image, horizontal and vertical edge masks, and a resized image;
Fig. 12 shows an example of an input image for which no clean pixel can be found to repair a pixel;
Fig. 13 shows the image of Fig. 12 after it has been downsampled and upsampled;
Fig. 14 shows the image of Fig. 13 after an initial pass using the decompression method of Fig. 4;
Fig. 15 shows a resulting image obtained by replacing the un-repaired pixels of Fig. 14 with calculated values;
Fig. 16 is a block diagram illustrating portions of an input image and a corresponding edge mask determined in accordance with Figs. 1-3;
Fig. 17 is a high-level block diagram of a compression system in accordance with an embodiment of the invention;
Fig. 18 is a high-level block diagram of a decompression system in accordance with an embodiment of the invention; and
Fig. 19 is a block diagram of a computer system with which the embodiments of the invention may be practiced.
DETAILED DESCRIPTION

Methods, systems and gateways are disclosed for encoding a high-resolution image, for compressing an input raster image, and for forming a high-resolution image, amongst others. In the following description, numerous specific details, including particular lossless compression techniques, lossy compression techniques, image resolution reduction techniques and the like, are set forth.
However, from this disclosure, it will be apparent to those skilled in the art that modifications and/or substitutions may be made without departing from the scope and spirit of the invention. In other circumstances, specific details may be omitted so as not to obscure the invention.
Where reference is made in any one or more of the accompanying drawings to steps and/or features which have the same reference numerals, those steps and/or features have for the purposes of this description the same function(s) or operation(s), unless the contrary intention appears.
In the context of this specification, the word "comprising" has an open-ended, non-exclusive meaning: "including principally, but not necessarily solely", but neither "consisting essentially of" nor "consisting only of". Variations of the word "comprising", such as "comprise" and "comprises", have corresponding meanings.
Overview

The embodiments of the invention are suited to compressing image data output by a raster image processor in a printing system. However, the embodiments of the present invention are not limited to this particular application and may also, for example, be useful for compressing other types of image data.
One embodiment of the invention may be implemented as a software application executing on a standard personal computer (PC) that compresses image data. Alternatively, the algorithm may be implemented in an embedded environment, such as a photocopier, for example. A Raster Image Processor (RIP) may generate the data input to the compression method.
A RIP system generally takes as input a high-level description of a page or document that is to be printed. For example, typical inputs include PostScript, PDF, and Windows GDI. An optional interpreter converts the high-level page description into a display list, which is in turn converted into a raster image by the RIP system. The raster image is a raster-ordered array of image pixels, for example in the RGB color space, which is subsequently printed by a given printer.
One embodiment of the compression system takes this raster image output and compresses it. The compressed representation is transmitted to a printer. The printer decompresses the compressed image, color converts the decompressed image to an appropriate CMYK printer color space, and prints the converted image using the printer engine to output a printed page.
In another embodiment, the RIP system is resident on the printer (or, e.g., a photocopier). In this embodiment, a high-level page description language, or display list, may be transmitted from a host computer to the printer. An interpreter and a RIP system embedded in the printer convert a page description to a raster image. The raster image is compressed according to the description below and spooled (stored) for later printing on the printer. Spooling is desired for a number of reasons. In a long printing pipeline, several pages are spooled to aid error recovery and re-printing of pages. Laser printer engines usually require real-time printing, and real-time delivery cannot be guaranteed by most RIP systems.
Hence, a whole page is generated, compressed, and spooled. The printer decompresses in real-time and prints the page. Spooling is also useful for duplex and 2-up printing.
In the foregoing embodiments, compression is desired to reduce bandwidth and storage requirements. Generally speaking, a compression system has two parts: a compression part and a decompression part. The compression part is described first hereinafter.
Fig. 17 is a block diagram of a compression system 1700 in accordance with an embodiment of the invention. In this system, an input image 1710 having an initial high resolution (e.g. 1200 dpi) is provided as input to modules 1720 and 1740. While a parallel path configuration is shown in Fig. 17, it will be readily appreciated that the modules may be implemented in a sequential manner. The module 1720 generates at least one edge mask from the input image. That is, at least one edge is found in the input image. Details of the edge mask generation are set forth hereinafter. The mask(s) is used in a decompression system to repair an upsampled image. Another module 1730 losslessly compresses the generated mask(s). The compressed edge mask constitutes part of the compressed data 1760 produced by the compression system 1700. The module 1740 also receives the input image 1710 and reduces the resolution of this image (e.g. from 1200 dpi to 300 dpi). The reduced resolution image is lossy compressed by a module 1750. The resulting lossy compressed, downsampled image constitutes the other part of the compressed data 1760 produced by the system 1700.
Fig. 18 is a block diagram of a decompression system 1800 in accordance with an embodiment of the invention. In this system, the compressed data 1760 of Fig. 17 is input to two modules 1810 and 1820. The modules 1810 and 1840 comprise one parallel path, while the modules 1820 and 1830 comprise another parallel path. Module 1810 decompresses at least one edge mask of the compressed data 1760. The decompressed edge mask is provided to the module 1840. Module 1820 decompresses the reduced image of the compressed data 1760. The decompressed reduced image is provided to module 1830, which inflates the resolution of the image. The inflated image is provided to module 1840. The module 1840 repairs the inflated image using the decompressed edge mask(s) to produce the output image 1850. The systems of Figs. 17 and 18 are described hereinafter in greater detail.
Compression Part

The input to the compression part is an image I, having a size of X x Y pixels and C channels. The input image may comprise Red, Green, and Blue (RGB) channels, where C = 3. Alternatively, the input image may comprise Cyan, Magenta, Yellow and Black (CMYK) channels, where C = 4. It will be appreciated by those skilled in the art that the embodiments of the invention may be practiced with channels belonging to another color space, as well.
Fig. 1 shows a high-level overview of the compression process 100. In step 110, masks are created for the input image by finding edges in the input image. The masks may comprise two 1-bit edge masks. In step 120, the masks are compressed. This is done using a lossless compression technique. One example of such a compression technique for use with the masks is the LZW algorithm, as used in TIFF. Alternatively, another lossless compression algorithm, such as JBIG, may be used.
In step 130, the original image is downsampled to produce a downsampled image DI with dimensions M x N pixels. One downsampling technique that may be used is a box filter, in which regions of input pixels are averaged to produce an output downsampled image. Another technique that may be used is a bicubic filter. The amount by which the image is downsampled is represented by the parameter SCALE_AMT. In one embodiment, SCALE_AMT is 4. This indicates that the input image is downsampled by a factor of 4 in each direction. For example, an input image I measuring 4800 x 6400 (X = 4800, Y = 6400) pixels is downsampled to produce a downsampled image DI measuring 1200 x 1600 (M = 1200, N = 1600) pixels. Alternatively, SCALE_AMT may be different for each of the horizontal and vertical directions. In step 140, the subsampled image DI is compressed. This step 140 may be done using the lossless LZW algorithm, as used in TIFF, for example. Alternatively, a lossy compression technique such as JPEG may be used.
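As an illustration only, steps 130 and 140 might be realised as in the following Python sketch. The function names, the use of numpy, and the use of zlib (DEFLATE) as a stand-in for the LZW or JPEG codecs named above are assumptions made for the sketch, not details of the specification.

```python
import zlib
import numpy as np

SCALE_AMT = 4  # downsampling factor in each direction (step 130)

def box_downsample(image, scale=SCALE_AMT):
    """Box-filter downsample: average each scale x scale block of pixels.

    image is a (Y, X, C) uint8 array; for simplicity the sketch assumes the
    height and width are multiples of scale (a real implementation would pad).
    """
    y, x, c = image.shape
    blocks = image.reshape(y // scale, scale, x // scale, scale, c)
    return blocks.mean(axis=(1, 3)).round().astype(np.uint8)

def compress_downsampled(image):
    """Steps 130-140: downsample the input image I, then compress DI."""
    di = box_downsample(image)
    return zlib.compress(di.tobytes()), di.shape
```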
Mask Creation

Fig. 2 provides further details of the mask creation step 110. In Fig. 2, two loops pass over the columns and rows of the image. Each of the two 1-bit masks that are produced has the same dimensions of X x Y as the original image I. One edge mask describes the edges that are found to exist in the horizontal, or X, direction, and is referred to as EX. The second edge mask describes the edges that are found to exist in the vertical direction, and is referred to as EY.
In step 210, the pixels of the edge mask are initialised to zero. That is, every pixel of the created edge mask is set to logic 0. In step 220, the variable y is initialised. In step 230, the variable x is also initialised. In one embodiment, these coordinate variables x and y are both initialised to be 1, because of the method performed in step 240. If an alternative method is used for step 240, these variables may be initialised to other values.
In step 240 (the main body of the two loops), the image pixel I(x, y) is checked for horizontal and vertical edges and the masks EX and EY are updated, if necessary. The details of this step are set forth in Fig. 3. In step 250, the variable x is incremented by 1. In decision step 260, a check is made to determine if the coordinate variable x is less than the width X of the input image. If step 260 returns true (Yes), processing continues at step 240. Otherwise, if decision step 260 returns false (No), processing continues at step 270, where the coordinate variable y is incremented by 1. In decision step 280, a check is made to determine if the variable y is less than the height Y of the input image I. If step 280 returns true (Yes), processing continues at step 230. Otherwise, if step 280 returns false (No), processing terminates in step 290. That is, the mask creation process 110 finishes.
Finding Pixel Edges and Updating Mask

Fig. 3 provides details of step 240 of Fig. 2. In Fig. 3, the process 240 has two loops. The first loop processes all the channels looking for a horizontal edge, and the second loop processes all the channels looking for a vertical edge. An edge is determined to exist if a calculated gradient between two adjacent pixels exceeds a threshold, denoted as EDGE_THRESH. Given a range of values for each channel, say 0-255, one value for EDGE_THRESH that may be practiced is 16. In one embodiment, EDGE_THRESH may be a fixed value for all channels, but in another embodiment EDGE_THRESH may have a different value for each channel. The value of EDGE_THRESH may remain constant during processing.
Alternatively, the value of EDGE_THRESH may vary over the image.
In step 305, a color-channel variable c is set to zero. In step 310, a horizontal gradient value gx is found for the color channel c. That is, the horizontal gradient is calculated by taking the value of the channel at I(x, y, c) and subtracting the value of the channel at I(x - 1, y, c). The absolute value of this result is assigned to the variable gx:

gx = |I(x, y, c) - I(x - 1, y, c)|    (1)

Alternative calculation methods may be used to calculate the horizontal gradient.
For example, a horizontal Sobel filter may be used in step 310.
In decision step 315, a check is made to determine if this value gx is greater than the threshold EDGE_THRESH. If step 315 returns true (Yes), processing continues at step 320. In step 320, the bit in the horizontal edge mask EX(x, y) is set to logic 1, indicating the presence of an edge. Processing continues at step 335. If step 315 returns false (No), processing continues at step 325.
SIn step 325, the color-channel variable c is incremented by 1. In step 330, a check Sis made to determine if c is less than the total number of channels C. If step 330 returns true (Yes), processing continues at step 310. Otherwise, processing continues at step 335. This completes the processing for the horizontal gradient value gx in each color channel.
In step 335, the variable c is set to zero. In step 340, a vertical gradient gy is found for color channel c. The vertical gradient gy is calculated in a manner similar to the calculation in step 310:

gy = |I(x, y, c) - I(x, y - 1, c)|    (2)

Similar to the horizontal gradient case, an alternative gradient calculation may be used. For example, a vertical Sobel filter may be used.
In decision step 345, a check is made to determine if the vertical gradient gy is greater than EDGE_THRESH. If step 345 returns true (Yes), processing continues at step 350. In step 350, the bit in the vertical edge mask EY(x, y) is set to logic 1. Processing continues at step 365. Otherwise, if step 345 returns false (No), processing continues at step 355. In step 355, the channel variable c is incremented by 1. In step 360, a check is made to determine if the channel variable c is less than the total number of channels C. If step 360 returns true (Yes), processing continues at step 340. Otherwise, if step 360 returns false (No), processing continues at step 365. In step 365, processing terminates.
After the processing of Fig 3 completes, the 1-bit images EX and EY contain representations of the horizontal and vertical edges in the input image I.
Given coordinates x and y, if EX(x, y) is equal to logic 1, this means that an edge exists between pixel I(x - 1, y) and pixel I(x, y) in at least one of the C color channels. If EY(x, y) is equal to logic 1, this means that an edge exists between pixel I(x, y - 1) and pixel I(x, y) in at least one of the C color channels.
Fig. 16 shows an example of a row of input pixels 1610 from an input image I and a corresponding edge mask EX 1620 produced by the process of Fig. 3. The pixels 1650 and 1680 in the edge mask 1620 each have a value of logic 1, and the remaining pixels in the edge mask 1620 have values of logic 0. Consider input pixels 1610 and edge mask pixel EX(x, y) 1650. An edge exists between the pixels 1630, 1640 corresponding to I(x - 1, y) and I(x, y). Similarly, when considering edge mask pixel EX(x, y) 1680, an edge exists between the pixels 1660, 1670 corresponding to I(x - 1, y) and I(x, y). The example shown in Fig. 16 can be equally applied in the vertical direction.
Thus, the processing of Figs. 1-3 produces two compressed 1-bit images for the edge masks, EX and EY, and a compressed color image DI. The edge masks can be used to repair the compressed color image DI when the latter image is decompressed.
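For illustration, the per-channel gradient tests of equations (1) and (2) can be written compactly as follows. This is a sketch only; the numpy conventions, the function name, and the uint8 image layout are assumptions, and the threshold is simply the example figure given above.

```python
import numpy as np

EDGE_THRESH = 16  # example threshold for 0-255 channel values

def create_edge_masks(image, thresh=EDGE_THRESH):
    """Create the 1-bit horizontal (EX) and vertical (EY) edge masks (step 110).

    image is a (Y, X, C) uint8 array.  EX(x, y) = 1 when any channel differs
    from the pixel to the left by more than thresh (equation (1)); EY(x, y) = 1
    when any channel differs from the pixel above by more than thresh
    (equation (2)).
    """
    img = image.astype(np.int16)                  # avoid uint8 wrap-around
    ex = np.zeros(img.shape[:2], dtype=np.uint8)
    ey = np.zeros(img.shape[:2], dtype=np.uint8)
    gx = np.abs(img[:, 1:, :] - img[:, :-1, :]).max(axis=2)
    gy = np.abs(img[1:, :, :] - img[:-1, :, :]).max(axis=2)
    ex[:, 1:] = (gx > thresh).astype(np.uint8)
    ey[1:, :] = (gy > thresh).astype(np.uint8)
    return ex, ey
```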
Decompression Part

The input to the decompression part is the same as the output of the compression part described hereinbefore. In one embodiment, this input comprises two compressed 1-bit images for the edge masks, EX and EY, and a compressed color image DI. A high-level overview of the decompression part 400 is shown in Fig. 4.
In step 410, the edge masks EX and EY are decompressed. The decompression technique employed depends upon the compression technique employed in step 120 of Fig. 1. In step 420, the input color image DI is decompressed. Again, the decompression technique employed depends upon the compression technique employed in step 140 of Fig. 1. Steps 410 and 420 are the first functions of the decompression part 400.
Referring to step 130 of Fig. 1, the original input image I with dimensions of X x Y pixels is downsampled by a factor of SCALE_AMT to produce a downsampled image DI with dimensions M x N pixels. In step 430, the shrunken or downsampled image is resampled to the original size of the image I. The decompressor upsamples the downsampled image DI back to the size of the original input image I, producing an upsampled image O. This may be done using a pixel replication method, but alternative methods, such as a bilinear filter, may also be used.
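A minimal sketch of the pixel-replication upsampling of step 430, again assuming numpy arrays; the bilinear alternative mentioned above would simply replace this routine.

```python
import numpy as np

def upsample_replicate(di, scale=4):
    """Pixel-replication upsample of the downsampled image DI (step 430).

    di is an (N, M, C) array; each pixel is repeated into a scale x scale
    block, producing the upsampled image O at the original resolution.
    """
    return np.repeat(np.repeat(di, scale, axis=0), scale, axis=1)
```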
The next steps are to find blocks in the upsampled image O that have been "damaged" by the downsampling and/or upsampling processes. The meaning of "damage" is elaborated upon hereinafter. In step 440, damaged pixels are found using horizontal edges, and in step 450, damaged pixels are found using vertical edges. Once the damaged pixels are identified, those pixels are repaired in step 460.
In one embodiment, the original image I is downsampled by a factor of 4 (SCALE_AMT) to produce the downsampled image DI. Consequently, each 4x4 block of pixels in the image I has been averaged to produce a single pixel of the downsampled image DI. A "damaged" block is one that contains an edge in the original image I, but is blurred in the upsampled image O.
Fig 11 depicts an original image 1105, which contains 16 blocks 1108.
Each of those blocks 1108 has 4x4 pixels. Eight of the sixteen blocks 1108 have black pixels only (on the right), six of the blocks have white pixels only (on the left), and two of the blocks have a mixture of black and white pixels.
Fig. 11 also depicts at 1120 the upsampled image O after downsampling/upsampling by a factor of 4. Each block 1122 of the upsampled image O depicted at 1120 is the average color of the corresponding block 1108 in the input image 1105. Blocks 1125 and 1130 have been blurred by the downsampling/upsampling and are considered "damaged". Each pixel contained within the blocks 1125 and 1130 is also considered damaged. None of the blocks 1135, 1140, 1145 and 1150 are damaged.
Fig. 11 also depicts edge masks 1110, 1115 representing the horizontal and the vertical edges in the original image 1105, respectively. Rules can be deduced for finding damaged blocks. A block is damaged in the horizontal direction if the corresponding block in the horizontal edge mask 1110 contains a pixel set to logical 1 anywhere except pixels in the leftmost column of the block. Two such blocks are damaged as indicated by horizontal edge mask 1110, while two other blocks with horizontal edges are not. A block is damaged in the vertical direction if the corresponding block in the vertical edge mask 1115 contains a pixel set to logical 1 anywhere except for pixels in the topmost row of the block. One such block is damaged as indicated by vertical edge mask 1115, while another block with a vertical edge is not.
Finding Damaged Pixels Using Horizontal Edges

Step 440 of Fig. 4 is expanded upon in Fig. 5. The process loops over all the pixels in the horizontal edge mask EX, looking for blocks containing edges that indicate damage. These blocks are marked in a 1-bit image, the damage mask DM. The damage mask DM has the same dimensions M x N pixels as the downsampled image DI, since each 1-bit pixel in the damage mask DM corresponds to a block.
In step 505, all the pixels in the damage mask DM are set to logic 0. In step 510, the coordinate variable y is initialised to 0. In step 515, the coordinate variable x is initialised to 0. In decision step 520, a check is made to determine whether the current pixel of the edge mask EX belongs to the first (i.e. leftmost) column of a block. The check may be performed using the formula (x MOD SCALE_AMT), where MOD is the modulo operator. Alternatively, this check may be implemented using bitwise operations. In one embodiment, the leftmost column of the input image I has an x coordinate of zero. Hence the result produced by the formula is compared to the value 0. Alternatively, if the leftmost column of the input image I has an x coordinate of 1, the result of the formula may be compared against the value 1. If decision step 520 returns true (Yes), processing continues at step 535. The pixel is in the leftmost column of a block and may be ignored. Otherwise, if decision step 520 returns false (No), processing continues at step 525.
In decision step 525, a check is made to determine if the horizontal edge mask EX(x, y) is equal to 1. If decision step 525 returns false (No), processing continues at step 535. Otherwise, if decision step 525 returns true (Yes), the pixel of the upsampled image O(x, y) belongs to a block that is damaged in the horizontal direction. In step 530, this block is marked as damaged in the damage mask DM. The relevant pixel of DM(x DIV SCALE_AMT, y DIV SCALE_AMT) is set to logic 1, indicating the damage, where DIV is integer division.
SIn step 535, the coordinate variable x is incremented by 1. In decision step N, 540, a check is made to determine ifx is less than the width X of the edge mask EX. If decision step 540 returns true (Yes), processing continues at step 520.
Otherwise, if decision step 540 returns false (No), processing continues at step 545. In step 545, the coordinate variable y is incremented by 1. In decision step 550, a check is made to determine if the variable y is less than the height Y of the edge mask EX. If decision step 550 returns true (Yes), processing continues at step 515. Otherwise, if decision step 550 returns false (No), processing terminates in step 555.
Finding Damaged Pixels Using Vertical Edges

Fig. 6 shows the details of step 450 of Fig. 4 for finding damaged blocks in the vertical direction. The process 450 loops over all the pixels in the vertical edge mask EY, looking for blocks containing edges that indicate damage. These blocks are marked in the same 1-bit image DM that was used to mark horizontal edges in step 440. At the conclusion of step 450, all pixels set to logic 1 in the damage mask DM represent damaged blocks in the upsampled image O.
In step 610, the coordinate variable y is initialised to zero. In step 615, a check is made to determine if this row of pixels is the topmost in a block. The test may be performed using the equation y MOD SCALE_AMT, where MOD is the modulo operator. Alternatively, this may be accomplished using bitwise operations. In one embodiment of the invention, the topmost row of the input image I has a y coordinate of zero, and hence the result of the equation is compared to the value 0. Alternatively, if the topmost row of the input image I has a y coordinate of 1, the result of the equation is compared against the value 1.
If step 615 returns true (Yes), this row of pixels is the topmost row of a block, and can be ignored. Processing continues at step 645. Otherwise, if step 615 returns false (No), processing continues at step 620.
In step 620, the coordinate variable x is initialised to 0. In decision step 625, a check is made to determine if the current pixel of the vertical edge mask EY(x, y) is equal to logic 1. If decision step 625 returns false (No), processing continues at step 635. If step 625 returns true (Yes), processing continues at step 630. That is, if EY(x, y) is equal to logic 1, the pixel of the upsampled image O(x, y) belongs to a block that is damaged in the vertical direction.
In step 630, the block is marked as damaged. In particular, the pixel DM(x DIV SCALE_AMT, y DIV SCALE_AMT) is set to logic 1, indicating the damage. In step 635, the coordinate variable x is incremented by 1. In decision step 640, the coordinate variable x is checked to determine if it is less than the width X of the edge mask EY. If step 640 returns true (Yes), processing continues at step 625. Otherwise, if step 640 returns false (No), processing continues at step 645.
In step 645, the coordinate variable y is incremented by 1. In decision step 650, a check is made to determine if the coordinate variable y is less than the height Y of the edge mask EY. If step 650 returns true (Yes), processing continues at step 615. Otherwise, if step 650 returns false (No), processing terminates in step 655.
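The scans of Figs. 5 and 6 amount to the rule stated for Fig. 11: a block is damaged if its edge-mask block contains a 1 anywhere other than the leftmost column (for EX) or the topmost row (for EY). A vectorised Python sketch of steps 440-450, assuming the EX/EY arrays of the earlier sketch, is as follows (illustrative only).

```python
import numpy as np

def find_damaged_blocks(ex, ey, scale=4):
    """Build the 1-bit damage mask DM (steps 440 and 450).

    ex and ey are the full-resolution edge masks; DM has one entry per
    scale x scale block of the upsampled image O.
    """
    y, x = ex.shape
    ex = ex.copy()
    ey = ey.copy()
    ex[:, ::scale] = 0   # edges in the leftmost column of a block are ignored
    ey[::scale, :] = 0   # edges in the topmost row of a block are ignored
    blocks_ex = ex.reshape(y // scale, scale, x // scale, scale)
    blocks_ey = ey.reshape(y // scale, scale, x // scale, scale)
    dm = blocks_ex.any(axis=(1, 3)) | blocks_ey.any(axis=(1, 3))
    return dm.astype(np.uint8)
```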
Repairing Damaged Pixels

After steps 440 and 450 of Figs. 4-6 are performed, the 1-bit image DM is correctly populated, with pixels of logic 1 indicating a damaged block. The subsequent step 460 of Fig. 4 is expanded upon in Fig. 7.
Fig. 7 shows processes that loop over the damage mask DM, searching for pixels indicating damaged blocks. Each damaged block that is found is repaired. In step 710, a variable n is initialised to 0. In step 720, a variable m is initialised to 0. In decision step 730, a check is made to determine if the block (m, n) is damaged. That is, the pixel at location DM(m, n) is checked for damage. If it is equal to logic 1, then it corresponds to a damaged block.
Otherwise, the block is not damaged. If decision step 730 returns true (Yes), processing continues at step 740. In step 740, each pixel in the block of the upsampled image O identified as damaged in step 730 is repaired. The process 740 for repairing a block is expanded upon in Fig. 8. Processing then continues at step 750. Otherwise, if decision step 730 returns false (No), processing continues at step 750.
In step 750, the variable m is incremented by 1. In decision step 760, a check is made to determine if m is less than the width M of the damage mask DM. If step 760 returns true (Yes), processing continues at step 730. Otherwise, if step 760 returns false (No), processing continues at step 770. In step 770, the variable n is incremented by 1. In decision step 780, a check is made to determine if n is less than the height N of the damage mask DM. If decision step 780 returns true (Yes), processing continues at step 720. Otherwise, if step 780 returns false (No), processing terminates in step 790.
Repairing Pixels in a Block Individually

Fig. 8 shows processes that loop over all the pixels in a block, repairing them individually. In step 810, a coordinate variable b is initialised to the top coordinate of the block. In particular, the coordinate variable b may be initialised to n * SCALE_AMT. In step 820, a coordinate variable a is initialised to the left coordinate of the block. In particular, the variable a may be initialised to m * SCALE_AMT. The variables a and b are thus the pixel coordinates in the upsampled image O corresponding to the top left pixel of the block to be repaired.
In step 830, the pixel O(a, b) is repaired. In step 840, the variable a is incremented by 1. In decision step 850, a check is made to determine if a full row of the block has been processed. This may be done by checking if the coordinate variable a is less than (m + 1) * SCALE_AMT. If step 850 returns false (No), processing continues at step 830. Otherwise, if step 850 returns true (Yes), the variable b is incremented by 1 in step 860.
717643.doc -19- In decision step 870, a check is made to determine if all the rows of the block have finished being processed. This check may done by testing ifb is less O than (n 1) SCALE_AMT. If step 870 returns false processing continues at step 820. Otherwise, if step 870 returns true (Yes), processing terminates in 00 5 step 880.
The purpose of repairing pixels in a damaged block is to restore the edges that have been blurred during compression. The method used to accomplish this includes the following steps for each damaged pixel O(a, b): finding the closest pixel O(i, j) belonging to an undamaged, or clean, block, and copying the color value from this clean pixel O(i, j) to the damaged pixel O(a, b).
When searching for a clean pixel, the search process is not allowed to cross any edges as detected in step 110 of Fig. 1. The search process used may be similar to a floodfill. The search is constrained in how far it is allowed to range. This may be accomplished by parameters LOW_X_SEARCH, HIGH_X_SEARCH, LOW_Y_SEARCH and HIGH_Y_SEARCH. These parameters define a box that the search must stay within. That is:

LOW_X_SEARCH < i < HIGH_X_SEARCH, LOW_Y_SEARCH < j < HIGH_Y_SEARCH.

Alternatively, the search may be constrained by limiting the distance that the search is allowed to move from the original pixel O(a, b).
The parameters LOW_X_SEARCH, HIGH_X_SEARCH, LOW_Y_SEARCH and HIGH_Y_SEARCH may be set such that these parameters allow searching up to three blocks away from the original block that O(a, b) belongs to. Alternatively, the parameters may be set such that searching is allowed over the entire image. In all cases, the search is restricted to lie within the image boundaries, i.e. 0 <= i < X and 0 <= j < Y.
To ensure that the search process does not process pixels that have already been visited, a record of the visited pixels must be kept. This may be done with a 1-bit image VM with dimensions equal to the maximum size of the search area. A logic 0 at a coordinate indicates the pixel has not previously been visited, and a logic 1 at a coordinate indicates that the pixel has already been visited. Alternatively, this step may be done by storing the coordinates of the visited pixels and ensuring that a pixel is not processed if its coordinates exist in this store.
Repairing a Pixel O(a, b)

Fig. 9 provides further details of the process 830 of Fig. 8 for repairing a single pixel. In step 905, all pixels are marked as initially unvisited. This is accomplished by setting all pixels of the 1-bit image VM to logic 0. In step 910, the current coordinates (a, b) are put onto a queue. The queue is a FIFO queue used to queue up coordinates that need to be tested for cleanliness. In decision step 915, a check is made to determine if the queue is empty. If step 915 returns true (Yes), the queue is empty and a clean block cannot be found.
Processing then terminates at step 920. If step 915 returns false (No), indicating the queue is non-empty, processing continues at step 925.
In step 925, the coordinates at the head or front of the queue are popped or removed, and stored in variables i and j. In step 930, the coordinates i and j are marked as having been visited. This may be done by setting the corresponding pixel in VM to logic 1.
In decision step 935, the pixel O(i, j) is tested to see if that pixel belongs to a clean block. This may be done by testing the value of DM(i DIV SCALE_AMT, j DIV SCALE_AMT) for equality with logic 0. If step 935 returns true (Yes), the pixel O(i, j) belongs to a clean block, and processing continues at step 940. In step 940, the color values at the pixel O(a, b) are replaced with the color values at O(i, j). That is, the color values from O(i, j) are copied into the pixel at O(a, b). The repair process for this pixel has succeeded, and processing terminates in step 950. If decision step 935 returns false (No), the pixel O(i, j) does not belong to a clean block, and processing continues at step 945. In step 945, the pixels surrounding the pixel O(i, j) are added onto the queue.
Processing continues at decision step 915.
717643.doc SAdding Surrounding Pixels to the Oueue O Fig. 10 illustrates the process 945 for adding surrounding pixels to the queue. The four pixels surrounding the current pixel j) may be added the 00oO 5 queue, but alternatively, the eight surrounding pixels could be added. Decision steps 1010, 1030, 1050, and 1070 test whether the pixel to the left (i the pixel to the right (i 1, the pixel above j and the pixel below (i,j 1) Srespectively, can be added to the queue. This ordering is preferred, but alternative orderings may be practiced. These tests take into account whether the pixel in question has already been visited, whether the pixel is inside the search space as defined by the preferable parameters, and whether adding this pixel involves crossing an edge. The test for whether the pixel in question has already been visited is performed by testing the value of the corresponding pixel in the 1-bit VM against logic 1. Steps 1020, 1040, 1060, and 1080 add their respective pixels to the queue, if the respective tests were passed. The steps of Fig. 10 are described in greater detail hereinafter.
In decision step 1010, a check is made to determine if the pixel to the left (i - 1, j) can be added to the queue. Step 1010 tests the pixel located to the left of the current pixel, namely (i - 1, j). The first test is that i is greater than LOW_X_SEARCH. The second test is that pixel (i - 1, j) has not yet been visited. The third test is that an edge is not being crossed in going from (i, j) to (i - 1, j). This third test is performed by checking that EX(i, j) is not equal to logic 1. These three tests can be expressed as:

i > LOW_X_SEARCH
AND
(i - 1, j) not visited
AND
EX(i, j) ≠ 1.
If step 1010 returns true (Yes), processing continues at step 1020. That is, if all three tests are true, pixel (i - 1, j) is added to the queue in step 1020. Processing then continues at step 1030. Otherwise, if step 1010 returns false (No), processing continues at step 1030. That is, if any test fails, processing moves immediately to step 1030 in Fig. 10.

In decision step 1030, a check is made to determine if the pixel to the right (i + 1, j) can be added to the queue. Step 1030 tests the pixel located to the right of the current pixel, namely (i + 1, j). The first test is that (i + 1) is less than HIGH_X_SEARCH. The second test is that pixel (i + 1, j) has not yet been visited. The third test is that an edge is not crossed in going from (i, j) to (i + 1, j).
This third test may be performed by checking that EX(i + 1, j) does not equal logic 1. These three tests can be expressed as:

(i + 1) < HIGH_X_SEARCH
AND
(i + 1, j) not visited
AND
EX(i + 1, j) ≠ 1.
If step 1030 returns true (Yes), processing continues at step 1040. That is, if all three tests are true, the pixel (i + 1, j) is added to the queue in step 1040. Processing then continues at step 1050. Otherwise, if step 1030 returns false (No), indicating that any of the tests failed, processing continues at step 1050.
In decision step 1050, a check is made to determine if the pixel above (i, j - 1) can be added to the queue. Step 1050 tests the pixel located above the current pixel, namely (i, j - 1). The first test is that j is greater than LOW_Y_SEARCH. The second test is that pixel (i, j - 1) has not yet been visited. The third test is that an edge is not being crossed in going from (i, j) to (i, j - 1). This third test is performed by checking that EY(i, j) is not equal to logic 1. These tests can be expressed as:

j > LOW_Y_SEARCH
AND
(i, j - 1) not visited
AND
EY(i, j) ≠ 1.
717643.doc I -23- SIf step 1050 returns true (Yes), processing continues at step 1060. That is, if all three tests succeed, the pixel j 1) is added to the queue in step 1060.
O Processing continues at step 1070 and processing terminates. However, if step 1050 returns false (No) indicating that any of the tests failed, processing continues 00 5 at step 1070.
In decision step 1070, a check is made to determine if the pixel below (i, j + 1) can be added to the queue. Step 1070 tests the pixel located below the current pixel, namely (i, j + 1). The first test is that (j + 1) is less than HIGH_Y_SEARCH. The second test is that pixel (i, j + 1) has not yet been visited. The third test is that an edge is not being crossed in going from (i, j) to (i, j + 1). This third test is performed by checking that EY(i, j + 1) is not equal to logic 1. These tests can be expressed as:

(j + 1) < HIGH_Y_SEARCH
AND
(i, j + 1) not visited
AND
EY(i, j + 1) ≠ 1.
If step 1070 returns true (Yes), processing continues at step 1080. That is, if all three tests succeed, pixel (i, j + 1) is added to the queue in step 1080. Processing continues at step 1090 and terminates. However, if step 1070 returns false (No), indicating that any of the tests failed, processing also continues at step 1090.
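Taken together, Figs. 9 and 10 describe a breadth-first flood fill that never crosses a set edge-mask bit. The following Python sketch is illustrative only: the argument names, the in-place update of O, and the derivation of the search box from a radius are assumptions rather than details of the specification.

```python
from collections import deque

def repair_pixel(o, dm, ex, ey, a, b, scale=4, search_radius=None):
    """Repair damaged pixel O(a, b) by copying the nearest reachable clean pixel.

    o      : upsampled image (e.g. a numpy array), indexed o[y][x]
    dm     : damage mask, one entry per block (1 = damaged)
    ex, ey : horizontal and vertical edge masks at full resolution
    Returns True if a clean pixel was found and copied, False otherwise.
    """
    if search_radius is None:
        search_radius = 3 * scale            # e.g. allow searching three blocks away
    height, width = len(ey), len(ex[0])
    lo_x, hi_x = max(0, a - search_radius), min(width - 1, a + search_radius)
    lo_y, hi_y = max(0, b - search_radius), min(height - 1, b + search_radius)
    visited = {(a, b)}
    queue = deque([(a, b)])                  # step 910
    while queue:                             # step 915
        i, j = queue.popleft()               # step 925
        if dm[j // scale][i // scale] == 0:  # step 935: clean block found
            o[b][a] = o[j][i]                # step 940: copy the colour values
            return True
        # Step 945: enqueue neighbours that stay inside the search box, have
        # not been visited, and are not separated from (i, j) by an edge.
        candidates = [
            (i - 1, j, ex[j][i]     if i > lo_x else 1),   # left:  EX(i, j)
            (i + 1, j, ex[j][i + 1] if i < hi_x else 1),   # right: EX(i + 1, j)
            (i, j - 1, ey[j][i]     if j > lo_y else 1),   # above: EY(i, j)
            (i, j + 1, ey[j + 1][i] if j < hi_y else 1),   # below: EY(i, j + 1)
        ]
        for ni, nj, edge in candidates:
            if edge != 1 and (ni, nj) not in visited:
                queue.append((ni, nj))
                visited.add((ni, nj))
    return False                             # step 920: no clean pixel found
```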
Interleaved or Pipelined Implementation

The compression in accordance with the above embodiment has been described in a non-interleaved or non-pipelined fashion. However, in alternate embodiments, an interleaved or pipelined implementation may be used. That is, for example, five image lines may be buffered at the output of the RIP. Mask data corresponding to the four latter lines in the buffer is created (edges detected); the first of the buffered lines is needed for vertical edge detection.
This step corresponds to step 110 in Fig. 1. The resulting mask data is then compressed (corresponding to step 120). The four latter lines are then sub-sampled by 4 in each dimension (corresponding to step 130). Finally, this sub-sampled data is compressed (corresponding to step 140).
Having processed these five lines, a new set of four lines is output from the RIP and processed in a similar fashion, maintaining the last line in the buffer from the previous set of five lines (and thus keeping five lines in the buffer).
Processing continues in this way, getting four new lines at a time, for the whole image. Thus the steps 110, 120, 130 and 140 are interleaved, or pipelined, on a group-of-scanlines basis (or possibly even on a block-by-block basis). In addition, the operations described within each of the steps 110, 120, 130 and 140 can be pipelined.
Other Embodiments

A number of alternative embodiments exist using different implementations of parts of the compression and decompression system. For example, once step 740 of Fig. 7 has been completed, the block (m, n) is marked as clean, i.e. DM(m, n) is set to logic 0. This allows subsequent searches that hit pixels in this block to terminate faster.
As each pixel is repaired, that pixel can be marked as clean, allowing searches that hit other pixels in the same block to terminate faster. This requires an additional 1-bit image referred to as damage-in-block (or DB), of size SCALE_AMT x SCALE_AMT. DB represents all of the damaged pixels in the current block under repair. As part of step 740, an additional first step is to set all pixels in DB to logic 0, as initially all the pixels are damaged. Step 940 of Fig. 9 may be modified with an additional step of setting the pixel in DB corresponding to the pixel (a, b) that has been repaired to logic 1. Step 935 may be modified so that if pixel (i, j) falls within the block currently being repaired, the corresponding pixel of DB is tested to check if this pixel has already been repaired. If the pixel (i, j) has already been repaired, that pixel is considered clean for the purposes of the test at step 935, and processing moves to step 940.
Possibly, no clean pixel may be found to repair a pixel in step 920. If so, a color value for such pixels can be synthesised. Fig. 12 shows an example input image 1200. Image 1200 contains no blocks that only contain black pixels. Fig. 13 shows the same image 1305 after it has been downsampled and upsampled. Nine of the resulting blocks remain white from the input image 1200.
Each of the remaining blocks has an average color ranging from light grey to black. Table 1310 shows the values of the pixels inside each block. Note that there are no clean black blocks in the image.
Fig. 14 shows the image 1400 after an initial pass using the decompression method of Fig. 4. None of the pixels that were black in the original have been successfully repaired, while all pixels that are white have been successfully repaired. Taking block 1435 as an example, the table 1310 shows that the average value of the original pixels was 239. One method for calculating the average as described in step 130 is a box filter, so the average value may have been greater than or equal to 238.5, and less than 239.5. Further, in block 1435, 15 of the 16 pixels have been successfully repaired with a value of 255. Thus an inequality can be formulated as:

238.5 <= (15/16) * 255 + (1/16) * x < 239.5,

where x is the value of the unrepaired pixel 1436. Rearranging and simplifying this inequality, the following result is obtained:

-9 <= x < 7.
Preferably, the central value is taken, in this case -1. However, the parameter x cannot have a negative value, and hence the nearest non-negative value is taken, in this case 0. The pixel still requiring repair 1436 is replaced with this calculated value.
Applying similar calculations to the unrepaired pixels in blocks 1435, 1445, 1450, 1455, 1465, 1470, and 1475 gives the results listed in Table 1.
TABLE 1

Block Number    Calculated Pixel Value
1435            0
1445            0
1450            0
1455            1
1465            0
1470            1
1475            0

Fig. 15 shows the resulting image 1500 obtained by replacing the unrepaired pixels with the values calculated above.
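The synthesis above follows directly from the box-filter identity: the stored block average lies within half a grey level of the true average of the 16 original pixels, so solving for the single unknown pixel yields a small interval whose centre is taken and clamped to the valid range. A sketch of that arithmetic (the function and variable names are illustrative):

```python
def synthesise_unrepaired_value(block_average, repaired_values, total_pixels=16):
    """Estimate the colour of the one unrepaired pixel in a damaged block.

    block_average = (sum(repaired_values) + x) / total_pixels, up to the
    +/- 0.5 rounding of the box filter, is solved for x.
    """
    known = sum(repaired_values)
    low = (block_average - 0.5) * total_pixels - known    # -9 in the example above
    high = (block_average + 0.5) * total_pixels - known   #  7 in the example above
    centre = (low + high) / 2                             # -1 in the example above
    return int(min(255, max(0, round(centre))))           # clamped to 0, as in block 1435

# Example from block 1435: average 239, fifteen repaired pixels of value 255.
# synthesise_unrepaired_value(239, [255] * 15) returns 0.
```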
In the embodiment described hereinbefore, two single-bit mask images are created to represent edges in the input image. Alternatively, a single mask image can be created and compressed. This single mask image may comprise 2 bits per pixel. That is, the two single-bit edge mask images may be combined into one mask image.
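Under the numpy conventions of the earlier sketches, combining the two 1-bit masks into a single 2-bit mask image could look like the following; the particular bit assignment is an illustrative choice, not one mandated by the description.

```python
import numpy as np

def pack_masks(ex, ey):
    """Pack EX into bit 0 and EY into bit 1 of a single 2-bit-per-pixel mask."""
    return ex.astype(np.uint8) | (ey.astype(np.uint8) << 1)

def unpack_masks(combined):
    """Recover the two 1-bit edge masks from the packed 2-bit mask image."""
    return combined & 1, (combined >> 1) & 1
```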
The embodiments of the invention provide compression and decompression systems that maintain sharp edges in an image (e.g. at the edges of objects) at a high resolution, while offering excellent compression. Data within objects is represented at a lower resolution, offering greater compression without overly sacrificing image quality.
Computer Implementation

The methods according to the embodiments of the invention may be practiced using one or more general-purpose computer systems, printing devices, and other suitable computing devices. The processes described with reference to any one or more of Figs. 1-10, 17, and 18 may be implemented as software, such as an application program executing within the computer system or embedded in a printing device. Software may include one or more computer programs, including application programs, an operating system, procedures, rules, data structures, and data. The instructions may be formed as one or more code modules, each for performing one or more particular tasks. The software may be stored in a computer readable medium, comprising one or more of the storage devices described below, for example. The computer system loads the software from the computer readable medium and then executes the software. Fig. 19 depicts an example of a computer system 1900 with which the embodiments of the invention may be practiced. A computer readable medium having such software recorded on the medium is a computer program product. The use of the computer program product in the computer system may effect an advantageous apparatus for implementing one or more of the above methods.
Fig. 19 illustrates the computer system 1900 in block diagram form, coupled to a network. An operator may use the keyboard 1930 and/or a pointing device such as the mouse 1932 (or touchpad, for example) to provide input to the computer 1950. The computer system 1900 may have any of a number of output devices, including line printers, laser printers, plotters, and other reproduction devices connected to the computer. The computer system 1900 can be connected to one or more other computers via a communication interface 1964 using an appropriate communication channel 1940 such as a modem communications path, router, or the like. The computer network 1920 may comprise a local area network (LAN), a wide area network (WAN), an Intranet, and/or the Internet, for example.
The computer 1950 may comprise a processing unit 1966 one or more central processing units), memory 1970 which may comprise random access memory (RAM), read-only memory (ROM), or a combination of the two, input/output (O10) interfaces 1972, a graphics interface 1960, and one or more storage devices 1962. The storage device(s) 1962 may comprise one or more of the following: a floppy disc, a hard disc drive, a magneto-optical disc drive, CD- ROM, DVD, a data card or memory stick, flash RAM device, magnetic tape or 717643.doc Sany other of a number of non-volatile storage devices well known to those skilled in the art. While the storage device is shown directly connected to the bus in Fig.
O 19, such a storage device may be connected through any suitable interface, such as a parallel port, serial port, USB interface, a Firewire interface, a wireless interface, 00o 5 a PCMCIA slot, or the like. For the purposes of this description, a storage unit IC may comprise one or more of the memory 1970 and the storage devices 1962 (as indicated by a dashed box surrounding these elements in Fig. 19).
Each of the components of the computer 1950 is typically connected to one or more of the other devices via one or more buses 1980, depicted generally in Fig. 19, that in turn comprise data, address, and control buses. While a single bus 1980 is depicted in Fig. 19, it will be well understood by those skilled in the art that a computer, a printing device, or other electronic computing device may have several buses, including one or more of a processor bus, a memory bus, a graphics card bus, and a peripheral bus. Suitable bridges may be utilized to interface communications between such buses. While a system using a CPU has been described, it will be appreciated by those skilled in the art that other processing units capable of processing data and carrying out operations may be used instead without departing from the scope and spirit of the invention.
The computer system 1900 is simply provided for illustrative purposes, and other configurations can be employed without departing from the scope and spirit of the invention. Computers with which the embodiment can be practiced comprise IBM-PC/ATs or compatibles, laptop/notebook computers, one of the Macintosh (TM) family of PCs, a Sun Sparcstation, a PDA, a workstation, or the like. The foregoing are merely examples of the types of devices with which the embodiments of the invention may be practiced. Typically, the processes of the embodiments, described hereinbefore, are resident as software or a program recorded on a hard disk drive as the computer readable medium, and read and controlled using the processor. Intermediate storage of the program and intermediate data and any data fetched from the network may be accomplished using the semiconductor memory.
In some instances, the program may be supplied encoded on a CD-ROM or a floppy disk, or alternatively could be read from a network via a modem device connected to the computer, for example. Still further, the software can also be loaded into the computer system from other computer readable media comprising magnetic tape, a ROM or integrated circuit, a magneto-optical disk, a radio or infra-red transmission channel between the computer and another device, a computer readable card such as a PCMCIA card, and the Internet and Intranets comprising email transmissions and information recorded on websites and the like. The foregoing is merely an example of relevant computer readable mediums.
Other computer readable mediums may be practiced without departing from the scope and spirit of the invention.
Industrial Applicability
The embodiments of the invention are applicable to the computer and data processing industries. The foregoing describes only some embodiments of the present invention, and modifications and/or changes can be made thereto without departing from the scope and spirit of the invention, the embodiments being illustrative and not restrictive.

Claims (55)

3. The method according to claim 2, wherein two 1-bit edge masks are created or one 2-bit edge mask is created.

4. The method according to claim 1, wherein said encoding of found edges comprises compressing said found edges.

5. The method according to claim 2, wherein said encoding of found edges comprises losslessly compressing said edge mask.

6. The method according to claim 1, wherein encoding said alternate representation comprises reducing the resolution of said high-resolution image.

7. The method according to claim 6, wherein encoding of said alternate representation comprises lossy compressing said reduced resolution image.

8. A method of compressing an input raster image, said method comprising the steps of:
generating one or more mask images from said input raster image, said one or more masks being representative of edges in said input raster image;
compressing said one or more mask images;
generating a lower resolution image from said input raster image; and
compressing said lower resolution image.

9. The method according to claim 8, wherein said one or more mask images is compressed using LZW compression.

10. The method according to claim 8, wherein said lower resolution image is compressed using JPEG compression.

11. A method of forming a high-resolution image, said method comprising:
forming said high-resolution image from a low resolution image and high-resolution edge information stored in raster format, said high-resolution edge information comprising information about at least one edge found in said high-resolution image.

12. The method according to claim 11, wherein said forming step comprises the step of selecting between at least two pixels of said low resolution image dependent upon said high-resolution edge information to form at least one pixel of said high-resolution image.

13. A method of generating a high-resolution image from a low resolution image and high-resolution edge information, said method comprising:
determining colors of pixels of said high-resolution image from colors of pixels of said low resolution image, wherein the color of at least one high-resolution image pixel is determined from the color of at least one low resolution image pixel dependent on said high-resolution edge information.

14. The method according to claim 13, wherein said high-resolution edge information controls selection between at least two low resolution image pixels in said determination of color of said at least one high-resolution image pixel.

15. A method of forming an output image on a raster output device, the method comprising:
receiving one or more compressed edge images of an input image representation and a compressed lower resolution raster image of said input image representation, said input image representation having a maximum resolution;
decompressing said compressed raster image to an uncompressed raster image;
decompressing said compressed edge images; and
forming said output image on said raster output device dependent upon said uncompressed raster image and said decompressed edge images.

16. The method according to claim 15, further comprising the steps of:
determining said one or more edge images representative of edges in said input image representation;
compressing said edge images;
determining a raster image representative of said input image representation at a resolution lower than said maximum resolution; and
compressing said raster image.

17. The method according to claim 15 or 16, further comprising the step of repairing one or more pixels of said uncompressed raster image using said one or more decompressed edge images to form said output image.

18. A method of determining the approximate color of a group of unknown high-resolution pixels, said method comprising:
calculating an approximate average color of said group of unknown high-resolution pixels from colors of a group of known high-resolution pixels and a color of a known low resolution pixel, wherein an area of the low resolution pixel covers areas of said known and unknown high-resolution pixels and approximates an average color of said known and unknown high-resolution pixels.

19. An apparatus of encoding a high-resolution image, said apparatus comprising:
means for finding at least one edge in said high-resolution image;
means for encoding said found edges in raster format; and
means for encoding an alternate representation of said high-resolution image.

20. The apparatus according to claim 19, further comprising means for generating at least one edge mask comprising information representing said found edges from said high-resolution image.

21. The apparatus according to claim 20, wherein two 1-bit edge masks are created or one 2-bit edge mask is created.

22. The apparatus according to claim 19, wherein said means for encoding found edges comprises means for compressing said found edges.

23. The apparatus according to claim 20, wherein said means for encoding found edges comprises means for losslessly compressing said edge mask.

24. The apparatus according to claim 19, wherein means for encoding said alternate representation comprises means for reducing the resolution of said high-resolution image.

25. The apparatus according to claim 24, wherein means for encoding of said alternate representation comprises means for lossy compressing said reduced resolution image.

26. An apparatus of compressing an input raster image, said apparatus comprising:
means for generating one or more mask images from said input raster image, said one or more masks being representative of edges in said input raster image;
means for compressing said one or more mask images;
means for generating a lower resolution image from said input raster image; and
means for compressing said lower resolution image.

27. The apparatus according to claim 26, wherein said one or more mask images is compressed using LZW compression.

28. The apparatus according to claim 27, wherein said lower resolution image is compressed using JPEG compression.

29. An apparatus of forming a high-resolution image, said apparatus comprising:
means for receiving said low resolution image and said high-resolution edge information stored in raster format, said high-resolution edge information comprising information about at least one edge found in said high-resolution image; and
means for forming said high-resolution image from said low resolution image and said high-resolution edge information stored in raster format.

30. The apparatus according to claim 29, wherein said forming means comprises means for selecting between at least two pixels of said low resolution image dependent upon said high-resolution edge information to form at least one pixel of said high-resolution image.

31. An apparatus of generating a high-resolution image from a low resolution image and high-resolution edge information, said apparatus comprising:
means for receiving said low resolution image and said high-resolution edge information; and
means for determining colors of pixels of said high-resolution image from colors of pixels of said low resolution image, wherein the color of at least one high-resolution image pixel is determined from the color of said low resolution image pixel dependent on said high-resolution edge information.

32. The apparatus according to claim 31, wherein said high-resolution edge information controls selection between at least two low resolution image pixels in said determination of color of said at least one high-resolution image pixel.

33. An apparatus of forming an output image on a raster output device, the apparatus comprising:
means for receiving one or more compressed edge images of an input image representation and a compressed lower resolution raster image of said input image representation, said input image representation having a maximum resolution;
means for decompressing said compressed raster image to an uncompressed raster image;
means for decompressing said compressed edge images; and
means for forming said output image on said raster output device dependent upon said uncompressed raster image and said decompressed edge images.

34. The apparatus according to claim 33, further comprising:
means for determining said one or more edge images representative of edges in said input image representation;
means for compressing said edge images;
means for determining a raster image representative of said input image representation at a resolution lower than said maximum resolution; and
means for compressing said raster image.

35. The apparatus according to claim 33 or 34, further comprising means for repairing one or more pixels of said uncompressed raster image using said one or more decompressed edge images to form said output image.

36. An apparatus of determining the approximate color of a group of unknown high-resolution pixels, said apparatus comprising:
means for receiving a group of known high-resolution pixels and a known low resolution pixel; and
means for calculating an approximate average color of said group of unknown high-resolution pixels from colors of said group of known high-resolution pixels and a color of said known low resolution pixel, wherein an area of the low resolution pixel covers areas of said known and unknown high-resolution pixels and approximates an average color of said known and unknown high-resolution pixels.

37. An apparatus, comprising:
a storage unit for storing data and instructions to be carried out by a processing unit; and
a processing unit coupled to said storage unit, said processing unit being programmed to implement the method in accordance with any one of claims 1-18.

38. A computer program product having a computer readable medium with a computer program recorded therein for encoding a high-resolution image, said computer program product comprising:
computer program code means for finding at least one edge in said high-resolution image;
computer program code means for encoding said found edges in raster format; and
computer program code means for encoding an alternate representation of said high-resolution image.

39. The computer program product according to claim 38, further comprising computer program code means for generating at least one edge mask comprising information representing said found edges from said high-resolution image.

40. The computer program product according to claim 39, wherein two 1-bit edge masks are created or one 2-bit edge mask is created.

41. The computer program product according to claim 38, wherein said computer program code means for encoding found edges comprises computer program code means for compressing said found edges.

42. The computer program product according to claim 39, wherein said computer program code means for encoding found edges comprises computer program code means for losslessly compressing said edge mask.

43. The computer program product according to claim 38, wherein said computer program code means for encoding said alternate representation comprises computer program code means for reducing the resolution of said high-resolution image.

44. The computer program product according to claim 43, wherein said computer program code means for encoding of said alternate representation comprises computer program code means for lossy compressing said reduced resolution image.

45. A computer program product having a computer readable medium with a computer program recorded therein for compressing an input raster image, said computer program product comprising:
computer program code means for generating one or more mask images from said input raster image, said one or more masks being representative of edges in said input raster image;
computer program code means for compressing said one or more mask images;
computer program code means for generating a lower resolution image from said input raster image; and
computer program code means for compressing said lower resolution image.

46. The computer program product according to claim 45, wherein said one or more mask images is compressed using LZW compression.

47. The computer program product according to claim 46, wherein said lower resolution image is compressed using JPEG compression.

48. A computer program product having a computer readable medium with a computer program recorded therein for forming a high-resolution image, said computer program product comprising:
computer program code means for receiving said low resolution image and said high-resolution edge information stored in raster format, said high-resolution edge information comprising information about at least one edge found in said high-resolution image; and
computer program code means for forming said high-resolution image from said low resolution image and said high-resolution edge information stored in raster format.

49. The computer program product according to claim 48, wherein said computer program code means for forming comprises computer program code means for selecting between at least two pixels of said low resolution image dependent upon said high-resolution edge information to form at least one pixel of said high-resolution image.

50. A computer program product having a computer readable medium with a computer program recorded therein for generating a high-resolution image from a low resolution image and high-resolution edge information, said computer program product comprising:
computer program code means for receiving said low resolution image and said high-resolution edge information; and
computer program code means for determining colors of pixels of said high-resolution image from colors of pixels of said low resolution image, wherein the color of at least one high-resolution image pixel is determined from the color of said low resolution image pixel dependent on said high-resolution edge information.

51. The computer program product according to claim 50, wherein said high-resolution edge information controls selection between at least two low resolution image pixels in said determination of color of said at least one high-resolution image pixel.

52. A computer program product having a computer readable medium with a computer program recorded therein for forming an output image on a raster output device, the computer program product comprising:
computer program code means for receiving one or more compressed edge images of an input image representation and a compressed lower resolution raster image of said input image representation, said input image representation having a maximum resolution;
computer program code means for decompressing said compressed raster image to an uncompressed raster image;
computer program code means for decompressing said compressed edge images; and
computer program code means for forming said output image on said raster output device dependent upon said uncompressed raster image and said decompressed edge images.

53. The computer program product according to claim 52, further comprising:
computer program code means for determining said one or more edge images representative of edges in said input image representation;
computer program code means for compressing said edge images;
computer program code means for determining a raster image representative of said input image representation at a resolution lower than said maximum resolution; and
computer program code means for compressing said raster image.

54. The computer program product according to claim 52 or 53, further comprising computer program code means for repairing one or more pixels of said uncompressed raster image using said one or more decompressed edge images to form said output image.

55. A computer program product having a computer readable medium with a computer program recorded therein for determining the approximate color of a group of unknown high-resolution pixels, said computer program product comprising:
computer program code means for receiving a group of known high-resolution pixels and a known low resolution pixel; and
computer program code means for calculating an approximate average color of said group of unknown high-resolution pixels from colors of said group of known high-resolution pixels and a color of said known low resolution pixel, wherein an area of the low resolution pixel covers areas of said known and unknown high-resolution pixels and approximates an average color of said known and unknown high-resolution pixels.

DATED this SEVENTH Day of JUNE 2005
CANON KABUSHIKI KAISHA
Patent Attorneys for the Applicant/Nominated Person
SPRUSON & FERGUSON
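
For illustration only, the approximate-colour calculation recited in claims 18, 36 and 55 reduces to rearranging an average: if the low-resolution pixel approximates the mean of all high-resolution pixels it covers, the average colour of the unknown pixels follows by subtracting the known colours. The sketch below is not part of the specification or claims as filed, and its function and variable names are assumptions.

```python
# Illustrative sketch only, assuming the low-resolution pixel value approximates
# the mean of all n_total high-resolution pixels it covers.
import numpy as np

def approx_unknown_average(low_pixel, known_pixels, n_total: int) -> np.ndarray:
    """Estimate the average colour of the unknown high-res pixels under one low-res pixel.

    low_pixel    : colour of the covering low-resolution pixel, shape (channels,)
    known_pixels : colours of the known high-resolution pixels, shape (n_known, channels)
    n_total      : total number of high-resolution pixels covered by low_pixel
    """
    known = np.asarray(known_pixels, dtype=float)
    n_unknown = n_total - len(known)
    if n_unknown <= 0:
        raise ValueError("there are no unknown pixels to estimate")
    # n_total * low_pixel ~ sum(known) + sum(unknown); solve for the unknown sum.
    unknown_sum = n_total * np.asarray(low_pixel, dtype=float) - known.sum(axis=0)
    return unknown_sum / n_unknown
```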
AU2005202481A 2004-06-30 2005-06-07 Compression of image data Abandoned AU2005202481A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2005202481A AU2005202481A1 (en) 2004-06-30 2005-06-07 Compression of image data

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
AU2004903604A AU2004903604A0 (en) 2004-06-30 Compression of Image Data
AU2004903604 2004-06-30
AU2005202481A AU2005202481A1 (en) 2004-06-30 2005-06-07 Compression of image data

Publications (1)

Publication Number Publication Date
AU2005202481A1 true AU2005202481A1 (en) 2006-01-19

Family

ID=35884083

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2005202481A Abandoned AU2005202481A1 (en) 2004-06-30 2005-06-07 Compression of image data

Country Status (1)

Country Link
AU (1) AU2005202481A1 (en)

Legal Events

Date Code Title Description
MK1 Application lapsed section 142(2)(a) - no request for examination in relevant period