WO1996036164A2 - Method and apparatus for compressing image data - Google Patents

Method and apparatus for compressing image data

Info

Publication number
WO1996036164A2
WO1996036164A2 PCT/US1996/005415 US9605415W WO9636164A2 WO 1996036164 A2 WO1996036164 A2 WO 1996036164A2 US 9605415 W US9605415 W US 9605415W WO 9636164 A2 WO9636164 A2 WO 9636164A2
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
color
pixels
image
value
Prior art date
Application number
PCT/US1996/005415
Other languages
French (fr)
Other versions
WO1996036164A3 (en)
Inventor
James P. Hoddie
Ian D. Ritchie
Original Assignee
Apple Computer, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Apple Computer, Inc. filed Critical Apple Computer, Inc.
Priority to AU55564/96A priority Critical patent/AU5556496A/en
Priority to DE69612348T priority patent/DE69612348T2/en
Priority to EP96912900A priority patent/EP0770301B1/en
Publication of WO1996036164A2 publication Critical patent/WO1996036164A2/en
Publication of WO1996036164A3 publication Critical patent/WO1996036164A3/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/90: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
    • H04N19/93: Run-length coding
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00: Image coding
    • G06T9/005: Statistical coding, e.g. Huffman, run length coding

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Color Television Systems (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

A method of compressing data representing a plurality of pixels consecutively arranged along a line of an image to be displayed on a computer controlled display is described. The method includes the step of determining the color of an initial pixel of the plurality of pixels. The value of a pixel count is then incremented. The value of the pixel count indicates the number of pixels in that color. The color of an adjacent pixel of the initial pixel is then determined. If the color of the adjacent pixel is identical to the color of the initial pixel, then the adjacent pixel is caused to be the initial pixel and the step of incrementing the value of the pixel count is again performed. The method then moves to determine the color of the next pixel. If the color of the adjacent pixel is different from the color of the initial pixel, then a datum indicating the color of the initial pixel and the value of the pixel count is generated. A method of using the compressed data of the image to blend the image onto another image is also described.

Description

METHOD AND APPARATUS FOR COMPRESSING IMAGE DATA
FIELD OF THE INVENTION
The present invention pertains to the field of digital image processing. More particularly, this invention relates to an arrangement for compressing the data for an image displayed on a computer controlled display system to minimize the memory requirement of the display system and to allow the image to be more quickly and efficiently blended with other images.
BACKGROUND OF THE INVENTION
Prior art imaging systems typically produce a final output image using two distinct steps. First, imaging data is encoded and placed into a frame buffer. In a second step, when the frame buffer is at least partially filled, this encoded data is extracted and transmitted to a marking device (e.g., a display or a printer). Traditionally, the frame buffer has contained the precise marking pattern (i.e., bitmap or pixel map) to be utilized by the marking device when producing the final output image.
For example, in a prior art bi-level imaging system with a marking device capable of either creating a mark at a given spot or leaving the spot blank, the frame buffer consists of binary memory with each bit in the memory representing a spot on the device's output medium. For imaging systems which include marking devices capable of imaging in multiple colors or gray levels, each spot to be imaged by the device is represented by a corresponding value in the frame buffer that specifies the color or luminance of that particular spot.
Disadvantages are, however, associated with such prior art image rendering techniques. One disadvantage is that the frame buffer typically requires a relatively large storage capacity to store the bitmap or pixel map data for the image. This is particularly so if the image also involves colors and/or gray scales. In that case, extra data or information is required to specify the color or gray scale of each spot or pixel of the image, thus increasing the memory space needed to store the bitmap data. The relatively large frame buffer employed to store the bitmap data typically increases the memory cost of the imaging system, which in turn increases the system cost.
Another disadvantage is that it typically takes a relatively long time to blend a gray scaled glyph image onto a multi-color graphics image. This is due to the fact that the gray scaled glyph image includes not only the completely imaged spots (i.e., black pixels) and completely unimaged spots (i.e., white pixels), but also partially imaged spots (i.e., gray pixels) measured in different scales or levels. When a gray pixel is to be blended onto a color pixel, the color of the blended pixel needs to be changed, depending on the scale of the gray pixel and the color of the color pixel. Thus, in order to blend a glyph image onto a colored graphics image, each pixel of the glyph image is individually blended with its corresponding pixel of the graphics image. This typically takes a relatively long time to complete. In addition, it is also relatively costly to determine for each pixel whether blending is necessary.
SUMMARY OF THE INVENTION
One of the features of the present invention is to minimize memory required for storing data for displaying an image on a computer controlled display system. Another feature of the present invention is to compress the data for displaying an image on a computer controlled display system such that the memory associated with storing the data can be minimized.
Another feature of the present invention is to store the data for displaying a gray scaled glyph on a computer controlled display system in a compressed format such that the glyph can be blended with graphics on the display system relatively quickly and efficiently.
A method of compressing data representing a plurality of pixels consecutively arranged along a line of an image to be displayed on a computer controlled display is described. The method includes the step of determining the color of an initial pixel of the plurality of pixels. The value of a pixel count is then incremented. The value of the pixel count indicates the number of pixels in that color. The color of an adjacent pixel of the initial pixel is then determined. If the color of the adjacent pixel is identical to the color of the initial pixel, then the adjacent pixel is caused to be the initial pixel and the step of incrementing the value of the pixel count is again performed. The method then moves to determine the color of the next pixel. If the color of the adjacent pixel is different from the color of the initial pixel, then a datum indicating the color of the initial pixel and the value of the pixel count is generated.
A method of compressing data representing a plurality of pixels of an image to be displayed on a computer controlled display is described. A first pixel from the plurality of pixels is located. The color of the first pixel is then determined. A second pixel from the plurality of pixels that is adjacent to the first pixel is then located. The color of the second pixel is then determined. A datum that indicates the color of the first pixel and the number of pixels in that color is generated if the color of the second pixel is determined to be identical to the color of the first pixel.
A method of using the compressed data of a first image (e.g., gray scaled glyph image) to blend the first image onto a second image (e.g., multi-color graphics image) is also described. A datum of the first image is retrieved. The datum specifies a color and the number of consecutive pixels in that color. Then a single blending operation is performed to blend the number of pixels of the first image onto a corresponding number of pixels of the second image if the color of the number of pixels of the first image either supersedes or is superseded by the color of the corresponding pixels of the second image.
In other words, if the color in the datum is either the backcolor (typically white) or the forecolor (typically black), no per-pixel blending of the second image is required. This maximizes the performance gain. When the color information in the datum of the first image is a value between the forecolor and the backcolor, blending is required. When this occurs, the blending value (i.e., how much and of what value) is calculated from the datum and the color information of the corresponding pixels of the second image. The operation is as follows. In cases where each pixel in the second image is different in color, the weighted blend of the first image pixels and the second image pixels must be calculated for each pixel. In cases where the second image pixels are of the same color, the weighted blend is calculated once and applied for the length of the corresponding pixels specified by the datum, with no re-check or re-calculation per pixel.
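As a simple illustration (the run length here is chosen for illustration only and does not appear in the patent): a run of twelve backcolor pixels in the first image is represented by a single datum, so the blending step makes one decision, namely to leave the twelve corresponding second image pixels unchanged, instead of twelve per-pixel decisions and blends; a run of twelve forecolor pixels is likewise resolved with one decision and a single fill of the twelve corresponding pixels.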
BRIEF DESCRIPTION OF THE DRAWINGS
The present invention is illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements and in which:
Figure 1 shows the pixel map of a glyph "t" displayed on a computer controlled display system;
Figure 2 shows the computer controlled display system, wherein the display system employs the function of compressing the pixel map data for a text image and using the compressed data to blend the text image onto a colored graphics image in accordance with one embodiment of the present invention;
Figure 3 shows the memory map of the system memory and frame buffer of the display system of Figure 2; Figure 4 illustrates a flow chart of the process of generating a compressed and blended pixel map data through a data compressor and a blending circuit of the display system of Figure 2;
Figure 5 is a flow chart depicting the process of the data compressor of Figure 4 for compressing the pixel map data in accordance with one embodiment of the present invention;
Figure 6 shows the process of the blending circuit of Figure 4 for using the compressed pixel map data for a text image to blend the text image onto a colored graphics image in accordance with one embodiment of the present invention.
DETAILED DESCRIPTION
Figure 1 shows the pixel map 10 of a glyph "t" to be displayed on a computer controlled display system. As is known, a pixel map is described as a two dimensional array of points having known coordinates which map to a display or a printer. As is known, the appearance of a pixel (i.e., spot) on a display is controlled by the signals applied to that pixel. The signals are derived from the data for that pixel stored in a pixel map memory. Pixel map 10 of glyph "t" can be blended (i.e., superimposed) onto a pixel map of a multi-color graphics image. In Figure 1, pixel map 10 only shows one glyph "t" for illustration purposes. In practice, pixel map 10 may show a text that includes a string of glyph symbols.
As can be seen from Figure 1, pixel map 10 is a gray scaled pixel map and includes a number of raster scan lines, each having a number of pixels. For example, scan line 11 includes a number of pixels 12 through 12n. Pixels 12-12n are white pixels (i.e., unimaged pixels on an imaging device, which are also referred to as backcolor pixels). Other scan lines of pixel map 10 may include white and black pixels (the black pixels are the imaged pixels on the imaging device, which are also referred to as the forecolor pixels). In addition, some other scan lines of pixel map 10 include gray scaled pixels (i.e., partially imaged pixels). For example, scan line 13 includes gray scaled pixels 15, 17, and 18. Each of gray scaled pixels 15, 17, and 18 has a different gray scale or level.
For one embodiment, there are fourteen gray levels or scales from white (i.e., backcolor) to black (i.e., forecolor), for a total of sixteen levels counting white and black. For other embodiments, the gray levels can be more or fewer than sixteen. For example, the intensity of the gray color can be specified in thirty-two scales or levels from black to white. For another embodiment, pixel map 10 may specify a multi-color graphics image. In this case, the gray levels can be used to specify the different colors.
As can be seen from Figure 1, each pixel of pixel map 10 requires a datum to specify its gray scale or color in order to describe pixel map 10. If pixel map 10 describes a colored graphics image, the actual color (instead of gray scale) of each pixel of pixel map 10 needs to be specified by the pixel data.
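A minimal sketch of such an uncompressed gray scaled pixel map is shown below, assuming one byte per pixel, sixteen gray levels, level 0 as the backcolor and level 15 as the forecolor; the type and field names are illustrative and do not come from the patent.

    /* Sketch: an uncompressed gray scaled pixel map, one gray level per pixel,
     * stored scan line by scan line (0 = backcolor/white, 15 = forecolor/black). */
    typedef struct {
        int width;             /* pixels per raster scan line */
        int height;            /* number of raster scan lines */
        unsigned char *level;  /* width * height gray levels  */
    } PixelMap;

    /* Gray level of the pixel at column x on scan line y. */
    static unsigned char pixel_at(const PixelMap *pm, int x, int y)
    {
        return pm->level[y * pm->width + x];
    }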
As described below and in accordance with one embodiment of the present invention, the pixel data of pixel map 10 is compressed in a computer controlled display system such that when pixel map 10 is blended onto another image, the compressed data allows pixel map 10 to be relatively quickly and efficiently blended onto the other image. Briefly, the compressing process determines the color (or gray scale) of an initial pixel and then determines whether the pixel adjacent to the initial pixel shares the same color (or gray scale). If so, the process increments its pixel count to indicate the number of pixels in that color. The process then moves to determine the color of the next adjacent pixel and increments the pixel count unless that pixel has a different color or gray level. When this occurs, a datum is generated that specifies the color (or gray scale) and the number of consecutive pixels in that color. The datum has two parts, one for specifying the color and the other for specifying the number of pixels. The process then repeats those steps to generate the next compressed datum until the entire pixel map has been compressed. By doing so, when the compressed data of a pixel map is used to blend the pixel map onto another pixel map, the blending can be done in a relatively quick and efficient manner, because a number of pixels can be blended at one time if they share the same color. In addition, the storage space required to store the compressed pixel map data is minimized. The compressing and blending processes according to one embodiment of the present invention will be described in more detail below, in conjunction with Figures 2-6.
Figure 2 shows a computer based system 20 having a computer controlled display system for compressing data of a computer image and for blending the image using the compressed data onto other images according to one embodiment of the present invention. For one embodiment, computer system 20 of Figure 2 operates in a multimedia environment and supports integrated digital media and three-dimensional graphics and models. For one embodiment, computer system 20 is a personal computer. For other embodiments, computer system 20 can be a notebook computer, a laptop computer, a minicomputer, a workstation computer, a mainframe computer, or any other type of computer system.
Computer system 20 includes a processor 22, which is often a microprocessor such as the commercially available 68030 or 68040 microprocessor from Motorola. Computer system 20 also includes a system bus 21 and system memory 23 for storage of instructions and data for use by processor 22. System bus 21 typically includes address and data lines as well as control lines for allowing communication of data and instructions between various components of computer system 20, such as processor 22 and system memory 23, as well as other components shown in Figure 2. Computer system 20 also includes a frame buffer 24 for storing pixel data or information for display on a display 28 or to be printed by printer 30.
Computer system 20 also includes a mass storage device 26, such as a hard disk, and a disk controller 25 which is typically coupled to system bus 21. Computer system 20 further includes a display controller 27 for controlling and processing image data to be imaged on display 28. As described above, the image data is stored in frame buffer 24 before being displayed on display 28. Input and output of computer system 20 is also provided by an input/output controller 29, which may be one unit or several different units as is known in the art for controlling the input and output from/to printers such as printer 30, keyboards such as keyboard 31, and cursor control devices such as cursor controller 32. Processor 22 retrieves programs containing instructions and data from mass storage device 26 and causes these instructions and data to be loaded into system memory 23 for execution of the instructions. Processor 22 executes the instructions and causes a displayable representation, such as a pixel map, to be created in frame buffer 24, which representation is then conveyed over system bus 21 to display controller 27 or I/O controller 29 so that the displayable representation may then be displayed on display 28 or printed by printer 30. As is well known, display 28 may be any of a variety of suitable computer controlled display devices, such as CRT displays or liquid crystal displays, etc. As is also well known, printer 30 may be one or more of any variety of "hard copy" display devices such as laser printers, ink jet printers, etc. It is well known that numerous other computer architectures exist, and the present invention may be practiced in those architectures as well.
Figure 3 shows the memory map 40 of system memory 23 and frame buffer 24 of Figure 2, with a typical arrangement of the major programs contained within system memory 23 and frame buffer 24. In particular, there is shown a display pixel map section 41. Pixel map section 41 represents the pixel map data stored in frame buffer 24. Each pixel data in a pixel map defines a particular pixel on an output imaging device (e.g., display 28 or printer 30).
In accordance with one embodiment of the present invention, the pixel map data stored in frame buffer 24 is in a compressed format. The pixel map data can represent a gray scaled text image, a multi-color graphics image, or a blended image of gray scaled text and multi-color graphics. The compression of the pixel map data in accordance with one embodiment will be described in more detail below, in conjunction with Figures 4-5. The blending of two images (i.e., a text image and a graphics image) using the compressed pixel map data in accordance with one embodiment of the present invention is also described in more detail below, in connection with Figures 4 and 6.
Memory map 40 of system memory 23 and frame buffer 24 also includes system program section 42 for storing system programs which represent a variety of sequences of instructions for execution by the CPU or processor 22 in order to support system level input and output and control. For example, the system programs such as disk operating systems and the like may be stored within section 42. Typically also, the programs which provide scan conversion such as scan converters 66 and 72 of Figure 4 may also be stored in section 42. Moreover, the programs which provide data compression and image blending using the compressed pixel map data in accordance with one embodiment of the present invention may also be stored in section 42. The data compression programs in accordance with one embodiment of the present invention are shown in Figure 4 as data compressors 67 and 74. The image blending programs in accordance with one embodiment of the present invention are shown in Figure 4 as blending circuit 73. These programs will be described in more detail below, in conjunction with Figures 4-6.
System memory 23 typically also includes font resources shown within memory section 43, which font resources include outline font data. Additionally, space within system memory 23 is also reserved for other programs and spare memory as shown as memory section 44 in Figure 3. These other programs may include a variety of useful computational or utility programs as may be desired. In addition, graphics data for generating graphics image data may also be stored in section 44 of memory map 40.
Figure 4 is the flow chart that shows the process of compressing the scan converted pixel map data of a text image and the process of blending two display images (e.g., a text image and a graphics image) using the compressed pixel map data. As can be seen from Figure 4, output text data 60 specifies a text to be imaged on an output imaging device (e.g., display 28 or printer 30). The output text data 60 typically includes text data for specifying or identifying the alphanumeric or other characters or symbols to be printed or displayed. In addition, output text data 60 also includes other control information which will be described below. Output text data 60 can be generated by any known text-rendering techniques adopted in computer system 20 of Figure 2. Output text data 60 typically includes at least one glyph for imaging. Alternatively, text data 60 can include graphics data for imaging graphics, or a combination of text and graphics data. Output text data 60 also includes other control information for defining the font and shape in which the text is to be rendered, for defining the size of the glyph, and for defining coordinates of characters relative to a page or display screen or relative to each other. All the information of output text data 60 passes through an interpreter 65 to generate the actual text image data. Interpreter 65 can be implemented by any known text image rendering software programs. The text image data is then applied to scan converter 66 for scan converting the text data into gray scaled pixel map data. As described above, scan converter 66 can be implemented by any known scan conversion software or hardware means. The scan converted pixel map data specifies the actual pixel map (e.g., the pixel map shown in Figure 1) to be imaged on an actual output imaging device. Each pixel data of the pixel map data specifies the gray scale (including black and white) of the pixel to be actually imaged. The converted pixel map data then passes through a data compressor 67 for data compression. Moreover, the above-described procedures for rendering the pixel map data can be done in one step.
Referring back to Figure 1, the data compressing function of compressor 67 of Figure 4 can be explained as follows. When, for example, the pixel data of scan line 13 is to be compressed, pixel 14 is first located and its gray scale (or color) is determined. In this case, pixel 14 is determined to be a white (i.e., backcolor) pixel. Then the gray scale (or color) of its adjacent pixels is determined and the pixel count indicating the number of pixels in that gray scale (or color) is accordingly incremented. The pixel count is initially set at zero. When pixel 15 is reached and its gray scale is determined not to be white, a first datum is generated. The first datum has two parts, one for defining the gray scale (in this case, white) and the other for defining the value of the pixel count (in this case, three). Here, the value three in the pixel count indicates four pixels. The pixel count is then reset to zero. For one embodiment, each part of the datum is four bits wide. Alternatively, the two parts of the datum can be longer or shorter than four bits.
Then pixel 16 is located and its color is determined. Because pixel 16 is a black (i.e., forecolor) pixel, a second datum is generated that indicates the gray level of pixel 15 and the value of the pixel count (in this case, zero). The pixel count is then reset. The process of compressing then moves to check the gray scale of the pixels following pixel 16, incrementing the value of the pixel count accordingly, up to pixel 16i. When pixel 17 is reached, a third datum is generated. The color part of the third datum specifies the black color and the pixel count part of the third datum indicates a value of two. The compression process is then repeated until pixel 20i is checked. This data compression function of compressor 67 will be described in more detail below, in conjunction with Figure 5.
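As a numeric illustration, suppose the four bit color part occupies the high nibble of the byte wide datum and the pixel count occupies the low nibble, with white as level 0 and black as level 15 (the nibble order and level assignment are assumptions made here for illustration; the patent only states that each part is four bits wide). The first and third data for scan line 13 would then be packed as follows.

    /* Hypothetical packing of a byte wide datum: high nibble = gray level,
     * low nibble = pixel count (count value 3 represents four pixels). */
    unsigned char first_datum = (0  << 4) | 3;   /* white run of 4 pixels -> 0x03 */
    unsigned char third_datum = (15 << 4) | 2;   /* black run of 3 pixels -> 0xF2 */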
The compressed pixel map data from data compressor 67 is then applied to a blending circuit 73 if the text image is intended to be blended onto another image (e.g., a graphics image). If not, the compressed pixel map data from data compressor 67 can be directly applied to frame buffer 24 of Figure 2.
As also can be seen from Figure 4, an output graphics data 70 is applied to scan converter 72. The output graphics data 70 specifies a graphics image to be imaged on the output imaging device (e.g., display 28 or printer 30). Output graphics data 70 can be generated by any known graphics-rendering techniques adopted in computer system 20 of Figure 2.
Output graphics data 70 is then scan converted to become colored pixel map data. The colored pixel map data specifies the actual pixel map of the graphics image to be imaged on the output imaging device. Each pixel data of the colored pixel map data specifies the color of the pixel to be actually imaged. The colored pixel map data from scan converter 72 is then applied to blending circuit 73 if the graphics image is intended to be blended with another image. If not, the colored pixel map data can be directly supplied to frame buffer 24 of Figure 2. Alternatively, the colored pixel map data of graphics data 70 from scan converter 72 may pass through a data compressor such as compressor 67 before being applied to blending circuit 73.
Blending circuit 73 uses the compressed pixel map data from data compressor 67 to blend the pixel map image of text data 60 onto the pixel map image of graphics data 70. Alternatively, the graphics image that will be blended may have already been converted and stored in frame buffer 24 (Figure 2). In this case, blending circuit 73 receives the pixel map data of the graphics image for blending from frame buffer 24, instead of scan converter 72. The blending operation of blending circuit 73 using the compressed pixel map data is described as follows, with reference to Figure 1. When blending circuit 73 receives the first datum defining the color of pixels 14-14i, blending circuit 73 only needs to make one determination for these pixels. In this case and for pixels 14-14i, the decision is to do nothing in terms of blending these pixels. When blending circuit 73 receives the third datum defining the color of pixels 16-16i, blending circuit 73 again only needs to make one determination. In this case, the black color of pixels 16-16i is to be blended onto the other image. Only when the datum for gray pixels such as pixel 15 is received in blending circuit 73 does blending circuit 73 need to individually blend the gray scale of the pixel with the color of the corresponding pixel of the other image. This blending operation by blending circuit 73 using compressed pixel map data will be described in more detail below, in conjunction with Figure 6.
The output of blending circuit 73 is then applied to data compressor 74 for further data compression. For one embodiment, data compressor 74 is optional in the system. When data compressor 74 is not included, the output of blending circuit 73 is directly applied to frame buffer 24. The operation of data compressor 74 is identical to that of data compressor 67. The output of data compressor 74 is then applied to frame buffer 24. Alternatively, the output of blending circuit 73 can be directly applied to frame buffer 24.
The scheme of compressing the data for a pixel map image in accordance with one embodiment of the present invention is now described in conjunction with Figure 5. The scheme of using the compressed data of the pixel map image to blend the image onto another pixel map image will be described below, in conjunction with Figure 6.
Referring to Figure 5, the process starts at step 90. At step 91, a scan line is located. At step 92, a first pixel of the scan line is located. At step 93, the value of the pixel count is set to zero. As described above, the pixel count is used to count or indicate the number of pixels in a particular color. At step 94, the color (or gray level) of that pixel is determined. The process then moves to step 95, at which the value of the pixel count is compared against a predetermined number (e.g., 15). This is to make sure that the value of the pixel count does not exceed the predetermined number. Alternatively, the predetermined number can be greater or smaller than sixteen. The value of the predetermined number is determined by the number of bits assigned in the datum for indicating the value of the pixel count. If, at step 95, it is determined that the value has not reached the predetermined number, then step 96 is performed. Otherwise, step 96 is bypassed and step 98 is then performed.
At step 96, the color of the next pixel is determined. If the color is different from the color of the previous pixel, then step 98 is performed. If not, the process goes to step 97 to increment the pixel count.
At step 98, a byte-wide datum is generated that defines the color (or gray scale) and the value of the pixel count (i.e., the number of pixels counted in that color). Then step 99 is performed, at which the pixel count is reset to zero. The process then moves to step 100, at which a judgment is made as to whether there are any remaining pixels along the scan line. If so, step 102 is performed. At step 102, the next pixel (i.e., the first pixel of the remaining pixels along the scan line) is located and the process then moves to step 94.
If, at step 100, it is determined that there are no more pixels left along the scan line, then step 101 is performed, at which it is determined whether the scan line is the last line of the pixel map. If not, the process moves to step 103, at which the next scan line is located. The process then moves to step 92. If, however, at step 101 it is determined that the scan line is in fact the last scan line, then the process ends at step 104.
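A minimal sketch of the Figure 5 scan-line loop is given below, assuming a hypothetical byte layout in which the high four bits carry the pixel count and the low four bits carry a 4-bit gray level; with four count bits the predetermined number checked at step 95 would be fifteen. The layout, the count convention (the stored count is the run length minus one), and the function name are illustrative assumptions, not requirements of the specification.

```c
#include <stddef.h>

#define COUNT_LIMIT 15   /* assumed 4-bit count field: step 95's predetermined number */

/* Compress one scan line of 4-bit gray pixels (0..15) into byte-wide
   data, each datum packing a gray level and a pixel count
   (steps 93-98 of Figure 5).  Returns the number of data written. */
size_t compress_scan_line(const unsigned char *pixels, size_t width,
                          unsigned char *out, size_t out_cap)
{
    size_t n = 0;
    size_t i = 0;
    while (i < width && n < out_cap) {
        unsigned char color = pixels[i] & 0x0F;   /* step 94: color of this pixel */
        unsigned char count = 0;                  /* step 93: reset the pixel count */

        /* steps 95-97: extend the run while the next pixel matches and
           the count has not reached the field limit */
        while (count < COUNT_LIMIT && i + 1 < width &&
               (pixels[i + 1] & 0x0F) == color) {
            count++;
            i++;
        }

        /* step 98: byte-wide datum = count in high nibble, gray in low nibble */
        out[n++] = (unsigned char)((count << 4) | color);
        i++;                                      /* steps 100/102: next pixel */
    }
    return n;   /* the caller repeats this per scan line (steps 101/103) */
}
```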
Referring to Figure 6, the process of blending starts at step 110. At step 111, a datum for the compressed image (i.e., the image whose pixel map data has been compressed) is retrieved. At step 112, the color and pixel count value of the datum are identified. The process then moves to step 113, at which it is determined whether the color indication of the datum is white (i.e., the backcolor). If so, then step 114 is performed. At step 114, the corresponding number of pixels of the second image (i.e., the other image to be blended) do not change their colors.
If, at step 113, it is determined that the color specified by the datum is not the backcolor, then step 115 is performed, at which it is determined whether the color specified by the datum is black (i.e., the forecolor). If so, step 116 is performed, at which the color of all of the corresponding number of pixels of the second image is changed to black.
If, however, at step 115 it is determined that the color specified by the datum is not black, which means the color is a scaled gray, then step 117 is performed, at which the gray scale of each of the number of pixels of the compressed image is blended onto the color of its corresponding pixel of the second image. The process then moves to step 118, at which it is determined whether any more data for the compressed image requires blending. If so, the process moves to step 111. If not, the process ends at step 119.
The operation of step 117 can be described as follows. First, the colors of the pixels of the second image specified by the pixel count are determined. If those pixels have different colors, then the blending takes place individually, pixel by pixel. If the pixels all have the same color, then the blending takes place by checking and calculating the weighted blend of the first pixel of the first and second images specified by the pixel count. The weighted blend is then applied for the length of pixels specified by the pixel count, without the need to check and calculate the weighted blend for each pixel.
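The step-117 shortcut might look like the following sketch, which fills in the blend_gray_run routine assumed earlier. The linear weighting toward a black forecolor is one plausible choice consistent with white leaving the destination unchanged and black superseding it; the actual blending arithmetic is not spelled out in the specification.

```c
#include <stddef.h>

/* Blend a run of 'n' destination pixels with a single 4-bit gray level
   (0 = black forecolor, 15 = white backcolor), per step 117.  If the
   destination pixels in the run are all the same, the weighted blend is
   computed once and reused for the whole run. */
void blend_gray_run(unsigned char gray, unsigned char *dest, size_t n)
{
    /* assumed weighting: result = (gray/15)*dest + (1 - gray/15)*black */
    int uniform = 1;
    for (size_t j = 1; j < n; j++) {
        if (dest[j] != dest[0]) { uniform = 0; break; }
    }

    if (uniform) {
        unsigned char blended = (unsigned char)((gray * dest[0]) / 15);
        for (size_t j = 0; j < n; j++)
            dest[j] = blended;          /* single calculation reused for the run */
    } else {
        for (size_t j = 0; j < n; j++)
            dest[j] = (unsigned char)((gray * dest[j]) / 15);   /* per-pixel blend */
    }
}
```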
In the foregoing specification, the invention has been described with reference to specific embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.

Claims

What is claimed is:
1. A method of compressing data representing a plurality of pixels consecutively arranged along a line of an image to be displayed on a computer controlled display, comprising the steps of:
(A) determining color of an initial pixel of the plurality of pixels;
(B) incrementing the value of a pixel count wherein the value of the pixel count indicates the number of pixels in that color;
(C) determining color of an adjacent pixel of the initial pixel from the plurality of pixels;
(D) if the color of the adjacent pixel is identical to the color of the initial pixel, then causing the adjacent pixel to be the initial pixel and repeating the steps (B) and (C);
(E) if the color of the adjacent pixel is different from the color of the initial pixel, then generating a datum indicating the color of the initial pixel and the value of the pixel count.
2. The method of claim 1, wherein the step (A) further comprises the step of determining gray scale of the initial pixel.
3. The method of claim 2, wherein the step (C) further comprises the step of determining the gray scale of the adjacent pixel.
4. The method of claim 1, wherein the value of the pixel count is initially set to zero, wherein the method further comprises the steps of
(I) comparing the value of the pixel count with a predetermined threshold value after the step (B);
(II) performing the step (E) if the value of the pixel count exceeds the predetermined threshold value.
5. The method of claim 1, wherein the step (E) further comprises the step of resetting the value of the pixel count to an initial value.
6. The method of claim 5, further comprising the steps of (a) causing the adjacent pixel of different color to be the initial pixel of that different color after the step (E);
(b) repeating the steps (A) through (E) until a last pixel of the plurality of pixels has been reached.
7. The method of claim 6, further comprising the steps of
(i) determining if the line is a last line of the image, wherein the image displayed on a computer controlled display includes a plurality of lines, including the line, wherein each of the plurality of lines includes a plurality of pixels;
(ii) repeating the steps (A) through (E) if the line is not the last line of the plurality of lines.
8. A method of compressing data representing a plurality of pixels of an image to be displayed on a computer controlled display, comprising the steps of:
(A) locating a first pixel from the plurality of pixels;
(B) determining the color of the first pixel;
(C) locating a second pixel from the plurality of pixels that is adjacent to the first pixel;
(D) determining the color of the second pixel;
(E) generating a first datum that indicates the color of the first pixel and the number of pixels in that color if the color of the second pixel is determined to be identical to the color of the first pixel.
9. The method of claim 8, further comprising the step of generating a second datum to indicate the color and number of the first pixel if the color of the second pixel is different from that of the first pixel.
10. The method of claim 8, further comprising the steps of
(F) locating a third pixel from the plurality of pixels that is adjacent to the second pixel;
(G) determining the color of the third pixel;
(H) modifying the first datum to indicate the color of the first pixel and the number of pixels in that color if the color of the third pixel is also determined to be identical to the color of the first pixel.
11. The method of claim 10, further comprising the step of not modifying the first datum if the color of the third pixel is different from that of the first and second pixels.
12. The method of claim 11, further comprising the steps of (I) comparing the number of pixels in the first datum with a predetermined value;
(II) outputting the first datum if the number of pixels in the first datum is equal to the predetermined value.
13. The method of claim 12, further comprising the steps of (i) locating a fourth pixel from the plurality of pixels that is adjacent to the third pixel if the color of the third pixel is identical to that of the first pixel; (ii) determining the color of the fourth pixel;
(iii) modifying the first datum to indicate the color of the first pixel and the number of pixels in that color if the color of the fourth pixel is also determined to be identical to the color of the first pixel.
14. The method of claim 10, wherein the step (B) further comprises the step of determining gray scale of the first pixel.
15. The method of claim 14, wherein the step (D) further comprises the step of determining the gray scale of the second pixel.
16. The method of claim 15, wherein the step (G) further comprises the step of determining the gray scale of the third pixel if the third pixel is not a color pixel.
17. A method of using a compressed data for a first image to blend the first image onto a second image, comprising the steps of:
(A) retrieving a datum of the first image, wherein the datum defines a color and the number of consecutive pixels in that color;
(B) performing a single blending operation to blend the number of pixels of the first image onto a corresponding number of pixels of the second image if the color of the number of pixels of the first image either supersedes or is superseded by colors of the corresponding pixels of the second image.
18. The method of claim 17, wherein the step (B) further comprises the steps of
(a) determining whether the color of the number of pixels of the first image supersedes or is superseded by the colors of the corresponding pixels of the second image; (b) if the color of the number of pixels of the first image supersedes the colors of the corresponding pixels of the second image, then adopting the color of the number of pixels of the first image;
(c) if the color of the number of pixels of the first image is superseded by the colors of the corresponding pixels of the second image, then adopting the colors of the corresponding pixels of the second image.
19. An apparatus for compressing data representing a plurality of pixels of an image to be displayed on a computer controlled display, comprising: (A) means for determining color of an initial pixel of the plurality of pixels;
(B) means for incrementing the value of a pixel count, wherein the value of the pixel count indicates the number of pixels in that color;
(C) means for determining color of an adjacent pixel of the initial pixel from the plurality of pixels;
(D) means for causing the adjacent pixel to be the initial pixel and causing the means for incrementing to increment the value of the pixel count if the color of the adjacent pixel is identical to the color of the initial pixel, wherein the means for causing also causes the means for determining color of an adjacent pixel to determine the color of the adjacent pixel whenever the value of the pixel count is incremented;
(E) means for generating a datum indicating the color of the initial pixel and the value of the pixel count if the color of the adjacent pixel is different from the color of the initial pixel.
20. The apparatus of claim 19, wherein the value of the pixel count is initially set to zero, wherein the apparatus further comprises
(I) means for comparing the pixel count with a predetermined threshold value whenever the pixel count is incremented; (II) means for generating the datum and resetting the value of the pixel count to zero whenever the pixel count exceeds the predetermined threshold value.
21. The apparatus of claim 20, further comprising means for causing the adjacent pixel of different color to be the initial pixel of that different color in order for the value of the pixel count for that different color to be determined.
PCT/US1996/005415 1995-05-09 1996-04-17 Method and apparatus for compressing image data WO1996036164A2 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
AU55564/96A AU5556496A (en) 1995-05-09 1996-04-17 Method and apparatus for compressing image data
DE69612348T DE69612348T2 (en) 1995-05-09 1996-04-17 IMAGE DATA COMPRESSION METHOD AND DEVICE
EP96912900A EP0770301B1 (en) 1995-05-09 1996-04-17 Method and apparatus for compressing image data

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US43764195A 1995-05-09 1995-05-09
US08/437,641 1995-05-09

Publications (2)

Publication Number Publication Date
WO1996036164A2 true WO1996036164A2 (en) 1996-11-14
WO1996036164A3 WO1996036164A3 (en) 1997-01-09

Family

ID=23737275

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1996/005415 WO1996036164A2 (en) 1995-05-09 1996-04-17 Method and apparatus for compressing image data

Country Status (5)

Country Link
US (1) US5768569A (en)
EP (1) EP0770301B1 (en)
AU (1) AU5556496A (en)
DE (1) DE69612348T2 (en)
WO (1) WO1996036164A2 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6205255B1 (en) * 1998-01-06 2001-03-20 Intel Corporation Method and apparatus for run-length encoding of multi-colored images
JP4313051B2 (en) * 2002-02-27 2009-08-12 株式会社リコー Image forming apparatus, accounting counter, image forming method, accounting method, image forming program, and accounting program
JP4515832B2 (en) * 2004-06-14 2010-08-04 オリンパス株式会社 Image compression apparatus and image restoration apparatus
US8681167B2 (en) * 2008-09-23 2014-03-25 Intel Corporation Processing pixel planes representing visual information
US8687004B2 (en) * 2010-11-01 2014-04-01 Apple Inc. Font file with graphic images
CN106340278B (en) * 2016-10-13 2019-02-22 深圳市华星光电技术有限公司 A kind of driving method and device of display panel

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3305841A (en) * 1963-09-30 1967-02-21 Alphanumeric Inc Pattern generator
US3480943A (en) * 1967-04-03 1969-11-25 Alphanumeric Inc Pattern generator
US4091424A (en) * 1977-02-18 1978-05-23 Compression Labs, Inc. Facsimile compression system
US4437122A (en) * 1981-09-12 1984-03-13 Xerox Corporation Low resolution raster images
US4945351A (en) * 1988-05-23 1990-07-31 Hewlett-Packard Company Technique for optimizing grayscale character displays
US5155805A (en) * 1989-05-08 1992-10-13 Apple Computer, Inc. Method and apparatus for moving control points in displaying digital typeface on raster output devices
JPH03154096A (en) * 1989-11-13 1991-07-02 Canon Inc Method and device for generating pattern
US5231385A (en) * 1990-03-14 1993-07-27 Hewlett-Packard Company Blending/comparing digital images from different display window on a per-pixel basis
US5459828A (en) * 1990-08-01 1995-10-17 Xerox Corporation Optimized scaling and production of raster fonts from contour master fonts
US5301267A (en) * 1991-09-27 1994-04-05 Adobe Systems Incorporated Intelligent font rendering co-processor
US5249242A (en) * 1991-12-23 1993-09-28 Adobe Systems Incorporated Method for enhancing raster pixel data
US5426514A (en) * 1992-03-20 1995-06-20 Xerox Corporation Method of gray scaling facsimile images
US5353061A (en) * 1992-10-08 1994-10-04 International Business Machines Corporation System and method for frame-differencing video compression/decompression using perceptually-constant information and image analysis
US5270836A (en) * 1992-11-25 1993-12-14 Xerox Corporation Resolution conversion of bitmap images
US5467134A (en) * 1992-12-22 1995-11-14 Microsoft Corporation Method and system for compressing video data

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4316222A (en) * 1979-12-07 1982-02-16 Ncr Canada Ltd. - Ncr Canada Ltee Method and apparatus for compression and decompression of digital image data
JPS63157564A (en) * 1986-12-22 1988-06-30 Kokusai Denshin Denwa Co Ltd <Kdd> Coding and decoding device in hierarchical picture
EP0410739A2 (en) * 1989-07-27 1991-01-30 Fujitsu Limited Method and apparatus for compressing halftone image data

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
IBM TECHNICAL DISCLOSURE BULLETIN, vol. 31, no. 5, October 1988, NEW YORK US, pages 4-6, XP000024776 "Method of run length coding for natural images" *
PATENT ABSTRACTS OF JAPAN vol. 12, no. 420 (E-679), 8 November 1988 & JP 63 157564 A (KOKUSAI DENSHIN DENWA), 30 June 1988 *

Also Published As

Publication number Publication date
WO1996036164A3 (en) 1997-01-09
AU5556496A (en) 1996-11-29
US5768569A (en) 1998-06-16
DE69612348T2 (en) 2001-09-13
EP0770301A1 (en) 1997-05-02
EP0770301B1 (en) 2001-04-04
DE69612348D1 (en) 2001-05-10

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AL AM AT AU AZ BB BG BR BY CA CH CN CZ DE DK EE ES FI GB GE HU IS JP KE KG KP KR KZ LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK TJ TM TR TT UA UG UZ VN AM AZ BY KG KZ MD RU TJ TM

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): KE LS MW SD SZ UG AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN ML

AK Designated states

Kind code of ref document: A3

Designated state(s): AL AM AT AU AZ BB BG BR BY CA CH CN CZ DE DK EE ES FI GB GE HU IS JP KE KG KP KR KZ LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK TJ TM TR TT UA UG UZ VN AM AZ BY KG KZ MD RU TJ

AL Designated countries for regional patents

Kind code of ref document: A3

Designated state(s): KE LS MW SD SZ UG AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN ML

WWE Wipo information: entry into national phase

Ref document number: 1996912900

Country of ref document: EP

121 Ep: the epo has been informed by wipo that ep was designated in this application
WWP Wipo information: published in national office

Ref document number: 1996912900

Country of ref document: EP

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

NENP Non-entry into the national phase

Ref country code: CA

WWG Wipo information: grant in national office

Ref document number: 1996912900

Country of ref document: EP