AU2008264239A1 - Text processing in a region based printing system - Google Patents

Text processing in a region based printing system

Info

Publication number
AU2008264239A1
Authority
AU
Australia
Prior art keywords
text
page
representation
fill
fillmap
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
AU2008264239A
Inventor
Joseph Leigh Belbin
Edward James Iskenderian
David Robert James Monaghan
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc filed Critical Canon Inc
Priority to AU2008264239A priority Critical patent/AU2008264239A1/en
Publication of AU2008264239A1 publication Critical patent/AU2008264239A1/en
Abandoned legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06K GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 15/00 Arrangements for producing a permanent visual presentation of the output data, e.g. computer output printers
    • G06K 15/02 Arrangements for producing a permanent visual presentation of the output data, e.g. computer output printers using printers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06K GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 15/00 Arrangements for producing a permanent visual presentation of the output data, e.g. computer output printers
    • G06K 15/02 Arrangements for producing a permanent visual presentation of the output data, e.g. computer output printers using printers
    • G06K 15/18 Conditioning data for presenting it to the physical printing elements
    • G06K 15/1848 Generation of the printable image
    • G06K 15/1849 Generation of the printable image using an intermediate representation, e.g. a list of graphical primitives
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/40 Filling a planar surface by adding surface attributes, e.g. colour or texture
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 9/00 Image coding

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Image Generation (AREA)

Description

S&F Ref: 889300

AUSTRALIA
PATENTS ACT 1990
COMPLETE SPECIFICATION FOR A STANDARD PATENT

Name and Address of Applicant: Canon Kabushiki Kaisha, of 30-2, Shimomaruko 3-chome, Ohta-ku, Tokyo, 146, Japan

Actual Inventor(s): David Robert James Monaghan, Edward James Iskenderian, Joseph Leigh Belbin

Address for Service: Spruson & Ferguson, St Martins Tower, Level 35, 31 Market Street, Sydney NSW 2000 (CCN 3710000177)

Invention Title: Text processing in a region based printing system

The following statement is a full description of this invention, including the best method of performing it known to me/us:
TEXT PROCESSING IN A REGION BASED PRINTING SYSTEM

TECHNICAL FIELD

The present invention relates generally to computer-based printer systems and, in particular, to reduced memory printer systems for high-speed printing.

BACKGROUND

A computer application typically provides a page to a printing device for printing in the form of a description of the page, specified using a Page Description Language (PDL) such as Adobe® PostScript® or Hewlett-Packard® PCL. The PDL provides descriptions of objects to be rendered onto the page in a rendering (or z) order, as opposed to a raster image of the page to be printed. The page is typically rendered for printing and/or display by an object-based graphics system, also known as a Raster Image Processor (RIP). The printing device receives the description of the page to be rendered and generates an intermediate representation of the page. The printing device then renders the intermediate representation of the page to pixels which are printed to print media such as paper.

In general, an intermediate representation of a page consumes less memory than the raster representation. Also, in some prior art printing devices, the intermediate representation of the page may be rendered to pixels in real-time. The intermediate page representation is generated by a controlling program which is executed by a controller processor within the printer device. A pixel rendering apparatus is used to render the intermediate page representation to pixels. The rendered pixels are transferred to a printer engine, such as an electro-photographic engine, which prints the pixels onto the print media.

Different methods of representing a page have varying complexity and performance characteristics. Some intermediate page representations are simple to generate, but require large amounts of processing power and memory to render to pixels at high page rates. On the other hand, other intermediate page representations are more complex to generate, but can be rendered to pixels at high page rates, with less processing power and memory. Factors which contribute to the complexity of generating and rendering an intermediate page representation include:

(i) whether objects are represented using splines, line segments or another representation;

(ii) whether or not objects in the intermediate page representation overlap; and

(iii) the amount of compositing that is required to generate pixel data during rendering.

Generally, if an intermediate representation of a page is simple to generate, then the intermediate page representation will be complex to render to pixels. If, on the other hand, an intermediate representation of a page is complex to generate, then the intermediate page representation will be simple to render to pixels.

The intermediate representation of a page is typically generated by the controlling program which is executed by the controller processor within a printing device. On the other hand, the intermediate page representation is rendered to pixels in a pixel rendering apparatus which is typically in the form of an Application Specific Integrated Circuit (ASIC).

In order to optimise the performance of a printing device, the right balance between the amount of processing and resources required to generate an intermediate representation of a page, and those required to render the intermediate page representation to pixels, needs to be determined.
SUMMARY

In accordance with one aspect of the present disclosure there is provided a method of rendering a display list comprising at least one text object, said method comprising the steps of:

replacing said at least one text object in the display list with a text placeholder;

converting the display list, having said at least one text object replaced by said text placeholder, into an intermediate representation comprising non-overlapping regions;

rendering said intermediate representation into pixels, wherein if a region of said intermediate representation has a text placeholder, accessing a text representation and at least one text graphical attribute; and

rendering said region based on said text representation and said at least one text graphical attribute.

Other aspects are also disclosed.

BRIEF DESCRIPTION OF THE DRAWINGS

At least one embodiment of the present invention will now be described with reference to the following drawings, in which:

Fig. 1 shows a schematic block diagram of a pixel rendering system for rendering computer graphic object images according to a method in the prior art;

Fig. 2 shows a schematic block diagram of a controlling program in the pixel rendering system shown in Fig. 1;

Fig. 3a shows a display list representation of a page;

Fig. 3b shows a fillmap representation of the page which is represented in Fig. 3a;

Fig. 3c shows a tiled fillmap representation of the page which is represented in Fig. 3a;

Fig. 4 shows a portion of a fillmap representation of a page, the portion containing a fillmap representation of a text object;

Fig. 5 shows a display list representation of a page;

Fig. 6 shows a display list representation of a page where each text object has been replaced by a text placeholder;

Fig. 7 shows a tiled fillmap representation of the page represented in Fig. 6;

Fig. 8 shows a representation of a text mask which corresponds to a text object shown in Fig. 5;

Fig. 9 shows the bit mask derived from the top-left hand tile of the text mask shown in Fig. 8;

Fig. 10 shows a schematic block diagram of a pixel rendering system for rendering computer graphic object images according to the present invention;

Fig. 11 shows a schematic block diagram of a controlling program in the pixel rendering system shown in Fig. 10;

Fig. 12 shows a flow chart of a controlling program in the pixel rendering system shown in Fig. 10;

Fig. 13 shows a flow chart of the process for generating a tiled text mask to represent a text object;

Fig. 14 shows a flow chart of the process of rendering a simplified fillmap representation of a page; and

Fig. 15 shows a flow chart of the process of rendering a single level within a region of a simplified fillmap representation of a page.

DETAILED DESCRIPTION INCLUDING BEST MODE

Fig. 1 shows a schematic block diagram of a pixel rendering system 100 for rendering computer graphic object images which may be processed by a method in the prior art. The pixel rendering system 100 comprises a personal computer 110 connected to a printer system 160 through a network 150. The network 150 may be a typical network involving multiple personal computers, or may be a simple connection between a single personal computer 110 and a printer system 160.

The personal computer 110 comprises a host processor 120 for executing a software application 130, such as a word processor or graphical software application.
The printer system 160 comprises a controller processor 170 for executing a controlling program 140, a pixel rendering apparatus 180, memory 190, and a printer engine 195 coupled via a bus 175. The pixel rendering apparatus 180 is preferably in the form of an ASIC coupled via the bus 175 to the controller processor 170, memory 190, and the printer engine 195. However, the pixel rendering apparatus 180 may also be implemented in software that is executed in the controller processor 170.

In the pixel rendering system 100, the software application 130 creates page-based documents where each page contains objects such as text, lines, fill regions, and image data. The software application 130 sends a high-level description of the page (for example a PDL file) to the controlling program 140 that is executed in the controller processor 170 of the printer system 160 via the network 150.

The controlling program 140 receives the description of the page from the software application 130, and generates a fillmap representation of the page. The fillmap representation of a page is an example of an intermediate representation as discussed above and will later be described in detail. The controlling program will later be described with reference to Fig. 2.

The controlling program 140, executed on the controller processor 170, is also responsible for providing memory 190 for the pixel rendering apparatus 180, initialising the pixel rendering apparatus 180, and instructing the pixel rendering apparatus 180 to start rendering the fillmap representation of the page to pixels.

The controlling program 140 sends the fillmap representation of the page to the pixel rendering apparatus 180. The pixel rendering apparatus 180 then uses the fillmap representation to render the page to pixels. The output of the pixel rendering apparatus 180 is a raster representation of the page made up of colour pixel data. This raster representation of the page may be used by the printer engine 195. The printer engine 195 then prints the raster representation of the page onto print media such as paper.

The controlling program 140 will now be described with reference to Fig. 2. The controlling program 140 is composed of an object decomposition driver 220, a primitives processor 230 and a job generator 240. The controlling program 140 receives the description of the page from the software application 130. The description of the page is composed of page objects 210. The object decomposition driver 220 decomposes the page objects 210 into edges, levels and fills 250. A fill may be a flat fill representing a single colour, a blend representing a linearly varying colour, a bitmap image or a tiled (i.e. repeated) image.

Within the controlling program, the primitives processor 230 then further processes the edges, levels and fills 250 to generate a fillmap representation 260 of the page and a table of fill compositing sequences 270. After the fillmap 260 and the table of fill compositing sequences 270 have been generated, the job generator 240 generates a spool job 280 consisting of the fillmap representation 260 of the page, the table of fill compositing sequences 270, together with the object fills. The controlling program 140 sends the spool job 280 to the pixel rendering apparatus 180 for rendering to pixels.

A fillmap representation of a page will now be described in more detail. A fillmap is a region based representation of a page.
The fillmap maps a region of pixels within the page to a fill compositing sequence which will be composited to generate the colour data for each pixel within that region. Multiple regions within a fillmap can map to the same fill compositing sequence. Regions within the fillmap do not overlap (i.e. they are non-overlapping regions) and hence each pixel in the rendered page can only belong to a single region. Each region within the fillmap is defined by a set of fillmap edges which activate the fill compositing sequence associated with that region. Fillmap edges:

(i) are monotonically increasing in the y-direction of the page;

(ii) do not intersect;

(iii) are aligned with pixel boundaries, meaning that each fillmap edge consists of a sequence of segments, each of which follows a boundary between two contiguous pixels;

(iv) contain a reference field referring to the index of the fill compositing sequence, within the table of fill compositing sequences, required to be composited to render the region, to which the fillmap edge belongs, to pixels; and

(v) activate pixels within a single region.

On any given scanline, starting at a fillmap edge which activates a region, and progressing in the direction of increasing x, the region remains active until a second fillmap edge which activates a further region is encountered. When the second edge is encountered, the active region is deactivated, and the region corresponding to the second edge is activated.

Within a fillmap, the fill compositing sequence active within each region of pixels is stored in the table of fill compositing sequences. A fill compositing sequence is a sequence of z-ordered levels, where each level contains attributes such as a fill, the opacity of the level, a compositing operator which determines how to mix the colour data of this level with other overlapping levels, and the priority, or z-order, of the level. A fill compositing sequence contains references to all the levels which may contribute colour to the pixels within the region. The table of fill compositing sequences contains all of the fill compositing sequences required to render the page to pixels. The table of fill compositing sequences does not contain duplicate instances of identical fill compositing sequences. Hence, multiple regions within a fillmap which map to the same fill compositing sequence, map to the same instance of the fill compositing sequence within the table of fill compositing sequences.

The fillmap representation of a page will now be described with reference to Figs. 3a to 3c. Fig. 3a shows a display list representation 300 of a page. The page has a white background. The page contains two objects. The first object 301 is an opaque "T" shaped object with a grey flat fill. The second object 302 is a transparent square with a hatched fill. The second object 302 overlaps the first object 301.

Fig. 3b shows a fillmap representation 320 of the page represented in Fig. 3a. The fillmap representation is composed of five pixel-aligned edges. Each edge references a fill compositing sequence which will be used to determine the colour of each of the pixels activated by that edge. On any given scan line on which an edge is active, the edge will activate those pixels which are immediately to the right of the edge, until the next edge or a page boundary is encountered.
The first edge 321 traces the left hand boundary of the page, and references a fill compositing sequence 331 which contains a single opaque level which is to be filled using the background fill. The second edge 322 traces the left hand boundary of the first object 301, and references a fill compositing sequence 332 that contains a single level which is opaque and is to be filled using a grey flat fill. The third edge 323 references the same fill compositing sequence 331 as the first edge 321. The fourth edge 324 traces the left hand boundary of the region where the second object 302 overlaps the white background. The fourth edge 324 references a fill compositing sequence 334 which contains two levels. The top most level is transparent and is to be filled using a hatched fill. The bottom most level is opaque and is to be filled using the background fill. The fifth edge 325 traces the left hand boundary of the region where the second object 302 overlaps the first object 301. The fifth edge 325 references a fill compositing sequence 333 which contains two levels. The top most level is transparent and is to be filled using a hatched fill. The bottom most level is opaque and is to be filled using a grey flat fill. Accompanying the fillmap representation 320 of the page is a table of fill compositing sequences which contains the fill compositing sequences 331, 332, 333 & 334 referenced by the edges contained in the fillmap representation 320 of the page.

Fig. 3c shows a tiled fillmap representation 340 of the page represented in Fig. 3a. The tiled fillmap contains four tiles 350, 360, 370 and 380. Each tile has a height and width of eight pixels. In order to generate the tiled fillmap representation 340 of the page, the edges of the original fillmap representation 320 have been split across tile boundaries. For example, the edge 321 which traces the left hand boundary of the page in the untiled fillmap representation 320 shown in Fig. 3b has been divided into two edges 351 & 371. The first edge 351 activates pixels in the top-left hand tile 350, while the second edge 371 activates pixels in the bottom-left hand tile. Also, new edges have been inserted on the tile boundaries to activate the left most pixels of each tile which were previously activated by an edge in a tile to the left of the tile in which the pixels reside. For example, in the top-right hand tile 360 a new edge 363 has been inserted to activate pixels which were activated by the edge 322 which traces the left hand boundary of the first object 301 in the original fillmap representation 320 shown in Fig. 3b.

Fig. 4 shows a portion 400 of a fillmap representation of a page, where the corresponding portion of the page contains a text object used to display the string "text". Each character within the text object is encoded in the fillmap as a set of pixel aligned edges which describe the shape of the glyph used to represent that character. In order to generate this fillmap representation of the page, each glyph that is used to describe the text object's shape must be scan converted at device resolution in order to produce the pixel aligned edges which describe the shape of that glyph. For pages containing a lot of text, which are typical of pages printed in an office environment, a lot of processing needs to be performed by the controller processor 170 in the printer system 160 in order to generate the fillmap representations of such pages.
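By way of illustration only, the fillmap structures described above (pixel-aligned edges, z-ordered levels, fill compositing sequences and the shared table of fill compositing sequences) might be captured in data structures along the following lines. This is a hypothetical sketch: the type and field names are invented for this discussion and are not taken from the specification, and the per-scanline edge representation is a simplification of the segment-based edges described above.

    #include <cstdint>
    #include <vector>

    // One z-ordered level within a fill compositing sequence (see the text
    // accompanying Figs. 3a to 3c).  The fill itself is referenced by index
    // into a separate table of fills.
    struct Level {
        int   fillIndex;      // flat fill, blend, bitmap image or tiled image
        float opacity;        // 1.0 = fully opaque
        int   compositingOp;  // how this level mixes with the levels below it
        int   priority;       // z-order of the level
    };

    // A fill compositing sequence: the levels that may contribute colour to the
    // pixels of one region, ordered from bottom-most to top-most.
    struct FillCompositingSequence {
        std::vector<Level> levels;
    };

    // A pixel-aligned fillmap edge.  The edge is monotonic in y, does not
    // intersect other edges, and activates the region whose fill compositing
    // sequence it references.
    struct FillmapEdge {
        int32_t startY;                     // first scanline on which the edge is active
        std::vector<int32_t> xPerScanline;  // x position of the edge on each scanline it spans
        int sequenceIndex;                  // index into the table of fill compositing sequences
    };

    // A fillmap: non-overlapping regions defined by pixel-aligned edges, plus
    // the shared, duplicate-free table of fill compositing sequences.
    struct Fillmap {
        std::vector<FillmapEdge> edges;
        std::vector<FillCompositingSequence> fillCompositingSequences;
    };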
It is desirable to reduce the amount of processing to be performed by the controller processor 170 during the generation of the page by performing more processing in the pixel rendering apparatus 180, which is preferably implemented in the form of an ASIC. This will result in the effective acceleration of the printing system 160.

Fig. 5 shows a display list representation 500 of a page to be printed using the present invention. The page has a white background. All of the objects on the page are opaque. The page contains a title which is represented by a first text object 501. The first text object 501 is filled using a black flat fill. The page also contains a snake-shaped vector graphic object 502. This vector graphic object 502 is filled using a dotted fill. The page contains a paragraph of text represented by a second text object 503. The second text object 503 is filled using a black flat fill. The second text object 503 overlaps the snake-shaped vector graphic object 502, and has a higher priority than the snake-shaped vector graphic object 502. The page also contains an image 504 which depicts the silhouette of a figure. Finally, the page contains a third text object 505 which is partially obscured by a circular vector graphic object 506. The priority of the third text object 505 is lower than the priority of the circular vector graphic object 506. The third text object 505 is filled using a black flat fill. The circular vector graphic object 506 is filled using a hatched fill.

Fig. 6 shows a second display list representation 600 of the page represented in Fig. 5. In the second display list representation 600 of the page, each text object has been replaced by a text placeholder. A text placeholder is an area which encloses the text object from which the text placeholder is derived, and can be of arbitrary shape. The dimensions of a text placeholder are equal to the dimensions of the boundaries of the associated text object. Each text placeholder is given the priority of the text object it replaces. Each text placeholder is associated with a text level. A text level references the fill of the associated text object and a text mask, and contains other attributes such as the opacity of the text object, a compositing operator which determines how to mix the colour data of the associated text object with other overlapping levels, and the priority of the level. A text mask identifies which pixels within the text placeholder belong to the associated text object. The structure of the text mask will be explained in detail with reference to Fig. 8.

In Fig. 6 the text placeholders have transparent fills to indicate that some pixels within the text placeholders' extents may derive their colour from overlapping page objects which have a lower priority than the priority assigned to the text placeholder. In the example given in Fig. 6, the first text object 501 in Fig. 5 has been replaced by a text placeholder 610. This text placeholder 610 is associated with a text level which contains a reference to a text mask 611, a reference to a black flat fill 612 (the fill that was to be used to derive the colour of the associated text object 501), and the compositing operator of that text object 501.
Similarly, the remaining text objects 503 and 505 have each been replaced by a corresponding text placeholder and an associated text level, each of which contains a reference to a text mask, together with a reference to a black flat fill, and the compositing operator of the associated text object.

Fig. 7 shows a tiled fillmap representation 700 of the page represented in Fig. 5. This fillmap representation 700 of the page has been derived from the second display list representation 600 shown in Fig. 6. In this fillmap representation 700 of the page, text objects have been encoded as regions 710. In some cases, these regions are divided into further regions 720, 730 & 740, where text objects overlap with other objects with lower priority. There are other cases where the text regions are obscured by opaque objects with higher priority, resulting in clipped regions 750. As each text object has been represented as a simple region, the tiled fillmap representation 700 of the page is faster to generate than the equivalent fillmap representation which includes pixel aligned edges which describe the shape of each of the glyphs which make up these text objects.

The concept of a text mask will now be described with reference to Fig. 8. A text mask is a structure which is used to identify which of the pixels within the associated text placeholder belong to the text object from which the text mask is derived. The extent of the text mask is equal to the dimensions of the associated text placeholder, grid-fitted to fillmap tile boundaries. Each text mask is divided into tiles. Text mask tiles have the following characteristics:

(i) text mask tiles have the same dimensions as fillmap tiles;

(ii) text mask tiles are aligned to fillmap tiles;

(iii) each text mask tile corresponds to a fillmap tile; and

(iv) each text mask tile contains a reference to each of the bitmap glyphs from the associated text object which intersect the corresponding fillmap tile, together with the position of each glyph as seen on the page, relative to the position of the corresponding fillmap tile.

Fig. 8 shows an exemplary representation of a text mask 800 which corresponds to the text object 501 in Fig. 5. The text mask 800 is divided into four tiles 810, 820, 830 & 840. Each tile within the text mask 800 corresponds to a tile within the fillmap representation 700 of the page. The text mask 800 makes reference to five bitmap glyphs. The text mask 800 references the following bitmap glyphs:

- a bitmap glyph representing an upper case "T" 801;
- a bitmap glyph representing a lower case "i" 802;
- a bitmap glyph representing a lower case "t" 803;
- a bitmap glyph representing a lower case "l" 804; and
- a bitmap glyph representing a lower case "e" 805.

Each bitmap glyph is a one bit per pixel bitmap representation of a character to be rendered to pixels. Bitmap glyphs are generated at device resolution, and are used by a graphics renderer to determine which pixels within the bounding box of a text character belong to that character.

Each text mask tile, within a given text mask, contains a reference to each of the bitmap glyphs that are required to render the portion of the original text object which intersects the corresponding fillmap tile, to pixels. The order of the glyph references within any given text mask tile is arbitrary. The position of each glyph on the page is stored, together with the reference to that glyph.
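A text mask tile can therefore be pictured as a list of glyph references, each paired with the offset of its glyph relative to the tile, as in the following hypothetical sketch. The names BitmapGlyph, GlyphEntry, TextMaskTile and TextMask are invented for illustration and are not taken from the specification; the offset convention is the one described in the next paragraph.

    #include <vector>

    // A device-resolution, one bit per pixel glyph bitmap.
    struct BitmapGlyph {
        int width;
        int height;
        std::vector<bool> bits;   // row-major, width * height entries, one per pixel
    };

    // One entry in a text mask tile: a glyph reference plus the offset of the
    // glyph's top-left hand corner relative to the tile's top-left hand corner
    // (the vectors (x1, y1), (x2, y2), (x3, y3) of Fig. 8).
    struct GlyphEntry {
        const BitmapGlyph* glyph;
        int offsetX;
        int offsetY;
    };

    // A text mask tile has the same dimensions as a fillmap tile and lists the
    // glyphs which intersect the corresponding fillmap tile, in arbitrary order.
    struct TextMaskTile {
        std::vector<GlyphEntry> glyphs;
    };

    // A text mask: the text placeholder's extent, grid-fitted to fillmap tile
    // boundaries and divided into fillmap-aligned tiles.
    struct TextMask {
        int tilesAcross;
        int tilesDown;
        std::vector<TextMaskTile> tiles;   // row-major, tilesAcross * tilesDown entries
    };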
The position of each glyph can be stored as the offset of the top-left hand corner of the bitmap glyph relative to the top-left hand corner of the tile.

The contents of a text mask tile will now be explained with reference to the top-left hand tile 810 of the text mask 800 illustrated in Fig. 8. This text mask tile contains three bitmap glyph references, together with their positions with respect to the tile. The first reference 811 is a reference to a bitmap glyph which represents an upper case "T" 801. The position of the top-left hand corner of this glyph 801 as it appears on the page, relative to the top-left hand corner of the tile 810, is specified by the vector (x1, y1). The second reference 812 is a reference to a bitmap glyph which represents a lower case "i" 802. The position of the top-left hand corner of this glyph 802 as it appears on the page, relative to the top-left hand corner of the tile 810, is specified by the vector (x2, y2). The third reference 813 is a reference to a bitmap glyph which represents a lower case "t" 803. The position of the top-left hand corner of this glyph 803 as it appears on the page, relative to the top-left hand corner of the tile 810, is specified by the vector (x3, y3).

Fig. 10 shows a schematic block diagram of a pixel rendering system 1000 for rendering computer graphic object images which are processed in accordance with the present invention. The pixel rendering system 1000 comprises a personal computer 1010 connected to a printer system 1060 through a network 1050. The network 1050 may be a typical network involving multiple personal computers, or may be a simple connection between a single personal computer 1010 and a printer system 1060.

The personal computer 1010 comprises a host processor 1020 for executing a software application 1030, such as a word processor or graphical software application.

The printer system 1060 comprises a controller processor 1070 for executing a controlling program 1040, a pixel rendering apparatus 1080, memory 1090, and a printer engine 1095 coupled via a bus 1075. The pixel rendering apparatus 1080 is preferably in the form of an ASIC coupled via the bus 1075 to the controller processor 1070, memory 1090, and the printer engine 1095. However, the pixel rendering apparatus 1080 may also be implemented in software that is executed in the controller processor 1070.

In the pixel rendering system 1000, the software application 1030 creates page-based documents, where each page contains objects such as text, lines, fill regions, and image data. The software application 1030 sends a high-level description of the page (for example a PDL file) to the controlling program 1040 that is executed in the controller processor 1070 of the printer system 1060 via the network 1050.

The controlling program 1040 receives the description of the page from the software application 1030, and generates a fillmap representation of the page. If the page contains text objects, then each text object is represented as a text placeholder, together with an associated text fill and text mask. The text fill and text mask describe the appearance of the text object within the extents of the text placeholder. The text placeholders are then processed, together with the other page objects, and a fillmap representation of the page is generated.
This fillmap representation is a simplified fillmap representation, as text objects are represented as regions, instead of smaller regions describing the individual text characters. The controlling program will later be described with reference to Fig. 11.

The controlling program 1040, executed on the controller processor 1070, is also responsible for providing memory 1090 for the pixel rendering apparatus 1080, initialising the pixel rendering apparatus 1080, and instructing the pixel rendering apparatus 1080 to start rendering the fillmap representation of the page to pixels.

The controlling program 1040 sends the fillmap representation of the page to the pixel rendering apparatus 1080. The pixel rendering apparatus 1080 then uses the fillmap representation to render the page to pixels. The output of the pixel rendering apparatus 1080 is a raster representation of the page made up of colour pixel data, which may be used by the printer engine 1095. The printer engine 1095 then prints the raster representation of the page onto print media such as paper.

The controlling program 1040 will now be described with reference to Fig. 11. The controlling program 1040 is composed of an object decomposition driver 1120, a deferred text driver 1190, a primitives processor 1130 and a job generator 1140. The controlling program 1040 receives the description of the page from the software application 1030. The object decomposition driver 1120 discriminates between page objects which are text objects, and page objects which are non-text objects. The object decomposition driver 1120 decomposes non-text objects into edges, levels and fills 1150. A fill may be a flat fill representing a single colour, a blend representing a linearly varying colour, a bitmap image or a tiled (i.e. repeated) image.

When a text object is encountered, the object decomposition driver 1120 sends the text object to the deferred text driver 1190. The deferred text driver 1190 receives the text object and generates a text placeholder, a text mask, and a text fill. The text placeholder, text mask, and text fill are used to represent the text object. The deferred text driver 1190 stores the text mask in the text mask store 1195. The deferred text driver 1190 then sends the text placeholder and the text fill to the object decomposition driver 1120. The object decomposition driver 1120 then decomposes the text placeholder and text fill into edges, levels and fills 1150. The method used by the controlling program to process text objects will be described in detail with reference to Fig. 12.

Within the controlling program, the primitives processor 1130 then further processes the edges, levels and fills 1150, to generate a fillmap 1160 and a table of fill compositing sequences 1170. After the fillmap 1160 and the table of fill compositing sequences 1170 have been generated, the job generator 1140 generates a spool job 1180 consisting of the fillmap 1160, the table of fill compositing sequences 1170, the object fills and the text masks 1195. The controlling program 1040 sends the spool job 1180 to the pixel rendering apparatus 1080 for rendering to pixels.

The method used by the controlling program 1040 to process text objects will now be described with reference to Fig. 12. The process 1200 starts and proceeds to step 1210, where it is determined whether there are remaining page objects to be processed.
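Before the step-by-step description that follows, the overall flow of Fig. 12 can be summarised in the following sketch. The types and helper names are invented for illustration; in particular, the suitability test simply mirrors the criteria mentioned below (a fill that is simple to render and no transparency) and is not the patent's implementation.

    #include <vector>

    // Invented stand-ins for the display list machinery of Fig. 12.
    struct PageObject {
        bool isText = false;
        bool hasSimpleFill = true;   // stand-in for "fill is simple to render", e.g. a flat fill
        bool isTransparent = false;
    };

    struct DisplayListEntry {
        const PageObject* object;
        bool isTextPlaceholder;      // true when the entry replaces a text object (steps 1250-1270)
    };

    // Step 1240: a text object qualifies for deferred text processing when its
    // fill is simple to render and the object is not transparent.
    static bool suitableForDeferredText(const PageObject& obj) {
        return obj.isText && obj.hasSimpleFill && !obj.isTransparent;
    }

    // Sketch of process 1200: build a display list in which suitable text objects
    // are replaced by text placeholders; other objects are added unchanged
    // (step 1280).  Generation of the text mask, text fill and text level
    // (steps 1250 to 1265) is omitted here and sketched separately further on.
    std::vector<DisplayListEntry> buildDisplayList(const std::vector<PageObject>& pageObjects) {
        std::vector<DisplayListEntry> displayList;
        for (const PageObject& obj : pageObjects) {        // steps 1210 and 1220
            if (suitableForDeferredText(obj)) {            // steps 1230 and 1240
                displayList.push_back({&obj, true});       // placeholder stands in for the text object
            } else {
                displayList.push_back({&obj, false});
            }
        }
        return displayList;   // later converted to a fillmap and spooled (steps 1290 and 1295)
    }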
If it is determined that there are page objects remaining to be processed, then processing proceeds to step 1220 where the next unprocessed object is assigned to the variable OBJECT. Processing then proceeds to step 1230, where it is determined if the page object stored in the variable OBJECT is a text object. If it is determined that the page object is a text object, then processing proceeds to step 1240 where it is determined if the text object stored in the variable OBJECT is suitable for deferred text processing. Criteria used to determine whether a text object is suitable for deferred text processing may include whether or not the fill associated with the text object is simple to render to pixels, such as a flat fill, or is complex to render, such as a linear blend or image fill, or whether or not the text object is transparent.

If it is determined that the text object stored in the variable OBJECT is suitable for deferred text processing, then processing proceeds to step 1250, where a text mask, text fill and text placeholder are generated to represent the text object. The process of generating a text mask will be described later with reference to Fig. 13. The text fill is the fill that is associated with the text object stored in the variable OBJECT. The text fill may be a flat fill representing a single colour, a blend representing a linearly varying colour, a bitmap image or a tiled (i.e. repeated) image. The text placeholder is an area which encloses the text object stored in the variable OBJECT.

Processing then proceeds to step 1260, where a text level is generated. The text level contains a reference to the text fill and the text mask, and contains other attributes such as the opacity of the associated text object, a compositing operator which determines how to mix the colour data of this level with other overlapping levels, and the priority of the level. Next, processing proceeds to step 1265 where the text level is associated with the text placeholder. Processing then proceeds to step 1270, where the text placeholder is added to the display list. Processing then returns to step 1210.

Returning to step 1230, if it is determined that the object stored in the variable OBJECT is not a text object, then processing proceeds to step 1280, where the object is added to the display list. Processing then returns to step 1210. Returning to step 1240, if it is determined that the text object stored in the variable OBJECT is not suitable for deferred text processing, then processing proceeds to step 1280, where the text object is added to the display list. Processing then returns to step 1210.

If it is determined in step 1210 that there are no more input objects left to process, then processing proceeds to step 1290, where the display list is converted to a fillmap representation. Processing then proceeds to step 1295, where a spooled job is generated based on the fillmap representation. Processing terminates upon the completion of step 1295.

The process 1300 of generating a tiled text mask to represent a text object will now be described with reference to Fig. 13. The process 1300 starts and proceeds to step 1305 where the fillmap tiles which intersect with the bounding box of the text object are determined. The extents of the text mask will be determined by the bounding box of the text object, grid-fitted to fillmap tile boundaries.
Processing then proceeds to step 1310 where an empty list of glyph entries is created for each fillmap tile with which the text object intersects, as determined in step 1305. Next, processing proceeds to step 1315 where the first glyph in the text object is assigned to the variable GLYPH. Processing then proceeds to step 1320 where the fillmap tiles which intersect with the glyph stored in the variable GLYPH are determined. Next, processing proceeds to step 1325 where the first tile which intersects with GLYPH is assigned to the variable TILE. Processing then proceeds to step 1330 where a glyph entry is created and assigned to the variable GLYPH_ENTRY. Next, processing proceeds to step 1335 where GLYPH_ENTRY.OFFSET is set to the offset of GLYPH with respect to the position of the fillmap tile stored in the variable TILE. The offset of a glyph with respect to the position of a fillmap tile can be defined as the position of the top-left hand corner of the bounding area of the glyph relative to the top-left hand corner of the tile.

Next, processing proceeds to step 1340 where GLYPH_ENTRY.BITMAP is set to the address of the 1 bit per pixel bitmap representation of the glyph stored in the variable GLYPH. Processing then proceeds to step 1345 where the glyph entry stored in the variable GLYPH_ENTRY is added to the list of glyph entries that corresponds to TILE.

Process 1300 then proceeds to step 1350 where it is determined if there are any remaining fillmap tiles which intersect with the glyph stored in the variable GLYPH. If it is determined that there are remaining fillmap tiles which intersect with the glyph, then processing proceeds to step 1360 where the next tile which intersects with the glyph stored in the variable GLYPH is assigned to the variable TILE. Processing then returns to step 1330.

Returning to step 1350, if it is determined that there are no remaining fillmap tiles which intersect with the glyph stored in the variable GLYPH, then processing proceeds to step 1355 where it is determined whether there are more glyphs in the text object which need to be processed. If it is determined that there are more glyphs in the text object which need to be processed, then processing proceeds to step 1365 where the next glyph in the text object is assigned to the variable GLYPH. Processing then returns to step 1320.

Returning to step 1355, if it is determined that there are no more glyphs in the text object remaining to be processed, then processing proceeds to step 1370 where the lists of glyph entries for each of the tiles which intersect with the text object are aggregated to form a tiled text mask. Process 1300 terminates upon completing step 1370.

The process 1400 of rendering a tiled fillmap representation which contains at least one region containing a level which makes reference to text mask data will now be described with reference to Fig. 14. The process 1400 starts and proceeds to step 1405 where the first tile in the tiled fillmap representation is stored in the variable TILE. Processing then proceeds to step 1410 where the first region in the tile stored in the variable TILE is assigned to the variable REGION. Next, processing proceeds to step 1415 where the fill compositing sequence which will be used to determine the colour data for each pixel in the region, described by the variable REGION, is retrieved or otherwise accessed.
Processing then proceeds to step 1420 where the bottom most level in the fill compositing sequence is assigned to the variable LEVEL. Next, processing proceeds to step 1425 where each pixel within the region described by the variable REGION is modified based on the fill, text mask and compositing operator contained in the level stored in the variable LEVEL. Step 1425 will later be described in detail with reference to Fig. 15.

Next, processing proceeds to step 1430 where it is determined if there are more levels in the fill compositing sequence remaining to be processed. If it is determined in step 1430 that there are more levels in the fill compositing sequence to be processed, then processing proceeds to step 1445 where the level with the next highest priority within the fill compositing sequence for this region is assigned to the variable LEVEL. Processing then returns to step 1425.

Returning to step 1430, if it is determined that there are no more levels in the fill compositing sequence remaining to be processed, then processing proceeds to step 1435 where it is determined if there are more regions within the tile stored in the variable TILE that need to be processed. If it is determined in step 1435 that there are more regions to be processed, processing proceeds to step 1455 where the next unprocessed region within the current tile is assigned to the variable REGION. Processing then returns to step 1415.

Returning to step 1435, if it is determined that there are no more regions that need to be processed, then processing proceeds to step 1440 where it is determined if there are more fillmap tiles to be processed. If it is determined that there are more fillmap tiles to be processed, then processing proceeds to step 1460 where the next tile to be processed is assigned to the variable TILE. Processing then returns to step 1410. Returning to step 1440, if it is determined that all the tiles within the fillmap have been processed, the process 1400 terminates.

Step 1425 in Fig. 14 will now be described in detail with reference to Fig. 15. Step 1425 starts and proceeds to sub-step 1510 where it is determined if the level stored in the variable LEVEL contains a reference to a text mask. If it is determined that the level stored in the variable LEVEL contains a reference to a text mask, then processing proceeds to sub-step 1520 where a bit mask is derived from the text mask. The bit mask has the same dimensions as a fillmap tile and is derived from the text mask tile, within the referenced text mask, which corresponds to the current fillmap tile. The bit mask is generated by allocating memory for each bit in the bit mask. Each bit in the mask is then reset. The bitmap glyphs referenced within the text mask tile are then offset and merged into the bit mask by applying the logical OR operator to each bit in the mask, and the corresponding bit in the bitmap glyph, within the region of intersection between the tile and the glyph.

The bit mask 900 derived from the top-left hand tile 810 of the text mask 800 illustrated in Fig. 8 is shown in Fig. 9. The shaded entries 910 within the bit mask 900 correspond to pixels within the fillmap tile which lie inside the associated text object. The unshaded entries 920 within the bit mask 900 correspond to pixels within the fillmap tile which lie outside of the associated text object.
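Sub-step 1520 can be illustrated with the following sketch, which clears a tile-sized bit mask and then ORs the referenced glyph bitmaps into it over the region where each glyph intersects the tile. It reuses the hypothetical BitmapGlyph, GlyphEntry and TextMaskTile types sketched earlier; the tile dimensions are passed in as parameters and the function name is invented.

    #include <algorithm>
    #include <vector>
    // Assumes the BitmapGlyph, GlyphEntry and TextMaskTile sketches shown earlier.

    // Derive the bit mask for one fillmap tile (sub-step 1520): every bit starts
    // cleared, then each referenced glyph is offset and merged with a logical OR
    // over the intersection of the glyph and the tile.
    std::vector<bool> deriveBitMask(const TextMaskTile& tile, int tileWidth, int tileHeight) {
        std::vector<bool> mask(static_cast<size_t>(tileWidth) * tileHeight, false);
        for (const GlyphEntry& entry : tile.glyphs) {
            const BitmapGlyph& g = *entry.glyph;
            // Clip the glyph's bounding box against the tile.
            const int x0 = std::max(0, entry.offsetX);
            const int y0 = std::max(0, entry.offsetY);
            const int x1 = std::min(tileWidth,  entry.offsetX + g.width);
            const int y1 = std::min(tileHeight, entry.offsetY + g.height);
            for (int y = y0; y < y1; ++y) {
                for (int x = x0; x < x1; ++x) {
                    const int gx = x - entry.offsetX;   // pixel position within the glyph
                    const int gy = y - entry.offsetY;
                    if (g.bits[static_cast<size_t>(gy) * g.width + gx]) {
                        mask[static_cast<size_t>(y) * tileWidth + x] = true;   // logical OR
                    }
                }
            }
        }
        return mask;
    }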
Next, processing proceeds to sub-step 1530 where the bit mask that was generated in sub-step 1520 is examined to determine which pixels within the region described by REGION lie inside the original text object. If a bit within the bit mask is set, then the corresponding pixel within the associated fillmap tile lies inside the text object. Processing then proceeds to sub-step 1540 where the fill and compositing information contained in the level stored in the variable LEVEL are used to update the pixels within the region described by the variable REGION which lie within the text object. The pixels that do not lie within the text object are left unmodified. Step 1425 returns upon completion of sub-step 1540.

Returning to sub-step 1510, if it is determined that the level does not contain a reference to a text mask, then processing proceeds to sub-step 1550 where the fill and compositing information contained in the level stored in the variable LEVEL are used to update all of the pixels within the region described by the variable REGION. Step 1425 returns upon completion of sub-step 1550.

In an exemplary embodiment of the present invention individual text objects are replaced by a text placeholder. The outline of the text placeholder encloses the text object from which the text placeholder was derived. The dimensions of the text placeholder are equal to the dimensions of the bounding area of the associated text object. Each text placeholder is given the priority of the text object it replaces. Each text placeholder is associated with a text level. A text level references the fill of the text object and the text mask, and contains other attributes such as the opacity of the associated text object, a compositing operator which determines how to mix the colour data of this level with other overlapping levels, and the priority of the level.

In another embodiment of the present invention, multiple text objects are replaced by a single text placeholder. The text objects must lie within a contiguous priority (or z) range. The text objects must also use the same fill and compositing operators. In this instance, the text placeholder is a rectangular outline which encloses all of the text objects, although any other shape of the placeholder is possible. The dimensions of the text placeholder are equal to the dimensions of a bounding area which encloses the bounding areas of all of the associated text objects. Each of the text placeholders is associated with a corresponding text level. The text levels reference the fill of the replaced text objects, and a text mask, and contain other attributes such as the opacity of the level, a compositing operator which determines how to mix the colour data of this level with other overlapping levels, and the priority of the level. The text mask identifies which pixels within the text placeholder belong to any of the associated text objects.

The foregoing describes only some embodiments of the present invention, and modifications and/or changes can be made thereto without departing from the scope and spirit of the invention, the embodiments being illustrative and not restrictive.

(Australia Only) In the context of this specification, the word "comprising" means "including principally but not necessarily solely" or "having" or "including", and not "consisting only of". Variations of the word "comprising", such as "comprise" and "comprises", have correspondingly varied meanings.
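To round out the description of Fig. 15, the following sketch shows how one level might be applied to one region of a fillmap tile: through the derived bit mask when the level references a text mask (sub-steps 1520 to 1540), or directly otherwise (sub-step 1550). The Pixel and RenderLevel types, the list of region pixel positions and the flat-colour stand-in for a full fill and compositing evaluation are all assumptions made for illustration; the actual pixel rendering apparatus 1080 is described as preferably an ASIC.

    #include <cstdint>
    #include <utility>
    #include <vector>
    // Assumes the TextMaskTile sketch and deriveBitMask() shown earlier.

    struct Pixel { uint8_t r, g, b; };

    // Hypothetical per-level information, mirroring the text level described with
    // reference to Fig. 6: an optional text mask tile plus fill/compositing data.
    struct RenderLevel {
        const TextMaskTile* maskTile;   // null when the level has no text mask
        Pixel fillColour;               // stand-in for a full fill and compositing evaluation
    };

    // Apply one level to the pixels of one region within a fillmap tile (step 1425).
    // 'regionPixels' lists the (x, y) positions, within the tile, of the pixels
    // belonging to the region.
    void renderLevelForRegion(const RenderLevel& level,
                              const std::vector<std::pair<int, int>>& regionPixels,
                              std::vector<Pixel>& tilePixels,
                              int tileWidth, int tileHeight) {
        if (level.maskTile != nullptr) {
            // Sub-steps 1520 to 1540: composite only the pixels that the bit mask
            // marks as lying inside the original text object.
            const std::vector<bool> mask = deriveBitMask(*level.maskTile, tileWidth, tileHeight);
            for (const auto& [x, y] : regionPixels) {
                if (mask[static_cast<size_t>(y) * tileWidth + x]) {
                    tilePixels[static_cast<size_t>(y) * tileWidth + x] = level.fillColour;
                }
            }
        } else {
            // Sub-step 1550: no text mask, so every pixel in the region is updated.
            for (const auto& [x, y] : regionPixels) {
                tilePixels[static_cast<size_t>(y) * tileWidth + x] = level.fillColour;
            }
        }
    }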

Claims (2)

  1. 2. A method of rendering a display list comprising at least one text object, said method being substantially as described herein with reference to any one of the embodiments as that embodiment is illustrated in the drawings.
  2. 3. Apparatus adapted to perform the method of claim 1 or 2.

Dated this 31st day of December 2008
CANON KABUSHIKI KAISHA
Patent Attorneys for the Applicant
Spruson & Ferguson
AU2008264239A 2008-12-31 2008-12-31 Text processing in a region based printing system Abandoned AU2008264239A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2008264239A AU2008264239A1 (en) 2008-12-31 2008-12-31 Text processing in a region based printing system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
AU2008264239A AU2008264239A1 (en) 2008-12-31 2008-12-31 Text processing in a region based printing system

Publications (1)

Publication Number Publication Date
AU2008264239A1 true AU2008264239A1 (en) 2010-07-15

Family

ID=42332410

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2008264239A Abandoned AU2008264239A1 (en) 2008-12-31 2008-12-31 Text processing in a region based printing system

Country Status (1)

Country Link
AU (1) AU2008264239A1 (en)

Similar Documents

Publication Publication Date Title
US5666543A (en) Method of trapping graphical objects in a desktop publishing program
AU2003203331B2 (en) Mixed raster content files
US6268859B1 (en) Method and system for rendering overlapping opaque graphical objects in graphic imaging systems
JP4995057B2 (en) Drawing apparatus, printing apparatus, drawing method, and program
US7978196B2 (en) Efficient rendering of page descriptions
EP0786757A1 (en) Adjusting contrast in antialiasing
JPH08511638A (en) Anti-aliasing device and method for automatic high-speed alignment of horizontal and vertical edges to target grid
JP3142550B2 (en) Graphic processing unit
JP3845045B2 (en) Image processing apparatus, image processing method, image forming apparatus, printing apparatus, and host PC
US6738071B2 (en) Dynamically anti-aliased graphics
US20090091564A1 (en) System and method for rendering electronic documents having overlapping primitives
US8705118B2 (en) Threshold-based load balancing printing system
US20060077210A1 (en) Rasterizing stacked graphics objects from top to bottom
JP4143613B2 (en) Drawing method and drawing apparatus
US7215342B2 (en) System and method for detecting and converting a transparency simulation effect
US7046403B1 (en) Image edge color computation
JP6330790B2 (en) Print control system, print control apparatus, and program
AU2008264239A1 (en) Text processing in a region based printing system
JP5603295B2 (en) Rendering data in the correct Z order
US20060107199A1 (en) Image stitching methods and systems
JP2004122746A (en) System and method for optimizing performance of tone printer
JP6755644B2 (en) Character processing device, character processing method, character processing program
AU2007226809A1 (en) Efficient rendering of page descriptions containing grouped layers
JP2019192087A (en) Information processing device, program, and information processing method
US20060103673A1 (en) Vector path merging into gradient elements

Legal Events

Date Code Title Description
MK4 Application lapsed section 142(2)(d) - no continuation fee paid for the application