AU2008201723A1 - On-demand pixel-sequential edge tracking - Google Patents

On-demand pixel-sequential edge tracking

Info

Publication number
AU2008201723A1
Authority
AU
Australia
Prior art keywords
scanline
clip
objects
group
rendering
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
AU2008201723A
Inventor
Thomas Benjamin Sanjay Thomas
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc filed Critical Canon Inc
Priority to AU2008201723A
Publication of AU2008201723A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/40Filling a planar surface by adding surface attributes, e.g. colour or texture

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)

Description

S&F Ref: 836924

AUSTRALIA
PATENTS ACT 1990
COMPLETE SPECIFICATION FOR A STANDARD PATENT

Name and Address of Applicant: Canon Kabushiki Kaisha, of 30-2, Shimomaruko 3-chome, Ohta-ku, Tokyo, 146, Japan

Actual Inventor(s): Thomas Benjamin Sanjay Thomas

Address for Service: Spruson & Ferguson, St Martins Tower, Level 35, 31 Market Street, Sydney NSW 2000 (CCN 3710000177)

Invention Title: On-demand pixel-sequential edge tracking

The following statement is a full description of this invention, including the best method of performing it known to me/us:

ON-DEMAND PIXEL-SEQUENTIAL EDGE TRACKING

TECHNICAL FIELD

The present invention relates to graphical object rendering, and in particular to pixel-sequential rendering into a band- or frame-store buffer.

BACKGROUND

When a computer application provides data to a target device for printing and/or display, an intermediate description of the page is often given to device driver software in a page description language (PDL). Examples of page description languages include Canon's LIPS™ and HP's PCL™. The intermediate description of the page includes descriptions of the graphic objects to be rendered. This contrasts with some arrangements where raster image data is generated directly by the application and transmitted for printing or display on the target device.

Equivalently, the application may provide a set of descriptions of graphic objects via function calls to a graphics device interface (GDI) layer, such as the Microsoft Windows™ GDI. The driver for the associated target device receives the graphic object descriptions from the GDI layer. For each graphic object, the driver is responsible for generating a description of the graphic object in the page description language that is understood by the rendering system of the target device.

The rendering system of the target device typically contains a PDL interpreter that parses the graphic object descriptions and builds a display list (DL) of graphic object data. The display list may be implemented using any of a number of known data structures. The rendering system also contains a raster image processor (RIP) that processes the display list and renders the data to an output page image of pixel values comprising, for example, Cyan, Magenta, Yellow and blacK (CMYK) channels. Once in this colour model, the device displays or prints the page.

A graphic object may be a fill region, which contributes colour to the output image, or a clip region. Hereinafter, "graphic object", or simply "object", is taken to mean "fill region". Such a graphic object may be clipped by one or more clip regions. A clip region limits the graphic objects that it clips to the boundaries of the clip region. That is, portions of a graphic object outside the intersecting clip are not rendered.

One method of rendering these objects and clips is to use a pixel-sequential rendering algorithm. In pixel-sequential rendering systems, each pixel is generated in raster order along scanlines. All objects to be drawn are retained in a display list in an edge-based format. Edges are defined to comprise one or more line, arc or Bezier segments. Objects and clips are formed by such edges, which can be sorted and form the contents of the display list.
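The per-scanline processing of such an edge-based display list is described next. As a rough orientation only, the following C++ sketch shows one plausible shape for the edge records involved; it is not code from this specification, and all names and fields (Segment, Edge, startY, dxdy and so on) are assumptions made for illustration.

```cpp
#include <algorithm>
#include <vector>

// One straight segment of an edge; arc and Bezier segments are omitted for
// brevity (they would be flattened or tracked by their own equations).
struct Segment {
    double x0;    // x where the segment meets its first scanline
    int y0, y1;   // first scanline and one-past-last scanline of the segment
    double dxdy;  // change in x per scanline (inverse slope)
};

// An edge comprises one or more segments, ordered down the page.
struct Edge {
    std::vector<Segment> segments;
    int startY;   // scanline on which the edge starts
    double x;     // current crossing with the scanline being rendered
};

// The display list holds the edges of objects and clips, sorted so that
// edges can be activated as rendering reaches their starting scanline.
struct DisplayList {
    std::vector<Edge> edges;
    void sortByStartScanline() {
        std::sort(edges.begin(), edges.end(),
                  [](const Edge& a, const Edge& b) { return a.startY < b.startY; });
    }
};
```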
On each scanline, the edges of objects that intersect the current scanline, known as active edges, are held in increasing order of their points of intersection with the scanline. These points of intersection, or edge crossings, are considered in turn and activate or deactivate objects in the display list. Between each pair of edges considered, the colour data for each pixel which lies between the first edge and the second edge is generated based on which objects are active for that run of pixels. In preparation for the next scanline, the coordinate of intersection of each edge is updated in accordance with the properties of each edge, and the edges are sorted into increasing order of point of intersection with that scanline. Any newly active edges are also merged into the ordered list of active edges.

Graphics systems that use pixel-sequential rendering have advantages over object-sequential renderers, because there is no frame store or line store and no unnecessary over-painting. Objects requiring compositing are processed on a per-pixel basis using the original colour data of each object. Each pixel is converted to the output bit depth after any compositing, so the correct result is obtained regardless of the output bit depth.

However, pixel-sequential rendering suffers when there are large numbers of edges that must be tracked and maintained in sorted order for each scanline. As each edge is updated for a new scanline, the edge is re-inserted into the active edge list, usually by an insertion sort. For a complex page, which may comprise hundreds of thousands of edges, the time required to maintain the sorted list of active edges for each scanline is a large portion of the total time required to render the page.

Methods for "fast tracking" edges exist to speed up the tracking of edges to a designated starting scanline. "Fast tracking" involves skipping edges, or segments of edges, that finish before the starting scanline. Such methods use fast tracking only for the starting scanline of pixel-sequential rendering. The starting scanline is the first visible scanline when rendering a page or a render region (such as a memory context). The method calculates the exact X value of an edge at the starting scanline using the segment that intersects the starting scanline. The X value is measured along scanlines in an X-Y or X-Y-Z coordinate space, where scanlines run sequentially in the Y direction. Fast tracking avoids having to increment/decrement the X value on a scanline-by-scanline basis (i.e., incremental tracking) from the start of the edge through the area that is not visible. Once this initial fast tracking is done, all edges are tracked normally on a scanline-by-scanline basis until the edge finishes or the end of the page is reached.

Consequently, processor time and use of cache and memory are wasted by processing edges of clips and objects that produce no visible output.

SUMMARY

In accordance with a first aspect of the invention, there is provided a method of pixel-sequentially rendering a display list to an output image. The image comprises a plurality of scanlines. From a display list comprising at least two graphical objects and at least one clip, a first scanline of a visible region, within which at least one object clipped by a clip in the display list can contribute colour to the image, is determined. Each object and each clip has at least one edge.
For a current scanline intersecting one of the objects and the clip, the edges of the objects intersecting the current scanline being active edges, and an Active Edge List (AEL) holding the active edges in increasing order of points of intersection with the current scanline, the following is performed: determining if the current scanline is the first scanline of the visible region; if the current scanline is determined to be the first scanline of the visible region, performing the steps of: determining a crossing point of an edge of one of the object and the clip within the current scanline; pixel-sequentially rendering the AEL to the current scanline of the image using the determined crossing point; and merging the edge of the object into the AEL comprising at least one other edge of an object, the AEL being ordered by crossing position with the current scanline; and if the current scanline is determined not to be the first scanline of the visible region, performing the step of: pixel-sequentially rendering another object to the current scanline of the image using a predetermined crossing position of an edge of the other object within the current scanline.

The method may further comprise: setting an expiry scanline for the edges of the object to the last scanline on which the one object is visible; and updating the edges to a following scanline if the current scanline is not the expiry scanline. The object may be a group of objects satisfying a grouping criterion. The edges may each comprise at least one line, arc or Bezier segment. The crossing points of edges may activate or deactivate objects in the display list.

The method may further comprise the steps of creating an empty group of objects, and adding at least one object to the group of objects dependent upon at least one clip associated with the object. The method may further comprise the step of adding the group of objects to the display list in sorted order dependent upon a scanline on which the group of objects is visible. The method may further comprise the step of adding at least one clip for the object to a list of clips for the group of objects. The step of adding at least one object to the group of objects may comprise: adding information about the object to a data structure for the group of objects; linking data for at least one clip for the object to a list of clips for the group of objects; setting bounds for the group of objects dependent upon bounds for the object; and setting bounds of clips for the group of objects dependent upon bounds for the clip for the object.

The visible area may be determined based on the at least two graphical objects and the at least one clip. The visible area may be determined based on bounds of the at least two graphical objects and bounds of the at least one clip. The bounds of the at least two graphical objects and the bounds of the at least one clip may each be defined by a bounding box.

The method may further comprise splitting the display list into two or more buckets corresponding to two or more partitions of a page in a Y direction, each bucket capable of containing at least one object and/or clip sorted in Z order. The rendering steps may be performed using the buckets to render the partitions of the page dependent upon an object being visible within the partition, as sketched below.
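By way of illustration only, the bucket arrangement just described might be organised as in the following C++ sketch. The names (Group, BucketedDisplayList, visibleY, bucketHeight) are assumptions made for this sketch, not identifiers defined by the specification.

```cpp
#include <vector>

struct Group {
    int visibleY = 0;     // first scanline on which the group is visible
    int visibleEndY = 0;  // last scanline on which the group is visible
    // ... edges, levels, fills and the clips affecting the group
};

// The page's Y range is split into fixed-height partitions; each group is
// appended (preserving Z order) to the bucket containing the first scanline
// on which the group is visible.
struct BucketedDisplayList {
    int bucketHeight;
    std::vector<std::vector<Group>> buckets;

    BucketedDisplayList(int pageHeight, int height)
        : bucketHeight(height),
          buckets((pageHeight + height - 1) / height) {}

    void add(const Group& g) {
        buckets[g.visibleY / bucketHeight].push_back(g);
    }
};
```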
In accordance with yet another aspect of the invention, there is provided a system for pixel-sequentially rendering a display list to an output image, the system comprising a processor and a memory and implementing the method according to any one of the foregoing aspects. The rendering apparatus may be implemented as a hardware device or as a software module running on the processor. The system may be implemented within a printer. The system may comprise a processor, memory, an engine, a rendering apparatus, and a rendering store coupled to each other.

In accordance with still another aspect of the invention, there is provided a computer program product comprising a computer readable medium having recorded therein a computer program for pixel-sequentially rendering a display list to an output image, the computer program product comprising computer program code means for implementing the method according to any one of the foregoing aspects.

BRIEF DESCRIPTION OF THE DRAWINGS

Embodiments of the invention are described hereinafter with reference to the drawings, in which:

Fig. 1 is a block diagram schematically illustrating a system for rendering and presenting computer graphic object images, on which the embodiments of the invention may be practised;

Fig. 2 is a block diagram showing a rendering pipeline, with which the embodiments of the invention may be implemented;

Fig. 3 is a flowchart illustrating a method of constructing a renderable display list according to one embodiment of the invention;

Fig. 4 is a flowchart illustrating a method of processing object clips, as used in the method of Fig. 3;

Fig. 5 is a flowchart illustrating a method of adding the current object information to a group, as used in the method of Fig. 3;

Fig. 6 is a flowchart illustrating a method of rendering a display list, as constructed in Fig. 3, according to one embodiment of the invention;

Fig. 7 is a flowchart illustrating a method of rendering each scanline and processing edges for the next scanline, as used in the method of Fig. 6;

Fig. 8 is a flowchart illustrating a method of loading and fast tracking edges in the Scanline Edge List (SEL), as used in the method of Fig. 6;

Fig. 9 is a flowchart illustrating a method of loading and fast tracking edges of new groups, as used in the method of Fig. 6;

Fig. 10 is a diagram illustrating the overall working of the on-demand pixel-sequential edge-tracking algorithm for an exemplary page;

Fig. 11 is a diagram illustrating the overall working of the bucket on-demand pixel-sequential edge-tracking algorithm for the exemplary page of Fig. 10; and

Fig. 12 is a block diagram of a conventional personal computer, with which the embodiments of the invention may be practised.

DETAILED DESCRIPTION

Methods, systems and computer program products are disclosed for pixel-sequentially rendering a display list to an output image. The display list comprises at least two graphical objects and at least one clip, where each graphical object and each clip has one or more edges. The output image comprises a number of scanlines. In the following description, numerous specific details are set forth. However, from this disclosure, it will be apparent to those skilled in the art that modifications and/or substitutions may be made without departing from the scope and spirit of the invention. In other circumstances, specific details may be omitted so as not to obscure the invention.
The embodiments of the invention seek to reduce the tracking of edges of non-visible objects when pixel-sequentially rendering a display list. In a sense, edge tracking is performed on demand when an object is visible, but not otherwise. In this manner, processor time and use of cache and memory are not wasted by processing edges of clips and objects that produce no visible output, and in particular objects that are clipped out. The embodiments of the invention reduce the number of edges that must be processed in order to pixel-sequentially render the output image from the display list into a band or frame store buffer.

The embodiments of the invention pixel-sequentially render a display list to an output image comprising scanlines. The edges of an object, and the edges of a clip clipping the object, are fast tracked to a current scanline on which the object first becomes visible. The object or the clip intersects at least a previous scanline. The object is rendered, together with at least one other object visible on the current scanline, on said current scanline.

Where reference is made in any one or more of the accompanying drawings to steps and/or features which have the same reference numerals, those steps and/or features have for the purposes of this description the same function(s) or operation(s), unless the contrary intention appears.

The embodiments of the invention described hereinafter with reference to Figs. 1-11 may be implemented using a computer system 1200, such as that shown in Fig. 12, in which the processes of Figs. 3 to 9 may be implemented as software, such as one or more application programs executable within the computer system 1200. In particular, the steps of the methods in Figs. 3 to 9 are effected by instructions in the software that are carried out within the computer system 1200. The instructions may be formed as one or more computer program code modules, each for performing one or more particular tasks. The software may also be divided into two separate parts, in which a first part and its corresponding code modules perform the methods for rendering a display list to an output image, and a second part and its corresponding code modules manage a user interface between the first part and the user. The software may be stored in a computer readable medium, including the storage devices described hereinafter, for example. The software is loaded into the computer system 1200 from the computer readable medium and then executed by the computer system 1200. A computer readable medium having such software or computer program recorded on it is a computer program product. The use of the computer program product in the computer system 1200 preferably effects an advantageous apparatus for pixel-sequentially rendering a display list to an output image.

As shown in Fig. 12, the computer system 1200 is formed by a computer module 1201, input devices such as a keyboard 1202 and a mouse pointer device 1203, and output devices including a printer 1215, a display device 1214 and loudspeakers 1217. An external Modulator-Demodulator (Modem) transceiver device 1216 may be used by the computer module 1201 for communicating to and from a communications network 1220 via a connection 1221. The network 1220 may be a wide-area network (WAN), such as the Internet or a private WAN. Where the connection 1221 is a telephone line, the modem 1216 may be a traditional "dial-up" modem.
Alternatively, where the connection 1221 is a high-capacity (e.g., cable) connection, the modem 1216 may be a broadband modem. A wireless modem may also be used for wireless connection to the network 1220.

The computer module 1201 typically includes at least one processor unit 1205, and a memory unit 1206, for example formed from semiconductor random access memory (RAM) and read only memory (ROM). The module 1201 also includes a number of input/output (I/O) interfaces including an audio-video interface 1207 that couples to the video display 1214 and loudspeakers 1217, an I/O interface 1213 for the keyboard 1202 and mouse 1203 and optionally a joystick (not illustrated), and an interface 1208 for the external modem 1216 and printer 1215. In some implementations, the modem 1216 may be incorporated within the computer module 1201, for example within the interface 1208. The computer module 1201 also has a local network interface 1211 which, via a connection 1223, permits coupling of the computer system 1200 to a local computer network 1222, known as a Local Area Network (LAN). As also illustrated, the local network 1222 may couple to the wide network 1220 via a connection 1224, which would typically include a so-called "firewall" device or similar functionality. The interface 1211 may be formed by an Ethernet™ circuit card, a wireless Bluetooth™ or an IEEE 802.11 wireless arrangement.

The interfaces 1208 and 1213 may afford both serial and parallel connectivity, the former typically being implemented according to the Universal Serial Bus (USB) standards and having corresponding USB connectors (not illustrated). A scanner or photocopier (not shown in Fig. 12) may be coupled to the interfaces 1208, 1213. USB and Firewire are common interfaces used for connection to such a scanner, as are other serial and parallel interfaces. Storage devices 1209 are provided and typically include a hard disk drive (HDD) 1210. Other devices such as a floppy disk drive and a magnetic tape drive (not illustrated) may also be used. An optical disk drive 1212 is typically provided to act as a non-volatile source of data. Portable memory devices, such as optical disks (e.g., CD-ROM, DVD), USB-RAM and floppy disks, may then be used as appropriate sources of data to the system 1200.

The components 1205 to 1213 of the computer module 1201 typically communicate via an interconnected bus 1204 and in a manner which results in a conventional mode of operation of the computer system 1200 known to those skilled in the art. Examples of computers on which the described arrangements can be practised include IBM-PCs and compatibles, Sun™ Sparcstations™, Apple Mac™ or like computer systems evolved therefrom.

Typically, the application programs discussed hereinbefore are resident on the hard disk drive 1210 and are read and controlled in execution by the processor 1205. Intermediate storage of such programs and any data fetched from the networks 1220 and 1222 may be accomplished using the semiconductor memory 1206, possibly in concert with the hard disk drive 1210. In some instances, the application programs may be supplied to the user encoded on one or more CD-ROMs and read via the corresponding drive 1212, or alternatively may be read by the user from the networks 1220 or 1222. Still further, the software can also be loaded into the computer system 1200 from other computer readable media.
Computer readable media refers to any storage medium that participates in providing instructions and/or data to the computer system 1200 for execution and/or processing. Examples of such media include floppy disks, magnetic tape, CD-ROM, a hard disk drive, a ROM or integrated circuit, a magneto-optical disk, or a computer readable card such as a PCMCIA card and the like, whether or not such devices are internal or external to the computer module 1201. Examples of computer readable transmission media that may also participate in the provision of instructions and/or data include radio or infra-red transmission channels, as well as a network connection to another computer or networked device, and the Internet or Intranets including e-mail transmissions and information recorded on Websites and the like.

The second part of the application programs and the corresponding code modules mentioned above may be executed to implement one or more graphical user interfaces (GUIs) to be rendered or otherwise represented upon the display 1214. Through manipulation of the keyboard 1202 and the mouse 1203, a user of the computer system 1200 and the application may manipulate the interface to provide controlling commands and/or input to the applications associated with the GUI(s).

The methods of Figs. 3 to 9 may alternatively be implemented in dedicated hardware, such as one or more integrated circuits, performing the functions or sub-functions of pixel-sequentially rendering a display list to an output image. Such dedicated hardware may include graphic processors, digital signal processors, or one or more microprocessors and associated memories.

For ease of presentation, the description of the embodiments of the invention is organised as follows:

I. On-Demand Pixel-Sequential Edge Tracking (OD-PSET)
  (i) Working of OD-PSET Algorithm for Exemplary Page
  (ii) Rendering System
  (iii) Rendering Pipeline
  (iv) Construction of Display List (DL)
    (a) Processing Object Clips
    (b) Adding Object Information to Current Group
  (v) Rendering a Display List
    (a) Loading of Existing Edges
    (b) Loading of New Edges
    (c) Rendering 'nlines' Scanlines into Output Image
II. Bucket On-Demand Pixel-Sequential Edge Tracking (BOD-PSET)
  (i) Changes to OD-PSET
  (ii) Working of BOD-PSET Algorithm for Exemplary Page

In the following description, objects to be drawn are retained in a display list in an edge-based format. Edges are defined to comprise one or more line, arc or Bezier segments. Objects and clips are formed by such edges, which can be sorted and form the contents of the display list. Objects and clips can have any shape. On each scanline, the edges of objects that intersect the current scanline, known as active edges, are held in increasing order of their points of intersection with the scanline. These points of intersection, or edge crossings, are considered in turn and activate or deactivate objects in the display list. Between each pair of edges considered, the colour data for each pixel that lies between the first edge and the second edge is generated based on which objects are active for that run of pixels. In preparation for the next scanline, the coordinate of intersection of each edge is updated in accordance with the properties of each edge, and the edges are sorted into increasing order of point of intersection with that scanline. Any newly active edges are also merged into the ordered list of active edges.

A visible region or area (also "visible bounds" hereinafter) is the intersection of one or more clips and one or more objects.

In the following description, reference is made to objects and groups of objects. The method is equally applicable to both. Objects may be grouped dependent upon object properties, as described hereinafter with reference to Fig. 3. Grouping reduces overheads in the data structures used. For example, grouping reduces the number of object boundaries that must be remembered; instead, only the group boundary must be remembered. As used hereinafter, a group is based on the same clips. Also, the following description describes implementation of the methods using bounding boxes around objects and clips. This allows the visible region to be calculated in a more efficient manner, since only bounding box calculations are required (the result may be thought of as an approximate visible region or bounds), which improves tracking, as sketched below. However, it will be apparent in the light of this disclosure to those skilled in the art that the embodiments can be applied where the visible region is calculated without bounding boxes around objects and clips (i.e., the true visible region).
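The following C++ fragment sketches this bounding-box form of the visible bounds: the union of the object bounding boxes, intersected with each clip bounding box in turn. It is an illustration under assumed names (Box, unionOf, intersect), not code from the specification.

```cpp
#include <algorithm>
#include <vector>

struct Box { int top, bottom, left, right; };  // inclusive bounds, Y downwards

// Union of the object bounding boxes: BB(O1) U BB(O2) U ...
Box unionOf(const std::vector<Box>& objectBoxes) {
    Box u = objectBoxes.front();
    for (const Box& b : objectBoxes) {
        u.top    = std::min(u.top, b.top);
        u.bottom = std::max(u.bottom, b.bottom);
        u.left   = std::min(u.left, b.left);
        u.right  = std::max(u.right, b.right);
    }
    return u;
}

// Intersection with a clip bounding box; applied once per clip-in.
Box intersect(Box a, const Box& clipBox) {
    a.top    = std::max(a.top, clipBox.top);
    a.bottom = std::min(a.bottom, clipBox.bottom);
    a.left   = std::max(a.left, clipBox.left);
    a.right  = std::min(a.right, clipBox.right);
    return a;  // empty when top > bottom or left > right: nothing is visible
}
```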
I. On-Demand Pixel-Sequential Edge Tracking (OD-PSET)

The embodiments of the invention are described hereinafter with reference first to an example of processing using the OD-PSET algorithm, to provide context before describing the processes in detail.

(i) Working of OD-PSET Algorithm for Exemplary Page

Fig. 10 illustrates the rendering of an example page 10.PG using the on-demand pixel-sequential edge tracking (OD-PSET) algorithm. The example page 10.PG contains a single object 10.O1 clipped by a clip 10.C1.

The object 10.O1 comprises edges 10.OE1, 10.OE2, 10.OE3 and 10.OE4 (indicated with thin solid lines) with a bounding box 10.OBB1 (indicated by dotted lines). In this example, the object 10.O1 has a rectangular shape and is rotated towards the right relative to the vertical (Y direction). In contrast, the bounding box 10.OBB1 encloses the object 10.O1 and is rectangular in shape and oriented vertically. The bounding box 10.OBB1 starts at scanline 10.SC2 and ends at scanline 10.SC4 (both scanlines indicated by thick solid arrows).

The clip 10.C1 comprises edges 10.CE1, 10.CE2, 10.CE3 and 10.CE4 (indicated with thick dashed lines) with a bounding box 10.CBB1 (again, indicated by dotted lines). In this example, the clip 10.C1 has an elongated, rectangular shape and is rotated towards the left relative to the vertical (Y direction). The bounding box 10.CBB1 encloses the clip 10.C1 and is rectangular in shape and oriented vertically. As shown in Fig. 10, the clip bounding box 10.CBB1 is displaced toward the upper left relative to the object bounding box 10.OBB1, but there is a region of overlap or intersection. Within the intersection of the bounding boxes 10.OBB1 and 10.CBB1, the object 10.O1 and the clip 10.C1 have an area of intersection indicated by thin diagonal line hatching. The clip bounding box 10.CBB1 starts at scanline 10.SC1 and ends at scanline 10.SC3.

The visible bounds are calculated from the intersection of bounding boxes 10.CBB1 and 10.OBB1, which has a top scanline of 10.SC2 and a bottom scanline of 10.SC3.

Described hereinafter is the sequence of steps involved in rendering the page 10.PG:
• The display list created for the page 10.PG using method 300 of Fig. 3 comprises one group, which has a single clip 10.C1 affecting the group. The group's VisibleY value is scanline 10.SC2 (CurY = 0).

• YRemain is set to the last scanline on page 10.PG - CurY (in Fig. 10, the last scanline is the bottom line in the rectangle representing the page 10.PG), and YGrpRemain is set to 10.SC2 - CurY.

• Up to scanline 10.SC2, no edges are loaded and white is rendered onto the output image (210 of Fig. 2), since scanline 10.SC2 is the first scanline of the visible bounds.

• At CurY = 10.SC2, the object edges 10.OE1 and 10.OE2 are loaded into the Active Edge List (AEL). The edges 10.CE1 and 10.CE2 of the clip 10.C1 are fast tracked to scanline 10.SC2. The clip edge 10.CE1 finishes before 10.SC2, and the clip edge 10.CE4 is loaded and fast tracked up to the scanline 10.SC2. Hence, all edges in the AEL are at scanline 10.SC2.

• The edges 10.OE1, 10.OE2, 10.CE4 and 10.CE2 have their expiry scanline set to the scanline 10.SC3 at the bottom of the visible bounds.

• YRemain is calculated to be the scanline 10.SC5 - CurY, where the object edge 10.OE3 starts. YGrpRemain is set to the last scanline of the page 10.PG - CurY, since there are no more groups left to load.

• Normal pixel-sequential rendering is performed until CurY = 10.SC5, where the object edge 10.OE2 finishes and the object edge 10.OE3 is loaded into the AEL. The expiry scanline for the object edge 10.OE3 is set to the scanline 10.SC3. (No output is produced in terms of rendering the object, because the object and clip do not intersect until this point in the visible bounds. Even though the bounding boxes intersect from 10.SC2 till 10.SC3, only after 10.SC5 do the actual edges of the clip 10.C1 and object 10.O1 intersect, producing output.)

• YRemain is calculated to be the scanline 10.SC7 - CurY, where the clip edge 10.CE3 starts. YGrpRemain remains unchanged.

• Normal pixel-sequential rendering is performed until CurY = 10.SC7, where the clip edge 10.CE2 finishes and the clip edge 10.CE3 is loaded into the AEL. The expiry scanline for the clip edge 10.CE3 is set to the scanline 10.SC3. In this manner, the intersection of the object 10.O1 and the clip 10.C1 in the visible bounds is rendered between scanlines 10.SC5 and 10.SC7, as indicated by diagonal line hatching. This intersection is bounded by the object edge 10.OE1 and the clip edge 10.CE2 in Fig. 10.

• YRemain is calculated to be the scanline 10.SC6 - CurY, where the object edge 10.OE4 starts. YGrpRemain remains unchanged.

• Normal pixel-sequential rendering is performed until CurY = 10.SC6, where the object edge 10.OE1 finishes and the object edge 10.OE4 is loaded into the AEL. The expiry scanline for the object edge 10.OE4 is set to the scanline 10.SC3. In this manner, the intersection of the object 10.O1 and the clip 10.C1 in the visible bounds is rendered between scanlines 10.SC7 and 10.SC6, as indicated by diagonal line hatching. This intersection is bounded by the object edge 10.OE1 and the clip edge 10.CE3 in Fig. 10.

• YRemain is set to the last scanline in 10.PG - CurY.
• Normal pixel-sequential rendering is performed until CurY = 10.SC3. When edges are being updated at the scanline 10.SC3, the edges 10.CE4, 10.CE3, 10.OE4 and 10.OE3 in the AEL have reached their expiry scanline and are removed from the AEL, leaving the AEL empty. In this manner, the intersection of the object 10.O1 and the clip 10.C1 in the visible bounds is rendered between scanlines 10.SC6 and 10.SC3, as indicated by diagonal line hatching. This intersection is bounded by the object edge 10.OE4 and the clip edge 10.CE3 in Fig. 10.

• Normal pixel-sequential rendering is done from scanline 10.SC3 until the end of the page 10.PG, with the absence of edges in the AEL producing white output.

As will be apparent from the foregoing example, the methods according to the embodiment of the invention eliminate unnecessary tracking of edges. This is unlike other existing pixel-sequential rendering methods, where edge tracking would have been required between the scanlines 10.SC1 and 10.SC2 and between the scanlines 10.SC3 and 10.SC4. For ease of illustration, only a single group was depicted in Fig. 10, and that group involved only one clip and one object. However, it will be apparent in the light of this disclosure to those skilled in the art that multiple groups could be practised and that each group could comprise several objects and clips.

(ii) Rendering System

Fig. 1 illustrates schematically a system 100 configured for rendering and presenting computer graphic object images, on which the embodiments of the present invention may be practised. For example, the system 100 may be implemented within a printer. The system 100 includes a processor 102, a system random access memory (RAM) 103, a system read-only memory (ROM) 106, a bus 109, an engine 110, a rendering apparatus 120, and a rendering store 130. The processor 102, the system RAM 103, the system ROM 106, the engine 110, and the rendering apparatus 120 are coupled to each other by the bus 109. Such a bus 109 can comprise data signals, addressing signals and control signals, well known to those skilled in the art. The rendering store 130 is coupled to the rendering apparatus 120 by a bus and can be implemented using semiconductor RAM. The bus between the rendering store 130 and the rendering apparatus 120 may be an internal bus (separate from the bus 109), or may be implemented by the bus 109. The processor 102 is associated with the system RAM 103, which comprises volatile system RAM 104 and non-volatile system RAM 105. The volatile system RAM 104 can be implemented using semiconductor RAM. The non-volatile system RAM 105 can be implemented using a hard disk drive (HDD) or a similar device. The system ROM 106 typically comprises semiconductor ROM 107 and may further comprise compact disk (CD-ROM) and/or DVD devices 108. The engine 110 may be a print engine. The print engine can render the pixel-based image, created by the rendering apparatus 120 in the render store 130, to a physical sheet of paper. The print engine can be used for displaying graphics and video as well.

The above-described components of the system 100 are operable in a normal operating mode of computer systems well known in the art. In Fig. 1, the rendering apparatus 120 is configured for the rendering of pixel-based images. The processor 102 supplies (via the bus 109) graphic object-based descriptions with instructions and data to the rendering apparatus 120, from which the pixel-based images are derived. The rendering apparatus 120 may utilise the system RAM 103 for the rendering of object descriptions, although the rendering apparatus 120 may have associated therewith a dedicated rendering store arrangement 130, typically formed of semiconductor RAM.
The rendering apparatus 120 may be implemented as a hardware device. Alternatively, the rendering apparatus 120 may be a software module running on the processor 102.

(iii) Rendering Pipeline

Fig. 2 illustrates a rendering pipeline 200, with which the embodiments of the invention may be implemented. The rendering pipeline comprises a page description language (PDL) interpreter module 201, a display list (DL) builder module 203, and a raster image processor (RIP) module 207, configured in that sequence. The PDL interpreter module 201 receives PDL instructions and data and converts graphic objects and clip regions described in the page description language to a form that can be understood by the display list builder module 203, to which the PDL interpreter module is coupled. The DL builder module 203 may also receive input in the form of function calls to the graphics device interface (GDI) layer. The DL builder module 203 constructs a display list 205 of graphic objects in a form that is optimised for rendering by the RIP module 207 and outputs the DL 205 to the RIP module 207. The RIP module 207 renders the display list to an output image 210 that is pixel based.

The PDL interpreter module 201 and DL builder module 203 may be implemented as driver software modules running on a host PC processor (not shown in Fig. 1) that is in communication with the system 100 via an appropriate interface. Such a host computer may be implemented, for example, using the general purpose computer shown in Fig. 12. Alternatively, the PDL interpreter module 201 and the DL builder module 203 may be implemented as embedded software modules running on the processor 102 within the system 100, e.g. a printer. The RIP module 207 is implemented on the rendering apparatus 120 in the system 100, which can be a hardware device or a software module as described hereinbefore.

(iv) Construction of Display List (DL)

The operation of the DL builder module 203 is described with reference to Fig. 3. A method 300 of constructing a renderable display list according to one embodiment of the invention is carried out by the display list builder module 203. The display list comprises a list of groups sorted in Y order, with each group linked to all the clips affecting the group.

In Fig. 3, the method 300 begins at step 310. In step 310, an object (the current object) is obtained. The object is received from the PDL interpreter module 201. In decision step 320, a check is made to determine whether all objects have finished being processed. If step 320 returns true (Yes), the method 300 ends at step 399. Otherwise, if decision step 320 returns false (No), the method 300 proceeds to step 330.

In step 330, the current object's clips are processed, as described hereinafter with reference to Fig. 4. In decision step 340, a check is made to determine whether a new group is needed for the current object. That is, a determination is made whether a new group needs to be opened for the current object according to several grouping criteria, which may include grouping only objects having the same set of clips, the proximity of various objects within a group, etc. These criteria keep the visible region of the group as small as possible. If decision step 340 returns true (Yes), indicating a new group needs to be created, processing continues at step 370. In step 370, a new group is created and is set to be the current group. The method 300 then continues at step 360. If decision step 340 returns false (No), indicating a new group is not needed for the current object, processing continues at step 360.

In step 360, information about the current object is added to the current group, as described hereinafter with reference to Fig. 5. In step 380, the current group is added to the display list in sorted order. The key used for group sorting is the first scanline (Y value) on which the group is visible, referred to as the group's VisibleY value (see Table 2). This VisibleY value is calculated as the first scanline of the result of intersecting the union of the bounding boxes (BB( )) of the objects (O1, O2, ...) within the group with the intersection of the bounding boxes of the clip-ins (C1, C2, ...) affecting the group, expressed as follows:

(BB(O1) ∪ BB(O2) ∪ ...) ∩ BB(C1) ∩ BB(C2) ∩ ...

The group's VisibleEndY value (see Table 2) is the last scanline of the result of the same intersection. The method 300 continues from step 380 to step 310 to await the next object from the PDL interpreter module 201.
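A minimal sketch of step 380 follows, under assumed names (GroupRec, addGroupSorted): once a group's VisibleY has been obtained from the intersection above (see the bounding-box sketch earlier), the group is inserted so that the display list stays ordered by first visible scanline.

```cpp
#include <list>

struct GroupRec {
    int visibleY;     // first scanline on which the group is visible
    int visibleEndY;  // last scanline on which the group is visible
    // ... Group.EdgeList, Group.Clips, Group.Bounds, Group.ClipBounds
};

// Step 380 (sketch): keep the display list sorted by VisibleY.
void addGroupSorted(std::list<GroupRec>& displayList, const GroupRec& group) {
    auto it = displayList.begin();
    while (it != displayList.end() && it->visibleY <= group.visibleY)
        ++it;
    displayList.insert(it, group);
}
```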
Table 1 lists the Clip data structure in the display list, which has the following fields:

TABLE 1

Clip.EdgeList: A linked list of Scanline nodes sorted in increasing Y order. Each Scanline node has the following elements:
• Scanline.Y: The scanline number.
• Scanline.Edges: Edges that start at Scanline.Y, sorted in X order.
• Scanline.Next: Next scanline in the list.

(a) Processing Object Clips

Fig. 4 illustrates a method 400 of processing one or more object clips, as used in step 330 of Fig. 3. The method 400 begins at step 405. In step 405, a data structure TempGroupData is initialised. The initialisation is performed by setting the fields of TempGroupData as follows:

• The parameter ClipBounds for the group is initialised to the page bounds. ClipBounds is a temporary variable used for calculating the intersection of the bounding boxes of the clip-ins affecting the current object (i.e., BB(C1) ∩ BB(C2) ∩ ...); and
• The linked list Clip is set to an empty list.

In step 410, the next clip for the current object is obtained. In decision step 420, a check is made to determine whether all clips for the current object have finished being processed (e.g., this might be indicated by a null value). If step 420 returns true (Yes), indicating no more clips affect the current object, the method ends in step 499. Otherwise, if step 420 returns false (No), processing continues at step 430. In step 430, data for the current clip is added to a current clip group. The clip data added in step 430 is all the data required for rendering the current clip, principally the edges, levels and fill rules of the clip, stored in Clip.EdgeList.

In step 440, TempGroupData.ClipBounds is updated to be the intersection of TempGroupData.ClipBounds with the clip-in bounds (i.e., the bounding box of the current clip-in). In step 450, the current clip is added to TempGroupData.Clip. The method 400 continues from step 450 to step 410 to obtain the next clip, if any, affecting the object.

Table 2 lists the Group data structure in the display list 205 of Fig. 2, which has the following fields:

TABLE 2

Group.EdgeList: A linked list of Scanline nodes sorted in increasing Y order. Each Scanline node has the following elements:
• Scanline.Y: The scanline number.
• Scanline.Edges: Edges, which start at Scanline.Y, sorted in X order.
• Scanline.Next: Next scanline in the list.
Group.Bounds: The union of the object bounding boxes within this group.

Group.ClipBounds: The intersection of the bounding boxes of the clip-ins which affect this group.

Group.Clips: List of all the clips which affect this group.

Group.VisibleY: First scanline (Y value) on which the group is visible.

Group.VisibleEndY: Last scanline (Y value) on which the group is visible.

(b) Adding Object Information to Current Group

Fig. 5 is a flowchart illustrating the method 500 of adding information about the current object to the current group, as used in step 360 of Fig. 3. The method 500 begins at step 510. In step 510, the object information is added to the current group. The object information added is that which is required by the RIP module 207, i.e. the edges, levels, and fill information of the object, stored in Group.EdgeList. In step 520, the current group is linked to the group clips in TempGroupData.Clip. That is, the clip list of the current group is set to the clip list stored in TempGroupData.Clip (see Fig. 4).

In step 530, the bounds (bounding box) of the current group are updated. The bounds are updated by calculating the union of the bounds of the current group with the bounds of the current object. In step 540, TempGroupData.ClipBounds is copied to the clip bounds of the current group. The visible bounds of the current group are the intersection of Group.ClipBounds and Group.Bounds. The method 500 ends in step 599.

(v) Rendering a Display List

Fig. 6 illustrates a method 600 of rendering a display list (using the OD-PSET algorithm), as created by the DL builder module 203 using the method 300 of Fig. 3. The display list comprises a list of object groups sorted in Y order (the key for sorting is the first visible scanline of the group), together with each group's corresponding clips. The method 600 is executed by the RIP module 207 of Fig. 2 to generate the output (pixel-based) image 210 of Fig. 2. The method 600 makes use of the variables and data structures listed in Table 3 (its overall control flow is sketched after the table):

TABLE 3

Data structures:

Active Edge List (AEL): Edges of clips and objects that have been loaded, are being tracked on the page, and may produce output (depending on the fill rules, levels and clips).

Scanline Edge List (SEL): A linked list of Scanline nodes sorted in increasing Y order.

Scanline: Data structure which contains the following:
• Scanline.Y: The scanline number.
• Scanline.Edges: Edges, which start at Scanline.Y, sorted in X order.
• Scanline.Next: Next scanline in the list.

Variables:

CurY: The current scanline Y value where the rendering algorithm has finished rendering in raster order.

YRemain: Number of scanlines after CurY before more edges need to be loaded from the SEL.

YGrpRemain: Number of scanlines after CurY before more scanline edge lists need to be added to the SEL and AEL from new visible groups down the page.
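The following C++ sketch shows one plausible shape for the control loop of Fig. 6 using the Table 3 variables. The helper functions stand in for steps 660, 680 and 690 and are assumptions for illustration; each loading helper is responsible for recalculating YRemain or YGrpRemain, as described in the steps below.

```cpp
#include <algorithm>

struct OdPsetRenderer {
    int curY = 0, yRemain = 0, yGrpRemain = 0;  // step 610 initialisation
    int lastScanline = 0;

    // Step 680: merge edges from the SEL into the AEL; recalculates yRemain
    // (see Fig. 8). Stubbed here.
    void loadExistingEdges() { /* ... */ }
    // Step 690: load and fast track edges of newly visible groups;
    // recalculates yGrpRemain and possibly yRemain (see Fig. 9). Stubbed.
    void loadNewGroupEdges() { /* ... */ }
    // Step 660: render 'nlines' scanlines pixel sequentially (see Fig. 7).
    void renderScanlines(int nlines) { curY += nlines; }

    void render() {
        while (curY <= lastScanline) {                 // step 620
            if (yRemain == 0)    loadExistingEdges();  // steps 630, 680
            if (yGrpRemain == 0) loadNewGroupEdges();  // steps 640, 690
            // Step 650: scanlines renderable without loading more edges.
            int nlines = std::min({yRemain, yGrpRemain,
                                   lastScanline - curY});
            renderScanlines(nlines);                   // step 660
            yRemain -= nlines;                         // step 670
            yGrpRemain -= nlines;
        }
    }
};
```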
The method 600 starts at step 610. In step 610, the renderer implemented by the RIP module 207 is initialised: the three variables CurY, YRemain and YGrpRemain are all initialised to zero (0). In decision step 620, a check is made to determine if the current scanline Y value (CurY) is greater than the last scanline (last_scanline) to be rendered. That is, step 620 checks if all scanlines have been rendered. If step 620 returns true (Yes), the method 600 ends in step 699. Otherwise, if step 620 returns false (No), processing continues at step 630.

In decision step 630, a check is made to determine if YRemain is zero (0), indicating more edges need to be loaded from the SEL. If decision step 630 returns true (Yes), indicating more edges need to be loaded, processing continues at step 680. In step 680, existing edges are loaded from the SEL. Step 680 is described hereinafter in more detail with reference to Fig. 8. Processing continues at step 640. If decision step 630 returns false (No), processing continues at step 640.

In decision step 640, a check is made to determine if YGrpRemain is zero (0), indicating more edges from new groups need to be loaded. If decision step 640 returns true (Yes), processing continues at step 690. In step 690, new edges are loaded from new groups, as described in more detail hereinafter with reference to Fig. 9. Processing continues at step 650. If decision step 640 returns false (No), processing continues at step 650.

In step 650, the number of scanlines (nlines) that can be rendered continuously, without the need to load more edges from the SEL or new groups, is calculated as the minimum value of YRemain, YGrpRemain and the difference between last_scanline and CurY. In step 660, 'nlines' scanlines are rendered into the output pixel-based image 210, as described in more detail hereinafter with reference to Fig. 7. In step 670, YRemain and YGrpRemain are each updated by subtracting the nlines that have been rendered in step 660. Processing continues at step 620 to check if all scanlines have been rendered.

(a) Loading of Existing Edges

Fig. 8 illustrates a method 800 of loading existing edges from the Scanline Edge List (SEL), as used in step 680 of Fig. 6. Method 800 starts in step 805. In step 805, Sel_head is set to the first element in the SEL. (Sel_head is a pointer for keeping track of the current SEL node being examined.) In decision step 810, a check is made to determine if Sel_head is valid (e.g., not null). If step 810 returns false (No), processing continues at step 870. In step 870, YRemain is recalculated (YRemain was initialised in step 610 and updated in step 670 of Fig. 6). If Sel_head is invalid, YRemain is calculated as the difference between the last scanline to be rendered and the current scanline CurY. Processing then terminates in step 899. If decision step 810 returns true (Yes), processing continues at step 815.

In decision step 815, a check is made to determine if Sel_head's Y value (Sel_head.Y) is greater than the current scanline Y value CurY. If step 815 returns true (Yes), processing continues at step 865. In step 865, Sel_head is reinserted as the head of the SEL. Processing continues at step 870, where YRemain is recalculated; in the case where processing comes from step 865, YRemain is calculated as the difference between Sel_head's Y value and CurY. (The calculation depends on what the head of the SEL is.) If decision step 815 returns false (No), indicating Sel_head's Y value is less than or equal to CurY, processing continues at step 820.

In step 820, Edge is set to the first edge in Sel_head, i.e. the head of Sel_head.Edges. In decision step 825, a check is made to determine if Edge is valid. If step 825 returns false (No), processing continues at step 860. In step 860, Sel_head.Next is inserted into the SEL. Sel_head.Next would not be inserted if Sel_head.Next loads at a scanline outside the Group's visible extent (determined from its VisibleY and VisibleEndY values). Processing continues at step 805, where Sel_head is again set to be the head of the SEL.

If step 825 returns true (Yes), indicating Edge is valid, processing continues at step 830. In decision step 830, a check is made to determine whether CurY is equal to zero (0). If step 830 returns true (Yes), processing continues at step 840. In step 840, Edge is loaded and fast tracked up to CurY. Fast tracking an edge involves determining which segment of the edge intersects the scanline CurY. Then, using the starting position of this segment and its linear equation, the coordinate of intersection of the segment with the scanline CurY is calculated. The intersection coordinate then becomes the position of the edge for the scanline CurY. Processing continues at step 845, described hereinafter. If step 830 returns false (No), processing continues at step 835. In step 835, Edge is simply loaded at CurY; fast tracking is not performed in this step.

In step 845, Edge is merge sorted into the AEL using Edge's X value as the key for sorting. In step 850, Edge's expiry is set to the Group's VisibleEndY value. At this expiry scanline, Edge is removed from the AEL even if the edge has not already been tracked to completion. In step 855, Edge is set to be the next edge (Edge.Next) in Sel_head. Processing continues at step 825.
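By way of illustration, the fast tracking performed in step 840 might look like the following C++ sketch, which finds the segment of an edge spanning scanline CurY and evaluates its line equation there in a single step. Seg, fastTrackX and the field names are assumptions, not identifiers from the specification.

```cpp
#include <vector>

// A straight segment spanning scanlines [y0, y1); dxdy is the inverse slope.
struct Seg { double x0; int y0, y1; double dxdy; };

// Step 840 (sketch): compute the edge's X crossing directly at curY instead
// of stepping scanline by scanline from the top of the edge.
double fastTrackX(const std::vector<Seg>& edgeSegments, int curY) {
    for (const Seg& s : edgeSegments) {
        if (curY >= s.y0 && curY < s.y1)
            return s.x0 + (curY - s.y0) * s.dxdy;  // linear equation of segment
    }
    // curY lies beyond the edge: the caller treats the edge as finished.
    return edgeSegments.back().x0;
}
```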
(b) Loading of New Edges

Fig. 9 illustrates the method 900 of loading new edges from groups not already processed in the display list DL, as used in step 690 of Fig. 6. The method 900 starts at step 905. In step 905, Group is set to be the next group in the DL that has not been processed. In decision step 910, a check is made to determine if Group is valid and if Group's VisibleY value is less than or equal to CurY. If step 910 returns false (No), processing continues at step 960. In step 960, YGrpRemain and YRemain are recalculated. If Group is valid in step 960, YGrpRemain is recalculated as the difference between Group's VisibleY value and CurY. However, if Group is invalid at step 960, YGrpRemain is set to the difference between the last scanline to be rendered and CurY. YRemain is recalculated as the difference between the Y value of the first element in the SEL and CurY. The method 900 ends in step 999. If step 910 returns true (Yes), processing continues at step 915.

In step 915, edges from Group up to scanline CurY are loaded, fast tracked to CurY as described above, and merged into the AEL in a similar fashion to steps 840 and 845 of Fig. 8. The condition (CurY == 0) in step 830 of Fig. 8 would change to always fast track, so that step 840 is always executed. The embodiments of the invention thereby enable fast tracking of newly encountered groups down the page: the group's edges are fast tracked to CurY, which is the first visible scanline for that group, and merged with the AEL. In step 920, the first scanline edge list entry from Group greater than CurY, if any, is added into the SEL.

In step 925, Clip is set to be the head of the list of clips affecting Group. In decision step 930, a check is made to determine if Clip is valid. If step 930 returns false (No), processing continues at step 905. Otherwise, if decision step 930 returns true (Yes), processing continues at step 935.

In decision step 935, a check is made to determine whether Clip has already been loaded. This avoids adding clip edges more than once to the AEL. If step 935 returns true (Yes), indicating Clip is already being used, processing continues at step 940. In step 940, Clip is set to be the next clip in the clip list affecting Group. Processing continues at decision step 930. If step 935 returns false (No), processing continues at step 945.

In step 945, Clip is marked as already being used. In step 950, clip edges up to scanline CurY are loaded, fast tracked and merged into the AEL in a similar fashion to steps 840 and 845 of Fig. 8. The condition in step 830 of Fig. 8 would change to always fast track, so step 840 is always executed, for the same reasons as fast tracking is done in step 915. In step 955, the first scanline edge list entry from Clip.EdgeList greater than CurY, if any, is inserted into the SEL. Processing continues at step 940, where the next clip in the clip list affecting Group is set as Clip.
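A compact C++ sketch of the once-only clip loading checked in steps 930-945 follows; ClipRec, the loaded flag and the commented-out helper are assumptions for illustration.

```cpp
#include <vector>

struct ClipRec {
    bool loaded = false;  // set once the clip's edges have entered the AEL
    // ... Clip.EdgeList and the clip's levels and fill rules
};

// Steps 925-955 (sketch): walk the clips affecting a group, loading each
// clip's edges at most once regardless of how many groups share the clip.
void loadGroupClips(std::vector<ClipRec*>& clipsAffectingGroup, int curY) {
    for (ClipRec* clip : clipsAffectingGroup) {  // steps 925, 930, 940
        if (clip->loaded)                        // step 935: already in use?
            continue;                            // never add clip edges twice
        clip->loaded = true;                     // step 945: mark as used
        // Step 950 would load, fast track to curY and merge the clip's
        // edges into the AEL, e.g.: fastTrackAndMergeIntoAel(*clip, curY);
    }
}
```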
(c) Rendering 'nlines' Scanlines into Output Image

Fig. 7 illustrates a method 700 of rendering 'nlines' scanlines into the output (pixel-based) image 210 of Fig. 2 from the CurY scanline onwards, as used in step 660 of Fig. 6. Processing commences at step 705. In step 705, a check is made to determine if nlines is less than or equal to zero (0). If step 705 returns true (Yes), the method 700 ends in step 799. Otherwise, if step 705 returns false (No), processing continues at step 710. In step 710, the Active Edge List (AEL) is rendered into scanline CurY of the output pixel-based image 210 using a pixel-sequential rendering algorithm, which is well known to those skilled in the art. Processing continues at step 715. Steps 715 through 745 of the method 700 update all the edges in the AEL so that the edges are ready for rendering on the next scanline.

In step 715, a variable Edge is set to be the head of the AEL. This is the current edge to be processed. These are the edges in groups and clips which make up the object shapes. In decision step 720, a check is made to determine if Edge is valid (e.g., not null). If decision step 720 returns false (No), indicating Edge is invalid, processing continues at step 750, described hereinafter. Otherwise, if step 720 returns true (Yes), indicating Edge is valid, processing continues at step 725. In decision step 725, a check is made to determine if Edge exists in the next scanline (CurY + 1). If step 725 returns false (No), processing continues at step 745. In step 745, Edge is removed from the AEL. Processing continues at step 740. Otherwise, if decision step 725 returns true (Yes), indicating Edge exists in the next scanline, processing continues at step 730.

In decision step 730, a check is made to determine if Edge expires in the current scanline CurY, that is, if Edge's expiry scanline is equal to CurY. If step 730 returns true (Yes), indicating Edge expires in the current scanline CurY, processing continues at step 745 and Edge is removed from the AEL. Otherwise, if step 730 returns false (No), processing continues at step 735. In step 735, Edge's X value (Edge.X) is updated for the next scanline (CurY + 1), i.e. ready for rendering. Edge may need to be reinserted, so that the AEL remains sorted by X. In step 740, Edge is set to be the next edge in the AEL. Processing then continues at step 720 to determine if the current Edge is valid.

When all the edges in the AEL have been updated (by means of steps 720 to 745), step 720 returns false (No), and method 700 continues at step 750. In step 750, nlines is decremented by one (1) and CurY is incremented by one (1). Processing then returns to step 705 to check if more lines can be rendered.
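The edge-update pass of steps 715-745 might be sketched in C++ as follows; AelEdge and the one-past-last endY convention are assumptions, and the re-sorting by X noted in step 735 is only indicated by a comment.

```cpp
#include <list>

struct AelEdge {
    double x;     // crossing with the current scanline
    double dxdy;  // x increment per scanline
    int endY;     // one past the last scanline on which the edge exists
    int expiryY;  // Group.VisibleEndY: forced removal scanline (step 850)
};

// Steps 715-745 (sketch): prepare the AEL for scanline curY + 1.
void updateAelForNextScanline(std::list<AelEdge>& ael, int curY) {
    for (auto it = ael.begin(); it != ael.end(); ) {
        bool finished = (curY + 1 >= it->endY);  // step 725: edge ends
        bool expired  = (curY == it->expiryY);   // step 730: expiry reached
        if (finished || expired) {
            it = ael.erase(it);                  // step 745: drop the edge
        } else {
            it->x += it->dxdy;                   // step 735: X at curY + 1
            ++it;  // (the AEL must then be kept sorted by X; omitted here)
        }
    }
}
```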
When all the edges in the AEL have been updated (by means of steps 720 to 745), step 720 returns false (No), and the method 700 continues at step 750. In step 750, nlines is decremented by one (1) and CurY is incremented by one (1). Processing then returns to step 705 to check if more lines can be rendered.

II. Bucket On-Demand Pixel-Sequential Edge Tracking (BOD-PSET)

(i) Changes to OD-PSET

The OD-PSET algorithm 600 of Fig. 6 described hereinbefore works for a group-based display list, where groups are sorted based on a VisibleY value. The OD-PSET algorithm can be modified to work for a display list that is split into one or more buckets corresponding to one or more partitions of a page in the Y direction. The entire Y range of the page is split up into a number of buckets, and each bucket contains zero (0) or more object groups sorted in Z order. A group exists only in the bucket where the group is first visible (based on the intersection of the union of the bounding boxes of objects in the group with the bounding boxes of clip-ins affecting the group). The number of buckets is usually less than or equal to the number of bands in the page. This technique is referred to hereinafter as the BOD-PSET algorithm. The advantage of the BOD-PSET algorithm is that the technique works harmoniously with other band rendering techniques which require groups to be sorted in Z order. The BOD-PSET algorithm may be used when rendering into a band store (e.g. as in colour rendering). A band store is a memory of limited or fixed size for storing a subset of scanlines of a page to be rendered, where the entire page to be rendered cannot be stored in the memory, as is well known to those skilled in the art.

The method 300 of Fig. 3 for building the display list 205 for rendering by the OD-PSET algorithm 600 of Fig. 6 may also be used to build a display list 205 for rendering by the BOD-PSET algorithm, with one difference. The difference lies in step 380, where instead of sorting groups by VisibleY value, the Group is appended to the end of a bucket b, where Group's VisibleY value is in the scanline range corresponding to the bucket b (a sketch of this bucket assignment is given below, ahead of the example).

The BOD-PSET rendering algorithm is similar to the OD-PSET algorithm 600 of Fig. 6. When rendering pixels using the BOD-PSET algorithm into each band store, the method 600 of Fig. 6 is executed with the following variations. In step 610, CurY and YRemain are only initialised to zero (0) for the first band render; all remaining band renders leave these variable values unchanged. YGrpRemain is always set to zero (0) at the start of each band. The methods 700 and 800 of Figs. 7 and 8, i.e. the rendering of nlines scanlines without needing to load any edges and the loading of new edges from SEL, are identical for the BOD-PSET algorithm. For the BOD-PSET algorithm, the differences in the method 900 of loading new edges from groups not already processed in the display list DL are:

* Step 905 loads groups that are visible in the current band and have not already been processed when rendering previous bands.

* Step 960 always calculates YGrpRemain as the number of lines left to render in the band.

(ii) Working of BOD-PSET Algorithm for Exemplary Page

Fig. 11 illustrates an exemplary page 11.PG that is the same in terms of content as the page 10.PG of Fig. 10 rendered using the OD-PSET algorithm.
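Before walking through the example, here is a minimal sketch of the step-380 bucket assignment mentioned above. The page height, bucket count, and scanline value are illustrative assumptions; the description only requires that each bucket cover a fixed scanline range of the page.

```python
def bucket_index(visible_y: int, page_height: int, num_buckets: int) -> int:
    """Map a group's first visible scanline to the bucket whose scanline
    range contains it.  Buckets evenly partition the page's Y range."""
    lines_per_bucket = -(-page_height // num_buckets)  # ceiling division
    return min(visible_y // lines_per_bucket, num_buckets - 1)

# A page of 600 scanlines split into 6 buckets: a group first visible at
# scanline 130 is appended to bucket index 1, i.e. the second bucket.
buckets = [[] for _ in range(6)]
buckets[bucket_index(130, 600, 6)].append("group")
```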
The clip 10.C1, the object 10.O1, and the bounding boxes 10.CBB1 and 10.OBB1 are labelled 11.C1, 11.O1, 11.CBB1, and 11.OBB1, respectively.

The page 11.PG contains the single object 11.O1 clipped by the clip 11.C1. The object 11.O1 comprises four edges 11.OE1, 11.OE2, 11.OE3 and 11.OE4 with the bounding box 11.OBB1. The bounding box 11.OBB1 starts at the scanline 11.SC2 and ends at the scanline 11.SC4. The clip 11.C1 comprises four edges 11.CE1, 11.CE2, 11.CE3 and 11.CE4 with the bounding box 11.CBB1. The bounding box 11.CBB1 starts at the scanline 11.SC1 and ends at the scanline 11.SC3. The visible bounds are calculated from the intersection of the bounding boxes 11.CBB1 and 11.OBB1, which has a top scanline 11.SC2 and a bottom scanline 11.SC3. For the sake of brevity, other details described hereinbefore with respect to Fig. 10 are not duplicated here. For further details of the content of Fig. 11, reference is made to the description hereinbefore of Fig. 10, where features are numbered in a corresponding manner (e.g., clip 10.C1 in Fig. 10 is clip 11.C1 in Fig. 11).

In this example, the renderer divides the page 11.PG to be rendered into six bands 11.B1, 11.B2, 11.B3, 11.B4, 11.B5, and 11.B6 (each band is referred to by its last scanline, which is depicted by a thick dotted arrow with the arrowhead directed to the left). The sequence of steps for rendering the page 11.PG using the BOD-PSET algorithm is:

* The display list created for the page 11.PG using the variation of the method 300 of Fig. 3 comprises one group which has a single clip (11.C1) affecting the group. The group's VisibleY value is scanline 11.SC2, the top of bounding box 11.OBB1 in the intersection of bounding boxes, and the group is therefore appended to the second bucket (CurY = 0). The number of buckets does not need to match the number of bands.

* YGrpRemain is set to the last scanline in the current band (11.B1) - CurY. YRemain is set to the last scanline on the page 11.PG - CurY.

* Even though clip edges 11.CE1 and 11.CE2 are in the first band 11.B1, the group is not visible until the second band 11.B2, so the first band 11.B1 is rendered completely white.

* When rendering the second band 11.B2 (CurY = 11.B2), the visible region (in part bounded by bounding box 11.OBB1) for the object 11.O1 starts at the scanline 11.SC2, which is inside the second band 11.B2. The clip edges 11.CE1 and 11.CE2 are loaded into the AEL and fast tracked to the scanline 11.B1 (i.e., the starting scanline of this band 11.B2). The expiry scanline for the clip edges 11.CE1 and 11.CE2 is set to the scanline 11.SC3.

* YRemain is set to the scanline 11.SC8 (where the clip edge 11.CE4 starts) - CurY. YGrpRemain is set to the last scanline in the current band (11.B2) - CurY.

* Normal pixel-sequential rendering of the second band 11.B2 is performed until CurY = 11.SC8, where the clip edge 11.CE1 finishes and the clip edge 11.CE4 is loaded into the AEL. The expiry scanline for 11.CE4 is set to 11.SC3. The portion of the second band 11.B2 up to the scanline 11.SC8 is rendered completely white.

* YRemain is set to the scanline 11.SC2 - CurY. YGrpRemain is set to the last scanline in the current band (11.B2) - CurY. Normal pixel-sequential rendering of the second band 11.B2 is performed until CurY = 11.SC2, where the object edges 11.OE1 and 11.OE2 are loaded into the AEL. The expiry scanline for 11.OE1 and 11.OE2 is set to the scanline 11.SC3.
* YRemain is set to the scanline 11.SC5 (in the third band 11.B3, where the object edge 11.OE3 starts) - CurY. YGrpRemain is set to the last scanline in the current band (11.B2) - CurY.

* The second band 11.B2 finishes rendering, and normal pixel-sequential rendering of the third band 11.B3 is performed until CurY = 11.SC5 is reached. At this scanline, the object edge 11.OE2 finishes and the object edge 11.OE3 is loaded into the AEL. The expiry scanline for the object edge 11.OE3 is set to the scanline 11.SC3.

* YRemain is set to the scanline 11.SC7 (where the clip edge 11.CE3 starts) - CurY. YGrpRemain is set to the last scanline in the current band - CurY.

* The third band 11.B3 finishes rendering, and normal pixel-sequential rendering of the fourth band 11.B4 is performed until CurY = 11.SC7, where the clip edge 11.CE2 finishes and the clip edge 11.CE3 is loaded into the AEL. The expiry scanline for the clip edge 11.CE3 is set to the scanline 11.SC3.

* YRemain is calculated to be the scanline 11.SC6 (where the object edge 11.OE4 starts) - CurY. YGrpRemain is set to the last scanline in the current band (11.B4) - CurY.

* The fourth band 11.B4 finishes rendering, and normal pixel-sequential rendering of the fifth band 11.B5 is performed until CurY = 11.SC6, where the object edge 11.OE1 finishes and the object edge 11.OE4 is loaded into the AEL. The expiry scanline for the object edge 11.OE4 is set to the scanline 11.SC3.

* YRemain is set to the last scanline in the page 11.PG - CurY. YGrpRemain is set to the last scanline in the current band (11.B5) - CurY.

* Normal pixel-sequential rendering of the fifth band 11.B5 is performed until CurY = 11.SC3. When edges are being updated at the scanline 11.SC3, the edges 11.CE4, 11.CE3, 11.OE4, and 11.OE3 in the AEL have reached their expiry scanline and are all removed from the AEL, leaving the AEL empty.

* The fifth band 11.B5 finishes rendering, and normal pixel-sequential rendering of the sixth band 11.B6 is performed until the end of the page 11.PG, with the absence of edges in the AEL producing white output.

A sketch of the visible-extent calculation used throughout this walkthrough is given at the end of this section.

The BOD-PSET method of rendering may involve more scanlines that have edges tracked without fast tracking compared to the OD-PSET method (e.g. scanlines 11.B1 to 11.SC2). However, as mentioned hereinbefore, the BOD-PSET algorithm integrates well with other high-speed band rendering techniques.

The embodiments of the invention are applicable to the computer and data processing industries, amongst others.

The foregoing describes only some embodiments of the present invention, and modifications and/or changes can be made thereto without departing from the scope and spirit of the invention, the embodiments being illustrative and not restrictive.
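As foreshadowed above, a minimal sketch of the visible-extent calculation that drives the walkthrough, i.e. the Y intersection of the object and clip-in bounding boxes yielding VisibleY = 11.SC2 and VisibleEndY = 11.SC3. The concrete scanline values used below are made-up assumptions for illustration.

```python
# Illustrative sketch only; bounding boxes are reduced to their Y extents.
from typing import Optional, Tuple

Bounds = Tuple[int, int]  # (top scanline, bottom scanline)

def visible_y_extent(obj: Bounds, clip: Bounds) -> Optional[Bounds]:
    """Intersect an object bounding box with a clip-in bounding box in Y.
    Returns None when the object is entirely clipped out."""
    top, bottom = max(obj[0], clip[0]), min(obj[1], clip[1])
    return (top, bottom) if top <= bottom else None

# Fig. 11 shapes: the object box spans SC2..SC4 and the clip box spans
# SC1..SC3, giving the visible extent SC2..SC3 (scanline values assumed).
SC1, SC2, SC3, SC4 = 40, 120, 300, 360
assert visible_y_extent((SC2, SC4), (SC1, SC3)) == (SC2, SC3)
```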

Claims (20)

1. A method of pixel-sequentially rendering a display list to an output image, said method comprising:
   determining from a display list comprising at least two graphical objects and at least one clip a first scanline of a visible region within which at least one object clipped by a clip in said display list can contribute colour to an image, each object and each clip having at least one edge, said image comprising a plurality of scanlines;
   for a current scanline intersecting one of said objects and said clip, the edges of said objects intersecting the current scanline being active edges, an Active Edge List (AEL) holding said active edges in increasing order of points of intersection with said current scanline:
   determining if said current scanline is said first scanline of said visible region;
   if said current scanline is determined to be said first scanline of said visible region, performing the steps of:
   determining a crossing point of an edge of one of said object and said clip within said current scanline;
   pixel-sequentially rendering said AEL to said current scanline of said image using said determined crossing position;
   merging said edge of said object into said AEL comprising at least one other edge of an object, said AEL being ordered by crossing position with said current scanline; and
   if said current scanline is determined not to be said first scanline of said visible region, performing the step of:
   pixel-sequentially rendering another object to said current scanline of said image using a predetermined crossing position of an edge of said other object within said current scanline.
2. The method according to claim 1, further comprising:
   setting an expiry scanline for said edges of said object to the last scanline on which said one object is visible; and
   updating said edges to a following scanline if the current scanline is not said expiry scanline.
3. The method according to claim 1, wherein said object is a group of objects satisfying a grouping criterion.
4. The method according to claim 1, wherein said edges each comprise at least one line-, arc- or Bezier segment.
5. The method according to claim 1, wherein said crossing points of edges activate or deactivate objects in said display list.
6. The method according to claim 1, further comprising the steps of:
   creating an empty group of objects; and
   adding at least one object to said group of objects dependent upon at least one clip associated with said object.
7. The method according to claim 6, further comprising the step of adding said group of objects to said display list in sorted order dependent upon a scanline on which said group of objects is visible.
8. The method according to claim 6, further comprising the step of adding at least one clip for said object to a list of clips for said group of objects.
9. The method according to claim 6, wherein the step of adding at least one object to said group of objects comprises:
   adding information about said object to a data structure for said group of objects;
   linking data for at least one clip for said object to a list of clips for said group of objects;
   setting bounds for said group of objects dependent upon bounds for said object; and
   setting bounds of clips for said group of objects dependent upon bounds for said clip for said object.
10. The method according to claim 1, wherein said visible region is determined based on said at least two graphical objects and said at least one clip.
11. The method according to claim 1, wherein said visible region is determined based on bounds of said at least two graphical objects and bounds of said at least one clip.
12. The method according to claim 11, wherein said bounds of said at least two graphical objects and said bounds of said at least one clip are each defined by a bounding box.
13. The method according to claim 1, further comprising the step of splitting said display list into two or more buckets corresponding to two or more partitions of a page in a Y direction, each bucket capable of containing at least one object and/or clip sorted in Z order.
14. The method according to claim 13, wherein said rendering steps are performed using said buckets to render said partitions of said page dependent upon an object being visible within said partition.
15. A system for pixel-sequentially rendering a display list to an output image, said system comprising a processor and a memory and implementing the method according to any one of claims 1 to 14.
16. The system according to claim 15, wherein said rendering apparatus is implemented as a hardware device.
17. The system according to claim 15, wherein said rendering apparatus is implemented as a software module running on said processor.
18. The system according to claim 15, wherein said system is implemented within a printer.
19. The system according to claim 15, further comprising an engine, a rendering apparatus, and a rendering store coupled to each other.
20. A computer program product comprising a computer readable medium having recorded therein a computer program for pixel-sequentially rendering a display list to an output image, said computer program product comprising computer program code means for implementing the method according to any one of claims 1 to 14.

Dated this 18th day of April 2008
CANON KABUSHIKI KAISHA
Patent Attorneys for the Applicant
SPRUSON & FERGUSON
AU2008201723A 2008-04-18 2008-04-18 On-demand pixel-sequential edge tracking Abandoned AU2008201723A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2008201723A AU2008201723A1 (en) 2008-04-18 2008-04-18 On-demand pixel-sequential edge tracking

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
AU2008201723A AU2008201723A1 (en) 2008-04-18 2008-04-18 On-demand pixel-sequential edge tracking

Publications (1)

Publication Number Publication Date
AU2008201723A1 true AU2008201723A1 (en) 2009-11-05

Family

ID=41259218

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2008201723A Abandoned AU2008201723A1 (en) 2008-04-18 2008-04-18 On-demand pixel-sequential edge tracking

Country Status (1)

Country Link
AU (1) AU2008201723A1 (en)

Similar Documents

Publication Publication Date Title
JP4756937B2 (en) How to draw graphic objects
EP1577838B1 (en) A method of rendering graphical objects
US6995773B2 (en) Automatic memory management
EP1306810A1 (en) Triangle identification buffer
JP2000137825A (en) Fast rendering method for image using raster type graphic object
US8723884B2 (en) Scan converting a set of vector edges to a set of pixel aligned edges
US20150002529A1 (en) Method, system and apparatus for rendering
US20130120381A1 (en) Fast rendering of knockout groups using a depth buffer of a graphics processing unit
AU2013273660A1 (en) Method, apparatus and system for generating an intermediate region-based representation of a document
US6795048B2 (en) Processing pixels of a digital image
US20050162435A1 (en) Image rendering with multi-level Z-buffers
US20040012617A1 (en) Generating one or more linear blends
US20120105911A1 (en) Method, apparatus and system for associating an intermediate fill with a plurality of objects
AU2008201723A1 (en) On-demand pixel-sequential edge tracking
US20050195220A1 (en) Compositing with clip-to-self functionality without using a shape channel
US20100225660A1 (en) Processing unit
AU730559B2 (en) Optimisation in image composition
US6903748B1 (en) Mechanism for color-space neutral (video) effects scripting engine
AU2005200948B2 (en) Compositing list caching for a raster image processor
EP1306811A1 (en) Triangle identification buffer
AU2003204655B2 (en) Generating One or More Linear Blends
JP5760728B2 (en) Image processing apparatus, image forming apparatus, and program
JP4355394B2 (en) Image processing method, image processing apparatus, and program storage medium
AU2009201502A1 (en) Rendering compositing objects
AU2008207665A1 (en) Retaining edges of opaque groups across bands

Legal Events

Date Code Title Description
MK1 Application lapsed section 142(2)(a) - no request for examination in relevant period