AU2009201502A1 - Rendering compositing objects - Google Patents


Info

Publication number
AU2009201502A1
Authority
AU
Australia
Prior art keywords
compositing
list
objects
rendering
computer program
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
AU2009201502A
Inventor
Thomas Benjamin Sanjay Tomas
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc filed Critical Canon Inc
Priority to AU2009201502A
Publication of AU2009201502A1
Status: Abandoned


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 - 2D [Two Dimensional] image generation
    • G06T11/40 - Filling a planar surface by adding surface attributes, e.g. colour or texture

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)

Description

S&F Ref: 895809

AUSTRALIA
PATENTS ACT 1990
COMPLETE SPECIFICATION FOR A STANDARD PATENT

Name and Address of Applicant: Canon Kabushiki Kaisha, of 30-2, Shimomaruko 3-chome, Ohta-ku, Tokyo, 146, Japan
Actual Inventor(s): Thomas Benjamin Sanjay Tomas
Address for Service: Spruson & Ferguson, St Martins Tower, Level 35, 31 Market Street, Sydney NSW 2000 (CCN 3710000177)
Invention Title: Rendering compositing objects

The following statement is a full description of this invention, including the best method of performing it known to me/us:

RENDERING COMPOSITING OBJECTS

TECHNICAL FIELD

The present invention relates to graphic object rendering, and in particular to efficiently rendering pages containing compositing objects.

BACKGROUND

Most raster image processors (RIPs) utilise memory, known as a frame store or a page buffer, to hold a pixel-based image data representation of the page or screen for subsequent printing and/or display. Typically, the outlines of the graphic objects are calculated, filled with colour values and written into the frame store. For two-dimensional computer graphics, objects that appear in front of other objects are simply written into the frame store after the background objects, thereby replacing the background on a pixel-by-pixel basis. This is commonly known in the art as the "Painter's algorithm" (also as "object-sequential rendering"). Objects are considered in priority order, from the rearmost object to the foremost object, and typically each object is rasterised in scanline order and pixels are written to the frame store in sequential runs along each scanline. These sequential runs are termed "pixel runs".

Some RIPs allow objects to be composited with other objects in some way. For example, a logical or arithmetic operation can be specified and performed between one or more semi-transparent graphic objects and the already rendered pixels in the frame buffer.
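The object-sequential method just described can be sketched as follows. The run-based object representation, the names, and the flat frame-store layout are illustrative assumptions, not details from the specification:

```python
# A minimal sketch of object-sequential ("Painter's algorithm") rendering,
# assuming each object has already been rasterised into horizontal pixel runs.
# Later (higher-priority) objects simply overwrite earlier pixels.

def paint(frame_store, width, objects):
    """Render objects rear-to-front into a flat row-major frame store."""
    for obj in objects:                      # priority order: rearmost first
        for y, x_start, x_end, colour in obj["runs"]:
            for x in range(x_start, x_end):  # one sequential pixel run
                frame_store[y * width + x] = colour
    return frame_store

width, height = 8, 2
store = [0] * (width * height)
background = {"runs": [(0, 0, 8, 1), (1, 0, 8, 1)]}  # fills both scanlines
foreground = {"runs": [(0, 2, 5, 2)]}                # overwrites part of row 0
paint(store, width, [background, foreground])
```

Note that the pixels of the background in columns 2 to 4 of row 0 are written and then immediately over-written, which is exactly the wasted work the passage above identifies.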
In these cases, the rendering principle remains the same: objects are rasterised in scanline order, and the result of the specified operation is calculated and written to the frame store in sequential runs along each scanline.

There are a number of problems with the Painter's algorithm rendering method. One problem is that many pixels written to the frame store by rasterising an object are over-written when rasterising later objects. There is a clear disadvantage in using resources to write pixel data into a frame store only for it to be over-written at a later stage. Another problem arises when an object requires compositing: pixels beneath the object are typically read from the frame store and combined in some way with the pixels of the object. If the pixels in the frame store are stored at a lower bit depth than the object requiring compositing, then most compositing operations generate an incorrect result. This is the case when the graphic object is, for example, an 8-bit-per-channel RGBA bitmap and the frame store holds one-bit-per-channel half-toned pixel data. This can occur because pixel values are often stored in a frame store at the bit depth required for printing.

Other RIPs may utilise a pixel-sequential rendering approach. In these systems, each pixel is generated in raster order along scanlines. All objects to be drawn are retained in a display list in an edge-based format. On each scanline, the edges of objects that intersect the current scanline, known as active edges, are held in increasing order of their points of intersection with the scanline. These points of intersection, or edge crossings, are considered in turn, and activate or deactivate objects in the display list. Between each pair of edges considered, the colour data for each pixel that lies between the first edge and the second edge is generated based on which objects are active for that run of pixels.
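The span generation just described can be sketched for a single scanline, assuming edges have already been reduced to sorted crossings that toggle object activity. Treating object identifiers as priorities and rendering only the topmost opaque object per span are illustrative simplifications:

```python
# A minimal sketch of pixel-sequential rendering for one scanline.
# crossings: sorted list of (x, object_id); each crossing toggles that
# object's activity. 0 represents the page background.

def render_scanline(width, crossings, priority_colour):
    active = set()
    out = [0] * width
    i = 0
    for x in range(width):
        while i < len(crossings) and crossings[i][0] == x:
            active.symmetric_difference_update({crossings[i][1]})  # toggle
            i += 1
        if active:
            top = max(active)            # highest-priority active object wins
            out[x] = priority_colour[top]
    return out

# Object 1 spans x in [1, 6); object 2 (higher priority) spans [3, 5).
line = render_scanline(8, [(1, 1), (3, 2), (5, 2), (6, 1)], {1: "A", 2: "B"})
```

Because each output pixel is produced exactly once, nothing is over-painted; the cost has instead moved into maintaining the sorted crossing list, which is the weakness discussed next.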
In preparation for the next scanline, the coordinate of intersection of each edge is updated in accordance with the properties of each edge, and the edges are sorted into increasing order of point of intersection with that scanline. Any newly active edges are also merged into the ordered list of active edges.

Graphics systems that use pixel-sequential rendering have significant advantages over object-sequential renderers in that there is no unnecessary over-painting and compositing quality is maintained. Objects requiring compositing are processed on a per-pixel basis using each object's original colour data. Each pixel is converted to the output bit depth after any compositing, so the correct result is obtained regardless of the output bit depth.

Pixel-sequential rendering suffers when there are large numbers of edges that must be tracked and maintained in sorted order for each scanline. As each edge is updated for a new scanline, the edge is re-inserted into the active edge list, usually by an insertion sort. For complex pages, which may consist of hundreds of thousands of edges, the time required to maintain the sorted list of active edges for each scanline becomes a large portion of the total time to render the page.

Some methods have combined both object- and pixel-sequential algorithms to create a "hybrid" rendering approach, which gives better rendering performance and good-quality output by using both methods.

In one such hybrid rendering technique, an object-based display list is created and partitioned into one group of objects requiring compositing and another group of objects not requiring compositing. The partitioning is done at the last compositing object. Pixel-sequential rendering is used for the group of objects requiring compositing, and object-sequential rendering is used for the objects above the last compositing object.
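That partitioning step can be sketched as follows; the display-list representation and field names are assumptions for illustration:

```python
# A minimal sketch of hybrid partitioning at the last compositing object:
# everything up to and including it goes to the pixel-sequential renderer,
# and the remaining (opaque) objects go to the object-sequential renderer.

def partition_display_list(objects):
    """objects: priority-ordered list of dicts with a 'composites' flag."""
    last = -1
    for i, obj in enumerate(objects):
        if obj["composites"]:
            last = i
    # If no object composites (last == -1), the pixel-sequential part is empty.
    return objects[:last + 1], objects[last + 1:]

dl = [{"id": 0, "composites": False},
      {"id": 1, "composites": True},   # e.g. a watermark
      {"id": 2, "composites": False}]
pixel_part, object_part = partition_display_list(dl)
```

The sketch also makes the stated weakness visible: a high-priority compositing object such as a watermark drags every lower-priority object into the slower pixel-sequential partition.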
This technique, however, suffers from performance that is heavily dependent on the priority, or Z value, of the last compositing object. Pages having a watermark would have all objects rendered using pixel-sequential rendering.

Another hybrid rendering technique, known as exclusive pixel sequential rendering (XPSR), renders sets of opaque objects using object-sequential rendering, and each compositing object is pixel-sequentially rendered with the objects determined to be underneath the compositing object. The overlap of bounding boxes is used to detect whether an object of lower priority is underneath a compositing object. Compositing objects such as watermarks, which have a large bounding box due to the orientation of the text, result in the majority of a page being considered to be underneath the watermark even though the watermark only composites with a small subset of those objects. Also, this method determines the objects underneath each compositing object at render time, which can be inefficient.

SUMMARY

In accordance with an aspect of the invention, there is provided a method of rendering a digital image. A list of objects describing the image, comprising at least one object to be composited, is received. The objects in the list are ordered in priority order from a lowest-priority object to a highest-priority object. Each compositing object is associated with at least one cell in a compositing grid for the image. For each object, a compositing list of each compositing object in at least one cell associated with the object is updated. The object is rendered, if a portion of the object contributes to a rendered output, with the compositing list of the compositing object.

In accordance with an aspect of the invention, there is provided an apparatus for rendering a digital image, comprising: a memory for storing data and a computer program; and a processor unit coupled to the memory for executing the computer program.
The computer program comprises: a computer program code module for receiving a list of objects describing the image, comprising at least one object to be composited, the objects in the list being ordered in priority order from a lowest-priority object to a highest-priority object; a computer program code module for associating each compositing object with at least one cell in a compositing grid for the image; a computer program code module for, for each object, updating the compositing list of each compositing object in at least one cell associated with the object; and a computer program code module for rendering the object if a portion of the object contributes to a rendered output with the compositing list of the compositing object.

The apparatus may further comprise a module coupled to the apparatus for providing a description of the digital image comprising the objects. The apparatus may be a printer comprising a print engine coupled to the memory and the processor unit.

In accordance with an aspect of the invention, there is provided a computer program product comprising a computer readable medium having recorded therein a computer program for rendering a digital image for execution by a processing unit.
The computer program comprises: a computer program code module for receiving a list of objects describing the image, comprising at least one object to be composited, the objects in the list being ordered in priority order from a lowest-priority object to a highest-priority object; a computer program code module for associating each compositing object with at least one cell in a compositing grid for the image; a computer program code module for, for each object, updating a compositing list of each compositing object in at least one cell associated with the object; and a computer program code module for rendering the object if a portion of the object contributes to a rendered output with the compositing list of the compositing object.

BRIEF DESCRIPTION OF THE DRAWINGS

One or more embodiments of the invention are described hereinafter with reference to the following drawings, in which:

Fig. 1 is a schematic block diagram of a computer system on which the embodiments of the invention may be practised;
Fig. 2 is a schematic block diagram showing a rendering pipeline within which the embodiments of the invention may be implemented;
Fig. 3 is a schematic flow diagram illustrating a method of constructing an object-based display list according to one embodiment of the invention;
Fig. 4 is a schematic flow diagram illustrating a method of processing object clips as used in the method of Fig. 3;
Fig. 5 is a schematic flow diagram illustrating a method of selecting an appropriate compositing object detection and decomposition algorithm as used in the method of Fig. 3;
Fig. 6 is a schematic flow diagram illustrating a method of detecting and decomposing compositing glyph strings as used in the method of Fig. 5;
Fig. 7 is a schematic flow diagram illustrating a method of detecting and decomposing compositing paths as used in the method of Fig. 5;
Fig. 8 is a schematic flow diagram illustrating a method of recording compositing objects in the appropriate compositing grid cells as used in the method of Fig. 3;
Fig. 9 is a schematic flow diagram illustrating a method of adding objects to the display list as used in the methods of Figs. 3, 5, 6 and 7;
Fig. 10 is a schematic flow diagram illustrating a method of rendering an object-based display list constructed according to the method of Fig. 3;
Fig. 11 is a schematic flow diagram illustrating a method of rendering a set of objects in a render task as used in the method of Fig. 10;
Fig. 12 is a schematic flow diagram illustrating a method of rendering a set of scanlines in a render task as used in the method of Fig. 11;
Fig. 13 is a schematic flow diagram illustrating a method of updating compositing lists using the current object being processed as used in the method of Fig. 10;
Fig. 14 is a schematic flow diagram illustrating a method of appending an object to the appropriate compositing lists in a compositing grid cell as used in the method of Fig. 13;
Fig. 15 is a schematic flow diagram illustrating a method of marking a rendered compositing object as dormant in the compositing grid as used in the method of Fig. 10;
Figs. 16 and 17 are diagrams illustrating the overall working of the algorithm for an exemplary page;
Figs. 18A and 18B are more detailed schematic block diagrams of a general-purpose computer system that can be used as the computer system of Fig. 1; and
Fig. 19 is a high-level flow diagram illustrating a method of rendering a digital image.

DETAILED DESCRIPTION

Methods, apparatuses, and computer program products for rendering a digital image are disclosed. In the following description, numerous specific details, including particular cell sizes, colour spaces, and the like, are set forth.
However, from this disclosure, it will be apparent to those skilled in the art that modifications and/or substitutions may be made without departing from the scope and spirit of the invention. In other circumstances, specific details may be omitted so as not to obscure the invention.

Where reference is made in any one or more of the accompanying drawings to steps and/or features which have the same reference numerals, those steps and/or features have, for the purposes of this description, the same function(s) or operation(s), unless the contrary intention appears.

Description of Page and Objects

When a computer application provides data to a device for printing and/or display, an intermediate description of the page is often given to device driver software in a page description language (PDL). The intermediate description of the page includes descriptions of the graphic objects to be rendered. This contrasts with some arrangements where raster image data is generated directly by the application and transmitted for printing or display. Examples of page description languages include Canon's LIPS™ (Canon Inc. of Japan) and HP's PCL (Hewlett-Packard Inc. of USA).

Equivalently, the application may provide a set of descriptions of graphic objects via function calls to a graphics device interface (GDI) layer, such as the Microsoft Windows™ GDI (Microsoft Corp. of USA). The printer driver for the associated target printer is the software that receives the graphic object descriptions from the GDI layer. For each graphic object, the printer driver is responsible for generating a description of the graphic object in the page description language that is understood by the rendering system of the target printer.

The printer's rendering system contains a PDL interpreter that parses the graphic object descriptions and builds a display list (DL) of graphic object data.
The rendering system also contains a raster image processor (RIP) that processes the display list and renders the data to an output page image of pixel values comprising, for example, C, M, Y and K colour channels. Once in this format, the printer prints the page.

A graphic object can be a fill region, which (potentially) contributes colour to the output image, or a clip region. Hereinafter, "graphic object" is taken to mean "fill region". Each graphic object may be clipped by a clip region. The clip region limits the graphic objects it clips to the boundaries of the clip region. The clip region may describe a rectangular shape, called a clip rectangle, or a more complex shape, called a clip path.

There are two types of clip regions: inclusive clip regions, called "clip-ins", where graphic objects are only drawn inside the boundaries of the clip region, and exclusive clip regions, called "clip-outs", where graphic objects are only drawn outside the boundaries of the clip region.

The clip region (or simply "clip") may be applied to a single graphic object or a group of graphic objects. A clip region that clips a group of graphic objects is deemed to be "active" over the group of graphic objects. Furthermore, a graphic object or group of graphic objects may be clipped by two or more clip regions. A graphic object's "clipping status" is the list of clips that clip that graphic object.

A "watermark" is defined to be semi-transparent diagonal text, which composites with page elements underneath the watermark.

Overview

Broadly speaking, the embodiments of the invention are directed to a method of rendering a digital image, as illustrated in Fig. 19. Processing commences in step 1912, in which a list of objects 1900 describing the image is received.
The list 1900 comprises at least one object to be composited 1910, and the objects in the list are ordered in priority order from a lowest-priority object to a highest-priority object. In step 1916, each compositing object is associated with at least one cell in a compositing grid 1914 for the image. In step 1918, for each object, the compositing list 1922 for each compositing object in the corresponding cells of the compositing grid 1914 within the bounding box of the object is updated. For each compositing object, the object to be composited is rendered (if a portion of the object contributes to a rendered output) with the compositing list 1922 of the compositing object using a rendering method in step 1924. The rendering method renders the currently-considered object in raster scan order.

The method may further comprise the steps of determining, within the cell associated with the object, the compositing object overlapping the object, and updating the compositing list of the compositing object.

The method may further comprise the steps of determining if the compositing object can be decomposed into a plurality of glyphs, and decomposing the compositing object into a plurality of glyphs, each glyph being treated as a distinct compositing object, if the compositing object can be decomposed. The determining step may comprise using a functional relationship of a number of pixels in a bounding box of the object and a sum of pixels in a plurality of bounding boxes of the plurality of glyphs. The method may further comprise the step of adding the distinct compositing object to at least one compositing grid associated with the distinct compositing object.
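Two of the ideas above can be sketched together: associating objects with compositing-grid cells by bounding box and appending each object to the compositing lists of overlapping compositing objects in those cells, and the decomposition test, in which a compositing object is split into per-glyph objects when the glyphs' combined bounding-box pixel count is sufficiently smaller than that of the whole object. The cell size, threshold value, and data structures below are assumptions for illustration, not the patented design:

```python
CELL = 64        # assumed square cell size, in pixels
THRESHOLD = 0.5  # assumed ratio for the "functional relationship" test

def bbox_pixels(bbox):
    x0, y0, x1, y1 = bbox
    return (x1 - x0) * (y1 - y0)

def cells_for_bbox(bbox):
    """Yield (col, row) for every grid cell a bounding box touches."""
    x0, y0, x1, y1 = bbox
    for row in range(y0 // CELL, y1 // CELL + 1):
        for col in range(x0 // CELL, x1 // CELL + 1):
            yield (col, row)

def boxes_overlap(a, b):
    """True if two bounding boxes share at least one common pixel."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

grid = {}  # (col, row) -> compositing objects registered in that cell

def register_compositing_object(comp):
    for cell in cells_for_bbox(comp["bbox"]):
        grid.setdefault(cell, []).append(comp)

def update_compositing_lists(obj):
    """Append obj to the compositing list of each overlapping compositing
    object found in the cells covered by obj's bounding box."""
    for cell in cells_for_bbox(obj["bbox"]):
        for comp in grid.get(cell, []):
            if boxes_overlap(obj["bbox"], comp["bbox"]) and obj not in comp["list"]:
                comp["list"].append(obj)

def should_decompose(object_bbox, glyph_bboxes):
    """Decompose when the glyphs cover a small fraction of the object's box,
    as happens with long diagonal watermark text."""
    return sum(bbox_pixels(b) for b in glyph_bboxes) < THRESHOLD * bbox_pixels(object_bbox)

watermark = {"bbox": (0, 0, 200, 200), "list": []}
register_compositing_object(watermark)
update_compositing_lists({"bbox": (10, 10, 40, 40)})      # overlaps the watermark
update_compositing_lists({"bbox": (500, 500, 600, 600)})  # falls in other cells
diag = should_decompose((0, 0, 1000, 1000),
                        [(i * 100, i * 100, i * 100 + 60, i * 100 + 80) for i in range(10)])
```

The grid limits the overlap tests to nearby objects, and the decomposition test keeps a diagonal watermark from claiming the whole page as "underneath" it.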
The method may further comprise the steps of determining if the compositing object can be decomposed into a plurality of subpaths, and decomposing the compositing object into a plurality of subpaths, each subpath being treated as a distinct compositing object, if the compositing object can be decomposed. The determining step may comprise using a functional relationship of a number of pixels in a bounding box of the object and a sum of pixels in a plurality of bounding boxes of the plurality of subpaths. The method may further comprise the step of adding the distinct compositing object to at least one compositing grid associated with the distinct compositing object.

The objects in each cell in the compositing grid may be considered in order of increasing priority, and the method may further comprise the step of rendering the object to an image store using another, different rendering method if the currently-considered object does not require compositing.

After rendering the compositing object, the compositing object may be removed from all cells of the compositing grid.

A compositing object may be determined to be overlapping with the object if there is at least one common pixel between a bounding box of the compositing object and a bounding box of the object. The cells associated with each object may be calculated using a bounding box of the object. Each visible compositing object may be added to at least one cell of the compositing grid based on a bounding box of the compositing object.

The method may further comprise the steps of: determining if the compositing object can be decomposed into a plurality of glyphs; and decomposing the compositing object into a plurality of glyphs if the compositing object can be decomposed, wherein a subset of the plurality of glyphs is treated as a distinct object.

These and other aspects are described in greater detail hereinafter.

Rendering System

Fig. 1 illustrates schematically a system 100 configured for rendering and presenting computer graphic object images, on which the embodiments of the present invention may be practised. The system includes a processor 120, a system random access memory (RAM) 152, a system read-only memory (ROM) 162, an engine 110, a rendering apparatus 140, and a rendering store 130.

The processor 120 is also associated with the system RAM 152, which may include a non-volatile hard disk drive or similar device 156 and volatile semiconductor RAM 154. The system 100 also includes the system ROM 162, typically comprising semiconductor ROM 164, which in many cases may be supplemented by compact disk (CD-ROM) devices 166 or DVD devices. The engine 110 may be a print engine.

The above-described components of the system 100 are interconnected via a bus system 170 and are operable in a normal operating mode of computer systems well known in the art.

Also depicted in Fig. 1, the rendering apparatus 140 connects to the bus 170 and is configured for the rendering of pixel-based images derived from graphic object-based descriptions supplied with instructions and data from the processor 120 via the bus 170. The rendering apparatus 140 may utilise the system RAM 152 for the rendering of object descriptions, although the rendering apparatus 140 may have associated therewith a dedicated rendering store arrangement 130, typically formed of semiconductor RAM.

The system 100 may be implemented within a printer. The rendering apparatus 140 may be implemented as a hardware device. Alternatively, the rendering apparatus 140 may be a software module running on the processor 120. Aspects of the computer system 100 are described in greater detail with reference to Figs. 18A and 18B.

More Detailed Computer System

Figs. 18A and 18B collectively form a schematic block diagram of a general-purpose computer system 1800, with which embodiments of the invention can be practised.
As depicted in Fig. 18A, the computer system 1800 is formed by a computer module 1801; input devices such as a keyboard 1802, a mouse pointer device 1803, a scanner 1826, a camera 1827, and a microphone 1880; and output devices including a printer 1815, a display device 1814 and loudspeakers 1817. An external Modulator-Demodulator (Modem) transceiver device 1816 may be used by the computer module 1801 for communicating to and from a communications network 1820 via a connection 1821. The network 1820 may be a wide-area network (WAN), such as the Internet or a private WAN. Where the connection 1821 is a telephone line, the modem 1816 may be a traditional "dial-up" modem. Alternatively, where the connection 1821 is a high-capacity (e.g., cable) connection, the modem 1816 may be a broadband modem. A wireless modem may also be used for wireless connection to the network 1820.

The computer module 1801 typically includes at least one processor unit 1805 and a memory unit 1806, for example formed from semiconductor random access memory (RAM) and semiconductor read-only memory (ROM). The module 1801 also includes a number of input/output (I/O) interfaces, including an audio-video interface 1807 that couples to the video display 1814, loudspeakers 1817 and microphone 1880; an I/O interface 1813 for the keyboard 1802, mouse 1803, scanner 1826, camera 1827 and optionally a joystick (not illustrated); and an interface 1808 for the external modem 1816 and printer 1815. In some implementations, the modem 1816 may be incorporated within the computer module 1801, for example within the interface 1808. The computer module 1801 also has a local network interface 1811, which via a connection 1823 permits coupling of the computer system 1800 to a local computer network 1822, known as a Local Area Network (LAN).
The local network 1822 may also couple to the wide network 1820 via a connection 1824, which would typically include a so-called "firewall" device or a device of similar functionality. The interface 1811 may be formed by an Ethernet™ circuit card, a Bluetooth™ wireless arrangement, or an IEEE 802.11 wireless arrangement.

The interfaces 1808 and 1813 may afford either or both of serial and parallel connectivity, the former typically being implemented according to the Universal Serial Bus (USB) standards and having corresponding USB connectors (not illustrated). Storage devices 1809 are provided and typically include a hard disk drive (HDD) 1810. Other storage devices such as a floppy disk drive and a magnetic tape drive (not illustrated) may also be used. An optical disk drive 1812 is typically provided to act as a non-volatile source of data. Portable memory devices, such as optical disks (e.g., CD-ROM, DVD), USB-RAM, and floppy disks, may then be used as appropriate sources of data to the system 1800.

The components 1805 to 1813 of the computer module 1801 typically communicate via an interconnected bus 1804 and in a manner that results in a conventional mode of operation of the computer system 1800 known to those in the relevant art. Examples of computers on which the embodiments of the invention can be practised include IBM-PCs and compatibles, Sun SPARCstations, Apple Macs or similar computer systems.

The method of rendering a digital image may be implemented using the computer system 1800; the processes of Figs. 2-15 may be implemented as one or more software application programs 1833 executable within the computer system 1800. In particular, the steps of the method of rendering a digital image are effected by instructions 1831 in the software 1833 that are carried out within the computer system 1800.
The software instructions 1831 may be formed as one or more program code modules, each for performing one or more particular tasks. The software may also be divided into two separate parts, in which a first part and the corresponding code modules perform the rendering of a digital image, and a second part and the corresponding code modules manage a user interface between the first part and the user.

The software 1833 is generally loaded into the computer system 1800 from a computer readable medium and is then typically stored in the HDD 1810, as illustrated in Fig. 18A, or the memory 1806, after which the software 1833 can be executed by the computer system 1800. In some instances, the application programs 1833 may be supplied to the user encoded on one or more CD-ROMs 1825 and read via the corresponding drive 1812 prior to storage in the memory 1810 or 1806. Alternatively, the software 1833 may be read by the computer system 1800 from the networks 1820 or 1822, or loaded into the computer system 1800 from other computer readable media. Computer readable storage media refers to any storage medium that participates in providing instructions and/or data to the computer system 1800 for execution and/or processing. Examples of such storage media include floppy disks, magnetic tape, CD-ROM, a hard disk drive, a ROM or integrated circuit, USB memory, a magneto-optical disk, or a computer readable card such as a PCMCIA card and the like, whether or not such devices are internal or external to the computer module 1801. Examples of computer readable transmission media that may also be used to provide the software, application programs, instructions and/or data to the computer module 1801 include radio or infra-red transmission channels, network connections to another computer or networked device, and the Internet or intranets, including e-mail transmissions and information recorded on websites and the like.
The second part of the application programs 1833 and the corresponding code modules mentioned above may be executed to implement one or more graphical user interfaces (GUIs) to be rendered or otherwise represented upon the display 1814. Through manipulation of typically the keyboard 1802 and the mouse 1803, a user of the computer system 1800 and the application may manipulate the interface in a functionally adaptable manner to provide controlling commands and/or input to the applications associated with the GUI(s). Other forms of functionally adaptable user interfaces may also be implemented, such as an audio interface utilising speech prompts output via the loudspeakers 1817 and user voice commands input via the microphone 1880.

A computer readable medium having a computer program for rendering a digital image recorded on the medium constitutes a computer program product. The computer program can be loaded into memory, such as system RAM 152, and executed on the processing unit or processor 120 of the computer system to perform the method of rendering a digital image. Examples of a relevant medium are a CD-ROM 166 and HDD 156.

Fig. 18B is a detailed schematic block diagram of the processor 1805 and a "memory" 1834. The memory 1834 represents a logical aggregation of all the memory devices (including the HDD 1810 and semiconductor memory 1806) that can be accessed by the computer module 1801 in Fig. 18A.

When the computer module 1801 is initially powered up, a power-on self-test (POST) program 1850 executes. The POST program 1850 is typically stored in a ROM 1849 of the semiconductor memory 1806. A program permanently stored in a hardware device such as the ROM 1849 is sometimes referred to as firmware.
The POST program 1850 examines hardware within the computer module 1801 to ensure proper functioning, and typically checks the processor 1805, the memory (1809, 1806), and a basic input-output system software (BIOS) module 1851, also typically stored in the ROM 1849, for correct operation. Once the POST program 1850 has run successfully, the BIOS 1851 activates the hard disk drive 1810. Activation of the hard disk drive 1810 causes a bootstrap loader program 1852 that is resident on the hard disk drive 1810 to execute via the processor 1805. This loads an operating system 1853 into the RAM 1806, upon which the operating system 1853 commences operation. The operating system 1853 is a system-level application, executable by the processor 1805, to fulfil various high-level functions, including processor management, memory management, device management, storage management, software application interface, and generic user interface.

The operating system 1853 manages the memory 1809, 1806 to ensure that each process or application running on the computer module 1801 has sufficient memory in which to execute without colliding with memory allocated to another process. Furthermore, the different types of memory available in the system 1800 must be used properly so that each process can run effectively. Accordingly, the aggregated memory 1834 is not intended to illustrate how particular segments of memory are allocated (unless otherwise stated), but rather to provide a general view of the memory accessible by the computer system 1800 and how such memory is used.

The processor 1805 includes a number of functional modules, including a control unit 1839, an arithmetic logic unit (ALU) 1840, and a local or internal memory 1848, sometimes called a cache memory. The cache memory 1848 typically includes a number of storage registers 1844-1846 in a register section. One or more internal buses 1841 functionally interconnect these functional modules.
The processor 1805 typically also has one or more interfaces 1842 for communicating with external devices via the system bus 1804, using a connection 1818.

The application program 1833 includes a sequence of instructions 1831 that may include conditional branch and loop instructions. The program 1833 may also include data 1832 which is used in execution of the program 1833. The instructions 1831 and the data 1832 are stored in memory locations 1828-1830 and 1835-1837, respectively. Depending upon the relative size of the instructions 1831 and the memory locations 1828-1830, a particular instruction may be stored in a single memory location, as depicted by the instruction shown in the memory location 1830. Alternatively, an instruction may be segmented into a number of parts, each of which is stored in a separate memory location, as depicted by the instruction segments shown in the memory locations 1828-1829.

In general, the processor 1805 is given a set of instructions which are executed therein. The processor 1805 then waits for a subsequent input, to which it reacts by executing another set of instructions. Each input may be provided from one or more of a number of sources, including data generated by one or more of the input devices 1802, 1803, data received from an external source across one of the networks 1820, 1822, data retrieved from one of the storage devices 1806, 1809, or data retrieved from a storage medium 1825 inserted into the corresponding reader 1812. The execution of a set of the instructions may in some cases result in output of data. Execution may also involve storing data or variables to the memory 1834.

The method of rendering a digital image uses input variables 1854, which are stored in the memory 1834 in corresponding memory locations 1855-1858, and produces output variables 1861, which are stored in the memory 1834 in corresponding memory locations 1862-1865.
Intermediate variables may be stored in memory locations 1859, 1860, 1866 and 1867.

The register section 1844-1846, the arithmetic logic unit (ALU) 1840, and the control unit 1839 of the processor 1805 work together to perform sequences of micro-operations needed to perform "fetch, decode, and execute" cycles for every instruction in the instruction set making up the program 1833. Each fetch, decode, and execute cycle comprises:

(a) a fetch operation, which fetches or reads an instruction 1831 from a memory location 1828;

(b) a decode operation in which the control unit 1839 determines which instruction has been fetched; and

(c) an execute operation in which the control unit 1839 and/or the ALU 1840 execute the instruction.

Thereafter, a further fetch, decode, and execute cycle for the next instruction may be executed. Similarly, a store cycle may be performed by which the control unit 1839 stores or writes a value to a memory location 1832.

Each step or sub-process in the processes of Figs. 2-15 is associated with one or more segments of the program 1833 and is performed by the register section 1844-1847, the ALU 1840, and the control unit 1839 in the processor 1805 working together to perform the fetch, decode, and execute cycles for every instruction in the instruction set for the noted segments of the program 1833.

The method of rendering a digital image may alternatively be implemented in dedicated hardware such as one or more integrated circuits performing the functions or sub-functions of rendering a digital image. Such dedicated hardware may include graphic processors, digital signal processors, or one or more microprocessors and associated memories.

Rendering Pipeline

A rendering pipeline 200 within which the embodiments of the invention may be implemented is illustrated in Fig. 2. The pipeline 200 comprises a PDL Interpreter Module 201, a Display List (DL) Builder Module 203, and a RIP module 207.
The PDL Interpreter module 201 converts graphic objects and clip regions described in a page description language (PDL) to a form that can be understood by the DL Builder module 203. The Display List Builder module 203 may also receive input in the form of function calls to a GDI interface. The Display List Builder module 203 constructs a display list (DL) 205 of graphic objects in a form that is optimised for rendering by the RIP module 207. The RIP module 207 renders the display list to an output pixel-based image 210.

The PDL Interpreter module 201 and the Display List Builder module 203 are preferably implemented as driver software modules running on a host PC processor that is in communication with the system 100. Alternatively, the modules 201 and 203 may be implemented as embedded software modules running on the processor 120 within the (printing) system 100.

The RIP module 207 is implemented on the rendering apparatus 140 in the system 100, which can be a hardware device or a software module. The software modules may be represented by computer program code stored upon a storage medium, such as the HDD 156.

Constructing Object-Based Display List

Fig. 3 illustrates the method 300 of constructing or generating an object-based display list according to one embodiment of the invention, carried out by the Display List Builder module 203 of Fig. 2. As noted above, the module 203 may be implemented as embedded software running on the processor 120 of Fig. 1. The software may be retrieved from the system ROM 162 or HDD 156 and stored in the system RAM 152 during execution on the processor 120. The operation of the Display List Builder module 203 is described with reference to Fig. 3.

Processing begins in step 310, where an object (the current object) is obtained (i.e. received) from the PDL Interpreter module 201. In decision step 320, the method 300 checks using the processor 120 if the current object is valid.
The object may be stored in the system RAM 152. If step 320 returns false (No), the method 300 ends at step 399, which means the display list is ready to be rendered by the RIP module 207 using the method 1000 of Fig. 10. Otherwise, if decision step 320 returns true (Yes), the method 300 proceeds to step 330, where the current object's clips are processed using the processor 120, as described hereinafter with reference to Fig. 4. The clips may also be stored in the system RAM 152.

In decision step 340, a check is made using the processor 120 to determine if the object is to be composited. If step 340 returns true (Yes), step 350 detects and decomposes objects using the processor 120 according to the method 500 illustrated in Fig. 5. The decomposed objects may then be stored in the system RAM 152. Following step 350, step 360 adds the object to the compositing grid using the processor 120, as described hereinafter with reference to Fig. 8. The compositing grid may also be stored in the system RAM 152.

If decision step 340 returns false (No), step 370 adds the object to the display list using the processor 120 in accordance with the method 900 of Fig. 9. The display list may be stored in the system RAM 152. After steps 360 and 370, the method 300 returns to step 310 to await the next object from the PDL Interpreter module 201.

Processing Object Clips

A method 400 of processing object clips, as used in step 330 of Fig. 3, is described hereinafter with reference to Fig. 4. The method 400 begins at step 410, where the variable TempData is initialised. The initialisation may be done by the processor 120 and the initialised data can be stored in the system RAM 152. This is done by setting TempData.Clip to an empty list and setting TempData.ClipBounds to the page bounds. In step 420, the next clip for the current object, called Clip, is obtained, e.g. from the system RAM 152 using the processor 120.
Decision step 430 checks if the Clip is valid using the processor 120. If step 430 returns false (No), the method 400 ends at step 499. Otherwise, if step 430 returns true (Yes), the method 400 continues at step 440, which adds data for the current clip to a current clip group using the processor 120. Each clip is stored in its own Clip Group in the system RAM 152. The clip data added in step 440 is all the data required for rendering the current clip, principally its edges, levels and fill rules.

In step 450, the clip group is added to TempData.Clip using the processor 120. That is, a pointer to the current clip group is added to the TempData.Clip list. In step 460, TempData.ClipBounds is intersected with the bounding box of Clip (Bbox(Clip)) and the result is stored back in TempData.ClipBounds using the processor 120, before processing returns to step 420 to process the next clip for the current object. Note that step 460 is only done when Clip is a clip-in; otherwise step 460 is skipped.

Object Detection and Decomposition

Fig. 5 depicts the method 500 of detecting and decomposing compositing objects (as used in step 350 of Fig. 3). The method 500 begins at decision step 510, where the object is tested using the processor 120 to determine if the object is a bitmap- or path-based glyph string. The object may be stored in the system RAM 152. If step 510 returns true (Yes), step 520 detects and decomposes breakable glyphs using the processor 120, by executing the method 600 of Fig. 6. The method 500 ends after this at step 599.

If step 510 returns that the object is not a glyph string (No), decision step 530 makes a check using the processor 120 to determine if the path consists of multiple sub-paths. If the path consists of multiple sub-paths (Yes), step 540 detects and decomposes breakable paths, by executing a method 700 of Fig. 7, which is described hereinafter.
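The decomposition decision of method 500 (steps 510 to 550) can be sketched as follows. The object representation and field names here are hypothetical, not the patent's actual data structures, and the spread tests of methods 600 and 700 are omitted for brevity:

```python
def detect_and_decompose(obj, display_list):
    """Sketch of method 500: route an object to glyph decomposition
    (step 520), path decomposition (step 540), or add it whole
    (step 550).  Field names are illustrative only."""
    if obj["kind"] == "glyph_string":
        # Step 520: a glyph string may be broken into per-glyph objects.
        display_list.extend(obj["glyphs"])
    elif obj["kind"] == "path" and len(obj["subpaths"]) > 1:
        # Step 540: a path with multiple sub-paths may be broken up.
        display_list.extend(obj["subpaths"])
    else:
        # Step 550: add the object without any decomposition.
        display_list.append(obj)
    return display_list
```

In practice the two decomposition branches are conditional on the spread tests described next; the sketch only shows the routing.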
After step 540, the method 500 ends at step 599. Otherwise, if the path does not consist of multiple sub-paths (No), step 550 follows step 530, which adds the object to the display list without any decomposition using the processor 120. A method 900 of Fig. 9 describes the steps within step 550 in more detail. The method 500 then ends at step 599.

Detecting and Decomposing Glyph Strings

Fig. 6 illustrates a method 600 of detecting and decomposing glyph strings, as used in step 520 of method 500. The method 600 begins at step 610, which initialises spixels and gpixels; spixels is calculated as the number of pixels in the bounding box of the string, by multiplying the width and height of the bounding box, and gpixels is set to 0.

In step 620, the next glyph in the string, called Gg, is retrieved. In decision step 630, a check is made to determine if Gg is valid. If so (Yes), step 640 executes, which accumulates gpixels with the number of pixels in the glyph Gg. Again, the number of pixels in Gg is calculated as the product of the width and height of the glyph Gg. Processing continues from step 640 to step 620 to retrieve the next glyph in the string.

If not valid (No) at step 630, decision step 650 executes, which checks to determine if the ratio of the gpixels to the spixels is greater than SPREADTHRESHOLD. The value for SPREADTHRESHOLD is a tuneable run-time parameter. The ratio helps determine quickly if the text is spread out or at an angle such that there is a lot of empty space in the glyph string.

If step 650 returns true (Yes), step 660 executes, which adds each glyph as a separate object into the display list. Instead of having each glyph as a separate object in the display list, subsets of glyphs can be added as separate objects to help fine-tune the balance between memory usage and composite object sparseness (COS).
For example, a glyph string "Superman" may be broken into objects "Su", "per" and "man" instead of "S", "u", "p", "e", "r", "m", "a", "n". The object "Su" may use less memory in the system than "S" and "u" separately, and still may have a reasonably small bounding box. The method 900 of Fig. 9 describes step 660 in more detail. After step 660, or if step 650 returns false (No), the method 600 ends at step 699.

Detecting and Decomposing Paths

Fig. 7 illustrates the method 700 of detecting and decomposing paths, as used in step 540 of Fig. 5. The method 700 begins at step 710, which initialises ppixels and sppixels; ppixels is calculated as the number of pixels in the bounding box of the entire path, by multiplying the width and height of the bounding box, and sppixels is set to 0.

Step 720 retrieves the next sub-path, called Gp, from the path for the current object. Decision step 730 checks if Gp is valid. If valid (Yes), step 740 executes, which accumulates sppixels with the number of pixels in the sub-path Gp. Again, the number of pixels in Gp is calculated as the product of the width and height of the sub-path Gp. Processing from step 740 returns to step 720 to retrieve the next sub-path in the path.

If not valid (No) at step 730, decision step 750 executes to determine if the ratio of the sppixels to the ppixels is greater than SPREADTHRESHOLD. The value for SPREADTHRESHOLD is a tuneable run-time parameter. The ratio helps determine quickly if the sub-paths are spread out such that there is a lot of empty space in the path.

If step 750 returns true (Yes), step 760 executes, which adds each sub-path as a separate object into the display list. Instead of having each sub-path as a separate object in the display list, subsets of sub-paths can be added as separate objects to balance memory usage against composite object sparseness (COS). Method 900 of Fig. 9 illustrates step 760 in more detail.
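The spread test shared by steps 650 and 750 can be sketched as below. The coverage computation follows the text (bounding-box width times height, summed over the parts), while the threshold argument stands in for the tuneable SPREADTHRESHOLD parameter; the specific values are only examples:

```python
def exceeds_spread_threshold(bbox_w, bbox_h, part_boxes, spread_threshold):
    """Sketch of steps 610-650 / 710-750: spixels (or ppixels) is the
    pixel count of the whole bounding box; gpixels (or sppixels)
    accumulates the pixel counts of the part bounding boxes.  The test
    compares their ratio against SPREADTHRESHOLD."""
    whole_pixels = bbox_w * bbox_h                     # spixels / ppixels
    part_pixels = sum(w * h for (w, h) in part_boxes)  # gpixels / sppixels
    return part_pixels / whole_pixels > spread_threshold
```

For instance, with a 100x100 string bounding box and two 20x20 glyph boxes, the ratio is 0.08; how that compares to the threshold decides whether the string is decomposed.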
After step 760, or if step 750 returns false (No), the method 700 ends at step 799.

Adding Compositing Objects in Compositing Grid Cells

Fig. 8 illustrates the method 800 for adding or recording a compositing object to the compositing grid, as used in step 360 of method 300 in Fig. 3. The compositing grid divides a page equally into cells with a pre-determined number of rows MAXCGROWS and columns MAXCGCOLS. These values should be tuned to give the best balance of memory usage and rendering performance as required. Each compositing object may be added to one or more compositing grid cells according to the extents of the compositing object. The compositing grid is used while rendering to efficiently determine which compositing objects are above a particular object.

The method 800 begins at step 810, which retrieves the next recently added object in the display list, called Obj. If the object sent from the PDL Interpreter module 201 was not decomposed by either method 600 or 700, then there is only one recently added object; otherwise, there are many recently added objects.

In decision step 815, a check is made to determine if Obj is valid. If Obj is not valid (No), the method 800 ends at step 899. Otherwise, if Obj is valid (Yes), step 820 is executed, which intersects the bounding box of Obj with the bounding box of the clips clipping Obj to give the visible bounding box Vbox. In step 825, the variables minx, miny, maxx and maxy for Vbox are calculated, which are indices into the DL Compositing Grid. The formulae used are as follows:

maxx = (VBox.Right * MAXCGROWS) / Page Width;
maxy = (VBox.Bottom * MAXCGCOLS) / Page Height;
minx = (VBox.Left * MAXCGROWS) / Page Width;
miny = (VBox.Top * MAXCGCOLS) / Page Height;

Step 830 sets the variable I to minx. In decision step 835, a check is made to determine if I is less than or equal to maxx.
If step 835 returns false (No), step 810 is executed and the next recently added object in the display list is retrieved. Otherwise, if step 835 returns true (Yes), step 840 sets the variable J to miny.

In decision step 845, a check is made to determine if J is less than or equal to maxy. If step 845 returns false (No), step 865 increments I by 1 and processing returns to step 835. Otherwise, if step 845 returns true (Yes), step 850 is executed, which appends a pointer to Obj to the linked list DL CompositingGrid[I][J].Active. Next, step 860 increments the value of J by 1 and processing returns to step 845.

Adding Objects to Display List

Fig. 9 illustrates the method 900 of adding one or more objects to a display list, as used in step 370 of Fig. 3, step 550 of Fig. 5, step 660 of Fig. 6, and step 760 of Fig. 7. The method 900 begins at step 910, which retrieves the next object to add to the display list, called Obj. In decision step 920, a check is made to determine if Obj is valid. If Obj is not valid (No), the method 900 ends at step 999. Otherwise, if Obj is valid, step 930 creates object information for Obj, principally its edges, levels and fill rules. In step 940, Obj is linked to the clips stored in TempData.Clip, which was calculated in the method 400 of Fig. 4. Step 950 records the bounding box of the clips clipping Obj (TempData.ClipBounds) in the Obj structure. Step 960 adds Obj to the display list in increasing Z order, and processing returns to step 910 to get the next object to add to the display list.

Rendering Based on Display List

The display list 205 created by the DL Builder module 203 is rendered by the RIP Module 207 of Fig. 2. Fig. 10 illustrates at a high level the method 1000 of rendering the display list inside the RIP Module 207. The RIP Module renders sets of one or more objects, known as render tasks, one set at a time.
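The grid-index formulae of step 825 (reused in step 1315 during rendering) translate directly into code. The sketch below assumes integer pixel coordinates and integer division; the grid dimensions stand in for the tuneable MAXCGROWS and MAXCGCOLS parameters, and the values used are only examples:

```python
MAX_CG_ROWS = 7  # example values; cf. the 7-row, 6-column grid of Fig. 17
MAX_CG_COLS = 6

def grid_index_range(vbox, page_width, page_height):
    """Sketch of step 825: map a visible bounding box
    (left, top, right, bottom) to inclusive (minx, maxx, miny, maxy)
    compositing-grid indices using the formulae in the text."""
    left, top, right, bottom = vbox
    minx = (left * MAX_CG_ROWS) // page_width
    maxx = (right * MAX_CG_ROWS) // page_width
    miny = (top * MAX_CG_COLS) // page_height
    maxy = (bottom * MAX_CG_COLS) // page_height
    return minx, maxx, miny, maxy
```

For a 700x600 page, a box spanning (100, 100) to (350, 300) maps to rows 1-3 and columns 1-3, so the object is recorded in nine cells at most.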
The method 1000 begins at step 1010, which looks at the next object in increasing Z order in the display list, called Obj. In decision step 1020, a check is made to determine if Obj is valid. If Obj is not valid (No), step 1025 is executed, which renders the current render task RT. Method 1100 of Fig. 11 illustrates step 1025 in more detail. After step 1025, the method 1000 ends at step 1099.

If decision step 1020 is true (Yes), decision step 1030 determines if the complexity of the current render task RT would exceed the COMPLEXITYTHRESHOLD value if Obj's complexity were added to the complexity of the current render task RT. The complexity of an object may be determined by various factors, such as the number of edges and levels, and the type of fill (image, flat colour, gradient, etc.). The complexity of a compositing object is affected by its compositing list, as described with reference to step 1460 in method 1400, which is described hereinafter. COMPLEXITYTHRESHOLD is a tuneable parameter that can be changed to affect how objects are grouped into a render task and rendered together. If decision step 1030 returns true (Yes), step 1035 is executed, which renders the current render task RT using the method 1100 of Fig. 11. From step 1035, the method 1000 returns to step 1010 to look at the next object in the display list.

If decision step 1030 returns false (No), decision step 1040 determines if Obj is a compositing object. If step 1040 returns true (Yes), step 1045 adds all objects in Obj.CompositingList to the render task RT. Objects in Obj.CompositingList that are already in the current render task RT are not added again. When a compositing object is rendered, the object's compositing list is complete with all relevant objects which are underneath the object; this is ensured by step 1060 of Fig. 10. Following step 1045, step 1050 marks Obj as dormant in the compositing grid. The method 1500 of Fig. 15 describes step 1050 of Fig. 10 in more detail.

After step 1050, or if decision step 1040 returns false (No), Obj is added to the render task RT at step 1055. The level for Obj is flagged such that step 1230 in method 1200 of Fig. 12 returns true for edges of Obj if inside a clip-in and outside a clip-out. When the compositing list objects of a compositing object Obj are added to the render task RT in step 1045, their levels are not flagged as in step 1055, so that output only occurs where the compositing object Obj is visible. After step 1055 has executed, step 1060 updates the appropriate compositing lists with Obj using method 1300 of Fig. 13. In this manner, the compositing list for each compositing object is updated incrementally during rendering and is accurate when it is time to render each compositing object. Processing continues at step 1010 from step 1060, where the next object in the display list after Obj becomes the new Obj.

Render Set of Objects in Render Task

Fig. 11 illustrates the method 1100 of rendering a render task RT as created in the method 1000 of Fig. 10. The method 1100 starts at step 1105, which scan converts all the edges in the render task RT to allow rasterization of the render task. Edges of all the objects in the render task are combined and sorted in Y order down the page. Within each Y scanline, edges are sorted from left to right across the page. At step 1110, a variable CurY is set to the first scanline on the page with edges in the render task RT.

In decision step 1115, a check is made to determine if all the scanlines on the page are rendered. If step 1115 returns true (Yes), the method 1100 ends at step 1199. Otherwise, if step 1115 returns false (No), step 1120 sets a variable Nextloadscanline to the next scanline where new edges need to be loaded into the Active Edge List (AEL) from the scan converted render task RT.
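The render-task batching of steps 1010 to 1055 can be sketched as follows. This is a simplification: it treats complexity as a plain number per object, ignores compositing lists, and uses a caller-supplied threshold in place of COMPLEXITYTHRESHOLD; the emitted groups stand in for render tasks handed to method 1100:

```python
def group_into_render_tasks(display_list, complexity, threshold):
    """Sketch of the COMPLEXITYTHRESHOLD batching (step 1030): objects
    accumulate into the current render task until adding the next one
    would exceed the threshold, at which point the task is rendered
    (emitted here) and a new task is started."""
    tasks, current, total = [], [], 0
    for obj in display_list:
        cost = complexity(obj)
        if current and total + cost > threshold:
            tasks.append(current)   # step 1035: render the current task
            current, total = [], 0
        current.append(obj)         # step 1055: add Obj to the task
        total += cost
    if current:
        tasks.append(current)       # step 1025: render the final task
    return tasks
```

With a threshold of 10 and four objects of cost 4, the display list splits into two tasks of two objects each.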
Step 1125 calculates the minimum number of lines Nlines that can be rendered from the current scanline CurY to either the end of the page or Nextloadscanline (whichever is closer). Step 1130 renders Nlines scanlines using a method 1200 of Fig. 12. After step 1130, processing continues at step 1115 to check if all the scanlines on the page have been rendered.

Rendering Set of Scanlines in Render Task

Fig. 12 illustrates the method 1200 of rendering each scanline, as used in step 1130 of method 1100 in Fig. 11. In decision step 1205, a check is made to determine if the variable Nlines is less than 0 (i.e. the number of scanlines requested has been rendered). If step 1205 returns true (Yes), the method 1200 ends at step 1299. If step 1205 returns false (No), in step 1210 the variable CEdge is set to point to the first edge in the Active Edge List (AEL) and PEdge is set to point to a dummy first edge, which is left of CEdge in pixel value.

In decision step 1225, a check is made to determine if PEdge is valid and inside the right page bounds. If step 1225 returns true (Yes), decision step 1230 is executed, which determines if PEdge contributes anything to the output. This involves checking that the edge is an object edge, and inside the clip-ins and outside the clip-outs affecting PEdge. PEdge produces pixels if the object level has been flagged, as done in step 1055 of method 1000. In particular, edges of objects added to the render task in step 1045 of method 1000 will not produce pixel output unless the edges are inside the corresponding compositing object.

If step 1230 returns true (Yes), step 1235 is executed, which renders the pixels between edges PEdge and CEdge to the output based on the levels and fills, and performs compositing if required. After step 1235, or if step 1230 returns false (No), step 1240 is executed, which sets PEdge to CEdge and CEdge to the next edge in the Active Edge List (AEL).
From step 1240, processing continues at step 1225, which checks if PEdge is still valid and inside the right page bounds.

If step 1225 returns false (No), step 1215 updates each edge in the active edge list (AEL) to the correct X pixel location for the next scanline. If an edge finishes at the current scanline, step 1215 removes that edge from the active edge list (AEL). From step 1215, step 1220 decrements Nlines by one and increments CurY by one. From step 1220, processing returns to decision step 1205 to test if Nlines is less than zero.

Updating or Generating Compositing Lists

Fig. 13 illustrates the method 1300 of updating, in step 1060 of Fig. 10, the compositing list of each compositing object that covers the object Obj, to generate compositing lists. The method 1300 begins at step 1310, which calculates the visible bounding box Vbox as the intersection of the bounding box of the object Obj (Obj being the current object in step 1060 of Fig. 10) and the bounding box of the clips that clip Obj.

Step 1315 calculates the variables minx, miny, maxx and maxy indices for the DL compositing grid from the visible bounding box Vbox calculated in step 1310. This allows looking at a small subset of compositing grid cells to efficiently work out which compositing objects cover the current object Obj. The calculation is as follows:

maxx = (VBox.Right * MAXCGROWS) / Page Width;
maxy = (VBox.Bottom * MAXCGCOLS) / Page Height;
minx = (VBox.Left * MAXCGROWS) / Page Width;
miny = (VBox.Top * MAXCGCOLS) / Page Height;

Step 1320 sets the variable I to minx. In decision step 1325, a check is made to determine if I is less than or equal to maxx. If step 1325 returns false (No), the method 1300 ends at step 1399. Otherwise, if step 1325 returns true (Yes), step 1330 sets J to miny. In decision step 1335, a check is made to determine if J is less than or equal to maxy.
If step 1335 returns false (No), step 1350 increments I by one and processing returns to step 1325, which tests if I is less than or equal to maxx. If step 1335 returns true (Yes), step 1340 is executed, which appends Obj to the compositing lists of overlapping compositing objects in the compositing grid's active list at column I and row J, according to the method 1400 of Fig. 14. Step 1345 increments J by one, and processing returns to step 1335, which tests if J is less than or equal to maxy.

Appending Object to Compositing Lists in Cell

Fig. 14 illustrates a method 1400 of appending an object to the compositing lists in a compositing grid cell. The method 1400 iterates through all the compositing objects in the active list of compositing grid cell[I][J], where I and J are the current index values in step 1340 of Fig. 13 described above, and inserts Obj into the compositing list of each compositing object found to overlap Obj.

Step 1410 retrieves the next compositing object in the DL compositing grid cell[I][J], called CObj. Decision step 1420 checks if CObj is valid. If CObj is not valid (No), the method 1400 ends at step 1499. Otherwise, if CObj is valid (Yes), step 1430 calculates the visible bounding box Vbox as the intersection of the bounding box of the object Obj with the bounding box of the compositing object CObj.

Decision step 1440 checks to determine if Vbox is empty. If Vbox is not empty (No), this means the compositing object CObj is above and covers the object Obj, and step 1450 adds Obj to CObj.CompositingList. The complexity of CObj is updated in step 1460 to reflect the current complexity after adding Obj to the compositing list. This complexity is used in step 1030 of Fig. 10 when deciding which set of objects to render in a render task RT.

After step 1460, or if step 1440 returns true (Yes), processing continues at step 1410, which gets the next compositing object from the compositing grid cell[I][J].
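Method 1400 amounts to a bounding-box overlap scan over a cell's active list. A minimal sketch, with hypothetical dictionary-based objects standing in for the display list structures:

```python
def box_intersection(a, b):
    """Intersection of two (left, top, right, bottom) boxes, or None
    when the intersection is empty (cf. the Vbox test of step 1440)."""
    left, top = max(a[0], b[0]), max(a[1], b[1])
    right, bottom = min(a[2], b[2]), min(a[3], b[3])
    return (left, top, right, bottom) if left < right and top < bottom else None

def append_to_cell_compositing_lists(obj, active_list):
    """Sketch of method 1400: add obj to the compositing list of every
    compositing object CObj in the cell whose bounding box overlaps
    obj's, updating CObj's complexity as in step 1460."""
    for cobj in active_list:                          # steps 1410-1420
        if box_intersection(obj["bbox"], cobj["bbox"]) is None:
            continue                                  # step 1440: Vbox empty
        if obj not in cobj["compositing_list"]:       # avoid duplicates
            cobj["compositing_list"].append(obj)      # step 1450
            cobj["complexity"] += obj["complexity"]   # step 1460
```

The duplicate check mirrors the behaviour seen in the example later, where an object already present in a compositing list is not added again when further cells are processed.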
Marking Rendered Compositing Object as Dormant

Fig. 15 illustrates the method 1500 of marking a rendered compositing object as dormant in the compositing grid, as used by step 1050 of Fig. 10. The method moves an active compositing object to the dormant list. The reason for doing this is that once a compositing object has been rendered to the output buffer, there is no need to update its compositing list with objects having a higher priority value, as an object below can never cover an object above. By removing compositing objects from the active compositing list after they have been rendered, the number of compositing objects that need to be checked in the method 1400 of Fig. 14 is reduced to only the relevant compositing objects. When rendering to a strip buffer instead of a frame store, the relevant compositing grid cells need to have their dormant compositing lists made active again.

The method 1500 begins at step 1510, which calculates the visible bounding box Vbox, and step 1515 calculates the variables minx, maxx, miny and maxy indices for the compositing grid using Vbox, similar to steps 1310 and 1315 of Fig. 13.

Step 1520 sets the variable I to minx. Step 1525 checks if I is less than or equal to maxx. If step 1525 returns false (No), the method 1500 ends at step 1599. Otherwise, if step 1525 returns true (Yes), step 1530 sets J to miny.

Decision step 1535 checks if J is less than or equal to maxy. If step 1535 returns false (No), step 1555 increments I by one, and processing returns to step 1525, which tests if I is less than or equal to maxx. If decision step 1535 returns true (Yes), step 1540 removes (pops) the first compositing object, called CObj, from the DL active compositing list of compositing grid cell[I][J], and step 1545 appends CObj to the same compositing grid cell's dormant list.
Because objects in each cell of the compositing grid are inserted in increasing Z order, step 1540 just needs to pop the first element in the active compositing list to remove the current compositing object being rendered. Step 1550 increments J by one, and processing returns to step 1535, which tests if J is less than or equal to maxy.

An example of rendering a page comprising objects describing a digital image is described hereinafter.

Example of Rendering a Page

Fig. 16 illustrates an example page 1601, the rendering of which is explained to illustrate aspects of the invention. The page 1601 consists of three objects in Z order, as follows:

Object 1611 is an opaque text string 'Text String 1' with a string bounding box 1611BB.

Object 1621 is an opaque text string 'Text String 2' with a string bounding box 1621BB.

Object 1631 is a compositing watermark with text 'Watermark' and a string bounding box 1631BB. Object 1631 actually overlaps Object 1621, but does not overlap Object 1611, even though the bounding box 1631BB overlaps Object 1611.

For simplicity, none of the objects is assumed to have any clips associated with it, and the output buffer is a frame store, which has enough memory to hold the entire page contents for rendering.

A known method of rendering the page 1601 would involve rendering object 1611 to the frame store, followed by rendering object 1621 on top of the current frame store. The bounding box 1631BB for the object (watermark) 1631 overlaps the bounding boxes 1611BB and 1621BB of both objects 1611 and 1621, so the object 1631 needs to be rendered along with objects 1621 and 1611. Object 1611 is scan converted unnecessarily using the known method. For a more complex page with many edges, doing so adversely affects performance and increases the memory usage needed to track compositing object overlap.

In contrast, objects 1611, 1621 and 1631 on the page 1601 go through the PDL Interpreter Module 201 of Fig.
2 and are passed into the DL Builder Module 203, which creates a Z-ordered object display list 205 of Fig. 2. When building the display list, the Object 1611 is passed through the PDL Interpreter Module 201 to create a single object, which is added to the display list by step 370 of Fig. 3. The next Object 1621 is passed through the PDL Interpreter Module 201 to create a single object, which is appended to the display list through step 370 of Fig. 3.

When processing the Object 1631, the method 300 detects this Object 1631 as a compositing object in decision step 340. The Object 1631 goes through step 350 (method 500 of Fig. 5). Step 520 of Fig. 5 is executed, which performs the method 600 of Fig. 6. The diagonal watermark is assumed to be such that the test in step 650 returns true (Yes), resulting in each glyph W, a, t, e, r, m, a, r and k being added as a separate object into the display list in step 660 of Fig. 6. Referring back to Fig. 3, step 360 is executed to add each of the recently added objects (glyphs W, a, t, e, r, m, a, r and k) to the compositing grid.

Fig. 17 shows an exemplary compositing grid 1701 that has 7 rows (1-7) and 6 columns (A-F). To easily locate each cell in the compositing grid, the columns are enumerated alphabetically while the rows are enumerated numerically. The method 800 of Fig. 8 adds each compositing object, recently added to the display list, to the corresponding cells of the compositing grid where the bounding box of the object touches the cell. So:

Object 'W' is added to cells 2A, 2B, 3A, 3B.
Object 'a' is added to cells 3B, 3C, 4B, 4C.
Object 't' is added to cells 3B, 3C, 4B, 4C.
Object 'e' is added to cell 4C.
Object 'r' is added to cells 4C, 4D, 5C, 5D.
Object 'm' is added to cells 4C, 4D, 5C, 5D.
Object 'a' is added to cells 5D, 5E, 6D, 6E.
Object 'r' is added to cells 5D, 5E, 6D, 6E.
Object 'k' is added to cell 6E.
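Once a glyph has been rendered, method 1500 moves it from the active list to the dormant list of every cell it touches. A minimal sketch of that bookkeeping, assuming each cell is a simple pair of lists kept in increasing Z order:

```python
def mark_dormant(grid, minx, maxx, miny, maxy):
    """Sketch of method 1500: in every covered cell, pop the first
    entry of the active compositing list (the object currently being
    rendered, because cells are filled in increasing Z order) and
    append it to the cell's dormant list (steps 1540-1545)."""
    for i in range(minx, maxx + 1):      # steps 1520-1555: loop over I
        for j in range(miny, maxy + 1):  # steps 1530-1550: loop over J
            active, dormant = grid[i][j]
            if active:
                dormant.append(active.pop(0))
```

Popping only the head of the active list is what makes the Z-order insertion discipline pay off: no search of the cell is needed.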
In this way the page has been converted into a display list 205 in Fig. 2, ready to be rendered by the RIP Module 207 into the Output image 210. The COMPLEXITYTHRESHOLD in step 1030 of Fig. 10 is assumed to be such that only one opaque object can be rendered at a time.

When rendering the display list for page 1601, the first object 1611 is added to the render task RT in step 1055. Step 1060 updates the compositing lists for the object 1611. Using the bounding box of the object 1611, the following calculations are made in step 1315 of Fig. 13: minx = 3, maxx = 3, miny = D and maxy = F. As cells 3D, 3E and 3F have no compositing objects, none of the compositing lists is updated by method 1400 in Fig. 14. Based on the previous assumption, there is also no room in the render task for object 1621. Thus, the render task RT with only object 1611 is rendered to the frame store using method 1100 in Fig. 11.

Object 1621 is similarly added to the render task RT in step 1055, and the compositing lists are updated in step 1060. Using the bounding box of the object 1621, the following calculations are made in step 1315 of Fig. 13: minx = 4, maxx = 5, miny = C and maxy = D. The cells are processed using method 1400 in Fig. 14 to add the object 1621 to the compositing list of each relevant compositing object. First, cell 4C is processed, and the object 1621 is added to the compositing lists of compositing objects 'e', 'r' and 'm'. The object 1621 is not added for compositing objects 'a' and 't', because the Vbox calculated for those objects in step 1440 of Fig. 14 is empty. Next, cells 4D and 5C are processed, but since the object 1621 is already in the compositing lists of compositing objects 'r' and 'm', it is not added again. Similarly, in processing cell 5D using the method 1400, the compositing lists of compositing objects 'r' and 'm' remain unchanged, since the object 1621 is already in their compositing lists.
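The compositing-list update just walked through can be sketched as follows. This is an illustrative reconstruction, not the patent's own code; the dictionary layout and the names `bbox_intersect` and `update_compositing_lists` are assumptions, while the Vbox (empty-intersection) test and the duplicate check mirror steps 1440 and the cell processing of Fig. 14.

```python
def bbox_intersect(a, b):
    """Intersection of two (x0, y0, x1, y1) boxes, or None if empty.

    An empty result corresponds to an empty Vbox in step 1440: the opaque
    object does not actually overlap that compositing object.
    """
    x0, y0 = max(a[0], b[0]), max(a[1], b[1])
    x1, y1 = min(a[2], b[2]), min(a[3], b[3])
    return (x0, y0, x1, y1) if x0 <= x1 and y0 <= y1 else None

def update_compositing_lists(obj, obj_bbox, grid, cells):
    """Add an opaque object to the compositing list of every overlapping
    compositing object in the given cells, skipping duplicates."""
    for cell in cells:
        for comp in grid.get(cell, []):
            if bbox_intersect(obj_bbox, comp["bbox"]) is None:
                continue  # empty Vbox: skip, as for 'a' and 't' in cell 4C
            if obj not in comp["list"]:
                comp["list"].append(obj)  # added once, even if the
                # compositing object appears in several cells (4D, 5C, 5D)
```

This reproduces the behaviour described above: object 1621 is recorded once against 'e', 'r' and 'm', and revisiting cells 4D, 5C and 5D changes nothing.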
The compositing list of the compositing object 'a' also remains unchanged, as the Vbox is empty in step 1440 of Fig. 14.

The remaining objects in the display list are all compositing objects of the "Watermark" string, which were decomposed into individual glyph objects and added to the display list using the method 600 in Fig. 6.

First, object 'W' is rendered with its compositing list, which is empty, onto the frame store. Object 'W' is made dormant in compositing grid cells 2A, 2B, 3A and 3B by the method 1500 in Fig. 15. Object 'W' is added to the compositing list for object 'a', but not for object 't', from cell 3B, using the method 1300 in Fig. 13.

Object 'a' is rendered with its compositing list, which contains object 'W', onto the frame store. Object 'W' is only scan converted; no pixels are output for it when rendering object 'a'. Object 'a' is made dormant in compositing grid cells 3B, 3C, 4B and 4C by the method 1500 in Fig. 15. Object 'a' is added to the compositing list for object 't', but not for objects 'e', 'r' and 'm', from cell 4C, using the method 1300 in Fig. 13.

Object 't' is rendered with its compositing list, which contains object 'a', onto the frame store. Object 't' is made dormant in compositing grid cells 3B, 3C, 4B and 4C by the method 1500 in Fig. 15. Object 't' is added to the compositing list for object 'e', but not for objects 'r' and 'm', from cell 4C, using the method 1300 in Fig. 13.

Object 'e' is rendered with its compositing list, which contains object 't' and opaque object 1621 ('Text String 2'), onto the frame store. Object 'e' is made dormant in compositing grid cell 4C by the method 1500 in Fig. 15. Object 'e' is added to the compositing list for object 'r', but not for object 'm', from cell 4C, using the method 1300 in Fig. 13.

Object 'r' is rendered with its compositing list, which contains object 'e' and opaque object 1621 ('Text String 2'), onto the frame store.
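The repeated "made dormant" step can be sketched in code. This is an assumed reconstruction of the behaviour attributed to method 1500, not the patent's own implementation: because each cell's list is kept in increasing Z order and compositing objects are rendered in that same order, the object being retired is always the head of each of its cells' active lists, so removal is a pop of the first element (as noted for step 1540 above).

```python
def make_dormant(comp_obj, grid, cells):
    """Retire a just-rendered compositing object from its grid cells.

    grid maps cell names to Z-ordered lists of compositing objects;
    cells is the set of cells the object's bounding box touches.
    """
    for cell in cells:
        active = grid[cell]
        # Rendering follows Z order, so the retiring object must be the
        # head of every cell list it appears in.
        assert active and active[0] == comp_obj, "render order must follow Z order"
        active.pop(0)
```

After `make_dormant('W', ...)`, cell 3B's active list starts at 'a', which is why subsequent opaque objects are matched against 'a' rather than 'W'.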
Object 'r' is made dormant in compositing grid cells 4C, 4D, 5C and 5D by the method 1500 in Fig. 15. Object 'r' is added to the compositing list for object 'm', but not for objects 'a' and 'r', from cell 5D, using the method 1300 in Fig. 13.

Object 'm' is rendered with its compositing list, which contains object 'r' and opaque object 1621 ('Text String 2'), onto the frame store. Object 'm' is made dormant in compositing grid cells 4C, 4D, 5C and 5D by the method 1500 in Fig. 15. Object 'm' is added to the compositing list for object 'a', but not for object 'r', from cell 5D, using the method 1300 in Fig. 13.

Object 'a' is rendered with its compositing list, which contains object 'm', onto the frame store. Object 'a' is made dormant in compositing grid cells 5D, 5E, 6D and 6E by the method 1500 in Fig. 15. Object 'a' is added to the compositing list for object 'r' from cell 6E using the method 1300 in Fig. 13.

Object 'r' is rendered with its compositing list, which contains object 'a', onto the frame store. Object 'r' is made dormant in compositing grid cells 5D, 5E, 6D and 6E by the method 1500 in Fig. 15. Object 'r' is added to the compositing list for object 'k' from cell 6E using the method 1300 in Fig. 13.

Object 'k' is rendered with its compositing list, which contains object 'r', onto the frame store. Object 'k' is made dormant in compositing grid cell 6E by the method 1500 in Fig. 15. The compositing grid is now empty, so no object's compositing list is updated using the method 1300 in Fig. 13.

The rendering of the compositing objects is complicated in this scenario because COMPLEXITYTHRESHOLD was chosen such that only one object is added to the render task at any time. By increasing this threshold, the same scenario renders more quickly; for example, objects 'W', 'a', 't', 'e', 'r' and the object 1621 can be rendered together in a single render task without increasing the number of objects in the compositing lists.
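The effect of the complexity threshold can be illustrated with a small batching sketch. This is not the patent's step 1030 logic itself; the greedy grouping, the per-object cost function and the threshold value are all illustrative assumptions, shown only to make the threshold's role concrete.

```python
def batch_into_render_tasks(objects, cost, threshold):
    """Greedily group consecutive display-list objects into render tasks.

    A task is flushed (rendered to the frame store) as soon as adding the
    next object would push its accumulated complexity past the threshold.
    """
    tasks, current, load = [], [], 0
    for obj in objects:
        c = cost(obj)
        if current and load + c > threshold:
            tasks.append(current)  # flush the full task for rendering
            current, load = [], 0
        current.append(obj)
        load += c
    if current:
        tasks.append(current)
    return tasks
```

With a threshold admitting only one object per task (as assumed in the scenario above), every object is rendered alone; raising the threshold lets several opaque objects and glyphs share a task, reducing the number of render passes without growing the compositing lists.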
The embodiments of the invention allow the grouping of objects in the display list to be flexible without, unlike existing methods, adversely affecting performance by scan converting more objects than necessary.

Methods, apparatuses, and computer program products for rendering a digital image are disclosed.

The foregoing describes only some embodiments of the present invention, and modifications and/or changes can be made thereto without departing from the scope and spirit of the invention, the embodiments being illustrative and not restrictive.

In the context of this specification, the word "comprising" means "including principally but not necessarily solely" or "having" or "including", and not "consisting only of". Variations of the word "comprising", such as "comprise" and "comprises", have correspondingly varied meanings.

Claims (15)

  2. The method according to claim 1, further comprising the steps of:
determining, within the cell associated with the object, the compositing object overlapping the object; and
updating the compositing list of the compositing object.
  3. The method according to claim 1, further comprising the steps of:
determining if the compositing object can be decomposed into a plurality of glyphs; and
decomposing a compositing object into a plurality of glyphs, each glyph being treated as a distinct compositing object, if the compositing object can be decomposed.
  4. The method according to claim 3, wherein the determining step comprises using a functional relationship of a number of pixels in a bounding box of the object and a sum of pixels in a plurality of bounding boxes of the plurality of glyphs.
  5. The method according to claim 3, further comprising the step of adding the distinct compositing object to at least one compositing grid associated with the distinct compositing object.
  6. The method according to claim 1, further comprising the steps of:
determining if the compositing object can be decomposed into a plurality of subpaths; and
decomposing a compositing object into a plurality of subpaths, each subpath being treated as a distinct compositing object, if the compositing object can be decomposed.
  7. The method according to claim 6, wherein the determining step comprises using a functional relationship of a number of pixels in a bounding box of the object and a sum of pixels in a plurality of bounding boxes of the plurality of subpaths.
  8. The method according to claim 6, further comprising the step of adding the distinct compositing object to at least one compositing grid associated with the distinct compositing object.
  9. The method according to claim 1, wherein the objects in each cell in the compositing grid are considered in order of increasing priority, the method further comprising the step of rendering the object to an image store using another, different rendering method if the currently-considered object does not require compositing.
  10. The method according to claim 1, wherein, after rendering the compositing object, the compositing object is removed from all cells of the compositing grid.
  11. The method according to claim 2, wherein a compositing object is determined to be overlapping with the object if there is at least one common pixel between a bounding box of the compositing object and a bounding box of the object.
  12. The method according to claim 1, wherein the cells associated with each object are calculated using a bounding box of the object.
  13. The method according to claim 1, wherein each visible compositing object is added to at least one cell of the compositing grid based on a bounding box of the compositing object.
  14. The method according to claim 1, further comprising the steps of:
determining if the compositing object can be decomposed into a plurality of glyphs; and
decomposing a compositing object into a plurality of glyphs if the compositing object can be decomposed, wherein a subset of the plurality of glyphs is treated as a distinct object.
  15. An apparatus for rendering a digital image, comprising:
a memory for storing data and a computer program; and
a processor unit coupled to the memory for executing the computer program, the computer program comprising:
computer program code means for receiving a list of objects describing the image, the list comprising at least one object to be composited, the objects in the list being ordered in priority order from a lowest-priority object to a highest-priority object;
computer program code means for associating each compositing object with at least one cell in a compositing grid for the image;
computer program code means for, for each object, updating the compositing list of each compositing object in at least one cell associated with the object; and
computer program code means for rendering the object with the compositing list of the compositing object if a portion of the object contributes to a rendered output.
  16. The apparatus according to claim 15, further comprising a module coupled to the apparatus for providing a description of the digital image comprising the objects.
  17. The apparatus according to claim 15, wherein the apparatus is a printer comprising a print engine coupled to the memory and the processor unit.
  18. A computer program product comprising a computer readable medium having recorded therein a computer program for rendering a digital image for execution by a processing unit, the computer program comprising:
computer program code means for receiving a list of objects describing the image, the list comprising at least one object to be composited, the objects in the list being ordered in priority order from a lowest-priority object to a highest-priority object;
computer program code means for associating each compositing object with at least one cell in a compositing grid for the image;
computer program code means for, for each object, updating a compositing list of each compositing object in at least one cell associated with the object; and
computer program code means for rendering the object with the compositing list of the compositing object if a portion of the object contributes to a rendered output.
DATED this 16th Day of April 2009
CANON KABUSHIKI KAISHA
Patent Attorneys for the Applicant
SPRUSON & FERGUSON
AU2009201502A 2009-04-16 2009-04-16 Rendering compositing objects Abandoned AU2009201502A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2009201502A AU2009201502A1 (en) 2009-04-16 2009-04-16 Rendering compositing objects


Publications (1)

Publication Number Publication Date
AU2009201502A1 true AU2009201502A1 (en) 2010-11-04

Family

ID=43033569



Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115469959A (en) * 2022-11-11 2022-12-13 成都摹客科技有限公司 Page rendering method, rendering device and computer storage medium
CN115469959B (en) * 2022-11-11 2023-01-31 成都摹客科技有限公司 Page rendering method, rendering device and computer storage medium


Legal Events

Date Code Title Description
MK4 Application lapsed section 142(2)(d) - no continuation fee paid for the application