AU2004233469A1 - Rendering linear colour blends - Google Patents


Publication number
AU2004233469A1
Authority
AU
Australia
Prior art keywords
value
decision value
pixel
colour
edge
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
AU2004233469A
Inventor
Kevin John Moore
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc filed Critical Canon Inc
Priority to AU2004233469A priority Critical patent/AU2004233469A1/en
Publication of AU2004233469A1 publication Critical patent/AU2004233469A1/en
Abandoned legal-status Critical Current


Description

S&F Ref: 687880
AUSTRALIA
PATENTS ACT 1990
COMPLETE SPECIFICATION FOR A STANDARD PATENT

Name and Address of Applicant: Canon Kabushiki Kaisha, of 30-2, Shimomaruko 3-chome, Ohta-ku, Tokyo, 146, Japan
Actual Inventor(s): Kevin John Moore
Address for Service: Spruson & Ferguson, St Martins Tower, Level 31, Market Street, Sydney NSW 2000 (CCN 3710000177)
Invention Title: Rendering linear colour blends

The following statement is a full description of this invention, including the best method of performing it known to me/us:

RENDERING LINEAR COLOUR BLENDS
Field of the Invention

The present invention relates generally to rendering graphic object based images. In particular, the present invention relates to the rendering of linear colour and opacity gradients.
Background

Linear ramps are a commonly encountered fill type in graphics systems. A linear ramp can be defined using ramp coefficients:

    R = c_Rx x + c_Ry y + R_0
    G = c_Gx x + c_Gy y + G_0        (Equation 1)
    B = c_Bx x + c_By y + B_0

Another method of calculating linear ramps is to interpolate from the endpoints of the ramp. The exact expression for interpolation of a colour channel, z, given endpoint values z_0 at (x_0, y_0), z_1 at (x_1, y_1) and z_2 at (x_2, y_2) is:

    z - z_0 = [alpha (x - x_0) + beta (y - y_0)] / delta        (Equation 2)

where:

    alpha = (z_1 - z_0)(y_2 - y_0) - (z_2 - z_0)(y_1 - y_0)
    beta  = (x_1 - x_0)(z_2 - z_0) - (x_2 - x_0)(z_1 - z_0)     (Equation 3)
    delta = (x_1 - x_0)(y_2 - y_0) - (x_2 - x_0)(y_1 - y_0)

Interpolation may be reduced to the ramp coefficient method by multiplying out the factors in Equation 2.
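The coefficients of Equation 3 follow mechanically from the three endpoints. A minimal Python sketch of Equations 2 and 3 (function and variable names are illustrative, not from the specification):

```python
def plane_coeffs(p0, p1, p2):
    # alpha, beta, delta of Equation 3 for three (x, y, z) endpoints
    (x0, y0, z0), (x1, y1, z1), (x2, y2, z2) = p0, p1, p2
    alpha = (z1 - z0) * (y2 - y0) - (z2 - z0) * (y1 - y0)
    beta = (x1 - x0) * (z2 - z0) - (x2 - x0) * (z1 - z0)
    delta = (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)
    return alpha, beta, delta

def interpolate(p0, p1, p2, x, y):
    # channel value z at (x, y) by Equation 2; the division by delta
    # is the step that introduces repeating fractions in fixed point
    x0, y0, z0 = p0
    alpha, beta, delta = plane_coeffs(p0, p1, p2)
    return z0 + (alpha * (x - x0) + beta * (y - y0)) / delta
```

Dividing alpha and beta by delta yields the ramp coefficients c_x and c_y of Equation 1, which is the reduction to the ramp coefficient method mentioned above.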
For linear ramps in a colour printing system, there is a problem of providing enough precision and range for the ramp coefficients. This becomes a serious issue for long pixel runs, which are becoming more common as the resolution of printers increases. For example, a large format printer with 1200 dpi resolution has about 47,000 pixels per metre. This means that the resolution of any colour calculation must be more than 16 bits greater than the natural resolution of the colour representation, in order to prevent single count errors in gradient fills that extend across such a page.
Another consideration is the use of integer division. Unless the prime factors of the divisor all divide the base of the number representation, division by an integer will produce a repeating fraction that must be truncated to be represented in a fixed number of digits. In binary systems, this means that any divisor that is not a power of 2 will produce a repeating fraction. The calculations of slope coefficients in the linear ramp calculations are affected by this, as is the interpolation formula. Any calculation that involves a division, such as finding a slope, will suffer this problem.
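The drift caused by truncating such a repeating fraction is easy to quantify. The sketch below (assumptions: a slope of exactly 1/3, 16 fractional bits, and the 47,000-pixel page width mentioned earlier) accumulates the truncation error across a page:

```python
FRAC_BITS = 16
true_slope = 1 / 3                              # no finite binary expansion
scale = 1 << FRAC_BITS
fixed_slope = int(true_slope * scale) / scale   # truncated to 16 fractional bits

page_width = 47_000                             # pixels across 1 m at 1200 dpi
drift = page_width * (true_slope - fixed_slope)
# drift is roughly a quarter of a colour count: with only 16 extra bits
# of precision, the accumulated error already approaches a single count
```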
Furthermore, the colour calculations in equations 1 to 3 must carry enough precision through from the multiplication to ensure that errors arising are well behaved with respect to the gradient of the ramp.
The problem is that the calculation of the x and y terms will cause a checkerboard pattern for small gradients, with a step occurring whenever either term changes value. A step of two bits is possible when both terms change on the same pixel.
Consequently, low gradient ramps will show a checkerboard pattern, where the size of the blocks in the pattern is determined by the amount of precision that is carried through the multiplication. The amplitude of the step in the checkerboard pattern is determined by the final clamp to an integer value. So if the addition is carried out in m.n bits, the blocks are 2^-n / c_rC pixels on a side, where c_rC is the ramp coefficient for direction r in colour channel C.
For example, if the terms are truncated to 10.1 bits precision before the addition, the term involving a ramp coefficient with a value near 1 x 2^-5 will change every 2^4 = 16 pixels.
If colours are truncated and clamped to 8 bits, the step is 1/255 colour units. This is sufficient to cause a visible checkerboard pattern in high gamma colour spaces.
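The 10.1-bit example can be reproduced numerically. The following sketch truncates the term c*x to one fractional bit and records where the truncated value steps:

```python
c = 2 ** -5        # ramp coefficient near 1 x 2^-5
FRAC_BITS = 1      # term kept in 10.1 fixed point before the addition

def truncated_term(x):
    # c * x truncated to FRAC_BITS fractional bits
    return int(c * x * (1 << FRAC_BITS))

steps = [x for x in range(1, 65) if truncated_term(x) != truncated_term(x - 1)]
# the truncated term changes once every 2^-FRAC_BITS / c = 16 pixels,
# which sets the block size of the checkerboard in this direction
```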
Obviously, this can be ameliorated by using higher precision multipliers and adders, but such an approach results in increases in the cost and complexity of the ASIC design that incorporates the calculation.
A further problem with the above approaches is that of speed, since recalculating the linear ramp values at each pixel requires considerable resources. A simpler method which reduces the calculation overhead is desirable.
Bresenham's line algorithm is an algorithm for drawing lines having integer endpoints on a raster display. The algorithm has two advantages:
- it uses a decision value that can be calculated by repeated addition and subtraction of constant valued integers; and
- it avoids the issue of truncated fractions by using a decision value that is calculated without involving any division.
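For reference, the textbook first-octant form of the algorithm exhibits both properties: the decision value is initialised from integers and advanced only by adding one of two precomputed constants. This is the standard formulation, not the specification's adaptation:

```python
def bresenham(x0, y0, x1, y1):
    # first-octant line (0 <= slope <= 1) with integer endpoints
    dx, dy = x1 - x0, y1 - y0
    d = 2 * dy - dx              # decision value: integer, no division
    y = y0
    points = []
    for x in range(x0, x1 + 1):
        points.append((x, y))
        if d > 0:                # sign of d selects the next pixel
            y += 1
            d += 2 * (dy - dx)   # constant increment, "step" branch
        else:
            d += 2 * dy          # constant increment, "hold" branch
    return points
```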
The adaptation of Bresenham's algorithm to problems involving linearly-varying functions of more than one parameter is less well known. US patent 5,625,768 teaches a method of calculating the decision value based on the slope of a main side of a polygon and a second slope in a direction across the face of the polygon. However, this method is not easy to generalise to larger numbers of dimensions as is required, for example, in volume opacity calculation, or interpolation between calculated values in multidimensional mesh-value problems.
Summary

It is an object of the present invention to substantially overcome, or at least ameliorate, one or more disadvantages of existing arrangements. The arrangements disclosed herein use an approach in which a decision value is calculated based on projected areas. The approach may be extended to projected volumes in 4-space, and to projections onto any number of dimensions.
An application of the described method is presented in the context of linear ramp colour blends in a graphics system. Other applications include the calculation of depth of a plane in a 3-D graphics system and the calculation of linearly varying opacity in volumes.
According to a first aspect of the invention there is provided a method of determining values of a linear gradient fill at a pixel of an image having one or more colour channels selected from the group consisting of colour components and opacity, wherein, for each colour channel, the fill is specified by three points each defined by two position values and a colour channel value, and wherein the method comprises the steps of, for each colour channel:
retrieving a stored decision value;
selecting one of two alternative output values depending on a sign of the retrieved decision value;
incrementing the decision value by one of two alternative increments depending on the sign of the decision value; and
storing the incremented decision value for retrieval at a subsequent pixel,
wherein the alternative output values and the alternative increments are determined from a parameter set based on projections of the three points onto three orthogonal planes.
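By way of illustration only, the per-pixel loop of this aspect might look as follows for one colour channel. The initial decision value and the two increments are stand-ins (Bresenham-style constants for a gradient of one colour count per three pixels); in the described method they would be derived from the projection-based parameter set, and all names here are hypothetical:

```python
def ramp_scanline(z0, d0, inc_hold, inc_step, n_pixels):
    # one colour channel across a run of pixels: the sign of the
    # stored decision value d selects between two alternative outputs
    # (hold z, or step z by one count) and the matching increment
    z, d = z0, d0
    out = []
    for _ in range(n_pixels):
        if d >= 0:
            z += 1            # alternative output: stepped value
            d += inc_step
        else:
            d += inc_hold     # alternative output: held value
        out.append(z)
        # d now holds the incremented decision value, "stored"
        # for retrieval at the next pixel
    return out
```

With d0 = -1, inc_hold = 2 and inc_step = -4, nine pixels yield the run 0, 1, 1, 1, 2, 2, 2, 3, 3: each pixel costs only a sign test and an addition, with no per-pixel multiplication or division.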
According to a second aspect of the invention there is provided a method of determining output values of a linear function of a plurality of variables, the method comprising the steps of:
receiving the linear function specified by a plurality of coordinate points each defined by a respective value for each of the variables and a corresponding output value of the linear function;
calculating a parameter set based on projections of the coordinate points onto a space, wherein the dimension of the space is lower than a total number of coordinate points specifying the linear function; and
determining output values at a sequence of positions in a scanning direction, wherein, for a currently-considered position of the sequence, said determining step comprises the sub-steps of:
retrieving a stored decision value;
selecting one of two alternative output values for the currently-considered position, wherein the selection depends on a sign of the retrieved decision value;
incrementing the decision value by one of two alternative increments depending on the sign of the decision value; and
storing the incremented decision value for retrieval at a next position in the sequence.
According to a further aspect of the invention there is provided an apparatus for determining values of a linear gradient fill at a pixel of an image having one or more colour channels selected from the group consisting of colour components and opacity, wherein, for each colour channel, the fill is specified by three points each defined by two position values and a colour channel value, and wherein the apparatus comprises, for each colour channel: 687880.doc means for retrieving a stored decision value; >means for selecting one of two alternative output values depending on a sign of
O
Z the retrieved decision value; means for incrementing the decision value by one of two alternative increments depending on the sign of the decision value; and means for storing the incremented decision value for retrieval at a subsequent pixel, wherein the alternative output values and the alternative increments are determined from a parameter set based on projections of the three points onto three orthogonal planes.
According to a further aspect of the invention there is provided an apparatus for determining output values of a linear function of a plurality of variables, the apparatus comprising:
means for receiving the linear function specified by a plurality of coordinate points each defined by a respective value for each of the variables and a corresponding output value of the linear function;
means for calculating a parameter set based on projections of the coordinate points onto a space, wherein the dimension of the space is lower than a total number of coordinate points specifying the linear function; and
means for determining output values at a sequence of positions in a scanning direction, said determining means comprising:
means for retrieving a stored decision value for a currently-considered position of the sequence;
means for selecting one of two alternative output values for the currently-considered position, wherein the selection depends on a sign of the retrieved decision value;
means for incrementing the decision value by one of two alternative increments depending on the sign of the decision value; and
means for storing the incremented decision value for retrieval at a next position in the sequence.
According to a further aspect of the invention there is provided a system for determining values of a linear gradient fill at a pixel of an image having one or more colour channels selected from the group consisting of colour components and opacity, wherein, for each colour channel, the fill is specified by three points each defined by two position values and a colour channel value, and wherein the system comprises:
data storage for storing the three points and a decision value for each colour channel; and
a processor in communication with said data storage and adapted, for each colour channel, to:
retrieve the stored decision value;
select one of two alternative output values depending on a sign of the retrieved decision value;
increment the decision value by one of two alternative increments depending on the sign of the decision value; and
store the incremented decision value in the data storage for retrieval at a subsequent pixel,
wherein the alternative output values and the alternative increments are determined from a parameter set based on projections of the three points onto three orthogonal planes.
According to a further aspect of the invention there is provided a system for determining output values of a linear function of a plurality of variables, the system comprising:
data storage for storing:
(i) the linear function specified by a plurality of coordinate points each defined by a respective value for each of the variables and a corresponding output value of the linear function;
(ii) a parameter set based on projections of the coordinate points onto a space, wherein the dimension of the space is lower than a total number of coordinate points specifying the linear function; and
(iii) a decision value; and
a processor in communication with said data storage and adapted to determine output values at a sequence of positions in a scanning direction, wherein, for a currently-considered position of the sequence, said processor:
retrieves the stored decision value from said data storage;
selects one of two alternative output values for the currently-considered position, wherein the selection depends on a sign of the retrieved decision value;
increments the decision value by one of two alternative increments depending on the sign of the decision value; and
stores the incremented decision value in said data storage for retrieval at a next position in the sequence.
According to a further aspect of the invention there is provided a computer program product comprising machine-readable program code recorded on a machine-readable recording medium, for controlling the operation of a data processing machine on which the program code executes to perform a method of determining values of a linear gradient fill at a pixel of an image having one or more colour channels selected from the group consisting of colour components and opacity, wherein, for each colour channel, the fill is specified by three points each defined by two position values and a colour channel value, and wherein the method comprises the steps of, for each colour channel:
retrieving a stored decision value;
selecting one of two alternative output values depending on a sign of the retrieved decision value;
incrementing the decision value by one of two alternative increments depending on the sign of the decision value; and
storing the incremented decision value for retrieval at a subsequent pixel,
wherein the alternative output values and the alternative increments are determined from a parameter set based on projections of the three points onto three orthogonal planes.
According to a further aspect of the invention there is provided a computer program product comprising machine-readable program code recorded on a machine-readable recording medium, for controlling the operation of a data processing machine on which the program code executes to perform a method of determining output values of a linear function of a plurality of variables, the method comprising the steps of:
receiving the linear function specified by a plurality of coordinate points each defined by a respective value for each of the variables and a corresponding output value of the linear function;
calculating a parameter set based on projections of the coordinate points onto a space, wherein the dimension of the space is lower than a total number of coordinate points specifying the linear function; and
determining output values at a sequence of positions in a scanning direction, wherein, for a currently-considered position of the sequence, said determining step comprises the sub-steps of:
retrieving a stored decision value;
selecting one of two alternative output values for the currently-considered position, wherein the selection depends on a sign of the retrieved decision value;
incrementing the decision value by one of two alternative increments depending on the sign of the decision value; and
storing the incremented decision value for retrieval at a next position in the sequence.
According to a further aspect of the invention there is provided a computer program comprising machine-readable program code for controlling the operation of a data processing apparatus on which the program code executes to perform a method of determining values of a linear gradient fill at a pixel of an image having one or more colour channels selected from the group consisting of colour components and opacity, wherein, for each colour channel, the fill is specified by three points each defined by two position values and a colour channel value, and wherein the method comprises the steps of, for each colour channel:
retrieving a stored decision value;
selecting one of two alternative output values depending on a sign of the retrieved decision value;
incrementing the decision value by one of two alternative increments depending on the sign of the decision value; and
storing the incremented decision value for retrieval at a subsequent pixel,
wherein the alternative output values and the alternative increments are determined from a parameter set based on projections of the three points onto three orthogonal planes.
According to a further aspect of the invention there is provided a computer program comprising machine-readable program code for controlling the operation of a data processing apparatus on which the program code executes to perform a method of determining output values of a linear function of a plurality of variables, the method comprising the steps of:
receiving the linear function specified by a plurality of coordinate points each defined by a respective value for each of the variables and a corresponding output value of the linear function;
calculating a parameter set based on projections of the coordinate points onto a space, wherein the dimension of the space is lower than a total number of coordinate points specifying the linear function; and
determining output values at a sequence of positions in a scanning direction, wherein, for a currently-considered position of the sequence, said determining step comprises the sub-steps of:
retrieving a stored decision value;
selecting one of two alternative output values for the currently-considered position, wherein the selection depends on a sign of the retrieved decision value;
incrementing the decision value by one of two alternative increments depending on the sign of the decision value; and
storing the incremented decision value for retrieval at a next position in the sequence.
Brief Description of the Drawings

One or more embodiments of the present invention will now be described with reference to the drawings, in which:
Fig. 1 is a schematic block diagram representation of a computer system incorporating a rendering arrangement;
Fig. 2 is a block diagram showing the functional data flow of the rendering arrangement;
Fig. 3 is a schematic block diagram representation of the pixel sequential rendering apparatus of Fig. 2 and associated display list and temporary stores;
Fig. 4 is a schematic functional representation of the edge processing module of Fig. 3;
Fig. 5 is a schematic functional representation of the priority determination module of Fig. 3;
Fig. 6 is a schematic functional representation of the fill colour determination module of Fig. 3;
Figs. 7A to 7C illustrate pixel combinations between source and destination;
Fig. 8A illustrates a two-object image used as an example for explaining the operation of the rendering arrangement;
Fig. 8B shows a table of a number of edge records of the two-object image shown in Fig. 8A;
Figs. 9A and 9B illustrate the vector edges of the objects of Fig. 8A;
Fig. 10 illustrates the rendering of a number of scan lines of the image of Fig. 8A;
Fig. 11 depicts the arrangement of an edge record for the image of Fig. 8A;
Fig. 12A depicts the format of an active edge record created by the edge processing module 400 of Fig. 4;
Fig. 12B depicts the arrangement of the edge records used in the edge processing module 400 of Fig. 4;
Figs. 12C to 12J illustrate the edge update routine implemented by the arrangement of Fig. 4 for the example of Fig. 8A;
Figs. 13A and 13B illustrate the odd-even and non-zero winding fill rules;
Figs. 14A to 14E illustrate how large changes in X coordinates contribute to spill conditions and how they are handled;
Figs. 15A to 15E illustrate the priority filling routine implemented by the arrangement of Fig. 5;
Figs. 16A to 16D provide a comparison between two prior art edge description formats and that used in the described apparatus;
Figs. 17A and 17B show a simple compositing expression illustrated as an expression tree and a corresponding depiction;
Fig. 17C shows an example of an expression tree;
Fig. 18 depicts the priority properties and status table of the priority determination module of Fig. 3;
Fig. 19 shows a table of a number of raster operations;
Figs. 20A and 20B show a table of the principal compositing operations and their corresponding raster operations and opacity flags;
Fig. 21 depicts the result of a number of compositing operations;
Fig. 22A shows a series of fill priority messages generated by the priority determination module 500;
Fig. 22B shows a series of colour composite messages generated by the fill colour determination module 600;
Fig. 23 is a schematic functional representation of one arrangement of the pixel compositing module of Fig. 3;
Figs. 24A to 24D show the operation performed on the stack for each of the various stack operation commands in the Pixel Compositing Module 700 of Fig. 3;
Fig. 25 shows a pixel and the eight surrounding pixels;
Fig. 26 illustrates how candidate points are reduced to two candidates in Bresenham's line tracking algorithm;
Fig. 27 illustrates the geometry of a triangular region defining a linear colour ramp;
Fig. 28 shows the projection of the area of a parallelogram formed from the triangular region of Fig. 27 onto planes dual to the coordinate axes;
Fig. 29 illustrates a data format for the Linear Ramp Module of Fig. 6;
Fig. 30 shows a schematic diagram of a calculator to obtain an initial value for the Linear Ramp Module;
Fig. 31 shows a schematic diagram of a Linear Ramp Iterator for a horizontal sequence of pixels (X scan) for use in the Linear Ramp Module; and
Fig. 32 is a flow diagram of a method for calculating the linear fill values in each colour channel.
Detailed Description including Best Mode Where reference is made in any one or more of the accompanying drawings to steps and/or features which have the same reference numerals, those steps and/or features have for the purposes of this description the same function(s) or operation(s), unless the contrary intention appears.
For a better understanding of the pixel sequential rendering system 1, a brief overview of the system is first undertaken. Then follows a brief discussion of the driver software for interfacing between a third party software application and the pixel sequential rendering apparatus 20 of the system. A brief overview of the pixel sequential rendering apparatus 20 is then discussed. As will become apparent, the pixel sequential rendering apparatus 20 includes an instruction execution module 300; an edge tracking module 400; a priority determination module 500; a fill colour determination module 600; a pixel compositing module 700; and a pixel output module 800.
The general principles of the present disclosure have application in the calculation of linear ramp fills. This is realised in the described rendering system in the Fill Colour Determination Module 600.
PIXEL SEQUENTIAL RENDERING SYSTEM

Fig. 1 illustrates schematically a computer system 1 configured for rendering and presentation of computer graphic object images. The system includes a host processor 2 associated with system random access memory (RAM) 3, which may include a non-volatile hard disk drive or similar device 5 and volatile, semiconductor RAM 4. The system 1 also includes a system read-only memory (ROM) 6 typically founded upon semiconductor ROM 7 and which in many cases may be supplemented by compact disc devices (CD ROM) 8. The system 1 may also incorporate some means 10 for displaying images, such as a video display unit (VDU) or a printer, both of which operate in raster fashion.
The above-described components of the system 1 are interconnected via a bus system 9 and are operable in a normal operating mode of computer systems well known in the art, such as IBM PC/AT type personal computers and arrangements evolved therefrom, Sun Sparcstations and the like.
Also seen in Fig. 1, a pixel sequential rendering apparatus 20 (or renderer) connects to the bus 9, and is configured for the sequential rendering of pixel-based images derived from graphic object-based descriptions supplied with instructions and data from the system 1 via the bus 9. The apparatus 20 may utilise the system RAM 3 for the rendering of object descriptions, although preferably the rendering apparatus 20 may have associated therewith a dedicated rendering store arrangement 30, typically formed of semiconductor RAM.
Image rendering operates, generally speaking, in the following manner. A render job to be rendered is given to the driver software by third party software for supply to the pixel sequential renderer 20. The render job is typically in a page description language or in a sequence of function calls to a standard graphics application program interface (API), which defines an image comprising objects placed on a page from a rearmost object to a foremost object to be composited in a manner defined by the render job. The driver software converts the render job to an intermediate render job, which is then fed to the pixel sequential renderer 20. The pixel sequential renderer 20 generates the colour and opacity for the pixels one at a time in raster scan order. At any pixel currently being scanned and processed, the pixel sequential renderer 20 composites only those exposed objects that are active at the currently scanned pixel.

The pixel sequential renderer determines that an object is active at a currently scanned pixel if that pixel lies within the boundary of the object. The pixel sequential renderer 20 achieves this by reference to a fill counter associated with that object. The fill counter keeps a running fill count that indicates whether the pixel lies within the boundary of the object. When the pixel sequential renderer 20 encounters an edge associated with the object it increments or decrements the fill count depending upon the direction of the edge. The renderer 20 is then able to determine whether the current pixel is within the boundary of the object depending upon the fill count and a predetermined winding count rule. The renderer determines whether an active object is exposed with reference to a flag associated with that object. This flag indicates whether or not the object obscures lower order objects. That is, this flag indicates whether the object is partially transparent, in which case the lower order active objects will make a contribution to the colour and opacity of the current pixel. Otherwise, this flag indicates that the object is opaque, in which case active lower order objects will not make any contribution to the colour and opacity of the currently scanned pixel. The pixel sequential renderer 20 determines that an object is exposed if it is the uppermost active object, or if all the active objects above the object have their corresponding flags set to transparent. The renderer 20 then composites these exposed active objects to determine and output the colour and opacity for the currently scanned pixel.
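The activity test described above can be sketched as follows. This is an illustrative sketch of fill counting under the two common winding rules (illustrated in Figs. 13A and 13B), not the renderer's implementation:

```python
def is_active(fill_count, rule):
    # whether a pixel lies inside an object under the given fill rule
    if rule == "odd-even":
        return fill_count % 2 == 1
    return fill_count != 0           # non-zero winding rule

# crossing an edge adjusts the running fill count by the edge direction
fill_count = 0
for direction in (+1, +1, -1):       # e.g. two downward edges, one upward
    fill_count += direction
inside = is_active(fill_count, "non-zero")
```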
The driver software, in response to the page, also extracts edge information defining the edges of the objects for feeding to the edge tracking module. The driver software also generates a linearised table of priority properties and status information (herein called the level activation table) of the expression tree of the objects and their compositing operations, which is fed to the priority determination module. The level activation table contains one record for each object on the page. In addition, each record contains a field for storing a pointer to an address for the fill of the corresponding object in a fill table. This fill table is also generated by the driver software and contains the fill for the corresponding objects, and is fed to the fill determination module. The level activation table together with the fill table is devoid of any edge information and effectively represents the objects as infinitely extending. The edge information is fed to the edge tracking module, which determines, for each pixel in raster scan order, the edges of any objects that intersect a currently scanned pixel. The edge processing module passes this information onto the priority determination module.

Each record of the level activation table contains a counter, which maintains a fill count associated with the corresponding object of the record. The priority determination module processes each pixel in a raster scan order. Initially, the fill counts associated with all the objects are zero, and so all objects are inactive. The priority determination module continues processing each pixel until it encounters an edge intersecting that pixel. The priority determination module then updates the fill count associated with the object of that edge, and so that object becomes active. The priority determination module continues in this fashion, updating the fill counts of the objects and so activating and de-activating the objects. The priority determination module also determines whether these active objects are exposed or not, and consequently whether they make a contribution to the currently scanned pixel. In the event that they do, the priority determination module generates a series of messages which ultimately instructs the pixel compositing module to composite the colour and opacity for these exposed active objects in accordance with the compositing operations specified for these objects in the level activation table, so as to generate the resultant colour and opacity for the currently scanned pixel. This series of messages does not at that time actually contain the colour and opacity for that object but rather an address into the fill table, which the fill determination module uses to determine the colour and opacity of the object.
For ease of explanation, the location (ie: priority level or z-order) of the object in the order of the objects from the rearmost object to the foremost is herein referred to as the object's priority. Preferably, a number of non-overlapping objects that have the same fill and compositing operation, and that form a contiguous sequence in the order of the objects, may be designated as having the same priority. Most often, only one priority level is required per object. However, some objects may require several instructions, and thus the object may require several priority levels. For example, a character with a colour fill may be represented by a bounding box (B) on a first level having the colour fill, a one-bit bitmap (S) which provides the shape of the character on a second level, and the same bounding box on a third level having the colour fill, where the levels are composited together as ((B xor Page) and S) xor B to produce the colour character. For fundamental objects, there is a one-to-one relationship with priority levels.
The pixel sequential renderer 20 also utilises clip objects to modify the shape of other objects. The renderer 20 maintains an associated clip count for the clip, in a somewhat similar fashion to the fill count, to determine whether the current pixel is within the clip region.
SOFTWARE DRIVER

A software program, hereafter referred to as the driver, is loaded and executed on the host processor 2 for generating instructions and data for the pixel-sequential graphics rendering apparatus 20, from data provided to the driver by a third-party application. The third-party application may provide data in the form of a standard language description of the objects to be drawn on the page, such as PostScript and PCL, or in the form of function calls to the driver through a standard software interface, such as the Windows GDI or X-11.
The driver software separates the data associated with an object, supplied by the third-party application, into data about the edges of the object, any operation or operations associated with painting the object onto the page, and the colour and opacity with which to fill pixels which fall inside the edges of the object.
The driver software partitions the edges of each object into edges which are monotonically increasing in the Y-direction, and then divides each partitioned edge of the object into segments of a form suitable for the edge module described below. Partitioned edges are sorted by the Y-value of their starting positions and then by X-value. Groups of edges starting at the same Y-value remain sorted by X-value, and may be concatenated together to form a new edge list, suitable for reading in by the edge module when rendering reaches that Y-value.
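One reading of the sort just described, ordering primarily on the starting scanline so that same-scanline groups remain ordered by X, can be sketched as follows. The tuple layout is an illustrative stand-in for the driver's real edge records, which also carry segment descriptions.

```python
from itertools import groupby

# Hypothetical partitioned edges as (start_x, start_y) pairs.
edges = [(160, 35), (100, 20), (40, 35), (100, 20)]

# Sort primarily on the starting Y-value, then on X, so that groups of
# edges starting on the same scanline remain ordered by X.
edges.sort(key=lambda e: (e[1], e[0]))

# Concatenate per-scanline groups into lists the edge module can load
# when rendering reaches that Y-value.
new_edge_lists = {y: list(group)
                  for y, group in groupby(edges, key=lambda e: e[1])}

assert list(new_edge_lists) == [20, 35]
assert new_edge_lists[35] == [(40, 35), (160, 35)]
```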
The driver software sorts the operations, associated with painting objects, into priority order, and generates instructions to load the data structure associated with the priority determination module (described below). This structure includes a field for the fill rule, which describes the topology of how each object is activated by edges, a field for the type of fill which is associated with the object being painted, and a field to identify whether data on levels below the current object is required by the operation. There is also a field, herein called clip count, that identifies an object as a clipping object, that is, as an object which is not, itself, filled, but which enables or disables filling of other objects on the page.
The driver software also prepares a data structure (the fill table) describing how to fill objects. The fill table is indexed by the data structure in the priority determination module. This allows several levels in the priority determination module to refer to the same fill data structure.
The driver software assembles the aforementioned data into a job containing instructions for loading the data and rendering pixels, in a form that can be read by the rendering system, and transfers the assembled job to the rendering system. This may be performed using one of several methods known to the art, depending on the configuration of the rendering system and its memory.
PIXEL SEQUENTIAL RENDERING APPARATUS

Referring now to Fig. 2, a functional data flow diagram of the rendering process is shown. The functional flow diagram of Fig. 2 commences with an object graphic description 11 which is used to describe those parameters of graphic objects in a fashion appropriate to be generated by the host processor 2 and/or, where appropriate, stored within the system RAM 3 or derived from the system ROM 6, and which may be interpreted by the pixel sequential rendering apparatus 20 to render therefrom pixel-based images. For example, the object graphic description 11 may incorporate objects with edges in a number of formats including straight edges (simple vectors) that traverse from one point on the display to another, or an orthogonal edge format where a two-dimensional object is defined by a plurality of edges including orthogonal lines. Further formats, where objects are defined by continuous curves, are also appropriate and these can include quadratic polynomial fragments where a single curve may be described by a number of parameters which enable a quadratic based curve to be rendered in a single output space without the need to perform multiplications. Further data formats such as cubic splines and the like may also be used. An object may contain a mixture of many different edge types. Typically, common to all formats are identifiers for the start and end of each line (whether straight or curved) and typically, these are identified by a scan line number, thus defining a specific output space in which the curve may be rendered.
For example, Fig. 16A shows a prior art edge description of an edge 1600 that is required to be divided into two segments 1601 and 1602 in order for the segments to be adequately described and rendered. This arises because the prior art edge description, whilst being simply calculated through a quadratic expression, could not accommodate an inflexion point 1604. Thus the edge 1600 was dealt with as two separate edges having end points 1603 and 1604, and 1604 and 1605 respectively. Fig. 16B shows a cubic spline 1610 that is described by endpoints 1611 and 1612, and control points 1613 and 1614. This format requires calculation of a cubic polynomial for render purposes and is thus computationally expensive.
Figs. 16C and 16D show examples of edges applicable to the described arrangement. An edge is considered as a single entity and if necessary, is partitioned to delineate sections of the edge that may be described in different formats, a specific goal of which is to ensure a minimum level of complexity for the description of each section.

In Fig. 16C, a single edge 1620 is illustrated spanning between scanlines A and M. An edge is described by a number of parameters including start_x, start_y, one or more segment descriptions that include an address that points to the next segment in the edge, and a finish segment used to terminate the edge. Preferably, the edge 1620 may be described as having three step segments, a vector segment, and a quadratic segment. A step segment is simply defined as having an x-step value and a y-step value. For the three step segments illustrated, the segment descriptions are ... Note that the x-step value is signed, thereby indicating the direction of the step, whilst the y-step value is unsigned, as a step is always in the raster scan direction of increasing scanline value. The next segment is a vector segment which typically requires the parameters start_x, start_y, num_of_scanlines (NY) and slope (DX). In this example, because the vector segment is an intermediate segment of the edge 1620, the start_x and start_y may be omitted because such arise from the preceding segment(s). The parameter num_of_scanlines (NY) indicates the number of scanlines the vector segment lasts. The slope value (DX) is signed and is added to the x-value of a preceding scanline to give the x-value of the current scanline; in the illustrated case, DX = +1.
The next segment is a quadratic segment which has a structure corresponding to that of the vector segment, but also a second order value (DDX) which is also signed and is added to DX to alter the slope of the segment.
Fig. 16D shows an example of a cubic curve which includes a description corresponding to the quadratic segment save for the addition of a signed third-order value (DDDX), which is added to DDX to vary the rate of change of slope of the segment.
Many other orders may also be implemented.
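All of the segment formats above reduce to forward differencing: on each scanline the X-value is advanced by DX, DX by DDX, DDX by DDDX, and so on, so a curve of any supported order is tracked with additions only. A minimal sketch of that stepping follows; it is illustrative only and does not model the hardware's fixed-point field widths.

```python
def track_segment(x, diffs, ny):
    """Return the X-value on each of `ny` successive scanlines.

    `diffs` holds the forward differences [DX, DDX, DDDX, ...]; a vector
    segment supplies just [DX], a quadratic [DX, DDX], a cubic adds DDDX.
    Only additions are needed per scanline -- no multiplications.
    """
    xs = []
    d = list(diffs)
    for _ in range(ny):
        x += d[0]                    # advance X by the current slope
        for i in range(len(d) - 1):  # then update each difference order
            d[i] += d[i + 1]
        xs.append(x)
    return xs

assert track_segment(100.0, [1.0], 3) == [101.0, 102.0, 103.0]  # vector, DX = +1
assert track_segment(0.0, [1.0, 2.0], 3) == [1.0, 4.0, 9.0]     # quadratic curve
```

The quadratic case traces the parabola x = y² exactly, showing how a signed DDX alters the slope scanline by scanline.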
It will be apparent from the above that the ability to handle plural data formats describing edge segments allows for simplification of edge descriptions and evaluation, without reliance on complex and computationally expensive mathematical operations. In contrast, in the prior art system of Fig. 16A, all edges, whether orthogonal, vector or quadratic, were required to be described by the quadratic form.
The operation of the rendering arrangement will be described with reference to the simple example of rendering an image 78 shown in Fig. 8, which is seen to include two graphical objects, in particular, a partly transparent blue-coloured triangle 80 rendered on top of and thereby partly obscuring an opaque red-coloured rectangle 90. As seen, the rectangle 90 includes side edges 92, 94, 96 and 98 defined between various pixel positions and scan line positions. Because the edges 96 and 98 are formed upon the scan lines (and thus parallel therewith), the actual object description of the rectangle 90 can be based solely upon the side edges 92 and 94, such as seen in Fig. 9A. In this connection, edge 92 commences at pixel location (40,35) and extends in a raster direction down the screen to terminate at pixel position (40,105). Similarly, the edge 94 extends from pixel position (160,35) to position (160,105). The horizontal portions of the rectangular graphic object 90 may be obtained merely by scanning from the edge 92 to the edge 94 in a rasterised fashion.
The blue triangular object 80 however is defined by three object edges 82, 84 and 86, each seen as vectors that define the vertices of the triangle. Edges 82 and 84 are seen to commence at pixel location (100,20) and extend respectively to pixel locations (170,90) and (30,90). Edge 86 extends between those two pixel locations in a traditional rasterised direction of left to right. In this specific example, because the edge 86 is horizontal like the edges 96 and 98 mentioned above, it is not essential that the edge 86 be defined. In addition to the starting and ending pixel locations used to describe the edges 82 and 84, each of these edges will have associated therewith the slope value, in this case +1 and -1 respectively.
Returning to Fig. 2, having identified the data necessary to describe the graphic objects to be rendered, the graphic system 1 then performs a display list generation step 12.

The display list generation 12 is preferably implemented as a software driver executing on the host processor 2 with attached ROM 6 and RAM 3. The display list generation 12 converts an object graphics description, expressed in any one or more of the well known graphic description languages, graphic library calls, or any other application specific format, into a display list. The display list is typically written into a display list store 13, generally formed within the RAM 4, but which may alternatively be formed within the temporary rendering stores 30. As seen in Fig. 3, the display list store 13 can include a number of components, one being an instruction stream 14, another being edge information 15 and, where appropriate, raster image pixel data 16.
The instruction stream 14 includes code interpretable as instructions to be read by the pixel sequential rendering apparatus 20 to render the specific graphic objects desired in any specific image. For the example of the image shown in Fig. 8, the instruction stream 14 could be of the form of:
(i) render (nothing) to scan line 20;
(ii) at scan line 20 add two blue edges 82 and 84;
(iii) render to scan line 35;
(iv) at scan line 35 add two red edges 92 and 94; and
(v) render to completion.
Similarly, the edge information 15 for the example of Fig. 8 may include the following:
(i) edge 84 commences at pixel position 100, edge 82 commences at pixel position 100;
(ii) edge 92 commences at pixel position 40, edge 94 commences at pixel position 160;
(iii) edge 84 runs for 70 scan lines, edge 82 runs for 70 scanlines;
(iv) edge 84 has slope -1, edge 82 has slope +1;
(v) edge 92 has slope 0, edge 94 has slope 0; and
(vi) edges 92 and 94 each run for 70 scanlines.
Fig. 8 includes no raster image pixel data and hence none need be stored in the store portion 16 of the display list 13, although this feature will be described later.
The display list store 13 is read by a pixel sequential rendering apparatus The pixel sequential rendering apparatus 20 is typically implemented as an integrated circuit and converts the display list into a stream of raster pixels which can be forwarded to another device, for example, a printer, a display, or a memory store.
Although the pixel sequential rendering apparatus 20 is described as an integrated circuit, it may be implemented as an equivalent software module executing on a general purpose processing unit, such as the host processor 2.
Fig. 3 shows the configuration of the pixel sequential rendering apparatus 20, the display list store 13 and the temporary rendering stores 30. The processing stages 22 of the pixel-sequential rendering apparatus 20 include an instruction executor 300, an edge processing module 400, a priority determination module 500, a fill colour determination module 600, a pixel compositing module 700, and a pixel output module 800. The 687880.doc -26- B processing operations use the temporary stores 30 which, as noted above, may share the same device (eg. magnetic disk or semiconductor RAM) as the display list store 13, or Z may be implemented as individual stores for reasons of speed optimisation. The edge processing module 400 uses an edge record store 32 to hold edge information which is carried forward from scan-line to scan-line. The priority determination module 500 uses Cc a priority properties and status table 34 to hold information about each priority, and the current state of each priority with respect to edge crossings while a scan-line is being Srendered. The fill colour determination module 600 uses a fill data table 36 to hold information required to determine the fill colour of a particular priority at a particular position. The pixel compositing module 700 uses a pixel compositing stack 38 to hold intermediate results during the determination of an output pixel that requires the colours from multiple priorities to determine its value.
The display list store 13 and the other stores 32-38 detailed above may be implemented in RAM or any other data storage technology.
The processing steps shown in the arrangement of Fig. 3 take the form of a processing pipeline 22. In this case, the modules of the pipeline may execute simultaneously on different portions of image data in parallel, with messages passed between them as described below. In another arrangement, each message described below may take the form of a synchronous transfer of control to a downstream module, with upstream processing suspended until the downstream module completes the processing of the message.
INSTRUCTION EXECUTOR The instruction executor 300 reads and processes instructions from the instruction stream 14 and formats the instructions into messages that are transferred via an 08788U.doc -27- O output 398 to the other modules 400, 500, 550, 600 and 700 within the pipeline 22.
Preferably, the instruction stream 14 may include the following instructions: Z LOAD PRIORITY PROPERTIES: This instruction is associated with data to be N loaded into the priority properties and status table 34, and an address in that table to which the data is to be loaded. When this instruction is encountered by the instruction Cc executor 300, the instruction executor 300 issues a message for the storage of the data in N the specified location of the priority properties and status table 34. This may be Saccomplished by formatting a message containing this data and passing it down the processing pipeline 22 to the priority determination module 500 which performs the store operation.
LOAD FILL DATA: This instruction is associated with fill data associated with an object to be loaded into the fill data table 36, and an address in that table to which the data is to be loaded. When this instruction is encountered by the instruction executor 300, the instruction executor 300 issues a message for the storage of the data at the specified address of the fill data table 36. This may be accomplished by formatting a message containing this data and passing it down the processing pipeline 22 to the fill colour determination module which performs the store operation.
LOAD NEW EDGES AND RENDER: This instruction is associated with an address in the display list store 13 of new edges 15 which are to be introduced into the rendering process when a next scanline is rendered. When this instruction is encountered by the instruction executor 300, the instruction executor 300 formats a message containing this data and passes it to the edge processing module 400. The edge processing module 400 stores the address of the new edges in the edge record store 32.
The edges at the specified address are sorted on their initial scanline intersection coordinate before the next scanline is rendered. In one arrangement, they are sorted by 687880.doc 28 the display list generation process 12. In another arrangement, they are sorted by the pixel-sequential rendering apparatus
O
Z SET SCANLINE LENGTH: This instruction is associated with a number of pixels which are to be produced in each rendered scanline. When this instruction is encountered by the instruction executor 300, the instruction executor 300 passes the value Cc to the edge processing module 400 and the pixel compositing module 700.
C,
SETOPACITYMODE: This instruction is associated with a flag, which Sindicates whether pixel compositing operations will use an opacity channel, also known in the art as an alpha or transparency channel. When this instruction is encountered by the instruction executor 300, the instruction executor 300 passes the flag value in the pixel compositing module 700.
SET_OPACITY_MODE: This instruction is associated with a flag, which indicates whether pixel compositing operations will use an opacity channel, also known in the art as an alpha or transparency channel. When this instruction is encountered by the instruction executor 300, the instruction executor 300 passes the flag value to the pixel compositing module 700.
The instruction executor 300 is typically formed by a microcode state machine that maps instructions and decodes them into pipeline operations for passing to the various modules. A corresponding software process may alternatively be used.
EDGE PROCESSING MODULE The operation of the edge processing module 400 during a scanline render operation will now be described with reference to Fig. 4. The initial conditions for the rendering of a scanline is the availability of three lists of edge records. Any or all of these lists may be empty. These lists are a new edge list 402, obtained from the edge information 15 and which contains new edges as set by the LOAD_NEW_EDGES_AND_RENDER instruction, a main edge list 404 which contains 687880.doc -29edge records carried forward from the previous scanline, and a spill edge list 406 which also contains edge records carried forward from the previous scanline.
O
Z Turning now to Fig. 12A, there is shown the data format of such an edge record, which may include: a current scanline intersection coordinate (referred to here as Cc the X coordinate), (ii) a count (referred to herein as NY) of how many scanlines a Scurrent segment of this edge will last for (in some arrangements this may be represented (,i as a Y limit), (iii) a value to be added to the X coordinate of this edge record after each scanline (referred to here as the DX), (iv) a priority level number or an index to a list of priority numbers, an address (addr) of a next edge segment in the list; and (vi) a number of flags, marked p, o, u, c and d. The flag d determines whether the edge affects the clipping counter or the fill counter. The flag u determines whether the fill counter is incremented or decremented by the edge. The remaining flags are not significant in the rendering process and need not be described.
Such a data format may accommodate vectors, and orthogonally arranged edges.
The format may also include a further parameter herein called DDX, which is a value to be added to the DX value of this edge record after each scanline. The latter enables the rendering of edges describing quadratic curves. The addition of further parameters, DDDX for example, may allow such an arrangement to accommodate cubic curves. In some applications, such as cubic Bezier spline, a 6-order polynomial (ie: up to DDDDDDX) may be required. The flag indicates whether a winding count is to be 687880.doc O incremented or decremented by an edge. The winding count is stored in a fill counter and is used to determine whether a currently scanned pixel is inside or outside the object in
O
Z question.
N' In the example of the edges 84 and 94 of Fig. 8A, the corresponding edge C 5 records at scanline 20 could read as shown in the Table of Fig. 8B.
In this description, coordinates which step from pixel to pixel along a scanline being generated by the rendering process will be referred to as X coordinates, and Scoordinates which step from scanline to scanline will be referred to as Y coordinates.
(,i Preferably, each edge list contains zero or more records placed contiguously in memory.
Other storage arrangements, including the use of pointer chains, are also possible. The records in each of the three lists 402, 404 and 406 are arranged in order of scanline intersection, this being the X coordinate. This is typically obtained by a sorting process, initially managed by an edge input module 408 which receives messages, including edge information, from the instruction executor 300. It is possible to relax the sort to only regard the integral portion of each scanline intersection coordinate as significant. It is also possible to relax the sort further by only regarding each scanline intersection coordinate, clamped to the minimum and maximum X coordinates which are being produced by the current rendering process. Where appropriate, the edge input module 408 relays messages to modules 500, 600 and 700 downstream in the pipeline 22 via an output 498.
The edge input module 408 maintains references into, and receives edge data from, each of the three lists 402, 404, and 406. Each of these references is initialised to refer to the first edge in each list at the start of processing of a scanline. Thereafter, the edge input module 408 selects an edge record from one of the three referenced edge records such that the record selected is the one with the least X coordinate out of the three 687880.doc -31 O referenced records. If two or more of the X-records are equal, each is processed in any order and the corresponding edge crossings output in the following fashion. The
O
Z reference, which was used to select that record, is then advanced to the next record in that Slist. The edge just selected is formatted into a message and sent to an edge update module 410. Also, certain fields of the edge, in particular the current X, the priority Cc numbers, and the direction flag, are formatted into a message which is forwarded to the N priority determination module 500 via an output 498 of the edge processing module 400.
SArrangements that use more or fewer lists than those described here are also possible.
Upon receipt of an edge, the edge update module 410 decrements the count of how many scanlines a current segment will last. If that count has reached zero, a new segment is read from the address indicated by the next segment address. A segment preferably specifies: a value to add to the current X coordinate immediately the segment is read, (ii) a new DX value for the edge, (iii) a new DDX value for the edge, and (iv) a new count of how many scanlines the new segment will last.
If there is no next segment available at the indicated address, no further processing is performed on that edge. Otherwise, the edge update module 410 calculates the X coordinate for the next scanline for the edge. This typically would involve taking the current X coordinate and adding to it the DX value. The DX may have the DDX value added to it, as appropriate for the type of edge being handled. The edge is then written into any available free slot in an edge pool 412, which is an array of two or more edge records. If there is no free slot, the edge update module 410 waits for a slot to become available. Once the edge record is written into the edge pool 412, the edge 687880.doc 32 update module 410 signals via a line 416 to an edge output module 414 that a new edge has been added to the edge pool 412.
O
Z As an initial condition for the rendering of a scanline, the edge output module 414 has references to each of a next main edge list 404' and a next spill edge list 406'.
Each of these references is initialised to the location where the, initially empty, lists 404' Cc and 406' may be built up. Upon receipt of the signal 416 indicating that an edge has been added to the edge pool 412, the edge output module 414 determines whether or not the Sedge just added has a lesser X coordinate than the edge last written to the next main edge list 404' (if any). If this is true, a "spill" is said to have occurred because the edge cannot be appended to the main edge list 404 without violating its ordering criteria. When a spill occurs, the edge is inserted into the next spill edge list 406', preferably in a manner that maintains a sorted next spill edge list 406'. For example this may be achieved using a insertion sorting routine. In some arrangements the spills may be triggered by other conditions, such as excessively large X coordinates.
If the edge added to the edge pool 412 has an X coordinate greater than or equal to the edge last written to the next main edge list 404' (if any), and there are no free slots available in the edge pool 412, the edge output module 414 selects the edge from the edge pool 412 which has the least X coordinate, and appends that edge to the next main edge list 404', extending it in the process. The slot in the edge pool 412 that was occupied by that edge is then marked as free.
Once the edge input module 408 has read and forwarded all edges from all three of its input lists 402, 404 and 406, it formats a message which indicates that the end of scanline has been reached and sends the message to both the priority determination module 500 and the edge update module 410. Upon receipt of that message, the edge update module 410 waits for any processing it is currently performing to complete, then 687880.doc 33 forwards the message to the edge output module 414. Upon receipt of the message, the edge output module 414 writes all remaining edge records from the edge pool 412 to the Z next main edge list 404' in X order. Then, the reference to the next main edge list 404' and the main edge list 404 are exchanged between the edge input module 408 and the edge output module 414, and a similar exchange is performed for the next spill edge Cc list 406' and the spill edge list 406. In this way the initial conditions for the following N scanline are established.
SRather than sorting the next spill edge list 406' upon insertion of edge records thereto, such edge records may be merely appended to the list 406', and the list 406' sorted at the end of the scanline and before the exchange to the current spill list 406 becomes active in edge rasterisation of the next scanline.
It can be deduced from the above that edge crossing messages are sent to the priority determination module 500 in scanline and pixel order (that is, they are ordered firstly on Y and then on X) and that each edge crossing message is labelled with the priority level to which it applies.
Fig. 12A depicts a specific structure of an active edge record 418 that may be created by the edge processing module 400 when a segment of an edge is received. If the first segment of the edge is a step (orthogonal) segment, the X-value of the edge is added to a variable called "X-step" for the first segment to obtain the X position of the activated edge. Otherwise, the X-value of the edge is used. The Xstep value is obtained from the segment data of the edge and is added once to the Xedge value of the next segment to obtain the X position of the edge record for that next segment. This means that the edges in the new edge record will be sorted by Xedge Xstep. The Xstep of the first segment should, therefore, be zero, in order to simplify sorting the edges. The Y-value of the first segment is loaded into the NY field of the active edge record 418. The DX field of the 687880.doc -34active edges copied from the DX field identifier of vector or quadratic segments, and is set to zero for a step segment. A u-flag as seen in Fig. 12A is set if the segment is
O
Z upwards heading (see the description relating to Fig. 13A). A d-flag is set when the edge Nis used as a direct clipping object, without an associated clipping level, and is applicable to closed curves. The actual priority level of the segment, or a level address is copied Cc from the corresponding field of the new edge record into a level field in the active edge record 418. The address of the next segment in the segment list is copied from the Scorresponding field of the new edge record into a segment address field (segment addr) of the active edge record 418. The segment address may also be used to indicate the termination of an edge record.
It will be appreciated from Fig. 12A that other data structures are also possible, and necessary for example where polynomial implementations are used. In one alternative data structure, the 'segment addr' field is either the address of the next segment in the segment list or copied from the segments DDX value, if the segment is quadratic. In the latter case, the data structure has a q-flag which is set if the segment is a quadratic segment, and cleared otherwise. In a further variation, the segment address and the DDX field may be separated into different fields, and additional flags provided to meet alternate implementations.
Fig. 12B depicts the arrangement of the edge records described above and used in the edge processing module 400. A new active edge record 428, a current active edge record 430 and a spill active edge record 432, supplements the edge pool 412. As seen in Fig. 12B, the records 402, 404, 406, 404' and 406' are dynamically variable in size depending upon the number of edges being rendered at any one time. Each record includes a limit value which, for the case of the new edge list 402, is determined by a SIZE value incorporated with the LOAD_EDGES_AND_RENDER instruction. When 687880.doc 35 o such an instruction is encountered, SIZE is checked and if non-zero, the address of the new edge record is loaded and a limit value is calculated which determines a limiting size Z for each of the lists 402, 404, 406, 404' and 406'.
SAlthough the described arrangement utilizes arrays and associated pointers for the handling of edge records, other implementations, such as linked lists for example may Cc be used. These other implementations may be hardware or software-based, or combinations thereof SThe specific rendering of the image 78 shown in Fig. 8A will now be described with reference to scanlines 34, 35 and 36 shown in Fig. 10. In this example, the calculation of the new X coordinate for the next scanline is omitted for the purposes of clarity, with Figs. 12C to 121 illustrating the output edge crossing being derived from one of the registers 428, 430 and 432 of the edge pool 412.
Fig. 12C illustrates the state of the lists noted above at the end of rendering scanline 34 (the top portion of the semi-transparent blue triangle 80). Note that in scanline 34 there are no new edges and hence the list 402 is empty. Each of the main edge lists 404 and next main edge list 404' include only the edges 82 and 84. Each of the lists includes a corresponding pointer 434, 436, and 440 which, on completion of scanline 34, points to the next vacant record in the corresponding list. Each list also includes a limit pointer 450, denoted by an asterisk which is required to point to the end of the corresponding list. If linked lists were used, such would not be required as linked lists include null pointer terminators that perform a corresponding function.
As noted above, at the commencement of each scanline, the next main edge list 404' and the main edge list 404 are swapped and new edges are received into the new edge list 402. The remaining lists are cleared and each of the pointers set to the first member of each list. For the commencement of scanline 35, the arrangement then appears as seen in Fig. 12D. As is apparent from Fig. 12D, the records include four active edges which, from Fig. 10, are seen to correspond to the edges 92, 94, 84 and 82.
Referring now to Fig. 12E, when rendering starts, the first segment of the new edge record 402 is loaded into the active edge record 428 and the first active edge records of the main edge list 404 and spill edge list 406 are copied to records 430 and 432 respectively. In this example, the spill edge list 406 is empty and hence no loading takes place. The X-positions of the edges within the records 428, 430 and 432 are then compared and an edge crossing is emitted for the edge with the smallest X-position. In this case, the emitted edge is that corresponding to the edge 92, which is output together with its priority value. The pointers 434, 436 and 438 are then updated to point to the next record in the list.
The edge for which the edge crossing was emitted is then updated (in this case by adding DX = 0 to its position), and buffered to the edge pool 412 which, in this example, is sized to retain three edge records. The next entry in the list from which the emitted edge arose (in this case the new edge list 402) is loaded into the corresponding record (in this case record 428). This is seen in Fig. 12F.
Further, as is apparent from Fig. 12F, a comparison between the registers 428, 430 and 432 again selects the edge with the least X-value, which is output as the appropriate next edge crossing (X=85, P=2). Again, the selected output edge is updated and added to the edge pool 412 and all the appropriate pointers incremented. In this case, the updated value is given by X ← X + DX, which is evaluated as 84 = 85 - 1. Also, as seen, the new edge pointer 434 is moved, in this case, to the end of the new edge list 402.
In Fig. 12G, the next edge identified with the lowest current X-value is again that obtained from the register 430, which is output as an edge crossing (X=115, P=2).
Updating of the edge again occurs, with the value being added to the edge pool 412 as shown. At this time, it is seen that the edge pool 412 is now full, from which the edge with the smallest X-value is selected and emitted to the output list 404', and the corresponding limit pointer moved accordingly.
As seen in Fig. 12H, the next lowest edge crossing is that from the register 428, which is output (X=160, P=1). The edge pool 412 is again updated and the next smallest X-value emitted to the output list 404'.
At the end of scanline 35, and as seen in Fig. 12I, the contents of the edge pool 412 are flushed to the output list 404' in order of smallest X-value. As seen in Fig. 12J, the next main edge list 404' and the main edge list 404 are swapped by exchanging their pointers in anticipation of rendering the next scanline 36. After the swapping, it is seen from Fig. 12J that the contents of the main edge list 404 include all edges current on scanline 36 arranged in order of X-position, thereby permitting their convenient access, which facilitates fast rendering.
Ordinarily, new edges are received by the edge processing module 400 in order of increasing X-position. When a new edge arrives, its position is updated (calculated for the next scanline to be rendered) and this determines further action as follows: if the updated position is less than the last X-position output on the line 498, the new edge is insertion sorted into the main spill list 406 and the corresponding limit register updated; otherwise, if there is space, it is retained in the edge pool 412.
As is apparent from the foregoing, the edge pool 412 aids in the updating of the lists in an ordered manner in anticipation of rendering the next scanline in the rasterised image. Further, the size of the edge pool 412 may be varied to accommodate larger numbers of non-ordered edges. However, it will be appreciated that in practice the edge pool 412 will have a practical limit, generally dependent upon processing speed and available memory within the graphics processing system. In a limiting sense, the edge pool 412 may be omitted, which would ordinarily require the updated edges to be insertion-sorted into the next output edge list 404'. However, this situation can be avoided as a normal occurrence through the use of the spill lists mentioned above. The provision of the spill lists allows the described arrangement to be implemented with an edge pool of practical size and yet handle relatively complex edge intersections without having to resort to software-intensive sorting procedures. In those small number of cases where the edge pool and spill list are together insufficient to accommodate the edge intersection complexity, sorting methods may be used.
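The interplay between the three active edge records, the bounded edge pool and the spill list can be sketched in software. The following Python sketch is illustrative only: the tuple format, list layout and `pool_size` parameter are assumptions rather than the described hardware arrangement. Crossings are emitted in X order from the heads of the three sorted lists; each emitted edge is updated by its DX and re-sorted through a small pool, and an update that falls behind the last X already output is diverted to the spill list.

```python
import heapq

def emit_crossings(new_edges, main_edges, spill_edges, pool_size=3):
    """Emit edge crossings for one scanline in X order and build the
    ordered edge lists for the next scanline. Edges are (x, priority, dx)
    tuples; the three input lists are each assumed already sorted by x."""
    crossings = []                      # (x, priority) pairs emitted
    next_main, next_spill = [], []      # output list 404' and spill 406'
    pool = []                           # min-heap standing in for pool 412
    heads = [list(new_edges), list(main_edges), list(spill_edges)]
    while any(heads):
        # compare the heads of the three records; emit the smallest X
        src = min((h for h in heads if h), key=lambda h: h[0][0])
        x, priority, dx = src.pop(0)
        crossings.append((x, priority))
        updated = (x + dx, priority, dx)
        if next_main and updated[0] < next_main[-1][0]:
            # updated X is behind the last X already output: spill it
            next_spill.append(updated)
        else:
            heapq.heappush(pool, updated)
            if len(pool) > pool_size:   # pool full: flush its smallest X
                next_main.append(heapq.heappop(pool))
    while pool:                         # end of scanline: flush the pool
        next_main.append(heapq.heappop(pool))
    return crossings, next_main, sorted(next_spill)
```

With a pool of three records, a single edge whose update moves it behind three already-updated edges (as in Fig. 14A) lands in the spill list, while the pool alone keeps moderately disordered updates sorted.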
An example of where the spill list procedure is utilised is seen in Fig. 14A, where three arbitrary edges 60, 61 and 63 intersect an arbitrary edge 62 at a relative position between scanlines A and B. Further, the actual displayed pixel locations 64 for each of scanlines A and B are shown, which span pixel locations C to J. In the above described example, where the edge pool 412 is sized to retain three edge records, it will be apparent that such an arrangement alone will not be sufficient to accommodate three edge intersections occurring between adjacent scanlines as illustrated in Fig. 14A.
Fig. 14B shows the state of the edge records after rendering the edges 60, 61 and 63 on scanline A. The edge crossing H is that most recently emitted, and the edge pool 412 is full with the updated X-values E, G and I for the edges 60, 61 and 63 respectively for the next scanline, scanline B. The edge 62 is loaded into the current active edge record 430 and, because the edge pool 412 is full, the lowest X-value, corresponding to the edge 60, is output to the output edge list 404'.
In Fig. 14C, the next edge crossing is emitted (X = J for the edge 62) and the corresponding updated value determined, in this case X = C for scanline B. Because the new updated value X = C is less than the most recent value X = E copied to the output list 404', the current edge record and its corresponding new updated value is transferred directly to the output spill list 406'.
Fig. 14D shows the state of the edge records at the start of scanline B, where it is seen that the main and output lists, and their corresponding spill components, have been swapped. To determine the first emitted edge, the edge 60 is loaded into the current active edge register 430 and the edge 62 is loaded into the spill active edge register 432. The X-values are compared and the edge 62 with the least X-value (X = C) is emitted, updated and loaded to the edge pool 412.
Edge emission and updating continues for the remaining edges in the main edge list 404 and, at the end of the scanline, the edge pool 412 is flushed to reveal the situation shown in Fig. 14E, where it is seen that each of the edges 60 to 63 is appropriately ordered for rendering on the next scanline, having been correctly emitted and rendered on scanline B.
As will be apparent from the foregoing, the spill lists provide for maintaining edge rasterisation order in the presence of complex edge crossing situations. Further, by virtue of the lists being dynamically variable in size, large changes in edge intersection numbers and complexity may be handled without the need to resort to sorting procedures in all but exceptionally complex edge intersections.
Preferably, the edge pool 412 is sized to retain eight edge records and the lists 404, 404' together with their associated spill lists 406, 406' have a base (minimum) size of 512 bytes which is dynamically variable thereby providing sufficient scope for handling large images with complex edge crossing requirements.
PRIORITY DETERMINATION MODULE

The operation of the priority determination module 500 will now be described with reference to Fig. 5. The primary function of the priority determination module 500 is to determine those objects that make a contribution to a pixel currently being scanned, order those contributing objects in accordance with their priority levels, and generate colour composite messages for instructing the pixel compositing module 700 to composite the ordered objects to generate the required colour and opacity for the current pixel.
The priority determination module 500 receives incoming messages 498 from the edge processing module 400. These incoming messages may include load priority data messages, load fill data messages, edge crossing messages, and end of scanline messages. These messages first pass through a first-in first-out (FIFO) buffer 518 before being read by a priority update module 506. The FIFO 518 acts to de-couple the operation of the edge processing module 400 and the priority determination module 500.
Preferably, the FIFO 518 is sized to enable the receipt from the edge processing module 400, and transfer, of a full scanline of edge crossings in a single action. This permits the priority determination module 500 to correctly handle multiple edge crossings at the same pixel location.
The priority determination module 500 is also adapted to access a priority state table 502, and a priority data table 504. These tables are used to hold information about each priority. Preferably, the priority state and priority data tables 502, 504 are combined in memory as a single level activation table 530, as shown in Fig. 18. Alternatively these tables 502, 504 can be kept separate.
Preferably, the priority properties and status table 34 includes at least the following fields, as shown in Fig. 18, for each priority level:

(i) a fill-rule flag (FILL_RULE_IS_ODD_EVEN) which indicates whether this priority is to have its inside versus outside state determined by the application of the odd-even fill rule or the non-zero winding fill rule;
(ii) a fill counter (FILL COUNT) for storing a current fill count which is modified in a manner indicated by the fill rule each time an edge affecting this priority is crossed;
(iii) a clipper flag (CLIPPER) which indicates whether this priority is to be used for clipping or filling;
(iv) a clip type flag (CLIP_OUT) which, for edges which have the clipper flag set, records whether the clipping type is a "clip-in" or a "clip-out";
(v) a clip counter (CLIP COUNT) for storing a current clip count which is decremented and incremented when a clip-in type clip region affecting this priority is entered and exited respectively, and incremented and decremented when a clip-out type clip region affecting this priority is entered and exited respectively;
(vi) a flag (NEED_BELOW) which records whether this priority requires levels beneath it to be calculated first, referred to as the "need-below" flag;
(vii) a fill table address (FILL INDEX), which points to an address where the fill of the priority is stored;
(viii) a fill type (FILL TYPE);
(ix) a raster operation code (COLOUR_OP);
(x) an alpha channel operation code (ALPHA_OP) consisting of three flags (LAO_USE_D_OUT_S, LAO_USE_S_OUT_D and LAO_USE_S_ROP_D);
(xi) a stack operation code (STACK_OP);
(xii) a flag (X_INDEPENDENT) which records whether the colour of this priority is constant for a given Y, referred to here as the "x-independent" flag; and
(xiii) other information (ATTRIBUTES) of the priority.
Clipping objects are known in the art and act not to display a particular new object, but rather to modify the shape of another object in the image. Clipping objects can also be turned-on and turned-off to achieve a variety of visual effects. For example, the object 80 of Fig. 8A could be configured as a clipping object acting upon the object 90 to remove that portion of the object 90 that lies beneath the clipping object 80. This may have the effect of revealing any object or image beneath the object 90 and within the clipping boundaries that would otherwise be obscured by the opacity of the object 90. The CLIPPER flag is used to identify whether the priority is a clipping object. Also, the CLIP_OUT flag is used to determine whether the priority is a clip-in or a clip-out, and the CLIP COUNT is used in a similar fashion to FILL COUNT to determine whether the current pixel is within the clip region.
Figs. 13A and 13B demonstrate the application of the odd-even and non-zero winding rules for activating objects. The relevant rule to be used is determined by means of the fill-rule flag FILL_RULE_IS_ODD_EVEN.
For the purposes of the non-zero winding rule, Fig. 13A illustrates how the edges 71 and 72 of an object 70 are allocated a notional direction, according to whether the edges are downwards-heading or upwards-heading respectively. In order to form a closed boundary, edges link nose-to-tail around the boundary. The direction given to an edge for the purposes of the fill-rule (applied and described later) is independent of the order in which the segments are defined. Edge segments are defined in the order in which they are tracked, corresponding to the rendering direction.
Fig. 13B shows a single object (a pentagram) having two downwards-heading edges 73 and 76, and three upwards-heading edges 74, 75 and 77. The odd-even rule operates by simply toggling a Boolean value in the FILL COUNT as each edge is crossed by the scanline in question, thus effectively turning-on (activating) or turning-off (de-activating) an object's colour. The non-zero winding rule increments and decrements a value stored in the fill counter FILL COUNT dependent upon the direction of an edge being crossed. In Fig. 13B, the first two edges 73 and 76 encountered at the scanline are downwards-heading and thus traversal of those edges increments the fill counter, to +1 and +2 respectively. The next two edges 74 and 77 encountered by the scanline are upwards-heading and accordingly decrement the fill counter FILL COUNT, to +1 and 0 respectively. The non-zero winding rule operates by turning-on (activating) an object's colour when the fill counter FILL COUNT is non-zero, and turning-off (de-activating) the object's colour when the fill counter FILL COUNT is zero.
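Both fill rules reduce to a small per-crossing update of FILL COUNT, and the pentagram example can be checked directly. A minimal sketch (the function name and the +1/-1 direction encoding are assumptions for illustration):

```python
def activations(edge_directions, rule):
    """Return the object's active state after each edge crossing on a
    scanline. edge_directions holds +1 for a downwards-heading edge and
    -1 for an upwards-heading edge."""
    fill_count = 0
    states = []
    for direction in edge_directions:
        if rule == "odd_even":
            # toggle a Boolean: non-zero becomes 0, zero becomes non-zero
            fill_count = 0 if fill_count else 1
        else:  # non-zero winding
            fill_count += direction
        states.append(fill_count != 0)
    return states
```

For the pentagram scanline of Fig. 13B (edges 73, 76, 74, 77, i.e. directions +1, +1, -1, -1), the winding rule yields counts +1, +2, +1, 0, so the colour stays on until the final crossing, whereas the odd-even rule toggles off between the second and third crossings.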
The NEED_BELOW flag for a priority is established by the driver software and is used to inform the pixel generating system that any active priorities beneath the priority in question do not contribute to the pixel value being rendered, unless the flag is set. The flag is cleared where appropriate to prevent extra compositing operations that would otherwise contribute nothing to the final pixel value.
The raster operation code (COLOUR_OP), alpha channel operation (ALPHA_OP) and stack operation (STACK_OP) together form the pixel operation (PIXEL_OP) that is to be performed by the pixel compositing module 700 on each pixel where the priority is active and exposed.
Preferably, most of the information contained in the combined table 34 is directly loaded by instructions from the driver software. In particular, the fill-rule flag, the clipper flag, the clip type flag, the need-below flag, fill table address, fill type, raster operation code, alpha channel operation code, stack operation code, x-independent flag, and other attributes may be handled in this manner. On the other hand, the fill counter and clip counter are initially zero and are changed by the priority determination module 500 in response to edge crossing messages.
The priority determination module 500 determines that a priority is active at a pixel if the pixel is inside the boundary edges which apply to the priority, according to the fill-rule for that priority, and the clip count for the priority. A priority is exposed if it is the uppermost active priority, or if all the active priorities above it have their corresponding need-below flags set. In this fashion, pixel values may be generated using only the fill data of the exposed priorities. It is important to note that an object's priority designates the level location of the object in the z-order of the objects, from the rearmost object to the foremost object. Preferably, a number of non-overlapping objects that have the same fill and compositing operation, and that form a contiguous sequence, may be designated as having the same priority. This effectively saves memory space in the fill table. Furthermore, the corresponding edge records of objects need only reference the corresponding priority in order to reference the corresponding fill and compositing operation.
Returning now to Fig. 5, the priority update module 506 maintains a counter 524 which records the scanline intersection coordinate up to which it has completed processing. This will be referred to as the current X of the priority update module 506.
The initial value at the start of a scanline is zero.
Upon examining an edge crossing message received at the head of the FIFO 518, the priority update module 506 compares the X intersection value in the edge crossing message with its current X. If the X intersection value in the edge crossing message is less than or equal to the current X, the priority update module 506 processes the edge crossing message. Edge crossing message processing comes in two forms. "Normal edge processing" (described below) is used when the record in the priority state table 502 indicated by the priority in the edge crossing message has a clipper flag which indicates that this is not a clip priority. Otherwise, "clip edge processing" (described below) is performed.
"Normal edge processing" includes, for each priority in the edge crossing message and with reference to fields of the record of the combined table 34 indicated by that priority, the steps of:

(i) noting the current fill count of the current priority;
(ii) either:
(a) if the fill rule of the current priority is odd-even, setting the fill count to zero if it is currently non-zero, else setting it to any non-zero value, or
(b) if the fill rule of the current priority is non-zero winding, incrementing or decrementing (depending on the edge direction flag) the fill count; and
(iii) comparing the new fill count with the noted fill count and, if one is zero and the other is non-zero, performing an "active flag update" (described below) operation on the current priority.
Some arrangements may use a separate edge crossing message for each priority rather than placing a plurality of priorities in each edge crossing message.
An active flag update operation includes first establishing a new active flag for the current priority. The active flag is non-zero if the fill count for the priority in the priority state table 502 is non-zero and the clip count for the priority is zero; otherwise the active flag is zero. The second step in the active flag update operation is to store the determined active flag in an active flags array 508 at the position indicated by the current priority and then, if the need-below flag in the priority state table for the current priority is zero, also to store the active flag in an opaque active flags array 510 at the position indicated by the current priority.
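Normal edge processing and the active flag update can be sketched together as follows. The dictionary-based table row and its field names are hypothetical stand-ins for a level activation table record, not the described hardware layout:

```python
def normal_edge_crossing(level, table, active, opaque_active, direction):
    """Apply one edge crossing to one priority level and refresh the
    active flag arrays when the activation state changes."""
    row = table[level]
    noted = row["fill_count"]                 # step (i): note the old count
    if row["odd_even"]:                       # step (ii)(a): odd-even toggle
        row["fill_count"] = 0 if noted else 1
    else:                                     # step (ii)(b): non-zero winding
        row["fill_count"] = noted + direction
    # step (iii): run the active flag update only if one count is zero
    # and the other is non-zero
    if (noted == 0) != (row["fill_count"] == 0):
        flag = row["fill_count"] != 0 and row["clip_count"] == 0
        active[level] = flag
        if not row["need_below"]:             # opaque level: mirror the flag
            opaque_active[level] = flag
```

Crossing a downwards edge of an opaque non-zero-winding level sets both flag arrays; crossing the matching upwards edge clears them again.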
"Clip edge processing" includes, with reference to fields of the priority state table record indicated by the first priority in the edge crossing message, the steps of: noting the current fill count of the current priority; 687880.doc 1 -46- (ii) either: if the fill rule of the current priority is odd-even, setting the
O
Z fill count to zero if it is currently non-zero else setting it to any non-zero value, or N, if the fill rule of the current priority is non-zero winding, C* 5 incrementing or decrementing (depending on the edge direction flag) the fill count; and (iii) comparing the new fill count with the noted fill count and determining a clip delta value of: S(a) zero, if both the new fill count is zero and the noted fill count is zero, or both the new fill count is non-zero and the noted fill count is non-zero, plus one, if the clip type flag of the current priority is clipout and the noted fill count is zero and the new fill count is non-zero, or the clip type flag of the current priority is clip-in and the noted fill count is non-zero and the new fill count is zero, or otherwise, minus one; and (iv) for every subsequent priority after the first in the edge crossing message, add the determined clip delta value to the clip count in the record in the priority state stable indicated by that subsequent priority, and if the clip count either moved from non-zero to zero, or from zero to non-zero in that process, performing an active flag update operation as described above on that subsequent priority. It should be noted that the initial value of each clip count is set by the LOAD_PRIORITY_PROPERTIES instruction described previously. The clip count is typically initialised to the number of clip-in priorities, which affect each priority.
Some arrangements do not associate a priority with a clip, but instead directly increment and decrement the clip count of all priorities given in the edge crossing message. This technique can be used, for example, when clip shapes are simple and do not require the application of a complex fill rule. In this specific application, the clip count of the level controlled by an edge is incremented for an upwards-heading edge or decremented for a downwards-heading edge. A simple closed curve, described anticlockwise, acts as a clip-in, whereas a simple closed curve, described clockwise, acts as a clip-out.
When the X intersection value in the edge crossing message is greater than the current X of the priority update module 506, the priority update module 506 forms a count of how many pixels to generate, being the difference between the X intersection value in the edge crossing message and the current X. This count is formatted into a priority generation message, which is sent via a connection 520 to a priority generation module 516. The priority update module 506 then waits for a signal 522 from the priority generation module 516 indicating that processing for the given number of pixels has completed. Upon receipt of the signal 522, the priority update module 506 sets its current X to the X intersection value in the edge crossing message and continues processing as described above.
Upon receipt of a priority generation message 520, the priority generation module 516 performs a "pixel priority generation operation" (described below) a number of times indicated by the count it has been supplied, whereupon it signals (at 522) the priority update module 506 that it has completed the operation.
Each pixel priority generation operation includes firstly using a priority encoder 514 (eg. a 4096 to 12 bit priority encoder) on the opaque active flags array 510 to determine the priority number of the highest opaque active flag. This priority (if any) is used to index the priority data table 504 and the contents of the record so referenced are formed into a fill priority message output 598 from the priority generation module 516 and sent to the fill colour determination module 600. Further, if a priority was determined by the previous step (ie. there was at least one opaque active flag set), the determined priority is held, and is referred to as the "current priority". If no priority was determined, the current priority is set to zero. The priority generation module 516 then repeatedly uses a modified priority encoder 512 on the active flags array 508 to determine the lowest active flag which is greater than the current priority. The priority so determined (if any) is used to index the level activation table 530 and the contents of the record so referenced are formed into a fill priority message. This fill priority message is then sent via the output 598 to the fill colour determination module 600, and the determined priority is used to update the current priority. This step is repeated until there is no priority determined (that is, there is no priority flagged in the active flags which is greater than the current priority). Then the priority generation module 516 forms an end of pixel message and sends it to the fill colour determination module 600. The priority determination module 500 then proceeds to the next pixel to generate another series of fill priority messages in similar fashion.
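The two encoder passes described above amount to: take the highest opaque active level, then walk upwards emitting every active level above it. A software approximation (the encoders 512 and 514 are hardware; the list-based flag arrays here are an assumption):

```python
def pixel_priorities(active, opaque_active):
    """Return, in compositing order, the levels for which fill priority
    messages would be generated for one pixel."""
    # priority encoder 514: highest set flag in the opaque active array
    opaque_levels = [p for p, flag in enumerate(opaque_active) if flag]
    emitted = []
    current = 0
    if opaque_levels:
        current = max(opaque_levels)
        emitted.append(current)
    # modified encoder 512: repeatedly the lowest active flag > current
    for level in range(current + 1, len(active)):
        if active[level]:
            emitted.append(level)
    return emitted
```

For the running example of an opaque red object at level 1 beneath a semi-transparent triangle at level 2, only levels 1 and 2 are emitted: everything hidden below the opaque level is skipped.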
Turning now to Fig. 22A, there is shown an example of such a series of fill priority messages 2200 generated by the priority determination module 500 for a single current pixel. As described above, these fill priority messages 2202 are first preceded by a START_OF_PIXEL command 2201. The fill priority messages 2202 are then sent in priority order commencing with the lowest exposed active priority level. When there are no more fill priority messages 2202 for the current pixel, the priority determination module 500 then sends an END_OF_PIXEL message 2206.
Each of these fill priority messages 2202 preferably includes at least the following fields:

(i) An identifier code FILL_PRTY 2204 for identifying the message as a fill priority message. This code also includes an index LEVEL_INDX to the corresponding record in the level activation table 530, and also a code FIRST_PIXEL indicating whether or not this fill priority message belongs to a first pixel in a run of pixels having the same fill priority messages. The priority determination module 500 asserts the FIRST_PIXEL code for all those fill priority messages of a currently scanned pixel that is intersected by an edge as indicated by the edge crossing messages. The FIRST_PIXEL code is de-asserted for all fill priority messages of a currently scanned pixel if there are no edges intersecting that pixel as indicated by the edge crossing messages.
(ii) A fill table address FILL_INDEX;
(iii) A fill type FILL_TYPE;
(iv) A raster operation code COLOR_OP;
(v) An alpha channel operation code ALPHA_OP;
(vi) A stack operation code STACK_OP; and
(vii) A flag XIND which records whether the colour of this priority is constant for a given Y, referred to here as the "x-independent" flag. This flag is asserted when the colour for this priority is constant.
The values of fields (ii) to (vii) for the fill priority message are retrieved from the corresponding record in the combined table 34.
Preferably, the priority generation module 516 notes the value of the x-independent flag of each fill priority message that it forwards to the fill colour determination module 600 while it processes the first pixel of a sequence. If all the forwarded messages have the x-independent flag specified, all subsequent messages in the span of pixels between adjacent edge intersections can be replaced by a single repeat specification of count minus one. This is done by producing a repeat message and sending it to the fill colour determination module 600 in place of all further processing in this sequence. It will be appreciated that if all the fill priority messages of a first pixel in a span of pixels between adjacent edges have their x-independent flag asserted, then the colour and opacity of the pixels in the span of pixels will be constant. Thus, in these cases, the pixel compositing module 700 need only composite the first pixel in the span of pixels to generate the required constant colour and opacity and pass this onto the pixel output module 800. The generated repeat command is then passed to the pixel output module 800, which reproduces the constant colour and opacity for the subsequent pixels in the span of pixels from the colour and opacity of the first pixel. In this fashion, the number of compositing operations performed by the pixel compositing module 700 is reduced.
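The repeat optimisation hinges on a single check over the first pixel's messages. A sketch, with an assumed dictionary representation of a fill priority message:

```python
def span_messages(first_pixel_msgs, span_count):
    """Collapse a span of pixels between adjacent edge intersections into
    the first pixel's fill priority messages plus one repeat message when
    every message is x-independent; otherwise each pixel is composited."""
    if span_count > 1 and all(m["x_independent"] for m in first_pixel_msgs):
        # colour is constant across the span: composite once, repeat n-1
        return [("PIXEL", first_pixel_msgs), ("REPEAT", span_count - 1)]
    return [("PIXEL", first_pixel_msgs)] * span_count
```

A 30-pixel span of flat fills thus costs one compositing pass and one repeat, while a span containing any x-dependent fill (a blend, for instance) is composited pixel by pixel.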
As another preferred feature to the basic operation described above, the priority generation module 516 sends the highest opaque priority via the connection 522 to the priority update module 506 after each edge crossing message. The priority update module 506 holds this in a store 526. The priority update module 506 then, instead of a simple test that the X intersection in the message is greater than the current X, performs a test that the X intersection in the message is greater than the current X and that at least one of the levels in the message is greater than or equal to the highest opaque priority, before producing a fill priority message. By doing this, fewer pixel priority determination operations may be done and longer repeat sequences may be generated.
Using the example of the graphic objects shown in Figs. 8A, 9A and 9B, the priority update process described above can be illustrated, for scanline 35, using the edge crossings seen from Figs. 12C to 12J, as seen in Figs. 15A to 15E. Figs. 15A to 15E illustrate operation of the priority tables 502 and 504 which, in a preferred implementation, are merged into a single table (see Fig. 18), referred to as the level activation table (LAT) 530, and which is depicted together with arrays 508, 510 and encoders 512 and 514.
As seen in Fig. 15A, edge crossing messages are received in order for a scanline from the edge processing module 400 and are loaded into the table 530, which is arranged in priority order. The edge crossing messages include, in this example, an incrementing direction according to the non-zero winding rule of the edge traversal. It is possible for no entries in the level activation table 530 to be set.
The level activation table 530 includes column entries for fill count, which are determined from the edge according to the non-zero winding rule or, where appropriate, the odd-even rule. The need-below flag is a property of a priority and is set as part of the LOAD_PRIORITY_PROPERTIES instruction. The need-below flag is set for all priority levels when the table 530 is loaded. Other columns such as "clip count" and "fill index table" may be used, but for this example are omitted for simplicity of explanation. Where no level is active the corresponding entries are set to zero. Further, the values of the arrays 510 and 508 are updated from the table 530 after receiving a subsequent edge crossing.
From Fig. 15A it will be apparent that a number of records have been omitted for clarity. As described previously, the contents of the table 530, where not used in the priority determination module 500, are passed as messages to each of the fill colour determination module 600 for pixel generation, and to the pixel compositing module 700 for compositing operations.
The first edge crossing for scanline 35 (Fig. 12E) is seen in Fig. 15A where, for P=1, the fill count is updated to the value of the edge according to the non-zero winding rule. The "need-below" flag for this level has been set to zero by the driver software as the object in question is opaque.
Because a previous state of the table 530 was not set, the arrays 510 and 508 remain not set and the priority encoder 514 is disabled from outputting a priority. This is interpreted by the priority generation module 516, which outputs a count n=40 (pixels) for a "no object" priority (eg: P=0), being the first, blank, portion of the scanline.

Fig. 15B shows the arrangement when the edge crossing of Fig. 12F is received. The fill count is updated. The arrays 510 and 508 are then set with the previous highest level from the table 530. At this time, the module 516 outputs a count n=45, P=1, representing the edge 96 of the opaque red object 90 before intersection with the semi-transparent triangle 80.

Fig. 15C shows the arrangement when the edge crossing of Fig. 12G is received.
Note that the fill count has been adjusted downwardly because of the non-zero winding rule. Because the object that is valid prior to receiving the current edge crossing is not opaque, the modified priority encoder 512 is used to select the priority P=2 as the highest active level, which is output as current for n=(115-85)=30 pixels.
Fig. 15D shows the arrangement when the edge crossing of Fig. 12H is received. Note that the previously changed "need-below" for P=2 has been transferred to the active array 508, thus permitting the priority encoder to output a value P=1 current for n=(160-115)=45 pixels.
Fig. 15E shows the result when the edge crossing of Fig. 12I is received, providing for an output of P=0 for n=(180-160)=20 pixels.
As such, the priority module 500 outputs counts of pixels and corresponding priority display values for all pixels of a scanline.
FILL COLOUR DETERMINATION MODULE
The next module in the pipeline is the fill colour determination module 600, the operation of which will now be described with reference to Fig. 6. Incoming messages 598 from the priority determination module 500, which include set fill data messages, repeat messages, fill priority messages, end of pixel messages, and end of scanline messages, first pass to a fill lookup and control module 604. The fill lookup and control module 604 maintains a current X position counter 614 and a current Y position counter 616 for use by various components of the fill colour determination module 600.
Upon receipt of an end of scanline message, the fill lookup and control module 604 resets the current X counter 614 to zero and increments the current Y counter 616. The end of scanline message is then passed to the pixel compositing module 700.
Upon receipt of a set fill data message, the fill lookup and control module 604 stores the data in the specified location 602 of the fill data table 36.
Upon receipt of a repeat message, the fill lookup and control module 604 increments the current X counter 614 by the count from the repeat message. The repeat message is then passed to the pixel compositing module 700.
Upon receipt of an end of pixel message 2206, the fill lookup and control module 604 again increments the current X counter 614, and the end of pixel message is then passed to the pixel compositing module 700.
Upon receipt of a fill priority message, the fill lookup and control module 604 performs operations which include:
(i) the fill type from the fill priority message is used to select a record size in the fill data table 36;
(ii) the fill table address from the fill priority message, and the record size as determined above, is used to select a record from the fill data table 36;
(iii) the fill type from the fill priority message is used to determine and select a sub-module to perform generation of the fill colour. The sub-modules may include a raster image module 606, a flat colour module 608, a linearly ramped colour module 610, and an opacity tile module 612;
(iv) the determined record is supplied to the selected sub-module 606-612;
(v) the selected sub-module 606-612 uses the supplied data to determine a colour and opacity value;
(vi) the determined colour and opacity is combined with remaining information from the fill colour message, namely the raster operation code, the alpha channel operation code, and the stack operation code, to form a colour composite message 2208, which is sent to the pixel compositing module 700 via the connection 698.
Thus, a message sequence 2200 of Fig. 22A starting with a start of pixel message 2201, then fill priority messages 2202 followed by an end of pixel message 2206, is transformed into a message sequence 2212 of Fig. 22B comprising a start of pixel message 2201, colour composite messages 2208 followed by an end of pixel message 2206. These colour composite messages 2208 preferably include the same fields as the fill priority messages 2202, with the following exceptions:
(i) a code CLR_CMP 2210 for identifying the message as a colour composite message. This CLR_CMP code also includes the index to the corresponding record in the level activation table 530;
(ii) a colour and opacity field for containing the colour and opacity value of the priority. The latter replaces the fill index and fill type fields of the fill priority messages.
In the preferred arrangement, the determined colour and opacity is a red, green, blue and opacity quadruple with 8-bit precision in the usual manner, giving 32 bits per pixel. However, a cyan, magenta, yellow and black quadruple with an implied opacity, or one of many other known colour representations, may alternatively be used. The red, green, blue and opacity case is used in the description below, but the description may also be applied to other cases.
The operation of the raster image module 606, the flat colour module 608, the linearly ramped colour module 610, and the opacity tile module 612 will now be described.
Flat fills
The flat colour module 608 interprets the supplied record as a fixed format record containing three 8-bit colour components (typically interpreted as red, green and blue components) and an 8-bit opacity value (typically interpreted as a measure of the fraction of a pixel which is covered by the specified colour, where 0 means no coverage, that is complete transparency, and 255 means complete coverage, that is, completely opaque). This colour and opacity value is output directly via the connection 698 and forms the determined colour and opacity without further processing.
Linear Ramp Fills by Conventional Method
In one arrangement, the linearly ramped colour module 610 interprets the supplied record as a fixed format record containing four sets of three constants, cx, cy, and d, the four sets being associated with the three colour and one opacity components. For each of these four sets, a result value r is computed by combining the three constants with the current X count, x, and the current Y count, y, using the formula:

    r = clamp(cx*x + cy*y + d)

where the function "clamp" is defined as:

    clamp(x) = 255      if 255 <= x
             = floor(x) if 0 <= x < 255
             = 0        if x < 0

The four results so produced are formed into a colour and opacity value. This colour and opacity value is output directly via the connection 698 and forms the determined colour and opacity without further processing.
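The conventional per-pixel evaluation above can be sketched as follows. The function names and the list-of-triples record layout are illustrative conveniences for this example; only the arithmetic (r = clamp(cx*x + cy*y + d) per channel, with a truncating clamp) follows the description.

```python
import math

def clamp(v):
    """Clamp to an 8-bit channel value; the middle case truncates the
    fraction, matching the floor in the definition above."""
    if v < 0:
        return 0
    if v >= 255:
        return 255
    return math.floor(v)

def linear_ramp_pixel(record, x, y):
    """Evaluate r = clamp(cx*x + cy*y + d) for each of the four sets of
    constants (three colour channels and opacity)."""
    return tuple(clamp(cx * x + cy * y + d) for (cx, cy, d) in record)
```

Note that every pixel costs two multiplications per channel; the Bresenham-style method described next replaces this with constant integer additions.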
Linear Ramp Fills using a Bresenham Approach
Bresenham's line algorithm is a well-known algorithm for drawing lines with integer endpoints on a raster display. A related approach, as described below, may be used to calculate linear ramp fills in the rendering apparatus 20. Initially, Bresenham's line algorithm is described as essential background. Bresenham's line algorithm has two advantages: it uses a decision value that can be calculated by repeated addition and subtraction of constant-valued integers, and it avoids the issue of truncated fractions by using a decision value that is calculated without involving any division.
An abstract line must be approximated for display on a raster device. A set of pixels is switched on to provide a representation of the original line. Once a pixel has been added to the set, it is necessary to decide which of the adjacent pixels should be added to the set representing the line. In general there are eight possible neighbours of any pixel, but the slope of the line may be used to bring this down to two possibilities according to the octant in which the slope of the line lies. This decision only needs to be made once for any line.
This is illustrated in Fig. 25 and Fig. 26. A pixel 2502 has eight adjacent pixels including pixels 2504. The pixel coordinate space is defined by an x-axis 2506, which defines position along a scanline, and a y-axis 2508, which defines the scanline. A line 2612 is drawn on the pixel display. Pixel 2610 on scanline 2620 has been determined to be part of the set of pixels representing the line 2612. It is then necessary to select a pixel on the following scanline 2621 to add to the set. Because the slope of line 2612 falls within the octant represented by triangle 2624, the choice of pixel on scanline 2621 becomes a choice between pixel 2614 and pixel 2616. A determining factor is the relative distance from the actual line crossing 2618 to each pixel location.
Bresenham's line algorithm uses a decision variable designed to compare the distances d1 and d2. Let line 2612 have integer endpoints A = (xA, yA) and B = (xB, yB), such that yB > yA, xB > xA, and (yB - yA) >= (xB - xA). The last condition restricts the line 2612 to the octant 2624 shown in Fig. 26.
The distance between the left candidate 2614 and the ideal crossing 2618 is:

    d1 = x - x(k)    (Equation 4)

where x is the position of the intersection of the line 2612 with the next scanline 2621, and x(k) is the pixel position of pixel 2610 on the current scanline 2620. The distance between the right candidate 2616 and the ideal position 2618 is:

    d2 = x(k) + 1 - x    (Equation 5)

and

    d1 - d2 = 2x - 2x(k) - 1    (Equation 6)

So, if d1 - d2 > 0, the right candidate 2616 is chosen (ie x(k+1) = x(k) + 1); otherwise the left candidate 2614 is chosen (ie x(k+1) = x(k)). The ideal position of x on scanline y is a solution of:

    x = (y - yA)(xB - xA)/(yB - yA) + xA    (Equation 7)

Note that yB - yA is a positive integer, as is xB - xA, so Equation 7 implies that x, and hence d1 - d2, is a multiple of 1/(yB - yA). Therefore:

    P = (d1 - d2)(yB - yA)

is an integer whose sign is the same as that of d1 - d2. Since the decision depends only on the sign of d1 - d2, P can be used in its place to determine which of the candidate points to choose. This ensures that only integer arithmetic is required for Bresenham's algorithm, and that there is no rounding error in the calculation.
The decision variable, P, can be evaluated on scanline k using:

    P(k) = (d1 - d2)(yB - yA)
         = (2x - 2x(k) - 1)(yB - yA)
         = 2Δx·y(k) - 2Δy·x(k) + 2Δx - 2Δx·yA + 2Δy·xA - Δy    (Equation 8)

where Δy = yB - yA, Δx = xB - xA, and x has been substituted from Equation 7 with y = y(k) + 1. The last three terms in Equation 8 are constants, so the change in the decision variable from one scanline to the next is:

    P(k+1) - P(k) = 2Δx(y(k+1) - y(k)) - 2Δy(x(k+1) - x(k))
                  = 2Δx - 2Δy(x(k+1) - x(k))    (Equation 9)

so that P(k+1) = P(k) + 2Δx when x(k+1) = x(k), and P(k+1) = P(k) + 2Δx - 2Δy when x(k+1) = x(k) + 1. Equation 8, with y(1) = yA and x(1) = xA, shows that P(1) = 2Δx - Δy, allowing the decision value to be iteratively determined.
The full algorithm for the octant 2624 may be described as follows:

    d_X = x_B - x_A
    d_Y = y_B - y_A
    DP = 2*d_X
    DP1 = 2*d_X - 2*d_Y
    X = x_A
    Y = y_A
    P = 2*d_X - d_Y
    EMIT X,Y
    FOR k = 1 TO d_Y
        IF P <= 0 THEN
            P += DP
        ELSE
            X += 1
            P += DP1
        ENDIF
        Y++
        EMIT X,Y
    ENDFOR
The algorithm is simple to generalise for the other octants. Alternatively, when edge tracking from scanline to scanline, the decision value may be transformed using a shear transform, such that a decision between Δx and Δx + 1 is transformed to a decision in the first octant.
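The pseudocode above may be transcribed into a small, self-contained routine. The function name and the list-of-points return value are illustrative conveniences for this sketch rather than part of the described apparatus; the precondition comment restates the octant restriction.

```python
def bresenham_first_octant(xA, yA, xB, yB):
    """Direct transcription of the pseudocode above.
    Requires yB > yA, xB > xA and (yB - yA) >= (xB - xA) (octant 2624)."""
    d_x, d_y = xB - xA, yB - yA
    DP = 2 * d_x                 # increment when the left candidate is kept
    DP1 = 2 * d_x - 2 * d_y      # increment when x steps to the right
    x, y = xA, yA
    P = 2 * d_x - d_y            # initial decision value, P(1)
    points = [(x, y)]
    for _ in range(d_y):
        if P <= 0:
            P += DP
        else:
            x += 1
            P += DP1
        y += 1
        points.append((x, y))
    return points
```

Each emitted pixel lies within half a pixel of the ideal crossing, and only integer additions are performed inside the loop.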
Having thus described the Bresenham line algorithm, a Bresenham-type method suitable for use in the linearly ramped colour module 610 will now be described with reference to Figures 29 to 31.
Bresenham's algorithm may be applied to each channel of a linear ramp fill.
However, to ensure exactitude, a decision value must be formed from values that are guaranteed to be integers, when the endpoints are integer-valued. A decision value based on slope calculations is not adequate for colour ramp calculation over large numbers of pixels.
A more sophisticated geometric approach is required for linear colour ramps.
This also has the advantage of being more easily generalisable to more than three dimensions, and so the principles described here may find application in calculations of other kinds.
As illustrated in Fig. 27, the linear ramp for channel colour z is defined at three non-collinear points: (x0, y0, z0), (x1, y1, z1) and (x2, y2, z2), arranged such that points 0, 1, 2 are described clockwise in the x, y plane. The three non-collinear points thus form triangle 2700 in the x, y plane.
The module 610 uses a fixed format record containing x, y, and colour/opacity data for the three points on the plane of the page. The software driver is expected to ensure that these points are not collinear. Fig. 29 shows the record schematically, where each of the points (x0, y0), (x1, y1) and (x2, y2) has corresponding values for R, G, B and opacity. Together, the data in record 2900 define the linear colour ramp for use by the linearly ramped colour module 610.
An exact expression for the plane in x, y, z, where z is one of the colour channels, is:

    α(x - x0) + β(y - y0) - δ(z - z0) = 0    (Equation 10)

where:

    α = (z1 - z0)(y2 - y0) - (z2 - z0)(y1 - y0);
    β = (x1 - x0)(z2 - z0) - (x2 - x0)(z1 - z0); and
    δ = (x1 - x0)(y2 - y0) - (x2 - x0)(y1 - y0).    (Equation 11)

Note that δ is the magnitude of the directed area of a parallelogram formed on the vectors 0→1 and 0→2, projected onto the x, y plane. The points are chosen such that δ is positive. Similarly, α and β are the magnitudes of the projections of the same parallelogram onto the z, y and x, z planes, respectively.
This is illustrated in Fig. 28, which shows a parallelogram 2802 in x, y, z space.
Parallelogram 2802 is obtained from the vector between points 0 and 1 and the vector between points 0 and 2 respectively. The projection of parallelogram 2802 onto the x, y plane is parallelogram 2804, and parallelogram 2806 is the projection of 2802 onto the z, y plane. For clarity of illustration, the projection of parallelogram 2802 onto the x, z plane is not shown.
Clearly, δ, α and β are integers if the endpoints have integer coordinates. Consequently, when using integer or fixed point arithmetic, exact calculation of plane positions or colour values to the nearest integer is possible in a generalised Bresenham approach.
The coordinates may be transformed so that the decision is reduced to a choice of two candidates. Thus, if:

    ξ = ⌊α/δ⌋ and η = ⌊β/δ⌋    (Equation 12)

where the representation ⌊value⌋ = floor(value), a floor operation discarding the fractional part of the value, and we apply the transformation:

    z' = z - ξ(x - x0) - η(y - y0)    (Equation 13)

the choice of the next value in the z channel is reduced to z' or z' + 1 when stepping by 1 pixel unit in the x or y direction. Note that z' is an integer wherever z is an integer. In particular, z' is an integer at each of the three points defining the colour ramp.
Noting that the distance z(i,j) - zij is the same as z'(i,j) - z'ij, the distance between an integer candidate z'ij and the plane at (i, j) is:

    d1 = z'(i,j) - z'ij

and the distance between the next integer candidate z'ij + 1 and the plane is:

    d2 = z'ij + 1 - z'(i,j)

where z'(i,j) denotes the transformed plane position at (i, j). The decision to choose z'ij + 1 is made when d2 - d1 <= 0. Now:

    d2 - d1 = 2z'ij - 2z'(i,j) + 1
            = 2z'ij - (2/δ)(α'(i - x0) + β'(j - y0) + δz0) + 1    (Equation 14)

where:

    α' = α - ξδ, and
    β' = β - ηδ.    (Equation 15)

The decision value is:

    Pij = δ(d2 - d1) = 2α'(x0 - i) + 2β'(y0 - j) + 2δ(z'ij - z0) + δ    (Equation 16)

If Pij is positive, z'ij is chosen, and if Pij is zero or negative, z'ij + 1 is chosen.
Note that the decision values may be calculated exactly from the projected areas using only integer values if the defining data in record 2900 are integers, and using only fixed point arithmetic if the data 2900 are fixed point values. In contrast, approaches which calculate decision values based on the slopes of the lines forming the boundaries of the plane to be interpolated suffer from the fact that division cannot be exactly represented in a fixed point system, or even in a floating point system. While such alternative approaches work well enough for 3-D graphics with small numbers of pixels, printing systems require much higher precision over much larger distances.
If we are scanning in the x-direction, the change in the decision value from one point to the next is:

    Pi+1,j = Pij - 2α',       z'i+1,j = z'ij,       if Pij - 2α' > 0
    Pi+1,j = Pij - 2α' + 2δ,  z'i+1,j = z'ij + 1,   otherwise    (Equation 17)
Also, using an initial estimate of z'ij, namely z'ij(est), obtained by any means, the correct nearest integer value and the decision value at that point may be determined using Equation 16:

    Pij(est) = 2α'(x0 - i) + 2β'(y0 - j) + 2δ(z'ij(est) - z0) + δ    (Equation 18)

The decision value is then corrected so that it lies in the range [-δ, δ) as follows:

    Pij = Pij(est) - 2Nδ, where N = ⌊(Pij(est) + δ)/(2δ)⌋    (Equation 19)

The correct value for z' can then be obtained from the estimate as follows:

    z'ij = z'ij(est) - N    (Equation 20)

In the special case where the initial (x, y) position is guaranteed to lie on the line between (x0, y0) and (x1, y1), an initial value for z' on each scanline can be obtained using Bresenham's algorithm between endpoints (y0, z0) and (y1, z1). The initial values obtained in this special case do not require correction.
Having obtained an initial (in general, corrected) value for the scan, the remaining z values may be obtained using Equation 17 to decide between the candidates z'i+1,j = z'ij and z'i+1,j = z'ij + 1 at each iteration.
The above method may be used to perform a linear colour blend over an arbitrary number of colour channels, or to create an alpha gradient on the basis of sample values.
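As a sketch of this method, the routine below computes one channel of a linear ramp along a scanline using only integer additions after the initial value is found. It follows the projected-area construction and decision-value stepping described above; the function names, the use of Python's Fraction for the exact initial estimate (standing in for the estimator and corrector circuits), and the unshearing of the output are illustrative choices, not part of the described hardware.

```python
import math
from fractions import Fraction

def ramp_coefficients(p0, p1, p2):
    """Projected parallelogram areas (Equations 11) for one channel.
    p0, p1, p2 are integer (x, y, z) samples with delta positive."""
    (x0, y0, z0), (x1, y1, z1), (x2, y2, z2) = p0, p1, p2
    alpha = (z1 - z0) * (y2 - y0) - (z2 - z0) * (y1 - y0)
    beta = (x1 - x0) * (z2 - z0) - (x2 - x0) * (z1 - z0)
    delta = (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)
    return alpha, beta, delta

def scan_channel(p0, p1, p2, j, i_start, n):
    """Channel values at n consecutive pixels on scanline j from i_start."""
    x0, y0, z0 = p0
    alpha, beta, delta = ramp_coefficients(p0, p1, p2)
    xi, eta = alpha // delta, beta // delta                    # step values
    alpha_p, beta_p = alpha - xi * delta, beta - eta * delta   # primed areas

    # Exact sheared plane height at the first pixel, rounded to nearest;
    # a hardware implementation would use an estimator plus corrector.
    zp_plane = z0 + Fraction(alpha_p * (i_start - x0) + beta_p * (j - y0), delta)
    zp = math.floor(zp_plane + Fraction(1, 2))
    # Decision value for the first pixel.
    P = (2 * alpha_p * (x0 - i_start) + 2 * beta_p * (y0 - j)
         + 2 * delta * (zp - z0) + delta)

    out = []
    for k in range(n):
        i = i_start + k
        out.append(zp + xi * (i - x0) + eta * (j - y0))  # undo the shear
        P -= 2 * alpha_p                                 # step one pixel in x
        if P <= 0:                                       # plane passed midpoint
            P += 2 * delta
            zp += 1
    return out
```

Because the primed slope lies in [0, 1), the inner loop needs only an addition, a comparison and an occasional correction per pixel, and every emitted value is within half a unit of the exact plane.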
Figure 30 shows the linear ramp initial value estimator 3000 for a single colour/opacity channel. Each channel is independently calculated, so just one channel, C, is shown. Similar estimators are provided for the remaining channels.
The input to initial value estimator 3000 is record 3001, which is a subset of the data for channel C from the record 2900.
When a fill priority message corresponding to a linear ramp of this form is encountered for the first time, interpolator 3014 calculates an initial estimate of the colour/opacity, C0_est, by any convenient method. Interpolator 3014 may use standard linear interpolation, from the (x, y) position of the pixel and the position and colour/opacity data for the channel C in the fill table entry.
The same input data is also used by block 3002 to calculate the projected areas α, β, and δ in (x, y, colour) space, according to Equations 11.
Arithmetic logic blocks 3004 and 3006 use the values output by block 3002 to calculate the step values used by the linear ramp stepper, using arithmetic logic to implement Equations 12. Block 3006 calculates ξ from α and δ, and block 3004 calculates η from β and δ. The values calculated by blocks 3004 and 3006 are held in registers or memory for use in subsequent pixels in a run. Once the step values ξ and η have been calculated, block 3010 calculates the modified area coefficient α' and block 3008 calculates the modified area coefficient β'. The modified area coefficients are calculated using arithmetic logic to implement Equations 15, i.e. requiring a multiplication and a subtraction.
The decision value calculator 3016 calculates the decision value P0_est for the original estimate using arithmetic logic to implement Equation 18. The inputs to block 3016 are received from blocks 3002, 3008 and 3010. The estimated colour/opacity (C0_est from interpolator 3014) and estimated decision value (P0_est from the decision value calculator 3016) are passed to the corrector circuit 3012, where the decision value is checked against δ (received from block 3002). If the decision value lies in the range [-δ, δ), the colour and decision value are emitted unaltered. Otherwise, the decision value estimate is corrected by the corrector circuit 3012 using Equation 19 to:

    P0 = P0_est - 2Nδ

where

    N = ⌊(P0_est + δ)/(2δ)⌋

The colour/opacity estimate is corrected by corrector circuit 3012 according to Equation 20 by subtracting N from the original estimate:

    C0 = C0_est - N

The corrected colour/opacity value is output from the corrector circuit 3012 for the pixel at level commands using this fill, for the first pixel in the run of pixels.
For subsequent pixels, the iterator circuit 3100 shown in Fig. 31 is used. This is a much simpler circuit than circuit 3000 (a modified DDA), and requires fewer pipeline stages to implement. Thus subsequent pixel at level commands can be generated much faster than the first.
The inputs to circuit 3100 are the current value of C and decision value P, together with α', δ, ξ and η. Shift register 3102 multiplies α' by 2, and NOT gate 3104 changes the sign of the input. Summer 3110 adds the input P to -2α' and the sum is passed to units 3112, 3122 and 3120. Unit 3112 tests whether (P - 2α'), which is indicative of the estimated decision value for the next pixel rather than the current pixel, is greater than zero. The result is passed to units 3122 and 3116, which determine the next decision value P and channel value C respectively.
Shift register 3118 multiplies δ by 2, and unit 3120 sums the output of units 3118 and 3110 to give (P + 2δ - 2α'), which is sent to unit 3122. Unit 3122 thus determines the next value of P by selecting between (P + 2δ - 2α') and (P - 2α'), in accordance with Equation 17. Unit 3106 adds the current value of C to ξ and sends the result to units 3116 and 3114. Unit 3114 adds one and passes the output to unit 3116, which determines the next value of C by selecting between (C + ξ) and (C + ξ + 1), depending on the output of unit 3112.
The foregoing describes a hardware implementation of the method. It will be appreciated that the method may also be implemented as software running, for example, on a general-purpose computer.
The method 3200 is summarised in the flow chart of Fig. 32. The method steps 3204-3210 are applied to each of the channels. Each channel may be processed in parallel rather than sequentially.
Step 3202 determines a channel for consideration. Then step 3204 retrieves a stored decision value for the channel. For the first pixel in a run, the retrieved value may be an estimated value, for example an output of circuit 3000. In step 3206, the sign of the retrieved decision value is determined. Then, in step 3208, one of two alternative colour values is selected depending on the sign of the retrieved decision value. Step 3210 increments the decision value by one of two alternative increments, depending on the sign of the decision value. The new decision value is stored, and the method 3200 returns to step 3204 to determine the colour channel value for the next pixel.
The same method can be used for calculating depth of a plane in a 3-D vector graphics system. In this further application, rather than representing values for a colour channel, the variable z in equations 10 to 20 indicates the depth of a plane in an (x,y,z) spatial coordinate system.
Higher-dimensional linear approximation
The method of projection can be used to generalise Bresenham's algorithm to higher-dimensional problems. A practical application of such a problem is the calculation of linearly varying opacity in the voxels within a volume, where there are opacity samples on the corners of a tetrahedron in 3-D space.
The projections are the components of the dual of the tri-vector formed by three sides of the tetrahedron, arranged such that the volume projected into x, y, z space is positive.
Thus, if we have one or more parameters, t, defined on a tetrahedron that is based on the point samples (x0, y0, z0, t0), (x1, y1, z1, t1), (x2, y2, z2, t2) and (x3, y3, z3, t3), then the decision value will be based on the projected volumes:

    δ = | x1-x0  x2-x0  x3-x0 |
        | y1-y0  y2-y0  y3-y0 |
        | z1-z0  z2-z0  z3-z0 |

where the vectors are chosen so that δ is positive, and

    α = | t1-t0  t2-t0  t3-t0 |
        | y1-y0  y2-y0  y3-y0 |
        | z1-z0  z2-z0  z3-z0 |

    β = | x1-x0  x2-x0  x3-x0 |
        | t1-t0  t2-t0  t3-t0 |
        | z1-z0  z2-z0  z3-z0 |

    γ = | x1-x0  x2-x0  x3-x0 |
        | y1-y0  y2-y0  y3-y0 |
        | t1-t0  t2-t0  t3-t0 |

so that:

    δ(t - t0) = α(x - x0) + β(y - y0) + γ(z - z0)

The decision value for a choice between t and t + 1 is therefore:

    Pijk = 2α(x0 - i) + 2β(y0 - j) + 2γ(z0 - k) + 2δ(tijk - t0) + δ

As before, the decision value is an integer in the case where the sample points are integer-valued.
For the purposes of scanning, where t varies by an arbitrary step size in any dimension, we can transform as before:

    t' = t - ξ(x - x0) - η(y - y0) - ζ(z - z0)

where ξ = ⌊α/δ⌋, η = ⌊β/δ⌋ and ζ = ⌊γ/δ⌋. The decision value coefficients in the primed space are:

    α' = α - ξδ
    β' = β - ηδ
    γ' = γ - ζδ

The remainder of the procedure may be generalised in a corresponding way. For a unit step in the x-direction, the change in decision value is:

    Pi+1,jk = Pijk - 2α',       t'i+1,jk = t'ijk,       if Pijk - 2α' > 0
    Pi+1,jk = Pijk - 2α' + 2δ,  t'i+1,jk = t'ijk + 1,   otherwise

where the choice of t' value is made according to the sign of Pi+1,jk as before.
Similarly, for unit steps in the y and z directions respectively:

    Pi,j+1,k = Pijk - 2β',       t'i,j+1,k = t'ijk,       if Pijk - 2β' > 0
    Pi,j+1,k = Pijk - 2β' + 2δ,  t'i,j+1,k = t'ijk + 1,   otherwise

and

    Pij,k+1 = Pijk - 2γ',       t'ij,k+1 = t'ijk,       if Pijk - 2γ' > 0
    Pij,k+1 = Pijk - 2γ' + 2δ,  t'ij,k+1 = t'ijk + 1,   otherwise

An application of the higher-dimensional method is in 3-D graphics, in a POV-Ray style system where a ray is traced through a scene. The opacity of a scene may be defined in terms of a set of mesh values of opacity (defined at the corners of tetrahedra), and calculated for a ray using a stepping algorithm as described above within mesh tetrahedra, rather than by calculating the opacity of each voxel by interpolation.
The method can be generalised to linear interpolation over an arbitrary number of parameters in fixed precision systems, and to line integration in higher-dimensional meshed value calculations, such as those used in Finite Element Method analysis.
Opacity Tile Fills
The opacity tile module 612 interprets the supplied record as a fixed format record containing three 8-bit colour components, an 8-bit opacity value, an integer X phase (px), a Y phase (py), an X scale (sx), a Y scale (sy), and a 64 bit mask. These values originate in the display list generation and are contained typically in the original page description. A bit address, a, in the bit mask, is determined by the formula:

    a = ((x/2^sx + px) mod 8) + ((y/2^sy + py) mod 8) × 8

The bit at the address a in the bit mask is examined. If the examined bit is one, the colour and opacity from the record is copied directly to the output of the module 612 and forms the determined colour and opacity. If the examined bit is zero, a colour having three zero component values and a zero opacity value is formed and output as the determined colour and opacity.
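A minimal sketch of the tile lookup follows. The record is modelled as a dictionary whose field names are invented for this example; only the bit-address formula and the one/zero bit behaviour follow the description above.

```python
def opacity_tile_pixel(x, y, record):
    """Sample one pixel of an 8x8 opacity tile.
    'record' is a dict with illustrative field names, not the patent's
    actual record layout."""
    sx, sy = record["x_scale"], record["y_scale"]
    px, py = record["x_phase"], record["y_phase"]
    # a = ((x/2^sx + px) mod 8) + ((y/2^sy + py) mod 8) * 8
    a = (((x >> sx) + px) % 8) + (((y >> sy) + py) % 8) * 8
    if (record["mask"] >> a) & 1:
        return record["rgb"], record["opacity"]   # examined bit is one
    return (0, 0, 0), 0                           # examined bit is zero: transparent
```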
Raster Image Fills
The raster image module 606 interprets the supplied record as a fixed format record containing six constants, a, b, c, d, tx, and ty; an integer count of the number of bits (bpl) in each raster line of the raster image pixel data 16 to be sampled; and a pixel type. The pixel type indicates whether the pixel data 16 in the raster image pixel data is to be interpreted as one of:
(i) one bit per pixel black and white opaque pixels;
(ii) one bit per pixel opaque black or transparent pixels;
(iii) 8 bits per pixel grey scale opaque pixels;
(iv) 8 bits per pixel black opacity scale pixels;
(v) 24 bits per pixel opaque three colour component pixels; or
(vi) 32 bits per pixel three colour component plus opacity pixels.
Many other formats are possible.
The raster image module 606 uses the pixel type indicator to determine a pixel size (bpp) in bits. Then a bit address, a, in the raster image pixel data 16 is calculated having the formula:

    a = bpp × ⌊a·x + c·y + tx⌋ + bpl × ⌊b·x + d·y + ty⌋

A pixel interpreted according to the pixel type from the record 602 is fetched from the calculated address in the raster image pixel data 16. The pixel is expanded as necessary to have three eight bit colour components and an eight bit opacity component. By "expanded", it is meant, for example, that a pixel from an eight bit per pixel grey scale opaque raster image would have the sampled eight bit value applied to each of the red, green and blue components, and the opacity component set to fully opaque. This then forms the determined colour and opacity output 698 to the pixel compositing module 700.
As a consequence, the raster pixel data valid within a displayable object is obtained through the determination of a mapping to the pixel image data within the memory 16. This effectively implements an affine transform of the raster pixel data into the object-based image, and is more efficient than prior art methods which transfer pixel data from an image source to a frame store where compositing with graphic objects may occur.
As a preferred feature to the above, interpolation between pixels in the raster image pixel data 16 may optionally be performed by first calculating intermediate results p and q according to the formulae:

    p = a·x + c·y + tx
    q = b·x + d·y + ty

Next, the bit addresses, a00, a01, a10, and a11, of four pixels in the raster image pixel data 16 are determined according to the formulae:

    a00 = bpp × ⌊p⌋ + bpl × ⌊q⌋
    a01 = a00 + bpp
    a10 = a00 + bpl
    a11 = a00 + bpl + bpp

Next, a result pixel component value, r, is determined for each colour and opacity component according to the formula:

    r = interp(interp(get(a00), get(a01), p), interp(get(a10), get(a11), p), q)

where the function "interp" is defined as:

    interp(a, b, c) = a + (b - a) × (c - ⌊c⌋)

In the above equations, the representation ⌊value⌋ = floor(value), where a floor operation involves discarding the fractional part of the value.
As a preferred feature to the above, image tiling may optionally be performed by using x and y values in the above equations which are derived from the current X and Y counters 614,616 by a modulus operation with a tile size read from the supplied record.
Many more such fill colour generation sub-modules are possible.
PIXEL COMPOSITING MODULE The operation of the pixel compositing module 700 will now be described. The primary function of the pixel compositing module is to composite the colour and opacity of all those exposed object priorities that make an active contribution to the pixel currently being scanned.
Preferably, the pixel compositing module 700 implements a modified form of the compositing approach as described in "Compositing Digital Images", Porter, T: Duff, T; Computer Graphics, Vol 18 No 3 (1984) pp 2 5 3 2 5 9 ("Porter And Duff'). Examples of Porter and Duff compositing operations are shown in Fig. 21. However, such an approach is deficient in that it only permits handling a source and destination colour in the intersection region formed by the composite, and as a consequence is unable to accommodate the influence of transparency outside the intersecting region. One method 687880.doc 73 of overcoming this problem is to pad the objects with completely transparent pixels. Thus the entire area becomes in effect the intersecting region, and reliable Porter and Duff
O
Z compositing operations can be performed. This padding is achieved at the driver software Nlevel where additional transparent object priorities are added to the combined table.
These Porter and Duff compositing operations are implemented utilising appropriate Cc colour operations as will be described below in more detail with reference to Figs. and 19.
Preferably, the images to be composited are based on expression trees.
Expression trees are often used to describe the compositing operations required to form an image, and typically comprise a plurality of nodes including leaf nodes, unary nodes and binary nodes. A leaf node is the outermost node of an expression tree, has no descendent nodes and represents a primitive constituent of an image. Unary nodes represent an operation which modifies the pixel data coming out of the part of the tree below the unary operator. A binary node typically branches to left and right subtrees; wherein each subtree is itself an expression tree comprising at least one leaf node. An example of an expression tree is shown in Fig. 17C. The expression tree shown in Fig. 17C comprises four leaf nodes representing three objects A, B, and C, and the page. The expression tree of Fig. 17C also comprises binary nodes representing the Porter and Duff OVER operation. Thus the expression tree represents an image where the object A is composited OVER the object B, the result of which is then composited OVER object C, and the result of which is then composited OVER the page.
Turning now to Figs. 17A and 17B, there is shown a typical binary compositing operation in an expression tree. This binary operator operates on a source object (src) and a destination object (dest), where the source object src resides on the left branch and the destination object dest resides on the right branch of the expression tree. The binary operation is typically a Porter and Duff compositing operation. The area src ∩ dest represents the area on the page where the objects src and dest intersect (ie both are active), the area src ∩ ¬dest the area where only the src object is active, and the area ¬src ∩ dest the area where only the dest object is active.
The compositing operations of the expression tree are implemented by means of the pixel compositing stack 38, wherein the structure of the expression tree is implemented by means of appropriate stack operations on the pixel compositing stack 38.
Turning now to Fig. 23, there is shown the pixel compositing module 700 in more detail. The pixel compositing module 700 receives incoming messages from the fill colour determination module 600. These incoming messages include repeat messages, series of colour composite messages (see Fig. 22B), end of pixel messages, and end of scanline messages, and are processed in sequence.
The pixel compositing module 700 includes a decoder 2302 for decoding these incoming messages, and a compositor 2304 for compositing the colours and opacities contained in the incoming colour composite messages. Also included is a stack controller 2306 for placing the resultant colours and opacities on a stack 38, and an output FIFO 702 for storing the resultant colour and opacity.
During the operation of the pixel compositing module 700, the decoder 2302, upon the receipt of a colour composite message, extracts the raster operation COLOUR_OP and alpha channel operation ALPHA_OP codes and passes them to the compositor 2304. The decoder 2302 also extracts the stack operation STACK_OP and the colour and opacity values COLOUR, ALPHA of the colour composite message and passes them to the stack controller 2306. Typically, the pixel compositing module 700 combines the colour and opacity from the colour composite message with a colour and opacity popped from the pixel compositing stack 38, according to the raster operation and alpha channel operation from the colour composite message. It then pushes the result back onto the pixel compositing stack 38. More generally, the stack controller 2306 forms a source (src) and destination (dest) colour and opacity according to the stack operation specified. If at this time, or during any pop operation from the pixel compositing stack, the pixel compositing stack 38 is found to be empty, an opaque white colour value is used without any error indication. The source and destination colours and opacities are then made available to the compositor 2304, which performs the compositing operation in accordance with the COLOUR_OP and ALPHA_OP codes.
The resultant (result) colour and opacity is then made available to the stack controller 2306, which stores the result on the stack 38 in accordance with the STACK_OP code.
These stack operations are described in more detail below.
During the operation of the pixel compositing module 700, if the decoder 2302 receives an end of pixel message, it then instructs the stack controller 2306 to pop a colour and opacity from the pixel compositing stack 38. If the stack 38 is empty an opaque white value is used. The resultant colour and opacity is then formed into a pixel output message which is forwarded to the pixel output FIFO 702. If the decoder 2302 receives a repeat message or an end of scanline message, the decoder 2302 by-passes (not shown) the compositor 2304 and stack controller 2306 and forwards the messages to the pixel output FIFO 702 without further processing.
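The empty-stack convention described above — a pop from an empty pixel compositing stack yields opaque white rather than an error — can be sketched as follows. The tuple layout is an assumption made for illustration:

```python
OPAQUE_WHITE = ((255, 255, 255), 255)  # (colour, opacity) as 8-bit values

def pop_or_white(stack):
    """Pop a (colour, opacity) entry, defaulting to opaque white when the
    stack is empty, with no error indication."""
    return stack.pop() if stack else OPAQUE_WHITE

stack = []
dest = pop_or_white(stack)        # empty stack: opaque white is used
stack.append(((0, 0, 0), 255))
src = pop_or_white(stack)         # non-empty stack: a normal pop
```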
Figs. 24A, 24B, 24C, and 24D show the operation performed on the pixel compositing stack 38 for each of the various stack operation commands STACK_OP in the colour composite messages.
Fig. 24A shows the standard operation STD_OP 2350 on the pixel compositing stack 38. The source colour and opacity (src) are taken from the value in the current colour composite message, and the destination colour and opacity (dest) are popped from the top of the pixel compositing stack 38. The result of the COLOUR_OP operation performed by the compositor 2304 is pushed back onto the stack 38.
SFig 24B shows the NOPOP DEST stack operation 2370 on the pixel Scompositing stack 38. The source colour and opacity (src) is taken from the value in a Scurrent composite message for the current operation, and the destination colour and opacity (dest) is read from the top of the stack 38. The result of the COLOUROP operation performed by the compositor 2304 is pushed onto the top of the stack 38.
Fig. 24C shows the POP_SRC stack operation, where the source colour and opacity are popped from the top of the stack, and the destination colour and opacity is popped from the next level down the stack. The result of the COLOUR_OP operation performed by the compositor 2304 is pushed onto the top of the stack.
Fig. 24D shows the KEEP_SRC stack operation, where the source colour and opacity are popped from the top of the stack, and the destination colour and opacity is popped from the next level down the stack. The result of the COLOUR_OP operation performed by the compositor 2304 is pushed onto the top of the stack.
Other stack operations can be used.
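Using integers in place of colour values, the stack disciplines above can be sketched as follows. This is an illustrative model rather than the hardware implementation; the KEEP_SRC variant follows the later remark that the source value is pushed onto the stack before the result:

```python
def std_op(stack, msg, op):
    dest = stack.pop()                    # dest popped from the top
    stack.append(op(msg, dest))           # result pushed back

def no_pop_dest(stack, msg, op):
    dest = stack[-1]                      # dest read but not popped
    stack.append(op(msg, dest))           # result pushed on top of it

def pop_src(stack, op):
    src, dest = stack.pop(), stack.pop()  # src from top, dest next down
    stack.append(op(src, dest))

def keep_src(stack, op):
    src, dest = stack.pop(), stack.pop()
    stack.append(src)                     # source value retained on the stack
    stack.append(op(src, dest))           # result pushed above it

add = lambda s, d: s + d                  # stand-in for a COLOUR_OP
stack = [4, 2]
std_op(stack, 10, add)                    # stack becomes [4, 12]
no_pop_dest(stack, 1, add)                # stack becomes [4, 12, 13]
pop_src(stack, add)                       # stack becomes [4, 25]
keep_src(stack, add)                      # stack becomes [25, 29]
```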
The manner in which the compositor 2304 combines the source (src) colour and opacity with the destination (dest) colour and opacity will now be described with reference to Figs. 7A to 7C. For the purposes of this description, colour and opacity values are considered to range from 0 to 1 (ie normalised), although they are typically stored as 8-bit values in the range 0 to 255. For the purposes of compositing together two pixels, each pixel is regarded as being divided into two regions, one region being fully opaque and the other fully transparent, with the opacity value being an indication of the proportion of these two regions. Fig. 7A shows a source pixel 702 which has some three-component colour value (not shown in the Figure) and an opacity value (so). The shaded region of the source pixel 702 represents the fully opaque portion 704 of the pixel 702. Similarly, the non-shaded region in Fig. 7A represents that proportion 706 of the source pixel 702 considered to be fully transparent. Fig. 7B shows a destination pixel 710 with some opacity value (do). The shaded region of the destination pixel 710 represents the fully opaque portion 712 of the pixel 710. Similarly, the pixel 710 has a fully transparent portion 714. The opaque regions of the source pixel 702 and destination pixel 710 are, for the purposes of the combination, considered to be orthogonal to each other. The overlay 716 of these two pixels is shown in Fig. 7C. Three regions of interest exist, which include a source outside destination region 718 which has an area of so × (1 − do), a source intersect destination region 720 which has an area of so × do, and a destination outside source region 722 which has an area of (1 − so) × do. The colour value of each of these three regions is calculated conceptually independently. The source outside destination region 718 takes its colour directly from the source colour. The destination outside source region 722 takes its colour directly from the destination colour. The source intersect destination region 720 takes its colour from a combination of the source and destination colour.
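The three region areas just described can be computed directly from the two opacities. A minimal illustrative fragment (not part of the specification), with so and do the normalised source and destination opacities:

```python
def region_areas(so, do):
    """Areas of the three regions of interest in the overlay of a source
    pixel (opacity so) and a destination pixel (opacity do), treating the
    opaque portions as orthogonal coverage fractions in 0..1."""
    src_out_dest = so * (1.0 - do)   # source outside destination (718)
    src_and_dest = so * do           # source intersect destination (720)
    dest_out_src = (1.0 - so) * do   # destination outside source (722)
    return src_out_dest, src_and_dest, dest_out_src

areas = region_areas(0.5, 0.25)
```

The three areas, together with the fully transparent region (1 − so) × (1 − do), partition the whole pixel.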
The process of combining the source and destination colour, as distinct from the other operations discussed above, is termed a raster operation, and is one of a set of functions specified by the raster operation code from the pixel composite message.
Some of the raster operations included in the described arrangement are shown in Fig. 19.
Each function is applied to each pair of colour components of the source and destination colours to obtain a like component in the resultant colour. Many other functions are possible.
The alpha channel operation from the composite pixel message is also considered during the combination of the source and destination colour. The alpha channel operation is performed using three flags LAO_USE_D_OUT_S, LAO_USE_S_OUT_D, and LAO_USE_S_ROP_D, which respectively identify the regions of interest (1 − so) × do, so × (1 − do), and so × do in the overlay 716 of the source pixel 702 and the destination pixel 710. For each of the regions, a region opacity value is formed, which is zero if the corresponding flag in the alpha channel operation is not set; otherwise it is the area of the region.
The resultant opacity is formed from the sum of the region opacities. Each component of the result colour is then formed by the sum of the products of each pair of region colour and region opacity, divided by the resultant opacity.
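The flag-controlled, sum-of-regions rule above can be sketched as follows. This is a minimal model, not the hardware implementation; the flag ordering and the non-premultiplied colour convention are assumptions made for the sketch:

```python
def composite(src, dest, rop, flags=(True, True, True)):
    """Combine (colour, opacity) pairs with values in 0..1. 'flags' enables,
    in order, the dest-outside-src, src-outside-dest and intersection
    regions; 'rop' combines the two colours in the intersection region."""
    sc, so = src
    dc, do = dest
    use_d_out_s, use_s_out_d, use_s_rop_d = flags
    regions = [
        ((1.0 - so) * do if use_d_out_s else 0.0, dc),   # dest outside src
        (so * (1.0 - do) if use_s_out_d else 0.0, sc),   # src outside dest
        (so * do if use_s_rop_d else 0.0, rop(sc, dc)),  # intersection
    ]
    # Resultant opacity: sum of the enabled region opacities.
    alpha = sum(area for area, _ in regions)
    if alpha == 0.0:
        return (0.0, 0.0)
    # Result colour: opacity-weighted average of the region colours.
    colour = sum(area * c for area, c in regions) / alpha
    return (colour, alpha)

# With all three flags set and a rop that keeps the source colour, the
# combination reduces to a Porter and Duff OVER of the two pixels.
result = composite((1.0, 0.5), (0.0, 1.0), rop=lambda s, d: s)
```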
As shown in Figs. 20A and 20B, the Porter and Duff operations may be formed by suitable ALPHA_OP flag combinations and raster operators COLOUR_OP, provided that both operands can be guaranteed to be active together. Because of the way the table is read, if one of the operands is not active, then the operator will either not be performed, or will be performed with the wrong operand. Thus objects that are to be combined using Porter and Duff operations may be padded out with transparent pixels to an area that covers both objects in the operation. Other transparency operations may be formed in the same way as the Porter and Duff operations, using different binary operators as the COLOUR_OP operation.
The resultant colour and opacity is passed to the stack controller circuit and pushed onto the pixel compositing stack 38. However, if the stack operation is KEEP_SRC, the source value is pushed onto the stack before the result of the colour composite message is pushed.
When an end of pixel message is encountered, the colour and opacity value on top of the stack is formed into a pixel output message, and sent to the pixel output module 800. Repeat pixel messages are passed through the pixel compositing module 700 to the pixel output module 800.
PIXEL OUTPUT MODULE
The operation of the pixel output module 800 will now be described. Incoming messages, which include pixel output messages, repeat messages, and end of scanline messages, are read from the pixel output FIFO and processed in sequence.
Upon receipt of a pixel output message the pixel output module 800 stores the pixel and also forwards the pixel to its output. Upon receipt of a repeat message the last stored pixel is forwarded to the output 898 as many times as specified by the count from the repeat message. Upon receipt of an end of scanline message the pixel output module 800 passes the message to its output.
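The repeat-message behaviour just described amounts to a run-length expansion of the last stored pixel. A minimal sketch (the message encoding is assumed for illustration):

```python
def process(messages):
    """messages: ('pixel', value), ('repeat', count) or ('eol',)."""
    out, last = [], None
    for msg in messages:
        if msg[0] == 'pixel':
            last = msg[1]                # store the pixel
            out.append(last)             # and forward it to the output
        elif msg[0] == 'repeat':
            out.extend([last] * msg[1])  # forward the stored pixel count times
        elif msg[0] == 'eol':
            out.append('EOL')            # pass end-of-scanline through

    return out

row = process([('pixel', 7), ('repeat', 3), ('pixel', 9), ('eol',)])
```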
The output 898 may connect as required to any device that utilises pixel image data. Such devices include output devices such as video display units or printers, or memory storage devices such as hard disk, semiconductor RAM including line, band or frame stores, or a computer network. However, as will be apparent from the foregoing, a method and apparatus are described that provide for the rendering of graphic objects with full functionality demanded by sophisticated graphic description languages without a need for intermediate storage of pixel image data during the rendering process.
Industrial Applicability
It will be apparent from the above that the arrangements described are applicable to the computer graphics and printing industries.
The foregoing describes only some arrangements of the present invention, and modifications and/or changes can be made thereto without departing from the scope and spirit of the invention, the arrangements being illustrative and not restrictive.
In the context of this specification, the word "comprising" means "including principally but not necessarily solely" or "having" or "including" and not "consisting only of". Variations of the word "comprising", such as "comprise" and "comprises", have corresponding meanings.

Claims

2. A method according to claim 1 wherein the position values and colour channel values defining the three points are fixed-point values and wherein the alternative output values and the alternative increments are determined from the parameter set using fixed-point arithmetic.
3. A method according to claim 2, wherein the parameter set is derived using fixed-point arithmetic.
4. A method according to claim 2 or claim 3 wherein the pixel belongs to a run of consecutive pixels in a coordinate direction of the image.

5. A method according to claim 4 wherein the two alternative output values for a colour channel are determined as the output value for the channel from a preceding pixel in the run added to each of two alternative further increments.

6. A method according to claim 5, wherein the two alternative further increments differ by the minimum possible difference under the fixed-point arithmetic.

7. A method according to claim 4 wherein for a first pixel of the run, the decision value is not retrieved, but is determined for each colour channel by the steps of: estimating the output value by interpolating from the three points specifying the fill; estimating the decision value using the estimated output value and the parameter set; and correcting the estimated decision value and the estimated output value.

8. A method according to any one of the preceding claims wherein the position values and the colour channel values defining the three points are integer values.

9. A method according to any one of the preceding claims wherein one of the three planes is the plane of the image.

10. A method according to claim 9 wherein the parameter set comprises, for each of the three planes, a magnitude of a parallelogram in the plane, and wherein the parallelogram is a projection onto the plane of a second parallelogram, two sides of the second parallelogram being formed respectively by two vectors joining one of the three points to the other two of the three points specifying the fill.

11.
A method of determining output values of a linear function of a plurality of variables, the method comprising the steps of: receiving the linear function specified by a plurality of coordinate points each defined by a respective value for each of the variables and a corresponding output value of the linear function; calculating a parameter set based on projections of the coordinate points onto a space, wherein the dimension of the space is lower than a total number of coordinate points specifying the linear function; and determining output values at a sequence of positions in a scanning direction, wherein, for a currently-considered position of the sequence, said determining step comprises the sub-steps of: retrieving a stored decision value; selecting one of two alternative output values for the currently-considered position, wherein the selection depends on a sign of the retrieved decision value; incrementing the decision value by one of two alternative increments depending on the sign of the decision value; and storing the incremented decision value for retrieval at a next position in the sequence.

12. A method according to claim 11 wherein the plurality of coordinate points are fixed-point values and wherein the alternative output values and the alternative increments are determined using fixed-point arithmetic from the parameter set.

13. A method according to claim 12 wherein the two alternative output values are determined as the output value of a previous position in the sequence added to each of two alternative further increments.

14. A method according to claim 12 wherein the two alternative further increments differ by a minimum possible difference under the fixed-point arithmetic.

15. A method according to any one of claims 12 to 14 wherein the fixed-point arithmetic is integer arithmetic and the coordinate points are defined by integer values for the variables and integer values for the output values.

16.
A method according to any one of claims 11 to 15 wherein the linear function defines a gradient fill for an image and wherein there are three coordinate points, each point specified by two variables that define a location and an output value selected from the group consisting of: a colour value for the gradient fill, and an opacity value for the gradient fill.

17. A method according to any one of claims 11 to 15 wherein the function defines depth of a plane in a three-dimensional vector graphics system, and wherein there are three coordinate points, each point specified by two variables that specify a location and an output value that defines depth above a reference plane.

18. A method according to any one of claims 11 to 15 wherein the function defines a linearly-varying value within a volume, and wherein there are four coordinate points, each point specified by three variables that specify a spatial location and an output that defines the value of the function at the coordinate point, wherein the value is selected from the group consisting of colour and opacity.

19.
An apparatus for determining values of a linear gradient fill at a pixel of an image having one or more colour channels selected from the group consisting of colour components and opacity, wherein, for each colour channel, the fill is specified by three points each defined by two position values and a colour channel value, and wherein the apparatus comprises, for each colour channel: means for retrieving a stored decision value; means for selecting one of two alternative output values depending on a sign of the retrieved decision value; means for incrementing the decision value by one of two alternative increments depending on the sign of the decision value; and means for storing the incremented decision value for retrieval at a subsequent pixel, wherein the alternative output values and the alternative increments are determined from a parameter set based on projections of the three points onto three orthogonal planes.

20. An apparatus for determining output values of a linear function of a plurality of variables, the apparatus comprising: means for receiving the linear function specified by a plurality of coordinate points each defined by a respective value for each of the variables and a corresponding output value of the linear function; means for calculating a parameter set based on projections of the coordinate points onto a space, wherein the dimension of the space is lower than a total number of coordinate points specifying the linear function; and means for determining output values at a sequence of positions in a scanning direction, said determining means comprising: means for retrieving a stored decision value for a currently-considered position of the sequence; means for selecting one of two alternative output values for the currently-considered position, wherein the selection depends on a sign of the retrieved decision value; means for incrementing the decision value by one of two alternative increments depending on the sign of the decision value; and means for storing the incremented decision value for retrieval at a next position in the sequence.

21. A system for determining values of a linear gradient fill at a pixel of an image having one or more colour channels selected from the group consisting of colour components and opacity, wherein, for each colour channel, the fill is specified by three points each defined by two position values and a colour channel value, and wherein the system comprises: data storage for storing the three points and a decision value for each colour channel; and a processor in communication with said data storage and adapted, for each colour channel, to: retrieve the stored decision value; select one of two alternative output values depending on a sign of the retrieved decision value; increment the decision value by one of two alternative increments depending on the sign of the decision value; and store the incremented decision value in the data storage for retrieval at a subsequent pixel, wherein the alternative output values and the alternative increments are determined from a parameter set based on projections of the three points onto three orthogonal planes.

22.
A system for determining output values of a linear function of a plurality of variables, the system comprising: data storage for storing: (i) the linear function specified by a plurality of coordinate points each defined by a respective value for each of the variables and a corresponding output value of the linear function; (ii) a parameter set based on projections of the coordinate points onto a space, wherein the dimension of the space is lower than a total number of coordinate points specifying the linear function; and (iii) a decision value; and a processor in communication with said data storage and adapted to determine output values at a sequence of positions in a scanning direction, wherein, for a currently-considered position of the sequence, said processor: retrieves the stored decision value from said data storage; selects one of two alternative output values for the currently-considered position, wherein the selection depends on a sign of the retrieved decision value; increments the decision value by one of two alternative increments depending on the sign of the decision value; and stores the incremented decision value in said data storage for retrieval at a next position in the sequence.

23.
A computer program product comprising machine-readable program code recorded on a machine-readable recording medium, for controlling the operation of a data processing machine on which the program code executes to perform a method of determining values of a linear gradient fill at a pixel of an image having one or more colour channels selected from the group consisting of colour components and opacity, wherein, for each colour channel, the fill is specified by three points each defined by two position values and a colour channel value, and wherein the method comprises the steps of, for each colour channel: retrieving a stored decision value; selecting one of two alternative output values depending on a sign of the retrieved decision value; incrementing the decision value by one of two alternative increments depending on the sign of the decision value; and storing the incremented decision value for retrieval at a subsequent pixel, wherein the alternative output values and the alternative increments are determined from a parameter set based on projections of the three points onto three orthogonal planes.

24.
A computer program product comprising machine-readable program code recorded on a machine-readable recording medium, for controlling the operation of a data processing machine on which the program code executes to perform a method of determining output values of a linear function of a plurality of variables, the method comprising the steps of: receiving the linear function specified by a plurality of coordinate points each defined by a respective value for each of the variables and a corresponding output value of the linear function; calculating a parameter set based on projections of the coordinate points onto a space, wherein the dimension of the space is lower than a total number of coordinate points specifying the linear function; and determining output values at a sequence of positions in a scanning direction, wherein, for a currently-considered position of the sequence, said determining step comprises the sub-steps of: retrieving a stored decision value; selecting one of two alternative output values for the currently-considered position, wherein the selection depends on a sign of the retrieved decision value; incrementing the decision value by one of two alternative increments depending on the sign of the decision value; and storing the incremented decision value for retrieval at a next position in the sequence. 
25. A computer program comprising machine-readable program code for controlling the operation of a data processing apparatus on which the program code executes to perform a method of determining values of a linear gradient fill at a pixel of an image having one or more colour channels selected from the group consisting of colour components and opacity, wherein, for each colour channel, the fill is specified by three points each defined by two position values and a colour channel value, and wherein the method comprises the steps of, for each colour channel: retrieving a stored decision value; selecting one of two alternative output values depending on a sign of the retrieved decision value; incrementing the decision value by one of two alternative increments depending on the sign of the decision value; and storing the incremented decision value for retrieval at a subsequent pixel, wherein the alternative output values and the alternative increments are determined from a parameter set based on projections of the three points onto three orthogonal planes.

26.
A computer program comprising machine-readable program code for controlling the operation of a data processing apparatus on which the program code executes to perform a method of determining output values of a linear function of a plurality of variables, the method comprising the steps of: receiving the linear function specified by a plurality of coordinate points each defined by a respective value for each of the variables and a corresponding output value of the linear function; calculating a parameter set based on projections of the coordinate points onto a space, wherein the dimension of the space is lower than a total number of coordinate points specifying the linear function; and determining output values at a sequence of positions in a scanning direction, wherein, for a currently-considered position of the sequence, said determining step comprises the sub-steps of: retrieving a stored decision value; selecting one of two alternative output values for the currently-considered position, wherein the selection depends on a sign of the retrieved decision value; incrementing the decision value by one of two alternative increments depending on the sign of the decision value; and storing the incremented decision value for retrieval at a next position in the sequence.

27. A method of determining values of a linear gradient fill at a pixel of an image substantially as described herein with reference to Figs. 25 to 32.

28. A method of determining output values of a linear function substantially as described herein with reference to Figs. 25 to 32.

29. An apparatus for determining values of a linear gradient fill at a pixel of an image substantially as described herein with reference to Figs. 25 to 32.

30. An apparatus for determining output values of a linear function substantially as described herein with reference to Figs. 25 to 32.

31.
A system for determining values of a linear gradient fill substantially as described herein with reference to Figs. 25 to 32.

32. A system for determining output values of a linear function of a plurality of variables substantially as described herein with reference to Figs. 25 to 32.

33. A computer program product substantially as described herein with reference to Figs. 25 to 32.

34. A computer program substantially as described herein with reference to Figs. 25 to 32.

DATED this TWENTY-THIRD Day of NOVEMBER 2004
CANON KABUSHIKI KAISHA
Patent Attorneys for the Applicant
SPRUSON & FERGUSON
AU2004233469A 2004-11-24 2004-11-24 Rendering linear colour blends Abandoned AU2004233469A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2004233469A AU2004233469A1 (en) 2004-11-24 2004-11-24 Rendering linear colour blends

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
AU2004233469A AU2004233469A1 (en) 2004-11-24 2004-11-24 Rendering linear colour blends

Publications (1)

Publication Number Publication Date
AU2004233469A1 true AU2004233469A1 (en) 2006-06-08

Family

ID=36591406

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2004233469A Abandoned AU2004233469A1 (en) 2004-11-24 2004-11-24 Rendering linear colour blends

Country Status (1)

Country Link
AU (1) AU2004233469A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2009212881B2 (en) * 2009-08-31 2012-06-14 Canon Kabushiki Kaisha Efficient radial gradient fills


Similar Documents

Publication Publication Date Title
US6961067B2 (en) Reducing the number of compositing operations performed in a pixel sequential rendering system
US7046253B2 (en) Processing graphic objects for fast rasterised rendering
US6828985B1 (en) Fast rendering techniques for rasterised graphic object based images
US7714865B2 (en) Compositing list caching for a raster image processor
US7277102B2 (en) Rendering graphic object based images
US7023439B2 (en) Activating a filling of a graphical object
US7551173B2 (en) Pixel accurate edges for scanline rendering system
US7538770B2 (en) Tree-based compositing system
AU760826B2 (en) Rendering graphic object based images
AU744091B2 (en) Processing graphic objects for fast rasterised rendering
AU743218B2 (en) Fast renering techniques for rasterised graphic object based images
AU2004233469A1 (en) Rendering linear colour blends
US6570575B1 (en) Associated color texture processor for high fidelity 3-D graphics rendering
AU2005200948B2 (en) Compositing list caching for a raster image processor
AU779154B2 (en) Compositing objects with opacity for fast rasterised rendering
AU2004200655B2 (en) Reducing the Number of Compositing Operations Performed in a Pixel Sequential Rendering System
AU2005201868A1 (en) Removing background colour in group compositing
AU2004231232B2 (en) Pixel accurate edges for scanline rendering system
AU2005201929A1 (en) Rendering graphic object images
AU2002301643B2 (en) Activating a Filling of a Graphical Object
AU2004237873A1 (en) State table optimization in expression tree based compositing
AU2004233516B2 (en) Tree-based compositing system
AU2005201931A1 (en) Rendering graphic object images
AU2004231233A1 (en) Render Time Estimation

Legal Events

Date Code Title Description
MK1 Application lapsed section 142(2)(a) - no request for examination in relevant period