AU2005229627B2 - Implementing compositing operations on images - Google Patents

Implementing compositing operations on images

Info

Publication number
AU2005229627B2
Authority
AU
Australia
Prior art keywords
colour
pixel
opacity
compositing
blend
Prior art date
Legal status
Ceased
Application number
AU2005229627A
Other versions
AU2005229627A1 (en)
Inventor
Alexander Vincent Danilo
Kevin John Moore
Craig William Northway
Current Assignee
Canon Inc
Original Assignee
Canon Inc
Priority date
Filing date
Publication date
Application filed by Canon Inc
Priority to AU2005229627A
Priority to US11/551,389 (US7965299B2)
Publication of AU2005229627A1
Application granted
Publication of AU2005229627B2
Legal status: Ceased
Anticipated expiration


Description

S&FRef: 727395
AUSTRALIA
PATENTS ACT 1990
COMPLETE SPECIFICATION FOR A STANDARD PATENT
Name and Address of Applicant: Canon Kabushiki Kaisha, of 30-2, Shimomaruko 3-chome, Ohta-ku, Tokyo, 146, Japan
Actual Inventor(s): Kevin John Moore; Craig William Northway; Alexander Vincent Danilo
Address for Service: Spruson & Ferguson, St Martins Tower, Level 31, Market Street, Sydney NSW 2000 (CCN 3710000177)
Invention Title: Implementing compositing operations on images
The following statement is a full description of this invention, including the best method of performing it known to me/us:
IMPLEMENTING COMPOSITING OPERATIONS ON IMAGES
FIELD OF INVENTION
This invention relates to the display and printing of graphic objects and in particular to the modification of displayed graphic objects using blend operations.
BACKGROUND
The display and printing of graphics images by computer systems often involves some manipulation of the images, either through combining graphic elements using compositing operations or by modifying graphic elements using blend operations. Other manipulations such as colour mapping may also be performed.
In compositing, graphic objects with transparency data may be combined using operators such as the Porter and Duff transparency operators (described in "Compositing Digital Images", Porter, T.; Duff, T.; Computer Graphics, Vol. 18, No. 3 (1984), pp. 253-259), in which the opacity of a pixel is modelled as the proportion of the pixel that is covered by opaque data. When combining colour and opacity from two objects A and B, the pixel is divided into the following regions: a region of area α_A·α_B where both objects are opaque; a region of area α_A(1 − α_B) where only object A is opaque; a region of area (1 − α_A)·α_B where only object B is opaque; and a region of area (1 − α_A)(1 − α_B) where both objects are transparent, where α_A represents the opacity of object A and α_B represents the opacity of object B.
All possible combinations of the three non-transparent regions may be used as compositing operators. It is common, however, only to use the over operation, which uses all three contributing regions. The colour of the region where both objects are opaque is taken from the colour of the topmost object.
The over operator has also been used as the basis for blend operations in systems where transparency data is used. In a normalised system, the transparency of an object X is equal to (1 − α_X). In a blend operation, one of the images (the source or blending object, or blending effect, Object A) is used as a parameter of a blending, or blend, function B that modifies the lower or destination object (Object B). In the presence of transparency, the blending function B operates on the region where both objects are opaque. The blend operation based on the over operator is:

α_result·C_result = α_A(1 − α_B)·C_A + (1 − α_A)·α_B·C_B + α_A·α_B·B(C_A, C_B)
α_result = α_A(1 − α_B) + (1 − α_A)·α_B + α_A·α_B

where B(C_A, C_B) is the blending function and C_A and C_B represent the colours of Object A and Object B respectively.
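By way of illustration, the over-based blend above maps directly to code. The following C sketch operates on a single normalised colour channel; the function and parameter names are illustrative and not taken from the patent.

```c
/* Over-based blend of one normalised colour channel (a sketch).
 * a_a, c_a: opacity and colour of Object A; a_b, c_b: of Object B;
 * blend(c_a, c_b) is the blending function B. */
static void blend_over(double a_a, double c_a, double a_b, double c_b,
                       double (*blend)(double, double),
                       double *a_out, double *c_out)
{
    double pre = a_a * (1.0 - a_b) * c_a            /* A-only region  */
               + (1.0 - a_a) * a_b * c_b            /* B-only region  */
               + a_a * a_b * blend(c_a, c_b);       /* overlap region */
    *a_out = a_a * (1.0 - a_b) + (1.0 - a_a) * a_b + a_a * a_b;
    *c_out = (*a_out > 0.0) ? pre / *a_out : 0.0;   /* un-premultiply */
}
```

Note the first term, a_a * (1.0 - a_b) * c_a: it is the path by which the colour of Object A reaches the result outside the blending function, which is the leakage discussed below.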
The over operator has the useful property that it does not modify pixels of the lower object (Object B) outside the intersection of the two objects. This makes the over operator the preferred operator for framestore image systems, where a framestore memory usually contains the lower object (Object B) before the blend operation is applied.
However, a blend operation based on the over operator contains terms that include colour from both operands independently of the blending function. This is undesirable. When Object B is partially transparent, i.e. α_B < 1, the colour of Object A contributes to the colour of the resultant object through the first term (α_A(1 − α_B)·C_A) as well as through the blending function.
Figs. 15A and 15B illustrate the leakage of colour C_A using a blend operator based on the over operator. A partially transparent object 2501 (Object B), which is a uniform grey rectangle, is blended with a blending object 2502 (Object A) using the over-based blending operation known as "dodge". Object 2502 is a rectangle that is predominantly black and which includes a circular region 2503 within which the colour varies radially from a black periphery to a white centre. The result of blending objects 2501 and 2502 is object 2504 in Fig. 15B. Note that the black colour regions of Object A 2502 leak into the result.
To limit this leakage effect, a method has been used in which the destination object is combined with a white object. The combined object is then subject to the blend operation, following which the contribution of the white object to the blend result is removed. This is an inefficient procedure that still does not produce accurate results in all circumstances.
The leakage effect is particularly noticeable in the 'dodge' and 'burn' blending operations. The 'dodge' operator is intended to brighten the destination (Object B) image to reflect the blend object (Object A) colour. Performing 'dodge' with a black blending object is intended to produce no change. The 'burn' operator is intended to be complementary and darken the destination image to reflect the blending object colour.
Performing 'burn' with a white blending object is intended to produce no change.
SUMMARY OF THE INVENTION
A rendering system, which may be a framestore renderer, a bandstore renderer or a pixel sequential renderer, is provided with blending operations based on the Porter and Duff atop operator and the Porter and Duff in operator. These blending operations include colour from both operands in the result of the blend operation, but only through the term incorporating the blend function.
According to a first aspect of the present disclosure there is provided a method of compositing graphic elements in a pixel-based renderer, said method comprising the steps of: receiving a first graphic element having a first colour and a first opacity and a second graphic element having a second colour and a second opacity; determining a blend output of a blend function dependent on the first colour and the second colour; and determining a resultant colour of a compositing operation on the first and second graphic elements, the resultant colour being dependent on the blend output and otherwise being independent of the second colour.
According to a second aspect there is provided a method of compositing a first graphic element comprising a first colour and a first opacity with a second graphic element comprising a second colour and a second opacity using a blend function that operates on the two colours independently of their respective opacities and a compositing operator characterised in that the contribution of the second graphic element colour to the resultant colour is independent of the first opacity of the first graphic element.
According to a further aspect there is provided an apparatus for compositing graphic elements in a pixel-based renderer, said apparatus comprising: means for receiving a first graphic element having a first colour and a first opacity and a second graphic element having a second colour and a second opacity; means for determining a blend output of a blend function dependent on the first colour and the second colour; and means for determining a resultant colour of a compositing operation on the first and second graphic elements, the resultant colour being dependent on the blend output and otherwise being independent of the second colour.
According to a still further aspect there is provided a computer program product comprising machine-readable program code recorded on a machine-readable recording medium, for controlling the operation of a data processing apparatus on which the program code executes to perform a method of compositing graphic elements in a pixel-based renderer, said method comprising the steps of: receiving a first graphic element having a first colour and a first opacity and a second graphic element having a second colour and a second opacity; determining a blend output of a blend function dependent on the first colour and the second colour; and determining a resultant colour of a compositing operation on the first and second graphic elements, the resultant colour being dependent on the blend output and otherwise being independent of the second colour.
According to yet another further aspect there is provided a computer program comprising machine-readable program code for controlling the operation of a data processing apparatus on which the program code executes to perform a method of compositing graphic elements in a pixel-based renderer, said method comprising the steps of: receiving a first graphic element having a first colour and a first opacity and a second graphic element having a second colour and a second opacity; determining a blend output of a blend function dependent on the first colour and the second colour; and determining a resultant colour of a compositing operation on the first and second graphic elements, the resultant colour being dependent on the blend output and otherwise being independent of the second colour.
BRIEF DESCRIPTION OF THE DRAWINGS
One or more embodiments of the invention will now be discussed with reference to the drawings, in which:
Fig. 1 is a schematic block diagram representation of a computer system incorporating a rendering arrangement;
Fig. 2 is a block diagram showing the functional data flow of the rendering arrangement;
Fig. 3 is a schematic block diagram representation of the pixel sequential rendering apparatus of Fig. 2 and associated display list and temporary stores;
Figs. 4A to 4C illustrate pixel combinations between source and destination;
Fig. 5 illustrates a two-object image used as an example for explaining the operation of the rendering arrangement;
Fig. 6 illustrates vector edges of the objects of Fig. 5;
Figs. 7A to 7D provide a comparison between edge description formats;
Figs. 8A and 8B show a simple compositing expression illustrated as an expression tree and a corresponding depiction;
Fig. 8C shows an example of an expression tree;
Fig. 9 shows a table of a number of raster operations;
Figs. 10A and 10B show a table of the principal compositing operations and their corresponding raster operations and opacity flags;
Fig. 11 depicts the result of a number of compositing operations;
Fig. 12 shows a series of colour composite messages generated by the fill colour determination module 600;
Fig. 13 is a schematic functional representation of one arrangement of the pixel compositing module of Fig. 3;
Figs. 14A to 14D show the operation performed on the stack for each of the various stack operation commands in the Pixel Compositing Module 700 of Fig. 3;
Fig. 15A shows two partially transparent operands to be used in an example of a blending operation;
Fig. 15B shows the results of compositing the operands of Fig. 15A using a prior-art over-based 'dodge' blending operation;
Fig. 15C shows the results of compositing the operands of Fig. 15A using an atop-based 'dodge' blend operation; and
Fig. 16 is a flowchart of a method of compositing two operands with a blend function.
DETAILED DESCRIPTION
A rendering system is provided with operations for compositing based on the Porter and Duff model as described in "Compositing Digital Images", Porter, T.; Duff, T.; Computer Graphics, Vol. 18, No. 3 (1984), pp. 253-259, with the addition of blending operations. In the Porter and Duff model, a pixel in an object is notionally divided into two regions, one fully opaque and one fully transparent, such that the proportion of the pixel that is opaque is given by the opacity, α (or so in the notation of Fig. 4A).
If two objects S and D overlap, their opacities act independently such that each pixel in the overlap region is divided into four regions, orthogonally, as shown in Fig. 4C. In the region 718 (S OUT D) only the S object is opaque and contributes colour. In the region 722 (D OUT S) only the D object is opaque and contributes colour. In the region 720 (S ROP D) both objects are opaque. In strict Porter and Duff, colour in the S ROP D region 720 is determined by the priority order of the objects. However, when the system caters for blending of colours, this region 720 takes the colour of the blend between the two objects according to the raster operation. The fourth region 716 is transparent, and therefore never contributes colour.
A set of flags controlling the contribution of the regions provides a flexible means of determining the Porter and Duff operation to be performed.
The Porter and Duff operators are formed by combining colour and opacity contributed by combinations of the aforementioned regions. The original Porter and Duff operators are formed from the complete set of combinations of these regions where the intersecting region S ROP D 720 takes the colour of the uppermost (higher priority) object.
In the described arrangements, a rendering system, which can be a framestore renderer, a bandstore renderer or a pixel sequential renderer, is provided with blend operations based on the Porter and Duff atop operator. These operations are distinct from both their predecessors, i.e. the Porter and Duff operators and the prior art over based blend operations. Porter and Duff operators only allow inclusion or non-inclusion of colour and transparency values obtained directly from the objects. The prior art blend operations explicitly include colour from both operands in the result of the blend.
The basic atop operator as defined by Porter and Duff is as follows:

α_result·C_result = (1 − α_A)·α_B·C_B + α_A·α_B·C_A
α_result = α_B

In the first arrangement, the atop operator is modified such that the colour of the region where both objects are opaque is determined by the blending function, B(C_A, C_B):

α_result·C_result = (1 − α_A)·α_B·C_B + α_A·α_B·B(C_A, C_B)
α_result = α_B

Like the over operator, the atop operator does not modify pixels of Object B outside the intersection of the two objects. The atop operator is therefore suitable for use in a framestore system, as discussed in more detail below. Furthermore, the blend operation based on atop does not include a term in which Object A directly contributes colour. The only effect of Object A on the result is through the blending function. Note also that the resultant (non-premultiplied) colour is independent of the opacity of Object B.
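As a minimal sketch under the same assumptions as the earlier example (one normalised channel, illustrative names), the atop-based blend reduces to:

```c
/* Atop-based blend of one normalised colour channel (a sketch).
 * Object A's colour reaches the result only through blend(c_a, c_b). */
static void blend_atop(double a_a, double c_a, double a_b, double c_b,
                       double (*blend)(double, double),
                       double *a_out, double *c_out)
{
    *a_out = a_b;                                 /* opacity of B is preserved */
    /* Non-premultiplied result; note it is independent of a_b: */
    *c_out = (1.0 - a_a) * c_b + a_a * blend(c_a, c_b);
}
```

The absence of any term in c_a outside the blend call is what prevents the leakage shown in Fig. 15B.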
A similar blend operation may be constructed by including the colour term from object A and not including that of object B, i.e. a blend operation based on the Porter and Duff ratop operator. However such an operator potentially modifies all of the pixels of object B, which makes it expensive to implement in a framestore system where object B has been stored in the framestore.
Blend operations may also be constructed based on the Porter and Duff in operator.
The basic in operator as defined by Porter and Duff is as follows:

α_result·C_result = α_A·α_B·C_A
α_result = α_A·α_B

In the arrangements described herein, the in-based blend operation is:

α_result·C_result = α_A·α_B·B(C_A, C_B)
α_result = α_A·α_B

The in operator has the undesirable property that it potentially modifies an object over the entire extent of the object. The in operator is therefore expensive to use in a framestore system. This expense is due to the modification of previously rendered objects by clearing the objects outside the area of intersection. However, blend operations based on the in operator are potentially useful operations and are included for completeness.
The blend functions B used in the described arrangements may be 'Dodge' or 'Burn' functions, which have the following form:

Dodge: if C_A + C_B > 1, then B(C_A, C_B) = 1; otherwise B(C_A, C_B) = C_B / (1 − C_A).
Burn: if C_A + C_B < 1, then B(C_A, C_B) = 0; otherwise B(C_A, C_B) = (C_A + C_B − 1) / C_A.

Note that in these definitions C_A and C_B are normalised colours, i.e. valued in the range [0, 1].
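These definitions translate directly into code. In the following sketch (normalised inputs assumed), the equality cases are folded into the guarded branches so that the divisions are safe; this agrees with the value of each formula at the boundary:

```c
/* 'Dodge' and 'Burn' blend functions for normalised colours in [0, 1]. */
static double blend_dodge(double c_a, double c_b)
{
    /* '>=' also catches c_a == 1 (division by zero); at equality the
     * quotient c_b / (1 - c_a) equals 1 anyway. */
    if (c_a + c_b >= 1.0)
        return 1.0;
    return c_b / (1.0 - c_a);
}

static double blend_burn(double c_a, double c_b)
{
    /* '<=' also catches c_a == 0 with c_b == 1 (division by zero); at
     * equality the quotient (c_a + c_b - 1) / c_a equals 0 anyway. */
    if (c_a + c_b <= 1.0)
        return 0.0;
    return (c_a + c_b - 1.0) / c_a;
}
```

With a black blending object (C_A = 0), blend_dodge returns C_B unchanged, and with a white blending object (C_A = 1), blend_burn returns C_B unchanged, matching the intended no-change behaviour described above.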
1.0 OVERVIEW OF PIXEL SEQUENTIAL RENDERING SYSTEM
Fig. 1 illustrates schematically a computer system 1 configured for rendering and presentation of computer graphic object images. The system includes a host processor 2 associated with system random access memory (RAM) 3, which may include a non-volatile hard disk drive or similar device 5 and volatile semiconductor RAM 4. The system 1 also includes a system read-only memory (ROM) 6 typically founded upon semiconductor ROM 7 and which in many cases may be supplemented by compact disk devices (CD-ROM) 8 or DVD devices. The system 1 may also incorporate some means for displaying images, such as a video display unit (VDU) or a printer, or both, which operate in raster fashion.
The above-described components of the system 1 are interconnected via a bus system 9 and are operable in a normal operating mode of computer systems well known in the art.
Also seen in Fig. 1, a pixel sequential rendering apparatus 20 connects to the bus 9, and is configured for the sequential rendering of pixel-based images derived from graphic object-based descriptions supplied with instructions and data from the processor 2 via the bus 9. The apparatus 20 may utilise the system RAM 3 for the rendering of object descriptions, although preferably the rendering apparatus 20 may have associated therewith a dedicated rendering store arrangement 30, typically formed of semiconductor RAM.
The pixel sequential renderer 20 operates, generally speaking, in the following manner. A render job to be rendered is given to driver software on the processor 2 by third-party software, for supply to the pixel sequential renderer 20. The render job is typically in a page description language which defines an image comprising objects placed on a page from a rearmost object to a foremost object, to be composited in a manner defined by the render job. The driver software converts the render job to an intermediate render job, which is then fed to the pixel sequential renderer.
The pixel sequential renderer generates the colour and opacity for the pixels one at a time in raster scan order. At any pixel currently being scanned and processed, the pixel sequential renderer composites only those exposed objects that are active at the currently scanned pixel. The pixel sequential renderer determines that an object is active at a currently scanned pixel if that pixel lies within the boundary of the object. The pixel sequential renderer achieves this by reference to a fill counter associated with that object.
The fill counter keeps a running fill count that indicates whether the pixel lies within the boundary of the object. When the pixel sequential renderer encounters an edge associated with the object the renderer increments or decrements the fill count depending upon the direction of the edge. The renderer is then able to determine whether the current pixel is within the boundary of the object depending upon the fill count and a predetermined winding count rule. The pixel sequential renderer determines whether an active object is exposed with reference to a flag associated with that object. This flag associated with an object indicates whether or not the object obscures lower order objects. That is, this flag indicates whether the object is partially transparent, in which case the lower order active objects make a contribution to the colour and opacity of the current pixel. Otherwise, this flag indicates that the object is opaque, in which case active lower order objects will not make any contribution to the colour and opacity of the currently scanned pixel. The pixel sequential renderer determines that an object is exposed if it is the uppermost active object, or if all the active objects above the object have their corresponding flags set to transparent.
The pixel sequential renderer then composites these exposed active objects to determine and output the colour and opacity for the currently scanned pixel.
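As an illustrative sketch of the fill-counting mechanism just described, assuming the non-zero winding rule (the text leaves the predetermined winding rule open) and an invented record layout:

```c
/* Illustrative fill-count tracking for one object (non-zero winding rule).
 * The record layout is invented for this sketch. */
typedef struct {
    int fill_count;   /* running winding count on the current scanline      */
    int obscures;     /* flag: object is opaque and hides lower priorities  */
} object_state;

/* Called when the scan position crosses an edge belonging to the object;
 * direction is +1 for a downward edge and -1 for an upward edge. */
static void cross_edge(object_state *obj, int direction)
{
    obj->fill_count += direction;
}

/* Non-zero winding rule: the pixel lies inside while the count is non-zero. */
static int is_active(const object_state *obj)
{
    return obj->fill_count != 0;
}
```

An object is then treated as exposed if it is the uppermost active object or if every active object above it has its obscures flag clear, as the text describes.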
The driver software, in response to the page, also extracts edge information defining the edges of the objects for feeding to an edge tracking module. The driver software also generates a linearised table (hereinafter called the priority properties and status table) of the expression tree of the objects and their compositing operations, which is fed to the priority determination module. The priority properties and status table contains one record for each object on the page. In addition, each record contains a field for storing a pointer to an address for the fill of the corresponding object in a fill table.
This fill table is also generated by the driver software and contains the fill for the corresponding objects, and is fed to the fill determination module. The priority properties and status table together with the fill table are devoid of any edge information and effectively represent the objects, where the objects are infinitely extending. The edge information is fed to the edge tracking module, which determines, for each pixel in raster scan order, the edges of any objects that intersect a currently scanned pixel. The edge tracking module passes this information onto the priority determination module. Each record of the priority properties and status table contains a counter, which maintains a fill count associated with the corresponding object of the record.
The priority determination module processes each pixel in a raster scan order.
Initially, the fill counts associated with all the objects are zero, and so all objects are inactive. The priority determination module continues processing each pixel until it encounters an edge intersecting that pixel. The priority determination module updates the fill count associated with the object of that edge, and so that object becomes active. The priority determination module continues in this fashion, updating the fill count of the objects and so activating and de-activating the objects. The priority determination module also determines whether these active objects are exposed or not, and consequently whether they make a contribution to the currently scanned pixel. In the event that they do, the priority determination module generates a series of messages which ultimately instruct the pixel compositing module to composite the colour and opacity for these exposed active objects in accordance with the compositing operations specified for these objects in the priority properties and status table, so as to generate the resultant colour and opacity for the currently scanned pixel. These series of messages do not actually contain the colour and opacity for that object but rather an address into the fill table, which the fill determination module uses to determine the colour and opacity of the object.
The pixel sequential renderer also utilises clip objects to modify the shape of another object. The pixel sequential renderer maintains an associated clip count for the clip in a somewhat similar fashion to the fill count to determine whether the current pixel is within the clip region.
There are often runs of pixels having constant colour and opacity between adjacent edges. The pixel sequential renderer can composite the colour and opacity for the first pixel in the run and in subsequent pixels in the run reproduce the previous composited colour and opacity without any further compositions, thus reducing the overall number of compositing operations.
2.0 OVERVIEW OF SOFTWARE DRIVER
A software program (hereafter referred to as the driver) is loaded and executed on the host processor 2 for generating instructions and data for the pixel-sequential graphics rendering apparatus 20, from data provided by a third-party application. The third-party application may provide data in the form of a standard language description of the objects to be drawn on the page, such as PostScript or PCL, or in the form of function calls to the driver through a standard software interface, such as the Windows GDI or X-11.
The driver software separates the data associated with an object (supplied by the third-party application) into data about the edges of the object, any operation or operations associated with painting the object onto the page, and the colour and opacity with which to fill pixels which fall inside the edges of the object.
The driver software partitions the edges of each object into edges which are monotonic increasing in the Y-direction, and then divides each partitioned edge of the object into segments of a form suitable for the edge module described below. Partitioned edges are sorted by the X-value of their starting positions and then by Y. Groups of edges starting at the same Y-value remain sorted by X-value, and may be concatenated together to form a new edge list, suitable for reading in by the edge module when rendering reaches that Y-value.
The driver software sorts the operations, associated with painting objects, into priority order, and generates instructions to load the data structure associated with the priority determination module. This structure includes a field for the fill rule, which describes the topology of how each object is activated by edges, a field for the type of fill which is associated with the object being painted, and a field to identify whether data on levels below the current object is required by the operation. There is also a field, herein called clip count, that identifies an object as a clipping object, that is, as an object which is not, itself, filled, but which enables or disables filling of other objects on the page.
The driver software also prepares a data structure (the fill table) describing how to fill objects; this fill table is indexed by the data structure in the priority determination module. This allows several levels in the priority determination module to refer to the same fill data structure.
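For illustration, one record of the priority determination data structure described above might be laid out as follows; the field names and widths are invented for this sketch:

```c
/* Illustrative layout of one priority-table record. */
typedef struct {
    unsigned fill_rule;       /* how edges activate the object (winding rule) */
    unsigned fill_type;       /* type of fill used when painting              */
    unsigned need_below : 1;  /* operation needs data from lower levels       */
    unsigned clip_count;      /* non-zero marks a clipping object             */
    unsigned fill_index;      /* index into the shared fill table             */
} priority_record;
```

Keeping the fill data in a separate, indexed table is what lets several priority levels share one fill description, as noted above.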
The driver software assembles the aforementioned data into a job containing instructions for loading the data and rendering pixels, in a form that can be read by the rendering system, and transfers the assembled job to the rendering system. This may be performed using one of several methods known to the art, depending on the configuration of the rendering system and its memory.
3.0 OVERVIEW OF PIXEL SEQUENTIAL RENDERING APPARATUS
Referring now to Fig. 2, a functional data flow diagram of the pixel sequential rendering apparatus is shown. The functional flow diagram of Fig. 2 commences with an object graphic description 11 which is used to describe those parameters of graphic objects in a fashion appropriate to be generated by the host processor 2 and/or, where appropriate, stored within the system RAM 3 or derived from the system ROM 6, and which may be interpreted by the pixel sequential rendering apparatus 20 to render therefrom pixel-based images. For example, the object graphic description 11 may incorporate objects with edges in a number of formats including straight edges (simple vectors) that traverse from one point on the display to another, or an orthogonal edge format where a two-dimensional object is defined by a plurality of edges including orthogonal lines. Further formats, where objects are defined by continuous curves, are also appropriate and these can include quadratic polynomial fragments where a single curve may be described by a number of parameters which enable a quadratic-based curve to be rendered in a single output space without the need to perform multiplications.
IND
Sidentifiers for the start and end of each line (whether straight or curved) and typically, 8 these are identified by a scan line number thus defining a specific output space in which the curve may be rendered.
For example, Fig. 7A shows a description of an edge 600 that is required to be divided into two segments 601 and 602 in order for the segments to be adequately described and rendered. This arises because the edge description, whilst being simply calculated through a quadratic expression, could not accommodate an inflexion point 604.
Thus the edge 600 was dealt with as two separate edges having end points 603 and 604, and 604 and 605 respectively. Fig. 7B shows a cubic spline 610 that is described by endpoints 611 and 612, and control points 613 and 614. This format requires calculation of a cubic polynomial for render purposes and thus is expensive of computational time.
Figs. 7C and 7D show further examples of edges applicable to the described systems. An edge is considered as a single entity and if necessary, is partitioned to delineate sections of the edge that may be described in different formats, a specific goal of which is to ensure a minimum level of complexity for the description of each section.
In Fig. 7C, a single edge 620 is illustrated spanning scanlines A to M. An edge is described by a number of parameters including start_x, start_y, one or more segment descriptions that include an address that points to the next segment in the edge, and a finish segment used to terminate the edge. The edge 620 may be described as having three step segments, a vector segment, and a quadratic segment. A step segment is simply defined as having an x-step value and a y-step value. For the three step segments illustrated, the segment descriptions are (…), (…) and (…). Note that the x-step value is signed, thereby indicating the direction of the step, whilst the y-step value is unsigned, as such is always in a raster scan direction of increasing scanline value. The next segment is a vector segment, which typically requires the parameters start_x, start_y, num_of_scanlines (NY) and slope (DX). In this example, because the vector segment is an intermediate segment of the edge 620, the start_x and start_y may be omitted because these arise from the preceding segment(s). The parameter num_of_scanlines (NY) indicates the number of scanlines the vector segment lasts. The slope value (DX) is signed and is added to the x-value of a preceding scanline to give the x-value of the current scanline (in the illustrated case, DX = …). The next segment is a quadratic segment, which has a structure corresponding to that of the vector segment, but also a second-order value (DDX), which is also signed and is added to DX to alter the slope of the segment.
Fig. 7D shows an example of a cubic curve according to an arrangement which includes a description corresponding to the quadratic segment save for the addition of a signed third-order value (DDDX), which is added to DDX to vary the rate of change of slope of the segment. Many other orders may also be implemented.
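The segment stepping described for Figs. 7C and 7D amounts to repeated addition per scanline. A sketch, using the parameter names from the text (NY, DX, DDX, DDDX) in an otherwise invented structure:

```c
/* Per-scanline stepping of an edge segment: DX is added to x on each
 * scanline; DDX is added to DX (quadratic); DDDX is added to DDX (cubic).
 * Vectors simply carry DDX = DDDX = 0. */
typedef struct {
    double x;     /* current x position                              */
    double dx;    /* slope (DX)                                      */
    double ddx;   /* second-order increment (DDX); 0 for vectors     */
    double dddx;  /* third-order increment (DDDX); 0 unless cubic    */
    int    ny;    /* scanlines remaining in this segment (NY)        */
} edge_segment;

/* Advance one scanline; returns 0 when the segment is exhausted. */
static int segment_step(edge_segment *s)
{
    if (s->ny <= 0)
        return 0;
    s->x   += s->dx;    /* vector behaviour            */
    s->dx  += s->ddx;   /* quadratic change of slope   */
    s->ddx += s->dddx;  /* cubic change of curvature   */
    s->ny  -= 1;
    return 1;
}
```

This is why the text notes that no multiplications are needed per scanline: each order of curve costs only one extra addition.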
It will be apparent from the above that the ability to handle plural data formats describing edge segments allows for simplification of edge descriptions and evaluation, without reliance on complex and computationally expensive mathematical operations. In contrast, in the system of Fig. 7A, all edges, whether orthogonal, vector or quadratic were required to be described by the quadratic form.
The rendering system will be described with reference to the simple example of rendering an image 78 shown in Fig. 5, which includes two graphical objects, namely a partly transparent blue-coloured triangle 80 rendered on top of and thereby partly obscuring an opaque red-coloured rectangle 90. As seen, the rectangle 90 includes side edges 92, 94, 96 and 98 defined between various pixel positions and scan line positions. Because the edges 96 and 98 are formed upon the scan lines (and thus parallel therewith), the actual object description of the rectangle 90 can be based solely upon the side edges 92 and 94, such as seen in Fig. 6. In this connection, edge 92 commences at pixel location (40,35) and extends in a raster direction down the screen to terminate at pixel position (40,105). Similarly, the edge 94 extends from pixel position (160,35) to position (160,105). The horizontal portions of the rectangular graphic object may be obtained merely by scanning from the edge 92 to the edge 94 in a rasterised fashion.
The blue triangular object 80 however is defined by three object edges 82, 84 and 86, each seen as vectors that define the vertices of the triangle. Edges 82 and 84 are seen to commence at pixel location (100,20) and extend respectively to pixel locations (170,90) and (30,90). Edge 86 extends between those two pixel locations in a traditional rasterised direction of left to right. In this specific example because the edge 86 is horizontal like the edges 96 and 98 mentioned above, it is not essential that the edge 86 be defined. In addition to the starting and ending pixel locations used to describe the edges 82 and 84, each of these edges will have associated therewith the slope value, in this case +1 and -1 respectively.
Returning to Fig. 2, having identified the data necessary to describe the graphic objects to be rendered, the graphic system 1 then performs a display list generation step 12.
The display list generation 12 is preferably implemented as a software driver executing on the host processor 2 with attached ROM 6 and RAM 3. The display list generation 12 converts an object graphics description, expressed in any one or more of the well-known graphic description languages, graphic library calls, or any other application-specific format, into a display list. The display list is typically written into a display list store 13, generally formed within the RAM 4 but which may alternatively be formed within the rendering stores 30. As seen in Fig. 3, the display list store 13 can include a number of components, one being an instruction stream 14, another being edge information 15 and, where appropriate, raster image pixel data 16.
The instruction stream 14 includes code interpretable as instructions to be read by the pixel sequential rendering apparatus 20 to render the specific graphic objects desired in any specific image. For the example of the image shown in Fig. 5, the instruction stream 14 could be of the form of:
(i) render (nothing) to scan line 20;
(ii) at scan line 20 add two blue edges 82 and 84;
(iii) render to scan line 35;
(iv) at scan line 35 add two red edges 92 and 94; and
(v) render to completion.
Similarly, the edge information 15 for the example of Fig. 5 may include the following:
(i) edge 84 commences at pixel position 100, edge 82 commences at pixel position 100;
(ii) edge 92 commences at pixel position 40, edge 94 commences at pixel position 160;
(iii) edge 84 runs for 70 scan lines, edge 82 runs for 70 scan lines;
(iv) edge 84 has slope −1, edge 82 has slope +1;
(v) edge 92 has slope 0, edge 94 has slope 0;
(vi) edges 92 and 94 each run for 70 scan lines.
It will be appreciated from the above example of the instruction stream 14 and edge information 15 and the manner in which each are expressed, that in the image 78 of Fig. 5, the pixel position and the scanline value define a single output space in which the image 78 is rendered. Other output space configurations however can be realised using the principles of the present disclosure.
Fig. 5 includes no raster image pixel data and hence none need be stored in the store portion 16 of the display list 13, although this feature will be described later.
The display list store 13 is read by a pixel sequential rendering apparatus which is typically implemented as an integrated circuit. The pixel sequential rendering apparatus 20 converts the display list into a stream of raster pixels 19 which can be forwarded to another device, for example, a printer, a display, or a memory store.
The pixel sequential rendering apparatus 20 may be implemented as an integrated circuit, or it may be implemented as an equivalent software module executing on a general purpose processing unit, such as the host processor 2.
Fig. 3 shows the configuration of the pixel sequential rendering apparatus 20, the display list store 13 and the temporary rendering stores 30. The processing stages 22 of the pixel-sequential rendering apparatus 20 include an instruction executor 300, an edge processing module 400, a priority determination module 500, an optimisation module (not shown), a fill colour determination module 600, a pixel compositing module 700, and a pixel output module 800. The processing operations use the temporary stores 30 which, as noted above, may share the same device (e.g. magnetic disk or semiconductor RAM) as the display list store 13, or may be implemented as individual stores for reasons of speed optimisation. The edge processing module 400 uses an edge record store 32 to hold edge information which is carried forward from scan-line to scan-line. The priority determination module 500 uses a priority properties and status table 34 to hold information about each priority, and the current state of each priority with respect to edge crossings while a scan-line is being rendered. The fill colour determination module 600 uses a fill data table 36 to hold information required to determine the fill colour of a particular priority at a particular position. The pixel compositing module 700 uses a pixel compositing stack 38 to hold intermediate results during the determination of an output pixel that requires the colours from multiple priorities to determine its value.
The display list store 13 and the other stores 32-38 detailed above may be implemented in RAM or any other data storage technology.
The processing steps shown in the arrangement of Fig. 3 take the form of a processing pipeline 22. In this case, the modules of the pipeline may execute simultaneously on different portions of image data in parallel, with messages passed between them as described below. In another arrangement, each message described below may take the form of a synchronous transfer of control to a downstream module, with upstream processing suspended until the downstream module completes the processing of the message.
3.1 OVERVIEW OF PIXEL COMPOSITING MODULE
The operation of the pixel compositing module 700 will now be described. The primary function of the pixel compositing module is to composite the colour and opacity of all those exposed object priorities that make an active contribution to the pixel currently being scanned.
Preferably, the pixel compositing module 700 implements a modified form of the compositing approach as described in "Compositing Digital Images", Porter, T.; Duff, T.; Computer Graphics, Vol. 18, No. 3 (1984), pp. 253-259. Examples of Porter and Duff compositing operations are shown in Fig. 11. However, such an approach is deficient in that it only permits handling a source and destination colour in the intersection region formed by the composite, and as a consequence is unable to accommodate the influence of transparency outside the intersecting region. The present arrangement overcomes this by effectively padding the objects with completely transparent pixels. Thus the entire area becomes, in effect, the intersecting region, and reliable Porter and Duff compositing operations can be performed. This padding is achieved at the driver software level where additional transparent object priorities are added to the combined table. These Porter and Duff compositing operations are implemented utilising appropriate colour operations as will be described below in more detail with reference to Figs. 10A, 10B, and 9.
Preferably, the images to be composited are based on expression trees. Expression trees are often used to describe the compositing operations required to form an image, and typically comprise a plurality of nodes including leaf nodes, unary nodes and binary nodes. A leaf node is the outermost node of an expression tree, has no descendent nodes and represents a primitive constituent of an image. Unary nodes represent an operation which modifies the pixel data coming out of the part of the tree below the unary operator.
A binary node typically branches to left and right subtrees, wherein each subtree is itself an expression tree comprising at least one leaf node. An example of an expression tree is shown in Fig. 8C. The expression tree shown in Fig. 8C comprises four leaf nodes representing three objects A, B, and C, and the page. The expression tree of Fig. 8C also comprises binary nodes representing the Porter and Duff OVER operation. Thus the expression tree represents an image where the object A is composited OVER the object B, the result of which is then composited OVER object C, and the result of which is then composited OVER the page.
Turning now to Figs. 8A and 8B, there is shown a typical binary compositing operation in an expression tree. This binary operator operates on a source object (src) and a destination object (dest), where the source object src resides on the left branch and the destination object dest resides on the right branch of the expression tree. The binary operation is typically a Porter and Duff compositing operation. The area src ∩ dest represents the area on the page where the src and dest objects intersect (i.e. both are active), the area src ∩ ¬dest where only the src object is active, and the area ¬src ∩ dest where only the dest object is active.
The compositing operations of the expression tree are implemented by means of the pixel compositing stack 38, wherein the structure of the expression tree is implemented by means of appropriate stack operations on the pixel compositing stack 38.
Fig. 13 shows the pixel compositing module 700 in accordance with one arrangement in more detail. The pixel compositing module 700 receives incoming messages from the fill colour determination module 600. These incoming messages include repeat messages, series of colour composite messages, end of pixel messages, and end of scanline messages, and are processed in sequence.
The pixel compositing module 700 comprises a decoder 2302 for decoding these incoming messages and a compositor 2304 for compositing the colours and opacities contained in the incoming colour composite messages. The pixel compositing module 700 also comprises a stack controller 2306 for placing the resultant colours and opacities on a stack 38, and output FIFO 702 for storing the resultant colour and opacity.
During the operation of the pixel compositing module 700, the decoder 2302, upon the receipt of a colour composite message, extracts the raster operation COLOROP and alpha channel operation codes ALPHA_OP and passes them to the compositor 2304. The decoder 2302 also extracts the stack operation STACK_OP and colour and opacity values COLOR, ALPHA of the colour composite message and passes them to the stack controller 2306. Typically, the pixel compositing module 700 combines the colour and opacity from the colour composite message with a colour and opacity popped from the pixel compositing stack 38 according to the raster operation and alpha channel operation from the colour composite message. The module 700 then pushes the result back onto the 727395.doc -26- N, pixel compositing stack 38. More generally, the stack controller 2306 forms a source -4-4 O (src) and destination (dest) colour and opacity, according to the stack operation specified.
SIf at this time, or during any pop of the pixel compositing stack, the pixel compositing stack 38 is found to be empty, an opaque white colour value is used without any error 5 indication. These source and destination colours and opacity are then made available to ,i the compositor 2304 which performs the compositing operation in accordance with the COLOROP and ALPHA_OP codes. The resultant (result) colour and opacity is then made available to the stack controller 2306, which stores the result on the stack 38 in accordance with the STACK_OP code. These stack operations are described below in more detail.
During the operation of the pixel compositing module 700, if the decoder 2302 receives an end of pixel message, it then instructs the stack controller 2306 to pop a colour and opacity from the pixel compositing stack 38. If the stack 38 is empty an opaque white value is used. The resultant colour and opacity is then formed into a pixel output message which is forwarded to the pixel output FIFO 702. If the decoder 2302 receives a repeat message or an end of scanline message, the decoder 2302 by-passes (not shown) the compositor 2304 and stack controller 2306 and forwards the messages to the pixel output FIFO 702 without further processing.
Figs. 14A, B, C, and D show the operation performed on the pixel compositing stack 38 for each of the various stack operation commands STACK_OP in the colour composite messages.
Fig. 14A shows the standard operation STD_OP 2350 on the pixel compositing stack 38. The source colour and opacity (src) are taken from the value in the current colour composite message for the current operation, and the destination colour and opacity (dest) are popped from the top of the stack 38. The result of the COLOR_OP operation performed by the compositor 2304 is pushed back onto the stack 38.
Fig. 14B shows the NO_POP_DEST stack operation 2370 on the pixel compositing stack 38. The source colour and opacity (src) are taken from the value in the current colour composite message for the current operation, and the destination colour and opacity (dest) are read from the top of the stack 38. The result of the COLOR_OP operation performed by the compositor 2304 is pushed onto the top of the stack 38.
Fig. 14C shows the POP_SRC stack operation, where the source colour and opacity are popped from the top of the stack, and the destination colour and opacity are popped from the next level down the stack. The result of the COLOR_OP operation performed by the compositor 2304 is pushed onto the top of the stack.
Fig. 14D shows the KEEP_SRC stack operation, where the source colour and opacity are popped from the top of the stack, and the destination colour and opacity is popped from the next level down the stack. The result of the COLOR_OP operation performed by the compositor 2304 is pushed onto the top of the stack.
Other stack operations may also be used.
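For illustration, the four stack operations can be sketched as follows, with an array-based stack and an invented compose() callback standing in for the COLOR_OP/ALPHA_OP combination. The empty-stack case (which the text says yields opaque white) is omitted for brevity:

```c
/* Illustrative handling of the four stack operation commands.
 * 'value' is the colour/opacity pair from the colour composite message. */
typedef struct { double c, a; } pixel;

enum stack_op { STD_OP, NO_POP_DEST, POP_SRC, KEEP_SRC };

static void apply_stack_op(pixel *stack, int *top, enum stack_op op,
                           pixel value, pixel (*compose)(pixel, pixel))
{
    pixel src, dest;
    switch (op) {
    case STD_OP:                      /* src from message, pop dest   */
        src = value;  dest = stack[(*top)--];
        break;
    case NO_POP_DEST:                 /* src from message, read dest  */
        src = value;  dest = stack[*top];
        break;
    case POP_SRC:                     /* pop both operands            */
    case KEEP_SRC:
        src  = stack[(*top)--];
        dest = stack[(*top)--];
        break;
    }
    if (op == KEEP_SRC)               /* re-push src before the result */
        stack[++(*top)] = src;
    stack[++(*top)] = compose(src, dest);
}
```

Note that KEEP_SRC differs from POP_SRC only in re-pushing the source value before the result, matching the STACK_KEEP_SRC behaviour described later in this section.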
The manner in which the compositor 2304 combines the source (src) colour and opacity with the destination (dest) colour and opacity will now be described with reference to Figs. 4A to 4C. For the purposes of this description, colour and opacity values are considered to range from 0 to 1 (i.e. normalised), although they are typically stored as 8-bit values in the range 0 to 255. For the purposes of compositing together two pixels, each pixel is regarded as being divided into two regions, one region being fully opaque and the other fully transparent, with the opacity value being an indication of the proportion of these two regions. Fig. 4A shows a source pixel 702 which has some three component colour values (not shown in the Figure) and an opacity value, so. The shaded region of the source pixel 702 represents the fully opaque portion 704 of the pixel 702.
SSimilarly, the non-shaded region in Fig. 4A represents that proportion 706 of the source Spixel 702 considered to be fully transparent. Fig. 4B shows a destination pixel 710 with some opacity value, The shaded region of the destination pixel 710 represents the fully opaque portion 712 of the pixel 710. Similarly, the pixel 710 has a fully transparent portion 714. The opaque regions of the source pixel 702 and destination pixel 710 are, for the purposes of the combination, considered to be orthogonal to each other. The overlay of these two pixels is shown in Fig. 4C. Three regions of interest exist, which include a 'source outside destination' region 718 which has an area of so (1 do), a 'source intersect destination' 720 which has an area of so do, and a 'destination outside source' 722 which has an area of (1 so do. The colour value of each of these three regions is calculated conceptually independently. The source outside destination region 718 takes its colour directly from the source colour. The destination outside source region 722 takes its colour directly from the destination colour. The source intersect destination region 720 takes its colour from a combination of the source and destination colour.
The process of combining the source and destination colour, as distinct from the other operations discussed above is termed a raster operation and is one of a set of functions as specified by the raster operation code from the pixel composite message.
Some of the raster operations are shown in Fig. 9. Each function is applied to each pair of 727395.doc
O
colour components of the source and destination colours to obtain a like component in the
C.)
¢3 resultant colour. Many other functions are possible.
SThe alpha channel operation from the composite pixel message is also considered during the combination of the source and destination colour. The alpha channel operation is performed using three flags LAOUSEDOUTS, LAOUSE S OUTD,
(NO
LAO USE S ROP_D, which respectively identify the regions .of interest (1 so) do, 0 so (1 do), and so do in the overlay of the source pixel 702 and the destination pixel 710. For each of the regions, a region opacity value is formed which is zero if the corresponding flag in the alpha channel operation is not set, else it is the area of the region.
The resultant opacity is formed from the sum of the region opacities. Each component of the result colour is then formed by the sum of the products of each pair of region colour and region opacity, divided by the resultant opacity.
As shown in Fig. 10 OA and B, the Porter and Duff operations may be formed by suitable ALPHA_OP flag combinations and raster operators COLOR_OP, provided that both operands can be guaranteed to be active together. Because of the way the table is .read, if only one of the operands is not active, then the operator will either not be performed, or will be performed with the wrong operand. Thus objects that are to be combined using Porter and Duff operations must be padded out with transparent pixels to an area that covers both objects in the operation. Other transparency operations may be formed in the same way as the Porter and Duff operations, using different binary operators as the COLOR_OP operation.
The resultant colour and opacity is passed to the stack controller circuit and pushed onto the pixel compositing stack 38. However, if the stack operation is 727395.doc SSTACK_KEEP_SRC, the source value is pushed onto the stack before the result of the c-I o colour composite message is pushed.
When an end of pixel message is encountered, the colour and opacity value on top of the stack is formed into a pixel output message, and sent to the Pixel Output module.
tc-- Repeat pixel messages are passed through the Pixel Compositing module to the Pixel Output module.
C 4.0 BLENDS IN A PIXEL SEQUENTIAL RENDERER The pixel sequential rendering system described above can perform the standard 'over' based blending operations. These operations are described in the background art. The equation to describe an 'over' based blending operation is: aresul, Cresut A a)C A )aBCB aABB(CA CB a,resul, aA(l- aA) a B aAa where B(CA, CB) is the blending function.
To perform such a blending operation, the software driver sets the COLOROP operation code to the appropriate value to represent the blending function, B(CA, CB). The driver sets the three ALPHAOP flags LAO_USE_ DOUT_S, LAO_USE_S_ROP_D and LAO_USE_SOUT_D. Setting these flags indicates that all three regions of interest should be considered in this calculation.
The pixel sequential rendering system in the first arrangement is also driven in such a way as to implement the atop-based blending operation: 727395.doc Performing the blending operation as described by the above equation does not permit the colour of the blending object (Object A) to influence the resultant colour except through In S 5 the blending function. The modified atop-based blending operations also share the SPbenefit of the over-based blending operations that when an effect is applied, the effect is restricted to the area of the blending object, i.e. Object B is not modified outside the area
(N
of intersection with Object A. The blending function used for the blending operation, B(CA, CH), will be controlled using the COLOR_OP opcode.
To blend using the atop-based blending operation as described above the software driver sets the LAOUSE_D_OUT_S and the LAO_USES_ROP_D flags in the priority table entries that control the effect, leaving the LAOUSE OUTD flag unset. Fig.
indicates this to be the same combination used by the Porter and Duff atop operator.
The COLOROP opcode is set to an appropriate value to give the correct blending of source and destination colours specified by the blending function.
The atop based blending operation is advantageous compared to blending operations based upon the over operator as the colour of the effect (object A) only affects the resultant composited value through the blending function.
Fig. 15C shows the results 255 of blending a partially trransparent object 2501 (Object B) with a blending object 2502 (Object A) using an atop-based blending operation. For comparison, the results 2504 of the prior art over-based blending operation are shown in Fig. 15B. The two objects 2501, 2502 used in the blending operations are shown separately in Fig. 15A. Note that Object B 2501 is partially transparent, allowing the black colour of Object A 2502 to adversely affect the results when the over based 727395.doc Sblending operation is used, as seen in Fig. 15B. In contrast, when the atop based blending O operation is used, the black colour of object A 2501 does not leak into the result 2505.
SThe pixel sequential rendering system is also capable of implementing blending operations based on the Porter and Duff in operator. The equation for the modified inoperator-based blending operations is: C] aresl, Cres,^ a AaB(CA, CB) In aresult aAaB This can be realised in the pixel sequential rendering system described previously by only setting the LAO_USE_S_ROP_D flag and leaving both the LAO_USE_D_OUT_S and LAO_USE_S_OUT_D flags unset. Once again the COLOR_OP opcode should be set appropriately for the blending function required, B(CA, CY). This modified operator has the disadvantage of modifying pixels over the entire area of Object B.
5.0 BLENDS IN A FRAMESTORE RENDERER

In an alternative arrangement, the atop-based blending operations are implemented in a framestore (alternatively called a Painter's Algorithm) renderer. A framestore renderer is well known in the prior art and will not be explained in great detail. In general, a framestore renderer immediately renders each graphical object to a memory representing the entire image. The memory will typically be a matrix/array of pixels. For this particular implementation the framestore renderer must also be able to read the values previously written into the framestore. This particular framestore must contain both colour and opacity values, the opacity values being normalised to the range 0 to 1.
A blending operation based on the Porter and Duff over compositing operation is known in the prior art to be suited to a framestore renderer. A blending operation based on the atop Porter and Duff operator according to the present disclosure is also suitable for a framestore renderer. The equation for a modified atop blending operation has already been seen several times in this document:

$\alpha_{result} \cdot C_{result} = (1 - \alpha_A) \alpha_B C_B + \alpha_A \alpha_B B(C_A, C_B)$

$\alpha_{result} = \alpha_B$
Object B can be considered to be the background that is already written into the framestore. Object A is the object to be blended with the background (Object B). The result of blending Object A with Object B has the opacity of Object B. The colour of the result is the sum of two terms: one is a product of the opacities of Objects A and B and the blending function; the other is a product of the opacity of Object B, the complement of the opacity of Object A (i.e. the transparency of Object A), and the colour of Object B.
Object B has already been drawn to the framestore. Object A is then to be blended with Object B. For each pixel in Object A, the corresponding colour and opacity values are read from the framestore for Object B. These colour and opacity values are used in conjunction with the values from Object A and the blending function to determine the new colour value for the corresponding framestore pixel. This new value is written into the framestore to replace the previous value. The opacity value for this pixel does not need to change, as the resultant opacity is equal to the opacity of Object B.
The atop-based blending operation of the first arrangement may be implemented using the method 2600 as shown in Fig. 16. In step 2601 the colour and opacity values of the first pixel in Object A are obtained, either from a buffer containing a rasterised representation of Object A, or from rendering Object A one pixel at a time concurrently with the process 2600.
Process 2600 then enters a loop which steps through the pixels of Object A. In step 2602, which is the first step in the loop, the process reads the corresponding colour and opacity values for the current pixel from the buffer containing Object B. Then, in step 2603, the colour and opacity of Object A and Object B are used to calculate the new colour value for the pixel of the buffer containing Object B. If Object B has not painted this pixel of the buffer, the colour and opacity information used is that of transparent black.
In step 2604, the new colour value is written into the buffer containing Object B. When using atop-based blending operations, the opacity of the buffer containing Object B does not change.
Next, in step 2605, the process checks whether there are any more pixels in Object A.
If so (the YES option of step 2605), process flow proceeds to step 2607, in which the colour and opacity of the next pixel in Object A are obtained. The process 2600 then loops back to step 2602 to process the next pixel. If there are no pixels left in Object A (the NO option of step 2605), the compositing process ends in step 2606.
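The loop can be sketched in C as follows. This is an illustrative reading of steps 2601 to 2607, not the patented implementation: it assumes Object A has been rasterised into a buffer with the same dimensions as the framestore, that colour and opacity are straight values normalised to 0..1, and that the Pixel layout and all function names are hypothetical.

```c
#include <stddef.h>

typedef struct { float r, g, b, a; } Pixel;     /* straight colour */
typedef float (*blend_fn)(float cA, float cB);  /* B(C_A, C_B)     */

/* Atop-blends rasterised Object A into the framestore holding
 * Object B: read back B, compute the new colour, write it, and
 * leave the stored opacity untouched. */
static void atop_blend_framestore(Pixel *fs, const Pixel *objA,
                                  size_t width, size_t height,
                                  blend_fn B)
{
    for (size_t i = 0; i < width * height; ++i) {
        Pixel a = objA[i];  /* steps 2601/2607: next pixel of Object A */
        Pixel b = fs[i];    /* step 2602: read back the framestore     */
        /* Step 2603: an unpainted framestore pixel reads as
         * transparent black; its stored opacity stays zero, so the
         * colour written below is never visible there. Pixels where
         * Object A is transparent (a.a == 0) are left unchanged. */
        fs[i].r = (1.0f - a.a) * b.r + a.a * B(a.r, b.r);
        fs[i].g = (1.0f - a.a) * b.g + a.a * B(a.g, b.g);
        fs[i].b = (1.0f - a.a) * b.b + a.a * B(a.b, b.b);
        /* Step 2604: fs[i].a is deliberately not modified. */
    }
}
```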
The results of the blend using the framestore renderer are identical to those of the pixel sequential rendering arrangement described in section 4.0. Fig. 15C shows the results of the atop-based blending operation. Comparison can be made with the over-based blending operation results shown in Fig. 15B.

Object A or Object B may be a group of objects that are, for the purposes of compositing, treated as a single object. In particular, Object B may be the background, i.e. the result of previous rendering and compositing operations.
A modified blending operation based on the Porter and Duff in operator may also be implemented in the framestore renderer. This modified operator is less advantageous in a framestore renderer than the blending operation based on the atop operator. The complication is that the use of the in operator means that pixels are modified over the entire area of the operands. More specifically, pixels modified by this operation are not limited to the intersection of the two objects: all pixels within the union of Object A and Object B are modified by a blend operation based on the in operator.

The equation for an in-based blending operation has already been seen several times in this document:

$\alpha_{result} \cdot C_{result} = \alpha_A \alpha_B B(C_A, C_B)$

$\alpha_{result} = \alpha_A \alpha_B$

Object B can be considered to be the background that is already written into the framestore. Object A is the object to be blended with the background (Object B). The result of blending Object A with Object B has opacity equal to the product of Object A's and Object B's opacities. The resulting premultiplied colour is the product of the opacities of Objects A and B and the blending function. Object B and Object A are cleared (i.e. set to transparent black) outside the intersecting region.
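For comparison, a sketch of the in-based update under the same assumed buffer layout as the earlier framestore sketch (names again hypothetical). The extra work over the union of the two areas, namely clearing every pixel outside the intersection, is what makes this operator less convenient in a framestore.

```c
#include <stddef.h>

typedef struct { float r, g, b, a; } Pixel;
typedef float (*blend_fn)(float cA, float cB);

/* In-blends rasterised Object A into the framestore holding Object B.
 * Inside the intersection the blend output and the opacity product
 * are written; everywhere else the pixel is cleared to transparent
 * black, touching the whole union of the two areas. */
static void in_blend_framestore(Pixel *fs, const Pixel *objA,
                                size_t width, size_t height,
                                blend_fn B)
{
    for (size_t i = 0; i < width * height; ++i) {
        Pixel a = objA[i];
        Pixel b = fs[i];
        float ar = a.a * b.a;  /* resultant opacity */
        if (ar > 0.0f) {
            fs[i].r = B(a.r, b.r);
            fs[i].g = B(a.g, b.g);
            fs[i].b = B(a.b, b.b);
            fs[i].a = ar;
        } else {
            fs[i].r = fs[i].g = fs[i].b = fs[i].a = 0.0f;
        }
    }
}
```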
Industrial Applicability

It is apparent from the above that the disclosed methods are applicable to the data processing industries.
The foregoing describes only some embodiments of the present invention, and modifications and/or changes can be made thereto without departing from the scope and spirit of the invention, the embodiment(s) being illustrative and not restrictive. The disclosure is presented primarily in terms of a printer engine. However, the disclosed arrangements may be used in any system that requires a renderer.
In the context of this specification, the word "comprising" means "including principally but not necessarily solely" or "having" or "including" and not "consisting only of". Variations of the word "comprising", such as "comprise" and "comprises", have corresponding meanings.

Claims (10)

1. A method of compositing graphic elements in a pixel-based renderer, said method comprising the steps of: receiving a first graphic element having a first colour and a first opacity and a second graphic element having a second colour and a second opacity; determining a blend output of a blend function dependent on the first colour and the second colour; and determining a resultant colour of a compositing operation on the first and second graphic elements, the resultant colour being dependent on the blend output and otherwise being independent of the second colour.
2. A method according to claim 1 comprising the further step of determining a resultant opacity of the compositing operation, the resultant opacity being equal to the first opacity.
3. A method according to claim 1 comprising the further step of determining a resultant opacity of the compositing operation, the resultant opacity being equal to a product of the first opacity and the second opacity.
4. A method according to claim 1 wherein the compositing operation is a modified Porter and Duff atop operation.

5. A method according to claim 1 wherein the compositing operation is a modified Porter and Duff in operation.
6. An apparatus for compositing graphic elements in a pixel-based renderer, said apparatus comprising: means for receiving a first graphic element having a first colour and a first opacity and a second graphic element having a second colour and a second opacity; means for determining a blend output of a blend function dependent on the first colour and the second colour; and means for determining a resultant colour of a compositing operation on the first and second graphic elements, the resultant colour being dependent on the blend output and otherwise being independent of the second colour.
7. A computer program product comprising machine-readable program code recorded on a machine-readable recording medium, for controlling the operation of a data processing apparatus on which the program code executes to perform a method of compositing graphic elements in a pixel-based renderer, said method comprising the steps of: receiving a first graphic element having a first colour and a first opacity and a second graphic element having a second colour and a second opacity; determining a blend output of a blend function dependent on the first colour and the second colour; and determining a resultant colour of a compositing operation on the first and second graphic elements, the resultant colour being dependent on the blend output and otherwise being independent of the second colour.
8. A computer program comprising machine-readable program code for controlling the operation of a data processing apparatus on which the program code executes to perform a method of compositing graphic elements in a pixel-based renderer, said method comprising the steps of: receiving a first graphic element having a first colour and a first opacity and a second graphic element having a second colour and a second opacity; determining a blend output of a blend function dependent on the first colour and the second colour; and determining a resultant colour of a compositing operation on the first and second graphic elements, the resultant colour being dependent on the blend output and otherwise being independent of the second colour.
9. A method of compositing graphic elements in a pixel-based renderer substantially as described herein with reference to any one of the embodiments as illustrated in the accompanying drawings.

10. An apparatus for compositing graphic elements in a pixel-based renderer substantially as described herein with reference to any one of the embodiments as illustrated in the accompanying drawings.
11. A computer program product substantially as described herein with reference to any one of the embodiments as illustrated in the accompanying drawings.
12. A computer program substantially as described herein with reference to any one of the embodiments as illustrated in the accompanying drawings.

DATED this seventh Day of January, 2009

CANON KABUSHIKI KAISHA

Patent Attorneys for the Applicant
SPRUSON & FERGUSON
AU2005229627A 2005-10-31 2005-10-31 Implementing compositing operations on images Ceased AU2005229627B2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
AU2005229627A AU2005229627B2 (en) 2005-10-31 2005-10-31 Implementing compositing operations on images
US11/551,389 US7965299B2 (en) 2005-10-31 2006-10-20 Implementing compositing operations on images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
AU2005229627A AU2005229627B2 (en) 2005-10-31 2005-10-31 Implementing compositing operations on images

Publications (2)

Publication Number Publication Date
AU2005229627A1 AU2005229627A1 (en) 2007-05-17
AU2005229627B2 true AU2005229627B2 (en) 2009-01-22

Family

ID=38054971

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2005229627A Ceased AU2005229627B2 (en) 2005-10-31 2005-10-31 Implementing compositing operations on images

Country Status (1)

Country Link
AU (1) AU2005229627B2 (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6421460B1 (en) * 1999-05-06 2002-07-16 Adobe Systems Incorporated Blending colors in the presence of transparency
US20030193508A1 (en) * 2002-04-11 2003-10-16 Sun Microsystems, Inc Method and apparatus to calculate any porter-duff compositing equation using pre-defined logical operations and pre-computed constants

Also Published As

Publication number Publication date
AU2005229627A1 (en) 2007-05-17

Similar Documents

Publication Publication Date Title
US6961067B2 (en) Reducing the number of compositing operations performed in a pixel sequential rendering system
US6828985B1 (en) Fast rendering techniques for rasterised graphic object based images
US7714865B2 (en) Compositing list caching for a raster image processor
US7538770B2 (en) Tree-based compositing system
US7023439B2 (en) Activating a filling of a graphical object
JP2000149035A Method and device for processing graphic object for high-speed raster form rendering
JP2007304576A (en) Rendering of translucent layer
US7965299B2 (en) Implementing compositing operations on images
US7551173B2 (en) Pixel accurate edges for scanline rendering system
JP4210316B2 (en) Digital image articulated rendering method
US6795048B2 (en) Processing pixels of a digital image
AU2005229627B2 (en) Implementing compositing operations on images
JP4109740B2 (en) Convolutional scanning line rendering
US7385609B1 (en) Apparatus, system, and method for increased processing flexibility of a graphic pipeline
JP2005235205A (en) Compositing with clip-to-self functionality without using shape channel
AU2005200948B2 (en) Compositing list caching for a raster image processor
AU2004200655B2 (en) Reducing the Number of Compositing Operations Performed in a Pixel Sequential Rendering System
AU2005201868A1 (en) Removing background colour in group compositing
AU2005201929A1 (en) Rendering graphic object images
AU779154B2 (en) Compositing objects with opacity for fast rasterised rendering
AU2004233516B2 (en) Tree-based compositing system
AU2005229629B2 (en) Implementing compositing operations on images
AU2002301643B2 (en) Activating a Filling of a Graphical Object
AU2005201931A1 (en) Rendering graphic object images
AU2004231232B2 (en) Pixel accurate edges for scanline rendering system

Legal Events

Date Code Title Description
FGA Letters patent sealed or granted (standard patent)
MK14 Patent ceased section 143(a) (annual fees not paid) or expired