AU765466B2 - Anti-aliased polygon rendering - Google Patents


Publication number
AU765466B2
Authority
AU
Australia
Prior art keywords
pixel
line segment
value
area
area function
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
AU72392/00A
Other versions
AU7239200A (en)
Inventor
Alan Tonisson
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from AUPQ4798A external-priority patent/AUPQ479899A0/en
Application filed by Canon Inc filed Critical Canon Inc
Priority to AU72392/00A priority Critical patent/AU765466B2/en
Publication of AU7239200A publication Critical patent/AU7239200A/en
Application granted granted Critical
Publication of AU765466B2 publication Critical patent/AU765466B2/en

Description

S&FRef: 533214
AUSTRALIA
PATENTS ACT 1990 COMPLETE SPECIFICATION FOR A STANDARD PATENT
ORIGINAL
Name and Address of Applicant: Canon Kabushiki Kaisha, 30-2, Shimomaruko 3-chome, Ohta-ku, Tokyo 146, Japan

Actual Inventor(s): Alan Tonisson

Address for Service: Spruson & Ferguson, St Martins Tower, 31 Market Street, Sydney NSW 2000

Invention Title: Anti-aliased Polygon Rendering

ASSOCIATED PROVISIONAL APPLICATION DETAILS: [33] Country: AU; [31] Applic. No(s): PQ4798; [32] Application Date: 22 Dec 1999

The following statement is a full description of this invention, including the best method of performing it known to me/us:

Anti-aliased Polygon Rendering

Field of Invention

The present invention relates to a method and apparatus for rendering images and in particular to the anti-aliased rendering of polygons for printing or display on a display device. The invention also relates to a computer program product including a computer readable medium having recorded thereon a computer program for the anti-aliased rendering of polygons for printing or display on a display device.
Background

The term "aliasing" used in regard to image processing refers to the fact that when images are sampled and then printed or displayed on a display device, high frequency components in the original image appear as low frequency components in the reconstructed image. These low frequency components are said to be "aliases" of the original high frequency components.
Images displayed on a computer monitor or printed using a printer are composed of discrete dots or pixels. Aliasing artefacts are often visible in computer generated images as jagged edges or stepped lines. Aliasing occurs during computer image generation, because computer generated images are typically generated from a model or description of an ideal image that contains detail that is lost upon rendering. Aliasing is caused when an image is sampled at a resolution that is not high enough to capture all of the detail in the image. For example, an image may be described in terms of the coordinates of line segments. Fig. 4 shows an example of a line segment 400 through two points (x0, y0) and (x1, y1). The square tiles (eg. 401) of Fig. 4 represent pixels and the dark lines 403 and 405 represent the x and y axes respectively. Fig. 4 is based on the assumption that pixels are square and that the pixel boundaries have integer x or y coordinates. The line segments that define a polygon of any associated image may have integer or non-integer end points. When the image is rendered, the line segment 400 will appear jagged because of the discrete pixels used to represent the image.
The process of removing or reducing the effects of aliasing is referred to as "anti-aliasing". Aliasing can be removed or reduced by filtering or blurring all or part of the image before sampling. The filtering or blurring removes the high frequency components of the image prior to sampling. Anti-aliasing removes or reduces the jagged edges but produces a less sharp image. The loss in sharpness is generally preferable to the jagged edges.
Anti-aliasing techniques can be applied at various stages in image generation.
For example, when producing an image composed of multiple objects, each object can be individually anti-aliased before the objects are composited together to form the final image. In this case, the pixels of the images of the individual objects will typically have associated opacity information. Anti-aliasing techniques may be applied to the opacity information to determine the best opacity values for the pixels in each object.
There are a number of known methods available for anti-aliasing images, but they are all mathematically equivalent to filtering or blurring all or part of the image in some way. The methods differ in how they calculate the resulting filtered and sampled image.
One particular known method of anti-aliasing, known as "super-sampling", first renders or samples the original or ideal image at a higher resolution than the desired output resolution to retain more detail. Each dot or pixel in the output image corresponds to a block of pixels in the super-sampled image. The colour of each output pixel is then produced by averaging the colours of the corresponding dots in the super-sampled image.
The super-sampling method is equivalent to applying a box filter to the super-sampled image and then re-sampling the filtered image at the output resolution. Other types of filters can also be applied by assigning unequal weights to the input pixels when performing the averaging operation.
A significant problem with super-sampling is that it is very expensive because of the amount of memory and time required. The super-sampling method requires much more memory to store a super-sampled image than to store the final output image. If each output pixel corresponds to a four by four block of super-sampled pixels, the super-sampled image will require sixteen times more memory to store than the resulting anti-aliased image.
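A minimal sketch of super-sampling as described above: render at a higher resolution, then box-filter down to the output resolution. The function and variable names here are illustrative only and do not come from the patent.

```python
# Hedged sketch of super-sampling: a box filter followed by re-sampling.

def box_downsample(img, factor):
    """Average each factor-by-factor block of a super-sampled image down to
    one output pixel (equivalent to a box filter followed by re-sampling)."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(0, h, factor):
        row = []
        for x in range(0, w, factor):
            block = [img[y + j][x + i] for j in range(factor) for i in range(factor)]
            row.append(sum(block) / (factor * factor))
        out.append(row)
    return out

# An 8x8 super-sampled image (64 values) reduced to a 2x2 output (4 values):
# the 4x4 blocks illustrate the sixteen-fold memory cost quoted above.
super_img = [[1.0 if x < 4 else 0.0 for x in range(8)] for y in range(8)]
output = box_downsample(super_img, 4)
```

Unequal per-sample weights, as mentioned above, would simply replace the plain `sum(block)` with a weighted sum.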
Another known method of anti-aliasing is referred to as "area sampling". Area sampling involves calculating the areas of the fragments of pixels in the output image that are covered by each object in the ideal image. The colour of each output pixel is calculated as a weighted average of the colours of the pixel fragments in the input image, with the colour of each fragment being assigned a weight proportional to the area of the fragment. Fig. 4 shows a plurality of pixel fragments (shaded) produced by the line segment 400. Area sampling requires much less memory than super-sampling and is mathematically equivalent to applying a box filter to the ideal image before sampling.
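The weighted average at the heart of area sampling can be sketched in a few lines. The fragment list below is an invented example, not data from the patent.

```python
# Hedged sketch of area sampling for one output pixel: the pixel colour is the
# area-weighted average of the fragment colours covering that pixel.

def area_sample(fragments):
    """fragments: (area, colour) pairs covering one output pixel; areas sum to 1."""
    return sum(area * colour for area, colour in fragments)

# A pixel 30% covered by black ink (0.0) with the remaining 70% white paper (1.0):
pixel_value = area_sample([(0.3, 0.0), (0.7, 1.0)])
```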
Area sampling is typically used when rendering an image described in terms of geometric primitives and is typically combined with scan-conversion. Scan-conversion is the process of tracing the outline of a shape to determine which pixels fall on or within the bounds of the shape. When rendering geometric shapes, the shapes are scan converted during rendering.
Other known methods of anti-aliasing use variations of the above techniques.
For example, some methods super-sample only parts of the image. There are also many methods of area sampling which differ in how the areas of pixel fragments are calculated or estimated. These methods typically use crude approximations because previously known exact expressions for the areas of pixel fragments are complex and expensive to calculate using digital hardware.
The ways in which area sampling methods calculate the areas of pixel fragments covered by a polygon typically fall into three different categories. Firstly, there are methods based on looking up area values in one or more tables, where the tables are indexed by an approximation of the slope of the line and the value of a coordinate at a point where an edge of the polygon intersects the boundary of an output pixel. Secondly, there are methods based on counting the number of sub-pixels whose centres are contained inside the boundary of a polygon. Finally, there are methods that approximate the area covered by a polygon based on the distance of the polygon edge from the edge of a pixel, along a horizontal or vertical line through the centre of the pixel.
These area sampling methods have several disadvantages. Firstly, table lookup methods for area sampling are inaccurate and require memory to hold the tables.
Secondly, methods that count sub-pixels use rendering algorithms that are similar to super-sampling, in that they go through the motions of rendering at a higher resolution, although the sub-pixel colour values are not calculated or stored. Therefore, sub-pixel counting methods are not very efficient because of the extra work required to scan at a higher resolution.
It is an object of the present invention to ameliorate one or more of the limitations of the methods described above.
Summary of the Invention

According to one aspect of the present invention there is provided a method of anti-aliasing the edges of a polygon, said method comprising the steps of:

processing a description of said polygon to produce a plurality of line segments;

scanning each of said line segments to determine a plurality of area function values;

combining said plurality of area function values to determine a plurality of pixel fragment areas;

combining said plurality of pixel fragment areas to determine a total covered area for each pixel partially covered by said polygon;

determining an opacity value for each said pixel, utilising said total covered areas; and

determining a colour value for each said pixel, utilising said opacity values, wherein each said area function value is calculated at an intersection of said line segment with a pixel boundary.
According to another aspect of the present invention there is provided a method of anti-aliasing the edges of a polygon, said method comprising the steps of:

processing a description of said polygon to produce a plurality of line segments;

scanning each of said line segments a first time to determine a first plurality of area function values corresponding to intersections of said line segment with horizontal pixel boundaries;

scanning each of said line segments a second time to determine a second plurality of area function values corresponding to intersections of said line segment with vertical pixel boundaries;

combining said first and second pluralities of area function values to determine a plurality of pixel fragment areas;

combining said plurality of pixel fragment areas to determine a total covered area for each pixel partially covered by said polygon;

determining an opacity value for each said pixel, utilising said total covered areas; and

determining a colour value for each said pixel, utilising said opacity values, wherein each said area function value is calculated at an intersection of said line segment with a pixel boundary.
According to still another aspect of the present invention there is provided a method of calculating a total covered area of a pixel produced by at least one line segment, said method comprising the steps of:

scanning said line segment to determine a plurality of area function values;

combining said plurality of area function values to determine a plurality of pixel fragment areas; and

combining said plurality of pixel fragment areas to determine a total covered area for said pixel.
According to still another aspect of the present invention there is provided an apparatus for anti-aliasing the edges of a polygon, said apparatus comprising:

means for processing a description of said polygon to produce a plurality of line segments;

means for scanning each of said line segments to determine a plurality of area function values;

means for combining said plurality of area function values to determine a plurality of pixel fragment areas;

means for combining said plurality of pixel fragment areas to determine a total covered area for each pixel partially covered by said polygon;

means for determining an opacity value for each said pixel, utilising said total covered areas; and

means for determining a colour value for each said pixel, utilising said opacity values, wherein each said area function value is calculated at an intersection of said line segment with a pixel boundary.
According to still another aspect of the present invention there is provided an apparatus for anti-aliasing the edges of a polygon, said apparatus comprising:

means for processing a description of said polygon to produce a plurality of line segments;

means for scanning each of said line segments a first time to determine a first plurality of area function values corresponding to intersections of said line segment with horizontal pixel boundaries;

means for scanning each of said line segments a second time to determine a second plurality of area function values corresponding to intersections of said line segment with vertical pixel boundaries;

means for combining said first and second pluralities of area function values to determine a plurality of pixel fragment areas;

means for combining said plurality of pixel fragment areas to determine a total covered area for each pixel partially covered by said polygon;

means for determining an opacity value for each said pixel, utilising said total covered areas; and

means for determining a colour value for each said pixel, utilising said opacity values, wherein each said area function value is calculated at an intersection of said line segment with a pixel boundary.
According to still another aspect of the present invention there is provided an apparatus for calculating a total covered area of a pixel produced by at least one line segment, said apparatus comprising:

means for scanning said line segment to determine a plurality of area function values;

means for combining said plurality of area function values to determine a plurality of pixel fragment areas; and

means for combining said plurality of pixel fragment areas to determine a total covered area for said pixel.
According to still another aspect of the present invention there is provided a computer readable medium, having a program recorded thereon, where the program is configured to make a computer execute a procedure for anti-aliasing the edges of a polygon, said program comprising:

code for processing a description of said polygon to produce a plurality of line segments;

code for scanning each of said line segments to determine a plurality of area function values;

code for combining said plurality of area function values to determine a plurality of pixel fragment areas;

code for combining said plurality of pixel fragment areas to determine a total covered area for each pixel partially covered by said polygon;

code for determining an opacity value for each said pixel, utilising said total covered areas; and

code for determining a colour value for each said pixel, utilising said opacity values, wherein each said area function value is calculated at an intersection of said line segment with a pixel boundary.

According to still another aspect of the present invention there is provided a computer readable medium, having a program recorded thereon, where the program is
According to still another aspect of the present invention there is provided a computer readable medium, having a program recorded thereon, where the program is 5332 14.doc I -j4' j f_ -7configured to make a computer execute a procedure for anti-aliasing the edges of a polygon, said program comprising: code for processing a description of said polygon to produce a plurality of line segments; code for scanning each of said line segments a first time to determine a first plurality of area function values corresponding to intersections of said line segment with horizontal pixel boundaries; code for scanning each of said line segments a second time to determine a second plurality of area function values corresponding to intersections of said line segment with vertical pixel boundaries; code for combining said first and second pluralities of area fuinction values to determine a plurality of pixel fragment areas; code for combining said plurality of pixel fragment areas to determine a total covered area for each pixel partially covered by said polygon; code for determining an opacity value for each said pixel, utilising said total covered areas; and code for determining a colour value for each said pixel, utilising said opacity V, values, wherein each said area function value is calculated at an intersection of said line a. a segment with a pixel boundary.
According to still another aspect of the present invention there is provided a computer readable medium, having a program recorded thereon, where the program is configured to make a computer execute a procedure for calculating a total covered area of a pixel produced by at least one line segment, said program comprising:

code for scanning said line segment to determine a plurality of area function values;

code for combining said plurality of area function values to determine a plurality of pixel fragment areas; and

code for combining said plurality of pixel fragment areas to determine a total covered area for said pixel.
Brief Description of the Drawings

A number of preferred embodiments of the present invention will now be described with reference to the drawings, in which:

Fig. 1 is a flow diagram showing a method of producing an image in accordance with the preferred embodiment;

Fig. 2 is a flow diagram showing a method of calculating pixel-fragment areas in accordance with the method of Fig. 1;

Fig. 3 is a flow diagram showing a method of calculating a plurality of pixel boundary intersections in accordance with the method of Fig. 1;

Fig. 4 shows a plurality of pixel fragments produced by a line segment;

Fig. 5 shows an area function calculated in accordance with the methods of Figs. 1 to 3;

Fig. 6 shows a rectangular box area function calculated in accordance with the methods of Figs. 1 to 3;

Fig. 7 shows a pixel fragment area calculation in accordance with the methods of Figs. 1 to 3;

Fig. 8 shows a pixel fragment area calculation with x0 > x1, in accordance with the methods of Figs. 1 to 3;

Fig. 9 shows a pixel fragment area calculation for upward pointing line segments in accordance with the methods of Figs. 1 to 3;

Fig. 10 shows the contribution to an area covered at the start of a line segment;

Fig. 11 shows the contribution to an area covered at the end of a line segment;

Fig. 12 shows two line segments with opposite orientations;

Fig. 13 shows boundary crossing information in accordance with the preferred embodiment;

Fig. 14 shows a vertical line segment; and

Fig. 15 is a schematic block diagram of a general purpose computer upon which the preferred embodiment of the present invention can be practiced.
Detailed Description including Best Mode

Where reference is made in any one or more of the accompanying drawings to steps and/or features which have the same reference numerals, those steps and/or features have for the purposes of this description the same function(s) or operation(s), unless the contrary intention appears.
Some portions of the detailed description which follows are explicitly or implicitly presented in terms of algorithms and symbolic representations of operations on data within a computer memory (ie. computer code). These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities. Unless specifically stated otherwise as apparent from the following discussions, it is appreciated that throughout the present description, discussions utilising terms such as "processing", "computing", "generating", "creating", "operating", "communicating", "rendering", "providing", and "linking" or the like, refer to the action and processes of a computer system, or similar electronic device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
The preferred embodiment is a method of anti-aliasing the edges of a polygon represented by a plurality of line segments. The preferred embodiment provides a fast and accurate means of calculating areas of pixel fragments and is suitable for inclusion in anti-aliased polygon rendering methods. The preferred embodiment reduces the aliasing effects at the edges of any polygons rendered in an image.
The following description is based on the assumption that pixels are square and that the pixel boundaries have integer x or y coordinates. The line segments of any associated image can have integer or non-integer end points.
Fig. 1 is a flow diagram showing the method of anti-aliasing the edges of a polygon in accordance with the preferred embodiment of the present invention. The process begins at step 101, where a description of the polygon is converted into a plurality of line segment records. The number of line segment records will vary depending on the number of line segments, which is dependent on the number of sides that the polygon has.
The line segment records comprise coordinate values representing the start and end points of a particular line segment. At the next step 103, the line segment records are scanned by stepping along the line segment at unit distances in a horizontal direction to produce a first plurality of area function values using a line scan algorithm, in accordance with the preferred embodiment. The line segment scan algorithm will be explained in more detail later in this document. Each of the first plurality of area function values corresponds to an intersection of the line segment with a vertical pixel boundary. The process continues at the next step 105, where the line segment records are scanned by stepping along the line segment at unit distances in a vertical direction to produce a second plurality of area function values using the line scan algorithm, in accordance with the preferred embodiment. Each of the second plurality of area function values corresponds to an intersection of the line segment with a horizontal pixel boundary. The area functions used in steps 103 and 105 are described in more detail later in this document. At the next step 107, the area function values computed in steps 103 and 105 are combined to produce a total covered area for each partially covered pixel. The process concludes at step 109, where the total covered areas computed in step 107 are combined together with edge crossing information to form pixel colour values.
Steps 103 and 105 are carried out for each line segment in order to calculate the plurality of area function values. Steps 103 and 105 can be carried out in any order. As will be explained in further detail later in this document, steps 103, 105 and 107 are preferably carried out synchronously for each line segment where area function values are calculated in the order that the line segment intersects with pixel boundaries. However, the horizontal and vertical scans can be carried out separately where the area function values are stored as the values are calculated and combined later at step 107.
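The overall data flow of steps 101 to 109 can be sketched as follows. In this sketch the per-pixel coverage is estimated by sub-pixel point sampling as a crude stand-in for the area-function scanning described later; all function names are invented for the sketch and do not come from the patent.

```python
# Illustrative sketch of the Fig. 1 pipeline for a single polygon (assumed
# structure only; coverage is a point-sampled stand-in, not the patent's method).

def polygon_to_segments(points):                      # step 101: polygon -> line segments
    return [(points[i], points[(i + 1) % len(points)]) for i in range(len(points))]

def inside(px, py, segments):
    """Even-odd crossing test: is the point (px, py) inside the polygon?"""
    hits = 0
    for (ax, ay), (bx, by) in segments:
        if (ay > py) != (by > py):                    # edge spans the scanline
            if px < ax + (py - ay) * (bx - ax) / (by - ay):
                hits += 1
    return hits % 2 == 1

def coverage(px, py, segments, n=8):                  # stand-in for steps 103 to 107
    return sum(inside(px + (i + 0.5) / n, py + (j + 0.5) / n, segments)
               for i in range(n) for j in range(n)) / (n * n)

def render(points, width, height, fill=0.0, paper=1.0):   # steps 108-109: opacity, colour
    segments = polygon_to_segments(points)
    img = []
    for py in range(height):
        row = []
        for px in range(width):
            alpha = coverage(px, py, segments)        # opacity from covered area
            row.append(alpha * fill + (1 - alpha) * paper)
        img.append(row)
    return img

square = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0), (0.0, 2.0)]
img = render(square, 3, 3)                            # 3x3 greyscale output
```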
Fig. 2 is a flow diagram showing the method of calculating the pixel fragment areas for a single line segment, in accordance with the preferred embodiment of the present invention. The process begins at step 201, where a first area function value is calculated. The first area function value is used as a pixel fragment area for the pixel containing the start of the line segment. At the next step 203, a pixel fragment area is calculated for each successive pixel that the line segment passes through. Each pixel fragment area calculated at step 203 is obtained by subtracting the value of the area function at the boundary intersection where the line segment enters a pixel from the value of the area function at the boundary intersection where the line segment exits the pixel, and taking the fractional part of the resulting difference. The formula for the area function and the method for calculating the area function values will be described in further detail later in this document. The process concludes at the next step 205, where a contribution to the area of the pixel containing the end of the line segment is calculated.
The contribution of the end of the line segment is treated the same way as a pixel fragment area for the pixel. The calculation of the contribution of the end of the line segment will be explained in more detail later in this document.
The area function used in steps 105, 107, 201 and 203, in accordance with the preferred embodiment, can be expressed as:

P(x,y) = A(x,y) - B(x,y)

where A(x,y) = (1/2)(x + x0)(y - y0) and B(x,y) = y·ceil(x).
The functions A(x,y) and B(x,y) are defined for points on the line through the endpoints (x0,y0) and (x1,y1) of the line segment to be scanned. As shown in Fig. 5, A(x,y) represents the signed area of a trapezoid 501 bounded by the line segment joining (x0,y0) and (x,y) and the y-axis. Given two points (x,y) and (x',y') on the line 500 through (x0,y0) and (x1,y1), the signed area of the trapezoid 503 bounded by the line segment joining (x,y) and (x',y') and the y-axis is given by A(x',y') - A(x,y) = (1/2)(x + x')(y' - y). As shown in Fig. 6, B(x,y) represents the (signed) area of the rectangle with corners (0,0), (ceil(x),0), (ceil(x),y) and (0,y).

As seen in Fig. 7, if (x,y) is the point at which a line segment 701 enters a pixel 703 and (x',y') is the point at which the line segment 701 exits the same pixel 703, then the area of the part of the pixel 703 that is cut off by the line segment 701 can be written as the fractional part of the sum of three areas, frac(B + A + B'), where:

frac(B + A + B') = frac( B(x,y) - floor(x)·floor(y) + A(x',y') - A(x,y) + ceil(x')·ceil(y') - B(x',y') ) = frac( P(x',y') - P(x,y) ) (1)

and where:

B represents the area of a rectangular box with corners (0,floor(y)), (x,floor(y)), (x,y) and (0,y), and B = B(x,y) - floor(x)·floor(y);

A represents the area of the horizontal strip 700 bounded by the line segment 705 and the y-axis 707, and A = A(x',y') - A(x,y);

B' represents the area of a rectangular strip 709 between the exit point (x',y') and the pixel boundary, and B' = ceil(x')·ceil(y') - B(x',y').

Note that one or more of the areas B, A or B' can be zero depending on where the line segment enters or exits the pixel. The method of calculating the area of each pixel fragment in accordance with the preferred embodiment will be explained in more detail later in this document.
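The fragment-area rule can be exercised numerically. The sketch below assumes the sign convention P(x,y) = (1/2)(x + x0)(y - y0) - y·ceil(x); this convention, the example segment and the expected area are assumptions of the sketch, not values taken from the patent.

```python
# Hedged numerical check: the pixel fragment area is the fractional part of
# the area function difference between the exit and entry boundary crossings.
from math import ceil, floor

x0, y0 = 0.0, 0.0                       # segment start; segment slope dy = 0.4

def P(x, y):
    # A(x,y) = (1/2)(x + x0)(y - y0): signed trapezoid against the y-axis
    # B(x,y) = y * ceil(x): signed rectangle area
    return 0.5 * (x + x0) * (y - y0) - y * ceil(x)

def frac(t):
    return t - floor(t)

# Pixel [1,2] x [0,1]: the segment enters at (1, 0.4) and exits at (2, 0.8).
fragment = frac(P(2.0, 0.8) - P(1.0, 0.4))
# The exact area cut off on one side of the segment is 1 - (0.4 + 0.8)/2 = 0.4.
assert abs(fragment - 0.4) < 1e-9
```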
Fig. 3 is a flow diagram showing the method of calculating the plurality of pixel boundary intersections in accordance with the preferred embodiment of the present invention. The process begins at step 301, where a determination is made as to whether the next intersection of a particular line segment is with a vertical pixel boundary or with a horizontal pixel boundary. If it is found that the next intersection of the line segment is with a vertical pixel boundary then the value of the area function for the intersection of the line segment with the vertical pixel boundary is calculated, at the next step 303.
Otherwise, the value of the area function at the next intersection of the line segment with a horizontal pixel boundary is calculated, at step 305. The value of the area function for the next intersection of the line segment with the vertical pixel boundary is calculated by adding an increment to the value of the area function calculated at the previous intersection of the line segment with a vertical pixel boundary. The increment is equal to plus or minus the value of the y coordinate at the previous intersection with a vertical pixel boundary, minus a value equal to half of the magnitude of the change in the y coordinate between adjacent intersections of the line segment with vertical pixel boundaries. Similarly, the value of the area function at the next intersection of the line segment with a horizontal pixel boundary is calculated by adding an increment to the value of the area function calculated at the previous intersection of the line segment with a horizontal pixel boundary. The increment is equal to plus or minus the value of the x coordinate at the previous intersection with a horizontal pixel boundary, plus a value equal to half of the magnitude of the change in the x coordinate between adjacent intersections of the line segment with horizontal pixel boundaries.
The method of anti-aliasing the edges of a polygon in accordance with the preferred embodiment of the present invention will now be explained in more detail.
If the x-axis 711 is taken as increasing to the right and the y-axis 707 is taken as increasing down, the pixel fragment area formula given above calculates the area of the part of a pixel that lies on the right side of the line segment 701 (looking along the line segment). The same formula applies no matter how the line segment 701 is oriented.
Fig. 8 shows a line segment 800 with x0 > x1. The expression for the area is the same in the case where x0 > x1, and hence the area calculation is the same as for x0 < x1.
Reversing the direction of the line segment 800 switches between the left and right sides of a pixel 801, as seen in Fig. 9. The above relationship between the direction of the line segment 800 and the calculation of a pixel fragment area is valid as long as frac(P(x',y') - P(x,y)) is not zero. When frac(P(x',y') - P(x,y)) = 0, the associated line segment passes exactly through the corner of a pixel and can be ignored, since there are no pixel fragments generated at such a point.
Areas of pixel fragments can be efficiently calculated by evaluating frac(P(x,y)) at integer x and y values. Values of frac(P(x,y)) can be calculated by performing independent incremental calculations for intersections with horizontal boundaries and for intersections with vertical pixel boundaries. The type of incremental computation used by the preferred embodiment is an example of a DDA (digital differential analyser).
Letting dx = (x1 - x0)/(y1 - y0) and dy = (y1 - y0)/(x1 - x0), the single pixel area function P(x,y) can be calculated incrementally using the following relationships. For integer x values, the following two relationships hold:

P(x+1, y+dy) = P(x,y) - y - dy/2; and

P(x-1, y-dy) = P(x,y) + y - dy/2.
For integer y values, the following two relationships hold:

P(x+dx, y+1) = P(x,y) + x + dx/2 + y·ceil(x) - (y+1)·ceil(x+dx); and

P(x-dx, y-1) = P(x,y) - x + dx/2 + y·ceil(x) - (y-1)·ceil(x-dx).
Note that at integer y values, the y·ceil(x), (y+1)·ceil(x+dx) and (y-1)·ceil(x-dx) terms are integers and can be ignored, since only the fractional part of P(x,y) is of interest.
"Therefore, the incremental calculations are symmetrical in x and y, apart from the signs of 30 the terms.
For integer x values, the following two relationships hold:
frac(P(x+1, y+dy)) = frac(P(x,y) - y - dy/2); and (2)
frac(P(x-1, y-dy)) = frac(P(x,y) + y - dy/2). (3)
For integer y values, the following two relationships hold:
frac(P(x+dx, y+1)) = frac(P(x,y) + x + dx/2); and (4)
frac(P(x-dx, y-1)) = frac(P(x,y) - x + dx/2). (5)
The above relationships (2) to (5) address the incremental calculation of areas cut by line segments. However, for the endpoints of each line segment further relationships must be used.
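The relationships (2) and (3) can be checked numerically. The sketch below is a hedged illustration: the sample segment and the explicit form of the area function P (including the y ceil(x) term) are assumptions chosen to be consistent with the derivation in this section.

```python
import math

# Hedged numeric check of relationships (2) and (3); the sample segment
# and the explicit form of P below are illustrative assumptions.
x0, y0, x1, y1 = 0.3, 0.2, 3.7, 2.9
dx = (x1 - x0) / (y1 - y0)
dy = (y1 - y0) / (x1 - x0)

def frac(v):
    return v - math.floor(v)

def P(x, y):
    # Single pixel area function, including the y*ceil(x) term.
    return dx * (y - y0) ** 2 / 2 + x0 * (y - y0) - y * math.ceil(x)

x = 2.0                     # an integer x value within the segment's span
y = y0 + dy * (x - x0)      # the point on the line at that x
# Relationship (2):
assert abs(frac(P(x + 1, y + dy)) - frac(P(x, y) - y - dy / 2)) < 1e-9
# Relationship (3):
assert abs(frac(P(x - 1, y - dy)) - frac(P(x, y) + y - dy / 2)) < 1e-9
```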
Fig. 10 shows the fractional area 1000 created from two adjacent line segments 1001, 1002. At the vertex 1005 of the two adjacent line segments 1001, 1002, there is a contribution to the fractional area 1000 from the two adjacent line segments 1001, 1002.
The contribution C of a line segment 1002 to the starting pixel 1007 area (ie. the shaded area of the pixel 1007) is given by:
C = A(x',y') - A(x0,y0) + B(x',y') + ceil(x') ceil(y') (6)
where (x',y') is the point at which the line segment 1002 exits the starting pixel 1007. Note that the (ceil(x') ceil(y')) term does not contribute to the fractional part of the area, so it can be ignored.
For integer x', C = -dy (x'-x0)²/2 - y0 x' + x0 y0.
For integer y', C = dx (y'-y0)²/2 + x0 (y'-y0) - y' ceil(x').
Since y' is an integer, the (y' ceil(x')) term can be ignored because only the fractional part of C is of interest.
With reference to Fig. 11, the contribution D of a line segment ((x0,y0)-(x1,y1)) 1100 to an ending pixel 1101 area (ie. the shaded area of pixel 1101) is given by:
D = A(x1,y1) - A(x,y) + B(x,y) - floor(x) ceil(y)
= A(x1,y1) - P(x,y) - floor(x) ceil(y), (7)
where (x,y) is the point where the line segment 1100 enters the ending pixel 1101.
Note that the (-floor(x) ceil(y)) term is an integer and does not contribute to the fractional part of the pixel area, so it can be ignored.
Fig. 12 shows that the calculations for endpoints give the correct value even if the line segments 1201, 1202 have different orientations. In this case the two line segments 1201, 1202 that meet at (x0,y0) are oriented so that the expressions for the contributions have opposite signs and sum to exactly the area of the pixel fragment 1203.
If a pixel is crossed by multiple line segments, the contributions from each segment are added together. The fractional part of the sum is the area of the pixel that is covered by a polygon comprising the line segments, and is based on the assumption that the polygon boundary is a simple closed curve, ie. the boundary does not cross itself. The other assumption that is made, in accordance with the method of the preferred embodiment, is that the boundary is oriented clockwise and that x coordinates increase to the right and y coordinates increase downward.
A first line segment scan algorithm (referred to as a "basic line segment scan algorithm") is presented below for calculating areas of pixel fragments covered by a polygon near the edge of the polygon, in accordance with the preferred embodiment of the present invention. The basic line segment scan algorithm is described in terms of real number quantities and can be implemented using fixed-point arithmetic or floating point arithmetic. The basic line segment scan algorithm is particularly well suited for fixed-point arithmetic, but sufficient precision must be maintained in order to overcome accumulating round-off errors.
The basic line segment scan algorithm does not handle vertical, near vertical, horizontal or near horizontal line segments. These line segments must be handled as special cases and will be handled correctly using a second line segment scan algorithm (referred to as an "integer line segment scan algorithm"), in accordance with the preferred embodiment of the present invention.
The basic line segment scan algorithm steps along a line segment ((x0,y0)-(x1,y1)) and evaluates frac(P(x,y)) incrementally at integer x and y coordinates. The frac(P(x,y)) values are used to calculate the areas of pixel fragments cut by the line segment.
The basic line segment scan algorithm, in accordance with the preferred embodiment, has four sections: an initialisation section, a control loop, an x-scan and a y-scan. The initialisation section calculates initial values for the variables that are used in the control loop and the x and y scans. The control loop repeatedly calls either the x-scan or the y-scan until the entire line segment has been processed. The x-scan is called once for each time that the line segment crosses a vertical pixel boundary, ie. it is called for each point with an integer x coordinate on the line segment. The y-scan is called once for each time that the line segment crosses a horizontal pixel boundary, ie. it is called for each point with an integer y coordinate on the line segment. Note that only one of the x-scan or y-scan routines is called per pixel cut by the line segment.
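The choice between the x-scan and the y-scan at each step can be sketched as follows. This is my own simplified helper, not the patent's code: it merely records, for a sample segment, whether each successive pixel-boundary crossing is vertical (an "x-scan" case) or horizontal (a "y-scan" case).

```python
import math

# A simplified sketch (my own helper, not the patent's code) of the
# control-loop decision: at each step, cross whichever pixel boundary
# the segment reaches first, mimicking the x-scan/y-scan choice.
def crossing_kinds(x0, y0, x1, y1):
    dy = (y1 - y0) / (x1 - x0)
    kinds = []
    X, Y = math.floor(x0) + 1, math.floor(y0) + 1   # next boundaries
    while X <= x1 or Y <= y1:
        # y value where the segment meets the next vertical boundary
        yX = y0 + dy * (X - x0)
        if X <= x1 and (yX < Y or Y > y1):
            kinds.append('x')        # vertical crossing: "x-scan"
            X += 1
        else:
            kinds.append('y')        # horizontal crossing: "y-scan"
            Y += 1
    return kinds

# A shallow segment crosses x = 1, 2, 3 and y = 1, in this order:
assert crossing_kinds(0.5, 0.25, 3.5, 1.25) == ['x', 'x', 'y', 'x']
```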
When each fragment is calculated, the routine "output_fragment" is called, which takes a pair of coordinates representing the top left corner of the pixel that the fragment is part of, the area of the fragment, and a flag indicating the orientation of the line segment.
The orientation flag indicates whether the area value provided represents the fragment inside or outside the polygon. If the flag is set to true, then the area represents the area of the fragment outside the polygon and if the flag is false, then the area represents the area inside the polygon.
The basic line segment scan algorithm is presented below as computer code, and has seven main variables, X, Y, x, y, Px, Py and P, and two constant increment values dx and dy, where:
X represents the integer x coordinate of a point on the line segment;
Y represents the integer y coordinate of a point on the line segment;
x represents the x coordinate at integer y values;
y represents the y coordinate at integer x values;
Px represents the value of frac(P(x,y)) at integer x values;
Py represents the value of frac(P(x,y)) at integer y values; and
P represents the value of frac(P(x,y)) at the last pixel boundary.
Initialize:
    upward = false;
    // Test if the line segment points upward.
    if (y0 > y1)
    {
        swap(x0, x1);
        swap(y0, y1);
        upward = true;
    }
    // Initialize X and Y to the coordinates of the top left corner
    // of the pixel in which (x0, y0) lies.
    // This assumes that the y axis points downward.
    X = floor(x0);
    Y = floor(y0);
    dx = (x1-x0)/(y1-y0);
    dy = (y1-y0)/(x1-x0);
    // Test if the segment has a negative step in x.
    boolean reverseX = (dx < 0);
    if (reverseX)
        X = X+1;
    x = dx*(Y-y0) + x0;
    y = dy*(X-x0) + y0;
    // Initialize Px and Py. The values are obtained by evaluating the
    // expressions for P at the preceding integer values before the
    // start of the line segment.
    Py = frac(dx*(Y-y0)*(Y-y0)/2 + x0*(Y-y0));
    Px = frac(-dy*(X-x0)*(X-x0)/2 - y0*X);
    P = 0.0;

Control Loop:
    if (reverseX)
        cX = X-1;
    else
        cX = X;
    while (Y < floor(y1) or cX != floor(x1))
    {
        if (y+dy < Y+1)
            do X-scan;
        else
            do Y-scan;
        if (reverseX)
            cX = X-1;
        else
            cX = X;
    }
    return;

Y-scan:
    Py += x + dx/2;
    if (reverseX)
        cX = X-1;
    else
        cX = X;
    output_fragment(cX, Y, frac(Py-P), upward);
    x += dx;
    Y = Y+1;
    P = Py;
    return;

X-scan:
    if (reverseX)
    {
        Px += y - dy/2;
        output_fragment(X-1, Y, frac(Px-P), upward);
        X = X-1;
        y -= dy;
    }
    else
    {
        Px += -y - dy/2;
        output_fragment(X, Y, frac(Px-P), upward);
        X = X+1;
        y += dy;
    }
    P = Px;
    return.
The output from the basic line segment scan algorithm given above is a sequence of pixel fragments. Each fragment consists of an x coordinate, a y coordinate and an area, and gives enough information to render each pixel that is cut by the boundary of the polygon. However, there is not enough information to fill in pixels that are fully enclosed by the boundary. To determine which pixels are fully enclosed, the y-scan needs to be modified slightly to output boundary crossing information.
To render a complete polygon, all of the edges in the polygon should be fed into a modified line segment scan algorithm (not illustrated). The resulting sequences of crossing and fragment information then need to be sorted and combined to determine the opacities of the pixels to be rendered.
The modified line segment scan algorithm will output a crossing at each point on the polygon boundary with an integer y coordinate. At each such point (x, y) on the polygon boundary, the algorithm will output a crossing consisting of three values: floor(x), y and a flag indicating the direction of the crossing (up or down).
The crossing information determined in accordance with the preferred embodiment is for crossings at pixel boundaries and not pixel centres. Pixels that have fragments are rendered using the opacity of an associated polygon multiplied by the total area of the pixel that is covered by the polygon. Other pixels are rendered with 100% of the opacity of the associated polygon if their top edge lies completely inside the polygon boundary. For example, for the polygon 1301 shown in Fig. 13, pixels and (3,3) have their edges strictly contained inside the polygon boundary, but pixel is fragmented so it will be rendered with an opacity multiplied by the area covered. Pixels and should be rendered with unmodified opacity.
The rendering algorithm described above is only one example of how the line segment scan algorithm can be incorporated into a rendering algorithm, in accordance with the preferred embodiment. In accordance with a further embodiment, all of the polygon edges are sorted first and multiple line segment scans are allowed to be active simultaneously. The further embodiment allows scanning to be performed in strictly scan-line order.
In accordance with still a further embodiment, active line segments are sorted in order of crossings to eliminate the need to sort pixel fragments. The sort order between two line segments in a single polygon is well defined since the edges are assumed not to cross. The sort order between line segments in different polygons is not well defined and can change as the line segments are scanned.
As discussed above, the basic line segment scan algorithm does not handle vertical and horizontal line segments. The basic line segment scan algorithm also suffers from accumulating round-off errors (ie. the areas calculated get less accurate towards the end of each line segment). These problems can be compensated for by increasing the number of bits used to represent the quantities calculated in the basic line segment scan algorithm. However, increasing the number of bits requires more hardware and still does not completely eliminate errors in the results. The integer line segment scan algorithm overcomes the problems of the basic line segment scan algorithm without the requirement of more hardware.
The integer line segment scan algorithm is related to Bresenham's algorithm and the midpoint algorithm for drawing lines. The x and y scans operate the same way as a generalised midpoint algorithm except that the values computed by the midpoint algorithm are used to perform further area calculations.
The integer line segment scan algorithm can compute the term floor(S frac(P(x,y))) at integer values of x and y, where the scale factor S = 2^n for some integer n. The term floor(S frac(P(x,y))) can be interpreted as an n-bit binary fraction approximating frac(P(x,y)). n is preferably equal to 8 and S is preferably equal to 256, although the integer line segment scan algorithm will work for any positive even number of bits.
In the integer line segment scan algorithm, the coordinates of the end-points of the line segments are preferably represented as fixed-point numbers with four fraction bits, although the algorithm can work for any number of fraction bits. Since an 8-bit approximation is being used for frac(P(x,y)), the resulting area will also be an 8-bit quantity, so there is little advantage in using more than 4 fractional bits in x or y. Using 4 fractional bits means that the endpoints of each line segment can be positioned on a sub-pixel boundary where each pixel is sub-divided into 256 sub-pixels.
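The fixed-point representation above can be sketched as follows. The helper name to_fixed4 and the sample coordinates are my own, not from the specification.

```python
# Sketch of the fixed-point endpoint representation described above;
# the helper name to_fixed4 is an assumption of this illustration.
def to_fixed4(v):
    # A coordinate becomes an integer with four fraction bits.
    return int(16 * v)      # floor for the non-negative values used here

X0, Y0 = to_fixed4(2.3), to_fixed4(5.75)
assert (X0, Y0) == (36, 92)
# Each pixel is divided into 16 x 16 = 256 sub-pixel positions,
# and (X0 mod 16, Y0 mod 16) give the sub-pixel offsets.
assert (X0 % 16, Y0 % 16) == (4, 12)
```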
The real number quantities in the basic line segment algorithm are scaled by 256 and the integer and fractional parts of these scaled quantities are handled separately.
Since the fractional parts are rational numbers, they can be made into integers by multiplying each of them by a suitable integer scale.
If x0, y0, x1, and y1 are representable as fixed point numbers with 4 fraction bits, then 2x16³ (x1-x0) P(x,y) is an integer for integer x. Similarly, 2x16³ (y1-y0) P(x,y) is an integer for integer y. Scaling the fractional parts of the quantities tracked in the x-scan by 2x16³ (x1-x0) results in integer values. Similarly, scaling the fractional parts of the quantities tracked in the y-scan by 2x16³ (y1-y0) results in integer values.
The input to the integer line segment scan algorithm is a polygon edge represented in the form (X0, Y0, Sx, Sy, Dx, Dy, upward), where:
X0 = 16 x0 is an integer representing the x coordinate, x0, of the starting endpoint of the polygon edge. X0 can be interpreted as a fixed-point number with four fractional bits;
Y0 = 16 y0 is an integer representing the y coordinate, y0, of the starting endpoint of the polygon edge. Y0 can be interpreted as a fixed-point number with four fractional bits;
Sx = 16 (x1-x0) is an integer representing the total step in x. Sx can be interpreted as a fixed-point representation of (x1-x0) with four fractional bits;
Sy = 16 (y1-y0) is an integer representing the total step in y. Sy can be interpreted as a fixed-point representation of (y1-y0) with four fractional bits;
Dx = floor(256 Sx/Sy) represents the scaled integer part of the change in x per unit step in y;
Dy = floor(256 Sy/Sx) represents the scaled integer part of the change in y per unit step in x; and
upward represents a flag indicating the orientation of the line segment.
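Assembling such an edge record can be sketched as below. The helper name make_edge and the sample coordinates are assumptions for illustration; the tuple layout mirrors the description above.

```python
import math

# Hedged sketch of building the edge record described above; make_edge
# is my own helper name, and the sample coordinates are assumptions.
def make_edge(x0, y0, x1, y1, upward=False):
    X0, Y0 = int(16 * x0), int(16 * y0)
    Sx, Sy = int(16 * (x1 - x0)), int(16 * (y1 - y0))
    Dx = math.floor(256 * Sx / Sy)   # scaled change in x per unit y
    Dy = math.floor(256 * Sy / Sx)   # scaled change in y per unit x
    return (X0, Y0, Sx, Sy, Dx, Dy, upward)

# Sx = 32 and Sy = 64, so Dx = 128 and Dy = 512:
assert make_edge(1.0, 2.0, 3.0, 6.0) == (16, 32, 32, 64, 128, 512, False)
```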
Note that Dx and Dy are redundant since they can be calculated from Sx and Sy, but it is assumed that Dx and Dy have been pre-calculated to avoid the need to perform divisions in custom hardware.
To understand the calculations for the integer and fractional parts of the term 256 frac(P(x,y)), the following relationships are useful:
floor(256 frac(P(x,y))) = floor(256 P(x,y)) mod 256 (8)
For integer y,
32 (y1-y0) frac(256 frac(P(x,y))) = 2x16³ (y1-y0) P(x,y) mod 32 (y1-y0) (9)
For integer x,
32 (x1-x0) frac(256 frac(P(x,y))) = 2x16³ (x1-x0) P(x,y) mod 32 (x1-x0) (10)
Equation (8) indicates that the calculation of floor(256 frac(P(x,y))) requires only 8-bit 2's complement arithmetic.
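Equation (8) can be checked numerically. The sample values below are my own, chosen to be exact in binary so the comparison is free of rounding noise.

```python
import math

# Numeric illustration of equation (8): the least significant eight bits
# of floor(256 P) equal floor(256 frac(P)). Sample values are assumptions
# chosen to be exact in binary.
for P in (3.7265625, -1.2890625, 0.05859375):
    lhs = math.floor(256 * (P - math.floor(P)))   # floor(256 frac(P))
    rhs = math.floor(256 * P) % 256               # low 8 bits of floor(256 P)
    assert lhs == rhs
```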
Equations (9) and (10) give integer expressions that are proportional to the fractional part of 256 frac(P(x,y)) at integer x and y values, which allow correction of errors in the calculation of floor(256 frac(P(x,y))) at integer x and y values.
To initialise the integer line segment scan algorithm, the following quantities need to be calculated (Note: the quantities will be further explained later in the document).
IPy(floor(y0)) = floor(256 frac(P(x,y))) at y = floor(y0) (11)
IPx(floor(x0)) = floor(256 frac(P(x,y))) at x = floor(x0) (12)
IPx(floor(x0)+1) = floor(256 frac(P(x,y))) at x = floor(x0)+1 (13)
FPy(floor(y0)) = 32 (y1-y0) frac(256 frac(P(x,y))) at y = floor(y0) (14)
FPx(floor(x0)) = 32 (x1-x0) frac(256 frac(P(x,y))) at x = floor(x0) (15)
FPx(floor(x0)+1) = 32 (x1-x0) frac(256 frac(P(x,y))) at x = floor(x0)+1 (16)
Ix(floor(y0)) = floor(256 (dx(y-y0)+x0)) at y = floor(y0) (17)
Iy(floor(x0)) = floor(256 (dy(x-x0)+y0)) at x = floor(x0) (18)
Iy(floor(x0)+1) = floor(256 (dy(x-x0)+y0)) at x = floor(x0)+1 (19)
Fx(floor(y0)) = 16 (y1-y0) frac(256 (dx(y-y0)+x0)) at y = floor(y0) (20)
Fy(floor(x0)) = 16 (x1-x0) frac(256 (dy(x-x0)+y0)) at x = floor(x0) (21)
Fy(floor(x0)+1) = 16 (x1-x0) frac(256 (dy(x-x0)+y0)) at x = floor(x0)+1 (22)
FhDx = 32 (y1-y0) frac(128 dx) (23)
FhDy = 32 (x1-x0) frac(128 dy) (24)
Not all of the quantities (11) to (24) need to be calculated for each edge. All edges are first normalised so that Sy > 0 by reversing the edge if necessary. Normalised edges for which Sx > 0 require all of these quantities except (13), (16), (19) and (22); normalised edges for which Sx < 0 require all of these quantities except (12), (15), (18) and (21).
IPy(floor(y0)) and FPy(floor(y0)) represent the integer and fractional parts of (256 frac(P(x,y))) evaluated at the point on the line through (x0, y0) and (x1, y1) where y = floor(y0), which is generally not on the line segment. The point on the line through (x0, y0) and (x1, y1) where y = floor(y0) is the nearest point on the line at or before the start of the line segment with an integer y value.
Equation (8) indicates that floor(256 frac(P(x,y))) can be provided by calculating the least significant eight bits of the integer part of 256 P(x,y), where:
P(x,y) = dx (y-y0)²/2 + x0 (y-y0) - y ceil(x)
Therefore,
floor(256 P(x,y)) = floor(128 dx (y-y0)² + 256 x0 (y-y0) - 256 y ceil(x))
= floor(128 dx (y-y0)² + 256 x0 (floor(y0)-y0) - 256 y ceil(x)) at y = floor(y0) (25)
The last term of equation (25), (256 y ceil(x)), can be ignored at integer y values since only the eight least significant bits of the result are of interest. The middle term of equation (25) can be calculated as follows:
256 x0 (floor(y0)-y0) = -16 x0 16 (y0-floor(y0)) = -X0 (Y0 mod 16)
The first term of equation (25), floor(128 dx (y-y0)²), can be approximated efficiently as follows:
floor(128 dx (y-y0)²) = floor(256 dx [16(y-y0)]²/512)
= floor(256 dx (Y0 mod 16)²/512)
≈ floor(Dx (Y0 mod 16)²/512)
The magnitude of the error (E1) in the above approximation for the term floor(128 dx (y-y0)²) is not greater than one, which can be shown as follows:
Since Dx = floor(256 dx),
256 dx - 1 < Dx ≤ 256 dx.
Therefore, since 0 ≤ (y-y0)²/2 < 1,
128 dx (y-y0)² - 1 < Dx (y-y0)²/2 ≤ 128 dx (y-y0)²
Therefore,
floor(128 dx (y-y0)²) - 1 ≤ floor(Dx (Y0 mod 16)²/512) ≤ floor(128 dx (y-y0)²) (ie. at y = floor(y0)).
Therefore, either
floor(128 dx (y-y0)²) = floor(Dx (Y0 mod 16)²/512), or
floor(128 dx (y-y0)²) = floor(Dx (Y0 mod 16)²/512) + 1.
These expressions for floor(128 dx (y-y0)²) can be calculated from the inputs to the algorithm using only bit shifts and multiplications. Note also that only the least significant bits of the result are required, so most of the calculation can be performed using only 8-bit 2's complement arithmetic.
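The shift-and-multiply form of the approximation term can be sketched as follows. The sample values of Dx and Y0 are my own assumptions.

```python
# Small sketch (sample values are assumptions) showing the approximation
# term computed with one multiply and one shift, as the text suggests.
Dx = 300                     # a hypothetical floor(256*dx)
Y0 = 37                      # a hypothetical fixed-point y, 4 fraction bits
t = (Y0 % 16) * (Y0 % 16)    # (Y0 mod 16) squared
term = (Dx * t) >> 9         # floor(Dx*(Y0 mod 16)**2/512)
assert term == (Dx * t) // 512 == 14
```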
Scaling the term 128 dx (y-y0)² by 2 Sy gives an integer value that can be efficiently calculated, which allows the error (E1) to be determined as follows:
Let E1 = 2 Sy (128 dx (y-y0)² - floor(Dx (Y0 mod 16)²/512))
= Sy dx [16(y-y0)]² - 2 Sy floor(Dx (Y0 mod 16)²/512)
= Sy (Sx/Sy) (Y0 mod 16)² - 2 Sy floor(Dx (Y0 mod 16)²/512)
= Sx (Y0 mod 16)² - 2 Sy floor(Dx (Y0 mod 16)²/512)
If floor(Dx (Y0 mod 16)²/512) = floor(128 dx (y-y0)²), then E1 = 2 Sy frac(128 dx (y-y0)²); otherwise E1 = 2 Sy (frac(128 dx (y-y0)²) + 1).
Note that apart from the evaluation of the term floor(Dx (Y0 mod 16)²/512), E1 can be calculated using m-bit 2's complement arithmetic, where m is two greater than the number of bits needed to represent the range of possible values of Sy.
Given E1, the terms IPy(floor(y0)) and FPy(floor(y0)) can be calculated as follows:
IPy(floor(y0)) = (floor(Dx (Y0 mod 16)²/512) - X0 (Y0 mod 16)) mod 256 if E1 < 2 Sy;
IPy(floor(y0)) = (floor(Dx (Y0 mod 16)²/512) + 1 - X0 (Y0 mod 16)) mod 256 if E1 ≥ 2 Sy;
FPy(floor(y0)) = E1 if E1 < 2 Sy; and
FPy(floor(y0)) = E1 - 2 Sy if E1 ≥ 2 Sy.
IPx(floor(x0)) and FPx(floor(x0)) represent the integer and fractional parts of 256 frac(P(x,y)) evaluated at the point on the line through (x0, y0) and (x1, y1) where x = floor(x0). The integer and fractional parts of 256 frac(P(x,y)) are needed if the line segment is oriented so that x is increasing along the segment, ie. when Sx > 0. The calculation of the integer and fractional parts of 256 frac(P(x,y)) is similar to the calculation of the quantities IPy(floor(y0)) and FPy(floor(y0)).
For integer values of x,
P(x,y) = -dy (x-x0)²/2 - y0 x
Therefore,
256 P(x,y) = -128 dy (x-x0)² - 256 y0 x
Therefore, at x = floor(x0),
floor(256 P(x,y)) = floor(-128 dy (x-x0)² - 256 y0 x)
= floor(-128 dy (floor(x0)-x0)² - 256 y0 floor(x0)) (26)
The second term of equation (26) (ie. -256 y0 floor(x0)) can be calculated as follows:
-256 y0 floor(x0) = -16 y0 16 floor(x0) = -Y0 16 floor(X0/16)
The first term of equation (26) (ie. floor(-128 dy (floor(x0)-x0)²)) can be approximated efficiently as follows:
floor(-128 dy (x-x0)²) = floor(-256 dy [16(floor(x0)-x0)]²/512)
= floor(-256 dy (X0 mod 16)²/512)
≈ floor((-Dy-1) (X0 mod 16)²/512)
The magnitude of the error (E2) in the approximation of floor(-128 dy (x-x0)²) is not greater than one, which can be shown as follows:
Since Dy = floor(256 dy),
256 dy - 1 < Dy ≤ 256 dy.
Therefore, -256 dy - 1 ≤ -Dy - 1 < -256 dy.
Since 0 ≤ (x-floor(x0))² < 1,
-128 dy (x-x0)² - 1 < (-Dy-1) (x-x0)²/2 ≤ -128 dy (x-x0)²
Therefore,
floor(-128 dy (x-x0)²) - 1 ≤ floor((-Dy-1) (X0 mod 16)²/512) ≤ floor(-128 dy (x-x0)²)
Therefore, at x = floor(x0), either
floor(-128 dy (x-x0)²) = floor((-Dy-1) (X0 mod 16)²/512), or
floor(-128 dy (x-x0)²) = floor((-Dy-1) (X0 mod 16)²/512) + 1.
Scaling -128 dy (x-x0)² by 2 Sx gives an integer value that can be efficiently calculated and allows the error (E2) to be determined using integer arithmetic as follows:
Let E2 = 2 Sx (-128 dy (x-x0)² - floor((-Dy-1) (X0 mod 16)²/512))
= -Sx dy [16(x-x0)]² - 2 Sx floor((-Dy-1) (X0 mod 16)²/512)
= -Sx (Sy/Sx) (X0 mod 16)² - 2 Sx floor((-Dy-1) (X0 mod 16)²/512)
= -Sy (X0 mod 16)² - 2 Sx floor((-Dy-1) (X0 mod 16)²/512).
If floor((-Dy-1) (X0 mod 16)²/512) = floor(-128 dy (x-x0)²), then E2 = 2 Sx frac(-128 dy (x-x0)²); otherwise E2 = 2 Sx (frac(-128 dy (x-x0)²) + 1). Therefore, if E2 < 2 Sx, then floor((-Dy-1) (X0 mod 16)²/512) = floor(-128 dy (x-x0)²); otherwise floor((-Dy-1) (X0 mod 16)²/512) = floor(-128 dy (x-x0)²) - 1.
Given E2, IPx(floor(x0)) and FPx(floor(x0)) can be calculated as follows:
IPx(floor(x0)) = (floor((-Dy-1) (X0 mod 16)²/512) - Y0 16 floor(X0/16)) mod 256 if E2 < 2 Sx;
IPx(floor(x0)) = (floor((-Dy-1) (X0 mod 16)²/512) + 1 - Y0 16 floor(X0/16)) mod 256 if E2 ≥ 2 Sx;
FPx(floor(x0)) = E2 if E2 < 2 Sx; and
FPx(floor(x0)) = E2 - 2 Sx if E2 ≥ 2 Sx.
IPx(floor(x0)+1) and FPx(floor(x0)+1) represent the integer and fractional parts of the term (256 frac(P(x,y))) evaluated at the point on the line through (x0, y0) and (x1, y1) where x = floor(x0)+1. IPx(floor(x0)+1) and FPx(floor(x0)+1) are needed if the line segment is oriented so that x is decreasing along the segment, ie. when Sx < 0. The calculation of IPx(floor(x0)+1) and FPx(floor(x0)+1) is similar to the calculation of the quantities IPx(floor(x0)) and FPx(floor(x0)).
At x = floor(x0)+1,
floor(256 P(x,y)) = floor(-128 dy (x-x0)² - 256 y0 x)
= floor(-128 dy (floor(x0)+1-x0)² - 256 y0 (floor(x0)+1)) (27)
The second term of equation (27) is calculated as follows:
-256 y0 (floor(x0)+1) = -16 y0 16 (floor(x0)+1) = -Y0 16 (floor(X0/16)+1)
The first term of equation (27) can be approximated efficiently as follows:
floor(-128 dy (x-x0)²) = floor(-256 dy [16(floor(x0)+1-x0)]²/512)
= floor(-256 dy (16-(X0 mod 16))²/512)
≈ floor((-Dy-1) (16-(X0 mod 16))²/512)
The magnitude of the error (E3) in the approximation of the term floor(-128 dy (x-x0)²) is not greater than one, which can be shown as follows:
Since Dy = floor(256 dy),
-256 dy - 1 ≤ -Dy - 1 < -256 dy.
Since 0 ≤ (floor(x0)+1-x0)² < 1,
-128 dy (x-x0)² - 1 < (-Dy-1) (x-x0)²/2 ≤ -128 dy (x-x0)²
Therefore,
floor(-128 dy (x-x0)²) - 1 ≤ floor((-Dy-1) (16-(X0 mod 16))²/512) ≤ floor(-128 dy (x-x0)²)
Therefore, at x = floor(x0)+1, either
floor(-128 dy (x-x0)²) = floor((-Dy-1) (16-(X0 mod 16))²/512), or
floor(-128 dy (x-x0)²) = floor((-Dy-1) (16-(X0 mod 16))²/512) + 1.
If floor((-Dy-1) (16-(XO mod 16))2/5 12) floor(-128 (x-XO) 2 dy) then E 3 -2 Sx frac(128 (x-xo) 2 dy), otherwise E 3 -2 Sx (frac(128 (x-xo) 2 dy)+1).
Therefore, if E 3 -2 Sx then floor((-Dy-1) (16-(XO mod 16))2 /5 12) floor(128 (xxo) 2 dy), otherwise floor((-Dy-1) (16-(XO mod 16))2 /5 12) floor(-128 (x-xo) 2 dy) -1.
Given E 3 IPx(floor(xo)) and FPx(floor(xo)) can be calculated as follows: IPx(floor(xo)) floor((-Dy- 1) (1 6-(XO mod 1 6))2 /5 12) YO 16 floor(XO/16) mod 256 if E 3 -2 Sx; lIPx(floor(xo)) floor((-Dy- 1) (1 6-(XO mod 16))2 /5 12) 1 :-YO 16 floor(XO/1 6) mod 256 if E 3 2! -2 Sx; FPx(floor(xo)) E3 if E 3 -2 Sx; and FPx(floor(xo)) E 3 +2 Sx if E 3 -2 Sx.
Ix(floor(yo) and Fx(floor(yo) represent the integer and fractional parts of the x coordinate of the point on the line through (xO,y 1 and (xi,yi) with y floor(yo).
:At y floor(yo), Ix(floor(yo)) floor(256 (dx (y-yo)+xo)) floor(256 (dx (floor(yo)-yo)+xo)) floor(-256 dx (yo-floor(yo)) 256 xo) floor(-256 dx 16 (yo-floor(yo))/16) 256 xo floor((-Dx-1)(YO mod 16)/16) 16 XO The magnitude of the error (E 4 in the approximation of Ix(floor(yo)) is not greater than one. That is, either floor(256 dx floor((-Dx-1)(YO mod 16)/16) or floor(256 dx floor((-Dx-1)(YO mod 16)/16)+l The error (E 4 in the approximation of Jx(floor(yo)) can be determined as follows: 533214.doc 28 Let E 4 =Sy (256 dx (y-yo) floor((-Dx-1)(YO mod 16)/16)) 256 Sy dx (y-yo) Sy floor((-Dx-1)(YO mod 16)/16) -16 Sy (SxISy) 16 (yo-y) Sy floor((-Dx-1)(YO mod 16)/16) -16 Sx (YO mod 16) Sy floor((-Dx-1)(YO mod 16)/16) Either E 4 Sy frac(256 dx or E Sy (frac(256 dx 1).
Therefore, given E 4 Ix(floor(yo) and Fx(floor(yo)) can be evaluated as follows: Ix(floor(yo)) floor((-Dx-l)(YO mod 16)/16) 16 XO if E 4
SY;
Ix(floor(yo)) floor((-Dx-1)(YO mod 16)/16) XO 1 if E 4
SY
FPx(floor(yO)) E 4 if E 4 Sy; and FPx(floor(yO)) E 4 SY if E 4
SY.
Iy(floor(xo)) and Fy(floor(xo)) represent the integer and fractional parts of the y coordinate of the point on the line through (xo,yo) and (xl,yl) with x floor(xo).
Iy(floor(x0)) and Fy(floor(x0)) represent the integer and fractional parts of the y coordinate of the point on the line through (x0,y0) and (x1,y1) with x = floor(x0). Iy(floor(x0)) and Fy(floor(x0)) need to be calculated when Sx > 0 and are calculated the same way as Ix(floor(y0)) and Fx(floor(y0)).
Given E 5 the terms Iy(floor(xo)) and Fy(floor(xo)) can be evaluated as follows: Iy(floor(xo)) floor((-Dy-1) (XO mod 16)/16) 16 YO if E 5 Sx; ly(floor(xo)) floor((-Dy-1) (XO mod 16)/16) 16 YO 1 if E 5
SX;
FPy(floor(xO)) E 5 if E 5 Sx; and FPy(floor(xO)) E 5 Sx if E 5 Sx.
Ily(floor(xo)+ 1) and Fy(floor(xo)+ 1) represent the integer and fractional parts of the y coordinate of the point on the line through (xo,yo) and (xi,y 1 with x =floor(xo)+1.
Iy(floor(xo)+1) and Fy(floor(xo)+1) values need to be calculated when Sx 0. The calculation of Iy(floor(xo)+1) and Fy(floor(xo)±1) is similar to the calculation of lx(floor(yO)) and Fx(floor(yO)).
That is, Iy(floor(xo)+1);: floor(Dy (16 XO mod 16)/16).
5332 14.doc 29 The magnitude of the error (E 6 in the approximation of Iy(floor(xo)±1) is not greater than one. The error in the approximation of Iy(floor(xo)±1) can be calculated as follows: Let E 6 -Sx (256 dy (x-xo) floor(Dy (16 XO mod 16)/16)) -256 Sx dy (x-xo) Sx floor(Dy (16 XO mod 16)/16) 16 Sx (Sy/Sx) [1I6(xo-x)] Sx floor(Dy (16 XO mod 16)/16) -16 Sy (16 (XO mod 16)) Sx floor(Dy (16 XO mod 16)/16) Either E 6 -Sx frac(256 dy or E 6 -Sx (frac(256 dy 1).
Therefore given E 6 Iy(floor(xO)+1) and Fy(floor(xO)+l) can be evaluated as follows: Iy(floor(xo)) Iy(floor(xo)) FPy(floor(xO)) FPy(floor(xO)) floor(Dy (16 XO mod 16)/16) 16 YO floor(Dy (16 XO mod 16)/16) 16 YO 1
E
6 Sx if E 6 -Sx; if E 6 -Sx; if E 6 -Sx; and if E 6 -Sx.
FbDx and FhDy represent the fractional parts of 128 dx and 128 dy respectively.
FhDx 2 Sy frac(128 dx) 2 Sy(128 dx floor(128 dx)) 256 Sy(SxISy) 2 Sy floor((256 dx)/2) 256 Sx 2 Sy floor(Dx/2) Let E 7 =256 Sx 2 Sy floor(DxI2).
Either E 7 =2 Sy frac(128 dx) or E 6 2 Sy (frac(128 dx) 1).
Therefore given E, FhDx can be evaluated as follows: FhDx E 7 if E 7 <2 Sy; or FhDx =E 7 2Sy if E 7 :2 Sy.
FhDy can be calculated in a similar way. However, Sx may be negative.
Let E 8 256 Sy 2 Sx floor(Dy/2) if Sx 0; 256 Sy 2 Sx floor((-Dy-l)12) if Sx 0; Either E 8 2 Sx frac(128 dy) or E 8 2 Sx (frac(128 dy) 1).
If Sx 0 then FhDy =E 8 if E 8 <2 Sx; or ER 2 Sx if E 8 :2 Sx; otherwise 5332 14.doc FhDy FhDy
E
8
E
8 2 Sx if E 8 Sx; and if E 8 !2 Sx.
In the y-scan loop, the fractional part of 256 dx is required to be known. The integer line segment scan algorithm does not store the fractional part of 256 dx since the fractional part of 256 dx can be calculated from FhDx as follows: FhDx 2 Sy frac(128 dx) 2 Sy frac(256 Sx/(2Sy)) 256 Sx mod (2Sy); and Sy frac(256 dx) Sy frac(256 dx) Sy frac(256 Sx/Sy) (256 Sx) mod Sy.
Therefore, Sy frac(256 dx) FhDx Sy Sy frac(256 dx) FhDx The calculation of FhDy is similarly carried out as follows: If Sx 0, FhDy 2 Sx frac(128 dy) 2 Sx frac(256 Sy/(2 Sx)) 256 Sy mod (2 Sx); and Sx frac(256dy) Sx frac(256 dy) Sx frac(256 Sy/Sx) (256 Sy) mod Sx.
Sx frac(256dy) FhDy Sx Sx frac(256dy) FhDy Similarly if Sx 0 FhDy 2 Sx frac(128 dy) 2 Sx frac(-256 Sy/(2 Sx)) -256 Sy mod Sx); Sx frac(256dy) Sx frac(256 dy) Sx frac(-256 Sy/-Sx) (-256 Sy) mod -Sx; Sx frac(256 dy) FhDy Sy if FhDx Sy; and if FbDx Sy.
if FhDy Sx; and if FhDy Sx.
if FhDy -Sx; and 5332 14.doc -31 Sx frac(256 dy) FhDy ifFhDy -Sx.
Fig. 14 shows a line segment 1401. If the line segment 1401 is nearly vertical, Dx will be close to zero and the magnitude of Dy will be large. Since Dy floor(256 Sy/Sx), the largest value that Dy can have is 256M where M is the maximum value that Sy can have. In accordance with the preferred embodiment, sufficient bits to represent Dy are preferably used in order to allow all possible values of Dy to be represented, so no modifications to the integer line segment scan algorithm are required to handle near vertical line segments.
For vertical line segments, Dy is undefined, and attempting to calculate Dy will result in a division by zero. Dy is used to initialise variables that are used in the x-scan and the control loop, and Dy is used in the x-scan and the control loop. For vertical line segments, the x-scan should preferably never execute. Similarly, FhDy is also undefined and should be set to zero for vertical line segments.
The control loop will execute correctly if Dy is set to a value that is larger than 256 Sy, and FhDy is set to zero. For vertical line segments FhDy is preferably set to zero. Dy is preferably set to the maximum possible value for vertical line segments.
If a line segment is nearly horizontal, Dy will be close to zero and the magnitude of Dx will be large. Since Dx floor(256 Sx/Sy), the largest value that Dx can have is (256 where M is the maximum value that Sx can have. Sufficient bits to represent Dx are preferably used in order to allow all possible values of Dx to be represented, so no modifications to the integer line segment scan algorithm are required to handle near horizontal line segments.
For horizontal line segments, Dx is undefined, and attempting to calculate Dx will result in a division by zero. Dx is used to initialise variables used in the y-scan, and Dx is used in the y-scan. For horizontal line segments, the y-scan should never execute so no modification to the integer line segment scan algorithm is required to handle horizontal line segments.
A horizontal line segment (xo,yo) (xi,yo) is preferably treated as pointing "upward" if xo xl, and treated as pointing "downward" otherwise, which ensures that all areas are calculated with the correct sign.
The integer line segment scan algorithm, in accordance with the preferred embodiment, is presented below as computer code and has eleven variables X, Y, Ix, Iy, Fx, Fy, IPx, IPy, IP, FPx, FPy, four constant increment values Dx, Dy, FhDx and FhDy 533214.doc 32 and a flag upward indicating whether or not the segment has reversed orientation. The eleven variables of the integer line segment scan algorithm are defined as follows: X represents the integer x coordinate; Y represents the integer y coordinate; Ix represents the integer part of scaled x coordinate at integer y values; ly represents the integer part of scaled y coordinate at integer x values; Fx represents the scaled fractional part of scaled x coordinate at integer y values; Fy represents the scaled fractional part of scaled y coordinate at integer x values; IPx represents the value of floor(256 frac(P(x,y))) at integer x values; IPy represents the value of floor(256 frac(P(x,y))) at integer y values; IP represents the value of floor(256 frac(P(x,y))) at the last pixel boundary; FPx represents the value of 16 (xi-xo) frac(256 frac(P(x,y))) at integer x values; FPy represents the value of 16 (yi-yo) frac(256 frac(P(x,y))) at integer y values; Dx represents the floor(256 dx), a scaled approximation of dx; Dy represents the floor(256 dy), a scaled approximation of dy; *FhDx represents the scaled fractional part of 128 dx; and FhDy represents the scaled fractional part of 128 dy.
Initialize:
    Y = Y0 >> 4;
    X = X0 >> 4;
    // If the line segment is oriented right to left, then increment the
    // initial X value so that it represents the right side of the pixel.
    if (Sx < 0) X = X + 1;
    IP = 0;
    // Calculate initial values of IPy and FPy.
    t = (Y0 mod 16) * (Y0 mod 16);
    IPy = (Dx * t) >> 9;
    // FPy represents the error in IPy.
    FPy = Sx * t - 2 * Sy * IPy;
    // If the error is too large, adjust FPy and IPy.
    if (FPy >= 2 * Sy)
    {
        FPy -= 2 * Sy;
        ++IPy;
    }
    IPy += X0 * (Y0 mod 16);
    // Calculate FhDx.
    FhDx = 256 * Sx - 2 * Sy * (Dx >> 1);
    if (FhDx >= 2 * Sy) FhDx -= 2 * Sy;
    // Calculate initial values of Ix and Fx.
    // Ix = floor((-Dx - 1) * (Y0 mod 16) / 16);
    Ix = (-(Dx + 1) * (Y0 mod 16)) >> 4;
    // Fx represents the error in Ix.
    Fx = 16 * Sx * (Y0 mod 16) - Sy * Ix;
    // If the error is too large, adjust Fx and Ix.
    if (Fx >= Sy)
    {
        Fx -= Sy;
        ++Ix;
    }
    t = (16 - (X0 mod 16)) * (16 - (X0 mod 16));
    // IPx = floor((-Dy - 1) * t / 512);
    IPx = (-(Dy + 1) * t) >> 9;
    if (Sx > 0)
    {
        // FPx represents the error in IPx.
        FPx = -Sy * t - 2 * Sx * IPx;
        // If the error is too large, adjust FPx and IPx.
        if (FPx >= 2 * Sx)
        {
            FPx -= 2 * Sx;
            ++IPx;
        }
    }
    else
    {
        // FPx represents the error in IPx.
        FPx = Sy * t - 2 * Sx * IPx;
        // If the error is too large, adjust FPx and IPx.
        if (FPx <= -2 * Sx)
        {
            FPx += 2 * Sx;
            --IPx;
        }
    }
    IPx += 16 * Y0 * (X0 mod 16);
    // Calculate initial values of Iy and Fy.
    if (Sx > 0)
    {
        // Iy = floor((-Dy - 1) * (X0 mod 16) / 16);
        Iy = (-(Dy + 1) * (X0 mod 16)) >> 4;
        // Fy represents the error in Iy.
        Fy = 16 * Sy * (X0 mod 16) - Sx * Iy;
        // If the error is too large, adjust Fy and Iy.
        if (Fy >= Sx)
        {
            Fy -= Sx;
            ++Iy;
        }
    }
    else
    {
        Iy = (Dy * (16 - (X0 mod 16))) >> 4;
        Fy = 16 * Sy * (16 - (X0 mod 16)) - Sx * Iy;
        if (Fy <= -Sx)
        {
            Fy += Sx;
            --Iy;
        }
    }
    Iy += 16 * Y0;
    // Calculate FhDy.
    if (Sx > 0)
    {
        FhDy = 256 * Sy - 2 * Sx * (Dy >> 1);
        if (FhDy >= 2 * Sx) FhDy -= 2 * Sx;
    }
    else if (Sx == 0)
    {
        FhDy = 0;
    }
    else
    {
        FhDy = 256 * Sy - 2 * Sx * ((-Dy - 1) >> 1);
        if (FhDy <= -2 * Sx) FhDy += 2 * Sx;
    }
    t = (Y0 mod 16) * (Y0 mod 16);
    // IPy = floor(Dx * (Y0 mod 16)**2 / 512);
    IPy = (Dx * t) >> 9;
    // FPy represents the size of the error in IPy.
    FPy = Sx * t - 2 * Sy * IPy;
    // If the error is too large, adjust IPy and FPy.
    if (FPy >= 2 * Sy)
    {
        FPy -= 2 * Sy;
        ++IPy;
    }
    IPy += X0 * (Y0 mod 16);

Control-Loop:
    if (Sx < 0) cX = X - 1; else cX = X;
    while (Y != (Y0 + Sy) >> 4 or cX != (X0 + Sx) >> 4)
    {
        nY = Iy + Dy;
        nFy = Fy + 2 * FhDy;
        if (FhDy >= Sy) nFy -= Sy;
        if (nFy >= Sy) nFy -= Sy;
        if ((nY >> 4) < Y + 1)
            call X-scan;
        else
            call Y-scan;
        if (reverseX) cX = X - 1; else cX = X;
    }
    // Add the contribution of the end point.
    A = ((2 * X0 + Sx) * Sy) >> 1;
    output_fragment(cX, Y, A - IP, upward);
    return;

Y-scan:
    IPy += Ix + (Dx >> 1);
    FPy += 2 * Fx;
    // If the error is too large,
    // adjust IPy and adjust the error term.
    if (FPy >= 2 * Sy)
    {
        FPy -= 2 * Sy;
        ++IPy;
    }
    // Add the fractional part of 128 * dx.
    FPy += FhDx;
    // If the error is too large, adjust IPy and FPy.
    if (FPy >= 2 * Sy)
    {
        FPy -= 2 * Sy;
        ++IPy;
    }
    Ix += Dx;
    Fx += FhDx;
    // Adjust error term for Ix.
    // Adjust the adjustment.
    if (FhDx >= Sy) Fx -= Sy;
    if (Fx >= Sy)
    {
        Fx -= Sy;
        ++Ix;
    }
    ++Y;
    output_fragment(X, Y, IPy - IP, upward);
    output_crossing(X, Y, upward);
    IP = IPy;
    return;

X-scan:
    if (Sx > 0)
    {
        IPx -= Iy;
        FPx -= 2 * Fy;
        // If the error is negative, adjust IPx and FPx.
        if (FPx < 0)
        {
            FPx += 2 * Sx;
            --IPx;
        }
    }
    else
    {
        IPx += Iy;
        FPx += 2 * Fy;
        // If the error is too large, adjust IPx and FPx.
        if (FPx <= -2 * Sx)
        {
            FPx += 2 * Sx;
            ++IPx;
        }
    }
    // Subtract the fractional part of 128 * dy.
    FPx -= FhDy;
    // If the error is negative, adjust IPx and FPx.
    if (FPx < 0)
    {
        if (Sx > 0) FPx += 2 * Sx; else FPx -= 2 * Sx;
        --IPx;
    }
    Iy += Dy;
    Fy += FhDy;
    // Adjust error term for Iy.
    // Adjust the adjustment.
    if (Sx > 0)
    {
        if (FhDy >= Sx) Fy -= Sx;
    }
    else
    {
        if (FhDy <= -Sx) Fy += Sx;
    }
    if (Fy >= Sx)
    {
        Fy -= Sx;
        ++Iy;
    }
    if (Sx > 0)
        output_fragment(X, Y, IPx - IP, upward);
    else
        output_fragment(X - 1, Y, IPx - IP, upward);
    ++X;
    IP = IPx;
    return;

The aforementioned preferred methods comprise a particular control flow. There are many other variants of the preferred methods which use different control flows without departing from the spirit or scope of the invention. Furthermore, one or more of the steps of the preferred methods can be performed in parallel rather than sequentially.
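The fixed-point bookkeeping in the listing above ultimately approximates a simple continuous idea: sample an area function at the two boundary crossings of a pixel and take the difference. The following floating-point sketch illustrates that idea only; it is not the patent's integer code, and the function names are introduced here for illustration.

```c
/* Floating-point illustration of the area-function idea underlying the
 * integer listing above. Names are illustrative, not from the patent. */
#include <assert.h>
#include <math.h>

/* Area function for the line y = m*x + c: the signed area under the
 * line from x = 0 to x, i.e. the antiderivative of m*x + c. */
static double area_fn(double m, double c, double x)
{
    return 0.5 * m * x * x + c * x;
}

/* Fragment area contributed to a unit pixel column between the entry
 * crossing x_in and the exit crossing x_out: the difference of the
 * area function evaluated at the two boundary crossings. */
static double fragment_area(double m, double c, double x_in, double x_out)
{
    return area_fn(m, c, x_out) - area_fn(m, c, x_in);
}
```

For the horizontal chord y = 0.5 across a unit pixel the fragment area is 0.5, and for the diagonal y = x it is also 0.5, matching the exact trapezoid areas; the integer listing reproduces exactly this quantity in 1/256ths using only additions and shifts.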
The method of anti-aliasing the edges of a polygon can preferably be implemented in dedicated hardware, such as one or more integrated circuits performing the functions or sub-functions of Fig. 1. Such dedicated hardware can include graphic processors, digital signal processors, or one or more microprocessors and associated memories.
The method of anti-aliasing the edges of a polygon can alternatively be implemented using a conventional general-purpose computer system 1500, such as that shown in Fig. 15, wherein the processes of Figs. 1 to 14 can be implemented as software, such as an application program executing within the computer system 1500. In particular, the steps of the method of Figs. 1 to 3 are effected by instructions in the software that are carried out by the computer. The software can be divided into two separate parts: one part for carrying out the method of anti-aliasing the edges of a polygon; and another part to manage the user interface between the latter and the user. The software may be stored in a computer readable medium, including the storage devices described below, for example. The software is loaded into the computer from the computer readable medium, and then executed by the computer. A computer readable medium having such software or computer program recorded on it is a computer program product. The use of the computer program product in the computer preferably effects an advantageous apparatus for anti-aliasing the edges of a polygon in accordance with the embodiments of the invention.
The computer system 1500 comprises a computer module 1501, input devices such as a keyboard 1502 and mouse 1503, and output devices including a printer 1515 and a display device 1514. A Modulator-Demodulator (Modem) transceiver device 1516 is used by the computer module 1501 for communicating to and from a communications network 1520, for example connectable via a telephone line 1521 or other functional medium. The modem 1516 can be used to obtain access to the Internet, and other network systems, such as a Local Area Network (LAN) or a Wide Area Network (WAN).
The computer module 1501 typically includes at least one processor unit 1505, a memory unit 1506, for example formed from semiconductor random access memory (RAM) and read only memory (ROM), input/output (I/O) interfaces including a video interface 1507, an I/O interface 1513 for the keyboard 1502 and mouse 1503 and optionally a joystick (not illustrated), and an interface 1508 for the modem 1516. A storage device 1509 is provided and typically includes a hard disk drive 1510 and a floppy disk drive 1511. A magnetic tape drive (not illustrated) may also be used. A CD-ROM drive 1512 is typically provided as a non-volatile source of data. The components 1505 to 1513 of the computer module 1501 typically communicate via an interconnected bus 1504 and in a manner which results in a conventional mode of operation of the computer system 1500 known to those in the relevant art. Examples of computers on which the embodiments can be practised include IBM-PCs and compatibles, Sun Sparcstations or alike computer systems evolved therefrom.
The application program of the preferred embodiment can be resident on the hard disk drive 1510 and read and controlled in its execution by the processor 1505.
Intermediate storage of the program and any data fetched from the network 1520 may be accomplished using the semiconductor memory 1506, possibly in concert with the hard disk drive 1510. In some instances, the application program may be supplied to the user encoded on a CD-ROM or floppy disk and read via the corresponding drive 1512 or 1511, or alternatively may be read by the user from the network 1520 via the modem device 1516. Still further, the software can also be loaded into the computer system 1500 from other computer readable media including magnetic tape, a ROM or integrated circuit, a magneto-optical disk, a radio or infra-red transmission channel between the computer module 1501 and another device, a computer readable card such as a PCMCIA card, and the Internet and Intranets including email transmissions and information recorded on websites and the like. The foregoing is merely exemplary of relevant computer readable mediums. Other computer readable mediums may be practiced without departing from the scope and spirit of the invention.
The foregoing describes only one embodiment/some embodiments of the present invention, and modifications and/or changes can be made thereto without departing from the scope and spirit of the invention, the embodiment(s) being illustrative and not restrictive.
In the context of this specification, the word "comprising" means "including principally but not necessarily solely" or "having" or "including", and not "consisting only of". Variations of the word "comprising", such as "comprise" and "comprises", have corresponding meanings.

Claims

1. A method of anti-aliasing the edges of a polygon, said method comprising the steps of: processing a description of said polygon to produce a plurality of line segments; scanning each of said line segments to determine a plurality of area function values; combining said plurality of area function values to determine a plurality of pixel fragment areas; combining said plurality of pixel fragment areas to determine a total covered area for each pixel partially covered by said polygon; determining an opacity value for each said pixel, utilising said total covered areas; and determining a colour value for each said pixel, utilising said opacity values, wherein each said area function value is calculated at an intersection of said line segment with a pixel boundary.
2. The method according to claim 1, wherein said step of scanning each of said line segments comprises the sub-steps of: scanning each of said line segments a first time to determine a first plurality of area function values corresponding to intersections of said line segment with horizontal pixel boundaries; and scanning each of said line segments a second time to determine a second plurality of area function values corresponding to intersections of said line segment with vertical pixel boundaries.
3. The method according to claim 2, comprising the further sub-steps of: storing said first and second pluralities of area function values; and combining said first and second pluralities of area function values to determine said plurality of pixel fragment areas.
4. The method according to any one of claims 2 or 3, wherein said first and second scans are combined into a single scan.
5. The method according to claim 4, wherein said single scan produces pixel fragment areas in the order of occurrence of intersections of said line segment with a pixel boundary.

6. The method according to claim 2, wherein said step of determining said first plurality of area function values comprises the following sub-steps: calculating the value of the area function at a first intersection of said line segment with a horizontal pixel boundary; and calculating the value of the area function at each successive intersection of said line segment with a horizontal pixel boundary, by adding an increment to the value of the area function calculated at the previous intersection of said line segment with a horizontal pixel boundary.

7. The method according to claim 6, wherein said increment is equal to half of the value of a horizontal step between successive intersections of said line segment with a horizontal pixel boundary plus the value of the previous horizontal co-ordinate, if values of the corresponding vertical co-ordinates are increasing at successive intersections, otherwise said increment is equal to half of the value of the horizontal step between successive intersections of said line segment with a horizontal pixel boundary minus the value of said previous horizontal co-ordinate.

8. The method according to claim 2, wherein determining said second plurality of area function values comprises the following sub-steps: calculating the value of the area function at a first intersection of said line segment with a vertical pixel boundary; calculating the value of the area function at each successive intersection of said line segment with a vertical pixel boundary, by adding an increment to the value of the area function calculated at the previous intersection of said line segment with a vertical pixel boundary.

9.
The method according to claim 8, wherein said increment is equal to half of the value of a vertical step between successive intersections of said line segment with a vertical pixel boundary plus the value of the previous vertical co-ordinate if values of the corresponding horizontal co-ordinates are increasing at successive intersections, otherwise said increment is equal to half of the value of the vertical step between successive intersections of said line segment with a vertical pixel boundary minus the value of the previous vertical co-ordinate.

10. The method according to claim 1, wherein said step of combining said plurality of area function values comprises the following sub-step for each of said line segments: producing a pixel fragment area for each pixel crossed by said line segment, said pixel fragment area being equal to the fractional part of the difference between the value of an area function calculated at the intersection of said line segment with the pixel boundary where said line segment enters said pixel and the value of an area function calculated at the intersection of said line segment with the pixel boundary where said line segment exits said pixel.

11. The method according to any one of claims 1 to 10, wherein said step of scanning each of said line segments comprises the following further sub-steps for each of said line segments: determining a first contribution of said line segment to the total covered area of a pixel containing the start of said line segment; and determining a second contribution of said line segment to the total covered area of a pixel containing the end of said line segment.

12. The method according to claim 11, wherein said first contribution is equal to the value of the area function calculated at the first intersection of the line segment with said pixel.

13.
The method according to any one of claims 11 or 12, wherein said total covered area is equal to the fractional part of the sum of each of said first and second contributions and each pixel fragment area produced by a line segment crossing said pixel.

14. The method according to any one of claims 1 to 13, wherein each said opacity value is equal to the opacity of said polygon multiplied by said total covered area calculated for said pixel.

15. The method according to any one of claims 1 to 14, wherein said colour value is determined from the opacity of said pixel and the colour of said polygon.

16. The method according to any one of claims 1 to 15, wherein each line segment is represented by a record.

17. The method according to any one of claims 1 to 15, wherein each line segment is represented by a first record and a second record, said first record being used for scanning said line segment to determine intersections with horizontal pixel boundaries and said second record being used for scanning said line segment to determine intersections with vertical pixel boundaries.

18.
A method of anti-aliasing the edges of a polygon, said method comprising the steps of: processing a description of said polygon to produce a plurality of line segments; scanning each of said line segments a first time to determine a first plurality of area function values corresponding to intersections of said line segment with horizontal pixel boundaries; scanning each of said line segments a second time to determine a second plurality of area function values corresponding to intersections of said line segment with vertical pixel boundaries; combining said first and second pluralities of area function values to determine a plurality of pixel fragment areas; combining said plurality of pixel fragment areas to determine a total covered area for each pixel partially covered by said polygon; determining an opacity value for each said pixel, utilising said total covered areas; and determining a colour value for each said pixel, utilising said opacity values, wherein each said area function value is calculated at an intersection of said line segment with a pixel boundary.

19. The method according to claim 18, comprising the further sub-steps of: storing said first and second pluralities of area function values; and combining said first and second pluralities of area function values to determine said plurality of pixel fragment areas.

20. The method according to any one of claims 18 or 19, wherein said first and second scans are combined into a single scan.

21. The method according to claim 20, wherein said single scan produces pixel fragment areas in the order of occurrence of intersections of said line segment with a pixel boundary.

22.
The method according to claim 18, wherein said step of determining said first plurality of area function values comprises the following sub-steps: calculating the value of the area function at a first intersection of said line segment with a horizontal pixel boundary; and calculating the value of the area function at each successive intersection of said line segment with a horizontal pixel boundary, by adding an increment to the value of the area function calculated at the previous intersection of said line segment with a horizontal pixel boundary.

23. The method according to claim 22, wherein said increment is equal to half of the value of a horizontal step between successive intersections of said line segment with a horizontal pixel boundary plus the value of the previous horizontal co-ordinate, if values of the corresponding vertical co-ordinates are increasing at successive intersections, otherwise said increment is equal to half of the value of the horizontal step between successive intersections of said line segment with a horizontal pixel boundary minus the value of said previous horizontal co-ordinate.

24. The method according to claim 18, wherein determining said second plurality of area function values comprises the following sub-steps: calculating the value of the area function at a first intersection of said line segment with a vertical pixel boundary; calculating the value of the area function at each successive intersection of said line segment with a vertical pixel boundary, by adding an increment to the value of the area function calculated at the previous intersection of said line segment with a vertical pixel boundary.

25. The method according to claim 24, wherein said increment is equal to half of the value of a vertical step between successive intersections of said line segment with a vertical pixel boundary plus the value of the previous vertical co-ordinate if values of the corresponding horizontal co-ordinates are increasing at successive intersections, otherwise said increment is equal to half of the value of the vertical step between successive intersections of said line segment with a vertical pixel boundary minus the value of the previous vertical co-ordinate.

26. The method according to any one of claims 18 to 25, wherein said step of combining said plurality of area function values comprises the following sub-step for each of said line segments: producing a pixel fragment area for each pixel crossed by said line segment, said pixel fragment area being equal to the fractional part of the difference between the value of an area function calculated at the intersection of said line segment with the pixel boundary where said line segment enters said pixel and the value of an area function calculated at the intersection of said line segment with the pixel boundary where said line segment exits said pixel.

27. The method according to claim 18, wherein said step of scanning each of said line segments comprises the following further sub-steps for each of said line segments: determining a first contribution of said line segment to the total covered area of a pixel containing the start of said line segment; and determining a second contribution of said line segment to the total covered area of a pixel containing the end of said line segment.

28. The method according to claim 27, wherein said first contribution is equal to the value of the area function calculated at the first intersection of the line segment with said pixel.

29.
The method according to any one of claims 27 or 28, wherein said total covered area is equal to the fractional part of the sum of each of said first and second contributions and each pixel fragment area produced by a line segment crossing said pixel.

30. The method according to any one of claims 18 to 29, wherein each said opacity value is equal to the opacity of said polygon multiplied by said total covered area calculated for said pixel.

31. The method according to any one of claims 18 to 30, wherein said colour value is determined from the opacity of said pixel and the colour of said polygon.

32. The method according to any one of claims 18 to 31, wherein each line segment is represented by a record.

33. The method according to any one of claims 18 to 32, wherein each line segment is represented by a first record and a second record, said first record being used for scanning said line segment to determine intersections with horizontal pixel boundaries and said second record being used for scanning said line segment to determine intersections with vertical pixel boundaries.

34. A method of calculating a total covered area of a pixel produced by at least one line segment, said method comprising the steps of: scanning said line segment to determine a plurality of area function values; combining said plurality of area function values to determine a plurality of pixel fragment areas; combining said plurality of pixel fragment areas to determine a total covered area for said pixel.

35.
The method according to claim 34, wherein said step of scanning said line segments comprises the sub-steps of: scanning said line segment a first time to determine a first plurality of area function values corresponding to intersections of said line segment with horizontal pixel boundaries; and scanning said line segments a second time to determine a second plurality of area function values corresponding to intersections of said line segment with vertical pixel boundaries.

36. An apparatus for anti-aliasing the edges of a polygon, said apparatus comprising: means for processing a description of said polygon to produce a plurality of line segments; means for scanning each of said line segments to determine a plurality of area function values; means for combining said plurality of area function values to determine a plurality of pixel fragment areas; means for combining said plurality of pixel fragment areas to determine a total covered area for each pixel partially covered by said polygon; means for determining an opacity value for each said pixel, utilising said total covered areas; and means for determining a colour value for each said pixel, utilising said opacity values, wherein each said area function value is calculated at an intersection of said line segment with a pixel boundary.

37. The apparatus according to claim 36, further comprising: means for scanning each of said line segments a first time to determine a first plurality of area function values corresponding to intersections of said line segment with horizontal pixel boundaries; and means for scanning each of said line segments a second time to determine a second plurality of area function values corresponding to intersections of said line segment with vertical pixel boundaries.

38.
The apparatus according to claim 37, further comprising: means for storing said first and second pluralities of area function values; and means for combining said first and second pluralities of area function values to determine said plurality of pixel fragment areas.

39. The apparatus according to any one of claims 37 or 38, wherein said first and second scans are combined into a single scan.

40. The apparatus according to claim 39, wherein said single scan produces pixel fragment areas in the order of occurrence of intersections of said line segment with a pixel boundary.

41. The apparatus according to claim 37, further comprising: means for calculating the value of the area function at a first intersection of a line segment with a horizontal pixel boundary; and means for calculating the value of the area function at each successive intersection of said line segment with a horizontal pixel boundary, by adding an increment to the value of the area function calculated at the previous intersection of said line segment with a horizontal pixel boundary.

42. The apparatus according to claim 41, wherein said increment is equal to half of the value of a horizontal step between successive intersections of said line segment with a horizontal pixel boundary plus the value of the previous horizontal co-ordinate, if values of the corresponding vertical co-ordinates are increasing at successive intersections, otherwise said increment is equal to half of the value of the horizontal step between successive intersections of said line segment with a horizontal pixel boundary minus the value of said previous horizontal co-ordinate.

43.
The apparatus according to claim 37, further comprising: means for calculating the value of the area function at a first intersection of a line segment with a vertical pixel boundary; and means for calculating the value of the area function at each successive intersection of said line segment with a vertical pixel boundary, by adding an increment to the value of the area function calculated at the previous intersection of said line segment with a vertical pixel boundary.

44. The apparatus according to claim 43, wherein said increment is equal to half of the value of a vertical step between successive intersections of said line segment with a vertical pixel boundary plus the value of the previous vertical co-ordinate if values of the corresponding horizontal co-ordinates are increasing at successive intersections, otherwise said increment is equal to half of the value of the vertical step between successive intersections of said line segment with a vertical pixel boundary minus the value of the previous vertical co-ordinate.

45. The apparatus according to claim 36, further comprising: means for producing a pixel fragment area for each pixel crossed by a line segment, said pixel fragment area being equal to the fractional part of the difference between the value of an area function calculated at the intersection of said line segment with the pixel boundary where said line segment enters said pixel and the value of an area function calculated at the intersection of said line segment with the pixel boundary where said line segment exits said pixel.

46. The apparatus according to any one of claims 36 to 45, further comprising: means for determining a first contribution of a line segment to the total covered area of a pixel containing the start of said line segment; and means for determining a second contribution of said line segment to the total covered area of a pixel containing the end of said line segment.

47.
The apparatus according to claim 46, wherein said first contribution is equal to the value of the area function calculated at the first intersection of the line segment with said pixel.

48. The apparatus according to any one of claims 46 or 47, wherein said total covered area is equal to the fractional part of the sum of each of said first and second contributions and each pixel fragment area produced by a line segment crossing said pixel.

49. The apparatus according to any one of claims 36 to 48, wherein each said opacity value is equal to the opacity of said polygon multiplied by said total covered area calculated for said pixel.

50. The apparatus according to any one of claims 36 to 49, wherein said colour value is determined from the opacity of said pixel and the colour of said polygon.

51. The apparatus according to any one of claims 36 to 50, wherein each line segment is represented by a record.

52. The apparatus according to any one of claims 36 to 50, wherein each line segment is represented by a first record and a second record, said first record being used for scanning said line segment to determine intersections with horizontal pixel boundaries and said second record being used for scanning said line segment to determine intersections with vertical pixel boundaries.

53.
An apparatus for anti-aliasing the edges of a polygon, said apparatus comprising: means for processing a description of said polygon to produce a plurality of line segments; means for scanning each of said line segments a first time to determine a first plurality of area function values corresponding to intersections of said line segment with horizontal pixel boundaries; means for scanning each of said line segments a second time to determine a second plurality of area function values corresponding to intersections of said line segment with vertical pixel boundaries; means for combining said first and second pluralities of area function values to determine a plurality of pixel fragment areas; means for combining said plurality of pixel fragment areas to determine a total covered area for each pixel partially covered by said polygon; means for determining an opacity value for each said pixel, utilising said total covered areas; and means for determining a colour value for each said pixel, utilising said opacity values, wherein each said area function value is calculated at an intersection of said line segment with a pixel boundary.

54. The apparatus according to claim 53, further comprising: means for storing said first and second pluralities of area function values; and means for combining said first and second pluralities of area function values to determine said plurality of pixel fragment areas.

55. The apparatus according to any one of claims 53 or 54, wherein said first and second scans are combined into a single scan.

56. The apparatus according to claim 55, wherein said single scan produces pixel fragment areas in the order of occurrence of intersections of said line segment with a pixel boundary.

57.
The apparatus according to claim 53, further comprising: means for calculating the value of the area function at a first intersection of a line segment with a horizontal pixel boundary; and means for calculating the value of the area function at each successive intersection of said line segment with a horizontal pixel boundary, by adding an increment to the value of the area function calculated at the previous intersection of said line segment with a horizontal pixel boundary.

58. The apparatus according to claim 57, wherein said increment is equal to half of the value of a horizontal step between successive intersections of said line segment with a horizontal pixel boundary plus the value of the previous horizontal co-ordinate, if values of the corresponding vertical co-ordinates are increasing at successive intersections, otherwise said increment is equal to half of the value of the horizontal step between successive intersections of said line segment with a horizontal pixel boundary minus the value of said previous horizontal co-ordinate.

59. The apparatus according to claim 53, further comprising: means for calculating the value of the area function at a first intersection of a line segment with a vertical pixel boundary; means for calculating the value of the area function at each successive intersection of said line segment with a vertical pixel boundary, by adding an increment to the value of the area function calculated at the previous intersection of said line segment with a vertical pixel boundary.
60. The apparatus according to claim 59, wherein said increment is equal to half of the value of a vertical step between successive intersections of said line segment with a vertical pixel boundary plus the value of the previous vertical co-ordinate if values of the corresponding horizontal co-ordinates are increasing at successive intersections, otherwise said increment is equal to half of the value of the vertical step between successive intersections of said line segment with a vertical pixel boundary minus the value of the previous vertical co-ordinate.

61. The apparatus according to any one of claims 53 to 60, further comprising:
means for producing a pixel fragment area for each pixel crossed by a line segment, said pixel fragment area being equal to the fractional part of the difference between the value of an area function calculated at the intersection of said line segment with the pixel boundary where said line segment enters said pixel and the value of an area function calculated at the intersection of said line segment with the pixel boundary where said line segment exits said pixel.

62. The apparatus according to claim 53, further comprising:
means for determining a first contribution of a line segment to the total covered area of a pixel containing the start of said line segment; and
means for determining a second contribution of said line segment to the total covered area of a pixel containing the end of said line segment.

63. The apparatus according to claim 62, wherein said first contribution is equal to the value of the area function calculated at the first intersection of the line segment with said pixel.

64. The apparatus according to any one of claims 62 or 63, wherein said total covered area is equal to the fractional part of the sum of said first and second contributions and each pixel fragment area produced by a line segment crossing said pixel.

65.
The apparatus according to any one of claims 53 to 64, wherein each said opacity value is equal to the opacity of said polygon multiplied by said total covered area calculated for said pixel.

66. The apparatus according to any one of claims 53 to 65, wherein said colour value is determined from the opacity of said pixel and the colour of said polygon.

67. The apparatus according to any one of claims 53 to 66, wherein each line segment is represented by a record.

68. The apparatus according to any one of claims 53 to 67, wherein each line segment is represented by a first record and a second record, said first record being used for scanning said line segment to determine intersections with horizontal pixel boundaries and said second record being used for scanning said line segment to determine intersections with vertical pixel boundaries.

69. An apparatus for calculating a total covered area of a pixel produced by at least one line segment, said apparatus comprising:
means for scanning said line segment to determine a plurality of area function values;
means for combining said plurality of area function values to determine a plurality of pixel fragment areas; and
means for combining said plurality of pixel fragment areas to determine a total covered area for said pixel.

70. The apparatus according to claim 69, further comprising:
means for scanning said line segment a first time to determine a first plurality of area function values corresponding to intersections of said line segment with horizontal pixel boundaries; and
means for scanning said line segment a second time to determine a second plurality of area function values corresponding to intersections of said line segment with vertical pixel boundaries.

71.
A computer readable medium, having a program recorded thereon, where the program is configured to make a computer execute a procedure for anti-aliasing the edges of a polygon, said program comprising:
code for processing a description of said polygon to produce a plurality of line segments;
code for scanning each of said line segments to determine a plurality of area function values;
code for combining said plurality of area function values to determine a plurality of pixel fragment areas;
code for combining said plurality of pixel fragment areas to determine a total covered area for each pixel partially covered by said polygon;
code for determining an opacity value for each said pixel, utilising said total covered areas; and
code for determining a colour value for each said pixel, utilising said opacity values,
wherein each said area function value is calculated at an intersection of said line segment with a pixel boundary.

72. A computer readable medium, having a program recorded thereon, where the program is configured to make a computer execute a procedure for anti-aliasing the edges of a polygon, said program comprising:
code for processing a description of said polygon to produce a plurality of line segments;
code for scanning each of said line segments a first time to determine a first plurality of area function values corresponding to intersections of said line segment with
horizontal pixel boundaries;
code for scanning each of said line segments a second time to determine a second plurality of area function values corresponding to intersections of said line segment with vertical pixel boundaries;
code for combining said first and second pluralities of area function values to determine a plurality of pixel fragment areas;
code for combining said plurality of pixel fragment areas to determine a total covered area for each pixel partially covered by said polygon;
code for determining an opacity value for each said pixel, utilising said total covered areas; and
code for determining a colour value for each said pixel, utilising said opacity values,
wherein each said area function value is calculated at an intersection of said line segment with a pixel boundary.

73. A computer readable medium, having a program recorded thereon, where the program is configured to make a computer execute a procedure for calculating a total covered area of a pixel produced by at least one line segment, said program comprising:
code for scanning said line segment to determine a plurality of area function values;
code for combining said plurality of area function values to determine a plurality of pixel fragment areas; and
code for combining said plurality of pixel fragment areas to determine a total covered area for said pixel.

74. A method of anti-aliasing the edges of a polygon, substantially as herein described with reference to any one of the embodiments as illustrated in the accompanying drawings.

75. An apparatus for anti-aliasing the edges of a polygon, substantially as herein described with reference to any one of the embodiments as illustrated in the accompanying drawings.

76.
A computer readable medium, having a program recorded thereon, where the program is configured to make a computer execute a procedure for anti-aliasing the edges of a polygon, said program being substantially as herein described with reference to any one of the embodiments as illustrated in the accompanying drawings.

DATED this twentieth Day of December, 2000
Canon Kabushiki Kaisha
Patent Attorneys for the Applicant/Nominated Person
SPRUSON FERGUSON
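For illustration, the coverage-to-colour chain recited in claims 61 to 66 can be sketched as below. This is a minimal sketch under assumptions: the function names are invented, "fractional part" is read as a modulo-1 reduction, and the per-channel alpha blend over an assumed background is only one plausible way to determine the colour value; the claims do not prescribe a specific blending rule.

```python
def pixel_fragment_area(entry_value, exit_value):
    # Claim 61: the fragment area for a pixel crossed by a line segment is
    # the fractional part of the difference between the area-function values
    # where the segment enters and exits the pixel.
    return (exit_value - entry_value) % 1.0

def total_covered_area(start_contribution, end_contribution, fragment_areas):
    # Claim 64: the total covered area is the fractional part of the sum of
    # the start/end contributions and every fragment area for the pixel.
    return (start_contribution + end_contribution + sum(fragment_areas)) % 1.0

def pixel_opacity(polygon_opacity, covered_area):
    # Claim 65: pixel opacity equals the polygon opacity multiplied by the
    # total covered area calculated for the pixel.
    return polygon_opacity * covered_area

def pixel_colour(polygon_colour, background_colour, opacity):
    # Claim 66 only requires the colour value to be determined from the pixel
    # opacity and the polygon colour; a plain per-channel alpha blend over an
    # assumed background colour is used here for illustration.
    return tuple(opacity * p + (1.0 - opacity) * b
                 for p, b in zip(polygon_colour, background_colour))
```

The modulo-1 reduction keeps each quantity in the unit interval of a single pixel, which is why the running area-function values may grow without bound while the per-pixel coverage remains a fraction between 0 and 1.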
AU72392/00A 1999-12-22 2000-12-20 Anti-aliased polygon rendering Ceased AU765466B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU72392/00A AU765466B2 (en) 1999-12-22 2000-12-20 Anti-aliased polygon rendering

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
AUPQ4798A AUPQ479899A0 (en) 1999-12-22 1999-12-22 Anti-aliased polygon rendering
AUPQ4798 1999-12-22
AU72392/00A AU765466B2 (en) 1999-12-22 2000-12-20 Anti-aliased polygon rendering

Publications (2)

Publication Number Publication Date
AU7239200A AU7239200A (en) 2001-06-28
AU765466B2 true AU765466B2 (en) 2003-09-18

Family

ID=25637022

Family Applications (1)

Application Number Title Priority Date Filing Date
AU72392/00A Ceased AU765466B2 (en) 1999-12-22 2000-12-20 Anti-aliased polygon rendering

Country Status (1)

Country Link
AU (1) AU765466B2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9779526B2 (en) 2012-11-27 2017-10-03 Canon Kabushiki Kaisha Method, system and apparatus for determining area of a pixel covered by a scalable definition for a character

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5471568A (en) * 1993-06-30 1995-11-28 Taligent, Inc. Object-oriented apparatus and method for scan line conversion of graphic edges
US5684939A (en) * 1993-07-09 1997-11-04 Silicon Graphics, Inc. Antialiased imaging with improved pixel supersampling
WO1999052079A1 (en) * 1998-04-08 1999-10-14 Webtv Networks, Inc. Object-based anti-aliasing

Similar Documents

Publication Publication Date Title
US6608942B1 (en) Method for smoothing jagged edges in digital images
US7006110B2 (en) Determining a coverage mask for a pixel
EP1272977B1 (en) Shape processor
KR100243174B1 (en) Apparatus and method of generating sub-pixel mask
US6768491B2 (en) Barycentric centroid sampling method and apparatus
US5272469A (en) Process for mapping high resolution data into a lower resolution depiction
KR20050030595A (en) Image processing apparatus and method
JP2006106705A (en) Rendering outline font
US6621501B2 (en) Pixel zoom system and method for a computer graphics system
US7679620B2 (en) Image processing using saltating samples
JPS6232476B2 (en)
JP2005100176A (en) Image processor and its method
US6614432B1 (en) Image rendering technique
US7106332B2 (en) Method for converting two-dimensional pen strokes to distance fields
JP4180043B2 (en) Three-dimensional graphic drawing processing device, image display device, three-dimensional graphic drawing processing method, control program for causing computer to execute the same, and computer-readable recording medium recording the same
JP3471115B2 (en) Image coordinate transformation method
AU765466B2 (en) Anti-aliased polygon rendering
JPH07200864A (en) Computer
US7215342B2 (en) System and method for detecting and converting a transparency simulation effect
EP0855682B1 (en) Scan line rendering of convolutions
US6718072B1 (en) Image conversion method, image processing apparatus, and image display apparatus
US20040061877A1 (en) Fast edge reconstruction with upscaling for pulse width modulation rendering
US7170528B1 (en) Fast glyph rendering for vector based fonts
US6380936B1 (en) System and method for inferring projective mappings
AU727677B2 (en) A method for smoothing jagged edges in digital images