WO2006021899A2 - 3d-graphics - Google Patents

3d-graphics

Info

Publication number
WO2006021899A2
Authority
WO
WIPO (PCT)
Prior art keywords
sgp
tgp
overlapping
primitives
geometric
Prior art date
Application number
PCT/IB2005/052509
Other languages
French (fr)
Other versions
WO2006021899A3 (en)
Inventor
Kornelis Meinds
Original Assignee
Koninklijke Philips Electronics N.V.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Koninklijke Philips Electronics N.V.
Publication of WO2006021899A2
Publication of WO2006021899A3

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/50 Lighting effects
    • G06T 15/503 Blending, e.g. for anti-aliasing
    • G06T 15/04 Texture mapping

Abstract

A graphics system for generating edge anti-aliased 3D-graphics comprises a vertex transform and lighting unit (2) which receives geometric data (GD) comprising geometric primitives (TGP; SGP) of 3D-objects (WO) from a 3D-application. The vertex transform and lighting unit (2) transforms the vertices of the geometric primitives (TGP; SGP) to screen space (SSP). A hidden surface removal unit (3) determines visible parts of the geometric primitives (TGP; SGP) using the geometric data (GD) to obtain non-overlapping geometric primitives (TGP'; SGP') representing the visible parts. The non-overlapping geometric primitives (TGP'; SGP') are stored in a primitives memory (4). A texture memory (6) stores texture data (Ti) representing textures of the 3D-objects (WO) per non-overlapping primitive (TGP'; SGP'). A rasterizer (5) rasterizes the stored non-overlapping geometric primitives (TGP'; SGP') one by one to determine partial intensities (IPi) of pixels (Pi) in the screen space (SSP) based on the texture data (Ti). The partial intensities (IPi) of the pixels (Pi) determined for each one of the non-overlapping primitives (TGP'; SGP') are accumulated into a frame buffer (7) to obtain final intensities (PIi) of the pixels (Pi).

Description

3D-graphics
The invention relates to a method of generating 3D-graphics that comprises edge anti-aliasing, a graphics system for generating edge anti-aliased 3D-graphics, a computer comprising the graphics system, and a display apparatus comprising the graphics system.
Known methods of generating 3D-graphics receive geometric data which comprises geometric primitives of the 3D-objects from a 3D-application. Usually, the vertices of the primitives are provided as the geometric data. Further, texture data is available which indicates the textures of the 3D-objects. Usually, the texture data is stored in a texture memory and represents texture intensities at texel positions on a texel grid. The 3D-graphics method processes the geometric data and the texture data to obtain pixel intensities at pixel positions on a pixel grid in the screen space. Usually, these pixel intensities are stored in a frame buffer. The pixel intensities are displayed on a display by reading out the frame buffer.
In the prior art fragment buffer approach, a plurality of pixel fragments is stored per pixel position in a plurality of fragment (frame) buffers. Each of the pixel fragments is related to the amount of area of the pixel cell covered by the associated primitive. Thus, for each primitive that at least partially covers the pixel cell area, a weight factor is determined that is representative of the amount of area covered. The weighted intensities that are obtained by multiplying, for each of these primitives, the weight factor with the pixel intensities are called partial colors. This approach requires a substantial amount of memory bandwidth because for each pixel all contributing primitives have to be processed. When all primitives of a scene have been processed, the pixel fragments that are stored in the pixel fragment buffers are, for each pixel location, merged to obtain the final pixel intensities (including color). Often only a finite number of pixel fragments can be stored per pixel location because only a finite number of pixel fragment buffers is available. This limitation means that the pixel fragment values have to be merged when all the pixel fragment buffers for this pixel are full, even if more primitives contribute to the pixel value. This causes artifacts.
It is an object of the invention to provide a method of generating edge anti-aliased 3D-graphics that requires fewer frame buffers and offers improved performance. A first aspect of the invention provides a method of generating edge anti-aliased 3D-graphics. A second aspect of the invention provides a graphics system for generating edge anti-aliased 3D-graphics as claimed in claim 4. A third aspect of the invention provides a computer as claimed in claim 5. A fourth aspect of the invention provides a display apparatus as claimed in claim 6. Advantageous embodiments are defined in the dependent claims.
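To make the limitation concrete, the following C++ sketch models such a per-pixel fragment buffer with a fixed capacity. All names (Fragment, PixelFragments, kMaxFragments) are invented for illustration, a single intensity channel stands in for full color, and the forced merge is deliberately crude to show where the artifacts come from.
```cpp
#include <array>
#include <cstdio>

constexpr int kMaxFragments = 4; // finite number of fragment buffers per pixel

struct Fragment {
    float weight; // fraction of the pixel cell covered by the primitive
    float color;  // one channel standing in for full color
};

struct PixelFragments {
    std::array<Fragment, kMaxFragments> frags{};
    int count = 0;

    void insert(Fragment f) {
        if (count < kMaxFragments) {
            frags[count++] = f;
        } else {
            // Buffers full: the new fragment must be merged with a stored one,
            // even though more primitives still contribute. This forced merge
            // is the source of the artifacts described above.
            Fragment& last = frags[kMaxFragments - 1];
            last.color = (last.color + f.color) / 2.0f;
            last.weight += f.weight;
        }
    }

    // Final intensity: sum of the weight-scaled partial colors.
    float resolve() const {
        float out = 0.0f;
        for (int i = 0; i < count; ++i) out += frags[i].weight * frags[i].color;
        return out;
    }
};

int main() {
    PixelFragments p;
    p.insert({0.5f, 1.0f});  // primitive covering half the pixel cell
    p.insert({0.25f, 0.5f}); // primitive covering a quarter
    std::printf("pixel intensity = %f\n", p.resolve());
}
```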
In the method of generating edge anti-aliased 3D-graphics, vertices of the geometric primitives of 3D-objects are transformed to screen space. As in the prior art, the primitives may be any polygon. The vertices may have positions in the screen space that are in-between grid positions, and the image to be displayed is determined by the intensities (including brightness and color) on the grid positions. The visible parts of the geometric primitives are determined by using the geometric data to obtain non-overlapping primitives. Thus, for overlapping primitives it is determined which primitive is on top, seen from the viewpoint. The parts of primitives that are occluded are not used any further. For example, if a first primitive is a large polygon completely covering a small polygon, and the large polygon is on top, the small polygon is not processed any further; only the large polygon is processed. If the small polygon is on top, two adjacent areas are obtained: the small polygon and the area of the large polygon minus the small polygon. This latter area is further also referred to as the delta-area. The delta-area, which covers the large polygon except where the small polygon covers it, may be a polygon itself, or may be built up from several adjacent or separate, non-overlapping polygons covering the delta-area. Hidden surface removal (further referred to as HSR) algorithms as such are known from the publication "Hidden Surface Removal using polygon area sorting" by Kevin Weiler and Peter Atherton, Computer Graphics 11 (SIGGRAPH 77 Proceedings), pp. 214-222, July 1977.
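General polygon-against-polygon clipping, as in Weiler-Atherton, is more involved than can be shown here; the sketch below therefore illustrates only the cut-out idea, on axis-aligned rectangles. Subtracting the nearer primitive from the farther one yields up to four non-overlapping pieces, together forming the delta-area. All types and names are assumptions for illustration.
```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

struct Rect { float x0, y0, x1, y1, z; }; // z: smaller means nearer the camera

// Subtract `top` from `bottom`, returning the non-overlapping pieces of
// `bottom` (the delta-area). At most four pieces are produced.
std::vector<Rect> subtract(const Rect& bottom, const Rect& top) {
    float ix0 = std::max(bottom.x0, top.x0), iy0 = std::max(bottom.y0, top.y0);
    float ix1 = std::min(bottom.x1, top.x1), iy1 = std::min(bottom.y1, top.y1);
    if (ix0 >= ix1 || iy0 >= iy1) return {bottom}; // no overlap: keep as-is
    std::vector<Rect> pieces;
    if (bottom.y0 < iy0) pieces.push_back({bottom.x0, bottom.y0, bottom.x1, iy0, bottom.z});
    if (iy1 < bottom.y1) pieces.push_back({bottom.x0, iy1, bottom.x1, bottom.y1, bottom.z});
    if (bottom.x0 < ix0) pieces.push_back({bottom.x0, iy0, ix0, iy1, bottom.z});
    if (ix1 < bottom.x1) pieces.push_back({ix1, iy0, bottom.x1, iy1, bottom.z});
    return pieces;
}

int main() {
    Rect large{0, 0, 10, 10, 2.0f}; // farther from the camera
    Rect small{3, 3, 6, 6, 1.0f};   // nearer: stays intact
    std::vector<Rect> delta = subtract(large, small);
    std::printf("%zu non-overlapping pieces\n", delta.size() + 1); // + small
}
```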
The non-overlapping geometric primitives are stored in a memory for later use. The stored non-overlapping geometric primitives are rasterized one by one to determine the intensities of the pixels in the screen space based on texture data retrieved from bitmaps and/or generated procedurally on-the-fly (such as by Gouraud shading), representing the textures of the 3D-objects. The rasterizer may use the inverse texture mapping or the forward texture mapping approach, which as such are both known. The intensities of the pixels determined for each one of the non-overlapping geometric primitives are accumulated in a frame memory to obtain the final intensities of the pixels after all the non-overlapping geometric primitives have been processed by the rasterizer. Because the rasterizer in the present invention only has to process non-overlapping geometric primitives, it is no longer required to store the intermediate results of (partly) overlapping geometric primitives in fragment frame buffers. In accordance with the present invention, the pixel intensity per pixel per primitive can simply be added to the value already stored in the frame buffer for an adjacent non-overlapping primitive that has been processed earlier. Thus, less memory is required, and no artifacts will be caused by more primitives contributing than fragment frame buffers being available.
In an embodiment as claimed in claim 2, the rasterizer is based on the inverse texture mapping approach, which as such is well known. In the PhD thesis "A layered object-space based architecture for interactive raster graphics" by A.A.M. Kuijk, University of Amsterdam, September 1996, a hidden surface removal unit is disclosed which is present at the input side of the graphics processor. However, the disclosed scan-line rasterizer has the disadvantage that it requires the non-overlapping primitives that contribute to a pixel to be available at the time of calculating the final pixel color (the summation of all the primitive intensity contributions), and therefore requires the primitives in pre-sorted order per scan-line (the order may differ per scan-line). These non-overlapping primitives have a specific data structure (not the normal polygonal structure where the vertices of the corners are stored), called a pattern, such that they can be processed by the specific scan-line rasterizer used. In the system in accordance with the present invention, the non-overlapping primitives do not have to be in this special data structure (they are in the same polygonal vertex format as the (possibly overlapping) input polygons), and the summation of the primitive contributions to a pixel is accumulated via a read-add-write access to the frame buffer at the associated pixel position.
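A minimal sketch of this read-add-write accumulation, with invented names and a single intensity channel standing in for full color:
```cpp
#include <vector>

struct FrameBuffer {
    int w, h;
    std::vector<float> intensity; // one channel for brevity
    FrameBuffer(int w_, int h_) : w(w_), h(h_), intensity(w_ * h_, 0.0f) {}

    // Each partial intensity IPi is simply added to whatever adjacent,
    // earlier-processed non-overlapping primitives already contributed.
    void accumulate(int x, int y, float partial) {
        intensity[y * w + x] += partial; // read-add-write
    }
};
```
Because the HSR unit guarantees that the primitives never overlap, this plain addition never double-counts a covered pixel area, which is why no fragment buffers are needed.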
Further, the prior art approach has the disadvantage that a normalization per pixel is required, as can be seen in equation (8.2) and equation (8.3) of the prior art. Another disadvantage is that the system is only suitable for box pre-filtering (see equation (8.2) and the text just above it in the prior art); the system is not suitable for higher-order pre-filtering.
In an embodiment as claimed in claim 3, the rasterizer is based on the forward texture mapping approach, which as such is well known, for example from the publication "Resample hardware for 3D graphics" by Koen Meinds and Bart Barenbrug, in T. Ertl, W. Heidrich, and M. Doggett, editors, Proceedings of Graphics Hardware 2002, pages 17-26. In this document, no hidden surface removal unit is present at the input side of the graphics processor, and a pixel fragment processing circuit that comprises pixel fragment buffers is required.
These and other aspects of the invention are apparent from and will be elucidated with reference to the embodiments described hereinafter.
In the drawings:
Fig. 1 elucidates a display of a real world 3D-object on a display screen,
Fig. 2 elucidates the known inverse texture-mapping algorithm,
Fig. 3 shows a block diagram of a circuit for performing the known inverse texture-mapping algorithm,
Fig. 4 elucidates the known forward texture-mapping algorithm,
Fig. 5 shows a block diagram of a circuit for performing the known forward texture-mapping algorithm,
Fig. 6 shows a block diagram of a 3D-graphics system in accordance with the invention,
Figs. 7A and 7B show examples of possible configurations of overlapping primitives and the resulting non-overlapping primitives,
Fig. 8 shows a block diagram of a computer which comprises the 3D-graphics system, and
Fig. 9 shows a block diagram of a display apparatus that comprises the 3D-graphics system.
Fig. 1 elucidates the display of a real world three-dimensional (further also referred to as 3D) object on a display screen. A real world object WO, which may be a 3D-object such as the cube shown, is projected on a two-dimensional display screen DS. The appearance of the 3D-object WO is determined by a surface structure usually referred to as texture. In Fig. 1, the polygon A has a texture TA and the polygon B has a texture TB. The polygons A and B are, with a more general term, also referred to as the real world graphics primitives. The projection of the real world object WO on the screen DS is obtained by defining an eye or camera position ECP with respect to the screen DS. Fig. 1 shows how the polygon SGP corresponding to the polygon A is projected on the screen DS. The polygon SGP in the screen space SSP, which is defined by the coordinates x and y, is also simply referred to as a graphics primitive instead of the graphics primitive in the screen space. Thus, whether the term graphics primitive indicates the polygon A in the eye space, the polygon SGP in the screen space, or the polygon TGP in the texture space is clear from the context. It is only the geometry of the polygon A that is used to determine the geometry of the polygon SGP. Usually, it suffices to know the vertices of the polygon A to determine the vertices of the polygon SGP.
The texture TA of the polygon A is not directly projected from the real world into the screen space SSP. The different textures of the real world object WO are stored in a texture map or texture space TSP defined by the coordinates u and v. For example, Fig. 1 shows that the polygon A has a texture TA which is available in the texture space TSP in the area indicated by TA, while the polygon B has another texture TB which is available in the texture space TSP in the area indicated by TB. The polygon A is projected into the texture space TSP such that a polygon TGP is obtained for which, when the texture present within the polygon TGP is projected on the polygon A, the texture of the real world object WO is obtained, or at least resembled as closely as possible. A perspective transformation PPT between the texture space TSP and the screen space SSP projects the texture of the polygon TGP on the corresponding polygon SGP. This process is also referred to as texture mapping. Usually, the textures are not all present in a global texture space; instead, every texture defines its own texture space.
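The perspective transformation PPT can be expressed as a 3x3 homogeneous mapping between the texture coordinates (u, v) and the screen coordinates (x, y). A minimal sketch, in which the matrix layout, its contents and all names are assumptions for illustration:
```cpp
#include <cstdio>

struct Homography { float m[3][3]; }; // 3x3 homogeneous transform

// Map a texel coordinate (u, v) to screen space (the PPT of Fig. 1).
void mapUVtoXY(const Homography& H, float u, float v, float& x, float& y) {
    float w = H.m[2][0] * u + H.m[2][1] * v + H.m[2][2]; // perspective term
    x = (H.m[0][0] * u + H.m[0][1] * v + H.m[0][2]) / w;
    y = (H.m[1][0] * u + H.m[1][1] * v + H.m[1][2]) / w;
}

int main() {
    Homography H{{{1, 0, 0}, {0, 1, 0}, {0.001f, 0, 1}}}; // mild perspective
    float x, y;
    mapUVtoXY(H, 64.0f, 32.0f, x, y);
    std::printf("(64, 32) -> (%.2f, %.2f)\n", x, y);
}
```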
Fig. 2 elucidates the known inverse texture-mapping algorithm. Fig. 2 shows the polygon SGP in the screen space SSP and the polygon TGP in the texture space TSP. To facilitate the elucidation, it is assumed that both the polygon SGP and the polygon TGP correspond to the polygon A of the real world object WO of Fig. 1.
The intensities PIi of the pixels Pi present in the screen space SSP define the image displayed. Usually, the pixels Pi are actually positioned (in a matrix display) or thought to be positioned (in a CRT) in an orthogonal matrix of positions. In Fig. 2 only a limited number of the pixels Pi is indicated by the dots. The polygon SGP is shown in the screen space SSP to indicate which pixels Pi are positioned within the polygon SGP.
The texels or texel intensities Ti in the texture space TSP are indicated by the intersections of the horizontal and vertical lines. These texels Ti, which usually are stored in a memory called the texture map, define the texture. It is assumed that the part of the texel map or texture space TSP shown corresponds to the texture TA shown in Fig. 1. The polygon TGP is shown in the texture space TSP to indicate which texels Ti are positioned within the polygon TGP. The well known inverse texture mapping comprises the steps elucidated in the following. A blurring filter that has a footprint FP is shown in the screen space SSP and has to operate on the pixels Pi to perform the weighted averaging operation required to obtain the blurring. This footprint FP in the screen space SSP is mapped to the texture space TSP and called the mapped footprint MFP. The polygon TGP that may be obtained by mapping the polygon SGP from the screen space SSP to the texture space TSP is also called the mapped polygon. The texture space TSP comprises the textures TA, TB (see Fig. 1) which should be displayed on the surface of the polygon SGP. As described above, these textures TA, TB are defined by texel intensities Ti stored in a texel memory. Thus, the textures TA, TB are appearance information which defines the appearance of the graphics primitive SGP by defining texel intensities Ti in the texture space TSP.
The texels Ti falling both within the mapped footprint MFP and within the mapped polygon TGP are determined. These texels Ti are indicated by the crosses. The mapped blurring filter MFP is used to weight the texel intensities Ti of these texels Ti to obtain the intensities of the pixels Pi. The weighting of the texel intensities Ti is performed in accordance with a filter characteristic of the mapped blurring filter MFP, which characteristic is a transformed filter characteristic of the blurring filter in the screen space SSP.
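In outline, the per-pixel work of inverse texture mapping then looks as follows. The footprint is simplified here to a disc, the mapped polygon to a rectangle, and a tent-shaped kernel stands in for the transformed filter profile, so all shapes and names are illustrative assumptions rather than the patent's exact geometry:
```cpp
#include <cmath>
#include <vector>

struct Texel { float u, v, intensity; };

// Gather the texels inside both the mapped footprint MFP (disc at (cu, cv))
// and the mapped polygon TGP (rectangle [u0,u1]x[v0,v1]); weight them with
// the transformed filter profile to obtain one pixel intensity.
float shadePixelITM(const std::vector<Texel>& texels,
                    float cu, float cv, float radius,         // mapped footprint
                    float u0, float v0, float u1, float v1) { // mapped polygon
    float sum = 0.0f, norm = 0.0f;
    for (const Texel& t : texels) {
        bool inTGP = t.u >= u0 && t.u <= u1 && t.v >= v0 && t.v <= v1;
        float d = std::sqrt((t.u - cu) * (t.u - cu) + (t.v - cv) * (t.v - cv));
        if (inTGP && d <= radius) {
            float w = 1.0f - d / radius; // tent-shaped transformed pre-filter
            sum += w * t.intensity;
            norm += w;
        }
    }
    return norm > 0.0f ? sum / norm : 0.0f; // per-pixel normalization
}
```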
Fig. 3 shows a block diagram of a circuit for performing the known inverse texture mapping. The circuit comprises a screen space rasterizer RSS which operates in the screen space SSP, a resampler RTS in the texture space TSP, a texture memory TM and a pixel fragment processing circuit PFO. Ut, Vt is the texture coordinate of a texel Ti with index t, Xp, Yp is the screen coordinate of a pixel with index p, It is the color of the texel Ti with index t, and IPi is the intermediate intensity (brightness and color) of pixel Pi with index i.
The screen space rasterizer RSS rasterizes the polygon SGP in the screen space SSP. For every pixel Pi traversed, its anti-aliasing filter footprint FP is mapped to the texture space TSP. The anti-aliasing filter is also commonly referred to as the pre-filter. The texels Ti within the mapped footprint MFP and within the mapped polygon TGP are determined and weighted according to a mapped profile of the anti-aliasing filter. The color of the pixels Pi is computed using the mapped anti-aliasing filter in the texture space TSP. Thus, the rasterizer RSS receives the polygons SGP in the screen space SSP to supply the mapped anti-aliasing filter footprint MFP and the coordinates of the pixels Pi. A resampler in the texture space RTS receives the mapped anti-aliasing filter footprint MFP and information on the position of the polygon TGP to determine which texels Ti are within the mapped footprint MFP and within the polygon TGP. The intensities of the texels Ti determined in this manner are retrieved from the texture memory TM. The anti-aliasing filter filters the relevant intensities of the texels Ti determined in this manner to supply the intermediate color IPi of the pixel Pi.
The pixel fragment processing circuit PFO blends the intermediate pixel intensities IPi of overlapping polygons due to the blurring. The pixel fragment processing circuit PFO may comprise a pixel fragment composition unit, also commonly referred to as an A-buffer, which contains a fragment buffer. Commonly, a fragment buffer is used to minimize edge aliasing based on geometric information on the overlap of an area (often a square) associated with a pixel with the polygon. Often a mask is used on a super-sample grid which enables a quantized approximation of the geometric information. This geometric information is an embodiment of what is called the "contribution factor" of a pixel. For the motion blur application, the contribution value of the pixels of a moving object depends on the motion speed and is filtered blurry in the same manner as the color channels. The pixel fragment composition unit PFO will blend these pixel fragments according to their contribution factor until the sum of the contribution factors reaches 100%, or no pixel fragments are available anymore, thereby generating the effect of translucent pixels of moving objects.
To be able to implement the above process, pixel fragments are required in depth (Z-value) sorted order. Because polygons can be delivered in random depth order, the pixel fragments per pixel location are stored in depth-sorted order in a pixel fragment buffer. However, the contribution factor stored in the fragment buffer is now not based on the geometric coverage per pixel. Instead, the contribution factor, which depends on the motion speed and which is filtered blurry in the same manner as the color channels, is stored. The pixel fragment composition algorithm comprises two stages: insertion of pixel fragments into the fragment buffer and composition of pixel fragments from the fragment buffer. To prevent overflow during the insertion phase, fragments that are closest in their depth values may be merged. After all the polygons of the scene are rendered, the composition phase composes fragments per pixel position in a front-to-back order. The final pixel color is obtained when the sum of the contribution factors of all added fragments is one or more, or when all pixel fragments have been processed.
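A minimal sketch of the composition stage, assuming fragments carry a depth, a contribution factor and one color channel (all names invented): fragments are visited front to back and blended until the accumulated contribution reaches one or the fragments run out.
```cpp
#include <algorithm>
#include <vector>

struct Frag { float z, contribution, color; };

float composePixel(std::vector<Frag> frags) {
    // Sort front to back (smaller z is nearer the camera).
    std::sort(frags.begin(), frags.end(),
              [](const Frag& a, const Frag& b) { return a.z < b.z; });
    float covered = 0.0f, color = 0.0f;
    for (const Frag& f : frags) {
        float take = std::min(f.contribution, 1.0f - covered); // clip at 100%
        color += take * f.color;
        covered += take;
        if (covered >= 1.0f) break; // fully covered: later fragments are hidden
    }
    return color;
}
```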
Fig. 4 elucidates the known forward texture mapping. Fig. 4 shows the polygon SGP in the screen space SSP and the polygon TGP in the texture space TSP. To facilitate the elucidation, it is assumed that both the polygon SGP and the polygon TGP correspond to the polygon A of the real world object WO of Fig. 1.
The intensities PIi of the pixels Pi present in the screen space SSP define the image displayed. The pixels Pi are indicated by the dots. The polygon SGP is shown in the screen space SSP to indicate which pixels Pi are positioned within the polygon SGP. The pixel actually indicated by Pi is positioned outside the polygon SGP. With each pixel Pi a footprint FP of a blur filter is associated.
The texels or texel intensities Ti in the texture space TSP are indicated by the intersections of the horizontal and vertical lines. Again, these texels Ti, which usually are stored in a memory called the texture map, define the texture. It is assumed that the part of the texel map or texture space TSP shown corresponds to the texture TA shown in Fig. 1. The polygon TGP is shown in the texture space TSP to indicate which texels Ti are positioned within the polygon TGP.
The coordinates of the texels Ti within the polygon TGP are mapped (resampled) to the screen space SSP. In Fig. 4, this mapping (indicated by the arrow AR from the texture space TSP to the screen space SSP) of a texel Ti (indicated by a cross in the texture space) to the screen space SSP provides mapped texels MTi (indicated by the cross in the screen space SSP, which cross may be positioned in-between the pixel positions indicated by the dots). The contribution of a mapped texel MTi to every pixel Pi whose blur-filter footprint FP encompasses the mapped texel MTi is determined in accordance with the filter characteristic of the blur filter. All the contributions of the mapped texels MTi to the pixels Pi are summed to obtain the intensities PIi of the pixels Pi.
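In outline, splatting one mapped texel then looks as follows; the tent kernel, the footprint radius and all names are assumptions standing in for the actual pre-filter:
```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Splat the intensity of one mapped texel at (mx, my) over all pixels whose
// pre-filter footprints (radius `radius`) cover that position.
void splatTexel(std::vector<float>& frame, int w, int h,
                float mx, float my, float intensity, float radius) {
    for (int y = (int)std::floor(my - radius); y <= (int)std::ceil(my + radius); ++y)
        for (int x = (int)std::floor(mx - radius); x <= (int)std::ceil(mx + radius); ++x) {
            if (x < 0 || x >= w || y < 0 || y >= h) continue;
            float dx = x - mx, dy = y - my;
            // Tent (linear) kernel as a stand-in for the real filter profile;
            // contributions shrink with distance from the mapped position.
            float wgt = std::max(0.0f, 1.0f - std::sqrt(dx * dx + dy * dy) / radius);
            frame[y * w + x] += wgt * intensity; // accumulate contribution
        }
}
```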
In the forward texture mapping, the resampling from the colors of the texels Ti to the colors of the pixels Pi occurs in the screen space SSP, and thus is input sample driven. Compared to the inverse texture mapping, it is easier to determine which texels Ti contribute to a particular pixel Pi: only the mapped texels MTi that are within the footprint FP of the anti-aliasing filter for a particular pixel Pi will contribute to the intensity or color of this particular pixel Pi. Further, there is no need to transform the anti-aliasing filter from the screen space SSP to the texel space TSP.
Fig. 5 shows a block diagram of a circuit for performing the forward texture mapping. The circuit comprises a rasterizer RTS which operates in the texture space TSP, a resampler RSS in the screen space SSP, a texture memory TM and a pixel fragment processing circuit PFO. Ut, Vt is the texture coordinate of a texel Ti with index t, Xp, Yp is the screen coordinate of a pixel with index p, It is the color of the texel Ti with index t, and Ip is the filtered color of pixel Pi with index p.
The rasterizer RTS rasterizes the polygon TGP in the texture space TSP. For every texel Ti which is within the polygon TGP, the resampler in the screen space RSS maps the texel Ti to a mapped texel MTi in the screen space SSP. Further, the resampler RSS determines the contribution of a mapped texel MTi to all the pixels Pi of which the associated footprint FP of the anti-aliasing filter encompasses this mapped texel MTi. Finally, the resampler RSS sums the intensity contributions of all mapped texels MTi to the pixels Pi to obtain the intensities PIi of the pixels Pi.
The pixel fragment processing circuit PFO shown in Fig. 5 has been elucidated in detail with respect to Fig. 3.
Fig. 6 shows a block diagram of a 3D-graphics system in accordance with the invention. The 3D-application 1 provides geometric data GD to the vertex transform and lighting unit 2, further also referred to as the T&L unit 2. The geometric data GD defines geometric primitives in the texture space (TGP) and/or the screen space (SGP). Usually, the geometric data GD comprises vertices of polygons. These vertices, which are submitted by the 3D-application, are transformed by the T&L unit 2 from the coordinate system used by the 3D-application (for example, "real" world coordinates) to the screen space SSP. The 3D-application might use a 3D-API such as, for example, OpenGL or Direct3D. The coordinates in the screen space SSP have decimal precision behind the dot; for example, the coordinates are represented by floating-point or fixed-point numbers. Also, an intensity may be calculated for each vertex (vertex shading). Usually, the intensity of the vertices comprises a brightness and a color.
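A minimal sketch of such a vertex transform, assuming a combined row-major view-projection matrix and a simple viewport convention (the matrix layout and all names are illustrative): the result keeps fractional screen coordinates, i.e. decimal precision behind the dot.
```cpp
#include <array>

using Mat4 = std::array<std::array<float, 4>, 4>;

struct ScreenVertex { float x, y, z; };

// World coordinates through the view-projection matrix, perspective divide,
// then viewport mapping to sub-pixel-precise screen coordinates.
ScreenVertex transformVertex(const Mat4& viewProj, float wx, float wy, float wz,
                             int screenW, int screenH) {
    const float in[4] = {wx, wy, wz, 1.0f};
    float clip[4];
    for (int r = 0; r < 4; ++r) {
        clip[r] = 0.0f;
        for (int c = 0; c < 4; ++c) clip[r] += viewProj[r][c] * in[c];
    }
    float invW = 1.0f / clip[3]; // perspective divide
    float ndcX = clip[0] * invW, ndcY = clip[1] * invW;
    // Viewport transform; y is flipped so that screen y grows downward.
    return {(ndcX * 0.5f + 0.5f) * screenW,
            (0.5f - ndcY * 0.5f) * screenH,
            clip[2] * invW};
}
```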
The hidden surface removal unit 3 (further also referred to as HSR 3) determines the visible part(s) of the geometric primitives TGP; SGP using their geometric data. The parts of the geometric primitives TGP; SGP which are occluded by other geometric primitives TGP; SGP, seen from the viewpoint or camera ECP, are determined and cut off to obtain only non-overlapping geometric primitives TGP'; SGP'. This is further elucidated with respect to Figs. 7A and 7B. The non-overlapping geometric primitives TGP'; SGP' are stored by the HSR 3 in a primitives memory 4 for later use. In a forward texture mapping system (further also referred to as an FTM system), the HSR 3 preferably operates on the geometric primitives SGP in the screen space SSP; the geometric primitives TGP in the texture space TSP are then obtained by a transformation of the geometric primitives SGP in the screen space SSP. In an inverse texture mapping system (further also referred to as an ITM system), the HSR preferably operates in the texture space on the geometric primitives SGP.
The 3D-graphics system further comprises a rasterizer 5 which receives the output data of the HSR 3 and texture information from the texture memory 6 to determine the partial intensities IPi of the pixels Pi for all non-overlapping primitives TGP'; SGP' which contribute to the final intensity PIi of the pixel Pi. The partial intensities IPi are accumulated by the rasterizer 5 in the frame buffer 7 to obtain the final intensities PIi of all the pixels Pi. The operation of the rasterizer 5 depends on whether the 3D-graphics system is an FTM or an ITM system.
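The single-frame-buffer accumulation can be pictured with the following sketch, in which the resolution, the buffer layout, and the function name are assumed for illustration only. One buffer suffices because the primitives are non-overlapping, so each partial contribution is simply added to the running pixel sum.

```python
# Sketch: a single frame buffer accumulating partial intensities IPi.
WIDTH, HEIGHT = 640, 480  # assumed resolution
frame_buffer = [[0.0] * WIDTH for _ in range(HEIGHT)]

def accumulate(partials):
    """partials: iterable of (xp, yp, ipi) tuples produced by the
    rasterizer for one non-overlapping primitive."""
    for xp, yp, ipi in partials:
        frame_buffer[yp][xp] += ipi

accumulate([(10, 20, 0.25), (11, 20, 0.75)])
print(frame_buffer[20][10], frame_buffer[20][11])  # 0.25 0.75
```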
In an ITM system, the rasterizer 5 comprises a screen space rasterizer RSS (see Fig. 3) which rasterizes the non-overlapping geometric primitives SGP' in the screen space SSP one by one to obtain the grid positions of the pixels Pi per non-overlapping geometric primitive SGP'. To each pixel Pi in the screen space, a pre-filter is associated which has a predetermined filter profile and a pre-filter footprint FP centered on its associated pixel Pi. Such pre-filters are well known from the prior art ITM systems. The pre-filter footprint FP is mapped by the resampler RTS, for each pixel Pi of each one of the non-overlapping geometric primitives SGP', to the texture space TSP comprising the textures to obtain a mapped filter footprint MFP and a transformed filter profile being a transformed version of the filter profile of the pre-filter.
The resampler RTS determines, exactly or by approximation, for each mapped filter footprint MFP which texels Ti in the texture space TSP are positioned within both the mapped filter footprint MFP and the non-overlapping geometric primitive TGP' in the texture space TSP. These texels Ti are filtered with the transformed filter profile to obtain the partial intensity IPi of the associated pixel Pi. Because the non-overlapping primitives SGP' are processed sequentially, one by one, all partial intensities IPi are determined for every non-overlapping primitive SGP' which contributes to the final intensity PIi of a particular one of the pixels Pi. The rasterizer 5 accumulates the partial intensities IPi for each of the pixels Pi in the frame buffer 7. Thus, after processing of all the non-overlapping primitives SGP' of a scene, the image to be displayed is present in the frame buffer 7. It is not required to process all primitives SGP first and to store all the pixel fragments in a plurality of fragment buffers.
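A strongly simplified sketch of this inverse texture mapping step is given below. The affine screen-to-texture mapping `to_texture`, the box approximation of the mapped footprint MFP, and the separable tent profile standing in for the transformed filter profile are all assumptions made for the example; a real ITM resampler maps the footprint anisotropically.

```python
# Sketch: partial intensity IPi of one pixel by inverse texture mapping.
# Assumptions: `to_texture` maps a screen pixel to texture space, the
# mapped filter footprint is approximated by a box of half-width r
# around the mapped pixel center, and the profile is a separable tent.

def partial_intensity(texture, inside_primitive, to_texture, pixel, r=1.5):
    u0, v0 = to_texture(pixel)            # mapped pixel center in TSP
    total, weight_sum = 0.0, 0.0
    for v in range(int(v0 - r), int(v0 + r) + 1):
        for u in range(int(u0 - r), int(u0 + r) + 1):
            if not (0 <= v < len(texture) and 0 <= u < len(texture[0])):
                continue
            if not inside_primitive(u, v):  # texel must lie within TGP'
                continue
            w = max(0.0, 1 - abs(u - u0) / r) * max(0.0, 1 - abs(v - v0) / r)
            total += w * texture[v][u]
            weight_sum += w
    return total / weight_sum if weight_sum else 0.0

tex = [[0.0] * 8 for _ in range(8)]
tex[4][4] = 1.0
print(partial_intensity(tex, lambda u, v: True,
                        lambda p: (p[0], p[1]), (4.2, 4.1)))
```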
Per non-overlapping primitive SGP', considering a particular one of the non-overlapping primitives SGP1, a pre-filter footprint FP is attributed and mapped to the texture space TSP not only for the pixels Pi within this particular one of the primitives SGP1 but also for pixels Pi outside it, within a band around this particular primitive SGP1. The band is determined by the size of the pre-filter footprint FP: only those pixels Pi belong to the band of which the pre-filter footprint covers pixels within the particular primitive SGP1, as sketched below.
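The band can be determined with a simple overlap test. In the sketch, the square pre-filter footprint of half-width `r` and the use of the primitive's bounding box in place of its exact shape are assumptions that keep the example short.

```python
# Sketch: pixels whose pre-filter footprint (a square of half-width r,
# an assumption) overlaps the primitive SGP'. The returned set contains
# the pixels inside SGP' plus the band of outside pixels around it;
# for brevity the primitive is approximated by its bounding box.
def band_pixels(prim_bbox, r=1.5):
    xmin, ymin, xmax, ymax = prim_bbox      # bounding box of SGP'
    pixels = []
    for yp in range(int(ymin - r), int(ymax + r) + 2):
        for xp in range(int(xmin - r), int(xmax + r) + 2):
            # footprint [xp-r, xp+r] x [yp-r, yp+r] must touch the box
            if (xp + r >= xmin and xp - r <= xmax and
                    yp + r >= ymin and yp - r <= ymax):
                pixels.append((xp, yp))
    return pixels

print(len(band_pixels((10, 10, 20, 15))))  # pixels inside plus the band
```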
In an FTM system, the rasterizer 5 comprises a texture space rasterizer RTS (see Fig. 5) which rasterizes the non-overlapping geometric primitives TGP' in the texture space TSP one by one to obtain the grid positions of the texels Ti per non-overlapping geometric primitive TGP'.
The resampler in screen space RSS maps the texels Ti within a non-overlapping geometric primitive TGP', per non-overlapping geometric primitive TGP', to the screen space SSP to obtain mapped texel positions MTi. The resampler RSS further splats, in the screen space SSP, for each mapped texel position MTi, the associated intensity It of the texel Ti over a group of adjacent pixels Pi of which the associated pre-filters have footprints FP overlapping the mapped texel position MTi. The splatting determines the contributions of the texel intensity to the pixels Pi which surround the mapped texel position MTi. Usually, the contributions become smaller the further the pixel Pi is away from the mapped texel position MTi. Thus, per non-overlapping geometric primitive TGP', all the splatted intensities It are accumulated for all pixels Pi of the group according to the pre-filter profiles of the associated pre-filters to obtain partial intensity contributions IPi for these pixels Pi. This accumulation of the contributions of the mapped texels may be performed in the frame buffer 7, or by the resampler RSS; in the latter case, only the partial contributions IPi thus obtained are accumulated in the frame buffer 7.
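The splatting may be pictured as follows; the separable tent profile of half-width `r` is again an assumed stand-in for the actual pre-filter profile, and the frame-buffer layout matches the earlier accumulation sketch.

```python
# Sketch: splat one mapped texel MTi with intensity It over the pixels
# whose footprints overlap it, using an (assumed) separable tent profile.
def splat(frame_buffer, mapped_texel, it, r=1.5):
    mx, my = mapped_texel
    for yp in range(int(my - r), int(my + r) + 2):
        for xp in range(int(mx - r), int(mx + r) + 2):
            # contribution falls off with distance from MTi
            w = (max(0.0, 1 - abs(xp - mx) / r) *
                 max(0.0, 1 - abs(yp - my) / r))
            if w > 0.0 and 0 <= yp < len(frame_buffer) \
                       and 0 <= xp < len(frame_buffer[0]):
                frame_buffer[yp][xp] += w * it

fb = [[0.0] * 16 for _ in range(16)]
splat(fb, (10.3, 4.8), 1.0)
print(fb[5][10])  # largest share goes to the nearest pixel
```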
Thus, after processing of all the non-overlapping primitives TGP' of a scene, the image to be displayed is present in the frame buffer 7.

Figs. 7A and 7B show examples of possible configurations of overlapping primitives and the resulting non-overlapping primitives.
Fig. 7A shows two overlapping primitives TGP(1); SGP(1) and TGP(2); SGP(2). By way of example only, the primitives are shown to be triangles. The depth information of the primitives is supplied by the 3D-application. It is assumed that the primitive TGP(1); SGP(1) has a depth value Z1 and the primitive TGP(2); SGP(2) has a depth value Z2 such that the primitive TGP(2); SGP(2) is nearer to the viewpoint or camera ECP than the primitive TGP(1); SGP(1).
The HSR 3 uses the coordinates of the vertices and the depth values Z1 and Z2 of the two overlapping primitives TGP(1); SGP(1) and TGP(2); SGP(2) to determine which part of the primitive TGP(1); SGP(1) is occluded by the primitive TGP(2); SGP(2). This overlapped part is cut out, resulting in the 3 primitives P1, P2 and P3 shown in Fig. 7B.
Fig. 7B shows the 3 primitives P1, P2 and P3 determined by the HSR 3. The primitive P2 is identical to the primitive TGP(2); SGP(2) because this primitive is on top. From the primitive TGP(1); SGP(1), the part occluded by the primitive TGP(2); SGP(2) is in fact cut out such that only the non-occluded parts P1 and P3 which are visible are available. Thus, instead of processing both overlapping primitives TGP(1); SGP(1) and TGP(2); SGP(2), now only the primitive TGP(2); SGP(2) and the visible parts P1 and P3 of the primitive TGP(1); SGP(1) are processed. Thus, in fact, now 3 adjacent, non-overlapping primitives P1, P2, and P3 are processed. If, for example, the system is only able to process triangles, the HSR 3 has to split the primitive P1 into several primitives, each of which is a triangle.
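The cut-out of the occluded part can be illustrated with a textbook Sutherland-Hodgman clip: clipping the far primitive against the nearer one yields their overlap, which is exactly the part the HSR 3 removes. The sketch below assumes convex, counter-clockwise polygons and is a generic illustration, not the actual implementation of the HSR 3.

```python
# Sketch: occluded part of primitive 1 = its intersection with the
# nearer primitive 2 (both assumed convex and counter-clockwise).
def clip(subject, clipper):
    def inside(p, a, b):  # p left of directed edge a->b (CCW clipper)
        return (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0]) >= 0
    def intersect(p, q, a, b):  # crossing of segment pq with line ab
        x1, y1, x2, y2 = p[0], p[1], q[0], q[1]
        x3, y3, x4, y4 = a[0], a[1], b[0], b[1]
        den = (x1-x2)*(y3-y4) - (y1-y2)*(x3-x4)
        t = ((x1-x3)*(y3-y4) - (y1-y3)*(x3-x4)) / den
        return (x1 + t*(x2-x1), y1 + t*(y2-y1))
    output = list(subject)
    for i in range(len(clipper)):
        a, b = clipper[i], clipper[(i + 1) % len(clipper)]
        input_list, output = output, []
        for j in range(len(input_list)):
            p, q = input_list[j], input_list[(j + 1) % len(input_list)]
            if inside(q, a, b):
                if not inside(p, a, b):
                    output.append(intersect(p, q, a, b))
                output.append(q)
            elif inside(p, a, b):
                output.append(intersect(p, q, a, b))
    return output  # the part of `subject` hidden behind `clipper`

tri1 = [(0, 0), (4, 0), (0, 4)]   # far primitive, depth Z1
tri2 = [(1, 1), (5, 1), (1, 5)]   # near primitive, depth Z2 < Z1
print(clip(tri1, tri2))           # occluded triangle (1,1), (3,1), (1,3)
```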
Fig. 8 shows a block diagram of a computer that comprises the 3D-graphics system. The computer PC that comprises the 3D-graphics system in accordance with the invention has an output O1 to supply the final intensities PIi. Of the 3D-graphics system in accordance with the invention, only the frame buffer 7 is shown. As shown in Fig. 6, the frame buffer 7 receives the partial intensities IPi and supplies the final intensities PIi.
Usually, the 3D-graphics processing in a computer PC requires dedicated hardware that is present on a graphics board in a slot of the computer PC. The processor of the computer PC that is running a 3D-application supplies the geometric data GD to the graphics board, where it is used as input data for the 3D-graphics processing. The output O1 may be a standard plug suitable for transferring the image to a display device. The image transferred to the display device may be in the form of digital data, for example if a DVI interface is used. The image may also be transferred as analog signal(s). Usually, the image is transferred as three RGB (Red, Green, and Blue) signals and synchronization signals.
The final intensities PIi of the 3D-graphics processing in accordance with the invention may therefore be considered to comprise a digital data stream or analog signals representing the intensities for red, green, and blue in combination or separately. The display device and the computer PC may be integrated into a single cabinet.
Fig. 9 shows a block diagram of a display apparatus that comprises the 3D-graphics system. The display apparatus MON comprises the 3D-graphics system in accordance with the invention, a processing circuit PRO, and a display device LCD. Again, of the 3D-graphics system in accordance with the invention, only the frame buffer 7 is shown. The frame buffer 7 receives the partial intensities IPi and supplies the final intensities PIi to the processing circuit PRO. The processing circuit PRO processes the pixel intensities PIi stored in the frame buffer 7 to obtain drive signals DS which are supplied to the display device LCD. The display device LCD has a display screen to display the images determined by the final intensities PIi.
The display apparatus may be a computer monitor receiving the geometric data GD from a computer or microprocessor that may be present in the same cabinet as the monitor. The 3D-graphics system may also be used to display 3D-graphics that are locally generated in the display apparatus, for example to facilitate easy operation of the display apparatus by generating animated menus. The display apparatus may also be a television receiver.
The display device LCD may be of any kind, for example a liquid crystal display, a plasma display, or any other matrix display or a cathode ray tube.

It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design many alternative embodiments without departing from the scope of the appended claims.
In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. Use of the verb "comprise" and its conjugations does not exclude the presence of elements or steps other than those stated in a claim. The article "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the device claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.


CLAIMS:
1. A method of generating edge anti-aliased 3D-graphics, the method comprises:
- receiving (2) geometric data (GD) comprising geometric primitives (TGP; SGP) of 3D-objects (WO),
- determining (3) visible parts of the geometric primitives (TGP; SGP) using the geometric data (GD) to obtain non-overlapping geometric primitives (TGP'; SGP') representing the visible parts,
- storing (4) the non-overlapping geometric primitives (TGP'; SGP'),
- transforming (2) vertices of the non-overlapping geometric primitives (TGP'; SGP') to screen space (SSP),
- rasterizing (5) the stored non-overlapping geometric primitives (TGP'; SGP') one by one to determine partial intensities (IPi) of pixels (Pi) in the screen space (SSP) based on texture data (Ti) obtained from bitmaps (TA) or procedurally generated data representing textures of the 3D-objects (WO) per non-overlapping primitive (TGP'; SGP'), and
- accumulating (7) the partial intensities (IPi) of the pixels (Pi) determined for each one of the non-overlapping primitives (TGP'; SGP') to obtain final intensities (PIi) of the pixels (Pi) in the screen space (SSP).
2. A method as claimed in claim 1, wherein the step of rasterizing (5) is based on inverse texture mapping, the step of rasterizing (5) comprising:
- screen space rasterizing (RSS) the non-overlapping geometric primitives (SGP') in the screen space (SSP) one by one to obtain the grid positions of the pixels (Pi) per non-overlapping geometric primitive (SGP'),
- defining, per non-overlapping geometric primitive (SGP'), in the screen space (SSP), for each pixel (Pi) within this non-overlapping geometric primitive (SGP') and in a band around this geometric primitive (SGP'), a pre-filter with a filter profile and a pre-filter footprint (FP) centered around said pixel (Pi), wherein the band is determined by a size of the pre-filter footprint (FP),
- mapping (RTS) the pre-filter footprint (FP) for each pixel (Pi) of each one of the non-overlapping geometric primitives (SGP') to a texture space (TSP) comprising the textures to obtain a mapped filter footprint (MFP) and a transformed filter profile being a transformed version of the filter profile of the pre-filter,
- determining (RTS), exactly or by approximation, for each mapped filter footprint (MFP) which texels (Ti) in the texture space (TSP) are positioned within both the mapped filter footprint (MFP) and the non-overlapping geometric primitive (TGP') in the texture space, and
- filtering these texels (Ti) with the transformed filter profile to obtain the partial intensity (IPi) of the associated pixel (Pi),
wherein the step of accumulating (7) accumulates the partial intensities (IPi) of the associated pixel (Pi) for different non-overlapping geometric primitives (TGP'; SGP') contributing to the associated pixel (Pi).
3. A method as claimed in claim 1, wherein the step of rasterizing (5) is based on forward texture mapping, the step of rasterizing (5) comprising:
- texture space rasterizing (RTS) the non-overlapping geometric primitives (TGP') in the texture space (TSP) one by one to obtain the grid positions of the texels (Ti) per non-overlapping geometric primitive (TGP'),
- mapping (RSS) the texels (Ti) within a non-overlapping geometric primitive (TGP'), per non-overlapping geometric primitive (TGP'), to the screen space (SSP) to obtain mapped texel positions (MTi), and
- splatting (RSS), in the screen space (SSP), for each mapped texel position (MTi), the associated intensity (It) of the texel (Ti) over a group of adjacent pixels (Pi) of which associated pre-filters have footprints (FP) overlapping the mapped texel position (MTi), to obtain partial intensity contributions (IPi) for the pixels (Pi) of the group according to pre-filter profiles of the associated pre-filters,
wherein the step of accumulating (7) accumulates the partial intensity contributions (IPi) for each one of the pixels (Pi) in the screen space (SSP).
4. A graphics system for generating edge anti-aliased 3D-graphics, the graphics system comprises:
- a vertex transform and lighting unit (2) for receiving geometric data (GD) comprising geometric primitives (TGP; SGP) of 3D-objects (WO), and for transforming vertices of the geometric primitives (TGP; SGP) to screen space (SSP),
- a hidden surface removal unit (3) for determining visible parts of the geometric primitives (TGP; SGP) using the geometric data (GD) to obtain non-overlapping geometric primitives (TGP'; SGP') representing the visible parts,
- a primitives memory (4) for storing the non-overlapping geometric primitives (TGP'; SGP'),
- a texture memory (6) for storing texture data (Ti) representing textures of the 3D-objects (WO) per non-overlapping primitive (TGP'; SGP'),
- a rasterizer (5) for rasterizing the stored non-overlapping geometric primitives (TGP'; SGP') one by one to determine partial intensities (IPi) of pixels (Pi) in the screen space (SSP) based on the texture data (Ti) representing the textures of the 3D-objects (WO) per non-overlapping primitive (TGP'; SGP'), and
- a frame buffer (7) for accumulating, in the screen space (SSP), the partial intensities (IPi) of the pixels (Pi) determined for each one of the non-overlapping primitives (TGP'; SGP') to obtain final intensities (PIi) of the pixels (Pi).
5. A computer (PC) comprising the graphics system claimed in claim 4, the computer (PC) further comprises an output (O1) for supplying the pixel intensities (PIi) stored in the frame buffer (7).
6. A display apparatus (MON) comprising the graphics system claimed in claim 4, the display apparatus (MON) further comprises a display device (LCD), and a processing circuit (PRO) for processing the pixel intensities (PIi) stored in the frame buffer (7) to supply drive signals (DS) to the display device (LCD).
PCT/IB2005/052509 2004-08-25 2005-07-26 3d-graphics WO2006021899A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP04104067 2004-08-25
EP04104067.6 2004-08-25

Publications (2)

Publication Number Publication Date
WO2006021899A2 true WO2006021899A2 (en) 2006-03-02
WO2006021899A3 WO2006021899A3 (en) 2006-05-18

Family

ID=35529541

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2005/052509 WO2006021899A2 (en) 2004-08-25 2005-07-26 3d-graphics

Country Status (1)

Country Link
WO (1) WO2006021899A2 (en)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2003065307A2 (en) * 2002-02-01 2003-08-07 Koninklijke Philips Electronics N.V. Using texture filtering for edge anti-aliasing

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
FEIBUSH E A ET AL: "Synthetic texturing using digital filters" COMPUTER GRAPHICS, NEW YORK, NY, US, vol. 14, no. 3, July 1980 (1980-07), pages 294-301, XP002289383 ISSN: 0097-8930 *
FOLEY ET AL: "COMPUTER GRAPHICS, PRINCIPLES AND PRACTICE, PASSAGE. THE RENDERING PIPELINE" COMPUTER GRAPHICS. PRINCIPLES AND PRACTICE, READING, ADDISON WESLEY, US, 1996, pages 806-813, XP002363195 *
K. MEINDS, B. BARENBRUG, F. PETERS: "Hardware-accelerated Texture and Edge Antialiasing using FIR-Filters" CONFERENCE ABSTRACTS AND APPLICATIONS, 2002, page 190, XP002259241 ACM SIGGRAPH 2002 *
MEINDS K ET AL: "Resample Hardware for 3D Graphics" EUROGRAPHICS WORKSHOP ON GRAPHICS HARDWARE, 2002, pages 17-27, XP002259239 cited in the application *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116594582A (en) * 2022-06-22 2023-08-15 格兰菲智能科技(北京)有限公司 Image display method, apparatus, computer device and storage medium
CN116594582B (en) * 2022-06-22 2024-03-22 格兰菲智能科技(北京)有限公司 Image display method, apparatus, computer device and storage medium

Also Published As

Publication number Publication date
WO2006021899A3 (en) 2006-05-18


Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KM KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU LV MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase in:

Ref country code: DE

122 Ep: pct application non-entry in european phase