WO2004072907A1 - Computer graphics system and method for rendering a computer graphic image - Google Patents

Computer graphics system and method for rendering a computer graphic image

Info

Publication number
WO2004072907A1
Authority
WO
WIPO (PCT)
Prior art keywords
coordinates
texture
grid
sequence
Prior art date
Application number
PCT/IB2004/050069
Other languages
French (fr)
Inventor
Bart G. B. Barenbrug
Kornelis Meinds
Original Assignee
Koninklijke Philips Electronics N.V.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Koninklijke Philips Electronics N.V. filed Critical Koninklijke Philips Electronics N.V.
Priority to US10/545,064 (published as US20060202990A1)
Priority to JP2006502556A (published as JP2006517705A)
Priority to EP04707272A (published as EP1597705A1)
Publication of WO2004072907A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/40 Filling a planar surface by adding surface attributes, e.g. colour or texture
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/04 Texture mapping
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators

Definitions

  • the present invention relates to a computer graphics system and to a method for rendering a computer graphic image.
  • surfaces are typically rendered by assembling a plurality of polygons in a desired shape.
  • Computer graphics systems usually have the form of a graphics pipeline where the operations required to generate an image from such a polygon model are performed in parallel so as to achieve a high rendering speed.
  • a computer graphics system is known from US6,297,833.
  • the computer graphics system comprises a front-end and a set-up stage which provide input for the rasterizer.
  • the rasterizer in its turn drives a color generator which comprises a texture stage for generating texture values for selectable textures and a combiner stage which produces realistic output images by mapping textures to surfaces. To that end the rasterizer generates a sequence of coordinates in display space and calculates by interpolation the corresponding texture coordinates.
  • the combiner stage is configured to generate textured color values for the pixels of the polygonal primitive by blending the first texture value with the color values of the first set to generate first blended values, blending the second texture value with the color values of the second set to generate second blended values, and combining the second blended values with the first blended values.
  • This graphics system makes it possible to render an anti-aliased image with reduced computational effort.
  • the achieved anti-aliasing quality is superior to that obtained by 4x4 super-sampling, while the off-chip memory bandwidth and the computational costs are roughly comparable with 2x2 supersampling.
  • the computer graphics system of the invention is characterized by claim 1.
  • the rasterizer generates a regular sequence of coordinates on a grid in a space associated with the primitive on the basis of the geometric information of the primitive.
  • the wording "associated" denotes that the sequence of coordinates traversed by the grid is determined by the primitive. It is capable of generating the sequence so as to coincide with a grid of a texture.
  • the color generator assigns a color to said coordinates using said appearance information.
  • the so obtained color samples are resampled to a grid in display space by the display space resampler. Compared to the method known from US6,297,833 proper filtering is simplified significantly.
  • the texture space resampler in the color generator makes it possible to resample texture data provided by the texture data unit to the base grid from an arbitrary grid.
  • the rasterizer is capable of generating one or more sequences of interpolated values associated with the first sequence comprising a second sequence of coordinates for addressing samples of a texture.
  • the wording "associated" here indicates that for each coordinate of the first sequence there is a corresponding value, or coordinate for the second sequence.
  • the relation between the first and the second sequence of coordinates is for example dependent on the orientation of the primitive in relation to the environment. In this way it is not only possible to map simple textures, but also to map environment data.
  • the shading unit in the color generator enables a relatively wide range of visual effects. This makes it possible to apply shading programs suitable for systems as described in US6,297,833, using effects as multiple texturing, dependent texturing and other forms of pixel shading.
  • the computer graphics system of the invention comprises a texture space resampler which resamples the texture data to the space defined by the base grid. As will be set out in more detail in the description of the drawings this overcomes the need of large buffers.
  • the base grid is the grid of a texture. This overcomes the need to resample that texture. Resampling would entail an additional computational effort and a loss of image quality.
  • cases may occur where no suitable texture is associated with the primitive. Such a case is, for example, a texture described by a 1D pattern, which might for example be used to render a rainbow.
  • a texture stored as a 3D (volumetric) pattern is another example.
  • the embodiment of claim 3 also allows rendering images using such textures by selecting a dummy grid.
  • the rasterizer in addition generates a sequence of coordinates in display space associated with the input coordinates. This has the advantage that the coordinates in display space can simply be calculated by interpolation.
  • the positions in display space can be calculated by a separate transformation unit, but this requires floating point multiplications and divisions.
  • the embodiment of claim 5 significantly increases the opportunities for special effects.
  • By feedback of texture data as input coordinates to the texture space resampler it is possible to apply so-called bumped environment mapping as described in "Real-Time Shading", by M. Olano, J.C. Hart, W. Heidrich, M. McCool, A K Peters, Natick, Massachusetts, 2002, page 108.
  • the embodiment of claim 6 further reduces the computation for those cases in which only simple textures are mapped to the surface of the primitive.
  • the definition of simple textures excludes environment data and cases wherein the textures are defined recursively as in bumped environment mapping.
  • the rasterizer can simply generate the input coordinates in a grid that corresponds to the grid in which the textures are stored.
  • the bypass means enable the rasterizer to directly provide the texture information unit with texture coordinates.
  • the bypass means may for example be a separate connection from the rasterizer to the texture information unit. Otherwise it may for example be a module of the texture space resampler which causes the latter to resample in a grid corresponding to the grid generated by the rasterizer.
  • the rasterization grid selection unit in the embodiment according to claim 7 chooses a grid over the primitive. If any non-dependently accessed 2D textures are associated with the primitive, the selection unit selects from these the texture map with the highest resolution (and therefore potentially the highest image frequencies). This guarantees maximum quality, since this texture does not need to be resampled by the texture space resampler. In case no suitable 2D texture map exists, a "dummy" grid over the primitive is constructed for the rasterizer to traverse, and on which the pixel shading is performed. In this way, primitives are supported with a wide variety of shading methods (next to application of 2D textures), such as primitives which are shaded with simple Gouraud shading, procedural shading, 1D textures, 3D textures etc.
  • the embodiment of claim 9 has the advantage that sampling distance can be adapted to a value which gives an optimal combination of image quality and computational simplicity. This is in particular advantageous in an embodiment where the texture data is provided by a mipmap. A portion of the mipmap can be selected which best matches with the sampling distance.
  • the invention further encompasses the method for rendering a computer graphic image according to claim 10.
  • FIG. 2 schematically shows another prior art computer graphics system
  • Figure 3 schematically shows a computer graphics system constructed by combining the computer graphics systems shown in Figures 1 and 2,
  • Figure 4 schematically shows a graphics system according to the invention
  • Figure 5 shows in more detail the color generation unit of the computer graphics system of Figure 4
  • Figure 6 schematically shows a method of operation
  • Figure 7 schematically illustrates an aspect of the operation
  • Figure 8A shows a first example of a primitive
  • Figure 8B schematically illustrates a further aspect of the operation
  • Figure 9 shows a second example of a primitive
  • FIG. 1 schematically shows a prior art computer graphics system, which is arranged as a graphics pipeline.
  • the known graphics pipeline comprises a model information retrieving unit MIU, e.g. including a vertex shader, that provides a rasterizer RU with primitives.
  • Each primitive may comprise a set of data associated with a geometrical unit such as a triangle.
  • the data comprises geometrical data, e.g. the coordinates of the vertices of the triangle and appearance data.
  • the model information retrieving unit can be programmed, for example, via the OpenGL or Direct3D API.
  • An application programmer can let the vertex shader execute a program per vertex, and provide geometrical and appearance data to the vertex shader such as position, normal, colors and texture coordinates for each vertex.
  • a detailed description of a conventional vertex shader can be found in "A user-programmable vertex engine", Erik Lindholm, Mark J. Kilgard, and Henry Moreton, Proc. Siggraph, pages 149-158, August 2001.
  • the rasterizer RU traverses these primitives to supply a shading unit SU with information indicative for addresses within one or more associated texture maps.
  • One or more texture space resamplers TSR subsequently obtain texture data from the addresses indicated by the rasterizer.
  • the color information provided by the texture space resamplers is aligned according to a grid corresponding to the space in which it is displayed, i.e. the display space.
  • the shading unit SU combines the color information according to the current shading program. The result of this combination is either used as an address in the texture data unit TDU in a next pass, or forwarded to the edge anti-aliasing and hidden surface removal EAA & HSR subsystem.
  • the EAA & HSR subsystem uses super-sampling or multi- sampling for edge anti-aliasing, and z-buffer techniques for hidden surface removal.
  • the final image provided by the EAA & HSR subsystem is stored in a frame buffer FB for display.
  • FIG. 2 schematically shows a part of the computer graphics system according to the article "Resample hardware for 3D Graphics" mentioned above.
  • in response to an input flow of primitives a rasterizer RU generates a sequence of texture coordinates for a texture data unit TDU and provides a mapped reconstruction filter footprint to a display space resampler DSR which resamples the texture data provided by the texture data unit to display space.
  • the texture data unit TDU may be coupled to the display space resampler DSR via a 4D mipmap reconstruction unit 3D>4D.
  • the display space resampler DSR forwards the pixel data to an edge antialiasing and hidden surface removal unit EAA&HSR.
  • the rasterizer RU controls a very simple texture fetch unit.
  • apart from a texture data unit TDU it may comprise a simple filter 3D>4D to reconstruct 4D mipmap texture data on the fly from the standard 3D mipmaps stored in the texture memory as described in PHNL010924, filed as IB02/05468.
  • the display space resampler DSR takes these colors along with the mapped texture coordinates, and resamples these to the pixel grid on the display. For each texture map, this provides a "layer" of colors in display space.
  • the shading unit can combine all the layers into the final pixel fragment. In effect, this approach results in a per-primitive multipass texturing method for pixel shading. This has two main disadvantages.
  • the display space resampler DSR delivers the pixel fragment colors for its texture in an order corresponding to the texture grid, and since this order might be different for different texture maps, a buffer TMP is needed to store the (combined) colors from previous layers before the shading unit SU can combine the colors from the current layer. This results in overhead, in the form of required extra memory bandwidth. A tile based rendering architecture might mitigate this problem, but would be more complicated.
  • Figure 4 shows an embodiment of a computer graphics system according to the invention which overcomes these disadvantages. It comprises a model information providing unit MIU, possibly comprising a programmable vertex shader, for providing information representing a set of graphics primitives.
  • Figure 8A schematically shows a primitive in the form of a triangle. A first sequence of coordinates can be generated which is associated with the primitive by generating pairs of integer values which are bounded by the coordinates (u1,v1)0, (u1,v1)1, and (u1,v1)2 of the triangle.
  • arbitrary polygons may be used.
  • curved primitives, as shown in Figure 9, may be used, such as Bezier shapes.
  • Such primitives can be simply parameterized by a pair of parameters having boundaries for the lower and upper values of these parameters.
  • Figure 9 shows an example of a surface bounded by four pairs of coordinates. However three pairs, representing a Bezier triangle, suffice. Alternatively a number higher than 4 may be used. With each pair of boundaries a texture coordinate (u1,v1)0, (u1,v1)1, (u1,v1)2 and (u1,v1)3 can be associated.
  • a first sequence of coordinates can be generated which is associated with the primitive by generating pairs of integer values which are bounded by said texture coordinates.
  • the information comprises at least geometrical information defining a shape of the primitives such as the display coordinates of its vertices (not shown) and appearance information defining an appearance of the primitives.
  • Appearance information may comprise texture information, e.g. in the form of texture coordinates, and color information, i.e. diffuse color and/or specular color.
  • a fog color can be used to simulate fog.
  • the coordinates of a first and a second texture are shown related to the vertices of the primitive in Figure 8A.
  • the grid of the first texture T1 serves as the base grid.
  • the coordinates for the first texture and the second texture are (u1,v1)i and (u2,v2)i, respectively, where i is the number of the vertex. Also information representative for the normal of the primitives at the position of the vertices may be included.
  • a model information providing unit is well known.
  • a programmable vertex shading unit for use in a model information providing unit is for example described in more detail in the above-mentioned article of Lindholm et al.
  • the model information providing unit MIU can be programmed via the OpenGL and Direct3D API.
  • the computer graphics system according to the invention further comprises a rasterizer (RU) capable of generating a first sequence of texture sample coordinates for addressing samples of a first texture, which coincide with a base grid associated with the primitive, here a grid coinciding with the first texture. It is also capable of generating one or more sequences of interpolated values associated with the first sequence comprising a second sequence of coordinates for addressing samples of a second texture.
  • RU rasterizer
  • the rasterizer RU is further capable of generating a first sequence of coordinates according to a dummy grid. This is relevant in the case that no texture is associated with the primitive, or if the texture is not suitable for a two-dimensional grid. This is the case, for example, for a texture described by a 1D pattern, which might for example be used to render a rainbow. Another example is a texture stored as a 3D (volumetric) pattern.
  • the one or more sequences of interpolated values are associated with the first sequence of coordinates in that the rasterizer generates an interpolated value for each coordinate in the first sequence.
  • the interpolated values may be generated at the same time that the first sequence of coordinates is generated, but alternatively may be generated afterwards.
  • a rasterizer is well known as such.
  • a detailed description of a rasterizer is given in "Algorithms for Division Free Perspective Correct Rendering" by B. Barenbrug et al., pp. 7-13, Proceedings of Graphics Hardware 2000.
  • the computer graphics system further comprises a color generator for assigning a color to said first sequence of coordinates using said appearance information related to the primitives.
  • the color generator CG comprises a texture data unit TDU for assigning texture data to the texture sample coordinates.
  • the texture data unit TDU is for example a texture synthesizer, which synthesizes a texture value for each coordinate. Otherwise it may be a memory in which predefined textures are stored. The textures may be stored in a compressed format. The memory may also contain multiple copies of the textures stored at a different scale. Known methods to implement this are for example the 3D and the 4D mipmap.
  • the color generator CG further comprises a texture space resampler TSR (shown in more detail in Figure 5) which is arranged for providing output texture data TWu,v in response to texture sample coordinates (uf,vf) provided by the shading unit SU.
  • TSR texture space resampler
  • the texture space resampler TSR in the computer graphics system of the invention is driven with coordinates which correspond to a grid position on the first texture, and not with coordinates corresponding to a grid position on the display.
  • an arbitrary number of texture maps, e.g. 8 or more, may be used to define the appearance of the primitives.
  • These texture maps may be resampled sequentially, but alternatively the color generator may have more than one texture space resampler and more than one texture data unit in order to speed up the resampling process.
  • the color generator CG further comprises a shading unit SU for providing the color using said output texture data and the appearance information provided by the rasterizer.
  • the shading unit may use various data to provide the color, such as an interpolated diffuse color and a normal for calculating a contribution of specular reflection.
  • the display space resampler DSR resamples the color assigned by the color generator to a representation in a grid associated with a display.
  • This process of forward mapping the color to the display grid is preferably performed in two passes, wherein two 1D filtering operations are performed one after the other in mutually transverse directions. Alternatively however, the mapping to display coordinates could take place in a single 2D filtering operation. Forward mapping color data is described in detail in the aforementioned article "Resample hardware for 3D Graphics".
  • the data provided by the display space resampler DSR is processed by an antialiasing and hidden surface removal unit EAA&HSR. Details thereof can be found in the earlier filed patent application PHN020100, with filing number EP02075420.6.
  • the output data of this unit can be provided to a framebuffer for display or, as indicated by the dashed line, to the texture data unit TDU for use in a later stage.
  • Figure 5 shows again the rasterizer RU, and the texture data unit TDU, as well as, in more detail, the texture space resampler TSR and the shading unit SU of the computer graphics system according to the invention.
  • the texture space resampler TSR comprises a grid coordinate generator GCG for generating integer coordinates (ui,vi) from the coordinates (uf,vf).
  • GCG grid coordinate generator
  • although the textures are addressed by two-dimensional coordinates, it is alternatively possible to use higher-dimensional coordinates or one-dimensional coordinates instead.
  • a selection element S1 controlled by a selection signal Sel allows either to forward the coordinates (uf,vf) unchanged to the texture data unit TDU, or to select the resampled coordinates (ui,vi).
  • the rasterizer RU is arranged for generating a regular sequence of coordinates (u1,v1) on a base grid. The range traversed by this sequence is determined by the data associated with the primitive. To that end the texture data unit TDU is coupled to the rasterizer RU, in this case via a selection element S3 of the shading unit SU and via a selection element S1 of the texture space resampler TSR.
  • the rasterizer RU comprises a rasterization grid selection unit RGSU for selecting a base grid to be traversed by the first sequence of coordinates (u1,v1).
  • the base grid is preferably the grid of a further texture T1.
  • the rasterization grid selection unit RGSU selects the grid of the associated texture T1 which is available at the highest resolution.
  • the rasterizer RU is capable of adapting the sampling distance step-wise as a function of the relation between a space associated with the primitive and the space associated with the display. This is the case where a texture is stored in the form of a 3D or 4D mipmap, and a perspective mapping causes the magnification of the texture to vary.
  • the rasterizer RU is further arranged to interpolate other data related to the primitive, such as coordinates of one or more further textures.
  • the rasterizer RU provides the interpolated further texture coordinates (u2,v2) to the texture space resampler TSR.
  • these coordinates can be passed to the texture data unit TDU via the selection elements S3 and S1.
  • integer values (ui,vi) coinciding with the grid of the texture can be calculated by the grid coordinate generator GCG. This is schematically shown in Figure 8B.
  • the selection element S1 selects the resampled texture coordinates (ui,vi) as the coordinates for addressing the texture data unit TDU.
  • the coordinate (u2,v2) is surrounded by 4 samples a-d of the second texture.
  • the texture space resampler TSR fetches the corresponding texture data of the second texture T2 from the coordinates provided by the grid coordinate generator GCG and the filter FLT resamples these to the grid of the first texture T1.
  • Resampling may take place for example by nearest neighbor approximation, in which case the filter FLT simply passes on the single value Tu,v generated by the TDU for the nearest texture coordinate (ui,vi) generated by the GCG as the output texture value TWu,v.
  • the filter may cause the selection element S2 to perform this function by selecting the texture data Tu,v provided by the texture data unit TDU, instead of the output of the filter FLT.
  • resampling may take place by interpolation, for example by bilinear interpolation. When using interpolation the addressed texture data Tu,v is weighted by a filter FLT which is controlled by the grid coordinate generator GCG.
  • the value calculated by the filter FLT is provided via selection element S2 to the shading unit SU as the output texture value TWu,v.
  • This mode is known as bilinear filtering. It is remarked that the texture space resampler TSR may calculate the output texture value TWu,v on the basis of more output coordinates (ui,vi).
  • texture data is often stored in the form of a 3D mipmap. This may have the consequence that no sequence of sample coordinates can be found which coincides with the texture grid.
  • the method described in PHNL010924, filed as IB02/05468, makes it possible to calculate 4D mipmap data on the fly from the 3D mipmap. This calculation, also based on bilinear interpolation, can be performed by the texture space resampler TSR.
  • the rasterizer RU in addition provides the interpolated color values Cip and the interpolated normal values Nip to the shading unit SU.
  • the shading unit comprises, apart from the selection element S3, a shading module SH and a programmable controller CTRL. As illustrated by dashed lines, the controller CTRL controls the switches S1, S2 and S3 and the shading module SH.
  • the shading module SH makes it possible to calculate a color Cu,v in response to several input data such as the interpolated normal Nip and the interpolated color value Cip from the rasterizer and the texture data TWu,v provided by the texture space resampler TSR, as well as environment data (such as information about the position and properties of light sources).
  • the shading module SH may use well known shading functions, such as Phong shading for that purpose.
  • the shading unit SU provides the output color value Cu,v to the display space resampler DSR which resamples this value to the derived display coordinates. To that end the rasterizer RU may provide interpolated values for the display coordinates. As shown in Figure 5, an output of the shading module SH is coupled via the switching element S3 to the texture space resampler TSR.
  • the feedback facility enables special effects such as bumped environment mapping, by reusing the output texture value TWu,v to generate the coordinates of another texture, for example by adding these output texture values TWu,v to the input coordinates (ut,vt) or by using these output texture values TWu,v directly as feedback coordinates (uf,vf).
  • These feedback coordinates are not aligned with the grid.
  • the grid coordinate generator GCG generates texture-grid-aligned values (ui,vi) from the coordinates (uf,vf).
  • in step S1 information is provided representing a graphics model comprising a set of primitives.
  • the information comprises at least geometrical information indicative for the shape of the primitives and appearance information indicative for the appearance of the primitives.
  • in step S2 a first sequence of coordinates is generated coinciding with a base grid associated with the primitive.
  • in step S3 one or more sequences of interpolated values are generated which are associated with the first sequence, and which comprise a second sequence of coordinates for addressing samples of a texture.
  • Step S3 may be executed subsequent to step S2 as shown in the flow chart, but may alternatively be executed in parallel with step S2.
  • the base grid may be a dummy grid, or a grid of a further texture.
  • in step S4 output texture data aligned with the base grid is obtained by generating coordinates aligned with the texture from the second sequence, fetching data of the texture at those coordinates and providing the output data as a function of the fetched data.
  • in step S5 a color is provided using said output texture data and the appearance information.
  • in step S6 the color so obtained is resampled to a representation in a grid associated with a display.
  • in step S11 it is determined whether the appearance of the primitive is determined by two or more textures. If this is the case a texture counter i is initialized at 0 in step S12. An illustrative sketch of the resulting per-texture loop is given below, after this list.
  • in step S13 it is verified whether the grid of the current texture i coincides with the grid traversed by the sequence of texture sample coordinates. If this is the case program flow continues with step S14 and fetches a texture sample Tu,v at that coordinate. If the grid of the texture i does not coincide with the sequence of texture sample coordinates a texture sample TWu,v is obtained by a resampling routine in step S15, which uses a filter (such as a bilinear probe, or a higher order filter) to obtain an interpolated texture value from the texture values surrounding the texture sample coordinates. Alternatively it could simply obtain the texture value Tu,v at the nearest grid point of the texture i.
  • Step S15 may include generation or modification of sample coordinates using earlier calculated texture data and/or using other currently available shading data, such as the interpolated color Cip and the interpolated normal Nip. In this way dependent texturing effects, such as bumped environment mapping, can be obtained.
  • after step S14 or step S15 program flow continues with step S16 where the currently available shading data, such as the interpolated color Cip, the interpolated normal Nip and the texture data, is combined.
  • Step S16 is followed by step S17 where it is verified whether there are further textures associated with the primitive. If so, the texture counter is incremented and steps S13 up to and including S17 are repeated. If it is determined in step S17 that the last texture was processed, a combined color is calculated using the texture values TWu,v, the interpolated color Cip and other data, such as the interpolated normal Nip.
  • the calculated color value Cu,v is used in step S18 as input value for the next processing stage, for example the forward filtering operation that resamples the calculated color value to display coordinates as is described with reference to Figure 4.
  • Before or after the forward filtering operation one or more other processing steps may be performed, such as alpha-test, depth-test and stencil-test procedures.
  • step S19 verifies whether there is exactly one texture. If this is the case the texture value of the present sample coordinate is retrieved in step S20.
  • This step S20 can either straightforwardly retrieve a texture sample as in step S14, when the sample coordinate coincides with the grid of the texture, or, if the sample coordinate does not coincide with the texture grid, it may calculate a texture value analogous to the procedure in step S15. Subsequently program flow continues with step S21. If it is determined in step S19 that there is no texture associated with the primitive, control flow directly continues with step S21. In step S21 other color computations may take place, for example using a diffuse color Cip and an interpolated normal Nip, which is followed by step S18.
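
The per-texture loop of steps S11 to S21 can be summarized in the following sketch. It is only an illustrative outline in C, not the claimed implementation; the data types and the helper functions (grid_coincides, fetch_texel_at, resample_texture, combine_shading) are hypothetical stand-ins for the texture data unit, texture space resampler and shading unit described above.

    /* Illustrative sketch of the per-texture shading loop (steps S11-S21).
       All types and helpers are assumptions standing in for the TDU, TSR and SU. */
    typedef struct { float r, g, b, a; } Color;
    typedef struct { int id; } Texture;              /* placeholder texture handle */

    int   grid_coincides(const Texture *t, float u, float v);
    Color fetch_texel_at(const Texture *t, float u, float v);
    Color resample_texture(const Texture *t, float u, float v);
    Color combine_shading(Color acc, Color t, const float n[3]);

    Color shade_sample(const Texture *tex, int num_textures,
                       float u1, float v1,           /* base-grid coordinate        */
                       Color c_ip, float n_ip[3])    /* interpolated color / normal */
    {
        Color acc = c_ip;
        if (num_textures >= 2) {                             /* step S11 */
            for (int i = 0; i < num_textures; i++) {         /* steps S12, S17 */
                Color t;
                if (grid_coincides(&tex[i], u1, v1))         /* step S13 */
                    t = fetch_texel_at(&tex[i], u1, v1);     /* step S14 */
                else
                    t = resample_texture(&tex[i], u1, v1);   /* step S15, e.g. bilinear */
                acc = combine_shading(acc, t, n_ip);         /* step S16 */
            }
        } else if (num_textures == 1) {                      /* step S19 */
            Color t = resample_texture(&tex[0], u1, v1);     /* step S20: fetch or resample */
            acc = combine_shading(acc, t, n_ip);             /* step S21 */
        }
        return acc;    /* step S18: handed to the display space resampler */
    }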

Abstract

A computer graphics system according to the invention comprises a model information providing unit (MIU), a rasterizer (RU), a color generator, and a display space resampler (DSR). The model information providing unit (MIU) provides information representing a set of graphics primitives, the information comprising at least geometrical information defining a shape of the primitives and appearance information defining an appearance of the primitives. The rasterizer (RU) is capable of generating a first sequence of coordinates ((u1,v1)) which coincide with a base grid associated with the primitive, and capable of generating one or more sequences of interpolated values associated with the first sequence comprising a second sequence of coordinates ((u2,v2)) for addressing samples of a texture (T2). The color generator assigns a color (Cu,v) to said first sequence of coordinates using said appearance information, and comprises a texture data unit (TDU), a texture space resampler (TSR) and a shading unit (SU). The display space resampler (DSR) resamples the color (Cu,v) assigned by the color generator in the base grid to a representation in a grid associated with a display.

Description

Computer graphics system and method for rendering a computer graphic image
The present invention relates to a computer graphics system and to a method for rendering a computer graphic image.
In three dimensional computer graphics, surfaces are typically rendered by assembling a plurality of polygons in a desired shape. Computer graphics systems usually have the form of a graphics pipeline where the operations required to generate an image from such a polygon model are performed in parallel so as to achieve a high rendering speed. A computer graphics system is known from US6,297,833. The computer graphics system comprises a front-end and a set-up stage which provide input for the rasterizer. The rasterizer in its turn drives a color generator which comprises a texture stage for generating texture values for selectable textures and a combiner stage which produces realistic output images by mapping textures to surfaces. To that end the rasterizer generates a sequence of coordinates in display space and calculates by interpolation the corresponding texture coordinates. The combiner stage is configured to generate textured color values for the pixels of the polygonal primitive by blending the first texture value with the color values of the first set to generate first blended values, blending the second texture value with the color values of the second set to generate second blended values, and combining the second blended values with the first blended values.
It is a disadvantage of the known systems that anti-aliasing requires a significant computational effort, as color data has to be computed at a resolution which is significantly higher than the display resolution.
A radically different approach is known from the article, "Resample hardware for 3D Graphics", by Koen Meinds and Bart Barenbrug, Proceedings of Graphics Hardware 2002, pp 17-26, ACM 2002, T. Ertl and W. Heidrich and M Doggett (editors). Contrary to the system known from US6,297,833 the rasterizer is capable of traversing a sequence of sample coordinates coinciding with a grid of a texture to be mapped, while the coordinates for the display are interpolated. The resulting pixel values at a display are obtained by mapping the color data calculated for the interpolated display coordinates to the display grid. A resampler unit performing this procedure will be denoted display space resampler (DSR). A resampler which resamples to the grid of a texture, known from US6,297,833, will be denoted as texture space resampler (TSR).
This graphics system makes it possible to render an anti-aliased image with reduced computational effort. The achieved anti-aliasing quality is superior to that obtained by 4x4 super-sampling, while the off-chip memory bandwidth and the computational costs are roughly comparable with 2x2 supersampling.
In this article it is however not recognized how programmable pixel shading, comprising features as dependent multi-texturing (e.g. as used for bumped environment mapping) can be realized in a graphics system as described therein. It is a purpose of the invention to provide a computer graphics system which is capable of rendering images with a relatively wide range of visual effects with a relatively small computational effort.
According to this purpose the computer graphics system of the invention is characterized by claim 1. In the computer graphics system according to the invention the rasterizer generates a regular sequence of coordinates on a grid in a space associated with the primitive on the basis of the geometric information of the primitive. The wording "associated" denotes that the sequence of coordinates traversed by the grid is determined by the primitive. It is capable of generating the sequence so as to coincide with a grid of a texture. The color generator assigns a color to said coordinates using said appearance information. The so obtained color samples are resampled to a grid in display space by the display space resampler. Compared to the method known from US6,297,833 proper filtering is simplified significantly. In the first place it is easier to determine which color samples contribute to a particular pixel. Because the footprint of the prefilter required for anti-aliasing is aligned with the axes defining the display space, it is simple to determine if a texture coordinate, mapped in display space, is within said footprint of a pixel. Furthermore, contrary to inverse texture mapping, it is not necessary to transform the filter function from pixel space to texture space. Finally, because the rasterization takes place in a space associated with the primitive, only coordinates in said space restricted to the primitive are considered for the filtering process. The texture space resampler in the color generator makes it possible to resample texture data provided by the texture data unit to the base grid from an arbitrary grid. The rasterizer is capable of generating one or more sequences of interpolated values associated with the first sequence comprising a second sequence of coordinates for addressing samples of a texture. The wording "associated" here indicates that for each coordinate of the first sequence there is a corresponding value, or coordinate for the second sequence. The relation between the first and the second sequence of coordinates is for example dependent on the orientation of the primitive in relation to the environment. In this way it is not only possible to map simple textures, but also to map environment data. The shading unit in the color generator enables a relatively wide range of visual effects. This makes it possible to apply shading programs suitable for systems as described in US6,297,833, using effects such as multiple texturing, dependent texturing and other forms of pixel shading. Contrary to the system known from US6,297,833 however, the computer graphics system of the invention comprises a texture space resampler which resamples the texture data to the space defined by the base grid. As will be set out in more detail in the description of the drawings this overcomes the need for large buffers.
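Because the prefilter footprint is axis aligned in display space, the membership test mentioned above reduces to two absolute-value comparisons. The fragment below is a minimal sketch under the assumption of a box-shaped footprint; the names and the footprint half-extents are illustrative only and not taken from the claims.

    #include <math.h>

    /* Sketch: is a color sample, mapped to display position (x, y), inside the
       axis-aligned prefilter footprint of the pixel centred at (px, py)?
       half_w and half_h are assumed half-extents of the footprint. */
    static int inside_footprint(float x, float y, float px, float py,
                                float half_w, float half_h)
    {
        return fabsf(x - px) <= half_w && fabsf(y - py) <= half_h;
    }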
If possible the base grid is the grid of a texture. This overcomes the need to resample that texture. Resampling would entail an additional computational effort and a loss of image quality. However, cases may occur where no suitable texture is associated with the primitive. Such a case is, for example, a texture described by a 1D pattern, which might for example be used to render a rainbow. Another example is a texture stored as a 3D (volumetric) pattern. The embodiment of claim 3 also allows rendering images using such textures by selecting a dummy grid. In the embodiment of claim 4 the rasterizer in addition generates a sequence of coordinates in display space associated with the input coordinates. This has the advantage that the coordinates in display space can simply be calculated by interpolation. Alternatively the positions in display space can be calculated by a separate transformation unit, but this requires floating point multiplications and divisions. The embodiment of claim 5 significantly increases the opportunities for special effects. By feedback of texture data as input coordinates to the texture space resampler it is possible to apply so-called bumped environment mapping as described in "Real-Time Shading", by M. Olano, J.C. Hart, W. Heidrich, M. McCool, A K Peters, Natick, Massachusetts, 2002, page 108. The embodiment of claim 6 further reduces the computation for those cases in which only simple textures are mapped to the surface of the primitive. The definition of simple textures excludes environment data and cases wherein the textures are defined recursively as in bumped environment mapping. When mapping one or more simple textures the rasterizer can simply generate the input coordinates in a grid that corresponds to the grid in which the textures are stored. The bypass means enable the rasterizer to directly provide the texture information unit with texture coordinates. The bypass means may for example be a separate connection from the rasterizer to the texture information unit. Otherwise it may for example be a module of the texture space resampler which causes the latter to resample in a grid corresponding to the grid generated by the rasterizer.
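The feedback of texture data as input coordinates can be illustrated with a small dependent-texturing sketch in the spirit of the bumped environment mapping referred to above. The sample_texture helper, the Color type and the perturbation scale are assumptions made for illustration; they are not the mechanism claimed in the application.

    /* Sketch of dependent texturing: a bump value fetched at the base-grid
       coordinate perturbs the coordinate used to address an environment map. */
    typedef struct { float r, g, b, a; } Color;
    typedef struct Texture Texture;                  /* opaque handle (assumed)   */
    Color sample_texture(const Texture *t, float u, float v);   /* assumed helper */

    Color bumped_environment(const Texture *bump, const Texture *env,
                             float u1, float v1, float scale)
    {
        Color b  = sample_texture(bump, u1, v1);     /* first texture fetch          */
        float uf = u1 + scale * b.r;                 /* texture data fed back ...    */
        float vf = v1 + scale * b.g;                 /* ... as new input coordinates */
        return sample_texture(env, uf, vf);          /* dependent second fetch       */
    }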
The rasterization grid selection unit in the embodiment according to claim 7 chooses a grid over the primitive. If any non-dependently accessed 2D textures are associated with the primitive, the selection unit selects from these the texture map with the highest resolution (and therefore potentially the highest image frequencies). This guarantees maximum quality, since this texture does not need to be resampled by the texture space resampler. In case no suitable 2D texture map exists, a "dummy" grid over the primitive is constructed for the rasterizer to traverse, and on which the pixel shading is performed. In this way, primitives are supported with a wide variety of shading methods (next to application of 2D textures), such as primitives which are shaded with simple Gouraud shading, procedural shading, 1D textures, 3D textures etc.
By choosing the grid of the texture which is available in the highest resolution as claimed in claim 8, an optimum quality is obtained when resampling other texture data to this grid.
The embodiment of claim 9 has the advantage that sampling distance can be adapted to a value which gives an optimal combination of image quality and computational simplicity. This is in particular advantageous in an embodiment where the texture data is provided by a mipmap. A portion of the mipmap can be selected which best matches with the sampling distance.
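As a rough illustration of matching a mipmap to the sampling distance, the level can be chosen near the base-2 logarithm of the step size on the texture grid. This is a common rule of thumb offered only as a sketch; the actual selection criterion of the embodiment is not specified here, and all names below are assumptions.

    #include <math.h>

    /* Sketch: pick the mipmap level whose texel spacing best matches the
       sampling distance (du, dv) of the grid traversed by the rasterizer.
       Level 0 is the full-resolution copy. */
    static int mipmap_level(float du, float dv, int max_level)
    {
        float step  = fmaxf(du, dv);
        int   level = (int)floorf(log2f(fmaxf(step, 1.0f)));
        return level < max_level ? level : max_level;
    }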
The invention further encompasses the method for rendering a computer graphic image according to claim 10.
These and other aspects of the invention are described in more detail with reference to the drawings. Therein Figure 1 schematically shows a prior art computer graphics system,
Figure 2 schematically shows another prior art computer graphics system
Figure 3 schematically shows a computer graphics system constructed by combining the computer graphics systems shown in Figures 1 and 2,
Figure 4 schematically shows a graphics system according to the invention, Figure 5 shows in more detail the color generation unit of the computer graphics system of Figure 4,
Figure 6 schematically shows a method of operation, Figure 7 schematically illustrates an aspect of the operation, Figure 8A shows a first example of a primitive,
Figure 8B schematically illustrates a further aspect of the operation, Figure 9 shows a second example of a primitive,
Figure 1 schematically shows a prior art computer graphics system, which is arranged as a graphics pipeline. The known graphics pipeline comprises a model information retrieving unit MIU, e.g. including a vertex shader, that provides a rasterizer RU with primitives. Each primitive may comprise a set of data associated with a geometrical unit such as a triangle. The data comprises geometrical data, e.g. the coordinates of the vertices of the triangle, and appearance data. The model information retrieving unit can be programmed, for example, via the OpenGL or Direct3D API. An application programmer can let the vertex shader execute a program per vertex, and provide geometrical and appearance data to the vertex shader such as position, normal, colors and texture coordinates for each vertex. A detailed description of a conventional vertex shader can be found in "A user-programmable vertex engine", Erik Lindholm, Mark J. Kilgard, and Henry Moreton, Proc. Siggraph, pages 149-158, August 2001.
The rasterizer RU traverses these primitives to supply a shading unit SU with information indicative for addresses within one or more associated texture maps. One or more texture space resamplers TSR subsequently obtain texture data from the addresses indicated by the rasterizer. The color information provided by the texture space resamplers is aligned according to a grid corresponding to the space in which it is displayed, i.e. the display space. The shading unit SU combines the color information according to the current shading program. The result of this combination is either used as an address in the texture data unit TDU in a next pass, or forwarded to the edge anti-aliasing and hidden surface removal EAA & HSR subsystem. Usually, the EAA & HSR subsystem uses super-sampling or multi-sampling for edge anti-aliasing, and z-buffer techniques for hidden surface removal. The final image provided by the EAA & HSR subsystem is stored in a frame buffer FB for display.
Figure 2 schematically shows a part of the computer graphics system according to the article "Resample hardware for 3D Graphics" mentioned above. In response to an input flow of primitives a rasterizer RU generates a sequence of texture coordinates for a texture data unit TDU and provides a mapped reconstruction filter footprint to a display space resampler DSR which resamples the texture data provided by the texture data unit to display space. The texture data unit TDU may be coupled to the display space resampler DSR via a 4D mipmap reconstruction unit 3D>4D. The display space resampler DSR forwards the pixel data to an edge antialiasing and hidden surface removal unit EAA&HSR.
In the known computer graphics system shown in Figure 1 the texture space resampler TSR provides the shading unit SU with colors and data on the pixel grid in display space. Subsequently, they are combined in display space. Applying this teaching to the computer graphics system in Figure 2 means that the shading unit SU should be placed after the display space resampler DSR. This leads to the combined architecture shown in Figure 3.
In the combined architecture shown in Figure 3 the rasterizer RU controls a very simple texture fetch unit. Apart from a texture data unit TDU it may comprise a simple filter 3D>4D to reconstruct 4D mipmap texture data on the fly from the standard 3D mipmaps stored in the texture memory as described in PHNL010924, filed as IB02/05468. No other filtering needs to be performed to obtain the colors on the texture grid traversed by the rasterizer. The display space resampler DSR takes these colors along with the mapped texture coordinates, and resamples these to the pixel grid on the display. For each texture map, this provides a "layer" of colors in display space. The shading unit can combine all the layers into the final pixel fragment. In effect, this approach results in a per-primitive multipass texturing method for pixel shading. This has two main disadvantages.
First, the display space resampler DSR delivers the pixel fragment colors for its texture in an order corresponding to the texture grid, and since this order might be different for different texture maps, a buffer TMP is needed to store the (combined) colors from previous layers before the shading unit SU can combine the colors from the current layer. This results in overhead, in the form of required extra memory bandwidth. A tile based rendering architecture might mitigate this problem, but would be more complicated.
Second, a multipass approach such as this cannot cope with dependent texturing, and this is a vital feature in the pixel shading units of today's GPUs. Figure 4 shows an embodiment of a computer graphics system according to the invention which overcomes these disadvantages. It comprises a model information providing unit MIU, possibly comprising a programmable vertex shader, for providing information representing a set of graphics primitives. Figure 8A schematically shows a primitive in the form of a triangle. A first sequence of coordinates can be generated which is associated with the primitive by generating pairs of integer values which are bounded by the coordinates (u1,v1)0, (u1,v1)1, and (u1,v1)2 of the triangle. In other embodiments arbitrary polygons may be used. Instead of planar primitives, curved primitives, as shown in Figure 9, may be used, such as Bezier shapes. Such primitives can be simply parameterized by a pair of parameters having boundaries for the lower and upper values of these parameters. Figure 9 shows an example of a surface bounded by four pairs of coordinates. However three pairs, representing a Bezier triangle, suffice. Alternatively a number higher than 4 may be used. With each pair of boundaries a texture coordinate (u1,v1)0, (u1,v1)1, (u1,v1)2 and (u1,v1)3 can be associated. Then, analogously, a first sequence of coordinates can be generated which is associated with the primitive by generating pairs of integer values which are bounded by said texture coordinates. The information comprises at least geometrical information defining a shape of the primitives such as the display coordinates of its vertices (not shown) and appearance information defining an appearance of the primitives. Appearance information may comprise texture information, e.g. in the form of texture coordinates, and color information, i.e. diffuse color and/or specular color. Furthermore a fog color can be used to simulate fog. By way of example the coordinates of a first and a second texture are shown related to the vertices of the primitive in Figure 8A. The grid of the first texture T1 serves as the base grid. The coordinates for the first texture and the second texture are (u1,v1)i and (u2,v2)i, respectively, where i is the number of the vertex. Also information representative for the normal of the primitives at the position of the vertices may be included.
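The traversal of the base grid described above can be pictured as a scan over the integer (u1,v1) positions inside the bounds given by the first-texture coordinates of the vertices, with the coordinates of a second texture interpolated along the way. The following sketch only illustrates the idea; all helpers and types are hypothetical, and a practical rasterizer would use an incremental, division-free scheme such as the one in the Barenbrug et al. article cited below.

    /* Sketch: traverse a base grid bounded by the first-texture coordinates of
       the primitive and emit interpolated second-texture coordinates.
       Primitive, bounding_box, inside_primitive, interpolate_tex2 and
       emit_sample are assumed, illustrative helpers. */
    typedef struct Primitive Primitive;
    void bounding_box(const Primitive *p, int *u_min, int *u_max, int *v_min, int *v_max);
    int  inside_primitive(const Primitive *p, int u, int v);
    void interpolate_tex2(const Primitive *p, int u, int v, float *u2, float *v2);
    void emit_sample(int u1, int v1, float u2, float v2);

    void traverse_base_grid(const Primitive *prim)
    {
        int u_min, u_max, v_min, v_max;
        bounding_box(prim, &u_min, &u_max, &v_min, &v_max);
        for (int v = v_min; v <= v_max; v++) {
            for (int u = u_min; u <= u_max; u++) {
                if (!inside_primitive(prim, u, v))
                    continue;                      /* outside the primitive         */
                float u2, v2;                      /* second sequence, interpolated */
                interpolate_tex2(prim, u, v, &u2, &v2);
                emit_sample(u, v, u2, v2);         /* handed to the color generator */
            }
        }
    }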
A model information providing unit is well known. A programmable vertex shading unit for use in a model information providing unit is for example described in more detail in the above-mentioned article of Lindholm et al. The model information providing unit MIU can be programmed via the OpenGL and Direct3D API. The computer graphics system according to the invention further comprises a rasterizer (RU) capable of generating a first sequence of texture sample coordinates for addressing samples of a first texture, which coincide with a base grid associated with the primitive, here a grid coinciding with the first texture. It is also capable of generating one or more sequences of interpolated values associated with the first sequence comprising a second sequence of coordinates for addressing samples of a second texture. The rasterizer RU is further capable of generating a first sequence of coordinates according to a dummy grid. This is relevant in the case that no texture is associated with the primitive, or if the texture is not suitable for a two-dimensional grid. This is the case, for example, for a texture described by a 1D pattern, which might for example be used to render a rainbow. Another example is a texture stored as a 3D (volumetric) pattern. The one or more sequences of interpolated values are associated with the first sequence of coordinates in that the rasterizer generates an interpolated value for each coordinate in the first sequence. The interpolated values may be generated at the same time that the first sequence of coordinates is generated, but alternatively may be generated afterwards.
A rasterizer is well known as such. A detailed description of a rasterizer is given in "Algorithms for Division Free Perspective Correct Rendering" by B. Barenbrug et al., pp. 7-13, Proceedings of Graphics Hardware 2000.
The computer graphics system according to the invention further comprises a color generator for assigning a color to said first sequence of coordinates using said appearance information related to the primitives. The color generator CG comprises a texture data unit TDU for assigning texture data to the texture sample coordinates. The texture data unit TDU is for example a texture synthesizer, which synthesizes a texture value for each coordinate. Otherwise it may be a memory in which predefined textures are stored. The textures may be stored in a compressed format. The memory may also contain multiple copies of the textures stored at a different scale. Known methods to implement this are for example the 3D and the 4D mipmap.
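A texture data unit that keeps predefined textures at several scales can be organized, for example, as a conventional mipmap pyramid. The layout below is merely one possible arrangement assumed for illustration, not the storage format prescribed by the embodiment.

    /* Sketch: a texture stored at multiple scales (a conventional mipmap pyramid)
       with a clamped nearest-texel fetch. All names are illustrative. */
    typedef struct { float r, g, b, a; } Texel;
    typedef struct { int width, height; Texel *data; } TextureLevel;
    typedef struct { int num_levels; TextureLevel *levels; } MipmappedTexture;

    Texel fetch_texel(const MipmappedTexture *t, int level, int u, int v)
    {
        const TextureLevel *l = &t->levels[level];
        if (u < 0) u = 0;
        if (u >= l->width)  u = l->width  - 1;
        if (v < 0) v = 0;
        if (v >= l->height) v = l->height - 1;
        return l->data[v * l->width + u];
    }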
The color generator CG further comprises a texture space resampler TSR (shown in more detail in Figure 5) which is arranged for providing output texture data TWu,v in response to texture sample coordinates (uf,vf) provided by the shading unit SU. In order to provide the output texture data TWu,v it generates texture sample coordinates (ui,vi) aligned with the grid of the second texture T2. Subsequently it fetches data Tu,v from the second texture T2 at those coordinates and resamples the fetched texture data Tu,v to the grid of the first texture T1. In this way texture maps which do not share the same grid can be combined. Contrary to the texture space resampler TSR known from the prior art, the texture space resampler TSR in the computer graphics system of the invention is driven with coordinates which correspond to a grid position on the first texture, and not with coordinates corresponding to a grid position on the display.
In practice an arbitrary number of texture maps, e.g. 8 or more, may be used to define the appearance of the primitives. These texture maps may be resampled sequentially, but alternatively the color generator may have more than one texture space resampler and more than one texture data unit in order to speed up the resampling process.
The color generator CG further comprises a shading unit SU for providing the color using said output texture data and the appearance information provided by the rasterizer. Apart from the texture data, the shading unit may use various data to provide the color, such as an interpolated diffuse color and a normal for calculating a contribution of specular reflection.
Subsequently the display space resampler DSR resamples the color assigned by the color generator to a representation in a grid associated with a display. This process of forward mapping the color to the display grid is preferably performed in two passes, wherein two 1D filtering operations are performed one after the other in mutually transverse directions. Alternatively however, the mapping to display coordinates could take place in a single 2D filtering operation. Forward mapping color data is described in detail in the aforementioned article "Resample hardware for 3D Graphics".
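The two-pass forward mapping can be illustrated with a single 1D pass that splats each color sample onto a display scanline with a small prefilter; a second, identical pass in the transverse direction would complete the mapping. The tent filter, the one-pixel half-width and the normalization via an accumulated weight are assumptions for this sketch; the filters actually used follow the cited article rather than this fragment.

    #include <math.h>

    typedef struct { float r, g, b; } Color3;

    /* Sketch: 1D forward resampling ("splatting") of color samples onto a
       display scanline using a tent prefilter with a half-width of one pixel.
       weight[] accumulates filter weights for later normalization. */
    void splat_1d(const float *sample_x, const Color3 *sample_c, int n_samples,
                  Color3 *line, float *weight, int line_len)
    {
        for (int i = 0; i < n_samples; i++) {
            int first = (int)ceilf(sample_x[i] - 1.0f);
            int last  = (int)floorf(sample_x[i] + 1.0f);
            for (int p = first; p <= last; p++) {
                if (p < 0 || p >= line_len)
                    continue;
                float w = 1.0f - fabsf((float)p - sample_x[i]);  /* tent weight */
                line[p].r += w * sample_c[i].r;
                line[p].g += w * sample_c[i].g;
                line[p].b += w * sample_c[i].b;
                weight[p] += w;
            }
        }
    }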
The data provided by the display space resampler DSR is processed by an antialiasing and hidden surface removal unit EAA&HSR. Details thereof can be found in the earlier filed patent application PHN020100, with filing number EP02075420.6. The output data of this unit can be provided to a framebuffer for display or, as indicated by the dashed line, to the texture data unit TDU for use in a later stage.
Figure 5 shows again the rasterizer RU, and the texture data unit TDU, as well as, in more detail, the texture space resampler TSR and the shading unit SU of the computer graphics system according to the invention.
In the embodiment shown in Figure 5 the texture space resampler TSR comprises a grid coordinate generator GCG for generating integer coordinates (ui,vi) from the coordinates (uf,vf). Although in the embodiment shown the textures are addressed by two-dimensional coordinates, it is alternatively possible to use higher-dimensional coordinates or one-dimensional coordinates instead. A selection element S1 controlled by a selection signal Sel allows either to forward the coordinates (uf,vf) unchanged to the texture data unit TDU, or to select the resampled coordinates (ui,vi).
The rasterizer RU is arranged for generating a regular sequence of coordinates (u1,v1) on a base grid. The range traversed by this sequence is determined by the data associated with the primitive. To that end the texture data unit TDU is coupled to the rasterizer RU, in this case via a selection element S3 of the shading unit SU and via a selection element S1 of the texture space resampler TSR.
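For illustration, a simple traversal of a regular base-grid sequence over a texture-space bounding box might look as follows; coverage tests against the primitive edges are omitted, and the callback-style interface is an assumption.

```cpp
#include <cmath>
#include <functional>

// Emit a regular sequence of base-grid coordinates (u1, v1) with spacing 'step'
// over the range determined by the data associated with the primitive.
void traverseBaseGrid(float uMin, float uMax, float vMin, float vMax, float step,
                      const std::function<void(float, float)>& emit) {
    for (float v1 = std::ceil(vMin / step) * step; v1 <= vMax; v1 += step)
        for (float u1 = std::ceil(uMin / step) * step; u1 <= uMax; u1 += step)
            emit(u1, v1);
}
```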
The rasterizer RU comprises a rasterization grid selection unit RGSU for selecting a base grid to be traversed by the first sequence of coordinates (u1,v1).
The base grid is preferably the grid of a further texture T1. In particular, where two or more textures T1, T2 are associated with the primitive, the rasterization grid selection unit RGSU selects the grid of the associated texture T1 which is available at the highest resolution.
However, if no suitable texture is available, a dummy grid is selected as the base grid. The rasterizer RU is capable of adapting the sampling distance step-wise as a function of the relation between a space associated with the primitive and the space associated with the display. This is the case where a texture is stored in the form of a 3D or 4D mipmap, and a perspective mapping causes the magnification of the texture to vary.
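One common way to realise such a step-wise adaptation, assumed here purely for illustration, is to derive a mipmap level from the local ratio between texture-space and display-space sampling distances; every level step doubles the sampling distance.

```cpp
#include <algorithm>
#include <cmath>

// Select a mipmap level from the number of texels that map onto one display pixel.
int selectMipLevel(float texelsPerPixel, int maxLevel) {
    int level = static_cast<int>(std::floor(std::log2(std::max(texelsPerPixel, 1.0f))));
    return std::min(std::max(level, 0), maxLevel);
}
```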
The rasterizer RU is further arranged to interpolate other data related to the primitive, such as coordinates of one or more further textures. The rasterizer RU provides the interpolated further texture coordinates (u2,v2) to the texture space resampler TSR. In case these interpolated further texture coordinates coincide with the grid of the second texture T2, these coordinates can be passed to the texture data unit TDU via the selection elements S3 and S1. In case, however, that the further texture coordinates (u2,v2) do not coincide, integer values (ui,vi), coinciding with the grid of the texture, can be calculated by the grid coordinate generator GCG. This is schematically shown in Figure 8B. The selection element S1 then selects the resampled texture coordinates (ui,vi) as the coordinates for addressing the texture data unit TDU. As shown in Figure 8B, the coordinate (u2,v2) is surrounded by four samples a-d of the second texture. The texture space resampler TSR fetches the corresponding texture data of the second texture T2 from the coordinates provided by the grid coordinate generator GCG and the filter FLT resamples these to the grid of the first texture T1. Resampling may take place for example by nearest neighbor approximation, in which case the filter FLT simply passes on the one value Tu,v generated by the TDU for the nearest texture coordinate (ui,vi) generated by the GCG as the output texture value TWu,v. Alternatively the filter may cause the selection element S2 to perform this function by selecting the texture data Tu,v provided by the texture data unit TDU instead of the output of the filter FLT. Alternatively, resampling may take place by interpolation, for example by bilinear interpolation. When using interpolation the addressed texture data Tu,v is weighted by a filter FLT which is controlled by the grid coordinate generator GCG. The value calculated by the filter FLT is provided via selection element S2 to the shading unit SU as the output texture value TWu,v. This mode is known as bilinear filtering. It is remarked that the texture space resampler TSR may calculate the output texture value TWu,v on the basis of more output coordinates (ui,vi).
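The bilinear mode can be sketched as follows, reusing the TextureDataUnit and GridCoords types introduced above: the four samples a-d surrounding (u2, v2) are fetched at grid-aligned coordinates and blended with the fractional offsets as weights. The function name bilinearFilter is an assumption.

```cpp
// Bilinear variant of the filter FLT.
Texel bilinearFilter(const TextureDataUnit& t2, float u2, float v2, int level) {
    GridCoords g = generateGridCoords(u2, v2);
    Texel a = t2.fetch(g.u0,     g.v0,     level);   // the four surrounding samples a-d
    Texel b = t2.fetch(g.u0 + 1, g.v0,     level);
    Texel c = t2.fetch(g.u0,     g.v0 + 1, level);
    Texel d = t2.fetch(g.u0 + 1, g.v0 + 1, level);

    auto lerp = [](const Texel& p, const Texel& q, float t) {
        return Texel{ p.r + t * (q.r - p.r), p.g + t * (q.g - p.g),
                      p.b + t * (q.b - p.b), p.a + t * (q.a - p.a) };
    };
    return lerp(lerp(a, b, g.fu), lerp(c, d, g.fu), g.fv);  // output texture value TWu,v
}
```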
In practice texture data is often stored in the form of a 3D mipmap. This may have the consequence that no sequence of sample coordinates can be found which coincides with the texture grid. However, the method described in PHNL010924, filed as IB02/05468, makes it possible to calculate 4D mipmap data on the fly from the 3D mipmap. This calculation, also based on bilinear interpolation, can be performed by the texture space resampler TSR.
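Purely as an assumed illustration of the idea (the exact on-the-fly scheme of PHNL010924 / IB02/05468 may differ), the sketch below derives a level with independent u and v resolutions, the defining property of a 4D mipmap, by reducing only the u axis of an existing 3D mipmap level.

```cpp
// Halve the u resolution of a 3D mipmap level while keeping its v resolution,
// yielding one 4D-mipmap-style level on the fly. Reuses MipLevel/Texel above.
MipLevel reduceUAxis(const MipLevel& src) {
    MipLevel dst;
    dst.width  = src.width / 2;
    dst.height = src.height;
    dst.texels.resize(static_cast<size_t>(dst.width) * dst.height);
    for (int v = 0; v < dst.height; ++v)
        for (int u = 0; u < dst.width; ++u) {
            Texel a = src.at(2 * u, v), b = src.at(2 * u + 1, v);
            dst.texels[static_cast<size_t>(v) * dst.width + u] =
                Texel{ 0.5f * (a.r + b.r), 0.5f * (a.g + b.g),
                       0.5f * (a.b + b.b), 0.5f * (a.a + b.a) };
        }
    return dst;
}
```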
The rasterizer RU in addition provides the interpolated color values Cip and the interpolated normal values Nip to the shading unit SU.
As shown in the figure, the shading unit comprises, apart from the selection element S3, a shading module SH and a programmable controller CTRL. As illustrated by dashed lines, the controller CTRL controls the switches S1, S2 and S3 and the shading module SH. The shading module SH makes it possible to calculate a color Cu,v in response to several input data, such as the interpolated normal Nip and the interpolated color value Cip from the rasterizer and the texture data TWu,v provided by the texture space resampler TSR, as well as environment data (such as information about the position and properties of light sources). The shading module SH may use well known shading functions, such as Phong shading, for that purpose.
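A Phong-style evaluation that such a shading module could perform is sketched below, combining the texture value TWu,v, the interpolated color Cip and the interpolated normal Nip with a single light source. The vector helpers, the single-light setup and the way the terms are combined are assumptions for illustration.

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  normalize(Vec3 v) {
    float len = std::sqrt(dot(v, v));
    return Vec3{ v.x / len, v.y / len, v.z / len };
}

Texel phongShade(Texel tw, Texel cip, Vec3 nip, Vec3 toLight, Vec3 toEye,
                 float ambient, float shininess) {
    Vec3  n  = normalize(nip);
    Vec3  l  = normalize(toLight);
    float nl = std::max(dot(n, l), 0.0f);                       // diffuse term
    Vec3  r  = { 2 * nl * n.x - l.x, 2 * nl * n.y - l.y,        // reflection of l about n
                 2 * nl * n.z - l.z };
    float spec = nl > 0.0f
               ? std::pow(std::max(dot(normalize(toEye), r), 0.0f), shininess)
               : 0.0f;
    auto ch = [&](float t, float c) { return (ambient + nl) * t * c + spec; };
    return Texel{ ch(tw.r, cip.r), ch(tw.g, cip.g), ch(tw.b, cip.b), 1.0f };
}
```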
Shading methods are described for example in: "The PixelFlow Shading System, a Shading Language on Graphics Hardware", by M. Olano and A. Lastra, in Proceedings of SIGGRAPH (July 1998), pp. 159-168. See also the Microsoft DirectX Graphics Programmer's Guide, DirectX 8.1 ed., Microsoft Developer's Network Library, 2001, and the book "Real-Time Shading", by M. Olano, J.C. Hart, W. Heidrich, M. McCool, A K Peters, Natick, Massachusetts, 2002.
The shading unit SU provides the output color value Cu,v to the display space resampler DSR, which resamples this value to the derived display coordinates. To that end the rasterizer RU may provide interpolated values for the display coordinates. As shown in Figure 5, an output of the shading module SH is coupled via the switching element S3 to the texture space resampler TSR.
The feedback facility enables special effects such as bumped environment mapping, by reusing the output texture value TWu,v to generate the coordinates of another texture, for example by adding these output texture values TWu,v to the input coordinates (ut,vt) or by using these output texture values TWu,v directly as feedback coordinates (uf,vf). Usually these feedback coordinates are not aligned with the grid. The grid coordinate generator GCG generates texture-grid-aligned values (ui,vi) from the coordinates (uf,vf).
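Such a dependent lookup could, for example, take the following form, reusing the bilinearFilter sketch above; the use of the red and green channels as offsets and the scale parameter are assumptions made only for this illustration.

```cpp
// Bumped environment mapping via the feedback path: an earlier output texture
// value perturbs the coordinates used to address an environment texture.
Texel bumpedEnvironmentLookup(const TextureDataUnit& envMap,
                              Texel bumpSample,          // earlier output TWu,v
                              float u, float v,          // input coordinates
                              float scale, int level) {
    float uf = u + scale * bumpSample.r;                 // feedback coordinates (uf, vf),
    float vf = v + scale * bumpSample.g;                 // usually not grid aligned
    return bilinearFilter(envMap, uf, vf, level);        // resampled to the texture grid
}
```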
The method for rendering a computer graphic image according to the invention is schematically illustrated in the flow chart of Figure 6. As illustrated therein the method comprises the following steps. In step S1 information is provided which represents a graphics model comprising a set of primitives. The information comprises at least geometrical information indicative of the shape of the primitives and appearance information indicative of the appearance of the primitives.
In step S2 a first sequence of coordinates is generated coinciding with a base grid associated with the primitive. In step S3 one or more sequences of interpolated values are generated which are associated with the first sequence, and which comprise a second sequence of coordinates for addressing samples of a texture. Step S3 may be executed subsequent to step S2 as shown in the flow chart, but may alternatively be executed in parallel with step S2. The base grid may be a dummy grid, or the grid of a further texture. In step S4 output texture data aligned with the base grid is obtained by generating coordinates aligned with the texture from the second sequence, fetching data of the texture at those coordinates and providing the output data as a function of the fetched data.
In step S5 a color is provided using said output texture data and the appearance information.
In step S6 the color so obtained is resampled to a representation in a grid associated with a display.
The operation of the color generator is described in more detail with reference to the flowchart of Figure 7. In step S11 it is determined whether the appearance of the primitive is determined by two or more textures. If this is the case a texture counter i is initialized at 0 in step S12.
Then in step S13 it is verified whether the grid of the current texture i coincides with the grid traversed by the sequence of texture sample coordinates. If this is the case program flow continues with step S14 and fetches a texture sample Tu,v at that coordinate. If the grid of the texture i does not coincide with the sequence of texture sample coordinates, a texture sample TWu,v is obtained by a resampling routine in step S15, which uses a filter (such as a bilinear probe, or a higher-order filter) to obtain an interpolated texture value from the texture values surrounding the texture sample coordinates. Alternatively it could simply obtain the texture value Tu,v at the nearest grid point of the texture i. Step S15 may include generation or modification of sample coordinates using earlier calculated texture data and/or using other currently available shading data, such as the interpolated color Cip and the interpolated normal Nip. In this way dependent texturing effects, such as bumped environment mapping, can be obtained. After step S14 or step S15 program flow continues with step S16, where the currently available shading data, such as the interpolated color Cip, the interpolated normal Nip and the texture data, is combined.
Step S16 is followed by step S17, where it is verified whether there are further textures associated with the primitive. If so, the texture counter is incremented and steps S13 up to and including S17 are repeated. If it is determined in step S17 that the last texture was processed, a combined color is calculated using the texture values TWu,v, the interpolated color Cip and other data, such as the interpolated normal Nip.
After the last texture of the primitive has been processed, the calculated color value Cu,v is used in step S18 as input value for the next processing stage, for example the forward filtering operation that resamples the calculated color value to display coordinates as is described with reference to Figure 4. Before or after the forward filtering operation one or more other processing steps may be performed, such as alpha-test, depth-test and stencil-test procedures.
If it was determined in step S11 that the appearance of the primitive is determined by fewer than two textures, step S19 is executed. Step S19 verifies whether there is exactly one texture. If this is the case the texture value of the present sample coordinate is retrieved in step S20. This step S20 can either straightforwardly retrieve a texture sample as in step S14, when the sample coordinate coincides with the grid of the texture, or, if the sample coordinate does not coincide with the texture grid, it may calculate a texture value analogous to the procedure in step S15. Subsequently program flow continues with step S21. If it is determined in step S19 that there is no texture associated with the primitive, control flow directly continues with step S21. In step S21 other color computations may take place, for example using a diffuse color Cip and an interpolated normal Nip, which is followed by step S18.
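The per-sample flow of Figure 7 for the multi-texture case can be summarised by the following sketch, which reuses the types introduced above. The helper names and the simple modulate-style combination in step S16 are assumptions; the patent leaves the combination to the shading unit.

```cpp
#include <vector>

struct TextureBinding {
    const TextureDataUnit* tdu;
    bool gridAligned;          // does its grid coincide with the traversed grid? (step S13)
    int  level;
};

Texel shadeSample(const std::vector<TextureBinding>& textures,
                  float u, float v, Texel cip) {
    Texel color = cip;                                   // interpolated diffuse color Cip
    for (const TextureBinding& t : textures) {
        Texel tw = t.gridAligned
                 ? t.tdu->fetch(static_cast<int>(u), static_cast<int>(v), t.level) // step S14
                 : bilinearFilter(*t.tdu, u, v, t.level);                          // step S15
        // Step S16: combine the texture value with the shading data gathered so far.
        color = Texel{ color.r * tw.r, color.g * tw.g, color.b * tw.b, color.a * tw.a };
    }
    return color;                                        // step S18: on to forward filtering
}
```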

Claims

CLAIMS:
1. Computer graphics system comprising - a model information providing unit (MIU) for providing information representing a set of graphics primitives, the information comprising at least geometrical information defining a shape of the primitives and appearance information defining an appearance of the primitives, - a rasterizer (RU) capable of generating a first sequence of coordinates ((u1,v1)) which coincide with a base grid associated with the primitive, and capable of generating one or more sequences of interpolated values associated with the first sequence comprising a second sequence of coordinates ((u2,v2)) for addressing samples of a texture (T2), - a color generator for assigning a color (Cu,v) to said first sequence of coordinates using said appearance information, the color generator comprising a texture data unit (TDU) for assigning texture data (Tu,v) to the texture coordinates and a texture space resampler (TSR) arranged for providing output texture data (TWu,v) by generating texture coordinates aligned with the grid of the texture (T2) from the second sequence of coordinates, fetching data from the texture (T2) at the generated texture coordinates and resampling the fetched texture data (Tu,v) to the base grid, - a shading unit (SU) capable of providing the color (Cu,v) using said output texture data and the appearance information provided by the rasterizer, - a display space resampler (DSR) for resampling the color (Cu,v) assigned by the color generator in the base grid to a representation in a grid associated with a display.
2. Computer graphics system, wherein the base grid is the grid of a further texture (T1).
3. Computer graphics system, wherein the base grid is a dummy grid.
4. Computer graphics system according to claim 1, characterized in that the rasterizer (RU) in addition is arranged for generating a sequence of coordinates in display space associated with the first sequence of texture coordinates ((u1,v1)).
5. Computer graphics system according to claim 1, characterized by a feedback facility (SH, S3, ICG, S1) for providing further texture coordinates ((uf,vf)) to the texture space resampler (TSR) in response to the output texture data (TWu,v).
6. Computer graphics system according to claim 1, characterized by a bypass facility (S3, S1) for enabling the rasterizer (RU) to directly provide the texture data unit (TDU) with texture coordinates ((u1,v1)).
7. Computer graphics system according to claim 1, characterized in that the rasterizer (RU) comprises a rasterization grid selection unit (RGSU) for selecting a grid to be traversed by the first sequence of texture coordinates ((u1,v1)).
8. Computer graphics system according to claim 7, characterized in that where two or more textures (T1, T2) are associated with the primitive the rasterization grid selection unit (RGSU) selects the grid of the associated texture (T1) which is available at the highest resolution.
9. Computer graphics system according to claim 1, characterized in that the rasterizer (RU) is capable of adapting the sampling distance step-wise as a function of the relation between a space associated with the primitive and the space associated with the display.
10. Method for rendering a computer graphic image comprising the steps of - providing information representing a graphics model comprising a set of primitives, the information comprising at least geometrical information indicative of the shape of the primitives and appearance information indicative of the appearance of the primitives, - generating a first sequence of coordinates coinciding with a base grid associated with the primitive, - generating one or more sequences of interpolated values associated with the first sequence comprising a second sequence of texture coordinates for addressing samples of a texture, - providing output texture data aligned with the base grid by generating texture coordinates aligned with the texture from the second sequence, fetching data of the texture at the generated texture coordinates and providing the output texture data as a function of the fetched data, - providing a color using said output texture data and the appearance information, - resampling the color so obtained to a representation in a grid associated with a display.
PCT/IB2004/050069 2003-02-13 2004-02-02 Computer graphics system and method for rendering a computer graphic image WO2004072907A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US10/545,064 US20060202990A1 (en) 2003-02-13 2004-02-02 Computer graphics system and method for rendering a computer graphic image
JP2006502556A JP2006517705A (en) 2003-02-13 2004-02-02 Computer graphics system and computer graphic image rendering method
EP04707272A EP1597705A1 (en) 2003-02-13 2004-02-02 Computer graphics system and method for rendering a computer graphic image

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP03100313.0 2003-02-13
EP03100313 2003-02-13

Publications (1)

Publication Number Publication Date
WO2004072907A1 true WO2004072907A1 (en) 2004-08-26

Family

ID=32865034

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2004/050069 WO2004072907A1 (en) 2003-02-13 2004-02-02 Computer graphics system and method for rendering a computer graphic image

Country Status (6)

Country Link
US (1) US20060202990A1 (en)
EP (1) EP1597705A1 (en)
JP (1) JP2006517705A (en)
KR (1) KR20050093863A (en)
CN (1) CN1748230A (en)
WO (1) WO2004072907A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100893637B1 (en) 2006-07-26 2009-04-17 엔비디아 코포레이션 Accellerated start tile search
WO2011030165A3 (en) * 2009-09-14 2011-04-28 Sony Computer Entertainment Europe Limited A method of determining the state of a tile based deferred rendering processor and apparatus thereof
CN102594494A (en) * 2012-01-11 2012-07-18 浙江工业大学 Intelligent terminal-oriented progressive network adaptive transmission method

Families Citing this family (9)

Publication number Priority date Publication date Assignee Title
JP2005332195A (en) * 2004-05-19 2005-12-02 Sony Computer Entertainment Inc Texture unit, image drawing apparatus, and texel transfer method
US7511717B1 (en) * 2005-07-15 2009-03-31 Nvidia Corporation Antialiasing using hybrid supersampling-multisampling
US20080037066A1 (en) * 2006-08-10 2008-02-14 Sauer Charles M Method and Apparatus for Providing Three-Dimensional Views of Printer Outputs
CN104025181B (en) * 2011-12-30 2016-03-23 英特尔公司 The block based on classification for uncoupling sampling postpones coloring system structure
US9743057B2 (en) * 2012-05-31 2017-08-22 Apple Inc. Systems and methods for lens shading correction
KR102059578B1 (en) * 2012-11-29 2019-12-27 삼성전자주식회사 Method and apparatus for processing primitive in 3 dimensional graphics rendering system
KR102101834B1 (en) 2013-10-08 2020-04-17 삼성전자 주식회사 Image processing apparatus and method
US9355489B2 (en) * 2013-11-14 2016-05-31 Intel Corporation Land grid array socket for electro-optical modules
US10262393B2 (en) * 2016-12-29 2019-04-16 Intel Corporation Multi-sample anti-aliasing (MSAA) memory bandwidth reduction for sparse sample per pixel utilization

Citations (2)

Publication number Priority date Publication date Assignee Title
WO2000004505A1 (en) * 1998-07-16 2000-01-27 The Research Foundation Of State University Of New York Apparatus and method for real-time volume processing and universal 3d rendering
US6297833B1 (en) * 1999-03-23 2001-10-02 Nvidia Corporation Bump mapping in a computer graphics pipeline

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US6954204B2 (en) * 2002-07-18 2005-10-11 Nvidia Corporation Programmable graphics system and method using flexible, high-precision data formats

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
WO2000004505A1 (en) * 1998-07-16 2000-01-27 The Research Foundation Of State University Of New York Apparatus and method for real-time volume processing and universal 3d rendering
US6297833B1 (en) * 1999-03-23 2001-10-02 Nvidia Corporation Bump mapping in a computer graphics pipeline

Non-Patent Citations (1)

Title
MEINDS K ET AL: "Resample Hardware for 3D Graphics", EUROGRAPHICS WORKSHOP ON GRAPHICS HARDWARE, XX, XX, 2002, pages 17 - 27, XP002259239 *

Cited By (4)

Publication number Priority date Publication date Assignee Title
KR100893637B1 (en) 2006-07-26 2009-04-17 엔비디아 코포레이션 Accellerated start tile search
WO2011030165A3 (en) * 2009-09-14 2011-04-28 Sony Computer Entertainment Europe Limited A method of determining the state of a tile based deferred rendering processor and apparatus thereof
US9342430B2 (en) 2009-09-14 2016-05-17 Sony Computer Entertainment Europe Limited Method of determining the state of a tile based deferred rendering processor and apparatus thereof
CN102594494A (en) * 2012-01-11 2012-07-18 浙江工业大学 Intelligent terminal-oriented progressive network adaptive transmission method

Also Published As

Publication number Publication date
EP1597705A1 (en) 2005-11-23
JP2006517705A (en) 2006-07-27
US20060202990A1 (en) 2006-09-14
CN1748230A (en) 2006-03-15
KR20050093863A (en) 2005-09-23

Similar Documents

Publication Publication Date Title
US5949424A (en) Method, system, and computer program product for bump mapping in tangent space
US5880736A (en) Method system and computer program product for shading
US7532220B2 (en) System for adaptive resampling in texture mapping
US7446780B1 (en) Temporal antialiasing in a multisampling graphics pipeline
US20060158451A1 (en) Selection of a mipmap level
WO2006095481A1 (en) Texture processing device, drawing processing device, and texture processing method
US20060202990A1 (en) Computer graphics system and method for rendering a computer graphic image
EP1489560A1 (en) Primitive edge pre-filtering
EP1616299B1 (en) Computer graphics processor and method for generating a computer graphics image
EP1634248A1 (en) Adaptive image interpolation for volume rendering
US20050088450A1 (en) Texture roaming via dimension elevation
EP1759355B1 (en) A forward texture mapping 3d graphics system
EP1766584A2 (en) Inverse texture mapping 3d graphics system
US6894696B2 (en) Method and apparatus for providing refractive transparency in selected areas of video displays
US20070097141A1 (en) Primitive edge pre-filtering
KR0153664B1 (en) 3d object generator in a graphic system
Angel et al. An interactive introduction to OpenGL programming
Angel et al. An interactive introduction to OpenGL and OpenGL ES programming
WO2010041215A1 (en) Geometry primitive shading graphics system
Carr et al. Real-Time Procedural Solid Texturing

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): BW GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 2004707272

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 10545064

Country of ref document: US

WWE Wipo information: entry into national phase

Ref document number: 1020057014774

Country of ref document: KR

WWE Wipo information: entry into national phase

Ref document number: 2004804093X

Country of ref document: CN

Ref document number: 2006502556

Country of ref document: JP

WWP Wipo information: published in national office

Ref document number: 1020057014774

Country of ref document: KR

WWP Wipo information: published in national office

Ref document number: 2004707272

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 10545064

Country of ref document: US