WO2010041215A1 - Geometry primitive shading graphics system - Google Patents


Info

Publication number
WO2010041215A1
Authority
WO
WIPO (PCT)
Prior art keywords
geometry, texture, frame, space, primitives
Application number
PCT/IB2009/054424
Other languages
French (fr)
Inventor
Kornelis Meinds
Original Assignee
Nxp B.V.
Application filed by Nxp B.V. filed Critical Nxp B.V.
Publication of WO2010041215A1 publication Critical patent/WO2010041215A1/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/005: General purpose rendering architectures
    • G06T 15/04: Texture mapping

Definitions

  • the invention relates to a geometry primitive shading graphics processor, a graphics adapter comprising the graphics processor, a computer comprising the graphics processor, a display apparatus comprising the graphics processor, and a method of mapping geometry data onto a screen space.
  • a rendering system is known from WO 03/017204 A2.
  • a rendering process of a complex virtual urban environment containing several thousands of dynamic animated objects like models of humans, animals or vehicles is simplified.
  • The objects can be viewed from different angles, either by the motion or rotation of the objects themselves or by moving the viewpoint of rendering. For each of these discrete angles there is a pre-computed image of the object being stored.
  • the objects could be also animated, for example a human model could be made walking or running.
  • These animations consist, for each viewing direction, of a set of frames: a sequence of images that shows the animation from a single direction. For each of the discrete viewing directions, this set of object animation frames is also pre-computed and stored.
  • Object animation frames are glued onto a polygon or geometry primitive which is then placed at a certain location in the virtual 3D world such that it can be rendered. This is done for thousands of objects like the human models. If two or more of these thousands of objects happen to use both the same viewing direction and the same animation frame, then there is a single associated image that only has to be loaded once instead of two or more times.
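As an illustrative sketch (not part of the cited disclosure), the load-once reuse described above can be modeled as a small image cache keyed on viewing direction and animation frame; the loader callback and the direction/frame indices are assumed names:

```python
class AnimationFrameCache:
    """Load each pre-computed (viewing direction, animation frame) image
    at most once, no matter how many objects reference it."""

    def __init__(self, loader):
        self.loader = loader  # hypothetical callback: (direction, frame) -> image
        self.cache = {}
        self.loads = 0        # counts actual image loads, for illustration

    def get(self, direction, frame):
        key = (direction, frame)
        if key not in self.cache:
            # Only the first object needing this image triggers a load.
            self.cache[key] = self.loader(direction, frame)
            self.loads += 1
        return self.cache[key]
```

Two objects sharing the same viewing direction and animation frame then cause a single load rather than two.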
  • An inverse texture mapping 3D graphics system is known from EP 1 765 84.
  • The inverse texture mapping 3D graphics processor maps a 3D object (or primitive) onto a screen space.
  • a texture memory stores texel intensities of texture space grid positions.
  • A plurality of screen space rasterizers determines pixel grid positions within "same" polygons or geometry primitives within different screen spaces at a plurality of corresponding different display instants during a temporal interval between successive sample instants of geometric data of the 3D model.
  • the screen space polygons for successive temporal display instants have different positions in the screen space dependent on motion information of the 3D model.
  • a plurality of corresponding mappers map the pixel grid positions of the screen space polygons at the different display instants to texture space positions.
  • a texture space resampler determines texel intensities at the texture space positions from the texel grid intensities of the texture space grid positions stored in the texture memory.
  • a texture cache temporarily stores, for every texture space polygon, the texel intensities required by the texture space resampler during the temporal interval for all the screen space polygons associated with a same texture space polygon.
  • a plurality of corresponding pixel shaders determine at said different display instants, pixel intensities from the texel intensities.
  • a forward texture mapping 3D graphic system is known from EP 1 759 355.
  • the forward texture mapping 3D graphics processor comprises a texture memory to store texel intensities at texture space grid positions.
  • A plurality of mappers maps, for a polygon, a particular texel with a same texture space grid position having a texel intensity to corresponding screen space positions at corresponding different instants during a same temporal interval occurring between two successive samplings of geometric data of the polygon.
  • the corresponding screen space positions depend on motion information of the polygon.
  • Corresponding screen space resamplers determine a contribution of the texel intensities at the corresponding screen space positions to surrounding screen grid positions at the corresponding different instants.
  • The system known from EP 1 765 84 or from EP 1 759 355 uses only a single sampling of the geometry, but with motion data, to produce multiple frames at successive display instants.
  • A disadvantage of this approach is that it requires the application to send this motion data together with the geometry data, which requires extending current graphics APIs (like OpenGL), and therefore requires the applications to change their programs to produce the motion data. Even if the changes to the applications are small, this can be a considerable drawback, because using these applications "as is" will not enable the frame-rate-up-conversion mode with the associated advantages as discussed in EP 1 765 84 and EP 1 759 355.
  • the present invention is based on the thought to provide a geometry primitive shading processor for mapping geometry data supplied by a graphics application onto screen space, which comprises a texture memory for storing texel intensities of texture space grid positions and a multiple frame rendering module being adapted to render geometry primitives in multiple frames by determining for each frame pixel intensities by means of the texel intensities supplied by the texture memory.
  • The process of determining for each frame pixel intensities of screen space grid positions on the basis of the texel intensities could be an output driven process, which is known as an inverse texture mapping process, or could also be an input driven process, which is known as a forward texture mapping process.
  • a capture geometry module which has a frame-geometry buffer.
  • This capture geometry module is adapted to capture at least two geometry samplings of the geometry data supplied by the graphics application and to store them in the frame-geometry buffer. Then, between those at least two geometry samplings, corresponding geometry primitives are related, i.e. geometry primitives having the same coordinates of vertices in model space, the same coordinates of vertices in texture space, the same texture binding number or numbers and, if applicable, the same shader program binding number. These corresponding geometry primitives are then fed into multiple pipelines corresponding to multiple frame instances of the multiple frame rendering module. Thus the same or corresponding geometry primitives or polygons are rasterized for multiple frames simultaneously. This way the texture data, which is regularly a data traffic bottleneck in 3D systems, only has to be fetched once for the texture mapping process.
  • The multiple frame rendering module is a multiple inverse texture mapping module comprising a plurality of vertex shaders or vertex transformation and lighting (T & L) units for determining vertex positions and vertex intensities (or other values) for the corresponding geometry primitives at a plurality of corresponding frames, a plurality of screen space rasterizers for determining pixel grid positions within the corresponding geometry primitives at a plurality of corresponding frames, a plurality of corresponding mappers for mapping the pixel grid positions of the corresponding geometry primitives at the different frames to texture space positions, a plurality of texture space resamplers for determining texel intensities at the texture space positions from the texel grid intensities of the texture space grid positions stored in the texture memory, a texture cache for temporarily storing, for every group of corresponding geometry primitives, the texel intensities required by the texture space resamplers, and a plurality of corresponding pixel shaders for determining, at each frame, pixel intensities from the texel intensities.
  • Intensity should also be read as covering values that are not intensity or color values, e.g. normal vector modulation values; i.e. intensities/colors can also be rendered from texel values (which may or may not be intensities/colors) by performing further calculations.
  • the multiple frame rendering module is a multiple forward texture mapping module comprising a vertex shader or a vertex transformation and lighting (T & L) unit for determining the vertex intensities having a part that determines a plurality of vertex positions for the corresponding geometry primitives at the multiple frame instances, a texture space rasterizer for determining texture grid positions within the texture space geometry primitive in the texture space corresponding to the plurality of the corresponding screen space geometry primitives at the multiple frame instances, a pixel shader for determining intensities or values at the texture grid, a plurality of mappers for mapping a particular texel within the texture space geometry primitive in the texture space to associated screen space positions of the corresponding geometry primitives at the multiple frame instances, and a plurality of screen space resamplers for determining a contribution of the texel intensities or values at the associated screen space positions to surrounding screen grid positions at the multiple frame instances.
  • T & L: vertex transformation and lighting
  • The multiple frame rendering module, being a multiple inverse texture mapping module, comprises a plurality of vertex T & L units for transforming vertex coordinates of the corresponding geometry primitives in world space to the screen space to obtain screen space coordinates and for performing light calculations to determine an intensity per vertex.
  • In the second embodiment, being a multiple forward texture mapping system, only one single vertex T & L unit is needed: the light calculations to determine an intensity per vertex are performed only once for the multiple frames, but the transformation of the vertex coordinates of the corresponding geometry primitives is still done multiple times.
  • the multiple frame rendering module further comprises a multitude of hidden surface removal units and a multitude of frame buffers for storing pixel intensities being determined at the screen grid positions.
  • Each frame buffer stores a rendered image for a particular one of the frame instants.
  • the geometry primitive shading graphics processor is part of a graphics adapter, a computer or a display apparatus.
  • The object of the present invention is further solved by a method of mapping geometry data supplied by a graphics application onto a screen space, wherein the method comprises the steps of capturing at least two geometry samplings of the geometry data by means of a capture geometry module having a frame-geometry buffer, storing the at least two captured frame instances of geometry samplings in the frame-geometry buffer, relating corresponding geometry primitives of the at least two geometry samplings stored in the frame-geometry buffer, and rendering the geometry primitives in multiple pipelines corresponding to multiple frame instances by determining for each frame pixel intensities of screen space grid positions by means of texel intensities of texture space grid positions, wherein the same texel intensities are fetched a single time to be used for multiple successive frames for rendering the corresponding geometry primitives.
  • the rendering step according to the present invention could be a multiple inverse texture mapping process or a multiple forward texture mapping process.
  • Fig. 1 shows a mapping of a textured 3D object to the screen
  • Fig. 2 illustrates a sequence of successive frame instances, in which the geometry samplings at the frame instances comprise corresponding geometry primitives
  • Fig. 3 shows a block diagram of a prior art inverse texture mapping 3D graphics system
  • Fig. 4 illustrates the operation of the prior art inverse texture mapping system
  • Fig. 5 shows a block diagram of a first embodiment of the present invention using a multiple inverse texture mapping module
  • Fig. 6 illustrates the operation of the first embodiment of the present invention as shown in fig. 5,
  • Fig. 7 shows a block diagram of a prior art forward texture mapping 3D graphics system
  • Fig. 8 illustrates the operation of the prior art forward texture mapping system
  • Fig. 9 shows a block diagram of a second embodiment of the present invention using a multiple forward texture mapping module
  • Fig. 10 illustrates the operation of the second embodiment of the present invention as shown in fig. 9,
  • Fig. 11 shows a computer comprising the geometry primitive shading graphics processor according to the present invention.
  • Fig. 12 shows a display apparatus, comprising the geometry primitive shading graphic processor of the invention.
  • Fig. 1 illustrates the mapping of a textured 3D object WO in world space on a display screen DS.
  • The object may also be available in other 3D spaces such as model or eye space; in the following, all these spaces are referred to as world space.
  • An object WO which may be a three-dimensional textured object such as the cube shown, is projected on the two-dimensional display screen DS.
  • a surface structure or texture defines the appearance of the three-dimensional object WO.
  • the polygon A has a texture TA and the polygon B has a texture TB.
  • The polygons A and B are, with a more general term, also referred to as graphics primitives.
  • The projection onto the display screen DS of the object WO is obtained by defining an eye or camera position ECP within the world space.
  • Fig. 1 shows how the polygon SGP projected on the screen DS is obtained from the corresponding polygon A.
  • The polygon SGP in the screen space SSP is defined by its vertex coordinates in the screen space SSP. It is only the projection of the geometry of the polygon A.
  • The texture TA of the polygon A is not directly projected from the world space onto the screen space SSP.
  • the different textures of the world space object WO are stored in a texture memory TM (see Figs. 3, 5, 7 and 9) or texture space TSP defined by the coordinate axes u and v.
  • Fig. 1 shows that the polygon A has a texture TA which is available in the texture space TSP in the area indicated by TA, while the polygon B has another texture TB which is available in the texture space TSP in the area indicated by TB.
  • The polygon A is associated with the texture area TA to obtain a polygon TGP such that the texture within the polygon TGP is attached to the polygon A.
  • a perspective transformation PPT between the texture space TSP and the screen space SSP projects the texture of the polygon TGP on the corresponding polygon SGP.
  • This process is also referred to as texture mapping.
  • the textures are not all present in a global texture space, but every texture defines its own texture space TSP.
  • the textures in the texture space TSP are stored in a texture memory TM for a discrete number of positions in the texture space TSP.
  • these discrete positions are the grid positions in the texture space TSP determined by integer values of u and v. These discrete grid positions are further referred to as grid texture positions or grid texture coordinates.
  • Positions in the texture space which are not limited to the grid positions are referred to as positions in the texture space TSP or as positions in the u,v space TSP.
  • the positions in the u,v space may be represented by floating point numbers.
  • The image to be displayed is stored in a frame buffer memory. This image is commonly referred to as a frame.
  • these discrete positions are the grid positions in the screen space SSP determined by integer values of x and y. These discrete grid positions are referred to as grid screen positions or grid screen coordinates.
  • Positions in the x,y space which are not limited to the grid positions are referred to as positions in the x,y space or as positions in the screen space SSP. These positions in the x,y space may be represented by floating point numbers.
  • Fig. 2 illustrates an example of different successive frames 1-3, showing three examples of graphics primitives or geometry primitives that have already been rendered by means of texture information contained in the texture memory TM.
  • the first example of a geometry primitive is the front side surface C of a wall, which is static (in view of its vertices in world space) in the three successive frames 1-3.
  • Another example of a geometry primitive is the front surface D of a car body, which is moving from a right side to a left side in the successive frames 1-3 (it has different vertices in world space), but is static in view of the car model (thus, having the same vertices in model space).
  • The third example of a geometry primitive is a front wheel surface E of the car body, which is a model of its own and thus has the same model coordinates in the three frames 1-3, while being translated and rotated in world space.
  • A graphics application calculates and sends the transformations (e.g. translation, scaling, rotation, perspective projection or a combination of these) that the vertex T & L unit VER will execute on different 3D objects in world space, and feeds this geometry data to a geometry primitive shading graphics processor.
  • This geometry data contains the vertices of all geometry primitives in texture space, the vertices of all geometry primitives in model space, the texture binding number (or numbers in case of multiple textures) of the textures being applied to the different geometry primitives, and further, if applicable, the shader program binding number.
  • This applies to 3D graphics APIs such as OpenGL or Direct3D, or to 2D graphics APIs such as OpenVG.
  • the successive frames 1-3 are then rendered by the geometry primitive shading graphics processor.
  • the geometry data comprising at least two geometry samplings (corresponding to the wall, the car body and the wheels) is captured for different successive frame instants (for example as shown in Fig. 2) by means of a capture geometry module CGM (see Figs. 5 and 9) and stored in a frame-geometry buffer FGB (Figs. 5 and 9).
  • corresponding geometry primitives are related in the successive frame instances, wherein corresponding geometry primitives are identified using the following information:
  • The texture binding number (or numbers in case of multiple textures) or shader program binding numbers, if applicable, should be the same, the coordinates of the vertices in model space should be the same, and the coordinates of the vertices in texture space should be the same. If all these criteria are fulfilled, geometry primitives in successive frame instances are determined as corresponding geometry primitives, which is in the example of Fig. 2 the case for the corresponding geometry primitives C, D and E.
  • the corresponding geometry primitives are grouped and fed into a multiple frame rendering module, which performs the rendering process of corresponding geometry primitives simultaneously, wherein the texture data required for the rendering process of the corresponding geometry primitives has only to be fetched once from the texture memory TM (as shown in Figs. 5 and 9).
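As an illustrative sketch (not part of the patent text itself), the matching criteria above can be expressed as grouping primitives from successive geometry samplings by an identification key; the dictionary field names are assumed for illustration:

```python
def primitive_key(prim):
    """Identification key for 'corresponding' geometry primitives: the same
    model-space vertices, texture-space vertices, texture binding number(s)
    and, if applicable, shader program binding number."""
    return (tuple(prim["model_verts"]),
            tuple(prim["tex_verts"]),
            tuple(prim["texture_bindings"]),
            prim.get("shader_binding"))

def relate_primitives(frame_samplings):
    """Group primitives across successive samplings; each group of
    corresponding primitives can then be fed into the multiple-frame
    pipelines together, so its texture data is fetched once."""
    groups = {}
    for frame_idx, prims in enumerate(frame_samplings):
        for prim in prims:
            groups.setdefault(primitive_key(prim), []).append((frame_idx, prim))
    return groups
```

A static primitive such as the wall surface C of Fig. 2 then appears in one group with one member per captured frame instance.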
  • the multiple frame rendering module is, in a first embodiment of the present invention (which is shown in Fig. 5) a multiple inverse texture mapping module MITM, and in a second embodiment of the present invention, a multiple forward texture mapping module MFTM (as shown in Fig. 9).
  • MITM: multiple inverse texture mapping module
  • MFTM: multiple forward texture mapping module
  • Fig. 3 shows a block diagram of the prior art inverse texture mapping 3D graphics system.
  • A vertex transformation and lighting unit VER, further also referred to as the vertex T&L unit, transforms the vertex coordinates of the polygon A in model space to the screen space SSP to obtain screen space coordinates x_v, y_v of the vertices of the screen space polygon SGP.
  • The vertex T&L unit further performs light calculations to determine an intensity (also referred to as color) per vertex. If a texture TA is to be applied to a screen space polygon SGP, the vertex T&L unit receives texture space coordinates u_v, v_v from the application.
  • The vertex T&L unit calculates the associated screen space coordinates x_v, y_v (see Fig. 4) of the vertices of the screen space polygons SGP such that their position in the screen space SSP is known.
  • The positions of the vertices will generally not coincide with the screen space grid positions or texture space grid positions.
  • The screen space rasterizer SRAS determines the grid positions x_g, y_g of the pixels which are positioned within the screen space polygon SGP, which is determined by the screen space coordinates x_v, y_v of its vertices.
  • The rasterizer SRAS may include a so-called rasterizer setup which initializes temporal variables required by the rasterizer SRAS for efficient processing based on interpolation of the vertex attributes.
  • The mapper iMAP maps the screen space grid positions x_g, y_g to corresponding texture space positions u,v in the texture space TSP, see Fig. 4. Generally, these texel positions u,v will not coincide with texture space grid positions u_g, v_g.
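For a planar primitive, the screen-to-texture mapping performed by the mapper iMAP can be sketched as a 3x3 projective (homogeneous) transform; this is an illustrative model, not the patent's specified implementation, and the matrix H is an assumed representation derived from the primitive's vertices:

```python
def map_screen_to_texture(H, xg, yg):
    """Map a screen grid position (xg, yg) to a texture space position (u, v)
    via a 3x3 projective transform H. The perspective divide by w means the
    result generally falls between texture grid positions (u_g, v_g)."""
    u = H[0][0] * xg + H[0][1] * yg + H[0][2]
    v = H[1][0] * xg + H[1][1] * yg + H[1][2]
    w = H[2][0] * xg + H[2][1] * yg + H[2][2]
    return u / w, v / w
```

With the identity matrix the mapping is a no-op; a non-unit bottom-right entry models the perspective foreshortening of the texture across the polygon.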
  • The pixel shader PS determines the intensity PSI(x_g, y_g) (also referred to as color) of a pixel with the screen space coordinates x_g, y_g.
  • The pixel shader can use a single resampled texture intensity or value at u,v, or multiple intensities or values from multiple resampled textures in combination, to determine the pixel intensity or value.
  • The pixel shader PS receives a set of attributes ATR per pixel, the grid screen coordinates x_g, y_g of the pixel and the corresponding texture coordinates u,v.
  • The texture coordinates u,v are used to address texture data TI(u_g, v_g) on grid texture positions u_g, v_g stored in the texture memory TM via the texture space resampler TSR.
  • The pixel shader PS may modify the texture coordinate data u,v and may combine several texture maps to determine the value for a single pixel. It may also perform shading without the use of texture data but on the basis of a formula such as the well-known Phong shading or other procedural shading techniques.
  • The texture space resampler TSR determines the intensity PI(u,v), which is equal to the value PI(x_g, y_g) of the pixel on the screen space grid position (x_g, y_g) mapped to the texture space coordinate (u,v).
  • The texture data corresponding to the texture space grid position u_g, v_g is indicated by TI(u_g, v_g).
  • The texel intensities TI(u_g, v_g) for texture space grid positions u_g, v_g are stored in the texture memory TM.
  • The texture space resampler TSR determines the intensity PI(u,v) by weighting and accumulating the texel intensities TI(u_g, v_g) of texels with texture space grid coordinates u_g, v_g which have to contribute to the intensity PI(u,v).
  • The texture space resampler TSR determines the intensity PI(u,v) at the texture space position u,v by filtering the texel intensities on texture space grid positions u_g, v_g surrounding the texture space position u,v. For example, a bilinear interpolation using the four texture space grid positions u_g, v_g surrounding the texture space position u,v may be used.
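The bilinear case can be sketched as follows (an illustrative example, not the patent's implementation; the texture is represented as a row-major list of rows, and clamping at the borders is an assumed border policy):

```python
import math

def bilinear_sample(texture, u, v):
    """Intensity PI(u, v): weighted sum of the four texel intensities
    TI(u_g, v_g) surrounding the non-grid texture position (u, v)."""
    h, w = len(texture), len(texture[0])

    def tex(ug, vg):
        # Clamp grid coordinates to the texture borders (assumed policy).
        return texture[min(max(vg, 0), h - 1)][min(max(ug, 0), w - 1)]

    u0, v0 = int(math.floor(u)), int(math.floor(v))
    fu, fv = u - u0, v - v0  # fractional distances to the lower grid corner
    return ((1 - fu) * (1 - fv) * tex(u0, v0)
            + fu * (1 - fv) * tex(u0 + 1, v0)
            + (1 - fu) * fv * tex(u0, v0 + 1)
            + fu * fv * tex(u0 + 1, v0 + 1))
```

The four weights sum to one, so a constant texture yields the same constant intensity at any (u, v).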
  • The resulting intensity PI(u,v) at the position u,v is used by the pixel shader PS to determine the pixel intensity PSI(x_g, y_g) on the pixel grid position x_g, y_g.
  • The pixel shader might repeatedly obtain several PI(u,v) values from different textures when multiple textures are being used to calculate the intensity or value PSI(x_g, y_g) on the pixel grid position x_g, y_g.
  • The hidden surface removal unit HSR usually includes a Z-buffer which enables determination of the visible colors on a per pixel basis.
  • The depth value z of a produced pixel value PSI(x_g, y_g) is tested against the depth value stored in the Z-buffer at the same pixel screen coordinate x_g, y_g (thus on the screen grid).
  • The pixel intensity or color PIP(x_g, y_g) is written into the frame buffer FB and the Z-buffer is updated.
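The per-pixel depth test can be sketched as below (an illustrative example; the convention that a smaller z value is nearer to the viewer is an assumption, as is the row-major buffer layout):

```python
def depth_test_and_write(z_buffer, frame_buffer, xg, yg, z, intensity):
    """Hidden surface removal on the screen grid: the produced pixel value
    replaces the stored one only if its depth is nearer (smaller z); the
    Z-buffer is then updated alongside the frame buffer."""
    if z < z_buffer[yg][xg]:
        z_buffer[yg][xg] = z
        frame_buffer[yg][xg] = intensity
        return True   # fragment was visible and written
    return False      # fragment hidden behind the stored surface
```

Initializing the Z-buffer to infinity makes the first fragment at every pixel pass the test.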
  • the image to be displayed IM is read from the frame buffer FB.
  • Fig. 4 illustrates the operation of the prior art inverse texture mapping system.
  • the left diagram of Fig. 4 shows the screen space polygon SGP in the screen space SSP.
  • One of the vertices of the polygon SGP is indicated by the screen space position x_v, y_v, which usually does not coincide with the screen space grid positions x_g, y_g.
  • The screen space grid positions x_g, y_g are the positions which have integer values for x and y.
  • The image to be displayed is determined by the intensities (color and brightness) PIP(x_g, y_g) of the pixels which are positioned on the screen space grid positions x_g, y_g.
  • The rasterizer SRAS determines the screen space grid positions x_g, y_g within the polygon SGP.
  • The right diagram of Fig. 4 shows the texture space polygon TGP in the texture space TSP.
  • One of the vertices of the texture space polygon TGP is indicated by the texture space position u_v, v_v, which usually does not coincide with the texture space grid positions u_g, v_g.
  • The texture space grid positions u_g, v_g are the positions which have integer values for u and v.
  • The intensities of the texels TI(u_g, v_g) are stored in the texture memory TM for these texture space grid positions u_g, v_g.
  • A known technique which uses these different resolution textures is called MIP-mapping.
  • The mapper iMAP maps the screen space grid coordinates x_g, y_g to corresponding texture space positions u,v in the texture space.
  • The intensity at a texture space position u,v is determined by filtering. For example, the intensity at the texture space position u,v which is, or contributes to, the intensity of the pixel at the screen space grid position x_g, y_g, is determined as a weighted sum of intensities at surrounding texture space grid positions u_g, v_g.
  • Fig. 5 shows a block diagram of the first embodiment of the geometry primitive shading graphics processor in accordance to the present invention.
  • The basic structure of the multiple inverse texture mapping module MITM shown in Fig. 5 is identical to the known inverse texture mapping module ITM shown in Fig. 3. The difference is that a plurality of pipelines formed by the transform and lighting module VER_i, the rasterizer SRAS_i, the mapper iMAP_i, the pixel shader PS_i, the hidden surface removal unit HSR_i, and the frame buffer FB_i is present instead of the single pipeline of Fig. 3.
  • The same texture space polygon or texture space geometry primitive TGP (as shown in Fig. 4) is loaded into the texture cache once for rendering the corresponding geometry primitives SGP_i at successive frame instances t_i.
  • the same (maximum) pixel fill rate can be achieved (neglecting the overhead on the borders of the screens) as without using this multiple frame technique, but the texture data traffic between the texture cache TC and the texture space resampler TSR is lowered by a factor equal to the number of pipelines corresponding to successive frame instances being processed in parallel.
  • Fig. 6 illustrates the operation of the first embodiment of the geometry primitive shading graphics processor using a multiple inverse texture mapping module according to the present invention as shown in Fig. 5.
  • The left diagram of Fig. 6 shows the screen space polygon of one of the corresponding geometry primitives SGP_i at a frame instant t_1, wherein all frame instances t_i are rendered in parallel on the basis of the geometry primitive TGP in the texture space, as described for a single rendering process in Fig. 4.
  • a vertex transformation and lighting unit VER transforms the vertex coordinates of the polygon A (Fig. 1) in world space to the screen space SSP and performs light calculations to determine an intensity per vertex, as already described with regard to Fig. 3
  • A texture space rasterizer TRAS determines the grid positions u_g, v_g of the texels in the texture space TSP within the polygon determined by the texture space coordinates u_v, v_v of the vertices of the polygon.
  • the texture space rasterizer TRAS could be directly followed by a mapper to map every grid texture coordinate to screen space positions.
  • This is not required for the pixel shader PS, which now operates on grid texture coordinates, and thus the (u_g, v_g) to (x, y) mapping can be delayed in the processing pipeline until the screen space resampler SSR, which does require the (x, y) grid screen positions.
  • the pixel shader PS determines the intensity TSI (also referred to as color) of a texel.
  • the pixel shader PS is actually a texel shader. But, it is referred to as the pixel shader PS because its functionality is identical to the pixel shader PS used in the inverse texture mapping system.
  • the pixel shader PS may be a dedicated or a programmable unit.
  • The pixel shader PS receives a set of attributes per texel, including the grid texture coordinates u_g, v_g of the texel.
  • The grid texture coordinates u_g, v_g are used to address texture data TI on grid texture positions stored in the texture memory TM via the optional texture space resampler TSR.
  • The texture data corresponding to the texel grid position u_g, v_g is indicated by TI(u_g, v_g).
  • The pixel shader PS may modify the texture coordinate data u_g, v_g and may apply and combine several texture maps on a same texel. It may also perform shading without the use of texture data on the basis of a formula such as the well-known Gouraud and Phong shading techniques.
  • The pixel shader PS supplies the texel intensity TSI(u_g, v_g) to the mapper MAP.
  • Since a forward texture mapping system can traverse on a texture grid, it may bypass the texture space resampler TSR.
  • The texture space resampler TSR may still be relevant in that, if multiple textures are applied to the same polygon, it might be used for obtaining texture values at in-between texture positions on a secondary or higher order texture while the texture space rasterizer TRAS traverses on the grid of the first texture.
  • The mapper MAP maps the texel position u_g, v_g, of which the texel intensity TSI(u_g, v_g) has been determined, to a screen space position x,y.
  • The screen space resampler SSR uses the texel intensity TSI(u_g, v_g), which is mapped to the in-between screen space position x,y, to determine the contribution to the pixel intensities PI(x_g, y_g) in an area around (x,y).
  • In a forward texture mapping system, a single texel intensity is added to several pixels on grid positions x_g, y_g in a weighted fashion.
  • A texel value TSI(u_g, v_g) mapped to screen space is splatted to several pixels (which inherently are positioned on the x_g, y_g grid).
  • the hidden surface removal unit HSR usually includes a Z-buffer which enables determination of the visible colors on a per pixel basis.
  • the depth value z of a produced pixel value is tested against the depth value of the one stored in the Z-buffer at the same pixel screen coordinate xg,yg (thus on the screen grid).
  • the pixel intensity or color PIP(xg,yg) is written into the frame buffer FB and the Z-buffer is updated.
  • the image to be displayed IM is read from the frame buffer FB.
  • Fig. 8 illustrates the operation of the forward texture mapping system.
  • the left diagram of Fig. 8 shows the texel space polygon TGP in the texture space.
  • the texture space rasterizer TRAS rasterizes the texture in the texture space TSP to obtain the texture space grid positions ug,vg within the polygon TGP or within an area extended just around the polygon TGP.
  • the texture space rasterizer TRAS may operate on several resolutions of a texture (MIP-maps), and may switch between MIP-maps multiple times across the polygon.
  • the shader PS retrieves the intensities on the texture space grid positions ug,vg.
  • the mapper MAP maps the texture grid positions ug,vg to x,y positions in the screen space. Usually, these mapped x,y positions do not coincide with the screen space grid positions xg,yg.
  • the grid positions xg,yg are the positions which have integer values for x and y.
  • the diagram on the right side of Fig. 8 shows the screen space polygon SGP in the screen space SSP. The vertices of the polygon SGP are indicated by the screen positions xv,yv, which usually do not coincide with the screen space grid positions xg,yg.
  • the vertex transformation and lighting unit VER, the texture space rasterizer TRAS, the pixel shader PS, the optional texture space resampler TSR and the texture memory TM are identical to the corresponding items in Fig. 7. Also their operation is identical to that elucidated with respect to Fig. 7.
  • the mapper MAP, the screen space resampler SSR, the hidden surface removal HSR and the frame buffer FB of Fig. 7 are replaced by a plurality of mappers MAPj, a plurality of screen space resamplers SSRj, a plurality of hidden surface removal units HSRj, and a plurality of frame buffers FBj.
  • Each of these items operates in the same manner as elucidated with respect to Fig. 7.
  • the coordinates xvj,yvj of the multitude of corresponding geometry primitives in screen space are produced for the multiple frame instances by the vertex T & L unit VER, but the lighting (or shading) of the vertices is only done once.
  • the other units, the texture space rasterizer TRAS and the pixel shader PS, only forward this data and then feed it into a plurality of mappers MAPj, which map the texture grid coordinates ug,vg to screen space coordinates xj,yj of the corresponding geometry primitives SGPj at different frame instances tj.
  • the mappings are then supplied by a multitude of pipelines, the number of which is equal to the number of corresponding geometry primitives in the different frame instances, to a multitude of screen space resamplers SSRj, a multitude of hidden surface removal units HSRj and frame buffers FBj, wherein each item of the multitude of pipelines operates in the same manner as indicated with respect to Fig. 7.
  • a particular one of the pipelines provides, at a particular frame instance, the screen space image of a particular geometry primitive having corresponding geometry primitives in successive frame instances.
  • the multiple items MAPj, SSRj, HSRj, FBj and partially VER may be hardware which is present multiple times, or the same hardware may be used in a time multiplexing mode, or a combination of these two possibilities may be implemented.
  • the time multiplexing of the screen space resampler SSRj function depends on its implementation. With a one-pass resampler, using a 2D filter, no states have to be stored and time multiplexing is easy. However, the one-pass resampler requires more bandwidth to the frame buffer (or tile buffer in a tile-based rendering system). In a two-pass resampler, which uses two 1D resamplers, the state of the vertical resampler has to be stored for each sample of an input line. This requires a couple of line memories with a length equal to the frame buffer width. If tile-based rendering is used, however, this length is equal to the width of the tile.
  • Fig. 10 illustrates the operation of the second embodiment of the geometry primitive processor as shown in Fig. 9.
  • the diagrams on the left side of Figs. 8 and 10 are identical.
  • the left side diagram shows the single texture space polygon TGP.
  • the right side diagram of Fig. 10 shows one of the corresponding geometry primitives SGPj.
  • the polygons SGPj in the different frame instances tj are mapped from the same texture space polygon TGP.
  • the polygon SGPj shows the screen position at the frame instant tj, wherein the texture position ug,vg is mapped to the position xj,yj at the instant tj.
  • the intensity information of the texel at the texture position ug,vg is splatted in a known manner to the surrounding screen space grid coordinates xgj,ygj, respectively.
  • the reduction of compute resources in the texture space rasterizer TRAS and the pixel shader PS is equal to the number of frame instances tj being rendered simultaneously.
  • the approach of the present invention is very advantageous in view of games on mobile devices or other applications like 3D car navigation, in which some additional latency is not a problem.
  • the more critical compute resource and bandwidth requirements are drastically reduced, especially since the multiple forward texture mapping module has only one texture space rasterizer TRAS and only one pixel shader PS.
  • Fig. 11 shows a computer comprising the multiple frame rendering system.
  • the computer PC comprises a processor 3, a graphics adapter 2 and a memory 4.
  • the processor 3 is suitably programmed to supply input data IT to the graphics adapter 2.
  • the processor 3 communicates with the memory 4 via the bus Dl.
  • the graphics adapter 2 comprises the geometry primitive shading graphics system 1.
  • the graphics adapter is a module that is plugged into a suitable slot (for example an AGP slot).
  • the graphics adapter comprises its own memory (for example the texture memory TM and the frame buffer FB).
  • the graphics adapter may use part of the memory 4 of the computer PC, in which case the graphics adapter needs to communicate with the memory 4 via the bus D2 or via the processor 3 and the bus Dl.
  • the graphics adapter 2 supplies the output image OI via a standard interface to the display apparatus DA.
  • the display apparatus may be any suitable display, such as, for example, a cathode ray tube, a liquid crystal display, or any other matrix display.
  • the computer PC and the display DA need not be separate units which communicate via a standard interface but may be combined in a single apparatus, such as, for example, a personal digital assistant (PDA or pocket PC) or any other mobile device with a display for displaying images.
  • Fig. 12 shows a display apparatus comprising the multiple frame rendering system.
  • the display apparatus DA comprises the pipeline 1 which receives the input data (geometry plus related data) IT and supplies the output image OI to a signal processing circuit 11.
  • the signal processing circuit 11 processes the output image OI to obtain a drive signal DS for the display 12.


Abstract

The invention relates to a geometry primitive shading graphics processor for mapping geometry data (WO) supplied by a graphics application onto a screen space (SSP), the graphics processor comprising a texture memory (TM) for storing texel intensities TI(ug,vg) on texture space (TSP) grid positions (ug,vg), a multiple frame rendering module (MITM, MFTM) being adapted to render geometry primitives in multiple frames for successive instants (tj) by determining for each frame pixel intensities or values PIP(xgj,ygj) of screen space grid positions (xgj,ygj) by means of the texel intensities or values TI(ug,vg) supplied by the texture memory (TM), characterized by a capture geometry module (CGM) having a frame-geometry buffer (FGB), the capture geometry module (CGM) being able to capture at least two frame instants of geometry samplings of the geometry data and to store the at least two captured geometry samplings in the frame-geometry buffer (FGB), to relate corresponding geometry primitives (SGPj) of the at least two geometry samplings stored in the frame-geometry buffer (FGB), and to feed the corresponding geometry primitives (SGPj) into multiple pipelines corresponding to multiple frame instances of the multiple frame rendering module, wherein the same texel intensities or values TI(ug,vg) are fetched a single time to be used for multiple successive frames for rendering the corresponding geometry primitives (SGPj).

Description

GEOMETRY PRIMITIVE SHADING GRAPHICS SYSTEM
FIELD OF THE INVENTION
The invention relates to a geometry primitive shading graphics processor, a graphics adapter comprising the graphics processor, a computer comprising the graphics processor, a display apparatus comprising the graphics processor, and a method of mapping geometry data onto a screen space.
BACKGROUND OF THE INVENTION
A rendering system is known from WO 03/017204 A2. Herein, a rendering process of a complex virtual urban environment containing several thousands of dynamic animated objects like models of humans, animals or vehicles is simplified. The objects can be viewed from different angles, either by the motion or rotation of the objects themselves or by moving the view point of rendering. For each of these discrete angles, a pre-computed image of the object is stored. The objects could also be animated; for example, a human model could be made to walk or run. These animations consist, for each viewing direction, of a set of frames, which are a sequence of images that shows the animation from a single direction. For each of the discrete viewing directions, this set of object animation frames is also pre-computed and stored. These object animation frames are glued onto a polygon or geometry primitive which is then placed at a certain location in the virtual 3D world such that it can be rendered. This is done for thousands of objects like the human models. If two or more of these thousands of objects happen to use both the same viewing direction and the same animation frame, then there is a single associated image that only has to be loaded once instead of two or more times.
An inverse texture mapping 3D graphics system is known from EP 1 765 84. Herein, the inverse texture mapping 3D graphics processor maps a 3D object (or primitive) onto a screen space. A texture memory stores texel intensities of texture space grid positions. A plurality of screen space rasterizers determines pixel grid positions of "same" polygons or geometry primitives within different screen spaces at a plurality of corresponding different display instants during a temporal interval between successive sample instants of geometric data of the 3D model. The screen space polygons for successive temporal display instants have different positions in the screen space dependent on motion information of the 3D model. A plurality of corresponding mappers map the pixel grid positions of the screen space polygons at the different display instants to texture space positions. A texture space resampler determines texel intensities at the texture space positions from the texel grid intensities of the texture space grid positions stored in the texture memory. A texture cache temporarily stores, for every texture space polygon, the texel intensities required by the texture space resampler during the temporal interval for all the screen space polygons associated with a same texture space polygon. A plurality of corresponding pixel shaders determine, at said different display instants, pixel intensities from the texel intensities.
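The output-driven (inverse) loop described above can be sketched as follows. This is an illustrative Python sketch, not the patent's implementation: `map_to_texture`, `resample`, and `shade` are assumed stand-ins for the mapper, the texture space resampler, and the pixel shader.

```python
def inverse_texture_map(screen_grid_positions, map_to_texture, resample, shade):
    """Output-driven: iterate over the pixel grid positions inside the
    screen-space polygon, map each back to texture space, resample the
    texture there, and shade the result to a pixel intensity."""
    frame = {}
    for (xg, yg) in screen_grid_positions:
        u, v = map_to_texture(xg, yg)   # mapper: generally off the texel grid
        pi = resample(u, v)             # texture space resampler TSR
        frame[(xg, yg)] = shade(pi)     # pixel shader PS
    return frame
```

Note that the traversal is driven by the output (screen) grid, so every pixel inside the polygon is produced exactly once; the texture is read at arbitrary, generally non-grid (u,v) positions.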
A forward texture mapping 3D graphics system is known from EP 1 759 355. Herein, the forward texture mapping 3D graphics processor comprises a texture memory to store texel intensities at texture space grid positions. A plurality of mappers map a particular texel of a polygon, having a same texture space grid position and a texel intensity, to corresponding screen space positions at corresponding different instants during a same temporal interval occurring between two successive samplings of geometric data of the polygon. The corresponding screen space positions depend on motion information of the polygon. Corresponding screen space resamplers determine a contribution of the texel intensities at the corresponding screen space positions to surrounding screen grid positions at the corresponding different instants.
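The input-driven (forward) variant can be sketched similarly. Here the splat onto the surrounding screen grid positions uses bilinear weights, which is one common choice rather than the specific filter the patent family prescribes; all names are illustrative.

```python
import math

def forward_texture_map(texel_grid_positions, texture, map_to_screen, shade):
    """Input-driven: traverse the texel grid of the polygon, shade each texel,
    map it to a (generally off-grid) screen position, and splat its intensity
    onto the surrounding pixel grid positions in a weighted fashion."""
    frame = {}   # accumulated pixel intensities, keyed by (xg, yg)
    for (ug, vg) in texel_grid_positions:
        tsi = shade(texture[(ug, vg)])       # texel intensity TSI(ug,vg)
        x, y = map_to_screen(ug, vg)         # off-grid screen position
        x0, y0 = math.floor(x), math.floor(y)
        fx, fy = x - x0, y - y0
        # splat onto the four surrounding grid pixels with bilinear weights
        for (xg, yg, w) in ((x0,     y0,     (1 - fx) * (1 - fy)),
                            (x0 + 1, y0,     fx * (1 - fy)),
                            (x0,     y0 + 1, (1 - fx) * fy),
                            (x0 + 1, y0 + 1, fx * fy)):
            frame[(xg, yg)] = frame.get((xg, yg), 0.0) + w * tsi
    return frame
```

The weights of each splat sum to one, so the total intensity of a texel is conserved across the pixels it contributes to.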
The approach known from EP 1 765 84 or from EP 1 759 355 uses only a single sampling of the geometry, but with motion data, to produce multiple frames at successive display instants. A disadvantage of this approach is that it requires the application to send this motion data together with the geometry data, which requires an extension of current graphics APIs (like OpenGL), and therefore it requires the applications to change their programs for producing the motion data. Even if the changes to the applications are small, this can be a considerable drawback because using these applications "as is" will not enable the frame-rate-up-conversion mode with the associated advantages as discussed in EP 1 765 84 and EP 1 759 355.
OBJECT AND SUMMARY OF THE INVENTION
It is an object of the present invention to provide a geometry primitive shading graphics system, in which the texture data bandwidth requirements are lowered and the required compute resources are reduced.
This object is obtained by the features of the independent claims.
In particular, the present invention is based on the thought to provide a geometry primitive shading processor for mapping geometry data supplied by a graphics application onto screen space, which comprises a texture memory for storing texel intensities of texture space grid positions and a multiple frame rendering module being adapted to render geometry primitives in multiple frames by determining for each frame pixel intensities by means of the texel intensities supplied by the texture memory. Herein, the process of determining for each frame pixel intensities of screen space grid positions on the basis of the texel intensities could be an output driven process, which is known as an inverse texture mapping process, or could also be an input driven process, which is known as a forward texture mapping process.
To reduce the texture data traffic between the texture memory and the multiple frame rendering module, a capture geometry module according to the present invention is provided, which has a frame-geometry buffer. This capture geometry module is adapted to capture at least two geometry samplings of the geometry data supplied by the graphics application and to store the at least two captured geometry samplings in the frame-geometry buffer. Then, between those at least two geometry samplings, corresponding geometry primitives, which are geometry primitives having the same coordinates of vertices in model space, the same coordinates of vertices in texture space, the same texture binding number or numbers, and which, if applicable, have the same shader program binding number, are related. These corresponding geometry primitives are then fed into multiple pipelines corresponding to multiple frame instances of the multiple frame rendering module. Thus, the same or corresponding geometry primitives or polygons are rasterized for multiple frames simultaneously. This way the texture data, which is regularly a data traffic bottleneck in 3D systems, only has to be fetched once for the texture mapping process.
In a first embodiment, the multiple frame rendering module according to the present invention is a multiple inverse texture mapping module comprising a plurality of vertex shaders or vertex transformation and lighting (T & L) units for determining vertex positions and vertex intensities (or other values) for the corresponding geometry primitives at a plurality of corresponding frames, a plurality of screen space rasterizers for determining pixel grid positions within the corresponding geometry primitives at a plurality of corresponding frames, a plurality of corresponding mappers for mapping the pixel grid positions of the corresponding geometry primitives at the different frames to texture space positions, a plurality of texture space resamplers for determining texel intensities at the texture space positions from the texel grid intensities of the texture space grid positions stored in the texture memory, a texture cache for temporarily storing, for every group of corresponding geometry primitives, the texel intensities required by the texture space resampler, and a plurality of corresponding pixel shaders for determining, at each frame, pixel intensities or values at the screen grid positions from the texel intensities or values. It should be noted that the term "intensity" as used in the claims of the present invention should also be read as covering values that are not intensity or color values, but other values, e.g. normal vector modulation values, i.e. intensities/colors can also be rendered from texel values (that could be intensities/colors or not) by performing further calculations.
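The role of the texture cache, fetching each texel from texture memory a single time even though several per-frame pipelines request it, can be illustrated with a minimal sketch. The class and the fetch counter are invented for illustration only.

```python
class TextureCache:
    """Caches texel fetches so that a texel needed by several frame
    pipelines is read from texture memory only once (illustrative)."""

    def __init__(self, texture_memory):
        self.mem = texture_memory   # dict keyed by (ug, vg)
        self.cache = {}
        self.fetches = 0            # counts real texture-memory accesses

    def fetch(self, ug, vg):
        if (ug, vg) not in self.cache:
            self.cache[(ug, vg)] = self.mem[(ug, vg)]
            self.fetches += 1       # only a cache miss touches memory
        return self.cache[(ug, vg)]
```

Rendering the same primitive for several frame instances through one shared cache then costs one memory fetch per unique texel, not one per texel per frame.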
In a second embodiment of the present invention, the multiple frame rendering module is a multiple forward texture mapping module comprising a vertex shader or a vertex transformation and lighting (T & L) unit for determining the vertex intensities having a part that determines a plurality of vertex positions for the corresponding geometry primitives at the multiple frame instances, a texture space rasterizer for determining texture grid positions within the texture space geometry primitive in the texture space corresponding to the plurality of the corresponding screen space geometry primitives at the multiple frame instances, a pixel shader for determining intensities or values at the texture grid, a plurality of mappers for mapping a particular texel within the texture space geometry primitive in the texture space to associated screen space positions of the corresponding geometry primitives at the multiple frame instances, and a plurality of screen space resamplers for determining a contribution of the texel intensities or values at the associated screen space positions to surrounding screen grid positions at the multiple frame instances. Thus, since only one module of a vertex shader, a texture space rasterizer, and a pixel shader is used for rendering corresponding geometry primitives at the multiple frame instances, lower computational resources are required.
Preferably, the multiple frame rendering module according to the first embodiment of the present invention, being a multiple inverse texture mapping module, comprises a plurality of vertex T & L units for transforming vertex coordinates of the corresponding geometry primitives in world space to the screen space to obtain screen space coordinates and for performing light calculations to determine an intensity per vertex. For the second embodiment, being a multiple forward texture mapping system, only one single vertex T & L unit is needed, where the light calculations to determine an intensity per vertex are performed only once for the multiple frames, but where the transformation of the vertex coordinates of the corresponding geometry primitives is still done multiple times.
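The split described for the forward-mapping case, lighting computed once but the coordinate transformation run once per frame instance, can be sketched as follows. This is a toy 2D version with invented names, not the patent's vertex T & L unit.

```python
def apply_matrix(m, v):
    # 2x2 matrix times a 2D vertex: enough to illustrate the split
    return (m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1])

def transform_and_light(vertices, frame_matrices, light):
    """Per-vertex lighting is computed once, while the coordinate
    transformation runs once per frame instance."""
    colors = [light(v) for v in vertices]          # lighting: once
    frames = []
    for m in frame_matrices:                       # transform: per frame
        frames.append([apply_matrix(m, v) for v in vertices])
    return colors, frames
```

Because the lighting result is shared across all frame instances of the corresponding primitives, the per-vertex shading cost does not grow with the number of frames rendered simultaneously.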
In addition, it is preferred that the multiple frame rendering module further comprises a multitude of hidden surface removal units and a multitude of frame buffers for storing pixel intensities being determined at the screen grid positions. Each frame buffer stores a rendered image for a particular one of the frame instants. Thus, by reading out and displaying all the frame buffers after rendering a multitude of corresponding geometry primitives, the image containing a multitude of geometry primitives could be obtained, wherein the required compute resources could be reduced by grouping the processing for the multiple frame instances together. In a further preferred embodiment of the present invention, the geometry primitive shading graphics processor is part of a graphics adapter, a computer or a display apparatus.
The object of the present invention is further solved by a method of mapping geometry data supplied by a graphics application onto a screen space, wherein the method comprises the steps of capturing at least two geometry samplings of the geometry data by means of a capture geometry module having a frame-geometry buffer, storing the at least two captured frame instances of geometry samplings in the frame-geometry buffer, relating corresponding geometry primitives of the at least two geometry samplings stored in the frame-geometry buffer, and rendering the geometry primitives in multiple pipelines corresponding to multiple frame instances by determining for each frame pixel intensities of screen space grid positions by means of texel intensities of texture space grid positions, wherein the same texel intensities are fetched a single time to be used for multiple successive frames for rendering the corresponding geometry primitives. Herein, the rendering step according to the present invention could be a multiple inverse texture mapping process or a multiple forward texture mapping process.
BRIEF DESCRIPTION OF THE DRAWINGS
The above and other objects, features and other advantages of the present invention will be more clearly understood from the following detailed description taken in conjunction with the accompanying drawings. The invention will now be described in greater detail hereinafter, by way of non-limiting examples, with reference to the embodiments shown in the drawings.
Fig. 1 shows a mapping of a textured 3D object to the screen,
Fig. 2 illustrates a sequence of successive frame instances, in which geometry samplings at frame instances comprise corresponding geometry primitives,
Fig. 3 shows a block diagram of a prior art inverse texture mapping 3D graphics system,
Fig. 4 illustrates the operation of the prior art inverse texture mapping system,
Fig. 5 shows a block diagram of a first embodiment of the present invention using a multiple inverse texture mapping module,
Fig. 6 illustrates the operation of the first embodiment of the present invention as shown in fig. 5,
Fig. 7 shows a block diagram of a prior art forward texture mapping 3D graphics system,
Fig. 8 illustrates the operation of the prior art forward texture mapping system,
Fig. 9 shows a block diagram of a second embodiment of the present invention using a multiple forward texture mapping module,
Fig. 10 illustrates the operation of the second embodiment of the present invention as shown in fig. 9,
Fig. 11 shows a computer comprising the geometry primitive shading graphics processor according to the present invention, and
Fig. 12 shows a display apparatus, comprising the geometry primitive shading graphic processor of the invention.
DESCRIPTION OF EMBODIMENTS
Fig. 1 illustrates the mapping of a textured 3D object WO in world space on a display screen DS. Instead of the world space, the object may also be available in other 3D spaces such as model or eye space; in the following, all these spaces are referred to as world space. An object WO, which may be a three-dimensional textured object such as the cube shown, is projected on the two-dimensional display screen DS. A surface structure or texture defines the appearance of the three-dimensional object WO. In Fig. 1 the polygon A has a texture TA and the polygon B has a texture TB. The polygons A and B are, with a more general term, also referred to as graphics primitives. The projection onto the display screen DS of the object WO is obtained by defining an eye or camera position ECP within the world space. Fig. 1 shows how the polygon SGP projected on the screen DS is obtained from the corresponding polygon A. The polygon SGP in the screen space SSP is defined by its vertex coordinates in the screen space SSP. It is only the projection of the geometry of the polygon A which is used to determine the geometry of the polygon SGP. Usually, it suffices to know the vertices of the polygon A and the projection to determine the vertices of the polygon SGP.
The texture TA of the polygon A is not directly projected from the world space onto the screen space SSP. The different textures of the world space object WO are stored in a texture memory TM (see Figs. 3, 5, 7 and 9) or texture space TSP defined by the coordinate axes u and v. For example, Fig. 1 shows that the polygon A has a texture TA which is available in the texture space TSP in the area indicated by TA, while the polygon B has another texture TB which is available in the texture space TSP in the area indicated by TB. The polygon A is associated with the texture space TA to obtain a polygon TGP such that the texture within the polygon TGP is attached on the polygon A. A perspective transformation PPT between the texture space TSP and the screen space SSP projects the texture of the polygon TGP on the corresponding polygon SGP. This process is also referred to as texture mapping. Usually, the textures are not all present in a global texture space, but every texture defines its own texture space TSP. It has to be noted that the textures in the texture space TSP are stored in a texture memory TM for a discrete number of positions in the texture space TSP. Usually these discrete positions are the grid positions in the texture space TSP determined by integer values of u and v. These discrete grid positions are further referred to as grid texture positions or grid texture coordinates. Positions in the texture space which are not limited to the grid positions are referred to as positions in the texture space TSP or as positions in the u,v space TSP. The positions in the u,v space may be represented by floating point numbers. In the same manner, the image to be displayed is stored in a frame buffer memory. This image is commonly referred to as a frame. Again, only a discrete number of positions in the x,y space or screen space SSP is available. Usually, these discrete positions are the grid positions in the screen space SSP determined by integer values of x and y. 
These discrete grid positions are referred to as grid screen positions or grid screen coordinates. Positions in the x,y space which are not limited to the grid positions are referred to as positions in the x,y space or as positions in the screen space SSP. These positions in the x,y space may be represented by floating point numbers. Fig. 2 illustrates an example of different successive frames 1-3, which show three examples of graphics primitives or geometry primitives already rendered by means of texture information contained in the texture memory TM.
The first example of a geometry primitive is the front side surface C of a wall, which is static (in view of its vertices in world space) in the three successive frames 1-3. Another example of a geometry primitive is the front surface D of a car body, which is moving from the right side to the left side in the successive frames 1-3 (it has different vertices in world space), but is static in view of the car model (thus having the same vertices in model space). The third example of a geometry primitive is a front wheel surface E of the car body, which is a model of its own and thus has the same model coordinates in the three frames 1-3, being however translated and rotated in world space.
Before the rendering process is performed, a graphics application calculates and sends the transformations (e.g. translation, scaling, rotation, perspective projection or a combination of these) that the vertex T & L unit VER will execute on different 3D objects in world space and feeds this geometry data to a geometry primitive shading graphics processor. This geometry data contains the vertices of all geometry primitives in texture space, the vertices of all geometry primitives in model space, the texture binding number (or numbers in case of multiple textures) of the textures being applied to the different geometry primitives, and further, if applicable, the shader program binding number. On the basis of this geometry data information supplied by the graphics application, which produces calls to 3D graphics APIs such as OpenGL or Direct3D or to 2D graphics APIs such as OpenVG, the successive frames 1-3 are then rendered by the geometry primitive shading graphics processor.
According to the present invention, the geometry data comprising at least two geometry samplings (corresponding to the wall, the car body and the wheels) is captured for different successive frame instants (for example as shown in Fig. 2) by means of a capture geometry module CGM (see Figs. 5 and 9) and stored in a frame-geometry buffer FGB (Figs. 5 and 9). After storing the different geometry samplings, corresponding geometry primitives are related in the successive frame instances, wherein corresponding geometry primitives are identified using the following information: The texture binding number (or numbers in case of multiple textures) or shader program binding numbers, if applicable, should be the same, the coordinates of the vertices in model space should be the same and the coordinates of the vertices in texture space should be the same. If all these criteria are fulfilled, geometry primitives in successive frame instances are determined as corresponding geometry primitives, which is in the example of Fig. 2 the case for the corresponding geometry primitives C, D and E.
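The matching criteria above can be sketched as a key-based lookup. This is a minimal illustration: the `Primitive` fields mirror the geometry data listed earlier (model-space vertices, texture-space vertices, texture binding number, shader program binding number), and all names are assumptions rather than the patent's own data structures.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Primitive:
    model_vertices: tuple     # vertex coordinates in model space
    texture_vertices: tuple   # vertex coordinates in texture space
    texture_binding: int      # texture binding number(s)
    shader_binding: int = 0   # shader program binding number, if applicable

def correspondence_key(p):
    """Primitives in successive frames correspond when model-space vertices,
    texture-space vertices, and binding numbers all match."""
    return (p.model_vertices, p.texture_vertices,
            p.texture_binding, p.shader_binding)

def relate(sampling_a, sampling_b):
    """Pair each primitive of one captured geometry sampling with its
    corresponding primitive in the next sampling, if any."""
    index = {correspondence_key(p): p for p in sampling_b}
    return [(p, index[correspondence_key(p)]) for p in sampling_a
            if correspondence_key(p) in index]
```

A primitive whose texture binding differs (for example, the car body D versus the wall C in Fig. 2) produces a different key and is therefore not paired.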
Thereafter, the corresponding geometry primitives are grouped and fed into a multiple frame rendering module, which performs the rendering process of corresponding geometry primitives simultaneously, wherein the texture data required for the rendering process of the corresponding geometry primitives has only to be fetched once from the texture memory TM (as shown in Figs. 5 and 9).
The multiple frame rendering module is, in a first embodiment of the present invention (which is shown in Fig. 5) a multiple inverse texture mapping module MITM, and in a second embodiment of the present invention, a multiple forward texture mapping module MFTM (as shown in Fig. 9). The two embodiments of the present invention will be now described in detail in the following.
For a better understanding of the first embodiment of the present invention, a prior art inverse texture mapping 3D graphics system is discussed first with regard to the polygon A of Fig. 1. It should, however, be noted that this could also be applied to the geometry primitives C, D, E of Fig. 2.
Fig. 3 shows a block diagram of the prior art inverse texture mapping 3D graphics system. A vertex transformation and lighting unit VER, further also referred to as the vertex T&L unit, transforms the vertex coordinates of the polygon A in model space to the screen space SSP to obtain screen space coordinates xv, yv of the vertices of the screen space polygon SGP. The vertex T&L unit further performs light calculations to determine an intensity (also referred to as color) per vertex. If a texture TA is to be applied to a screen space polygon SGP, the vertex T&L unit receives texture space coordinates uv,vv from the application. The vertex T&L unit calculates the associated screen space coordinates xv,yv (see Fig. 4) of the vertices of the screen space polygons SGP such that the position thereof in the screen space SSP is known. Usually, the positions of the vertices will not coincide with the screen space grid positions or texture space grid positions.
The screen space rasterizer SRAS determines the grid positions xg,yg of the pixels which are positioned within the screen space polygon SGP, which is determined by the screen space coordinates xv,yv of its vertices. The rasterizer SRAS may include a so-called rasterizer setup which initializes temporary variables required by the rasterizer SRAS for efficient processing based on interpolation of the vertex attributes.
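As an illustration of this rasterization step, the following sketch enumerates the screen grid positions xg,yg lying inside a triangle given by its (non-grid) vertex positions. The function name and the edge-function test are assumptions for the sake of the example, not part of the described system:

```python
import math

def rasterize_triangle(v0, v1, v2):
    # Enumerate the screen grid positions (xg,yg) inside the triangle
    # with (non-grid) vertex positions v0, v1, v2, using edge functions;
    # points exactly on an edge are counted as inside.
    def edge(a, b, p):
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
    xs = [v[0] for v in (v0, v1, v2)]
    ys = [v[1] for v in (v0, v1, v2)]
    inside = []
    for yg in range(math.floor(min(ys)), math.ceil(max(ys)) + 1):
        for xg in range(math.floor(min(xs)), math.ceil(max(xs)) + 1):
            w = [edge(v1, v2, (xg, yg)),
                 edge(v2, v0, (xg, yg)),
                 edge(v0, v1, (xg, yg))]
            # Same sign for all three edge functions means inside,
            # irrespective of the triangle's winding order.
            if all(e >= 0 for e in w) or all(e <= 0 for e in w):
                inside.append((xg, yg))
    return inside
```

A rasterizer setup stage would typically precompute the edge-function coefficients once per primitive instead of re-evaluating them per pixel.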
The mapper iMAP maps the screen space grid positions xg,yg to corresponding texture space positions u,v in the texture space TSP, see Fig. 4. Generally, these texel positions u,v will not coincide with texture space grid positions ug,vg.
The pixel shader PS determines the intensity PSI(xg,yg) (also referred to as color) of a pixel with the screen space coordinates xg,yg. The pixel shader can use a single resampled texture intensity or value at u,v, or combine multiple intensities or values from multiple resampled textures, to determine the pixel intensity or value. The pixel shader PS receives a set of attributes ATR per pixel, the grid screen coordinates xg,yg of the pixel and the corresponding texture coordinates u,v. The texture coordinates u,v are used to address texture data TI(ug,vg) on grid texture positions ug,vg stored in the texture memory TM via the texture space resampler TSR. The pixel shader PS may modify the texture coordinate data u,v and may combine several texture maps to determine the value for a single pixel. It may also perform shading without the use of texture data, but on the basis of a formula such as the well-known Phong shading or other procedural shading techniques.
The texture space resampler TSR determines the intensity PI(u,v), which equals the value PI(xg,yg) of the pixel on the screen space grid position (xg,yg) mapped to the texture space coordinate (u,v). The texture data corresponding to the texture space grid position ug,vg is indicated by TI(ug,vg). The texel intensities TI(ug,vg) for texture space grid positions ug,vg are stored in the texture memory TM. The texture space resampler TSR determines the intensity PI(u,v) by weighting and accumulating the texel intensities TI(ug,vg) of texels with texture space grid coordinates ug,vg which have to contribute to the intensity PI(u,v). Thus, the texture space resampler TSR determines the intensity PI(u,v) at the texture space position u,v by filtering the texel intensities on texture space grid positions ug,vg surrounding the texture space position u,v. For example, a bilinear interpolation using the four texture space grid positions ug,vg surrounding the texture space position u,v may be used. The resulting intensity PI(u,v) at the position u,v is used by the pixel shader PS to determine the pixel intensity PSI(xg,yg) on the pixel grid position xg,yg. The pixel shader might repeatedly obtain several PI(u,v) values from different textures when multiple textures are being used to calculate the intensity or value PSI(xg,yg) on the pixel grid position xg,yg.
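The bilinear filtering mentioned above can be sketched as follows. This is a minimal illustration, assuming the texture is stored as a 2D array of texel intensities TI(ug,vg) indexed on the grid; the function name `bilinear_sample` is hypothetical:

```python
import math

def bilinear_sample(texture, u, v):
    # texture: 2D list indexed as texture[vg][ug] with texel intensities
    # TI(ug,vg); (u,v) is a non-grid texture space position in texel units.
    u0, v0 = math.floor(u), math.floor(v)
    fu, fv = u - u0, v - v0
    # Weighted sum over the four surrounding texture space grid positions
    return ((1 - fu) * (1 - fv) * texture[v0][u0]
            + fu * (1 - fv) * texture[v0][u0 + 1]
            + (1 - fu) * fv * texture[v0 + 1][u0]
            + fu * fv * texture[v0 + 1][u0 + 1])
```

For example, sampling exactly midway between four texels returns the average of their four intensities.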
In the case of a 3D graphics system, a hidden surface removal (HSR) module is commonly needed; for a 2D graphics system this might not be needed. The hidden surface removal unit HSR usually includes a Z-buffer which enables determination of the visible colors on a per pixel basis. The depth value z of a produced pixel value PSI(xg,yg) is tested against the depth value stored in the Z-buffer at the same pixel screen coordinate xg,yg (thus on the screen grid). Depending on the outcome of the test, the pixel intensity or color PIP(xg,yg) is written into the frame buffer FB and the Z-buffer is updated. The image to be displayed IM is read from the frame buffer FB.
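The per-pixel depth test of the hidden surface removal unit HSR can be sketched as follows, assuming that a smaller z means nearer to the viewer; the function name and buffer layout are illustrative assumptions:

```python
def z_test_and_write(z_buffer, frame_buffer, xg, yg, z, color):
    # Test the depth value z of a produced pixel against the value stored
    # in the Z-buffer at the same screen grid coordinate (xg,yg); on a
    # pass (nearer, i.e. smaller z here) update both buffers.
    if z < z_buffer[yg][xg]:
        z_buffer[yg][xg] = z
        frame_buffer[yg][xg] = color
        return True
    return False
```

A pixel produced later but farther away thus leaves both buffers unchanged.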
Fig. 4 illustrates the operation of the prior art inverse texture mapping system. The left diagram of Fig. 4 shows the screen space polygon SGP in the screen space SSP. One of the vertices of the polygon SGP is indicated by the screen space position xv,yv, which usually does not coincide with the screen space grid positions xg,yg. The screen space grid positions xg,yg are the positions which have integer values for x and y. The image to be displayed is determined by the intensities (color and brightness) PIP(xg,yg) of the pixels which are positioned on the screen space grid positions xg,yg. The rasterizer SRAS determines the screen space grid positions xg,yg within the polygon SGP.
The right diagram of Fig. 4 shows the texture space polygon TGP in the texture space TSP. One of the vertices of the texture space polygon TGP is indicated by the texture space position uv,vv, which usually does not coincide with the texture space grid positions ug,vg. The texture space grid positions ug,vg are the positions which have integer values for u and v. The intensities of the texels TI(ug,vg) are stored in the texture memory TM for these texture space grid positions ug,vg. Several texture maps of the same texture may be stored at different resolutions; a known technique which uses these different resolution textures is called MIP-mapping. The mapper iMAP maps the screen space grid coordinates xg,yg to corresponding texture space positions u,v in the texture space. The intensity at a texture space position u,v is determined by filtering. For example, the intensity at the texture space position u,v, which is, or contributes to, the intensity of the pixel at the screen space grid position xg,yg, is determined as a weighted sum of intensities at surrounding texture space grid positions ug,vg.
Fig. 5 shows a block diagram of the first embodiment of the geometry primitive shading graphics processor in accordance with the present invention. The basic structure of the multiple inverse texture mapping module MITM shown in Fig. 5 is identical to the known inverse texture mapping module ITM shown in Fig. 3. The difference is that a plurality of pipelines formed by the transform and lighting module VERj, the rasterizer SRASj, the mapper iMAPj, the pixel shader PSj, the hidden surface removal unit HSRj, and the frame buffer FBj is present instead of the single pipeline of Fig. 3. The index j indicates that the item known from Fig. 3 is the jth item with 1 ≤ j ≤ n. Thus, if n = 3 (cf. Fig. 2), all the items with the index j are present three times and each one is indicated by one of the indices j running from 1 to 3. All the items of Fig. 5 operate in the same manner as the corresponding items of Fig. 3. In a practical embodiment, the items may be hardware which is present multiple times, or the same hardware may be used in a time multiplexing mode, or a combination of these two possibilities may be implemented. What counts is that the processes in Fig. 5 now occur j times at j different rendering instants tj.
Thus, in accordance with the present invention, the same texture space polygon or texture space geometry primitive TSP (as shown in Fig. 4) is loaded into the texture cache once for rendering the corresponding geometry primitives SGPj at successive frame instances tj. When all blocks VERj, SRASj, iMAPj, PSj, HSRj and FBj are used in a time multiplexed manner, the same (maximum) pixel fill rate can be achieved (neglecting the overhead on the borders of the screens) as without using this multiple frame technique, but the texture data traffic between the texture cache TC and the texture space resampler TSR is lowered by a factor equal to the number of pipelines corresponding to successive frame instances being processed in parallel.
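The texture traffic saving described above can be illustrated with the following sketch, in which each texel fetched from texture memory serves all frame instances processed in parallel. The function and the `shade` callback are hypothetical placeholders for the actual pipeline stages:

```python
def render_grouped(texels, frame_count, shade):
    # Traverse the shared texture data once; each fetched texel is reused
    # for all frame instances rendered in parallel, so texture traffic is
    # reduced by a factor of frame_count versus per-frame rendering.
    fetches = 0
    frames = [[] for _ in range(frame_count)]
    for texel in texels:       # one fetch per texel from texture memory
        fetches += 1
        for j in range(frame_count):
            frames[j].append(shade(texel, j))   # per-frame use of the texel
    return frames, fetches
```

Rendering each frame independently would instead require `frame_count` fetches per texel.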
Fig. 6 illustrates the operation of the first embodiment of the geometry primitive shading graphics processor using a multiple inverse texture mapping module according to the present invention as shown in Fig. 5. Herein, the left diagram of Fig. 6 shows the screen space polygon of one of the corresponding geometry primitives SGPj at a frame instant t1, wherein all frame instances tj are rendered in parallel on the basis of the geometry primitive TGP in the texture space, as described for a single rendering process in Fig. 4.
In the following, the second embodiment of a geometry primitive shading graphics processor using a multiple forward texture mapping module according to the present invention is discussed. However, for a better understanding of the graphics processor of the present invention, a prior art forward texture mapping 3D graphic system will be discussed first, which is shown in Fig. 7.
A vertex transformation and lighting unit VER, further referred to as the vertex T&L unit, transforms the vertex coordinates of the polygon A (Fig. 1) in world space to the screen space SSP and performs lighting calculations to determine an intensity per vertex, as already described with regard to Fig. 3.
A texture space rasterizer TRAS determines the grid positions ug,vg of the texels in the texture space TSP within the polygon determined by the texture space coordinates uv,vv of the vertices of the polygon. Similarly to the screen space rasterizer SRAS of the inverse texture mapping system, the texture space rasterizer TRAS could be directly followed by a mapper to map every grid texture coordinate to screen space positions. However, this is not required for the pixel shader PS, which now operates on grid texture coordinates, and thus the (ug,vg) to (x,y) mapping can be delayed in the processing pipeline until the screen space resampler SSR, which does require the (x,y) grid screen positions. This way, the computational reduction advantage is obtained of only requiring one TRAS, one PS and, partially, one VER module instead of needing a plurality, one for each of the plurality of frames rendered in parallel.
The pixel shader PS determines the intensity TSI (also referred to as color) of a texel. Thus, in a forward texture mapping system, the pixel shader PS is actually a texel shader. But it is referred to as the pixel shader PS because its functionality is identical to the pixel shader PS used in the inverse texture mapping system. The pixel shader PS may be a dedicated or a programmable unit. The pixel shader PS receives a set of attributes per texel, including the grid texture coordinates ug,vg of the texel. The grid texture coordinates ug,vg are used to address texture data TI on grid texture positions stored in the texture memory TM via the optional texture space resampler TSR. The texture data corresponding to the texel grid position ug,vg is indicated by TI(ug,vg). The pixel shader PS may modify the texture coordinate data ug,vg and may apply and combine several texture maps on a same texel. It may also perform shading without the use of texture data, on the basis of a formula such as the well-known Gouraud and Phong shading techniques. In Fig. 7, by way of example, it is assumed that the pixel shader PS supplies the texel intensity TSI(ug,vg) to the mapper MAP. It has to be noted that, because a forward texture mapping system can traverse on a texture grid, it may bypass the texture space resampler TSR. The texture space resampler TSR may be relevant in that, if multiple textures are applied to the same polygon, the texture space resampler TSR might be used for obtaining texture values on the (u,v) grid coordinates of the first texture while traversing in-between texture positions on a secondary or higher order texture when the texture space rasterizer TRAS traverses on the grid of the first texture. Note that for the TSR in the FTM system it is typically sufficient to resample using affine mappings between a second and a first texture map, whereas the TSR in the ITM system also has to support perspective mappings, which makes that resampler more complex.
The mapper MAP maps the texel position ug,vg, of which the texel intensity TSI(ug,vg) has been determined, to a screen space position x,y.
The screen space resampler SSR uses the texel intensity TSI(ug,vg) which is mapped to the in-between screen space position x,y to determine the contribution to the pixel intensities PI(xg,yg) in an area around (x,y). In a forward texture mapping system, a single texel intensity TSI(x,y) is added to several pixels on grid positions xg,yg in a weighted fashion. Put differently, a texel value TSI(ug,vg) mapped to screen space is splatted to several pixels (which inherently are positioned on the xg,yg grid). When all texels TSI that affect the pixel intensity PI(xg,yg) at a grid position xg,yg have contributed, the final pixel intensity PI(xg,yg) is obtained.
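The splatting operation can be sketched as follows. For illustration, a simple bilinear ("tent") splat over the four nearest pixel grid positions is assumed, whereas a real screen space resampler may use a larger filter footprint:

```python
import math

def splat(frame_buffer, x, y, intensity):
    # Splat one mapped texel intensity at the in-between screen position
    # (x,y) onto the four surrounding pixel grid positions (xg,yg),
    # accumulating a weighted contribution per pixel.
    x0, y0 = math.floor(x), math.floor(y)
    fx, fy = x - x0, y - y0
    contributions = [(x0, y0, (1 - fx) * (1 - fy)),
                     (x0 + 1, y0, fx * (1 - fy)),
                     (x0, y0 + 1, (1 - fx) * fy),
                     (x0 + 1, y0 + 1, fx * fy)]
    for xg, yg, w in contributions:
        frame_buffer[yg][xg] += w * intensity
```

The final pixel intensity at each grid position emerges once all contributing texels have been splatted.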
The hidden surface removal unit HSR usually includes a Z-buffer which enables determination of the visible colors on a per pixel basis. The depth value z of a produced pixel value is tested against the depth value stored in the Z-buffer at the same pixel screen coordinate xg,yg (thus on the screen grid). Depending on the outcome of the test, the pixel intensity or color PIP(xg,yg) is written into the frame buffer FB and the Z-buffer is updated. The image to be displayed IM is read from the frame buffer FB. Fig. 8 illustrates the operation of the forward texture mapping system. The left diagram of Fig. 8 shows the texel space polygon TGP in the texture space. One of the vertices of the polygon TGP is indicated by the texture space position uv,vv, which usually does not coincide with the texture space grid positions ug,vg. The texture space grid positions ug,vg are the positions which have integer values for u and v. The texture space rasterizer TRAS rasterizes the texture in the texture space TSP to obtain the texture space grid positions ug,vg within the polygon TGP or within an area extended just around the polygon TGP. The texture space rasterizer TRAS may operate on several resolutions of a texture (MIP-maps), and may switch between MIP-maps multiple times across the polygon. The shader PS retrieves the intensities on the texture space grid positions ug,vg. The mapper MAP maps the texture grid positions ug,vg to x,y positions in the screen space. Usually, these mapped x,y positions do not coincide with the screen space grid positions xg,yg. The grid positions xg,yg are the positions which have integer values for x and y. The diagram on the right side of Fig. 8 shows the screen space polygon SGP in the screen space SSP. One of the vertices of the polygon SGP is indicated by the screen position xv,yv, which usually does not coincide with the screen space grid positions xg,yg.
The image to be displayed is determined by the intensities (color and brightness) of the pixels which are positioned on the screen space grid positions xg,yg. The screen space resampler SSR splats the texel's intensity on the surrounding pixel grid positions xg,yg of the mapped x, y position to obtain the contribution of the corresponding texel intensity on the pixel grid positions xg,yg. The intensity of a pixel corresponding to a particular grid position xg,yg is the sum of the contributions of all mapped texel intensities which contribute to the intensity on this particular grid position xg,yg. Fig. 9 shows a block diagram of the second embodiment of the geometry primitive shading graphics processor according to the present invention using a multiple forward texture mapping module MFTM.
The vertex transformation and lighting unit VER, the texture space rasterizer TRAS, the pixel shader PS, the optional texture space resampler TSR and the texture memory TM are identical to the corresponding items in Fig. 7. Also their operation is identical to that elucidated with respect to Fig. 7.
Thus, only one texture space rasterizer TRAS, one pixel shader PS, one texture space resampler TSR and one vertex shader (except for the part that transforms the vertices into screen space, which has to be performed multiple times) has to be employed in the multiple forward texture mapping module of the present invention for rendering multiple corresponding geometry primitives, leading to a reduction of computing resources (in addition to the reduction in texture bandwidth). The use of a multiple forward texture mapping module is thus preferred in comparison to the multiple inverse texture mapping module of Fig. 5, since the texture space rasterizing process in the texture space rasterizer TRAS and the pixel shading process in the pixel shader PS are computationally intensive, wherein the computational effort can be decreased by a factor according to the number of corresponding geometry primitives in the multitude of frame instances. This is the case even for primitives having no textures applied: when the shading is done solely on the basis of computation without texture fetches, the multiple forward texture mapping system can still decrease computational requirements.
The mapper MAP, the screen space resampler SSR, the hidden surface removal HSR and the frame buffer FB of Fig. 7 are replaced by a plurality of mappers MAPj, a plurality of screen space resamplers SSRj, a plurality of hidden surface removal units HSRj, and a plurality of frame buffers FBj. Each of these items operates in the same manner as elucidated with respect to Fig. 7.
The invention is directed to a reduction of computing resources by grouping a multitude of corresponding geometry primitives of geometry samplings of successive frame instances, which can be rendered by using a corresponding texture space geometry primitive that is the same for all corresponding geometry primitives in screen space.
Thus, the coordinates xvj,yvj of the multitude of corresponding geometry primitives in screen space are produced for the multiple frame instances by the vertex T&L unit VER, but the lighting (or shading) of the vertices is only done once. The other units, the texel space rasterizer TRAS and the pixel shader PS, only forward this data and then feed it into a plurality of mappers MAPj, which map the texture grid coordinates ug,vg to screen space coordinates xj,yj of the corresponding geometry primitives SGPj at different frame instances tj. These mappings are then supplied by a multitude of pipelines, the number of which is equal to the number of corresponding geometry primitives in the different frame instances, to a multitude of screen space resamplers SSRj, a multitude of hidden surface removal units HSRj and frame buffers FBj, wherein each item of the multitude of pipelines operates in the same manner as indicated with respect to Fig. 7. A particular one of the pipelines provides the screen space image of a particular geometry primitive having corresponding geometry primitives in successive frame instances. Thus, after repeating the rendering process using the geometry primitive shading graphics processor according to Fig. 9 for each of the multitude of corresponding geometry primitives being related by the captured geometry module CGM, the complete frame can be rendered at the different frame instances.
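The data flow just described, one shared rasterizer and shader pass feeding a mapper per frame instance, can be sketched as follows. The function names and the representation of shaded texels as tuples are assumptions made for illustration:

```python
def multi_forward_map(shaded_texels, frame_mappers):
    # One shared rasterizer/shader pass yields each shaded texel once;
    # a mapper per frame instance j then projects the texel value into
    # that frame's screen space for subsequent splatting and depth test.
    frames = [[] for _ in frame_mappers]
    for ug, vg, tsi in shaded_texels:          # shaded once per texel
        for j, mapper in enumerate(frame_mappers):
            x, y = mapper(ug, vg)              # per-frame-instance mapping
            frames[j].append((x, y, tsi))
    return frames
```

The per-texel shading cost is thus paid once, while only the lightweight mapping step is replicated per frame instance.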
In a practical embodiment, the multiple items MAPj, SSRj, HSRj, FBj and partially VER may be hardware which is present multiple times, or the same hardware may be used in a time multiplexing mode, or a combination of these two possibilities may be implemented.
The time multiplexing of the screen space resampler SSRj function depends on its implementation. With a one-pass resampler, using a 2D filter, no states have to be stored and time multiplexing is easy. However, the one-pass resampler requires more bandwidth to the frame buffer (or tile buffer in a tile-based rendering system). In a two-pass resampler which uses two 1D resamplers, the state of the vertical resampler has to be stored for each sample of an input line. This requires a couple of line memories with a length equal to the frame buffer width. If tile-based rendering is used, however, this length is equal to the width of the tile. Thus, time multiplexing the screen space resampler SSRj using two-pass resampling with tile-based rendering only requires multiplying the relatively small sized line memories. The traditional hidden surface removal units HSRj use a Z-buffer with the size of a frame or a tile.
It should be noted that the same is true for the frame buffer unit, both the tile-frame buffer (on-chip) and the frame buffer (off-chip). This is state information that has to be duplicated when time multiplexing this function.
Fig. 10 illustrates the operation of the second embodiment of the geometry primitive processor as shown in Fig. 9. The diagrams on the left side of Figs. 8 and 10 are identical. The left side diagram shows the single texture space polygon TGP. The right side diagram of Fig. 10 shows one of the corresponding geometry primitives SGPj. The polygons SGPj in the different frame instances tj are mapped from the same texture space polygon TGP. The polygon SGPj shows the screen position at the frame instant tj, wherein the texture position ug,vg is mapped to the position xj,yj at the instant tj. In accordance with the prior art FTM, the intensity information of the texel at the texture position ug,vg is splatted in a known manner to the surrounding screen space grid coordinates xgj,ygj, respectively. The reduction of computing resources in the texture space rasterizer TRAS and the pixel shader PS is equal to the number of frame instances tj being rendered simultaneously. In the following, the implementation of the first and second embodiments of the invention is discussed.
Although there is an increase in latency of the display of the frames, which is not desired in view of computer games on a PC or a game console like an Xbox or PlayStation, the approach of the present invention is very advantageous in view of games on mobile devices or other applications like 3D car navigation, in which some additional latency is not a problem. Herein, the more critical amount of computing resources and bandwidth requirements, especially in view of the multiple forward texture mapping module having only one texture space rasterizer TRAS and only one pixel shader PS, is drastically reduced.
In view of the implementation of frame based rendering, there is some overhead associated with the screen boundary processing, when parts of geometry primitives are no longer within the screen for one of the successive frames. This means that if two frames are rendered simultaneously, some textured geometry primitive areas within the screen of one frame instance but outside the other are traversed only for one frame, so fetched texture data cannot be reused for the other frame and no texture data traffic savings are obtained for these areas. However, this overhead is only very minimal in view of the gain, since the texture data traffic savings are almost equal to the number of simultaneously rendered frames. Within a tile based rendering system, there is the advantage that the scene geometry for every frame is already captured and processed by the tile processor, which will administrate the geometry to the tiles and store it, which is commonly done in an off-chip memory. This is for the purpose of changing the render order and rendering per tile all the geometry that impacts that tile. At this geometry capturing point, additional frame instances of geometry could be captured.
In a tile based rendering system, the overhead on screen boundaries within a frame based rendering system repeats itself on every tile border. So, except for static scenes, i.e. when the screen of the current frame is equal to the screens of the next frames, this overhead becomes more significant for tile based rendering systems and depends on the tile size, wherein bigger tiles are more efficient. Since many games include more horizontal motion than vertical motion, in this case it can be desirable to choose a tile size that is wider than it is high to reduce the tile boundary overhead. Fig. 11 shows a computer comprising the multiple frame rendering system. The computer PC comprises a processor 3, a graphics adapter 2 and a memory 4. The processor 3 is suitably programmed to supply input data IT to the graphics adapter 2. The processor 3 communicates with the memory 4 via the bus D1. The graphics adapter 2 comprises the geometry primitive shading graphics system 1. Usually, the graphics adapter is a module that is plugged into a suitable slot (for example an AGP slot). Usually, the graphics adapter comprises its own memory (for example the texture memory TM and the frame buffer FB). However, the graphics adapter may use part of the memory 4 of the computer PC, in which case the graphics adapter needs to communicate with the memory 4 via the bus D2 or via the processor 3 and the bus D1. The graphics adapter 2 supplies the output image OI via a standard interface to the display apparatus DA. The display apparatus may be any suitable display, such as, for example, a cathode ray tube, a liquid crystal display, or any other matrix display. The computer PC and the display DA need not be separate units which communicate via a standard interface but may be combined in a single apparatus, such as, for example, a personal digital assistant (PDA or pocket PC) or any other mobile device with a display for displaying images.
Fig. 12 shows a display apparatus comprising the multiple frame rendering system. The display apparatus DA comprises the pipeline 1 which receives the input data (geometry plus related data) IT and supplies the output image OI to a signal processing circuit 11. The signal processing circuit 11 processes the output image OI to obtain a drive signal DS for the display 12.

Claims

1. A geometry primitive shading graphics processor for mapping geometry data (WO) supplied by a graphics application onto a screen space (SSP), the graphics processor comprising:
— a texture memory (TM) for storing texel intensities TI(ug,vg) of texture space grid positions (ug,vg), and
— a multiple frame rendering module (MITM, MFTM) being adapted to render geometry primitives in multiple frames by determining for each frame pixel intensities PIP(xg,yg) of screen space grid positions (xg,yg) by means of the texel intensities TI(ug,vg) supplied by the texture memory (TM), characterized by a capture geometry module (CGM) having a frame-geometry buffer (FGB), the capture geometry module (CGM) being adapted
— to capture at least two frame instances of geometry samplings of the geometry data and to store the at least two captured geometry samplings in the frame-geometry buffer (FGB),
— to relate corresponding geometry primitives (SGPj) of the at least two geometry samplings stored in the frame-geometry buffer (FGB), and
— to feed the corresponding geometry primitives (SGPj) into multiple pipelines corresponding to multiple frame instances of the multiple frame rendering module, wherein the same texel intensities TI(ug,vg) are fetched a single time to be used for multiple successive frames for rendering the corresponding geometry primitives (SGPj).
2. A geometry primitive shading graphics processor as claimed in claim 1, wherein the corresponding geometry primitives (SGPj) are geometry primitives determined by having the same coordinates of vertices in model space, the same coordinates of vertices in texture space and the same texture binding number or numbers.
3. A geometry primitive shading graphics processor as claimed in claim 2, wherein the corresponding geometry primitives (SGPj) are geometry primitives further determined by having the same shader program binding number.
4. A geometry primitive shading graphics processor as claimed in one of the claims 1 to 3, wherein the multiple frame rendering module is a multiple inverse texture mapping module.
5. A geometry primitive shading graphics processor as claimed in claim 4, wherein the multiple frame rendering module comprises:
- a plurality of vertex shaders (VERj) for determining vertex positions (xvj,yvj) and vertex intensities for the corresponding geometry primitives (SGPj) at a plurality of corresponding frames,
- a plurality of screen space rasterizers (SRASj) for determining pixel grid positions (xg,yg) within the corresponding geometry primitives (SGPj) at a plurality of corresponding frames,
- a plurality of corresponding mappers (MAPj) for mapping the pixel grid positions (xg,yg) of the corresponding geometry primitives (SGPj) at the different frames to texture space positions (uj,vj),
- a plurality of texture space resamplers (TSR) for determining texel intensities PI(uj,vj) at the texture space positions (uj,vj) from the texel grid intensities TI(ug,vg) of the texture space grid positions (ug,vg) stored in the texture memory (TM), - a texture cache (TC) for temporarily storing, for every corresponding geometry primitive (SGPj), the texel intensities TI(ug,vg) required by the texture space resampler (TSR), and
- a plurality of corresponding pixel shaders (PSj) for determining, at each frame, pixel intensities PSI(xj,yj) from the texel intensities PI(uj,vj).
6. A geometry primitive shading graphics processor as claimed in one of the claims 1 to 3, wherein the multiple frame rendering module is a multiple forward texture mapping module.
7. A geometry primitive shading graphics processor as claimed in claim 6, wherein the multiple frame rendering module comprises:
- a vertex shader (VER) for determining the vertex intensities, having a part that determines a plurality of vertex positions (xvj,yvj) for the corresponding geometry primitives (SGPj) at the multiple frame instances (tj), - a texture space rasterizer (TRAS) for determining texture grid positions (ug,vg) within the texture space geometry primitive (TGP) in the texture space (TSP) corresponding to the plurality of the corresponding geometry primitives (SGPj) at the multiple frame instances (tj), - a pixel shader (PS) for determining intensities TSI(ug,vg) in texture space,
- a plurality of mappers (MAPj) for mapping a particular value TSI(ug,vg) produced by the pixel shader, within the texture space geometry primitive (TGP) in the texture space (TSP), to associated screen space positions (xj,yj) of the corresponding geometry primitives (SGPj) at the multiple frame instances (tj), and - a plurality of screen space resamplers (SSRj) for determining a contribution of the texel intensities TI(ug,vg) at the associated screen space positions (xj,yj) to surrounding screen grid positions (xgj,ygj) at the multiple frame instances (tj).
8. A geometry primitive shading graphics processor as claimed in one of the claims 1 to 7, wherein the multiple frame rendering module further comprises a multitude of frame buffers (FBj) for storing pixel intensities PIP(xgj,ygj) being determined at the screen grid positions (xgj,ygj).
9. A geometry primitive shading graphics processor as claimed in one of the claims 1 to 8, wherein the multiple frame rendering module further comprises a multitude of hidden surface removal units (HSRj).
10. A graphics adapter comprising the geometry primitive shading graphics processor as claimed in one of the claims 1 to 9.
11. A computer comprising the geometry primitive shading graphics processor as claimed in one of the claims 1 to 9.
12. A display apparatus comprising the geometry primitive shading graphics processor as claimed in one of the claims 1 to 9.
13. A method of mapping geometry data (WO) supplied by a graphics application onto a screen space (SSP), the method comprising the steps of:
- capturing at least two frame instances of geometry samplings of the geometry data by means of a capture geometry module (CGM) having a frame-geometry buffer (FGB),
- storing the at least two captured geometry samplings in the frame-geometry buffer (FGB), - relating corresponding geometry primitives (SGPj) of the at least two geometry samplings stored in the frame-geometry buffer (FGB), and
- rendering the geometry primitives in multiple pipelines corresponding to multiple frame instances (tj) by determining for each frame pixel intensities PIP(xg,yg) of screen space grid positions (xg,yg) by means of texel intensities TI(ug,vg) of texture space grid positions (ug,vg), wherein the same texel intensities TI(ug,vg) are fetched a single time to be used for multiple successive frames for rendering the corresponding geometry primitives
(SGPj).
14. A method as claimed in claim 13, wherein the rendering step is a multiple inverse texture mapping process.
15. A method as claimed in claim 13, wherein the rendering step is a multiple forward texture mapping process.
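The core idea of claims 13 to 15 — fetching each texel intensity TI(ug,vg) a single time and reusing it to render the corresponding geometry primitive at multiple frame instances — can be sketched in the forward texture mapping style of claim 15. This is a minimal illustrative sketch, not the patented implementation: all names (`render_primitive_multi_frame`, `splat`) and the bilinear resampling filter are assumptions chosen for brevity.

```python
import math

def splat(frame, x, y, intensity):
    # Distribute one texel intensity over the four surrounding integer
    # screen grid positions using bilinear weights (one simple choice of
    # screen space resampling filter; the claims leave the filter open).
    x0, y0 = math.floor(x), math.floor(y)
    fx, fy = x - x0, y - y0
    for gx, gy, w in [(x0,     y0,     (1 - fx) * (1 - fy)),
                      (x0 + 1, y0,     fx * (1 - fy)),
                      (x0,     y0 + 1, (1 - fx) * fy),
                      (x0 + 1, y0 + 1, fx * fy)]:
        frame[(gx, gy)] = frame.get((gx, gy), 0.0) + w * intensity

def render_primitive_multi_frame(texels, mappings, frames):
    """texels:   dict (ug, vg) -> intensity for one texture-space primitive.
    mappings: one mapper per frame instance t_i, taking (ug, vg) -> (x, y).
    frames:   one frame buffer per frame instance, as dicts (xg, yg) -> value."""
    for (ug, vg), ti in texels.items():
        # Single fetch of TI(ug, vg): the inner loop reuses it for every
        # frame instance instead of re-fetching the texture per frame.
        for mapper, frame in zip(mappings, frames):
            x, y = mapper(ug, vg)   # screen space position at frame t_i
            splat(frame, x, y, ti)  # contribution to surrounding grid positions
```

With two mappers standing in for two frame instances, the same texel dictionary is traversed once while both frame buffers receive their resampled contributions, which is the bandwidth saving the claims describe.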
PCT/IB2009/054424 2008-10-09 2009-10-08 Geometry primitive shading graphics system WO2010041215A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP08105529 2008-10-09
EP08105529.5 2008-10-09

Publications (1)

Publication Number Publication Date
WO2010041215A1 true WO2010041215A1 (en) 2010-04-15

Family

ID=41350652

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2009/054424 WO2010041215A1 (en) 2008-10-09 2009-10-08 Geometry primitive shading graphics system

Country Status (1)

Country Link
WO (1) WO2010041215A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10565747B2 (en) 2017-09-06 2020-02-18 Nvidia Corporation Differentiable rendering pipeline for inverse graphics

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1542167A1 (en) * 2003-12-09 2005-06-15 Koninklijke Philips Electronics N.V. Computer graphics processor and method for rendering 3D scenes on a 3D image display screen
WO2005124693A2 (en) * 2004-06-16 2005-12-29 Koninklijke Philips Electronics N.V. Inverse texture mapping 3d graphics system
EP1759355A1 (en) * 2004-06-16 2007-03-07 Koninklijke Philips Electronics N.V. A forward texture mapping 3d graphics system
WO2008004135A2 (en) * 2006-01-18 2008-01-10 Lucid Information Technology, Ltd. Multi-mode parallel graphics rendering system employing real-time automatic scene profiling and mode control

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1542167A1 (en) * 2003-12-09 2005-06-15 Koninklijke Philips Electronics N.V. Computer graphics processor and method for rendering 3D scenes on a 3D image display screen
WO2005057501A1 (en) * 2003-12-09 2005-06-23 Koninklijke Philips Electronics N.V. Computer graphics processor and method for rendering 3-d scenes on a 3-d image display screen
WO2005124693A2 (en) * 2004-06-16 2005-12-29 Koninklijke Philips Electronics N.V. Inverse texture mapping 3d graphics system
EP1759355A1 (en) * 2004-06-16 2007-03-07 Koninklijke Philips Electronics N.V. A forward texture mapping 3d graphics system
WO2008004135A2 (en) * 2006-01-18 2008-01-10 Lucid Information Technology, Ltd. Multi-mode parallel graphics rendering system employing real-time automatic scene profiling and mode control


Similar Documents

Publication Publication Date Title
US9754407B2 (en) System, method, and computer program product for shading using a dynamic object-space grid
US9129443B2 (en) Cache-efficient processor and method of rendering indirect illumination using interleaving and sub-image blur
US9747718B2 (en) System, method, and computer program product for performing object-space shading
US10636213B2 (en) Graphics processing systems
US7843463B1 (en) System and method for bump mapping setup
EP2973423B1 (en) System and method for display of a repeating texture stored in a texture atlas
US20120229460A1 (en) Method and System for Optimizing Resource Usage in a Graphics Pipeline
US7446780B1 (en) Temporal antialiasing in a multisampling graphics pipeline
JP2004164593A (en) Method and apparatus for rendering 3d model, including multiple points of graphics object
EP1519317B1 (en) Depth-based antialiasing
US10217259B2 (en) Method of and apparatus for graphics processing
US10074159B2 (en) System and methodologies for super sampling to enhance anti-aliasing in high resolution meshes
CN107784622B (en) Graphics processing system and graphics processor
US20150262413A1 (en) Method and system of temporally asynchronous shading decoupled from rasterization
US8319798B2 (en) System and method providing motion blur to rotating objects
EP1759355B1 (en) A forward texture mapping 3d graphics system
KR20180037838A (en) Method and apparatus for processing texture
Policarpo et al. Deferred shading tutorial
WO2023177888A1 (en) Locking mechanism for image classification
KR101227155B1 (en) Graphic image processing apparatus and method for realtime transforming low resolution image into high resolution image
WO2010041215A1 (en) Geometry primitive shading graphics system
WO2005124693A2 (en) Inverse texture mapping 3d graphics system
WO2021150372A1 (en) Hybrid binning
US20190295214A1 (en) Method and system of temporally asynchronous shading decoupled from rasterization
Smit et al. A shared-scene-graph image-warping architecture for VR: Low latency versus image quality

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09740969

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 09740969

Country of ref document: EP

Kind code of ref document: A1