EP1496475A1 - A geometric processing stage for a pipelined graphic engine, corresponding method and computer program product therefor - Google Patents
- Publication number
- EP1496475A1 (application EP03015270A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- coordinates
- module
- projection
- back face
- primitives
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/005—General purpose rendering architectures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/10—Geometric effects
- G06T15/40—Hidden part removal
Definitions
- the present invention relates to techniques for triangle culling in pipelined 3D graphic engines and was developed by paying specific attention to the possible application to graphic engines that operate in association with graphic languages.
- Exemplary of such an application are the graphic engines operating in association with e.g. OpenGL, NokiaGL and Direct3D, in particular in mobile phones.
- Modern 3D graphic pipelines for graphic engines in graphic cards include a rich set of features for synthesizing interactive three-dimensional scenes with a high and realistic quality.
- in figure 1 a block diagram of a geometry stage 111 is shown.
- the structure of the geometry stage 111 can be better understood by referring to the different space coordinates that are used and implemented by its modules.
- One of the primary roles of the pipelined graphic system is to transform coordinates from the 3D space used in an application scene/stage into the 2D space of the final display unit. This transformation normally involves several intermediate coordinate systems, namely:
- the geometry stage 111 comprises in the first place a model view transform module, indicated with a block 201.
- the model view transform module 201 processes graphic information I in form of either triangles or their vertexes, and applies the model-view matrix to the vertexes.
- Each vertex is multiplied by a 4x4 transform matrix in order to rotate or translate or scale or skew it.
- Such a four coordinates system defines a homogeneous coordinate space and it is very convenient since points and vectors may be processed from a mathematical point of view with the same 4x4 matrices.
- a four dimensional transform is implemented as (x', y', z', w') = M1 · (x, y, z, w).
- M1 indicates a rototranslation matrix; since such matrixes compose, it is possible to use only one of them.
- the vertex positions are represented by quadruples of the type ( x, y, z, 1 ), where the value 1 is allotted to the w coordinate for convenience, and vectors are represented by quadruples of the type ( x, y , z, 0 ), which can be thought of as a point at infinity.
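As a minimal sketch of this convention (the helper function and matrix below are illustrative, not taken from the patent), a translation applied through a 4x4 matrix moves a point with w = 1 but leaves a w = 0 direction vector unchanged:

```python
def mat_vec4(m, v):
    """Multiply a 4x4 matrix (row-major nested lists) by a 4-vector."""
    return tuple(sum(m[r][c] * v[c] for c in range(4)) for r in range(4))

# A pure translation by (5, 0, 0): it affects points (w = 1)
# but leaves direction vectors (w = 0, "points at infinity") unchanged.
translate = [
    [1, 0, 0, 5],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
]

point  = (1.0, 2.0, 3.0, 1.0)   # a position: w = 1
vector = (1.0, 2.0, 3.0, 0.0)   # a direction: w = 0

moved_point  = mat_vec4(translate, point)
moved_vector = mat_vec4(translate, vector)
```

This is why the single 4x4 machinery serves both points and vectors.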
- the output of the module 201 is a new vertex that contains the coordinates and normals transformed into view space. This space is suitable for lighting and fog calculations in a lighting module 202 and a fog module 203.
- the lighting module 202 performs a per-vertex operation that calculates the vertex color on the basis of the light sources and the material properties.
- the fog module 203 performs a per-vertex operation that calculates the vertex color given the fog characteristics.
- a projection transform module 204 is provided further downstream to transform the coordinates in the view space into normalized projection coordinates.
- the projection transform module 204 is designed to process either vertexes or triangles.
- the projection transform module 204 is followed by a primitive assembly module 205 that is responsible for assembling collections of vertexes into primitives.
- primitives are triangles.
- the primitive assembly module 205 must know what kinds of primitives are being assembled and how many vertexes have been submitted. For example, when assembling triangles, every third vertex triggers the creation of a new triangle and resets a vertex counter.
- a frustum culling module 206 follows that operates by rejecting triangles lying outside the viewing region. Module 206 operates in normalised projection coordinates. A vertex having coordinates (x, y, z, w) is outside the frustum if: x > w, x < -w, y > w, y < -w, z > w or z < -w, and a triangle lies outside of the frustum if all its vertexes lie on one side of the frustum (i.e. all above, below, to the left, right, in front or behind).
- the pipelined frustum culling module 206 uses the notion of "outcodes" derived e.g. from the Cohen-Sutherland line clipping algorithm, as described for instance in I. E. Sutherland and G. W. Hodgman, "Reentrant polygon clipping", CACM, 17(1), January 1974, pp. 32-42.
- An outcode is here a 6-bit set of flags indicating the relationship of a vertex to the viewing frustum. If the outcodes of all three vertexes are zero, then the triangle is wholly within the frustum and must be retained. If the conjunction of the outcodes is non-zero, then all three vertexes must lie to one side of the frustum and the geometry must be culled. Otherwise the geometry must be retained, even though it may not actually overlap the view volume.
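The outcode test just described can be sketched as follows (the bit layout and function names are illustrative assumptions, not the patent's):

```python
def outcode(x, y, z, w):
    """6-bit set of flags, one per frustum plane, for a vertex in
    normalised projection coordinates."""
    code = 0
    if x < -w: code |= 1   # left
    if x >  w: code |= 2   # right
    if y < -w: code |= 4   # bottom
    if y >  w: code |= 8   # top
    if z < -w: code |= 16  # near
    if z >  w: code |= 32  # far
    return code

def cull_triangle(v0, v1, v2):
    """True when the triangle can be rejected outright.
    All outcodes zero: wholly inside, retain.
    Non-zero conjunction (bitwise AND): all three vertexes lie beyond
    the same plane, cull. Otherwise retain (possibly conservatively)."""
    c0, c1, c2 = outcode(*v0), outcode(*v1), outcode(*v2)
    return (c0 & c1 & c2) != 0
```

A triangle straddling a plane gives a zero conjunction and is retained for the clipper, exactly the conservative behaviour noted above.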
- the outcodes are useful in the pipeline for a clipper module 207 that follows.
- the clipper module 207 performs clipping to the viewing frustum as a pipeline of clips against its bounding planes. Clipping is necessary for several reasons. It is necessary for the efficiency of the rasterization stage 112 that follows the geometry stage 111 to eliminate regions of polygons that lie outside the viewing frustum. Moreover, clipping against the near plane prevents vertexes with a coordinate w equal to zero from being passed on to a subsequent perspective divide module 208. Clipping also bounds the coordinates that will be passed to the rasterization stage 112, allowing fixed precision operation in that module.
- the clipping module as implemented increases the view volume slightly before applying the clips. This prevents artifacts from appearing at the edges of the screen, but implies that the rasterization stage 112 must cope with coordinates that are not strictly bounded by the device coordinates.
- Each clip is implemented as a specialization of a generic plane-clipping device. It is important to apply the clip against the near clipping plane first, since this guarantees that further clips do not result in a division by zero.
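One way such a generic plane-clipping device might look, as a sketch in the Sutherland-Hodgman style (function names and the example plane are illustrative assumptions):

```python
def clip_against_plane(poly, dist):
    """Clip a convex polygon (list of coordinate tuples) against the
    half-space where dist(v) >= 0; dist is the signed distance of a
    vertex from the clipping plane."""
    out = []
    n = len(poly)
    for i in range(n):
        a, b = poly[i], poly[(i + 1) % n]
        da, db = dist(a), dist(b)
        if da >= 0:
            out.append(a)               # keep vertexes inside the half-space
        if (da >= 0) != (db >= 0):      # edge crosses the plane: interpolate
            t = da / (da - db)          # da != db here, so no divide by zero
            out.append(tuple(pa + t * (pb - pa) for pa, pb in zip(a, b)))
    return out
```

Chaining one such device per bounding plane, near plane first, gives the pipeline of clips described above.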
- the clipper module 207 is followed by the perspective divide module 208 already mentioned in the foregoing.
- the module 208 converts normalized projection coordinates into normalized device coordinates (x/w, y/w, z/w, 1).
- Back face culling is a cheap and efficient method for eliminating geometry.
- for a polygonal solid it is possible to enumerate each face such that the vertexes appear in a consistent winding order when the face is visible.
- the choice of direction is arbitrary and OpenGL language, for instance, allows the desired order to be specified.
- front facing polygons may however not all be visible.
- Back-face culling is not appropriate for transparent objects where back faces may well be visible.
- Backface culling typically leads to roughly half of the polygons in a scene being eliminated. Since the decision is local to each polygon, it does not eliminate polygons that are obscured by other polygons.
- Backface culling must be consistent with rasterization. If the back face culling module 209 eliminates polygons that should be rendered, then holes will appear in the final display.
- the back face culling module 209 processes normalised device coordinates, even though it could operate equally well in screen space, where results are likely to be more consistent with the rasteriser. However, in a circuit implementation, the calculations in screen space would be fixed-point rather than floating-point as used in device coordinates. The only disadvantage is that the viewport transform in a subsequent viewport transform module 210 cannot be avoided.
- the module 210 converts vertexes from normalized device coordinates into screen coordinates.
- the x and y coordinates are mapped to lie within a specified viewport and the z coordinate is mapped to lie within the range specified by the depth-range application programming interface (API) call.
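The two mappings just described, the perspective divide of module 208 and the viewport transform of module 210, can be sketched compactly (the viewport and depth-range parameter names are illustrative assumptions):

```python
def perspective_divide(p):
    """Normalized projection coordinates -> normalized device
    coordinates (x/w, y/w, z/w, 1)."""
    x, y, z, w = p
    return (x / w, y / w, z / w, 1.0)

def viewport_transform(ndc, vp_x, vp_y, vp_w, vp_h, near=0.0, far=1.0):
    """Map normalized device coordinates in [-1, 1] into the viewport,
    and z into the depth range [near, far]."""
    x, y, z, _ = ndc
    sx = vp_x + (x + 1.0) * 0.5 * vp_w
    sy = vp_y + (y + 1.0) * 0.5 * vp_h
    sz = near + (z + 1.0) * 0.5 * (far - near)
    return (sx, sy, sz)
```

For example, with a 640x480 viewport, the normalized point (0.5, -0.5, 0.0) lands at screen position (480.0, 120.0) with depth 0.5.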
- the rasterization stage 112 is designed to sample the geometry at the center of each pixel, i.e. at coordinates (0.5, 0.5), (1.5, 0.5) and so on.
- the geometry stage 111 shown in figure 1 performs a two-dimensional back face culling in screen space.
- Back face culling is performed after, i.e. downstream of, the perspective division in module 208, i.e. only in two dimensions.
- in figure 2 a schematic view of the vectors involved is shown.
- a triangle TR is shown along with its triangle normal vector TN, i.e. the vector that is normal to the triangle surface.
- An eye-point WP is shown associated to an eye-point vector V.
- a projected normal vector N i.e. the projection of the normal vector TN in the direction of the eye-point vector V is also shown.
- the projected normal vector N is obtained from the cross product between two triangle edge vectors T1 and T2.
- the sign of the projected normal vector N depends on the orientation of T1 and T2.
- the sign is defined on the basis of the order of the triangle vertexes.
- the first vector T1 goes from the first vertex toward the second vertex.
- the second vector T2 goes from the second vertex toward the third vertex.
- the eye-point vector V is drawn from the eye-point WP toward one of the triangle vertexes.
- the projection of the normal vector TN on the eye-point vector V is the inner (i.e. dot) product of the two vectors TN and V divided by the modulus of the eye-point vector V.
- the division by the modulus of the eye-point vector V is however not necessary, because the modulus is always positive and only the sign of the projection vector N is needed.
- the normal vector TN is orthogonal to the screen and parallel to the eye-point vector V, so that the dot product is not necessary and only the z component of their cross (i.e. outer) product exists, namely the projected normal vector N.
- This method requires only two multiplications and one algebraic sum for each triangle; however, culling can be performed only at the end stage of the pipeline: consequently, the triangles that are not visible are eliminated at a very late stage and the resulting pipeline is not efficient.
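The two-multiplication screen-space test can be sketched like this (names are illustrative; the sign convention below assumes counter-clockwise front faces in y-up screen coordinates, and is configurable as noted above):

```python
def backface_2d(v0, v1, v2):
    """Only the z component of the edge cross product survives in
    screen space. With counter-clockwise front faces, a positive
    result marks a front face; otherwise the triangle is a candidate
    for culling."""
    t1x, t1y = v1[0] - v0[0], v1[1] - v0[1]   # first edge vector T1
    t2x, t2y = v2[0] - v1[0], v2[1] - v1[1]   # second edge vector T2
    return t1x * t2y - t1y * t2x              # two mults, one sum
```

Reversing the vertex order flips the sign, which is exactly how winding order encodes facing.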
- Figure 3 shows a block diagram of a geometry stage 111a having a different arrangement of the various processing modules with respect to the geometry stage 111 described with reference to figure 1.
- the model view transform module 201 is followed by the primitive assembly module 205, with the back face culling module 209 cascaded thereto. Downstream of the back face culling module 209, the projection transform module 204, the frustum culling module 206, the lighting 202 and fog 203 modules, the clipper module 207 and the perspective divide module 208 are arranged.
- the view port transform module 210 is finally located before (i.e. upstream of) the rasterization stage 112.
- the computation of the projected normal vector N is schematically shown in figure 4.
- Nx, Ny, Nz are the components of the normal vector TN.
- the object of the present invention is thus to provide a geometric processing stage for a pipelined graphic engine, wherein the overall mathematical operation count and number of primitives, i.e. triangles, to be processed is significantly reduced without appreciably affecting the accuracy of the results obtained and the speed of processing.
- the invention also relates to a corresponding method, as well as to a computer program product loadable in the memory of a computer and comprising software code portions for performing the method of the invention when the product is run on a computer.
- the arrangement described herein provides a graphic engine comprising a geometric processing stage that performs back face culling after the projection transform, i.e. in four dimensions.
- the arrangement described herein exploits the relationship between the screen space and the projection space established by the perspective division in order to lower complexity of calculation.
- a further advantage is the possibility of hardware implementation in a form adapted both to 3D back face culling and 4D back face culling: a geometry stage adapted for use with different graphics language can be implemented with a reduction of hardware.
- the arrangement described herein gives rise to a back face culling module able to deal in a simplified manner with four dimensions coordinates.
- the back face culling operation is performed on normalized projection coordinates, after the projection transform, so that calculation takes place in four dimensions.
- Such a geometry stage 111b comprises a model view transform module 201, followed by a projection transform module 204 and the primitive assembly module 205.
- a back face culling module 309 is then provided, the module 309 operating - as better detailed in the following - on normalized projection coordinates as generated by the projection transform module 204.
- a view port transform module 210 is arranged at the end of the geometry stage 111b upstream of the rasterization stage 112.
- calculation of the cross product and the dot product of four dimensional vectors as originated by the projection transform module 204 is rather complex and requires an extensive amount of operations.
- the geometry stage 111b described herein exploits - in order to simplify calculations - the existing relationship between the screen space and the projection space represented by the perspective division operation.
- S = P/Pw, where S denotes the screen space coordinates,
- P = (Px, Py, Pz, Pw) denotes the normalized projection coordinates, and
- Pw = -Z for the homogeneous coordinate, where Z is the z component in the view space.
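The way this relationship lowers the operation count can be sketched as follows (a reconstruction from the relations above, with Si = Pi/Piw for the three vertexes; the patent does not spell the derivation out in this form). Multiplying the z component of the screen-space edge cross product by the factor P0w P1w P2w, which is positive for geometry in front of the camera, leaves its sign unchanged and removes all divisions:

```latex
\begin{aligned}
N &= (S_{1x}-S_{0x})(S_{2y}-S_{0y}) - (S_{1y}-S_{0y})(S_{2x}-S_{0x}),
\qquad S_i = P_i / P_{iw},\\[4pt]
P_{0w}\,P_{1w}\,P_{2w}\,N &=
\begin{vmatrix}
P_{0x} & P_{0y} & P_{0w}\\
P_{1x} & P_{1y} & P_{1w}\\
P_{2x} & P_{2y} & P_{2w}
\end{vmatrix}\\
&= P_{0x}(P_{1y}P_{2w}-P_{1w}P_{2y})
 + P_{1x}(P_{0w}P_{2y}-P_{0y}P_{2w})
 + P_{2x}(P_{1w}P_{0y}-P_{1y}P_{0w}).
\end{aligned}
```

The sign test on N can therefore be carried out directly on the right-hand side, using only multiplications and additions on the projection coordinates.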
- Figure 6 shows a schematic diagram of a possible circuit implementation of the cross calculation.
- the vertex coordinates expressed in projection coordinates P are sent to a layer of multipliers 11, that yield the subproducts P1yP2w, P1wP2y, P0wP2y, P0yP2w, P1wP0y, P1yP0w.
- Such subproducts are then sent to three corresponding subtraction nodes 12, whose outputs are multiplied by the coordinates P0x, P1x and P2x respectively in a further layer of three multipliers 13.
- the three products are summed in a summation node 14, and a comparison block 15 evaluates whether the resulting value is positive; in that case an enable signal EN is issued for the current triangle towards the next module.
- the circuit of figure 6 can also be used in association with the 3D back face culling method shown with reference to figure 3.
- the 4D back face culling as shown in figure 5 and 3D back face culling as shown in figure 3 use the same function with different inputs.
- P0w · P1w · P2w · N = P0x(P1yP2w - P1wP2y) + P1x(P0wP2y - P0yP2w) + P2x(P1wP0y - P1yP0w)
- TN · V = Vx(T1yT2z - T1zT2y) + Vy(T1zT2x - T1xT2z) + Vz(T1xT2y - T1yT2x)
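Both tests share the same structure: three subproduct differences, each multiplied by a coefficient and summed, which is the point of reusing the figure 6 datapath. A sketch of that shared function, with illustrative names (the patent describes it as hardware, not code): fed with the rows (Pix, Piy, Piw) of the three vertexes it yields the 4D expression P0w P1w P2w N, and fed with V, T1, T2 it yields the 3D scalar triple product TN · V:

```python
def culling_core(a0, b0, c0, a1, b1, c1, a2, b2, c2):
    """Mirrors the figure 6 datapath: a first multiplier layer forms
    the subproducts, subtraction nodes take their differences, a second
    multiplier layer scales them, and a summation node adds them up.
    Mathematically this is the determinant of the 3x3 matrix with rows
    (a0, b0, c0), (a1, b1, c1), (a2, b2, c2)."""
    return (a0 * (b1 * c2 - c1 * b2)
            + a1 * (c0 * b2 - b0 * c2)
            + a2 * (c1 * b0 - b1 * c0))

# 4D mode: rows are (Pix, Piy, Piw) for the three vertexes.
n4 = culling_core(0, 0, 2,   1, 0, 1,   0, 1, 1)

# 3D mode: rows are V, T1, T2; the result is the triple product
# V . (T1 x T2) = TN . V.
n3 = culling_core(0, 0, 1,   1, 0, 0,   0, 1, 0)
```

Only the sign of the result is used: a comparison on it produces the enable signal for the next module in either mode.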
- the geometry stage 111b just described is adapted to execute OpenGL or NokiaGL commands or instructions.
- the eye-viewpoint of the perspective projection is always the axes origin.
- the eye-viewpoint vector of the orthogonal projection is always parallel to the z-axis.
- a corresponding projection matrix must be loaded by means of a load matrix command.
- This matrix is generic, and the 3D position of the eye-viewpoint is unknown, whereby 3D back face culling cannot be applied.
- the arrangement described significantly reduces the overall mathematical operation count and number of primitives, i.e. triangles, to be processed by the 3D pipeline and in the geometry stage in particular: in fact, the number of primitives at the first stages of the pipeline is approximately halved, so that the workload and the power consumption in the subsequent pipeline stages are correspondingly reduced.
- a further advantage is provided by the possibility of hardware implementation adapted for both 3D and 4D back face culling.
- a geometry stage adapted for use with different graphics language can thus be implemented with reduced hardware requirements.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Graphics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Geometry (AREA)
- Image Generation (AREA)
Abstract
- a model view module (201) for generating projection coordinates of primitives of the video signals in a view space, said primitives including visible and non-visible primitives,
- a back face culling module (309) arranged downstream of the model view module (201) for at least partially eliminating the non visible primitives,
- a projection transform module (204) for transforming the coordinates of the video signals from view space coordinates into normalized projection coordinates (P), and
- a perspective divide module (208) for transforming the coordinates of the video signals from normalized projection (P) coordinates into screen space coordinates (S).
Description
- the modeling space: the modeling space is the space in which individual elements in a model are defined. These are usually the "natural" coordinates for an object. For example, a sphere may be defined as having unit radius and be centered at the origin in the modeling coordinates. Subsequent scaling and translation would position and resize the sphere appropriately within a scene;
- the world space: this represents the coordinate system of the final scene prior to viewing. OpenGL does not support an explicit notion of world space separate from view space, but some other graphics systems do (e.g. Direct3D, GKS 3D);
- the view space: the view space has the synthetic camera as its origin, with the view direction along the z-axis. The view space coordinates are obtained from the modeling space via a model-view transformation. This coordinate system is sometimes referred to as "eye space";
- the normalised projection space: here the coordinates are projected into a canonical space via the projection transformation. It is a four-dimensional space, where each coordinate is represented by (x, y, z, w). The view volume represents the region of the model visible to the synthetic camera. It is bounded by six planes, for example z = 0, z = w, y = -w, y = w, x = -w, x = w. A perspective projection results in a view volume that is a frustum;
- the normalised device space: the normalised projection coordinates are converted into normalised device coordinates by dividing the first three coordinates (x, y, z) by the fourth w coordinate to obtain (x/w, y/w, z/w). The values of the resulting coordinates normally lie in the range -1 to 1, or from 0 to 1 in the case of the z coordinate;
- the screen space: this space corresponds to the coordinates used to address the physical displaying device.
- figures 1 to 4 have already been described in the foregoing;
- figure 5 represents a block diagram of a geometry stage of a graphic system according to the invention; and
- figure 6 represents a circuit implementation of part of the geometry stage shown in figure 5.
- if the user adopts a specific command to effect orthogonal or perspective projection, then the pipeline applies the 3D back face culling solution. In that case culling is operated upstream of the transform projection module, thus being more efficient than 4D culling;
- if the user loads a custom projection matrix, then the pipeline applies the 4D back face culling solution.
Claims (15)
- A geometric processing stage (111b) for a pipelined engine for processing video signals and generating therefrom processed video signals in space coordinates (S) adapted for display on a screen, said geometric processing stage (111b) including: a model view module (201) for generating projection coordinates of primitives of said video signals in a view space, said primitives including visible and non-visible primitives, a back face culling module (309) arranged downstream of said model view module (201) for at least partially eliminating said non visible primitives, a projection transform module (204) for transforming the coordinates of said video signals from said view space coordinates into normalized projection coordinates (P), and a perspective divide module (208) for transforming the coordinates of said video signals from said normalized projection (P) coordinates into said screen space coordinates (S), wherein said back face culling module (309) is arranged downstream of said projection transform module (204) and operates on normalized projection (P) coordinates of said primitives, and said perspective divide module (208) is arranged downstream of said back face culling module (309) for transforming the coordinates of said video signals from said normalized projection (P) coordinates into said screen space coordinates (S).
- The processing stage of claim 1, characterized in that said back face culling module (309) is configured for performing a perspective division (S=P/Pw) for transforming said screen coordinates (S) into said normalized projection (P) coordinates.
- The processing stage of claim 2, characterized in that said back face culling module (309) is configured for calculating a projection normal vector (N) of a primitive defined in said normalized projection (P) coordinates.
- The processing stage of claim 3, characterized in that it includes a layer of multipliers (11) for computing subproducts among components of said projection coordinates (P).
- The processing stage of either of claims 3 or 4, characterized in that it includes a comparison circuit (17) suitable for evaluating the sign of said projection normal vector (N) and issuing a signal (EN) enabling subsequent processing depending on said sign.
- The processing stage of any of the previous claims, characterized in that said back face culling module (309) is configured for operating on: 4D normalized projection (P) coordinates of said primitives as received from said projection transform module (204), and 3D view space coordinates of said primitives as received from said model view module (201).
- The processing stage of claim 6, characterized in that said back face culling module (309) includes: a first layer of multipliers (11) for computing first subproducts of components of said coordinates (P), a layer of subtraction nodes (12) for computing differences of said subproducts of coordinates as computed in said first layer of multipliers (11), a second layer of multipliers (13) for computing second subproducts of said differences as computed in said layer of subtraction nodes (12), and a summation node (14) for summing said second subproducts as computed in said second layer of multipliers (13).
- A method of processing video signals and generating therefrom processed video signals in space coordinates (S) adapted for display on a screen, said method including the steps of: generating (210) projection coordinates of primitives of said video signals in a view space, said primitives including visible and non-visible primitives, back face culling (309) said primitives for at least partially eliminating said non visible primitives, transforming (204) the coordinates of said video signals from said view space coordinates into normalized projection coordinates (P), and transforming (208) the coordinates of said video signals from said normalized projection (P) coordinates into said screen space coordinates (S), operating said back face culling (309) on said normalized projection (P) coordinates of said primitives, and transforming the coordinates of said video signals as resulting from said back face culling from said normalized projection (P) coordinates into said screen space coordinates (S).
- The method of claim 8, characterized in that said back face culling (309) includes a perspective division (S=P/Pw) for transforming said screen coordinates (S) into said normalized projection (P) coordinates.
- The method of claim 9, characterized in that said back face culling (309) includes calculating a projection normal vector (N) of a primitive defined in said normalized projection (P) coordinates.
- The method of claim 10, characterized in that it includes the steps of providing a layer of multipliers (11) for computing subproducts among components of said projection coordinates (P).
- The method of either of claims 10 or 11, characterized in that it includes the step of evaluating the sign of said projection normal vector (N) and issuing a signal (EN) enabling subsequent processing depending on said sign.
- The method of any of the previous claims 8 to 12, characterized in that it includes the step of providing a back face culling module (309) configured for selectively operating on: 4D normalized projection (P) coordinates of said primitives as received from said projection transform module (204), and 3D view space coordinates of said primitives as received from said model view module (201).
- The method of claim 13, characterized in that it comprises the step of: including in said back face culling module (309) a first layer of multipliers (11), a layer of subtraction nodes (12), a second layer of multipliers (13), and a summation node (14), and computing via said first layer of multipliers (11) first subproducts of components of said coordinates (P), computing via said layer of subtraction nodes (12) differences of said subproducts of coordinates as computed in said first layer of multipliers (11), computing via said second layer of multipliers (13) second subproducts of said differences as computed in said layer of subtraction nodes (12), and summing via said summation node (14) said second subproducts as computed in said second layer of multipliers (13).
- A computer program product loadable in the memory of a computer and comprising software code portions for performing the method of any of the claims 8 to 14 when the product is run on a computer.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP03015270.6A EP1496475B1 (en) | 2003-07-07 | 2003-07-07 | A geometric processing stage for a pipelined graphic engine, corresponding method and computer program product therefor |
US10/886,939 US7236169B2 (en) | 2003-07-07 | 2004-07-07 | Geometric processing stage for a pipelined graphic engine, corresponding method and computer program product therefor |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP03015270.6A EP1496475B1 (en) | 2003-07-07 | 2003-07-07 | A geometric processing stage for a pipelined graphic engine, corresponding method and computer program product therefor |
Publications (2)
Publication Number | Publication Date |
---|---|
EP1496475A1 true EP1496475A1 (en) | 2005-01-12 |
EP1496475B1 EP1496475B1 (en) | 2013-06-26 |
Family
ID=33442742
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP03015270.6A Expired - Lifetime EP1496475B1 (en) | 2003-07-07 | 2003-07-07 | A geometric processing stage for a pipelined graphic engine, corresponding method and computer program product therefor |
Country Status (2)
Country | Link |
---|---|
US (1) | US7236169B2 (en) |
EP (1) | EP1496475B1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2012047622A2 (en) | 2010-09-28 | 2012-04-12 | Intel Corporation | Backface culling for motion blur and depth of field |
TWI450215B (en) * | 2010-12-14 | 2014-08-21 | Via Tech Inc | Pre-culling processing method, system and computer readable medium for hidden surface removal of image objects |
Families Citing this family (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6563502B1 (en) * | 1999-08-19 | 2003-05-13 | Adobe Systems Incorporated | Device dependent rendering |
US7719536B2 (en) * | 2004-03-31 | 2010-05-18 | Adobe Systems Incorporated | Glyph adjustment in high resolution raster while rendering |
US7580039B2 (en) * | 2004-03-31 | 2009-08-25 | Adobe Systems Incorporated | Glyph outline adjustment while rendering |
US7639258B1 (en) | 2004-03-31 | 2009-12-29 | Adobe Systems Incorporated | Winding order test for digital fonts |
US20050231533A1 (en) * | 2004-04-20 | 2005-10-20 | Lin Chen | Apparatus and method for performing divide by w operations in a graphics system |
US7463261B1 (en) * | 2005-04-29 | 2008-12-09 | Adobe Systems Incorporated | Three-dimensional image compositing on a GPU utilizing multiple transformations |
US7466322B1 (en) | 2005-08-02 | 2008-12-16 | Nvidia Corporation | Clipping graphics primitives to the w=0 plane |
US7420557B1 (en) * | 2005-08-25 | 2008-09-02 | Nvidia Corporation | Vertex processing when w=0 |
US20070171219A1 (en) * | 2006-01-20 | 2007-07-26 | Smedia Technology Corporation | System and method of early rejection after transformation in a GPU |
US8134570B1 (en) * | 2006-09-18 | 2012-03-13 | Nvidia Corporation | System and method for graphics attribute packing for pixel shader usage |
US20080068383A1 (en) * | 2006-09-20 | 2008-03-20 | Adobe Systems Incorporated | Rendering and encoding glyphs |
KR100848687B1 (en) * | 2007-01-05 | 2008-07-28 | 삼성전자주식회사 | 3-dimension graphic processing apparatus and operating method thereof |
US8035641B1 (en) | 2007-11-28 | 2011-10-11 | Adobe Systems Incorporated | Fast depth of field simulation |
US8279222B2 (en) * | 2008-03-14 | 2012-10-02 | Seiko Epson Corporation | Processing graphics data for a stereoscopic display |
KR101682650B1 (en) * | 2010-09-24 | 2016-12-21 | 삼성전자주식회사 | Apparatus and method for back-face culling using frame coherence |
US10783696B2 (en) | 2014-04-05 | 2020-09-22 | Sony Interactive Entertainment LLC | Gradient adjustment for texture mapping to non-orthonormal grid |
US9836816B2 (en) * | 2014-04-05 | 2017-12-05 | Sony Interactive Entertainment America Llc | Varying effective resolution by screen location in graphics processing by approximating projection of vertices onto curved viewport |
US9710881B2 (en) | 2014-04-05 | 2017-07-18 | Sony Interactive Entertainment America Llc | Varying effective resolution by screen location by altering rasterization parameters |
US9865074B2 (en) | 2014-04-05 | 2018-01-09 | Sony Interactive Entertainment America Llc | Method for efficient construction of high resolution display buffers |
US11302054B2 (en) | 2014-04-05 | 2022-04-12 | Sony Interactive Entertainment Europe Limited | Varying effective resolution by screen location by changing active color sample count within multiple render targets |
US9710957B2 (en) | 2014-04-05 | 2017-07-18 | Sony Interactive Entertainment America Llc | Graphics processing enhancement by tracking object and/or primitive identifiers |
US10068311B2 (en) | 2014-04-05 | 2018-09-04 | Sony Interacive Entertainment LLC | Varying effective resolution by screen location by changing active color sample count within multiple render targets |
US10969740B2 (en) | 2017-06-27 | 2021-04-06 | Nvidia Corporation | System and method for near-eye light field rendering for wide field of view interactive three-dimensional computer graphics |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020030693A1 (en) | 1998-01-15 | 2002-03-14 | David Robert Baldwin | Triangle clipping for 3d graphics |
US20020118188A1 (en) * | 2000-04-04 | 2002-08-29 | Natalia Zviaguina | Method and system for determining visible parts of transparent and nontransparent surfaces of three-dimensional objects |
US20030001851A1 (en) | 2001-06-28 | 2003-01-02 | Bushey Robert D. | System and method for combining graphics formats in a digital video pipeline |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6275233B1 (en) * | 1996-11-01 | 2001-08-14 | International Business Machines Corporation | Surface simplification preserving a solid volume |
US5926182A (en) * | 1996-11-19 | 1999-07-20 | International Business Machines Corporation | Efficient rendering utilizing user defined shields and windows |
US6509905B2 (en) * | 1998-11-12 | 2003-01-21 | Hewlett-Packard Company | Method and apparatus for performing a perspective projection in a graphics device of a computer graphics display system |
US6411301B1 (en) * | 1999-10-28 | 2002-06-25 | Nintendo Co., Ltd. | Graphics system interface |
US6618048B1 (en) * | 1999-10-28 | 2003-09-09 | Nintendo Co., Ltd. | 3D graphics rendering system for performing Z value clamping in near-Z range to maximize scene resolution of visually important Z components |
US6774895B1 (en) * | 2002-02-01 | 2004-08-10 | Nvidia Corporation | System and method for depth clamping in a hardware graphics pipeline |
US7027059B2 (en) * | 2002-05-30 | 2006-04-11 | Intel Corporation | Dynamically constructed rasterizers |
- 2003-07-07 EP EP03015270.6A patent/EP1496475B1/en not_active Expired - Lifetime
- 2004-07-07 US US10/886,939 patent/US7236169B2/en active Active
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2012047622A2 (en) | 2010-09-28 | 2012-04-12 | Intel Corporation | Backface culling for motion blur and depth of field |
EP2622580A4 (en) * | 2010-09-28 | 2017-06-21 | Intel Corporation | Backface culling for motion blur and depth of field |
TWI450215B (en) * | 2010-12-14 | 2014-08-21 | Via Tech Inc | Pre-culling processing method, system and computer readable medium for hidden surface removal of image objects |
Also Published As
Publication number | Publication date |
---|---|
US7236169B2 (en) | 2007-06-26 |
EP1496475B1 (en) | 2013-06-26 |
US20050190183A1 (en) | 2005-09-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP1496475B1 (en) | A geometric processing stage for a pipelined graphic engine, corresponding method and computer program product therefor | |
US6137497A (en) | Post transformation clipping in a geometry accelerator | |
US8436854B2 (en) | Graphics processing unit with deferred vertex shading | |
US7292242B1 (en) | Clipping with addition of vertices to existing primitives | |
US6052129A (en) | Method and apparatus for deferred clipping of polygons | |
US6417858B1 (en) | Processor for geometry transformations and lighting calculations | |
US6141013A (en) | Rapid computation of local eye vectors in a fixed point lighting unit | |
EP3340183B1 (en) | Graphics processing employing cube map texturing | |
US7746355B1 (en) | Method for distributed clipping outside of view volume | |
US7755626B2 (en) | Cone-culled soft shadows | |
US7659893B1 (en) | Method and apparatus to ensure consistency of depth values computed in different sections of a graphics processor | |
US6384824B1 (en) | Method, system and computer program product for multi-pass bump-mapping into an environment map | |
US7948487B2 (en) | Occlusion culling method and rendering processing apparatus | |
US7812837B2 (en) | Reduced Z-buffer generating method, hidden surface removal method and occlusion culling method | |
JP2009032122A (en) | Image processor, image processing method, and program | |
US7400325B1 (en) | Culling before setup in viewport and culling unit | |
US7466322B1 (en) | Clipping graphics primitives to the w=0 plane | |
KR20090058015A (en) | Method and device for performing user-defined clipping in object space | |
US7292239B1 (en) | Cull before attribute read | |
NO324930B1 (en) | Device and method for calculating raster data | |
US5892516A (en) | Perspective texture mapping circuit having pixel color interpolation mode and method thereof | |
US11941741B2 (en) | Hybrid rendering mechanism of a graphics pipeline and an effect engine | |
US20100277488A1 (en) | Deferred Material Rasterization | |
US20020126127A1 (en) | Lighting processing circuitry for graphics adapter | |
KR0164160B1 (en) | A graphic processor |
Legal Events

Code | Title | Description
---|---|---
PUAI | Public reference made under article 153(3) EPC to a published international application that has entered the European phase | Free format text: ORIGINAL CODE: 0009012
AK | Designated contracting states | Kind code of ref document: A1 Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LI LU MC NL PT RO SE SI SK TR
AX | Request for extension of the European patent | Extension state: AL LT LV MK
17P | Request for examination filed | Effective date: 20050304
AKX | Designation fees paid | Designated state(s): DE FR GB IT
RAP1 | Party data changed (applicant data changed or rights of an application transferred) | Owner name: STMICROELECTRONICS LIMITED Owner name: STMICROELECTRONICS SRL
RAP1 | Party data changed (applicant data changed or rights of an application transferred) | Owner name: STMICROELECTRONICS SRL Owner name: STMICROELECTRONICS LIMITED
17Q | First examination report despatched | Effective date: 20121005
GRAP | Despatch of communication of intention to grant a patent | Free format text: ORIGINAL CODE: EPIDOSNIGR1
GRAS | Grant fee paid | Free format text: ORIGINAL CODE: EPIDOSNIGR3
GRAA | (expected) grant | Free format text: ORIGINAL CODE: 0009210
AK | Designated contracting states | Kind code of ref document: B1 Designated state(s): DE FR GB IT
REG | Reference to a national code | Ref country code: GB Ref legal event code: FG4D
REG | Reference to a national code | Ref country code: DE Ref legal event code: R081 Ref document number: 60344354 Country of ref document: DE Owner name: STMICROELECTRONICS (RESEARCH & DEVELOPMENT) LI, GB Free format text: FORMER OWNERS: STMICROELECTRONICS LTD., ALMONDSBURY, BRISTOL, GB; STMICROELECTRONICS S.R.L., AGRATE BRIANZA, IT Ref country code: DE Ref legal event code: R081 Ref document number: 60344354 Country of ref document: DE Owner name: STMICROELECTRONICS SRL, IT Free format text: FORMER OWNERS: STMICROELECTRONICS LTD., ALMONDSBURY, BRISTOL, GB; STMICROELECTRONICS S.R.L., AGRATE BRIANZA, IT
REG | Reference to a national code | Ref country code: DE Ref legal event code: R096 Ref document number: 60344354 Country of ref document: DE Effective date: 20130822
PLBE | No opposition filed within time limit | Free format text: ORIGINAL CODE: 0009261
STAA | Information on the status of an EP patent application or granted EP patent | Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT
GBPC | GB: European patent ceased through non-payment of renewal fee | Effective date: 20130926
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to EPO] | Ref country code: IT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20130626
26N | No opposition filed | Effective date: 20140327
REG | Reference to a national code | Ref country code: DE Ref legal event code: R097 Ref document number: 60344354 Country of ref document: DE Effective date: 20140327
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to EPO] | Ref country code: GB Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20130926
REG | Reference to a national code | Ref country code: FR Ref legal event code: PLFP Year of fee payment: 14
REG | Reference to a national code | Ref country code: FR Ref legal event code: PLFP Year of fee payment: 15
REG | Reference to a national code | Ref country code: DE Ref legal event code: R082 Ref document number: 60344354 Country of ref document: DE Representative's name: SCHMITT-NILSON SCHRAUD WAIBEL WOHLFROM PATENTA, DE Ref country code: DE Ref legal event code: R081 Ref document number: 60344354 Country of ref document: DE Owner name: STMICROELECTRONICS (RESEARCH & DEVELOPMENT) LI, GB Free format text: FORMER OWNERS: STMICROELECTRONICS LIMITED, BRISTOL, GB; STMICROELECTRONICS SRL, AGRATE BRIANZA, IT Ref country code: DE Ref legal event code: R081 Ref document number: 60344354 Country of ref document: DE Owner name: STMICROELECTRONICS SRL, IT Free format text: FORMER OWNERS: STMICROELECTRONICS LIMITED, BRISTOL, GB; STMICROELECTRONICS SRL, AGRATE BRIANZA, IT
REG | Reference to a national code | Ref country code: FR Ref legal event code: TQ Owner name: STMICROELECTRONICS (RESEARCH & DEVELOPMENT) LI, GB Effective date: 20180103 Ref country code: FR Ref legal event code: TQ Owner name: STMICROELECTRONICS SRL, IT Effective date: 20180103
REG | Reference to a national code | Ref country code: FR Ref legal event code: PLFP Year of fee payment: 16
PGFP | Annual fee paid to national office [announced via postgrant information from national office to EPO] | Ref country code: FR Payment date: 20220622 Year of fee payment: 20
PGFP | Annual fee paid to national office [announced via postgrant information from national office to EPO] | Ref country code: DE Payment date: 20220621 Year of fee payment: 20
REG | Reference to a national code | Ref country code: DE Ref legal event code: R071 Ref document number: 60344354