BACKGROUND OF THE INVENTION

[0001]
The present invention relates to the field of per-pixel lighting in real-time three-dimensional (“3D”) computer graphics hardware and software. Presently, most real-time computer graphics systems rely on per-vertex lighting schemes such as Gouraud shading. In this scheme, the curvature of a polygon surface is represented through different surface normal vectors at each polygon vertex. Lighting calculations are carried out for each vertex and the resultant color information is interpolated across the surface of the polygon. Lighting schemes such as Gouraud shading are generally utilized for their speed and simplicity of operation since they require far less calculation than more complex strategies. Per-pixel lighting, in contrast, is a lighting strategy in which separate lighting calculations for one or more light sources are carried out for each pixel of a drawn polygon. Most well-known per-pixel lighting strategies are variations on a basic vertex normal interpolation scheme, i.e., Phong shading. Vertex normal interpolation strategies interpolate the normal vectors given at each vertex throughout the polygon surface. For each pixel, the interpolated vertex normal is normalized to unit length and then used in per-pixel lighting calculations. Typically the per-pixel calculations involve taking the dot product of the normal vector and the light source vector to arrive at a light source brightness coefficient. While fast per-pixel dot product hardware is not infeasible with the speed and complexity of today's microprocessors, the calculations involved in normalizing the interpolated vertex normal (i.e., floating point square root and division) are prohibitive for practical real-time implementation at high speed.
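The per-pixel cost described above can be sketched as follows. This is the generic Phong-style per-pixel calculation being contrasted, not the claimed method; the function name and the clamping of back-facing results are illustrative assumptions.

```python
import math

def phong_diffuse(interp_normal, light_dir):
    """Per-pixel Phong diffuse term: normalize the interpolated vertex normal
    (the costly square-root and division step), then dot it with a unit light
    source vector to obtain the brightness coefficient."""
    nx, ny, nz = interp_normal
    length = math.sqrt(nx * nx + ny * ny + nz * nz)  # square root per pixel
    nx, ny, nz = nx / length, ny / length, nz / length  # divisions per pixel
    lx, ly, lz = light_dir
    # Clamp to zero: surfaces facing away from the light receive no diffuse light.
    return max(0.0, nx * lx + ny * ly + nz * lz)
```

The square root and three divisions on every drawn pixel are exactly the operations the disclosure identifies as prohibitive for real-time hardware.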

[0002]
Another per-pixel lighting technique, commonly referred to as bump mapping, involves using a two-dimensional (“2D”) map to store surface height or orientation and using texel values from this map to perturb a (usually interpolated) surface normal vector. Calculation in traditional combinational bump mapping (i.e., where the bump map angle perturbation is combined with a potentially changing surface normal) mostly involves resolving the bump map perturbation to a 3D vector that is subsequently combined with the surface normal vector. Since the surface normal vector may change from pixel to pixel, an appropriate, usually orthogonal, orientation must be given to the bump map vector. This process usually requires additional normalization and a significant computational overhead, making combinational bump mapping approaches impractical for efficient real-time calculation. A well-known method of avoiding these calculations is to store a bump map as a collection of normalized 3D vectors, thereby avoiding the need for normalization and combination. While this strategy is more practical for real-time implementations, it has several drawbacks. Such a system is inflexible since bump maps may only be used for objects in preset orientations, and surface curvature must be represented within the bump map rather than through vertex normals as in Phong shading and its equivalents. Furthermore, the accuracy of the image is limited by the granularity of the bump map, since values falling between adjacent texels are traditionally interpolated but not renormalized. Another drawback of the above-mentioned bump mapping scheme is the size and inflexibility of the bump maps. Since the bump map texels contain 3D vectors, medium to large complexity maps will occupy a great deal of memory. Also, due to the specific nature of the bump maps, they are generally only usable on the surfaces for which they were designed; therefore such bump maps are not often reused across multiple surfaces.

[0003]
A further aspect of per-pixel lighting is the calculation of intensity of specular reflections. Traditionally, the calculation of specular reflection involves the dot product of the light source vector and the view reflection vector (the view, or eye, vector reflected around the surface normal vector). Alternately, the same calculation can be made with the dot product of the view vector and the reflection of the light vector around the normal. In either of the alternatives, at least one vector must be reflected around a surface normal vector that potentially changes from pixel to pixel. The calculation required to obtain a reflected vector, while not as costly as bump map combination, is nonetheless significant.

[0004]
Yet another complication in per-pixel lighting is presented by the cases of point light sources and point view vectors. Point light sources involve a light vector that changes on a per-pixel basis. Traditionally, the difference vector between the surface point and the light source is calculated and normalized for each pixel, which is computationally undesirable for efficient calculation. Likewise, point view vectors involve a view vector that changes on a per-pixel basis. Utilizing point view vectors also requires the calculation and normalization of a difference vector on a per-pixel basis.

[0005]
The application of the aforementioned per-pixel lighting techniques provides visually enhanced, higher quality and more realistic images than today's real-time image generators are capable of producing. While techniques exist which can provide similar images, these techniques are difficult to implement and inflexible to use. Therefore, there exists a real need for a practical and efficient apparatus and method that provides vertex normal interpolation, combinational bump mapping, specular reflection calculation, and support for point lighting and point viewer within real-time 3D graphics systems.
SUMMARY OF THE INVENTION

[0006]
The present invention is directed to a method for shading polygon surfaces in a real-time rendering system. The method includes the step of providing at least one polygon surface to be shaded, the polygon surface having a plurality of pixels and including at least one surface angle. The method also includes the step of providing at least one point light source. The method further includes the step of calculating, using computer hardware, for substantially each drawn pixel of said polygon surface, a substantially normalized 3D surface direction vector and a 3D point light vector.
BRIEF DESCRIPTION OF SEVERAL VIEWS OF THE DRAWINGS

[0007]
FIG. 1 is a diagram illustrating the translation of normal vectors to a view coordinate system in accordance with a preferred embodiment of the invention.

[0008]
FIG. 2 is a diagram illustrating the conversion of a 3D vector into an angle-proportional 2D vector in accordance with a preferred embodiment of the invention.

[0009]
FIG. 3 is a diagram illustrating the combination of a surface angle vector and a bump map vector to produce a composite surface angle vector in accordance with a preferred embodiment of the invention.

[0010]
FIG. 4 is a diagram illustrating the production of a view reflection vector from a composite surface angle vector in accordance with a preferred embodiment of the invention.

[0011]
FIG. 5 is a diagram illustrating the calculation of the view reflection vector.

[0012]
FIG. 6 is a diagram of a preferred hardware embodiment of the present invention.

[0013]
FIG. 7 is a diagram illustrating an AP translation unit in accordance with a preferred embodiment of the invention.

[0014]
FIG. 8 is a diagram illustrating a preferred hardware embodiment of the per-pixel operation of the present invention.

[0015]
FIG. 9 is a diagram illustrating the preferred embodiment of the point light operations of the present invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS OF THE INVENTION

[0016]
The present invention provides a method and system for the efficient calculation of complex per-pixel lighting effects in a real-time computer graphics system. For the purposes of this disclosure, the term “real-time computer graphics system” is defined as any computer-based system capable of or intended to generate images at a rate greater than or equal to 10 images per second. Some examples of real-time computer graphics systems include: standalone console videogame hardware, 3D graphics accelerator cards for PCs and workstation-class computers, multipurpose set-top boxes, virtual reality imaging devices, and imaging devices for commercial or military flight simulators. All of the above-mentioned systems are likely to benefit from the increased image quality afforded by the methods and practices of the present invention.

[0017]
As used herein, the term “angle-proportional” is defined as a characteristic of a 2D vector wherein the length of the 2D vector is proportional to the angle between a 3D direction vector (corresponding to said 2D vector) and a 3D axis vector (usually representing the z-axis of a predefined coordinate system).

[0018]
As also used herein, the term “view coordinate system” is defined as a 3D coordinate system (which can be defined by a 3D position vector and at least three 3D direction vectors) that represents the position and orientation from which a 3D scene is being viewed.

[0019]
As further used herein, the term “view vector” is defined as a 3D vector representing the forward direction from which a scene is being viewed. The view vector is usually directed, either positively or negatively, along the z-axis of the view coordinate system and is expressed in world/view coordinates.

[0020]
As further used herein, the term “current polygon” is defined herein as the polygon that is currently being operated on by the methods of the present invention.

[0021]
Lastly, as used herein, the term “current pixel” is defined herein as the pixel within a polygon surface currently being operated on by methods of the present invention.

[0022]
The present invention comprises two areas of execution within a computer graphics system: per-polygon operations and per-pixel operations. The per-polygon operations of the present invention are performed once for each polygon in a scene to which the present invention is applied. Likewise, the per-pixel operations of the present invention are performed for each drawn pixel on a polygon surface, wherein the aforementioned per-polygon operations are assumed to have been previously applied to said polygon. Additionally, the present invention provides a method to enable accurate real-time calculation of point light vectors useful for advanced lighting strategies. Most of the per-polygon and per-pixel operations of the present invention are detailed in U.S. patent application Ser. No. 09/222,036, filed on Dec. 29, 1998, in the name of David J. Collodi, the disclosure of which is hereby incorporated by reference. The operations are detailed herein for purposes of consistency and example.

[0023]
The per-polygon operations of the present invention are performed in order to provide a set of angle-proportional surface angle vectors to be utilized within the per-pixel operations. For purposes of simplicity, this disclosure shall assume the existence of a polygon to be rendered wherein said polygon provides a 3D surface normal vector for each of its vertices and said polygon is the current polygon. The surface normal vectors collectively specify the amount of curvature along the polygon surface.

[0024]
First, the surface normal vectors of the current polygon are rotated to correspond to the direction of the view coordinate system. It is well known in the art that a 3D coordinate system (or rather the translation to a particular 3D coordinate system) can be represented by a 4×4 matrix. A 4×4 matrix can represent both rotational and positional translations. Since the rotation of surface normal vectors requires only rotational translations, a 3×3 matrix, M, is used. Each surface normal vector, N_{i}, is multiplied by matrix M to produce a corresponding rotated surface vector, R_{i}.

M*N_{i}=R_{i} (1)

[0025]
The above calculation is performed for each surface normal vector belonging to the current polygon, N_{1} through N_{x}, where x is the number of vertices in the polygon. After this calculation is performed for each surface normal vector (i.e., for each vertex), a set of rotated surface vectors, R_{1} through R_{x}, is produced. The purpose of performing this rotation to view coordinate space is to provide a common orientation to all polygons in the rendered frame. Additionally, all 3D directional and point light source vectors to be used in the lighting of the current polygon must be translated to the view coordinate system as well. The translation of said 3D light source vectors is accomplished by the same, above-mentioned, translations used to translate the surface normal vectors. The result of the 3D light source vector translation is a set of corresponding 3D rotated light vectors that are expressed relative to the view coordinate system.
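Equation (1) above amounts to one 3×3 matrix multiply per vertex normal. A minimal sketch, where the function name and row-major matrix layout are illustrative assumptions:

```python
def rotate_normals(M, normals):
    """Apply 3x3 rotation matrix M (the rotational part of the view
    transformation) to each surface normal N_i, producing the rotated
    surface vectors R_i of equation (1)."""
    rotated = []
    for nx, ny, nz in normals:
        rotated.append((
            M[0][0] * nx + M[0][1] * ny + M[0][2] * nz,
            M[1][0] * nx + M[1][1] * ny + M[1][2] * nz,
            M[2][0] * nx + M[2][1] * ny + M[2][2] * nz,
        ))
    return rotated
```

The same routine serves to rotate the 3D directional and point light source vectors into the view coordinate system, since only the rotational 3×3 portion of the full 4×4 transform is needed.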

[0026]
Next, each rotated vector, R_{i}, is transformed to an angle-proportional 2D surface angle vector, n_{i}, where the length of n_{i} is proportional to the angle between R_{i} and the z-axis. A procedure for the transformation of a 3D vector into a corresponding angle-proportional 2D vector is detailed in the above-identified Collodi U.S. patent application Ser. No. 09/222,036. After transforming each R vector, a set of 2D surface angle vectors, n_{1} through n_{x}, is created. Each n vector is angle-proportional to its corresponding R vector. FIG. 1 demonstrates a translation of normal vectors, 6, to a view coordinate system. The resulting R vectors 10 and corresponding n vectors 12 are illustrated. As a result of the transformations detailed in Collodi U.S. patent application Ser. No. 09/222,036, an angle-proportional vector length of 1.0 corresponds to an angle of 90° between the original 3D vector and its z-axis. This is demonstrated by FIG. 2, wherein the vector 22 is converted to the angle-proportional 2D vector 24 where the length of the 2D vector, 26, corresponds to the angle between the original vector and its axis vector 18. The preceding scale of angle-proportional vectors will be used herein for purposes of clarity and example only. Those of ordinary skill in the art should recognize that the resultant angle-proportional vectors can be transformed into any arbitrary 2D coordinate system. Furthermore, although the present disclosure presents angle-proportional vector values in floating point format, this is done for purposes of example only and it may be more efficient in practice to work with angle-proportional vectors in a fixed-point format. For example, a value of 1.0 could be represented as 256 in a fixed-point format, whereas 2.0 is represented as 512, and so on. It is a preferred practice of the present invention to deal with fixed-point 2D vectors.
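One plausible realization of the 3D-to-angle-proportional conversion, under the stated convention that a 2D length of 1.0 corresponds to 90°, is sketched below. The exact procedure is detailed in the referenced Ser. No. 09/222,036 application; this floating point version is illustrative only.

```python
import math

def to_angle_proportional(v):
    """Convert a unit 3D vector v into a 2D vector whose direction is the
    x-y direction of v and whose length is proportional to the angle between
    v and the +z axis (length 1.0 <-> 90 degrees, 2.0 <-> 180 degrees)."""
    x, y, z = v
    angle = math.acos(max(-1.0, min(1.0, z)))  # angle from the z-axis, radians
    length = angle / (math.pi / 2.0)           # rescale so 90 deg -> 1.0
    xy = math.hypot(x, y)
    if xy == 0.0:
        return (0.0, 0.0)                      # vector lies on the z-axis
    return (x / xy * length, y / xy * length)
```

A fixed-point variant would simply scale the result (e.g., 1.0 represented as 256), as the disclosure prefers for hardware.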

[0027]
An optional step is to limit the vectors that are far from the view angle. The direction of vectors at or near 180° from the viewer is unstable. It is therefore advantageous to limit the direction and distance of these vectors. An example of a basic limiting method is detailed in the following disclosure. First, a 3D vector U is obtained where the direction of U is perpendicular and normal to the plane of the polygon (i.e., U is the “real” polygon surface normal vector). Next, the x and y components of U are scaled by dividing each component by the larger component (either x or y) of U. Then the scaled x and y components of U are doubled. The scaled x and y components of U form 2D vector u, which represents the angle-proportional direction of the polygon surface at (or slightly greater than) 180°. Angle-proportional n vectors with large angles relative to the viewer (which can easily be derived from the z-coordinate of the corresponding R vector) are interpolated with the u vector, weighted on the relative angle (to the viewer) of the n vector.
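The limiting method above might be sketched as follows. The linear blend weighting is an assumption, since the disclosure leaves the exact weighting function open; magnitudes of the components are used for the divisor, which is also an illustrative reading.

```python
def limit_vector_180(U):
    """Derive 2D vector u: scale the x and y components of the true polygon
    normal U by its larger (x or y) component, then double them, giving the
    angle-proportional surface direction at (or slightly past) 180 degrees."""
    x, y, _z = U
    m = max(abs(x), abs(y))
    if m == 0.0:
        return (0.0, 0.0)
    return (2.0 * x / m, 2.0 * y / m)

def limit_far_vector(n, angle_ratio, u):
    """Blend angle-proportional vector n toward u as the surface angle nears
    180 degrees.  angle_ratio in [0, 1]: 0 near the view axis, 1 at 180 deg.
    A simple linear blend is assumed here."""
    t = angle_ratio
    return ((1.0 - t) * n[0] + t * u[0], (1.0 - t) * n[1] + t * u[1])
```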

[0028]
A further optional step at this point is to calculate a two-dimensional bump map rotation value. Since a bump map, in whatever format it is presented, is basically a 2D texture map, the map itself has its own local coordinate system, i.e., which direction is up, down, left, right, etc. The bump map is mapped arbitrarily onto the polygon surface and therefore may not necessarily share the same orientation as the view coordinate system. Since bump map perturbations will be done in 2D space, only a 2D rotation value is necessary to specify the 2D rotation of the bump map coordinate system relative to the view coordinate system. A simple method of obtaining said bump map rotation is to perform a comparison of the bump map orientation (using the bump map coordinate values provided at each polygon vertex) to the screen orientation of the translated polygon (since the screen orientation corresponds directly to the view coordinate system). Two 2D bump map rotation vectors are required to specify the translation from the bump map orientation to the view orientation. The use of any known techniques to obtain said 2D vectors is acceptable. In one embodiment of the present invention, the bump map orientation vectors are used to rotate each of the above-mentioned 2D surface angle vectors, n_{1} through n_{x}, to the bump map orientation. Additionally, the aforementioned 3D rotated light vectors must also be rotated (in the x-y plane) to the bump map orientation. This is accomplished by applying the bump map rotational translations to the x and y coordinates of each 3D rotated light vector. An alternate embodiment uses the inverse of the bump map orientation vectors to translate 2D bump map vectors into the view coordinate system, as opposed to translating surface angle and light vectors to the bump map coordinate system.
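The 2D rotations described above, whether carrying surface angle and light vectors to the bump map orientation or, inversely, bump map vectors to the view orientation, reduce to a standard 2D rotation. A sketch, with the rotation supplied as a unit (cos, sin) pair; the representation is an illustrative assumption:

```python
def rotate2d(v, rot):
    """Rotate 2D vector v by the rotation encoded in the unit vector
    rot = (cos a, sin a).  Passing (cos a, -sin a) applies the inverse
    rotation, as used in the alternate embodiment."""
    c, s = rot
    x, y = v
    return (c * x - s * y, s * x + c * y)
```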

[0029]
The next section details the per-pixel operations of the present invention. As previously stated, the per-pixel operations of the present invention are performed at least for each visible pixel on the screen surface of the current polygon during the scan-line conversion of the polygon. Note that the per-pixel operations detailed herein need not be performed concurrently with the drawing of the current polygon to video RAM. Alternate embodiments of the present invention perform per-pixel lighting operations prior to final rendering to screen memory. Additional embodiments of the present invention perform per-pixel lighting operations after color values have been placed in screen memory. It is, however, a preferred method to perform per-pixel lighting operations concurrently with the drawing of color values to screen memory.

[0030]
Initially, the previously mentioned set of 2D surface angle vectors is interpolated from their vertex values, n_{1} through n_{x} as previously defined, to the location of the current pixel. Techniques for interpolating vertex values to an arbitrary point within a polygon are well known to those skilled in the art. Any interpolation strategy can be used including, but not limited to, linear interpolation, inverse (perspective-correct) interpolation, quadratic or cubic interpolation. The interpolation of 2D surface angle vectors produces an aggregate 2D surface angle vector, n, which represents the orientation of the polygon surface at the current pixel. In circumstances where the orientation of the polygon surface does not change, i.e., flat surfaces, it is not necessary to interpolate the n value from given vertex values since all vertex values would be the same. In this case, a fixed n value may be used which is generally, although not necessarily, equivalent to the normal surface orientation of the current polygon.
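The linear-interpolation case can be sketched as a weighted sum of the per-vertex vectors. The weights (e.g., barycentric coordinates for a triangle) are assumed to be supplied by the rasterizer; perspective-correct variants would adjust the weights before this step.

```python
def interp_surface_angle(verts_n, weights):
    """Interpolate the per-vertex 2D surface angle vectors n_1..n_x to the
    current pixel.  weights are the pixel's interpolation weights, summing
    to 1 (e.g., barycentric coordinates for a triangle)."""
    nx = sum(w * n[0] for n, w in zip(verts_n, weights))
    ny = sum(w * n[1] for n, w in zip(verts_n, weights))
    return (nx, ny)
```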

[0031]
Next, the aggregate surface angle vector, n, is combined with a 2D bump map vector, b. In one embodiment of the present invention, the bump map vector is obtained from a given bump map and accessed by interpolated bump map coordinates given at the polygon vertices in accordance with standard vertex mapping techniques well-known by those skilled in the applicable art. The 2D bump map vector may be obtained directly from the texel values stored in the bump map. Alternately, the 2D bump map vector may be calculated from retrieved texel values stored in the bump map. One well-known example of said bump map vector calculations is storing relative height values in the bump map. Height values are retrieved for the nearest three texel values. Assuming that the texel at coordinates x, y (t(x,y)) maps to the current pixel, then texels t(x,y), t(x+1,y), and t(x,y+1) are loaded from the bump map. Since each texel contains a scalar height value, the 2D bump map vector, b, is calculated from the differences in height values in the following manner:

b=(t(x+1,y)−t(x,y),t(x,y+1)−t(x,y)) (2)
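Equation (2) can be sketched directly. The row-major `[y][x]` indexing of the height map is an illustrative assumption; t(x, y) in the text corresponds to `heightmap[y][x]` here.

```python
def bump_vector_from_heights(heightmap, x, y):
    """Equation (2): compute the 2D bump map vector b at texel (x, y) as
    forward height differences in the x and y directions.
    heightmap is assumed row-major, indexed [y][x]."""
    t = heightmap
    return (t[y][x + 1] - t[y][x], t[y + 1][x] - t[y][x])
```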

[0032]
An alternate method for storing bump map data involves storing a polar representation of the bump map vector at each texel. The polar representation comprises two fields, one for the 2D angle of the bump map vector and another for the magnitude of the bump map vector. A preferred method of retrieving the 2D bump map vector from said polar representation is through the use of a lookup table. The direction and magnitude values (or functions of those values) are used to index a lookup table which returns the appropriate 2D bump map vector. The primary advantage of storing bump map vectors in polar representation is that the rotation of polar vectors is easily accomplished. In the aforementioned embodiments in which the bump map vector is rotated to view orientation, said rotation is facilitated by storing bump map vectors in polar representation. Rotating a polar vector involves providing a scalar angle of rotation (for example, an 8-bit number where the value 256 is equivalent to 360°) and simply adding that number to the rotation value of the polar vector.
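A sketch of the polar-texel lookup and rotation described above. The table resolution (256 angles, 16 magnitude codes) and the unit magnitude scale are illustrative assumptions; note how rotation to the view orientation costs only one 8-bit addition before the table read.

```python
import math

# Precomputed lookup table mapping (8-bit angle, 4-bit magnitude) to a 2D vector.
ANGLES = 256   # 256 <-> 360 degrees, per the disclosure's example
MAGS = 16      # magnitude codes 0..15 spanning [0, 1]; an assumption
POLAR_LUT = [[(m / (MAGS - 1) * math.cos(a * 2.0 * math.pi / ANGLES),
               m / (MAGS - 1) * math.sin(a * 2.0 * math.pi / ANGLES))
              for m in range(MAGS)] for a in range(ANGLES)]

def polar_texel_to_vector(angle8, mag4, rotation8=0):
    """Decode a polar bump map texel into a 2D bump vector, rotating by the
    scalar rotation8 simply by adding it to the angle field (mod 256)."""
    return POLAR_LUT[(angle8 + rotation8) & 0xFF][mag4]
```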

[0033]
For added image quality, map-based bump map values may be additionally interpolated with any well-known texel interpolation scheme such as bilinear or trilinear interpolation. For direct mapping schemes, i.e., where texels contain 2D bump map vectors, the vector values given at each texel are interpolated. Alternately, for indirect mapping schemes, such as the height map detailed above, it is desirable to first calculate all necessary 2D bump map vectors and subsequently interpolate those vectors. It should be noted that one or more 2D bump map vectors may be combined to produce the final b vector. The ability to easily combine and aggregate multiple bump maps and/or to combine bump map perturbation with a variable surface normal is an advantageous feature of the present invention since this technique provides for a great deal of flexibility, reusability and decreased memory costs in many 3D graphics applications.

[0034]
In an alternate embodiment of the present invention, 2D bump map values are calculated procedurally from a function of the surface position (and other optional values). Procedural texture/bump map techniques offer the advantages of flexibility and minimal memory usage balanced with the cost of additional calculation. Alternately, if bump mapping is not selected for the current polygon, a null b vector (0,0) can be used. In this case, it is not necessary to combine the bump map vector with the n vector and the combination step may therefore be skipped. For the purposes of clarity and continuity of the example detailed herein, a b vector of (0,0) will be used for cases in which bump mapping is not used.

[0035]
Once the bump map vector, b, is arrived at, it is combined with the n vector through vector addition to produce the composite surface angle vector, c:

c=n+b (3)

[0036]
The c vector represents the composite orientation of the polygon surface at the current pixel with respect to polygon curvature and bump map perturbation. FIG. 3 demonstrates the combination of surface angle vector n 28 and bump map vector b 30 to produce the composite surface angle vector c 32. In alternate embodiments, the aforementioned c vector is used to address an environment map. Environment maps are traditionally 2D color maps that provide reflection information for a given scene. Since the c vector represents the composite orientation (due to surface bump and curvature) of the current pixel in relation to the view coordinate system, it can be used to accurately address a 2D environment map that is also (traditionally) relative to the view coordinate system. By addressing an environment map in this manner, a consistency is maintained between lighting equations and reflection values (from an environment map). A significant feature of the present invention is the provision of a method for coordinating traditional, equation-based, lighting information with reflection (environment map) values in a real-time 3D graphics system.
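Equation (3) and the environment map addressing described above can be sketched together. The mapping of the angle-proportional range [-2, 2] onto the map texels is an illustrative assumption; any monotonic mapping consistent with the map's layout would serve.

```python
def composite_vector(n, b):
    """Equation (3): composite surface angle vector c = n + b."""
    return (n[0] + b[0], n[1] + b[1])

def env_map_texel(c, size):
    """Address a square (size x size) 2D environment map with composite
    vector c, assuming the map spans the angle-proportional range [-2, 2]
    in both axes.  Returns clamped integer texel coordinates."""
    u = int((c[0] + 2.0) / 4.0 * (size - 1))
    v = int((c[1] + 2.0) / 4.0 * (size - 1))
    return (max(0, min(size - 1, u)), max(0, min(size - 1, v)))
```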

[0037]
Once the c vector is arrived at, the view reflection vector is next calculated. The view reflection vector represents the direction in which the view vector reflects off of the surface at the current pixel. Since the 2D vector coordinate space is angle-proportional to the view vector, the direction of the view vector is located at coordinates (0,0). Consequently, the 2D view reflection vector, r, reflected around the c vector (which represents the current pixel surface orientation) is simply the c vector doubled:

r=2c (4)

[0038]
FIG. 4 illustrates the production of view reflection vector r 34 from composite surface angle vector c 36.

[0039]
The above calculation is accurate provided that the direction of view is always directed along the zaxis of the view coordinate system. For most applications, this assumption is accurate enough to produce visually sufficient results. However, the exact view direction varies in accordance with the screen position of the current pixel since its screen position represents an intersection between the view plane and the vector from the focal point to the object surface. The preceding scenario in which the view direction is allowed to vary with screen coordinates is commonly referred to as a point viewer. In cases in which point viewing is desired, the view reflection vector, r, must be calculated in an alternate manner. First the 2D displacement vector of the screen coordinates of the current pixel and the screen coordinates of the center of the screen must be found. Assuming the screen coordinates of the current pixel are represented by 2D vector p, and the screen coordinates of the center of the screen are represented by 2D vector h, the 2D displacement vector, d, is calculated as follows:

d=p−h (5)

[0040]
Next, 2D displacement vector d is converted to an approximately angle-proportional 2D offset vector, o. The most straightforward way to convert d to o is to multiply d by a scalar value, y, representing the ratio of the viewing angle to the screen width. The viewing angle represents the total angle from the focal point to the two horizontal (or vertical) edges of the screen and should be given in the same angle-proportional scale as other angle-proportional vectors (in this example, a value of 1.0 representing 90°). The screen width is simply the width (or height) of the screen in pixels. For example, if the viewing angle is 45° and the screen is 100 pixels wide, the y value would be 0.5/100, or 1/200. The o vector is calculated as follows:

o=d*y (6)

[0041]
In order to calculate view reflection vector, r, in the case of a point viewer, the r vector is positively displaced by o. The formula for r is:

r=2c+o (7)

[0042]
The calculation is illustrated in FIG. 5, where view reflection vector r 38 is found by doubling vector c 42 and adding vector o 40.
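Equations (4) through (7) can be gathered into a single sketch. The argument names are illustrative; the point-viewer path computes d and o per equations (5) and (6) and applies equation (7), while the default path is the plain r = 2c of equation (4).

```python
def view_reflection(c, p=None, h=None, view_angle=None, screen_width=None):
    """2D view reflection vector.  With no point viewer, r = 2c (equation 4).
    With a point viewer: d = p - h (eq. 5), o = d * y (eq. 6) where y is the
    viewing-angle-to-screen-width ratio, and r = 2c + o (eq. 7)."""
    r = (2.0 * c[0], 2.0 * c[1])
    if p is not None:
        y = view_angle / screen_width       # e.g. 0.5 / 100 for 45 deg, 100 px
        o = ((p[0] - h[0]) * y, (p[1] - h[1]) * y)
        r = (r[0] + o[0], r[1] + o[1])
    return r
```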

[0043]
It should be noted that the above formula is only an approximation of the true view reflection vector. However, the approximate view reflection calculated by the preceding formula is able to produce visually consistent and convincing images with little or no discernible loss in image quality. In alternate embodiments of the present invention, the r vector, as opposed to the c vector, is used to address an environment map as previously detailed.

[0044]
Once the 2D composite surface angle vector and view reflection vector are calculated, they are next transformed into normalized (unit length) 3D vectors. The 2D composite surface angle vector, c, is transformed into normalized 3D composite surface vector C. Likewise, 2D view reflection vector, r, is transformed into normalized view reflection vector A. The conversion from a 2D angle-proportional vector to a normalized 3D vector by mathematical calculation is computationally expensive in terms of hardware complexity and computation time. Therefore, it is a preferred practice of the present invention to perform said conversion from 2D angle-proportional vector to normalized 3D vector with the aid of a lookup table. The use of a lookup table offers the advantage of being able to produce normalized composite surface and reflection vectors without using a square root operation. The complexity of the square root operation combined with the difficulty of calculating 3D composite surface and view reflection vectors has heretofore prohibited practical real-time calculation of complex lighting effects. Methods of the present invention using lookup tables, therefore, represent a significant improvement in the real-time calculation of complex per-pixel lighting effects.

[0045]
A preferred lookup table method is to use fixed-point x and y coordinates of an angle-proportional vector to directly access a 2D lookup table wherein said lookup table contains normalized 3D vectors. The vectors contained in the lookup table may be stored in either floating point or fixed-point format. For matters of efficiency, however, it is a preferred practice of the present invention to store 3D lookup table vectors in fixed-point format. For example, a fixed-point format of 8 bits per vector component, i.e., 24 bits per 3D vector, would provide sufficient accuracy while minimizing the size of the lookup table. Fixed-point 3D vectors obtained from the lookup table can easily be converted to floating point format for further calculation if necessary. In order to further enhance visual consistency, lookup table vectors can be interpolated using any of a number of well-known interpolation techniques including, but not limited to, bilinear and trilinear interpolation, quadratic interpolation and cubic interpolation. The size of the lookup table can be additionally decreased due to the fact that the coordinate system is symmetric about the x and y axes; therefore the lookup table need only cover the positive x/positive y quadrant. To utilize such a lookup table, negative x and y coordinates (in the 2D vector used to address the table) are first negated and the 3D vector is retrieved (and optionally interpolated) from the table. Then the corresponding x and/or y coordinates in the 3D vector are negated provided that the x and/or y coordinates of the 2D addressing vector were originally negative. Since several vector additions may be performed on angle-proportional vectors, the final c and r vectors can have lengths greater than 2.0 (equivalent to 180°). Therefore, the 2D lookup table must at least cover coordinate values ranging from 0 to 2.0. A 512×512 map should be of sufficient accuracy to cover such a range; however, larger maps may be implemented depending on the desired accuracy.
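The 2D lookup table and its quadrant folding can be sketched as follows. Nearest-entry lookup and floating point table entries are used here for brevity; a practical implementation would interpolate entries and store them in fixed point, as described above.

```python
import math

def build_lut(size, max_coord=2.0):
    """Build the positive-quadrant 2D lookup table: entry (i, j) holds the
    unit 3D vector for angle-proportional coordinates (x, y), where a 2D
    length of 1.0 corresponds to 90 degrees from the z-axis."""
    lut = []
    for j in range(size):
        row = []
        for i in range(size):
            x = i / (size - 1) * max_coord
            y = j / (size - 1) * max_coord
            length = math.hypot(x, y)
            if length == 0.0:
                row.append((0.0, 0.0, 1.0))       # straight along the z-axis
            else:
                angle = length * math.pi / 2.0    # angle-proportional scale
                s = math.sin(angle)
                row.append((x / length * s, y / length * s, math.cos(angle)))
        lut.append(row)
    return lut

def lut_lookup(lut, v, max_coord=2.0):
    """Nearest-entry lookup with sign folding: negate negative coordinates,
    read the positive quadrant, then restore the signs on the 3D result."""
    size = len(lut)
    x, y = abs(v[0]), abs(v[1])
    i = min(size - 1, int(round(x / max_coord * (size - 1))))
    j = min(size - 1, int(round(y / max_coord * (size - 1))))
    X, Y, Z = lut[j][i]
    return (-X if v[0] < 0 else X, -Y if v[1] < 0 else Y, Z)
```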

[0046]
An alternate embodiment of the present invention utilizes a one-dimensional lookup table. The lookup table is addressed by the square of the length of the above-mentioned addressing 2D angle-proportional vector. Each lookup table entry contains two elements: a z-value and a scalar value s. The z-value is used as the z-coordinate for the resultant 3D vector while the s value is used to scale the x and y values of said addressing 2D vector, yielding the x and y values of said resultant 3D vector. The above-mentioned one-dimensional lookup table strategy provides a significant memory savings over the aforementioned 2D lookup table, but also incurs a higher computational cost.
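A sketch of the one-dimensional (z, s) table; the table resolution is an illustrative assumption. Addressing by squared length avoids the per-pixel square root, leaving only two multiplies to scale x and y.

```python
import math

def build_1d_lut(size, max_len=2.0):
    """1D table indexed by the squared length of the 2D addressing vector.
    Each entry holds (z, s): z is the z-coordinate of the normalized 3D
    vector; s scales the 2D x and y into the 3D x and y."""
    table = []
    for i in range(size):
        len_sq = i / (size - 1) * max_len * max_len
        length = math.sqrt(len_sq)                 # table build time only
        angle = length * math.pi / 2.0             # 2D length 1.0 <-> 90 deg
        s = math.sin(angle) / length if length > 0.0 else 1.0
        table.append((math.cos(angle), s))
    return table

def lut_1d_lookup(table, v, max_len=2.0):
    """Convert angle-proportional 2D vector v to a normalized 3D vector:
    squared length indexes the table, then x and y are scaled by s."""
    len_sq = v[0] * v[0] + v[1] * v[1]
    i = min(len(table) - 1,
            int(round(len_sq / (max_len * max_len) * (len(table) - 1))))
    z, s = table[i]
    return (v[0] * s, v[1] * s, z)
```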

[0047]
The lookup table strategies detailed above are presented for the purpose of example only and, as can be recognized by someone skilled in the applicable art, any adequate lookup table strategy may be employed without departing from the scope of the present invention as defined by the appended claims and their equivalents.

[0048]
Regardless of the calculation method applied, the conversion of 2D vectors c and r to normalized 3D vectors produces unit-length 3D composite surface vector C and unit-length 3D view reflection vector A. The C and A vectors can then be used in calculating diffuse and specular light coefficients for any number of light sources. Given a light source whose direction is represented by unit-length light source vector L, the diffuse coefficient, c_d, of said light source at the current pixel is given by:

c_d = L * C  (8)

[0049]
while the specular coefficient, c_s, is given by:

c_s = L * A  (9)

[0050]
The specular coefficient value c_s is optionally applied to a specularity function to account for surface reflectivity characteristics. For example, a commonly used specularity function raises the c_s value to a given power, exp, where higher exp values produce "shinier-looking" specular highlights.
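A minimal sketch of such a power-law specularity function follows. Clamping the raw coefficient at zero before exponentiation is a common practice assumed here, not something the text specifies:

```python
def specular_highlight(c_s, exp):
    """Apply a power-law specularity function to the raw specular
    coefficient; a larger exp gives a tighter, 'shinier' highlight.
    Negative coefficients (light behind the surface) are clamped to 0."""
    return max(c_s, 0.0) ** exp
```

For a fixed off-peak coefficient, raising the exponent shrinks the contribution, which is what narrows the visible highlight.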

[0051]
A further alternate embodiment utilizes a one-dimensional lookup table as in the previously mentioned lookup table strategy. As with the aforementioned strategy, a z-value and a scalar s value are provided by the lookup table. In this embodiment, however, the s value is not used to scale the x and y values of the addressing vector. Rather, the addressing vector, with the aforementioned z-value included, is used as a 3D vector in the above-mentioned diffuse and/or specular dot product calculation. The result of the dot product calculation is then scaled by the s value to produce the correct shading value, as in the following equations:

c_d = (L * C) * s  (10)

c_s = (L * A) * s  (11)
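For the deferred scaling of equations (10) and (11) to reproduce the exact dot product of equations (8) and (9), the stored z-value in this embodiment cannot be the raw z of the unit vector; one consistent choice, assumed in the sketch below (the text does not spell out the stored format), is to store z divided by s, so that scaling the whole dot product by s restores all three terms:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def make_entry(ax, ay):
    """For AP vector (ax, ay), return (z_stored, s) such that
    dot(L, (ax, ay, z_stored)) * s equals dot(L, C) for the true unit C.
    Assumption: z_stored = z / s (one way to make eqs (10)/(11) exact)."""
    L = math.hypot(ax, ay)
    theta = L * (math.pi / 2)        # assumed AP convention: 2.0 = 180 deg
    s = math.sin(theta) / L          # true scale factor for x and y
    z = math.cos(theta)
    return (z / s, s)

# Compare deferred scaling against the direct unit-vector dot product.
ax, ay = 0.6, -0.2
z_st, s = make_entry(ax, ay)
Lv = (0.3, 0.5, 0.81)                # an arbitrary light direction
deferred = dot(Lv, (ax, ay, z_st)) * s

theta = math.hypot(ax, ay) * math.pi / 2
sc = math.sin(theta) / math.hypot(ax, ay)
exact = dot(Lv, (ax * sc, ay * sc, math.cos(theta)))
```

The two results agree term by term, since s·(Lx·ax + Ly·ay + Lz·z/s) = Lx·s·ax + Ly·s·ay + Lz·z.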

[0052]
Once diffuse and specular components have been calculated, they may be used as scalar values to apply diffuse and specular lighting to the current pixel. Standard color-based pixel lighting algorithms utilizing scalar light coefficients are well known to those skilled in the art. Any such lighting algorithm (which requires scalar diffuse and specular coefficient values) may be applied to modulate the color of the current pixel.

[0053]
A further aspect of the present invention applies to the calculation of point light source direction vectors. As opposed to directional light sources, where the light source direction is constant within the frame, the direction of a point light source varies across a surface. The direction at which a point light strikes a surface is determined by the difference between the position of the surface and the position of the light source. A prior art approach to the calculation of point light source direction vectors involves normalizing the difference vector between the light source position and the surface position. Since standard vector normalization requires computationally expensive division and square root operations, the application of said approach to the calculation of point light source direction vectors is infeasible for efficient real-time operation. A method is presented for the accurate calculation of point light source direction vectors that does not involve division or square root operations.

[0054]
According to the present invention, a 3D difference vector, D, is obtained at least once for every drawn pixel. The difference vector is found by the following formula:

D = P − S  (12)

[0055]
where P is a 3D vector in the view coordinate system representing the location (in 3D space) of the point light source and S is a 3D vector in the view coordinate system representing the location (in 3D space) of the polygon surface at the current pixel. The preceding vector subtraction may be performed on a per-pixel basis, wherein the S vector is appropriately updated for each pixel. Alternately, a set of point light source direction vectors, D_1–D_x (where x is the number of vertices in the current polygon), may be calculated (by the above formula) at each polygon vertex, with the per-pixel D value interpolated from said direction vectors.

[0056]
Once the D vector is obtained for the current pixel, a scalar value, k, is calculated where:

k = 1 / sqrt(D * D)  (13)

[0057]
In a preferred embodiment of the present invention, a lookup table is used in the determination of the k value. A preferred one-dimensional lookup table contains k values (in fixed- or floating-point format) and is addressed by a function of D*D. The D vector, however, may be of arbitrary length, which would require a large lookup table to determine accurate k values. Therefore, in a preferred practice, the D vector is scaled prior to the calculation of the k value. A preferred method for the scaling of the D vector is presented herein. First, the largest-magnitude component (x, y, or z) of the D vector is found, i.e., max(x, y, z). Next, an exponent value, n, is found from the maximum component value by:

n = ⌊log_2(m)⌋  (14)

[0058]
where m is said maximum component value of D. Next, a 3D scaled difference vector, E, is calculated where:

E = D / 2^n  (15)

[0059]
A scalar length value, g, is next calculated by:

g = (D * D) / 2^(2n)  (16)

[0060]
This scheme is advantageous since the n value can be found directly from the exponent field of a number in a standard floating point format and division by a power of two simply requires an exponent subtraction for floating point numbers.
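The claim that n can be read directly from the exponent field, with no logarithm computed, can be demonstrated by reinterpreting the bits of a 32-bit float. This is a software sketch of what the text describes as a hardware operation; the `struct`-based bit access is the illustrative mechanism, not the invention's circuitry:

```python
import math
import struct

def float_exponent(x):
    """Read n = floor(log2(x)) for positive normalized x directly from
    the IEEE-754 binary32 exponent field: shift out the 23 mantissa
    bits, mask to 8 bits, and subtract the bias of 127."""
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    return ((bits >> 23) & 0xFF) - 127
```

Division by 2^n then amounts to subtracting n from that same field, which is why the scheme avoids a true divide.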

[0061]
Finally, the above-mentioned g value is used to obtain k from the preferred lookup table method detailed previously. Once k and E have been calculated, lighting equations may be carried out for the point light source. As defined above, the c_d and c_s coefficient values for the point light source at the current pixel are determined from the following formulae:

c_d = (C * E) * k  (17)

c_s = (A * E) * k  (18)

[0062]
where vectors C and A are the 3D composite surface vector and 3D view reflection vector as previously defined. Now lighting coefficients for a point light source have been calculated without using costly square root or division operations. This process allows for point lighting to be efficiently and practically applied in realtime image generation.
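Equations (12) through (17) can be checked end to end with a short sketch. Here `1/sqrt(g)` stands in for the 1D k-lookup table (the sqrt appears only on the bounded table address, which is the whole point of the scaling), and `math.log2` stands in for the hardware exponent-field read:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def point_light_coeff(C, P, S):
    """Diffuse coefficient for a point light at P seen from surface
    point S, via the scaled-difference scheme of eqs (12)-(17)."""
    D = tuple(p - s for p, s in zip(P, S))    # eq (12)
    m = max(abs(c) for c in D)                # largest-magnitude component
    n = math.floor(math.log2(m))              # eq (14); hardware reads the exponent field
    E = tuple(c / 2 ** n for c in D)          # eq (15)
    g = dot(D, D) / 2 ** (2 * n)              # eq (16)
    k = 1.0 / math.sqrt(g)                    # stands in for the 1D k-lookup table
    return dot(C, E) * k                      # eq (17)
```

Since E = D/2^n and k = 2^n/|D|, the product (C·E)·k reduces to C·(D/|D|), i.e., the dot product with the normalized light direction, which is exactly the prior-art result obtained without normalizing D directly.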

[0063]
A novel and useful aspect of the present invention as disclosed above is that, in certain embodiments, it allows shading data, such as light and surface normal vectors, to be specified in a recognized standard format. In many well-known lighting systems, such as Gouraud and Phong shading, lights are specified with 3D vectors (specifying normalized direction for parallel lights and position for point lights) along with color and brightness information. Likewise, in the aforementioned lighting systems, surface curvature is specified by providing a normalized 3D surface angle vector for each polygon vertex. Also, a common format for bump map data, which is well known to those skilled in the art, is to use a height value for each bump map texel, as detailed previously in this disclosure. The use of a common interface allows for quick cross-platform development by way of a standard programming interface. Most current 3D programming interfaces, such as OpenGL and DirectX, provide functionality for specifying standard shading data (light and surface normal vectors in the above-mentioned standard format) for lighting in 3D graphics applications. Many current programming interfaces also contain support for standard bump maps as well.

[0064]
The methods and operations of the present invention do not require additional, or alternate, inputs beyond the above-mentioned standard shading data, i.e., light and surface normal vector data. In the present invention, vertex normal values are specified as normalized 3D vectors, and light vectors are specified in a compatible format, i.e., a 3D vector for direction or position along with additional color and brightness information. Bump maps may be given in any of several standard formats wherein no additional, algorithm-specific information is required. The ability of the present invention to operate accurately and efficiently with standard inputs is a primary advantage. Most well-known 3D shading speedup methods require algorithm-specific input data in order to perform correctly, thereby limiting the application of said speedup methods to custom programming interfaces. Most 3D graphics software developers have experience with standard 3D programming interfaces and develop cross-platform applications wherein the use of said standard interfaces is a necessity. The use of nonstandard programming interfaces demanded by many 3D lighting algorithms is a severe limiting factor to their widespread use in industry applications. Use of the present invention is advantageous since it requires no additional, "nonstandard" input data to operate correctly and efficiently. Therefore, the features of the present invention, implemented in either software or custom hardware, can be accessed by current programming interfaces without requiring software developers to produce additional, application-specific code. The present invention thus provides a universal shading interface whereby cross-platform applications can take advantage of the advanced lighting features of the present invention on platforms that support them, while still working correctly, i.e., defaulting to simpler shading algorithms such as Gouraud shading, on platforms that do not support advanced lighting.
The methods and operations of the present invention thereby provide the ability to accurately and efficiently utilize advanced shading techniques that are accessible through a standard 3D programming interface.
Detailed Description of a Preferred Hardware Embodiment

[0065]
In order to provide maximum rendering speed and efficiency, a hardware implementation of the present invention is preferred. Since the methods of the present invention are not exceedingly complex, they can be implemented without excessive hardware expense in a number of 3D graphics systems including, for example, consumer-level PC graphics accelerator boards, standalone console video game hardware, multipurpose "set-top boxes," high-end workstation graphics hardware, high-end studio production graphics hardware, and virtual reality devices. Although a hardware implementation is preferred, those skilled in the art will recognize that alternate embodiments of the present invention may be implemented in other forms including, but not limited to: as a software computer program, as a microprogram in a hardware device, and as a program in a programmable per-pixel shading device.

[0066]
The following sections describe preferred hardware implementations of the per-polygon, per-pixel, and point lighting operations of the present invention. The hardware implementation provided is used as part of, and assumes the existence of, a 3D graphics processing hardware element (such as a 3D graphics accelerator chip). The per-pixel (and point lighting) operations of the present invention serve to provide diffuse and/or specular lighting coefficients for one or more light sources. These lighting coefficients may subsequently be used in shading hardware to scale the corresponding light source colors and to use said light source colors to modulate pixel color. Techniques for utilizing light source colors and light coefficients to modulate pixel colors are numerous and well known to those skilled in the art. It is the objective of the present invention to provide an efficient method and system that produces normalized 3D composite surface and view reflection vectors and consequently produces diffuse and/or specular light coefficients for one or more light sources on a per-pixel basis. Therefore, it is outside the scope of this disclosure to provide a detailed description of the above-mentioned shading hardware, although it should be noted that the preferred hardware embodiment of the present invention is designed to work in conjunction with dedicated shading hardware.

[0067]
FIG. 6 shows a diagram of a preferred hardware implementation of the per-polygon operations of the present invention. The hardware per-polygon operations assume the presence of a current polygon, a set of 3D surface normal vectors (N_1–N_x: x=number of polygon vertices) corresponding to the set of vertices of the current polygon, a view orientation (represented in this example in preferred form as a 3×3 matrix), and a set of 3D light source vectors (L_1–L_n: n>=1) expressed relative to the view coordinate system. The surface normal vectors should be expressed in the same reference frame as the view orientation (world-space orientation, for example) so that the view orientation matrix can be used to transform the normal vectors to view space. For the purposes of this example, and in accordance with a preferred practice of the present invention, said surface normal vectors, light source vectors, and view orientation are in standard 32-bit floating-point format. At block 46, surface normal vectors N_1–N_x are translated to the view orientation by matrix multiplication of each N vector by the view translation matrix 48 using fast vector translation hardware (i.e., fast dot product circuitry, multiply-adders, etc.). In an alternate embodiment, the translation of the N vectors is done externally (i.e., by an external processor or an onboard, multipurpose transform engine), and translated N vectors are provided. Translation of multiple vectors may be performed in series or in parallel although, to decrease execution time, a parallel (or pipelined) approach is preferred. The 3D translated surface vectors R_1–R_x are produced by the above-mentioned transformation. An alternate embodiment limits the R vectors if they are too near 180° from the direction of view. In this alternate embodiment, R vectors at large angles from the view are limited to less than 180° and their direction is clamped to that of the vector normal to the plane of the current polygon.
At block 50, the set of R vectors is transformed into a corresponding set of angle-proportional 2D surface angle vectors n_1–n_x. In order to perform said angle-proportional translations, a single AP translation unit may be used in series or, in a preferred practice, several AP translation units are used in parallel.

[0068]
FIG. 7 shows a block diagram of an AP translation unit, which converts a 3D vector, A, into a 2D angle-proportional vector 62. First, the z coordinate of A is used to address a one-dimensional lookup table at block 58, which produces a proportionality value, p 56. Although the use of a lookup table is preferred, alternate embodiments may calculate p by the application of a mathematical function. At block 54, the x and y components of the A vector are multiplied by p (with fast multiplication hardware) to produce (the x and y components of) the a vector. In a preferred practice, the a vector is given in fixed-point format. Alternate embodiments leave the a vector in floating-point format.
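A software sketch of the AP translation unit follows. The closed-form computation of p stands in for the z-addressed lookup table of block 58, and the convention that AP length 1.0 corresponds to 90° off the view axis is assumed from the earlier statement that 2.0 is equivalent to 180°:

```python
import math

def ap_translate(x, y, z):
    """AP translation unit: map a unit-length 3D vector (x, y, z) to its
    2D angle-proportional vector. p plays the role of the value the
    z-addressed 1D lookup table (block 58) would return."""
    r = math.sqrt(x * x + y * y)          # equals sin(theta) for a unit vector
    if r == 0.0:
        # Degenerate axis cases: straight ahead, or straight behind
        # (the backward direction is ambiguous; (2.0, 0.0) is one choice).
        return (0.0, 0.0) if z > 0 else (2.0, 0.0)
    theta = math.acos(max(-1.0, min(1.0, z)))
    p = (theta / (math.pi / 2)) / r       # proportionality value from z
    return (x * p, y * p)
```

The resulting 2D vector points in the same x/y direction as the input and has length theta/90°, so vectors near the view axis map near the origin and vectors at 90° map to length 1.0.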

[0069]
After the above-mentioned set of R vectors has been transformed to the above-mentioned set of n vectors, the n vectors are stored, preferably in a local memory, to be used later during the per-pixel operations of the present invention. Alternate embodiments calculate a bump map rotation at this stage of operations. In said alternate embodiments, the set of n vectors and the set of light source vectors (L_1–L_n) may be rotated to the bump map rotation. Further alternate embodiments include interpolation hardware to interpolate distant n vectors (at large angular distances from the viewer) with a 2D planar normal vector, u, as described above in the detailed description of the present invention.

[0070]
FIG. 8 shows a logic diagram for a preferred hardware embodiment of the per-pixel operations of the present invention. The hardware per-pixel operations assume the presence of a current polygon, a set of 3D light source vectors expressed relative to the view coordinate system, and a set of 2D surface angle vectors (n_1–n_x: x=number of vertices in the current polygon), preferably calculated by the per-polygon operations of the present invention as detailed above. The per-pixel operations are assumed to be performed at least once for every drawn pixel on the surface of the current polygon. At vertex interpolation unit 68, the surface angle vectors (n_1–n_x) at the polygon vertices are interpolated at the current pixel to form the surface angle vector n. A preferred method of vertex interpolation, which is well known in the art, is to calculate the slopes (change in value per change in coordinate) of the vectors at the polygon edges and accumulate the edge value for each successive scanline. Likewise, for the current scanline, the slope is calculated and the vector value is accumulated for each pixel. In the case where the current polygon does not represent any curvature, the above interpolation step may be omitted. Next, a 2D bump map vector, b 80, is obtained from a texture memory 64. A preferred method interpolates bump map coordinates from vertex values and uses the interpolated coordinate values to address the bump map. A preferred bump map format stores a 2D bump map vector at each bump map texel. Other embodiments store scalar height values at each bump map texel and calculate the b vector from said height values as detailed earlier. Further embodiments realize b vector values from a set of procedural calculations. Although not illustrated in the present example, once the b vector is found, alternate embodiments translate the b vector from the bump map orientation to the view orientation, preferably by the application of a 2×2 rotation matrix.
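The slope-accumulation interpolation described above can be sketched for a single scanline. This is a minimal illustration of the accumulate-per-pixel idea (one divide per span to get the slope, then only additions), not the full edge-walking rasterizer:

```python
def interpolate_scanline(v0, v1, x0, x1):
    """Slope-accumulation interpolation of a 2D attribute (e.g., the
    surface angle vector n) across a scanline from pixel x0 to x1:
    compute the per-pixel slope once, then accumulate with additions."""
    npix = x1 - x0
    slope = tuple((b - a) / npix for a, b in zip(v0, v1))
    out = []
    cur = list(v0)
    for _ in range(npix + 1):
        out.append(tuple(cur))
        cur = [c + s for c, s in zip(cur, slope)]
    return out
```

The same accumulation runs vertically along the polygon edges to produce the per-scanline endpoint values v0 and v1.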

[0071]
At vector addition unit 70, bump map vector b is combined by vector addition with surface angle vector n to form a composite surface angle vector c (c=n+b). The vector addition can be performed efficiently with fast addition hardware. Preferably, to simplify computation, vector values are stored and operated on in fixed-point format, although alternate embodiments use vector values in floating-point format. Next, at block 72, the c vector is doubled to produce 2D view reflection vector r. Doubling the c vector is easily accomplished by left-shifting the component values of the c vector by one bit. Alternately, if the c vector is in floating-point format, one is added to the exponent fields of the component values. An alternate embodiment adds a 2D offset vector, o, to r after doubling, where r=2c+o. Next, the component values of the c vector are used to address 2D lookup table 82 to provide 3D composite surface vector C. In a preferred method, the lookup table entries contain 3D vectors in fixed-point format. The x and y component values of c are used to address the nearest four values in said lookup table, and the four lookup table values are bilinearly interpolated to form 3D vector C. The component values of C are finally converted to floating-point format for ease of subsequent calculation. Alternate embodiments of the present invention leave the C vector in fixed-point format. Further alternate embodiments store floating-point vector values in the lookup table. At block 74, the 2D r vector is used to address lookup table 82 to produce 3D view reflection vector A by the same process detailed above for the C vector. At blocks 86 and 76, the C and A vectors are combined with light source vector L through the use of high-speed, floating-point dot product hardware to produce scalar diffuse light coefficient c_d and scalar specular light coefficient c_s. The present example demonstrates the calculation of diffuse and specular light coefficients for only one light source.
This is done for clarity of example only, and it should be obvious to those skilled in the art that the calculation of coefficient values for more than one light source may easily be implemented in series or in parallel using comparable hardware. The c_d and c_s values are then passed to shading unit 78, where they are eventually used to modulate pixel lighting color.
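The per-pixel datapath of FIG. 8 can be sketched end to end. The analytic `ap_to_3d` stands in for lookup table 82 (with its bilinear interpolation omitted), and the AP convention that length 2.0 equals 180° is assumed as before:

```python
import math

def ap_to_3d(ax, ay):
    """Analytic stand-in for 2D lookup table 82: AP vector -> unit 3D vector."""
    L = math.hypot(ax, ay)
    if L == 0.0:
        return (0.0, 0.0, 1.0)
    theta = L * (math.pi / 2)
    s = math.sin(theta) / L
    return (ax * s, ay * s, math.cos(theta))

def shade_pixel(n, b, light):
    """One pass of the per-pixel datapath: c = n + b (unit 70), r = 2c
    (block 72), two table lookups, then the two dot products (blocks 86/76)."""
    c = (n[0] + b[0], n[1] + b[1])       # composite surface angle vector
    r = (2 * c[0], 2 * c[1])             # doubling: a one-bit left shift in fixed point
    C = ap_to_3d(*c)                     # 3D composite surface vector
    A = ap_to_3d(*r)                     # 3D view reflection vector
    c_d = sum(l * v for l, v in zip(light, C))
    c_s = sum(l * v for l, v in zip(light, A))
    return c_d, c_s
```

For a flat, unperturbed surface facing the viewer with the light on the view axis, both coefficients come out to 1.0; tilting the surface to 90° drives the diffuse term to zero and points the reflection directly away from the viewer.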

[0072]
FIG. 9 shows a logic diagram of the point light operations of the present invention. Point light operations are performed on the same (per-pixel) basis as the above-detailed per-pixel operations. In a preferred embodiment, the point light operations are performed in parallel with the per-pixel operations. Alternate embodiments perform point light operations in series with the per-pixel operations. The hardware point light source operations assume the presence of a current polygon, a set of 3D surface position vectors (S_1–S_x: x=number of polygon vertices) which give the position in view coordinate space of each polygon vertex, and a 3D point light source vector, P, which gives the location of the point light source relative to the view coordinate system. It is further assumed that all 3D vectors are given in standard floating-point notation (i.e., sign bit, exponent, and mantissa fields). At block 94, surface position vectors S_1–S_x are interpolated at the current pixel to produce 3D current surface position vector S. At block 92, 3D difference vector D is calculated by vector subtraction, where D=P−S. The subtraction is performed with high-speed vector addition (subtraction) hardware. At block 90, the component of D (x, y, or z) with the largest absolute value, max, is found. In a preferred practice, only the exponent fields of the D component values are compared, and max is determined to be the component with the greatest exponent value. At block 96, the squared length of vector D, len, is calculated in parallel by taking the dot product of D with itself (i.e., the square of the x component added to the square of the y component added to the square of the z component). The dot product can be performed efficiently with fast dot product/multiply-add hardware. Alternate embodiments perform the above-mentioned length calculation in series rather than in parallel.
At block 100, D is scaled by the nearest power of 2 of max (the largest component value in D), producing scaled 3D difference vector D′. A preferred method for the above-mentioned scaling of D first finds a signed scalar exponent difference value, e, by subtracting the exponent field value of max from the exponent field value of 1.0 (usually 127 in standard 32-bit floating-point notation). The e value is then added to the exponent fields of each component in D. If the addition of e to an exponent field value yields a negative number, the field value is clamped to zero. At block 98, scalar length value g is calculated by adding 2e to the exponent value of len. At block 102, the g value is used to address a preferred one-dimensional lookup table, yielding scalar value k. In a preferred practice, the k value is linearly interpolated from the nearest two lookup table values. At blocks 106 and 104, diffuse and specular lighting coefficients are calculated for the point light source. This stage further assumes the presence of a 3D composite surface vector C and a 3D view reflection vector A, preferably obtained from the per-pixel operations of the present invention. The diffuse component value is calculated by taking the dot product of C and D′ and multiplying said dot product by k. Likewise, the specular component value is calculated by taking the dot product of A and D′ and multiplying said dot product by k. It should be obvious to those skilled in the art that the light source coefficient calculation for point light sources in the above-described manner is comparable to the previously detailed calculation of light coefficient values in the per-pixel operations of the present invention, with the addition of the extra step of scaling by the k value. In a preferred hardware embodiment, the point light operations work in conjunction with the per-pixel operations, providing 3D vector D′, which is taken as a light source direction vector, and the scalar k value.
The per-pixel operations, in turn, use D′ as a light source vector (L) and perform dot product calculations with the C and A vectors in the previously detailed manner. The per-pixel operations also include a logic element that optionally scales the c_s and c_d values produced by said dot product operations by the scalar k value (if the light source is a point light source).
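The exponent-field manipulation of blocks 90 and 100 can be demonstrated in software by operating directly on IEEE-754 binary32 bit patterns. The `struct`-based bit access and the explicit zero guard are illustrative stand-ins for the hardware datapath:

```python
import struct

def f2bits(x):
    return struct.unpack('<I', struct.pack('<f', x))[0]

def bits2f(b):
    return struct.unpack('<f', struct.pack('<I', b))[0]

def scale_by_exponent(x, e):
    """Multiply a float32 by 2^e purely by adding the signed value e to
    its exponent field, clamping underflow to zero (block 100)."""
    if x == 0.0:
        return 0.0
    bits = f2bits(x)
    exp = ((bits >> 23) & 0xFF) + e
    if exp <= 0:
        return 0.0            # clamp a negative exponent field to zero
    return bits2f((bits & 0x807FFFFF) | ((exp & 0xFF) << 23))

def scale_vector(D):
    """Block 90 + block 100: find max by comparing only exponent fields,
    then scale every component of D by the corresponding power of two."""
    m = max(D, key=lambda c: f2bits(abs(c)) & 0x7F800000)
    e = 127 - ((f2bits(abs(m)) >> 23) & 0xFF)   # exponent of 1.0 minus exponent of max
    return tuple(scale_by_exponent(c, e) for c in D)
```

No multiply or divide is issued anywhere: finding max costs only exponent comparisons, and the scaling itself is integer addition on the exponent field, which is what makes the scheme cheap in hardware.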

[0073]
The above section details a practical and efficient hardware configuration for the real-time calculation of normalized 3D surface and reflection vectors, where the surface direction is interpolated and dynamically combined with bump map values on a per-pixel basis. Likewise, the hardware described above calculates, in real time, diffuse and specular lighting coefficient values for one or more directional light sources from a dynamically variable surface. Furthermore, the above hardware configuration is able to calculate, in real time, diffuse and specular lighting coefficient values for one or more point light sources from a dynamically variable surface. The embodiments described above are included for the purpose of describing the present invention and, as should be recognized by those skilled in the applicable art, are not intended to limit the scope of the present invention as defined by the appended claims and their equivalents.