US20040207631A1 - Efficient bump mapping using height maps - Google Patents

Efficient bump mapping using height maps Download PDF

Info

Publication number
US20040207631A1
Authority
US
United States
Prior art keywords
data
texture
filtering
height
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10/611,719
Inventor
Simon Fenney
Paolo Fazzini
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Imagination Technologies Ltd
Original Assignee
Imagination Technologies Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Imagination Technologies Ltd filed Critical Imagination Technologies Ltd
Assigned to IMAGINATION TECHNOLOGIES LIMITED. Assignment of assignors interest (see document for details). Assignors: FAZZINI, PAOLO GIUSEPPE; FENNEY, SIMON
Publication of US20040207631A1
Priority to US11/228,876 (published as US7733352B2)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/04 Texture mapping


Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)

Abstract

A method for generating bump map data substantially in real time for use in a 3-dimensional computer graphics system. Data is received which defines an area to which a texture is to be applied. Texture data to apply to the area is also received. This data includes surface height data. A set of partially overlapping samples of texture data are then filtered and surface tangent vectors derived therefrom. A bump map surface normal is then derived from the tangent vectors.

Description

  • This invention relates to a method and apparatus for generating bump map data for use in a 3 dimensional computer graphics system. [0001]
  • BACKGROUND TO THE INVENTION
  • In the field of 3D computer graphics, detail is often added to otherwise smooth objects through the use of Bump Mapping, which was introduced by Blinn in his paper “Simulation of Wrinkled Surfaces” (SIGGRAPH 1978, pp 286-292). This operates by perturbing, on a pixel-by-pixel basis, an object's otherwise ‘smoothly’ varying surface normal vector. Because the surface's normal vector is used when computing the shading of that surface, its modification can give the appearance of bumps. FIG. 1 shows a surface normal being perturbed. [0002]
  • In Blinn's technique, each perturbation is computed by first taking derivatives of a bump displacement texture or ‘height map’ and subsequently applying them to the original surface normal and surface tangent vectors. The height map is a simple array of scalar values that gives the ‘vertical’ displacement or ‘height’ of a surface at regular grid points relative to that surface. Typically these are represented by monochromatic image data, e.g. a bitmap, with the brightness of any pixel being representative of the ‘height’ at that point. Standard texture mapping practices are used to access the height data. The normal perturbations and lighting calculations are done in global or model space. FIG. 8a shows the application of Blinn's method to an ‘illuminated’ flat surface. [0003]
  • A more ‘hardware friendly’ method was later developed by Peercy et al (“Efficient Bump Mapping Hardware”, SIGGRAPH 1997, pp 303-306, (also U.S. Pat. No. 5,949,424)). This directly stores perturbed surface normals in a texture map, often called a normal map. Unlike Blinn's method, these normals are defined in a local tangential coordinate space, which can be likened to the representation of parts of the earth's surface on a page in an atlas. In Peercy's technique, the lights used for shading are also transformed into this tangential space and thus the shading calculations are also computed locally. This process significantly reduces the number of calculations required when using bump mapping. It has become popular in recent 3D hardware systems and is sometimes known as ‘Dot3 bump mapping’. [0004]
  • To minimize the texture memory and, more importantly, memory bandwidth required by this procedure, it is desirable to compress the normal maps. Unfortunately many of the commonly used texture compression schemes are not suitable as they cause a loss of information that, when applied to the special case of normal maps, can cause an unacceptable degradation in image quality. Two methods that are specifically tailored to normal maps, however, are described in our International patent application No. WO9909523—these typically still use 16 bits to represent each surface normal. [0005]
  • This then leaves the task of generating the normal map. One popular method again uses an initial height map, as originally described by Blinn. From that height map, a normal map can then be pre-computed, prior to rendering, by taking the cross product of the local derivative vectors of the height function sampled at regular positions. For cases where texture filtering is required, e.g. those based on the well-known MIP mapping techniques, the height map should be repeatedly down-sampled and the associated normal map regenerated to produce the multiple MIP map levels. Problems can arise, however, when applying the texture filtering techniques, e.g. bilinear or trilinear filtering, to normal maps. [0006]
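  • By way of illustration only (this sketch is not taken from the patent), the pre-computation just described can be written in a few lines of C. Central differences, wrap-around addressing and the bump_scale parameter are assumptions made for the example; heights are placed on the y axis and the texel spacing is taken as 1.

      #include <math.h>

      /* Compute one normal-map texel from a height map by crossing the local
       * derivative (tangent) vectors.  Central differences, wrap-around addressing
       * and bump_scale are illustrative choices, not specified by the patent. */
      void height_to_normal(const float *height, int w, int h,
                            int x, int y, float bump_scale, float out_n[3])
      {
          float hl = height[y * w + (x + w - 1) % w];   /* left  neighbour */
          float hr = height[y * w + (x + 1) % w];       /* right neighbour */
          float hu = height[((y + h - 1) % h) * w + x]; /* up    neighbour */
          float hd = height[((y + 1) % h) * w + x];     /* down  neighbour */

          /* Tangents (1, du, 0) and (0, dv, 1): height change per texel step. */
          float du = bump_scale * 0.5f * (hr - hl);
          float dv = bump_scale * 0.5f * (hd - hu);

          /* Cross product of the two tangents, then normalise. */
          float nx = -du, ny = 1.0f, nz = -dv;
          float inv = 1.0f / sqrtf(nx * nx + ny * ny + nz * nz);
          out_n[0] = nx * inv; out_n[1] = ny * inv; out_n[2] = nz * inv;
      }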
  • It should be noted that whereas the height map could be of relatively low precision—often as low as, say, 8 or even 4 bits per pixel—the normal map may require 16 to 32 bits per pixel. The pre-processing steps of generating and compressing the normal map and the process of using the compressed normal map in 3D rendering are shown in FIG. 2. In the generation phase a height map 2 is used for normal map generation 4. An optional compression step 6 may then be used to produce an output map 8. In the use of the map 8, an optional decompression step is first performed on-the-fly before the map is used by shading calculations 12 to provide pixel data to an output frame buffer 14. [0007]
  • Also well known in the art is the aspect of texture filtering, primarily the application of bilinear or trilinear filtering, the latter as invented by Williams (“Pyramidal Parametrics”, Lance Williams, Computer Graphics, Vol. 17, No. 3, July 1983, pp 1-11). Bilinear filtering is briefly discussed below, since trilinear filtering is just the blending of two bilinear operations. [0008]
  • A 2D texture can be considered to be a vector function of 2 variables (U, V). For simplicity in this discussion, we will assume that, for an N×N pixel texture, the values of U and V range from 0 to N. When bilinear filtering is applied, the pixel, or “texel”, values stored in the texture can be considered to be representative of the points in the centres of the respective texels, i.e. at coordinates (i+0.5, j+0.5), where i and j are integers and represent the texel coordinate of the particular texel. This is illustrated in FIG. 3 for texel (i,j), the centre of which is indicated by ‘20’. At this point in the texture, bilinear filtering will return the colour of that texel. Similarly, sampling at locations ‘21’, ‘22’, and ‘23’ will return the colours of texels (i+1,j), (i,j+1), and (i+1,j+1) respectively. Now consider any sampling location within the square formed by ‘20’, ‘21’, ‘22’, and ‘23’, such as point ‘24’. Such a point has texture coordinates (us, vs) where i+0.5≦us<i+1.5 and j+0.5≦vs<j+1.5. The texture values for any point in the square will be formed from a bilinear blend of the four surrounding texels. [0009]
  • In particular, the process used in the art will be some simple variation of the following: [0010]
  • U′:=Us−0.5;//Place stored texel value at centre of texel [0011]
  • V′:=Vs−0.5; [0012]
  • Ui:=floor(U′); [0013]
  • Vi:=floor(V′); [0014]
  • Ublend:=U′−Ui; [0015]
  • Vblend:=V′−Vi; [0016]
  • //Do 2 horizontal linear blends [0017]
  • Colour0:=LinearBlend(Texel(Ui, Vi), Texel(Ui+1, Vi), Ublend); [0018]
  • Colour1:=LinearBlend(Texel(Ui, Vi+1), Texel(Ui+1, Vi+1), Ublend); [0019]
  • //Do 1 vertical linear blend [0020]
  • Result:=LinearBlend(Colour0, Colour1, Vblend); [0021]
  • The Ublend and Vblend values are thus in the range [0 . . . 1), and can be most conveniently represented by a fixed point number of, say, 8 to 16 bits precision. [0022]
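  • The pseudocode above maps directly to a compilable routine. The following single-channel C transcription is offered only as a sketch: the Texture struct, wrap addressing and float texels are assumptions made for the example; a colour texture would simply run the same blends on each of the R, G, B and A components in parallel.

      #include <math.h>

      typedef struct { int size; const float *texels; } Texture;  /* size x size, row major */

      /* Wrap addressing is assumed; clamp or border modes are equally valid. */
      static float texel(const Texture *t, int u, int v)
      {
          u = ((u % t->size) + t->size) % t->size;
          v = ((v % t->size) + t->size) % t->size;
          return t->texels[v * t->size + u];
      }

      static float linear_blend(float a, float b, float w) { return a + w * (b - a); }

      float bilinear_sample(const Texture *t, float us, float vs)
      {
          float u = us - 0.5f, v = vs - 0.5f;      /* place stored values at texel centres */
          int   ui = (int)floorf(u), vi = (int)floorf(v);
          float ublend = u - ui, vblend = v - vi;  /* both in [0, 1) */

          /* Two horizontal blends followed by one vertical blend. */
          float c0 = linear_blend(texel(t, ui, vi),     texel(t, ui + 1, vi),     ublend);
          float c1 = linear_blend(texel(t, ui, vi + 1), texel(t, ui + 1, vi + 1), ublend);
          return linear_blend(c0, c1, vblend);
      }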
  • FIG. 4a shows hardware, typical in the art, that performs the first steps of the above bilinear algorithm. The requested sample position is input, ‘50’, and the positions adjusted by ½ a texel, ‘51’ via a subtraction. The ‘floors’ of the coordinate values are computed, ‘52’, and these define the texel integer coordinates, ‘53’, for the top left texel of the required set of 4 texels. The values are also subtracted, ‘54’, from previous values to produce the blending factors for the bilinear operation, ‘55’. [0023]
  • It should be noted that the colours in 3D computer graphics are usually 4-D entities, having Red, Green, Blue, and Alpha (i.e. transparency) components. When the bilinear blending described above is performed, all four components of the various colour values are operated on in parallel. This is shown in the second stage of the bilinear operation in FIG. 4b. The integer texel coordinates computed in ‘53’ are used to access the four neighbouring texels, ‘60’ thru ‘63’. Each of these has its own Red, Green, Blue, and Alpha components. In the example, there are four (usually identical) bilinear units, ‘65’ thru ‘68’, each of which computes one of the four colour channels using the blend factors, ‘55’. The individual scalar results are then recombined into the one resulting colour, ‘69’. [0024]
  • Another known aspect of 3D computer graphics is that of fitting smooth surfaces through or near a set of control points. In particular we are interested in two types, known as uniform B-splines and Bezier splines, as described in literature such as “Computer Graphics: Principles and Practice” (Foley et al) or “Curves and Surfaces for CAGD: A Practical Guide” (Farin). [0025]
  • Of particular interest to this application is the case of a bi-quadratic B-spline which has C1 continuity (i.e. continuous first derivative). A bi-quadratic B-spline also has the property that, for any point on the surface, a sub-grid of 3×3 control points is needed to evaluate that point and/or derivatives at that point. A one-dimensional slice through a section of a quadratic B-spline is shown in FIG. 5. The points, ‘80’, ‘81’, and ‘82’ can be considered to be three adjacent control points in a row of the grid. The region of the curve between ‘85’ and ‘86’ depends only on these three control values (and the neighbouring 6 values in the 3×3 sub-grid in the case of a bi-quadratic surface). [0026]
  • One popular way of evaluating such a curve is to first convert it to the equivalent Bezier representation, i.e. a different set of 3 control points, and then apply the de Casteljau algorithm which uses repeated linear interpolation (see Farin). For the simple case of quadratic curves, this amounts to using a new set of control points which are ‘88’, ‘81’ (i.e., it is re-used), and ‘89’. Points ‘88’ and ‘89’ are just the mid points of the connecting line segments and could be found by simple averaging. [0027]
  • For the conversion of a bi-quadratic B-spline surface, the 3×3 grid of control points can be replaced by an equivalent set of 3×3 Bezier control points. An example showing the situation for a bi-quadratic surface is shown, in plan form, in FIG. 6. The original 9 B-spline control points, one example of which is shown by ‘100’, are converted into the equivalent Bezier control points, such as ‘101’. Stating this more precisely, if the grid of 3×3 B-spline points is: [0028]

    $$\begin{bmatrix} a & b & c \\ d & e & f \\ g & h & k \end{bmatrix}$$

  • then the equivalent set of Bezier points is computed from: [0029]

    $$\begin{bmatrix} a' & b' & c' \\ d' & e' & f' \\ g' & h' & k' \end{bmatrix} =
    \begin{bmatrix} \tfrac{1}{2} & \tfrac{1}{2} & 0 \\ 0 & 1 & 0 \\ 0 & \tfrac{1}{2} & \tfrac{1}{2} \end{bmatrix}
    \begin{bmatrix} a & b & c \\ d & e & f \\ g & h & k \end{bmatrix}
    \begin{bmatrix} \tfrac{1}{2} & 0 & 0 \\ \tfrac{1}{2} & 1 & \tfrac{1}{2} \\ 0 & 0 & \tfrac{1}{2} \end{bmatrix} =
    \begin{bmatrix} \tfrac{a+b+d+e}{4} & \tfrac{b+e}{2} & \tfrac{b+c+e+f}{4} \\ \tfrac{d+e}{2} & e & \tfrac{e+f}{2} \\ \tfrac{d+e+g+h}{4} & \tfrac{e+h}{2} & \tfrac{e+f+h+k}{4} \end{bmatrix} \qquad (\text{Equation 1})$$
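  • Equation 1 amounts to averaging adjacent rows and then adjacent columns of the 3×3 grid. A small C sketch of that conversion (illustrative only; the function name and array layout are ours) is:

      /* Replace a 3x3 grid of quadratic B-spline control points with the
       * equivalent 3x3 grid of Bezier control points (Equation 1). */
      void bspline3x3_to_bezier(const float s[3][3], float bez[3][3])
      {
          float rows[3][3];
          /* Mix the rows: (row0 + row1)/2, row1, (row1 + row2)/2. */
          for (int c = 0; c < 3; ++c) {
              rows[0][c] = 0.5f * (s[0][c] + s[1][c]);
              rows[1][c] = s[1][c];
              rows[2][c] = 0.5f * (s[1][c] + s[2][c]);
          }
          /* Mix the columns the same way. */
          for (int r = 0; r < 3; ++r) {
              bez[r][0] = 0.5f * (rows[r][0] + rows[r][1]);
              bez[r][1] = rows[r][1];
              bez[r][2] = 0.5f * (rows[r][1] + rows[r][2]);
          }
      }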
  • Referring again to FIG. 6, the region of interest is the central ‘square’, i.e. a position specified by (u,v), where 0≦u,v≦1. In the case of a bi-quadratic surface, one method based on de Casteljau would be to bi-linearly interpolate sets of 2×2 neighbouring control points, using (u,v) as weights, to produce a new set of 2×2 intermediate control points. One of the four sets of 2×2 intermediate control points is indicated by ‘102’. These four results are, in turn, bilinearly interpolated, again using the (u,v) weights, to produce the surface point. If tangents to the surface are required, a method such as given by Mann and DeRose (“Computing values and derivatives of Bezier and B-spline tensor products”, CAGD, Vol 12, February 1995) can be used. For the bi-quadratic case, this can be done by performing additional linear interpolations using the 2×2 intermediate control values. Finally, taking the cross product of these tangents generates the surface normal. [0030]
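  • A software sketch of that evaluation scheme for a bi-quadratic Bezier height patch is given below. It is illustrative only: heights sit on the y axis, the texel spacing is taken as 1, and constant scale factors on the derivatives are omitted since the normal would be normalised afterwards anyway.

      /* Bilinear blend helper: p10 lies in the +u direction, p01 in the +v direction. */
      static float bilerp(float p00, float p10, float p01, float p11, float u, float v)
      {
          float t = p00 + u * (p10 - p00);
          float b = p01 + u * (p11 - p01);
          return t + v * (b - t);
      }

      /* de Casteljau evaluation of a bi-quadratic Bezier height patch: bilinearly
       * blend the four 2x2 sub-grids of the 3x3 control points with weights (u,v),
       * blend the four intermediate points for the height, difference them for the
       * tangents, and cross the tangents for the (unnormalised) surface normal. */
      void biquadratic_point_and_normal(const float bez[3][3], float u, float v,
                                        float *height, float normal[3])
      {
          float q00 = bilerp(bez[0][0], bez[0][1], bez[1][0], bez[1][1], u, v);
          float q10 = bilerp(bez[0][1], bez[0][2], bez[1][1], bez[1][2], u, v);
          float q01 = bilerp(bez[1][0], bez[1][1], bez[2][0], bez[2][1], u, v);
          float q11 = bilerp(bez[1][1], bez[1][2], bez[2][1], bez[2][2], u, v);

          *height = bilerp(q00, q10, q01, q11, u, v);

          float dh_du = (1.0f - v) * (q10 - q00) + v * (q11 - q01);
          float dh_dv = (1.0f - u) * (q01 - q00) + u * (q11 - q10);

          /* Tangents (1, dh_du, 0) and (0, dh_dv, 1); their cross product points 'up'. */
          normal[0] = -dh_du;
          normal[1] =  1.0f;
          normal[2] = -dh_dv;
      }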
  • The height map defines ‘height’ values only at certain sample locations and so a means of computing the height surface at other points is required. In particular, bump mapping requires the surface normal which, in turn, usually implies the need for surface tangents. Blinn points out that the surface height is not actually required and proposes a function that only computes tangents. He notes that in order to avoid discontinuities in the shading, his tangent functions are continuous. Using the 3×3 grid of height samples shown in FIG. 6, Blinn's function performs 3 bilinear blends respectively of the top left, top right, and bottom left neighbours, and then computes the differences of the top left and top right result and the top left and bottom left result as part of the tangent generation. [0031]
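  • The structure of that tangent filter can be sketched as follows. This is only our reading of the description above; the exact weights and scaling used in Blinn's paper may differ.

      static float blend2(float a, float b, float w) { return a + w * (b - a); }

      static float blend4(float p00, float p10, float p01, float p11, float u, float v)
      {
          return blend2(blend2(p00, p10, u), blend2(p01, p11, u), v);
      }

      /* Three bilinear blends over the top-left, top-right and bottom-left 2x2
       * neighbourhoods of the 3x3 height grid, then two differences that feed
       * the tangent generation. */
      void blinn_tangent_deltas(const float hgt[3][3], float u, float v,
                                float *delta_u, float *delta_v)
      {
          float top_left    = blend4(hgt[0][0], hgt[0][1], hgt[1][0], hgt[1][1], u, v);
          float top_right   = blend4(hgt[0][1], hgt[0][2], hgt[1][1], hgt[1][2], u, v);
          float bottom_left = blend4(hgt[1][0], hgt[1][1], hgt[2][0], hgt[2][1], u, v);

          *delta_u = top_right   - top_left;   /* height change along u */
          *delta_v = bottom_left - top_left;   /* height change along v */
      }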
  • Although Blinn's function results in a continuous normal, its derivative can have discontinuities. Unfortunately, the human visual system is very sensitive to changes in the derivative of image intensity, and so ‘artefacts’ can be seen. The method also tends to emphasise the underlying grid of the height map, which can be seen in FIG. 8a. (For the sake of comparison, FIG. 8b illustrates the result from a preferred embodiment of the present invention). [0032]
  • Although the introduction of Peercy et al's pre-perturbed normal map method makes bump mapping more practical in real-time hardware, it still requires ‘large’ texture formats as well as the separate pre-processing step to convert a height map to a normal map. The ‘large’ texture formats consume valuable bandwidth as well as memory and cache storage and, although special normal map compression techniques exist, these formats are still often larger than the original source height map. Filtering of the normal map may also be problematic. [0033]
  • A further limitation of Peercy et al's technique is that dynamic bump mapping, i.e. where the bump heights are computed frame-by-frame, is far more difficult to achieve. For example, the height values may be generated as the result of a separate rendering pass. The pre-processing step, including generation of the various MIP map levels, may take too much time to allow real-time rendering. [0034]
  • Finally, it is beneficial to use a height function with C2 (or higher) continuity so that the normal interpolation is C1 (or higher). In particular, it is important to have an inexpensive means of producing this function. [0035]
  • SUMMARY OF THE INVENTION
  • We have appreciated that it is possible to implement, in hardware, an additional set of functions that provides an efficient means for direct transformation of a height map into filtered perturbed surface normals that have C1 continuity. These normals can subsequently be used for various rendering purposes such as per-pixel lighting. In particular, we have devised a method which, by re-using colour texture filtering hardware that is ubiquitous in today's graphics systems in a new way with the addition of some small processing units, achieves the functions needed to compute the normal from a smooth surface controlled by a set of heights. Thus the data can be generated substantially in real time. [0036]
  • The filtered surface normals are created ‘on demand’ and are not stored. This provides the joint benefits of reducing the amount of texture data and bandwidth needed for bump mapping, as well as overcoming some of the issues with the filtering of normal maps. This feature is also important when using dynamic height maps in real-time rendering since a pre-processing step may be prohibitive. [0037]
  • Embodiments of the invention keep the advantages of computing bump map-based shading in local tangent space as described by Peercy et al, (although it is not restricted to doing so), with the convenience of directly using Blinn's height map but with the option of using a function with higher continuity.[0038]
  • Preferred embodiments of the invention will now be described in detail by way of example with reference to the accompanying diagrams in which: [0039]
  • FIG. 1 shows the process of perturbing surface Normals as described by Blinn; [0040]
  • FIG. 2 shows a flow chart of the pipeline used for Peercy et al's method described above; [0041]
  • FIG. 3 shows the relationship of bilinear filtering of a texture to the texels of that texture; [0042]
  • FIG. 4a shows an overview of the coordinate calculation device in typical prior art bilinear hardware; [0043]
  • FIG. 4b shows an overview of prior art hardware that applies the bilinear blending to the addressed texels; [0044]
  • FIG. 5 shows a segment of a piecewise quadratic B-spline curve, or equivalently, a section through a bi-quadratic B-spline Surface; [0045]
  • FIG. 6 shows a plan view of a section of a height map being interpreted as a bi-quadratic B-spline surface; [0046]
  • FIG. 7 shows an overview of a hardware system embodying the invention with modifications to support normal generation from height maps; [0047]
  • FIG. 8a shows the results of bump mapping using Blinn's height map derivative function, while FIG. 8b shows the function used by a preferred embodiment; and [0048]
  • FIG. 9 shows some alternative filter patterns that could be used for computing derivatives of a bump map.[0049]
  • DETAILED DESCRIPTION OF PREFERRED EMBODIMENT
  • The preferred embodiment will now be described. Access is provided to height map textures, which store an array of height values, using preferably 4 or 8 bits per texel. Each value will encode a fixed-point number with some number of fraction bits—preferably ¼ of the bits will be assigned to the fractional part. [0050]
  • The embodiment fits a bi-quadratic B-spline through this set of points, thus giving the virtual height map texture C1 continuity (i.e. continuous first derivatives). In FIG. 5, the points ‘80’, ‘81’, and ‘82’ represent three adjacent height values/control points in a row of the height map. The value in the texture is allocated (preferably) to the y dimension while the other coordinate values (i.e., x and z) are implicitly defined by the texel's coordinate position. Alternative embodiments may assign these dimensions in some other permutation. The other control points needed for the surface in the texel region are shown, in plan form, in FIG. 6. [0051]
  • The manner in which the normal is computed is now described with reference to FIG. 7. As with the standard texture filtering system described above, i.e. FIGS. 4a and 4b, it is assumed that the texture base coordinates to which the texture is to be applied will be calculated and supplied as before, at ‘50’. Modified address unit ‘150’ then computes the ‘base’ texture coordinate, ‘53’, and blend factors ‘55’, in a manner that is similar to the prior art method described with reference to FIG. 4a, except that step ‘51’, the typical subtraction of a half-texel dimension, is bypassed when performing height map bump mapping. [0052]
  • A modified texel fetch unit, ‘151’, which in FIG. 4b consisted of units ‘60’ thru ‘63’ that obtained four sets of RGBA vectors, is enhanced to be able to fetch a 3×3 set of scalar height values. In particular, it retrieves the following grid of height texels: [0053]

    $$\begin{bmatrix} (U_{i-1}, V_{j-1}) & (U_i, V_{j-1}) & (U_{i+1}, V_{j-1}) \\ (U_{i-1}, V_j) & (U_i, V_j) & (U_{i+1}, V_j) \\ (U_{i-1}, V_{j+1}) & (U_i, V_{j+1}) & (U_{i+1}, V_{j+1}) \end{bmatrix} = \begin{bmatrix} a & b & c \\ d & e & f \\ g & h & k \end{bmatrix}$$
  • For brevity, these have been relabelled a, b, etc. [0054]
  • It will be apparent to those skilled in the art that, with application of the address-bit interleaved texture storage format described in our British patent number GB2297886, such a height-map can be packed into the ‘equivalent’, in terms of storage, of a colour texture of ½×½ resolution of the height map. Each 2×2 group of scalar height data would occupy the space of a single four-dimensional colour. With such a format, the height map data can then be accessed using a very simple modification of exactly the same fetch mechanism used by units ‘60’ thru ‘63’ in FIG. 4b. [0055]
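  • As a purely illustrative sketch of that packing idea (the exact address-bit interleaved layout of GB2297886 is not reproduced here; a plain row-major layout and the function name are assumptions), each 2×2 block of 8-bit heights can be stored as one 32-bit ‘colour’ texel:

      #include <stdint.h>

      /* Pack a w x h 8-bit height map into a (w/2) x (h/2) RGBA texture so that one
       * 2x2 block of heights occupies the space of a single four-component colour. */
      void pack_heightmap_rgba(const uint8_t *height, int w, int h, uint32_t *packed)
      {
          for (int y = 0; y < h; y += 2) {
              for (int x = 0; x < w; x += 2) {
                  uint32_t r = height[ y      * w + x    ];  /* top left     */
                  uint32_t g = height[ y      * w + x + 1];  /* top right    */
                  uint32_t b = height[(y + 1) * w + x    ];  /* bottom left  */
                  uint32_t a = height[(y + 1) * w + x + 1];  /* bottom right */
                  packed[(y / 2) * (w / 2) + (x / 2)] = r | (g << 8) | (b << 16) | (a << 24);
              }
          }
      }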
  • The 3×3 grid of samples is then fed into the ‘Replicate’ unit, 152, which outputs values to the Red, Green, Blue, and Alpha bilinear units. In particular, the Red channel receives the top left grid of 2×2 scalar values, i.e. those fetched from [0056]

    $$\begin{bmatrix} a & b \\ d & e \end{bmatrix}$$
  • while, similarly, the green channel receives the top right set, the blue channel the bottom left set, and the alpha channel the bottom right set. Clearly some values, such as b′ or e′, will be used more than once, thus the grids supplied to each unit overlap at least partially. [0057]
  • Unit 153 takes the blend factors, ‘55’, and computes new sets of U and V blends as follows: [0058]

    $$Ublend_0 = \tfrac{1}{2} + \tfrac{Ublend}{2}, \qquad Ublend_1 = \tfrac{Ublend}{2}, \qquad Vblend_0 = \tfrac{1}{2} + \tfrac{Vblend}{2}, \qquad Vblend_1 = \tfrac{Vblend}{2}$$
  • As Ublend and Vblend are typically fixed point numbers, it should be appreciated that these ‘calculations’ are completely trivial and incur no cost at all in hardware. [0059]
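  • For example, with 8-bit blend fractions (a value n representing n/256) the adjustment is a one-bit shift plus setting the top bit, as the following illustrative sketch shows (the function name and the 8-bit width are assumptions):

      #include <stdint.h>

      /* Ublend1 = Ublend/2 is a right shift; Ublend0 = 1/2 + Ublend/2 additionally
       * sets the most significant bit (0x80 represents one half). */
      static void adjust_blends(uint8_t ublend, uint8_t vblend,
                                uint8_t *ublend0, uint8_t *ublend1,
                                uint8_t *vblend0, uint8_t *vblend1)
      {
          *ublend1 = ublend >> 1;
          *ublend0 = 0x80 | (ublend >> 1);
          *vblend1 = vblend >> 1;
          *vblend0 = 0x80 | (vblend >> 1);
      }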
  • These new blend values are distributed to bilinear units, ‘65’ thru ‘68’ as follows: [0060]
    Red: (Ublend0, Vblend0)
    Green: (Ublend1, Vblend0)
    Blue: (Ublend0, Vblend1)
    Alpha: (Ublend1, Vblend1)
  • This manipulation of the blend factors eliminates the need to convert from the quadratic B-spline control points to the Bezier control points, as described previously in Equation 1. These bilinear units therefore effectively produce data which will enable surface normals with C1 continuity to subsequently be derived. [0061]
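  • That equivalence is easy to check numerically. The following stand-alone C program (not part of the disclosure; the constants are arbitrary test values) confirms that bilinearly blending the raw 2×2 B-spline values [a b; d e] with the adjusted weights (½ + u/2, ½ + v/2) gives the same result as bilinearly blending the equivalent Bezier points [(a+b+d+e)/4 (b+e)/2; (d+e)/2 e] with the original weights (u, v):

      #include <stdio.h>
      #include <stdlib.h>
      #include <math.h>

      static float bilerp(float p00, float p10, float p01, float p11, float u, float v)
      {
          float t = p00 + u * (p10 - p00);
          float b = p01 + u * (p11 - p01);
          return t + v * (b - t);
      }

      int main(void)
      {
          float a = 1.0f, b = 4.0f, d = -2.0f, e = 7.0f;   /* arbitrary heights */
          for (int i = 0; i < 1000; ++i) {
              float u = (float)rand() / RAND_MAX, v = (float)rand() / RAND_MAX;
              float adjusted = bilerp(a, b, d, e, 0.5f + 0.5f * u, 0.5f + 0.5f * v);
              float bezier   = bilerp(0.25f * (a + b + d + e), 0.5f * (b + e),
                                      0.5f * (d + e), e, u, v);
              if (fabsf(adjusted - bezier) > 1e-5f) { printf("mismatch\n"); return 1; }
          }
          printf("adjusted blends reproduce the Bezier conversion\n");
          return 0;
      }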
  • The results of the 4 bilinear interpolations are fed to the tangent construction unit, 155. This generates two tangent vectors, Tang1 and Tang2, which are functionally equivalent to using the following calculations: [0062]
  • Tang1[X]:=1/Texturesize; [0063]
  • Tang1[Y]:=LinearInterpolate(VBlend, [0064]
  • GreenResult−RedResult, [0065]
  • AlphaResult−BlueResult); [0066]
  • Tang1[Z]:=0; [0067]
  • Tang2[X]:=0; [0068]
  • Tang2[Y]:=LinearInterpolate(UBlend, [0069]
  • BlueResult−RedResult, [0070]
  • AlphaResult−GreenResult); [0071]
  • Tang2[Z]:=1/Texturesize; [0072]
  • where [0073]
  • LinearInterpolate (x, A, B):=A+x*(B−A); [0074]
  • For reasons that will soon be apparent, unit 155 actually only outputs three values: Tang1[y], Tang2[y] and 1/Texturesize. [0075]
  • Finally, in unit ‘156’, the cross product of these tangents is computed. It should be noted that if the preferred embodiment is chosen, the presence of zeros in the tangent components simplifies the cross product to the following calculation: [0076]
  • N[x]:=Tang1[y]; [0077]
  • N[y]:=1/Texturesize; [0078]
  • N[z]:=Tang2[y]; [0079]
  • This vector is then normalised, preferably by squaring the N vector, computing the inverse of the square root of the result, and multiplying that scalar by the original components. The normalisation step may appear expensive, but it would be a requirement of any system that supported compressed normal maps, such as that described in WO9909523 or British patent application No. 0216668.4. Thus, if such texture formats were already supported, the re-normalisation hardware would be reused. An example of the output of this embodiment is shown in FIG. 8b. [0080]
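  • Putting the pieces together, a software rendering of this pipeline (our reading of FIG. 7, offered as a sketch rather than a description of the hardware; the function and parameter names are ours) is:

      #include <math.h>

      static float lerp2(float p, float q, float w) { return p + w * (q - p); }

      static float bilin(float p00, float p10, float p01, float p11, float u, float v)
      {
          return lerp2(lerp2(p00, p10, u), lerp2(p01, p11, u), v);
      }

      /* From the fetched 3x3 height grid a..k (g[row][col]) and the blend fractions,
       * run the four 'colour channel' bilinear units with the adjusted weights, build
       * the tangent y components, then form and normalise the perturbed normal. */
      void heightmap_normal(const float g[3][3], float ublend, float vblend,
                            float texture_size, float n[3])
      {
          float u0 = 0.5f + 0.5f * ublend, u1 = 0.5f * ublend;   /* Ublend0, Ublend1 */
          float v0 = 0.5f + 0.5f * vblend, v1 = 0.5f * vblend;   /* Vblend0, Vblend1 */

          float red   = bilin(g[0][0], g[0][1], g[1][0], g[1][1], u0, v0);  /* a b d e */
          float green = bilin(g[0][1], g[0][2], g[1][1], g[1][2], u1, v0);  /* b c e f */
          float blue  = bilin(g[1][0], g[1][1], g[2][0], g[2][1], u0, v1);  /* d e g h */
          float alpha = bilin(g[1][1], g[1][2], g[2][1], g[2][2], u1, v1);  /* e f h k */

          float tang1_y = lerp2(green - red, alpha - blue,  vblend);
          float tang2_y = lerp2(blue  - red, alpha - green, ublend);

          float nx = tang1_y, ny = 1.0f / texture_size, nz = tang2_y;
          float inv = 1.0f / sqrtf(nx * nx + ny * ny + nz * nz);
          n[0] = nx * inv; n[1] = ny * inv; n[2] = nz * inv;
      }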
  • In an alternative embodiment, trilinear filtering can be adapted to support normal generation—the only difference in procedure will be that the values fed into tangent construction unit ‘155’ will be a ‘blend’ of the values computed from the two adjacent MIP map levels chosen. Other embodiments supporting improved anisotropic filtering are also feasible. [0081]
  • In another embodiment, an interpolated scale factor may be applied to the deltas/tangents before normalisation so that a height map can be scaled differently for different models or different parts of the same model. [0082]
  • In another embodiment, the blend factor adjust unit, 153, is not used and the B-spline control points are converted to the equivalent Bezier representations according to Equation 1 in a modified ‘152’ unit. [0083]
  • In another embodiment, the actual interpolated height value would be computed by including a third linear blending operation. [0084]
  • In another embodiment, Blinn's height interpolation function could be employed. In this embodiment, the blend factor adjust unit, 153, is not used and it is unnecessary to use the bilinear ‘alpha’ channel. That also implies that it is unnecessary to fetch source texel ‘k’. The tangent unit, 155, then simplifies to compute the difference of ‘green’ and ‘red’ and the difference of ‘blue’ and ‘red’. [0085]
  • In another alternative embodiment, dedicated sampling hardware could be included that takes numerous texture samples and applies an alternative derivative filter, such as the 4-tap, Sobel, Prewitt, or Parks-McClellan derivative filters represented in FIG. 9. [0086]
  • In another embodiment, colour textures are also filtered using bi-quadratic B-splines, either through the addition of bilinear filtering units, or by iterations through the colour channels, whereby the individual weights to the bilinear units are adjusted according to the previously described embodiments. [0087]

Claims (15)

1. A method for generating bump map data substantially in real time for use in a 3-dimensional computer graphics system comprising the steps of:
receiving data defining an area to which a texture is to be applied;
receiving texture data to apply to the area, the data including surface height data;
filtering each of a set of partially overlapping samples of the texture data;
deriving surface tangent vectors from the filtered samples; and
deriving a bump map surface normal from the surface tangent vectors.
2. A method according to claim 1 in which the tangent vectors are defined in local tangent space.
3. A method according to claim 1, in which the filtering step includes the step of using bi-quadratic B-splines to model a height surface from the surface height data.
4. A method according to claim 1, in which the filtering step includes the step of using existing hardware in the colour channels of the 3D graphics system to filter the overlapping samples of texture data.
5. A method according to claim 3 in which the filtering step is modified with blending factors.
6. Apparatus for generating bump map data substantially in real time for use in a 3-dimensional computer graphics system comprising:
means for receiving data defining an area to which a texture is to be applied;
means for receiving texture data to apply to the area, the data including height data;
means for filtering each of a set of partially overlapping samples of the texture data;
means for deriving surface tangent vectors from the filtered samples; and
means for deriving a bump map surface normal from the surface tangent vectors.
7. Apparatus according to claim 6 in which the tangent vectors are defined in local tangent space.
8. Apparatus according to claim 6 in which the filtering means comprises a means to use bi-quadratic B-splines to model a height surface from the surface height data.
9. Apparatus according to claim 6, in which the filtering means includes means to use existing hardware in the colour channels of the 3D graphics system to filter the overlapping samples of texture data.
10. Apparatus according to claim 8 in which the filtering means modifies the filtering with blending factors.
11. A 3D graphics system comprising a plurality of colour data processing means for generating data for use in shading an image to be represented by the 3D graphics system;
means for supplying texture data to be applied to the image; and
means for assigning the colour data processing means to the generation of bump map data for use in applying the texture data to the image.
12. (Cancelled).
13. (Cancelled).
14. A method for generating bump map data for use in a 3-dimensional computer graphics system comprising the steps of:
receiving data defining an area to which a texture is to be applied;
receiving texture data to apply to the area, the data including surface height data;
filtering each of a set of partially overlapping samples of the texture data;
deriving surface tangent vectors from the filtered samples; and
deriving a bump map surface normal from the surface tangent vectors.
15. Apparatus for generating bump map data for use in a 3-dimensional computer graphics system comprising:
means for receiving data defining an area to which a texture is to be applied;
means for receiving texture data to apply to the area, the data including height data;
means for filtering each of a set of partially overlapping samples of the texture data;
means for deriving surface tangent vectors from the filtered samples; and
means for deriving a bump map surface normal from the surface tangent vectors.
US10/611,719 2003-04-15 2003-07-01 Efficient bump mapping using height maps Abandoned US20040207631A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/228,876 US7733352B2 (en) 2003-04-15 2005-09-16 Efficient bump mapping using height maps

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB0308737.6 2003-04-15
GB0308737A GB2400778B (en) 2003-04-15 2003-04-15 Efficient bump mapping using height map

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US11/228,876 Continuation US7733352B2 (en) 2003-04-15 2005-09-16 Efficient bump mapping using height maps

Publications (1)

Publication Number Publication Date
US20040207631A1 true US20040207631A1 (en) 2004-10-21

Family

ID=9956851

Family Applications (2)

Application Number Title Priority Date Filing Date
US10/611,719 Abandoned US20040207631A1 (en) 2003-04-15 2003-07-01 Efficient bump mapping using height maps
US11/228,876 Expired - Lifetime US7733352B2 (en) 2003-04-15 2005-09-16 Efficient bump mapping using height maps

Family Applications After (1)

Application Number Title Priority Date Filing Date
US11/228,876 Expired - Lifetime US7733352B2 (en) 2003-04-15 2005-09-16 Efficient bump mapping using height maps

Country Status (5)

Country Link
US (2) US20040207631A1 (en)
EP (1) EP1616303B1 (en)
JP (1) JP4637091B2 (en)
GB (1) GB2400778B (en)
WO (1) WO2004095376A2 (en)

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060038817A1 (en) * 2004-08-20 2006-02-23 Diehl Avionik Systeme Gmbh Method and apparatus for representing a three-dimensional topography
US20060104544A1 (en) * 2004-11-17 2006-05-18 Krish Chaudhury Automatic image feature embedding
US20060146063A1 (en) * 2004-12-08 2006-07-06 Akio Ohba Method for creating normal map data
US20070098219A1 (en) * 2005-06-13 2007-05-03 Spence Clay D Method and system for filtering, registering, and matching 2.5D normal maps
US20080170795A1 (en) * 2007-01-11 2008-07-17 Telefonaktiebolaget Lm Ericsson (Publ) Feature block compression/decompression
US20100149199A1 (en) * 2008-12-11 2010-06-17 Nvidia Corporation System and method for video memory usage for general system application
US7916149B1 (en) 2005-01-04 2011-03-29 Nvidia Corporation Block linear memory ordering of texture data
US7928988B1 (en) 2004-11-19 2011-04-19 Nvidia Corporation Method and system for texture block swapping memory management
US7961195B1 (en) 2004-11-16 2011-06-14 Nvidia Corporation Two component texture map compression
US8078656B1 (en) 2004-11-16 2011-12-13 Nvidia Corporation Data decompression with extra precision
US8373718B2 (en) 2008-12-10 2013-02-12 Nvidia Corporation Method and system for color enhancement with color volume adjustment and variable shift along luminance axis
US8594441B1 (en) 2006-09-12 2013-11-26 Nvidia Corporation Compressing image-based data using luminance
US8724895B2 (en) 2007-07-23 2014-05-13 Nvidia Corporation Techniques for reducing color artifacts in digital images
US8947448B2 (en) 2009-12-24 2015-02-03 Sony Corporation Image processing device, image data generation device, image processing method, image data generation method, and data structure of image file
US9081681B1 (en) * 2003-12-19 2015-07-14 Nvidia Corporation Method and system for implementing compressed normal maps
US20170091961A1 (en) * 2015-09-24 2017-03-30 Samsung Electronics Co., Ltd. Graphics processing apparatus and method for determining level of detail (lod) for texturing in graphics pipeline

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7551177B2 (en) * 2005-08-31 2009-06-23 Ati Technologies, Inc. Methods and apparatus for retrieving and combining samples of graphics information
AU2006200969A1 (en) * 2006-03-07 2007-09-27 Canon Information Systems Research Australia Pty Ltd Print representation
FR2929417B1 (en) * 2008-03-27 2010-05-21 Univ Paris 13 METHOD FOR DETERMINING A THREE-DIMENSIONAL REPRESENTATION OF AN OBJECT FROM POINTS, COMPUTER PROGRAM AND CORRESPONDING IMAGING SYSTEM
US7973705B2 (en) * 2009-07-17 2011-07-05 Garmin Switzerland Gmbh Marine bump map display
RU2637901C2 (en) * 2015-11-06 2017-12-07 Общество С Ограниченной Ответственностью "Яндекс" Method and data storing computer device for drawing graphic objects
RU2629439C2 (en) * 2015-12-29 2017-08-29 Общество С Ограниченной Ответственностью "Яндекс" Method and system of storage of data for drawing three-dimensional graphic objects

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5949424A (en) * 1997-02-28 1999-09-07 Silicon Graphics, Inc. Method, system, and computer program product for bump mapping in tangent space
US6765584B1 (en) * 2002-03-14 2004-07-20 Nvidia Corporation System and method for creating a vector map in a hardware graphics pipeline
US6850244B2 (en) * 2001-01-11 2005-02-01 Micron Techology, Inc. Apparatus and method for gradient mapping in a graphics processing system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09231402A (en) * 1996-02-27 1997-09-05 Sony Corp Bump mapping method and picture generating device
GB9717656D0 (en) * 1997-08-20 1997-10-22 Videologic Ltd Shading three dimensional images

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5949424A (en) * 1997-02-28 1999-09-07 Silicon Graphics, Inc. Method, system, and computer program product for bump mapping in tangent space
US6850244B2 (en) * 2001-01-11 2005-02-01 Micron Techology, Inc. Apparatus and method for gradient mapping in a graphics processing system
US6765584B1 (en) * 2002-03-14 2004-07-20 Nvidia Corporation System and method for creating a vector map in a hardware graphics pipeline

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9081681B1 (en) * 2003-12-19 2015-07-14 Nvidia Corporation Method and system for implementing compressed normal maps
US20060038817A1 (en) * 2004-08-20 2006-02-23 Diehl Avionik Systeme Gmbh Method and apparatus for representing a three-dimensional topography
US8918440B2 (en) 2004-11-16 2014-12-23 Nvidia Corporation Data decompression with extra precision
US8078656B1 (en) 2004-11-16 2011-12-13 Nvidia Corporation Data decompression with extra precision
US7961195B1 (en) 2004-11-16 2011-06-14 Nvidia Corporation Two component texture map compression
US20060104544A1 (en) * 2004-11-17 2006-05-18 Krish Chaudhury Automatic image feature embedding
US7734118B2 (en) * 2004-11-17 2010-06-08 Adobe Systems Incorporated Automatic image feature embedding
US7928988B1 (en) 2004-11-19 2011-04-19 Nvidia Corporation Method and system for texture block swapping memory management
US20060146063A1 (en) * 2004-12-08 2006-07-06 Akio Ohba Method for creating normal map data
US8456481B2 (en) 2005-01-04 2013-06-04 Nvidia Corporation Block linear memory ordering of texture data techniques
US7916149B1 (en) 2005-01-04 2011-03-29 Nvidia Corporation Block linear memory ordering of texture data
US20110169850A1 (en) * 2005-01-04 2011-07-14 Nvidia Corporation Block linear memory ordering of texture data
US8436868B2 (en) 2005-01-04 2013-05-07 Nvidia Corporation Block linear memory ordering of texture data
US7844133B2 (en) 2005-06-13 2010-11-30 Sarnoff Corporation Method and system for filtering, registering, and matching 2.5D normal maps
US20100172597A1 (en) * 2005-06-13 2010-07-08 Clay Douglas Spence Method and system for filtering, registering, and matching 2.5d normal maps
US7747106B2 (en) * 2005-06-13 2010-06-29 Sarnoff Corporation Method and system for filtering, registering, and matching 2.5D normal maps
US20070098219A1 (en) * 2005-06-13 2007-05-03 Spence Clay D Method and system for filtering, registering, and matching 2.5D normal maps
US8594441B1 (en) 2006-09-12 2013-11-26 Nvidia Corporation Compressing image-based data using luminance
US20080170795A1 (en) * 2007-01-11 2008-07-17 Telefonaktiebolaget Lm Ericsson (Publ) Feature block compression/decompression
US7853092B2 (en) * 2007-01-11 2010-12-14 Telefonaktiebolaget Lm Ericsson (Publ) Feature block compression/decompression
US8724895B2 (en) 2007-07-23 2014-05-13 Nvidia Corporation Techniques for reducing color artifacts in digital images
US8373718B2 (en) 2008-12-10 2013-02-12 Nvidia Corporation Method and system for color enhancement with color volume adjustment and variable shift along luminance axis
US20100149199A1 (en) * 2008-12-11 2010-06-17 Nvidia Corporation System and method for video memory usage for general system application
US8610732B2 (en) 2008-12-11 2013-12-17 Nvidia Corporation System and method for video memory usage for general system application
US8947448B2 (en) 2009-12-24 2015-02-03 Sony Corporation Image processing device, image data generation device, image processing method, image data generation method, and data structure of image file
US20170091961A1 (en) * 2015-09-24 2017-03-30 Samsung Electronics Co., Ltd. Graphics processing apparatus and method for determining level of detail (lod) for texturing in graphics pipeline
US9898838B2 (en) * 2015-09-24 2018-02-20 Samsung Electronics Co., Ltd. Graphics processing apparatus and method for determining level of detail (LOD) for texturing in graphics pipeline

Also Published As

Publication number Publication date
WO2004095376A3 (en) 2005-01-20
GB2400778B (en) 2006-02-01
WO2004095376A2 (en) 2004-11-04
JP4637091B2 (en) 2011-02-23
EP1616303B1 (en) 2023-05-03
US7733352B2 (en) 2010-06-08
EP1616303A2 (en) 2006-01-18
GB0308737D0 (en) 2003-05-21
GB2400778A (en) 2004-10-20
US20060109277A1 (en) 2006-05-25
JP2006523876A (en) 2006-10-19

Similar Documents

Publication Publication Date Title
US7733352B2 (en) Efficient bump mapping using height maps
US7432936B2 (en) Texture data anti-aliasing method and apparatus
US7692661B2 (en) Method of creating and evaluating bandlimited noise for computer graphics
US20210097642A1 (en) Graphics processing
US6888544B2 (en) Apparatus for and method of rendering 3D objects with parametric texture maps
US7532220B2 (en) System for adaptive resampling in texture mapping
US6788304B1 (en) Method and system for antialiased procedural solid texturing
US7672476B2 (en) Bandlimited noise for computer graphics
US20060158451A1 (en) Selection of a mipmap level
EP1489560A1 (en) Primitive edge pre-filtering
JPH11506846A (en) Method and apparatus for efficient digital modeling and texture mapping
US6400370B1 (en) Stochastic sampling with constant density in object space for anisotropic texture mapping
JP2006523886A (en) Computer graphic processor and method for generating computer graphic images
WO2003054796A2 (en) Image rendering apparatus and method using mipmap texture mapping
EP1058912B1 (en) Subsampled texture edge antialiasing
US7656411B2 (en) Matched texture filter design for rendering multi-rate data samples
US8212835B1 (en) Systems and methods for smooth transitions to bi-cubic magnification
KR100633029B1 (en) Method of Analyzing and Modifying a Footprint
US6570575B1 (en) Associated color texture processor for high fidelity 3-D graphics rendering
Meinds et al. Resample hardware for 3D graphics
US7689057B2 (en) Method of bandlimiting data for computer graphics
Popescu et al. Forward rasterization
Aizenshtein et al. Sampling Textures with Missing Derivatives
Jarosz et al. Bilinear Accelerated Filter Approximation

Legal Events

Date Code Title Description
AS Assignment

Owner name: IMAGINATION TECHNOLOGIES LIMITED, UNITED KINGDOM

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:FENNEY, SIMON;FAZZINI, PAOLO GIUSEPPE;REEL/FRAME:014262/0937

Effective date: 20030626

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION