WO1997042605A1 - Method and device for graphics mapping of a surface onto a two-dimensional image - Google Patents


Info

Publication number
WO1997042605A1
Authority
WO
WIPO (PCT)
Prior art keywords
depth
image
displacement
texture
determining
Prior art date
Application number
PCT/IB1997/000423
Other languages
French (fr)
Inventor
Hendrik Dijkstra
Patrick Fransiscus Paulus Meijers
Original Assignee
Philips Electronics N.V.
Philips Norden Ab
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Philips Electronics N.V., Philips Norden Ab filed Critical Philips Electronics N.V.
Priority to EP97915644A priority Critical patent/EP0840916B1/en
Priority to DE69713858T priority patent/DE69713858T2/en
Priority to JP9539674A priority patent/JPH11509661A/en
Publication of WO1997042605A1 publication Critical patent/WO1997042605A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/10 Geometric effects

Definitions

  • the invention relates to a method for graphics mapping of a surface from an at least three-dimensional model space onto a two-dimensional image, in which a texture coordinate is allocated to a point on the surface and the point is projected onto a pixel of the image along a projection axis, which method includes the following steps: determining a normalized coordinate which is a ratio of the texture coordinate to a depth of the point along the projection axis; determining the depth of the point; determining the texture coordinate by multiplication of the normalized coordinate by the depth; and determining an image contribution by the point to an image content of the pixel on the basis of the texture coordinate.
  • the invention also relates to a device for graphics mapping of a surface from an at least three-dimensional model space onto a two-dimensional image, in which a texture coordinate is allocated to a point on the surface and the point is projected onto a pixel of the image along a projection axis, which device includes: coordinate-determining means for determining a normalized coordinate which is a ratio of the texture coordinate to a depth of the point along the projection axis; depth-determining means for determining the depth of the point; multiplier means for multiplying the normalized coordinate by the depth in order to obtain the texture coordinate; and image-forming means for determining an image contribution by the point to an image content of the pixel on the basis of the texture coordinate.
  • a method of this kind is known from the copending United States Patent Application No. 08/346971 and its European equivalent EP 656 609 (PHB 33881).
  • a triangle is mapped from a three-dimensional space onto a two-dimensional image.
  • a point R on the triangle has an x, a y and a z coordinate (indicated by lower case letters).
  • the projection of the point R in the image has an X*-coordinate and a Y*-coordinate (indicated by upper case letters).
  • Texture on the triangle is suggested by means of a texture map.
  • the point R is mapped onto the texture map by allocation of a pair of texture coordinates U*,V* thereto (upper case letters).
  • the texture map allocates an intensity/color value to each pair of texture coordinates U*,V*. Using this coordinate pair U*,V* as an index, the associated intensity/color value is looked up in the texture map, and thereby the image contribution by the point R to the pixel onto which it is projected is determined.
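The lookup described above can be sketched as follows; this is a minimal illustration, not the patent's implementation, and the 2x2 map contents are invented for the example:

```python
# Minimal sketch of the texture lookup: the coordinate pair U*,V* indexes the
# texture map and the stored intensity/color value becomes the image
# contribution of the pixel. The 2x2 map contents are invented for this example.

texture_map = [
    [0x202020, 0xFF0000],   # row V = 0
    [0x00FF00, 0x0000FF],   # row V = 1
]

def image_contribution(u, v):
    # look up the intensity/color value using the texture coordinates as index
    return texture_map[int(v)][int(u)]

assert image_contribution(1.2, 0.7) == 0xFF0000
```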
  • the method according to the invention is characterized in that the determination of the depth includes the following steps: determining a displacement of the pixel relative to a line in the image onto which there is projected a part of the surface which has a constant depth in the projection direction; - calculating an interpolation function which interpolates the depth as a function of the displacement. Interpolation reduces the amount of arithmetic work required to determine the depth as compared with the inversion of 1/z.
  • because interpolation is performed as a function of the displacement, moreover, fewer interpolation coefficients need be updated than if interpolation were to take place directly as a function of X* and Y*. Consequently, less storage space is required for the storage of the interpolation coefficients.
  • a version of the method according to the invention in which the pixel is preceded by a series of pixels on a scan line in the image is characterized in that the displacement is determined by adding an increment to a further displacement of a preceding pixel of the series.
  • the displacement increases linearly along the scan line. At every transition between successive pixels along the scan line the displacement increases by a fixed amount which is a function of the orientation of the surface relative to the scan line. As a result, the displacement can be determined by means of a minimum amount of arithmetic work.
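The incremental determination of the displacement can be sketched as follows. The displacement of a pixel from a line is linear in the pixel coordinates, so along a scan line it can be updated with one addition per pixel; the line coefficients a, b, c below are illustrative values, not taken from the patent:

```python
# Sketch: the displacement L of a pixel from the constant-depth line
# a*X + b*Y + c = 0 is linear in the pixel coordinates, so along a scan line
# (Y fixed, X increasing by 1) it grows by the fixed amount a per pixel.
# The coefficients a, b, c are illustrative, not taken from the patent.

def displacement(x, y, a, b, c):
    return a * x + b * y + c

a, b, c = 0.25, -0.5, 3.0
y = 10                                        # one scan line

# direct evaluation at every pixel
direct = [displacement(x, y, a, b, c) for x in range(5)]

# incremental evaluation: set up once, then one addition per pixel
incremental = [displacement(0, y, a, b, c)]
for _ in range(4):
    incremental.append(incremental[-1] + a)

assert direct == incremental
```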
  • a version of the method according to the invention is characterized in that there is a range of displacements of pixels onto which the surface is mapped, which range is subdivided into sub-ranges of displacements, that for calculation of the interpolation function in each sub-range of displacements a respective set of interpolation coefficients is used, and that the method includes a step of selecting a resolution of the subdivision into sub-ranges in dependence on a property of the surface. As the resolution of the subdivision into sub-ranges is higher, the error caused by the interpolation will be smaller. (A higher resolution in this context means a smaller difference between the minimum and the maximum displacement value within a sub-range.)
  • the error can be limited to an acceptable value by choosing the resolution to be dependent on a property of the surface (such as the angle of the normal to the surface with the projection axis and/or the depth range of the surface). Therefore, for different mappings of the same surface, from different projection directions, the resolution may be different. This can be readily achieved because interpolation takes place as a function of the displacement.
  • a version of the method according to the invention utilizes an indexed set of texture maps having an index-dependent resolution, a current index being selected from the set of texture maps in dependence on the depth and the image contribution being determined in conformity with a texture value associated with the texture coordinate in an indexed texture map, and is characterized in that the current index is calculated by means of a further interpolation function of indices as a function of the displacement.
  • the index is dependent on the derivative of the texture coordinate with respect to the pixel coordinate and hence is in principle a rather complex function of, inter alia, the depth and the depth gradient. Arithmetic work is saved by calculating this index by interpolation as a function of the displacement.
  • the embodiments of the device according to the invention utilize the versions of the method according to the invention and hence save arithmetic time/arithmetic circuits and/or storage space for tables with interpolation coefficients.
  • Fig. 1 shows a graphics processing system
  • Fig. 2 is a side elevation of the geometry of a projection
  • Fig. 3 shows the texture mapping
  • Fig. 4 shows the depth interpolation
  • Fig. 5 shows a device for imaging with texture mapping
  • Fig. 6 shows a circuit for determining the displacement.
  • Fig. 1 shows a graphics processing system.
  • This system includes a processor 1, a memory 2 and an image display unit 3.
  • the memory 2 stores parameters which describe a content of a three-dimensional space, for example in terms of coordinates of vertices of polygons.
  • the processor 1 receives the coordinates of a view point wherefrom the space is viewed and calculates, using the parameters stored in the memory 2, the image content of an image thus obtained.
  • the calculated image content is used for controlling the image display unit which thus displays the image in a manner which is visible to human users.
  • Fig. 2 is a side elevation of the geometry of a projection of one polygon.
  • the Figure shows a polygonal surface 5 which is projected onto a projection plane 11.
  • the projection plane 11 is shown from the side, so that only a line 11 remains which corresponds to the X-coordinate direction in the projection plane 11.
  • the projection takes place along a projection axis 10 which is a normal to the projection plane 11 which extends through the view point 6.
  • An edge 12 of the polygonal surface 5 is separately indicated. This edge 12 extends between two vertices 13, 14 which are mapped onto two points 16, 17 in the image plane. For the sake of clarity of the projection, the projection lines through the vertices 13, 14, the mapped points 16, 17 and the view point are shown.
  • Fig. 3 shows the principle of texture mapping.
  • With the texture coordinates U*(R),V*(R) there is associated a texture value T(U*(R),V*(R)).
  • the projection maps the point R onto a point 27 having the image coordinates X*(R),Y*(R) in the image 24. Therefore, the texture value T(U*(R),V*(R)) is decisive in respect of the image contribution in the point 27 having coordinates X*(R),Y*(R) in the image 24 onto which the point R is mapped from the surface 22.
  • the image 24 is formed pixel-by-pixel.
  • a pixel having a pixel coordinate pair X*P,Y*P is used each time.
  • the point R follows from the coordinate pair X*P,Y*P. From the coordinates X*P,Y*P the associated texture coordinates U*P,V*P are determined.
  • the texture map is defined by mapping the coordinates x, y, z of the point R on the surface 22 onto texture coordinates U*P,V*P from the three-dimensional space as: U*P = u0 + ax·x + ay·y + az·z and V*P = v0 + bx·x + by·y + bz·z.
  • the coordinates X*P,Y*P of the projection of the point R in the image 24 are x/z, y/z.
  • the coordinates of the projection X*P,Y*P, therefore, are mapped onto texture coordinates U*P,V*P as: U*P = u0 + z·(ax·X*P + ay·Y*P + az) and V*P = v0 + z·(bx·X*P + by·Y*P + bz).
  • the texture coordinates U*P,V*P can be calculated from the coordinates X*P,Y*P of the projection of the point R and the coefficients u0, v0, ax, ay, az, bx, by, bz using no more than additions and multiplications.
  • the coefficients u0, ax, ay, az and v0, bx, by, bz can be determined from the values U*P, V*P, X*P, Y*P, z of three vertices of the polygon.
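Assuming the usual affine texture map U* = u0 + ax·x + ay·y + az·z, substituting x = z·X* and y = z·Y* shows that U* can indeed be computed from the projected coordinates with only additions and multiplications once z is known. A numerical check with illustrative coefficients:

```python
# Numerical check that the affine texture map can be evaluated from the
# projected coordinates with only additions and multiplications once the
# depth z is known. The coefficients u0, ax, ay, az are illustrative.

u0, ax, ay, az = 0.1, 2.0, -1.0, 0.5

def u_from_model(x, y, z):
    # texture coordinate from the model-space coordinates of the point R
    return u0 + ax * x + ay * y + az * z

def u_from_projection(xs, ys, z):
    # the same value from the projection X* = x/z, Y* = y/z
    return u0 + z * (ax * xs + ay * ys + az)

x, y, z = 3.0, 4.0, 2.0
assert abs(u_from_model(x, y, z) - u_from_projection(x / z, y / z, z)) < 1e-12
```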
  • the depth z is obtained as follows. First a displacement L is determined for the pixel 27. This displacement L is defined as the distance from the pixel 27 to the line 28, i.e. the length of a perpendicular from the pixel 27 to the line 28 (the displacement L of a pixel to one side of the line, say the left-hand side, is then taken as minus this distance, and the displacement L of a pixel to the other side, say the right-hand side, as plus this distance).
  • the line 28 is chosen as a line of points in the image which are a map of a section of the surface 5 with an arbitrary plane extending parallel to the projection plane 11; all points of this section thus have the same depth z in the projection direction and, moreover, the section extends perpendicularly to the projection axis 10 but need not necessarily intersect said projection axis 10.
  • Fig. 4 shows an example of the function 34.
  • the function 34 is approximated by interpolation.
  • the function 34 is calculated, for example once for all pixels 27, for a number of predetermined displacement values L 35, 36, 37. Subsequently, for a requested pixel 27, having the coordinates X*P,Y*P and the displacement L, it is determined which two of the predetermined displacement values 35, 36, 37 are nearest to the displacement determined. Subsequently, the ratio α of the distances of the displacement L determined to these two nearest predetermined displacement values is determined. The depth z associated with the displacement L determined is subsequently calculated as Z* = Z*1 + α·(Z*2 - Z*1).
  • Z*1 and Z*2 are the depths calculated for the nearest predetermined displacement values.
  • Other interpolation formulas may also be used, for example a quadratic interpolation formula which is correct for three of the predetermined displacement values.
  • the predetermined displacement values 35, 36, 37 need not be equidistant either; for example, in given circumstances the depth can be more accurately approximated by using appropriately preselected, non-equidistant points 35, 36, 37.
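The piecewise-linear depth interpolation described above can be sketched as follows; the table of predetermined displacement values and the underlying depth function are illustrative:

```python
import bisect

# Sketch of the piecewise-linear depth interpolation: depths are precomputed
# at a few predetermined displacement values (here equidistant, though they
# need not be) and the depth of a pixel is interpolated between the two
# nearest entries. The table and the depth function are illustrative.

L_table = [0.0, 4.0, 8.0, 12.0]                      # predetermined displacements
Z_table = [1.0 / (0.5 + 0.05 * l) for l in L_table]  # exact depths (inversion of 1/z)

def interpolated_depth(l):
    i = bisect.bisect_right(L_table, l) - 1
    i = min(max(i, 0), len(L_table) - 2)             # clamp to a valid sub-range
    alpha = (l - L_table[i]) / (L_table[i + 1] - L_table[i])
    return Z_table[i] + alpha * (Z_table[i + 1] - Z_table[i])
```

At the predetermined displacement values the interpolated depth coincides with the exact depth; in between, the error shrinks as the table is made finer.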
  • the respective image contributions are determined successively for a series of pixels situated along a scan line in the image.
  • a scan line is, for example a horizontal line in the image which corresponds to a line signal in a video signal in which pixels are consecutively arranged.
  • the displacement values of successive pixels along such a scan line deviate each time by a fixed amount. Therefore, it is advantageous to calculate the displacement for an initial pixel on the scan line which is associated with the surface 5 and to calculate the displacement for subsequent pixels on the scan line each time by incrementing the displacement of the preceding pixel by said fixed amount.
  • the displacement L is preferably represented by an integer part I and a fractional part α.
  • the integer part I serves as an index for searching the pair of predetermined displacements nearest to the displacement L.
  • the fractional part represents the ratio of the distances to the nearest predetermined displacements.
  • the fractional part α is calculated each time by adding the fixed amount to the fractional part α of the preceding pixel. When the fractional part α exceeds a maximum value as a result of the addition, the fractional part α is reduced by said maximum value and the integer part I is incremented.
  • the minimum value of the size can be determined experimentally, for example once (for use for all polygons).
  • the time required for the calculations involving interpolation is approximated, for example as a constant part (independent of the size) and a size-dependent part, for example a part proportional to the size. The same holds for the time required for inversion.
  • the point at which these approximations intersect as a function of the size constitutes the desired minimum value.
  • the minimum value is chosen so that, generally speaking, beyond this value interpolation is faster, and that therebelow inversion is generally faster.
  • as a measure of the size, use is made of the diameter of the region of pixels, or the maximum of the x-range and the y-range of this region, or the distance between the maps of the vertices of the polygon, or the number of pixels in the region.
  • the size of the region on which the surface is mapped is estimated first. If this size is smaller than the minimum value, z is calculated by inversion and the interpolation is not executed.
  • the z of the vertices of the region can be calculated and the z of the pixels in the region can subsequently be determined by interpolation of the z of the vertices.
  • the errors thus introduced are small because the region is small.
  • Fig. 5 shows a device for mapping with texture mapping.
  • the device includes a cascade connection of an incremental displacement-determining device 40, a depth interpolator 42, a multiplier 44, a texture map memory 46, and an image display device 47.
  • the input 40a of the incremental displacement-determining device 40 is also coupled to the input of a ratio-determining device 48; the output of the ratio determining device 48 is coupled to an input of the multiplier 44.
  • the device forms the image contents for successive pixels on a scan line.
  • a signal on the input 40a of the incremental displacement-determining device 40 indicates that a next pixel is concerned.
  • the incremental displacement-determining device 40 increments the displacement value L determined for the preceding pixel by a fixed amount, indicating the increase of the displacement L from one pixel to another along the scan line.
  • the depth interpolator 42 receives the displacement L thus calculated and calculates the depth z therefrom by interpolation. This depth is applied to the multiplier 44.
  • the ratio-determining device 48 increments the ratio (U*-u0)/z of the offset of the texture coordinate U* and the depth z, determined for the preceding pixel, by a further fixed amount which indicates the increase of the ratio (U*-u0)/z from one pixel to another along the scan line.
  • the multiplier multiplies the ratio (U*-u0)/z thus determined by the interpolated depth Z*, thus generating the offset of the texture coordinate U*.
  • the latter is used as an address for reading the texture map memory 46.
  • the resultant texture value T(U*) is subsequently used to calculate the image content for the pixel which is made visible to the viewer of the display device.
  • a plurality of surfaces can be processed in order to generate a complete image. For each surface the required fixed amounts whereby the displacement L and the ratio (U*-u0)/z are incremented are each time loaded again via a bus 49. If necessary, the texture memory 46 is also loaded again, or at least offsets referring to other texture in the texture memory are loaded.
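The Fig. 5 data path for one scan line can be sketched behaviourally; the per-surface increments and the depth interpolator passed in below are illustrative stand-ins:

```python
# Behavioural sketch of the Fig. 5 data path for one scan line: the
# displacement L and the ratio (U*-u0)/z are each advanced by a fixed
# per-surface increment, the depth is interpolated from L, and the multiplier
# recovers the texture offset U*-u0. Names and constants are illustrative.

def scanline_texture_offsets(l0, dl, r0, dr, depth_of):
    """l0, r0: displacement and ratio (U*-u0)/z at the first pixel;
    dl, dr: their fixed per-pixel increments;
    depth_of: interpolator mapping a displacement to a depth z."""
    l, r = l0, r0
    while True:
        yield r * depth_of(l)   # multiplier output: offset U* - u0
        l += dl                 # incremental displacement-determining device 40
        r += dr                 # ratio-determining device 48

# with a constant-depth surface (depth 2.0 everywhere) the offsets grow linearly
gen = scanline_texture_offsets(0.0, 1.0, 0.5, 0.25, lambda l: 2.0)
offsets = [next(gen) for _ in range(3)]
assert offsets == [1.0, 1.5, 2.0]
```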
  • Fig. 5 shows only the part of the circuit which is used for calculating a first one of the texture coordinates U * .
  • a second texture coordinate V * is also calculated. This calculation is the same as that of the first texture coordinate.
  • use can be made of a further ratio-determining device and a further multiplier which operate in parallel with the ratio-determining device 48 and the multiplier 44 and receive the same interpolated depth Z*.
  • the further multiplier then generates the second texture coordinate V * which is applied to the texture map memory 46 as the second part of the texture address.
  • Fig. 6 shows a circuit for determining the displacement which is intended for use in the displacement- determining device 40.
  • This circuit includes a first and a second register 60, 62 which are coupled to respective inputs of an adder 64.
  • the adder 64 includes a sum output and an overflow output.
  • the overflow output is coupled to a counter 66.
  • the sum output is coupled to an input of the second register 62.
  • the first register 60 stores the fixed amount whereby the displacement is incremented for each successive pixel.
  • the second register stores the fractional part α of the calculated displacement.
  • the adder 64 adds the fixed amount to the fractional part α.
  • the sum is loaded into the second register 62 again.
  • the adder 64 is, for example, a binary adder which, in the case of overflow, outputs the sum modulo 2^n together with an overflow signal. This overflow signal then increments the count of the counter 66.
  • the count of the counter 66 represents the integer part I of the displacement.
  • the integer part addresses a set of interpolation coefficients which are stored in a memory (not shown) in the interpolator 42, for example the coefficients Z*1 and (Z*2 - Z*1).
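The Fig. 6 circuit can be modelled behaviourally as follows; the register width and increment are illustrative:

```python
# Behavioural model of the Fig. 6 circuit: an n-bit adder adds the fixed
# increment (register 60) to the fractional part (register 62); its overflow
# output clocks the counter 66, whose count is the integer part I that
# addresses the interpolation coefficients. Width and increment are illustrative.

class DisplacementCircuit:
    def __init__(self, n_bits, increment):
        self.mod = 1 << n_bits
        self.increment = increment   # first register (60): fixed amount
        self.frac = 0                # second register (62): fractional part
        self.counter = 0             # counter (66): integer part I

    def clock(self):                 # one pixel transition along the scan line
        s = self.frac + self.increment
        if s >= self.mod:            # adder overflow output
            self.counter += 1        # increments the counter
        self.frac = s % self.mod     # sum modulo 2**n back into register 62

c = DisplacementCircuit(n_bits=4, increment=6)
for _ in range(5):
    c.clock()
assert (c.counter, c.frac) == (1, 14)   # accumulated 30 = 1*16 + 14
```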
  • the use of an interpolated depth Z* for calculating the contribution of texture to the image content of a pixel may give rise to image artefacts.
  • the image artefacts can be maintained within acceptable limits by using a sufficiently small distance between the successive displacement values 35, 36, 37 wherebetween the depth is interpolated.
  • These displacement values 35, 36, 37 define successive sub-ranges of the total displacement range. For each sub-range there is provided a respective set of interpolation coefficients. As the sub-range is smaller, i.e. as the displacement values 35, 36, 37 wherebetween interpolation takes place are closer to one another, the image artefacts are smaller.
  • the distance between the displacement values 35, 36, 37 wherebetween interpolation takes place is chosen in dependence on the parameters of the surface 5, so that for each surface to be reproduced a respective distance is chosen between successive displacement values wherebetween interpolation takes place.
  • the choice of the distance is made, for example as follows.
  • the error Dz in the interpolated depth Z* causes a U*,V* deviation in the calculated texture coordinates U*,V*. It has been found that a suitable criterion for the maximum acceptable error Dz follows from the following condition: the U*,V* deviation caused by the error Dz must be a factor e smaller than the change in U*,V* between neighbouring pixels. Explicitly, the error Dz must be so small that the deviation it causes in U* is a factor e smaller than the change in U* towards at least one neighbouring pixel (in the X* or the Y* direction), and similarly for V*.
  • the error Dz in its turn can be expressed in the distance between successive displacement values wherebetween interpolation takes place.
  • the maximum permissible distance is the distance yielding a maximum Dz which still satisfies at least one of the above conditions for U* and at least one of the above conditions for V*.
  • the choice of the factor e requires a compromise: a small factor e (U*,V* deviation very small) means that the distance becomes very small and hence much arithmetic work is required to calculate the depths associated with displacement values wherebetween interpolation takes place; an excessively large factor e leads to visible image artefacts.
  • the maximum interpolation error Dz occurring can be reliably determined by determining the depth z halfway across an interpolation interval around such a vertex both exactly (by inversion of 1/z) and by interpolation between two 1/z values which are situated one half distance above and below the 1/z value of the vertex.
  • the difference between the depths thus obtained is a reliable measure of the maximum error Dz (the actual error is at most 32/27 of the difference).
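The error estimate can be sketched numerically; the two 1/z values below are illustrative, and the 32/27 factor is the bound quoted above:

```python
# Numerical sketch of the error estimate: the depth at a point is computed
# exactly (by inverting 1/z) and by interpolating between the depths belonging
# to 1/z values half an interval below and above it; the difference estimates
# the interpolation error Dz, and 32/27 of it bounds the true maximum error.
# The two 1/z values are illustrative.

w_lo, w_hi = 0.40, 0.60                      # 1/z half an interval below/above

z_exact = 1.0 / ((w_lo + w_hi) / 2.0)        # exact: invert 1/z at the midpoint
z_interp = (1.0 / w_lo + 1.0 / w_hi) / 2.0   # interpolated between the two depths

dz_estimate = abs(z_interp - z_exact)
dz_bound = dz_estimate * 32.0 / 27.0         # reliable bound quoted in the text
```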
  • Dz is then made so small that at least one of these two conditions is also satisfied.
  • the distance between successive displacements wherebetween the depth is interpolated is chosen in dependence on the parameters of the surface to be reproduced.
  • the distance between successive displacement values wherebetween interpolation takes place is separately chosen, in dependence on the properties of the relevant surface, for each of said surfaces.
  • the distance between successive displacement values wherebetween interpolation takes place is adapted to the properties of the surface in such a manner that an imaging error occurs which can still be accepted, thus saving the arithmetic work otherwise spent on unnecessarily accurate images.
  • Texture mapping usually utilizes a MIPMAP: a set of texture maps of increasing resolution. For the calculation of an image contribution to a pixel having coordinates X*,Y*, a texture value T(U*,V*) is read from one texture map, chosen from this set, or interpolated between the texture values T(U*,V*) and T'(U*,V*) of two texture maps.
  • Said one or two texture maps are chosen on the basis of a value "LOD" which is a measure of the part of the texture space (range of U*,V* coordinates) mapped, after rounding, onto the same pixel having coordinates X*,Y*, for example by determining the maximum value of the derivatives dU*/dX*, dU*/dY*, etc. This is performed to avoid aliasing.
  • the value LOD is preferably also determined by interpolation as a function of the displacement, using the same distance between successive displacement values wherebetween interpolation takes place as used for the interpolation of the depth z.
  • the depth as well as the value LOD can be calculated from one calculation of the displacement.
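As a concrete (though not patent-specified) choice of the LOD measure, the base-2 logarithm of the largest texture-coordinate derivative is commonly used; a sketch with illustrative derivative values:

```python
import math

# One common concrete choice (not specified in the patent) of the LOD value:
# the base-2 logarithm of the largest texture-coordinate derivative, i.e. of
# the texture footprint of one pixel. The derivative values are illustrative.

def lod(du_dx, du_dy, dv_dx, dv_dy):
    footprint = max(abs(du_dx), abs(du_dy), abs(dv_dx), abs(dv_dy))
    return max(0.0, math.log2(footprint))

level = lod(4.0, 1.0, 0.5, 2.0)               # 4 texels map onto one pixel in X
lo_map, hi_map = int(level), int(level) + 1   # the two maps to blend between
```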

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)

Abstract

A series of points on a surface is projected onto pixels of an image along a projection axis. Furthermore, texture coordinates are allocated to the points as follows. First a normalized coordinate which is a ratio of the texture coordinate to a depth of the point along the projection axis is determined by linear interpolation. Displacement of the pixel is determined relative to a line in the image onto which a part of the surface, which has a constant depth in the projection direction, is projected. Subsequently, an interpolation function is calculated which interpolates the depth as a function of the displacement. The texture coordinate is determined by multiplication of the normalized coordinate by the depth. The image contribution by the point to an image content of the pixel is determined on the basis of the texture coordinate.

Description

Method and device for graphics mapping of a surface onto a two-dimensional image.
In practice, however, not the point R itself is used but the coordinates X*,Y* of the pixel, i.e. of the projection of the point R. In order to determine the image contribution, the associated pair of texture coordinates U*,V* must be found for the coordinates X*,Y*. To this end, in accordance with EP 656 609, first the depth z and the X*,Y* and U*,V* coordinates of the vertices of the triangle are determined. Subsequently, U*/z, V*/z and 1/z are calculated for the vertices. The values U*/z, V*/z and 1/z for the other points on the triangle can subsequently be determined rather easily by linear interpolation of these values for the vertices. Subsequently, z is determined from the interpolated 1/z by inversion, and finally U* and V* are determined by multiplication of the interpolated U*/z and V*/z by z. The inversion of 1/z is problematic: known computers require a comparatively long period of time for the calculation of the inverse, and special hardware for faster calculation of this inverse is very complex.
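The prior-art scan-line calculation described above can be sketched as follows; the function name and span representation are illustrative assumptions:

```python
# Sketch of the prior-art scan-line method (EP 656 609): U*/z, V*/z and 1/z
# are linear in the image, so they are interpolated across a span and z is
# recovered per pixel by inverting the interpolated 1/z (the costly division
# the invention seeks to avoid). The span representation is illustrative.

def texture_coords_scanline(p0, p1, n):
    """p0, p1: (U*, V*, z) at the two ends of a span; returns the
    perspective-correct (U*, V*) at n+1 evenly spaced pixels."""
    (u0, v0, z0), (u1, v1, z1) = p0, p1
    a0, a1 = u0 / z0, u1 / z1        # U*/z at the end points
    b0, b1 = v0 / z0, v1 / z1        # V*/z
    w0, w1 = 1.0 / z0, 1.0 / z1      # 1/z
    coords = []
    for i in range(n + 1):
        t = i / n
        a = a0 + t * (a1 - a0)       # linear interpolation of U*/z
        b = b0 + t * (b1 - b0)
        w = w0 + t * (w1 - w0)
        z = 1.0 / w                  # per-pixel inversion of 1/z
        coords.append((a * z, b * z))
    return coords
```

At the end points this reproduces the vertex texture coordinates exactly, while linearly interpolating U*,V* directly (without the division) would not be perspective-correct in between.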
It is inter alia an object of the invention to provide a method and a device of the kind set forth which require less arithmetic work. The method according to the invention is characterized in that the determination of the depth includes the following steps: determining a displacement of the pixel relative to a line in the image onto which there is projected a part of the surface which has a constant depth in the projection direction; - calculating an interpolation function which interpolates the depth as a function of the displacement. Interpolation reduces the amount of arithmetic work required to determine the depth as compared with the inversion of 1/z. Because interpolation is performed as a function of the displacement, moreover, fewer interpolation coefficients need be updated than if inteφolation were to take place directly as a function of X* and Y*. Consequently, less storage space is required for the storage of the inteφolation coefficients.
A version of the method according to the invention in which the pixel is preceded by a series of pixels on a scan line in the image is characterized in that the displacement is determined by adding an increment to a further displacement of a preceding pixel of the series. The displacement increases linearly along the scan line. At every transition between successive pixels along the scan line the displacement increases by a fixed amount which is a function of the orientation of the surface relative to the scan line. As a result, the displacement can be determined by means of a minimum amount of arithmetic work. A version of the method according to the invention is characterized in that there is a range of displacements of pixels onto which the surface is mapped, which range is subdivided into sub-ranges of displacements, that for calculation of the inteφolation function in each sub-range of displacements a respective set of inteφolation coefficients is used, and that the method includes a step of selecting a resolution of the subdivision into sub-ranges in dependence on a property of the surface. As the resolution of subdivision into sub-range is higher, the error caused by the inteφolation will be smaller. (A higher resolution in this context means a smaller difference between the minimum and the maximum displacement value within a sub-range). The error can be limited to an acceptable value by choosing the resolution to be dependent on a property of the surface (such as the angle of the normal to the surface with the projection axis and/or the depth range of the surface). Therefore, for different mappings of the same surface, from different projection directions, the resolution may be different. This can be readily achieved because inteφolation takes place as a function of the displacement.
A version of the method according to the invention utilizes an indexed set of texture maps having an index-dependent resolution, a current index being selected from the set of texture maps in dependence on the depth and the image contribution being determined in conformity with a texture value associated with the texture coordinate in an indexed texture map, and is characterized in that the current index is calculated by means of a further interpolation function of indices as a function of the displacement. The index is dependent on the derivative of the texture coordinate with respect to the pixel coordinate and hence is in principle a rather complex function of inter alia the depth and the depth gradient. Arithmetic work is saved by calculating this index by interpolation as a function of the displacement.
The embodiments of the device according to the invention utilize the versions of the method according to the invention and hence save arithmetic time/arithmetic circuits and/or storage space for tables with interpolation coefficients.
These and other aspects of the invention will be apparent from and elucidated with reference to the embodiments described hereinafter. In the drawings:
Fig. 1 shows a graphics processing system, Fig. 2 is a side elevation of the geometry of a projection,
Fig. 3 shows the texture mapping,
Fig. 4 shows the depth interpolation,
Fig. 5 shows a device for imaging with texture mapping, and
Fig. 6 shows a circuit for determining the displacement.

Fig. 1 shows a graphics processing system. This system includes a processor 1, a memory 2 and an image display unit 3. During operation the memory 2 stores parameters which describe a content of a three-dimensional space, for example in terms of coordinates of vertices of polygons. The processor 1 receives the coordinates of a view point wherefrom the space is viewed and calculates, using the parameters stored in the memory 2, the image content of an image thus obtained. The calculated image content is used for controlling the image display unit which thus displays the image in a manner which is visible to human users.
The image is obtained by projection of the polygons onto a projection plane. Fig. 2 is a side elevation of the geometry of a projection of one polygon.
The Figure shows a polygonal surface 5 which is projected onto a projection plane 11. The projection plane 11 is shown from the side, so that only a line 11 remains which corresponds to the X-coordinate direction in the projection plane 11. The projection takes place along a projection axis 10 which is a normal to the projection plane 11 which extends through the view point 6.
An edge 12 of the polygonal surface 5 is separately indicated. This edge 12 extends between two vertices 13, 14 which are mapped onto two points 16, 17 in the image plane. For the sake of clarity of the projection, the projection lines through the vertices 13, 14, the mapped points 16, 17 and the view point are shown. The X-coordinate X* of the mapped point 16 is a ratio x/z of the X coordinate x of the point 13 of the surface and the depth z of the point 13 in the projection direction along the projection axis 10. For the Y-coordinate Y* (not shown) of the mapped point 16 it also holds that Y*=y/z.
Fig. 3 shows the principle of texture mapping. A point R on the surface 22 is imaged at a point 21 having the texture coordinates U*(R),V*(R) in the texture map 20. With the texture coordinates U*(R),V*(R) there is associated a texture value T(U*(R),V*(R)). In addition, the projection maps the point R onto a point 27 having the image coordinates X*(R),Y*(R) in the image 24. Therefore, the texture value T(U*(R),V*(R)) is decisive in respect of the image contribution at the point 27 having coordinates X*(R),Y*(R) in the image 24 onto which the point R is mapped from the surface 22.
The image 24 is formed pixel-by-pixel. Each time a pixel having a pixel coordinate pair X*P,Y*P is used. Thus, the procedure does not start from a point R in space which is mapped onto a coordinate pair X*(R),Y*(R), but works exactly the other way around: R follows from the coordinate pair X*P,Y*P. From the coordinates X*P,Y*P the associated texture coordinates U*P,V*P are determined.
This can be expressed in formulas as follows. The texture map is defined by mapping the coordinates x, y, z of the point R on the surface 22 onto texture coordinates U*P,V*P from the three-dimensional space as:
U*P = x ax + y ay + z az + u0
V*P = x bx + y by + z bz + v0
The coordinates X*P,Y*P of the projection of the point R in the image 24 are x/z, y/z. The coordinates of the projection X*P,Y*P, therefore, are mapped onto texture coordinates U*P,V*P as:
U*P = z ( X*P ax + Y*P ay + az ) + u0
V*P = z ( X*P bx + Y*P by + bz ) + v0
If the depth z is known, the texture coordinates U*P,V*P can be calculated from the coordinates X*P,Y*P of the projection of the point R and the coefficients u0, v0, ax, ay, az, bx, by, bz using no more than additions and multiplications. The coefficients u0, ax, ay, az, v0, bx, by, bz can be determined from the values U*P, V*P, X*P, Y*P, z of three vertices of the polygon. For this determination it should be taken into account that a whole class of coefficients u0, ax, ay, az leads each time to the same relation between U*P and X*P, Y*P, because a relation exists between x, y and z on the surface 22. It suffices to determine one element of this class, for example the element with u0=0. Analogously, it also suffices to determine bx, by, bz for v0=0.
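The per-pixel calculation described above can be sketched as follows (Python; the plane coefficient values are hypothetical, not taken from the text):

```python
# Hypothetical plane coefficients for one polygon (illustrative values only).
ax, ay, az, u0 = 0.5, 0.0, 0.2, 16.0
bx, by, bz, v0 = 0.0, 0.5, 0.1, 16.0

def texture_coords(Xp, Yp, z):
    """Map projected image coordinates (X*,Y*) and depth z to texture
    coordinates (U*,V*), using only additions and multiplications."""
    Up = z * (Xp * ax + Yp * ay + az) + u0
    Vp = z * (Xp * bx + Yp * by + bz) + v0
    return Up, Vp
```

Once the eight coefficients have been set up for a polygon, each pixel costs only a handful of multiply-adds, which is the property the text relies on.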
In accordance with the invention z is obtained as follows. First a displacement L is determined for the pixel 27. This displacement L is defined as the distance from the pixel 27 to the line 28, i.e. the length of a perpendicular from the pixel 27 to the line 28 (the displacement L of a pixel to one side of the line, say the left-hand side, is then taken as minus this distance, and the displacement L of a pixel to the other side, say the right-hand side, as plus this distance).
The line 28 is chosen as a line of points in the image which are a map of a section of the surface 5 with an arbitrary plane extending parallel to the projection plane 11; all points of this section thus have the same depth z in the projection direction and, moreover, the section extends perpendicularly to the projection axis 10 but need not necessarily intersect said projection axis 10. In terms of the image this means that the line 28 extends parallel to the horizon of the map of the surface 5 (the horizon of a surface is the set of all points of intersection of the maps of parallel extending lines in the surface 5).
The depth z associated with a pixel 27 is a function of the displacement L: z = z0/(1 - L/L0), in which z0 is the depth of the line 28 and L0 is the displacement of the horizon.
Fig. 4 shows an example of the function 34. In accordance with the invention the function 34 is approximated by interpolation. The function 34 is calculated, for example once for all pixels 27, for a number of predetermined displacement values L 35, 36, 37. Subsequently, for a requested pixel 27, having the coordinates X*P,Y*P and the displacement L, it is determined which two of the predetermined displacement values 35, 36, 37 are nearest to the displacement determined. Subsequently, the ratio α of the distances of the displacement L determined to these two nearest predetermined displacement values is determined. The depth Z* associated with the displacement L determined is subsequently calculated as

Z* = Z*1 + α ( Z*2 - Z*1 )

Therein, Z*1 and Z*2 are the depths calculated for the nearest predetermined displacement values.
Evidently, other interpolation formulas may also be used, for example a quadratic interpolation formula which is correct for three of the predetermined displacement values. The predetermined displacement values 35, 36, 37 need not be equidistant either; for example, in given circumstances the depth can be more accurately approximated by using appropriately preselected, non-equidistant points 35, 36, 37.
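The depth-interpolation scheme above can be sketched as follows (Python; z0, L0 and the equidistant sample displacements are illustrative assumptions, not values from the text):

```python
z0, L0 = 10.0, 100.0   # illustrative depth of line 28 and displacement of the horizon

def exact_depth(L):
    """Exact perspective depth as a function of the displacement L."""
    return z0 / (1.0 - L / L0)

# Precompute depths at a few predetermined displacement values (once per surface).
samples = [0.0, 10.0, 20.0, 30.0, 40.0]
depths = [exact_depth(L) for L in samples]

def interpolated_depth(L):
    """Linear interpolation between the two nearest predetermined values."""
    step = samples[1] - samples[0]
    i = min(int(L // step), len(samples) - 2)   # sub-range index (integer part)
    alpha = (L - samples[i]) / step             # ratio of distances (fractional part)
    return depths[i] + alpha * (depths[i + 1] - depths[i])
```

At the predetermined displacement values the interpolation is exact; in between, the error depends on the distance between those values, which is the quantity the later error analysis bounds.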
Preferably, the respective image contributions are determined successively for a series of pixels situated along a scan line in the image. A scan line is, for example a horizontal line in the image which corresponds to a line signal in a video signal in which pixels are consecutively arranged. The displacement values of successive pixels along such a scan line deviate each time by a fixed amount. Therefore, it is advantageous to calculate the displacement for an initial pixel on the scan line which is associated with the surface 5 and to calculate the displacement for subsequent pixels on the scan line each time by incrementing the displacement of the preceding pixel by said fixed amount.
The displacement L is preferably represented by an integer part I and a fractional part α. The integer part I serves as an index for searching the pair of predetermined displacements nearest to the displacement L. The fractional part represents the ratio of the distances to the nearest predetermined displacements. Along the scan line the fractional part α is calculated each time by adding the fixed amount to the fractional part α. When the fractional part α exceeds a maximum value as a result of the addition, the fractional part α is reduced by said maximum value and the integer part is incremented.
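A software sketch of this integer/fractional representation (Python; the per-pixel increment is a hypothetical value, and the maximum value of the fractional part is taken as 1):

```python
def displacement_parts(frac0, int0, step, n_pixels):
    """Walk a scan line, keeping the displacement as an integer part I
    (sub-range index) and a fractional part alpha in [0, 1)."""
    I, alpha = int0, frac0
    out = []
    for _ in range(n_pixels):
        out.append((I, alpha))
        alpha += step
        while alpha >= 1.0:      # overflow: carry into the integer part
            alpha -= 1.0
            I += 1
    return out
```

This mirrors the accumulator of Fig. 6: the addition corresponds to the adder, the carry to the overflow signal, and I to the counter.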
It has been found that for surfaces which are mapped onto a region of pixels whose dimension remains below a minimum value, execution of the complete set of calculations required for interpolation is slower on existing computer hardware than the calculation of the depth z by calculating first the inverse 1/z (being linearly related to X* and Y*) and subsequently inverting this inverse.
For given computer hardware the minimum value of the size can be determined experimentally, for example once (for use for all polygons). The time required for the calculations involving interpolation is approximated, for example as a constant part (independent of the size) and a size-dependent part, for example a part proportional to the size. The same holds for the time required for inversion. The point at which these approximations intersect as a function of the size constitutes the desired minimum value. The minimum value is chosen so that, generally speaking, beyond this value interpolation is faster, and that below it inversion is generally faster.
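Under the stated linear cost models the minimum value is simply the intersection of two lines; a sketch (Python, with the four cost constants assumed to have been measured experimentally):

```python
def crossover_size(c_interp, m_interp, c_invert, m_invert):
    """Size at which cost models c + m*size intersect.
    Below this size inversion is cheaper; above it interpolation wins.
    All four constants are hypothetical measured values."""
    if m_invert == m_interp:
        return float('inf')     # parallel cost lines never cross
    return (c_interp - c_invert) / (m_invert - m_interp)
```

For example, a large constant setup cost for interpolation and a large per-pixel cost for inversion yield a crossover at a moderate region size.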
For example, the diameter of the region of pixels is used as a measure of the size, or the maximum of the x-range and the y-range of this region, or the distance between the maps of the vertices of a polygon, or the number of pixels in the region. In an embodiment of the invention, preferably the size of the region onto which the surface is mapped is estimated first. If this size is smaller than the minimum value, z is calculated by inversion and the interpolation is not executed.
Alternatively, for a region having a size below the minimum value the z of the vertices of the region can be calculated and the z of the pixels in the region can subsequently be determined by interpolation of the z of the vertices. The errors thus introduced are small because the region is small.
Fig. 5 shows a device for mapping with texture mapping. The device includes a cascade connection of an incremental displacement-determining device 40, a depth interpolator 42, a multiplier 44, a texture map memory 46, and an image display device 47. The input 40a of the incremental displacement-determining device 40 is also coupled to the input of a ratio-determining device 48; the output of the ratio-determining device 48 is coupled to an input of the multiplier 44.
During operation the device forms the image contents for successive pixels on a scan line. A signal on the input 40a of the incremental displacement-determining device 40 indicates that a next pixel is concerned. In response thereto, the incremental displacement-determining device 40 increments the displacement value L determined for the preceding pixel by a fixed amount, indicating the increase of the displacement L from one pixel to another along the scan line. The depth interpolator 42 receives the displacement L thus calculated and calculates the depth z therefrom by interpolation. This depth is applied to the multiplier 44.
In response to the signal on the input of the incremental displacement-determining device 40, the ratio-determining device 48 increments the ratio (U*-u0)/z of the offset of the texture coordinate U* and the depth z, determined for the preceding pixel, by a further fixed amount which indicates the increase of the ratio (U*-u0)/z from one pixel to another along the scan line. The multiplier multiplies the ratio (U*-u0)/z thus determined by the interpolated depth Z*, thus generating the offset of the texture coordinate U*. The latter is used as an address for reading the texture map memory 46. The resultant texture value T(U*) is subsequently used to calculate the image content for the pixel, which is made visible to the viewer of the display device.
A plurality of surfaces can be processed in order to generate a complete image. For each surface the required fixed amounts whereby the displacement L and the ratio (U*-u0)/z are incremented are each time loaded again via a bus 49. If necessary, the texture memory 46 is also loaded again or at least offsets referring to other textures in the texture memory are loaded.
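The per-pixel data flow of Fig. 5 can be sketched in software as follows (Python; the function signature and the stand-in interpolator and texture callables are hypothetical):

```python
def scan_line(n, L_start, dL, ratio_start, d_ratio, u0, interp_depth, texture):
    """Software sketch of the Fig. 5 pipeline for n pixels on a scan line:
    increment the displacement L and the ratio (U*-u0)/z, interpolate the
    depth, multiply, and look up the texture value."""
    L, ratio = L_start, ratio_start
    out = []
    for _ in range(n):
        z = interp_depth(L)       # depth interpolator 42
        U = ratio * z + u0        # multiplier 44 plus texture-coordinate offset
        out.append(texture(U))    # texture map memory 46
        L += dL                   # incremental displacement-determining device 40
        ratio += d_ratio          # ratio-determining device 48
    return out
```

For a two-dimensional texture the same loop would carry a second ratio for V*, sharing the interpolated depth, exactly as the text describes for the further multiplier.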
Fig. 5 shows only the part of the circuit which is used for calculating the first texture coordinate U*. Evidently, when two-dimensional textures are used, a second texture coordinate V* is also calculated. This calculation is the same as that of the first texture coordinate. For this purpose use can be made of a further ratio-determining device and a further multiplier which operate in parallel with the ratio-determining device 48 and the multiplier 44 and receive the same interpolated depth Z*. The further multiplier then generates the second texture coordinate V* which is applied to the texture map memory 46 as the second part of the texture address.

Fig. 6 shows a circuit for determining the displacement which is intended for use in the displacement-determining device 40. This circuit includes a first and a second register 60, 62 which are coupled to respective inputs of an adder 64. The adder 64 includes a sum output and an overflow output. The overflow output is coupled to a counter 66. The sum output is coupled to an input of the second register 62. During operation the first register 60 stores the fixed amount whereby the displacement is incremented for each successive pixel. The second register stores the fractional part α of the calculated displacement. For each successive pixel the adder 64 adds the fixed amount to the fractional part α. The sum is loaded into the second register 62 again. The adder 64 is, for example, a binary adder which, in the case of overflow, outputs the sum modulo 2^n together with an overflow signal. This overflow signal then increments the count of the counter 66. The count of the counter 66 represents the integer part I of the displacement.
The integer part addresses a set of interpolation coefficients which are stored in a memory (not shown) in the interpolator 42, for example the coefficients Z*1 and (Z*2 - Z*1). The fractional part α controls the calculation whereby an interpolated depth Z* is determined from said interpolation coefficients as an approximation of the real depth z, for example as Z* = Z*1 + α ( Z*2 - Z*1 ).
The use of an interpolated depth Z* for calculating the contribution of texture to the image content of a pixel may give rise to image artefacts. The image artefacts can be maintained within acceptable limits by using a sufficiently small distance between the successive displacement values 35, 36, 37 wherebetween the depth is interpolated. These displacement values 35, 36, 37 define successive sub-ranges of the total displacement range. For each sub-range there is provided a respective set of interpolation coefficients. As the sub-range is smaller, i.e. as the displacement values 35, 36, 37 wherebetween interpolation takes place are closer to one another, the image artefacts are smaller. Preferably, the distance between the displacement values 35, 36, 37 wherebetween interpolation takes place is chosen in dependence on the parameters of the surface 5, so that for each surface to be reproduced a respective distance is chosen between successive displacement values wherebetween interpolation takes place. The choice of the distance is made, for example, as follows. The error Dz in the interpolated depth Z* causes a deviation in the calculated texture coordinates U*,V*. It has been found that a suitable criterion for the maximum acceptable error Dz follows from the following condition: the U*,V* deviation caused by the error Dz must be a factor e smaller than the change in U*,V* between neighbouring pixels. Explicitly, for example, the error Dz must be so small that at least one of the following conditions is satisfied:
Dz |dU*/dz| < e |dU*/dX*|
Dz |dU*/dz| < e |dU*/dY*|
and that also at least one of the following conditions is satisfied:
Dz |dV*/dz| < e |dV*/dX*|
Dz |dV*/dz| < e |dV*/dY*|
(The unit in which X* and Y* are expressed is chosen so that X* increases by 1 from one pixel to another; the same holds for Y*).
The error Dz in its turn can be expressed in terms of the distance between successive displacement values wherebetween interpolation takes place. The maximum permissible distance is the distance yielding a maximum Dz which still satisfies at least one of the above conditions for U* and at least one of the above conditions for V*.
The choice of the factor e requires a compromise: a small factor e (very small U*,V* deviation) means that the distance becomes very small and hence much arithmetic work is required to calculate the depths associated with the displacement values wherebetween interpolation takes place; an excessively large factor e leads to visible image artefacts. The factor e is preferably smaller than 1. It has been found that a factor e = 1/2 offers suitable results.
It has been found in practice that the choice of the maximum permissible distance between successive displacement values wherebetween interpolation takes place can be reliably made by evaluating the above conditions for a limited number of points only (for example, three vertices of a polygonal surface). To this end, per vertex the derivatives dU*/dX* etc. and dU*/dz etc. are calculated. The largest of the ratios |dU*/dX*| / |dU*/dz| is then chosen; this ratio will be referred to as MAXU. Furthermore, the largest of the ratios |dV*/dX*| / |dV*/dz| is also chosen and referred to as MAXV. The smaller one of MAXU and MAXV will be denoted as MAX. The above conditions are satisfied if Dz < e MAX for all vertices.
It has also been found that the maximum interpolation error Dz occurring can be reliably determined by determining the depth z halfway along an interpolation interval around such a vertex both exactly (by inversion of 1/z) and by interpolation between two 1/z values which are situated one half distance above and below the 1/z value of the vertex. The difference between the depths thus obtained is a reliable measure of the maximum error Dz (the actual error is at most 32/27 of the difference). This difference equals Dz = d^2 z / (z^2 - d^2) (therein, z is the depth of such a vertex and d is half the distance times the derivative along that distance). The maximum permissible distance is found, for example, by solving the equation Dz = e MAX for the maximum error Dz thus found.
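The error formula and the resulting maximum permissible half-distance can be checked numerically. Solving Dz = E (with E standing for the product e MAX) under Dz = d^2 z / (z^2 - d^2) gives d = z sqrt(E / (z + E)); a sketch (Python, with illustrative values):

```python
import math

def interp_error(d, z):
    """Error estimate Dz = d^2 z / (z^2 - d^2) from the text, where z is
    the vertex depth and d is half the distance times the derivative."""
    return d * d * z / (z * z - d * d)

def max_half_distance(z, E):
    """Largest d for which interp_error(d, z) equals the budget E = e*MAX."""
    return z * math.sqrt(E / (z + E))
```

The derivation: d^2 z = E (z^2 - d^2) rearranges to d^2 (z + E) = E z^2, hence the square root above.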
Evidently, this is merely a practice-proven example of how to choose the distance between successive displacement values wherebetween interpolation takes place. Prior to the calculation of the image contribution by the surface this distance is calculated, and subsequently the image contributions are calculated as described above, interpolation then taking place between displacements which are situated the calculated distance apart. It has been found in practice that the interpolated depth Z* can be advantageously used not only for texture interpolation but also for other purposes, for example Z-buffer updates. In order to ensure that no excessive errors are introduced in that case, it is attractive to impose not only the above conditions as regards Dz (Dz |dU*/dz| < e |dU*/dX*| etc.), but also comparable conditions in respect of the error in the interpolated depth Z*:
Dz < e |dZ*/dX*|
Dz < e |dZ*/dY*|
Dz is then made so small that at least one of these two conditions is also satisfied. Generally speaking, for the specific interpolation formula used the distance between successive displacements wherebetween the depth is interpolated is chosen in dependence on the parameters of the surface to be reproduced. For the generating of an image, a number of surfaces will be reproduced in the image; in accordance with the invention the distance between successive displacement values wherebetween interpolation takes place is separately chosen, in dependence on the properties of the relevant surface, for each of said surfaces.
Thus, the distance between successive displacement values wherebetween interpolation takes place is adapted to the properties of the surface in such a manner that an imaging error occurs which can still be accepted, thus saving the arithmetic work otherwise spent on unnecessarily accurate images.
Texture mapping usually utilizes a MIPMAP: a set of texture maps of increasing resolution. For the calculation of an image contribution to a pixel having coordinates X*,Y*, a texture value T(U*,V*) is read from one texture map, chosen from this set, or interpolated between the texture values T(U*,V*) and T'(U*,V*) of two texture maps. Said one or two texture maps are chosen on the basis of a value "LOD" which is a measure of the part of the texture space (range of U*,V* coordinates) mapped, after rounding, onto the same pixel having coordinates X*,Y*, for example by determining the maximum value of the derivatives dU*/dX*, dU*/dY* etc. This is performed to avoid aliasing. For the interpolation between two texture maps the relative weight allocated to the two texture maps is also taken in dependence on the value LOD.
In accordance with the invention, the value LOD is preferably also determined by interpolation as a function of the displacement, using the same distance between successive displacement values wherebetween interpolation takes place as is used for the interpolation of the depth z. Thus, both the depth and the value LOD can be calculated from one calculation of the displacement.
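A sketch of interpolating the LOD over the same displacement sub-ranges (Python; the one-derivative LOD formula is a simplified, illustrative stand-in for the maximum-of-derivatives measure mentioned above):

```python
import math

def lod_exact(du_dx):
    """Simplified LOD: log2 of the texture-space footprint per pixel.
    A real implementation would take the maximum over several derivatives."""
    return max(0.0, math.log2(max(du_dx, 1e-6)))

# As with the depth, LOD can be sampled at the predetermined displacement
# values (once per surface) and linearly interpolated in between, reusing
# the same sub-range index and fractional part alpha.
def lod_interpolated(alpha, lod1, lod2):
    """Linear interpolation between the LOD samples bounding the sub-range."""
    return lod1 + alpha * (lod2 - lod1)
```

Since the sub-range index and fractional part are already available from the depth interpolation, the LOD comes almost for free, which is the saving the text points to.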


CLAIMS:
1. A method for graphics mapping of a surface from an at least three-dimensional model space onto a two-dimensional image, in which a texture coordinate is allocated to a point on the surface and the point is projected onto a pixel of the image along a projection axis, which method includes the following steps: - determining a normalized coordinate which is a ratio of the texture coordinate to a depth of the point along the projection axis; determining the depth of the point; determining the texture coordinate by multiplication of the normalized coordinate by the depth; - determining an image contribution by the point to an image content of the pixel on the basis of the texture coordinate; characterized in that the determination of the depth includes the following steps: determining a displacement of the pixel relative to a line in the image onto which there is projected a part of the surface which has a constant depth in the projection direction; calculating an interpolation function which interpolates the depth as a function of the displacement.
2. A method as claimed in Claim 1, in which the pixel is preceded by a series of pixels on a scan line in the image, characterized in that the displacement is determined by adding an increment to a further displacement of a preceding pixel of the series.
3. A method as claimed in Claim 1 or 2, characterized in that a size of a representation of the surface on the image is determined, and that, if the size is below a predetermined minimum, first an inverse of the depth is determined, after which the depth is determined by inversion of the inverse.
4. A method as claimed in Claim 1, 2 or 3, characterized in that there is a range of displacements of pixels onto which the surface is mapped, which range is subdivided into sub-ranges of displacements, in that for calculation of the interpolation function in each sub-range of displacements a respective set of interpolation coefficients is used, and in that the method includes a step of selecting a resolution of the subdivision into sub-ranges in dependence on a property of the surface.
5. A method as claimed in Claim 1, 2, 3 or 4 which utilizes an indexed set of texture maps having an index-dependent resolution, a current index being selected from the set of texture maps and the image contribution being determined in conformity with a texture value associated with the texture coordinate in an indexed texture map, characterized in that the current index is calculated by means of a further interpolation function of indices as a function of the displacement.
6. A device for graphics mapping of a surface from an at least three-dimensional model space onto a two-dimensional image, in which a texture coordinate is allocated to a point on the surface and the point is projected onto a pixel of the image along a projection axis, which device includes coordinate-determining means for determining a normalized coordinate which is a ratio of the texture coordinate to a depth of the point along the projection axis; - depth-determining means for determining the depth of the point; multiplier means for multiplying the normalized coordinate by the depth in order to obtain the texture coordinate; image-forming means for determining an image contribution by the point to an image content of the pixel on the basis of the texture coordinate; characterized in that the depth-determining means include displacement-determining means for determining a displacement of the pixel relative to a line in the image onto which there is projected a part of the surface which has a constant depth in the projection direction; interpolation means for calculating an interpolation function which interpolates the depth as a function of the displacement.
7. A device as claimed in Claim 6, arranged to determine respective image contributions for each of a series of pixels on a scan line in the image, characterized in that the displacement-determining means include incrementation means which determine the displacement of the pixel by adding an increment to a further displacement of a preceding pixel of the series.
8. A device as claimed in Claim 6 or 7, characterized in that it includes means for determining a size of a representation of the surface on the image, and that it is arranged to determine, if the size is below a predetermined minimum, first an inverse of the depth and subsequently the depth by inversion of the inverse.
9. A device as claimed in Claim 6, 7 or 8, characterized in that there is a range of displacements of pixels onto which the surface is mapped, which range is subdivided into sub-ranges of displacements, that the device includes a memory for storing an associated set of interpolation coefficients for each sub-range, the interpolation means being arranged to calculate the interpolation function for each sub-range in conformity with the associated set of interpolation coefficients from the memory, and that the device includes coefficient-determining means for determining the respective sets of interpolation coefficients, said coefficient-determining means selecting a resolution of the subdivision into sub-ranges in dependence on a property of the surface.
10. A device as claimed in Claim 6, 7, 8 or 9, including a further memory for storing an indexed set of texture maps having an index-dependent resolution; index-selection means for selecting a current index, the image-forming means determining the image contribution in conformity with a texture value associated with the texture coordinate in an indexed texture map, characterized in that the device includes index-interpolation means for calculating the current index by means of a further interpolation function of indices as a function of the displacement.
US5542025A (en) Precision Z-interpolation method and apparatus
US6326976B1 (en) Method for determining the representation of a picture on a display and method for determining the color of a pixel displayed
US6333746B1 (en) Auto level of detail texture mapping for a software graphics engine

Legal Events

Date Code Title Description
AK Designated states
Kind code of ref document: A1
Designated state(s): JP KR

AL Designated countries for regional patents
Kind code of ref document: A1
Designated state(s): AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE

WWE Wipo information: entry into national phase
Ref document number: 1997915644
Country of ref document: EP

ENP Entry into the national phase
Ref country code: JP
Ref document number: 1997 539674
Kind code of ref document: A
Format of ref document f/p: F

WWE Wipo information: entry into national phase
Ref document number: 1019980700040
Country of ref document: KR

121 Ep: the epo has been informed by wipo that ep was designated in this application

WWP Wipo information: published in national office
Ref document number: 1997915644
Country of ref document: EP

WWP Wipo information: published in national office
Ref document number: 1019980700040
Country of ref document: KR

WWG Wipo information: grant in national office
Ref document number: 1997915644
Country of ref document: EP

WWG Wipo information: grant in national office
Ref document number: 1019980700040
Country of ref document: KR