US20010010517A1 - Perspective projection calculation devices and methods - Google Patents

Perspective projection calculation devices and methods

Info

Publication number
US20010010517A1
US20010010517A1 (application US09/814,684)
Authority
US
United States
Prior art keywords
depth
calculation unit
interpolation
coordinates
coefficient
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US09/814,684
Inventor
Ichiro Iimura
Yasuhiro Nakatsuka
Jun Satoh
Takashi Sone
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to US09/814,684
Publication of US20010010517A1
Legal status: Abandoned

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 17/00: Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/10: Geometric effects
    • G06T 15/20: Perspective computation

Definitions

  • the present invention relates to figure generating systems for image processors, and more particularly to a perspective projection calculation device and method for correcting geometrical parameters of a perspectively projected three-dimensional figure.
  • a perspectively projected figure for example a triangle
  • linear interpolation is generally performed for each span, using the respective vertex coordinates of a perspectively projected triangle, and geometrical parameters necessary and sufficient for shading are approximately calculated for the respective points within the perspectively projected triangle.
  • JP-A-3-198172 discloses a method of calculating geometrical parameters necessary and sufficient for shading on a plane figure in a three-dimensional space without interpolation for each span, but no specified method of calculating interpolation coefficients used for the interpolation.
  • the present invention provides a perspective projection calculation device in an image processor for perspectively projecting a triangle defined in a three-dimensional space onto a two-dimensional space and for shading the triangle in the two-dimensional space, comprising:
  • At least one plane slope element coefficient calculating means for calculating a coefficient which implies a plane slope element of the triangle defined in the three-dimensional space
  • interpolation coefficient calculating means for calculating an interpolation coefficient from the plane slope element coefficient calculated by the plane slope element coefficient calculating means
  • at least one correcting means for making a perspective correction, using the interpolation coefficient.
  • the plane slope element coefficient may be used in common in all parameters to be interpolated.
  • An inverse matrix of a matrix of vertex coordinates of the triangle defined in the three-dimensional space may be used as the plane slope element coefficient.
  • the plane slope element coefficient, the interpolation coefficient and/or an interpolation expression including the interpolation coefficient may be used in common in the triangle defined in the three-dimensional space and/or in a perspectively projected triangle in a two-dimensional space.
  • the interpolation expression may involve only multiplication and/or addition.
  • the geometrical parameters may be interpolated in a three-dimensional space.
  • the interpolation expression may include a term involving a depth. More specifically, it may use the inverse of depth coordinates as geometrical parameters.
  • the interpolation expression may interpolate the geometrical parameters while maintaining the linearity thereof on a plane.
  • the present invention provides a perspective projection calculation device in an image processor which includes at least one display, at least one frame buffer for storing an image to be displayed on the display, and at least one figure generator for generating a figure which composes the image on the frame buffer, thereby making a perspective correction on the respective pixels of the figure,
  • the present invention provides a perspective projection calculation device in an image processor which includes a depth buffer which stores data on a depth from a viewpoint for a plane to be displayed and which removes a hidden surface, the image processor perspectively projecting a triangle defined in a three-dimensional space onto a two-dimensional space and shading the triangle in the two-dimensional space,
  • the depth buffer comprises a buffer for storing a non-linear value in correspondence to the distance from the viewpoint.
  • the depth buffer may comprise a buffer for storing a non-linear value representing a resolution which increases toward the viewpoint in place of the depth value. More specifically, the depth buffer may comprise a buffer for storing the inverse of a depth value in place of the depth value.
  • the present invention provides a perspective projection calculation method in an image processing method for perspectively projecting a triangle defined in a three-dimensional space onto a two-dimensional space and for shading the triangle in the two-dimensional space, comprising the steps of:
  • the plane slope element coefficient may be used in common in all parameters to be interpolated.
  • An inverse matrix of a matrix of vertex coordinates of the triangle defined in the three-dimensional space may be used as the plane slope element coefficient.
  • the plane slope element coefficient, the interpolation coefficient and/or an interpolation expression including the interpolation coefficient may be used in common in the triangle defined in the three-dimensional space and/or in a perspectively projected triangle in a two-dimensional space.
  • the interpolation expression may involve only multiplication and/or addition.
  • the geometrical parameters may be interpolated in a three-dimensional space.
  • the interpolation expression may include a term involving a depth. More specifically, the interpolation expression may use the inverses of depth coordinates as the geometrical parameters.
  • the interpolation calculation expression may interpolate the geometrical parameters while maintaining the linearity thereof on a plane.
  • the present invention provides a perspective projection calculation method in an image processing method which uses a depth buffer which stores data on a depth from a viewpoint for a plane to be displayed and which removes a hidden surface, a triangle defined in a three-dimensional space being perspectively projected onto a two-dimensional space and the triangle being shaded in the two-dimensional space, comprising the step of:
  • the depth buffer may store a non-linear value representing a resolution which increases toward the viewpoint in place of the depth value. More specifically, the depth buffer may store the inverse of a depth value in place of the depth value.
  • a plane slope element coefficient in a three-dimensional space usable in common in a plurality of geometrical parameters necessary for shading a perspectively projected figure is used to reduce the number of dividing operations, and the plurality of geometrical parameters is corrected for each plane in the three-dimensional space.
  • correct perspective correction is made rapidly.
  • FIG. 1 is a block diagram of an illustrative image processing system which employs one embodiment of a perspective projection calculation device according to the present invention
  • FIG. 2 is a block diagram of an illustrative perspective correction unit
  • FIG. 3 is a block diagram of an illustrative plane slope element coefficient calculation unit which is a component of the perspective correction calculation unit;
  • FIG. 4 is a block diagram of a depth coordinate interpolation coefficient calculation unit which is a component of the interpolation coefficient calculation unit;
  • FIG. 5 is a block diagram of a texture coordinate interpolation coefficient calculation unit which is a component of the interpolation coefficient calculation unit;
  • FIG. 6 is a block diagram of a depth coordinate correction unit which is a component of a correction unit
  • FIG. 7 is a block diagram of an illustrative texture coordinate s-component correction unit for an s-component of a texture coordinate correction unit as a component of the correction unit;
  • FIG. 8 is a block diagram of a modification of the perspective correction calculation unit of FIG. 2;
  • FIG. 9 is a block diagram of an illustrative luminance calculation unit which cooperates with a pixel address calculation unit and the correction unit to compose a figure generator;
  • FIGS. 10A and 10B show a triangle displayed on a display
  • FIG. 11 shows the relationship between geometrical parameters and the corresponding triangles to be displayed
  • FIGS. 12A, 12B and 12C show the relationship among the three kinds of coordinate systems, vrc 1110c, rzc 1180c and pdc 1170c;
  • FIG. 13 illustrates interpolation expressions for a depth coordinate and an s-component of texture coordinates as typical interpolation expressions to be processed in the correction unit
  • FIG. 14 shows a luminance calculation expression to be processed in the luminance calculation unit
  • FIG. 15 is a graph of the relationship between the depth from a viewpoint of a triangle to be displayed and its inverse with z and 1/z values as parameters;
  • FIG. 16 shows z values for several 1/z values
  • FIG. 17 shows the relationship between a case in which a z value is stored in a depth buffer and a case in which a 1/z value is stored in the depth buffer.
  • Referring to FIGS. 1-17, a preferred embodiment of a perspective projection calculation device and method according to the present invention will be described next.
  • FIG. 1 is a block diagram of an illustrative image processing system which employs one embodiment of a perspective projection calculation device according to the present invention.
  • the system is composed of a figure vertex information inputting unit 1000 , an image processor 2000 , a memory module 3000 and a display 4000 .
  • the image processor 2000 is composed of a perspective correction calculation unit 2100 , and a figure generator 2200 .
  • the memory module 3000 is composed of a frame buffer 3100 and a depth buffer 3200 .
  • FIG. 2 is a block diagram of the perspective correction calculation unit 2100, which makes a correction on the perspective projection and is composed of a plane slope element coefficient calculation unit 2310, an interpolation coefficient calculation unit 2320, a pixel address calculation unit 2400, and a correction unit 2520.
  • the interpolation coefficient calculation unit 2320 includes a depth coordinate interpolation coefficient calculation unit 2321 and a texture coordinate interpolation coefficient calculation unit 2322 .
  • the correction unit 2520 includes a depth coordinate correction unit 2521 and a texture coordinate correction unit 2522 .
  • FIG. 3 is a block diagram of the plane slope element coefficient calculation unit 2310 which is a component of the perspective correction calculation unit 2100.
  • the plane slope element coefficient calculation unit 2310 receives figure vertex information from the figure vertex information inputting unit 1000 .
  • a vertex coordinate matrix composition unit 2310 (a) composes a matrix based on the figure vertex information or the vertex coordinates (v 0 vertex coordinates 1111 , v 1 vertex coordinates 1112 , v 2 vertex coordinates 1113 ) of a triangle defined in a three-dimensional space.
  • An inverse matrix calculation unit 2310 (b) calculates an inverse matrix or plane slope element coefficients 2310 ( 1 ) based on the earlier-mentioned matrix.
  • FIG. 4 is a block diagram of an illustrative depth coordinate interpolation coefficient calculation unit 2321 which is a component of the interpolation coefficient calculation unit 2320 .
  • a depth coordinate matrix composition unit 2321 (a) composes a matrix of viewpoint-front clipping plane distances.
  • a multiplier 2321 (c) multiplies a plane slope element coefficient 2310 ( 1 ) calculated by the plane slope element calculation unit 2310 by the matrix of viewpoint-front plane clipping distances composed by the depth coordinate matrix composer 2321 (a).
  • a multiplier 2321 (d) multiplies the output from the multiplier 2321 (c) by a sign conversion matrix 2321 (b) to calculate depth coordinate interpolation coefficients 2321 ( 1 ).
  • the sign conversion matrix 2321 (b) implies conversion from a viewpoint coordinate system 1110 c to a recZ coordinate system 1180 c.
  • FIG. 5 is a block diagram of an illustrative texture coordinate interpolation coefficient calculation unit 2322 which is a component of the interpolation coefficient calculation unit 2320 .
  • the texture coordinate interpolation coefficient calculation unit 2322 receives figure vertex information from the figure vertex information inputting unit 1000 .
  • a texture coordinate matrix composition unit 2322 (a) composes a matrix based on figure vertex information or vertex texture coordinates (v 0 vertex texture coordinates 1121 , v 1 vertex texture coordinates 1122 , v 2 vertex texture coordinates 1123 ) of a triangle defined in a three-dimensional space.
  • a multiplier 2322 (b) multiplies data on the matrix from the texture coordinate matrix composition unit 2322 (a) by a plane slope element coefficient 2310 ( 1 ) calculated by the plane slope element calculation unit 2310 to calculate texture coordinate interpolation coefficients 2322 ( 1 ), which are composed of texture coordinate s-component interpolation coefficients and texture coordinate t-component interpolation coefficients.
  • FIG. 6 is a block diagram of the depth coordinate correction unit 2521 which is a component of the correction unit 2520 .
  • a multiplier 2521 (a) multiplies an x-component of a depth coordinate interpolation coefficient 2321 ( 1 ) by an x-component of an address generated by pixel address calculation unit 2400 .
  • An adder 2521 (c) adds the output from the multiplier 2521 (a) and a constant component of the depth coordinate interpolation coefficient 2321 ( 1 ).
  • a multiplier 2521 (b) multiplies a y-component of the depth coordinate interpolation coefficient 2321 ( 1 ) by a y-component of the address generated by the pixel address calculation unit 2400 .
  • Last, an adder 2521 (d) adds the output from the adder 2521 (c) and the output from the multiplier 2521 (b) to calculate corrected depth coordinates 2520 ( 1 ).
  • FIG. 7 is a block diagram of an illustrative texture coordinate s-component correction unit 2522 s which is an s-component of the texture coordinate correction unit 2522 which is a component of the correction unit 2520 .
  • a multiplier 2522 s(a) of the texture coordinate s-component correction unit 2522 s multiplies an x-component of an address generated from the pixel address calculation unit 2400 by a value of -near/recZ calculated by the corrected depth coordinates 2520 ( 1 ).
  • a multiplier 2522 s(b) multiplies the output from the multiplier 2522 s(a) by an x-component of the texture coordinate s-component interpolation coefficient 2322 ( 1 )s.
  • a multiplier 2522 s(c) multiplies a y-component of the address generated from the pixel address calculation unit 2400 by the value of -near/recZ calculated from the corrected depth coordinates 2520 ( 1 ).
  • a multiplier 2522 s(e) multiplies the output from the multiplier 2522 s(c) by a y-component of the texture coordinate s-component interpolation coefficient 2322 ( 1 )s.
  • a multiplier 2522 s(d) multiplies a z-component of the texture coordinate s-component interpolation coefficient 2322 ( 1 )s by the value of -near/recZ calculated by the corrected depth coordinates 2520 ( 1 ).
  • An adder 2522 s(f) adds the outputs from the multipliers 2522 s(b) and 2522 s(d).
  • an adder 2522 s(g) adds the outputs from the multiplier 2522 s(e) and the adder 2522 s(f) to calculate corrected texture s-component coordinates 2520 ( 2 )s.
  • FIG. 8 is a block diagram of a modification of the perspective correction calculation unit 2100 of FIG. 2.
  • the modification is arranged so as to handle geometrical parameters including vertex light source intensities whose linearities are maintained on a plane defined in a three-dimensional space, in addition to the depth coordinates and vertex texture coordinates.
  • Geometrical parameters whose linearities are maintained on a plane defined in a three-dimensional space are correctable with respect to perspective projection in a manner similar to that in which the depth coordinates and vertex texture coordinates will be corrected.
  • perspective correction is made on a light source intensity attenuation rate necessary for luminance calculation, using a similar structure to that of FIG. 8.
  • Perspective correction is made on a normal vector 1130 , a light source direction vector 1140 , a viewpoint direction vector 1190 , and a light source reflection vector 1150 in a three-dimensional space where linearity of parameters on a plane is maintained.
  • the space is referred to as a (u, v, w) space and its coordinate system is referred to as a normalized model coordinate system.
  • perspective correction is made on geometrical parameters such as normals, depth coordinates, vertex texture coordinates, light source, and viewpoint necessary for calculation of luminance, using a structure similar to that of FIG. 2; more specifically, a light source intensity attenuation rate 1160 determined by the positional relationship between the light source and respective points in the figure, a normal vector 1130 indicative of the direction of the plane, a light source direction vector 1140 indicative of the direction of the light source, a viewpoint direction vector 1190 indicative of the direction of the viewpoint, and a light source reflection vector 1150 indicative of the reflecting direction of the light source.
  • a light source intensity attenuation rate 1160 determined by the positional relationship between the light source and respective points in the figure
  • a normal vector 1130 indicative of the direction of the plane
  • a light source direction vector 1140 indicative of the direction of the light source
  • a viewpoint direction vector 1190 indicative of the direction of the viewpoint
  • a light source reflection vector 1150 indicative of the reflecting direction of the light source.
  • Perspective correction is made on the normal vector 1130 , light source direction vector 1140 , viewpoint direction vector 1190 and light reflection vector 1150 in a three-dimensional space in which linearity of parameters on a plane is maintained.
  • the space and the coordinate system are referred to as a (u, v, w) space and a normalized model coordinate system, respectively.
  • In the actual perspective correction, those vectors are converted into values (u, v, w) in the normalized model coordinate system, in which they can be linearly interpolated, and these converted values are used.
  • the coordinate values, in a normalized model coordinate system, of the vertexes of a triangle defined in the three-dimensional space are referred to as vertex normalized model coordinates.
  • FIG. 9 is a block diagram of an illustrative luminance calculation unit 2510 which cooperates with the pixel address calculation unit and correction unit 2520 to compose the figure generator 2200 .
  • a light source-caused-attenuation-free ambient/diffusive/specular component luminance calculation unit 2510 (b) calculates the luminances of the light source-caused-attenuation-free ambient, diffusive and specular components on the basis of the texture color and the corrected light source intensity attenuation rate.
  • a spot angle attenuation rate calculation unit 2510 (c) calculates an Lconc-th power of the inner product (−Ldir·Li) of a reverse light source vector 11A0 and the light source direction vector 1140.
  • a light source incident angle illumination calculation unit 2510 (d) calculates the inner product (L·N) of the normal vector 1130 and the light source direction vector 1140.
  • a specular reflection attenuation rate calculation unit 2510 (e) calculates an Sconc-th power of the inner product (V·R) of the viewpoint direction vector 1190 and the light source reflection vector 1150, where Lconc is a spot light source intensity index and Sconc is a specular reflection index.
  • a multiplier 2510 (f) multiplies the output from the light source-caused-attenuation-free ambient/diffusive/specular component luminance calculation unit 2510 (b) by the output from the spot angle attenuation rate calculation unit 2510 (c).
  • a multiplier 2510 (g) multiplies the output from the multiplier 2510 (f) by the output from the light source incident angle illumination calculation unit 2510 (d).
  • a multiplier 2510 (h) multiplies the output from the multiplier 2510 (f) by the output from the specular reflection attenuation rate calculation unit 2510 (e).
  • a whole light source ambient component calculation unit 2510 (j), a whole light source diffusive component calculation unit 2510 (k), and a whole light source specular component calculation unit 2510 (l) each add the respective associated components over the number of light sources.
  • a luminance synthesis adder 2510 (o) adds the output from the luminance calculation unit 2510 (i) for a natural field ambient component and an emission component, the output from the whole light source ambient component calculation unit 2510 (j), the output from the whole light source diffusive component calculation unit 2510 (k), and the output from the whole light source specular component calculation unit 2510 (l) to calculate a pixel luminance 2510 (1).
  • FIG. 10 shows a triangle 1100 displayed on the display 4000 .
  • the triangle is the one in a two-dimensional space to which the corresponding triangle defined in the three-dimensional space is perspectively projected.
  • FIG. 10B shows the pdc 1170, rzc 1180, vrc 1110, texture coordinates 1120, light source intensity attenuation rate 1160, and regular model coordinates 11B0 at the vertexes of the triangle 1100 displayed on the display 4000.
  • the pdc 1170 denotes the coordinates of the triangle displayed on the display 4000
  • the rzc 1180 denote coordinates of the perspectively projected triangle
  • the vrc 1110 denotes coordinates of the triangle present before the perspective projection
  • the texture coordinates 1120 are those corresponding to a texture image mapped on the triangle
  • the light source intensity attenuation rates 1160 are scalar values determined depending on the positional relationship between the light source and respective points within the figure
  • the regular model coordinates 11B0 are coordinate values in a regular model coordinate system which is a space in which the normal vector 1130, light source direction vector 1140, viewpoint direction vector 1190, and the light source reflection vector 1150 maintain their linearities on a plane in the three-dimensional space.
  • the pdc 1170 c represents “Physical Device Coordinates”, the rzc 1180 c “recZ Coordinates”, and the vrc 1110 c “View Reference Coordinates”.
  • the recZ value 1180z is proportional to the inverse of the depth coordinate in the vrc.
  • FIG. 11 shows the relationship between their geometrical parameters and a triangle to be displayed.
  • For a point (xv, yv, zv) 1110 within a triangle on the vrc 1110c, there are geometrical parameters which require correction for the perspective projection, i.e., depth coordinates 1180z, texture coordinates 1120, light source intensity attenuation rate 1160, and regular model coordinates 11B0.
  • the point (x v , y v , z v ) 1110 in the triangle on the vrc 1110 c is mapped onto a view plane by perspective projection to become a point (x r , y r , z r ) 1180 on the rzc 1180 c .
  • the mapped point on the rzc 1180c becomes a point (x, y, z) 1170 on the pdc 1170c on the display 4000.
  • FIGS. 12A, 12B and 12C show the relationship among the above-mentioned three kinds of coordinate systems, vrc 1110c, rzc 1180c and pdc 1170c.
  • In the vrc 1110c, the viewpoint is at the origin, and a figure model to be displayed is defined in this coordinate system.
  • the rzc 1180 c is the coordinate system obtained by subjecting the vrc 1110 c to perspective projection which produces a sight effect similar to that produced by a human sight system.
  • the relation between the rzc and the vrc is represented with the left side equations in FIG. 12C.
  • the figure delineated onto the rzc 1180 c is converted to the one on the pdc 1170 c , which is then displayed on the display 4000 .
  • the relation between the pdc and the rzc is represented with the right side equations in FIG. 12C.
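  • The FIG. 12C equations themselves are not reproduced in the text above. One common convention that is consistent with the surrounding description (recZ proportional to the inverse of the vrc depth, the factor -near/recZ used in the correction of FIG. 7, and a linear pdc-rzc relation) is sketched below; here near is the viewpoint-front clipping plane distance and a_x, a_y, b_x, b_y are viewport scale and offset constants. These forms are an assumption for illustration, not a quotation of the patent:

$$x_r = \frac{-\mathrm{near} \cdot x_v}{z_v}, \qquad y_r = \frac{-\mathrm{near} \cdot y_v}{z_v}, \qquad \mathrm{recZ} = \frac{-\mathrm{near}}{z_v}$$

$$x = a_x \, x_r + b_x, \qquad y = a_y \, y_r + b_y$$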
  • FIG. 13 shows interpolation expressions for a depth coordinate 2521 (e) and an s-component 2522 s(h) of texture coordinates as typical interpolation expressions to be processed in the correction unit 2520.
  • the interpolated depth coordinates recZ 2520 ( 1 ) in the rzc 1180 c are given as:
  • (x r , y r ) is the rzc 1180 of a point to be interpolated
  • (recZx, recZy, recZc) denote depth coordinate interpolation coefficients 2321 ( 1 ) calculated by the interpolation coefficient calculation unit 2320 .
  • the texture coordinates 1120 have two components (s, t).
  • the interpolated coordinates s 2520 ( 2 )s of its s-component are given as:
  • (x r , y r ) is the rzc 1180 of a point to be interpolated
  • (S x , S y , S z ) are texture coordinate s-component interpolation coefficients 2322 ( 1 )s calculated by the interpolation coefficient calculation unit 2320
  • z v is the depth vrc 1110 of the point to be interpolated and is obtained from the interpolated depth coordinate recZ 2520 ( 1 ) in the rzc 1180 c .
  • the rzc 1180 of the point to be interpolated may be calculated from the pdc 1170 in accordance with a linear relation expression. Interpolation expressions for other geometrical parameters may be obtained in a manner similar to that in which the interpolation expression for the s component of the texture coordinates 1120 is done.
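  • The interpolation expressions of FIG. 13 are not reproduced above, but their form follows from the multiply-add datapaths of FIGS. 6 and 7. A reconstruction consistent with that description (offered as a reading aid, not as a quotation of the patent) is:

$$\mathrm{recZ} = \mathrm{recZ}_x \cdot x_r + \mathrm{recZ}_y \cdot y_r + \mathrm{recZ}_c$$

$$s = \left( S_x \cdot x_r + S_y \cdot y_r + S_z \right) \cdot \frac{-\mathrm{near}}{\mathrm{recZ}}$$

where the factor -near/recZ corresponds to the depth z_v in the vrc recovered from the interpolated recZ. Once recZ is known, both expressions involve only multiplications and additions, which is the property stated earlier for the interpolation expression.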
  • FIG. 14 shows a luminance calculation expression to be processed by the luminance calculation unit 2510 .
  • the respective color components C of a pixel are represented below by the sum of the color Ca of a figure illuminated by ambient light present in a natural world, the color Ce of emission light radiated by the figure itself, the color Cai of an ambient reflection light component from the figure illuminated by the light source, the color Cdi of a diffusive reflection light component from the figure illuminated by the light source, and the color Csi of a specular reflection light component from the figure illuminated by the light source:
  • C is composed of three components R, G and B, i is a light source number.
  • Cai, Cdi and Csi are obtained as:
  • (−Ldiri·Li), (N·Li) and (V·Ri) are each an inner product; Ka, Kd and Ks are each a reflection coefficient of a material; La, Ld and Ls are each the color of the light source; Ctex is the texture color; Latt is the light source intensity attenuation rate 1160; Ldir is the light source vector 11A0; L is the light source direction vector 1140; N is the normal vector 1130; V is the viewpoint direction vector 1190; R is the light source reflection vector 1150; Lconc is the spot light source intensity index; and Sconc is the specular reflection index.
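  • The FIG. 14 expressions themselves are not reproduced above. One plausible form consistent with the factors listed (the exact grouping of the texture color, attenuation and spot terms is an assumption made here for illustration) is:

$$C = C_a + C_e + \sum_i \left( C_{ai} + C_{di} + C_{si} \right)$$

$$C_{ai} = L_{att} \, (-L_{dir\,i} \cdot L_i)^{L_{conc}} \, K_a \, L_a \, C_{tex}$$

$$C_{di} = L_{att} \, (-L_{dir\,i} \cdot L_i)^{L_{conc}} \, (N \cdot L_i) \, K_d \, L_d \, C_{tex}$$

$$C_{si} = L_{att} \, (-L_{dir\,i} \cdot L_i)^{L_{conc}} \, (V \cdot R_i)^{S_{conc}} \, K_s \, L_s$$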
  • FIG. 15 is a graph 3210 of the relationship between depth from a viewpoint for a triangle to be displayed and the inverse of the depth value with z and 1/z values as parameters.
  • the horizontal axis of this graph represents a z value as the depth and the vertical axis the inverse of z (i.e., 1/z).
  • FIG. 16 shows several z values and corresponding 1/z values.
  • a correspondence table 3240 for z and 1/z values shows the z values obtained when 1/z values from 0.1 to 1.0 are taken at equal intervals.
  • a z-value number line 3220 and a 1/z-value number line 3230 represent the z-1/z value correspondence table 3240 graphically.
  • FIG. 17 shows the relationship between the case in which z values are stored in the depth buffer 3200 and the case in which 1/z values are stored in the depth buffer 3200 .
  • a z-value storage depth buffer 3250 corresponds to the case in which the depth buffer 3200 stores z values from 1 mm to 10^6 mm (1000 m) with a resolution of 1 mm.
  • In that case, the depth buffer 3200 must be able to distinguish at least 10^6 steps. It is also obvious that a difference of 1 mm is far more significant between 1 m + 1 mm and 1 m + 2 mm than between 100 m + 1 mm and 100 m + 2 mm.
  • the 1/z value storage depth buffer 3260 corresponds to the case in which 1/z values are stored in the depth buffer 3200 to improve the resolution in an area near the viewpoint. It is obvious from FIGS. 15 and 16 that the resolution near the viewpoint is improved when 1/z values are stored in the depth buffer 3200. When 1/z values are stored, dividing the depth buffer 3200 into 10^3 steps suffices to assure a resolution of 1 mm for depths of not more than 10^3 mm (1 m) from the viewpoint.
  • storage of 1/z values in the depth buffer 3200 only requires 16 bits/pixel whereas storage of z values in the depth buffer 3200 requires 24 bits/pixel.
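  • The resolution argument of FIGS. 15-17 can be checked numerically. The sketch below (an illustration added here, not part of the patent) quantizes the depth range 1 mm to 10^6 mm with a fixed number of steps, once linearly in z and once linearly in 1/z, and prints the depth separation that one quantization step can resolve at several distances from the viewpoint; storing 1/z concentrates the resolution near the viewpoint.

```python
# Illustrative sketch: depth resolution of a uniformly quantized depth buffer
# when z is stored directly versus when 1/z is stored.
Z_NEAR = 1.0      # mm, nearest stored depth
Z_FAR = 1.0e6     # mm (1000 m), farthest stored depth


def step_storing_z(z, steps):
    """Depth separation of one quantization step when z is stored linearly
    (independent of z)."""
    return (Z_FAR - Z_NEAR) / steps


def step_storing_inv_z(z, steps):
    """Approximate depth separation of one quantization step at depth z when
    1/z is stored linearly: d(1/z) = -dz / z**2, so dz ~= z**2 * d(1/z)."""
    inv_z_step = (1.0 / Z_NEAR - 1.0 / Z_FAR) / steps
    return z * z * inv_z_step


if __name__ == "__main__":
    for bits in (16, 24):
        steps = 2 ** bits
        for z in (10.0, 1.0e3, 1.0e6):   # 1 cm, 1 m and 1000 m from the viewpoint
            print(f"{bits}-bit buffer, z = {z:>9.0f} mm: "
                  f"z storage resolves {step_storing_z(z, steps):12.6f} mm, "
                  f"1/z storage resolves {step_storing_inv_z(z, steps):12.6f} mm")
```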
  • the plane slope element coefficient calculation unit 2310 calculates, as a plane slope element coefficient 2310 ( 1 ), the inverse matrix of a matrix of the vrc 1110 for the given three vertexes. Let the respective vertexes of the triangle 1100 be v0, v1 and v2; let the corresponding pdc 1170 be (x0, y0, z0), (x1, y1, z1) and (x2, y2, z2); let the rzc 1180 be (xr0, yr0, zr0), (xr1, yr1, zr1) and (xr2, yr2, zr2); and let the vrc 1110 be (xv0, yv0, zv0), (xv1, yv1, zv1) and (xv2, yv2, zv2). The plane slope element coefficient 2310 ( 1 ) is then given by Eq. 2 below.
  • $$M^{-1} = \begin{bmatrix} x_{v0} & x_{v1} & x_{v2} \\ y_{v0} & y_{v1} & y_{v2} \\ z_{v0} & z_{v1} & z_{v2} \end{bmatrix}^{-1} \qquad \text{(Eq. 2)}$$
  • the plane slope element coefficient 2310 ( 1 ) implies the plane slope of the triangle defined in the three-dimensional space and is usable in common in a plurality of geometrical parameters.
  • the plane slope element coefficients 2310 ( 1 ), interpolation coefficients whose calculating procedures will be described below, and an interpolation expression composed of the interpolation coefficients are usable in common in the triangle.
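  • A minimal sketch of Eq. 2 in code form (the function name is illustrative, not from the patent): the plane slope element coefficient is simply the inverse of the 3-by-3 matrix whose columns are the vrc coordinates of the three vertexes, and it can then be reused for every geometrical parameter of the triangle.

```python
import numpy as np


def plane_slope_element_coefficient(v0, v1, v2):
    """Eq. 2: v0, v1, v2 are the (x_v, y_v, z_v) vrc coordinates of the
    triangle's vertexes; returns M^-1, the plane slope element coefficient."""
    M = np.array([[v0[0], v1[0], v2[0]],
                  [v0[1], v1[1], v2[1]],
                  [v0[2], v1[2], v2[2]]], dtype=float)
    # The inverse exists as long as the triangle is non-degenerate and its
    # plane does not pass through the viewpoint (the origin of the vrc).
    return np.linalg.inv(M)
```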
  • the depth coordinate interpolation coefficient calculation unit 2321 calculates a depth coordinate interpolation coefficient 2321 ( 1 ) based on the plane slope element coefficient 2310 ( 1 ) while the texture coordinate interpolation coefficient calculation unit 2322 calculates a texture coordinate interpolation coefficient 2322 ( 1 ) based on the information given above and the plane slope element coefficient 2310 ( 1 ).
  • the texture coordinate interpolation coefficients 2322 ( 1 ) will be described next.
  • the texture coordinates are corrected in the three-dimensional space. If a matrix B which satisfies Eq. 6 below is calculated, it becomes the texture coordinate interpolation coefficient 2322 ( 1 ) represented by Eq. 7 below.
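  • Eqs. 6 and 7 are not reproduced in the text above. A sketch consistent with the FIG. 5 description (the exact layout of the texture coordinate matrix is an assumption made here): the interpolation coefficients are obtained by multiplying the per-vertex texture coordinates by the plane slope element coefficient M^-1, one row per texture component.

```python
import numpy as np


def texture_interpolation_coefficients(tex0, tex1, tex2, m_inv):
    """tex0..tex2: (s, t) texture coordinates at vertexes v0..v2.
    m_inv: plane slope element coefficient from Eq. 2.
    Returns ((Sx, Sy, Sz), (Tx, Ty, Tz)) for the s- and t-components."""
    T = np.array([[tex0[0], tex1[0], tex2[0]],
                  [tex0[1], tex1[1], tex2[1]]], dtype=float)
    B = T @ m_inv        # 2x3; each row interpolates one texture component
    return tuple(B[0]), tuple(B[1])
```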
  • the depth coordinates 2520 ( 1 ) obtained after correction are calculated from coordinates produced by the pixel address calculation unit 2400 and to be corrected by perspective projection and the depth coordinate interpolation coefficient 2321 ( 1 ) calculated before in accordance with the next interpolation expression:
  • the texture coordinate 2520 ( 2 )s obtained after correction for the texture s-component is calculated from the coordinates produced by the pixel address calculation unit 2400 and to be corrected by perspective projection, the above calculated texture coordinate interpolation coefficient 2322 ( 1 ), and the corrected depth coordinate 2520 ( 1 ) in accordance with the next interpolation expression:
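  • The two interpolation expressions referred to above are not reproduced in the text. A per-pixel sketch consistent with the FIG. 6 and FIG. 7 datapaths (the function name is illustrative, not the patent's) is:

```python
def correct_pixel(xr, yr, depth_coeffs, s_coeffs, near):
    """xr, yr: coordinates of the pixel to be corrected.
    depth_coeffs: (recZx, recZy, recZc), the depth coordinate interpolation
    coefficient 2321(1).
    s_coeffs: (Sx, Sy, Sz), the texture coordinate s-component interpolation
    coefficient 2322(1)s."""
    recZx, recZy, recZc = depth_coeffs
    Sx, Sy, Sz = s_coeffs
    recZ = recZx * xr + recZy * yr + recZc   # corrected depth coordinate 2520(1)
    zv = -near / recZ                         # vrc depth recovered from recZ
    s = (Sx * xr + Sy * yr + Sz) * zv         # corrected texture s-component 2520(2)s
    return recZ, s
```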
  • a plurality of geometrical parameters other than the depth coordinates and texture coordinates 1120 may be processed in a manner similar to that used for the texture coordinates 1120 to achieve perspective correction. Its structure is already shown in FIG. 8.
  • luminance calculation is performed on the basis of the corrected geometrical parameters in the luminance calculation unit 2510 .
  • a method of making the luminance calculation is already described above.
  • the depth coordinates stored in the depth buffer 3200 are compared with the corrected depth coordinates 2520 ( 1 ). When the conditions are satisfied, the corrected depth coordinates 2520 ( 1 ) are transferred to the depth buffer 3200 .
  • Depending on the result of the depth comparison, the display controller 2600 writes the color data into the frame buffer 3100 and displays the color data on the display 4000, on the basis of the information from the luminance calculation unit 2510, the depth comparator 2530, and the frame buffer 3100. When this series of processes is performed for all the points within the triangle, the processing for the triangle is completed.
  • Since the perspective projection calculation device includes the plane slope element coefficient calculation unit, which calculates coefficients that imply a plane slope of a triangle defined in the three-dimensional space and that are usable in common in a plurality of geometrical parameters to be interpolated, the interpolation coefficient calculation unit, which calculates interpolation coefficients from the plane slope element coefficients obtained in the plane slope element coefficient calculation unit, and the correction unit, which makes accurate perspective corrections using the interpolation coefficients obtained in the interpolation coefficient calculation unit, the device is capable of making accurate perspective corrections rapidly for each plane while avoiding an increase in the number of dividing operations.

Abstract

A perspective projection calculation device making a perspective correction accurately and rapidly in each plane while avoiding an increase in the number of dividing operations. The perspective projection calculation device comprises at least one plane slope element coefficient calculation unit for calculating a coefficient which implies a plane slope element of the triangle defined in the three-dimensional space usable in common in a plurality of geometrical parameters to be interpolated, at least one interpolation coefficient calculation unit for calculating an interpolation coefficient from the plane slope element coefficient calculated by the plane slope element coefficient calculation unit, and at least one correction unit for making a perspective correction, using the interpolation coefficient obtained in the interpolation coefficient calculation unit.

Description

    BACKGROUND OF THE INVENTION
  • The present invention relates to figure generating systems for image processors, and more particularly to a perspective projection calculation device and method for correcting geometrical parameters of a perspectively projected three-dimensional figure. [0001]
  • When a perspectively projected figure, for example a triangle, is shaded, linear interpolation is generally performed for each span, using the respective vertex coordinates of a perspectively projected triangle, and geometrical parameters necessary and sufficient for shading are approximately calculated for the respective points within the perspectively projected triangle. [0002]
  • In order to prevent the realism produced by the perspective projection from being impaired, secondary interpolation is performed for each span, using the vertex coordinates of the perspectively projected triangle, and geometrical parameters necessary and sufficient for shading are approximately calculated for the respective points within the perspectively projected triangle. [0003]
  • For example, JP-A-3-198172 discloses a method of calculating geometrical parameters necessary and sufficient for shading on a plane figure in a three-dimensional space without interpolation for each span, but no specified method of calculating interpolation coefficients used for the interpolation. [0004]
  • A known method of interpolation for each plane is disclosed in Juan Pineda: “A Parallel Algorithm for Polygon Rasterization”, Computer Graphics, Vol. 22, No. 4, August 1988, pp. 17-20. However, this method does not refer to processing of a perspectively projected figure. [0005]
  • In the above prior art, when interpolation coefficients necessary for interpolation are calculated for each span interpolation, calculations including division are required for each span. In addition, when geometrical parameters to be interpolated are different even in the interpolation for the same span, calculations including division for the interpolation coefficients are required for the respective parameters. [0006]
  • The effects of the perspective projection in the prior art are inaccurate and approximate. [0007]
  • SUMMARY OF THE INVENTION
  • It is therefore an object of the present invention to provide a perspective projection calculation device which is capable of reducing the number of times of division required for shading, and rapidly making an accurate correction on perspective projection for each plane. [0008]
  • It is another object of the present invention to provide a perspective projection calculation method which is capable of reducing the number of times of division required for shading, and rapidly making an accurate correction on perspective projection for each plane. [0009]
  • In order to achieve the above objects, the present invention provides a perspective projection calculation device in an image processor for perspectively projecting a triangle defined in a three-dimensional space onto a two-dimensional space and for shading the triangle in the two-dimensional space, comprising: [0010]
  • at least one plane slope element coefficient calculating means for calculating a coefficient which implies a plane slope element of the triangle defined in the three-dimensional space; [0011]
  • at least one interpolation coefficient calculating means for calculating an interpolation coefficient from the plane slope element coefficient calculated by the plane slope element coefficient calculating means; and [0012]
  • at least one correcting means for making a perspective correction, using the interpolation coefficient. [0013]
  • The plane slope element coefficient may be used in common in all parameters to be interpolated. [0014]
  • An inverse matrix of a matrix of vertex coordinates of the triangle defined in the three-dimensional space may be used as the plane slope element coefficient. [0015]
  • The plane slope element coefficient, the interpolation coefficient and/or an interpolation expression including the interpolation coefficient may be used in common in the triangle defined in the three-dimensional space and/or in a perspectively projected triangle in a two-dimensional space. [0016]
  • The interpolation expression may involve only multiplication and/or addition. [0017]
  • The geometrical parameters may be interpolated in a three-dimensional space. [0018]
  • The interpolation expression may include a term involving a depth. More specifically, it may use the inverse of depth coordinates as geometrical parameters. [0019]
  • The interpolation expression may interpolate the geometrical parameters while maintaining the linearity thereof on a plane. [0020]
  • In order to achieve the above objects, the present invention provides a perspective projection calculation device in an image processor which includes at least one display, at least one frame buffer for storing an image to be displayed on the display, and at least one figure generator for generating a figure which composes the image on the frame buffer, thereby making a perspective correction on the respective pixels of the figure, [0021]
  • wherein coefficients necessary and sufficient for perspective projection calculation are used as an interface to the figure generator. [0022]
  • In order to achieve the above objects, the present invention provides a perspective projection calculation device in an image processor which includes a depth buffer which stores data on a depth from a viewpoint for a plane to be displayed and which removes a hidden surface, the image processor perspectively projecting a triangle defined in a three-dimensional space onto a two-dimensional space and shading the triangle in the two-dimensional space, [0023]
  • wherein the depth buffer comprises a buffer for storing a non-linear value in correspondence to the distance from the viewpoint. [0024]
  • The depth buffer may comprise a buffer for storing a non-linear value representing a resolution which increases toward the viewpoint in place of the depth value. More specifically, the depth buffer may comprise a buffer for storing the inverse of a depth value in place of the depth value. [0025]
  • In order to achieve the other object, the present invention provides a perspective projection calculation method in an image processing method for perspectively projecting a triangle defined in a three-dimensional space onto a two-dimensional space and for shading the triangle in the two-dimensional space, comprising the steps of: [0026]
  • calculating a coefficient which implies a plane slope element of the triangle defined in the three-dimensional space; [0027]
  • calculating an interpolation coefficient from the plane slope element coefficient; and [0028]
  • making a perspective correction, using the interpolation coefficient. [0029]
  • The plane slope element coefficient may be used in common in all parameters to be interpolated. [0030]
  • An inverse matrix of a matrix of vertex coordinates of the triangle defined in the three-dimensional space may be used as the plane slope element coefficient. [0031]
  • In any perspective projection calculation device, the plane slope element coefficient, the interpolation coefficient and/or an interpolation expression including the interpolation coefficient may be used in common in the triangle defined in the three-dimensional space and/or in a perspectively projected triangle in a two-dimensional space. [0032]
  • The interpolation expression may involve only multiplication and/or addition. [0033]
  • The geometrical parameters may be interpolated in a three-dimensional space. [0034]
  • The interpolation expression may include a term involving a depth. More specifically, the interpolation expression may use the inverses of depth coordinates as the geometrical parameters. [0035]
  • The interpolation calculation expression may interpolate the geometrical parameters while maintaining the linearity thereof on a plane. [0036]
  • In order to achieve the other object, the present invention provides a perspective projection calculation method in an image processing method which uses a depth buffer which stores data on a depth from a viewpoint for a plane to be displayed and which removes a hidden surface, a triangle defined in a three-dimensional space being perspectively projected onto a two-dimensional space and the triangle being shaded in the two-dimensional space, comprising the step of: [0037]
  • storing a non-linear value in the depth buffer in correspondence to the distance from the viewpoint. [0038]
  • The depth buffer may store a non-linear value representing a resolution which increases toward the viewpoint in place of the depth value. More specifically, the depth buffer may store the inverse of a depth value in place of the depth value. [0039]
  • In the present invention, a plane slope element coefficient in a three-dimensional space usable in common in a plurality of geometrical parameters necessary for shading a perspectively projected figure is used to reduce the number of dividing operations, and the plurality of geometrical parameters is corrected for each plane in the three-dimensional space. Thus, correct perspective correction is made rapidly. [0040]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a block diagram of an illustrative image processing system which employs one embodiment of a perspective projection calculation device according to the present invention; [0041]
  • FIG. 2 is a block diagram of an illustrative perspective correction unit; [0042]
  • FIG. 3 is a block diagram of an illustrative plane slope element coefficient calculation unit which is a component of the perspective correction calculation unit; [0043]
  • FIG. 4 is a block diagram of a depth coordinate interpolation coefficient calculation unit which is a component of the interpolation coefficient calculation unit; [0044]
  • FIG. 5 is a block diagram of a texture coordinate interpolation coefficient calculation unit which is a component of the interpolation coefficient calculation unit; [0045]
  • FIG. 6 is a block diagram of a depth coordinate correction unit which is a component of a correction unit; [0046]
  • FIG. 7 is a block diagram of an illustrative texture coordinate s-component correction unit for an s-component of a texture coordinate correction unit as a component of the correction unit; [0047]
  • FIG. 8 is a block diagram of a modification of the perspective correction calculation unit of FIG. 2; [0048]
  • FIG. 9 is a block diagram of an illustrative luminance calculation unit which cooperates with a pixel address calculation unit and the correction unit to compose a figure generator; [0049]
  • FIGS. 10A and 10B show a triangle displayed on a display; [0050]
  • FIG. 11 shows the relationship between geometrical parameters and the corresponding triangles to be displayed; [0051]
  • FIGS. 12A, 12B and 12C show the relationship among the three kinds of coordinate systems, vrc 1110c, rzc 1180c and pdc 1170c; [0052]
  • FIG. 13 illustrates interpolation expressions for a depth coordinate and an s-component of texture coordinates as typical interpolation expressions to be processed in the correction unit; [0053]
  • FIG. 14 shows a luminance calculation expression to be processed in the luminance calculation unit; [0054]
  • FIG. 15 is a graph of the relationship between the depth from a viewpoint of a triangle to be displayed and its inverse with z and 1/z values as parameters; [0055]
  • FIG. 16 shows z values for several 1/z values; and [0056]
  • FIG. 17 shows the relationship between a case in which a z value is stored in a depth buffer and a case in which a 1/z value is stored in the depth buffer. [0057]
  • DESCRIPTION OF THE PREFERRED EMBODIMENT
  • Referring to FIGS. 1-17, a preferred embodiment of a perspective projection calculation device and method according to the present invention will be described next. [0058]
  • FIG. 1 is a block diagram of an illustrative image processing system which employs one embodiment of a perspective projection calculation device according to the present invention. The system is composed of a figure vertex information inputting unit 1000, an image processor 2000, a memory module 3000 and a display 4000. The image processor 2000 is composed of a perspective correction calculation unit 2100, and a figure generator 2200. The memory module 3000 is composed of a frame buffer 3100 and a depth buffer 3200. [0059]
  • FIG. 2 is a block diagram of the perspective correction calculation unit 2100, which makes a correction on the perspective projection and is composed of a plane slope element coefficient calculation unit 2310, an interpolation coefficient calculation unit 2320, a pixel address calculation unit 2400, and a correction unit 2520. [0060]
  • The interpolation coefficient calculation unit 2320 includes a depth coordinate interpolation coefficient calculation unit 2321 and a texture coordinate interpolation coefficient calculation unit 2322. The correction unit 2520 includes a depth coordinate correction unit 2521 and a texture coordinate correction unit 2522. [0061]
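  • A schematic sketch of the dataflow just described (class and method names mirror the unit names in the text but are illustrative, not the patent's implementation): the plane slope element coefficient is computed once per triangle, the interpolation coefficients are derived from it, and the correction unit then processes every pixel address.

```python
class PerspectiveCorrectionCalculationUnit:        # unit 2100
    """Illustrative sketch of the structure of FIG. 2."""

    def __init__(self, plane_slope_unit, interp_coeff_unit,
                 pixel_address_unit, correction_unit):
        self.plane_slope_unit = plane_slope_unit      # 2310: plane slope element coefficients
        self.interp_coeff_unit = interp_coeff_unit    # 2320: depth (2321) and texture (2322) coefficients
        self.pixel_address_unit = pixel_address_unit  # 2400: pixel addresses inside the triangle
        self.correction_unit = correction_unit        # 2520: depth (2521) and texture (2522) correction

    def process_triangle(self, vertex_info):
        m_inv = self.plane_slope_unit.coefficients(vertex_info)            # 2310(1)
        coeffs = self.interp_coeff_unit.coefficients(vertex_info, m_inv)   # 2321(1), 2322(1)
        for address in self.pixel_address_unit.addresses(vertex_info):
            yield self.correction_unit.correct(address, coeffs)            # 2520(1), 2520(2)
```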
  • FIG. 3 is a block diagram of the plane slope element coefficient calculation unit 2310 which is a component of the perspective correction calculation unit 2100. The plane slope element coefficient calculation unit 2310 receives figure vertex information from the figure vertex information inputting unit 1000. A vertex coordinate matrix composition unit 2310(a) composes a matrix from the figure vertex information, i.e., the vertex coordinates (v0 vertex coordinates 1111, v1 vertex coordinates 1112, v2 vertex coordinates 1113) of a triangle defined in a three-dimensional space. An inverse matrix calculation unit 2310(b) calculates the inverse of that matrix, i.e., the plane slope element coefficients 2310(1). [0062]
  • FIG. 4 is a block diagram of an illustrative depth coordinate interpolation coefficient calculation unit 2321 which is a component of the interpolation coefficient calculation unit 2320. A depth coordinate matrix composition unit 2321(a) composes a matrix of viewpoint-front clipping plane distances. A multiplier 2321(c) multiplies a plane slope element coefficient 2310(1) calculated by the plane slope element coefficient calculation unit 2310 by the matrix of viewpoint-front clipping plane distances composed by the depth coordinate matrix composition unit 2321(a). A multiplier 2321(d) multiplies the output from the multiplier 2321(c) by a sign conversion matrix 2321(b) to calculate depth coordinate interpolation coefficients 2321(1). The sign conversion matrix 2321(b) implies conversion from a viewpoint coordinate system 1110 c to a recZ coordinate system 1180 c. [0063]
  • FIG. 5 is a block diagram of an illustrative texture coordinate interpolation coefficient calculation unit 2322 which is a component of the interpolation coefficient calculation unit 2320. The texture coordinate interpolation coefficient calculation unit 2322 receives figure vertex information from the figure vertex information inputting unit 1000. A texture coordinate matrix composition unit 2322(a) composes a matrix from the figure vertex information, i.e., the vertex texture coordinates (v0 vertex texture coordinates 1121, v1 vertex texture coordinates 1122, v2 vertex texture coordinates 1123) of a triangle defined in a three-dimensional space. A multiplier 2322(b) multiplies data on the matrix from the texture coordinate matrix composition unit 2322(a) by a plane slope element coefficient 2310(1) calculated by the plane slope element coefficient calculation unit 2310 to calculate texture coordinate interpolation coefficients 2322(1), which are composed of texture coordinate s-component interpolation coefficients and texture coordinate t-component interpolation coefficients. [0064]
  • FIG. 6 is a block diagram of the depth coordinate correction unit 2521 which is a component of the correction unit 2520. A multiplier 2521(a) multiplies an x-component of a depth coordinate interpolation coefficient 2321(1) by an x-component of an address generated by the pixel address calculation unit 2400. An adder 2521(c) adds the output from the multiplier 2521(a) and a constant component of the depth coordinate interpolation coefficient 2321(1). A multiplier 2521(b) multiplies a y-component of the depth coordinate interpolation coefficient 2321(1) by a y-component of the address generated by the pixel address calculation unit 2400. Last, an adder 2521(d) adds the output from the adder 2521(c) and the output from the multiplier 2521(b) to calculate corrected depth coordinates 2520(1). [0065]
  • FIG. 7 is a block diagram of an illustrative texture coordinate s-component correction unit 2522s which handles the s-component of the texture coordinate correction unit 2522, which is a component of the correction unit 2520. A multiplier 2522s(a) of the texture coordinate s-component correction unit 2522s multiplies an x-component of an address generated from the pixel address calculation unit 2400 by a value of -near/recZ calculated from the corrected depth coordinates 2520(1). A multiplier 2522s(b) multiplies the output from the multiplier 2522s(a) by an x-component of the texture coordinate s-component interpolation coefficient 2322(1)s. A multiplier 2522s(c) multiplies a y-component of the address generated from the pixel address calculation unit 2400 by the value of -near/recZ calculated from the corrected depth coordinates 2520(1). A multiplier 2522s(e) multiplies the output from the multiplier 2522s(c) by a y-component of the texture coordinate s-component interpolation coefficient 2322(1)s. A multiplier 2522s(d) multiplies a z-component of the texture coordinate s-component interpolation coefficient 2322(1)s by the value of -near/recZ calculated from the corrected depth coordinates 2520(1). An adder 2522s(f) adds the outputs from the multipliers 2522s(b) and 2522s(d). Last, an adder 2522s(g) adds the outputs from the multiplier 2522s(e) and the adder 2522s(f) to calculate corrected texture s-component coordinates 2520(2)s. [0066]
  • FIG. 8 is a block diagram of a modification of the perspective correction calculation unit 2100 of FIG. 2. The modification is arranged so as to handle geometrical parameters including vertex light source intensities whose linearities are maintained on a plane defined in a three-dimensional space, in addition to the depth coordinates and vertex texture coordinates. Geometrical parameters whose linearities are maintained on a plane defined in a three-dimensional space are correctable with respect to perspective projection in a manner similar to that in which the depth coordinates and vertex texture coordinates are corrected. [0067]
  • In this case, perspective correction is made on a light source intensity attenuation rate necessary for luminance calculation, using a similar structure to that of FIG. 8. Perspective correction is made on a normal vector 1130, a light source direction vector 1140, a viewpoint direction vector 1190, and a light source reflection vector 1150 in a three-dimensional space where linearity of parameters on a plane is maintained. The space is referred to as a (u, v, w) space and its coordinate system is referred to as a normalized model coordinate system. [0068]
  • The procedures for processing the vertex coordinates and vertex texture coordinates of FIG. 8 are the same as those employed in FIG. 2. The light source intensity attenuation rate and vertex regular model coordinates which are modified geometrical parameters will be processed in a manner similar to that in which the vertex texture coordinates of FIG. 2 will be done. [0069]
  • In addition, in FIG. 8, perspective correction is made on geometrical parameters such as normals, depth coordinates, vertex texture coordinates, light source, and viewpoint necessary for calculation of luminance, using a structure similar to that of FIG. 2; more specifically, a light source intensity attenuation rate 1160 determined by the positional relationship between the light source and respective points in the figure, a normal vector 1130 indicative of the direction of the plane, a light source direction vector 1140 indicative of the direction of the light source, a viewpoint direction vector 1190 indicative of the direction of the viewpoint, and a light source reflection vector 1150 indicative of the reflecting direction of the light source. Perspective correction is made on the normal vector 1130, light source direction vector 1140, viewpoint direction vector 1190 and light source reflection vector 1150 in a three-dimensional space in which linearity of parameters on a plane is maintained. The space and the coordinate system are referred to as a (u, v, w) space and a normalized model coordinate system, respectively. In the actual perspective correction, those vectors are converted into values (u, v, w) in the normalized model coordinate system, in which they can be linearly interpolated, and these converted values are used. For example, the coordinate values, in the normalized model coordinate system, of the vertexes of a triangle defined in the three-dimensional space are referred to as vertex normalized model coordinates. [0070]
• FIG. 9 is a block diagram of an illustrative [0071] luminance calculation unit 2510 which cooperates with the pixel address calculation unit and the correction unit 2520 to compose the figure generator 2200. A texture color acquirement unit 2510(a) acquires color data C=(R, G, B), which involve the colors of the texture, on the basis of the corrected texture coordinates 2520(2). A light source-caused-attenuation-free ambient/diffusive/specular component luminance calculation unit 2510(b) calculates the luminances of the light source-caused-attenuation-free ambient/diffusive/specular components on the basis of the texture color and the corrected light source intensity attenuation rate. A spot angle attenuation rate calculation unit 2510(c) calculates an Lconc-th power of the inner product (−Ldir·Li) of a reverse light source vector 11A0 and the light source direction vector 1140. A light source incident angle illumination calculation unit 2510(d) calculates the inner product (L·N) of the normal vector 1130 and the light source direction vector 1140.
• A specular reflection attenuation rate calculation unit [0072] 2510(e) calculates an Sconc-th power of the inner product (V·R) of the viewpoint direction vector 1190 and the light source reflection vector 1150, where Lconc is a spot light source intensity index and Sconc is a specular reflection index. A multiplier 2510(f) multiplies the output from the light source-caused-attenuation-free ambient/diffusive/specular component luminance calculation unit 2510(b) by the output from the spot angle attenuation rate calculation unit 2510(c). A multiplier 2510(g) multiplies the output from the multiplier 2510(f) by the output from the light source incident angle illumination calculation unit 2510(d). A multiplier 2510(h) multiplies the output from the multiplier 2510(f) by the output from the specular reflection attenuation rate calculation unit 2510(e).
• The above series of processing steps is repeated once for each light source. A whole light source ambient component calculation unit [0073] 2510(j), a whole light source diffusive component calculation unit 2510(k), and a whole light source specular component calculation unit 2510(l) each sum the respective associated components over all the light sources. Last, a luminance synthesis adder 2510(o) adds the output from the luminance calculation unit 2510(i) for a natural field ambient component and an emission component, the output from the whole light source ambient component calculation unit 2510(j), the output from the whole light source diffusive component calculation unit 2510(k), and the output from the whole light source specular component calculation unit 2510(l) to calculate a pixel luminance 2510(1).
  • Referring to FIGS. [0074] 10-17, the operation of the perspective projection calculation device, thus constructed, will be described next.
• FIG. 10 shows a [0075] triangle 1100 displayed on the display 4000. The triangle is the one in a two-dimensional space onto which the corresponding triangle defined in the three-dimensional space is perspectively projected. FIG. 10B shows the pdc 1170, rzc 1180, vrc 1110, texture coordinates 1120, light source intensity attenuation rate 1160, and regular model coordinates 11B0 at the vertexes of the triangle 1100 displayed on the display 4000. The pdc 1170 denote the coordinates of the triangle displayed on the display 4000, the rzc 1180 denote the coordinates of the perspectively projected triangle, the vrc 1110 denote the coordinates of the triangle present before the perspective projection, the texture coordinates 1120 are those corresponding to a texture image mapped on the triangle, the light source intensity attenuation rates 1160 are scalar values determined depending on the positional relationship between the light source and respective points within the figure, and the regular model coordinates 11B0 are coordinate values in a regular model coordinate system, which is a space in which the normal vector 1130, light source direction vector 1140, viewpoint direction vector 1190, and light source reflection vector 1150 maintain their linearities on a plane in the three-dimensional space.
• The [0076] pdc 1170 c stands for "Physical Device Coordinates", the rzc 1180 c for "recZ Coordinates", and the vrc 1110 c for "View Reference Coordinates". The recZ 1180 z is proportional to the inverse of the depth coordinate on the vrc.
• FIG. 11 shows the relationship between these geometrical parameters and a triangle to be displayed. At a point (xv, yv, zv) 1110 within a triangle on the vrc 1110 c, there are geometrical parameters which require correction for the perspective projection, i.e., the depth coordinates 1180 z, texture coordinates 1120, light source intensity attenuation rate 1160, and regular model coordinates 11B0. [0077] The point (xv, yv, zv) 1110 in the triangle on the vrc 1110 c is mapped onto a view plane by perspective projection to become a point (xr, yr, zr) 1180 on the rzc 1180 c. The mapped point on the rzc 1180 c becomes a point (x, y, z) 1170 on the pdc 1170 c on the display 4000.
• FIGS. 12A, 12B and [0078] 12C show the relationship among the above-mentioned three kinds of coordinate systems, the vrc 1110 c, rzc 1180 c, and pdc 1170 c. For the vrc 1110 c, which represents the broken-lined volume with the equations in FIG. 12B, the viewpoint is at the origin. A figure model to be displayed is defined in this coordinate system. The rzc 1180 c is the coordinate system obtained by subjecting the vrc 1110 c to perspective projection, which produces a sight effect similar to that produced by the human visual system. The relation between the rzc and the vrc is represented by the left-side equations in FIG. 12C. The figure delineated on the rzc 1180 c is converted to the one on the pdc 1170 c, which is then displayed on the display 4000. The relation between the pdc and the rzc is represented by the right-side equations in FIG. 12C.
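• As a rough orientation, the two conversions can be sketched as below. The vrc-to-rzc step matches the perspective projection of Eq. 3 given later; the rzc-to-pdc step is written as a generic viewport mapping because the FIG. 12C equations themselves are not reproduced in the text, so that mapping (and the width/height parameters) is an assumption.

```python
def vrc_to_rzc(xv, yv, zv):
    # Perspective projection with the viewpoint at the origin and the view
    # plane at distance 1 (consistent with Eq. 3 later in the text).
    return xv / -zv, -yv / -zv

def rzc_to_pdc(xr, yr, width, height):
    # Assumed generic viewport mapping from the projected coordinates to
    # physical device coordinates on the display.
    return (xr + 1.0) * 0.5 * width, (1.0 - yr) * 0.5 * height
```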
• FIG. 13 shows interpolation expressions for a depth coordinate [0079] 2521(e) and an s-component 2522s(h) of texture coordinates as typical interpolation expressions to be processed in the correction unit 2520. The interpolated depth coordinates recZ 2520(1) in the rzc 1180 c are given as:
• recZ = recZx × xr + recZy × yr + recZc
  • where (x[0080] r, yr) is the rzc 1180 of a point to be interpolated, (recZx, recZy, recZc) denote depth coordinate interpolation coefficients 2321(1) calculated by the interpolation coefficient calculation unit 2320.
• The texture coordinates [0081] 1120 have two components (s, t). The interpolated coordinate s 2520(2)s of the s-component is given as:
• s = (Sx × xr + Sy × yr + Sz) × zv
  • where (x[0082] r, yr) is the rzc 1180 of a point to be interpolated, (Sx, Sy, Sz) are texture coordinate s-component interpolation coefficients 2322(1)s calculated by the interpolation coefficient calculation unit 2320, zv is the depth vrc 1110 of the point to be interpolated and is obtained from the interpolated depth coordinate recZ 2520(1) in the rzc 1180 c. The calculation expression in this embodiment is zv=-near/recZ where “near” is the distance between the front clipping plane and a viewpoint at the vrc 1110 c. The rzc 1180 of the point to be interpolated may be calculated from the pdc 1170 in accordance with a linear relation expression. Interpolation expressions for other geometrical parameters may be obtained in a manner similar to that in which the interpolation expression for the s component of the texture coordinates 1120 is done.
• FIG. 14 shows a luminance calculation expression to be processed by the [0083] luminance calculation unit 2510. The respective color components C of a pixel are represented below as the sum of the color Ca of a figure illuminated by ambient light present in the natural world, the color Ce of emission light radiated by the figure itself, the color Cai of an ambient reflection component from the figure illuminated by the light source, the color Cdi of a diffusive reflection light component from the figure illuminated by the light source, and the color Csi of the specular reflection light component from the figure illuminated by the light source:
• C = Ca + Ce + Σi (Cai + Cdi + Csi)
• where C is composed of three components R, G and B, and i is a light source number. Cai, Cdi and Csi are obtained as: [0084]
• Cai = Ka × Lai × Ctel × Latti × (−Ldiri·Li)^Lconc
• Cdi = Kd × Ldi × Ctel × Latti × (−Ldiri·Li)^Lconc × (N·Li)
• Csi = Ks × Lsi × Ctel × Latti × (−Ldiri·Li)^Lconc × (V·Ri)^Sconc
• where (−Ldiri·Li), (N·Li) and (V·Ri) are each an inner product; Ka, Kd and Ks are each a reflection coefficient of the material; La, Ld and Ls are each a color of the light source; Ctel is the texture color; Latt is the light source [0085] intensity attenuation rate 1160; Ldir is the light source vector 11A0; L is the light source direction vector 1140; N is the normal vector 1130; V is the viewpoint direction vector 1190; R is the light source reflection vector 1150; Lconc is the spot light source intensity index; and Sconc is the specular reflection index.
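• A minimal sketch of the FIG. 14 expression follows. The packaging of the per-light quantities into a list of dictionaries and the clamping of the inner products to non-negative values are assumptions added for the sketch; everything else mirrors the equations above, with the colors held as (R, G, B) arrays so the arithmetic applies per component.

```python
import numpy as np

def pixel_color(Ca, Ce, Ka, Kd, Ks, Ctel, Lconc, Sconc, N, V, lights):
    """C = Ca + Ce + sum over light sources i of (Cai + Cdi + Csi)."""
    C = np.asarray(Ca, dtype=float) + np.asarray(Ce, dtype=float)
    for lt in lights:  # each light: colors La/Ld/Ls, vectors Ldir/L/R, attenuation Latt
        spot = max(np.dot(-lt["Ldir"], lt["L"]), 0.0) ** Lconc    # (-Ldiri . Li)^Lconc
        common = Ctel * lt["Latt"] * spot
        Cai = Ka * lt["La"] * common
        Cdi = Kd * lt["Ld"] * common * max(np.dot(N, lt["L"]), 0.0)           # x (N . Li)
        Csi = Ks * lt["Ls"] * common * max(np.dot(V, lt["R"]), 0.0) ** Sconc  # x (V . Ri)^Sconc
        C = C + Cai + Cdi + Csi
    return C
```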
  • FIG. 15 is a [0086] graph 3210 of the relationship between depth from a viewpoint for a triangle to be displayed and the inverse of the depth value with z and 1/z values as parameters. The horizontal axis of this graph represents a z value as the depth and the vertical axis the inverse of z (i.e., 1/z).
• FIG. 16 shows several z values and the corresponding 1/z values. A correspondence table [0087] 3240 for z and 1/z values shows the z values obtained where 1/z values of from 0.1 to 1.0 are plotted at equal intervals. A z-value numerical straight line 3220 and a 1/z-value numerical straight line 3230 represent the z-1/z value correspondence table 3240 in the form of numerical straight lines.
• FIG. 17 shows the relationship between the case in which z values are stored in the [0088] depth buffer 3200 and the case in which 1/z values are stored in the depth buffer 3200. For example, consider a depth buffer 3250 which includes the depth buffer 3200 storing z values of from 1 mm to 10^6 mm (1000 m) with a resolution of 1 mm. In this case, it will be easily understood that the depth buffer 3200 must be divided into at least 10^6 steps. It is also obvious that the significance of 1 mm differs between the difference between 1 m+1 mm and 1 m+2 mm and the difference between 100 m+1 mm and 100 m+2 mm. More specifically, at a position near the viewpoint an accurate depth value is needed, whereas at a point remoter from the viewpoint a difference of 1 mm is less significant. The 1/z value storage depth buffer 3260 includes the depth buffer 3200 in which 1/z values are stored to improve the resolution in an area near the viewpoint. It is obvious from FIGS. 15 and 16 that the resolution in an area near the viewpoint is improved when 1/z values are stored in the depth buffer 3200. When 1/z values are stored in the depth buffer 3200, dividing the depth buffer 3200 into 10^3 steps suffices to assure a resolution of 1 mm for depth values of not more than 10^3 mm (1 m) from the viewpoint. This example indicates that when depth values of from 1 mm to 10^6 mm (1000 m) are stored in the depth buffer, storage of 1/z values advantageously reduces the size of the depth buffer to 1/1000 (= 10^3/10^6) of that required for simple storage of the z values. As a result, for example, storage of 1/z values in the depth buffer 3200 only requires 16 bits/pixel whereas storage of z values in the depth buffer 3200 requires 24 bits/pixel.
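• The non-uniform resolution that motivates storing 1/z can be seen with a short numerical check. The figures below (a 16-bit code range over depths of 1 mm to 10^6 mm) are illustrative assumptions, not the exact budget of the embodiment; the point is only that equally spaced 1/z codes are dense in z near the viewpoint and sparse far away.

```python
import numpy as np

near_mm, far_mm, codes = 1.0, 1.0e6, 2**16                 # assumed depth range and code count
inv_z = np.linspace(1.0 / far_mm, 1.0 / near_mm, codes)    # equally spaced 1/z codes
z = 1.0 / inv_z                                            # depths those codes represent
print("z spacing near the viewpoint:", z[-2] - z[-1], "mm")  # far finer than 1 mm
print("z spacing far from viewpoint:", z[0] - z[1], "mm")    # hundreds of meters
```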
  • Accurate perspective correction on the depth coordinates and [0089] texture coordinates 1120 will be described next on the basis of the description just mentioned above, by taking a triangle as an example of a figure. First, the figure vertex information inputting unit 1000 feeds the vrc 1110 of the triangle vertexes to the plane slope element coefficient calculation unit 2310 of the perspective correction calculation unit 2100, and texture coordinates 1120 corresponding to the triangle vertexes to the texture coordinate interpolation coefficient calculation unit 2322.
• The plane slope element [0090] coefficient calculation unit 2310 calculates, as a plane slope element coefficient 2310(1), the inverse matrix of a matrix of the vrc 1110 for the given three vertexes. Let the respective vertexes of the triangle 1100 be v0, v1 and v2; let the corresponding pdc 1170 be (x0, y0, z0), (x1, y1, z1) and (x2, y2, z2); let the rzc 1180 be (xr0, yr0, zr0), (xr1, yr1, zr1) and (xr2, yr2, zr2); and let the vrc 1110 be (xv0, yv0, zv0), (xv1, yv1, zv1) and (xv2, yv2, zv2). A matrix M of the vrc 1110 is given by Eq. 1 below. Since the plane slope element coefficient 2310(1) is the inverse matrix of M of Eq. 1, it is given as Eq. 2 below (matrix rows are separated by semicolons):

  M = [ xv0 xv1 xv2 ; yv0 yv1 yv2 ; zv0 zv1 zv2 ]   (Eq. 1)

  M^-1 = [ xv0 xv1 xv2 ; yv0 yv1 yv2 ; zv0 zv1 zv2 ]^-1   (Eq. 2)
• The plane slope element coefficient [0091] 2310(1) implies the plane slope of the triangle defined in the three-dimensional space and is usable in common for a plurality of geometrical parameters. The plane slope element coefficient 2310(1), the interpolation coefficients whose calculating procedures will be described below, and the interpolation expressions composed of the interpolation coefficients are usable in common over the whole triangle.
  • The depth coordinate interpolation [0092] coefficient calculation unit 2321 calculates a depth coordinate interpolation coefficient 2321(1) based on the plane slope element coefficient 2310(1) while the texture coordinate interpolation coefficient calculation unit 2322 calculates a texture coordinate interpolation coefficient 2322(1) based on the information given above and the plane slope element coefficient 2310(1).
• First, a process for calculating the depth coordinate interpolation coefficient [0093] 2321(1) will be described among the specified processes for calculating those interpolation coefficients. Assuming that the distance between the viewpoint and the view plane is 1 on the vrc, it will be seen that the perspective projection is represented by Eq. 3 below and that the perspectively projected coordinates are directly proportional to 1/zv. Now, recZ is defined as near/(-zv), where "near" is the distance between the viewpoint and the front clipping plane. Thus, if A which satisfies Eq. 4 below is obtained, that A is the depth coordinate interpolation coefficient 2321(1) represented by Eq. 5 below:

  xr = xv/(-zv), yr = -yv/(-zv)   (Eq. 3)

  recZ = A [ xr ; yr ; 1 ] = [ recZx recZy recZc ] [ xr ; yr ; 1 ]   (Eq. 4)

  A = [ recZx recZy recZc ]
    = [ recZ0 recZ1 recZ2 ] [ xr0 xr1 xr2 ; yr0 yr1 yr2 ; 1 1 1 ]^-1
    = [ -near/zv0 -near/zv1 -near/zv2 ] [ xr0 xr1 xr2 ; yr0 yr1 yr2 ; 1 1 1 ]^-1
    = [ near near near ] [ -1/zv0 0 0 ; 0 -1/zv1 0 ; 0 0 -1/zv2 ] [ xr0 xr1 xr2 ; yr0 yr1 yr2 ; 1 1 1 ]^-1
    = [ near near near ] [ -zv0 0 0 ; 0 -zv1 0 ; 0 0 -zv2 ]^-1 [ xr0 xr1 xr2 ; yr0 yr1 yr2 ; 1 1 1 ]^-1
    = [ near near near ] { [ xr0 xr1 xr2 ; yr0 yr1 yr2 ; 1 1 1 ] [ -zv0 0 0 ; 0 -zv1 0 ; 0 0 -zv2 ] }^-1
    = [ near near near ] [ -xr0·zv0 -xr1·zv1 -xr2·zv2 ; -yr0·zv0 -yr1·zv1 -yr2·zv2 ; -zv0 -zv1 -zv2 ]^-1
    = [ near near near ] [ xv0 xv1 xv2 ; -yv0 -yv1 -yv2 ; -zv0 -zv1 -zv2 ]^-1
    = [ near near near ] { [ 1 0 0 ; 0 -1 0 ; 0 0 -1 ] [ xv0 xv1 xv2 ; yv0 yv1 yv2 ; zv0 zv1 zv2 ] }^-1
    = [ near near near ] [ xv0 xv1 xv2 ; yv0 yv1 yv2 ; zv0 zv1 zv2 ]^-1 [ 1 0 0 ; 0 -1 0 ; 0 0 -1 ]   (Eq. 5)
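• As a numeric sanity check of Eq. 5, the small sketch below computes the plane slope element coefficient 2310(1) (the inverse of M) and the depth coordinate interpolation coefficient A with NumPy, then verifies at one vertex that the interpolation reproduces recZ = near/(-zv). The vertex coordinates and the value of near are made-up example data, not values from the embodiment.

```python
import numpy as np

# Assumed example data: vrc coordinates of the three vertexes as the columns of M
# (viewpoint at the origin, vertexes in front of the viewpoint, so zv < 0).
M = np.array([[ 1.0, -2.0,  0.5],    # xv0 xv1 xv2
              [ 0.0,  1.0, -1.0],    # yv0 yv1 yv2
              [-2.0, -4.0, -3.0]])   # zv0 zv1 zv2
near = 1.0

M_inv = np.linalg.inv(M)             # plane slope element coefficient 2310(1) (Eq. 2)

# Depth coordinate interpolation coefficient 2321(1) (last line of Eq. 5).
A = np.array([near, near, near]) @ M_inv @ np.diag([1.0, -1.0, -1.0])
recZx, recZy, recZc = A

# Check at vertex v0: with xr = xv/(-zv), yr = -yv/(-zv) (Eq. 3),
# the interpolation recZx*xr + recZy*yr + recZc should equal near/(-zv).
xv, yv, zv = M[:, 0]
xr, yr = xv / -zv, -yv / -zv
print(recZx * xr + recZy * yr + recZc, near / -zv)   # both print 0.5
```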
• Next, the texture coordinate interpolation coefficients [0094] 2322(1) will be described. In order to reflect the influence of the perspective projection accurately on the coefficients 2322(1), the texture coordinates are corrected in the three-dimensional space. If B which satisfies Eq. 6 below is calculated, it becomes the texture coordinate interpolation coefficient 2322(1) represented by Eq. 7 below:

  [ s ; t ] = B [ xv ; yv ; zv ] = [ Sx Sy Sz ; Tx Ty Tz ] [ xv ; yv ; zv ]   (Eq. 6)

  B = [ Sx Sy Sz ; Tx Ty Tz ] = [ s0 s1 s2 ; t0 t1 t2 ] [ xv0 xv1 xv2 ; yv0 yv1 yv2 ; zv0 zv1 zv2 ]^-1   (Eq. 7)
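• Continuing the sketch above with assumed per-vertex texture coordinates, Eq. 7 is a single matrix product with the already-computed inverse of M; evaluating B against a vertex column of M recovers that vertex's (s, t), as Eq. 6 requires.

```python
# Assumed per-vertex texture coordinates of v0, v1, v2.
st = np.array([[0.0, 1.0, 0.0],     # s0 s1 s2
               [0.0, 0.0, 1.0]])    # t0 t1 t2

B = st @ M_inv                       # texture coordinate interpolation coefficients 2322(1) (Eq. 7)
(Sx, Sy, Sz), (Tx, Ty, Tz) = B       # 2322(1)s and 2322(1)t

print(B @ M[:, 1])                   # prints [1. 0.], i.e. (s1, t1), as Eq. 6 requires
```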
• The depth coordinates [0095] 2520(1) obtained after correction are calculated, in accordance with the interpolation expression below, from the coordinates which are produced by the pixel address calculation unit 2400 and which are to be corrected by perspective projection, and from the depth coordinate interpolation coefficient 2321(1) calculated above:
• recZ = recZx × xr + recZy × yr + recZc   2521(e).
• In the actual processing for the depth, the depth coordinates themselves are not used; the inverse values of the depth coordinates are handled instead. The texture coordinate [0096] 2520(2)s obtained after correction for the texture s-component is calculated, in accordance with the interpolation expression below, from the coordinates which are produced by the pixel address calculation unit 2400 and which are to be corrected by perspective projection, from the texture coordinate interpolation coefficient 2322(1) calculated above, and from the corrected depth coordinate 2520(1):
• s = (Sx × xr + Sy × yr + Sz) × zv   2522s(h).
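• Putting the two expressions together, the per-pixel work reduces to one inverse-depth interpolation, one division to recover zv = -near/recZ, and one further interpolation per texture component. The sketch below reuses A and B from the coefficient sketches above; the pixel coordinates passed in are arbitrary example values.

```python
def correct_pixel(x_r, y_r, A, B_row, near):
    """Per-pixel perspective correction: interpolation 2521(e) followed by 2522s(h)."""
    recZx, recZy, recZc = A
    rec_z = recZx * x_r + recZy * y_r + recZc   # corrected depth coordinate 2520(1)
    z_v = -near / rec_z                         # depth on the vrc
    Sx, Sy, Sz = B_row
    s = (Sx * x_r + Sy * y_r + Sz) * z_v        # corrected texture s-component 2520(2)s
    return rec_z, s

rec_z, s = correct_pixel(0.25, 0.1, A, B[0], near)   # B[0] holds (Sx, Sy, Sz)
```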
  • As described above, the geometrical parameters are interpolated in the three-dimensional space and effects due to perspective projection for shading are accurately expressed. [0097]
  • A plurality of geometrical parameters other than the depth coordinates and [0098] texture coordinates 1120 may be processed in a manner similar to that used for the texture coordinates 1120 to achieve perspective correction. Its structure is already shown in FIG. 8.
• Since the geometrical parameters have been corrected at the coordinates which were produced by the pixel [0099] address calculation unit 2400 and which are to be subjected to perspective correction, luminance calculation is performed on the basis of the corrected geometrical parameters in the luminance calculation unit 2510. The method of making the luminance calculation has already been described above.
• Data on the colors calculated in the [0100] luminance calculation unit 2510 are fed to the display controller 2600. Data on the corrected depth coordinates 2520(1) are fed to a depth comparator 2530. In that case, allowing for the compression of the depth buffer 3200, the inverse values of the depth coordinates obtained from the interpolation expression below are fed as they are to the depth comparator 2530:
• recZ = recZx × xr + recZy × yr + recZc   2521(e).
• The depth coordinates stored in the [0101] depth buffer 3200 are compared with the corrected depth coordinates 2520(1). When the comparison conditions are satisfied, the corrected depth coordinates 2520(1) are transferred to the depth buffer 3200. The display controller 2600 writes (or does not write) the color data into a frame buffer 3100 and displays the color data on the display 4000, on the basis of the information from the luminance calculation unit 2510, the depth comparator 2530, and the frame buffer 3100. When this series of processes has been performed for all the points within the triangle, the processing for the triangle is completed.
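• The comparison step can be sketched as below. Because 1/z (recZ) is what is stored, a larger stored value corresponds to a point nearer the viewpoint; the text does not spell out the comparison condition, so the "greater than" test here is an assumption, as is the simple array-indexed buffer layout.

```python
def depth_test_and_write(x, y, rec_z, color, depth_buffer, frame_buffer):
    """Depth comparator 2530 step followed by the conditional frame-buffer write."""
    if rec_z > depth_buffer[y][x]:       # nearer than what is already stored (assumed test)
        depth_buffer[y][x] = rec_z       # transfer the corrected depth coordinate 2520(1)
        frame_buffer[y][x] = color       # display controller 2600 writes the color data
```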
• While in the embodiment Phong shading with texture mapping to which accurate perspective correction has been made has been illustrated, the use of colors at the triangle vertexes as geometrical parameters brings about Gouraud shading which has been subjected to accurate perspective correction. [0102]
• While four kinds of data have been named as the geometrical parameters, i.e., the depth coordinates, texture coordinates [0103] 1120, light source intensity attenuation rate 1160, and regular model coordinates 11B0, similar perspective correction can be made on any geometrical parameters as long as the parameters' linearities are maintained on a plane in the three-dimensional space.
• While in the [0104] embodiment the 1/z values, or recZ, which are the inverses of the depth coordinates are illustrated as being stored in the depth buffer 3200, z values which are simply the depth coordinates may be stored instead. Alternative non-linear depth values, such as ones which improve the resolution of depth values in a region near the viewpoint, may also be stored.
• Since the perspective projection calculation device according to the present invention includes the plane slope element coefficient calculation unit, which calculates coefficients that imply a plane slope of a triangle defined in the three-dimensional space and that are usable in common for a plurality of geometrical parameters to be interpolated, the interpolation coefficient calculation unit, which calculates interpolation coefficients from the plane slope element coefficients obtained in the plane slope element coefficient calculation unit, and the correction unit, which makes accurate perspective corrections using the interpolation coefficients obtained in the interpolation coefficient calculation unit, the perspective projection calculation device is capable of making accurate perspective corrections rapidly for each plane while avoiding an increase in the number of dividing operations. [0105]

Claims (1)

What is claimed is:
1. A perspective projection calculation device in an image processor for perspectively projecting a triangle defined in a three-dimensional space onto a two-dimensional space and for shading the triangle in the two-dimensional space, comprising:
at least one plane slope element coefficient calculating means for calculating a coefficient which implies a plane slope element of the triangle defined in the three-dimensional space;
at least one interpolation coefficient calculating means for calculating an interpolation coefficient from the plane slope element coefficient calculated by the plane slope element coefficient calculating means; and
at least one correcting means for making a perspective correction, using the interpolation coefficient.
US09/814,684 1995-11-09 2001-03-15 Perspective projection calculation devices and methods Abandoned US20010010517A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US09/814,684 US20010010517A1 (en) 1995-11-09 2001-03-15 Perspective projection calculation devices and methods

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
JP29094995A JP3635359B2 (en) 1995-11-09 1995-11-09 Perspective projection calculation apparatus and perspective projection calculation method
JP07-290949 1995-11-09
US08/745,858 US6043820A (en) 1995-11-09 1996-11-08 Perspective projection calculation devices and methods
US09/536,757 US6236404B1 (en) 1995-11-09 2000-03-28 Perspective projection calculation devices and methods
US09/814,684 US20010010517A1 (en) 1995-11-09 2001-03-15 Perspective projection calculation devices and methods

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US09/536,757 Continuation US6236404B1 (en) 1995-11-09 2000-03-28 Perspective projection calculation devices and methods

Publications (1)

Publication Number Publication Date
US20010010517A1 true US20010010517A1 (en) 2001-08-02

Family

ID=17762575

Family Applications (3)

Application Number Title Priority Date Filing Date
US08/745,858 Expired - Lifetime US6043820A (en) 1995-11-09 1996-11-08 Perspective projection calculation devices and methods
US09/536,757 Expired - Fee Related US6236404B1 (en) 1995-11-09 2000-03-28 Perspective projection calculation devices and methods
US09/814,684 Abandoned US20010010517A1 (en) 1995-11-09 2001-03-15 Perspective projection calculation devices and methods

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US08/745,858 Expired - Lifetime US6043820A (en) 1995-11-09 1996-11-08 Perspective projection calculation devices and methods
US09/536,757 Expired - Fee Related US6236404B1 (en) 1995-11-09 2000-03-28 Perspective projection calculation devices and methods

Country Status (4)

Country Link
US (3) US6043820A (en)
JP (1) JP3635359B2 (en)
KR (1) KR100419052B1 (en)
TW (1) TW425512B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060050383A1 (en) * 2003-01-20 2006-03-09 Sanyo Electric Co., Ltd Three-dimentional video providing method and three dimentional video display device
US20060087507A1 (en) * 2004-10-25 2006-04-27 Sony Corporation Information processing apparatus and method, program, and navigation apparatus
US20060187242A1 (en) * 2005-02-18 2006-08-24 Lee Seong-Deok Method of, and apparatus for image enhancement taking ambient illuminance into account
US20080291198A1 (en) * 2007-05-22 2008-11-27 Chun Ik Jae Method of performing 3d graphics geometric transformation using parallel processor
US20090228784A1 (en) * 2008-03-04 2009-09-10 Gilles Drieu Transforms and animations of web-based content
CN104897130A (en) * 2015-06-18 2015-09-09 广西壮族自治区气象减灾研究所 Method for calculating solar elevation angle by adopting space-based remote sensing, blocking and interpolation

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7119809B1 (en) * 2000-05-15 2006-10-10 S3 Graphics Co., Ltd. Parallel architecture for graphics primitive decomposition
JP2002008060A (en) * 2000-06-23 2002-01-11 Hitachi Ltd Data processing method, recording medium and data processing device
GB2372188B (en) * 2001-02-08 2005-07-13 Imagination Tech Ltd Volume clipping in computer 3-D Graphics
JP3702269B2 (en) * 2002-12-06 2005-10-05 コナミ株式会社 Image processing apparatus, computer control method, and program
US20070040832A1 (en) * 2003-07-31 2007-02-22 Tan Tiow S Trapezoidal shadow maps
US7324113B1 (en) * 2005-03-09 2008-01-29 Nvidia Corporation Perspective correction computation optimization
JP2006252423A (en) * 2005-03-14 2006-09-21 Namco Bandai Games Inc Program, information storage medium and image generation system
EP1860614A1 (en) 2006-05-26 2007-11-28 Samsung Electronics Co., Ltd. 3-Dimensional Graphics Processing Method, Medium and Apparatus Performing Perspective Correction
WO2008120298A1 (en) * 2007-03-28 2008-10-09 Fujitsu Limited Perspective correction circuit, image processing device, and image processing program
GB2461912A (en) * 2008-07-17 2010-01-20 Micron Technology Inc Method and apparatus for dewarping and/or perspective correction of an image
US8774556B2 (en) 2011-11-30 2014-07-08 Microsoft Corporation Perspective correction using a reflection
US9710957B2 (en) * 2014-04-05 2017-07-18 Sony Interactive Entertainment America Llc Graphics processing enhancement by tracking object and/or primitive identifiers
US20170337728A1 (en) * 2016-05-17 2017-11-23 Intel Corporation Triangle Rendering Mechanism
US20210134049A1 (en) * 2017-08-08 2021-05-06 Sony Corporation Image processing apparatus and method
US10922884B2 (en) * 2019-07-18 2021-02-16 Sony Corporation Shape-refinement of triangular three-dimensional mesh using a modified shape from shading (SFS) scheme

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH03198172A (en) * 1989-12-27 1991-08-29 Japan Radio Co Ltd System for shading viewing conversion triangle
US5415549A (en) * 1991-03-21 1995-05-16 Atari Games Corporation Method for coloring a polygon on a video display
JP2682559B2 (en) * 1992-09-30 1997-11-26 インターナショナル・ビジネス・マシーンズ・コーポレイション Apparatus and method for displaying image of object on display device and computer graphics display system
FR2714503A1 (en) * 1993-12-29 1995-06-30 Philips Laboratoire Electroniq Image processing method and device for constructing from a source image a target image with change of perspective.
WO1997018667A1 (en) * 1995-11-14 1997-05-22 Sony Corporation Special effect device, image processing method, and shadow generating method

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060050383A1 (en) * 2003-01-20 2006-03-09 Sanyo Electric Co., Ltd Three-dimentional video providing method and three dimentional video display device
US7403201B2 (en) * 2003-01-20 2008-07-22 Sanyo Electric Co., Ltd. Three-dimensional video providing method and three-dimensional video display device
US20060087507A1 (en) * 2004-10-25 2006-04-27 Sony Corporation Information processing apparatus and method, program, and navigation apparatus
CN100346356C (en) * 2004-10-25 2007-10-31 索尼株式会社 Information processing apparatus and method, program, and navigation apparatus
US7420558B2 (en) * 2004-10-25 2008-09-02 Sony Corporation Information processing apparatus and method, program, and navigation apparatus
US20060187242A1 (en) * 2005-02-18 2006-08-24 Lee Seong-Deok Method of, and apparatus for image enhancement taking ambient illuminance into account
US7995851B2 (en) * 2005-02-18 2011-08-09 Samsung Electronics Co., Ltd. Method of, and apparatus for image enhancement taking ambient illuminance into account
US20080291198A1 (en) * 2007-05-22 2008-11-27 Chun Ik Jae Method of performing 3d graphics geometric transformation using parallel processor
US20090228784A1 (en) * 2008-03-04 2009-09-10 Gilles Drieu Transforms and animations of web-based content
US8234564B2 (en) * 2008-03-04 2012-07-31 Apple Inc. Transforms and animations of web-based content
CN104897130A (en) * 2015-06-18 2015-09-09 广西壮族自治区气象减灾研究所 Method for calculating solar elevation angle by adopting space-based remote sensing, blocking and interpolation

Also Published As

Publication number Publication date
US6236404B1 (en) 2001-05-22
JP3635359B2 (en) 2005-04-06
KR100419052B1 (en) 2004-06-30
JPH09134451A (en) 1997-05-20
TW425512B (en) 2001-03-11
KR970029140A (en) 1997-06-26
US6043820A (en) 2000-03-28

Similar Documents

Publication Publication Date Title
US6236404B1 (en) Perspective projection calculation devices and methods
US5808619A (en) Real-time rendering method of selectively performing bump mapping and phong shading processes and apparatus therefor
US5377313A (en) Computer graphics display method and system with shadow generation
US6690372B2 (en) System, method and article of manufacture for shadow mapping
US6204857B1 (en) Method and apparatus for effective level of detail selection
US5745666A (en) Resolution-independent method for displaying a three-dimensional model in two-dimensional display space
EP0447195B1 (en) Pixel interpolation in perspective space
US6593923B1 (en) System, method and article of manufacture for shadow mapping
US6621925B1 (en) Image processing apparatus and method and information providing medium
US7098924B2 (en) Method and programmable device for triangle interpolation in homogeneous space
US7545375B2 (en) View-dependent displacement mapping
US6731298B1 (en) System, method and article of manufacture for z-texture mapping
US20060114262A1 (en) Texture mapping apparatus, method and program
US20140071124A1 (en) Image processing apparatus
US6552726B2 (en) System and method for fast phong shading
US6614431B1 (en) Method and system for improved per-pixel shading in a computer graphics system
JPH0434159B2 (en)
US6407744B1 (en) Computer graphics bump mapping method and device
US7015930B2 (en) Method and apparatus for interpolating pixel parameters based on a plurality of vertex values
US20010045956A1 (en) Extension of fast phong shading technique for bump mapping
EP0974935B1 (en) Spotlight characteristic forming method and image processor using the same
US5821942A (en) Ray tracing through an ordered array
US5649078A (en) Efficient two-pass rasterization scheme utilizing visibility information
JPH09138865A (en) Three-dimensional shape data processor
JP2952585B1 (en) Image generation method

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO PAY ISSUE FEE