CN100478995C - Trapezoidal shadow maps - Google Patents

Trapezoidal shadow maps

Info

Publication number
CN100478995C
CN100478995C · CNB2004800264860A · CN200480026486A
Authority
CN
China
Prior art keywords
fragment
trapezoid
vertex
value
transformation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CNB2004800264860A
Other languages
Chinese (zh)
Other versions
CN1853201A (en)
Inventor
陈朝成 (Tiow-Seng Tan)
托比亚斯·奥斯卡·马丁 (Tobias Oskar Martin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National University of Singapore
Original Assignee
National University of Singapore
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National University of Singapore filed Critical National University of Singapore
Publication of CN1853201A publication Critical patent/CN1853201A/en
Application granted granted Critical
Publication of CN100478995C publication Critical patent/CN100478995C/en

Landscapes

  • Image Generation (AREA)

Abstract

A method of real-time shadow generation in computer graphical representation of a scene, the method comprising: defining an eye's frustum based on a desired view of the scene; defining a location of a light source illuminating at least a portion of the scene; generating a trapezoid to approximate an area, E, within the eye's frustum in the post-perspective space of the light, L; applying a trapezoidal transformation to transform objects within the trapezoid into a trapezoidal space for computing a shadow map; and determining whether an object or part thereof is in shadow in the desired view of the scene utilising the computed shadow map.

Description

Trapezoidal shadow maps
Technical field
The present invention relates generally to a method, a data storage medium and a computer system for deriving shadow maps for real-time shadow generation in a computer graphical representation of a scene.
Background of the invention
Real-time shadow generation has recently received much attention in computer graphics systems, owing to the growing processing power of graphics processing units. Shadows are important in many applications because they add realism to a scene and provide additional depth cues.
The search for methods of computing shadows started decades ago. In most techniques there is a trade-off between shadow quality and rendering time. A recent family of methods is based on the standard shadow map (SSM) algorithm. This two-pass algorithm is simple and easy to understand. In the first pass, the scene is rendered from the viewpoint of the light with the depth buffer enabled. The buffer is then read back or stored into an image called the shadow map. In the second pass, the scene is rendered from the camera viewpoint, and a shadow determination is made for each fragment: if the z value of the fragment, when transformed into the view of the light, is greater than the corresponding depth value stored in the shadow map, the fragment lies in shadow.
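As a concrete illustration of the two passes, the following minimal C++ sketch shows the depth comparison at the heart of the standard shadow map algorithm; the ShadowMap type, the inShadow helper and the bias parameter are illustrative assumptions, not part of the patent disclosure.

```cpp
#include <cstddef>
#include <vector>

// Illustrative stand-in for the depth image produced in the first pass.
struct ShadowMap {
    std::size_t width = 0, height = 0;
    std::vector<float> depth;                       // depths as seen from the light
    float at(std::size_t x, std::size_t y) const { return depth[y * width + x]; }
};

// lightClip = (x, y, z, w): the fragment transformed into the light's clip space.
// Second-pass test: the fragment is in shadow when it is farther from the light
// than the depth stored in the shadow map (plus a small bias).
bool inShadow(const float lightClip[4], const ShadowMap& map, float bias)
{
    float u = 0.5f * (lightClip[0] / lightClip[3] + 1.0f);  // map [-1, 1] to [0, 1]
    float v = 0.5f * (lightClip[1] / lightClip[3] + 1.0f);
    float z = 0.5f * (lightClip[2] / lightClip[3] + 1.0f);
    if (u < 0.0f || u > 1.0f || v < 0.0f || v > 1.0f) return false;  // outside the map
    std::size_t x = static_cast<std::size_t>(u * (map.width - 1));
    std::size_t y = static_cast<std::size_t>(v * (map.height - 1));
    return z > map.at(x, y) + bias;
}
```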
Compared with other methods, the standard shadow map algorithm is easy to implement and relatively fast to compute. Moreover, its operations map well onto recent graphics hardware: a dedicated texture is used for the shadow map, and the shadow determination is carried out with projective texture mapping.
On the other hand, SSM has a number of limitations. The first drawback is the resolution problem. SSM works well when the light is close to the viewpoint of the eye, but when the light is far away, aliasing appears near shadow boundaries in the eye's view. This is caused by low shadow map resolution in regions that require high resolution. The problem arises because only a small amount of the texture memory actually contributes to the shadow computation: the focus region of the eye's frustum contributes to only a very small part of the shadow map, while the remaining space of the shadow map corresponds to positions that are not visible in the eye's view.
Another limitation is known as the polygon offset problem. Because of the discrete nature of the image, the shadow comparison is carried out with limited precision, which leads to self-shadowing problems. These can be handled by finding a bias value (and slope factor) that is added to the depth values of the shadow map so as to move them slightly away from the light. We note that some methods use a non-linear distribution of depth values to address the resolution problem at the cost of aggravating the polygon offset problem.
A further limitation is known as the continuity problem, in which the shadow map quality changes significantly from one frame to the next, causing shadows to flicker. This occurs in warped shadow map methods such as the bounding box approximation (see Figure 2) and perspective shadow maps. In particular, perspective shadow maps, for example, depend on the convex hull of all objects that can cast shadows; this convex hull, and hence the resulting shadow quality, may change abruptly. In one example, this happens when objects move into or out of the light's frustum in a dynamic environment. In another example, it is observed when the algorithm virtually moves the position of the eye, for instance to avoid the inverted order of objects caused by the perspective projection.
The present invention has therefore been conceived and reduced to practice with a view to addressing the above limitations.
Summary of the invention
According to a first aspect of the present invention, there is provided a method of real-time shadow generation in a computer graphical representation of a scene, the method comprising: defining an eye's frustum based on a desired view of the scene; defining a location of a light source illuminating at least a portion of the scene; generating a trapezoid to approximate the area E within the eye's frustum in the post-perspective space L of the light; applying a trapezoidal transformation to objects within the trapezoid so as to transform them into a trapezoidal space for computing a shadow map; and determining, utilizing the computed shadow map, whether an object or part thereof is in shadow in the desired view of the scene.
The step of generating the top line l_t and the bottom line l_b, respectively, of the trapezoid approximating E in L may comprise:
- computing a center line l passing through the center of the near plane and the center of the far plane of E;
- computing the 2D convex hull of E;
- computing l_t such that it is orthogonal to l and touches the boundary of the convex hull of E;
- computing l_b such that it is parallel to l_t and touches the boundary of the convex hull of E.
Where the center of the near plane and the center of the far plane of E substantially coincide, the smallest box bounding the far plane may be defined as the trapezoid.
The step of generating the side lines of the trapezoid approximating E in L may comprise:
- specifying a distance d from the near plane of the eye's frustum, to define a focus region within the desired view of the scene;
- determining a point p_L in L, located on l, whose distance from the near plane of the eye's frustum is d;
- computing the position of a point q on l, where q is a center of projection such that the bottom line and the top line of the trapezoid are mapped to y = -1 and y = +1 respectively and p_L is mapped to a point on y = ξ, where ξ is between -1 and +1; and
- constructing the two side lines of the trapezoid, each passing through q, wherein each side line touches the 2D convex hull of E on either side of l.
In one embodiment, ξ = -0.6.
The desired value ξ may be determined based on an iterative process that minimizes wastage.
The iterative process may stop when a local minimum is found.
The iterative process may be computed in advance and the results stored in a table for direct look-up.
The method may comprise:
- determining the intersection I between the frustum of the light source and the eye's frustum;
- computing the center point e of the vertices of I;
- defining a center line l_n through the position of the eye and e, for use in generating the trapezoid.
The method may comprise defining a new focus region by geometrically bringing the near plane and the far plane of the eye's frustum closer together so that these planes tightly bound I.
The trapezoidal transformation may comprise mapping the four corners of the trapezoid to a unit square, which is the shape of a square shadow map, or to a general rectangle, which is the shape of a rectangular shadow map.
The size of the square or general rectangle may vary based on the configuration of the light source and the eye.
The trapezoidal transformation may transform only the x and y values of a vertex from the post-perspective space of the light into the trapezoidal space, while the z value remains the value in the post-perspective space of the light.
The method may comprise applying the trapezoidal transformation to obtain the x, y and w values x_T, y_T and w_T in the trapezoidal space, and computing the z value z_T in the trapezoidal space as z_T = z_L · w_T / w_L, where z_L and w_L are respectively the z and w values in the post-perspective space of the light.
The method may comprise:
- in the first pass, in which the shadow map is generated,
transforming the coordinate values of a fragment back from the trapezoidal space into the post-perspective space L of the light to obtain a first transformed fragment, computing the distance value z_L1 of the first transformed fragment from the light source in L utilizing the plane equation of the first transformed fragment, adding an offset value to z_L1, and storing the resulting value in the shadow map as the depth value;
- in the second pass, in which the shadow determination is made,
transforming the texture coordinates assigned to the fragment by projective texturing back from the trapezoidal space into L to obtain a second transformed fragment from the transformed texture coordinates, computing the distance value z_L2 of the second transformed fragment from the light source in L utilizing the plane equation of the second transformed fragment, and determining whether the fragment is in shadow based on a comparison of the depth value stored in the shadow map with z_L2.
The method may comprise:
- in the first pass, in which the shadow map is generated,
- during the vertex stage, transforming the coordinate values of a vertex into the trapezoidal space, and assigning to the vertex a texture coordinate equal to the coordinate values of the vertex in the post-perspective space of the light, and
- during the fragment stage, replacing the depth of a fragment with the texture coordinate of the fragment, adding an offset to the depth, and storing the resulting value in the shadow map as the depth value;
- in the second pass, in which the shadow determination is made,
- during the vertex stage, transforming the coordinate values of a vertex into the post-perspective space of the eye, and assigning two texture coordinates to the vertex, the first being the coordinate values of the vertex in the post-perspective space of the light and the second being the coordinate values of the vertex in the trapezoidal space, and
- during the fragment stage, determining the shadowing of a fragment based on a comparison of the depth value stored in the shadow map, indexed based on the second texture coordinate of the fragment, with a value based on the first texture coordinate of the fragment.
The method may comprise:
- in the first pass, in which the shadow map is generated,
transforming the coordinate values of a fragment back from the trapezoidal space into the post-perspective space L of the light to obtain a first transformed fragment, computing the distance value z_L1 of the first transformed fragment from the light source in L utilizing the plane equation of the first transformed fragment, adding an offset value to z_L1, and storing the resulting value in the shadow map as the depth value;
- in the second pass, in which the shadow determination is made,
- during the vertex stage, transforming the coordinate values of a vertex into the post-perspective space of the eye, and assigning two texture coordinates to the vertex, the first being the coordinate values of the vertex in the post-perspective space of the light and the second being the coordinate values of the vertex in the trapezoidal space, and
- during the fragment stage, determining the shadowing of a fragment based on a comparison of the depth value stored in the shadow map, indexed based on the second texture coordinate of the fragment, with a value based on the first texture coordinate of the fragment.
The method may comprise:
- in the first pass, in which the shadow map is generated,
- during the vertex stage, transforming the coordinate values of a vertex into the trapezoidal space, and assigning to the vertex a texture coordinate equal to the coordinate values of the vertex in the post-perspective space of the light, and
- during the fragment stage, replacing the depth of a fragment with the texture coordinate of the fragment, adding an offset to the depth, and storing the resulting value in the shadow map as the depth value;
- in the second pass, in which the shadow determination is made,
transforming the texture coordinates assigned to the fragment by projective texturing back from the trapezoidal space into L to obtain a second transformed fragment from the transformed texture coordinates, computing the distance value z_L2 of the second transformed fragment from the light source in L utilizing the plane equation of the second transformed fragment, and determining whether the fragment is in shadow based on a comparison of the depth value stored in the shadow map with z_L2.
The method may further comprise adding a polygon offset when utilizing the computed shadow map to determine whether an object or part thereof is in shadow in the desired view of the scene.
Two or more light sources may illuminate respective portions of the scene, and the method may be applied to each light source.
According to a second aspect of the present invention, there is provided a system for real-time shadow generation in a computer graphical representation of a scene, the system comprising a processor unit for defining an eye's frustum based on a desired view of the scene; for defining a location of a light source illuminating at least a portion of the scene; for generating a trapezoid to approximate the area E within the eye's frustum in the post-perspective space L of the light from the light source; for applying a trapezoidal transformation to objects within the trapezoid so as to transform them into a trapezoidal space for computing a shadow map; and for determining, utilizing the computed shadow map, whether an object or part thereof is in shadow in the desired view of the scene.
According to a third aspect of the present invention, there is provided a data storage medium having stored thereon computer code means for instructing a computer to execute a method of real-time shadow generation in a computer graphical representation of a scene, the method comprising: defining an eye's frustum based on a desired view of the scene; defining a location of a light source illuminating at least a portion of the scene; generating a trapezoid to approximate the area E within the eye's frustum in the post-perspective space L of the light from the light source; applying a trapezoidal transformation to objects within the trapezoid so as to transform them into a trapezoidal space for computing a shadow map; and determining, utilizing the computed shadow map, whether an object or part thereof is in shadow in the desired view of the scene.
Brief description of the drawings
Embodiments of the invention will now be described, by way of example only, with reference to the accompanying drawings, in which:
Figure 1 illustrates a comparison between shadows generated in the post-perspective space of the light and in the trapezoidal space, as described in an exemplary embodiment.
Figure 2 illustrates a comparison, over two consecutive frames, between shadows generated by the bounding box approximation and by the trapezoidal approximation, as described in an exemplary embodiment.
Figure 3 illustrates a comparison between shadow maps generated utilizing the bounding box approximation and the trapezoidal approximation, as described in an exemplary embodiment.
Figure 4 illustrates the trapezoidal transformation performed in the trapezoidal approximation, as described in an exemplary embodiment.
Figure 5 illustrates a trapezoidal transformation that maps the focus region into 80% of the shadow map, as described in an exemplary embodiment.
Figure 6 shows a schematic diagram of the trapezoidal approximation, as described in an exemplary embodiment.
Figure 7 shows a plot of the area occupied by the focus region in the shadow map, for a constant up vector of the eye, as the angle between the viewing directions of the eye and the light is varied.
Figure 8 illustrates the shadow quality produced by the trapezoidal approximation, as described in an exemplary embodiment.
Figure 9 is a schematic diagram of a computer system for implementing the method and system according to an exemplary embodiment.
Figure 10 illustrates the trapezoidal transformation and the four corners of the trapezoid, with the focus region mapped into 80% of the shadow map, as described in an exemplary embodiment.
Figure 11 illustrates the step of transforming the center of the top edge of the trapezoid to the origin during the computation of the trapezoidal transformation matrix, as described in an exemplary embodiment.
Figure 12 illustrates the step of rotating the trapezoid during the computation of the trapezoidal transformation matrix, as described in an exemplary embodiment.
Figure 13 illustrates the step of transforming the intersection point of the two side lines containing the two side edges during the computation of the trapezoidal transformation matrix, as described in an exemplary embodiment.
Figure 14 illustrates the step of shearing the trapezoid during the computation of the trapezoidal transformation matrix, as described in an exemplary embodiment.
Figure 15 illustrates the step of scaling the trapezoid during the computation of the trapezoidal transformation matrix, as described in an exemplary embodiment.
Figure 16 illustrates the step of transforming the trapezoid into a rectangle during the computation of the trapezoidal transformation matrix, as described in an exemplary embodiment.
Figure 17 illustrates the step of translating the rectangle along the y axis during the computation of the trapezoidal transformation matrix, as described in an exemplary embodiment.
Figure 18 illustrates the step of scaling the rectangle during the computation of the trapezoidal transformation matrix, as described in an exemplary embodiment.
Figure 19 illustrates the final result of the trapezoidal transformation matrix, as described in an exemplary embodiment.
Detailed description of the embodiments
Referring to Figure 1, an exemplary embodiment of the present invention provides a method of computing three-dimensional (3D) computer graphics shadows utilizing a trapezoidal shadow map, which is obtained from a trapezoidal approximation of the eye's frustum as seen from the view of the light.
Figure 1(a) shows a shadow map 102 of a scene 106 with 225 regularly spaced plant models 104, computed directly from the view of the light, i.e. in the post-perspective space of the light. When the light is far away, shadow aliasing appears in the eye's view, as shown in shadow 108. Figure 1(b) shows the shadow map 110 of the scene 114 computed from the view of the light after applying a trapezoidal transformation that focuses on the region potentially visible to the eye (containing only 15 plant models 112). As a result, a high-quality shadow 116 is obtained.
Further, with reference to Figure 2, the method of the exemplary embodiment addresses the shadow flickering caused by the continuity problem, in which the shadow quality changes significantly from one frame to the next. In each of the four figures, the post-perspective space of the light is at the top left, e.g. 222, the resulting shadow map is at the top right, e.g. 224, and the shadows of plants 210, 212, 218 and 220 (as in the scene of Figure 1) are at the bottom. Figure 2(a) shows the shadow flickering from one frame i to the next frame i+1 produced by the standard bounding box approximation (compare shadows 210 and 212), which fits a bounding box 204 around the region 202 occupied by the eye's frustum as seen from the post-perspective space of the light source. The quality of shadow 212 is clearly poorer than that of shadow 210. In contrast, Figure 2(b) shows the smooth shadow transition from one frame i to the next frame i+1 produced by the trapezoidal approximation, compare shadows 218 and 220, as described in the exemplary embodiment. There is no great difference between the quality of shadow 218 and that of shadow 220. Moreover, the quality of, for example, shadow 218 is again improved compared with, for example, shadow 210.
Without loss of generality, this description assumes that there is a single light in the scene and that the eye's frustum lies completely within the frustum of the light; in other words, there is a single light source generating shadows. Other cases, in which vertices of the eye's frustum lie on or behind the plane passing through the center of projection of the light and parallel to the near plane of the light, are described in a later part of the description.
It can be observed that the shadow map consists of two parts: one part lies within the eye's frustum and the other lies outside it. It is understood that only the former is useful for determining whether a pixel is in shadow. Thus, to increase the shadow map resolution, one way is to minimize the number of entries occupied by the latter, referred to as the total wastage. Figure 3 shows an example of the trapezoidal approximation 306 of the exemplary embodiment and of the smallest bounding box approximation 308 of the region 302 occupied by the eye's frustum as seen from the light. One way to address the resolution problem is therefore to make better use of the shadow map for the region 302 occupied by the eye's frustum as seen from the light, referred to here as E. This requires an additional normalization matrix N to transform the post-perspective space 300 of the light into an N space, as outlined in Figure 3 (where the N space refers to the trapezoidal space 304 or the bounding box space 310). The shadow map is then constructed from the N space, rather than from the post-perspective space 300. During shadow determination, a pixel is transformed into the N space for the depth comparison, rather than into the post-perspective space of the light.
Intuitively, the closer the approximation is to the area E, 302, the higher the resolution of the resulting shadow map. The smallest such region is the convex hull C of E, 302. However, it is not known how to efficiently transform C (which is a polygon of up to six edges) into a shadow map (usually of rectangular shape) while minimizing the wastage.
The next natural choice is to approximate C with the smallest enclosing bounding box B 308. However, the bounding box approximation does not result in minimum wastage, as can be seen from the comparison of the bounding box space 310 and the trapezoidal space 304 in Figure 3.
In the exemplary embodiment, a trapezoid is considered a suitable shape to approximate the area E, 302. More importantly, the two parallel edges, the top edge 305 and the base edge 307, form a surprisingly powerful mechanism to control the shape and size of the trapezoid from frame to frame (as described later). This successfully addresses the continuity problem. Of equal importance and interest in the choice of a trapezoid in the exemplary embodiment are the two side edges 309, 311, which address another kind of wastage not made explicit above. This wastage is the over-sampling, in the shadow map, of nearby objects for which a lower sampling rate would suffice. The exemplary embodiment has an effective mechanism for determining the two side edges 309, 311 so as to distribute the resolution in favor of objects within a specified focus region. By comparison, the transformation for the smallest bounding box B 308 has no flexibility in how it stretches. As a result, when the depth of the view increases, the smallest bounding box method has a deteriorating influence on the shadow map resolution.
As described in the background section, the continuity problem is the result of significant changes in shadow map quality from one frame to the next, causing shadows to flicker. For the smallest bounding box method, the shadow map quality changes if there is an abrupt change in the approximation of the region occupied by the eye's frustum as seen from the light. Figure 2(a) shows that, from frame i to frame i+1, the orientation of the smallest bounding boxes 204, 205 approximating the regions 202, 203 occupied by the eye's frustum as seen from the light changes. As a result, the resolution in different parts of the shadow map changes significantly. In general, the problem can occur whenever the eye's frustum as seen from the light changes from one shape to a different shape (i.e. when the number of faces of the eye's frustum visible from the light's view changes). In comparison, in the trapezoidal method of the exemplary embodiment, Figure 2(b) shows that from frame i to frame i+1 the resolution in different parts of the shadow map does not change significantly; compare shadows 218 and 220.
Referring to Figure 6, the exemplary embodiment has an efficient and effective way of controlling the changes of the trapezoid, and thus of addressing the continuity problem.
The goal is to construct a trapezoid approximating the area E, 602, within the eye's frustum as seen from the light, under the constraint that each such successive approximation results in a smooth transition of the shadow map resolution. The strategy adopted in the exemplary embodiment is to rely on a smooth transition of the shape and size of the trapezoid to produce a smooth transition of the shadow map resolution. First, the exemplary embodiment performs a computation to obtain the bottom line and the top line; from these, the base and top edges of the trapezoid are defined once the two side lines are computed.
The computation of the bottom line and the top line containing the base and top edges of the trapezoidal boundary of E, 602, is described below.
The computation finds two parallel lines in the post-perspective space L of the light, 600, to contain the base and top edges of the required trapezoid. The aim is to choose the parallel lines such that there is a smooth transition as the eye moves (relative to the light) from one frame to another.
First, the eye's frustum is transformed into the post-perspective space L 600 of the light to obtain E, 602.
Second, a center line l 604 is computed that passes through the centers of the near plane 622 and the far plane 624 of E 602.
Next, the 2D convex hull of E 602 (which has at most six vertices on its boundary) is computed.
Next, the top line l_t 608 is computed such that it is orthogonal to l 604 and touches the convex hull boundary of E 602. The top line l_t 608 intersects l 604 at a point that is closer to the center of the near plane 622 than to the center of the far plane 624 of E 602.
Then, the bottom line l_b 606 is computed such that it is parallel to (and distinct from) the top line l_t 608 (and thus also orthogonal to l) and touches the convex hull boundary of E 602.
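A minimal C++ sketch of these four steps is given below, assuming the eight corners of E have already been flattened onto the light's front face; the 2D helpers and the representation of the two lines by signed offsets along l are assumptions made only for illustration.

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec2 { double x, y; };
static Vec2 operator-(Vec2 a, Vec2 b) { return { a.x - b.x, a.y - b.y }; }
static double dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }
static double cross(Vec2 o, Vec2 a, Vec2 b) { return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x); }

// 2D convex hull (monotone chain); for the eight corners of E it has at most six vertices.
static std::vector<Vec2> convexHull2D(std::vector<Vec2> p)
{
    std::sort(p.begin(), p.end(),
              [](Vec2 a, Vec2 b) { return a.x < b.x || (a.x == b.x && a.y < b.y); });
    std::vector<Vec2> h(2 * p.size());
    std::size_t k = 0;
    for (std::size_t i = 0; i < p.size(); ++i) {                 // lower hull
        while (k >= 2 && cross(h[k - 2], h[k - 1], p[i]) <= 0) --k;
        h[k++] = p[i];
    }
    for (std::size_t i = p.size() - 1, low = k + 1; i-- > 0; ) { // upper hull
        while (k >= low && cross(h[k - 2], h[k - 1], p[i]) <= 0) --k;
        h[k++] = p[i];
    }
    h.resize(k - 1);
    return h;
}

// l_t and l_b, orthogonal to the center line l, described by signed offsets along l:
// each consists of the points p with dot(p - nearCenter, dir) equal to the offset.
struct TopAndBase { Vec2 nearCenter; Vec2 dir; double tTop; double tBase; };

// eNear / eFar: the four corners of the near and far planes of E, flattened onto the
// front face of the light's unit cube (z dropped).
TopAndBase topAndBaseLines(const std::array<Vec2, 4>& eNear, const std::array<Vec2, 4>& eFar)
{
    Vec2 cNear{ 0, 0 }, cFar{ 0, 0 };
    for (int i = 0; i < 4; ++i) {
        cNear.x += eNear[i].x / 4; cNear.y += eNear[i].y / 4;
        cFar.x  += eFar[i].x  / 4; cFar.y  += eFar[i].y  / 4;
    }
    Vec2 d = cFar - cNear;                       // center line l; the (almost) coincident
    double len = std::sqrt(dot(d, d));           // case is handled separately, as described
    d = { d.x / len, d.y / len };

    std::vector<Vec2> pts(eNear.begin(), eNear.end());
    pts.insert(pts.end(), eFar.begin(), eFar.end());

    double tTop = 1e300, tBase = -1e300;
    for (const Vec2& p : convexHull2D(pts)) {    // l_t touches the hull on the near-plane side,
        double t = dot(p - cNear, d);            // l_b on the far-plane side
        tTop  = std::min(tTop, t);
        tBase = std::max(tBase, t);
    }
    return { cNear, d, tTop, tBase };
}
```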
The above algorithm lets the center line l 604 dictate the choice of l_t 608 and l_b 606, except in the case where the centers of the far plane and the near plane (almost) coincide. In the exemplary embodiment the algorithm treats this case separately and takes the smallest box bounding the far plane 624 as the required trapezoid. The next two paragraphs explain the rationale of the above algorithm in addressing the continuity problem.
Imagine E, 602, with the eye's frustum inscribed in a sphere whose center is at the position of the eye and whose radius equals the distance from the eye to each corner of the far plane 624. Suppose the position of the eye does not change. The pitch and yaw of the eye from one frame to the next can be encoded as a point on the sphere (the intersection of l 604 with the sphere) moving close to another point, while a roll of the eye does not change the encoded point but results in a rotation of the eye's frustum about l 604. More importantly, with smooth eye movement from frame to frame, the four corners of the far plane 624 of the eye's frustum lying on the sphere also transition smoothly on the sphere. Since the positions of l 604 and of these four corners uniquely determine l_b 606, it too changes smoothly from frame to frame. Similarly, l_t 608 changes smoothly from frame to frame.
Next, suppose the position of the eye changes from one frame to the next but its orientation relative to the light is maintained. In this case there is only a scaling of E, 602, and the newly computed l_b 606 and l_t 608 are parallel to the previous lines. In other words, under a smooth translation of the eye's frustum, l_b 606 and l_t 608 again change smoothly from frame to frame.
Before describing the computation of the side lines, we first analyze the effect of transforming the trapezoid of Figure 5(a) into the trapezoidal space by its N_T. Note that N_T has the effect of stretching the top edge to unit length. In this case the top edge is relatively short compared with the base edge, so the stretching causes all the triangles shown to be pushed towards the bottom of the unit square, as shown in Figure 5(b). This means that the region near the top edge, bounded by l_t (608 in Figure 6) and close to the near plane (622 in Figure 6), ends up occupying the major part of the shadow map. This results in over-sampling, in the shadow map, of objects that are very close to the eye, while resolution is lost for the other objects (for example, from the top of Figure 5(b), the second triangle 502 to the fourth triangle 504). This is the above-mentioned kind of wastage caused by over-sampling.
For the trapezoid 510 of Figure 5(a), its corresponding trapezoidal space 508 is shown in Figure 5(b). In the case of Figure 5(b), we obtain over-sampling of a small region of E 506. In the case of Figure 5(c), for a different trapezoid (having the same top line and bottom line) computed utilizing the 80% rule, its trapezoidal transformation maps the first four regions 512 (from the top of the trapezoid) into the first 80% of the shadow map.
Conversely, when a 'wide' trapezoid (whose top and base edges have almost equal length) is transformed by its trapezoidal transformation, only a small fraction of the shadow map is occupied by the nearby objects. Since the goal of the approach adopted in the exemplary embodiment is to achieve an effective use of the available shadow map memory for the 'important' objects within the eye's frustum, an algorithm to compute the side lines, and thereby the required trapezoid, follows.
Next, the computation of the side lines, which contain the side edges of the trapezoidal boundary of E, 602, is described.
Referring to Figure 6, suppose the eye pays more attention to objects, and their shadows, within a distance δ from the near plane 622. That is, the focus region of the eye, or simply the focus region, is the eye's frustum truncated at distance δ from the near plane 622. Let p be the point at distance δ from the near plane 622, and let its corresponding point p_L, 618, in L, 600, lie on l, 604. Let the distance of p_L, 618, from the top line be δ', 614. The exemplary embodiment constructs the trapezoid containing E, 602, such that N_T maps p_L, 618, to a point on the 80% line, i.e. the line at 80% of the trapezoidal space in the exemplary embodiment (see Figure 5(c)). This approach is referred to here as the 80% rule.
To this end, a perspective projection problem is solved to compute the position of a point q, 620, on l, 604, where q, 620, serves as the center of projection that maps p_L, 618, to a point on the 80% line y = ξ 610 (i.e. ξ = -0.6), and maps the bottom line 606 and the top line 608 to y = -1 and y = +1 respectively. Let λ, 616, be the distance between the bottom line and the top line. Then the distance of q, 620, from the top line, denoted η, 612, is computed from the following 1D homogeneous perspective projection:
$$\begin{pmatrix} -(\lambda+2\eta)/\lambda & 2(\lambda+\eta)\eta/\lambda \\ 1 & 0 \end{pmatrix} \cdot \begin{pmatrix} \delta'+\eta \\ 1 \end{pmatrix} = \begin{pmatrix} \tilde{\xi} \\ \omega \end{pmatrix}, \qquad \text{with } \xi = \tilde{\xi}/\omega$$
Therefore η = (λδ′ + λδ′ξ) / (λ − 2δ′ − λξ), i.e. η = λδ′(1 + ξ) / (λ − 2δ′ − λξ).
Next, the two lines through q, 620, that touch the convex hull of E, 602, are constructed; they become the side lines containing the side edges of the required trapezoidal boundary.
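The computation of η and q, and the selection of the two side-line directions, can be sketched in C++ as follows; the geometric helpers and the tangent selection by extreme signed angle are assumptions introduced only for the illustration.

```cpp
#include <cmath>
#include <vector>

struct Vec2 { double x, y; };

// eta from the 1D homogeneous projection above:
// eta = lambda * delta' * (1 + xi) / (lambda - 2 * delta' - lambda * xi).
double etaFromFocus(double lambda, double deltaPrime, double xi /* e.g. -0.6 */)
{
    return (lambda * deltaPrime + lambda * deltaPrime * xi)
         / (lambda - 2.0 * deltaPrime - lambda * xi);
}

// q lies on the center line l at distance eta from the top line, on the side of the
// top line away from the base line (topOnL: intersection of l_t and l; dirL: unit
// direction of l pointing from the top line towards the base line).
Vec2 centerOfProjection(Vec2 topOnL, Vec2 dirL, double eta)
{
    return { topOnL.x - eta * dirL.x, topOnL.y - eta * dirL.y };
}

// The side lines are the two tangents from q to the 2D convex hull of E: each hull
// vertex gives a candidate direction, and the two extreme candidates (largest signed
// angle on either side of l) are kept.
void sideLineDirections(Vec2 q, Vec2 dirL, const std::vector<Vec2>& hull,
                        Vec2& left, Vec2& right)
{
    double bestLeft = -2.0, bestRight = 2.0;                // signed sines lie in [-1, 1]
    for (const Vec2& h : hull) {
        double dx = h.x - q.x, dy = h.y - q.y;
        double len = std::sqrt(dx * dx + dy * dy);
        double side = (dirL.x * dy - dirL.y * dx) / len;    // signed sine of the angle to l
        if (side > bestLeft)  { bestLeft = side;  left  = { dx / len, dy / len }; }
        if (side < bestRight) { bestRight = side; right = { dx / len, dy / len }; }
    }
}
```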
For certain configurations (such as the dueling frusta case, in which the eye's frustum, as seen in the post-perspective space of the light, looks towards the light), the 80% rule may result in significant wastage of shadow map memory. Therefore, in the exemplary embodiment, a variation of the above algorithm is an iterative process. Suppose the shadow map has a horizontal resolution of x entries. (Example values of x in some applications are 512, 1024 or 2048.) In the first iteration, p_L, 618, is mapped to the 80% line (i.e. 0.8x), and in each subsequent iteration p_L, 618, is mapped to the line one entry before the line of the previous iteration, to compute q, 620. For each computed q, 620, the corresponding trapezoid and its trapezoidal transformation N_T are computed as before. From all the iterations, the trapezoid whose N_T transforms the focus region to cover the largest area in the shadow map (other measures are possible) is adopted. In another embodiment, the iterative process can stop once a value is located at which the focus region covers a locally maximal area (or another corresponding measure) in the shadow map; in other words, once there is a change from good coverage to poorer coverage, the iterative process can stop and the value giving the good coverage is used. The above computation is not wasteful, because it involves only a small number of simple calculations per iteration. In fact, for a given up vector of the eye and a given angle between the viewing directions of the eye and the light, the best ξ, 610, to which p_L, 618, is mapped is independent of the scene and can therefore be computed in advance. Hence all these best values of ξ, 610 (and thus of η, 612), can be stored in a table parameterized, for each possible up vector of the eye, by the angle between the viewing directions of the eye and the light. Thus, in another embodiment, a simple look-up table can also replace the above iterative process.
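One possible form of the iteration is sketched below; buildTrapezoidFor and coverageOfFocusRegion stand for the constructions described above and are assumed helpers, and the direction in which the candidate line is moved follows the description of stepping one entry at a time from the 80% line.

```cpp
#include <cstddef>

// Minimal stand-ins for the objects described above (illustrative only).
struct Trapezoid { double corners[4][2]; };              // t0..t3 on the light's front face

// Assumed helpers standing for the constructions described in the text.
Trapezoid buildTrapezoidFor(double xi);                  // trapezoid whose N_T maps p_L to y = xi
double    coverageOfFocusRegion(const Trapezoid& t);     // shadow-map area covered by the focus region

// Scan the candidate rows of an x-entry-wide shadow map, starting at the 80% row and
// stepping one entry at a time towards the top line, keeping the xi that gives the best
// focus-region coverage. Removing the early exit gives the full search over all rows.
double bestXi(std::size_t x)
{
    double bestCoverage = -1.0, best = -0.6;
    for (std::size_t row = static_cast<std::size_t>(0.8 * x); row > 0; --row) {
        // Row counted from the top edge (y = +1): row = 0.8 * x corresponds to y = -0.6.
        double xi = 1.0 - 2.0 * static_cast<double>(row) / static_cast<double>(x);
        double c = coverageOfFocusRegion(buildTrapezoidFor(xi));
        if (c > bestCoverage) { bestCoverage = c; best = xi; }
        else break;                                      // stop at the first local maximum
    }
    return best;
}
```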
Figure 7 shows a plot 700 of the area occupied by the focus region in the shadow map, for a constant up vector of the eye, as the angle between the viewing directions of the eye and the light is varied. The focus region occupies a small area for the dueling frusta case, but occupies a large area when, for example, E is visible from the side in the light's view.
To assess the 80% rule, the plot 700 of the total area covered by the focus region in the shadow map was generated by varying the angle between the viewing directions of the eye and the light (represented as data points in the xy plane) while keeping the up vector constant. The embodiment was exercised with a series of plots of the same kind for different up vectors. It is observed that consecutive plots for slightly different up vectors form surfaces with very close values. These plots indicate that there is a smooth transition in the area occupied by the focus region. This is a strong indication that the approach adopted by the exemplary embodiment handles the continuity problem well, and that the 80% rule utilized in the exemplary embodiment is therefore effective. In another embodiment, this percentage can be adjusted according to the needs of the application.
The above discussion assumes that the eye's frustum lies completely within the frustum of the light, as in an outdoor scene where sunlight is the dominant light source. If this is not the case, one refinement is to enlarge the view of the light to contain the eye's frustum. This is not an effective use of the shadow map; likewise, it may be difficult to handle and is not always feasible. There are also cases in which vertices of the eye's frustum lie on or behind the plane that passes through the center of projection of the light and is parallel to the near plane of the light. Such vertices have an inverted order or are mapped to infinity in L (600 in Figure 6). The next two paragraphs discuss a simple extension that avoids these cases.
Specifically, it suffices to transform into L (600 in Figure 6) only the part of the eye's frustum lying inside the frustum of the light. The remaining part, outside the frustum of the light, is not directly illuminated and therefore cannot exhibit shadows. Hence, in the exemplary embodiment, only the intersection I between the frustum of the light and the eye's frustum, whose vertices number no more than 16 intersection points, is processed. This conveniently avoids the above problems caused by the perspective transformation.
The line through the centers of the near plane and the far plane of the eye's frustum (604 in Figure 6) may no longer be a suitable center line for computing the bottom line and the top line. One approach is to compute the center point e of the vertices of I and to use the line through the eye position and e as the new center line l_n for the computation. A new focus region must also be defined, because the focus region may not lie completely within I. One approach is to geometrically bring the near plane (622 in Figure 6) and the far plane (624 in Figure 6) of the eye closer together, in place, so that they tightly bound I, and to take f' as the distance between these planes. Let f be the distance between the original far plane and near plane of the eye in place. Then, in one embodiment, the new focus region lies between the new near plane and the plane parallel to it at distance (δ·f'/f), where δ is the distance initially selected to set the focus region.
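As a purely illustrative example (numbers assumed for the illustration only): if the original near-to-far distance of the eye is f = 100 units, the planes tightened around I are f' = 40 units apart, and the focus distance was initially chosen as δ = 25, then the new focus region extends δ·f'/f = 25·40/100 = 10 units beyond the new near plane.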
With the above, the method adopted in the exemplary embodiment becomes applicable to a wider range of applications: from near lights to distant lights, and from indoor to outdoor scenes. Figures 8(a) and 8(b) show renderings of imaginary figures under two kinds of illumination. Figure 8(a) shows, viewed from outside the frustum of the light, the figures 806 illuminated by a near light 802 and by another light 804. Figure 8(b) shows a figure 808 illuminated by a near light (left shadow 810) and a far light (right shadow 812), rendered by the trapezoidal approximation method adopted in the exemplary embodiment. It can be observed from Figure 8 that the method adopted in the exemplary embodiment achieves high shadow quality both for the near-light case and for the far-light case, the latter being unfavorable for the standard shadow map.
The following description formalizes the trapezoidal approximation utilized in the method adopted in the exemplary embodiment.
Referring to Figure 3, consider a vertex v in object space. In the post-perspective space L of the light, 300, this vertex is v_L = P_L C_L W v, where P_L and C_L are the projection and camera matrices of the light and W is the world matrix of the vertex. The eight corner vertices of E, 302, in L, 300, are obtained by multiplying the corner vertices of the eye's frustum by P_L C_L C_E^{-1}, where C_E^{-1} is the inverse camera matrix of the eye. As shown in Figure 4, E is treated as a flattened two-dimensional (2D) object on the front face 400 of the unit cube 404 of the light. A trapezoid T 402 is used to approximate (and contain) E treated as a 2D object. A normalization matrix N_T is constructed such that the four corners of T, 402, are mapped to the unit square 401 or to a rectangle. We call v_T = N_T v_L the vertex in the trapezoidal space, N_T the trapezoidal transformation matrix, and the shadow map obtained from the trapezoidal space the trapezoidal shadow map.
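By way of illustration, the following C++ sketch maps vertices into L according to v_L = P_L C_L W v, applied here to world-space corner vertices of the eye's frustum (W already applied); the 4 × 4 matrix helpers are assumptions made for the example.

```cpp
#include <array>

using Mat4 = std::array<std::array<double, 4>, 4>;
using Vec4 = std::array<double, 4>;

static Vec4 mul(const Mat4& m, const Vec4& v)
{
    Vec4 r{ 0, 0, 0, 0 };
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j) r[i] += m[i][j] * v[j];
    return r;
}

static Mat4 mul(const Mat4& a, const Mat4& b)
{
    Mat4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k) r[i][j] += a[i][k] * b[k][j];
    return r;
}

// Applying P_L * C_L to the eight world-space corner vertices of the eye's frustum
// yields the corners of E in L; E is then flattened by dropping the z coordinate.
std::array<Vec4, 8> cornersOfE(const Mat4& P_L, const Mat4& C_L,
                               const std::array<Vec4, 8>& frustumCornersWorld)
{
    Mat4 toL = mul(P_L, C_L);
    std::array<Vec4, 8> e{};
    for (int i = 0; i < 8; ++i) {
        Vec4 h = mul(toL, frustumCornersWorld[i]);
        for (int k = 0; k < 4; ++k) e[i][k] = h[k] / h[3];   // homogeneous divide into L
    }
    return e;
}
```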
The computation of the trapezoidal transformation matrix N_T in the exemplary embodiment, which maps the four corners of T to the unit square, is described below. Similarly, an N_T can be computed that maps the four corners of T to a rectangle.
Referring to Figure 10, the goal is to compute the transformation N_T (a 4 × 4 matrix) that maps the four corners t_0, t_1, t_2, t_3 of the trapezoid 1000 to the front face of the unit cube 1002, i.e. to compute N_T under the following constraints:
$$\begin{pmatrix}-1\\-1\\1\\1\end{pmatrix} = N_T \cdot t_0, \quad \begin{pmatrix}+1\\-1\\1\\1\end{pmatrix} = N_T \cdot t_1, \quad \begin{pmatrix}+1\\+1\\1\\1\end{pmatrix} = N_T \cdot t_2, \quad \begin{pmatrix}-1\\+1\\1\\1\end{pmatrix} = N_T \cdot t_3$$
There are several ways to achieve this. A common approach is to perform a quadrilateral-to-quadrilateral mapping. Another is to apply to the trapezoid the usual operations of rotation, translation, shearing, scaling and normalization that map it to the front face of the unit cube. The following describes a method of computing N_T from a series of 4 × 4 matrices T_1, R, T_2, H, S_1, N, T_3 and S_2. In the discussion below, the vectors u = (x_u, y_u, z_u, w_u) and v = (x_v, y_v, z_v, w_v) hold intermediate results.
As a first step, referring to Figure 11, T_1 transforms the center 1100 of the top edge 1102 to the origin:
$$u = \frac{t_2 + t_3}{2}, \quad\text{and}\quad T_1 = \begin{pmatrix} 1 & 0 & 0 & -x_u \\ 0 & 1 & 0 & -y_u \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$
Then, referring to Figure 12, the trapezoid T 1200 is rotated about the origin by applying R, so that the top edge 1202 is collinear with the x axis:
$$u = \frac{t_2 - t_3}{|t_2 - t_3|}, \quad\text{and}\quad R = \begin{pmatrix} x_u & y_u & 0 & 0 \\ y_u & -x_u & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$
Next, referring to Figure 13, the intersection point i of the two side lines 1300, 1302 containing the two side edges (t_0, t_3) and (t_1, t_2) is transformed to the origin by applying T_2:
$$u = R \cdot T_1 \cdot i, \quad\text{and}\quad T_2 = \begin{pmatrix} 1 & 0 & 0 & -x_u \\ 0 & 1 & 0 & -y_u \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$
As the next step, referring to Figure 14, the trapezoid must be sheared by applying H so that it is symmetric with respect to the y axis, i.e. so that the line through the centers of the base edge 1402 and the top edge 1404 is collinear with the y axis:
$$u = T_2 \cdot R \cdot T_1 \cdot \frac{t_2 + t_3}{2}, \quad\text{and}\quad H = \begin{pmatrix} 1 & -x_u/y_u & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$
Next, referring to Figure 15, the trapezoid is scaled by applying S_1 so that the angle between the two side lines 1500, 1502 containing the side edges (t_0, t_3) and (t_1, t_2) is 90 degrees, and so that the distance between the top edge 1504 and the x axis is 1:
$$u = H \cdot T_2 \cdot R \cdot T_1 \cdot t_2, \quad\text{and}\quad S_1 = \begin{pmatrix} 1/x_u & 0 & 0 & 0 \\ 0 & 1/y_u & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$
Next, referring to Figure 16, the following transformation N transforms the trapezoid into a rectangle 1600:
$$N = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix}$$
Then, referring to Figure 17, the rectangle 1700 is translated along the y axis until its center coincides with the origin. This is achieved by applying T_3. After this transformation, the rectangle 1700 is also symmetric with respect to the x axis:
$$u = N \cdot S_1 \cdot H \cdot T_2 \cdot R \cdot T_1 \cdot t_0, \quad v = N \cdot S_1 \cdot H \cdot T_2 \cdot R \cdot T_1 \cdot t_2, \quad\text{and}\quad T_3 = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & -\frac{1}{2}\left(\frac{y_u}{w_u} + \frac{y_v}{w_v}\right) \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$
Then, referring to Figure 18, the rectangle 1800 must be scaled along the y axis by applying S_2 so that it covers the front face of the unit cube 1900, as shown in Figure 19:
$$u = T_3 \cdot N \cdot S_1 \cdot H \cdot T_2 \cdot R \cdot T_1 \cdot t_0, \quad\text{and}\quad S_2 = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & -w_u/y_u & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$
Thus, the trapezoidal transformation N_T can be computed as:
N_T = S_2 · T_3 · N · S_1 · H · T_2 · R · T_1
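A direct transcription of the eight steps into code looks as follows; this is a sketch under the corner ordering of the constraints above (t_0, t_1 on the base edge and t_2, t_3 on the top edge), and the small matrix and intersection helpers are illustrative assumptions.

```cpp
#include <array>
#include <cmath>

using Mat4 = std::array<std::array<double, 4>, 4>;
struct Vec2 { double x, y; };

static Mat4 identity() { Mat4 m{}; for (int i = 0; i < 4; ++i) m[i][i] = 1.0; return m; }

static Mat4 mul(const Mat4& a, const Mat4& b)
{
    Mat4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k) r[i][j] += a[i][k] * b[k][j];
    return r;
}

// Apply a matrix to a point on the light's front face, treated as (x, y, 1, 1).
static void apply(const Mat4& m, Vec2 p, double& x, double& y, double& w)
{
    x = m[0][0] * p.x + m[0][1] * p.y + m[0][2] + m[0][3];
    y = m[1][0] * p.x + m[1][1] * p.y + m[1][2] + m[1][3];
    w = m[3][0] * p.x + m[3][1] * p.y + m[3][2] + m[3][3];
}

// Intersection of the line through (a0, a1) with the line through (b0, b1);
// the denominator is assumed non-zero (the side lines of a true trapezoid meet).
static Vec2 intersect(Vec2 a0, Vec2 a1, Vec2 b0, Vec2 b1)
{
    double d1x = a1.x - a0.x, d1y = a1.y - a0.y;
    double d2x = b1.x - b0.x, d2y = b1.y - b0.y;
    double t = ((b0.x - a0.x) * d2y - (b0.y - a0.y) * d2x) / (d1x * d2y - d1y * d2x);
    return { a0.x + t * d1x, a0.y + t * d1y };
}

// N_T = S2 * T3 * N * S1 * H * T2 * R * T1, following Figures 11 to 19.
Mat4 trapezoidToSquare(Vec2 t0, Vec2 t1, Vec2 t2, Vec2 t3)
{
    double xu, yu, wu, xv, yv, wv;

    Vec2 c{ (t2.x + t3.x) / 2.0, (t2.y + t3.y) / 2.0 };       // center of the top edge
    Mat4 T1 = identity(); T1[0][3] = -c.x; T1[1][3] = -c.y;   // T1: top-edge center to origin

    double len = std::hypot(t2.x - t3.x, t2.y - t3.y);
    Vec2 d{ (t2.x - t3.x) / len, (t2.y - t3.y) / len };
    Mat4 R = identity();                                      // R: top edge onto the x axis
    R[0][0] = d.x; R[0][1] = d.y; R[1][0] = d.y; R[1][1] = -d.x;

    Mat4 M = mul(R, T1);
    apply(M, intersect(t0, t3, t1, t2), xu, yu, wu);
    Mat4 T2 = identity(); T2[0][3] = -xu / wu; T2[1][3] = -yu / wu;   // T2: side-line apex to origin
    M = mul(T2, M);

    apply(M, c, xu, yu, wu);
    Mat4 H = identity(); H[0][1] = -xu / yu;                  // H: shear, symmetric about the y axis
    M = mul(H, M);

    apply(M, t2, xu, yu, wu);
    Mat4 S1 = identity(); S1[0][0] = 1.0 / xu; S1[1][1] = 1.0 / yu;   // S1: 90-degree apex, top edge at 1
    M = mul(S1, M);

    Mat4 N = identity(); N[1][3] = 1.0; N[3][1] = 1.0; N[3][3] = 0.0; // N: trapezoid to rectangle
    M = mul(N, M);

    apply(M, t0, xu, yu, wu);
    apply(M, t2, xv, yv, wv);
    Mat4 T3 = identity(); T3[1][3] = -(yu / wu + yv / wv) / 2.0;      // T3: center on the x axis
    M = mul(T3, M);

    apply(M, t0, xu, yu, wu);
    Mat4 S2 = identity(); S2[1][1] = -wu / yu;                // S2: stretch onto the unit square
    return mul(S2, M);
}
```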
Returning to Figure 4, in the exemplary embodiment the intention of N_T is to transform only the x and y values of the vertices of objects. However, depending on the x and y values of each vertex, this transformation also affects its z value. Thus, a single offset for all vertices (as in the standard shadow map method) may not be sufficient to remedy the surface acne effect.
Figure 4 shows the trapezoidal approximation 402 within the frustum of the light in the post-perspective space of the light. Figure 4 also shows that, under the above trapezoidal transformation, the trapezoidal approximation becomes the unit square 401 (or rectangle) in the front view 405 but remains a trapezoid in the side view 409. This aggravates the polygon offset problem. Figure 4 also shows the approach adopted by the exemplary embodiment, which keeps the unit square 407 in the side view 408 under the trapezoidal transformation.
The trapezoidal transformation includes a two-dimensional projection. An important property of this transformation is that z_T of a vertex in the trapezoidal space depends on w_T. In effect, the distribution of z values over the trapezoidal shadow map varies, and a constant polygon offset, as in the standard shadow map method, may therefore be inadequate. The problem is that the specified polygon offset may be too high for pixels containing objects near the eye, and too low for pixels containing objects further away. If the polygon offset is too high, missing shadows may occur; on the other hand, if it is too low, surface acne may result.
In the exemplary embodiment, by keeping the depth values of the post-perspective space of the light, a constant polygon offset can be specified to prevent the polygon offset problem, similar to the technique used in the standard shadow map method. The distribution of depth values remains even, as can be seen from the unit square 407 in the side view 408 in Figure 4.
In one embodiment, to achieve this, only the x, y and w values of each vertex are transformed by N_T into the trapezoidal space (304 in Figure 3), while the z value in the post-perspective space L of the light (300 in Figure 3) is kept. In a simple form, the formula to transform a vertex into the trapezoidal space (304 in Figure 3) is now v_T = N_T v_L, used to obtain its x_T, y_T and w_T values; then, from the z and w values z_L and w_L of v_L, the z_T value is computed as follows:
z_T = z_L · w_T / w_L
The above computation can be implemented with a vertex program during the first pass, in which the shadow map is generated, to compute the required z_T, and with another vertex program during the second pass, in which the shadow determination is made, to compute the corresponding z_T in L (300 in Figure 3) for each vertex. This embodiment is easy to implement and feasible in practice. However, the above method is only an approximation of the actual z values. The incorrect z values at points within a triangle are found to be immaterial when the eye's frustum or the light's frustum does not contain particularly large triangles, because the error is then so small that it can be neglected once it is adjusted with a relatively large polygon offset.
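For illustration, the per-vertex arithmetic of the first pass can be mirrored on the CPU as follows (a sketch only; in the exemplary embodiment this runs in a vertex program, and the matrix and vector helpers are assumptions):

```cpp
#include <array>

using Vec4 = std::array<double, 4>;                       // (x, y, z, w)
using Mat4 = std::array<std::array<double, 4>, 4>;

static Vec4 mul(const Mat4& m, const Vec4& v)
{
    Vec4 r{ 0, 0, 0, 0 };
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j) r[i] += m[i][j] * v[j];
    return r;
}

// First pass, per vertex: transform v_L (the vertex in the light's post-perspective
// space) by N_T for x, y and w, and keep the depth from L via z_T = z_L * w_T / w_L.
Vec4 trapezoidalVertex(const Mat4& N_T, const Vec4& v_L)
{
    Vec4 v_T = mul(N_T, v_L);
    v_T[2] = v_L[2] * v_T[3] / v_L[3];
    return v_T;
}
```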
To improve on the above embodiment, other embodiments can utilize a method based on ray casting and/or a method based on multiple texture coordinates. Note that each method has the usual two passes of shadow map generation and shadow determination. These methods can be combined into four different combinations to handle the problem.
In the ray casting method, the fragment stage is used to compute the correct z value of each fragment in L (300 in Figure 3). In the first pass (shadow map generation), N_T^{-1} and the inverse viewport matrix are used to transform the x and y values of a fragment back from the trapezoidal space to L (300 in Figure 3). The plane equation π of the fragment is then used to compute its z value in L (300 in Figure 3). An offset is added to this value, which is then stored in the shadow map. Then, in the second pass (shadow determination), N_T^{-1} is applied to the x_T, y_T and w_T values of the texture coordinates assigned to the fragment (through projective texturing) to obtain x_L, y_L and w_L. Utilizing these values, the z value of the fragment in L (300 in Figure 3) is computed from π. This z value is compared with the depth value stored in the (x_T/w_T, y_T/w_T) entry of the shadow map to determine whether the fragment is in shadow.
In the multiple texture coordinates method, in the first pass (shadow map generation), the vertex stage transforms each vertex v to v_T = (x_T, y_T, z_T, w_T) and assigns v_L = (x_L, y_L, z_L, w_L) as its texture coordinates. The texture coordinates over a triangle are obtained by linear interpolation of the v_L/w_T values of the triangle vertices. Next, the fragment stage replaces the depth of the fragment with z_L/w_L and adds an offset to it. In effect, the z value of a vertex in the trapezoidal space is set to z_L together with the necessary polygon offset. In the second pass (shadow determination), the vertex stage transforms each vertex into the post-perspective space of the eye as the output vertex. For the vertex, the two texture coordinates v_L = (x_L, y_L, z_L, w_L) and v_T = (x_T, y_T, z_T, w_T) are also computed. The fragment stage then processes each fragment to determine its shadowing by comparing z_L/w_L with the value in the shadow map indexed by (x_T/w_T, y_T/w_T).
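A CPU-side picture of the second-pass comparison in the multiple texture coordinates method is sketched below; the shadow map type, the nearest-entry sampling and the bias parameter are assumptions made only for illustration (in practice this runs in a fragment program with hardware-interpolated texture coordinates).

```cpp
#include <cstddef>
#include <vector>

struct TrapezoidalShadowMap {
    std::size_t width = 0, height = 0;
    std::vector<float> depth;                        // z_L / w_L values stored in the first pass
    float at(double s, double t) const {             // nearest entry for s, t in [0, 1]
        std::size_t x = static_cast<std::size_t>(s * (width - 1));
        std::size_t y = static_cast<std::size_t>(t * (height - 1));
        return depth[y * width + x];
    }
};

// vL and vT are the two interpolated texture coordinates of the fragment:
// the vertex values in L and in the trapezoidal space respectively.
bool fragmentInShadow(const float vL[4], const float vT[4],
                      const TrapezoidalShadowMap& map, float bias)
{
    double s = 0.5 * (vT[0] / vT[3] + 1.0);          // index by (x_T / w_T, y_T / w_T),
    double t = 0.5 * (vT[1] / vT[3] + 1.0);          // remapped from [-1, 1] to [0, 1]
    double zL = vL[2] / vL[3];                       // z_L / w_L of the fragment
    return zL > map.at(s, t) + bias;
}
```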
Appendix A shows the vertex and fragment program code used to implement the trapezoidal transformation in the exemplary embodiment. The method adopted is the multiple texture coordinates method described above. Only the shadow map generation step, i.e. the first pass of the algorithm, is shown, because the second pass of the algorithm works in a similar manner. The same functionality as in Appendix A can also be realized with other schemes, for example with other vertex and fragment programs, with Cg, or with other computer graphics programs.
Note that, for clarity, the computation that adds the constant polygon offset to the final depth value is omitted in Appendix A.
Appendix B shows the display routine used for the implementation of the above algorithm in the exemplary embodiment.
The exemplary embodiment can be realized utilizing GNU C++ and OpenGL in a Linux environment, on an Intel Pentium 4 1.8 GHz CPU with an nVidia GeForce FX5900 Ultra graphics controller. ARB vertex/fragment programs or Cg programs can be used to handle the polygon offset problem. The shadow map can be stored in a buffer or in an ordinary texture memory. The exemplary embodiment uses varied but simple geometric operations, such as the 2D convex hull and line operations, so that robustness issues are easy to handle.
Embodiments of the invention can provide the following advantages.
By approximating with a trapezoid the eye's frustum as seen from the light, and warping the trapezoid onto the shadow map, the shadow map resolution is improved. This increases the number of samples for the region close to the eye and therefore results in higher shadow quality.
The trapezoid is computed so as to achieve a smooth change of the shadow map resolution. The computation is not expensive, because the trapezoid is computed based only on the eight vertices of the eye's frustum rather than on the whole scene, and it eliminates the continuity problem that arises in the prior art.
In addition, the trapezoidal approximation is an inexpensive operation and the algorithm scales well. Admittedly, the warping includes a perspective transformation, in which the polygon offset becomes an issue. However, this problem can be solved by one of the three methods discussed in the exemplary embodiment, including the use of vertex/fragment programs or Cg programs on modern graphics hardware.
It will be appreciated that a person skilled in the art can readily apply the present invention to multiple light sources, each having its own shadow map.
The method and system of the exemplary embodiment can be implemented on a computer system 900, shown schematically in Figure 9. It may be implemented as software, such as a computer program executed within the computer system 900 (which may be a palmtop computer, a mobile phone, a desktop computer, a laptop computer, etc.) and instructing the computer system 900 to carry out the method of the exemplary embodiment.
The computer system 900 comprises a computer module 902, input modules such as a keyboard 904 and a mouse 906, and a plurality of output devices such as a display 908 and a printer 910.
The computer module 902 is connected to a computer network 912 via a suitable transceiver 914, enabling access to, for example, the Internet or other network systems such as a Local Area Network (LAN) or a Wide Area Network (WAN).
The computer module 902 in the example comprises a processor 918, a Random Access Memory (RAM) 920 and a Read Only Memory (ROM) 922. The computer module 902 also comprises a number of Input/Output (I/O) interfaces, for example an I/O interface 924 to the display 908 (or to a display located at a remote location) and an I/O interface 926 to the keyboard 904.
The components of the computer module 902 typically communicate via an interconnected bus 928 and in a manner known to a person skilled in the relevant art.
The application program is typically supplied to the user of the computer system 900 encoded on a data storage medium such as a CD-ROM or a floppy disk, and is read utilizing a corresponding data storage medium drive of a data storage device 930. The application program is read and its execution controlled by the processor 918. Intermediate storage of program data may be accomplished utilizing the RAM 920.
In the foregoing manner, a method of generating shadows utilizing trapezoidal shadow maps is disclosed. Only a number of embodiments have been described; however, it will be apparent to a person skilled in the art in view of this disclosure that numerous changes and/or modifications can be made without departing from the scope of the invention.
Appendix A
The vertex program of light
Appendix B
Figure C20048002648600291

Claims (21)

1. A method of real-time shadow generation in a computer graphical representation of a scene, the method comprising:
- defining an eye's frustum, E, based on a desired view of the scene;
- defining a location of a light source illuminating at least a portion of the scene;
- generating a trapezoid having a top line l_t and a bottom line l_b parallel to each other and two side lines, to approximate E as represented in the post-perspective space L of the light, E being treated as a two-dimensional object on a front face of a unit cube;
- applying a trapezoidal transformation to objects within the trapezoid so as to transform them into a trapezoidal space for computing a shadow map; and
- determining, utilizing the computed shadow map, whether an object or part thereof is in shadow in the desired view of the scene.
2. The method according to claim 1, wherein the step of generating the top line l_t and the bottom line l_b, respectively, of the trapezoid approximating E in L comprises:
- computing a center line l passing through the center of the near plane and the center of the far plane of E;
- computing the 2D convex hull of E;
- computing l_t such that it is orthogonal to l and touches the boundary of the convex hull of E;
- computing l_b such that it is parallel to l_t and touches the boundary of the convex hull of E.
3. The method according to claim 1, wherein, where the center of the near plane and the center of the far plane of E substantially coincide, the smallest box bounding the far plane is defined as the trapezoid.
4. The method according to claim 1 or 2, wherein generating the side lines of the trapezoid approximating E in L comprises:
- specifying a distance, d, from the near plane of the eye's frustum, to define a focus region within the desired view of the scene;
- determining the point p_L on l in L whose distance from the near plane of the eye's frustum is d;
- computing the position of a point q on l, q being a centre of projection such that the bottom line and the top line of the trapezoid are mapped to y = -1 and y = +1 respectively, and p_L is mapped to a point on y = ξ, where ξ lies between -1 and +1; and
- constructing the two side lines of the trapezoid, each passing through q, wherein each side line touches the 2D convex hull of E on an opposite side of l.
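The position of the projection centre q in claim 4 follows from the three mapping constraints (l_t to y = +1, l_b to y = -1, p_L to y = ξ) applied to a one-dimensional perspective mapping along the centre line. A sketch under the assumption that λ is the distance from l_t to l_b along l and δ is the distance from l_t to p_L along l; the variable names are illustrative.

```python
import numpy as np

def projection_centre(top_point, direction, lam, delta, xi=-0.6):
    """Place the projection centre q on the centre line l (claim 4).

    top_point : point where l crosses the top line l_t.
    direction : unit vector along l, pointing from l_t towards l_b.
    lam       : distance from l_t to l_b along l.
    delta     : distance from l_t to the focus point p_L along l.
    xi        : target y for p_L after the mapping.

    Solving the 1D perspective map y(t) = (a*t + eta) / (t + eta), which already
    satisfies y(0) = +1, for y(lam) = -1 and y(delta) = xi gives the distance eta
    of q beyond l_t in closed form.
    """
    eta = (lam * delta + lam * delta * xi) / (lam - 2.0 * delta - lam * xi)
    # q lies on l at distance eta on the far side of l_t (away from l_b).
    return np.asarray(top_point, dtype=float) - eta * np.asarray(direction, dtype=float)
```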
5. The method according to claim 4, wherein ξ = -0.6.
6. The method according to claim 4, wherein a desired ξ is determined based on an iterative process minimising a loss quantity.
7. The method according to claim 6, wherein the iterative process terminates when a local minimum is found.
8. The method according to claim 6, wherein the iterative process is pre-computed, and the results are stored in a directly indexed look-up table.
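Claims 6 to 8 leave the loss quantity unspecified; the sketch below only illustrates the control flow they describe: a coarse iterative search over candidate ξ values that stops at the first local minimum, pre-computed over some parameter and stored in a directly indexed table. The loss function is a stand-in, not defined by the patent.

```python
import numpy as np

def best_xi(loss, candidates=np.linspace(-0.95, 0.5, 30)):
    """Walk candidate xi values and stop at the first local minimum (claim 7)."""
    prev = loss(candidates[0])
    for i in range(1, len(candidates)):
        cur = loss(candidates[i])
        if cur > prev:                 # loss started rising: local minimum found
            return float(candidates[i - 1])
        prev = cur
    return float(candidates[-1])

def build_xi_table(make_loss, ratios=np.linspace(0.05, 0.95, 19)):
    """Pre-compute xi once per parameter value and store it for direct look-up
    at run time (claim 8). `make_loss(r)` must return a loss callable."""
    return {round(float(r), 2): best_xi(make_loss(r)) for r in ratios}
```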
9. The method according to claim 1, comprising:
- determining the intersection, I, between the frustum of the light source and the eye's frustum;
- computing the centre point, e, of the vertices of I;
- defining a centre line, l_n, through the position of the eye and e, for use in generating the trapezoid.
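A sketch of the centre-line choice in claim 9, assuming the intersection I of the light's frustum with the eye's frustum is already available as a set of vertices and eye_position is given in the same space; the names are illustrative.

```python
import numpy as np

def centre_line_through_intersection(intersection_vertices, eye_position):
    """Centre line l_n through the eye and the centre point e of I (claim 9)."""
    verts = np.asarray(intersection_vertices, dtype=float)
    eye = np.asarray(eye_position, dtype=float)
    e = verts.mean(axis=0)                     # centre point of the vertices of I
    direction = e - eye
    direction /= np.linalg.norm(direction)
    return eye, direction                      # l_n as (point on line, unit direction)
```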
10. The method according to claim 9, further comprising defining a new focus region by moving the near plane and the far plane of the eye's frustum closer together so that they tightly bound I geometrically.
11. The method according to claim 1, wherein the trapezoidal transformation comprises mapping the four corners of the trapezoid to a unit square, being the shape of a square shadow map, or to a general rectangle, being the shape of a rectangular shadow map.
12. The method according to claim 11, wherein the size of the square or of the general rectangle is varied based on the configuration of the light source and the eye.
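Claims 11 and 12 describe mapping the trapezoid's four corners onto the shadow-map shape. One way to realise such a mapping is a 2D projective (3x3 homogeneous) transform fitted to the four corner correspondences; the sketch below solves for it with a small linear system. This is a generic four-point homography fit, not the specific matrix construction of the patent.

```python
import numpy as np

def corner_mapping(trapezoid, target):
    """3x3 projective transform sending the four trapezoid corners to `target`.

    trapezoid, target: sequences of four corresponding 2D corners, e.g. target
    may be the unit square for a square shadow map or any axis-aligned
    rectangle for a rectangular one (claims 11-12).
    """
    A, b = [], []
    for (x, y), (u, v) in zip(trapezoid, target):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.asarray(A, dtype=float), np.asarray(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)

# Example: map an illustrative trapezoid onto the unit square of a square shadow map.
T = corner_mapping([(-2, 0), (2, 0), (4, 3), (-4, 3)],
                   [(-1, -1), (1, -1), (1, 1), (-1, 1)])
```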
13. The method according to claim 1, wherein the trapezoidal transformation transforms only the x and y values of a vertex from the post-perspective space of the light into the trapezoidal space, while the z value remains the value in the post-perspective space of the light.
14. The method according to claim 13, comprising applying the trapezoidal transformation to obtain the x, y and w values in the trapezoidal space, x_T, y_T and w_T, and computing the z value in the trapezoidal space, z_T, as z_T = z_L · w_T / w_L, wherein z_L and w_L are respectively the z and w values in the post-perspective space of the light.
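A sketch of the coordinate handling in claims 13 and 14: the trapezoidal transform is applied to the vertex's light post-perspective coordinates to obtain x_T, y_T and w_T, while the stored z is the light-space z_L rescaled by w_T/w_L, so that the later perspective division by w_T reproduces z_L/w_L. Here N_T is assumed to be a 4x4 trapezoidal transform acting on x, y and w only.

```python
import numpy as np

def trapezoidal_vertex(p_L, N_T):
    """Transform a homogeneous vertex p_L = (x_L, y_L, z_L, w_L), given in the
    light's post-perspective space, into trapezoidal space while keeping the
    depth as a light-space quantity (claims 13-14)."""
    x_L, y_L, z_L, w_L = p_L
    x_T, y_T, _, w_T = N_T @ np.asarray(p_L, dtype=float)   # N_T leaves z untouched
    z_T = z_L * w_T / w_L                                    # so z_T / w_T == z_L / w_L
    return np.array([x_T, y_T, z_T, w_T])
```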
15. The method according to claim 13, comprising:
- in a first pass, in which the shadow map is generated,
back-transforming the coordinate values of a fragment from the trapezoidal space into the post-perspective space of the light, L, to obtain a first transformed fragment, computing the distance value z_L1 of the first transformed fragment from the light source in L utilising the plane equation of the first transformed fragment, adding an offset value to z_L1, and storing the resulting value in the shadow map as a depth value;
- in a second pass, in which shadowing is determined,
back-transforming texture coordinates assigned to a fragment by projective texturing from the trapezoidal space into L to obtain a second transformed fragment, computing the distance value z_L2 of the second transformed fragment from the light source in L utilising the plane equation of the second transformed fragment, and determining whether the fragment is in shadow based on a comparison of the depth value stored in the shadow map and z_L2.
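A CPU-side sketch of the per-fragment work in claim 15, assuming N_T_inv is the inverse of the trapezoidal transform restricted to (x, y, w) and the fragment's triangle is described in L by a plane A·x + B·y + C·z + D = 0; the offset and all names are illustrative.

```python
import numpy as np

def fragment_depth_in_L(x_T, y_T, N_T_inv, plane, offset=0.0):
    """Back-transform a fragment from trapezoidal space into the light's
    post-perspective space L and evaluate its depth there (claim 15)."""
    # Undo the 2D projective part of the trapezoidal transform.
    x, y, w = N_T_inv @ np.array([x_T, y_T, 1.0])
    x_L, y_L = x / w, y / w
    # Depth of the back-transformed fragment from its triangle's plane in L.
    A, B, C, D = plane
    z_L = -(A * x_L + B * y_L + D) / C
    return z_L + offset   # the first pass stores this; the second pass omits the offset
```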
16. The method according to claim 13, comprising:
- in a first pass, in which the shadow map is generated,
- during a vertex stage, transforming the coordinate values of a vertex into the trapezoidal space, and assigning to the vertex a texture coordinate equal to the coordinate values of the vertex in the post-perspective space of the light, and
- during a fragment stage, replacing the depth of a fragment with the texture coordinate of the fragment, adding an offset to the depth, and storing the resulting value in the shadow map as a depth value;
- in a second pass, in which shadowing is determined,
- during a vertex stage, transforming the coordinate values of a vertex into the post-perspective space of the eye, and assigning two texture coordinates to the vertex, the first being the coordinate values of the vertex in the post-perspective space of the light and the second being the coordinate values of the vertex in the trapezoidal space, and
- during a fragment stage, determining whether the fragment is in shadow based on a comparison between the depth value stored in the shadow map and a value based on the first texture coordinate of the fragment, wherein said depth value is indexed by the second texture coordinate of the fragment.
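Claim 16 keeps all of the work in the vertex and fragment stages; the sketch below mimics that data flow in plain functions. The matrices light_vp and N_T are assumed to be 4x4, and the depth comparison uses the conventional bias-then-compare test. This is only an illustration of the data flow, not shader code from the patent.

```python
import numpy as np

def vertex_first_pass(v_world, light_vp, N_T):
    """First pass, vertex stage: rasterised position in trapezoidal space,
    texture coordinate = position in the light's post-perspective space."""
    p_L = light_vp @ v_world          # light post-perspective space
    position = N_T @ p_L              # trapezoidal space (drives rasterisation)
    texcoord = p_L                    # carried through to the fragment stage
    return position, texcoord

def fragment_first_pass(texcoord, offset):
    """First pass, fragment stage: store the light-space depth plus an offset
    instead of the rasterised trapezoidal depth."""
    depth = texcoord[2] / texcoord[3]
    return depth + offset             # value written into the shadow map

def fragment_second_pass(tex_L, tex_T, shadow_map_lookup):
    """Second pass, fragment stage: index the shadow map with the trapezoidal
    coordinate, compare against the light-space depth."""
    stored = shadow_map_lookup(tex_T[0] / tex_T[3], tex_T[1] / tex_T[3])
    current = tex_L[2] / tex_L[3]
    return current > stored           # True means the fragment is in shadow
```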
17. The method according to claim 13, comprising:
- in a first pass, in which the shadow map is generated,
back-transforming the coordinate values of a fragment from the trapezoidal space into the post-perspective space of the light, L, to obtain a first transformed fragment, computing the distance value z_L1 of the first transformed fragment from the light source in L utilising the plane equation of the first transformed fragment, adding an offset value to z_L1, and storing the resulting value in the shadow map as a depth value;
- in a second pass, in which shadowing is determined,
- during a vertex stage, transforming the coordinate values of a vertex into the post-perspective space of the eye, and assigning two texture coordinates to the vertex, the first being the coordinate values of the vertex in the post-perspective space of the light and the second being the coordinate values of the vertex in the trapezoidal space, and
- during a fragment stage, determining whether the fragment is in shadow based on a comparison between the depth value stored in the shadow map and a value based on the first texture coordinate of the fragment, wherein said depth value is indexed by the second texture coordinate of the fragment.
18. The method according to claim 13, comprising:
- in a first pass, in which the shadow map is generated,
- during a vertex stage, transforming the coordinate values of a vertex into the trapezoidal space, and assigning to the vertex a texture coordinate equal to the coordinate values of the vertex in the post-perspective space of the light, and
- during a fragment stage, replacing the depth of a fragment with the texture coordinate of the fragment, adding an offset to the depth, and storing the resulting value in the shadow map as a depth value;
- in a second pass, in which shadowing is determined,
back-transforming texture coordinates assigned to a fragment by projective texturing from the trapezoidal space into L to obtain a second transformed fragment, computing the distance value z_L2 of the second transformed fragment from the light source in L utilising the plane equation of the second transformed fragment, and determining whether the fragment is in shadow based on a comparison of the depth value stored in the shadow map and z_L2.
19. The method according to claim 1, further comprising adding a polygon offset when utilising the computed shadow map to determine whether an object or part thereof is in shadow in the desired view of the scene.
20. The method according to claim 1, wherein two or more light sources each illuminate at least a portion of the scene, and the method is applied to each light source.
21. A system for real-time shadow generation in a computer graphical representation of a scene, the system comprising
- a processor unit for defining an eye's frustum, E, based on a desired view of the scene; for defining a location of a light source illuminating at least a portion of the scene; for generating a trapezoid, having a top line l_t and a bottom line l_b parallel to each other and two side lines, to approximate E in the post-perspective space of the light, L, as seen from the light source, wherein E is treated as a two-dimensional object formed by its front faces; for applying a trapezoidal transformation to objects within the trapezoid to transform them into a trapezoidal space for computing a shadow map; and for utilising the computed shadow map to determine whether an object or part thereof is in shadow in the desired view of the scene.
CNB2004800264860A 2003-07-31 2004-07-30 Trapezoidal shadow maps Expired - Fee Related CN100478995C (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US49136803P 2003-07-31 2003-07-31
US60/491,368 2003-07-31
US60/581,978 2004-06-21

Publications (2)

Publication Number Publication Date
CN1853201A CN1853201A (en) 2006-10-25
CN100478995C true CN100478995C (en) 2009-04-15

Family

ID=37134189

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2004800264860A Expired - Fee Related CN100478995C (en) 2003-07-31 2004-07-30 Trapezoidal shadow maps

Country Status (1)

Country Link
CN (1) CN100478995C (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4852555B2 (en) * 2008-01-11 2012-01-11 株式会社コナミデジタルエンタテインメント Image processing apparatus, image processing method, and program

Also Published As

Publication number Publication date
CN1853201A (en) 2006-10-25

Similar Documents

Publication Publication Date Title
CN101606181B (en) System and methods for real-time rendering of deformable geometry with global illumination
JP2667835B2 (en) Computer Graphics Display
US7068274B2 (en) System and method for animating real objects with projected images
US6677956B2 (en) Method for cross-fading intensities of multiple images of a scene for seamless reconstruction
US6930681B2 (en) System and method for registering multiple images with three-dimensional objects
US5742749A (en) Method and apparatus for shadow generation through depth mapping
US5704024A (en) Method and an apparatus for generating reflection vectors which can be unnormalized and for using these reflection vectors to index locations on an environment map
KR100316352B1 (en) 2D image rendering method in 3D object space and 3D scene image synthesis device
US20050041024A1 (en) Method and apparatus for real-time global illumination incorporating stream processor based hybrid ray tracing
JPH0757117A (en) Forming method of index to texture map and computer control display system
US20030038822A1 (en) Method for determining image intensities of projected images to change the appearance of three-dimensional objects
US20030043152A1 (en) Simulating motion of static objects in scenes
JP2009525526A (en) Method for synthesizing virtual images by beam emission
CN113272872A (en) Display engine for post-rendering processing
JP2000504453A (en) Method and apparatus for generating computer graphics images
US6552726B2 (en) System and method for fast phong shading
Yang et al. Nonlinear perspective projections and magic lenses: 3D view deformation
CN105205852A (en) Three-dimensional ship dynamic display method based on multiscale rendering and fitting
US9401044B1 (en) Method for conformal visualization
Davis et al. Interactive refractions with total internal reflection
JP4448637B2 (en) Fast and smooth rendering of lighted and textured spheres
CN108874932B (en) Ocean water sound field three-dimensional visualization method based on improved ray projection algorithm
US20070040832A1 (en) Trapezoidal shadow maps
CN100478995C (en) Trapezoidal shadow maps
US7034827B2 (en) Extension of fast phong shading technique for bump mapping

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20090415

Termination date: 20120730