CN101393651A - View field driving real-time shadow method - Google Patents


Info

Publication number
CN101393651A
CN101393651A (publication) · CNA2008102262164A / CN200810226216A (application)
Authority
CN
China
Prior art keywords
AABB bounding box
field of view
light source
shadow map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CNA2008102262164A
Other languages
Chinese (zh)
Inventor
沈旭昆
齐越
胡勋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Beijing University of Aeronautics and Astronautics
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CNA2008102262164A priority Critical patent/CN101393651A/en
Publication of CN101393651A publication Critical patent/CN101393651A/en
Pending legal-status Critical Current

Links

Images

Abstract

The invention relates to a field-of-view-driven real-time shadow method. In a three-dimensional virtual environment the method proceeds in three steps: first, the field of view is projected onto the light-source plane to produce a projection region, and a two-dimensional AABB (axis-aligned bounding box) is generated for that region; second, a shadow map is generated whose coverage is confined to the AABB; third, the shadow map is mapped back into the scene to generate shadows. The advantages of the invention are that unnecessary shadow drawing in the scene is avoided, shadow-map utilization is raised, and the anti-aliasing quality is high; moreover, only one or two shadow maps need to be drawn per frame, so the method is fast enough for real-time shadow rendering in large-scale scenes with millions of polygons.

Description

A field-of-view-driven real-time shadow method
Technical field
The present invention relates to real-time shadow methods in three-dimensional virtual environments, and in particular to a real-time shadow method driven by the field of view.
Background art
Among current shadow-map-based real-time shadow rendering methods, research on the shadow aliasing problem of the classical shadow-map method falls into two broad classes. The first class is based on Perspective Shadow Maps (PSM); these methods combat aliasing by reparameterizing the shadow map. The other class is based on Adaptive Shadow Maps (ASM); the basic idea of these methods is to raise the resolution of the shadow map to achieve anti-aliasing.
The PSM method transforms both the scene and the light source into post-projective space and generates the shadow map there. Under this transformation, objects near the viewpoint are enlarged and objects far from the viewpoint are shrunk. Because the shadow map is also generated in post-projective space, objects near the viewpoint occupy a larger share of the shadow map and are sampled more finely. Trapezoidal Shadow Maps (TSM) and Light Space Perspective Shadow Maps (LiSPSM) later made further improvements to PSM. These methods markedly reduce perspective aliasing, but they cannot reduce projective aliasing; moreover, their anti-aliasing quality depends strongly on the positions of the light source and the viewpoint, which makes them hard to apply to real-time shadow generation in large-scale scenes.
ASM reduces shadow aliasing by organizing the shadow map into a quadtree hierarchy: regions of the scene where shadows alias easily are mapped with a high-resolution shadow map, raising their sampling rate. Under current hardware this method is difficult to run in real time; GPU implementations exist, but they still fall short of the needs of real-time shadow rendering in complex scenes. Resolution-Matched Shadow Maps (RMSM), an improved implementation, uses GPGPU techniques to parallelize shadow generation; it draws better-looking shadows, but its speed is still unsatisfactory. Queried Virtual Shadow Maps (QVSM) and Fitted Virtual Shadow Maps (FVSM), the two most recent refinements of ASM, both subdivide the original shadow map with a dynamic quadtree and map alias-prone regions with high-resolution shadow maps. Both overcome the complex data structures required by ASM and can render anti-aliased shadows in real time on mainstream graphics cards, but drawing each frame requires drawing several shadow maps, so speed suffers. Practical Shadow Mapping (PrSM) is closest to the method of the present invention; by comparison, PrSM computes the bounding box of the entire view frustum and generates and maps the shadow map within that box. The size of the shadow map is governed by the frustum parameters, and when the frustum extends beyond the part of the scene actually visible in the field of view, shadow-map rendering is wasted and shadow quality suffers. Parallel-Split Shadow Maps (PSSM) can be regarded as an improvement of PrSM:
this method splits the view frustum into several sub-frustums along the view direction; under the light-source coordinate system a bounding box is computed for each sub-frustum, an independent sub-shadow-map is drawn for it and mapped into the scene, generating real-time shadows. Like QVSM and FVSM, this method needs to draw several shadow maps, so the rendering speed of shadows is affected.
Summary of the invention
Problem addressed by the invention: to overcome the deficiencies of the prior art and provide a field-of-view-driven real-time shadow method that is fast to render while drawing high-quality shadows.
Technical solution of the invention: a field-of-view-driven real-time shadow method comprising the following steps:
a. project the field of view onto the light-source plane to produce a projection region, and generate a two-dimensional AABB bounding box for this projection region; the field of view means the extent of the shadow-receiving geometry visible from the viewpoint position;
b. generate a shadow map within the extent of said AABB bounding box;
c. map this shadow map into the scene to generate shadows.
After the two-dimensional AABB bounding box is generated in said step a, whether this AABB bounding box needs to be split into two sub-AABB bounding boxes is judged; if so, the AABB bounding box is split, so that the shadow map generated afterwards has high anti-aliasing quality.
Splitting comprises two sub-processes: deciding whether a split is needed, and performing the split;
the decision process is: first compute the shadow-map utilization v; for the projection region obtained by projecting the field of view onto the light-source plane, compute its AABB bounding box, take the area Sp of the projection region and the area Sr of the AABB bounding box, and let v be the ratio Sp/Sr; set a threshold η; when v < η, the two-dimensional AABB bounding box must be split into two sub-AABB bounding boxes;
the split process is: find the intersection C of the two diagonals of the projection region and take C as the reference point of the split of the AABB bounding box; through C draw two mutually perpendicular lines, each parallel to a side of the AABB bounding box; with one of the lines, split the projection region into two polygons, compute a sub-AABB bounding box for each polygon, and record the sum S1 of the areas of the two sub-boxes; repeating the same for the other line gives another pair of sub-AABB bounding boxes with area sum S2; compare S1 and S2, and take the line corresponding to the smaller sum as the final splitting edge.
The process in said step a of projecting the field of view onto the light-source plane is:
(1) render the field of view from the viewpoint position;
(2) project the field of view onto the light-source plane with the following formula:
$$\begin{pmatrix} x' \\ y' \\ 0 \\ 1 \end{pmatrix} = m_{\mathrm{LightModelview}} \cdot m_{\mathrm{EyeModelview}}^{-1} \cdot \begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix} \qquad (1)$$

where (x, y, z) are the coordinates of an arbitrary fragment P within the field of view, (x', y') are the coordinates of the projection P' of that fragment on the light-source plane, and $m_{\mathrm{LightModelview}}$ and $m_{\mathrm{EyeModelview}}^{-1}$ denote, respectively, the modelview matrix under the light-source coordinate system and the inverse of the modelview matrix under the eye coordinate system.
Compared with the classical shadow-map method, the present invention has the following advantages:
(1) It can draw high-quality shadows.
The invention projects the field of view onto the light-source plane to produce a projection region and generates a two-dimensional bounding box for it. With this bounding box, the drawing of the shadow map is confined to the box, unnecessary shadow drawing in the scene is avoided, shadow-map utilization rises, the anti-aliasing effect is good, and the quality of the rendered shadows improves greatly.
(2) Rendering is fast and real-time.
The whole process draws only one or two shadow maps, runs in real time, and meets the needs of real-time shadow rendering in large-scale scenes with millions of polygons.
Description of drawings
Fig. 1 shows the difference between the field of view of the invention and the view frustum;
Fig. 2A is a schematic of generating the field of view and projecting it in the invention;
Fig. 2B is a schematic of generating the AABB bounding box for the projection region in the invention;
Fig. 3 shows the generation of shadows in the invention;
Fig. 4A is a schematic of a vertical split of the AABB bounding box in the invention;
Fig. 4B is a schematic of a horizontal split of the AABB bounding box in the invention;
Fig. 5 shows the concrete implementation steps of the method of the invention.
Embodiment
As shown in Fig. 5, the concrete implementation steps of the invention are as follows:
Step S501: project the field of view 201 onto the light-source plane 202, and generate on plane 202 a two-dimensional AABB bounding box 204 for the field of view 201.
The field of view of the invention, unlike the view frustum, is the extent of the shadow-receiving geometry visible from the viewpoint position. Fig. 1 shows the difference between the two: the view frustum 101 is the standard frustum of computer graphics, determined by the projective transformation matrix, whose depth range always extends from the near clipping plane 1011 to the far clipping plane 1012. The field of view 103 of the invention is instead the farthest extent of the scene that the viewpoint 102 can see, marked with mesh lines in Fig. 1.
Since the light source is a parallel (directional) light, one can imagine an infinitely large rectangular light-source plane in world space; every light ray in the scene is emitted from this plane along the direction perpendicular to it.
The directions of the four sides of the rectangular light-source plane are determined by the modelview matrix and the projection matrix under the light-source coordinate system; in world coordinates they coincide with the directions of the four sides of the near-clipping-plane rectangle used when the standard shadow-map method draws the scene from the light-source position.
As shown in Fig. 1, everything the current viewpoint 102 can see lies inside the field of view 103, so correct shadows are needed only inside the field of view 103; any shadow drawn outside it is wasted. With the shadow-map resolution fixed, raising the shadow-map utilization inside the field of view 103 is the key to the anti-aliasing of the invention.
Computing the field of view:
To obtain the extent of the field of view, the coordinates of the fragments farthest from the viewpoint within the current view frustum are needed. The invention obtains them by setting the depth-buffer clear value and the depth-test comparison function to glClearDepth(0.0) and glDepthFunc(GL_GEQUAL), respectively; the visible part of the scene then consists entirely of the regions farthest from the viewpoint, and these regions are the "field of view" of the invention.
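The effect of this reversed depth test can be modeled on the CPU. The sketch below is purely illustrative (a toy software depth buffer, not the patent's OpenGL path), assuming fragments arrive as (x, y, depth) tuples with larger depth meaning farther from the viewpoint:

```python
# Toy model of glClearDepth(0.0) + glDepthFunc(GL_GEQUAL): the buffer keeps,
# per pixel, the fragment FARTHEST from the viewpoint instead of the nearest.
def farthest_fragments(fragments, width, height):
    """fragments: iterable of (x, y, depth); returns per-pixel max depth."""
    depth_buffer = [[0.0] * width for _ in range(height)]  # cleared to 0.0
    for x, y, depth in fragments:
        if depth >= depth_buffer[y][x]:  # GL_GEQUAL comparison
            depth_buffer[y][x] = depth
    return depth_buffer

# Two fragments land on pixel (0, 0); the farther one (depth 0.9) survives,
# so the buffer ends up holding the far boundary of the visible scene.
buf = farthest_fragments([(0, 0, 0.4), (0, 0, 0.9)], width=2, height=1)
```

With the usual GL_LESS test the nearer fragment (0.4) would win; flipping the clear value and comparison is what turns the depth buffer into a record of the farthest visible surfaces.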
Projecting the field of view onto the light-source plane:
For an arbitrary fragment P in the field of view, with position (x, y, z) in the eye coordinate system, the coordinates (x', y') of its projection P' on the light-source plane are computed as:
$$\begin{pmatrix} x' \\ y' \\ 0 \\ 1 \end{pmatrix} = m_{\mathrm{LightModelview}} \cdot m_{\mathrm{EyeModelview}}^{-1} \cdot \begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix} \qquad (1)$$

where $m_{\mathrm{LightModelview}}$ and $m_{\mathrm{EyeModelview}}^{-1}$ denote, respectively, the modelview matrix under the light-source coordinate system and the inverse of the modelview matrix under the eye coordinate system.
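Equation (1) is an ordinary 4x4 homogeneous transform. The sketch below applies it with plain Python lists (no GPU); the identity matrices at the end are stand-ins used only to make the example checkable, not values the patent prescribes:

```python
def mat_vec(m, v):
    """Multiply a 4x4 matrix (row-major nested lists) by a 4-vector."""
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

def project_to_light_plane(p_eye, m_light_modelview, m_eye_modelview_inv):
    """Equation (1): take an eye-space fragment (x, y, z) back to world space
    with the inverse eye modelview, then into the light frame; (x', y') is
    its footprint on the light plane (light-space z is dropped)."""
    x, y, z = p_eye
    world = mat_vec(m_eye_modelview_inv, [x, y, z, 1.0])  # eye -> world
    light = mat_vec(m_light_modelview, world)             # world -> light
    return light[0], light[1]

# With both matrices set to the identity, the projection simply drops z.
I = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
xp, yp = project_to_light_plane((2.0, 3.0, -5.0), I, I)   # -> (2.0, 3.0)
```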
Fig. 2A shows the process of projecting the field of view onto the light-source plane, and Fig. 2B shows the AABB bounding box of the projection region on that plane. To raise shadow-map utilization as much as possible, the invention projects the field-of-view region 201 along the light direction onto the light-source plane, obtaining a quadrilateral projection region 203 on light-source plane 202. As shown in Fig. 2B, the AABB bounding box (Axis Aligned Bounding Box) 204 of projection region 203 is then computed on plane 202.
Computing the AABB bounding box of the projection:
The field of view is stored as a texture in video memory. Current GPUs, however, cannot compute the extent of the field of view from this texture by themselves, so this step must be handed off to the CPU. Transferring the whole texture from video memory to main memory would badly hurt the efficiency of real-time computation (for example, with a 1024*768 window and each fragment stored as four 32-bit floats, drawing one frame would transfer 12 MB from video memory to main memory). The invention therefore uses a small trick: with the render-to-texture (RTT) technique, the four corners of this texture are first rendered into a 2*2 texture, the 2*2 texture is transferred to the CPU, and the four corners form the quadrilateral of the field of view. Their projections on the light-source plane are then computed, followed by the two-dimensional AABB bounding box.
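Once the four projected corners are on the CPU, the two-dimensional AABB is simply the per-axis minimum and maximum. A minimal illustrative sketch (the corner values below are invented for the example, not taken from the patent):

```python
def aabb_2d(points):
    """Axis-aligned bounding box of 2D points: (xmin, ymin, xmax, ymax)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return min(xs), min(ys), max(xs), max(ys)

# Four projected corners of the field-of-view quadrilateral (illustrative).
corners = [(1.0, 2.0), (6.0, 1.0), (7.0, 5.0), (2.0, 6.0)]
box = aabb_2d(corners)   # -> (1.0, 1.0, 7.0, 6.0)
```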
After the two-dimensional AABB bounding box is generated in step S501, the method can further judge whether the AABB bounding box needs to be split into two sub-AABB bounding boxes and, if so, split it, so that the shadow map generated afterwards has high anti-aliasing quality. Splitting comprises two sub-processes: deciding whether to split, and performing the split.
Figs. 4A and 4B show schematics of the AABB bounding box splitting process.
When the angle between the line of sight and the shadow-receiving geometry is small, the field of view spans a large extent, the quadrilateral projected onto the light-source plane becomes long and narrow, the resulting AABB bounding box grows, shadow-map utilization drops, and the shadows alias. Inspired by the PSSM method, the invention adopts a shadow-map splitting scheme to solve this problem.
Deciding whether to split: first compute the shadow-map utilization v. On the light-source plane, compute the AABB bounding box 204 of the projection region 203 obtained by projecting the field of view, and take the areas Sp and Sr of the region and the box, respectively; their ratio Sp/Sr is v. Clearly, the smaller v is, the worse the anti-aliasing of the shadows. A threshold η is set (η lies between 50% and 100%; the invention chooses η = 60%); when v < η, the AABB bounding box R is split into two sub-AABB bounding boxes R1 and R2.
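The decision test follows directly from the definitions: Sp from the shoelace formula applied to the projected quadrilateral, Sr from its AABB, and a comparison of v = Sp/Sr against η. A minimal sketch under those assumptions, with η = 0.6 as chosen in the text and example quadrilaterals invented for the demonstration:

```python
def polygon_area(pts):
    """Shoelace formula for a simple polygon given as (x, y) vertices."""
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2.0

def utilization(quad):
    """v = Sp / Sr: polygon area over the area of its AABB."""
    xs = [p[0] for p in quad]
    ys = [p[1] for p in quad]
    s_r = (max(xs) - min(xs)) * (max(ys) - min(ys))   # AABB area Sr
    return polygon_area(quad) / s_r

def needs_split(quad, eta=0.6):
    """Split the AABB when shadow-map utilization falls below eta."""
    return utilization(quad) < eta

# A long, thin diagonal quadrilateral fills its AABB poorly -> must split;
# an axis-aligned square fills its AABB exactly (v = 1.0) -> no split.
thin = [(0.0, 0.0), (10.0, 9.0), (10.0, 10.0), (0.0, 1.0)]
square = [(0.0, 0.0), (4.0, 0.0), (4.0, 4.0), (0.0, 4.0)]
```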
The split itself proceeds as follows: find the intersection C of the two diagonals of projection region 203 and take C as the reference point of the split of bounding box 204. Through C two mutually perpendicular lines can be drawn, each parallel to a side of AABB bounding box 204. First, with one of them, line a, split projection region 203 into two polygons 2031 and 2032, compute AABB bounding boxes 2041 and 2042 for these two polygons, and record the sum S1 of the areas of 2041 and 2042. Repeating the same for the other line b gives two AABB bounding boxes 2043 and 2044 with area sum S2. Compare S1 and S2; the line corresponding to the smaller sum gives the final splitting edge. The rationale is still to make the bounding boxes as tight as possible and so raise shadow-map utilization.
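The choice between the two candidate cuts can be sketched concretely. The block below is an illustrative reading of this paragraph, not the patented code: it clips the quadrilateral against a vertical and a horizontal line through C (Sutherland-Hodgman clipping against one half-plane at a time), sums the sub-AABB areas for each axis, and picks the cheaper one; the example quadrilateral is invented for the demonstration.

```python
def clip_halfplane(poly, axis, c, keep_leq):
    """Sutherland-Hodgman clip of polygon `poly` against the half-plane
    p[axis] <= c (keep_leq=True) or p[axis] >= c (keep_leq=False)."""
    inside = (lambda p: p[axis] <= c) if keep_leq else (lambda p: p[axis] >= c)
    out = []
    n = len(poly)
    for i in range(n):
        a, b = poly[i], poly[(i + 1) % n]
        if inside(a):
            out.append(a)
        if inside(a) != inside(b):   # edge crosses the cut: add intersection
            t = (c - a[axis]) / (b[axis] - a[axis])
            out.append((a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1])))
    return out

def aabb_area(pts):
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    return (max(xs) - min(xs)) * (max(ys) - min(ys))

def diagonal_intersection(quad):
    """Intersection C of the diagonals p0-p2 and p1-p3 of a convex quad."""
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = quad
    d1 = (x2 - x0, y2 - y0)          # diagonal p0 -> p2
    d2 = (x3 - x1, y3 - y1)          # diagonal p1 -> p3
    den = d1[0] * d2[1] - d1[1] * d2[0]
    t = ((x1 - x0) * d2[1] - (y1 - y0) * d2[0]) / den
    return (x0 + t * d1[0], y0 + t * d1[1])

def best_split_axis(quad):
    """Try a vertical (axis 0) and a horizontal (axis 1) cut through C and
    return the axis whose two sub-AABBs have the smaller total area."""
    c = diagonal_intersection(quad)
    sums = []
    for axis in (0, 1):
        s = (aabb_area(clip_halfplane(quad, axis, c[axis], True)) +
             aabb_area(clip_halfplane(quad, axis, c[axis], False)))
        sums.append(s)
    return (0 if sums[0] <= sums[1] else 1), sums

quad = [(0.0, 0.0), (8.0, 2.0), (10.0, 6.0), (2.0, 4.0)]
axis, sums = best_split_axis(quad)   # the vertical cut wins for this quad
```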
After AABB bounding box 204 is split into the two sub-AABB bounding boxes 2041 and 2042 (or 2043 and 2044), shadow maps are drawn inside these two sub-boxes following the method of Figs. 2A and 2B and mapped into the scene separately, generating real-time shadows.
In step S502, the shadow map is generated; its coverage is confined to AABB bounding box 204.
Drawing the shadow map proceeds as follows: move the viewpoint to the position of the light source and switch the projection to orthographic (Ortho2D), with the four sides (left, right, bottom, top) of the orthographic projection coinciding with the four sides of the AABB bounding box. This guarantees that the drawn shadow map covers only the AABB bounding box.
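Fitting the orthographic projection to the box amounts to feeding the AABB extents straight into the projection as left/right/bottom/top. A trivial illustrative sketch (in the actual OpenGL path these values would be the arguments of gluOrtho2D or glOrtho):

```python
def ortho_from_aabb(aabb):
    """Map an AABB (xmin, ymin, xmax, ymax) on the light plane to the
    (left, right, bottom, top) arguments of an orthographic projection,
    so the shadow map covers exactly the box and nothing else."""
    xmin, ymin, xmax, ymax = aabb
    return xmin, xmax, ymin, ymax   # left, right, bottom, top

params = ortho_from_aabb((1.0, 1.0, 7.0, 6.0))   # -> (1.0, 7.0, 1.0, 6.0)
```

Every texel of the shadow map then falls inside the AABB, which is precisely how the coverage constraint of step S502 is enforced.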
In step S503, the shadow map is mapped into the scene to generate shadows.
In this step, the shadow map is drawn from the light-source position according to the two-dimensional AABB bounding box on the light-source plane; the deferred-shadow method is then used to draw the final result.
The deferred-shadow method is a variant of deferred shading (Deferred Shading). Its benefit is that when several shadow maps are mapped into the scene, the number of times the scene must be drawn is reduced, which raises real-time rendering speed: only after the scene's fragments have passed the various tests and finally become pixels on the screen are those pixels shaded for shadow.
In the first step of the deferred-shadow method, the multiple render-to-texture (Multiple Render To Texture) technique is used to draw the scene from the viewpoint position, saving the color of each fragment and its coordinates relative to the viewpoint in two separate textures. The second step is identical to the classical shadow-map method: the scene is drawn from the light-source position and each fragment's depth relative to the light source is stored in the shadow map. In the third step, a full-screen quadrilateral is first drawn and the color texture rendered from the viewpoint is mapped onto it. For each pixel of the color texture, its position is fetched from the position texture and transformed by a matrix coordinate change into the light-source coordinate system, then compared with the depth stored in the shadow map to decide whether the fragment lies in shadow. Fragments in shadow receive shadow shading; fragments outside shadow keep their color unchanged.
Fig. 3 shows the shadow generation process. In the invention, the shadow map is drawn and mapped only within the extent of the AABB bounding box 204 of the projection region 203, generating shadows 301 in the scene (as shown in Fig. 3). This way of drawing the shadow map shrinks to the greatest extent the scene area the shadow map must cover within the field of view; with the shadow-map resolution unchanged, the number of fragments mapped to each shadow-map texel is minimized, which yields the anti-aliasing effect.
Drawing shadows with the shadow map proceeds as follows: draw the scene from the viewpoint position; for each fragment in the scene, with coordinates (x, y, z), transform by matrix into the light-source coordinate system to obtain (x', y', z'), and compare with the corresponding value stored in the shadow map. If the fragment's depth is greater than the depth stored in the shadow map, there is an occluder between this fragment and the light source: the fragment is in shadow and receives shadow shading; otherwise the fragment lies outside the shadow.
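The depth comparison in this paragraph can be modeled with a toy shadow map. Illustrative only: the light-space transform is reduced to a direct texel lookup, the usual depth bias is omitted, and all depth values are invented for the example:

```python
def in_shadow(frag_light_xyz, shadow_map):
    """Classical shadow-map test: a fragment is shadowed when its light-space
    depth z' exceeds the depth stored at (x', y') in the shadow map,
    i.e. some occluder lies between it and the light."""
    x, y, z = frag_light_xyz
    return z > shadow_map[int(y)][int(x)]

# 2x2 toy shadow map of depths seen from the light (illustrative values).
shadow_map = [[0.30, 0.80],
              [0.55, 0.95]]
lit      = in_shadow((0.0, 0.0, 0.25), shadow_map)  # 0.25 <= 0.30 -> lit
shadowed = in_shadow((1.0, 0.0, 0.90), shadow_map)  # 0.90 >  0.80 -> shadowed
```

A production implementation would add a small depth bias before the comparison to suppress self-shadowing acne; the patent text does not discuss this detail.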
The method was implemented with the OpenGL graphics library in combination with the GLSL shading language, under the VC2005 environment.
Although the invention has been disclosed above in terms of a preferred embodiment, this is not intended to limit it. A person of ordinary skill in the art may make various modifications and variations without departing from the spirit and scope of the invention; the scope of protection of the invention is therefore defined by the appended claims.

Claims (4)

1. A field-of-view-driven real-time shadow method, characterized by the following steps:
a. projecting the field of view onto the light-source plane to produce a projection region, and generating a two-dimensional AABB bounding box for this projection region; the field of view means the extent of the shadow-receiving geometry visible from the viewpoint position;
b. generating a shadow map within the extent of said AABB bounding box;
c. mapping this shadow map into the scene to generate shadows.
2. The field-of-view-driven real-time shadow method according to claim 1, characterized in that: after the two-dimensional AABB bounding box is generated in said step a, whether this AABB bounding box needs to be split into two sub-AABB bounding boxes is judged; if so, the AABB bounding box is split, so that the shadow map generated afterwards has high anti-aliasing quality.
3. The field-of-view-driven real-time shadow method according to claim 2, characterized in that: said splitting comprises two sub-processes, deciding whether a split is needed and performing the split;
the decision process is: first compute the shadow-map utilization v; for the projection region obtained by projecting the field of view onto the light-source plane, compute its AABB bounding box, take the area Sp of the projection region and the area Sr of the AABB bounding box, and let v be the ratio Sp/Sr; set a threshold η; when v < η, the two-dimensional AABB bounding box must be split into two sub-AABB bounding boxes;
the split process is: find the intersection C of the two diagonals of the projection region and take C as the reference point of the split; through C draw two mutually perpendicular lines, each parallel to a side of the AABB bounding box; with one of the lines, split the projection region into two polygons, compute a sub-AABB bounding box for each polygon, and record the sum S1 of the areas of the two sub-boxes; repeating the same for the other line gives another pair of sub-AABB bounding boxes with area sum S2; compare S1 and S2, and take the line corresponding to the smaller sum as the final splitting edge.
4. The field-of-view-driven real-time shadow method according to claim 1 or 2, characterized in that the process in said step a of projecting the field of view onto the light-source plane is:
(1) render the field of view from the viewpoint position;
(2) project the field of view onto the light-source plane with the following formula:

$$\begin{pmatrix} x' \\ y' \\ 0 \\ 1 \end{pmatrix} = m_{\mathrm{LightModelview}} \cdot m_{\mathrm{EyeModelview}}^{-1} \cdot \begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix} \qquad (1)$$

where (x, y, z) are the coordinates of an arbitrary fragment P within the field of view, (x', y') are the coordinates of the projection P' of P on the light-source plane, and $m_{\mathrm{LightModelview}}$ and $m_{\mathrm{EyeModelview}}^{-1}$ denote, respectively, the modelview matrix under the light-source coordinate system and the inverse of the modelview matrix under the eye coordinate system.
CNA2008102262164A 2008-11-07 2008-11-07 View field driving real-time shadow method Pending CN101393651A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNA2008102262164A CN101393651A (en) 2008-11-07 2008-11-07 View field driving real-time shadow method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNA2008102262164A CN101393651A (en) 2008-11-07 2008-11-07 View field driving real-time shadow method

Publications (1)

Publication Number Publication Date
CN101393651A true CN101393651A (en) 2009-03-25

Family

ID=40493932

Family Applications (1)

Application Number Title Priority Date Filing Date
CNA2008102262164A Pending CN101393651A (en) 2008-11-07 2008-11-07 View field driving real-time shadow method

Country Status (1)

Country Link
CN (1) CN101393651A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101882324A (en) * 2010-05-19 2010-11-10 北京航空航天大学 Soft shadow real-time rendering method based on bidirectional half-shadow graph
CN101937577A (en) * 2010-09-17 2011-01-05 浙江大学 Method capable of generating shadow with boundary pixel oversampling effect
CN102365657A (en) * 2009-03-27 2012-02-29 汤姆森特许公司 Method for generating shadows in an image
CN104205173A (en) * 2012-03-29 2014-12-10 汤姆逊许可公司 Method for estimating the opacity level in a scene and corresponding device
CN104952103A (en) * 2015-05-19 2015-09-30 中国人民解放军理工大学 Viewpoint-dependent shadow map creating method
CN106991717A (en) * 2017-03-16 2017-07-28 珠海市魅族科技有限公司 A kind of image processing method being applied under three-dimensional scenic and system
CN103455998B (en) * 2012-06-04 2017-12-22 中兴通讯股份有限公司 The detection method and device of shade in video image
CN110955739A (en) * 2019-04-16 2020-04-03 北京仁光科技有限公司 Plotting processing method, shared image plotting method, and plot reproducing method
CN113509721A (en) * 2020-06-18 2021-10-19 完美世界(北京)软件科技发展有限公司 Shadow data determination method, device, equipment and readable medium

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102365657A (en) * 2009-03-27 2012-02-29 汤姆森特许公司 Method for generating shadows in an image
CN102365657B (en) * 2009-03-27 2014-09-17 汤姆森特许公司 Method for generating shadows in an image
CN101882324B (en) * 2010-05-19 2012-03-28 北京航空航天大学 Soft shadow real-time rendering method based on bidirectional half-shadow graph
CN101882324A (en) * 2010-05-19 2010-11-10 北京航空航天大学 Soft shadow real-time rendering method based on bidirectional half-shadow graph
CN101937577A (en) * 2010-09-17 2011-01-05 浙江大学 Method capable of generating shadow with boundary pixel oversampling effect
CN104205173B (en) * 2012-03-29 2017-03-29 汤姆逊许可公司 For estimating the method and corresponding equipment of the opacity level in scene
CN104205173A (en) * 2012-03-29 2014-12-10 汤姆逊许可公司 Method for estimating the opacity level in a scene and corresponding device
CN103455998B (en) * 2012-06-04 2017-12-22 中兴通讯股份有限公司 The detection method and device of shade in video image
CN104952103A (en) * 2015-05-19 2015-09-30 中国人民解放军理工大学 Viewpoint-dependent shadow map creating method
CN104952103B (en) * 2015-05-19 2018-03-09 中国人民解放军理工大学 The shadow map creation method that a kind of viewpoint relies on
CN106991717A (en) * 2017-03-16 2017-07-28 珠海市魅族科技有限公司 A kind of image processing method being applied under three-dimensional scenic and system
CN106991717B (en) * 2017-03-16 2020-12-18 珠海市魅族科技有限公司 Image processing method and system applied to three-dimensional scene
CN110955739A (en) * 2019-04-16 2020-04-03 北京仁光科技有限公司 Plotting processing method, shared image plotting method, and plot reproducing method
CN113509721A (en) * 2020-06-18 2021-10-19 完美世界(北京)软件科技发展有限公司 Shadow data determination method, device, equipment and readable medium
CN113509721B (en) * 2020-06-18 2023-10-13 完美世界(北京)软件科技发展有限公司 Shadow data determining method, apparatus, device and readable medium


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Open date: 20090325