CN109658494A - Shadow rendering method in a three-dimensional visual graph - Google Patents

Shadow rendering method in a three-dimensional visual graph

Info

Publication number
CN109658494A
Authority
CN
China
Prior art keywords
depth
datum
benchmark
light source
mapped
Prior art date
Legal status
Granted
Application number
CN201910013863.5A
Other languages
Chinese (zh)
Other versions
CN109658494B (en)
Inventor
赵耀
李�泳
雷尧
王瑶瑶
Current Assignee
DMS Corp.
Original Assignee
Beijing Dameisheng Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Dameisheng Technology Co Ltd
Priority to CN202310449109.2A (publication CN116485987A)
Priority to CN202310449108.8A (publication CN116468845A)
Priority to CN201910013863.5A (publication CN109658494B)
Publication of CN109658494A
Application granted
Publication of CN109658494B
Legal status: Active
Anticipated expiration

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects; G06T15/20 Perspective computation; G06T15/205 Image-based rendering
    • G06T15/50 Lighting effects; G06T15/506 Illumination models
    • G06T15/50 Lighting effects; G06T15/60 Shadow generation
    • G06T2215/00 Indexing scheme for image rendering; G06T2215/12 Shadow map, environment map

Abstract

The present invention relates to a shadow rendering method, the method comprising: determining a first depth reference plane (Z1) and a second depth reference plane (Z2) for the region to be rendered; mapping to the maximum depth reference (Zmax) the depth value of the first depth reference plane (Z1), which of the two planes is closer to the light source (300); and mapping to the minimum depth reference (Zmin) the depth value of the second depth reference plane (Z2), which is farther from the light source (300). The present invention can reduce the flickering of shadows.

Description

Shadow rendering method in a three-dimensional visual graph
Technical field
The present invention relates to shadow rendering methods, and more particularly to a shadow rendering method in a three-dimensional visual graph.
Background technique
In real scenes, a shadow is a common illumination phenomenon, generally referring to the dark region produced when light, propagating along straight lines, is blocked by an opaque object. In virtual scenes, the shadows of objects produced by three-dimensional modeling are essential for conveying the spatial relationships between the visualization screen and the examined objects. A variety of shadow rendering algorithms currently exist for real-time rendering, and they fall broadly into three classes: global illumination algorithms, shadow mapping algorithms, and shadow volume algorithms. Global illumination algorithms, with ray tracing as their representative, produce realistic results but require an enormous amount of computation and are difficult to run in real time; shadow mapping and shadow volume algorithms can be regarded as approximations of global illumination. The shadow volume algorithm depends heavily on the geometric complexity of the objects in the scene and imposes strict constraints on the shapes composing the scene, so its applicability is poor. The shadow mapping algorithm, in contrast, places almost no restrictions on scene complexity, is simple in principle, and draws quickly, which has made it the most common shadow rendering algorithm in the real-time rendering field today. Its basic principle is that, taking the light source as the viewpoint, every part of the scene visible from the light source is lit, while the regions the light source cannot see are in shadow.
The traditional shadow mapping method consists of two main steps:
Step 1: the entire scene is drawn with the light source position as the viewpoint, and the visible depth of each visible pixel in the scene is recorded. These visible depths represent, for a given pixel, the distance between the light source and the object nearest to it; the visible depth information is stored to form a depth texture.
Step 2: the scene is drawn from the true viewpoint. For each drawn point, in other words pixel, its distance to the light source, in other words its actual depth, is computed and compared with the corresponding depth value in the depth map generated in the first step. If the distance to the light source is greater than the visible depth, in other words if the actual depth from the light source is greater than the visible depth, another object lies between the point and the light source, and the point or pixel is in shadow; otherwise, the point or pixel is not in shadow.
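For illustration only, the two steps can be sketched on the CPU as follows. This is a minimal sketch of conventional shadow mapping, not code from the patent; the DepthMap structure and the function names are our own placeholders, and the rasterization that feeds storeVisibleDepth is assumed to exist elsewhere.

    #include <cmath>
    #include <vector>

    // Minimal sketch of traditional two-pass shadow mapping (illustrative only).
    // The depth texture stores, per light-space pixel, the distance from the
    // light source to the nearest lit surface (the "visible depth").
    struct DepthMap {
        int width = 0, height = 0;
        std::vector<float> visibleDepth;   // assumed initialized to a large value
        float& at(int x, int y) { return visibleDepth[y * width + x]; }
    };

    // Step 1: drawing the scene with the light source as viewpoint keeps the
    // minimum depth per pixel (the rasterizer feeding this call is not shown).
    void storeVisibleDepth(DepthMap& map, int x, int y, float depthFromLight) {
        float& d = map.at(x, y);
        d = std::fmin(d, depthFromLight);
    }

    // Step 2: a drawn point is in shadow if its actual distance to the light
    // source exceeds the visible depth stored for the corresponding pixel.
    bool inShadow(DepthMap& map, int x, int y, float distanceToLight) {
        return distanceToLight > map.at(x, y);
    }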
In order to perform the depth comparison, the depth of each point or pixel is processed into a number in [0, 1]. According to the shadow generation methods for three-dimensional scenes in the prior art, the reference plane near the light source is the plane of depth 0, the reference plane far from the light source is the plane of depth 1, and depths are then judged against these two reference planes. Although this way of setting the reference planes matches people's cognitive habits, it causes a problem: in the region near 1, floating-point precision is markedly inferior to the region near 0. Moreover, in order to let the light reach as many points of the three-dimensional scene as possible and to better simulate sunlight shining on the earth as nearly parallel rays, the light source is placed very far from the three-dimensional scene, so that the shadows of the view flicker. This flickering not only harms the visual effect; it may also affect the observer's mood and cause visual fatigue.
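The precision imbalance can be observed directly. The following sketch is our own illustration, assuming IEEE-754 single precision; it prints the gap to the next representable float near 0 and near 1.

    #include <cmath>
    #include <cstdio>

    // Prints the gap to the next representable float near 0 and near 1.
    int main() {
        float near0 = 1e-12f, near1 = 1.0f;
        std::printf("gap near 1e-12: %g\n", std::nextafterf(near0, 2.0f) - near0);
        std::printf("gap near 1:     %g\n", std::nextafterf(near1, 2.0f) - near1);
    }

Near 1 the gap is on the order of 1e-7, while near 1e-12 it is on the order of 1e-19, matching the distribution discussed below in connection with Fig. 4.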
Summary of the invention
In view of the deficiencies of the prior art, the present invention provides a shadow rendering method, the method comprising: determining a first depth reference plane and a second depth reference plane for the region to be rendered; mapping to the maximum depth reference the depth value of the first depth reference plane, which of the two planes is closer to the light source; and mapping to the minimum depth reference the depth value of the second depth reference plane, which is farther from the light source.
According to a preferred embodiment, a shadow rendering method comprises: defining a bounding box enveloping all or part of the volume of the stereoscopic scene to be rendered, wherein the first and second virtual faces of the bounding box perpendicular to the optical axis serve as the first depth reference plane and the second depth reference plane respectively; mapping to the maximum depth reference the depth value of the first depth reference plane, which of the two planes is closer to the light source; and mapping to the minimum depth reference the depth value of the second depth reference plane, which is farther from the light source.
According to a preferred embodiment, a shadow rendering method comprises: determining the currently visible objects according to the current viewpoint position and direction; determining the light-ray bounding box of the currently visible objects in the three-dimensional scene; determining the first depth reference plane and the second depth reference plane according to the order in which the cross sections of the bounding box perpendicular to the optical axis meet the illuminated object; and determining, based on the first depth reference plane and the second depth reference plane, the shadow formed by the illuminated object in the three-dimensional scene, wherein the first depth reference plane is mapped to the maximum depth reference Zmax and the second depth reference plane is mapped to the minimum depth reference Zmin, the first depth reference plane being closer to the light source than the second depth reference plane.
According to a preferred embodiment, the maximum depth reference is 1 and the minimum depth reference is 0.
According to a preferred embodiment, for a parallel light source the projective transformation is as follows. Let w = beam width, h = beam height, n = Z1, f = Z2. The projection matrix M is:

    M = | 2/w  0    0         0        |
        | 0    2/h  0         0        |
        | 0    0    -1/(f-n)  f/(f-n)  |
        | 0    0    0         1        |

From this the depth mapping function is obtained: z' = (f - z) / (f - n), where Z1 ≤ z ≤ Z2.
According to a preferred embodiment, for a point light source the projective transformation is as follows. Let aspect = the aspect ratio of the light cone, fovY = the subtended angle of the light cone, n = Z1, f = Z2. The optimized projection matrix M is:

    M = | cot(fovY/2)/aspect  0            0         0        |
        | 0                   cot(fovY/2)  0         0        |
        | 0                   0            -n/(f-n)  nf/(f-n) |
        | 0                   0            1         0        |

Considering only the depth component, the depth mapping function is obtained: z' = n(f - z) / (z(f - n)), where Z1 ≤ z ≤ Z2.
According to a preferred embodiment, the method further comprises: determining the mapped depth map according to the set maximum depth reference and minimum depth reference; determining the mapped actual depth of the respective pixel according to the set maximum depth reference and minimum depth reference; and performing shadow rendering based on the actual depth of the respective pixel and the visible depth of the corresponding pixel in the depth map.
According to a preferred embodiment, the method further comprises: when the straight-line distance from the first depth reference plane to the respective pixel is less than the corresponding depth value, determining that the respective pixel is in shadow.
According to a preferred embodiment, a computing device is configured to execute the method described in one of the aforementioned preferred embodiments.
According to a preferred embodiment, a central processing unit is configured to execute the method described in one of the aforementioned preferred embodiments.
According to a preferred embodiment, a graphics processor is configured to execute the method described in one of the aforementioned preferred embodiments.
Description of the drawings
Fig. 1 is a schematic diagram of a preferred embodiment of the present invention;
Fig. 2 is a schematic diagram of shadow mapping in the prior art, wherein Fig. 2a illustrates the principle by which a spatial point is in shadow and Fig. 2b illustrates a spatial point that is not in shadow;
Fig. 3 is a schematic diagram of the prior art;
Fig. 4 is a floating-point precision distribution map for a computer;
Fig. 5 is a schematic diagram of the density variation trend of floating-point numbers from 0 to 1 and of the floating-point positions mainly used by the prior art;
Fig. 6 is a schematic diagram of the density variation trend of floating-point numbers from 0 to 1 and of the floating-point positions mainly used by the present invention;
Fig. 7 is a schematic diagram of a preferred embodiment of the present invention;
Fig. 8 is a schematic diagram of a preferred embodiment using a parallel light source;
Fig. 9 is the depth mapping curve for a parallel light source in the prior art;
Fig. 10 is the depth mapping curve of the present invention for a parallel light source;
Fig. 11 is a schematic diagram of a preferred embodiment using a point light source;
Fig. 12 is the depth mapping curve for a point light source in the prior art;
Fig. 13 is the depth mapping curve of the present invention for a point light source; and
Fig. 14 is a flowchart of a preferred embodiment of the present invention.
Reference signs list
100: scene    200: bounding box    300: light source
310: optical axis    400: true viewpoint    Z1: first depth reference plane
Z2: second depth reference plane    Zmin: minimum depth reference    Zmax: maximum depth reference
ZA: visible depth    ZB: straight-line distance    P1: first spatial point
P2: second spatial point    z: depth data before mapping
Specific embodiments
The invention is described in detail below with reference to Figs. 1 to 14.
Embodiment 1
The present embodiment discloses a shadow generation method, in other words a shadow mapping method, in other words a shadow rendering method, in other words a shadow rendering method in a game scene, in other words a shadow rendering method in a virtual scene, in other words a shadow rendering method applied to a three-dimensional visual graph, in other words a shadow rendering method in a three-dimensional visual graph, and especially a shadow rendering method in a three-dimensional visual graph. Where no conflict or contradiction arises, the whole and/or part of the preferred embodiments of other embodiments may serve as a supplement to the present embodiment. The method of the invention may be realized by the system of the invention and/or by other substitutable components, for example by using the components of the system of the invention to realize the method of the invention.
According to a preferred embodiment, the method may comprise determining a first depth reference plane Z1 and a second depth reference plane Z2 for the region to be rendered. The method may comprise setting to the maximum depth reference Zmax the depth value of the first depth reference plane Z1, which of the two planes is closer to the light source 300. The method may comprise setting to the minimum depth reference Zmin the depth value of the second depth reference plane Z2, which is farther from the light source 300. Preferably, the method may comprise rendering shadows according to the set maximum depth reference Zmax and minimum depth reference Zmin. Preferably, the method may comprise determining, according to the set maximum depth reference Zmax and minimum depth reference Zmin, whether a given spatial point is in shadow. Preferably, the method may comprise determining the mapped depth map according to the set maximum depth reference Zmax and minimum depth reference Zmin. The method may comprise determining the mapped actual depth of the respective pixel according to the set maximum depth reference Zmax and minimum depth reference Zmin. Preferably, the method may comprise determining whether the respective pixel is in shadow based on its actual depth and the depth map; for example, if the mapped actual depth of the respective pixel is less than the corresponding visible depth in the depth map, the respective pixel is in shadow. Preferably, the method may comprise performing shadow rendering based on the actual depth of the respective pixel and the corresponding visible depth in the depth map. Preferably, the method may comprise determining that the respective pixel is in shadow when its actual depth is less than the corresponding visible depth of the respective pixel in the depth map. Preferably, the method may be used for shadow rendering in game scenes, preferably three-dimensional game scenes, and preferably for the shadow rendering of three-dimensional visual graphs. Preferably, Z1 and Z2 are actual distance values to the light source, while Zmax and Zmin are the values after mapping; what is finally compared are mapped values, and what is stored in the depth map are also mapped values. If the mapped actual depth is less than the value, in other words the visible depth, at the corresponding pixel of the texture, the point is farther from the light source and is therefore in shadow.
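As a minimal sketch of the comparison just described (our own illustration, not the patent's code): under the reversed mapping a smaller mapped depth means farther from the light source 300, so the shadow test is inverted relative to the traditional one.

    // Reversed-depth shadow test (illustrative only). mappedActualDepth is the
    // pixel's depth after the Z1 -> 1, Z2 -> 0 mapping; storedVisibleDepth is
    // the mapped value stored in the depth map when rendering from the light.
    // Depth now DECREASES with distance from the light source, so a pixel is
    // in shadow when its mapped actual depth is LESS than the stored value.
    bool inShadowReversed(float mappedActualDepth, float storedVisibleDepth,
                          float bias) { // small positive value against self-shadowing
        return mappedActualDepth < storedVisibleDepth - bias;
    }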
According to a preferred embodiment, the method may comprise: determining whether the respective pixel is in shadow according to the mapped or set maximum depth reference Zmax and the mapped or set minimum depth reference Zmin. Preferably, the method may comprise generating or rendering shadows based on the mapped maximum depth reference Zmax and the mapped minimum depth reference Zmin, or based on the set maximum depth reference Zmax and the set minimum depth reference Zmin. Because the depth value of the first depth reference plane Z1, which of the two planes is closer to the light source 300, is set to the maximum depth reference Zmax, and the depth value of the second depth reference plane Z2, which is farther from the light source 300, is set to the minimum depth reference Zmin, the present invention differs from the prior art. In the prior art, a sampling point farther from the light source has a larger depth value and a sampling point nearer the light source has a smaller depth value; in the present invention, a sampling point farther from the light source in fact has a smaller depth value and a sampling point nearer the light source has a larger depth value. Judging whether a spatial point is in shadow by its straight-line distance from the light source likewise differs from the prior art. In the prior art, a spatial point farther from the light source has a larger straight-line distance and a nearer one a smaller straight-line distance; in the present invention, a spatial point farther from the light source in fact has a smaller straight-line distance and a nearer one a larger straight-line distance. Therefore, the method may comprise: when the straight-line distance from the first depth reference plane Z1 to the respective pixel is less than the corresponding depth value, determining that the respective pixel is in shadow. By contrast, as mentioned in the background, in the prior art a pixel is determined to be in shadow when the straight-line distance from the first depth reference plane Z1 to the pixel is greater than the corresponding depth value.
According to a preferred embodiment, the method may comprise: dividing the local scene within the bounding box that corresponds to the displayed image into several independently rendered sub-boxes; determining the minimum depth value within each sub-box and the maximum depth value within the image region; determining, among all pixels corresponding to a given sub-box, the pixel with the maximum straight-line distance to the first depth reference plane Z1; and, when that maximum straight-line distance is less than the minimum depth value within the sub-box, determining that all pixels corresponding to the sub-box are in shadow. The method may comprise, in response to determining that all pixels corresponding to the sub-box are in shadow, omitting the per-pixel depth-value reads within the sub-box that would otherwise determine whether each pixel is in shadow (a sketch follows this paragraph). In this way the present invention achieves at least the following advantageous effects: first, after the bounding box is divided into several sub-boxes, the sub-region rendering effect is better; second, when the maximum straight-line distance to the first depth reference plane Z1 among all pixels of a sub-box is less than the minimum depth value within that sub-box, all pixels in the sub-box are necessarily in shadow, so there is no need to continue reading a depth value for each pixel to judge whether it is in shadow, which speeds up shadow rendering. According to a preferred embodiment, the method may comprise: determining, among the several sub-boxes, all sub-boxes enveloping one continuous object; determining the outline of the shadow of the continuous object; and, among all pixels corresponding to those sub-boxes, filling the pixels closer to the outline of the shadow with a shallower shadow color.
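The sub-box early-out might be sketched as follows; this is our own illustrative reading of the embodiment with hypothetical names, and it assumes the per-pixel straight-line distances to Z1 and the sub-box's minimum depth value have already been gathered.

    #include <algorithm>
    #include <vector>

    // Early-out for one sub-box (illustrative only; vector assumed non-empty).
    struct SubBox {
        std::vector<float> pixelDistToZ1; // straight-line distance to Z1, per pixel
        float minVisibleDepth;            // minimum depth value within the sub-box
    };

    // If even the pixel with the maximum straight-line distance to Z1 is below
    // the sub-box minimum depth value, every pixel of the sub-box is in shadow
    // and the per-pixel depth reads can be skipped entirely.
    bool wholeSubBoxInShadow(const SubBox& box) {
        float maxDist = *std::max_element(box.pixelDistToZ1.begin(),
                                          box.pixelDistToZ1.end());
        return maxDist < box.minVisibleDepth;
    }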
According to a preferred embodiment, the method may comprise: determining a first depth reference plane Z1 and a second depth reference plane Z2 for the region to be rendered; mapping to the maximum depth reference Zmax the depth value of the first depth reference plane Z1, which of the two planes is closer to the light source 300; and mapping to the minimum depth reference Zmin the depth value of the second depth reference plane Z2, which is farther from the light source 300.
According to a preferred embodiment, the method may comprise: defining a bounding box 200 enveloping all or part of the volume of the stereoscopic scene 100 to be rendered, the first and second virtual faces of the bounding box 200 perpendicular to the optical axis 310 serving as the first depth reference plane Z1 and the second depth reference plane Z2 respectively; mapping to the maximum depth reference Zmax the depth value of the first depth reference plane Z1, which of the two planes is closer to the light source 300; and mapping to the minimum depth reference Zmin the depth value of the second depth reference plane Z2, which is farther from the light source 300. The first virtual face may be closer to the light source 300 than the second virtual face. Preferably, the maximum depth reference Zmax may be 1 and the minimum depth reference Zmin may be 0.
According to a preferred embodiment, the method may comprise: determining a first depth reference plane Z1 and a second depth reference plane Z2 for the region to be rendered; mapping to 1 the depth value of the first depth reference plane Z1, which of the two planes is closer to the light source 300; and mapping to 0 the depth value of the second depth reference plane Z2, which is farther from the light source 300. In the prior art, out of habit, the depth value of the first depth reference plane Z1 near the light source 300 is mapped to 0 and the depth value of the second depth reference plane Z2 farther from the light source 300 is mapped to 1. The problem is that, in the region near 1, floating-point precision is far worse than in the region near 0, so the shadows of the view flicker.
According to a preferred embodiment, the method may comprise: defining a bounding box 200 enveloping all or part of the volume of the stereoscopic scene 100 to be rendered; taking the first and second virtual faces of the bounding box 200 perpendicular to the optical axis 310 as the first depth reference plane Z1 and the second depth reference plane Z2 respectively; mapping to 1 the depth value of the first depth reference plane Z1, which of the two planes is closer to the light source 300; and mapping to 0 the depth value of the second depth reference plane Z2, which is farther from the light source 300. The first virtual face may be closer to the light source 300 than the second virtual face. Preferably, the optical axis 310 may refer to the optical axis 310 of the light source 300, more specifically to the center line of the beam or light column emitted by the light source 300.
According to a preferred embodiment, the method may comprise: determining the currently visible objects according to the current viewpoint position and direction; determining the light-ray bounding box of the currently visible objects in the three-dimensional scene 100; determining the first depth reference plane Z1 and the second depth reference plane Z2 according to the order in which the cross sections of the bounding box 200 perpendicular to the optical axis 310 meet the illuminated object; and determining, based on the first depth reference plane Z1 and the second depth reference plane Z2, the shadow formed by the illuminated object in the three-dimensional scene 100. Preferably, the first depth reference plane Z1 may be mapped to the maximum depth reference Zmax and the second depth reference plane Z2 may be mapped to the minimum depth reference Zmin. Preferably, the first depth reference plane Z1 is closer to the light source 300 than the second depth reference plane Z2.
According to a preferred embodiment, the method may comprise:
S1: determining, in the three-dimensional scene 100, the bounding box 200 formed around the illuminated object;
S2: determining the ray envelope lines formed when the beam of the light source 300 irradiates and passes through the bounding box 200, wherein the ray envelope lines constitute the bounding box 200 in the three-dimensional scene 100;
S3: determining the first depth reference plane Z1 and the second depth reference plane Z2 according to the order in which the cross sections of the bounding box 200 perpendicular to the optical axis 310 meet the illuminated object;
S4: determining, based on the first depth reference plane Z1 and the second depth reference plane Z2, the shadow formed by the illuminated object in the three-dimensional scene 100, wherein the first depth reference plane Z1 is mapped to the maximum depth reference Zmax and the second depth reference plane Z2 is mapped to the minimum depth reference Zmin, the first depth reference plane Z1 being closer to the light source 300 than the second depth reference plane Z2. Preferably, the beam of the light source 300 may be at least one of cuboid-columnar, cylindrical, and conical. Preferably, the beam envelope box may be the bounding box.
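Step S3 might, for example, be realized by projecting the corners of the bounding box 200 onto the optical axis 310; the following is our own illustrative sketch under that assumption, not the patent's code.

    #include <algorithm>
    #include <array>

    struct Vec3 { float x, y, z; };

    float dot(const Vec3& a, const Vec3& b) {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    // Project the 8 corners of the bounding box 200 onto the optical axis 310;
    // the supporting plane nearest the light source becomes Z1 and the farthest
    // becomes Z2 (axisDir is a unit vector pointing away from the light).
    void referencePlanes(const std::array<Vec3, 8>& corners, const Vec3& lightPos,
                         const Vec3& axisDir, float& z1, float& z2) {
        z1 = 1e30f; z2 = -1e30f;
        for (const Vec3& c : corners) {
            Vec3 v{c.x - lightPos.x, c.y - lightPos.y, c.z - lightPos.z};
            float d = dot(v, axisDir);  // signed distance along the optical axis
            z1 = std::min(z1, d);       // plane closer to the light source 300
            z2 = std::max(z2, d);       // plane farther from the light source 300
        }
    }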
According to a preferred embodiment, reference is made to Fig. 2 and Fig. 3, which show the state of the art. Owing to cognitive habit and industry convention, those skilled in the art habitually set the depth reference planes in ascending order of distance from the light source 300. The traditional shadow mapping method is described here with reference to Fig. 2, which explains its principle well. Referring to Fig. 2a, the straight-line distance ZB of the first spatial point P1 in the scene 100 from the first depth reference plane Z1 is compared with the visible depth ZA stored along the line between the light source 300 and the first spatial point P1. If the straight-line distance ZB is greater than the visible depth ZA, an object blocks the line between the light source 300 and the first spatial point P1, and the first spatial point P1 is in shadow. Referring to Fig. 2b, the straight-line distance ZB of the second spatial point P2 in the scene 100 from the first depth reference plane Z1 is compared with the visible depth ZA along the line between the light source 300 and the second spatial point P2. If the straight-line distance ZB equals the visible depth ZA, nothing blocks the line between the light source 300 and the second spatial point P2, and the second spatial point P2 is not in shadow. Preferably, the visible depth ZA is the distance from the first depth reference plane Z1 to the occluder along the line between the light source 300 and the spatial point, i.e., the distance light can travel from the first depth reference plane Z1 toward the spatial point. As mentioned in the background, however, this setting causes the rendered shadows to flicker. Through research the applicant found that those skilled in the art, never having considered the relationship between the variation trend of floating-point precision and the distribution pattern of the objects of shadow rendering between the minimum depth reference Zmin and the maximum depth reference Zmax, never exchanged the positions of the minimum depth reference Zmin and the maximum depth reference Zmax. Doing so violates cognitive habit and industry convention, yet achieves a technical effect that the prior art does not suggest at all: an unexpected result.
For ease of understanding, a finer explanation is given here. Fig. 3 shows a prior art arrangement: the depth value of the first depth reference plane Z1, which of the two planes is closer to the light source 300, is mapped to the minimum depth reference Zmin, and the depth value of the second depth reference plane Z2, which is farther from the light source 300, is mapped to the maximum depth reference Zmax; this is the opposite of the setting of the present invention. Fig. 4 gives a floating-point precision distribution map. Taking 32-bit floating-point numbers as an example, i.e., the interval shown by the top line in Fig. 4, near 0, e.g. around 10^-12, the minimum discernible precision is about 10^-19, whereas near 1, i.e. around 10^0, the minimum discernible precision is about 10^-7. If every value representable by a floating-point number in 0 to 1 is drawn on a number axis, a density effect similar to Fig. 5 is obtained: the points are denser closer to 0 and sparser farther from 0. Note that, for a more intuitive display, only some of the points are drawn to illustrate the density variation trend of floating-point numbers from 0 to 1; in reality there are more points. It can be seen that the representation of floating-point numbers in a computer is discontinuous. Because of this discontinuity, a visible depth ZA and a straight-line distance ZB whose sizes could in themselves be compared may, after a series of transformations, be mapped to the same position in [0, 1]: what was originally ZB > ZA may become ZB = ZA, and the chance of this error is greater farther from 0, because the points farther from 0 are sparser. In the shadow computation, a point on a three-dimensional object, such as the first spatial point P1 in Fig. 2, is transformed through a series of operations into a specific straight-line distance ZB in [0, 1], which is then compared with the minimum depth, in other words the visible depth ZA, at the corresponding position of the pre-rendered depth map of the scene 100. If ZB > ZA + m, the point is occluded and darkened; otherwise it is not occluded and illumination is computed normally, where m is a positive value, and a very small one, used to prevent a point from being occluded by itself. Now if the position of a three-dimensional object, the light source 300, or the camera in the scene 100 moves even slightly, the result may jump back and forth between the cases ZB > ZA + m and ZB = ZA + m, i.e., the situation shown in Fig. 5 in which two points with a sparse, barely noticeable gap alternate; the program then bounces back and forth between darkened and normal illumination, which is the flickering, and this flickering is more obvious in places far from 0. If instead the end far from the light source 300 is mapped to 0 and the end near the light source 300 is mapped to 1, this flickering can be visually reduced. Although flickering is unavoidable, it is far less than in the prior art, because in most cases three-dimensional objects are viewed at places far from the light source 300, and near 0 the sizes of floating-point numbers can be distinguished more accurately. Referring to Fig. 6, if the positions of the minimum depth reference Zmin and the maximum depth reference Zmax are transposed, the corresponding points lie in the region near the value 0, where the sizes of numerical values can be expressed more accurately, so the sizes of two values can be compared accurately, which improves the flickering.
Referring to Fig. 7, in order to simulate the shadows of a real environment more faithfully, the bounding box 200 generally not only envelopes the region that the true viewpoint 400 can see but is also slightly enlarged, so that objects whose shadows are cast into the region visible to the true viewpoint 400 are included; for example, the building on the right casts its shadow into the region that the true viewpoint 400 can see. The small dashed box inside the bounding box 200 in Fig. 7 may be the region that the true viewpoint 400 can see, while the bounding box 200 envelopes a relatively larger volume. The region that the true viewpoint 400 can see is in fact mostly closer to the second depth reference plane Z2; consequently, setting the second depth reference plane Z2 to the minimum depth reference Zmin improves the flickering, achieving a technical effect unexpected from the state of the art. Referring to Fig. 8, there are two three-dimensional objects in the three-dimensional scene 100, whose sides are indicated in this side view by the black bars. It can further be seen in the figure that the light in the scene 100 is parallel light incident from the upper right toward the lower left. The bounding box 200 is virtually set along the projection direction; the enveloping surface of the bounding box 200 close to the light incident direction is denoted Z1, and the enveloping surface far from the light incident direction is denoted Z2, these two enveloping surfaces being orthogonal to the light incident direction and respectively tangent to the outermost vertices of the two three-dimensional objects. The depth value, i.e., the straight-line distance ZB, is defined along the light projection direction. For an object in the three-dimensional scene 100, because its shape is continuous, the values of ZB over its surface should be continuous everywhere; however, since the computer must express them with discrete data, errors exist. In Fig. 8, the distance of the lower right corner of the object represented by the right black bar from the light source 300 is greater than the distance of its upper right corner from the light source 300, so the precision of the ZB values at the lower right corner is lower. Therefore, for the right black bar, because its lower right corner is farther from the light source 300, the bottom region of the three-dimensional graphic flickers more easily, and this troubles the three-dimensional scene 100.
According to a preferred embodiment, referring to Fig. 8, for the projection of parallel light the conventional projective transformation is as follows. Let w = beam width, h = beam height, n = Z1, f = Z2. Before optimization, the projection matrix M is:

    M = | 2/w  0    0        0         |
        | 0    2/h  0        0         |
        | 0    0    1/(f-n)  -n/(f-n)  |
        | 0    0    0        1         |

Considering only the depth component, i.e., the component along the light projection direction, the depth mapping function is obtained: z' = (z - n) / (f - n), where Z1 ≤ z ≤ Z2. Letting Z1 = 10 and Z2 = 1000 gives the depth mapping curve shown in Fig. 9. In this situation it can be seen that at the far end, 1000 meters away, i.e., where the objects in the scene 100 are observed, the depth value is mapped to 1, while the near end is mapped to 0. Before optimization, according to z' = (z - n) / (f - n), the values where the objects in the scene 100 are observed are close to 1, so the precision is insufficient and the problem of flickering appears. For example, assume the depth data of a pixel before mapping is z = 900; it is mapped to (900 - 10) / (1000 - 10) ≈ 0.899, a value close to 1.
According to a preferred embodiment, for the projection of parallel light the projective transformation after the optimization of the present invention is as follows. Let w = beam width, h = beam height, n = Z1, f = Z2. After optimization, the projection matrix M is:

    M = | 2/w  0    0         0        |
        | 0    2/h  0         0        |
        | 0    0    -1/(f-n)  f/(f-n)  |
        | 0    0    0         1        |

The depth mapping function is z' = (f - z) / (f - n), where Z1 ≤ z ≤ Z2; the corresponding depth mapping curve is shown in Fig. 10. After optimization it can be seen that at the far end, 1000 meters away, i.e., where the objects in the scene 100 are observed, the depth value is mapped to 0, while the near end is mapped to 1. Because the floating-point values where the objects in the scene 100 are observed are now close to zero, the precision is high and flickering is less likely. For example, the pixel assumed at z = 900 before the optimization corresponds after optimization to an equivalent depth of z = 110 (= 10 + 1000 - 900) measured from the far side, and is mapped to (1000 - 900) / (1000 - 10) ≈ 0.101, a value close to 0.
According to a preferred embodiment, referring to Fig. 11, for the projection method of a point light source 300 the conventional projective transformation is as follows. Let aspect = the aspect ratio of the light cone, fovY = the subtended angle of the light cone, n = Z1, f = Z2. Before optimization, the projection matrix M is:

    M = | cot(fovY/2)/aspect  0            0        0         |
        | 0                   cot(fovY/2)  0        0         |
        | 0                   0            f/(f-n)  -nf/(f-n) |
        | 0                   0            1        0         |

Considering only the depth component, the depth mapping function is obtained: z' = f(z - n) / (z(f - n)), where Z1 ≤ z ≤ Z2. Letting Z1 = 10 and Z2 = 1000 (units assumed to be meters) gives the depth mapping curve shown in Fig. 12. Before optimization, at the far end 1000 meters away the depth value is mapped to 1 and the near end is mapped to 0; worse still, the high-precision values near 0 are used up quickly, leaving only low-precision values for the far end.
For the projection method of a point light source 300, the projective transformation after the optimization of the present invention is as follows. Let aspect = the aspect ratio of the light cone, fovY = the subtended angle of the light cone, n = Z1, f = Z2. After optimization, the projection matrix M is:

    M = | cot(fovY/2)/aspect  0            0         0        |
        | 0                   cot(fovY/2)  0         0        |
        | 0                   0            -n/(f-n)  nf/(f-n) |
        | 0                   0            1         0        |

Considering only the depth component, the depth mapping function is obtained: z' = n(f - z) / (z(f - n)), where Z1 ≤ z ≤ Z2. Letting Z1 = 10 and Z2 = 1000 (units assumed to be meters) gives the depth mapping curve shown in Fig. 13. As can be seen from Fig. 13, at the far end 1000 meters away the depth value is mapped to 0 and the near end is mapped to 1; that is, the far end can now make use of the high-precision values to reduce flickering.
According to a preferred embodiment, the method may comprise: determining two virtual faces, parallel to each other and perpendicular to the optical axis 310, the first virtual face being closer to the light source 300 than the second virtual face; mapping to 1 the normal-direction depth value of the first virtual plane used to determine the depth of the illuminated object in the three-dimensional scene 100; and mapping to 0 the normal-direction depth value of the second virtual plane used to determine the depth of the illuminated object in the three-dimensional scene 100.
According to a preferred embodiment, the method may comprise: after defining the bounding box 200 enveloping all or part of the volume of the stereoscopic scene 100 to be rendered and determining the first depth reference plane Z1, determining the angle between the vector of the first depth reference plane Z1 and a horizontal plane set in the stereoscopic scene 100, and determining the color depth of the shadow according to that angle. The larger the absolute value of the angle, the darker, or alternatively the shallower, the shadow color; preferably, the larger the absolute value of the angle, the shallower the shadow color. The method may comprise: placing the stereoscopic scene 100 to be rendered at the corresponding position of a virtual earth; setting a movable light source as the virtual sun of the stereoscopic scene 100 to be rendered; and controlling the motion trajectory of the virtual sun relative to the virtual earth according to the current time of the real world and the motion trajectory of the real sun relative to the earth at the current time. Preferably, this control may be periodic; for example, the period may be 1 second, 1 minute, 5 minutes, 10 minutes, 30 minutes, or 1 hour, the size of the period being set according to the performance of the current computing device. In this way the present invention achieves at least the following advantageous effects: the player can perceive the change of real-world time through the change of shadow color depth over time and position, which improves the rendering authenticity of the shadows, lets some players sense the passage of time indirectly, and prevents players from sinking into the game without any feeling of elapsing time, thereby serving as a psychological hint to them.
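One possible reading of this embodiment is sketched below. The sine model of the sun's path and all names are our own assumptions for illustration; the patent only requires that the virtual sun follow the real sun's trajectory for the current real-world time and that a larger absolute angle yield a shallower shadow.

    #include <cmath>

    // Toy model: derive a sun elevation from the real wall-clock hour and fade
    // the shadow darkness with it, re-evaluated once per update period.
    struct SunState {
        float elevationRad;   // 0 at the horizon, pi/2 overhead
        float shadowDarkness; // 1 = darkest shadow, 0 = lightest
    };

    SunState updateVirtualSun(float hourOfDay) { // real-world time, 0..24
        const float pi = 3.14159265f;
        // Sun above the horizon between 6:00 and 18:00 in this toy model.
        float s = std::sin((hourOfDay - 6.0f) / 12.0f * pi);
        SunState out;
        out.elevationRad = std::fmax(s, 0.0f) * (pi / 2.0f);
        // Larger absolute angle -> shallower shadow, per the embodiment above.
        out.shadowDarkness = 1.0f - out.elevationRad / (pi / 2.0f);
        return out;
    }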
According to a preferred embodiment, the method may comprise: determining the dynamically moving game character in the game scene, drawing out the outline of the game character's shadow by projection, and filling the pixels located within the outline with shadow. In this way the present invention achieves at least the following advantageous effect: rendering is faster. Preferably, when filling the pixels within the outline with shadow, the method may comprise filling the pixels closer to the outline of the shadow with a shallower shadow color.
According to a preferred embodiment, the method may comprise: determining whether a given shadow is a static shadow or a dynamic shadow; for a static shadow, rendering based on the previously determined shadow as long as the relative position of the light source and the object casting the shadow is unchanged; and for a dynamic shadow, determining the shadow by real-time rendering. In this way the present invention achieves at least the following advantageous effects: computing overhead is reduced and rendering is faster. Preferably, the method may comprise, during shadow rendering, applying a static depth texture to stationary objects and a dynamic depth texture to moving objects. According to a preferred embodiment, a computing device may be configured to execute all of the method of the invention or at least part of the method of the invention. Preferably, the computing device comprises a memory, at least one central processing unit and/or at least one graphics processor. The memory may store one or more instructions, and the at least one central processing unit and/or at least one graphics processor may be configured, when executing the one or more instructions, to execute all or at least part of the method of the invention. Preferably, the central processing unit is a CPU and the graphics processor is a GPU. For example, the computing device may be configured to: determine a first depth reference plane Z1 and a second depth reference plane Z2 for the region to be rendered; set to the maximum depth reference Zmax the depth value of the first depth reference plane Z1, which of the two planes is closer to the light source 300; set to the minimum depth reference Zmin the depth value of the second depth reference plane Z2, which is farther from the light source 300; and render shadows according to the set maximum depth reference Zmax and minimum depth reference Zmin. The computing device may be configured to determine, according to the set maximum depth reference Zmax and minimum depth reference Zmin, whether a given spatial point is in shadow. The computing device may be configured to determine that a pixel is in shadow when the straight-line distance from the first depth reference plane Z1 to the pixel is less than the corresponding depth value. For brevity, the examples here are merely illustrative and not exhaustive.
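The static/dynamic split might be cached as sketched below; this is our own illustration with hypothetical names, standing in a depth texture handle with an integer id.

    #include <unordered_map>

    // Cache for the static/dynamic shadow split (illustrative only).
    class ShadowCache {
        std::unordered_map<int, int> staticShadows; // object id -> depth texture id
        int nextTexture = 0;
        int renderDepthTexture(int /*objectId*/) { return ++nextTexture; } // stub
    public:
        // A static shadow is determined once and reused while neither the light
        // source nor the occluder moves; a dynamic shadow is re-rendered.
        int shadowFor(int objectId, bool isStatic, bool lightOrObjectMoved) {
            auto it = staticShadows.find(objectId);
            if (isStatic && !lightOrObjectMoved && it != staticShadows.end())
                return it->second;
            return staticShadows[objectId] = renderDepthTexture(objectId);
        }
    };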
According to a preferred embodiment, a central processing unit may be configured to execute all or at least part of the method of the invention. The central processing unit may be connected or coupled to a memory storing one or more instructions, and may be configured, when executing the one or more instructions, to execute all or at least part of the method of the invention.
According to a preferred embodiment, a graphics processor may be configured to execute all or at least part of the method of the invention. The graphics processor may be connected or coupled to a memory storing one or more instructions, and may be configured, when executing the one or more instructions, to execute all or at least part of the method of the invention.
It should be noted that the above specific embodiments are exemplary. Under the inspiration of the present disclosure, those skilled in the art may conceive various solutions, and those solutions also belong to the scope of the present disclosure and fall within the scope of protection of the present invention. It should be understood by those skilled in the art that the description of the invention and its drawings are illustrative and do not limit the claims. The scope of protection of the present invention is defined by the claims and their equivalents.

Claims (10)

1. A shadow rendering method, characterized in that the method comprises:
determining a first depth reference plane (Z1) and a second depth reference plane (Z2) for the region to be rendered,
mapping to the maximum depth reference (Zmax) the depth value of the first depth reference plane (Z1), which of the first depth reference plane (Z1) and the second depth reference plane (Z2) is closer to the light source (300), and
mapping to the minimum depth reference (Zmin) the depth value of the second depth reference plane (Z2), which is farther from the light source (300).
2. A shadow rendering method, characterized in that the method comprises:
defining a bounding box (200) enveloping all or part of the volume of the stereoscopic scene (100) to be rendered, wherein the first and second virtual faces of the bounding box (200) perpendicular to the optical axis (310) serve as the first depth reference plane (Z1) and the second depth reference plane (Z2) respectively,
mapping to the maximum depth reference (Zmax) the depth value of the first depth reference plane (Z1), which of the two planes is closer to the light source (300), and
mapping to the minimum depth reference (Zmin) the depth value of the second depth reference plane (Z2), which is farther from the light source (300).
3. A shadow rendering method, characterized in that the method comprises:
determining the currently visible objects according to the current viewpoint position and direction;
determining the light-ray bounding box of the currently visible objects in the three-dimensional scene (100);
determining the first depth reference plane (Z1) and the second depth reference plane (Z2) according to the order in which the cross sections of the bounding box (200) perpendicular to the optical axis (310) meet the illuminated object; and
determining, based on the first depth reference plane (Z1) and the second depth reference plane (Z2), the shadow formed by the illuminated object in the three-dimensional scene (100), wherein the first depth reference plane (Z1) is mapped to the maximum depth reference Zmax and the second depth reference plane (Z2) is mapped to the minimum depth reference Zmin, the first depth reference plane (Z1) being closer to the light source (300) than the second depth reference plane (Z2).
4. The method according to one of the preceding claims, characterized in that the maximum depth reference (Zmax) is 1 and the minimum depth reference (Zmin) is 0.
5. The method according to one of the preceding claims, characterized in that, for a parallel light source, the projective transformation is as follows:
let w = beam width, h = beam height, n = Z1, f = Z2; the projection matrix M is:

    M = | 2/w  0    0         0        |
        | 0    2/h  0         0        |
        | 0    0    -1/(f-n)  f/(f-n)  |
        | 0    0    0         1        |

whereby the depth mapping function is obtained: z' = (f - z) / (f - n), where Z1 ≤ z ≤ Z2.
6. The method according to one of the preceding claims, characterized in that, for a point light source, the projective transformation is as follows:
let aspect = the aspect ratio of the light cone, fovY = the subtended angle of the light cone, n = Z1, f = Z2; the optimized projection matrix M is:

    M = | cot(fovY/2)/aspect  0            0         0        |
        | 0                   cot(fovY/2)  0         0        |
        | 0                   0            -n/(f-n)  nf/(f-n) |
        | 0                   0            1         0        |

considering only the depth component, the depth mapping function is obtained: z' = n(f - z) / (z(f - n)), where Z1 ≤ z ≤ Z2.
7. The method according to one of the preceding claims, characterized in that the method further comprises:
determining the mapped depth map according to the set maximum depth reference (Zmax) and minimum depth reference (Zmin);
determining the mapped actual depth of the respective pixel according to the set maximum depth reference (Zmax) and minimum depth reference (Zmin); and
performing shadow rendering based on the actual depth of the respective pixel and the corresponding visible depth of the pixel in the depth map.
8. A computing device, characterized in that the computing device is configured to execute the method according to one of claims 1 to 7.
9. A central processing unit, characterized in that the central processing unit is configured to execute the method according to one of claims 1 to 7.
10. A graphics processor, characterized in that the graphics processor is configured to execute the method according to one of claims 1 to 7.
CN201910013863.5A 2019-01-07 2019-01-07 Shadow rendering method in three-dimensional visual graph Active CN109658494B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202310449109.2A CN116485987A (en) 2019-01-07 2019-01-07 Real environment simulation method and device based on shadow rendering
CN202310449108.8A CN116468845A (en) 2019-01-07 2019-01-07 Shadow mapping method and device
CN201910013863.5A CN109658494B (en) 2019-01-07 2019-01-07 Shadow rendering method in three-dimensional visual graph

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910013863.5A CN109658494B (en) 2019-01-07 2019-01-07 Shadow rendering method in three-dimensional visual graph

Related Child Applications (2)

Application Number Title Priority Date Filing Date
CN202310449108.8A Division CN116468845A (en) 2019-01-07 2019-01-07 Shadow mapping method and device
CN202310449109.2A Division CN116485987A (en) 2019-01-07 2019-01-07 Real environment simulation method and device based on shadow rendering

Publications (2)

Publication Number Publication Date
CN109658494A 2019-04-19
CN109658494B 2023-03-31

Family

ID=66119547

Family Applications (3)

Application Number Title Priority Date Filing Date
CN201910013863.5A Active CN109658494B (en) 2019-01-07 2019-01-07 Shadow rendering method in three-dimensional visual graph
CN202310449109.2A Pending CN116485987A (en) 2019-01-07 2019-01-07 Real environment simulation method and device based on shadow rendering
CN202310449108.8A Pending CN116468845A (en) 2019-01-07 2019-01-07 Shadow mapping method and device


Country Status (1)

Country Link
CN (3) CN109658494B (en)


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102004007835A1 (en) * 2004-02-17 2005-09-15 Universität des Saarlandes Device for displaying dynamic complex scenes
US8264498B1 (en) * 2008-04-01 2012-09-11 Rockwell Collins, Inc. System, apparatus, and method for presenting a monochrome image of terrain on a head-up display unit
US9530244B2 (en) * 2014-11-11 2016-12-27 Intergraph Corporation Method and apparatus for shadow estimation and spreading
CN105989611B (en) * 2015-02-05 2019-01-18 南京理工大学 The piecemeal perceptual hash tracking of hatched removal

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6704025B1 (en) * 2001-08-31 2004-03-09 Nvidia Corporation System and method for dual-depth shadow-mapping
CN101055645A (en) * 2007-05-09 2007-10-17 北京金山软件有限公司 A shade implementation method and device
CN102129677A (en) * 2010-01-15 2011-07-20 富士通株式会社 Method and system for forming shadow
CN102157012A (en) * 2011-03-23 2011-08-17 深圳超多维光电子有限公司 Method for three-dimensionally rendering scene, graphic image treatment device, equipment and system
CN104299257A (en) * 2014-07-18 2015-01-21 无锡梵天信息技术股份有限公司 Outdoor-sunlight-based method for realizing real-time dynamic shadow
CN104103092A (en) * 2014-07-24 2014-10-15 无锡梵天信息技术股份有限公司 Real-time dynamic shadowing realization method based on projector lamp

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
LANCE WILLIAMS: "Casting curved shadows oncurved surfaces", 《COMPUTER GRAPHICS LAB NEW YORK INSTITUTE OF TECHNOLOGY OLD WESTBURY》 *
刘晓平等: "基于距离的点光源软阴影GPU生成方法", 《合肥工业大学学报(自然科学版)》 *
李恋 等: "一种基于阴影图的实时伪软阴影生成方法", 《计算机应用》 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112819940A (en) * 2021-01-29 2021-05-18 网易(杭州)网络有限公司 Rendering method and device and electronic equipment
CN112819940B (en) * 2021-01-29 2024-02-23 网易(杭州)网络有限公司 Rendering method and device and electronic equipment
CN116188668A (en) * 2023-04-25 2023-05-30 北京渲光科技有限公司 Shadow rendering method, medium and electronic device based on IOS platform

Also Published As

Publication number Publication date
CN109658494B (en) 2023-03-31
CN116468845A (en) 2023-07-21
CN116485987A (en) 2023-07-25


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20200521

Address after: 100193 321, floor 3, building 8, Zhongguancun Software Park, No. 8, Dongbei Wangxi Road, Haidian District, Beijing

Applicant after: DMS Corp.

Address before: Room 203, 2nd Floor, Building 4, East Courtyard, No. 10 Wangdong Road, Northwest Haidian District, Beijing 100094

Applicant before: BEIJING DAMEISHENG TECHNOLOGY Co.,Ltd.

GR01 Patent grant