CN106780709A - Method and device for determining global illumination information - Google Patents
Method and device for determining global illumination information
- Publication number
- CN106780709A (application number CN201611110103.9)
- Authority
- CN
- China
- Prior art keywords
- pixel
- environment
- color
- ambient light
- destination object
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/50—Lighting effects
Landscapes
- Engineering & Computer Science (AREA)
- Computer Graphics (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Generation (AREA)
- Processing Or Creating Images (AREA)
Abstract
The embodiments of the invention disclose a method for determining global illumination information that can effectively improve the computational accuracy of ambient light information, thereby simulating a more realistic global illumination effect and enhancing the realism of images of dynamic objects. The method includes: setting a bounding box for a target object according to the position of a first virtual camera, where the first virtual camera is used to track the dynamically moving target object and the bounding box is a spatial polyhedron that encloses the target object; determining the environment map corresponding to each face of the bounding box; and computing, from the environment maps, the ambient light color corresponding to each pixel of the target object in screen space. The embodiments of the invention also disclose a computing device that can effectively improve the computational accuracy of ambient light information.
Description
Technical field
The present invention relates to the technical field of image processing, and in particular to a method and device for determining global illumination information.
Background art
Global illumination is an important research field in computer graphics. By simulating lighting conditions in nature, it captures effects produced in a real environment by the multiple propagation of light (e.g., reflection and refraction), such as soft shadows and indirect reflections; these effects greatly strengthen the realism of rendered results. Global illumination is now widely used in fields such as animation, virtual reality, and games.
In fields such as animation, virtual reality, and games, a scene contains not only static objects (objects or persons fixed in the scene) but also a large number of dynamic objects (objects or persons that can move within the scene). For static objects, global illumination can be realized by precomputing lightmaps. For a dynamic object, however, its position in the scene changes constantly, so the lighting it receives differs from moment to moment. In particular, in fields with high real-time requirements such as games or virtual reality, a dynamic object moves unpredictably according to user input, so the lighting of a dynamic object at different positions cannot be determined by pre-generating lightmaps.
To determine the lighting of a dynamic object at different positions in real time, the object is currently divided into multiple individual parts, each part is enclosed in its own cube, and, following the ambient-light enclosure principle, ambient light is added to each part by rendering it based on each face of the cube.

However, this approach of rendering each part of the object based on a bounding box treats the ambient influence on every pixel within the same bounding box as identical; clearly, the ambient light information computed this way is not of high precision.
Summary of the invention
Embodiments of the invention provide a method for determining global illumination information that can effectively improve the computational accuracy of ambient light information, thereby simulating a more realistic global illumination effect and enhancing the realism of images of dynamic objects.

In view of this, a first aspect of the embodiments of the invention provides a method for determining global illumination information, including:

setting a bounding box for a target object according to the position of a first virtual camera, where the first virtual camera is used to track the dynamically moving target object and the bounding box is a spatial polyhedron that encloses the target object;

determining the environment map corresponding to each face of the bounding box;

computing, from the environment maps, the ambient light color corresponding to each pixel of the target object in screen space.
A second aspect of the embodiments of the invention provides a computing device, including:

a setup module, configured to set a bounding box for a target object according to the position of a first virtual camera, where the first virtual camera is used to track the dynamically moving target object and the bounding box is a spatial polyhedron that encloses the target object;

a determining module, configured to determine the environment map corresponding to each face of the bounding box set by the setup module;

a computing module, configured to compute, from the environment maps determined by the determining module, the ambient light color corresponding to each pixel of the target object in screen space.
As can be seen from the above technical solutions, the embodiments of the invention have the following advantages:

In the embodiments of the invention, a computing device sets a bounding box for a target object according to the position of a first virtual camera, determines the environment map corresponding to each face of the bounding box, and computes from the environment maps the ambient light color corresponding to each pixel of the target object in screen space. Because this scheme computes the ambient light color per pixel of the object, it can effectively improve the computational accuracy of ambient light information, thereby simulating a more realistic global illumination effect and enhancing the realism of images of dynamic objects.
Brief description of the drawings
To illustrate the technical solutions of the embodiments of the invention more clearly, the accompanying drawings needed in the description of the embodiments are briefly introduced below. Apparently, the drawings described below show only some embodiments of the invention.
Fig. 1 is a schematic diagram of an embodiment of a computer device in an embodiment of the invention;
Fig. 2 is a schematic diagram of an embodiment of a system for determining global illumination information in an embodiment of the invention;
Fig. 3 is a flowchart of an embodiment of a method for determining global illumination information in an embodiment of the invention;
Fig. 4 is a schematic diagram of the position of the first virtual camera in an embodiment of the invention;
Fig. 5 is a schematic diagram of a bounding box in an embodiment of the invention;
Fig. 6 is a schematic diagram of an original environment map in an embodiment of the invention;
Fig. 7 is a schematic diagram of an environment map in an embodiment of the invention;
Fig. 8 is a schematic diagram of a normal map in an embodiment of the invention;
Fig. 9 is a schematic diagram of pixels of a normal map in an embodiment of the invention;
Fig. 10 is another schematic diagram of pixels of a normal map in an embodiment of the invention;
Fig. 11 is a schematic diagram of a 4*4 grid in an embodiment of the invention;
Fig. 12 is a schematic diagram of an embodiment of a computing device in an embodiment of the invention;
Fig. 13 is a schematic diagram of another embodiment of a computing device in an embodiment of the invention;
Fig. 14 is a schematic diagram of yet another embodiment of a computing device in an embodiment of the invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the invention are described below clearly and completely with reference to the accompanying drawings. Apparently, the described embodiments are only some, rather than all, of the embodiments of the invention.
The terms "first", "second", "third", "fourth", etc. (if present) in the specification, claims, and accompanying drawings are used to distinguish similar objects, not to describe a specific order or sequence. It should be understood that data so labeled may be interchanged where appropriate, so that the embodiments of the invention described here can, for example, be implemented in orders other than those illustrated or described. In addition, the terms "comprising" and "having" and any variants thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device containing a series of steps or units is not necessarily limited to the steps or units explicitly listed, but may include other steps or units that are not explicitly listed or that are inherent to the process, method, product, or device.
The method and device of this embodiment are applicable to any computer equipment; for example, the computer equipment may be a server that externally provides game services or virtual reality services, or any other equipment with graphics data processing capability.
Fig. 1 shows a schematic diagram of the composition of a computer device to which the method and device for simulating global illumination in a scene of the embodiments of the present application are applicable. In Fig. 1, the computer device may include: a processor 101, a memory 102, a communication interface 103, a display 104, an input unit 105, and a communication bus 106.

The processor 101, the memory 102, the communication interface 103, the display 104, and the input unit 105 communicate with one another through the communication bus 106.
In the embodiments of the present application, the processor 101 includes at least a graphics processing unit (GPU) 1012. In the embodiments of the present application the GPU may be used for graphics data processing such as simulating a camera capturing images of a three-dimensional scene space, rendering images, and computing ambient light information.
Optionally, the processor 101 may also include a central processing unit (CPU) 1011, which assists the graphics processor in completing related data processing and can carry out the main data processing operations of the computer device. Of course, the central processing unit may be replaced by an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a field-programmable gate array (FPGA), or another programmable logic device.
The memory 102 is used to store one or more programs; a program may include program code, and the program code includes computer operation instructions. The memory may include a high-speed RAM memory and may also include a non-volatile memory, for example at least one magnetic disk memory.
The communication interface 103 may be the interface of a communication module, such as the interface of a GSM module.
The display 104 may be used to display objects and other image information involved in the three-dimensional scene space; it may also display information entered by, or provided to, the user, as well as the various graphical user interfaces of the computer device, which may be composed of any combination of graphics, text, pictures, and so on. The display may include a display panel, for example a panel configured as a liquid crystal display or an organic light-emitting diode display. Further, the display may include a touch display panel capable of capturing touch events.
The input unit 105 may be used to receive input such as characters and numbers entered by the user, and to generate signal inputs related to user settings and function control. The input unit may include, but is not limited to, one or more of a physical keyboard, a mouse, a joystick, and the like.
Of course, the computer device structure shown in Fig. 1 does not constitute a limitation on the computer device; in practical applications the computer device may include more or fewer components than shown in Fig. 1, or combine certain components.
To facilitate understanding of the scheme of the present application, the scenario to which it is applicable is briefly introduced below. Fig. 2 shows a schematic diagram of the composition of a system to which the method for simulating global illumination in a scene of the present application is applicable.

As shown in Fig. 2, the system may include a service system composed of at least one server 201, and multiple terminals 202.
The server 201 in the service system may store scene data for realizing functions such as games or virtual reality, and transfers the scene data to a terminal when the terminal requests it.

The terminal 202 is used to present the scene corresponding to the scene data returned by the server and, according to the user's operations, to send the server update requests for updating the positions of dynamic objects in the scene.

The server 201 in the service system is further used to respond to a terminal's update request by updating the position of the dynamic object in the scene and sending the terminal the scene data corresponding to the updated scene.
For example, a game player may request game data from the server through a terminal, and the terminal presents the game picture based on the game data returned by the server. When the terminal detects that the player has issued an instruction to move a game object in the game picture, it sends a position update request to the server. In response to the terminal's position update request, the server updates the position of the game object in the game picture and synchronizes the updated game picture to that player and to all players playing with that player.
Further, in the embodiments of the present application, when the server receives an update request for updating the position of a dynamic object, or when the frame to be presented in the scene needs refreshing, it may also determine the global illumination effect of the dynamic object according to the target position the dynamic object should occupy in the scene and the preset ambient light color information at different spatial points in the scene. When returning the image data corresponding to the scene to the terminal, the server also sends the illumination data reflecting the global illumination effect of the dynamic object, so that the terminal can display that effect.
Based on the commonality of the above embodiments, the method for simulating global illumination in a scene of the embodiments of the present application is introduced in detail. Fig. 3 shows a flowchart of an embodiment of the method, which may be applied to the computer equipment described above.

To facilitate understanding of the scheme of the present application, some terms involved in it are briefly introduced below:
Virtual camera: in digital images produced by computer equipment there is no real camera, so the camera in the application program of the computer equipment is called a virtual camera. A virtual camera is the software counterpart of a physical camera: it can capture images of the three-dimensional virtual scene in real time and display these images synchronously on the display device. Attributes of the virtual camera such as lens, picture size, depth of field, shooting angle, and whether it tracks an object can be set freely by the user when the camera is constructed.
Normal map: a texture obtained by taking the normal at each point of an object's bumpy surface and marking the direction of the normal in the RGB color channels. It can be understood as a different surface parallel to the original bumpy surface, although in fact it is just a smooth plane. As an extension of bump mapping, a normal map gives every pixel of every plane a height value; it contains surface information with many details and can create many kinds of special three-dimensional visual effects. A normal map is a two-dimensional effect: it does not change the shape of the model, but it adds great additional detail within the contour lines.
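As a minimal illustration of the RGB encoding just described, each 8-bit channel can be mapped back to a normal component in [-1, 1] and renormalized. This sketch and the helper name `decode_normal` are our own, not part of the patent:

```python
def decode_normal(r, g, b):
    """Map 8-bit RGB channels in [0, 255] to a unit-length normal in [-1, 1]^3."""
    # Each channel stores one normal component, remapped from [0, 1] to [-1, 1].
    n = [c / 255.0 * 2.0 - 1.0 for c in (r, g, b)]
    # Renormalize to correct for quantization error.
    length = sum(v * v for v in n) ** 0.5
    return tuple(v / length for v in n)
```

For example, the typical flat-surface color (128, 128, 255) decodes to a normal pointing almost exactly along +z.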
Hemisphere shading model: physically based shading is the technique in computer graphics of using mathematical modeling to simulate how the various materials on an object's surface scatter light, so as to render photo-realistic images. In the embodiments of the invention, the hemisphere shading model refers to modeling any point in three-dimensional space as a hemisphere by means of mathematical modeling: the light this hemisphere receives in the three-dimensional space is exactly the light received by the point. In the embodiments of the invention this hemisphere is called the hemisphere shading model.
Environment map: many objects, such as metal or a brand-new vehicle, have a partial mirror effect. What they actually reflect is the color of the surrounding environment, and this effect is realized by reflecting the colors of an environment texture.
The method for determining global illumination information in the embodiments of the invention is introduced first. Referring to Fig. 3, one embodiment of the method includes:
301. Set a bounding box for a target object according to the position of a first virtual camera.

In fields such as games and virtual reality, the constructed scene containing multiple static and dynamic objects is in fact a three-dimensional scene space. When the ambient light information in this three-dimensional scene space needs to be computed, a target object is first determined; in the embodiments of the invention the target object refers to one or more dynamic objects in the three-dimensional scene space. After the target object is determined, a bounding box for it is set according to the position of the first virtual camera.
In the embodiments of the invention the first virtual camera refers to the camera that tracks the dynamically moving target object, and the distance between the first virtual camera and the target object is preset. Because the first virtual camera tracks the moving target object, i.e., moves along with it, the position of the first camera is not fixed and changes as the target object moves.
In the embodiments of the invention the bounding box refers to a spatial polyhedron that encloses the target object. It should be understood that the bounding box is not an object presented in the three-dimensional scene, but an abstract object. The bounding box must enclose the target object inside the polyhedron; its size is determined by the volume of the target object, and its shape is typically a hexahedron, although other shapes are of course possible and no limitation is imposed here.
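A hexahedral box sized by the object's volume is commonly computed as an axis-aligned box over the object's vertices. The following sketch is our own illustration of that idea (the `padding` parameter is hypothetical), not the patent's prescribed method:

```python
def bounding_box(vertices, padding=0.0):
    """Axis-aligned box (min corner, max corner) enclosing all vertices,
    optionally expanded by a padding margin on every side."""
    mins = [min(v[i] for v in vertices) - padding for i in range(3)]
    maxs = [max(v[i] for v in vertices) + padding for i in range(3)]
    return mins, maxs
```

As the target object moves, the box would simply be recomputed from its current vertex positions.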
302. Determine the environment map corresponding to each face of the bounding box.

After the computing device sets the bounding box, it determines the environment map corresponding to each face of the bounding box. In the embodiments of the invention the environment image around the target object is obtained based on the bounding box; that is, the environment map is a two-dimensional image that reflects the environment surrounding the object (the target object) in the bounding box.
303. Compute, from the environment maps, the ambient light color corresponding to each pixel of the target object in screen space.

After obtaining the environment maps, the computing device computes from them the ambient light color corresponding to each pixel of the target object in screen space. Screen space refers to the two-dimensional picture plane onto which objects in three-dimensional space are projected for display; the object is discretized into the individual pixels of this two-dimensional picture, and the space in which this picture exists is the screen space.
In the embodiments of the invention, the computing device sets a bounding box for the target object according to the position of the first virtual camera, determines the environment map corresponding to each face of the bounding box, and computes from the environment maps the ambient light color corresponding to each pixel of the target object in screen space. Because this scheme computes the ambient light color per pixel of the object, it can effectively improve the computational accuracy of ambient light information, thereby simulating a more realistic global illumination effect and enhancing the realism of images of dynamic objects.
Based on the embodiment corresponding to Fig. 3, in another embodiment of the invention the computing device may determine the environment map corresponding to each face of the bounding box as follows:

1. Use the center of the bounding box as the position of a second virtual camera.

After the computing device determines the bounding box, it places a virtual camera at the center of the bounding box. To distinguish it from the camera used to track the target object, this virtual camera is called the second virtual camera; the center of the bounding box is the shooting position of the second virtual camera.
2. Point the second virtual camera in turn at the center of each face of the bounding box to capture images, obtaining the original environment texture corresponding to each face.

After the second virtual camera is set, it is pointed in turn at the center of each face of the bounding box to capture images, yielding the original environment texture corresponding to each face. Specifically, the bounding box is a polyhedron; the optical center of the second virtual camera is aligned in turn with the center of each face of this polyhedron, and the picture in the field of view is rendered to obtain the texture maps corresponding to the polyhedron. For ease of distinction, the texture maps rendered by the second virtual camera are called original environment textures. The original environment textures need not be large; 256*256 is typical, although other sizes are of course possible and no limitation is imposed here.
It should be noted that the texture in the texture maps of the embodiments of the invention is a three-dimensional texture, and the texture coordinate is three-dimensional. Take the texture map corresponding to a cube: its texture coordinate is essentially a vector emitted from the center of the cube; this vector intersects one of the six surrounding textures, and the intersection point is the corresponding texel. In essence, a cube texture map is six two-dimensional texture images forming a texture cube centered at the origin. For example, for the coordinate (-3, -2, 1), the x coordinate has the largest absolute value, so the vector points at the texture in the -x direction; after y and z are transformed into (0, 1), they form the coordinates within the two-dimensional texture in the -x direction.
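The major-axis lookup just described for (-3, -2, 1) can be sketched as follows. Face orientation conventions differ between graphics APIs, so the (u, v) mapping below is a simplified assumption for illustration, not a definitive cube-map specification:

```python
def cubemap_face_uv(x, y, z):
    """Pick the cube face by the largest-magnitude axis of direction (x, y, z),
    then map the two remaining coordinates into [0, 1].
    (Simplified orientation; real APIs fix per-face u/v signs differently.)"""
    ax, ay, az = abs(x), abs(y), abs(z)
    m = max(ax, ay, az)
    if m == ax:
        face, (s, t) = ('+x' if x > 0 else '-x'), (y / ax, z / ax)
    elif m == ay:
        face, (s, t) = ('+y' if y > 0 else '-y'), (x / ay, z / ay)
    else:
        face, (s, t) = ('+z' if z > 0 else '-z'), (x / az, y / az)
    # Remap s, t from [-1, 1] to [0, 1] texture coordinates.
    return face, ((s + 1) * 0.5, (t + 1) * 0.5)
```

Running it on the document's example coordinate (-3, -2, 1) selects the -x face, with the y and z components remapped into the unit square.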
3. For each original environment texture, progressively shrink the texture while averaging the colors of its pixels, until an environment map of the preset format is obtained.

After obtaining the original environment texture corresponding to each face, the computing device progressively shrinks each texture and averages the colors of its pixels until an environment map of the preset format is obtained. Specifically, the pixel colors can be averaged by a Gaussian filter or by other means, which are not limited here. The preset format is 32*32 pixels or another size, which is likewise not limited here.
The embodiments of the invention thus provide a concrete way of determining the environment map corresponding to each face of the bounding box, which improves the realizability of the scheme.
Based on any of the embodiments corresponding to Fig. 3, in another embodiment of the invention the computing device may compute the ambient light color corresponding to each pixel of the target object in screen space as follows:

1. Render the target object based on the first virtual camera to obtain the normal map corresponding to the target object.

Based on the picture acquired by the first virtual camera, the normal is taken at each point of the target object's surface, and the direction of the normal is marked in the RGB color channels, yielding the normal map corresponding to the target object.
2. For each pixel in the normal map, simulate the ambient light the pixel receives in the three-dimensional scene space, and determine the target environment pixels in the environment maps that provide this ambient light.

After obtaining the normal map of the target object, for each of its pixels the computing device simulates the ambient light the pixel receives in the three-dimensional scene space, and from the result of the simulation determines the target environment pixels in the environment maps that provide this ambient light. Specifically, the computing device may simulate the ambient light a pixel receives in three-dimensional space as follows:
(1) Determine the hemisphere shading model corresponding to the pixel in three-dimensional space.

The normal map is obtained by processing each point of the bumpy surface of the target object, so a pixel in the normal map corresponds to a point on the surface of the target object in three-dimensional space. Based on the principle of physically based shading, this point can be modeled as a hemisphere: the light the hemisphere receives is exactly the light the point receives, and this hemisphere is the hemisphere shading model corresponding to the pixel.
(2) Enclose the hemisphere shading model with multiple grid cells, treat the ambient light received by the same cell as ambient light from a single direction, and determine the ambient light direction corresponding to each cell from the number of cells and the spherical mathematical formula.

In theory each pixel can be influenced by infinitely many ambient light rays over the directions of its corresponding hemisphere, so these must be discretized into a finite number of sample points. Specifically, the hemisphere shading model of a pixel can be enclosed with an N*N grid, and the ambient light received by a single cell is treated as ambient light from one direction; that is, each cell receives ambient light from one direction, and the direction of the ambient light received by each cell is computed from the value of N and the spherical mathematical formula. It should be noted that the value of N can be user-defined: the larger N is, the higher the computational accuracy, but the lower the computation speed.
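The patent does not spell out the "spherical mathematical formula", so the parameterization below is an assumption: each of the N*N cells gets an elevation angle from its row and an azimuth from its column, yielding one unit direction per cell on the hemisphere:

```python
import math

def hemisphere_directions(n):
    """One unit direction per cell of an n*n grid over a hemisphere
    (z up): rows set the elevation band, columns the azimuth."""
    dirs = []
    for i in range(n):
        theta = (i + 0.5) / n * (math.pi / 2)   # 0 = zenith .. pi/2 = horizon
        for j in range(n):
            phi = (j + 0.5) / n * 2 * math.pi   # azimuth around the normal
            dirs.append((math.sin(theta) * math.cos(phi),
                         math.sin(theta) * math.sin(phi),
                         math.cos(theta)))
    return dirs
```

With this reading, a larger N yields more (and finer) sample directions, matching the accuracy/speed trade-off described above.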
(3) For each cell, determine the target environment pixel in the bounding box that points at the center of the cell along the ambient light direction.

After the ambient light direction corresponding to each cell is determined, the pixel that provides this ambient light, i.e., the target environment pixel, is found in the environment map according to the ambient light direction. Specifically, take the center of the cell as the starting point of a vector pointing opposite to the cell's ambient light direction; this vector intersects one of the faces of the bounding box, and the pixel corresponding to this intersection point in the environment map of that face is the target environment pixel, i.e., the pixel that provides the ambient light this cell receives: it points at the center of this cell along the ambient light direction.
3rd, the pixel color according to target environment pixel calculates the corresponding ambient light color of pixel.
After determining the corresponding target environment pixel of each grid, the pixel color according to target environment pixel is calculated
The corresponding ambient light color of each pixel in normal map.From the foregoing, the pixel correspondence mesh in normal map
The quantity for marking surrounding pixels point is equal to default grid quantity, to calculate the corresponding ambient light color of pixel of normal map,
Need first to calculate the ambient light color that corresponding each the target environment pixel of this pixel is provided for this pixel, be
It is easy to description, will be detailed below this pixel referred to as target pixel points for calculating.
Computing device can in the following way calculate what corresponding each the target environment pixel of target pixel points was provided
Ambient light color:For any one target environment pixel, the pixel color of this target environment pixel is multiplied by first
The unit vector of unit vector dot product second, wherein the direction of the first unit vector is the corresponding ambient light side of target environment pixel
To the direction of the second unit vector is the normal direction of target pixel points.For the ease of description, by each target environment pixel
Corresponding ambient light color is referred to as rendered color.
After the computing device has calculated the rendered color of each target environment pixel of the target pixel, it multiplies each rendered color by the color weight corresponding to that rendered color and sums the products to obtain the ambient light color of the target pixel. The color weight of a rendered color is derived from the number of target environment pixels of the target pixel (i.e. the preset number of grids), and the weights of all rendered colors sum to 1. It should be understood that the number of texture samples the computing device can take per rendering pass is limited: if the preset number of grids is too large, the computing device cannot gather all target environment pixels of the target pixel in one pass and therefore cannot calculate its ambient light color in one pass. In that case the computing device may gather the target environment pixels over several passes, render the target pixel with the ambient light colors provided by the target environment pixels gathered in each pass, and finally superimpose the per-pass rendering results to obtain the final ambient light color of the target pixel.
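As an illustration of the weighted sum described above, the following Python sketch accumulates the ambient light color of one target pixel from its target environment pixels. The function names and the uniform 1/N weight are illustrative assumptions, not details fixed by the embodiment, which only requires that the weights sum to 1.

```python
# Illustrative sketch, not code from the patent: accumulate the ambient
# light color of one target pixel from its N target environment pixels.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def ambient_color(env_samples, normal):
    """env_samples: list of (pixel_color, light_dir) pairs; pixel_color is
    an (r, g, b) tuple and light_dir is the unit ambient-light direction
    (the first unit vector).  normal is the unit normal of the target
    pixel (the second unit vector)."""
    w = 1.0 / len(env_samples)          # per-sample color weight, sums to 1
    total = [0.0, 0.0, 0.0]
    for color, light_dir in env_samples:
        shade = dot(light_dir, normal)  # first unit vector . second unit vector
        for i in range(3):
            total[i] += color[i] * shade * w  # weighted "rendered color"
    return tuple(total)
```

With two samples both lit head-on, a white sample and a red sample average to a pinkish ambient term, matching the weighted-sum rule above.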
This embodiment of the invention thus provides a way of calculating the ambient light color corresponding to each pixel of the target object in screen space, improving the practicability of the scheme.
Furthermore, in this embodiment the computing device simulates the ambient light received by each pixel of the normal map with multiple grids, so the sampling density of the ambient light can be set through the number of grids; computational accuracy and speed can thus be balanced in real time, improving the flexibility of the scheme.
Based on any of the embodiments corresponding to Fig. 3 above, in another embodiment of the invention, after the computing device has calculated the ambient light color of each pixel of the target object in screen space, the final color of the target object under global illumination is obtained by adding the ambient light color of each pixel to its scene color. The scene color of a pixel here is the pixel color obtained by rendering the target object normally based on the first virtual camera. Once the computing device has calculated the ambient light colors, it can simulate the effect of global illumination according to them, improving the flexibility of the scheme.
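A minimal Python sketch of this composition step, assuming colors are normalized RGB triples; the clamp to [0, 1] is our addition for displayability, the embodiment itself only specifies the per-pixel addition.

```python
def final_color(scene_rgb, ambient_rgb):
    # Final color under global illumination = scene color + ambient light
    # color, per channel, clamped so each channel stays displayable.
    return tuple(min(1.0, s + a) for s, a in zip(scene_rgb, ambient_rgb))
```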
For ease of understanding, the method of determining global illumination information in the embodiments of the invention is described below with a practical application scenario:
The distance and shooting direction between the first virtual camera and the target object are shown in Fig. 4, where the cylinder represents the target object. The target object may be the dynamically movable robot shown in Fig. 5, or another dynamic object. Based on the position of the first virtual camera in Fig. 4, a cube (the bounding box) that encloses the target object is set; see the cube in Fig. 4 or Fig. 5.
The computing device places a virtual camera (the second virtual camera) at the center of the cube, and the second virtual camera captures an image toward the center of each of the cube's six faces, yielding six two-dimensional texture maps of 256*256 pixels each (the original environment maps); these six texture maps are also called a cube map, as shown in Fig. 6. The six texture maps are then progressively shrunk, with the pixel colors of each map averaged by filtering, until six texture maps of 32*32 pixels (the preset format) are obtained, namely the environment maps; the process is shown in Fig. 7.
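The progressive shrink-and-average step can be sketched as repeated 2x2 box filtering, halving the resolution each pass (256 to 128 to 64 to 32). The sketch below, our illustration rather than the patent's code, works on a single-channel square image stored as nested Python lists; a real implementation would filter each RGB channel of each of the six maps.

```python
def halve(img):
    """Average each 2x2 block of a square grayscale image (list of lists),
    halving its resolution."""
    n = len(img)
    return [[(img[2 * r][2 * c] + img[2 * r][2 * c + 1] +
              img[2 * r + 1][2 * c] + img[2 * r + 1][2 * c + 1]) / 4.0
             for c in range(n // 2)]
            for r in range(n // 2)]

def downsample(img, target):
    # Repeatedly halve until the target resolution is reached, e.g. 256 -> 32.
    while len(img) > target:
        img = halve(img)
    return img
```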
Based on the picture captured by the first virtual camera, a normal is constructed at each point of the target object's surface, and the directions of the normals are marked by the RGB color channels to obtain the normal map corresponding to the target object; for the effect of a normal map see Fig. 8, which shows the normal map obtained by rendering a teapot as the target object.
After the normal map of the target object has been rendered, each pixel in it is treated as a shading point, as shown in Fig. 9; each shading point can receive multiple rays of ambient light provided by the corresponding pixels of the cube map. The semicircle in Fig. 9 corresponds to the rendered target object, i.e. the target object, and the camera in Fig. 9 is the first virtual camera. Following the principle of physically based shading, for each shading point its corresponding hemisphere in three-dimensional space is taken; the ambient light received by this hemisphere is the ambient light the shading point receives in three-dimensional space, as shown in Fig. 10, where the semicircle is the hemispherical model corresponding to the shading point. In theory each shading point receives infinitely many rays of ambient light over the directions of the hemisphere; to discretize this infinite ambient light into a finite set of samples, the ambient light at a shading point is simulated on the hemisphere with a 4*4 grid, as shown in Fig. 11. The ambient light received by one grid is defined as ambient light from a single direction, that is, each grid receives ambient light from only one direction, and the ambient-light direction corresponding to each grid is determined according to the number of grids and the spherical mathematical formula.
After the ambient-light direction of each grid has been determined, the pixels providing these rays of ambient light are found in the environment maps. As shown by the arrows in Fig. 11, a ray cast along the opposite direction of a grid's arrow is intersected with one of the faces of the bounding box, and the pixel corresponding to this intersection in the environment map of the intersected face is the target environment pixel of that grid. After the 16 target environment pixels have been determined in this way, for each of them, as shown in Fig. 10, the unit vector along the ambient-light direction of the target environment pixel (the first unit vector) is dotted with the unit vector along the normal direction of the shading point simulated by the 4*4 grid (the second unit vector), and the dot product is multiplied by the pixel color of the target environment pixel to obtain the rendered color of that target environment pixel, i.e. the ambient light color it provides. After the rendered colors of the 16 target environment pixels have been calculated, each rendered color is multiplied by its color weight, and the 16 products are added to obtain the ambient light color of the shading point simulated by the 4*4 grid.
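The intersection step, casting a ray from the shading point against the bounding box to find which face (and hence which environment map) provides the light, can be sketched with a simple slab test. This is our illustrative stand-in for the patent's intersection step: it assumes an axis-aligned box with the origin inside it and a non-zero direction, and it omits the mapping from the hit point to texel coordinates.

```python
def exit_face(origin, direction, box_min, box_max):
    """Return (axis, sign, point): which face of an axis-aligned bounding
    box a ray leaving `origin` along `direction` exits through, plus the
    intersection point.  `origin` must lie inside the box."""
    best_t, best = float('inf'), None
    for axis in range(3):
        d = direction[axis]
        if abs(d) < 1e-12:          # ray parallel to this pair of faces
            continue
        bound = box_max[axis] if d > 0 else box_min[axis]
        t = (bound - origin[axis]) / d
        if 0 < t < best_t:          # nearest face crossed going forward
            best_t, best = t, (axis, 1 if d > 0 else -1)
    point = tuple(o + best_t * v for o, v in zip(origin, direction))
    return best[0], best[1], point
```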
In this way the computing device calculates the ambient light color of each shading point in the normal map, i.e. the ambient light color corresponding to each pixel of the target object in screen space. Finally, the ambient light color of each pixel of the target object in screen space is superimposed on the pixel color obtained by rendering the target object normally in screen space, giving the final color the target object presents in screen space, i.e. the appearance of the target object under global illumination.
The method of determining global illumination information in the embodiments of the invention has been described above; the computing device in the embodiments of the invention is described below. Referring to Fig. 12, one embodiment of the computing device in the embodiments of the invention includes:
a setup module 1201 for setting the bounding box of the target object according to the position of the first virtual camera, where the first virtual camera is used to track the dynamically moving target object and the bounding box is a space polyhedron that can enclose the target object;
a determining module 1202 for determining the environment map corresponding to each face of the bounding box set by the setup module 1201;
a computing module 1203 for calculating, according to the environment maps determined by the determining module 1202, the ambient light color corresponding to each pixel of the target object in screen space.
In this embodiment of the invention, the computing device sets the bounding box of the target object according to the position of the first virtual camera, determines the environment map corresponding to each face of the bounding box, and calculates according to the environment maps the ambient light color corresponding to each pixel of the target object in screen space. As can be seen, this scheme computes the color of the ambient light for each pixel of the object, which effectively improves the computational accuracy of the ambient light information, thereby simulating a more realistic global illumination effect and enhancing the realism of the dynamic object's picture.
On the basis of the embodiment corresponding to Fig. 12 above, referring to Fig. 13, in another embodiment of the computing device provided by the embodiments of the invention, the determining module 1202 includes:
a first determining unit 12021 for taking the center of the bounding box as the position of the second virtual camera;
an acquiring unit 12022 for capturing an image with the second virtual camera determined by the first determining unit toward the center of each face of the bounding box to obtain the original environment map corresponding to each face;
a processing unit 12023 for progressively shrinking each original environment map obtained by the acquiring unit and averaging the pixel colors of its pixels until the environment map of the preset format is obtained.
This embodiment of the invention provides a concrete way for the determining module 1202 to determine the environment maps, improving the practicability of the scheme.
On the basis of the embodiments corresponding to Fig. 12 or Fig. 13 above, referring to Fig. 14, in another embodiment of the computing device provided by the embodiments of the invention, the computing module 1203 includes:
a rendering unit 12031 for rendering the target object based on the first virtual camera to obtain the normal map corresponding to the target object;
a second determining unit 12032 for, for each pixel in the normal map obtained by the rendering unit, simulating the ambient light the pixel receives in three-dimensional space and determining the target environment pixels in the environment maps that provide the ambient light;
a computing unit 12033 for calculating the ambient light color corresponding to each pixel according to the pixel colors of the target environment pixels determined by the second determining unit.
Optionally, in this embodiment of the invention the second determining unit 12032 may include:
a first determination subunit 120321 for determining the hemisphere shading model corresponding to the pixel in three-dimensional space;
a second determination subunit 120322 for enclosing the hemisphere shading model with multiple grids, where the ambient light received by one grid is defined as ambient light from one direction, and determining the ambient-light direction corresponding to each grid according to the number of grids and the spherical mathematical formula;
a third determination subunit 120323 for determining, for each grid, the target environment pixel that points to the center of the grid along the ambient-light direction in the bounding box.
Optionally, in this embodiment of the invention the computing unit 12033 may include:
a first computation subunit 120331 for, for each target environment pixel, multiplying the pixel color of the target environment pixel by the dot product of a first unit vector and a second unit vector to obtain the rendered color corresponding to the target environment pixel, where the direction of the first unit vector is the ambient-light direction corresponding to the target environment pixel and the direction of the second unit vector is the normal direction of the pixel;
a second computation subunit 120332 for multiplying the rendered color corresponding to each target environment pixel by the color weight corresponding to that rendered color, and summing the products to obtain the ambient light color corresponding to the pixel.
This embodiment of the invention provides a way for the computing module 1203 to calculate the ambient light color corresponding to each pixel of the target object in screen space, improving the practicability of the scheme.
Furthermore, in this embodiment of the invention the computing module 1203 simulates the ambient light received by each pixel of the normal map with multiple grids, so the sampling density of the ambient light can be set through the number of grids; computational accuracy and speed can thus be balanced in real time, improving the flexibility of the scheme.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, devices and units described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed systems, devices and methods may be implemented in other ways. For example, the device embodiments described above are merely illustrative; the division into units is only a division by logical function, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, and some features may be omitted or not performed. Furthermore, the couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, devices or units, and may be electrical, mechanical or of other forms.
Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the purpose of the scheme of this embodiment.
In addition, the functional units in the embodiments of the invention may be integrated in one processing unit, may exist physically as separate units, or two or more units may be integrated in one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical scheme of the invention, in essence or in the part contributing to the prior art, or all or part of the technical scheme, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the invention. The aforementioned storage medium includes any medium that can store program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
The above embodiments are intended only to illustrate the technical scheme of the invention, not to limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical schemes described in the embodiments may still be modified, or some of their technical features may be replaced by equivalents, and such modifications and replacements do not cause the essence of the corresponding technical schemes to depart from the spirit and scope of the technical schemes of the embodiments of the invention.
Claims (10)
1. A method of determining global illumination information, characterized by comprising:
setting a bounding box of a target object according to the position of a first virtual camera, the first virtual camera being used to track the dynamically moving target object, and the bounding box being a space polyhedron capable of enclosing the target object;
determining an environment map corresponding to each face of the bounding box;
calculating, according to the environment maps, an ambient light color corresponding to each pixel of the target object in screen space.
2. The method according to claim 1, characterized in that determining the environment map corresponding to each face of the bounding box comprises:
taking the center of the bounding box as the position of a second virtual camera;
capturing an image with the second virtual camera toward the center of each face of the bounding box to obtain an original environment map corresponding to each face;
for each original environment map, progressively shrinking the original environment map and averaging the pixel colors of its pixels until an environment map of a preset format is obtained.
3. The method according to claim 1 or 2, characterized in that calculating, according to the environment maps, the ambient light color corresponding to each pixel of the target object in screen space comprises:
rendering the target object based on the first virtual camera to obtain a normal map corresponding to the target object;
for each pixel in the normal map, simulating the ambient light the pixel receives in three-dimensional space, and determining the target environment pixels in the environment maps that provide the ambient light;
calculating the ambient light color corresponding to the pixel according to the pixel colors of the target environment pixels.
4. The method according to claim 3, characterized in that simulating the ambient light the pixel receives in three-dimensional space and determining the target environment pixels in the environment maps that provide the ambient light comprises:
determining a hemisphere shading model corresponding to the pixel in the three-dimensional space;
enclosing the hemisphere shading model with multiple grids, the ambient light received by one grid being defined as ambient light from one direction, and determining the ambient-light direction corresponding to each grid according to the number of grids and the spherical mathematical formula;
for each grid, determining the target environment pixel that points to the center of the grid along the ambient-light direction in the bounding box.
5. The method according to claim 4, characterized in that calculating the ambient light color corresponding to the pixel according to the colors of the target environment pixels comprises:
for each target environment pixel, multiplying the pixel color of the target environment pixel by the dot product of a first unit vector and a second unit vector to obtain a rendered color corresponding to the target environment pixel, the direction of the first unit vector being the ambient-light direction corresponding to the target environment pixel, and the direction of the second unit vector being the normal direction of the pixel;
multiplying the rendered color corresponding to each target environment pixel by the color weight corresponding to that rendered color, and summing the products to obtain the ambient light color corresponding to the pixel.
6. A computing device, characterized by comprising:
a setup module for setting a bounding box of a target object according to the position of a first virtual camera, the first virtual camera being used to track the dynamically moving target object, and the bounding box being a space polyhedron capable of enclosing the target object;
a determining module for determining an environment map corresponding to each face of the bounding box set by the setup module;
a computing module for calculating, according to the environment maps determined by the determining module, an ambient light color corresponding to each pixel of the target object in screen space.
7. The device according to claim 6, characterized in that the determining module comprises:
a first determining unit for taking the center of the bounding box as the position of a second virtual camera;
an acquiring unit for capturing an image with the second virtual camera determined by the first determining unit toward the center of each face of the bounding box to obtain an original environment map corresponding to each face;
a processing unit for progressively shrinking each original environment map obtained by the acquiring unit and averaging the pixel colors of its pixels until an environment map of a preset format is obtained.
8. The device according to claim 6 or 7, characterized in that the computing module comprises:
a rendering unit for rendering the target object based on the first virtual camera to obtain a normal map corresponding to the target object;
a second determining unit for, for each pixel in the normal map obtained by the rendering unit, simulating the ambient light the pixel receives in three-dimensional space, and determining the target environment pixels in the environment maps that provide the ambient light;
a computing unit for calculating the ambient light color corresponding to the pixel according to the pixel colors of the target environment pixels determined by the second determining unit.
9. The device according to claim 8, characterized in that the second determining unit comprises:
a first determination subunit for determining a hemisphere shading model corresponding to the pixel in the three-dimensional space;
a second determination subunit for enclosing the hemisphere shading model with multiple grids, the ambient light received by one grid being defined as ambient light from one direction, and determining the ambient-light direction corresponding to each grid according to the number of grids and the spherical mathematical formula;
a third determination subunit for determining, for each grid, the target environment pixel that points to the center of the grid along the ambient-light direction in the bounding box.
10. The device according to claim 9, characterized in that the computing unit comprises:
a first computation subunit for, for each target environment pixel, multiplying the pixel color of the target environment pixel by the dot product of a first unit vector and a second unit vector to obtain a rendered color corresponding to the target environment pixel, the direction of the first unit vector being the ambient-light direction corresponding to the target environment pixel, and the direction of the second unit vector being the normal direction of the pixel;
a second computation subunit for multiplying the rendered color corresponding to each target environment pixel by the color weight corresponding to that rendered color, and summing the products to obtain the ambient light color corresponding to the pixel.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611110103.9A CN106780709B (en) | 2016-12-02 | 2016-12-02 | A kind of method and device of determining global illumination information |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106780709A true CN106780709A (en) | 2017-05-31 |
CN106780709B CN106780709B (en) | 2018-09-07 |
Family
ID=58878342
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201611110103.9A Active CN106780709B (en) | 2016-12-02 | 2016-12-02 | A kind of method and device of determining global illumination information |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106780709B (en) |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108022285A (en) * | 2017-11-30 | 2018-05-11 | 杭州电魂网络科技股份有限公司 | Map rendering intent and device |
CN108447112A (en) * | 2018-01-24 | 2018-08-24 | 重庆爱奇艺智能科技有限公司 | Analogy method, device and the VR equipment of role's light environment |
CN108492353A (en) * | 2018-02-02 | 2018-09-04 | 珠海金山网络游戏科技有限公司 | A kind of method for drafting and system of global context ball |
CN108520551A (en) * | 2018-03-30 | 2018-09-11 | 苏州蜗牛数字科技股份有限公司 | Realize method, storage medium and the computing device of light textures dynamic illumination |
CN108830923A (en) * | 2018-06-08 | 2018-11-16 | 网易(杭州)网络有限公司 | Image rendering method, device and storage medium |
CN109410300A (en) * | 2018-10-10 | 2019-03-01 | 苏州好玩友网络科技有限公司 | Shadows Processing method and device and terminal device in a kind of scene of game |
WO2019085838A1 (en) * | 2017-11-03 | 2019-05-09 | 腾讯科技(深圳)有限公司 | Object rendering method and device, storage medium and electronic device |
CN110585713A (en) * | 2019-09-06 | 2019-12-20 | 腾讯科技(深圳)有限公司 | Method and device for realizing shadow of game scene, electronic equipment and readable medium |
CN110728741A (en) * | 2019-10-11 | 2020-01-24 | 长春理工大学 | Surface light source illumination three-dimensional scene picture rendering method based on multi-detail level model |
CN111008416A (en) * | 2019-11-12 | 2020-04-14 | 江苏艾佳家居用品有限公司 | Method and system for generating illumination effect of house type scene |
CN111402348A (en) * | 2019-01-03 | 2020-07-10 | 百度在线网络技术(北京)有限公司 | Method and device for forming illumination effect and rendering engine |
CN112090084A (en) * | 2020-11-23 | 2020-12-18 | 成都完美时空网络技术有限公司 | Object rendering method and device, storage medium and electronic equipment |
CN112562053A (en) * | 2020-12-09 | 2021-03-26 | 贝壳技术有限公司 | PBR material map generation method and device |
CN112604278A (en) * | 2020-12-29 | 2021-04-06 | 广州银汉科技有限公司 | Method for simulating global illumination on intelligent equipment based on game |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101615300A (en) * | 2009-07-02 | 2009-12-30 | 北京航空航天大学 | A kind of screen space micro-structure surface object ambient light occlusion method |
US8223148B1 (en) * | 2007-11-12 | 2012-07-17 | Adobe Systems Incorporated | Method and apparatus for computing indirect lighting for global illumination rendering in 3-D computer graphics |
US20140327673A1 (en) * | 2013-05-03 | 2014-11-06 | Crytek Gmbh | Real-time global illumination using pre-computed photon paths |
CN104463959A (en) * | 2014-11-25 | 2015-03-25 | 无锡梵天信息技术股份有限公司 | Method for generating cubic environment maps |
CN104484896A (en) * | 2014-10-30 | 2015-04-01 | 无锡梵天信息技术股份有限公司 | Physical method based on environment mapping for simulating human skin subsurface scattering |
CN104517313A (en) * | 2014-10-10 | 2015-04-15 | 无锡梵天信息技术股份有限公司 | AO (ambient occlusion) method based on screen space |
Cited By (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019085838A1 (en) * | 2017-11-03 | 2019-05-09 | 腾讯科技(深圳)有限公司 | Object rendering method and device, storage medium and electronic device |
CN108022285B (en) * | 2017-11-30 | 2021-04-20 | 杭州电魂网络科技股份有限公司 | Map rendering method and device |
CN108022285A (en) * | 2017-11-30 | 2018-05-11 | 杭州电魂网络科技股份有限公司 | Map rendering intent and device |
CN108447112A (en) * | 2018-01-24 | 2018-08-24 | 重庆爱奇艺智能科技有限公司 | Analogy method, device and the VR equipment of role's light environment |
CN108492353A (en) * | 2018-02-02 | 2018-09-04 | 珠海金山网络游戏科技有限公司 | A kind of method for drafting and system of global context ball |
CN108492353B (en) * | 2018-02-02 | 2021-12-31 | 珠海金山网络游戏科技有限公司 | Global environment ball drawing method and system |
CN108520551A (en) * | 2018-03-30 | 2018-09-11 | 苏州蜗牛数字科技股份有限公司 | Realize method, storage medium and the computing device of light textures dynamic illumination |
CN108520551B (en) * | 2018-03-30 | 2022-02-22 | 苏州蜗牛数字科技股份有限公司 | Method for realizing dynamic illumination of light map, storage medium and computing equipment |
CN108830923A (en) * | 2018-06-08 | 2018-11-16 | 网易(杭州)网络有限公司 | Image rendering method, device and storage medium |
CN108830923B (en) * | 2018-06-08 | 2022-06-17 | 网易(杭州)网络有限公司 | Image rendering method and device and storage medium |
CN109410300A (en) * | 2018-10-10 | 2019-03-01 | 苏州好玩友网络科技有限公司 | Shadows Processing method and device and terminal device in a kind of scene of game |
CN111402348A (en) * | 2019-01-03 | 2020-07-10 | 百度在线网络技术(北京)有限公司 | Method and device for forming illumination effect and rendering engine |
CN110585713A (en) * | 2019-09-06 | 2019-12-20 | 腾讯科技(深圳)有限公司 | Method and device for realizing shadow of game scene, electronic equipment and readable medium |
CN110728741A (en) * | 2019-10-11 | 2020-01-24 | 长春理工大学 | Surface light source illumination three-dimensional scene picture rendering method based on multi-detail level model |
CN110728741B (en) * | 2019-10-11 | 2022-08-23 | 长春理工大学 | Area light source irradiation three-dimensional scene picture rendering method based on multi-detail level model |
CN111008416A (en) * | 2019-11-12 | 2020-04-14 | 江苏艾佳家居用品有限公司 | Method and system for generating illumination effect of house type scene |
CN111008416B (en) * | 2019-11-12 | 2022-07-08 | 江苏艾佳家居用品有限公司 | Method and system for generating illumination effect of house type scene |
CN112090084A (en) * | 2020-11-23 | 2020-12-18 | 成都完美时空网络技术有限公司 | Object rendering method and device, storage medium and electronic equipment |
CN112562053A (en) * | 2020-12-09 | 2021-03-26 | 贝壳技术有限公司 | PBR material map generation method and device |
CN112562053B (en) * | 2020-12-09 | 2022-11-08 | 贝壳技术有限公司 | PBR material map generation method and device |
CN112604278A (en) * | 2020-12-29 | 2021-04-06 | 广州银汉科技有限公司 | Method for simulating global illumination on intelligent equipment based on game |
CN112604278B (en) * | 2020-12-29 | 2021-09-17 | 广州银汉科技有限公司 | Method for simulating global illumination on intelligent equipment based on game |
Also Published As
Publication number | Publication date |
---|---|
CN106780709B (en) | 2018-09-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106780709B (en) | Method and device for determining global illumination information | |
US7554540B2 (en) | System and method of visible surface determination in computer graphics using interval analysis | |
CN107223269A (en) | Three-dimensional scene positioning method and device | |
CN106780707B (en) | Method and apparatus for simulating global illumination in a scene | |
US20130127895A1 (en) | Method and Apparatus for Rendering Graphics using Soft Occlusion | |
TW201237801A (en) | Method for processing three-dimensional image vision effects | |
EP4213102A1 (en) | Rendering method and apparatus, and device | |
CN112530005B (en) | Three-dimensional model linear structure recognition and automatic restoration method | |
US20180061119A1 (en) | Quadrangulated layered depth images | |
CN109934933A (en) | Virtual-reality-based simulation method and virtual-reality-based image simulation system | |
CN111915710A (en) | Building rendering method based on real-time rendering technology | |
KR100693134B1 (en) | Three dimensional image processing | |
Pant et al. | 3D Asset Size Reduction using Mesh Retopology and Normal Texture Mapping | |
Millán et al. | Impostors, Pseudo-instancing and Image Maps for GPU Crowd Rendering. | |
Lu et al. | Design and implementation of three-dimensional game engine | |
JP7368950B2 (en) | Method and apparatus for efficient building footprint identification | |
Hempe et al. | Efficient real-time generation and rendering of interactive grass and shrubs for large sceneries | |
CN117994412A (en) | Three-dimensional scene construction method and related device | |
Sunar et al. | Crowd rendering optimization for virtual heritage system | |
Johansson | Analyzing performance for lighting of tessellated grass using LOD | |
Öhrn | Different mapping techniques for realistic surfaces | |
Carr et al. | Variants of a new volumetric model for dust cloud representation and their comparison to existing methods | |
CN118356646A (en) | Game rendering processing method and device, electronic equipment and storage medium | |
CN111625093A (en) | Dynamic scheduling display method of massive digital point cloud data in MR glasses | |
Dembogurski et al. | Interactive virtual terrain generation using augmented reality markers |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||