CN113034662B - Virtual scene rendering method and device, storage medium and electronic equipment


Info

Publication number
CN113034662B
CN113034662B CN202110335489.8A
Authority
CN
China
Prior art keywords
flow
vertex
pixel value
target
initial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110335489.8A
Other languages
Chinese (zh)
Other versions
CN113034662A (en)
Inventor
Zhao Junyu (赵俊宇)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd
Priority to CN202110335489.8A
Publication of CN113034662A
Application granted
Publication of CN113034662B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation
    • G06T15/205 Image-based rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G06T13/60 3D [Three Dimensional] animation of natural phenomena, e.g. rain, snow, water or plants
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/50 Lighting effects
    • G06T15/80 Shading

Abstract

The present disclosure relates to the field of computer technologies, and in particular, to a virtual scene rendering method and apparatus, a storage medium, and an electronic device. The virtual scene rendering method comprises: configuring a spline of a flow region in a three-dimensional scene, and generating a target mesh based on the spline; calculating a flow vector according to the tangential direction of the spline and the terrain parameters of the flow region; calculating foam information according to the intersection region of the target mesh and the flow region; and calculating vertex pixel values of the target mesh according to the flow vector and the foam information, and rendering the target mesh according to the vertex pixel values. The virtual scene rendering method can simplify the steps of virtual scene rendering and improve rendering efficiency.

Description

Virtual scene rendering method and device, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a virtual scene rendering method and apparatus, a storage medium, and an electronic device.
Background
When a water system is built in a virtual scene, a flowing-water or waterfall effect usually needs to be produced that fits the existing terrain of the virtual scene.
The existing rendering workflow generally includes: first, exporting the assembled scene models to 3ds Max or another DCC tool; then building a water or waterfall mesh piece that follows the trend of the models and unwrapping its UVs; then importing the mesh into a flowmap painting tool to paint the water flow direction and saving the painted flow direction as an exported texture map; and finally re-importing everything into the engine and assigning the map to the material. Because assets must be exported from and imported into several different programs, the operation steps are cumbersome and authoring efficiency is low; in addition, an extra flow map is needed to control the flow direction.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present disclosure, and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The present disclosure aims to provide a virtual scene rendering method and apparatus, a storage medium, and an electronic device, so as to simplify the steps of virtual scene rendering and improve rendering efficiency.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
According to an aspect of the embodiments of the present disclosure, there is provided a virtual scene rendering method, including: configuring a spline of a flow region in a three-dimensional scene, and generating a target mesh based on the spline; calculating a flow vector according to the tangential direction of the spline and the terrain parameters of the flow region; calculating foam information according to the intersection region of the target mesh and the flow region; and calculating vertex pixel values of the target mesh according to the flow vector and the foam information, and rendering the target mesh according to the vertex pixel values.
According to some embodiments of the present disclosure, based on the foregoing solution, the configuring a spline of a flow region in a three-dimensional scene includes: setting a flow region of the three-dimensional scene, and configuring a flow trajectory for the flow region; vertically projecting trajectory points of the flow trajectory onto the flow region to generate an initial spline; and resampling the initial spline to obtain the spline.
According to some embodiments of the present disclosure, based on the foregoing solution, the vertically projecting trajectory points of the flow trajectory onto the flow region to generate an initial spline includes: vertically projecting the trajectory points of the flow trajectory onto the flow region to obtain projection points; selecting, for each projection point, the unit grid point closest to it along the projection direction as the target grid point corresponding to that projection point; and connecting the target grid points according to a preset connection rule to generate the initial spline.
According to some embodiments of the present disclosure, based on the foregoing solution, the generating a target mesh based on the spline includes: selecting any point on the spline, and generating, at that point, a target straight line perpendicular to the spline in a preset two-dimensional plane; copying the target straight line along the spline to obtain a target straight line for each point on the spline; offsetting the target straight line of each point on the spline according to the normal direction of that point; and skinning the spline and the offset target straight lines to generate the target mesh.
According to some embodiments of the present disclosure, based on the foregoing solution, the calculating a flow vector according to the tangential direction of the spline and the terrain parameters of the flow region includes: acquiring an initial flow vector in a first direction and an initial flow vector in a second direction for each vertex in the target mesh based on the tangential direction of the spline; acquiring a gradient vector in the first direction and a gradient vector in the second direction for each vertex based on the terrain parameters of the flow region; calculating the difference between the initial flow vector and the gradient vector in the first direction to obtain the flow vector in the first direction; and calculating the difference between the initial flow vector and the gradient vector in the second direction to obtain the flow vector in the second direction.
According to some embodiments of the present disclosure, based on the foregoing solution, the calculating foam information according to the intersection region of the target mesh and the flow region includes: calculating the intersection region of the target mesh and the flow region; and generating, according to the area of the intersection region and a noise type, a random number corresponding to each vertex in the intersection region as the foam information.
According to some embodiments of the present disclosure, based on the foregoing solution, the calculating vertex pixel values of the target mesh according to the flow vector and the foam information includes: calculating a time-varying first pixel value and second pixel value of each vertex in the target mesh according to the flow vector; and calculating a third pixel value of each vertex in the target mesh based on the foam information.
According to some embodiments of the present disclosure, based on the foregoing solution, the calculating a time-varying first pixel value and second pixel value of each vertex in the target mesh according to the flow vector includes: obtaining a first initial pixel value in a first color channel and a second initial pixel value in a second color channel for each vertex in the target mesh according to a pre-configured water flow texture map; calculating time-varying offset information of the water flow texture map in the first direction and the second direction based on the flow vectors in the first direction and the second direction; and superimposing the offset information on the first initial pixel value and the second initial pixel value respectively to obtain the time-varying first pixel value and second pixel value of each vertex in the target mesh.
According to some embodiments of the present disclosure, based on the foregoing solution, the calculating a third pixel value of each vertex in the target mesh based on the foam information includes: acquiring a third initial pixel value in a third color channel for each vertex in the pre-configured intersection region; and superimposing the foam information on the third initial pixel value to obtain the third pixel value of each vertex in the target mesh.
According to some embodiments of the present disclosure, based on the foregoing solution, the rendering the target mesh according to the vertex pixel values includes: determining a diffuse reflection color component and a specular reflection color component of the target mesh based on the vertex pixel values and pre-configured material pixel values; and rendering the target mesh according to the vertex pixel values, the diffuse reflection color component, and the specular reflection color component.
According to a second aspect of the embodiments of the present disclosure, there is provided a virtual scene rendering apparatus including: a mesh module, configured to configure a spline of a flow region in a three-dimensional scene and generate a target mesh based on the spline; a calculation module, configured to calculate a flow vector according to the tangential direction of the spline and the terrain parameters of the flow region, and to calculate foam information according to the intersection region of the target mesh and the flow region; and a rendering module, configured to calculate vertex pixel values of the target mesh according to the flow vector and the foam information and to render the target mesh according to the vertex pixel values.
According to a third aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a virtual scene rendering method as in the above embodiments.
According to a fourth aspect of the embodiments of the present disclosure, there is provided an electronic device, including: one or more processors; and a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the virtual scene rendering method as in the above embodiments.
Exemplary embodiments of the present disclosure may have some or all of the following benefits:
in the technical solutions provided by some embodiments of the present disclosure, a target mesh is generated by configuring a spline in a three-dimensional scene, a flow vector and foam information are then calculated to obtain the vertex pixel values of the target mesh, and finally the target mesh is rendered to produce the water flow effect. The method avoids sampling an extra flow map from other tools to control the water flow direction: on the one hand, it reduces the number of shader instructions and removes the export and import steps between different software packages, improving rendering efficiency and saving rendering time; on the other hand, the effect can be debugged directly in the virtual engine used for rendering, so the rendering result is intuitive and what you see is what you get.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty. In the drawings:
fig. 1 schematically illustrates a flowchart of a virtual scene rendering method in an exemplary embodiment of the present disclosure;
fig. 2 schematically illustrates an initial spline in an exemplary embodiment of the present disclosure;
fig. 3 schematically illustrates a target straight line in an exemplary embodiment of the disclosure;
fig. 4 schematically illustrates a target mesh in an exemplary embodiment of the disclosure;
fig. 5 schematically illustrates another view of the target mesh in an exemplary embodiment of the disclosure;
fig. 6 schematically illustrates an intersection region in an exemplary embodiment of the disclosure;
fig. 7 schematically illustrates foam regions in an exemplary embodiment of the disclosure;
fig. 8 schematically illustrates the composition of a virtual scene rendering apparatus in an exemplary embodiment of the present disclosure;
fig. 9 schematically illustrates a computer-readable storage medium in an exemplary embodiment of the disclosure;
fig. 10 schematically illustrates the structure of a computer system of an electronic device in an exemplary embodiment of the disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known methods, devices, implementations, or operations have not been shown or described in detail to avoid obscuring aspects of the disclosure.
The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities. I.e. these functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor means and/or microcontroller means.
The flowcharts shown in the figures are illustrative only and do not necessarily include all of the contents and operations/steps, nor do they necessarily have to be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
Implementation details of the technical solution of the embodiments of the present disclosure are set forth in detail below.
Fig. 1 schematically illustrates a flowchart of a virtual scene rendering method in an exemplary embodiment of the present disclosure. As shown in fig. 1, the virtual scene rendering method includes steps S1 to S3:
S1, configuring a spline of a flow region in a three-dimensional scene, and generating a target mesh based on the spline;
S2, calculating a flow vector according to the tangential direction of the spline and the terrain parameters of the flow region, and calculating foam information according to the intersection region of the target mesh and the flow region;
and S3, calculating vertex pixel values of the target mesh according to the flow vector and the foam information, and rendering the target mesh according to the vertex pixel values.
In the technical solutions provided by some embodiments of the present disclosure, a target mesh is generated by configuring a spline in a three-dimensional scene, a flow vector and foam information are then calculated to obtain the vertex pixel values of the target mesh, and finally the target mesh is rendered to produce the water flow effect. The method avoids sampling an extra flow map from other tools to control the water flow direction: on the one hand, it reduces the number of shader instructions and removes the export and import steps between different software packages, improving rendering efficiency and saving rendering time; on the other hand, the effect can be debugged directly in the virtual engine used for rendering, so the rendering result is intuitive and what you see is what you get.
Based on the rendering requirements of virtual scenes, a flowing-water or waterfall effect that fits the existing scene terrain or scene model often needs to be produced. In the prior art, the assembled scene models are generally exported to 3ds Max or another DCC tool, a water or waterfall mesh piece following the trend of the models is created and its UVs are unwrapped, the piece is then imported into a flowmap painting tool to paint the water flow direction, and the painted flow direction is saved as a map and exported. Finally, everything is re-imported into the engine to render the material carrying the map.
The existing rendering technique is cumbersome to produce and takes a long time; an extra flowmap texture must be sampled in the material to control the flow direction, and the import and export operations across several different programs are tedious.
Therefore, the present disclosure provides a virtual scene rendering method in which a dedicated tool is developed based on Houdini (a procedural tool widely used for film visual effects) to render the waterfalls and water flows of a virtual scene, avoiding the extra sampling of a flowmap to control the water flow direction and thereby improving rendering efficiency.
Hereinafter, the steps of the virtual scene rendering method in the present exemplary embodiment will be described in more detail with reference to the drawings and the embodiments.
Step S1, a spline of a flow region in the three-dimensional scene is configured, and a target mesh is generated based on the spline.
In an embodiment of the present disclosure, step S1 may specifically include the following processes:
s11, acquiring a three-dimensional scene;
S12, configuring a spline of a flow region in the three-dimensional scene;
and S13, generating a target mesh based on the spline.
Specifically, for step S11, the virtual scene model to be rendered is first acquired. The virtual scene may include, but is not limited to: game scenes, Virtual Reality (VR) scenes, animation scenes, simulator scenes, and the like, for example a game scene rendered on a mobile phone Android system or an animation scene rendered on a PC.
The virtual scene model to be rendered may be loaded with a virtual engine. For example, a virtual scene model that requires the addition of running water or a waterfall may be passed into the virtual engine Unity to obtain the three-dimensional scene.
In an embodiment of the present disclosure, for step S12, the specific process of configuring a spline of a flow region in a three-dimensional scene includes:
step S121, setting a flow area of the three-dimensional scene, and configuring a flow track for the flow area.
Specifically, since the virtual scene is a three-dimensional model and the water flow region is a part of the three-dimensional model, when rendering the water flow effect on the virtual scene, the flow region of the three-dimensional scene, that is, the region to be rendered by the water flow, needs to be specified. The configuration flow area can be intelligently extracted from the three-dimensional model through a virtual engine, and can also be set through manual intervention.
After the flow area is set, the flow locus of the flow area needs to be configured, and the flow locus represents the main water flow direction of the flow area. The user can configure by using a control in the virtual engine, for example, click to form a key point in the flow trajectory, or by sliding and drawing, etc., and the terminal generates the flow trajectory after recognizing the operation of the user on the control.
Step S122, the trajectory points of the flow trajectory are vertically projected onto the flow region to generate an initial spline.
Specifically, the flow trajectory in the virtual engine is composed of a plurality of trajectory points; vertically projecting all trajectory points onto the flow region yields the initial spline corresponding, on the flow region, to the flow trajectory configured by the user.
In one embodiment of the present disclosure, the vertically projecting the trajectory points of the flow trajectory onto the flow region to generate an initial spline includes: vertically projecting the trajectory points of the flow trajectory onto the flow region to obtain projection points; selecting, for each projection point, the unit grid point closest to it along the projection direction as the target grid point corresponding to that projection point; and connecting the target grid points according to a preset connection rule to generate the initial spline.
Specifically, each trajectory point of the imported flow trajectory can be projected onto the flow region model along the vertical direction and, once the projection hits the model, snapped onto it at the shortest distance, so that the spline fits the flow region more closely.
It should be noted that the coordinate system in the virtual engine is regular and carries a unit grid, while the created three-dimensional virtual scene model is not necessarily regular. Therefore, when generating the initial spline, in order to make it fit the virtual scene model better, the unit grid point closest to each projection point is selected after projection and the selected points are connected into a line; that is, the line is snapped onto the flow region model at the shortest distance.
Fig. 2 schematically illustrates an initial spline in an exemplary embodiment of the present disclosure. Referring to fig. 2, 201 denotes a snap point obtained by vertically projecting a trajectory point onto the flow region, that is, a target grid point, and the initial spline 202 is obtained by connecting a plurality of snap points 201 into a line.
Step S123, the initial spline is resampled to obtain the spline.
In one embodiment of the present disclosure, to make the spline smoother, the initial spline may be resampled to subdivide its points; a sketch of the projection, snapping, and resampling steps is given below.
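For illustration, steps S122 and S123 can be sketched in Python under the simplifying assumption that the flow region model is a height field sampled on a regular unit grid (with z as the height axis). The function and variable names (project_and_snap, resample, height_field) are illustrative only and do not limit the disclosure.

```python
import numpy as np

def project_and_snap(track_points, height_field):
    """Step S122: vertically project user-drawn trajectory points onto a
    height field and snap each projection to the nearest unit-grid vertex."""
    snapped = []
    for x, y, _ in track_points:  # drop the user's height; project straight down
        gx = int(np.clip(round(x), 0, height_field.shape[0] - 1))
        gy = int(np.clip(round(y), 0, height_field.shape[1] - 1))
        snapped.append((gx, gy, height_field[gx, gy]))
    return snapped  # connecting these points in order gives the initial spline

def resample(points, step=0.5):
    """Step S123: resample the initial spline at a fixed arc-length step so
    the final spline is smoother."""
    pts = np.asarray(points, dtype=float)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)])  # cumulative arc length
    samples = np.arange(0.0, t[-1] + 1e-9, step)
    return np.stack([np.interp(samples, t, pts[:, k]) for k in range(3)], axis=1)
```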
In an embodiment of the present disclosure, for step S13, the generating a target mesh based on the spline includes:
Step S131, any point on the spline is selected, and a target straight line perpendicular to the spline is generated at that point in a preset two-dimensional plane.
In the three-dimensional coordinate system of the virtual scene, the vertical plane containing the spline is the yoz plane, and the preset two-dimensional plane is then the xoy plane perpendicular to the yoz plane. Within the preset xoy plane, the target straight line is perpendicular to the spline.
Step S132, the target straight line is copied along the spline to obtain a target straight line for each point on the spline.
A target straight line for every point on the spline is obtained by copying in the manner of step S131; these target straight lines are parallel to one another in the preset two-dimensional plane.
Step S133, the target straight line of each point on the spline is offset according to the normal direction of that point.
Fig. 3 schematically illustrates a target straight line in an exemplary embodiment of the disclosure. Referring to fig. 3, the target straight lines corresponding to all points are shifted according to the normal directions of the points on the spline 301; after the offset adjustment, the shifted target straight lines 302 are perpendicular to the actual normal directions of the points on the spline 301.
Step S134, the spline and the offset target straight lines are skinned to generate the target mesh.
Fig. 4 schematically illustrates a target mesh in an exemplary embodiment of the present disclosure. Referring to fig. 4, the spline and the shifted target straight lines are skinned into one surface based on the position of the current region, as shown at 401 in fig. 4.
Fig. 5 schematically illustrates another view of the target mesh in an exemplary embodiment of the present disclosure. Referring to fig. 5, the target mesh of fig. 4 is shown from a top view, where 501 is the target mesh and 502 is a partial flow region of the virtual scene model.
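A minimal sketch of steps S131 to S134, reusing the spline points from the previous example; stitching consecutive cross segments into quads stands in for the skinning operation, and the names (build_target_mesh, half_width, lift) are illustrative assumptions rather than the patent's implementation.

```python
import numpy as np

def build_target_mesh(spline_pts, normals, half_width=1.0, lift=0.05):
    """Steps S131-S134: create a cross segment perpendicular to the spline
    at each point, offset it along the point's normal, then 'skin'
    consecutive segments into quads forming the target mesh."""
    spline_pts = np.asarray(spline_pts, dtype=float)
    verts, quads = [], []
    for i, p in enumerate(spline_pts):
        # spline tangent at point i (one-sided difference at the endpoints)
        tangent = spline_pts[min(i + 1, len(spline_pts) - 1)] - spline_pts[max(i - 1, 0)]
        # direction perpendicular to the spline within the horizontal xoy plane
        side = np.cross(tangent, [0.0, 0.0, 1.0])
        side = side / (np.linalg.norm(side) + 1e-8)
        offset = lift * np.asarray(normals[i], dtype=float)  # shift along the point normal
        verts.append(p - half_width * side + offset)
        verts.append(p + half_width * side + offset)
    for i in range(len(spline_pts) - 1):
        a = 2 * i
        quads.append((a, a + 1, a + 3, a + 2))  # skin neighbouring segments into a quad
    return np.asarray(verts), quads
```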
Step S2, a flow vector is calculated according to the tangential direction of the spline and the terrain parameters of the flow region, and foam information is calculated according to the intersection region of the target mesh and the flow region.
In one embodiment of the present disclosure, step S2 mainly includes the following two parts:
step S21, calculating a flow vector according to the tangential direction of the spline and the terrain parameters of the flow region;
and step S22, calculating foam information according to the intersection region of the target mesh and the flow region.
It should be noted that the execution order of calculating the flow vector and calculating the foam information is not limited: in the method of the present disclosure, the flow vector may be calculated first, the foam information may be calculated first, or the two may be calculated at the same time.
In an embodiment of the present disclosure, for step S21, the calculating a flow vector according to the tangential direction of the spline and the terrain parameters of the flow region includes:
Step S211, initial flow vectors of each vertex in the target mesh in a first direction and in a second direction are acquired based on the tangential direction of the spline, and gradient vectors of each vertex in the first direction and in the second direction are acquired based on the terrain parameters of the flow region.
Referring to the target mesh of fig. 4, the preset two-dimensional plane is the xoy plane, and the first direction and the second direction are the positive x-axis direction and the positive y-axis direction of the coordinate system, respectively. A flow vector needs to be computed for each vertex in the target mesh.
The directions of the initial flow vectors are the first direction and the second direction, and their magnitudes are the flow velocities in those directions; the velocities can be configured according to the desired water flow effect, which yields the initial flow vectors of each vertex in the x and y directions.
Meanwhile, rendering the water flow must also take into account how steep the terrain of the virtual scene model is, because rugged terrain deflects the water away from its original flow direction. The terrain slope information at each vertex of the target mesh is acquired, and the gradient vectors of that vertex in the x and y directions are calculated from the slope information.
Step S212, the difference between the initial flow vector and the gradient vector in the first direction is calculated to obtain the flow vector in the first direction, and the difference between the initial flow vector and the gradient vector in the second direction is calculated to obtain the flow vector in the second direction.
After the initial flow vectors and gradient vectors of each vertex of the target mesh in the x and y directions are obtained, the flow vector of each vertex in the x and y directions is obtained by subtracting the gradient vector from the initial flow vector. This produces the effect of the water flow being pushed back by the terrain; in other words, the flow vector of the water is corrected according to the terrain.
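A sketch of steps S211 and S212 under the same height-field assumption as above; the flow_speed parameter and the central-difference gradient are illustrative choices, not values prescribed by the patent.

```python
import numpy as np

def flow_vectors(verts, tangents, height_field, flow_speed=1.0):
    """Steps S211-S212: the initial flow from the spline tangent, minus the
    terrain gradient, gives the corrected per-vertex flow vector (x, y)."""
    flows = np.zeros((len(verts), 2))
    for i, (v, t) in enumerate(zip(verts, tangents)):
        # initial flow vector: tangent direction in the xoy plane, scaled by speed
        txy = np.asarray(t[:2], dtype=float)
        txy = txy / (np.linalg.norm(txy) + 1e-8)
        init = flow_speed * txy
        # terrain gradient at the vertex via central differences on the height field
        x = int(np.clip(round(v[0]), 1, height_field.shape[0] - 2))
        y = int(np.clip(round(v[1]), 1, height_field.shape[1] - 2))
        grad = np.array([
            (height_field[x + 1, y] - height_field[x - 1, y]) * 0.5,
            (height_field[x, y + 1] - height_field[x, y - 1]) * 0.5,
        ])
        flows[i] = init - grad  # steeper terrain pushes the flow away harder
    return flows
```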
In an embodiment of the present disclosure, for step S22, the calculating foam information according to the intersection region of the target mesh and the flow region includes:
Step S221, the intersection region of the target mesh and the flow region is calculated.
The target mesh is a relatively smooth surface generated from the spline, while the flow region is part of the model of the original virtual scene. On the real virtual scene model there are therefore regions where the target mesh carrying the water flow intersects the flow region of the virtual model, and in these intersection regions the water is blocked by obstacles and produces foam.
The points where the two surfaces intersect are obtained from the surface of the target mesh and the surface of the flow region, and a closed intersection region is formed on the target mesh from these intersection points.
Fig. 6 schematically illustrates an intersection region in an exemplary embodiment of the present disclosure. Referring to fig. 6, the target mesh 601 contains a plurality of intersection regions 602 between the target mesh and the flow region.
Step S222, a random number corresponding to each vertex in the intersection region is generated according to the area of the intersection region and a noise type, as the foam information.
In one embodiment of the present disclosure, the foam information may be acquired by adding image noise. The noise type may be any one of Gaussian noise, impulse noise, gamma noise, Rayleigh noise, exponentially distributed noise, and uniformly distributed noise, which is not specifically limited in the present disclosure.
Image noise is added to each vertex in the intersection region to obtain a random number, and the size of the foam region generated by the water flow in the intersection region is then determined from the random number. The size of the produced foam region varies with the size of the intersection region.
Fig. 7 schematically illustrates foam regions in an exemplary embodiment of the present disclosure. Referring to fig. 7, the target mesh 701 contains a plurality of foam regions 702.
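The following sketch illustrates step S222; treating the foam value as per-vertex noise scaled by the area of the intersection region is one plausible reading of the step, and the names (foam_info, noise, seed) are hypothetical.

```python
import numpy as np

def foam_info(intersection_verts, area, noise="uniform", seed=0):
    """Step S222: generate one random number per vertex of the intersection
    region, scaled by the region's area, as the foam information."""
    rng = np.random.default_rng(seed)
    n = len(intersection_verts)
    if noise == "gaussian":
        r = np.abs(rng.normal(0.0, 1.0, n))
    else:  # uniformly distributed noise by default
        r = rng.uniform(0.0, 1.0, n)
    # larger intersection regions yield larger foam values
    return np.clip(r * area, 0.0, 1.0)
```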
Step S3, vertex pixel values of the target mesh are calculated according to the flow vector and the foam information, and the target mesh is rendered according to the vertex pixel values.
In one embodiment of the present disclosure, the vertex pixel values of the target mesh need to be computed first. The vertex pixel values comprise, for each vertex in the target mesh, a first pixel value in a first color channel, a second pixel value in a second color channel, and a third pixel value in a third color channel, that is, an R value in the red channel, a G value in the green channel, and a B value in the blue channel; the color of the vertex is obtained by varying and superimposing the three color channels red (R), green (G), and blue (B).
When calculating the vertex pixel values, the flow vector information and the foam information can be stored in different color channels in different distributions, yielding the RGB value of each vertex for the shader to use in rendering.
In one embodiment of the present disclosure, the calculating vertex pixel values of the target mesh according to the flow vector and the foam information includes:
Step S31, the time-varying first pixel value and second pixel value of each vertex in the target mesh are calculated according to the flow vector.
In an embodiment of the present disclosure, the specific process of calculating the first pixel value and the second pixel value in step S31 includes the following steps: step S311, obtaining a first initial pixel value in a first color channel and a second initial pixel value in a second color channel for each vertex in the target mesh according to a pre-configured water flow texture map; step S312, calculating time-varying offset information of the water flow texture map in the first direction and the second direction based on the flow vectors in the first direction and the second direction; and step S313, superimposing the offset information on the first initial pixel value and the second initial pixel value respectively to obtain the time-varying first pixel value and second pixel value of each vertex in the target mesh.
The water flow texture map is configured in advance and is used to simulate the form of the water flow; in the initial state, each vertex in the target mesh is static. The water flow texture map carries values over the texture coordinates uv, and assigning the map to the target mesh yields the uv values of the vertices of the target mesh, namely the first initial pixel value in the first color channel and the second initial pixel value in the second color channel; the first color channel may be the red channel (R value) and the second color channel may be the green channel (G value).
The flow vector determines how the color value of each vertex in the target mesh is offset over time along the x and y directions, simulating the changing color of the flowing water at each vertex.
Finally, the color value offsets of each vertex in the x and y directions are superimposed on the initial pixel values of the R and G channels, and the time-varying first pixel value (R value) and second pixel value (G value) of each vertex are calculated.
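A sketch of steps S311 to S313; scrolling each vertex's uv coordinates by its flow vector over time and sampling a tiled water texture is the usual flowmap-style interpretation, and the sampling helper below is a stand-in, not the patent's shader code.

```python
import numpy as np

def rg_pixel_values(uvs, flows, water_tex, time):
    """Steps S311-S313: offset each vertex's uv by its flow vector over time
    and sample the water texture to get the time-varying R and G values."""
    h, w, _ = water_tex.shape
    out = np.zeros((len(uvs), 2))
    for i, (uv, f) in enumerate(zip(uvs, flows)):
        u = (uv[0] + f[0] * time) % 1.0  # time-varying offset in the first direction
        v = (uv[1] + f[1] * time) % 1.0  # time-varying offset in the second direction
        texel = water_tex[int(v * (h - 1)), int(u * (w - 1))]
        out[i] = texel[0], texel[1]      # first (R) and second (G) pixel values
    return out
```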
Step S32, the third pixel value of each vertex in the target mesh is calculated based on the foam information.
In an embodiment of the present disclosure, the specific process of calculating the third pixel value in step S32 includes the following steps: step S321, acquiring a third initial pixel value in a third color channel for each vertex in the pre-configured intersection region; and step S322, superimposing the foam information on the third initial pixel value to obtain the third pixel value of each vertex in the target mesh.
Specifically, while the flow vector information is passed to the red and green channels of the vertex colors of the target mesh, the foam information may be passed to the blue channel of the vertex colors.
First, the vertex color of the intersection region needs to be configured; to match the actual color of water foam, the vertex color is set to white, that is, R: 255, G: 255, B: 255, so the third initial pixel value in the third color channel (the B channel) is 255. The time-varying foam information is then stored in the B channel, and the foam information and the third initial pixel value are superimposed to obtain the third pixel value (B value) of each vertex.
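A corresponding sketch for steps S321 and S322, assuming the white (255) initial vertex color described above; modulating the initial B value by a time-varying foam term is an illustrative choice.

```python
import numpy as np

def b_pixel_values(foam, b_init=255, time=0.0):
    """Steps S321-S322: superimpose time-varying foam information on the
    initial B value (white foam) to get the third pixel value per vertex."""
    foam = np.asarray(foam, dtype=float)
    # simple periodic modulation of the foam over time; any variation works
    wave = 0.5 + 0.5 * np.sin(2.0 * np.pi * (foam + time))
    return np.clip(b_init * foam * wave, 0, 255).astype(np.uint8)
```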
Of course, the flow vector information and the foam information may also be stored in different color channels in other distributions, as long as the flow vector information occupies two color channels and the foam information occupies the remaining one, which is not specifically limited by the present disclosure.
After the vertex pixel values of the target mesh are obtained, step S33 may be executed: rendering the target mesh according to the vertex pixel values. A shader renders the time-varying color of each vertex according to its RGB value, achieving a visual water flow rendering effect.
Specifically, a diffuse reflection color component and a specular reflection color component of the target mesh are determined based on the vertex pixel values and pre-configured material pixel values, and the target mesh is then rendered according to the vertex pixel values, the diffuse reflection color component, and the specular reflection color component.
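As a final sketch, a Lambert diffuse term plus a Blinn-Phong specular term is one standard way to obtain the diffuse and specular color components mentioned above; the patent does not specify a lighting model, so this combination is an assumption.

```python
import numpy as np

def _unit(x):
    x = np.asarray(x, dtype=float)
    return x / (np.linalg.norm(x) + 1e-8)

def shade_vertex(vertex_rgb, material_rgb, n, l, v, shininess=32.0):
    """Combine the vertex pixel value with diffuse and specular components
    (lighting model assumed, not prescribed by the patent)."""
    n, l, v = _unit(n), _unit(l), _unit(v)
    diffuse = max(np.dot(n, l), 0.0) * np.asarray(material_rgb, dtype=float)  # Lambert term
    h = _unit(l + v)  # Blinn-Phong half vector
    specular = max(np.dot(n, h), 0.0) ** shininess * np.ones(3)
    return np.clip(np.asarray(vertex_rgb, dtype=float) / 255.0 * diffuse + specular,
                   0.0, 1.0)
```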
Next, the above method is further described in detail through an embodiment in a specific application scenario.
1. Install the Houdini Engine for Unity plug-in in Unity so that the HDA tool can run in Unity;
2. add an object merge node exposed to Unity, add the running-water or waterfall virtual scene model, and pass it into the HDA tool built with Houdini;
3. add a curve node in the HDA exposed to Unity, edit the flow trajectory in Unity, and input it into the HDA;
4. use a ray node on the imported flow trajectory and the virtual scene model, so that each point of the imported flow trajectory is projected onto the imported model along the vertical direction and snapped onto the model at the shortest distance after hitting it, making the spline fit the model more closely;
5. add a resample node to resample the spline and subdivide its points, making the spline smoother;
6. add a line node to generate a line perpendicular to the spline in the xoy plane;
7. copy the perpendicular line to each point of the spline with a copy node, offsetting it according to the normal direction of each point;
8. add a skin node to skin the lines into a surface, obtaining the target mesh;
9. add a flowmap node and generate initial flow vectors according to the tangential directions of the spline;
10. add a flowmap obstacle node, input the target mesh and the flow region of the virtual scene model, and push the initial flow vector back at each vertex position of the flow region to obtain the final flow vector;
11. add an intersectionanalysis node, input the target mesh and the flow region of the virtual scene model, and obtain the intersection points of the two models;
12. add a color node and set the color information of the intersection points to white;
13. add an attribute transfer node, input the target mesh and the intersection-point information, and transfer the color of the intersection points to the target mesh to generate the foam information;
14. add an attribute vop node, write the flow vector into the RG channels of the vertex colors of the target mesh, and write the foam information into the B channel;
15. finally, output to Unity, where a shader obtains the flow vector and foam information and performs the material rendering (a sketch of scripting part of this node chain appears after this list).
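For illustration, the first few steps of the node chain above could be scripted with Houdini's Python module. The node type names used here (geo, object_merge, curve, ray, resample, line, copytopoints, skin) follow common Houdini SOP names but should be treated as assumptions, since the patent names the nodes only informally.

```python
import hou  # Houdini's Python module, available inside a Houdini session

# Build a geometry container and a minimal version of the node chain (steps 2-8).
geo = hou.node("/obj").createNode("geo", "waterfall_tool")
scene = geo.createNode("object_merge", "scene_model")   # step 2: bring in the scene model
curve = geo.createNode("curve", "flow_trajectory")      # step 3: user-edited flow trajectory

ray = geo.createNode("ray", "project_to_terrain")       # step 4: vertical projection
ray.setInput(0, curve)
ray.setInput(1, scene)

resample = geo.createNode("resample", "smooth_spline")  # step 5: resample the spline
resample.setInput(0, ray)

line = geo.createNode("line", "cross_section")          # step 6: perpendicular line
copy = geo.createNode("copytopoints", "copy_lines")     # step 7: copy to spline points
copy.setInput(0, line)
copy.setInput(1, resample)

skin = geo.createNode("skin", "target_mesh")            # step 8: skin lines into a surface
skin.setInput(0, copy)
skin.setDisplayFlag(True)
```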
Based on the above method, a virtual scene rendering tool is developed with Houdini. Rendering the virtual scene with this tool avoids sampling an extra flow map to control the water flow direction, which reduces the number of shader instructions, removes the export and import steps between different software packages, improves rendering efficiency, and saves rendering time; meanwhile, the tool can be debugged directly in the Unity engine, so the rendering effect is intuitive and what you see is what you get.
Fig. 8 schematically illustrates the composition of a virtual scene rendering apparatus in an exemplary embodiment of the disclosure. As shown in fig. 8, the virtual scene rendering apparatus 800 may include a mesh module 801, a calculation module 802, and a rendering module 803, wherein:
the mesh module 801 is configured to configure a spline of a flow region in a three-dimensional scene and generate a target mesh based on the spline;
the calculation module 802 is configured to calculate a flow vector according to the tangential direction of the spline and the terrain parameters of the flow region, and to calculate foam information according to the intersection region of the target mesh and the flow region;
and the rendering module 803 is configured to calculate vertex pixel values of the target mesh according to the flow vector and the foam information and to render the target mesh according to the vertex pixel values.
According to an exemplary embodiment of the present disclosure, the mesh module 801 includes a flow trajectory unit, an initial spline unit, and a spline unit (not shown in the figure). The flow trajectory unit is configured to set a flow region of the three-dimensional scene and configure a flow trajectory for the flow region; the initial spline unit is configured to vertically project the trajectory points of the flow trajectory onto the flow region to generate an initial spline; and the spline unit is configured to resample the initial spline to obtain the spline.
According to an exemplary embodiment of the present disclosure, the initial spline unit is configured to vertically project the trajectory points of the flow trajectory onto the flow region to obtain projection points; select, for each projection point, the unit grid point closest to it along the projection direction as the target grid point corresponding to that projection point; and connect the target grid points according to a preset connection rule to generate the initial spline.
According to an exemplary embodiment of the present disclosure, the mesh module 801 includes a target mesh unit (not shown in the figure) configured to select any point on the spline and generate, at that point, a target straight line perpendicular to the spline in a preset two-dimensional plane; copy the target straight line along the spline to obtain a target straight line for each point on the spline; offset the target straight line of each point on the spline according to the normal direction of that point; and skin the spline and the offset target straight lines to generate the target mesh.
According to an exemplary embodiment of the present disclosure, the flow vector includes a flow vector in a first direction and a flow vector in a second direction in a preset two-dimensional plane, and the calculation module 802 includes a flow vector unit (not shown in the figure) configured to acquire initial flow vectors of each vertex in the target mesh in the first direction and the second direction based on the tangential direction of the spline; acquire gradient vectors of each vertex in the first direction and the second direction based on the terrain parameters of the flow region; calculate the difference between the initial flow vector and the gradient vector in the first direction to obtain the flow vector in the first direction; and calculate the difference between the initial flow vector and the gradient vector in the second direction to obtain the flow vector in the second direction.
According to an exemplary embodiment of the present disclosure, the calculation module 802 includes a foam information unit (not shown in the figure) configured to calculate the intersection region of the target mesh and the flow region, and to generate, according to the area of the intersection region and a noise type, a random number corresponding to each vertex in the intersection region as the foam information.
According to an exemplary embodiment of the present disclosure, the vertex pixel values comprise, for each vertex in the target mesh, a first pixel value in a first color channel, a second pixel value in a second color channel, and a third pixel value in a third color channel, and the rendering module 803 includes a first unit and a second unit (not shown in the figure). The first unit is configured to calculate the time-varying first pixel value and second pixel value of each vertex in the target mesh according to the flow vector, and the second unit is configured to calculate the third pixel value of each vertex in the target mesh based on the foam information.
According to an exemplary embodiment of the present disclosure, the first unit is configured to obtain, according to a pre-configured water flow texture map, a first initial pixel value in the first color channel and a second initial pixel value in the second color channel for each vertex in the target mesh; calculate time-varying offset information of the water flow texture map in the first direction and the second direction based on the flow vectors in the first direction and the second direction; and superimpose the offset information on the first initial pixel value and the second initial pixel value respectively to obtain the time-varying first pixel value and second pixel value of each vertex in the target mesh.
According to an exemplary embodiment of the present disclosure, the second unit is configured to acquire a third initial pixel value in the third color channel for each vertex in the pre-configured intersection region, and to superimpose the foam information on the third initial pixel value to obtain the third pixel value of each vertex in the target mesh.
According to an exemplary embodiment of the present disclosure, the rendering module 803 includes a rendering unit (not shown in the figure) configured to determine a diffuse reflection color component and a specular reflection color component of the target mesh based on the vertex pixel values and pre-configured material pixel values, and to render the target mesh according to the vertex pixel values, the diffuse reflection color component, and the specular reflection color component.
The specific details of each module in the virtual scene rendering apparatus 800 have been described in detail in the corresponding virtual scene rendering method, and therefore are not described herein again.
It should be noted that although several modules or units of the device for action execution are mentioned in the above detailed description, such a division is not mandatory. Indeed, according to embodiments of the present disclosure, the features and functions of two or more modules or units described above may be embodied in one module or unit; conversely, the features and functions of one module or unit described above may be further divided so as to be embodied by a plurality of modules or units.
In an exemplary embodiment of the present disclosure, a storage medium capable of implementing the above method is also provided. Fig. 9 schematically illustrates a computer-readable storage medium in an exemplary embodiment of the disclosure. As shown in fig. 9, a program product 900 for implementing the above method according to an embodiment of the present disclosure is depicted; it may employ a portable compact disc read-only memory (CD-ROM), include program code, and run on a terminal device such as a mobile phone. However, the program product of the present disclosure is not limited thereto; in this document, a readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
In an exemplary embodiment of the present disclosure, an electronic device capable of implementing the above method is also provided. Fig. 10 schematically illustrates a structural diagram of a computer system of an electronic device in an exemplary embodiment of the disclosure.
It should be noted that the computer system 1000 of the electronic device shown in fig. 10 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 10, the computer system 1000 includes a Central Processing Unit (CPU) 1001 that can perform various appropriate actions and processes according to a program stored in a Read-Only Memory (ROM) 1002 or a program loaded from a storage section 1008 into a Random Access Memory (RAM) 1003. In the RAM 1003, various programs and data necessary for system operation are also stored. The CPU 1001, ROM 1002, and RAM 1003 are connected to each other via a bus 1004. An Input/Output (I/O) interface 1005 is also connected to the bus 1004.
The following components are connected to the I/O interface 1005: an input section 1006 including a keyboard, a mouse, and the like; an output section 1007 including a Display panel such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and a speaker; a storage portion 1008 including a hard disk and the like; and a communication section 1009 including a Network interface card such as a LAN (Local Area Network) card, a modem, or the like. The communication section 1009 performs communication processing via a network such as the internet. The driver 1010 is also connected to the I/O interface 1005 as necessary. A removable medium 1011 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 1010 as necessary, so that a computer program read out therefrom is mounted into the storage section 1008 as necessary.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer-readable medium, the computer program containing program code for performing the method illustrated by the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 1009 and/or installed from the removable medium 1011. When the computer program is executed by the Central Processing Unit (CPU) 1001, the various functions defined in the system of the present disclosure are executed.
It should be noted that the computer readable medium shown in the embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM), a flash Memory, an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer-readable signal medium may include a propagated data signal with computer-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or by hardware, and the described units may also be disposed in a processor. The names of these units do not, in some cases, constitute a limitation on the units themselves.
As another aspect, the present disclosure also provides a computer-readable medium, which may be contained in the electronic device described in the above embodiments; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by an electronic device, cause the electronic device to implement the method described in the above embodiments.
It should be noted that although several modules or units of the device for action execution are mentioned in the above detailed description, such a division is not mandatory. Indeed, according to embodiments of the present disclosure, the features and functionality of two or more modules or units described above may be embodied in a single module or unit; conversely, the features and functions of one module or unit described above may be further divided so as to be embodied by a plurality of modules or units.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software alone, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (such as a CD-ROM, a USB flash drive, or a removable hard disk) or on a network, and which includes several instructions to enable a computing device (such as a personal computer, a server, a touch terminal, or a network device) to execute the method according to the embodiments of the present disclosure.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations that follow the general principles of the disclosure, including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (8)

1. A method of rendering a virtual scene, comprising:
configuring a spline for a flow region in a three-dimensional scene, and generating a target mesh based on the spline;
acquiring initial flow vectors of each vertex of the target mesh in a first direction and a second direction of a preset two-dimensional plane based on the tangential direction of the spline, and acquiring gradient vectors of each vertex of the flow region in the first direction and the second direction based on terrain parameters of the flow region; calculating the difference between the initial flow vector and the gradient vector in the first direction and the difference between the initial flow vector and the gradient vector in the second direction, so as to obtain the flow vector of each vertex in the first direction and in the second direction; and generating a random number for each vertex in the intersection region of the target mesh and the flow region according to the area and the noise type of the intersection region, the random numbers serving as foam information;
acquiring, from a pre-configured water flow texture map, a first initial pixel value of each vertex of the target mesh in a first color channel and a second initial pixel value in a second color channel; calculating, from the flow vectors of the vertices in the first direction and the second direction, offset information by which the water flow texture map shifts over time in the first direction and the second direction, and superimposing the offset information onto the first initial pixel value and the second initial pixel value respectively, to obtain a time-varying first pixel value and second pixel value for each vertex of the target mesh; and acquiring a third initial pixel value of each vertex in the intersection region in a third color channel, and superimposing the foam information onto the third initial pixel value to obtain a third pixel value for each vertex of the target mesh, thereby obtaining the vertex pixel values; and rendering the target mesh according to the vertex pixel values.
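[Editorial illustration, not part of the claims.] The computational core of claim 1 can be sketched in Python/NumPy as follows. The helper `sample_rg` (reading the R and G channels of the water flow texture map) and the exact way the intersection area and noise type parameterise the random numbers are assumptions; the claim fixes the inputs and outputs but not these implementation details.

```python
import numpy as np

def flow_vectors(initial_flow_xy, gradient_xy):
    """Per-vertex flow vector: the claimed difference between the
    spline-derived initial flow vector and the terrain gradient vector,
    taken in the first (x) and second (y) directions. Arrays are (N, 2)."""
    return initial_flow_xy - gradient_xy

def foam_random_numbers(intersection_area, num_vertices, noise_type="uniform", seed=0):
    """One random number per vertex of the mesh/flow-region intersection.
    Scaling a uniform or normal draw by the intersection area is one guess
    at how 'according to the area and the noise type' is realised."""
    rng = np.random.default_rng(seed)
    if noise_type == "normal":
        return intersection_area * rng.normal(0.0, 1.0, num_vertices)
    return intersection_area * rng.random(num_vertices)

def vertex_pixel_values(uv, flow_xy, t, sample_rg, third_initial, foam):
    """Time-varying vertex pixel values: shift the texture UVs by flow * t
    (the 'offset information'), take the R and G channels as the first and
    second pixel values, and superimpose the foam onto the third channel."""
    shifted_uv = (uv + flow_xy * t) % 1.0        # wrap offset UVs into [0, 1)
    r, g = sample_rg(shifted_uv)                 # each of shape (N,)
    b = np.clip(third_initial + foam, 0.0, 1.0)  # third value: initial + foam
    return np.stack([r, g, b], axis=-1)          # (N, 3) vertex pixel values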
2. The virtual scene rendering method of claim 1, wherein configuring the spline for the flow region in the three-dimensional scene comprises:
setting the flow region in the three-dimensional scene, and configuring a flow trajectory for the flow region;
vertically projecting trajectory points of the flow trajectory onto the flow region to generate an initial spline;
and resampling the initial spline to obtain the spline.
3. The virtual scene rendering method of claim 2, wherein vertically projecting the trajectory points of the flow trajectory onto the flow region to generate the initial spline comprises:
vertically projecting the trajectory points of the flow trajectory onto the flow region to obtain projection points;
selecting, for each projection point, the unit grid point with the shortest distance to the projection point in the projection direction as the target grid point corresponding to that projection point;
and connecting the target grid points according to a preset connection rule to generate the initial spline.
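[Editorial illustration, not part of the claims.] A minimal sketch of the projection and resampling of claims 2 and 3, under the assumption that the flow region is sampled by a regular unit grid (`grid_xy`) and that terrain height is available as a callable; taking the nearest-grid-point distance in the projection plane is one reading of the claim language.

```python
import numpy as np

def snap_trajectory_to_grid(track_points, grid_xy, terrain_height):
    """Vertically project each 3D trajectory point onto the flow region
    and replace it by the nearest unit grid point (target grid point)."""
    snapped = []
    for p in np.asarray(track_points, dtype=float):
        xy = p[:2]                                   # vertical projection drops height
        nearest = grid_xy[np.argmin(np.linalg.norm(grid_xy - xy, axis=1))]
        snapped.append([*nearest, terrain_height(*nearest)])
    return np.asarray(snapped)                       # target grid points, in order

def resample_polyline(points, step):
    """Connect the target grid points in order (a simple connection rule)
    and resample the resulting initial spline at a fixed arc-length step."""
    points = np.asarray(points, dtype=float)
    seg_len = np.linalg.norm(np.diff(points, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg_len)])  # cumulative arc length
    new_s = np.arange(0.0, s[-1] + 1e-9, step)
    return np.stack([np.interp(new_s, s, points[:, k])
                     for k in range(points.shape[1])], axis=1)
```

Resampling at a fixed arc-length step gives spline points that are evenly spaced regardless of how unevenly the original trajectory points were placed, which keeps the later mesh cross-sections uniform.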
4. The virtual scene rendering method of claim 1, wherein generating the target mesh based on the spline comprises:
selecting any point on the spline, and generating, at that point, a target straight line perpendicular to the spline within a preset two-dimensional plane;
copying the target straight line along the spline to obtain a target straight line for each point on the spline;
offsetting the target straight line of each point on the spline along the normal direction of that point;
and performing skinning on the spline and the offset-adjusted target straight lines to generate the target mesh.
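[Editorial illustration, not part of the claims.] The sweep-and-skin construction of claim 4 might be realised as below; the cross-section half-width and sampling density (`half_width`, `n_across`) are illustrative parameters not recited in the claim, and the spline normals are assumed non-parallel to the local tangent.

```python
import numpy as np

def skin_mesh(spline_pts, normals, offsets, half_width, n_across):
    """At each spline point, build a target straight line perpendicular to
    the local tangent, shift it along the point's normal by that point's
    offset, then skin consecutive lines into a triangle mesh."""
    spline_pts = np.asarray(spline_pts, dtype=float)
    n = len(spline_pts)
    verts = []
    for i in range(n):
        # Tangent by central/one-sided differences along the spline.
        tangent = spline_pts[min(i + 1, n - 1)] - spline_pts[max(i - 1, 0)]
        tangent /= np.linalg.norm(tangent)
        side = np.cross(tangent, normals[i])           # in-plane perpendicular
        side /= np.linalg.norm(side)
        base = spline_pts[i] + normals[i] * offsets[i] # normal-direction offset
        for a in np.linspace(-half_width, half_width, n_across):
            verts.append(base + a * side)
    faces = []
    for i in range(n - 1):                             # skin neighbouring lines
        for j in range(n_across - 1):
            a, b = i * n_across + j, i * n_across + j + 1
            c, d = a + n_across, b + n_across
            faces += [(a, c, b), (b, c, d)]
    return np.asarray(verts), np.asarray(faces)
```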
5. The virtual scene rendering method of claim 1, wherein rendering the target mesh according to the vertex pixel values comprises:
determining diffuse reflection color components and specular reflection color components of the target mesh based on the vertex pixel values and pre-configured material pixel values;
and rendering the target mesh according to the vertex pixel values, the diffuse reflection color components, and the specular reflection color components.
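[Editorial illustration, not part of the claims.] Claim 5 leaves the lighting model open; the sketch below assumes a conventional Lambert diffuse / Blinn-Phong specular split and combines both components with the vertex pixel value, which is only one possible choice.

```python
import numpy as np

def shade_vertex(pixel_rgb, material_rgb, normal, to_light, to_view, shininess=32.0):
    """Diffuse and specular components derived from the vertex pixel value
    and the pre-configured material pixel value, combined for rendering."""
    normal, to_light, to_view = (v / np.linalg.norm(v)
                                 for v in (normal, to_light, to_view))
    # Lambert diffuse, modulated by both the vertex and material colors.
    diffuse = pixel_rgb * material_rgb * max(float(np.dot(normal, to_light)), 0.0)
    # Blinn-Phong specular via the half-vector between light and view.
    half_vec = (to_light + to_view) / np.linalg.norm(to_light + to_view)
    specular = material_rgb * max(float(np.dot(normal, half_vec)), 0.0) ** shininess
    return np.clip(pixel_rgb + diffuse + specular, 0.0, 1.0)
```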
6. A virtual scene rendering apparatus, comprising:
a mesh module, configured to configure a spline for a flow region in a three-dimensional scene and to generate a target mesh based on the spline;
a calculating module, configured to acquire initial flow vectors of each vertex of the target mesh in a first direction and a second direction of a preset two-dimensional plane based on the tangential direction of the spline, and to acquire gradient vectors of each vertex of the flow region in the first direction and the second direction based on terrain parameters of the flow region; to calculate the difference between the initial flow vector and the gradient vector in the first direction and the difference between the initial flow vector and the gradient vector in the second direction, so as to obtain the flow vector of each vertex in the first direction and in the second direction; and to generate a random number for each vertex in the intersection region of the target mesh and the flow region according to the area and the noise type of the intersection region, the random numbers serving as foam information;
a rendering module, configured to acquire, from a pre-configured water flow texture map, a first initial pixel value of each vertex of the target mesh in a first color channel and a second initial pixel value in a second color channel; to calculate, from the flow vectors of the vertices in the first direction and the second direction, offset information by which the water flow texture map shifts over time in the first direction and the second direction, and to superimpose the offset information onto the first initial pixel value and the second initial pixel value respectively, to obtain a time-varying first pixel value and second pixel value for each vertex of the target mesh; and to acquire a third initial pixel value of each vertex in the intersection region in a third color channel, and to superimpose the foam information onto the third initial pixel value to obtain a third pixel value for each vertex of the target mesh, thereby obtaining the vertex pixel values, and to render the target mesh according to the vertex pixel values.
7. A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, implements a virtual scene rendering method as claimed in any one of claims 1 to 5.
8. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the virtual scene rendering method of any one of claims 1 to 5.
CN202110335489.8A 2021-03-29 2021-03-29 Virtual scene rendering method and device, storage medium and electronic equipment Active CN113034662B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110335489.8A CN113034662B (en) 2021-03-29 2021-03-29 Virtual scene rendering method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN113034662A CN113034662A (en) 2021-06-25
CN113034662B (en) 2023-03-31

Family

ID: 76452739

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110335489.8A Active CN113034662B (en) 2021-03-29 2021-03-29 Virtual scene rendering method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN113034662B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113888398B (en) * 2021-10-21 2022-06-07 北京百度网讯科技有限公司 Hair rendering method and device and electronic equipment
CN115578277A (en) * 2022-09-30 2023-01-06 北京字跳网络技术有限公司 Liquid rendering method, device, equipment, computer readable storage medium and product
CN116563445B (en) * 2023-04-14 2024-03-19 深圳崇德动漫股份有限公司 Cartoon scene rendering method and device based on virtual reality
CN116468838B (en) * 2023-06-13 2023-08-18 江西省水投江河信息技术有限公司 Regional resource rendering method, system, computer and readable storage medium
CN117520441B (en) * 2024-01-03 2024-03-15 国网浙江省电力有限公司金华供电公司 Method, device and equipment for detecting abnormity of fund flow data

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111862291A (en) * 2020-07-10 2020-10-30 完美世界(北京)软件科技发展有限公司 Aqueous baking method and apparatus, storage medium, and electronic apparatus

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6983082B2 (en) * 2002-11-15 2006-01-03 Warner Bros. Entertainment Inc. Reality-based light environment for digital imaging in motion pictures
TW200945249A (en) * 2008-04-28 2009-11-01 Inst Information Industry Method for rendering fluid
JP5451285B2 (en) * 2009-09-24 2014-03-26 キヤノン株式会社 Image processing apparatus and image processing method
FR2974214B1 (en) * 2011-04-12 2013-05-24 Real Fusio France METHOD AND SYSTEM FOR RENDERING A THREE-DIMENSIONAL VIRTUAL SCENE
GB2499694B8 (en) * 2012-11-09 2017-06-07 Sony Computer Entertainment Europe Ltd System and method of image reconstruction
US9892550B2 (en) * 2013-10-08 2018-02-13 Here Global B.V. Photorealistic rendering of scenes with dynamic content
CN108479067B (en) * 2018-04-12 2019-09-20 网易(杭州)网络有限公司 The rendering method and device of game picture
CN109685869B (en) * 2018-12-25 2023-04-07 网易(杭州)网络有限公司 Virtual model rendering method and device, storage medium and electronic equipment

Also Published As

Publication number Publication date
CN113034662A (en) 2021-06-25

Similar Documents

Publication Publication Date Title
CN113034662B (en) Virtual scene rendering method and device, storage medium and electronic equipment
CN107358643B (en) Image processing method, image processing device, electronic equipment and storage medium
CN111340928B (en) Ray tracing-combined real-time hybrid rendering method and device for Web end and computer equipment
CN104794758B (en) A kind of method of cutting out of 3-D view
US8838419B2 (en) System and method for simulating machining objects
CN109448137B (en) Interaction method, interaction device, electronic equipment and storage medium
RU2427918C2 (en) Metaphor of 2d editing for 3d graphics
US8791958B2 (en) Method, apparatus, and program for displaying an object on a computer screen
CN111768488B (en) Virtual character face model processing method and device
CN112734896A (en) Environment shielding rendering method and device, storage medium and electronic equipment
Cárcamo et al. Collaborative design model review in the AEC industry
US20210272345A1 (en) Method for Efficiently Computing and Specifying Level Sets for Use in Computer Simulations, Computer Graphics and Other Purposes
US11625900B2 (en) Broker for instancing
CN110378948B (en) 3D model reconstruction method and device and electronic equipment
CN112121437B (en) Movement control method, device, medium and electronic equipment for target object
Shen et al. Urban planning using augmented reality
US8952968B1 (en) Wave modeling for computer-generated imagery using intersection prevention on water surfaces
CN109493428B (en) Optimization method and device for three-dimensional virtual model, electronic equipment and storage medium
JP2001325615A (en) Device and method for processing three-dimensional model and program providing medium
CN110363860B (en) 3D model reconstruction method and device and electronic equipment
CN114820895A (en) Animation data processing method, device, equipment and system
WO2021242121A1 (en) Method for generating splines based on surface intersection constraints in a computer image generation system
Lu Lu Large Scale Immersive Holograms with Microsoft Hololens
Choi et al. Optimal close‐up views for precise 3D manipulation
CN116899216B (en) Processing method and device for special effect fusion in virtual scene

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant