CN112967366B - Volume light rendering method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN112967366B
Authority
CN
China
Prior art keywords
map
value
depth
volume light
dimensional model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110273388.2A
Other languages
Chinese (zh)
Other versions
CN112967366A (en)
Inventor
易律
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Shell Wood Software Co., Ltd.
Original Assignee
Beijing Shell Wood Software Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Shell Wood Software Co., Ltd.
Priority to CN202110273388.2A
Publication of CN112967366A
Application granted
Publication of CN112967366B
Legal status: Active


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • G06T15/005 - General purpose rendering architectures
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 - Controlling the output signals based on the game progress
    • A63F13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/60 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • G06T15/04 - Texture mapping
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • G06T15/50 - Lighting effects
    • G06T15/60 - Shadow generation

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application provides a volume light rendering method and device, an electronic device, and a storage medium, which are used for solving the problem of ghosting in a rendered volume light map. The method comprises the following steps: obtaining a depth map and a shadow map of a three-dimensional model, wherein the depth map stores depth values obtained by depth-sampling the three-dimensional model in the direction starting from a camera viewpoint, and the shadow map stores depth values obtained by shadow-sampling the three-dimensional model in the direction starting from a light source viewpoint; dividing each depth value in the depth map along the direction starting from the camera viewpoint with a plurality of sampling points to obtain stepping values; randomly offsetting the stepping values, and comparing the randomly offset stepping values with each depth value in the shadow map to obtain a comparison result; selecting and superimposing color values according to the comparison result to obtain a superimposed result map; and rendering the three-dimensional model in the direction starting from the camera viewpoint according to the superimposed result map to obtain the volume light map of the three-dimensional model.

Description

Volume light rendering method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the technical field of graphics rendering and image processing, and in particular, to a method, an apparatus, an electronic device, and a storage medium for volume light rendering.
Background
Volume light is a lighting effect used in games to represent the way light striking an occluding object leaks radially around it; the leaked shafts of light are what is meant by volume light. A specific example: when sunlight shines through the cloud layer onto a tree, it passes through the gaps between leaves to form light shafts, which are diffused by fog. The effect is called volume light because, compared with conventional in-game lighting, such light gives the player a visual sense of spatial volume and thus a more realistic impression.
At present, volume light rendering of a three-dimensional model in a game is generally performed by accumulating the results of multiple frames and then taking the average. In practice, however, when a moving object model or character model appears, that is, when the image content of adjacent frames in the video changes greatly, the result obtained by accumulating multiple frames and averaging carries a large error, so that ghosting appears in the rendered volume light map.
Disclosure of Invention
An objective of the embodiments of the present application is to provide a volume light rendering method and device, an electronic device, and a storage medium, which are used for mitigating the problem of ghosting in a rendered volume light map.
The embodiment of the application provides a volume light rendering method, which comprises the following steps: obtaining a depth map and a shadow map of the three-dimensional model, wherein the depth map stores depth values obtained by depth-sampling the three-dimensional model in the direction starting from a camera viewpoint, and the shadow map stores depth values obtained by shadow-sampling the three-dimensional model in the direction starting from a light source viewpoint; dividing each depth value in the depth map along the direction starting from the camera viewpoint with a plurality of sampling points to obtain stepping values, wherein a stepping value is the separation distance between adjacent sampling points among the plurality of sampling points; randomly offsetting the stepping values, and comparing the randomly offset stepping values with each depth value in the shadow map to obtain a comparison result; selecting and superimposing color values according to the comparison result to obtain a superimposed result map; and rendering the three-dimensional model in the direction starting from the camera viewpoint according to the superimposed result map to obtain the volume light map of the three-dimensional model. In this implementation, each depth value in the depth map of the three-dimensional model is divided with sampling points along the direction starting from the camera viewpoint to obtain stepping values; the stepping value corresponding to each depth value is randomly offset; the randomly offset stepping values are compared with each depth value in the shadow map to obtain a comparison result; and selection, superimposition, and rendering are then performed according to the comparison result, yielding a volume light map free of ghosting. That is, the depth value corresponding to each pixel is used as the sampling criterion for screening that pixel's volume light, which effectively reduces the error of the sampled volume light and thus avoids ghosting in the rendered volume light map.
Optionally, in an embodiment of the present application, obtaining the depth map and the shadow map of the three-dimensional model includes: rendering the three-dimensional model in the direction starting from the camera viewpoint to obtain the depth map, and rendering the three-dimensional model in the direction starting from the light source viewpoint to obtain the shadow map. In this implementation, the depth map and the shadow map of the three-dimensional model are rendered in advance, which avoids having to render them at the moment they are needed, effectively reduces the amount of computation during real-time rendering, and improves the real-time rendering speed.
Optionally, in an embodiment of the present application, randomly offsetting the stepping value and comparing the randomly offset stepping value with each depth value in the shadow map includes: extracting a random offset value in the direction starting from the camera viewpoint from a preset sampling noise map; superimposing the stepping value of each depth value of the depth map with the random offset value to obtain a superimposed stepping value corresponding to each depth value; and comparing the superimposed stepping value corresponding to each depth value with each depth value in the shadow map. In this implementation, ray marching normally requires a large number of samples; with a limited sample count, superimposing the stepping value of each depth value of the depth map with a random offset value before sampling and then superimposing the sampling results of multiple frames improves sampling precision while reducing the amount of rendering computation.
Optionally, in an embodiment of the present application, selecting and superimposing color values according to the comparison result includes: for each pixel to be rendered by the current camera, accumulating the volume light color values of the sampling points in the depth map according to the comparison result to obtain a volume light accumulation result map; and acquiring a historical volume light rendering map from the cache, and selecting and superimposing color values of the volume light accumulation result map and the historical volume light rendering map. In this implementation, color values are selected and superimposed between the volume light accumulation result map and the historical volume light rendering map, with the depth values in the maps used as the screening criterion for whether to superimpose; this effectively reduces the error of the sampled volume light and thus avoids ghosting in the rendered volume light map.
Optionally, in an embodiment of the present application, accumulating the volume light color value of each sampling point in the depth map according to the comparison result includes: judging whether the comparison result is that the stepping value of the sampling point after random offset is greater than the depth value of the sampling point in the shadow map; if yes, the volume light color value of the sampling point is not accumulated, since the sampling point is then in shadow; if not, the volume light color value of the sampling point is accumulated. In this implementation, the depth values in the maps are used as the criterion for accumulating volume light color values, so the volume light color value of each pixel is effectively obtained.
Optionally, in an embodiment of the present application, selecting and superimposing color values of the volume light accumulation result map and the historical volume light rendering map includes: for each pixel in the volume light accumulation result map, judging whether the difference between the volume light color value of the pixel in the volume light accumulation result map and the alpha value of the pixel in the historical volume light rendering map is smaller than a preset threshold, wherein the alpha value is the depth value of the sampling point in the direction starting from the camera viewpoint; if so, superimposing the volume light color value of the pixel in the volume light accumulation result map with the volume light color value of the pixel in the historical volume light rendering map.
Optionally, in an embodiment of the present application, after obtaining the volume light map of the three-dimensional model, the method further includes: synthesizing the animation video of the three-dimensional model according to the volume light map of the three-dimensional model. In this implementation, the ghost-free volume light map is used to produce the animation video of the three-dimensional model, which improves the quality of the animation video and avoids ghosting in it.
The embodiment of the application also provides a volume light rendering device, which comprises: the model map obtaining module is used for obtaining a depth map and a shadow map of the three-dimensional model, wherein the depth map is a depth value for performing depth sampling on the three-dimensional model in the direction taking a camera viewpoint as a starting point, and the shadow map is a depth value for performing shadow sampling on the three-dimensional model in the direction taking a light source viewpoint as a starting point; the depth map dividing module is used for dividing each depth value in the depth map along the direction of a camera viewpoint serving as a starting point by using a plurality of sampling points to obtain a stepping value, wherein the stepping value is an interval distance value between adjacent sampling points in the plurality of sampling points; the comparison result obtaining module is used for carrying out random offset on the stepping value, and then comparing the stepping value subjected to random offset with each depth value in the shadow map to obtain a comparison result; the result mapping obtaining module is used for selecting and superposing color values according to the comparison result to obtain a superposed result mapping; and the volume light map obtaining module is used for rendering the three-dimensional model in the direction with the camera viewpoint as a starting point according to the superimposed result map to obtain the volume light map of the three-dimensional model.
Optionally, in an embodiment of the present application, the model map obtaining module includes: and the three-dimensional model rendering module is used for rendering the three-dimensional model by taking the camera viewpoint as a starting point direction to obtain a depth map, and rendering the three-dimensional model by taking the light source viewpoint as a starting point direction to obtain a shadow map.
Optionally, in an embodiment of the present application, the comparison result obtaining module includes: the noise map extracting module is used for extracting a random offset value in the direction taking the camera viewpoint as a starting point from a preset sampling noise map; the sampling random offset module is used for superposing the stepping value of each depth value of the depth map with the random offset value to obtain a superposed stepping value corresponding to each depth value; and the stepping depth comparison module is used for comparing the overlapped stepping value corresponding to each depth value with each depth value in the shadow map.
Optionally, in an embodiment of the present application, the result map obtaining module includes: the accumulating result obtaining module is used for accumulating the volume light color value of each sampling point in the depth map according to the comparison result aiming at each pixel point to be rendered by the current camera to obtain a volume light accumulating result map; and the map selecting and superposing module is used for acquiring the historical volume light rendering map from the cache, and selecting and superposing the volume light accumulation result map and the historical volume light rendering map with color values.
Optionally, in an embodiment of the present application, the accumulation result obtaining module includes: a comparison result judging module, used for judging whether the comparison result is that the stepping value of the sampling point after random offset is greater than the depth value of the sampling point in the shadow map; a comparison result affirming module, used for not accumulating the volume light color value of the sampling point if the comparison result is that the stepping value of the sampling point after random offset is greater than the depth value of the sampling point in the shadow map, the sampling point then being in shadow; and a comparison result negating module, used for accumulating the volume light color value of the sampling point if the comparison result is that the stepping value of the sampling point after random offset is not greater than the depth value of the sampling point in the shadow map.
Optionally, in an embodiment of the present application, the mapping selection stacking module includes: the map difference judging module is used for judging whether the difference between the volume light color value of the pixel point in the volume light accumulation result map and the alpha value of the pixel point in the historical volume light rendering map is smaller than a preset threshold value or not according to each pixel point in the volume light accumulation result map, wherein the alpha value is the depth value of the sampling point in the direction of taking the camera viewpoint as a starting point; and the light color value superposition module is used for superposing the volume light color value in the volume light accumulation result map of the pixel point and the volume light color value of the pixel point in the historical volume light rendering map if the difference value between the volume light color value of the pixel point in the volume light accumulation result map and the alpha value of the pixel point in the historical volume light rendering map is smaller than a preset threshold value.
Optionally, in an embodiment of the present application, the volume light rendering device further includes: an animation video synthesis module, used for synthesizing the animation video of the three-dimensional model according to the volume light map of the three-dimensional model.
The embodiment of the application also provides an electronic device, which comprises: a processor and a memory storing machine-readable instructions executable by the processor; when executed by the processor, the instructions perform the method described above.
The present embodiments also provide a storage medium having stored thereon a computer program which, when executed by a processor, performs a method as described above.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and should not be regarded as limiting the scope; a person skilled in the art may derive other related drawings from them without inventive effort.
Fig. 1 is a schematic flow chart of a volume light rendering method according to an embodiment of the present application;
Fig. 2 is a schematic diagram of rendering a three-dimensional model according to an embodiment of the present application;
Fig. 3 is a schematic diagram of the division of stepping values provided by an embodiment of the present application;
Fig. 4 is a schematic diagram of obtaining maps and selecting-and-superimposing provided by an embodiment of the present application;
Fig. 5 is a schematic structural diagram of a volume light rendering device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
Before describing the volumetric light rendering method provided in the embodiments of the present application, some concepts involved in the embodiments of the present application are described first:
Three-dimensional models are three-dimensional polygonal representations of objects and are typically displayed using a computer or other video equipment; the displayed object may be a real-world entity or a fictitious object. Of course, objects existing in physical nature may be represented by three-dimensional models; in the embodiments herein, the object may in particular be an opaque object.
Ray marching (RayMarching) is a fast rendering method for real-time scenes. Taking ray-marching sampling in the direction starting from the camera viewpoint as an example: a ray, also called a step ray, is emitted from the camera position toward each pixel of the screen and advances by a certain step size; at each advance it is detected whether the ray is currently on an object surface, and the advance amplitude is adjusted by the step size until the surface is reached, after which a color value is computed by the usual ray tracing method. If the object surface cannot be reached, it can be determined that the pixel has no corresponding object.
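For illustration only, the following is a minimal C++ sketch of the ray-marching loop just described, using a toy signed-distance function as the object-surface test; the scene, names, and constants are illustrative assumptions, not part of the patent:

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

// Illustrative scene: signed distance from point p to a unit sphere at the origin.
static float sceneSDF(Vec3 p) {
    return std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z) - 1.0f;
}

// March a step ray from origin along dir: advance by the scene distance each
// iteration and stop when an object surface is reached (distance ~ 0).
static bool rayMarch(Vec3 origin, Vec3 dir, float maxDist, float &hitDist) {
    float t = 0.0f;
    for (int i = 0; i < 128 && t < maxDist; ++i) {
        Vec3 p = add(origin, mul(dir, t));
        float d = sceneSDF(p);              // distance to the nearest surface
        if (d < 1e-3f) { hitDist = t; return true; }
        t += d;                             // adjust the advance amplitude
    }
    return false;  // the pixel has no corresponding object
}

int main() {
    float t = 0.0f;
    if (rayMarch({0.0f, 0.0f, -3.0f}, {0.0f, 0.0f, 1.0f}, 100.0f, t))
        std::printf("surface hit at distance %.3f\n", t);
    return 0;
}
```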
A depth texture (DepthTexture) is a map generated during rendering: for each frame rendered in real time, depth values are written into a depth buffer, which serves as an input to subsequent rendering steps, and each frame's result stored in the depth buffer is used as a depth map. The depth map can be used to render certain special effects (e.g., volume light).
Shadow texture, also known as a shadow depth map (Shadow Depth Texture), refers to a depth map used to render shadows under different light sources.
A server refers to a device that provides computing services over a network, for example an x86 server or a non-x86 server; non-x86 servers include mainframes, minicomputers, and UNIX servers.
It should be noted that, the method for rendering volumetric light provided in the embodiments of the present application may be executed by an electronic device, where the electronic device refers to a device terminal having a function of executing a computer program or the server described above, and the device terminal is for example: smart phones, personal computers (personal computer, PCs), tablet computers, personal digital assistants (personal digital assistant, PDAs), mobile internet appliances (mobile Internet device, MIDs), network switches or network routers, and the like.
Before introducing the volume light rendering method provided in the embodiments of the present application, application scenarios to which the method is applicable are described; these include, but are not limited to: game scenarios, in which the volume light in a game frame can be rendered using the method, animation scenarios, in which ghost-free animation images or videos can be produced using the method, and the like.
Please refer to fig. 1, which is a schematic flow chart of the volume light rendering method according to an embodiment of the present application. Each depth value in the depth map of the three-dimensional model is divided with sampling points along the direction starting from the camera viewpoint to obtain stepping values; the stepping value corresponding to each depth value in the depth map is randomly offset; the randomly offset stepping values are compared with each depth value in the shadow map to obtain a comparison result; and selection, superimposition, and rendering are performed according to the comparison result, yielding a volume light map free of ghosting. That is, the depth value corresponding to each pixel is used as the sampling criterion for screening that pixel's volume light, which effectively reduces the error of the sampled volume light and avoids ghosting in the rendered volume light map. The volume light rendering method may include:
Step S110: a depth map and a shadow map of the three-dimensional model are obtained.
Please refer to fig. 2, which is a schematic diagram of rendering a three-dimensional model according to an embodiment of the present application. The embodiment of step S110 is, for example: the three-dimensional model is rendered with the ray marching (RayMarching) technique in the direction starting from the camera viewpoint to obtain the depth map; this can also be understood as emitting a plurality of step rays from the camera viewpoint to sample the three-dimensional model, each step ray corresponding to the depth value of one pixel in the depth map. The three-dimensional model may likewise be rendered in the direction starting from the light source viewpoint to obtain the shadow map, understood as emitting a plurality of step rays from the light source viewpoint to depth-sample the three-dimensional model, each step ray corresponding to the depth value of one pixel in the shadow map. A depth value in the shadow map is the value of the sampling point closest to the light source viewpoint in the Z-axis direction; it is therefore also called the Z-depth, i.e., the depth value of the sampling point in the Z-axis direction.
It will be appreciated that a map may comprise a plurality of elements, where an element is the depth value of a pixel (in the volume light map to be rendered) in a certain direction; for example, the depth map contains a depth value in the direction of each step ray (a step ray being a ray starting from the camera viewpoint and ending at the point where it first touches the three-dimensional model), and the shadow map contains depth values in the Z-axis direction. Thus, one pixel corresponds to one depth value. The sampling points are virtual pixel points in the step-ray direction; they are used only during computation, their specific positions are not stored, and only the results computed from them are stored. There may be multiple sampling points in one step-ray direction, so one depth value corresponds to multiple sampling points. A stepping value can simply be understood as the distance between a sampling point and its preceding adjacent sampling point; every two adjacent sampling points define one stepping value, so the ratio of the number of sampling points to the number of stepping values is N:(N-1), where N is the number of sampling points. The depth map stores depth values obtained by depth-sampling the three-dimensional model in the direction starting from the camera viewpoint, and the shadow map stores depth values obtained by shadow-sampling the three-dimensional model in the direction starting from the light source viewpoint.
After step S110, step S120 is performed: dividing each depth value in the depth map along the direction starting from the camera viewpoint with a plurality of sampling points to obtain stepping values, where a stepping value is the separation distance between adjacent sampling points among the plurality of sampling points.
Please refer to fig. 3, which is a schematic diagram of the division into stepping values provided in an embodiment of the present application. The embodiment of step S120 is, for example: the depth value of each pixel is obtained from the depth map, and a step ray is emitted, starting from the camera viewpoint and aimed at the three-dimensional model, i.e., ending at the pixel where the step ray first touches the three-dimensional model. The depth value is then divided into a plurality of sections along the step ray with a plurality of sampling points, each section being one stepping value; in other words, the depth value along one step ray is divided into segments with a plurality of sampling points, and the resulting separation distance between a sampling point and its preceding adjacent sampling point is the stepping value. The number of sampling points may be set according to the specific situation (for example, 6 or 8); fig. 3, for example, shows the stepping values of 6 sampling points, numbered 1 to 6, on the step ray. The world coordinate of the current pixel on the near clipping plane of the camera is used as the starting point of the stepping; in fig. 3, for example, the sampling point numbered 1 may serve as the starting point, and the near clipping plane may be set according to the acceptable computational complexity. The depth value of the current pixel is then used as the end point of the stepping; in fig. 3, for example, the sampling point numbered 6 may serve as the end point.
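As a sketch under the assumptions above (N sampling points per ray, start on the near clipping plane, end at the pixel's depth value), the division of one step ray might look like this; the function name is illustrative:

```cpp
#include <vector>

// Distances of the N sampling points along one step ray, measured from the
// camera viewpoint. startDist is the near clipping plane, endDist the depth
// value of the current pixel taken from the depth map.
std::vector<float> divideStepRay(float startDist, float endDist, int numSamples) {
    std::vector<float> samples;
    if (numSamples < 2) return samples;
    // N sampling points bound N - 1 intervals, so the ratio of sampling
    // points to stepping values is N : (N - 1).
    float stepValue = (endDist - startDist) / float(numSamples - 1);
    for (int i = 0; i < numSamples; ++i)
        samples.push_back(startDist + stepValue * float(i));
    return samples;
}
```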
After step S120, step S130 is performed: randomly offsetting the stepping values, and comparing the randomly offset stepping values with each depth value in the shadow map to obtain a comparison result.
Step S130 may be implemented in many ways, including but not limited to the following:
In a first embodiment, a random offset value is obtained from a preset sampling noise map and then used for the random offset and the comparison; this embodiment may include:
step S131: and extracting a random offset value in the direction taking the camera viewpoint as a starting point from the preset sampling noise map.
The preset sampling noise mapping refers to a matrix mapping formed by a plurality of random offset values, wherein the random offset values are generated by random sampling, for example, time stamp data are collected, the random offset values are generated by using time stamp calculation, and the random offset values can also be generated by using a built-in pseudo random function; thus, it can be understood as sampling noise.
The embodiment of step S131 described above is, for example: obtaining a preset sampling noise map from a local cache, extracting a random offset value corresponding to each pixel point in the direction taking the camera viewpoint as a starting point from the preset sampling noise map, wherein the random offset value can be a preset range, for example, -2 to 3, and the like, the preset range is that-2 represents that all sampling points corresponding to the pixel point are moved by two units (namely, a distance value between two sampling points in the opposite direction taking the camera viewpoint as the starting point direction, then the sampling point with the number 4 in fig. 3 should be at the position of the sampling point with the number 2, other sampling points are the same, and 3 represents that all sampling points corresponding to the pixel point are moved by three units (namely, three stepping values) in the direction taking the camera viewpoint as the starting point; the pre-set sampling noise map may be pre-sampled and stored in a local cache.
Step S132: superimposing the stepping value of each depth value of the depth map with the random offset value to obtain a superimposed stepping value corresponding to each depth value.
The embodiment of step S132 is, for example: the stepping value of each depth value of the depth map is superimposed with the random offset value to obtain the superimposed stepping value corresponding to that depth value. In fig. 3, for example, the sampling points numbered 1 to 6 may be superimposed with the random offset value -2; after the superimposition, the sampling point numbered 3 lands at the position of the sampling point numbered 1, the sampling point numbered 4 at the position of the sampling point numbered 2, and so on for the other sampling points.
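A combined sketch of steps S131 and S132, assuming the preset sampling noise map is a per-pixel array of offsets expressed in units of the stepping value (the structure and names are illustrative):

```cpp
#include <vector>

// A preset sampling noise map: one random offset value per screen pixel,
// expressed in units of the stepping value (e.g. within a preset range such
// as -2 to 3). How the values were generated (timestamps or a pseudo-random
// function) is left open by the text.
struct NoiseMap {
    int width = 0, height = 0;
    std::vector<float> offsets;
    float at(int x, int y) const { return offsets[y * width + x]; }
};

// Step S132: superimpose the stepping values of one pixel's ray with the
// pixel's random offset value, moving all of its sampling points together.
void superimposeRandomOffset(std::vector<float> &samples, float stepValue,
                             const NoiseMap &noise, int px, int py) {
    float offset = noise.at(px, py) * stepValue;
    for (float &s : samples)
        s += offset;
}
```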
Step S133: comparing the superimposed stepping value corresponding to each depth value with each depth value in the shadow map.
The embodiment of step S133 is, for example: for ease of understanding and explanation, assume the random offset value is 0, so the superimposed sampling points numbered 1 to 6 remain in place. The coordinates of each sampling point (e.g., in the camera viewpoint coordinate system or the world coordinate system) are converted into the light source viewpoint coordinate system, and the Z-axis component corresponding to the superimposed stepping value of each depth value is compared with the corresponding depth value (i.e., Z-depth) in the shadow map. It will be appreciated that if the Z-axis component corresponding to the superimposed stepping value is greater than the cached depth value (i.e., Z-depth) of the shadow map, the sampling point is in shadow (e.g., the sampling points numbered 5 and 6); otherwise the sampling point is not in shadow (e.g., the sampling points numbered 1 to 4).
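A sketch of the comparison in step S133: each sampling point is converted into the light source viewpoint coordinate system and its Z-axis component is tested against the cached Z-depth of the shadow map. The identity light transform and the map layout are illustrative assumptions:

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

// Shadow map: per texel, the Z-depth of the sampling point closest to the
// light source viewpoint. x0/y0/texelSize describe its light-space footprint.
struct ShadowMap {
    int width = 0, height = 0;
    float x0 = 0, y0 = 0, texelSize = 1;
    std::vector<float> zDepth;
    float at(float x, float y) const {
        int ix = int((x - x0) / texelSize), iy = int((y - y0) / texelSize);
        if (ix < 0 || iy < 0 || ix >= width || iy >= height)
            return 1e9f;  // outside the map: treat as unshadowed
        return zDepth[iy * width + ix];
    }
};

// Illustrative transform: a light looking down the world +Z axis, so light
// space coincides with world space here; a real scene would apply the
// light source's view matrix instead.
static Vec3 worldToLightSpace(Vec3 p) { return p; }

// A sampling point is in shadow when its Z-axis component in light space is
// greater than the shadow map's cached Z-depth at that texel.
bool inShadow(Vec3 worldPos, const ShadowMap &sm) {
    Vec3 ls = worldToLightSpace(worldPos);
    return ls.z > sm.at(ls.x, ls.y);
}
```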
In a second embodiment, the stepping value is offset by a random value within a preset range and then compared, for example: a random value within a preset range (for example, -2 to 3) is generated; the stepping value of each depth value of the depth map is superimposed with the random value to obtain a superimposed stepping value corresponding to each depth value; finally, the superimposed stepping value corresponding to each depth value is compared with each depth value in the shadow map. The technical principle of this embodiment is similar to that of the first embodiment, except that the random offset value of the second embodiment is generated randomly in real time rather than generated and stored in advance; generating it in real time effectively saves storage space.
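A sketch of this real-time variant, drawing the offset from a standard PRNG over the example range instead of reading a stored noise map:

```cpp
#include <random>

// Real-time variant: generate the random offset value (in units of the
// stepping value) within the preset range, e.g. -2 to 3, each time it is
// needed, instead of storing a preset sampling noise map.
float randomOffsetValue(std::mt19937 &rng) {
    std::uniform_real_distribution<float> dist(-2.0f, 3.0f);
    return dist(rng);
}
```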
After step S130, step S140 is performed: selecting and superimposing color values according to the comparison result to obtain a superimposed result map.
The embodiment of step S140 may include:
step S141: and accumulating the volume light color value of each sampling point in the depth map according to the comparison result aiming at each pixel point to be rendered by the current camera to obtain a volume light accumulation result map.
The embodiment of step S141 is, for example: for each pixel point to be rendered by the current camera, judging whether a comparison result is that a Z-axis component corresponding to a stepping value of the sampling point after random deviation is larger than a depth value (namely Z-depth) of the sampling point in a shadow map; if the comparison result is that the Z-axis component corresponding to the step value of the sampling point after the random offset is greater than the depth value (i.e., Z-depth) of the sampling point in the shadow map, it is indicated that the sampling point is in the shadow (e.g., sampling points numbered 5 and 6), the volumetric light color values of the sampling point should not be accumulated, i.e., the illumination contribution of the sampling point to the volumetric light color is 0; if the comparison result is that the Z-axis component corresponding to the step value of the sampling point after the random offset is not greater than the depth value (i.e., Z-depth) of the sampling point in the shadow map, it is indicated that the sampling point is not in the shadow (for example, the sampling points numbered 1 to 4), the volumetric light color values of the sampling point should be accumulated, where the volumetric light color values of the sampling point may represent scattering from the sampling point to the light source viewpoint, and the volumetric light color values obtained by accumulation may represent scattering from the camera viewpoint to the light source viewpoint; after the processing is performed on each pixel point to be rendered by the current camera, the accumulated volume light color value of each pixel point to be rendered can be obtained, and the accumulated volume light color value of all the pixel points is the volume light accumulation result map.
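Putting step S141 together, a sketch of the per-pixel accumulation; the shadow test and the per-sample scattering term are passed in as callables because the text does not fix a scattering model:

```cpp
#include <cstddef>
#include <vector>

struct Color { float r, g, b; };

// Step S141 for one pixel: walk the jittered sampling points and accumulate
// the volume light color of every point that is not in shadow; shadowed
// points contribute 0. inShadowAt(i) is the comparison result for sample i,
// scatterAt(i) an assumed per-sample in-scattering term.
template <class ShadowFn, class ScatterFn>
Color accumulateVolumeLight(const std::vector<float> &samples,
                            ShadowFn inShadowAt, ScatterFn scatterAt) {
    Color sum = {0.0f, 0.0f, 0.0f};
    for (std::size_t i = 0; i < samples.size(); ++i) {
        if (inShadowAt(i)) continue;       // in shadow: contribution is 0
        Color c = scatterAt(i);            // lit: accumulate the scattering
        sum.r += c.r; sum.g += c.g; sum.b += c.b;
    }
    return sum;
}
```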
Step S142: acquiring a historical volume light rendering map from the cache, and selecting and superimposing color values of the volume light accumulation result map and the historical volume light rendering map.
Please refer to fig. 4, which is a schematic diagram of obtaining maps and selecting-and-superimposing provided in an embodiment of the present application. The embodiment of acquiring the historical volume light rendering map from the cache in step S142 is as follows: in a specific implementation, the memory of the graphics processing unit (GPU) may serve as the cache for historical volume light rendering maps, so the historical volume light rendering map may be retrieved from GPU memory. It will be appreciated that the historical volume light rendering map here may be one or more volume light rendering maps, and the specific number may be set according to the specific situation, for example 4, 6, or 8; when there are several historical volume light rendering maps, color values must be selected and superimposed for each of them. For ease of explanation, the following describes the process of superimposing one volume light rendering map.
The embodiment of selecting and superimposing color values of the volume light accumulation result map and the historical volume light rendering map in step S142 is as follows: for each pixel in the volume light accumulation result map, it is judged whether the difference between the volume light color value of the pixel in the volume light accumulation result map and the alpha value of the pixel in the historical volume light rendering map is smaller than a preset threshold. The preset threshold represents the acceptable error and may be set according to the specific situation, for example to 4, 10, or 20. The alpha value is the previously stored depth value of the sampling point in the direction starting from the camera viewpoint; it is stored in the alpha channel, i.e., it is the value previously stored in the alpha (A) channel of the RGBA image format. If the difference is smaller than the preset threshold, the volume light color of the pixel in the historical volume light rendering map is usable, i.e., the volume light color value of the pixel in the volume light accumulation result map and the volume light color value of the pixel in the historical volume light rendering map need to be superimposed. If the difference is greater than or equal to the preset threshold, the volume light color of the pixel in the historical volume light rendering map cannot be used, i.e., no superimposition is needed; it will be appreciated that using the historical color in that case would cause ghosting in the volume light map.
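A sketch of the select-and-superimpose test of step S142. The text compares the accumulation result for a pixel against the alpha (depth) value cached in the historical map; the sketch below treats both operands of that test as cached depths, which is one plausible reading rather than a confirmed one:

```cpp
#include <cmath>

// RGBA pixel: rgb holds the volume light color, a the depth value of the
// sampling point cached in the alpha channel (as in the RGBA image format).
struct RGBA { float r, g, b, a; };

// If the per-pixel difference against the history's alpha value stays below
// the preset threshold (the acceptable error, e.g. 4, 10, or 20), the history
// color is usable and is superimposed; otherwise it is skipped, since reusing
// it would reintroduce ghosting. overlayCount feeds the later averaging.
RGBA selectAndSuperimpose(RGBA current, const RGBA &history,
                          float threshold, int &overlayCount) {
    float diff = std::fabs(current.a - history.a);  // assumed depth-vs-depth test
    if (diff < threshold) {
        current.r += history.r;
        current.g += history.g;
        current.b += history.b;
        ++overlayCount;
    }
    return current;
}
```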
After step S140, step S150 is performed: and rendering the three-dimensional model in the direction of which the camera viewpoint is the starting point according to the superimposed result map to obtain the volume light map of the three-dimensional model.
The embodiment of step S150 is, for example: each pixel value in the superimposed result map was obtained by superimposition and therefore needs to be averaged; that is, each pixel value in the superimposed result map is divided by the number of superimposed volume light colors to obtain the volume light color average, and the three-dimensional model is then rendered in the direction starting from the camera viewpoint according to the volume light color average to obtain the volume light map of the three-dimensional model.
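Finally, a sketch of the averaging in step S150; the superimposed color is divided by the number of volume light colors that were superimposed into it:

```cpp
struct Color3 { float r, g, b; };

// Step S150: divide each superimposed pixel value by the number of
// superimposed volume light colors to obtain the volume light color average
// used to render the final volume light map.
Color3 averageSuperimposed(Color3 sum, int overlayCount) {
    float n = overlayCount > 0 ? float(overlayCount) : 1.0f;
    return {sum.r / n, sum.g / n, sum.b / n};
}
```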
In this implementation, each depth value in the depth map of the three-dimensional model is divided with sampling points along the direction starting from the camera viewpoint to obtain stepping values; the stepping value corresponding to each depth value in the depth map is randomly offset; the randomly offset stepping values are compared with each depth value in the shadow map to obtain a comparison result; and selection, superimposition, and rendering are then performed according to the comparison result, yielding a volume light map free of ghosting. That is, by using ray marching with multi-frame random sampling and taking the depth value corresponding to each pixel as the sampling criterion for screening that pixel's volume light, the error of the sampled volume light is effectively reduced, thereby avoiding ghosting in the rendered volume light map.
Optionally, in an embodiment of the present application, after obtaining the volume light map of the three-dimensional model, the method further includes: synthesizing the volume light map of the three-dimensional model with the rendered lighting map to obtain an animation video of the three-dimensional model, and performing subsequent processing on the animation video to obtain a processed animation video; the subsequent processing here includes, but is not limited to: processing the audio and video of the animation, adding subtitles, image recognition, face recognition, and the like. In this implementation, the ghost-free volume light map is used to produce the animation video of the three-dimensional model, which improves the quality of the animation video and avoids ghosting in it.
Please refer to fig. 5, which illustrates a schematic structural diagram of a volumetric light rendering device according to an embodiment of the present application. The embodiment of the application provides a volume light rendering device 200, which comprises:
the model map obtaining module 210 is configured to obtain a depth map and a shadow map of the three-dimensional model, where the depth map is a depth value for performing depth sampling on the three-dimensional model in a direction from a camera viewpoint, and the shadow map is a depth value for performing shadow sampling on the three-dimensional model in a direction from a light source viewpoint.
The depth map dividing module 220 is configured to divide each depth value in the depth map along the camera viewpoint as a starting direction by using a plurality of sampling points to obtain a step value, where the step value is a distance value between adjacent sampling points in the plurality of sampling points.
The comparison result obtaining module 230 is configured to randomly shift the step value, and then compare each depth value in the shadow map with the step value after the random shift to obtain a comparison result.
The result map obtaining module 240 is configured to select and superimpose color values according to the comparison result to obtain a superimposed result map.
And the volume light map obtaining module 250 is configured to render the three-dimensional model in the direction with the camera viewpoint as the starting point according to the superimposed result map, and obtain the volume light map of the three-dimensional model.
Optionally, in an embodiment of the present application, the model map obtaining module includes:
and the three-dimensional model rendering module is used for rendering the three-dimensional model by taking the camera viewpoint as a starting point direction to obtain a depth map, and rendering the three-dimensional model by taking the light source viewpoint as a starting point direction to obtain a shadow map.
Optionally, in an embodiment of the present application, the comparison result obtaining module includes:
And the noise map extracting module is used for extracting a random offset value in the direction taking the camera viewpoint as a starting point from the preset sampling noise map.
And the sampling random offset module is used for superposing the stepping value of each depth value of the depth map with the random offset value to obtain a superposed stepping value corresponding to each depth value.
And the stepping depth comparison module is used for comparing the overlapped stepping value corresponding to each depth value with each depth value in the shadow map.
Optionally, in an embodiment of the present application, the result map obtaining module includes:
and the accumulation result obtaining module is used for accumulating the volume light color value of each sampling point in the depth map according to the comparison result aiming at each pixel point to be rendered by the current camera to obtain a volume light accumulation result map.
And the map selecting and superposing module is used for acquiring the historical volume light rendering map from the cache, and selecting and superposing the volume light accumulation result map and the historical volume light rendering map with color values.
Optionally, in an embodiment of the present application, the accumulation result obtaining module includes:
and the comparison result judging module is used for judging whether the comparison result is that the stepping value of the sampling point after random deviation is larger than the depth value of the sampling point in the shadow map.
And the comparison result affirming module is used for accumulating the volume light color value of the sampling point if the comparison result is that the stepping value of the sampling point after random deviation is larger than the depth value of the sampling point in the shadow map.
And the comparison result negation module is used for not accumulating the volume light color value of the sampling point if the comparison result is that the stepping value of the sampling point after the random deviation is larger than the depth value of the sampling point in the shadow map.
Optionally, in an embodiment of the present application, the mapping selection stacking module includes:
the map difference judging module is used for judging whether the difference between the volume light color value of the pixel point in the volume light accumulation result map and the alpha value of the pixel point in the historical volume light rendering map is smaller than a preset threshold value or not according to each pixel point in the volume light accumulation result map, wherein the alpha value is the depth value of the sampling point in the direction of taking the camera viewpoint as the starting point.
And the light color value superposition module is used for superposing the volume light color value in the volume light accumulation result map of the pixel point and the volume light color value of the pixel point in the historical volume light rendering map if the difference value between the volume light color value of the pixel point in the volume light accumulation result map and the alpha value of the pixel point in the historical volume light rendering map is smaller than a preset threshold value.
Optionally, in an embodiment of the present application, the volume light rendering device further includes:
and the animation video synthesis module is used for synthesizing the animation video of the three-dimensional model according to the volume photopatterning of the three-dimensional model.
It should be understood that the apparatus corresponds to the volume light rendering method embodiment above and can perform the steps involved in that method embodiment; for the specific functions of the apparatus, reference may be made to the description above, which is not repeated here to avoid redundancy. The apparatus includes at least one software functional module that can be stored in memory in the form of software or firmware, or solidified in the operating system (OS) of the device.
An electronic device provided in an embodiment of the present application includes: a processor and a memory storing machine-readable instructions executable by the processor, which when executed by the processor perform the method as above.
The present application also provides a storage medium having stored thereon a computer program which, when executed by a processor, performs a method as above.
The storage medium may be implemented by any type of volatile or nonvolatile Memory device or combination thereof, such as static random access Memory (Static Random Access Memory, SRAM), electrically erasable Programmable Read-Only Memory (Electrically Erasable Programmable Read-Only Memory, EEPROM), erasable Programmable Read-Only Memory (Erasable Programmable Read Only Memory, EPROM), programmable Read-Only Memory (PROM), read-Only Memory (ROM), magnetic Memory, flash Memory, magnetic disk, or optical disk.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. The apparatus embodiments described above are merely illustrative, for example, of the flowcharts and block diagrams in the figures that illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, the functional modules of the embodiments in the embodiments of the present application may be integrated together to form a single part, or each module may exist alone, or two or more modules may be integrated to form a single part.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
The foregoing description is merely an optional implementation of the embodiments of the present application, but the scope of the embodiments of the present application is not limited thereto, and any person skilled in the art may easily think about changes or substitutions within the technical scope of the embodiments of the present application, and the changes or substitutions should be covered in the scope of the embodiments of the present application.

Claims (8)

1. A method of volumetric light rendering, comprising:
obtaining a depth map and a shadow map of a three-dimensional model, wherein the depth map is a depth value for performing depth sampling on the three-dimensional model in the direction taking a camera viewpoint as a starting point, and the shadow map is a depth value for performing shadow sampling on the three-dimensional model in the direction taking a light source viewpoint as a starting point;
Dividing each depth value in the depth map along the direction of the camera viewpoint as a starting point by using a plurality of sampling points to obtain a step value, wherein the step value is a spacing distance value between adjacent sampling points in the plurality of sampling points;
performing random offset on the stepping value, and comparing the stepping value subjected to random offset with each depth value in the shadow map to obtain a comparison result;
selecting and superposing color values according to the comparison result to obtain a superposed result map;
rendering the three-dimensional model in the direction of the starting point of the camera viewpoint according to the superimposed result map to obtain a volume light map of the three-dimensional model;
wherein the selecting and superimposing color values according to the comparison result includes: accumulating the volume light color values of each sampling point in the depth map according to the comparison result aiming at each pixel point to be rendered by the current camera to obtain a volume light accumulation result map; acquiring a historical volume light rendering map from a cache, and selecting and superposing color values of the volume light accumulation result map and the historical volume light rendering map;
The selecting and superimposing color values of the volumetric light accumulation result map and the historical volumetric light rendering map includes: for each pixel point in the volume light accumulation result map, judging whether the difference value between the volume light color value of the pixel point in the volume light accumulation result map and the alpha value of the pixel point in the historical volume light rendering map is smaller than a preset threshold value, wherein the alpha value is a depth value of a sampling point in the direction of taking a camera viewpoint as a starting point; if so, superposing the volume light color value in the volume light accumulation result map of the pixel point with the volume light color value of the pixel point in the historical volume light rendering map.
2. The method of claim 1, wherein the obtaining the depth map and the shadow map of the three-dimensional model comprises:
and rendering the three-dimensional model by taking the camera viewpoint as a starting point direction to obtain a depth map, and rendering the three-dimensional model by taking the light source viewpoint as a starting point direction to obtain a shadow map.
3. The method of claim 1, wherein said performing a random offset on the step value and comparing the randomly offset step value with each depth value in the shadow map comprises:
extracting, from a preset sampling noise map, a random offset value along the direction starting from the camera viewpoint;
superimposing the step value of each depth value in the depth map with the random offset value to obtain a superimposed step value corresponding to each depth value; and
comparing the superimposed step value corresponding to each depth value with each depth value in the shadow map.
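A smaller sketch of just this jitter step, under the same caveats (jittered_steps and its parameters are hypothetical names, and the noise map is assumed to tile across the screen):

```python
import numpy as np

def jittered_steps(step, noise_map, x, y, num_samples):
    """Claim 3 sketch: read a random offset for pixel (x, y) from a preset,
    tiled sampling noise map and superimpose it on every step along the ray."""
    offset = noise_map[y % noise_map.shape[0], x % noise_map.shape[1]] * step
    return np.arange(num_samples) * step + offset  # superimposed step values
```

Jittering each pixel's samples with a tiled noise texture is a common way to trade the banding of uniform step sizes for unstructured noise, which the temporal superposition of claim 1 can then smooth over successive frames.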
4. The method of claim 1, wherein said accumulating the volume light color value of each sampling point in the depth map according to the comparison result comprises:
judging whether the comparison result indicates that the randomly offset step value of the sampling point is greater than the depth value of the sampling point in the shadow map;
if yes, accumulating the volume light color value of the sampling point; and
if not, not accumulating the volume light color value of the sampling point.
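The per-sample decision of claim 4 reduces to a one-line predicate; in this hypothetical sketch the names are assumptions and the accumulate-when-greater convention is kept exactly as the claim states it:

```python
def accumulate_sample(offset_step, shadow_depth, accum, sample_color):
    """Claim 4 sketch: add the sample's volume light color to the running
    total only when the randomly offset step value exceeds the shadow-map
    depth value; otherwise leave the accumulator unchanged."""
    return accum + sample_color if offset_step > shadow_depth else accum
```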
5. The method of any one of claims 1 to 4, further comprising, after said obtaining the volume light map of the three-dimensional model:
synthesizing an animation video of the three-dimensional model according to the volume light map of the three-dimensional model.
6. A volume light rendering device, comprising:
a model map obtaining module, configured to obtain a depth map and a shadow map of a three-dimensional model, wherein the depth map records depth values obtained by depth-sampling the three-dimensional model along the direction starting from a camera viewpoint, and the shadow map records depth values obtained by shadow-sampling the three-dimensional model along the direction starting from a light source viewpoint;
a depth map dividing module, configured to divide each depth value in the depth map, along the direction starting from the camera viewpoint, into a plurality of sampling points to obtain a step value, wherein the step value is the spacing between adjacent sampling points among the plurality of sampling points;
a comparison result obtaining module, configured to perform a random offset on the step value and compare the randomly offset step value with each depth value in the shadow map to obtain a comparison result;
a result map obtaining module, configured to select and superimpose color values according to the comparison result to obtain a superimposed result map;
a volume light map obtaining module, configured to render the three-dimensional model along the direction starting from the camera viewpoint according to the superimposed result map to obtain a volume light map of the three-dimensional model;
wherein the selecting and superimposing color values according to the comparison result includes: for each pixel point to be rendered by the current camera, accumulating the volume light color value of each sampling point in the depth map according to the comparison result to obtain a volume light accumulation result map; and acquiring a historical volume light rendering map from a cache, and selecting and superimposing color values of the volume light accumulation result map and the historical volume light rendering map;
wherein the selecting and superimposing color values of the volume light accumulation result map and the historical volume light rendering map includes: for each pixel point in the volume light accumulation result map, judging whether the difference between the volume light color value of the pixel point in the volume light accumulation result map and the alpha value of the pixel point in the historical volume light rendering map is smaller than a preset threshold, the alpha value being the depth value of a sampling point along the direction starting from the camera viewpoint; and if so, superimposing the volume light color value of the pixel point in the volume light accumulation result map with the volume light color value of the pixel point in the historical volume light rendering map.
7. An electronic device, comprising: a processor and a memory storing machine-readable instructions which, when executed by the processor, cause the processor to perform the method of any one of claims 1 to 5.
8. A computer-readable storage medium, having stored thereon a computer program which, when executed by a processor, performs the method according to any one of claims 1 to 5.
CN202110273388.2A 2021-03-12 2021-03-12 Volume light rendering method and device, electronic equipment and storage medium Active CN112967366B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110273388.2A CN112967366B (en) 2021-03-12 2021-03-12 Volume light rendering method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110273388.2A CN112967366B (en) 2021-03-12 2021-03-12 Volume light rendering method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112967366A CN112967366A (en) 2021-06-15
CN112967366B CN112967366B (en) 2023-07-28

Family

ID=76278923

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110273388.2A Active CN112967366B (en) 2021-03-12 2021-03-12 Volume light rendering method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112967366B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113470161B (en) * 2021-06-30 2022-06-07 完美世界(北京)软件科技发展有限公司 Illumination determination method for volume cloud in virtual environment, related equipment and storage medium
CN113781611A (en) * 2021-08-25 2021-12-10 北京壳木软件有限责任公司 Animation production method and device, electronic equipment and storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6903741B2 (en) * 2001-12-13 2005-06-07 Crytek Gmbh Method, computer program product and system for rendering soft shadows in a frame representing a 3D-scene
DE102009042328B4 (en) * 2009-09-21 2024-02-15 Siemens Healthcare Gmbh Efficient determination of lighting effects during volume rendering
US8698806B2 (en) * 2009-11-09 2014-04-15 Maxon Computer Gmbh System and method for performing volume rendering using shadow calculation

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111968216A (en) * 2020-07-29 2020-11-20 完美世界(北京)软件科技发展有限公司 Volume cloud shadow rendering method and device, electronic equipment and storage medium
CN113012274A (en) * 2021-03-24 2021-06-22 北京壳木软件有限责任公司 Shadow rendering method and device and electronic equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Real-time Rendering of Realistic Undersea Illumination Effects; Lü Mengya; Liu Ding; Tang Yong; Li Ying; Zhou Shengteng; Journal of Chinese Computer Systems (小型微型计算机系统), No. 10; 200-203 *

Also Published As

Publication number Publication date
CN112967366A (en) 2021-06-15

Similar Documents

Publication Publication Date Title
CN111050210B (en) Method of performing operations, video processing system, and non-transitory computer readable medium
CN112967366B (en) Volume light rendering method and device, electronic equipment and storage medium
US20180121767A1 (en) Video deblurring using neural networks
WO2017213923A1 (en) Multi-view scene segmentation and propagation
EP2674919A2 (en) Streaming light propagation
CN112541876B (en) Satellite image processing method, network training method, related device and electronic equipment
US20230306563A1 (en) Image filling method and apparatus, decoding method and apparatus, electronic device, and medium
CN112200035B (en) Image acquisition method, device and vision processing method for simulating crowded scene
CN111193961B (en) Video editing apparatus and method
US10825231B2 (en) Methods of and apparatus for rendering frames for display using ray tracing
CN111127376A (en) Method and device for repairing digital video file
US20210082178A1 (en) Method and apparatus for processing a 3d scene
CN111199573B (en) Virtual-real interaction reflection method, device, medium and equipment based on augmented reality
CN111246196A (en) Video processing method and device, electronic equipment and computer readable storage medium
CN114119854A (en) Shadow rendering method, game file packaging method and corresponding devices
TWI784349B (en) Saliency map generation method and image processing system using the same
CN116468992B (en) Repeated correction supervision space recognition and restoration method and device
KR20090064155A (en) Method and system for parallel ray tracing by using ray set
CN115018734A (en) Video restoration method and training method and device of video restoration model
KR20140058744A (en) A system for stereoscopic images with hole-filling and method thereof
JP7387029B2 (en) Single-image 3D photography technology using soft layering and depth-aware inpainting
CN115660981A (en) Image denoising method and device, electronic equipment and computer storage medium
CN115018968A (en) Image rendering method and device, storage medium and electronic equipment
CN114841870A (en) Image processing method, related device and system
CN113470161A (en) Illumination determination method for volume cloud in virtual environment, related equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant