CN114170359A - Volume fog rendering method, device and equipment and storage medium

Volume fog rendering method, device and equipment and storage medium

Info

Publication number
CN114170359A
Authority
CN
China
Prior art keywords
stepping
current frame
fog
rendering
volume
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111296242.6A
Other languages
Chinese (zh)
Inventor
代天麒
柴毅哲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Perfect World Beijing Software Technology Development Co Ltd
Original Assignee
Perfect World Beijing Software Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Perfect World Beijing Software Technology Development Co Ltd filed Critical Perfect World Beijing Software Technology Development Co Ltd
Priority to CN202111296242.6A
Publication of CN114170359A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering

Abstract

The embodiments of the present application provide a volume fog rendering method, apparatus, device, and storage medium. When the volume fog is rendered, the rendering texture of the current frame is drawn at a down-sampled resolution, and the stepping starting point of the current frame is shifted when ray stepping is performed; after illumination is calculated from the shifted stepping points to obtain the volume fog rendering texture of the current frame, that texture is mixed with the volume fog rendering textures of the historical frames to obtain an updated volume fog rendering texture for the current frame. With this implementation, down-sampling each frame reduces the per-frame rendering overhead, and mixing the rendering textures of the current frame and the historical frames fills in the pixels missing from the current frame, so that rendering overhead is reduced and rendering smoothness is improved while the rendered volume fog retains high pixel precision.

Description

Volume fog rendering method, device and equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a volume fog rendering method, apparatus, device, and storage medium.
Background
In virtual scene development, fog or clouds in a real scene are often simulated using volume fog (Volumetric Fog). However, existing methods for rendering volume fog consume too many computing resources, and a new solution is needed.
Disclosure of Invention
Aspects of the present disclosure provide a volume fog rendering method, apparatus, device, and storage medium to reduce rendering overhead of volume fog.
The embodiments of the present application provide a volume fog rendering method, which comprises the following steps: drawing a cube model in a current frame of a target scene according to the down-sampled resolution of the target scene, wherein the cube model is used for representing the object corresponding to the volume fog in the target scene; controlling a virtual camera in the target scene to emit a stepping ray to the cube model, to obtain a stepping starting point of the stepping ray on the cube model; shifting the stepping starting point to obtain an updated stepping starting point corresponding to the current frame; controlling the stepping ray to step within the cube model along the stepping direction from the updated stepping starting point, and calculating the illumination of the volume fog to obtain the volume fog rendering texture of the current frame; mixing the volume fog rendering texture of the current frame with the volume fog rendering texture of historical frames to obtain a mixed volume fog rendering texture; and rendering the volume fog in the current frame according to the mixed volume fog rendering texture.
Further optionally, drawing a cube model in the current frame of the target scene according to the down-sampled resolution of the target scene includes: determining an image resolution of the target scene; and drawing the cube model in the current frame at a resolution 1/N times the image resolution, wherein the cube model faces the screen direction and N is a positive integer greater than 1; the historical frames comprise the N-1 frames prior to the current frame.
Further optionally, controlling a virtual camera in the target scene to emit a stepping ray to the cube model to obtain a stepping starting point of the stepping ray on the cube model includes: controlling the virtual camera to emit the stepping ray to the cube model along the line-of-sight direction to obtain a first intersection point and a second intersection point of the stepping ray with the cube model; and selecting, from the first intersection point and the second intersection point, the point closer to the virtual camera as the stepping starting point.
Further optionally, shifting the stepping starting point to obtain an updated stepping starting point corresponding to the current frame includes: shifting the stepping starting point by a specified distance along the stepping direction of the stepping ray to obtain the updated stepping starting point; the specified distance is calculated according to at least one of a stepping step size, a blue noise value and a per-frame jitter offset.
Further optionally, the specified distance is the product of the stepping step size, the blue noise value and the per-frame jitter offset.
Further optionally, controlling the stepping ray to step within the cube model along the stepping direction from the updated stepping starting point, and calculating the illumination of the volume fog, includes: controlling the stepping ray to step to a stepping point in the cube model according to a set stepping step size; reading the fog density and the shadow value of the stepping point from a preset fog density map and a preset shadow map, respectively, according to the coordinates of the stepping point; and calculating the illumination of the stepping point according to the fog density and the shadow value of the stepping point.
Further optionally, mixing the volume fog rendering texture of the current frame with the volume fog rendering texture of the historical frames to obtain a mixed volume fog rendering texture includes: performing a bilateral blur calculation on the volume fog rendering texture of the current frame to obtain a blurred volume fog rendering texture of the current frame; and mixing the blurred volume fog rendering texture of the current frame with the volume fog rendering texture of the historical frames according to the respective weighting weights of the current frame and the historical frames, to obtain the mixed volume fog rendering texture.
Further optionally, before mixing the blurred volume fog rendering texture of the current frame with the volume fog rendering texture of the historical frames according to the respective weighting weights of the current frame and the historical frames, the method further includes: calculating the depth value of the historical frame according to the texture mapping coordinates of the screen pixels of the current frame, the depth value of the current frame, and the clipping coordinate conversion matrix between the current frame and the historical frame; and determining the weighting weight of the historical frame according to the depth value of the historical frame.
The embodiments of the present application further provide an electronic device, including: a memory and a processor; the memory is to store one or more computer instructions; the processor is to execute the one or more computer instructions to perform the steps in the methods provided by the embodiments of the present application.
Embodiments of the present application further provide a computer-readable storage medium storing a computer program, where the computer program can implement the steps in the method provided in the embodiments of the present application when executed.
In the volume fog rendering method provided by the embodiments of the present application, when the volume fog is rendered, the rendering texture of the current frame is drawn at a down-sampled resolution, and the stepping starting point of the current frame is shifted when ray stepping is performed; after illumination is calculated from the shifted stepping points to obtain the volume fog rendering texture of the current frame, that texture is mixed with the volume fog rendering textures of the historical frames to obtain an updated volume fog rendering texture for the current frame. With this implementation, down-sampling each frame reduces the per-frame rendering overhead, and mixing the rendering textures of the current frame and the historical frames fills in the pixels missing from the current frame, so that rendering overhead is reduced and rendering smoothness is improved while the rendered volume fog retains high pixel precision.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
Fig. 1 is a schematic flow chart of a volume fog rendering method according to an exemplary embodiment of the present application;
Fig. 2 is a schematic diagram of a ray stepping algorithm provided in an exemplary embodiment of the present application;
Fig. 3 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In virtual scene development, fog or clouds in a real scene are often simulated using volume fog (Volumetric Fog). However, existing methods for rendering volume fog consume too many computing resources, and when volume fog with rich detail needs to be rendered, the operating efficiency of the rendering system is greatly affected.
In view of the above technical problem, in some embodiments of the present application, a solution is provided, and the technical solutions provided by the embodiments of the present application will be described in detail below with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart of a volume fog rendering method according to an exemplary embodiment of the present application. As shown in fig. 1, the method includes the following steps:
Step 101: draw a cube model in the current frame of a target scene according to the down-sampling resolution of the target scene; the cube model is used to represent the object corresponding to the volume fog in the target scene.
Step 102: control a virtual camera in the target scene to emit a stepping ray to the cube model, obtaining a stepping starting point of the stepping ray on the cube model.
Step 103: shift the stepping starting point to obtain an updated stepping starting point corresponding to the current frame.
Step 104: control the stepping ray to step within the cube model along the stepping direction from the updated stepping starting point, and calculate the illumination of the volume fog to obtain the volume fog rendering texture of the current frame.
Step 105: mix the volume fog rendering texture of the current frame with the volume fog rendering texture of the historical frames to obtain a mixed volume fog rendering texture.
Step 106: render the volume fog in the current frame according to the mixed volume fog rendering texture.
This embodiment can be executed by a terminal device on which a rendering engine runs; the rendering engine is used to render a target scene obtained through modeling. The target scene may be a game scene, a VR (Virtual Reality) scene, or the like, which is not limited in this embodiment.
Generally, the image corresponding to a target scene is displayed at full resolution, meaning that the image resolution matches the number of physical pixels of the display, i.e., every pixel of the image is displayed. In this embodiment, to reduce the overhead of the volume fog rendering operation, the volume fog may be rendered at a down-sampled resolution of the target scene. The down-sampled resolution may be one half, one quarter, one eighth, etc. of the resolution of the image corresponding to the target scene (i.e., the full resolution), which is not limited in this embodiment.
After the down-sampled resolution is determined, the rendering engine may draw a Render Texture (RT) map of the volume fog through a custom MeshPass. Here, Mesh refers to the mesh of a model, i.e., the mesh obtained by modeling an object, and a Pass can be regarded as one drawing process; a MeshPass is therefore a custom process for drawing a mesh. In this embodiment, the rendering engine may draw the RT map of the volume fog with a custom MeshPass based on the down-sampled resolution.
When drawing the RT map of the volume fog, a cube model is drawn in the current frame of the target scene at the down-sampled resolution; the cube model is used to represent the object corresponding to the volume fog in the target scene. The current frame refers to the frame image to be rendered at the current time. After the cube model is drawn, the virtual camera in the target scene can be controlled to emit a stepping ray to the cube model, to obtain a stepping starting point of the stepping ray on the cube model. The stepping starting point is one of the intersection points of the stepping ray with the cube model, and is the starting point for calculating the illumination of the volume fog by the ray stepping algorithm (Raymarching).
In some optional embodiments, the virtual camera is controlled to emit a stepping ray to the cube model along the line-of-sight direction, and a first intersection point and a second intersection point of the stepping ray with the cube model are obtained; of the two intersection points, the one closer to the virtual camera may be selected as the stepping starting point.
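To make the intersection test concrete, the following is a minimal sketch of the standard slab method for intersecting a ray with an axis-aligned cube and choosing the nearer hit as the stepping starting point. The function names and the axis-aligned assumption are illustrative, not taken from the patent.

```python
# Minimal sketch: slab-method ray/box intersection for an axis-aligned cube.
# Returns the parametric distances (t_near, t_far) of the two intersection
# points along the ray, or None if the ray misses the box.

def ray_box_intersect(ray_origin, ray_dir, box_min, box_max):
    t_near, t_far = float("-inf"), float("inf")
    for axis in range(3):
        if abs(ray_dir[axis]) < 1e-8:  # ray parallel to this pair of slabs
            if not (box_min[axis] <= ray_origin[axis] <= box_max[axis]):
                return None
            continue
        t_a = (box_min[axis] - ray_origin[axis]) / ray_dir[axis]
        t_b = (box_max[axis] - ray_origin[axis]) / ray_dir[axis]
        t_near = max(t_near, min(t_a, t_b))
        t_far = min(t_far, max(t_a, t_b))
        if t_near > t_far:  # slab intervals do not overlap: the ray misses
            return None
    return t_near, t_far

def stepping_start_point(camera_pos, view_dir, box_min, box_max):
    """Pick the intersection closer to the virtual camera as the start point."""
    hit = ray_box_intersect(camera_pos, view_dir, box_min, box_max)
    if hit is None:
        return None
    t_near, t_far = hit
    if t_far < 0.0:  # the cube is entirely behind the camera
        return None
    t_start = max(t_near, 0.0)  # camera inside the cube: start at the camera
    return [camera_pos[i] + t_start * view_dir[i] for i in range(3)]
```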
Next, the stepping starting point may be shifted to obtain an updated stepping starting point corresponding to the current frame, and the stepping ray is controlled to step within the cube along the stepping direction from the updated stepping starting point. In the stepping process, when stepping to a stepping point, the illumination of the volume fog corresponding to the stepping point can be calculated, so that the volume fog rendering texture of the current frame is obtained.
In some optional embodiments, when the stepping starting point is shifted to obtain an updated stepping starting point corresponding to the current frame, the stepping starting point may be shifted by a specified distance along the stepping direction of the stepping ray to obtain the updated stepping starting point; the specified distance is calculated according to at least one of a stepping step size, a blue noise value, and a per-frame jitter offset.
Optionally, the specified distance is the product of the stepping step size, the blue noise value, and the per-frame jitter offset. The per-frame jitter offset is related to the frame number, and different frames may have different jitter offsets.
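A minimal sketch of this offset follows, assuming the product form described above (specified distance = stepping step size × blue noise × per-frame jitter offset). The blue-noise value would normally be read from a tiled blue-noise texture at the pixel's screen coordinate, and the particular jitter sequence used here is an assumption.

```python
# Sketch of the start-point offset, assuming
#   specified_distance = step_size * blue_noise * frame_jitter.
# blue_noise_value is normally sampled from a tiled blue-noise texture at the
# pixel's screen coordinate; the per-frame jitter sequence below (a rotating
# fraction of one step) is an assumption, not taken from the patent.

def jittered_start(start_point, step_dir, step_size,
                   blue_noise_value, frame_index, n_frames=4):
    frame_jitter = (frame_index % n_frames) / n_frames  # e.g. 0, 0.25, 0.5, 0.75
    offset = step_size * blue_noise_value * frame_jitter
    return [start_point[i] + offset * step_dir[i] for i in range(3)]
```

Because the jitter depends on the frame index, consecutive frames sample different depths along the same ray, which is what the later mixing with historical frames relies on.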
In some optional embodiments, when the stepping ray is controlled to step within the cube model along the stepping direction from the updated stepping starting point and the illumination of the volume fog is calculated, the stepping ray may be controlled to step to successive stepping points in the cube model according to a set stepping step size, as shown in fig. 2. At each stepping point, the fog density and the shadow value of the stepping point can be read from a preset fog density map and a preset shadow map (shadowmap), respectively, according to the coordinates of the stepping point, and the illumination of the stepping point is calculated from the fog density and the shadow value. After the illumination of every stepping point is calculated, the illumination of the stepping points along a given line-of-sight direction can be accumulated to obtain the illumination of the screen pixel corresponding to that direction.
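The loop below is a minimal sketch of this accumulation. sample_fog_density and sample_shadow stand in for lookups into the preset fog density map and shadow map, and the Beer-Lambert transmittance update is an assumption: the patent only states that the per-step illumination is accumulated along the line of sight.

```python
import math

# Sketch of the ray-marching loop: at each stepping point, read fog density
# and shadow, accumulate in-scattered light, and attenuate by transmittance.
# The Beer-Lambert extinction model is an assumption of this sketch.

def march_volume_fog(start, step_dir, step_size, num_steps,
                     sample_fog_density, sample_shadow, light_intensity=1.0):
    radiance = 0.0        # accumulated illumination for this screen pixel
    transmittance = 1.0   # fraction of light still reaching the camera
    pos = list(start)
    for _ in range(num_steps):
        density = sample_fog_density(pos)  # from the preset fog density map
        shadow = sample_shadow(pos)        # 0 = fully shadowed, 1 = fully lit
        radiance += transmittance * density * shadow * light_intensity * step_size
        transmittance *= math.exp(-density * step_size)
        if transmittance < 1e-3:           # early out once the fog is opaque
            break
        pos = [pos[i] + step_size * step_dir[i] for i in range(3)]
    return radiance, transmittance
```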
After the illumination calculation result of the volume fog for the current frame is obtained, the volume fog rendering texture of the current frame can be mixed with the volume fog rendering textures of the historical frames to obtain a mixed volume fog rendering texture, and the volume fog in the current frame is rendered according to the mixed texture. The historical frames refer to a number of frames before the current frame. In this embodiment, drawing the texture map at the down-sampled resolution reduces the per-frame rendering overhead, but each frame loses some resolution. Shifting the stepping starting point of the current frame during ray stepping and mixing the current frame's volume fog rendering texture with those of the historical frames compensates for this resolution loss to a certain extent, so the rendering overhead is reduced while relatively high pixel precision is maintained.
In some optional embodiments, when the volume fog rendering texture of the current frame is mixed with the volume fog rendering textures of the historical frames to obtain the mixed volume fog rendering texture, a bilateral blur (bilateral filter) calculation can first be performed on the volume fog rendering texture of the current frame to obtain a blurred volume fog rendering texture of the current frame; the blurred volume fog rendering texture of the current frame can then be mixed with the volume fog rendering textures of the historical frames according to the respective weighting weights of the current frame and the historical frames, to obtain the mixed volume fog rendering texture.
The weighting weight of a historical frame may be set according to an empirical value, for example 0.7 or 0.8. Alternatively, the weighting weight of each historical frame may be determined according to the depth value of that historical frame: before the blurred volume fog rendering texture of the current frame is mixed with the volume fog rendering texture of the historical frame, the depth value of the historical frame can be calculated from the texture mapping coordinates of the current frame's screen pixels, the depth value of the current frame, and the clipping coordinate conversion matrix between the current frame and the historical frame, and the weighting weight of the historical frame is then determined from that depth value.
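A minimal sketch of that reprojection and blend is given below. The 4x4 current-to-history clip coordinate conversion matrix is applied to the pixel's clip-space position reconstructed from its UV and depth; the depth-difference falloff used to turn the reprojected depth into a history weight is an assumption, since the patent does not specify how the weight is derived from the depth value.

```python
# Sketch of history reprojection. curr_to_hist_clip is the 4x4 clip
# coordinate conversion matrix between the current frame and the history
# frame (row-major, list of lists). uv is the screen texture coordinate in
# [0, 1]; depth is the current frame's depth at that pixel.

def reproject_to_history(uv, depth, curr_to_hist_clip):
    clip = [uv[0] * 2.0 - 1.0, uv[1] * 2.0 - 1.0, depth, 1.0]
    hist = [sum(curr_to_hist_clip[r][c] * clip[c] for c in range(4))
            for r in range(4)]
    w = hist[3] if abs(hist[3]) > 1e-8 else 1e-8
    hist_uv = ((hist[0] / w) * 0.5 + 0.5, (hist[1] / w) * 0.5 + 0.5)
    hist_depth = hist[2] / w
    return hist_uv, hist_depth

def blend_with_history(curr_color, hist_color, curr_depth, hist_depth,
                       base_hist_weight=0.8, depth_scale=50.0):
    # Assumed falloff: down-weight the history sample when its depth
    # disagrees with the current frame (e.g. after a disocclusion).
    w = base_hist_weight / (1.0 + depth_scale * abs(curr_depth - hist_depth))
    return [(1.0 - w) * c + w * h for c, h in zip(curr_color, hist_color)]
```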
After the mixed volume fog rendering texture of the current frame is obtained, the volume fog in the current frame can be rendered.
In this embodiment, when the volume fog is rendered, the rendering texture of the current frame is drawn at a down-sampled resolution, and the stepping starting point of the current frame is shifted when ray stepping is performed; after illumination is calculated from the shifted stepping points to obtain the volume fog rendering texture of the current frame, that texture is mixed with the volume fog rendering textures of the historical frames to obtain an updated volume fog rendering texture for the current frame. With this implementation, down-sampling each frame reduces the per-frame rendering overhead, and mixing the rendering textures of the current frame and the historical frames fills in the pixels missing from the current frame, so that rendering overhead is reduced and rendering smoothness is improved while the rendered volume fog retains high pixel precision.
In some optional embodiments, when the cube model is drawn in the current frame of the target scene according to the down-sampled resolution of the target scene, the image resolution of the target scene may be determined, and the cube model is drawn in the current frame at a resolution 1/N times the image resolution; the cube model faces the screen direction, and N is a positive integer greater than 1.
Optionally, N may be 4 or 16, etc., which is not limited in this embodiment. Taking N = 4 as an example: when ray stepping is performed at half resolution, the width and the height of the current frame are each reduced to 1/2, so the rendering texture obtained by down-sampled drawing contains 1/4 of the full-resolution pixels.
To compensate for the loss of image precision caused by the reduced resolution, the rendering textures of the current frame and of the N-1 frames before it can be mixed to obtain the rendering texture of the current frame. That is, in the time dimension the rendering overhead is spread across N frames, which reduces the per-frame cost; in the spatial dimension the stepping starting point of each frame is offset and the rendering textures of the N frames are mixed to make up the missing pixels, thereby balancing rendering quality against rendering overhead.
In addition, for each pixel on the screen, the number of stepping samples can be greatly increased through this per-frame offset-and-superposition scheme; that is, the base number of stepping samples for each pixel is expanded N-fold in both the spatial and the temporal dimension.
In the spatial dimension, the stepping samples of each pixel on the screen are the superposition of the spatially offset stepping samples of N frames, so the effective number of stepping samples is N times larger. In the time dimension, the stepping samples of each pixel are the superposition of the samples of the N-1 historical frames, taken at different times, and the samples of the current frame, so the effective number of stepping samples is likewise N times larger. The shortfall in sampling rate is thereby made up, and rendering efficiency is improved.
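As a toy illustration of this offset-and-superposition scheme (the concrete jitter sequence is an assumption): each of the N frames takes the same base number of steps but starts at a frame-dependent offset, so the blended result covers N times as many distinct sample depths as any single frame.

```python
# Toy illustration: with N jittered frames, the union of sample depths along
# one ray is N times the per-frame step count.

N = 4            # number of blended frames
base_steps = 8   # steps each frame takes along the ray
step_size = 1.0

all_depths = set()
for frame in range(N):
    offset = (frame % N) / N * step_size  # per-frame jitter offset
    all_depths.update(offset + k * step_size for k in range(base_steps))

print(len(all_depths))  # 32 == N * base_steps distinct depths
```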
It should be noted that the execution subjects of the steps of the methods provided in the above embodiments may all be the same device, or different devices may serve as the execution subjects of different steps. For example, the execution subject of steps 101 to 103 may be device A; alternatively, the execution subject of steps 101 and 102 may be device A while the execution subject of step 103 is device B; and so on.
In addition, in some of the flows described in the above embodiments and the drawings, a plurality of operations are included in a specific order, but it should be clearly understood that the operations may be executed out of the order presented herein or in parallel, and the sequence numbers of the operations, such as 101, 102, etc., are merely used for distinguishing different operations, and the sequence numbers do not represent any execution order per se. Additionally, the flows may include more or fewer operations, and the operations may be performed sequentially or in parallel.
It should be noted that, the descriptions of "first", "second", etc. in this document are used for distinguishing different messages, devices, modules, etc., and do not represent a sequential order, nor limit the types of "first" and "second" to be different.
Fig. 3 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present application; the electronic device may be used to perform the volume fog rendering method described in the foregoing embodiments. As shown in fig. 3, the electronic device includes: a memory 301 and a processor 302.
The memory 301 is used for storing computer programs and may be configured to store other various data to support operations on the electronic device. Examples of such data include instructions for any application or method operating on the electronic device, contact data, phonebook data, messages, pictures, videos, and so forth.
The memory 301 may be implemented, among other things, by any type of volatile or non-volatile storage device or combination thereof, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
A processor 302, coupled to the memory 301, executes the computer program in the memory 301 to: draw a cube model in a current frame of a target scene at the down-sampled resolution of the target scene, the cube model being used to represent the object corresponding to the volume fog in the target scene; control a virtual camera in the target scene to emit a stepping ray to the cube model, to obtain a stepping starting point of the stepping ray on the cube model; shift the stepping starting point to obtain an updated stepping starting point corresponding to the current frame; control the stepping ray to step within the cube model along the stepping direction from the updated stepping starting point, and calculate the illumination of the volume fog to obtain the volume fog rendering texture of the current frame; mix the volume fog rendering texture of the current frame with the volume fog rendering texture of historical frames to obtain a mixed volume fog rendering texture; and render the volume fog in the current frame according to the mixed volume fog rendering texture.
Further optionally, when drawing the cube model in the current frame of the target scene according to the down-sampled resolution of the target scene, the processor 302 is specifically configured to: determine the image resolution of the target scene; and draw the cube model in the current frame at a resolution 1/N times the image resolution, wherein the cube model faces the screen direction and N is a positive integer greater than 1; the historical frames comprise the N-1 frames prior to the current frame.
Further optionally, when controlling the virtual camera in the target scene to emit a stepping ray to the cube model to obtain a stepping starting point of the stepping ray on the cube model, the processor 302 is specifically configured to: control the virtual camera to emit the stepping ray to the cube model along the line-of-sight direction to obtain a first intersection point and a second intersection point of the stepping ray with the cube model; and select, from the first intersection point and the second intersection point, the point closer to the virtual camera as the stepping starting point.
Further optionally, when shifting the stepping starting point to obtain an updated stepping starting point corresponding to the current frame, the processor 302 is specifically configured to: shift the stepping starting point by a specified distance along the stepping direction of the stepping ray to obtain the updated stepping starting point; the specified distance is calculated according to at least one of a stepping step size, a blue noise value, and a per-frame jitter offset.
Further optionally, the specified distance is the product of the stepping step size, the blue noise value, and the per-frame jitter offset.
Further optionally, when controlling the stepping ray to step within the cube model along the stepping direction from the updated stepping starting point and calculating the illumination of the volume fog, the processor 302 is specifically configured to: control the stepping ray to step to a stepping point in the cube model according to a set stepping step size; read the fog density and the shadow value of the stepping point from a preset fog density map and a preset shadow map, respectively, according to the coordinates of the stepping point; and calculate the illumination of the stepping point according to the fog density and the shadow value of the stepping point.
Further optionally, when mixing the volume fog rendering texture of the current frame with the volume fog rendering texture of the historical frames to obtain a mixed volume fog rendering texture, the processor 302 is specifically configured to: perform a bilateral blur calculation on the volume fog rendering texture of the current frame to obtain a blurred volume fog rendering texture of the current frame; and mix the blurred volume fog rendering texture of the current frame with the volume fog rendering texture of the historical frames according to the respective weighting weights of the current frame and the historical frames, to obtain the mixed volume fog rendering texture.
Further optionally, before mixing the blurred volume fog rendering texture of the current frame with the volume fog rendering texture of the historical frames according to the respective weighting weights of the current frame and the historical frames, the processor 302 is further configured to: calculate the depth value of the historical frame according to the texture mapping coordinates of the screen pixels of the current frame, the depth value of the current frame, and the clipping coordinate conversion matrix between the current frame and the historical frame; and determine the weighting weight of the historical frame according to the depth value of the historical frame.
Further, as shown in fig. 3, the electronic device also includes: a display component 303, a communication component 304, a power component 305, an audio component 306, and other components. Fig. 3 schematically shows only some of the components, which does not mean that the electronic device includes only the components shown in fig. 3.
Among other things, display assembly 303 includes a screen, which may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
Wherein the communication component 304 is configured to facilitate wired or wireless communication between the device in which the communication component resides and other devices. The device in which the communication component is located may access a wireless network based on a communication standard, such as WiFi, 2G, 3G, 4G, or 5G, or a combination thereof. In an exemplary embodiment, the communication component receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component may be implemented based on Near Field Communication (NFC) technology, Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
The power supply assembly 305 provides power to various components of the device in which the power supply assembly is located. The power components may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device in which the power component is located.
The audio component 306 may be configured to output and/or input audio signals, among other things. For example, the audio component includes a Microphone (MIC) configured to receive an external audio signal when the device in which the audio component is located is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in a memory or transmitted via a communication component. In some embodiments, the audio assembly further comprises a speaker for outputting audio signals.
In this embodiment, when the volume fog is rendered, the rendering texture of the current frame is drawn at a down-sampled resolution, and the stepping starting point of the current frame is shifted when ray stepping is performed; after illumination is calculated from the shifted stepping points to obtain the volume fog rendering texture of the current frame, that texture is mixed with the volume fog rendering textures of the historical frames to obtain an updated volume fog rendering texture for the current frame. With this implementation, down-sampling each frame reduces the per-frame rendering overhead, and mixing the rendering textures of the current frame and the historical frames fills in the pixels missing from the current frame, so that rendering overhead is reduced and rendering smoothness is improved while the rendered volume fog retains high pixel precision.
Accordingly, the present application further provides a computer-readable storage medium storing a computer program, where the computer program is capable of implementing the steps that can be executed by the electronic device in the foregoing method embodiments when executed.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (10)

1. A volume fog rendering method, comprising:
drawing a cube model in a current frame of a target scene according to the down-sampling resolution of the target scene; the cube model is used for representing the object corresponding to the volume fog in the target scene;
controlling a virtual camera in the target scene to emit a stepping ray to the cube model, to obtain a stepping starting point of the stepping ray on the cube model;
shifting the stepping starting point to obtain an updated stepping starting point corresponding to the current frame;
controlling the stepping ray to step within the cube model along the stepping direction from the updated stepping starting point, and calculating the illumination of the volume fog to obtain the volume fog rendering texture of the current frame;
mixing the volume fog rendering texture of the current frame with the volume fog rendering texture of the historical frame to obtain a mixed volume fog rendering texture;
and rendering the volume fog in the current frame according to the mixed volume fog rendering texture.
2. The method of claim 1, wherein drawing a cube model in a current frame of a target scene according to the down-sampling resolution of the target scene comprises:
determining an image resolution of the target scene;
drawing the cube model in the current frame at a resolution 1/N times the image resolution, wherein the cube model faces the screen direction and N is a positive integer greater than 1; the historical frames comprise the N-1 frames prior to the current frame.
3. The method of claim 1, wherein controlling a virtual camera in the target scene to emit a stepping ray to the cube model to obtain a stepping starting point of the stepping ray on the cube model comprises:
controlling the virtual camera to emit the stepping ray to the cube model along the line-of-sight direction to obtain a first intersection point and a second intersection point of the stepping ray with the cube model; and
selecting, from the first intersection point and the second intersection point, the point closer to the virtual camera as the stepping starting point.
4. The method of claim 1, wherein shifting the step start point to obtain an updated step start point corresponding to the current frame comprises:
shifting the stepping starting point by a specified distance along the stepping direction of the stepping ray to obtain the updated stepping starting point; the specified distance is calculated according to at least one of a stepping step size, a blue noise value and a per-frame jitter offset.
5. The method of claim 4, wherein the specified distance is the product of the stepping step size, the blue noise value and the per-frame jitter offset.
6. The method of claim 1, wherein controlling the stepping ray to step within the cube model along the stepping direction from the updated stepping starting point and calculating the illumination of the volume fog comprises:
controlling the stepping ray to step to a stepping point in the cube model according to a set stepping step length;
reading the fog density and the shadow value of the stepping point from a preset fog density map and a preset shadow map, respectively, according to the coordinates of the stepping point; and
and calculating the illumination of the stepping point according to the fog density and the shadow value of the stepping point.
7. The method of claim 1, wherein mixing the volume fog rendering texture of the current frame with the volume fog rendering texture of the historical frame to obtain a mixed volume fog rendering texture comprises:
performing a bilateral blur calculation on the volume fog rendering texture of the current frame to obtain a blurred volume fog rendering texture of the current frame; and
mixing the blurred volume fog rendering texture of the current frame with the volume fog rendering texture of the historical frame according to the respective weighting weights of the current frame and the historical frame, to obtain the mixed volume fog rendering texture.
8. The method of claim 7, wherein before mixing the blurred volume fog rendering texture of the current frame with the volume fog rendering texture of the historical frame according to the respective weighting weights of the current frame and the historical frame, the method further comprises:
calculating the depth value of the historical frame according to the texture mapping coordinates of the screen pixels of the current frame, the depth value of the current frame, and the clipping coordinate conversion matrix between the current frame and the historical frame; and
and determining the weighting weight of the historical frame according to the depth value of the historical frame.
9. An electronic device, comprising: a memory, a central processing unit and a graphics processor;
the memory is to store one or more computer instructions;
the central processing unit is to execute the one or more computer instructions to: invoke the graphics processor to perform the steps in the method of any one of claims 1-8.
10. A computer-readable storage medium storing a computer program, wherein the computer program is capable of performing the steps of the method of any one of claims 1 to 8 when executed.
CN202111296242.6A 2021-11-03 2021-11-03 Volume fog rendering method, device and equipment and storage medium Pending CN114170359A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111296242.6A 2021-11-03 2021-11-03 Volume fog rendering method, device and equipment and storage medium


Publications (1)

Publication Number Publication Date
CN114170359A 2022-03-11

Family

ID=80477926

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111296242.6A Pending CN114170359A (en) 2021-11-03 2021-11-03 Volume fog rendering method, device and equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114170359A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116664413A (en) * 2023-03-27 2023-08-29 北京拙河科技有限公司 Image volume fog eliminating method and device based on Abbe convergence operator
CN116664413B (en) * 2023-03-27 2024-02-02 北京拙河科技有限公司 Image volume fog eliminating method and device based on Abbe convergence operator


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination