CN116228984A - Volumetric cloud modeling and rendering method based on meteorological data - Google Patents


Info

Publication number
CN116228984A
Authority
CN
China
Prior art keywords
cloud
noise
volume
modeling
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310236581.8A
Other languages
Chinese (zh)
Inventor
林晓颖 (Lin Xiaoying)
李辉 (Li Hui)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan University
Original Assignee
Sichuan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan University
Priority to CN202310236581.8A
Publication of CN116228984A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/50 Lighting effects
    • G06T 15/506 Illumination models
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Generation (AREA)

Abstract

The invention provides a volume cloud modeling and rendering method based on meteorological data. A cloud data structure comprising five data fields (cloud base height, cloud top height, coverage rate, cloud base type and cloud top type) is first constructed and used to build the basic contour of the cloud. Superpositions of Perlin noise and Worley noise at different frequencies are stored as three-dimensional textures for modeling the volume cloud: the shape noise texture uses fractal Perlin-Worley noise to generate the basic shape of the cloud, and the density of the cloud in different regions is controlled entirely by noise parameters. Depth values of the volume cloud are computed, and a ray-marching algorithm is adopted to render the cloud and blend it with the scene. The technical scheme not only embodies cloud layer morphology driven by real data, but also obtains a highly realistic volume cloud effect at a low rendering cost.

Description

Volumetric cloud modeling and rendering method based on meteorological data
Technical Field
The invention relates to the technical field of volume cloud modeling and rendering, in particular to a volume cloud modeling and rendering method based on meteorological data.
Background
Modeling the volume cloud means constructing its density field, i.e., being able to calculate the density value of the cloud at any point in space. Currently, the mainstream practice is to store the density information of the cloud in 3D textures. A 3D texture can directly store density values, store distance field information, or store baked illumination information, among other options; the specific choice depends on requirements.
This section mainly introduces real-time sky volume cloud modeling. The current mainstream practice on the PC/console side is procedural modeling based on noise textures, combined with additional art resources to achieve richer forms and expressive effects. Many combinations of noise textures and art resources are worth referring to, such as those in Horizon Zero Dawn, Red Dead Redemption 2 and the Frostbite engine; UE4 even uses the Material Graph directly, opening the modeling approach entirely to artists/TAs.
Application CN202111589721.7, a method for modeling and visualizing a three-dimensional cloud scene based on ground-based cloud images, is an increasingly mature technology. The CPU extracts the cloud region from a two-dimensional ground-based cloud image and calculates the cloud base height and cloud cluster thickness as parameters for three-dimensional cloud modeling. The positions of all voxel center points forming the cloud model are determined from these two parameters and passed to the GPU as a vertex array; in the geometry shader stage of the GPU all cloud voxels are drawn from the vertex array, and finally, in the fragment shader stage, the density value of the cloud is determined from the distance between each voxel and the cloud center point, and the voxel color is calculated from that density, integrating three-dimensional cloud scene modeling and visualization. This method takes voxels as the minimum geometric unit, builds the three-dimensional cloud model on the GPU and visualizes it in a three-dimensional scene, improving the efficiency of three-dimensional cloud modeling while meeting the requirement of smooth roaming in a three-dimensional geographic scene; however, it has the following defects in actual use:
The prior art cannot construct a basic contour for the volume cloud, so the density field of the volume cloud cannot be constructed; the rendering accuracy is therefore limited, and the realism of the volume cloud is poor.
Disclosure of Invention
The invention aims to provide a volume cloud modeling and rendering method based on meteorological data, to solve the problems that a basic contour of the volume cloud cannot be constructed, so that its density field cannot be built, the rendering accuracy is limited, and the realism of the resulting volume cloud is poor.
In order to achieve the above purpose, the present invention provides the following technical solution: a volume cloud modeling and rendering method based on meteorological data constructs a volume cloud data structure comprising five data fields: cloud base height, cloud top height, coverage rate, cloud base type and cloud top type. First, the basic data structure of the cloud is constructed; depth values of the volume cloud are computed; a ray-marching algorithm is adopted and the result is blended with the rendered scene; and the density of the cloud in different regions is controlled by noise parameters;
Perlin noise and Worley noise superimposed at different frequencies are stored as three-dimensional textures and used for modeling the volume cloud;
the shape noise texture uses fractal Perlin-Worley noise to generate the basic shape of the cloud; the detail noise texture uses fractal Perlin noise to erode the shape edges of the cloud and add detail;
and the self-shadowing of the volume cloud is analyzed and produced.
Preferably, a basic cloud layer contour is constructed from the volume cloud data structure, and three-dimensional noise is then used to erode the contour and add detail to the cloud layer, yielding the density field of the volume cloud.
Preferably, the depth value of each pixel of the scene to be displayed and the depth value of the volume cloud are obtained and compared to determine whether each pixel is occluded by the cloud layer; if a pixel is not occluded by the cloud layer, its original color is displayed directly.
Preferably, a ray-marching algorithm is adopted, taking the intersection of the ray emitted from the virtual camera with the cloud layer surface as the starting point; if the viewpoint is inside the cloud, a structured sampling method is adopted with the viewpoint position as the starting point. Marching proceeds from the starting point along the ray direction, and the cloud density at each sampling point is obtained from the density field.
Preferably, the illumination model is solved from the self-occlusion shadow and the cloud density at the sampling points to obtain the final transparency and color of the volume cloud, which is then blended with the scene to be rendered.
Preferably, the density of the cloud in different regions is controlled entirely by noise parameters; this is needed when artists want different types of clouds in the scene at the same time, or want to control the density of the cloud freely.
Preferably, for rain clouds with complex shapes, the transition of the Cloud Type between 0.5 and 1.0 is performed in combination with the Cloud LUT, which makes the transition natural: different coverage rates and noise erosion rates are assigned per height, with low cloud coverage and a high noise erosion rate at high altitude, and high cloud coverage and a low noise erosion rate at low altitude.
Preferably, there are two main methods for self-shadowing the volume cloud:
in the first and more common method, each sampling point performs another ray march (on the PC/console side) toward the light source to compute transmittance; the effect improves with more sampling steps, but the cost is obviously larger;
the second method is volumetric shadow mapping, using the Beer Shadow Map of UE4, or the more complex Transmittance Function Mapping method (so far only a similar method has been seen in Final Fantasy), whose principle is to approximate the transmittance function with a series of orthogonal basis functions.
Compared with the prior art, the invention has the beneficial effects that:
During volume cloud modeling, the basic shape of the cloud layer is built from cloud coverage data and cloud type data, and noise at several different frequencies is superimposed to build the density field of the volume cloud. During volume cloud rendering, the accumulated density and light occlusion of each pixel are computed with the ray-marching method, the illumination model is solved to obtain the final color and transparency of the volume cloud, and the volume cloud is blended with the scene. Different sampling methods are adopted for different use scenes, achieving a highly realistic volume cloud when the viewpoint is static, as well as cloud-traversal interaction with the volume cloud while in motion.
Drawings
FIG. 1 is a flow chart of the volume cloud modeling and rendering method of the present invention.
Fig. 2 is a height profile of the present invention.
Fig. 3 is a graph of the low-frequency Perlin-Worley noise and the successively higher-frequency Worley noise of the present invention.
Fig. 4 illustrates the use of three-dimensional noise to add detail to the volume cloud.
Fig. 5 is a schematic flow chart of a structured sampling of the present invention.
Detailed Description
The following description of the embodiments of the present invention is made clearly and completely with reference to the accompanying drawings, in which some, but not all, embodiments of the invention are shown.
The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention.
All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In the description of the present invention, it should be noted that the directions or positional relationships indicated by the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc. are based on the directions or positional relationships shown in the drawings, are merely for convenience of describing the present invention and simplifying the description, and do not indicate or imply that the devices or elements referred to must have a specific orientation, be configured and operated in a specific orientation, and thus should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
Referring to figs. 1-5, the present invention provides a technical solution: a volume cloud data structure is constructed, comprising five data fields: cloud base height, cloud top height, coverage rate, cloud base type and cloud top type. First, the basic data structure of the cloud is constructed; depth values of the volume cloud are computed; a ray-marching algorithm is adopted and the result is blended with the rendered scene; the density of the cloud in different regions is controlled by noise parameters. Perlin noise and Worley noise at different frequencies are superimposed and stored as three-dimensional textures for modeling the volume cloud; the shape noise texture uses fractal Perlin-Worley noise to generate the basic shape of the cloud; the detail noise texture uses fractal Perlin noise to erode the shape edges of the cloud and add detail;
the self-shadowing of the volume cloud is analyzed and produced: a basic cloud layer contour is constructed from the volume cloud data structure, then eroded with three-dimensional noise to add detail to the cloud layer, yielding the density field of the volume cloud.
The depth value of each pixel of the scene to be displayed and the depth value of the volume cloud are obtained and compared to determine whether each pixel is occluded by the cloud layer; if a pixel is not occluded by the cloud layer, its original color is displayed directly. A ray-marching algorithm is adopted, taking the intersection of the ray emitted from the virtual camera with the cloud layer surface as the starting point; if the viewpoint is inside the cloud, a structured sampling method is adopted with the viewpoint position as the starting point. Marching proceeds from the starting point along the ray direction, and the cloud density at each sampling point is obtained from the density field.
The illumination model is solved from the self-occlusion shadow and the cloud density at the sampling points to obtain the final transparency and color of the volume cloud, which is blended with the scene to be rendered. The density of the cloud in different regions is controlled entirely by noise parameters. If artists need different types of clouds in the scene at the same time, or need to control the density of the cloud freely, a Cloud Map is required. The Cloud Map can be drawn directly by artists, but in that mode the effect cannot be observed in real time, which is inconvenient; therefore a brush tool, similar to the Blueprint Painted Clouds of UE4, was made so that artists can draw conveniently.
For rain clouds with complex shapes, the transition of the Cloud Type between 0.5 and 1.0 is performed in combination with the Cloud LUT, which makes the transition natural: different coverage rates and noise erosion rates are assigned per height, with low cloud coverage and a high noise erosion rate at high altitude, and high cloud coverage and a low noise erosion rate at low altitude;
the Erosion value stored in the G channel of the Cloud LUT is not only used for detail noise but also affects the computation of the shape noise; for specific details, refer to the implementation in the Unity HDRP source code.
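As an illustration of this height-dependent control, a minimal lookup in the spirit of the Cloud LUT might look as follows; the linear ramps and constants are assumptions of the sketch, not the actual LUT contents:
```python
import numpy as np

def cloud_lut(height01: float):
    """Stand-in for a height-dependent Cloud LUT lookup.

    height01: normalized height in [0, 1] between cloud base and cloud top.
    Returns (coverage_scale, erosion_rate) following the rule in the text:
    low altitude -> high coverage, low erosion; high altitude -> the reverse.
    """
    coverage_scale = float(np.clip(1.0 - 0.7 * height01, 0.0, 1.0))  # falls with height
    erosion_rate = float(np.clip(0.2 + 0.8 * height01, 0.0, 1.0))    # rises with height
    return coverage_scale, erosion_rate
```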
There are two main methods for self-shadowing the volume cloud: in the first and more common method, each sampling point performs another ray march (on the PC/console side) toward the light source to compute transmittance; the effect improves with more sampling steps, but the cost is obviously larger. The second method is volumetric shadow mapping, using the Beer Shadow Map of UE4, or the more complex Transmittance Function Mapping method (so far only a similar method has been seen in Final Fantasy), whose principle is to approximate the transmittance function with a series of orthogonal basis functions; Fourier functions, DCT functions, Haar wavelet functions and the like can be chosen as the basis functions. In some cases these approximate the transmittance curve more accurately than the Beer Shadow Map, and thus render the volume cloud better.
In the volume cloud modeling process, a volume cloud weather map is obtained from real weather data and produced as a three-channel two-dimensional texture: the R channel stores the coverage rate of the volume cloud, the G channel stores the cloud base type, and the B channel stores the cloud top type. The volume cloud coverage represents the probability of occurrence p_horizontal of the volume cloud in the horizontal plane. The cloud type data ranges from 0.0 to 1.0, and the probability of occurrence p_vertical of the volume cloud in the vertical direction is obtained from the vertical distribution diagram according to the cloud type. From these two probabilities, the existence probability field p_profile of the volume cloud is calculated according to the following formula; this probability field depicts the general outline of the volume cloud.
p_profile = p_horizontal · p_vertical
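A minimal sketch of this lookup follows; the vertical_profile helper and the blend of base type toward top type with height are assumptions of the sketch, since the text specifies only the channel layout and the product of the two probabilities:
```python
import numpy as np

def existence_probability(weather_map, vertical_profile, u, v, height01):
    """p_profile = p_horizontal * p_vertical, sampled from the weather map.

    weather_map      : H x W x 3 array; R = coverage, G = cloud-base type,
                       B = cloud-top type, as described in the text.
    vertical_profile : function (cloud_type, height01) -> probability in [0, 1],
                       standing in for the vertical distribution diagram.
    u, v, height01   : horizontal texture coordinates and normalized height.
    """
    h, w, _ = weather_map.shape
    texel = weather_map[int(v * (h - 1)), int(u * (w - 1))]
    p_horizontal = texel[0]                              # coverage (R channel)
    # Assumed blend of base type toward top type as height increases.
    cloud_type = (1.0 - height01) * texel[1] + height01 * texel[2]
    p_vertical = vertical_profile(cloud_type, height01)
    return p_horizontal * p_vertical
```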
Using the fractal Brownian motion (Fractal Brownian Motion, FBM) method, several noises with successively increasing frequency and successively decreasing amplitude are superimposed, and two three-dimensional noise textures are generated offline. The FBM is calculated with the following formula:
fbm(p) = Σ_{i=0}^{n-1} amplitude_i · noise(frequency_i · p)
where n is the number of superimposed textures, amplitude_i represents the amplitude of the noise superimposed at each step, frequency_i represents the frequency of the superimposed noise, and noise() represents the noise function employed. The FBM method can also produce the dynamic effect of the volume cloud: domain warping is applied when the noise texture is read, and the noise value after domain warping is calculated using the following formula:
Density(p)=fbm(p+fbm(p+fbm(p)))
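For illustration, the FBM superposition and the domain-warped read can be sketched as follows; the hash-based stand-in noise and the parameter defaults are assumptions of the sketch, and any smooth 3-D noise (Perlin, Worley) can be substituted:
```python
import numpy as np

def hash_noise(p):
    # Cheap smooth stand-in for noise() so the sketch runs as written;
    # replace with real Perlin/Worley noise in practice.
    return 0.5 + 0.5 * np.sin(np.dot(p, np.array([12.9898, 78.233, 37.719])))

def fbm(p, n=4, amplitude=0.5, frequency=2.0, noise=hash_noise):
    # fbm(p) = sum_{i=0}^{n-1} amplitude^i * noise(frequency^i * p),
    # normalized so the result stays roughly in [0, 1].
    total = sum((amplitude ** i) * noise((frequency ** i) * p) for i in range(n))
    return total / sum(amplitude ** i for i in range(n))

def warped_density(p):
    # Domain warping as in the text: Density(p) = fbm(p + fbm(p + fbm(p))).
    return fbm(p + fbm(p + fbm(p)))

print(warped_density(np.array([0.3, 1.2, 0.7])))  # density at one sample point
```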
Procedurally generated noise is pseudo-random and is typically pre-generated and stored in image form. Volume cloud modeling requires two three-dimensional noise textures. The first texture has a size of 128 × 128 × 128 with 4 channels: the R channel stores low-frequency Perlin-Worley noise, and the G, B and A channels store Worley noise at successively increasing frequencies.
The low-frequency Perlin-Worley noise and the successively higher-frequency Worley noise have a resolution of 128³. The second three-dimensional noise texture has a size of 32 × 32 × 32 with 3 channels, which store Worley noise at higher frequencies, used to add more detail to the volume cloud; this high-frequency Worley noise has a resolution of 32³.
Finally, the density field of the volume cloud is calculated from the following formula:
[Formula not recoverable from the original image: Density(p) is computed from the existence probability field p_profile and the noise value.]
where noise represents the sampled or domain-warped noise value.
Rendering: sampling points are selected with the ray-marching method. A ray pointing toward the volume cloud is generated from the virtual camera position; starting from the camera position and advancing with a specific step length, the position of each sampling point is calculated, the cloud density at that point is sampled and accumulated, the luminance of each sampling point is calculated, and the color and luminance of the volume cloud fragment covered by the ray are updated.
The farther a sample is from the camera, the less it affects the final rendered appearance of the volume cloud, so the volume cloud ray march adopts a variable-step sampling method.
When the virtual camera is on the ground, the intersection of the marching ray with the bottom of the cloud layer is calculated and used as the starting point for variable-step sampling. Marching every pixel with the same step length gives the volume cloud obvious banded artifacts; to prevent this and to speed up rendering, blue noise is used to add a jitter value to the step length;
when the virtual camera is inside the cloud, the camera position is used as the starting point of the ray march, sampling forward with variable steps. A virtual camera inside the cloud usually stays in motion, and the change of viewpoint causes the sampling points to change rapidly, producing temporal aliasing of the volume cloud. Blue-noise jittered sampling can relieve the aliasing, but its effect is limited; the structured sampling method solves the problem by computing the initial offset of the ray march from the viewpoint position, so that when the viewpoint moves only the sampling points nearest to and farthest from the viewpoint change. The near sampling points are relatively dense, and their changes have negligible impact on the volume cloud result; the farthest sampling point carries a small weight in the density accumulation of one ray march, so its influence on the result is almost negligible.
If the cloud density at a sampling point is greater than 0, marching starts from that sampling point toward the light source, the cloud densities of the sampling points in the light direction are accumulated, and the illumination model is solved to calculate the luminance of the sampling point.
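The per-pixel march and the second march toward the light source can be sketched together as follows; the step counts and lengths are assumed illustrative values, and the variable-step and structured-sampling refinements described above are omitted for brevity:
```python
import numpy as np

def march_cloud(origin, direction, density, sun_dir, n_steps=64, step=50.0,
                n_light_steps=6, light_step=100.0, blue_noise=0.0):
    """Minimal ray-marching sketch.

    density    : function(point) -> cloud density at that point.
    blue_noise : per-pixel jitter in [0, 1) added to the start offset to
                 break up the banding described above.
    """
    transmittance, radiance = 1.0, 0.0
    t = blue_noise * step                         # jittered start offset
    for _ in range(n_steps):
        p = origin + t * direction
        rho = density(p)
        if rho > 0.0:
            # Second march toward the light source: accumulate optical depth.
            light_depth = sum(density(p + (j + 1) * light_step * sun_dir)
                              for j in range(n_light_steps)) * light_step
            light = np.exp(-light_depth)          # Beer's law toward the sun
            step_trans = np.exp(-rho * step)      # extinction over this step
            radiance += transmittance * (1.0 - step_trans) * light
            transmittance *= step_trans
            if transmittance < 1e-3:              # early exit once nearly opaque
                break
        t += step
    return radiance, transmittance

# Toy example: a spherical density blob, camera at the origin.
rho = lambda p: max(0.0, 1.0 - float(np.linalg.norm(p - 500.0)) / 400.0)
print(march_cloud(np.zeros(3), np.array([0.577, 0.577, 0.577]), rho,
                  np.array([0.0, 1.0, 0.0]), blue_noise=0.42))
```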
Solving an illumination model:
the illumination model of the volume cloud comprises a bright-dark effect caused by multiple scattering and refraction in a cloud layer, a bright-edge effect when looking forward at the sun direction and a dark-edge effect when facing away from the sun.
The Beer law is used to describe the relationship between the transparency of the volume cloud and the optical thickness for the phenomenon of multiple scattering, refraction in the cloud layer.
Cloud_transmittance = e^(-Cloud_depth)
Cloud_depth = Σ_i Density(p_i) · Δs
where Cloud_depth represents the density accumulated by the ray march up to the current sampling point (Δs being the step length), and Density(p) obtains the density value of the sampling point at p.
The bright-edge effect when looking toward the sun is produced by the scattering of sunlight passing through the cloud layer. The phase function describes the relationship between the scattering intensity of light at each angle and the direction of the incident light, and can be used to simulate the scattering of light in a cloud. Mie scattering describes this scattering relationship more accurately but is computationally expensive, so the HG (Henyey-Greenstein) phase function is typically used in cloud rendering to approximate Mie scattering. To compensate for the large approximation error of the HG phase function in back-scattering, the TTHG (two-term Henyey-Greenstein) phase function is used instead, and a particle swarm algorithm is used to optimize the parameters of the TTHG.
phase_HG(θ, g) = (1 - g²) / (4π · (1 + g² - 2g·cosθ)^(3/2))
phase_TTHG(θ, g_α, g_β, α) = α · phase_HG(θ, g_α) + (1 - α) · phase_HG(θ, g_β)
where g_α and g_β represent the asymmetry factors of forward scattering and backward scattering respectively, and α is the weight;
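Both phase functions translate directly into code; the lobe parameters and weight below are illustrative defaults rather than the values obtained by the particle swarm optimization:
```python
import numpy as np

def phase_hg(cos_theta, g):
    # Henyey-Greenstein: (1 - g^2) / (4*pi * (1 + g^2 - 2*g*cos(theta))^(3/2)).
    return (1.0 - g * g) / (4.0 * np.pi *
                            (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5)

def phase_tthg(cos_theta, g_forward=0.6, g_backward=-0.3, alpha=0.7):
    # Two-term HG: weighted blend of a forward lobe and a backward lobe.
    return (alpha * phase_hg(cos_theta, g_forward)
            + (1.0 - alpha) * phase_hg(cos_theta, g_backward))
```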
when the cloud layer is observed back to the sun, the cloud cluster can be seen to have a dark-edge effect, and the dark-edge effect cannot be simulated by the two methods. When looking at the volume cloud away from the sun, the Beer's-Powder function is used instead of the Beer function.
E = 2.0 · e^(-Cloud_depth) · (1.0 - e^(-2·Cloud_depth))
To reduce the difference between the result and the original normalized function, the Beer's-Powder function multiplies the result by 2.
The final illumination model can be represented by the following formula:
E = 2.0 · e^(-Cloud_depth) · (1.0 - e^(-2·Cloud_depth))
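The two extinction terms can be sketched as follows (a minimal illustration of the formulas above; how they are finally combined with the phase function into the complete model is not spelled out verbatim here):
```python
import numpy as np

def beer(cloud_depth):
    # Beer's law: Cloud_transmittance = e^(-Cloud_depth).
    return np.exp(-cloud_depth)

def beer_powder(cloud_depth):
    # Beer's-Powder: E = 2 * e^(-d) * (1 - e^(-2d)); the factor 2 pulls the
    # curve back toward the plain Beer function, as noted in the text.
    return 2.0 * np.exp(-cloud_depth) * (1.0 - np.exp(-2.0 * cloud_depth))

# The powder term only replaces Beer's law when looking away from the sun:
for d in (0.1, 0.5, 1.0, 2.0):
    print(d, float(beer(d)), float(beer_powder(d)))
```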
Pixels occluded by the volume cloud are found from the contour depth information of the volume cloud and the depth information of the scene to be rendered; the ray marching described above is performed for each such pixel, the color and transparency of the volume cloud at that pixel are calculated, and the result is finally blended with the scene to be rendered.
[Formula not recoverable from the original image: the final pixel color blends the scene color, weighted by the cloud transmittance, with the accumulated cloud color.]
Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art may modify the technical solutions described in the foregoing embodiments or make equivalent substitutions for some technical features thereof, and any modifications, equivalent substitutions, improvements and the like within the spirit and principles of the present invention should be included in the scope of the present invention.

Claims (8)

1. A volume cloud modeling and rendering method based on meteorological data, constructing a volume cloud data structure, wherein the volume cloud data structure comprises five data fields: cloud base height, cloud top height, coverage rate, cloud base type and cloud top type; firstly, the basic data structure of the cloud is constructed; depth values of the volume cloud are computed; a ray-marching algorithm is adopted and the result is blended with the rendered scene; and the density of the cloud in different regions is controlled by noise parameters;
Perlin noise and Worley noise superimposed at different frequencies are stored as three-dimensional textures and used for modeling the volume cloud;
the shape noise texture uses fractal Perlin-Worley noise to generate the basic shape of the cloud; the detail noise texture uses fractal Perlin noise to erode the shape edges of the cloud and add detail;
and the self-shadowing of the volume cloud is analyzed and produced.
2. The volumetric cloud modeling and rendering method based on meteorological data according to claim 1, wherein: a basic cloud layer contour is constructed from the volume cloud data structure, and three-dimensional noise is used to erode the contour and add detail to the cloud layer, obtaining the density field of the volume cloud.
3. The volumetric cloud modeling and rendering method based on meteorological data according to claim 1, wherein: the obtained depth value of each pixel of the scene to be displayed is compared with the depth value of the volume cloud to determine whether each pixel is occluded by the cloud layer, and if a pixel is not occluded by the cloud layer, its original color is displayed directly.
4. The volumetric cloud modeling and rendering method based on meteorological data according to claim 1, wherein: a ray-marching algorithm is adopted, taking the intersection of the ray emitted from the virtual camera with the cloud layer surface as the starting point; if the viewpoint is inside the cloud, a structured sampling method is adopted with the viewpoint position as the starting point; marching proceeds from the starting point along the ray direction, and the cloud density at each sampling point is obtained from the density field; for each sampling point with density greater than 0, the ray-marching algorithm is performed again in the light direction to calculate the self-occlusion shadow of the volume cloud.
5. The volumetric cloud modeling and rendering method based on meteorological data according to claim 1, wherein: the illumination model is solved from the self-occlusion shadow and the cloud density of the sampling points to obtain the final transparency and color of the volume cloud, and the volume cloud is blended with the scene to be rendered.
6. The volumetric cloud modeling and rendering method based on meteorological data according to claim 1, wherein: the density of the cloud in different regions is controlled entirely by noise parameters; if artists need different types of clouds in the scene at the same time, or need to control the density of the cloud freely, a Cloud Map is required; the Cloud Map can be drawn directly by artists, but in that mode the effect cannot be observed in real time, which is inconvenient, so a brush tool similar to the Blueprint Painted Clouds of UE4 is made so that artists can draw conveniently.
7. The volumetric cloud modeling and rendering method based on meteorological data according to claim 1, wherein: for rain clouds with complex shapes, the transition of the Cloud Type between 0.5 and 1.0 is performed in combination with the Cloud LUT, which makes the transition natural: different coverage rates and noise erosion rates are assigned per height, with low cloud coverage and a high noise erosion rate at high altitude, and high cloud coverage and a low noise erosion rate at low altitude;
the Erosion value stored in the G channel of the Cloud LUT is not only used for detail noise but also affects the computation of the shape noise; for specific details, refer to the implementation in the Unity HDRP source code.
8. The volumetric cloud modeling and rendering method based on meteorological data according to claim 1, wherein: there are two main methods for self-shadowing the volume cloud:
in the first and more common method, each sampling point performs another ray march (on the PC/console side) toward the light source to compute transmittance; the effect improves with more sampling steps, but the cost is obviously larger;
the second method is volumetric shadow mapping, using the Beer Shadow Map of UE4 or the more complex Transmittance Function Mapping (TFM) method (so far only a similar method has been seen in Final Fantasy); the principle of TFM is to approximate the transmittance function with a series of orthogonal basis functions, chosen among Fourier functions, DCT functions, Haar wavelet functions and the like; in some cases TFM approximates the transmittance curve more accurately than the Beer Shadow Map, and thus renders the volume cloud better.
CN202310236581.8A 2023-03-13 2023-03-13 Volumetric cloud modeling and rendering method based on meteorological data Pending CN116228984A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310236581.8A CN116228984A (en) 2023-03-13 2023-03-13 Volumetric cloud modeling and rendering method based on meteorological data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310236581.8A CN116228984A (en) 2023-03-13 2023-03-13 Volumetric cloud modeling and rendering method based on meteorological data

Publications (1)

Publication Number Publication Date
CN116228984A 2023-06-06

Family

ID=86590875

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310236581.8A Pending CN116228984A (en) 2023-03-13 2023-03-13 Volumetric cloud modeling and rendering method based on meteorological data

Country Status (1)

Country Link
CN (1) CN116228984A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117523026A (en) * 2024-01-08 2024-02-06 北京理工大学 Cloud and fog image simulation method, system, medium and terminal for infrared remote sensing imaging
CN117523026B (en) * 2024-01-08 2024-03-29 北京理工大学 Cloud and fog image simulation method, system, medium and terminal for infrared remote sensing imaging
CN117710557A (en) * 2024-02-05 2024-03-15 杭州经纬信息技术股份有限公司 Method, device, equipment and medium for constructing realistic volume cloud
CN117710557B (en) * 2024-02-05 2024-05-03 杭州经纬信息技术股份有限公司 Method, device, equipment and medium for constructing realistic volume cloud

Similar Documents

Publication Publication Date Title
CN111508052B (en) Rendering method and device of three-dimensional grid body
CN116228984A (en) Volumetric cloud modeling and rendering method based on meteorological data
CN106570929B (en) Construction and drawing method of dynamic volume cloud
US7940269B2 (en) Real-time rendering of light-scattering media
US7940268B2 (en) Real-time rendering of light-scattering media
EP1953701A1 (en) Hybrid volume rendering in computer implemented animation
CN108805971B (en) Ambient light shielding method
US20060066608A1 (en) System and method for determining line-of-sight volume for a specified point
CN105261059A (en) Rendering method based on indirect reflection highlight calculation in screen space
CN107220372B (en) A kind of automatic laying method of three-dimensional map line feature annotation
CN101982838A (en) 3D virtual set ray tracking method for accelerating back light source irradiation
Sundén et al. Image plane sweep volume illumination
Lukasczyk et al. Voidga: A view-approximation oriented image database generation approach
Moreau et al. Importance sampling of many lights on the GPU
CN113129420B (en) Ray tracing rendering method based on depth buffer acceleration
Dachsbacher Interactive terrain rendering: towards realism with procedural models and graphics hardware
US20230274493A1 (en) Direct volume rendering apparatus
Simons et al. An Interactive Information Visualization Approach to Physically-Based Rendering.
Boulanger Real-time realistic rendering of nature scenes with dynamic lighting
Olajos Real-time rendering of volumetric clouds
Miyazaki et al. A fast rendering method of clouds using shadow-view slices
Congote et al. Volume ray casting in WebGL
Bittner The Current State of the Art in Real-Time Cloud Rendering With Raymarching
CN111179398A (en) Motor vehicle exhaust diffusion simulation and stereoscopic visualization method based on 3DGIS
Ford Real-Time Rendering of Atmospheric Clouds

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination