CN113160413B - Real-time dynamic cloud layer drawing method based on cellular automaton - Google Patents

Real-time dynamic cloud layer drawing method based on cellular automaton

Info

Publication number
CN113160413B
Authority
CN
China
Prior art keywords
state
cloud
cloud layer
texture
cell
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110214214.9A
Other languages
Chinese (zh)
Other versions
CN113160413A (en)
Inventor
Li Sheng (李胜)
Xu Haochuan (徐浩川)
Wang Guoping (汪国平)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Peking University
Original Assignee
Peking University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peking University filed Critical Peking University
Priority to CN202110214214.9A priority Critical patent/CN113160413B/en
Publication of CN113160413A publication Critical patent/CN113160413A/en
Application granted granted Critical
Publication of CN113160413B publication Critical patent/CN113160413B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 Finite element generation, e.g. wire-frame surface description, tesselation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/04 Texture mapping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/08 Volume rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation
    • G06T15/205 Image-based rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Image Generation (AREA)

Abstract

The invention discloses a real-time dynamic cloud layer drawing method based on a cellular automaton, which comprises the following steps: 1) generating a cellular automaton for the dynamic cloud layer, where the cellular automaton adopts the Moore neighborhood as the cell neighborhood together with the live/dead judgment rules of the Game of Life; 2) the cellular automaton builds its data structure from an input low-resolution global texture, and this serves as the cell evolution texture of each frame, changing over the sequence; the data structure comprises a global array cellMap and the individual life systems; 3) smoothing and interpolating the cell evolution texture of each frame to enlarge it into the large-size texture of the corresponding frame; 4) superimposing fractal noise on the large-size texture of each frame to generate cloud details, and calculating the value of the density field at the point corresponding to each texel; 5) sampling the volumetric cloud formed from multiple cloud clusters; 6) calculating the scattering effect of the point light source on the cloud layer, and finally rendering a realistic image of the cloud layer.

Description

Real-time dynamic cloud layer drawing method based on cellular automaton
Technical Field
The invention belongs to the technical fields of computer graphics and images, virtual reality, and software, and relates to a real-time dynamic cloud layer drawing method and system based on a cellular automaton.
Background
Cloud layers in natural environments have varied shapes and illumination details; how to reasonably model and draw (render) these characteristics is a significant research topic.
A cellular automaton is a mathematical model for describing complex patterns and behaviors, in which each cell distributed on an evolving grid is in one state from a finite set of discrete states. In each time step, every cell transitions to its next state according to the state distribution of its neighbor cells, following the same transition rules. In 1966, von Neumann's early cellular automata were proposed to study automaton systems with organism-like, self-replicating properties. As research progressed, cellular automaton theory and the Game of Life have been widely studied and applied in fields such as physical simulation, social simulation, biological research, and artificial life. The "Game of Life" is a well-known two-dimensional cellular automaton proposed by John Conway in 1970 (Gardner, M. (1970). Mathematical Games: The fantastic combinations of John Conway's new solitaire game "life". Scientific American, Vol. 223, pp. 120-123). That work aims to find simple rules that can produce complex behaviors, using the Moore neighborhood (8-neighborhood) and binary (live/dead) cell states for state transitions. The dynamic cloud of the present invention is constructed based on the idea that a cellular automaton evolves morphologically over time.
Traditional volume cloud drawing methods mostly construct the cloud layer density field from fractal noise alone; the cloud clusters generated this way offer weak control over their distribution and can hardly present a specific shape or law of change.
Disclosure of Invention
Aiming at the technical problems in the prior art, the invention provides a real-time dynamic cloud layer drawing method and system based on a cellular automaton. The invention introduces a cloud distribution texture that can be predefined and interactively operated by the user, and realizes dynamic change of the cloud distribution by evolving this texture with a cellular automaton. The method constructs the cloud density field from the distribution texture and fractal noise, and performs light integration with dynamic step length, starting from the envelope of the density field, to realize volume rendering.
The cloud drawing method of the invention is divided into two parts overall: cloud distribution texture evolution driven by the cellular automaton, and cloud drawing driven by volume rendering. For the distribution texture part, the system mainly reflects the rule-agnostic ("trivial") nature of the texture evolution: it realizes dynamic cloud change with a general cellular automaton free of physical characteristics, and can generate various special effects when combined with post-processing (i.e., the image post-processing described below). Meanwhile, the system has good extensibility: separate maintenance and collapse handling of different cell types make simultaneous evolution of cloud systems obeying different laws possible. In the volume rendering part, the method uses fractal Brownian noise to obtain cloud details and focuses on optimizing the efficiency of the volume rendering integral, so the algorithm strikes a balance between performance and effect.
The technical scheme of the invention is as follows:
a real-time dynamic cloud layer drawing method based on a cellular automaton comprises the following steps:
1) generating a cellular automaton for the dynamic cloud layer, where the cellular automaton adopts the Moore neighborhood as the cell neighborhood together with the live/dead judgment rules of the Game of Life, i.e., different state transition modes are used for a target cell according to its current state; the state transition rule is Rule = {B0, B1, …, B8, S0, S1, …, S8}, where B0-B8 are the state transition rules of a dead cell when the number of living cells in its neighborhood is 0-8, and S0-S8 are the corresponding state transition rules of a living cell;
2) the cellular automaton builds its data structure from the input low-resolution global texture, and this serves as the cell evolution texture of each frame, changing over the sequence; the data structure comprises a global array cellMap and the individual life systems, where different life systems correspond to different areas of the global texture; a group of cell coordinates and vitality values is determined for each area from the state transition rule and the pixel values, and stored in the life system of that area; each pixel of the image corresponds to a cell, and the pixel value corresponds to the vitality of the cell; the global array cellMap stores the cell state information corresponding to each pixel of the current frame; the cellular automaton calculates the next-frame cell changes of each life system according to that system's state transition rule; after all life systems have been calculated, the global array cellMap is updated from the newly obtained next-frame cell information, all life systems are then traversed, and the living cells owned by each life system are obtained and recorded into its cellList;
3) smoothing and interpolating the cell evolution texture of each frame obtained in step 2) to enlarge it into the large-size texture of the corresponding frame;
4) superimposing fractal noise on the large-size texture of each frame to generate cloud details, and calculating the value of the density field at the point corresponding to each texel;
5) sampling the volume cloud formed by multiple cloud clusters through a stepping state transition method based on a variable-length integration state machine;
6) calculating the scattering effect of the point light source on the cloud layer through Beer's law and the Henyey-Greenstein phase function, and finally rendering a realistic image of the cloud layer.
Further, the state transition rule is a state transition function that determines the state of cell i at the next moment from the current state of cell i and the states of the cells j in the neighborhood of cell i; the state transition function is

s_i(t+1) = f(s_i(t), {s_j(t) | j ∈ N(i)})

where f is a transition function of any form.
Further, a boundary constraint is set for the life system; the boundary constraint is: cells are cut off or linearly attenuated according to the distance from a cell in the life system to the center of the life system.
Further, the life system supports random "seeding" within a certain radius, i.e., new cells with a certain vitality are generated directly at random locations of the life system.
Further, in step 3), the image obtained by smoothing and interpolation-enlarging the cell evolution texture of each frame from step 2) is fused with an image obtained by interpolation smoothing of a pre-made high-resolution image, yielding the large-size texture of the corresponding frame.
Further, the fractal noise is

N(x) = Σ_{i=1..M} W_i · Noise(F_i · (x + D_g))

where D_g is the global scroll offset of the noise, W_i is the amplitude of the layer-i noise, F_i is the frequency of the layer-i noise, M is the number of layers, Noise() is a noise function expressed in the form of a noise texture, and x is the position variable of the noise function.
Further, in step 4), the method for calculating the value of the density field at the point corresponding to each texel includes:
1-1) modeling the cloud layer constructed for volume rendering as a hemisphere with a flat bottom and an expanded top, setting the height of the cloud reference plane to 0, and calculating from the actual thickness value p of a point P on the cloud distribution texture the top P_Upper and bottom P_Lower of the cloud density field at P;
1-2) for a point at the position of P with height h, first calculating its relative height ratio t, then calculating the cloud density P_den of the point from the ratio t;
1-3) multiplying the density P_den by the overall transmittance constant to obtain the actual scattering rate α at the point.
Further, the method for drawing the volume cloud is: first generating a cloud envelope grid from the cloud distribution texture, and then drawing the volume cloud from the cloud envelope grid with a stepping state transition method based on a variable-length integration state machine; the stepping state transition method based on the variable-length integration state machine includes:
2-1) dividing the stepping of the integration position into a large step FAST_STEP and a small step SLOW_STEP; state 0 means no cloud has been met, and large FAST_STEPs are performed; state 1 means the last FAST_STEP met cloud, and a small SLOW_STEP is performed next; state 2 means being inside the cloud, and small SLOW_STEPs are performed; state 3 means the upper limit on the number of small steps has been reached, and only FAST_STEPs are performed until all steps are exhausted;
2-2) when the cloud density field at the new position after a FAST_STEP is not 0, the state machine transfers from state 0 to state 1; when in state 1 and the density field at the new position after the last SLOW_STEP is not 0, it transfers from state 1 to state 2; when in state 2 and the density field at the new position after the last SLOW_STEP is 0, the cloud layer is considered to have been left, and it transfers from state 2 to state 0; when the SLOW_STEP count reaches its upper limit, the system unconditionally transitions to state 3.
Further, for each sampling point P on the integration path, the scattering rate α of point P is calculated as the opacity of point P; the color of point P, an RGB value obtained from the illumination model, is pre-multiplied by the opacity of point P and then blended using a premultiplied-alpha blend mode.
Compared with the prior art, the invention has the following positive effects:
Unlike traditional methods, this method introduces a user-definable, interactively operable cloud distribution texture and fuses fractal noise on top of it to generate the cloud density field. To realize changing effects for different cloud types, the method uses a cellular automaton to dynamically evolve the cloud distribution texture. The method therefore has the advantage of strong controllability, can realize change effects for various cloud types, and yields rich and diverse real-time drawing effects.
Compared with traditional static sky-cloud representation and drawing methods, this method has the advantages of volume-rendered clouds, namely the capability of dynamic evolution: through the two modes of cellular automaton evolution and multi-level translation of the cloud distribution texture, the method and system can freely evolve the cloud distribution and achieve a good dynamic effect. In this process, adjusting the rules of the cellular automaton yields rich evolution effects. In addition, the volume rendering method of the invention has strong expressiveness for the volume sense of low- and mid-altitude clouds, can dynamically change illumination, dynamically adjust the light source color and base color, and can realize continuous, seamless transitions of weather and time of day.
Compared with traditional volume rendering methods, the cloud distribution is more controllable, and by flexibly combining the cellular automaton, pre-made textures, and post-processing, special dynamics and cloud types that pure noise cannot express can be realized.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
Fig. 2 is a diagram showing a system structure of the cellular automaton.
Fig. 3 is an example of fixed-step Ray stepping (Ray Marching) sampling.
Fig. 4 is a cloud envelope grid diagram.
FIG. 5 is a variable length integrating state machine.
Detailed Description
The present invention will be described in further detail below with reference to specific examples and the accompanying drawings.
The basic flow of the method is shown in figure 1:
1. Cellular automaton generation of dynamic clouds
For the evolution of the cloud distribution texture, the invention provides a compound cellular automaton (corresponding to the "life system" and "evolution over time" modules in the flow chart) that can evolve different rule systems simultaneously while keeping their interaction. Note that the architecture of the cellular automaton is not limited to a particular evolution rule, and multiple systems evolving in the same rule space may have distinct characteristics.
1.1 rule definition
Although the evolution process of the Game of Life has good chaotic characteristics, its all-or-nothing (dead or alive) cell transitions make local features of the whole image change violently with poor stability, so it is unsuitable for direct use as a cloud distribution texture. The basic rule provided by the invention is an improvement of the basic rule of the Game of Life. The rules continue to use the classical Moore neighborhood (8-neighborhood) as the cell neighborhood together with the live/dead judgment of the Game of Life, i.e., for a particular cell, different state transition patterns are used depending on its current dead/live state. One state transition rule of the invention can be represented by a set of values as follows:
Rule = {B0, B1, …, B8, S0, S1, …, S8}, where B0-B8 are the state transition rules of a dead cell when the number of neighboring living cells is 0-8, and S0-S8 are the corresponding state transition rules of a living cell. The state transition rule is a state transition function that determines a cell's state at the next moment from the cell's current state and the states of the cells in its neighborhood; it can be written as
s_i(t+1) = f(s_i(t), {s_j(t) | j ∈ N(i)})
That is, the state of a cell at time t+1 is a combination of the neighborhood states at time t; this is called the local mapping or local rule of the cellular automaton. In the present invention, the state transition rule may be arbitrarily defined, i.e., f may be a transition function of any form.
In the method of the invention, each cell's binary dead/alive state (0/1) is replaced by a "vitality" represented by 0-255, which corresponds exactly to the brightness range of one channel in an 8-bit image. All cells whose vitality (i.e., brightness value in the cloud distribution texture) is greater than zero are regarded as alive, and the dead/alive outputs of the cell state transition rules are correspondingly converted into integers in the range -255 to 255 (vitality increments). A cellular automaton designed this way can achieve finer, smoother, and more controllable evolution by adjusting parameters over a wider range. For image processing, the cells here correspond to the pixels of the image: each pixel is a cell, and the pixel value corresponds to the vitality of the cell.
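To make the generalized rule format concrete, the following is a minimal sketch of one evolution step (Python/NumPy; the function name step, the toroidal wrap-around, and the encoding of B0-B8/S0-S8 as vitality increments are illustrative assumptions, not the patent's exact implementation):

```python
import numpy as np

def step(cell_map: np.ndarray, birth: list, survive: list) -> np.ndarray:
    """One evolution step of the generalized life game with 0-255 vitality.

    cell_map: 2D uint8 array; vitality > 0 counts as alive.
    birth[k] / survive[k]: vitality increment (-255..255) applied to a dead /
    living cell that has k living Moore (8-)neighbors, k = 0..8.
    """
    alive = (cell_map > 0).astype(np.int32)
    # Count living Moore neighbors, wrapping around at the texture border.
    n = sum(np.roll(np.roll(alive, dy, axis=0), dx, axis=1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0))
    delta = np.where(alive == 1, np.take(survive, n), np.take(birth, n))
    return np.clip(cell_map.astype(np.int32) + delta, 0, 255).astype(np.uint8)
```

Under this encoding, Conway's classic rules correspond to birth = [-255]*9 with birth[3] = 255, and survive = [-255]*9 with survive[2] = survive[3] = 0; smaller increments give the finer, smoother evolution described above.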
1.2 Cells and the organization of life systems
Traditional cellular automaton implementations store the cell states directly in a global grid array, which is simple and direct but makes it difficult to handle cells evolving simultaneously under different rules. The method of the invention stores the coordinates and vitality of a group of cells sharing one state transition rule in a dedicated "life system", and the whole cellular automaton represented by the global texture (global array cellMap) is obtained by simultaneously evolving and superimposing multiple life systems. The overall logic of the system is shown in Fig. 2.
The input data of the system is a low-resolution gray-scale map (the global texture); a resolution of 128×128 is usually adopted for this texture. The cellular automaton data structure built from the gray-scale map comprises the global array cellMap and the individual life systems. The life systems divide the whole gray-scale map into different areas; the areas may overlap, and together they need not cover the whole map. Each area serves as the original cell data for one life system's evolution. Each life system may have different rules and stores the living cells it owns in its variable-length array cellList. The stepping cycle of the life systems comprises the following two steps:
1) At the beginning of each frame, the global cellMap stores the cell dead/alive information of every point on the frame's global regular grid. First, the next-frame cell changes are calculated for each life system according to its state transition rule. This calculation traverses the 8-neighborhood of all living cells currently in the life system and obtains the number of living cells in the neighborhood from the global cellMap.
2) After all life systems have been calculated, the freshly calculated next-frame cell information is written into the global cellMap; all life systems are then traversed again, and the living cells owned by each life system are obtained and recorded into its cellList. This yields the cell dead/alive information of the next frame on the global regular grid.
This calculation preserves the mutual influence among cells of different rules and avoids traversing the whole regular grid when the number of cells is small. In addition, a "life system" encapsulating cell communities with different evolution rules provides a more specific means of cell control; the evolution rule (state transition function) used by each life system can differ. The invention introduces a system boundary constraint that culls or linearly attenuates cells according to their distance from the center of the life system; a system using this boundary constraint can dynamically simulate a cloud cluster within a certain radius, and the cluster's drift can be controlled manually by translating the center of the life system. The life system also supports random "seeding" within a certain radius, i.e., directly generating new cells with a certain vitality at random locations of the system; this design can maintain the vitality of the whole system for rules with weak expansion characteristics, and seeding an empty system with cells also allows new cloud sources to be created at a given location. This part of the design is extensible: users can write corresponding seeding and attenuation rules according to actual requirements to achieve specific effects. The two-phase cycle above is sketched in the code below.
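A minimal sketch of the compound automaton's two-phase update (Python; the class and function names, the rule signature rule(vitality, n_alive) -> vitality, and the dictionary form of cellMap are assumptions for illustration):

```python
class LifeSystem:
    """One rule community: owns its living cells (the cellList)."""
    def __init__(self, rule, region):
        self.rule = rule        # rule(vitality, n_alive) -> new vitality
        self.region = region    # set of (x, y) coordinates the system covers
        self.cellList = []      # [(x, y, vitality), ...] living cells
        self.pending = []       # next-frame cells computed in phase 1

def neighbors(x, y):
    return [(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)]

def evolve(cell_map, systems):
    """One frame of the compound automaton; cell_map maps (x, y) -> vitality."""
    # Phase 1: every system computes its next frame against the shared
    # current-frame cellMap, so systems with different rules still interact.
    for s in systems:
        candidates = {(x, y) for (x, y, _) in s.cellList}
        for (x, y, _) in s.cellList:          # dead neighbors may be born
            candidates.update(p for p in neighbors(x, y) if p in s.region)
        s.pending = []
        for (x, y) in candidates:
            n = sum(cell_map.get(p, 0) > 0 for p in neighbors(x, y))
            v = s.rule(cell_map.get((x, y), 0), n)
            if v > 0:
                s.pending.append((x, y, v))
    # Phase 2: only after all systems are done, refresh the global map and
    # each system's cellList with the new frame.
    cell_map.clear()
    for s in systems:
        s.cellList = s.pending
        for (x, y, v) in s.cellList:
            cell_map[(x, y)] = max(cell_map.get((x, y), 0), v)
```

Because phase 1 only reads the shared current-frame cellMap and phase 2 writes it back in a single pass, each system keeps its own cellList while still seeing the cells of all other systems.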
2. Image post-processing and interactive evolution
The invention provides a method and system for the dynamic change process and rendering of the dynamically changing cloud layer (corresponding to the "high-resolution interpolation smoothing" and "interactive image superposition and fusion" modules in the flow chart). The 128×128 cell evolution texture obtained from the cellular automaton evolution has low resolution; the system first blurs the image once, smoothing and interpolating it up to a 512×512 large-size texture, and interpolates between the evolution textures of the current and previous moments (frames) to obtain the cloud distribution texture used for actual drawing. Note that modifying the iteration count and sample radius of the blur adjusts the sharpness and curvature of the cloud boundary, as further discussed in the volume rendering section.
The dynamically interpolated, smoothed 512×512 texture could be used directly as the cloud distribution texture in volume rendering, but for higher controllability of the cloud configuration the invention additionally introduces the following two post-processing and interactive evolution steps:
1) Evolution from a shape-prefabricated distribution texture: the user can prefabricate a 128×128 initial state of the cellular automaton and fully define the cloud shape at the start of evolution. At the start of evolution, the method traverses all pixels of the 128×128 pre-made texture and adds every pixel with non-zero brightness to a special starting life system. This system can use an ordinary cellular automaton rule, or a completely different special rule that treats each cell as a pixel for digital image processing such as point operations. These "pixels" are still regarded as living cells by other life systems and take part in mixed evolution normally; flexible application of the above schemes can achieve several different effects. After the 128×128 low-resolution image has undergone cellular automaton evolution, the 512×512 high-resolution image is obtained by the high-resolution interpolation smoothing operation.
2) Direct processing of the final texture: the user can interactively paint on and superimpose over the smoothed 512×512 texture; fusion with the 512×512 image obtained by high-resolution interpolation smoothing is realized by the interactive image superposition and fusion operation. These treatments do not affect the actual behavior of the cellular automaton and are very effective for effects relatively independent of cloud evolution, such as aircraft contrails and "holes" in the cloud layer. A sketch of this smoothing and interpolation pipeline follows.
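A minimal sketch of the smoothing and interpolation steps, assuming an iterated box blur and bilinear upsampling (Python/NumPy; the helper names, blur radius, and iteration count are illustrative, since the exact blur kernel is not specified in the text):

```python
import numpy as np

def box_blur(img: np.ndarray, r: int = 1, iters: int = 2) -> np.ndarray:
    """Iterated box blur; more iterations/larger radius soften cloud edges."""
    img = img.astype(np.float32)
    for _ in range(iters):
        acc = np.zeros_like(img)
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
        img = acc / (2 * r + 1) ** 2
    return img

def upscale_bilinear(img: np.ndarray, size: int = 512) -> np.ndarray:
    """Bilinear enlargement, e.g. 128x128 -> 512x512."""
    h, w = img.shape
    ys, xs = np.linspace(0, h - 1, size), np.linspace(0, w - 1, size)
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
    fy, fx = (ys - y0)[:, None], (xs - x0)[None, :]
    top = img[y0][:, x0] * (1 - fx) + img[y0][:, x1] * fx
    bot = img[y1][:, x0] * (1 - fx) + img[y1][:, x1] * fx
    return top * (1 - fy) + bot * fy

def frame_lerp(prev: np.ndarray, cur: np.ndarray, t: float) -> np.ndarray:
    """Interpolate between the previous and current evolution frames."""
    return (1.0 - t) * prev + t * cur
```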
These interaction and post-processing steps further illustrate the "trivial" (rule-agnostic) cellular automaton design of the invention: the evolution emphasizes highly controllable clouds and customizable evolution rules rather than depending on specific physical characteristics or cell behaviors.
3. Cloud layer volume density field construction
This part corresponds to the "superimpose fractal noise" and "cloud density field construction" modules in the flow chart. Before the path integral required for volume rendering can be computed, the cloud density at an arbitrary coordinate in the integration space must first be described. The calculation starts from the 512×512 cloud distribution texture obtained in the previous step: first the texture is superimposed with fractal noise to generate cloud details, then the value of the density field at the point corresponding to each texel is calculated.
3.1 fractal noise calculation
The edge details of the cloud layer are obtained by superimposing noise textures via fractal Brownian motion, i.e., increasingly rich fractal detail is obtained by superimposing multiple layers of a basic noise texture with multiplied frequency and decreasing amplitude:

N(x) = Σ_{i=1..M} W_i · Noise(F_i · (x + D_g))

where D_g is the global scroll offset of the noise, W_i is the amplitude of the layer-i noise, F_i is the frequency of the layer-i noise, Noise() is the noise function, and x is the position variable.
The implementation of the invention superimposes four layers of fractal noise:
F_i takes 1, 3, 9, 27 in order.
W_i takes 0.5, 0.25, 0.125, 0.0625 in order.
For each position in space, the three-dimensional coordinates are split and summed to obtain the UV coordinates for sampling the two-dimensional texture. During cloud evolution, the system scrolls the noise in three dimensions to achieve changes of cloud detail outside the cellular automaton evolution. After the fractal noise intensity N is obtained, the actual thickness value p of the cloud layer at any point P on the cloud distribution texture is calculated as:

p = p_lum · N

where p_lum is the brightness of the pixel at point P in the cloud distribution texture obtained in the previous step.
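A sketch of the fractal noise evaluation under the four octaves above (Python/NumPy; nearest-texel sampling and the way the global scroll D_g enters the lookup are simplifying assumptions, since a real implementation would sample the tiling noise texture bilinearly):

```python
import numpy as np

def fbm(noise_tex: np.ndarray, uv, scroll=(0.0, 0.0)) -> float:
    """Fractal Brownian motion: four octaves with F_i = 1, 3, 9, 27 and
    W_i = 0.5, 0.25, 0.125, 0.0625 over one tiling 2D noise texture."""
    h, w = noise_tex.shape
    n = 0.0
    for i in range(4):
        f, a = 3 ** i, 0.5 ** (i + 1)           # frequency F_i, amplitude W_i
        u = (uv[0] + scroll[0]) * f
        v = (uv[1] + scroll[1]) * f
        x, y = int(u * w) % w, int(v * h) % h   # wrap to keep the noise tiling
        n += a * noise_tex[y, x]
    return n

# Actual cloud thickness at a texel: p = p_lum * N, with p_lum the texel brightness.
```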
3.2 Density field calculation
According to the morphological characteristics of clouds in nature, the cloud layer constructed for volume rendering is modeled as a hemisphere with a flat bottom and an expanded top. The height of the cloud reference plane is set to 0, and the top P_Upper and bottom P_Lower of the cloud density field at a point P on the cloud distribution texture are calculated from the actual thickness value p of that point:

P_Upper = C · p
P_Lower = -0.3 C · p

where C is a cloud thickness parameter and -0.3 is the shrinkage factor of the bottom relative to the top. Modifying these constants controls the overall top and bottom thickness of the cloud.
P_Upper and P_Lower describe the upper and lower bounds of the cloud density field at the position of point P. A cloud is generally considered to have lower density at its edges and higher density at its center. For a point at the position of P with height h, its relative height ratio t is first calculated:

t = (h - P_Lower) / (P_Upper - P_Lower)
and then the cloud layer density P_den of the point is calculated from the ratio t (the original equation is an image not reproduced here; it applies a smoothstep falloff in t so that the density is low near the upper and lower bounds and high in the interior):
where smoothstep is a cubic interpolation function:

smoothstep(min, max, p) = 3t^2 - 2t^3, with t = clamp((p - min) / (max - min), 0, 1)
for PUpperAnd PLowerAnd (3) performing interpolation once again to smooth the rendering effect of the thin cloud on the cloud layer with smaller difference and thinner whole:
Pden=Pden·smoothstep(0,1.8,PUpper-PLower),if(PUpper-PLower)≤18
and 1.8 in the formula is the threshold value of cloud layer over-thinning.
Finally, the density P_den is multiplied by the overall transmittance constant to obtain the actual scattering rate α at the point:

α = P_den · transmittance

The opacity of the whole cloud layer can be adjusted through the overall transmittance constant. The choice of this parameter is discussed further in the path integration strategy of the next step.
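Putting section 3.2 together, a minimal sketch of the density-field evaluation (Python; the exact edge falloff over t is an assumption, since the original equation is given only as an image, and C = 1.0 and transmittance = 0.22 follow the example values of 4.2.4):

```python
def smoothstep(lo: float, hi: float, x: float) -> float:
    """Cubic interpolation: 3t^2 - 2t^3 with t = clamp((x-lo)/(hi-lo), 0, 1)."""
    t = min(max((x - lo) / (hi - lo), 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)

def scattering_rate(p_lum: float, N: float, h: float,
                    C: float = 1.0, transmittance: float = 0.22) -> float:
    """Scattering rate alpha at height h over a texel of the distribution texture."""
    p = p_lum * N                                  # actual thickness value
    p_upper, p_lower = C * p, -0.3 * C * p         # top and bottom of the field
    if p_upper <= p_lower:
        return 0.0
    t = (h - p_lower) / (p_upper - p_lower)        # relative height ratio
    den = smoothstep(0.0, 0.5, t) * smoothstep(1.0, 0.5, t)  # low at the edges
    if p_upper - p_lower <= 1.8:                   # thin-cloud smoothing
        den *= smoothstep(0.0, 1.8, p_upper - p_lower)
    return den * transmittance
```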
4. Cloud layer volume rendering integration
This section corresponds to the "construct envelope grid", "variable-length integration with stepping state transition", and "transparency calculation and blending" modules in the flow chart. The volume cloud effect uses a Ray Marching algorithm to draw clouds with a strong sense of volume, such as stratus, cumulus, and nimbus clouds. These clouds sit at low altitude, are more cluster-like, and are characterized by clear boundaries and pronounced light-dark contrast.
4.1 Overview of the basic path integration method
The Ray Marching algorithm for volume rendering steps from the viewpoint along the line of sight, sampling and accumulating calculation results. Fig. 3 depicts a typical simple Ray Marching algorithm: it starts from a plane in front of the viewpoint, and the color of each fragment is obtained from a fixed number of path-integral samples along the ray from the viewpoint through that fragment.
The simple fixed-count Ray Marching algorithm is easy to implement but has a series of problems:
1) Low efficiency. All samples along rays not covered by cloud are wasted, as are the many samples taken between two cloud layers.
2) Obvious aliasing. A potential consequence of wasting many samples is that the samples actually available are insufficient to generate a smooth, high-quality image, causing banding and aliasing; for thin clouds this may produce visual errors such as cloud break-up or missing cloud.
With a full-screen fixed-step-count Ray Marching algorithm, when the cloud area is large, banding and aliasing caused by uneven sampling are clearly observable even with 256 samples along the whole ray for every fragment.
4.2 integration strategy of the invention
To solve the problems of fixed-count full-screen path integration, the invention mainly provides two methods: first constructing a cloud envelope mesh, then integrating from the envelope with a variable-length strategy.
4.2.1 constructing an envelope grid
Path integration from the viewpoint may waste many samples before reaching the cloud region, and sampling in regions without any cloud is entirely wasted. After the cloud envelope mesh is constructed, the path integral starting from a fragment on the mesh begins only near the cloud layer. The system constructs the envelope mesh from the 512×512 cloud distribution texture generated in the previous step. Since the brightness after multiplying the superimposed fractal noise by the distribution texture is always less than or equal to the distribution texture itself, this texture actually describes the envelope region in which cloud may appear.
The envelope uses a 129×129 square grid matched to the cloud distribution texture downsampled to 128×128. First, the post-processed texture is traversed to find all pixels with brightness greater than zero and record the vertices they generate; the corresponding vertex indices and triangle indices are then generated. For each square region, a diagonal edge connects the two vertices with the largest interpolated height.
A cloud envelope mesh generated from a cloud distribution texture is shown in fig. 4.
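A sketch of the envelope-mesh generation (Python/NumPy; vertices are emitted on the (128+1)^2 lattice only around non-zero texels, and the fixed diagonal per quad is a simplification, since the text chooses the diagonal from the vertex heights):

```python
import numpy as np

def envelope_mesh(dist_tex, cell_size=1.0):
    """Build the cloud envelope grid over a 128x128 distribution texture:
    geometry is emitted only where cloud may appear (non-zero brightness)."""
    tex = np.asarray(dist_tex, dtype=np.float32)
    h, w = tex.shape                      # 128 x 128 after downsampling
    verts, index_of, tris = [], {}, []

    def vid(ix, iy):
        # Vertices live on the (w+1) x (h+1) = 129 x 129 lattice.
        if (ix, iy) not in index_of:
            index_of[(ix, iy)] = len(verts)
            verts.append((ix * cell_size, iy * cell_size))
        return index_of[(ix, iy)]

    for y in range(h):
        for x in range(w):
            if tex[y, x] > 0:             # envelope quad over this texel
                a, b = vid(x, y), vid(x + 1, y)
                c, d = vid(x + 1, y + 1), vid(x, y + 1)
                tris += [(a, b, c), (a, c, d)]   # fixed diagonal a-c
    return verts, tris
```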
4.2.2 Variable-length integration and stepping state transition
When performing path integration, unnecessary samples should be minimized. Because the volume cloud aggregates into clusters and bands, the idea of variable-length integration is to advance with a larger step while the integration path is outside a cloud, and to shrink the step and accumulate the result accurately once the path reaches the inside of a cloud (the cloud is represented by the envelope mesh, and inside/outside is determined by judging whether the integration position is inside or outside the envelope). The invention provides a stepping state transition method based on a variable-length integration state machine, the state machine and transitions of which are shown in Fig. 5.
The stepping of the integration position is divided into a large step (FAST_STEP) and a small step (SLOW_STEP).
State 0: no cloud has been encountered. Large FAST_STEPs are performed.
State 1: the last FAST_STEP encountered cloud. A small SLOW_STEP is performed next.
State 2: already inside the cloud. Small SLOW_STEPs are performed.
State 3: the upper limit of the number of small steps has been reached. The cloud situation along the route is no longer considered, and only FAST_STEPs are performed until all steps are exhausted.
The key state transitions are:
A: when the cloud density field at the new position after a FAST_STEP is not 0, the state machine transfers from state 0 to state 1. Since the preceding FAST_STEP is large, the integrator steps back one FAST_STEP along the path to ensure that the following SLOW_STEPs capture all cloud details.
B: when in state 1 and the density field at the new position after the last SLOW_STEP is not 0, transfer from state 1 to state 2.
C: when in state 2 and the density field at the new position after the last SLOW_STEP is 0, the cloud layer is considered to have been left, and transfer from state 2 to state 0.
D: when the SLOW_STEP count reaches its upper limit, the system unconditionally transitions to state 3.
Variable-step path integration designed this way captures the details of various cloud layers well. Even for a very thin cloud, as long as one FAST_STEP hits any region where the density field is not 0, the algorithm switches to SLOW_STEP and samples the whole region with high accuracy.
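A sketch of the state machine of Fig. 5 driving one ray (Python; the step lengths FAST and SLOW are illustrative values, while the budgets of 64 fast and 64 slow steps follow the example parameters of 4.2.4):

```python
FAST, SLOW = 0.8, 0.1          # illustrative step lengths
MAX_FAST, MAX_SLOW = 64, 64    # step budgets from the example parameters

def march(origin, direction, density, path_len):
    """Variable-length ray march; `density` maps a 3D point to alpha."""
    state, t = 0, 0.0
    fast_left, slow_left = MAX_FAST, MAX_SLOW
    samples = []                          # (distance, alpha) pairs to blend
    while t < path_len and (fast_left > 0 or slow_left > 0):
        if state in (0, 3):               # coarse stepping outside the cloud
            if fast_left == 0:
                break
            step, fast_left = FAST, fast_left - 1
        else:                             # fine stepping inside/near the cloud
            step, slow_left = SLOW, slow_left - 1
        t += step
        p = tuple(o + t * d for o, d in zip(origin, direction))
        a = density(p)
        if state == 0 and a > 0:          # A: a FAST_STEP just hit the cloud
            t -= step                     # back up one FAST_STEP
            state = 1
            continue                      # resample from the backed-up position
        if state == 1 and a > 0:          # B: confirmed inside the cloud
            state = 2
        elif state == 2 and a == 0:       # C: left the cloud
            state = 0
        if a > 0:
            samples.append((t, a))
        if slow_left == 0:                # D: small-step budget exhausted
            state = 3
    return samples
```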
4.2.3 Transparency calculation and blending
Before describing the illumination model, we first describe how transparency accumulates in the cloud density field. For a fragment, the colors and transparencies of all sampling points on its integration path are calculated and blended. For each sampling point on the integration path, the scattering rate α of the point (as described in 3.2) is calculated as the opacity of the point, and the result calculated from the illumination model is its RGB color value. Blending between sampling points comprises color blending and transparency blending. For a sampling point P, the computed color and transparency of the point are written as:

P_sample = (P_rgb, P_a)

where P_rgb is the color of the point and P_a is its opacity.
For each fragment, color and transparency accumulation starts from the initial value (0,0,0, 0).
For the accumulated color S and a new sampling point P_sample, blending is performed using premultiplied alpha (pre-multiplied Alpha):

S'_rgb = S_rgb + (1.0 - S_a) · P_rgb · P_a
S'_a = S_a + (1.0 - S_a) · P_a

The above formulas can also be written in the unified form:

S' = S + (1.0 - S_a) · P_premultiplied
the color of P is multiplied by the opacity of P, and then the color of P is mixed by using a mixed mode of the multiplied opacity. Such mixing is done at each sampling, and the final result is a pre-multiplication of Alpha's patch color, mixed with the sky background in the same way (Add, One OneMinusSrcAlpha). At the same time, once the opacity S of a certain fragmentaWhen 1 is reached (namely, the complete opacity is realized), the path integration can be terminated early, and the cloud layer volume rendering integration is completed.
4.2.4 parameter selection and optimization
The characteristics of the above parameters' values are as follows:
Overall transmittance: as long as the visual effect is unaffected, a larger value lets the integration terminate earlier, but too large a value makes the cloud layer look stiff.
Large step FAST_STEP: too few steps may break thin cloud layers apart; too large a step size visibly degrades the result once the SLOW_STEPs are used up.
Small step SLOW_STEP: too few steps force an early switch back to the large step, which strongly affects the visual result.
Illumination sample count LSTEP: described in detail in the next step. Since these samples sit in the inner loop, even a small count significantly affects performance.
It is desirable for the path integration to end as early as possible to reduce looping. After adjustment and trade-offs, the effect of the invention uses an overall transmittance of 0.22, a maximum of 64 FAST_STEPs, a maximum of 64 SLOW_STEPs, and 4 illumination samples per sample point.
5. Illumination model and illumination calculation
This section describes how the illumination color value is computed at each sample (each sampling point on the integration path), corresponding to the "illumination calculation" module in the flow chart. When a light ray enters a cloud cluster (the cloud represented by the envelope mesh), part of it is absorbed through multiple scattering by water droplets and ice crystals, part is scattered to other directions, and light scattered from other directions also joins the optical path. In non-real-time fields such as film effects, the multiple scattering inside a cloud can be computed accurately by certain methods, but the computation, growing exponentially with the number of scattering events, cannot meet real-time requirements. The illumination model used by the invention computes the single-scattering effect of a point light source on the cloud through Beer's law and the Henyey-Greenstein phase function.
5.1 Beer's law
Beer's law describes the relationship between the absorption of light passing through a medium and the thickness and concentration of the absorbing medium. It is expressed as follows:

T = Φ_t / Φ_0 = e^(-τ)

where T is the transmittance, Φ_t is the luminous flux transmitted through the medium, Φ_0 is the luminous flux arriving at the medium, and τ is the optical depth. In this method, an additional group of path-integral samples toward the light source is taken at each sampling point, and the scattering rates α along this path are summed as the optical depth τ of the sampling point:
τ = Σ_y α(y)
Beer's law states that as the optical depth increases, the intensity of light transmitted to the sampling point decays exponentially. The variable y above runs over the integration distance toward the light source. Substituting this result into the brightness calculation, the lit color contributed at a sampling point P of the path integration can be expressed as:

C_P = LightColor · e^(-τ(P))
this set of additional path integrals is called illumination sampling (LSTEP). The integral is used for judging which thickness of cloud layer the light source passes before reaching the sampling point, and the light intensity is correspondingly attenuated according to the beer law. The sampling is done only a few times (4 times in the present invention) due to the inner layer of the path integration cycle and stops as soon as it leaves the cloud (the sampled scattering rate a is zero).
5.2 Henyey-Greenstein phase function
Beer's law computes well the influence of illumination received through cloud layers of different thicknesses, but this scattering calculation is isotropic, whereas Mie scattering, produced mainly by the water droplets and ice crystals in clouds, has a markedly anisotropic character; many meteorological phenomena, such as fog bows and solar halos, are related to this anisotropy of Mie scattering. A more common phenomenon is the silvery-white band at the edges of a backlit cloud (the silver lining).
This phenomenon is caused by the strong forward scattering (forward scattering peak) that dominates Mie scattering: the cloud is thin at its edges, and strong forward scattering leaves the direction of the light path almost unchanged with little intensity attenuation. Since the phase function of Mie scattering is complex, applications commonly approximate the scattering distribution with the Henyey-Greenstein phase function, which is expressed as:

p_HG(θ) = (1 - g^2) / (4π · (1 + g^2 - 2g·cosθ)^(3/2))
where g is the anisotropy coefficient and θ is the angle between the incident and outgoing rays; the larger g is, the more the distribution of the phase function concentrates on the forward-scattering part where θ = 0.
When the illumination color is calculated at each sampling point, the influence of the Henyey-Greenstein phase function is calculated from the angle between the point's view vector (i.e., the integration direction) and the light source vector, and multiplied into the Beer's-law color value; the per-sample lit color of the path integration becomes:

C_P = LightColor · e^(-τ(P)) · p_HG(θ)
by applying the illumination calculation of the Henyey-Greenstein phase function, a remarkable silver lining phenomenon can be noticed at the edge of the cloud layer near the light source.
5.3 illumination parameter adjustment
The cloud colors input from outside the system comprise the cloud's ambient color BaseColor and the light source color LightColor, set before illumination calculation. The final color of each sampling point is the sum of the base color and the light source color contribution; parts not influenced by the light source (such as the back of a thick cloud) show the ambient color. By adjusting the ambient color and the light source color, the system can realize weather effects of various colors at different times of day.
The above embodiments are only intended to illustrate the technical solution of the present invention and not to limit the same, and a person skilled in the art can modify the technical solution of the present invention or substitute the same without departing from the spirit and scope of the present invention, and the scope of the present invention should be determined by the claims.

Claims (9)

1. A real-time dynamic cloud layer drawing method based on a cellular automaton comprises the following steps:
1) generating a cellular automaton for the dynamic cloud layer, where the cellular automaton adopts the Moore neighborhood as the cell neighborhood together with the live/dead judgment rules of the Game of Life, i.e., different state transition modes are used for a target cell according to its current state; the state transition rule is Rule = {B0, B1, …, B8, S0, S1, …, S8}, where B0-B8 are the state transition rules of a dead cell when the number of living cells in its neighborhood is 0-8, and S0-S8 are the corresponding state transition rules of a living cell;
2) the cellular automaton builds its data structure from the input low-resolution global texture, and this serves as the cell evolution texture of each frame, changing over the sequence; the data structure comprises a global array cellMap and the individual life systems, where different life systems correspond to different areas of the global texture; a group of cell coordinates and vitality values is determined for each area from the state transition rule and the pixel values, and stored in the life system of that area; each pixel of the image corresponds to a cell, and the pixel value corresponds to the vitality of the cell; the global array cellMap stores the cell state information corresponding to each pixel of the current frame; the cellular automaton calculates the next-frame cell changes of each life system according to that system's state transition rule; after all life systems have been calculated, the global array cellMap is updated from the newly obtained next-frame cell information, all life systems are then traversed, and the living cells owned by each life system are obtained and recorded into its cellList;
3) smoothing and interpolating the cell evolution texture of each frame obtained in step 2) to enlarge it into the large-size texture of the corresponding frame;
4) superimposing fractal noise on the large-size texture of each frame to generate cloud details, and calculating the value of the density field at the point corresponding to each texel;
5) sampling the volume cloud formed by multiple cloud clusters through a stepping state transition method based on a variable-length integration state machine;
6) calculating the scattering effect of the point light source on the cloud layer through Beer's law and the Henyey-Greenstein phase function, and rendering a realistic image of the cloud layer.
2. The method of claim 1, wherein the state transition rule is a state transition function that determines the state of cell i at the next moment from the current state of cell i and the states of the cells j in the neighborhood of cell i; the state transition function is

s_i(t+1) = f(s_i(t), {s_j(t) | j ∈ N(i)})

where f is a transition function of any form.
3. The method of claim 1, wherein a boundary constraint is set for the life system, the boundary constraint being: cells are cut off or linearly attenuated according to the distance from a cell in the life system to the center of the life system.
4. The method of claim 1, wherein the life system supports random "seeding" within a set radius, i.e., new cells with a certain vitality are generated directly at random locations of the life system.
5. The method of claim 1, wherein in step 3) the image obtained by smoothing and interpolation-enlarging the cell evolution texture of each frame from step 2) is fused with an image obtained by interpolation smoothing of a pre-made high-resolution image, yielding the large-size texture of the corresponding frame.
6. The method of claim 1, wherein the fractal noise is

N(x) = Σ_{i=1..M} W_i · Noise(F_i · (x + D_g))

where D_g is the global scroll offset of the noise, W_i is the amplitude of the layer-i noise, F_i is the frequency of the layer-i noise, M is the number of layers, Noise() is a noise function expressed in the form of a noise texture, and x is the position variable of the noise function.
7. The method according to claim 1 or 6, wherein in step 4) the method for calculating the value of the density field at the point corresponding to each texel comprises:
1-1) modeling the cloud layer constructed for volume rendering as a hemisphere with a flat bottom and an expanded top, setting the height of the cloud reference plane to 0, and calculating from the actual thickness value p of a point P on the cloud distribution texture the top P_Upper and bottom P_Lower of the cloud density field at P;
1-2) for a point at the position of P with height h, first calculating its relative height ratio t, then calculating the cloud density P_den of the point from the ratio t;
1-3) multiplying the density P_den by the overall transmittance constant to obtain the actual scattering rate α at the point.
8. The method of claim 1 or 6, wherein the method for drawing the volume cloud is: first generating a cloud envelope grid from the cloud distribution texture, and then drawing the volume cloud from the cloud envelope grid with a stepping state transition method based on a variable-length integration state machine; the stepping state transition method based on the variable-length integration state machine comprises:
2-1) dividing the stepping of the integration position into a large step FAST_STEP and a small step SLOW_STEP; state 0 means no cloud has been met, and large FAST_STEPs are performed; state 1 means the last FAST_STEP met cloud, and a small SLOW_STEP is performed next; state 2 means being inside the cloud, and small SLOW_STEPs are performed; state 3 means the upper limit on the number of small steps has been reached, and only FAST_STEPs are performed until all steps are exhausted;
2-2) when the cloud density field at the new position after a FAST_STEP is not 0, the state machine transfers from state 0 to state 1; when in state 1 and the density field at the new position after the last SLOW_STEP is not 0, it transfers from state 1 to state 2; when in state 2 and the density field at the new position after the last SLOW_STEP is 0, the cloud layer is considered to have been left, and it transfers from state 2 to state 0; when the SLOW_STEP count reaches its upper limit, the system unconditionally transitions to state 3.
9. The method according to claim 8, wherein for each sampling point P on the integration path, the scattering rate α of point P is calculated as the opacity of point P; the color of point P, an RGB value obtained from the illumination model, is pre-multiplied by the opacity of point P and then blended using a premultiplied-alpha blend mode.
CN202110214214.9A 2021-02-25 2021-02-25 Real-time dynamic cloud layer drawing method based on cellular automaton Active CN113160413B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110214214.9A CN113160413B (en) 2021-02-25 2021-02-25 Real-time dynamic cloud layer drawing method based on cellular automaton

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110214214.9A CN113160413B (en) 2021-02-25 2021-02-25 Real-time dynamic cloud layer drawing method based on cellular automaton

Publications (2)

Publication Number Publication Date
CN113160413A CN113160413A (en) 2021-07-23
CN113160413B true CN113160413B (en) 2022-07-12

Family

ID=76883492

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110214214.9A Active CN113160413B (en) 2021-02-25 2021-02-25 Real-time dynamic cloud layer drawing method based on cellular automaton

Country Status (1)

Country Link
CN (1) CN113160413B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117523026B (en) * 2024-01-08 2024-03-29 北京理工大学 Cloud and fog image simulation method, system, medium and terminal for infrared remote sensing imaging

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7724258B2 (en) * 2004-06-30 2010-05-25 Purdue Research Foundation Computer modeling and animation of natural phenomena
US20120313942A1 (en) * 2011-06-09 2012-12-13 Carestream Health, Inc. System and method for digital volume processing with gpu accelerations

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103177461A (en) * 2013-01-31 2013-06-26 North China Electric Power University (Baoding) Fractal cloud model cellular automation based art pattern generation method
CN103793894A (en) * 2013-12-04 2014-05-14 State Grid Corporation of China Cloud model cellular automata corner detection-based substation remote viewing image splicing method
CN106570929A (en) * 2016-11-07 2017-04-19 Peking University (Tianjin Binhai) New Generation Information Technology Research Institute Dynamic volume cloud construction and drawing method
CN110298909A (en) * 2019-06-28 2019-10-01 Beijing University of Technology A kind of weathering phenomena simulation method based on three-dimensional cellular automaton

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"基于元胞自动机的云层实时模拟";姚海 等;《系统仿真学报》;20080630;第20卷(第11期);第2946-2950页 *

Also Published As

Publication number Publication date
CN113160413A (en) 2021-07-23

Similar Documents

Publication Publication Date Title
Djurcilov et al. Visualizing scalar volumetric data with uncertainty
US7710418B2 (en) Systems and methods for the real-time and realistic simulation of natural atmospheric lighting phenomenon
CN102903146B (en) For the graphic processing method of scene drawing
US8810590B2 (en) Method and apparatus for spatial binning on a GPU and global path planning to avoid spatially binned objects
CN106570929B (en) Construction and drawing method of dynamic volume cloud
CN101271587B (en) Illumination and shade drafting method based on transition light label drawing
CN111784833A (en) WebGL-based flood evolution situation three-dimensional dynamic visualization display method
CN104881839A (en) Hotspot map generation method based parallel acceleration
CN113160413B (en) Real-time dynamic cloud layer drawing method based on cellular automaton
CN115222614A (en) Priori-guided multi-degradation-characteristic night light remote sensing image quality improving method
Zhang et al. Tree branch level of detail models for forest navigation
CN110400366B (en) Real-time flood disaster visualization simulation method based on OpenGL
CN104463937A (en) Animation generation method and device based on irregular object
Dachsbacher Interactive terrain rendering: towards realism with procedural models and graphics hardware
US8115780B2 (en) Image generator
CN116071479A (en) Virtual vegetation rendering method and device, storage medium and electronic equipment
CN115035231A (en) Shadow baking method, shadow baking device, electronic apparatus, and storage medium
US9514566B2 (en) Image-generated system using beta distribution to provide accurate shadow mapping
CN117710557B (en) Method, device, equipment and medium for constructing realistic volume cloud
CN109360263A (en) A kind of the Real-time Soft Shadows generation method and device of resourceoriented restricted movement equipment
CN103198495B (en) The texture compression method that importance degree drives
CN116778053B (en) Target engine-based map generation method, device, equipment and storage medium
CN115906477A (en) Real-time thunderstorm cloud simulation method based on cloud picture
Ephanov et al. Virtual texture: A large area raster resource for the gpu
KR20020031097A (en) Graphics system having a super-sampled sample buffer with efficient storage of sample position information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant