CN110084872A - A kind of the Animation of Smoke synthetic method and system of data-driven - Google Patents

A kind of the Animation of Smoke synthetic method and system of data-driven Download PDF

Info

Publication number
CN110084872A
CN110084872A CN201910228740.3A CN201910228740A CN110084872A CN 110084872 A CN110084872 A CN 110084872A CN 201910228740 A CN201910228740 A CN 201910228740A CN 110084872 A CN110084872 A CN 110084872A
Authority
CN
China
Prior art keywords
smoke
dimensional
data
animation
sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910228740.3A
Other languages
Chinese (zh)
Other versions
CN110084872B (en
Inventor
朱登明
李园
王兆其
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Computing Technology of CAS
Original Assignee
Institute of Computing Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Computing Technology of CAS filed Critical Institute of Computing Technology of CAS
Priority to CN201910228740.3A priority Critical patent/CN110084872B/en
Publication of CN110084872A publication Critical patent/CN110084872A/en
Application granted granted Critical
Publication of CN110084872B publication Critical patent/CN110084872B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation
    • G06T13/203D [Three Dimensional] animation
    • G06T13/603D [Three Dimensional] animation of natural phenomena, e.g. rain, snow, water or plants

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention relates to a kind of Animation of Smoke synthetic methods of data-driven, comprising: according to smoke data collection, generates two-dimentional smog outline data;Network is generated by the training of the two dimension smog outline data, to obtain smoke creating model;By the smoke creating model, which is generated as three-dimensional smoke sequence;The three-dimensional smoke sequence is rendered, animation is generated smoke.Animation of Smoke synthetic method proposed by the present invention keeps smog information and control mode simpler intuitive, and maintains simple input and descended the authenticity of Animation of Smoke, can generate in real time user controllable shape and realistic Animation of Smoke.

Description

Data-driven smoke animation synthesis method and system
Technical Field
The invention relates to the field of computer graphics, in particular to a controllable synthesis method and system of fluid animation.
Background
The simulation of natural phenomena has been one of the hot problems in computer graphics research. The fluid simulation, particularly the real-time modeling and controllable animation technology of the smoke animation, has increasingly wide application requirements and values in various fields such as movie and television special effects, advertisements, three-dimensional game development, virtual reality and the like. However, natural smoke is a complex physical phenomenon, and due to the influence of various factors such as buoyancy, obstacles, wind energy input and internal vortex, the concentration distribution in the smoke is irregular, the smoke has various shapes, can be uniform or has severe vortex changes, the geometric shape of the smoke has extremely strong irregularity, and the boundary is difficult to distinguish and define. For many years, in the research of fluid simulation, from the traditional fluid dynamic model, the method of solving motion elements by a Navier-Stokes equation (N-S equation for short) to the non-physical method, namely the fluid animation synthesis method driven by data, the reality and quality of the synthesized smoke are continuously improved, and the calculation efficiency and the real-time performance are gradually improved.
In the prior art, "fluid animation generation method and device based on deep learning and SPH framework" (application publication No. CN108717722A), a fluid animation generation method and device based on deep learning and SPH framework is disclosed, which is introduced into an SPH fluid simulation framework after training by using a neural network model, and aims to realize the expression of low-precision fluid with high details by off-line rendering.
The patent refers to the field of 'methods or AN _ SNparatus for the accelerated generation of fluid animation'. The invention utilizes the training data obtained by calculation before and after the projection step, adjusts the transmission node weight of the artificial neural network through the training of the artificial neural network, directly obtains the final calculation model, and completely avoids the original time-consuming projection step numerical calculation process. The method is suitable for greatly accelerating the calculation of the projection step when the Eulerian method simulates the fluid animation.
In the past, a velocity field is generated by solving a parameter equation based on an N-S equation to obtain synthesized smoke, three-dimensional data input is required to obtain three-dimensional velocity or density field output based on deep learning, and the use of the parametric equation can be effectively controlled based on the deep understanding of a user on a fluid system, but the calculation amount is large, the real-time requirement is difficult to achieve, the control mode is single, and the use of the user is limited and restricted; the input of three-dimensional data is not easy for users to obtain, process and use, so that a great deal of troublesome data preprocessing work is still required to be done with investment cost during use.
Disclosure of Invention
In order to solve the problems of large calculation amount, poor real-time performance and single control mode faced by the method adopting the N-S equation and the problem that the input of three-dimensional data is not easy to obtain, process and use for users, the invention provides a data-driven smoke animation synthesis method, which utilizes a two-dimensional smoke contour to obtain a three-dimensional fluid field with a time sequence relation through a smoke generation model to generate a three-dimensional smoke animation.
Specifically, the smoke animation synthesis method of the present invention includes: generating two-dimensional smoke contour data according to the three-dimensional smoke data set; generating a network through training of the two-dimensional smoke contour data to obtain a smoke generation model; generating the two-dimensional smoke contour data into a three-dimensional smoke sequence through the smoke generation model; rendering the three-dimensional smoke sequence to generate a three-dimensional smoke animation.
The smoke animation synthesis method of the invention is characterized in that the loss function of the smoke generation modelComprises the following steps:
wherein x is the two-dimensional smoke profile data, G (x) is the three-dimensional smoke sequenceThe single frame three-dimensional density field of (a) y is the three-dimensional smoke data setSingle frame three-dimensional density field, DSDiscriminators for determining the spatial correspondence of G (x) with y, EnWhen x is of scale n, G (x) passes through DSIdentification expectation of (1), EmDiscriminator D at scale m of ySTo pairIdentification expectation of DtTo judgeAnddiscriminator of timing consistency, FjTo extract DSThe output characteristic map of the j-th layer of (1),to extract FjTime to loss functionInfluence weight coefficient of En,jWhen x is of scale n toAndrespectively extract FjThe space consistency of time phase comparison is expected, and n and j are positive integers.
According to the smoke animation synthesis method, the three-dimensional smoke data set is obtained through solving of the Navier-Stokes equation.
According to the smoke animation synthesis method, the three-dimensional smoke animation is generated through rendering under multiple visual angles based on the GPU.
The invention also provides a data-driven smoke animation synthesis system, which comprises: the two-dimensional data generation module is used for generating two-dimensional smoke contour data according to the three-dimensional smoke data set; the smoke generation model training module is used for generating a network through training of the two-dimensional smoke contour data so as to obtain a smoke generation model; the three-dimensional sequence generation module is used for generating the two-dimensional smoke contour data into a three-dimensional smoke sequence through the smoke generation model; and the smoke animation rendering module is used for rendering the three-dimensional smoke sequence to generate the three-dimensional smoke animation.
The smoke animation synthesis system of the invention, wherein the loss function of the smoke generation modelComprises the following steps:
wherein x is the two-dimensional smoke profile data, G (x) is the three-dimensional smoke sequenceThe single frame three-dimensional density field of (a) y is the three-dimensional smoke data setOf a single frame of a three-dimensional density field, DSTo determine the spatial consistency of G (x) and yDiscriminator of, EnWhen x is of scale n, G (x) passes through DSIdentification expectation of (1), EmDiscriminator D at scale m of ySTo pairIdentification expectation of (1), DtTo judgeAnddiscriminator of timing consistency, FjTo extract DSThe output characteristic map of the j-th layer of (1),to extract FjTime to loss functionInfluence weight coefficient of En,jWhen x is of scale n toAndrespectively extract FjThe space consistency of time phase comparison is expected, and n and j are positive integers.
The smoke animation synthesis system of the invention further comprises: and the smoke data set generating module is used for solving and acquiring the three-dimensional smoke data set through the Navier-Stokes equation.
The smoke animation synthesis system of the invention, wherein the smoke animation rendering module specifically comprises: the three-dimensional smoke animation is generated through rendering under multiple visual angles based on the GPU.
The invention also proposes a readable storage medium storing executable instructions for executing the data-driven smoke animation synthesis method as described above.
The invention also provides a data processing device which comprises the readable storage medium, wherein the data processing device calls and executes the executable instructions in the readable storage medium to perform three-dimensional smoke animation synthesis.
The smoke animation synthesis method provided by the invention enables smoke information and a control mode to be simpler and more intuitive, keeps the authenticity of the smoke animation under simple input, and can generate the smoke animation with a user-controllable shape and sense of reality in real time.
Drawings
FIG. 1 is a flow chart of a data-driven smoke animation synthesis method of the present invention.
Figure 2 is a schematic diagram of an aerosol generation model network of the present invention.
Figure 3 is a graph of the characteristic loss for an intermediate result of the present invention.
Fig. 4 is a schematic diagram of a data-driven smoke animation synthesis system of the present invention.
Detailed Description
In order to make the technical solution of the present invention more clear, the present invention is further described in detail below with reference to the accompanying drawings, it being understood that the specific examples described herein are only for the purpose of illustrating the present invention and are not to be construed as limiting the present invention.
The invention provides a data-driven smoke animation synthesis method aiming at the problems of large calculated amount, poor real-time performance and single control mode faced by an N-S equation method and the problem that the input of three-dimensional data is difficult to obtain, process and use for users.
Specifically, the smoke animation synthesis method of the present invention includes: generating two-dimensional smoke contour data according to the three-dimensional smoke data set; generating a network through training of the two-dimensional smoke contour data to obtain a smoke generation model; generating the two-dimensional smoke contour data into a three-dimensional smoke sequence through the smoke generation model; rendering the three-dimensional smoke sequence to generate a three-dimensional smoke animation.
Loss function of an aerosol generation modelComprises the following steps:
wherein x is two-dimensional smoke profile data, and G (x) is a three-dimensional smoke sequenceThe single-frame three-dimensional density field of (a) y is a three-dimensional smoke data setOf a single frame of a three-dimensional density field, DSDiscriminators for determining the spatial correspondence of G (x) with y, EnWhen x is of scale n, G (x) passes through DSIdentification expectation of (1), EmDiscriminator D at scale m of ySTo pairIdentification expectation of (1), DtTo judgeAnddiscriminator of timing consistency, FjTo extract DSThe output characteristic map of the j-th layer of (1),to extract FjTime to loss functionInfluence weight coefficient of En,jWhen x is of scale n toAndrespectively extract FjThe space consistency of time phase comparison is expected, and n and j are positive integers.
In an embodiment of the invention, the three-dimensional smoke dataset is obtained by solving the navier-stokes equation.
In the embodiment of the invention, the three-dimensional smoke animation is generated from the three-dimensional smoke sequence through rendering under multiple visual angles based on the GPU.
The invention also provides a data-driven smoke animation synthesis system, which comprises: the two-dimensional data generation module is used for generating two-dimensional smoke contour data according to the three-dimensional smoke data set; the smoke generation model training module is used for generating a network through training of the two-dimensional smoke contour data so as to obtain a smoke generation model; the three-dimensional sequence generation module is used for generating the two-dimensional smoke contour data into a three-dimensional smoke sequence through the smoke generation model; and the smoke animation rendering module is used for rendering the three-dimensional smoke sequence to generate the three-dimensional smoke animation.
The following describes a specific implementation of the method of the present invention with reference to the accompanying drawings. FIG. 1 is a flow chart of a data-driven smoke animation synthesis method. As shown in fig. 1, in order to perform realistic animation synthesis in a complex scene for smoke, three-dimensional smoke field data is acquired first to be compared with real data, two-dimensional projection is obtained through projection mapping, contour information is extracted according to a set threshold value, data enhancement (such as noise addition) is performed, a corresponding smoke generation model and a loss function conforming to the purpose defined by input data are built to obtain a three-dimensional smoke attribute field sequence with a time sequence relation, and then smoke animation is obtained through rendering. Specifically, the method comprises the following steps:
1) generating and acquiring three-dimensional smoke data set under complex scene of diversity by solving N-S equation
According to the current N-S equation method, a concentration field data set of various complex scenes is obtained by applying a mantaflow and parameterized setting of fluid simulation software.
2) A projection transformation-based contour extraction and representation method is provided, which represents the contour of smoke in a two-dimensional image.
21) Two-dimensional projection matrix for acquiring smoke field data
Taking the orthographic projection of the density field as an example, taking the lower left front corner as the origin, taking the directions towards the right, upwards and into the paper as the positive directions of the X, Y and Z axes respectively, and accumulating the density values of the same X and Y coordinates in the three-dimensional density field into the same plane with Z being 0 to obtain the orthographic projection.
22) Defining and extracting smoke information contained in the projection as a two-dimensional contour by setting a threshold value
The set of points defined to exceed a certain concentration value constitutes the outline of the two-dimensional projection of smoke. Setting a proper density value as a threshold value, setting the value exceeding the threshold value as 1, and setting the value less than the threshold value as 0 to obtain binary contour information.
3) A three-dimensional smoke generation model method based on a time sequence related generation network and an autoencoder is provided.
Figure 2 is a schematic diagram of an aerosol generation model network of the present invention. As shown in fig. 2, training and adjusting the smoke generation model results in a reasonable sequence of smoke density fields with a time-series relationship.
31) Generator loss during network training
For a partial loss representation of the training process of the aerosol generating model, x is the two-dimensional aerosol profile data, G (x) is a single-frame three-dimensional density field in the three-dimensional aerosol sequence generated by the aerosol generating model, DsFor the determination G (x) of the aerosol-generating model and the representation of the identifier of the spatial conformity of the true contrast single-frame three-dimensional density field y, EnInputting the original two-dimensional smoke contour data with the scale of n, wherein G (x) in the smoke generation model passes through DsThe identification of the desired representation, in the context of the discrete data discussed herein above and below, may be understood as the identification mean.
The score of the three-dimensional smoke sample generated by the generator in the discriminator is optimized, so that the effect of false and spurious is improved, and the generator capable of containing the vivid three-dimensional smoke characteristic distribution is obtained.
32) Discriminator loss during network training
For an overall representation of the loss of the aerosol generating model training process,using real data y for the aerosol generation model by fluid dynamicsInferred representations of a set of temporally successive flow field frames,using generation data G (x) for the aerosol generation model, a representation of a chronologically continuous set of flow field frames obtained by hydrodynamic extrapolation, EnFor inputting the smoke generation model with the sample size of nThrough DtIs understood to mean the identification, E, in the case of discrete data discussed in the context of this documentmFor the discriminator pair when sample y is of scale mIs understood to mean the identification, D, in the case of discrete data discussed in the context of this documenttDetermining for the aerosol generation modelAndt is a positive integer.
The ability of the discriminator to discriminate between authenticity data also needs to be continually optimised as a countermeasure target for the generator, indirectly facilitating the generator to target more realistic results.
Wherein By means of convection calculation:
wherein,to use the representation of convective operation using fluid dynamics principles,representation of three-dimensional velocity fields, x, for the t-1 st and t +1 st frames, respectively, generated by the generator in the aerosol generation modelt-1、xt-1Is the representation of the sample data of the t-1 th and t +1 th frames input in the smoke generation model, G (x)t-1)、G(xt)、G(xt+1) The three-dimensional density fields of the t-1 th frame, the t frame and the t +1 th frame generated by the generator in the smoke generation model are respectively generated, and t is a positive integer.
In order to make the generated sequence have time continuity, the irregular micro jitter caused by noise and poor continuity between frames is reduced visually, and because the velocity density between the front frame and the rear frame conforms to the fluid motion law, the optimization of the model can be guided by punishing the inconsistency of the flow results of the adjacent continuous frames.
33) Loss of features extracted from each layerFigure 3 is a graph of the characteristic loss for an intermediate result of the present invention. As shown in fig. 3:
summary of loss of multi-layer profiles in an aerosol generation model during training of the aerosol generation modelShow, FjFor extracting discriminators D of the aerosol-generating modelsA representation of the output characteristic map of layer j,to extract FjRepresentation of the weight coefficient of influence of time on the global loss function, En,jWhen the input sample size is n, F is extracted for the generation result and the real control respectivelyjThe spatial consistency of the time phase comparison is expected, and n and j are positive integers.
Intermediate results often contain many implicit features, so a mirror network (G) of constraint generation result and real data usage generators can be employed-1) And calculating and extracting the difference of results obtained by some middle layers to improve the generation capability of the model.
34) Total loss functionComprises the following steps:
wherein,for the total loss representation in the training process of the smoke generation model, G (x) is a single-frame three-dimensional density field generated in the smoke generation model, y is a real contrast single-frame three-dimensional density field, DsFor the determination G (x) of the aerosol generation model and the representation of the identifier of the spatial conformity of the true contrast single-frame three-dimensional density field y, EnWhen the input original two-dimensional data scale is n, G (x) in the smoke generation model passes through DsIs used, in the context of discrete data discussed herein, to be understood as identifying a mean,using real data for the aerosol generation modely a representation of a temporally continuous set of flow field frames derived by fluid dynamics inference,using generation data G (x) for the aerosol generation model, a representation of a chronologically continuous set of flow field frames obtained by hydrodynamic extrapolation, EmFor the discriminator pair when sample y is of scale mIs understood to mean the identification, D, in the case of discrete data discussed in the context of this documenttDetermining for the aerosol generation modelAndof the temporal consistency of FjFor extracting discriminators D of the aerosol-generating modelsA representation of the output characteristic map of layer j,to extract FjRepresentation of the weight coefficient of influence of time on the global loss function, En,jWhen the input sample size is n, F is extracted for the generation result and the real control respectivelyjThe spatial consistency of the time phase comparison is expected, and n and j are positive integers.
The total loss function is adjusted by controlling the weights of different losses, so that the model training guidance is most effective, and the generating capacity of the generator is optimal.
4) And generating a smoke animation with reality sense through rendering the generated three-dimensional smoke sequence under multiple visual angles based on the GPU.
And (3) by combining the characteristics of a GPU (graphic Processing Unit), rendering the smoke by utilizing a ray casting method on the three-dimensional data under multiple visual angles to obtain a three-dimensional model of the smoke, and realizing the smoke simulation with good visual effect and high real-time property.
Fig. 4 is a schematic diagram of a data-driven smoke animation synthesis system of the present invention. As shown in fig. 4, an embodiment of the present invention further provides a readable storage medium and a data processing apparatus. The readable storage medium of the invention stores executable instructions which, when executed by a processor of a data processing device, implement the data-driven smoke animation synthesis method described above. It will be understood by those skilled in the art that all or part of the steps of the above methods may be implemented by a program instructing associated hardware (e.g., a processor) and the program may be stored in a readable storage medium, such as a read-only memory, a magnetic or optical disk, etc. All or some of the steps of the above embodiments may also be implemented using one or more integrated circuits. Accordingly, the modules in the above embodiments may be implemented in hardware, for example, by an integrated circuit, or in software, for example, by a processor executing programs/instructions stored in a memory. Embodiments of the invention are not limited to any specific form of hardware or software combination.
Although the present invention has been described with reference to the above embodiments, it should be understood that the invention is not limited to the embodiments, and that various changes and modifications can be made by one skilled in the art without departing from the spirit and scope of the invention.

Claims (10)

1. A data-driven smoke animation synthesis method, comprising:
generating two-dimensional smoke contour data according to the three-dimensional smoke data set;
generating a network through training of the two-dimensional smoke contour data to obtain a smoke generation model;
generating the two-dimensional smoke contour data into a three-dimensional smoke sequence through the smoke generation model;
rendering the three-dimensional smoke sequence to generate a three-dimensional smoke animation.
2. The smoke animation synthesis method of claim 1, wherein the smoke generation model's loss functionComprises the following steps:
wherein x is the two-dimensional smoke profile data, G (x) is the three-dimensional smoke sequenceThe single frame three-dimensional density field of (a) y is the three-dimensional smoke data setOf a single frame of a three-dimensional density field, DSDiscriminators for determining the spatial correspondence of G (x) with y, EnWhen x is of scale n, G (x) passes through DSIdentification expectation of (1), EmDiscriminator D at scale m of ySTo pairIdentification expectation of (1), DtTo judgeAnddiscriminator of timing consistency, FjTo extract DSThe output characteristic map of the j-th layer of (1),to extract FjTime to loss functionInfluence weight coefficient of En,jWhen x is of scale n toAndrespectively extract FjThe space consistency of time phase comparison is expected, and n and j are positive integers.
3. The smoke animation synthesis method of claim 1, wherein the three-dimensional smoke dataset is obtained by solving the navier-stokes equation.
4. The method of claim 1, wherein the three-dimensional smoke animation is generated by rendering at multiple views based on a GPU.
5. A data-driven smoke animation synthesis system, comprising:
the two-dimensional data generation module is used for generating two-dimensional smoke contour data according to the three-dimensional smoke data set;
the smoke generation model training module is used for generating a network through training of the two-dimensional smoke contour data so as to obtain a smoke generation model;
the three-dimensional sequence generation module is used for generating the two-dimensional smoke contour data into a three-dimensional smoke sequence through the smoke generation model;
and the smoke animation rendering module is used for rendering the three-dimensional smoke sequence to generate the three-dimensional smoke animation.
6. The smoke animation synthesis system of claim 5, wherein the loss function of the smoke generation modelComprises the following steps:
wherein x is the two-dimensional smoke profile data, G (x) is the three-dimensional smoke sequenceThe single frame three-dimensional density field of (a) y is the three-dimensional smoke data setOf a single frame of a three-dimensional density field, DSDiscriminators for determining the spatial correspondence of G (x) with y, EnWhen x is of scale n, G (x) passes through DSIdentification expectation of (1), EmDiscriminator D at scale m of ySTo pairIdentification expectation of (1), DtTo judgeAnddiscriminator of timing consistency, FjTo extract DSThe output characteristic map of the j-th layer of (1),to extract FjTime to loss functionInfluence weight coefficient of En,jWhen x is of scale n toAndrespectively extract FjThe space consistency of time phase comparison is expected, and n and j are positive integers.
7. The smoke animation synthesis system of claim 5, further comprising: and the smoke data set generating module is used for solving and acquiring the three-dimensional smoke data set through the Navier-Stokes equation.
8. The smoke animation synthesis system of claim 5, wherein the smoke animation rendering module specifically comprises: the three-dimensional smoke animation is generated through rendering under multiple visual angles based on the GPU.
9. A readable storage medium storing executable instructions for performing the data-driven smoke animation synthesis method of any one of claims 1 to 4.
10. A data processing apparatus comprising a readable storage medium as claimed in claim 9, the data processing apparatus retrieving and executing executable instructions in the readable storage medium for three-dimensional smoke animation synthesis.
CN201910228740.3A 2019-03-25 2019-03-25 Data-driven smoke animation synthesis method and system Active CN110084872B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910228740.3A CN110084872B (en) 2019-03-25 2019-03-25 Data-driven smoke animation synthesis method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910228740.3A CN110084872B (en) 2019-03-25 2019-03-25 Data-driven smoke animation synthesis method and system

Publications (2)

Publication Number Publication Date
CN110084872A true CN110084872A (en) 2019-08-02
CN110084872B CN110084872B (en) 2020-12-25

Family

ID=67413526

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910228740.3A Active CN110084872B (en) 2019-03-25 2019-03-25 Data-driven smoke animation synthesis method and system

Country Status (1)

Country Link
CN (1) CN110084872B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112380764A (en) * 2020-11-06 2021-02-19 华东师范大学 End-to-end rapid reconstruction method for gas scene under limited view

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070159049A1 (en) * 2006-01-11 2007-07-12 Samsung Electronics Co., Ltd. Method and system of rendering particle
US20110282641A1 (en) * 2010-05-16 2011-11-17 Stefan Bobby Jacob Xenos Method and system for real-time particle simulation
CN102496178A (en) * 2011-10-26 2012-06-13 北京航空航天大学 Three-dimensional smoke density field generating method based on single-viewpoint images
CN103489209B (en) * 2013-09-05 2016-05-18 浙江大学 A kind of controlled fluid animation producing method based on fluid key frame editor
CN105608727A (en) * 2016-03-01 2016-05-25 中国科学院计算技术研究所 Data driving inshore surge animation synthesis method and system
CN106340053A (en) * 2011-07-27 2017-01-18 梦工厂动画公司 Fluid dynamics framework for animated special effects

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070159049A1 (en) * 2006-01-11 2007-07-12 Samsung Electronics Co., Ltd. Method and system of rendering particle
US20110282641A1 (en) * 2010-05-16 2011-11-17 Stefan Bobby Jacob Xenos Method and system for real-time particle simulation
CN106340053A (en) * 2011-07-27 2017-01-18 梦工厂动画公司 Fluid dynamics framework for animated special effects
CN102496178A (en) * 2011-10-26 2012-06-13 北京航空航天大学 Three-dimensional smoke density field generating method based on single-viewpoint images
CN103489209B (en) * 2013-09-05 2016-05-18 浙江大学 A kind of controlled fluid animation producing method based on fluid key frame editor
CN105608727A (en) * 2016-03-01 2016-05-25 中国科学院计算技术研究所 Data driving inshore surge animation synthesis method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
张娟: "流体动画生成方法研究综述", 《集成技术》 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112380764A (en) * 2020-11-06 2021-02-19 华东师范大学 End-to-end rapid reconstruction method for gas scene under limited view
CN112380764B (en) * 2020-11-06 2023-03-17 华东师范大学 Gas scene end-to-end rapid reconstruction method under limited view

Also Published As

Publication number Publication date
CN110084872B (en) 2020-12-25

Similar Documents

Publication Publication Date Title
US7468730B2 (en) Volumetric hair simulation
US7983884B2 (en) Water particle manipulation
US7355607B2 (en) Automatic 3D modeling system and method
Raveendran et al. Blending liquids
CN107085629B (en) Fluid simulation method based on coupling of video reconstruction and Euler model
Dobashi et al. Using metaballs to modeling and animate clouds from satellite images
Jiang et al. Vr-gs: A physical dynamics-aware interactive gaussian splatting system in virtual reality
Yan et al. Interactive liquid splash modeling by user sketches
Hu et al. Generating video animation from single still image in social media based on intelligent computing
JP2008140385A (en) Real-time representation method and device of skin wrinkle at character animation time
CN117237542B (en) Three-dimensional human body model generation method and device based on text
CN110084872B (en) Data-driven smoke animation synthesis method and system
Dai et al. PBR-Net: Imitating physically based rendering using deep neural network
Saito et al. Efficient and robust skin slide simulation
Li et al. Image stylization with enhanced structure on GPU
CN114373034B (en) Image processing method, apparatus, device, storage medium, and computer program
Lee et al. CartoonModes: Cartoon stylization of video objects through modal analysis
CN106408639A (en) Curvature flow-based screen space fluid rendering method
Zamri et al. Research on atmospheric clouds: a review of cloud animation methods in computer graphics
Kuang et al. 3D Bounding Box Generative Adversarial Nets
US20110050693A1 (en) Automatic Placement of Shadow Map Partitions
CN117671110B (en) Real-time rendering system and method based on artificial intelligence
US8010330B1 (en) Extracting temporally coherent surfaces from particle systems
Vanakittistien et al. Game‐ready 3D hair model from a small set of images
Huang et al. Interactive Painting Volumetric Cloud Scenes with Simple Sketches Based on Deep Learning

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant