CN109461205A - A method for reconstructing three-dimensional fireworks from a fireworks video - Google Patents

A method for reconstructing three-dimensional fireworks from a fireworks video

Info

Publication number
CN109461205A
Authority
CN
China
Prior art keywords
fireworks
dimensional
video
frame
particle
Prior art date
2018-09-29
Legal status
Pending
Application number
CN201811146934.0A
Other languages
Chinese (zh)
Inventor
王莉莉 (Wang Lili)
王志宏 (Wang Zhihong)
刘鑫达 (Liu Xinda)
胡淋毅 (Hu Linyi)
Current Assignee
Beihang University
Original Assignee
Beihang University
Priority date
2018-09-29
Filing date
2018-09-29
Publication date
2019-03-12
Application filed by Beihang University
Priority to CN201811146934.0A
Publication of CN109461205A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • G06T15/50 - Lighting effects

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention relates to a method for reconstructing three-dimensional fireworks from a fireworks video, comprising the following steps: building a three-dimensional fireworks rendering model that takes a set of parameters as input and generates the corresponding three-dimensional firework effect; building a randomized parameter generator and using the rendering model to generate a batch of videos as training and validation sets; training a neural network on these sets to regress the parameters of a video; and, for a given video, using the trained neural network to estimate the relevant parameters and feeding them to the fireworks rendering model to obtain the reconstruction. By exploiting the fitting capability of current deep learning, the present invention solves the problem of reconstructing a three-dimensional firework effect from a two-dimensional video.

Description

A method for reconstructing three-dimensional fireworks from a fireworks video
Technical field
The invention belongs to the field of three-dimensional reconstruction, and in particular relates to a method for reconstructing three-dimensional fireworks from a fireworks video.
Background art
As an important carrier of three-dimensional media, three-dimensional mesh models have received increasingly broad attention and application, playing an important role in industrial manufacturing, digital entertainment, digital cultural heritage, smart cities, and other areas. In recent years, with the improvement of computer processing power and the development of 3D scanning and optical reconstruction technology, acquiring three-dimensional mesh models has become easier and faster. However, the rough models obtained initially usually contain various defects and are difficult to use directly in computation; they typically require denoising, repair, simplification, resampling, and other processing to meet application requirements.
In terms of light source setup, light sources in existing complex scenes are essentially all placed by hand. With the progress of acquisition equipment and geometric modeling technology, the three-dimensional scenes to be processed keep growing in scale and complexity, and setting light sources becomes increasingly time-consuming. Taking research on many-light visibility determination algorithms based on constant-time light-scene intersection computation as an example, the amusement park scene we built contains 1,200,000 triangles, several hundred dynamic objects, and 7,600 light sources, and the light sources themselves contain 500,000 triangles. Using a tool such as 3ds Max, it took artists more than two weeks to manually set the position, color, and other parameters of the 7,600 light sources, and after even a slight adjustment many light source positions and motion trajectories had to be set again. Therefore, fast and efficient algorithms for constructing realistic light sources are urgently needed.
The technique currently used to simulate fireworks is the particle system. Particle system research has developed for more than twenty years and has been applied in many fields. Reeves W.T. first proposed the concept of the particle system in 1983 and used it to simulate flames, explosions, and other effects, successfully producing a series of special-effects shots in the film "Star Trek II: The Wrath of Khan". In 1992, Loke et al. proposed a particle system rendering algorithm for festival fireworks, storing particle information in a linked-list data structure and designing a particle system rendering engine that derives the trajectories of fireworks particles and realizes the special effects of a variety of fireworks. Through more than twenty years of research and development, the particle system has produced many practical algorithms and theories and is used ever more widely. Tonnesen summarized earlier work and divided particle systems into three types: independent particle systems, particle systems with fixed connections, and dynamically coupled particle systems. In an independent particle system, the forces acting on the particles are mutually independent and the particles do not influence one another. Fireworks can be simulated with an independent particle system.
In terms of fireworks rendering, current fireworks models all estimate the explosion equations of fireworks from prior knowledge and then set some parameters by hand, adjusting them repeatedly so that the model looks like natural fireworks. Part of the research aims to improve the novelty of firework patterns, showing fireworks with shapes that do not occur in nature, such as digit-shaped or heart-shaped fireworks. Very few models reconstruct fireworks similar to a video by reading the video and analyzing the transformation and appearance of the fireworks in it. Fireworks are mostly rendered with particle systems, but current methods all generate one (or several) basic particle cells and then let these particle cells move according to the physical laws of a firework explosion. The benefit of doing so is simplicity and extremely fast rendering; the drawback is that the rendered result remains some distance from real fireworks: the fireworks particles cannot deform at all, which gives people a sense of artificiality.
Convolutional neural networks are widely used in image processing. CNNs were first applied to handwritten digit recognition by Yann LeCun and achieved great success. The difference between a convolutional neural network and a general neural network is that a convolutional neural network contains a feature extractor composed of convolutional layers and sub-sampling layers. A convolutional layer of a CNN usually contains several feature planes, each composed of neurons arranged in a rectangular grid; the neurons of the same feature plane share weights, and the shared weights are the convolution kernel. The convolution kernel is generally initialized as a matrix of small random values, and reasonable weights are learned during network training. The direct benefit of weight sharing (the convolution kernel) is that it reduces the connections between network layers while also reducing the risk of overfitting. Sub-sampling, also called pooling, usually comes in two forms, mean pooling and max pooling, and can be regarded as a special kind of convolution. Convolution and sub-sampling greatly simplify the model and reduce the number of its parameters.
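For concreteness, the following is a minimal Keras sketch of such a convolution-plus-pooling feature extractor; the layer sizes are illustrative assumptions and not part of the invention.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# A small convolutional feature extractor: the 3x3 kernels are shared across
# the whole 28x28 image (weight sharing), and max pooling performs the
# max-value sub-sampling, greatly reducing the number of parameters compared
# with a fully connected layer over all pixels.
model = models.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(8, kernel_size=3, activation="relu"),  # 8 feature planes
    layers.MaxPooling2D(pool_size=2),                    # sub-sampling (pooling)
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),              # e.g. 10 digit classes
])
model.summary()  # shows the small parameter count of the shared kernels
```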
Recurrent neural networks are widely used in the processing of information with temporal characteristics, such as speech and video. An RNN is a neural network that models sequence data: the current output of a sequence is also related to the preceding output. Concretely, the network remembers the preceding information and applies it to the computation of the current output; that is, the nodes within a hidden layer are no longer unconnected but connected, and the input of the hidden layer includes not only the output of the input layer but also the output of the hidden layer at the previous time step. LSTM is a special type of RNN, a kind of temporal recurrent neural network. Its main difference from a plain RNN is that it adds a "processor" that judges whether information is useful; the structure performing this role is called a cell. A cell contains three gates, called the input gate, the forget gate, and the output gate. When a piece of information enters the LSTM network, it is judged according to rules: only information that passes the algorithm's check is retained, and the rest is discarded through the forget gate. LSTM can therefore learn long-term dependencies and is suited to processing and predicting events with relatively long intervals and delays in time series.
Summary of the invention
The technical problem to be solved by the present invention is to overcome the insufficient abstraction ability of conventional methods by using the powerful nonlinear fitting ability of neural networks: a deep learning model is trained to extract the position, color, and motion features of fireworks from a video, so that a fireworks model similar to the original video can be rendered from a given fireworks video. The three-dimensional firework effect rendered by this method has very high similarity to the original video.
The technical solution of the present invention to the above technical problem is a method for reconstructing three-dimensional fireworks from a fireworks video, comprising the following steps:
(1) Construct a rendering model of three-dimensional fireworks based on OpenGL and a particle system. The rendering model receives a set of parameters describing the velocity and acceleration of the three-dimensional fireworks in each direction, together with the variation over time of the color and size of the three-dimensional fireworks, and renders the three-dimensional fireworks. After the rendering model simulates the fireworks explosion, a number of fireworks particles scatter from the explosion center. In a standard right-handed three-dimensional coordinate system, with the y-axis pointing horizontally to the right and the z-axis pointing straight up, the rendering model proceeds as follows: given the number N of fireworks particles in the input cross-section (the yz-plane), compute the angle theta between each cross-section particle and the vertical direction (the z-axis); then obtain the number N2 of particles in the horizontal plane (the xy-plane) at that angle from N2 = N*sin(theta); from N2 compute the angle gamma between each fireworks particle and the y-axis; using the two angles theta and gamma, convert the direction of each fireworks particle to spherical coordinates, and after normalization the final direction dir = (sin(theta)*sin(gamma), cos(theta), sin(theta)*cos(gamma)) is the direction of the fireworks particle. Afterwards, for each frame, the size and color of the ashes generated in frame i are computed, and the color and size of every point along the fireworks particle trajectory are computed by interpolation. The parameters of the rendering model of the three-dimensional fireworks are the centrifugal initial velocity and acceleration of the fireworks particles, the initial velocity and acceleration under external forces of the particle system composed of all particles of the three-dimensional fireworks, and the decay ratios over time of the color and size of the fireworks particles. To obtain a more realistic effect, the rendering model adds several random perturbation factors: a random coefficient is added to the initial velocity and acceleration of each fireworks particle, and a random coefficient is also added to the color and size of each fireworks particle, so that the position, color, and size of the fireworks particles in different directions differ slightly, making the rendered result more realistic.
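As an illustration of the direction computation described in step (1), the following Python sketch (a minimal illustration under the stated formulas; the function name, the use of NumPy, and the absolute value guarding the negative half of the sine are assumptions of the sketch) enumerates the unit directions of the fireworks particles from the cross-section particle count N.

```python
import numpy as np

def firework_particle_directions(N):
    """Enumerate unit direction vectors for the fireworks particles following
    step (1): theta is the angle from the vertical (z) axis for each
    cross-section particle, gamma the in-plane angle from the y axis."""
    directions = []
    for i in range(N):
        theta = 2.0 * np.pi / N * i                  # cross-section angle theta_i
        # number of particles in the horizontal ring at this theta
        # (abs() guards the negative half of the sine; an assumption of the sketch)
        N2 = max(int(round(N * abs(np.sin(theta)))), 1)
        for j in range(N2):
            gamma = 2.0 * np.pi / N2 * j             # in-plane angle gamma_j
            d = np.array([np.sin(theta) * np.sin(gamma),
                          np.cos(theta),
                          np.sin(theta) * np.cos(gamma)])
            directions.append(d / np.linalg.norm(d)) # already unit length; kept for safety
    return np.stack(directions)

# Example: 32 cross-section particles expand to the full set of 3D directions.
dirs = firework_particle_directions(32)
```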
(2) Set up a parameter generator that randomly initializes the velocity and acceleration in each direction and the parameters describing how the color and size of the three-dimensional fireworks change over time, and generate several three-dimensional firework effects with the rendering model of step (1). These are projected onto a camera from random angles to produce videos, which serve as the training and validation sets; the parameters that produced each three-dimensional firework effect serve as the labels of the training data in the training set. Then construct a neural network. First build a convolutional neural network (CNN) model; the model used here is an InceptionV3 model with the final softmax layer removed and a fully connected layer added. This model fits a circle to the outer ring of each frame, i.e., obtains the center and radius of the circle corresponding to each frame as an auxiliary label. The procedure for fitting the circle of each video frame is as follows: the video is analyzed with a whole-part method; since the centrifugal speeds of the fireworks particles are nearly identical, the outermost fireworks particles in each frame of the video lie approximately on a circle, so each frame of fireworks can be approximately fitted by a circle. The center of the circle represents the change in position of the particle system composed of all particles of the three-dimensional fireworks, and the radius represents the position of the outer-ring fireworks particles of each frame relative to the center of the whole system, i.e., relative to the fireworks explosion center after the whole system has moved. In some videos the fireworks disappear completely after a certain frame; for these videos the auxiliary labels of frames containing no image are all set to 0 to improve training accuracy. Next, build an LSTM layer as a recurrent neural network model to analyze the relationship between frames, taking the fully connected layer before the output layer of the above convolutional neural network model as its input. Using the auxiliary labels obtained from the above CNN model as the training data of the LSTM effectively reduces the amount of training data and improves training efficiency. For each video, the original frames and the difference frames are fed into the overall neural network model as two different branches; the two branches are then merged by a fully connected layer, and the final fully connected layer regresses the parameters required in step (1). Each parameter is treated as a task, forming a multi-task learning neural network model. The loss function used by the whole neural network to regress the required parameters is defined as a weighted mean squared error. By training on the data set, a network is obtained. Given a video, the network can regress the velocity and acceleration in each direction, as well as the parameters describing how the color and size change over time, of the fireworks described in step (1).
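A minimal sketch of the per-frame auxiliary-label regressor described above, assuming a Keras/TensorFlow implementation; the input resolution, pooling choice, and training settings are assumptions, while the backbone follows the description (InceptionV3 with the softmax removed and a fully connected layer added):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_frame_circle_regressor(input_shape=(299, 299, 3)):
    """InceptionV3 backbone with the classification softmax removed, plus one
    fully connected layer regressing the auxiliary label of a frame:
    circle center x, center y, and radius."""
    backbone = tf.keras.applications.InceptionV3(
        include_top=False,      # drop the final softmax classifier
        weights=None,           # trained from scratch on the synthetic videos
        input_shape=input_shape,
        pooling="avg")          # global average pooling -> feature vector
    circle = layers.Dense(3, name="circle_center_and_radius")(backbone.output)
    return models.Model(backbone.input, circle)

cnn = build_frame_circle_regressor()
cnn.compile(optimizer="adam", loss="mse")  # frames with no fireworks use label (0, 0, 0)
```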
(3) Take the given video to be verified as input and perform nonlinear fitting analysis with the neural network of step (2) to obtain a set of parameters describing the velocity and acceleration of the fireworks in each direction and how the color and size of the fireworks change over time; then feed the obtained parameters into the rendering model of step (1) to obtain the reconstructed three-dimensional fireworks.
The principle of the present invention:
(1) The video is analyzed with a whole-part method, and each frame of fireworks is approximately fitted with a circle.
The fireworks particle system is analyzed with a whole-part method: taken as a whole, the entire system has an initial velocity and a downward gravitational acceleration, and is also influenced by other external forces such as wind; viewed in isolation, each particle scatters from the explosion center with a certain centrifugal speed and a unique angle. Therefore, at every moment the outermost fireworks particles lie approximately on a spherical surface, and in the projected video the outermost particles lie approximately on a circle. The change of the circle center can thus be used to fit the change in position of the whole fireworks particle system under external forces, and the change of the circle radius to fit the centrifugal motion of the particles.
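For intuition only, the circle abstraction can be illustrated with a plain algebraic least-squares (Kasa) fit to the coordinates of the outer-ring pixels; this is a hypothetical geometric illustration, not the CNN-based fitting the method actually uses:

```python
import numpy as np

def fit_circle_least_squares(points):
    """Algebraic (Kasa) least-squares circle fit: solve
    x^2 + y^2 + a*x + b*y + c = 0 for (a, b, c), then recover
    the center (cx, cy) and radius r."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x ** 2 + y ** 2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = -a / 2.0, -b / 2.0
    r = np.sqrt(cx ** 2 + cy ** 2 - c)
    return cx, cy, r

# Toy check: noisy points on a circle of radius 5 centered at (10, -3).
t = np.linspace(0, 2 * np.pi, 100)
pts = np.column_stack([10 + 5 * np.cos(t), -3 + 5 * np.sin(t)])
pts += np.random.normal(scale=0.05, size=pts.shape)
print(fit_circle_least_squares(pts))
```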
(2) A neural network is used to regress the required parameters.
It is difficult to reconstruct a three-dimensional effect from a single-view two-dimensional fireworks video with traditional methods, and at present no literature offers a relevant solution. A neural network, however, has powerful nonlinear fitting ability: it can learn rules from a large data set and the corresponding labels, thereby establishing the relationship between a video and its parameters. Convolutional neural networks can extract image features accurately and efficiently, so a convolutional neural network is chosen to analyze each video frame and fit the center and radius of the circle corresponding to each frame. Using the fitted center and radius as intermediate results significantly reduces the data scale and thus speeds up training. An LSTM recurrent layer is then used to analyze the relationship between frames and regress the parameters corresponding to the video.
Compared with previous methods, the advantage of the present invention is that it can quickly construct a three-dimensional fireworks model from a given fireworks video while ensuring quality. The present invention makes two main contributions: first, using the center and radius of a circle to represent the positions of the particles in each frame greatly reduces the amount of computation and speeds up calculation; second, a neural network is used to learn rules from the data set and perform nonlinear fitting from video to parameters.
Detailed description of the invention
Fig. 1 is the overall flowchart of the method of the present invention;
Fig. 2 shows the algorithm for computing particle directions in the present invention; (a) diagram of the direction-computation algorithm; (b) schematic of the directions in spherical coordinates;
Fig. 3 is a schematic of how fireworks particle trails are obtained in the present invention; (a) schematic of the ashes left by the particle at each moment, (b) effect after interpolation;
Fig. 4 is a schematic of frame differencing of the fireworks video in the present invention; (a) frame i, (b) frame i+1, (c) frame-difference result;
Fig. 5 is a schematic of circle fitting in the present invention; (a) circle fitted to the original frame, (b) circle fitted to the difference frame obtained by frame differencing;
Fig. 6 is a schematic of the LSTM network structure;
Fig. 7 shows the firework effects after reconstruction by the present invention.
Specific embodiment
The present invention is further illustrated below with reference to the accompanying drawings and a specific embodiment.
For the three-dimensional fireworks reconstruction algorithm, the input of the invention is a fireworks video with the background removed, and the entire reconstruction algorithm, shown in Figure 1, includes the following steps:
Step (1): construct a fireworks model using OpenGL; after the model approximately simulates the fireworks explosion, a number of particles scatter from the explosion center.
The fireworks particle system is analyzed with a whole-part method. Taken as a whole, the entire system has an approximately upward initial velocity and a downward gravitational acceleration, and is also influenced by other external forces such as wind, giving it an initial velocity and acceleration in the horizontal direction. Viewed in isolation, each particle scatters from the explosion center with a certain centrifugal speed and a unique angle, and all particles have approximately the same centrifugal initial velocity and acceleration. During its motion, a particle keeps burning, so its size and color decay at certain ratios. Because a particle burns away during its motion, it leaves ashes behind, which visually appear as a trail behind the firework.
In the model, given the number of particles in a fireworks cross-section, as shown in Fig. 2 (a) and (b), the angle theta between each cross-section fireworks particle and the vertical direction (the z-axis) is first computed from the number N of fireworks particles in the input cross-section (the yz-plane): the angle of the i-th cross-section fireworks particle is theta_i = 2π/N*i. Then the number of fireworks particles in the horizontal plane (the xy-plane) at that angle is obtained as N2 = max(N*sin(theta), 1), and from N2 the angle gamma between each fireworks particle and the y-axis is computed: the angle of the i-th particle in that plane is gamma_i = 2π/N2*i. Using the two angles theta and gamma, the direction of the particle is converted to spherical coordinates, and after normalization the final direction dir = (sin(theta)*sin(gamma), cos(theta), sin(theta)*cos(gamma)) is the direction of the particle. The left part of Fig. 2 (b) shows the directions of the cross-section fireworks particles, and the right part shows the directions of the fireworks particles after conversion to the three-dimensional coordinate system. The algorithm of Fig. 2 yields the number of particles of the whole fireworks particle system and the direction of each particle. Assuming the current frame is i, for each frame after frame i, the size and color of the ashes newly generated in frame i are computed, and the color and size of every point on the particle trajectory are computed by interpolation.
The left image of Fig. 3 (a) shows the ashes generated by a fireworks particle in each frame before interpolation, and the right image (b) shows the trail shape after interpolation. It can be seen that, if the ashes are dense enough, after filling the edges the trail can be used to simulate various shapes. The parameters of the model in the present invention are the centrifugal initial velocity and acceleration of the fireworks particles, the initial velocity and acceleration under external forces of the particle system composed of all particles of the three-dimensional fireworks, and the decay ratios over time of the color and size of the fireworks particles.
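A minimal sketch of the trail interpolation of Fig. 3, assuming linear interpolation between the discrete ash samples; the sampling density and the linear scheme are assumptions of the sketch:

```python
import numpy as np

def interpolate_trail(ash_positions, ash_colors, ash_sizes, samples_per_segment=8):
    """Densify a particle trail by linearly interpolating the position, color
    and size between the discrete ash samples left in successive frames
    (turning Fig. 3 (a) into Fig. 3 (b))."""
    positions, colors, sizes = [], [], []
    for k in range(len(ash_positions) - 1):
        for t in np.linspace(0.0, 1.0, samples_per_segment, endpoint=False):
            positions.append((1 - t) * ash_positions[k] + t * ash_positions[k + 1])
            colors.append((1 - t) * ash_colors[k] + t * ash_colors[k + 1])
            sizes.append((1 - t) * ash_sizes[k] + t * ash_sizes[k + 1])
    return np.array(positions), np.array(colors), np.array(sizes)

# Example with three ash samples of a single particle (position, RGB color, size).
pos = np.array([[0.0, 0.0, 0.0], [0.1, 0.9, 0.1], [0.2, 1.7, 0.2]])
col = np.array([[1.0, 0.8, 0.2], [0.9, 0.6, 0.1], [0.7, 0.4, 0.1]])
siz = np.array([1.0, 0.8, 0.6])
dense_pos, dense_col, dense_siz = interpolate_trail(pos, col, siz)
```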
Step (2): the frame-difference method is used to obtain the image newly generated in each frame, i.e., the position to which the fireworks particles of the exploded three-dimensional fireworks have moved in this frame, as shown in Fig. 4, where (a) is frame i, (b) is frame i+1, and (c) is the result of differencing frame i+1 and frame i. The entire fireworks particle system is treated as a whole that moves uniformly under external forces. Therefore, at any moment, all fireworks particles lie approximately on the same spherical surface; in the video this appears as the outer-ring fireworks particles lying approximately on the same circle. The label of the circle can be computed from the initial parameters. The images obtained by frame differencing and the original images are passed through convolutional neural network models, building two neural networks with the same structure but different weights, which respectively fit the circle characterizing the motion of the system and the circle characterizing the centrifugal motion of the particles. The fitted results are shown in Fig. 5: the circle in Fig. 5 (a) corresponds to the situation of Fig. 4 (b), and the circle in Fig. 5 (b) corresponds to the situation of Fig. 4 (c). The convolutional neural network model used in this step is an InceptionV3 network with the softmax layer at the end removed and a fully connected layer added. In some videos the fireworks disappear completely after a certain frame; for these videos the auxiliary labels of frames containing no image are all set to 0 to improve training accuracy.
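The frame-difference step can be sketched as follows, assuming grayscale frames stored as NumPy arrays; the threshold value is an assumption:

```python
import numpy as np

def difference_frame(frame_i, frame_next, threshold=10):
    """Return the image newly generated between frame i and frame i+1
    (Fig. 4 (c)): keep the pixels of frame i+1 whose absolute difference
    from frame i exceeds the threshold, zero out the rest."""
    diff = np.abs(frame_next.astype(np.int16) - frame_i.astype(np.int16))
    return np.where(diff > threshold, frame_next, 0).astype(frame_i.dtype)
```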
Step (3): for each frame of all videos in step (2), the original image and the image after frame differencing are each passed through the convolutional neural network model to compute the corresponding circle, and the center and radius of the circle participate, as intermediate results, in the training of the next step. Using the intermediate results obtained above as the training data of the next step effectively reduces the amount of training data and improves training efficiency.
Step (4): the intermediate results obtained in step (3) are fed to an LSTM layer to analyze the relationship between frames, all parameters to be estimated are used as labels, and the neural network performs multi-task regression. The loss function is defined as a weighted mean squared error, where the weights are adjusted according to the error of each parameter after a certain number of training rounds: the larger the error, the higher the weight. In addition, as shown in Fig. 6, each original video frame and the difference frame obtained by frame differencing pass through models of the same structure. The ConvNet at the bottom of each model is the InceptionV3 model that fits the characteristic circle of each frame; it extracts the characteristic circle of each frame to reduce the scale of the input. The frozen layers indicate that this part has been trained in advance, and its parameters remain unchanged during the subsequent training of the upper LSTM and fully connected layers. The LSTM layers then analyze the relationship between frames. Each sub-model makes a prediction with a fully connected layer (Dense) as an auxiliary training task. The two sub-models are connected by a fully connected fusion layer (Fusion Layer), which carries the main training task. The labels of the sub-tasks and the main task are the parameters required by the fireworks rendering model in step (1). By adding the auxiliary training tasks and merging the two tasks with a fully connected layer, each recurrent branch fits its own regression parameters, while the fully connected layer of the merged task corrects the errors between the regression results of the difference images and of the original images.
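A minimal Keras sketch of the two-branch structure of Fig. 6, under the assumption that each branch receives the per-frame circle features (center and radius) produced by the frozen backbone; the sequence length, layer sizes, number of regressed parameters, and task weights are assumptions, and the per-parameter error weighting described above is approximated here by fixed per-task loss weights:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_FRAMES = 60   # assumed number of frames per video
FEAT_DIM = 3      # per-frame circle feature: center x, center y, radius
NUM_PARAMS = 12   # assumed number of fireworks parameters to regress

def branch(name):
    """One branch over the per-frame circle features of either the original
    frames or the difference frames, with its own auxiliary prediction."""
    inp = layers.Input(shape=(NUM_FRAMES, FEAT_DIM), name=f"{name}_circles")
    h = layers.LSTM(64, name=f"{name}_lstm")(inp)
    aux = layers.Dense(NUM_PARAMS, name=f"{name}_aux")(h)  # auxiliary task
    return inp, h, aux

orig_in, orig_h, orig_aux = branch("original")
diff_in, diff_h, diff_aux = branch("difference")

# Fully connected fusion layer merging the two branches for the main task.
fused = layers.Dense(64, activation="relu", name="fusion")(
    layers.Concatenate()([orig_h, diff_h]))
main_out = layers.Dense(NUM_PARAMS, name="main_params")(fused)

model = models.Model([orig_in, diff_in], [main_out, orig_aux, diff_aux])
model.compile(optimizer="adam",
              loss="mse",                    # weighted MSE approximated by task weights
              loss_weights=[1.0, 0.3, 0.3])  # assumed weights for main and auxiliary tasks
```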
Step (5): the given video to be verified is taken as input, intermediate results are obtained through step (3), and a set of parameters is obtained after regression by the neural network model of step (4). This set of parameters is fed into the three-dimensional rendering model described in step (1) to obtain the reconstructed three-dimensional firework effect.
The software platforms used for the implementation of the invention are: (1) the three-dimensional rendering model uses Microsoft Visual Studio 2013 and OpenGL, with CUDA used to accelerate the parallel algorithm; (2) the neural network models use JetBrains PyCharm 2018.1.2 and TensorFlow. The hardware platform is a 4.0 GHz Intel(R) Core(TM) i7-7700 CPU, 8 GB of memory, and an NVIDIA GeForce GTX 1060 GPU. The results of the method are shown in Fig. 7, which shows two different viewing angles of the reconstruction results for three different videos. The training set of this method contains 10,000 videos and the cross-validation set contains 2,000 videos. Training the convolutional neural network for fitting the intermediate circles took about one week; generating the centers and radii of the intermediate circles for all training videos with the trained convolutional neural network took about 22 hours; training the recurrent neural network on the intermediate results to fit the parameters corresponding to the videos took about 14 hours; and passing the generated parameters through the fireworks rendering model to render a 4-second three-dimensional firework effect took 4 seconds.
The above embodiment is provided solely for the purpose of describing the present invention and is not intended to limit the scope of the invention. The scope of the invention is defined by the following claims. Various equivalent replacements and modifications made without departing from the spirit and principles of the present invention shall all fall within the scope of the present invention.

Claims (5)

1. A method for reconstructing three-dimensional fireworks from a fireworks video, characterized by comprising the following steps:
(1) constructing a rendering model of three-dimensional fireworks based on OpenGL and a particle system, the rendering model receiving a set of parameters representing the velocity and acceleration of the three-dimensional fireworks in each direction and the variation over time of the color and size of the three-dimensional fireworks, and rendering the three-dimensional fireworks;
(2) setting up a parameter generator that randomly initializes the velocity and acceleration in each direction and the parameters describing how the color and size of the three-dimensional fireworks change over time, generating several three-dimensional firework effects with the rendering model of step (1), and projecting them onto a camera from random angles to generate videos, the videos serving as the training set and validation set and the parameters that produced the corresponding three-dimensional firework effects serving as the labels of the training data in the training set; constructing a neural network, wherein the loss function used by the neural network to regress the required parameters is defined as a weighted mean squared error; obtaining a trained neural network by training on the data set; given a video, the trained neural network regresses the velocity and acceleration in each direction and the parameters describing how the color and size change over time of the three-dimensional fireworks described in step (1);
The neural network is constructed as follows: first constructing a convolutional neural network model CNN used to fit a circle to the outer ring of each frame, i.e., to obtain the center and radius of the circle corresponding to each frame as an auxiliary label; in some videos the fireworks disappear completely after a certain frame, and for these videos the auxiliary labels of the frames containing no image are all set to 0 to improve training accuracy; the CNN is an InceptionV3 model with the final softmax layer removed and a fully connected layer added, serving as the convolutional neural network model; then constructing an LSTM layer as a recurrent neural network model for analyzing the relationship between frames, the fully connected layer before the output layer of the convolutional neural network model being taken as the input to the LSTM layer for analyzing the relationship between frames, and the auxiliary labels obtained with the above CNN model being used as the training data of the LSTM, which effectively reduces the amount of training data and improves training efficiency;
For each video, the original frames and the difference frames are fed into the overall neural network model as two different branches; the two branches are then merged by a fully connected layer, and the final fully connected layer separately regresses the parameters described in step (1), each parameter being treated as a task, thereby constructing a multi-task learning (Multi-task Learning) neural network model;
(3) taking the given video to be verified as input, performing nonlinear fitting with the neural network of step (2) to obtain a set of parameters describing the velocity and acceleration of the fireworks in each direction and how the color and size of the fireworks change over time, and then feeding the obtained parameters into the rendering model described in step (1) to obtain the reconstructed three-dimensional fireworks.
2. The method for reconstructing three-dimensional fireworks from a fireworks video according to claim 1, characterized in that step (1) is implemented as follows:
A rendering model of three-dimensional fireworks is constructed based on a particle system; after the rendering model simulates the fireworks explosion, a number of particles scatter from the explosion center; in the rendering model, given the number of fireworks particles in a fireworks cross-section, the number of fireworks particles of the entire particle system and the direction of each fireworks particle are computed; afterwards, for each frame, the size and color of the ashes generated in frame i are computed, and the color and size of every point on the particle trajectory are computed by interpolation; the rendering model of the three-dimensional fireworks takes as its parameters the centrifugal initial velocity and acceleration of the fireworks particles, the initial velocity and acceleration under external forces of the particle system composed of all particles of the three-dimensional fireworks, and the decay ratios over time of the color and size of the fireworks particles.
3. The method for reconstructing three-dimensional fireworks from a fireworks video according to claim 2, characterized in that the method of computing the number of fireworks particles of the particle system composed of all particles of the three-dimensional fireworks and the direction of each fireworks particle is as follows:
In a standard right-handed three-dimensional coordinate system with the y-axis pointing horizontally to the right and the z-axis pointing straight up, the angle theta between each cross-section fireworks particle and the vertical direction, i.e., the z-axis, is first computed from the number N of fireworks particles in the input cross-section, i.e., the yz-plane; then the number N2 of fireworks particles in the horizontal plane, i.e., the xy-plane, at that angle is obtained from N2 = N*sin(theta); from N2 the angle gamma between each fireworks particle and the y-axis is computed; using the two angles theta and gamma, the direction of the fireworks particle is converted to spherical coordinates, and the final direction obtained after normalization, dir = (sin(theta)*sin(gamma), cos(theta), sin(theta)*cos(gamma)), is the direction of the fireworks particle.
4. The method for reconstructing three-dimensional fireworks from a fireworks video according to claim 1 or 2, characterized in that, in step (1), in order to obtain a more realistic effect, the rendering model adds several random perturbation factors, the random perturbation factors comprising adding a random coefficient to the initial velocity and acceleration of each fireworks particle and also adding a random coefficient to the color and size of each fireworks particle, so that the position, color, and size of the fireworks particles in different directions differ slightly, making the rendered result more realistic.
5. The method for reconstructing three-dimensional fireworks from a fireworks video according to claim 1, characterized in that, in step (2), the process by which the convolutional neural network model fits a circle to each video frame is as follows: the video is analyzed with a whole-part method; since the centrifugal speeds of the fireworks particles are consistent, the outermost fireworks particles of each frame of the video lie approximately on a circle, and a circle is used to approximately fit each frame of fireworks; the center of the circle represents the change in position of the particle system composed of all particles of the three-dimensional fireworks, and the radius represents the position of the outer-ring fireworks particles of each frame relative to the center of the whole system, i.e., relative to the fireworks explosion center after the particle system composed of all particles of the three-dimensional fireworks has moved.
CN201811146934.0A 2018-09-29 2018-09-29 A method for reconstructing three-dimensional fireworks from a fireworks video Pending CN109461205A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811146934.0A CN109461205A (en) 2018-09-29 2018-09-29 A method for reconstructing three-dimensional fireworks from a fireworks video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811146934.0A CN109461205A (en) 2018-09-29 2018-09-29 A method for reconstructing three-dimensional fireworks from a fireworks video

Publications (1)

Publication Number Publication Date
CN109461205A true CN109461205A (en) 2019-03-12

Family

ID=65607197

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811146934.0A Pending CN109461205A (en) 2018-09-29 2018-09-29 A method for reconstructing three-dimensional fireworks from a fireworks video

Country Status (1)

Country Link
CN (1) CN109461205A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2667538A1 (en) * 2006-10-27 2008-05-02 Thomson Licensing System and method for recovering three-dimensional particle systems from two-dimensional images
CN107211100A (en) * 2014-12-29 2017-09-26 诺基亚技术有限公司 Method, device and computer program product for the motion deblurring of image
CN107392097A (en) * 2017-06-15 2017-11-24 中山大学 A kind of 3 D human body intra-articular irrigation method of monocular color video

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHIHONG WANG et al.: "3D Firework Reconstruction from a Given Videos", INTERNATIONAL JOURNAL OF INFORMATION AND COMMUNICATION SCIENCES *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109993836B (en) * 2019-03-18 2020-11-17 浙江大学 Method for realizing controllable shape of virtual reality three-dimensional firework
CN109993836A (en) * 2019-03-18 2019-07-09 浙江大学 A method of realizing virtual reality three-dimensional fireworks controlled shape
CN110163982A (en) * 2019-04-11 2019-08-23 浙江大学 A kind of virtual fireworks analogy method of immersion based on Sketch Searching and controlled shape
GB2585078B (en) * 2019-06-28 2023-08-09 Sony Interactive Entertainment Inc Content generation system and method
GB2585078A (en) * 2019-06-28 2020-12-30 Sony Interactive Entertainment Inc Content generation system and method
US11328488B2 (en) 2019-06-28 2022-05-10 Sony Interactive Entertainment Inc. Content generation system and method
CN110619677A (en) * 2019-08-12 2019-12-27 浙江大学 Particle reconstruction method and device in three-dimensional flow field, electronic device and storage medium
CN110619677B (en) * 2019-08-12 2023-10-31 浙江大学 Method and device for reconstructing particles in three-dimensional flow field, electronic equipment and storage medium
CN112700517A (en) * 2020-12-28 2021-04-23 北京字跳网络技术有限公司 Method for generating visual effect of fireworks, electronic equipment and storage medium
CN112529997B (en) * 2020-12-28 2022-08-09 北京字跳网络技术有限公司 Firework visual effect generation method, video generation method and electronic equipment
CN112700517B (en) * 2020-12-28 2022-10-25 北京字跳网络技术有限公司 Method for generating visual effect of fireworks, electronic equipment and storage medium
WO2022142869A1 (en) * 2020-12-28 2022-07-07 北京字跳网络技术有限公司 Method for generating firework visual effect, video generation method, and electronic device
CN112529997A (en) * 2020-12-28 2021-03-19 北京字跳网络技术有限公司 Firework visual effect generation method, video generation method and electronic equipment
EP4250125A4 (en) * 2020-12-28 2024-06-12 Beijing Zitiao Network Technology Co., Ltd. Method for generating firework visual effect, electronic device, and storage medium
WO2023134267A1 (en) * 2022-01-17 2023-07-20 腾讯科技(深圳)有限公司 Data processing method, apparatus and device, computer readable storage medium and computer program product

Similar Documents

Publication Publication Date Title
CN109461205A (en) A method for reconstructing three-dimensional fireworks from a fireworks video
Tian et al. Training and testing object detectors with virtual images
CN109255831A (en) The method that single-view face three-dimensional reconstruction and texture based on multi-task learning generate
KR101964282B1 (en) 2d image data generation system using of 3d model, and thereof method
WO2015149302A1 (en) Method for rebuilding tree model on the basis of point cloud and data driving
CN108984169B (en) Cross-platform multi-element integrated development system
CN110227266A (en) Reality-virtualizing game is constructed using real world Cartographic Virtual Reality System to play environment
Paulin et al. Review and analysis of synthetic dataset generation methods and techniques for application in computer vision
CN105931283B (en) A kind of 3-dimensional digital content intelligence production cloud platform based on motion capture big data
Zhang et al. Data-driven synthetic modeling of trees
Fang et al. Simulating LIDAR point cloud for autonomous driving using real-world scenes and traffic flows
Wang et al. Construction of a virtual reality platform for UAV deep learning
Bird et al. From simulation to reality: CNN transfer learning for scene classification
Liu et al. Real-time neural rasterization for large scenes
Li Research on the application of artificial intelligence in the film industry
Zhou et al. Deeptree: Modeling trees with situated latents
Spick et al. Naive mesh-to-mesh coloured model generation using 3D GANs
Di Paola et al. A gaming approach for cultural heritage knowledge and dissemination
Queiroz et al. Generating facial ground truth with synthetic faces
CN116912727A (en) Video human behavior recognition method based on space-time characteristic enhancement network
Jiang et al. Better technology, but less realism: The perplexing development and application of vtuber technology
Tan et al. Survey on some key technologies of virtual tourism system based on Web3D
Yuan et al. Multiview SVBRDF capture from unified shape and illumination
Rivalcoba et al. Towards urban crowd visualization
Rahman Using generative adversarial networks for content generation in games

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20190312