CN110517335A - Dynamic texture video generation method, device, server and storage medium - Google Patents

Dynamic texture video generation method, device, server and storage medium

Info

Publication number
CN110517335A
CN110517335A (application CN201910838616.9A)
Authority
CN
China
Prior art keywords
texture image
sample
texture
video stream
Euclidean distance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910838616.9A
Other languages
Chinese (zh)
Other versions
CN110517335B (en)
Inventor
Yongyi Tang (唐永毅)
Lin Ma (马林)
Wei Liu (刘威)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Tencent Computer Systems Co Ltd
Original Assignee
Shenzhen Tencent Computer Systems Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Tencent Computer Systems Co Ltd filed Critical Shenzhen Tencent Computer Systems Co Ltd
Priority to CN201910838616.9A priority Critical patent/CN110517335B/en
Publication of CN110517335A publication Critical patent/CN110517335A/en
Application granted granted Critical
Publication of CN110517335B publication Critical patent/CN110517335B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00: 2D [Two Dimensional] image generation
    • G06T 11/60: Editing figures and text; Combining figures or text
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/431: Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N 21/4312: Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations

Abstract

The embodiments of the present application provide a dynamic texture video generation method, device, server, and storage medium. A texture image is received as input, and a texture image sequence is generated from the received texture image and a texture image generation model: the first texture image in the sequence is the received texture image, and for any two adjacent texture images in the sequence, the later frame is the model's output for the earlier frame. A dynamic texture video can then be generated from the texture image sequence. This realizes generation of the dynamic texture video corresponding to the received texture image and improves the generation efficiency of dynamic texture videos while ensuring that the generated video effectively expresses the image texture in both time and space.

Description

Dynamic texture video generation method, device, server and storage medium
This application is a divisional application of the application filed on February 7, 2018 with application number 201810123812.3, entitled "Dynamic texture video generation method, device, server and storage medium".
Technical field
The present invention relates to the technical field of data processing, and in particular to a dynamic texture video generation method, device, server, and storage medium.
Background technique
To make a picture more engaging, the textured parts of the picture (such as flowing water, flames, or a waterfall) can be animated to obtain a video with dynamic texture content corresponding to the image.
The prior art typically animates a texture image using a sample-and-reconstruct approach driven by iterative optimization, yielding a dynamic texture video corresponding to the texture image. Such approaches, however, often suffer from two problems. First, because the animation is realized through sampling and reconstruction, the generated dynamic texture video frequently fails to effectively express the image texture in time and space. Second, because the animation is realized through iterative optimization, generating the dynamic texture video is often inefficient.
In view of this, providing a dynamic texture video generation method, device, server, and storage medium that improves generation efficiency while ensuring that the generated dynamic texture video can effectively express the image texture in time and space is a problem to be solved.
Summary of the invention
In view of this, the embodiments of the present invention provide a dynamic texture video generation method, device, server, and storage medium, to improve the generation efficiency of dynamic texture videos while ensuring that the generated dynamic texture video can effectively express the image texture in time and space.
To achieve the above object, the embodiments of the present invention provide the following technical solutions:
A dynamic texture video generation method, comprising:
receiving an input texture image;
generating a texture image sequence based on the texture image and a texture image generation model, wherein the first texture image in the texture image sequence is the received texture image, and for any two adjacent texture images in the texture image sequence, the later frame is the output of the texture image generation model for the earlier frame;
wherein the texture image generation model is obtained by training a convolutional neural network with the objective that the network's output for a texture image sample approaches the next texture image corresponding to that sample in a video stream sample; the texture image sample is either a texture image in the video stream sample or the convolutional neural network's output for a texture image in the video stream sample.
A dynamic texture video generation device, comprising:
a texture image receiving unit, configured to receive an input texture image;
a texture image sequence generating unit, configured to generate a texture image sequence based on the texture image and a texture image generation model, wherein the first texture image in the texture image sequence is the received texture image, and for any two adjacent texture images in the texture image sequence, the later frame is the output of the texture image generation model for the earlier frame;
wherein the texture image generation model is obtained by training a convolutional neural network with the objective that the network's output for a texture image sample approaches the next texture image corresponding to that sample in a video stream sample; the texture image sample is either a texture image in the video stream sample or the convolutional neural network's output for a texture image in the video stream sample.
A server, comprising at least one processor and at least one memory, the memory storing a program and the processor invoking the program stored in the memory, the program being configured to:
receive an input texture image;
generate a texture image sequence based on the texture image and a texture image generation model, wherein the first texture image in the texture image sequence is the received texture image, and for any two adjacent texture images in the texture image sequence, the later frame is the output of the texture image generation model for the earlier frame;
wherein the texture image generation model is obtained by training a convolutional neural network with the objective that the network's output for a texture image sample approaches the next texture image corresponding to that sample in a video stream sample; the texture image sample is either a texture image in the video stream sample or the convolutional neural network's output for a texture image in the video stream sample.
A storage medium storing a program suitable for execution by a processor, the program being configured to:
receive an input texture image;
generate a texture image sequence based on the texture image and a texture image generation model, wherein the first texture image in the texture image sequence is the received texture image, and for any two adjacent texture images in the texture image sequence, the later frame is the output of the texture image generation model for the earlier frame;
wherein the texture image generation model is obtained by training a convolutional neural network with the objective that the network's output for a texture image sample approaches the next texture image corresponding to that sample in a video stream sample; the texture image sample is either a texture image in the video stream sample or the convolutional neural network's output for a texture image in the video stream sample.
The embodiments of the present application provide a dynamic texture video generation method, device, server, and storage medium: an input texture image is received, and a texture image sequence is generated from the received texture image and a texture image generation model (the first texture image in the sequence is the received texture image, and for any two adjacent texture images in the sequence, the later frame is the model's output for the earlier frame), so that a dynamic texture video can be generated from the sequence. This realizes generation of the dynamic texture video corresponding to the received texture image and improves the generation efficiency of dynamic texture videos while ensuring that the generated video can effectively express the image texture in time and space.
Detailed description of the invention
To illustrate the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a dynamic texture video generation method provided by an embodiment of the present application;
Fig. 2 is a flowchart of another dynamic texture video generation method provided by an embodiment of the present application;
Fig. 3 is a schematic diagram of the generation process of a texture image sequence provided by an embodiment of the present application;
Fig. 4 is an architecture diagram of a generation system for producing a texture image generation model, provided by an embodiment of the present application;
Fig. 5 is a schematic structural diagram of the convolutional neural network in a model generation module provided by an embodiment of the present application;
Fig. 6 is a flowchart of a texture image generation model construction method provided by an embodiment of the present application;
Fig. 7 is a flowchart of a method for determining the Euclidean distance from an output result to the video stream sample, provided by an embodiment of the present application;
Fig. 8 is a flowchart of another texture image generation model construction method provided by an embodiment of the present application;
Fig. 9 is a flowchart of yet another texture image generation model construction method provided by an embodiment of the present application;
Fig. 10 is a structural block diagram of the dynamic texture video generation device provided by an embodiment of the present invention;
Fig. 11 is a detailed structural schematic diagram of a texture image generation model training unit provided by an embodiment of the present application;
Fig. 12 is a hardware block diagram of the server.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
Embodiment:
The dynamic texture video generation method provided by the embodiments of the present application involves computer vision technology and machine learning technology within artificial intelligence. Artificial intelligence technology, computer vision technology, and machine learning technology are first explained below.
Artificial intelligence (AI) is the theory, method, technology, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain optimal results. In other words, artificial intelligence is a comprehensive discipline of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can react in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that machines have the capabilities of perception, reasoning, and decision-making.
Artificial intelligence technology is a comprehensive discipline covering a wide range of fields, including both hardware-level and software-level technologies. Basic artificial intelligence technologies generally include sensors, dedicated AI chips, cloud computing, distributed storage, big data processing, operation/interaction systems, and mechatronics. AI software technologies mainly include several general directions such as computer vision, speech processing, natural language processing, and machine learning/deep learning.
Computer vision (CV) is the science of how to make machines "see"; more specifically, it refers to using cameras and computers in place of human eyes to identify, track, and measure targets, and further performing graphics processing so that the result becomes an image more suitable for human observation or for transmission to instruments for detection. As a scientific discipline, computer vision studies related theories and technologies in an attempt to build artificial intelligence systems that can obtain information from images or multidimensional data. Computer vision technologies usually include image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technology, virtual reality, augmented reality, and simultaneous localization and mapping, as well as common biometric technologies such as face recognition and fingerprint recognition.
Machine learning (ML) is a multi-disciplinary field involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory, and other subjects. It specializes in studying how computers can simulate or realize human learning behavior to acquire new knowledge or skills, and reorganize existing knowledge structures to continuously improve their own performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent; its applications span all fields of artificial intelligence. Machine learning and deep learning usually include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and learning from demonstration.
The computer vision and machine learning technologies involved in the dynamic texture video generation method are explained below with reference to the following specific embodiments.
Fig. 1 is a flowchart of a dynamic texture video generation method provided by an embodiment of the present application. The method can be applied to a server (such as a dedicated dynamic texture video generation server or another service device), which generates a texture image sequence based on the received input texture image so that a dynamic texture video can be generated from the generated sequence.
As shown in Figure 1, this method comprises:
S101: receive an input texture image;
Optionally, the server is provided with a texture image generation model whose construction involves the computer vision and machine learning technologies of artificial intelligence. As a preferred implementation of the embodiments of the present application, the texture image generation model is obtained by training a convolutional neural network with the objective that the network's output for a texture image sample approaches the next texture image corresponding to that sample in a video stream sample.
In the embodiments of the present application, preferably, there is at least one texture image sample, the texture image sample being either a texture image in the video stream sample or the convolutional neural network's output for a texture image in the video stream sample.
That is, the embodiments of the present application may train the convolutional neural network on multiple texture image samples to generate the texture image generation model, and these samples may include both first texture image samples (texture images taken from the video stream sample) and second texture image samples (outputs produced by the convolutional neural network for texture images in the video stream sample during training, i.e., the results obtained by feeding a texture image from the video stream sample into the convolutional neural network as input).
Optionally, when the texture image sample is a texture image in the video stream sample, the frame adjacent to and following that sample in the video stream sample is taken as the next texture image corresponding to the sample. For example, suppose the video stream sample consists of three sequentially ordered texture images: texture image 1, texture image 2, and texture image 3. If the texture image sample is texture image 1, then texture image 2 is the next texture image corresponding to the sample in the video stream sample.
Optionally, when the texture image sample is the convolutional neural network's output for a texture image in the video stream sample, the frame following that texture image in the video stream sample is first determined, and the frame following that determined frame is then taken as the next texture image corresponding to the sample. For example, with the same three-frame video stream sample, if the texture image sample is the network's output for texture image 1, then texture image 3 is the next texture image corresponding to the sample in the video stream sample.
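As a concrete illustration of this indexing rule, here is a minimal Python sketch; the function name and the 0-based indexing are our own assumptions, not part of the patent.

```python
def target_frame(video_frames, t, sample_is_generated):
    """Ground-truth next frame for a training sample derived from frame t (0-based).

    A real frame t is paired with frame t + 1; the network's output for frame t
    already stands one step ahead of it, so it is paired with frame t + 2.
    """
    offset = 2 if sample_is_generated else 1
    return video_frames[t + offset]

# With the three-frame sample [img1, img2, img3] used in the example above:
# target_frame(frames, 0, False) -> img2  (sample is the real frame img1)
# target_frame(frames, 0, True)  -> img3  (sample is the network's output for img1)
```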
In the embodiments of the present application, preferably, the input to the texture image generation model is a single texture image frame, and the model's output for that input is likewise a single texture image frame; from the perspective of sequence generation, the model's output for an input frame is the next frame adjacent to that input.
That is, during generation of the texture image sequence, if the input to the texture image generation model is a frame of the sequence being generated (referred to here as the target frame), the model's output for that input is the next frame adjacent to the target frame in the sequence being generated.
In the embodiments of the present application, preferably, after the user's texture image is received, it is first taken as the first texture image of the sequence to be generated (i.e., the first frame of the sequence). The model's output for the first frame is taken as the next frame adjacent to it (the second frame of the sequence); the model's output for the second frame is taken as the next frame adjacent to it (the third frame); the model's output for the third frame is taken as the next frame adjacent to it; and so on, until the number of texture images in the sequence to be generated meets the requirement, at which point generation stops.
S102: generate a texture image sequence based on the texture image and the texture image generation model, wherein the first texture image in the sequence is the received texture image, and for any two adjacent texture images in the sequence, the later frame is the model's output for the earlier frame.
Optionally, after the server receives the user's input texture image, it takes the received image as the first texture image of the sequence to be generated, generates the texture image sequence from that first image, and then generates a dynamic texture video from the generated sequence (the dynamic texture video corresponding to the received input texture image).
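To make the rollout concrete, here is a minimal Python sketch of the autoregressive procedure just described, assuming the trained model is available as a callable that maps one frame to the next; all names are illustrative, not from the patent.

```python
def generate_texture_sequence(model, input_frame, num_frames):
    """Autoregressively roll out a texture image sequence (S101-S102).

    The received input frame is frame 1 of the sequence; every later frame
    is the model's output for the frame immediately before it.
    """
    sequence = [input_frame]
    while len(sequence) < num_frames:          # preset generation condition
        sequence.append(model(sequence[-1]))   # next frame = model(last frame)
    return sequence                            # frames are then encoded as the video
```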
To aid understanding of the dynamic texture video generation method provided by the embodiments of the present application, a flowchart of another dynamic texture video generation method is now provided; see Fig. 2.
As shown in Fig. 2, this method comprises:
S201: receive an input texture image, and store the received texture image in the texture image sequence to be generated as the first texture image of that sequence;
S202: invoke the texture image generation model, the texture image generation model having been obtained by training a convolutional neural network with the objective that the network's output for a texture image sample approaches the next texture image corresponding to that sample in a video stream sample, the texture image sample being either a texture image in the video stream sample or the network's output for a texture image in the video stream sample;
Optionally, training the convolutional neural network both on texture images taken from the video stream sample and on the network's outputs for those texture images enables the resulting texture image generation model to capture the spatiotemporal information of a texture image when generating the next frame, realizing an effective expression of the spatiotemporal information of the image texture and thereby improving the accuracy of texture image generation.
S203: take the last texture image in the sequence to be generated as the input to the texture image generation model, feed it to the model to obtain an output result, and store the output result in the sequence to be generated as the new last texture image of that sequence;
S204: determine whether the number of texture images in the sequence to be generated satisfies a preset dynamic texture video generation condition; if not, execute step S203; if so, execute step S205;
Optionally, the preset dynamic texture video generation condition includes a preset number of texture images for generating the dynamic texture video. That is, if the number of texture images in the sequence to be generated reaches the preset number of texture images for generating the dynamic texture video, the number of texture images in the sequence satisfies the preset condition; if it does not reach that number, the condition is not yet satisfied.
S205: take the sequence to be generated as the texture image sequence for generating the dynamic texture video.
Optionally, the dynamic texture video corresponding to the received texture image can be generated from the texture image sequence of step S205.
In the embodiments of the present application, preferably, the server may return to the user the texture image sequence, corresponding to the input texture image, for generating the dynamic texture video, or may first generate the corresponding dynamic texture video from the generated sequence and then return the dynamic texture video to the user.
Fig. 3 is a schematic diagram of the generation process of a texture image sequence provided by an embodiment of the present application.
As shown in Fig. 3, in the embodiments of the present application the server receives texture image 1 input by the user and takes it as the first texture image of the sequence to be generated; texture image 1 is fed as input to the texture image generation model, and the output is taken as texture image 2, the next frame adjacent to texture image 1 in the sequence; texture image 2 is fed as input to the model, and the output is taken as texture image 3, the next frame adjacent to texture image 2; and so on, until the number of texture images in the sequence satisfies the preset dynamic texture video generation condition, at which point the model's current output is no longer fed back to the model as input.
In the embodiments of the present application, preferably, taking the model's output for the current last frame of the sequence to be generated as the sequence's new last frame realizes forward propagation in the sequence generation process; compared with the iterative optimization approach of the prior art, this effectively improves the generation efficiency of the texture image sequence, that is, the generation efficiency of the dynamic texture video.
Fig. 4 is an architecture diagram of a generation system for producing a texture image generation model, provided by an embodiment of the present application.
As shown in Fig. 4, the generation system includes: a storage module 41, a model generation module 42, a first construction module 43, a second construction module 44, and a synthesis module 45.
The model generation module contains the convolutional neural network, which takes a frame of texture image as input and produces an output result; the storage module is used both to store the video stream sample, which comprises at least one texture image, and to store the outputs of the convolutional neural network.
Accordingly, the model generation module takes a texture image from the storage module as the input to the convolutional neural network and obtains an output result; the first construction module receives the input and output of the convolutional neural network and constructs a first loss function from the video stream sample; the second construction module receives the input and output of the convolutional neural network and constructs a second loss function from the received input and output; and the synthesis module receives the first loss function constructed by the first construction module and the second loss function constructed by the second construction module, constructs a joint loss function from the first and second loss functions, and optimizes the parameters of the convolutional neural network based on the constructed joint loss function, so that the images generated by the convolutional neural network approach real texture images.
In the embodiments of the present application, preferably, the first construction module is a VGG19 network module.
Fig. 5 is a schematic structural diagram of the convolutional neural network in a model generation module provided by an embodiment of the present application. As shown in Fig. 5, the convolutional neural network includes at least one gated residual module 51 for capturing the spatiotemporal information of the video stream. The gated residual module 51 consists of a gating branch 61, a convolutional branch 62, and an additive layer 63 connected in parallel, where the gating branch includes one convolution module 71, the convolutional branch includes at least one convolution module 71 in series, and each convolution module 71 consists of a convolutional layer 81, an instance normalization layer 82, and an activation function layer 83.
In the embodiments of the present application, preferably, the branch structure of the gated residual module is given in Table 1, where convolutional branch-1 corresponds to the first series convolution module of the convolutional branch and convolutional branch-2 corresponds to the second series convolution module of the convolutional branch.
Table 1

Branch                   Layer type   Kernel size   Stride   Channels   Activation
Convolutional branch-1   Conv         3             1        48         ReLU
Convolutional branch-2   Conv         3             1        48         ReLU
Gating branch            Conv         3             1        48         ReLU
Additive layer           -            -             -        -          -
In the embodiments of the present application, preferably, each convolution module in the convolutional neural network includes a convolutional layer, an instance normalization layer, and an activation function layer (as shown in Fig. 5, except that the first convolution module and the last convolution module omit the instance normalization layer). The specific structure of the convolutional neural network is given in Table 2.
Table 2 (the layer-by-layer structure of the convolutional neural network, given in the original publication as a figure)
Here, ReLU is the rectified linear unit function and Tanh is the hyperbolic tangent function.
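As a concrete reading of Fig. 5 and Table 1, a gated residual module might be sketched in PyTorch as follows. This is an illustrative sketch, not the patent's implementation: the text does not specify exactly how the gating branch, the convolutional branch, and the additive layer are combined, so the gate-multiplied residual form below, like every name in the code, is an assumption.

```python
import torch.nn as nn

def conv_module(channels, use_norm=True, activation=nn.ReLU):
    # convolution module = convolutional layer + instance normalization layer
    # + activation function layer (Fig. 5); 3x3 kernel, stride 1 (Table 1)
    layers = [nn.Conv2d(channels, channels, kernel_size=3, stride=1, padding=1)]
    if use_norm:
        layers.append(nn.InstanceNorm2d(channels))
    layers.append(activation())
    return nn.Sequential(*layers)

class GatedResidualModule(nn.Module):
    """Gating branch (one conv module), convolutional branch (two conv modules
    in series), and an additive layer, all at 48 channels per Table 1."""
    def __init__(self, channels=48):
        super().__init__()
        self.conv_branch = nn.Sequential(conv_module(channels), conv_module(channels))
        self.gate_branch = conv_module(channels)

    def forward(self, x):
        # Assumed combination: the gate modulates the convolutional branch,
        # and the additive layer adds the module input back as a residual.
        return x + self.gate_branch(x) * self.conv_branch(x)
```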
In the embodiments of the present application, based on the structure of the convolutional neural network provided by the above embodiments, a texture image generation model construction method is provided; see Fig. 6.
As shown in fig. 6, this method comprises:
S601: obtain a video stream sample, the video stream sample consisting of at least one sequentially ordered texture image;
S602: determine a texture image sample;
S603: obtain the convolutional neural network's output result for the texture image sample;
S604: determine the Euclidean distance from the output result to the video stream sample;
Fig. 7 is a flowchart of a method for determining the Euclidean distance from the output result to the video stream sample, provided by an embodiment of the present application.
As shown in fig. 7, this method comprises:
S701: determine the Gram matrix formed from the texture image sample and the output result;
Suppose the texture image sample is the input frame $x_t$, and the convolutional neural network's output for $x_t$ is the generated frame $\hat{x}_{t+1}$. The input frame $x_t$ and the generated frame $\hat{x}_{t+1}$ are each fed into the first construction module (the VGG19 network), and the outputs of the first rectified linear unit (ReLU) of each convolution block of the module are taken, the corresponding layer names being "ReLU1_1", "ReLU2_1", "ReLU3_1", "ReLU4_1", and "ReLU5_1". For each such layer $l$ with input $x$, the feature is denoted $\phi_l(x)$. The spatiotemporal Gram matrix generated for the input frame $x_t$ and the generated frame $\hat{x}_{t+1}$ is then
$$G^l(x_t, \hat{x}_{t+1}) = \frac{1}{M_l}\,\phi_l(x_t)^\top \phi_l(\hat{x}_{t+1}),$$
where $M_l$ is the product of the height and width of the feature $\phi_l(x)$, that is, $M_l = H_l \times W_l$.
S702: determine the average Gram matrix used to characterize the video stream sample;
Optionally, the video stream sample is characterized by an average Gram matrix (here, an average spatiotemporal Gram matrix):
$$\bar{G}^l = \frac{1}{T-1} \sum_{t=1}^{T-1} G^l(x_t, x_{t+1}),$$
where $T$ is the number of texture images in the video stream sample.
S703: construct, by minimizing the Euclidean distance between the average Gram matrix and the Gram matrix, the first loss function characterizing the Euclidean distance from the output result to the video stream sample.
Optionally, the first loss function is constructed by minimizing the Euclidean distance between the average spatiotemporal Gram matrix and the generated spatiotemporal Gram matrix. The first loss function is
$$\mathcal{L}_{gram} = \sum_{l=1}^{|l|} \frac{1}{4 N_l^2} \left\| G^l(x_t, \hat{x}_{t+1}) - \bar{G}^l \right\|_2^2,$$
where $|l|$ is the number of feature layers (which in this scheme may be 5) and $N_l$ is the number of channels of the feature $\phi_l(x)$.
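A minimal PyTorch sketch of S701-S703 follows, under the assumptions stated above (VGG19 "ReLUk_1" features, Gram matrices normalized by M_l, and the conventional 1/(4 N_l^2) weighting); all function names are ours, for illustration only.

```python
import torch

def spatiotemporal_gram(feat_t, feat_next):
    """G^l = phi_l(x_t)^T phi_l(x_hat_{t+1}) / M_l for one VGG19 feature layer.

    feat_t, feat_next: features of shape (N, C, H, W) for the input frame
    and the generated (or real next) frame respectively.
    """
    n, c, h, w = feat_t.shape
    a = feat_t.reshape(n, c, h * w)
    b = feat_next.reshape(n, c, h * w)
    return torch.bmm(a, b.transpose(1, 2)) / (h * w)   # (N, C, C)

def first_loss(gen_grams, avg_grams, channel_counts):
    """Euclidean distance between the generated spatiotemporal Gram matrices
    and the video sample's average Gram matrices, summed over |l| = 5 layers."""
    loss = 0.0
    for g, g_avg, n_l in zip(gen_grams, avg_grams, channel_counts):
        loss = loss + ((g - g_avg) ** 2).sum() / (4 * n_l ** 2)
    return loss
```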
S605: with minimizing the Euclidean distance as the training objective, update the parameters of the convolutional neural network, recursively training the network until it converges to obtain the texture image generation model.
Fig. 8 is a flowchart of another texture image generation model construction method provided by an embodiment of the present application.
As shown in figure 8, this method comprises:
S801: obtain a video stream sample, the video stream sample consisting of at least one sequentially ordered texture image;
S802: determine a texture image sample;
S803: obtain the convolutional neural network's output result for the texture image sample;
S804: determine the Euclidean distance from the output result to the video stream sample;
S805: determine the next texture image corresponding to the output result in the video stream sample;
S806: determine the mapping distance from the output result to the determined next texture image;
In the embodiments of the present application, preferably, determining the mapping distance from the output result to the determined next texture image comprises: constructing, by minimizing the mapping distance from the output result to the determined next texture image, the second loss function characterizing that mapping distance.
Optionally, the model generation module and the first construction module are trained with a least-squares adversarial loss in order to improve the visual quality of the generated texture pictures. Specifically, for the first construction module $d$, the goal is to distinguish whether an input texture picture is a real texture picture $x$ from the video stream sample or a texture picture $\hat{x}$ output by the convolutional neural network. Its loss function is defined as
$$\mathcal{L}_d = \frac{1}{N_{gt}} \sum_{x} \big(d(x) - 1\big)^2 + \frac{1}{N_{gen}} \sum_{\hat{x}} d(\hat{x})^2,$$
where $N_{gt}$ is the number of input real pictures, $N_{gen}$ is the number of input generated pictures, and $f$ is the convolutional neural network (i.e., the generating network), so that $\hat{x} = f(x)$.
For the convolutional neural network in the model generation module, the goal is to generate texture pictures that the first construction module classifies incorrectly. Specifically, the second loss function is defined as
$$\mathcal{L}_{gen} = \frac{1}{N_{gen}} \sum_{\hat{x}} \big(d(\hat{x}) - 1\big)^2.$$
S807: with minimizing the sum of the Euclidean distance and the mapping distance as the training objective, update the parameters of the convolutional neural network, recursively training the network until it converges to obtain the texture image generation model.
Optionally, by constructing a joint loss function based on the first loss function and the second loss function, the parameters of the proposed convolutional neural network framework can be optimized so that the framework generates results close to a real dynamic texture video.
Optionally, the joint loss function is
$$\mathcal{L} = \mathcal{L}_{gram} + \lambda\,\mathcal{L}_{gen},$$
where $\lambda = 0.05$ balances the contributions of the first loss function and the second loss function.
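The least-squares adversarial losses and the joint objective can be sketched as follows, assuming the discriminator scores have already been computed; the function names are illustrative, not from the patent.

```python
def discriminator_loss(d_real, d_fake):
    """L_d: push the first construction module's score toward 1 on real
    texture pictures and toward 0 on pictures generated by the network."""
    return ((d_real - 1) ** 2).mean() + (d_fake ** 2).mean()

def second_loss(d_fake):
    """L_gen: the generating network tries to make the discriminator score
    its output pictures as real."""
    return ((d_fake - 1) ** 2).mean()

def joint_loss(l_gram, l_gen, lam=0.05):
    # lambda = 0.05 balances the first and second loss functions
    return l_gram + lam * l_gen
```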
Fig. 9 is a flowchart of yet another texture image generation model construction method provided by an embodiment of the present application.
As shown in figure 9, this method comprises:
S901: obtain a video stream sample, the video stream sample consisting of at least one sequentially ordered texture image;
S902: determine at least two texture image samples;
S903: obtain the convolutional neural network's output result for each texture image sample;
S904: determine the Euclidean distance from each output result to the video stream sample;
S905: determine the average Euclidean distance based on the determined Euclidean distances;
Optionally, the average Euclidean distance is determined from the individual Euclidean distances as follows: the determined Euclidean distances are summed, and the sum is divided by the number of texture image samples among the at least two texture image samples; the result is the average Euclidean distance.
Optionally, after step S905 is completed, the parameters of the convolutional neural network can be updated with minimizing the average Euclidean distance as the training objective, recursively training the network until it converges to obtain the texture image generation model.
Further, in a texture image generation model construction method provided by an embodiment of the present application, steps S906 to S909 may also be executed.
S906: determine the next texture image corresponding to each output result in the video stream sample;
S907: determine the mapping distance from each output result to the corresponding determined next texture image;
S908: calculate the average mapping distance of the determined mapping distances;
Optionally, the average mapping distance is calculated as follows: the determined mapping distances are summed, and the sum is divided by the number of texture image samples among the at least two texture image samples; the result is the average mapping distance.
S909: with minimizing the sum of the average Euclidean distance and the average mapping distance as the training objective, update the parameters of the convolutional neural network, recursively training the network until it converges to obtain the texture image generation model.
Optionally, if steps S906 to S908 are not executed, the training objective is to minimize the average Euclidean distance, the parameters of the convolutional neural network being updated until the recursively trained network converges, yielding the texture image generation model; if steps S906 to S908 are executed, the training objective is to minimize the sum of the average Euclidean distance and the average mapping distance, the parameters likewise being updated until the recursively trained network converges, yielding the texture image generation model.
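Putting S901-S909 together, one recursive-training update might look like the sketch below. The helpers `gram_distance` and `discriminate` stand in for the Gram-based Euclidean distance of Fig. 7 and the first construction module's score, and `second_loss` is the sketch given earlier; all of these, like the remaining names, are hypothetical.

```python
def training_step(model, optimizer, samples, lam=0.05):
    """One update over at least two texture image samples (S901-S909).

    The per-sample Euclidean (Gram) distances and mapping distances are
    averaged separately, and the sum of the two averages is minimized.
    """
    gram_terms, map_terms = [], []
    for image in samples:
        output = model(image)                                 # S903
        gram_terms.append(gram_distance(output))              # S904 (hypothetical)
        map_terms.append(second_loss(discriminate(output)))   # S907 (hypothetical)
    loss = (sum(gram_terms) / len(gram_terms)                 # average Euclidean distance
            + lam * sum(map_terms) / len(map_terms))          # average mapping distance
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss
```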
The embodiments of the present application provide a dynamic texture video generation method: an input texture image is received, and a texture image sequence is generated from the received texture image and the texture image generation model (the first texture image in the sequence is the received texture image, and for any two adjacent texture images in the sequence, the later frame is the model's output for the earlier frame), so that a dynamic texture video can be generated from the sequence. This realizes generation of the dynamic texture video corresponding to the received texture image and improves the generation efficiency of dynamic texture videos while ensuring that the generated video can effectively express the image texture in time and space.
The dynamic texture video generation device provided by an embodiment of the present invention is introduced below. The device described below can be regarded as the program modules that a server needs in order to implement the dynamic texture video generation method provided by the embodiments of the present invention. The device description below and the method description above may be cross-referenced.
Fig. 10 is a structural block diagram of the dynamic texture video generation device provided by an embodiment of the present invention. The device can be applied to a server. Referring to Fig. 10, the device may include:
a texture image receiving unit 101, configured to receive an input texture image;
a texture image sequence generating unit 102, configured to generate a texture image sequence based on the texture image and a texture image generation model, wherein the first texture image in the texture image sequence is the received texture image, and for any two adjacent texture images in the texture image sequence, the later frame is the output of the texture image generation model for the earlier frame;
wherein the texture image generation model is obtained by training a convolutional neural network with the objective that the network's output for a texture image sample approaches the next texture image corresponding to that sample in a video stream sample; the texture image sample is either a texture image in the video stream sample or the convolutional neural network's output for a texture image in the video stream sample.
The dynamic texture video generation device provided by the embodiments of the present application further includes a texture image generation model training unit. Fig. 11 is a detailed structural schematic diagram of a texture image generation model training unit provided by an embodiment of the present application. As shown in Fig. 11, the training unit includes:
a video stream sample acquisition unit 111, configured to obtain a video stream sample, the video stream sample consisting of at least one sequentially ordered texture image;
a texture image sample determination unit 112, configured to determine a texture image sample;
an output result determination unit 113, configured to obtain the convolutional neural network's output result for the texture image sample;
a Euclidean distance determination unit 114, configured to determine the Euclidean distance from the output result to the video stream sample;
a recursion unit 115, configured to update the parameters of the convolutional neural network with minimizing the Euclidean distance as the training objective, recursively training the network until it converges to obtain the texture image generation model.
Further, a texture image generation model training unit provided by an embodiment of the present application further includes a mapping distance determination unit, configured to: determine the next texture image corresponding to the output result in the video stream sample; and determine the mapping distance from the output result to the determined next texture image. The recursion unit is then specifically configured to: update the parameters of the convolutional neural network with minimizing the sum of the Euclidean distance and the mapping distance as the training objective, recursively training the network until it converges to obtain the texture image generation model.
Optionally, the Euclidean distance determination unit is specifically configured to: determine the Gram matrix formed from the texture image sample and the output result; determine the average Gram matrix used to characterize the video stream sample; and construct, by minimizing the Euclidean distance between the average Gram matrix and the Gram matrix, the first loss function characterizing the Euclidean distance from the output result to the video stream sample.
Optionally, the mapping distance determination unit is specifically configured to: construct, by minimizing the mapping distance from the output result to the determined next texture image, the second loss function characterizing the mapping distance from the output result to the determined next texture image.
In the embodiments of the present application, preferably, the texture image generation model training unit further includes an average Euclidean distance determination unit, configured to determine the average Euclidean distance based on the determined Euclidean distances corresponding to the respective texture image samples. The recursion unit is then specifically configured to: update the parameters of the convolutional neural network with minimizing the average Euclidean distance as the training objective, recursively training the network until it converges to obtain the texture image generation model.
In the embodiments of the present application, preferably, the texture image generation model training unit further includes an average mapping distance determination unit, configured to determine the average mapping distance based on the determined mapping distances corresponding to the respective texture image samples. The recursion unit is then specifically configured to: update the parameters of the convolutional neural network with minimizing the sum of the average Euclidean distance and the average mapping distance as the training objective, recursively training the network until it converges to obtain the texture image generation model.
In the embodiments of the present application, preferably, the convolutional neural network includes at least one gated residual module for capturing the spatiotemporal information of the video stream; the gated residual module consists of a gating branch, a convolutional branch, and an additive layer connected in parallel; the gating branch includes one convolution module; the convolutional branch includes at least one convolution module in series; and each convolution module includes a convolutional layer, an instance normalization layer, and an activation function layer.
The dynamic texture video generation device provided by the embodiments of the present invention can be applied to a server. Optionally, Fig. 12 shows the hardware block diagram of the server. Referring to Fig. 12, the hardware structure of the server may include: at least one processor 121, at least one communication interface 122, at least one memory 123, and at least one communication bus 124.
In the embodiments of the present invention, there is at least one each of the processor 121, communication interface 122, memory 123, and communication bus 124, and the processor 121, communication interface 122, and memory 123 communicate with one another via the communication bus 124.
The processor 121 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), one or more integrated circuits configured to implement embodiments of the present invention, or the like.
The memory 123 may include high-speed RAM memory and may also include non-volatile memory, for example at least one magnetic disk memory.
The memory stores a program that the processor can invoke, the program being configured to:
receive an input texture image;
generate a texture image sequence based on the texture image and a texture image generation model, wherein the first texture image in the texture image sequence is the received texture image, and for any two adjacent texture images in the texture image sequence, the later frame is the output of the texture image generation model for the earlier frame;
wherein the texture image generation model is obtained by training a convolutional neural network with the objective that the network's output for a texture image sample approaches the next texture image corresponding to that sample in a video stream sample; the texture image sample is either a texture image in the video stream sample or the convolutional neural network's output for a texture image in the video stream sample.
Optionally, for refinements and extensions of the program's functions, refer to the description above.
An embodiment of the present invention further provides a storage medium storing a program suitable for execution by a processor, the program being configured to:
receive an input texture image;
generate a texture image sequence based on the texture image and a texture image generation model, wherein the first texture image in the texture image sequence is the received texture image, and for any two adjacent texture images in the texture image sequence, the later frame is the output of the texture image generation model for the earlier frame;
wherein the texture image generation model is obtained by training a convolutional neural network with the objective that the network's output for a texture image sample approaches the next texture image corresponding to that sample in a video stream sample; the texture image sample is either a texture image in the video stream sample or the convolutional neural network's output for a texture image in the video stream sample.
Optionally, for refinements and extensions of the program's functions, refer to the description above.
The embodiments of the present application provide a dynamic texture video generation device, server, and storage medium: an input texture image is received, and a texture image sequence is generated from the received texture image and a texture image generation model (the first texture image in the sequence is the received texture image, and for any two adjacent texture images in the sequence, the later frame is the model's output for the earlier frame), so that a dynamic texture video can be generated from the sequence. This realizes generation of the dynamic texture video corresponding to the received texture image and improves the generation efficiency of dynamic texture videos while ensuring that the generated video can effectively express the image texture in time and space.
With the research and progress of artificial intelligence technology, artificial intelligence has been researched and applied in many fields, such as smart homes, smart wearable devices, virtual assistants, smart speakers, smart marketing, unmanned driving, autonomous driving, drones, robots, smart healthcare, and smart customer service. It is believed that with the development of technology, artificial intelligence will be applied in more fields and play an increasingly important role.
The dynamic texture video generation technique provided by the embodiments of the present application can be applied to any of the above fields.
Each embodiment in this specification is described in a progressive manner, the highlights of each of the examples are with other The difference of embodiment, the same or similar parts in each embodiment may refer to each other.For device disclosed in embodiment For, since it is corresponded to the methods disclosed in the examples, so being described relatively simple, related place is said referring to method part It is bright.
Those skilled in the art will further appreciate that the units and algorithm steps described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate this interchangeability of hardware and software, the composition and steps of each example have been described above in terms of their functions. Whether these functions are implemented in hardware or software depends on the specific application and the design constraints of the technical solution. Skilled artisans may implement the described functions differently for each particular application, but such implementations should not be considered to go beyond the scope of the present invention.
The steps of the methods or algorithms described in connection with the embodiments disclosed herein may be implemented directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The foregoing description of the disclosed embodiments enables those skilled in the art to implement or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the invention. Therefore, the present invention is not to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (12)

1. A dynamic texture video generation method, comprising:
receiving an input texture image;
generating a texture image sequence based on the texture image and a texture image synthesis model, wherein the first texture image in the texture image sequence is the received texture image, and, for any two adjacent texture images in the texture image sequence, the later frame is the output of the texture image synthesis model for the earlier frame;
wherein the texture image synthesis model is obtained by training a convolutional neural network with the objective of minimizing an average Euclidean distance; the average Euclidean distance is computed by: for each texture image sample of at least one texture image sample, computing the Euclidean distance from the convolutional neural network's output for that texture image sample to a video stream sample; and determining the average Euclidean distance from the sum of the Euclidean distances corresponding to the respective texture image samples and the number of texture images in the at least one texture image sample; a texture image sample is a texture image in the video stream sample, or the convolutional neural network's output for a texture image in the video stream sample.
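As an illustration of the averaging step in claim 1, the sketch below (PyTorch) computes the per-sample Euclidean distances and their mean. Here `distance_to_video` stands in for whatever per-sample distance to the video stream sample is used (for instance, the Gram-matrix distance of claim 4); the function names and shapes are assumptions for illustration only.

```python
import torch

def average_euclidean_distance(cnn, texture_samples, video_sample, distance_to_video):
    # One Euclidean distance per texture image sample: from the CNN's output
    # for that sample to the video stream sample.
    distances = [distance_to_video(cnn(x), video_sample) for x in texture_samples]
    # Sum of the per-sample distances divided by the number of texture image samples.
    return torch.stack(distances).sum() / len(distances)
```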
2. The method according to claim 1, further comprising:
obtaining a video stream sample, the video stream sample consisting of at least one sequentially ordered texture image;
determining at least one texture image sample;
obtaining the output of a convolutional neural network for each texture image sample;
determining the Euclidean distance from the output to the video stream sample;
determining an average Euclidean distance based on the determined Euclidean distances corresponding to the respective texture image samples;
updating the parameters of the convolutional neural network with minimizing the average Euclidean distance as the training objective, until the recurrent convolutional neural network converges, to obtain the texture image synthesis model.
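A minimal training-loop sketch for claim 2 follows, assuming PyTorch. The optimizer choice, the fixed step count standing in for an explicit convergence test, and the use of every frame but the last as a texture image sample are illustrative assumptions.

```python
import torch

def train_synthesis_model(cnn, video_sample, avg_distance_fn, num_steps=10_000, lr=1e-4):
    # video_sample: (T, C, H, W) tensor of sequentially ordered texture images.
    optimizer = torch.optim.Adam(cnn.parameters(), lr=lr)
    for _ in range(num_steps):  # in practice, iterate until convergence
        # Texture image samples: every frame of the video stream sample except the last.
        outputs = [cnn(frame.unsqueeze(0)) for frame in video_sample[:-1]]
        loss = avg_distance_fn(outputs, video_sample)  # average Euclidean distance
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return cnn  # the trained texture image synthesis model
```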
3. The method according to claim 2, further comprising:
determining the next-frame texture image in the video stream sample corresponding to the output;
determining the mapping distance from the output to the determined next-frame texture image;
determining an average mapping distance based on the determined mapping distances corresponding to the respective texture image samples;
wherein the updating the parameters of the convolutional neural network with minimizing the average Euclidean distance as the training objective, until the recurrent convolutional neural network converges, to obtain the texture image synthesis model comprises:
updating the parameters of the convolutional neural network with minimizing the sum of the average Euclidean distance and the average mapping distance as the training objective, until the recurrent convolutional neural network converges, to obtain the texture image synthesis model.
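A sketch of the combined objective in claim 3, under the same assumptions as the previous sketches. The mean squared error standing in for the mapping distance is an assumption (claim 5 leaves the exact form open), as is `gram_distance`, a placeholder for the per-sample distance to the video stream sample.

```python
import torch
import torch.nn.functional as F

def combined_loss(cnn, video_sample, gram_distance):
    # video_sample: (T, C, H, W); the sample at index t predicts frame t + 1.
    euclidean, mapping = [], []
    for t in range(video_sample.shape[0] - 1):
        out = cnn(video_sample[t].unsqueeze(0))
        euclidean.append(gram_distance(out, video_sample))                 # distance to the video stream sample
        mapping.append(F.mse_loss(out, video_sample[t + 1].unsqueeze(0)))  # distance to the next frame
    # Training objective: sum of the average Euclidean and average mapping distances.
    return torch.stack(euclidean).mean() + torch.stack(mapping).mean()
```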
4. The method according to claim 2, wherein the determining the Euclidean distance from the output to the video stream sample comprises:
determining a Gram matrix formed from the texture image sample and the output;
determining an average Gram matrix characterizing the video stream sample;
constructing, by minimizing the Euclidean distance between the average Gram matrix and the Gram matrix, a first loss function characterizing the Euclidean distance from the output to the video stream sample.
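The Gram-matrix construction behind this first loss function can be sketched as follows, assuming PyTorch tensors of shape (1, C, H, W). Forming the joint Gram matrix by channel-wise concatenation of the sample and the output, the normalization, and building the average over consecutive frame pairs are assumptions; the claim does not fix these details.

```python
import torch

def gram_matrix(feat):
    _, c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    return (f @ f.t()) / (c * h * w)  # (C, C), normalized Gram matrix

def average_video_gram(video_sample):
    # Average Gram matrix characterizing the video stream sample: here, the mean
    # over Gram matrices of consecutive frame pairs (an assumption).
    grams = [gram_matrix(torch.cat([video_sample[t].unsqueeze(0),
                                    video_sample[t + 1].unsqueeze(0)], dim=1))
             for t in range(video_sample.shape[0] - 1)]
    return torch.stack(grams).mean(dim=0)

def first_loss(sample, output, avg_gram):
    # Gram matrix formed from the texture image sample and the output,
    # concatenated along the channel axis (an assumption).
    g = gram_matrix(torch.cat([sample, output], dim=1))
    # Euclidean (Frobenius) distance to the average Gram matrix.
    return torch.norm(g - avg_gram, p='fro')
```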
5. The method according to claim 3, wherein the determining the mapping distance from the output to the determined next-frame texture image comprises:
constructing, by minimizing the mapping distance from the output to the determined next-frame texture image, a second loss function characterizing the mapping distance from the output to the determined next-frame texture image.
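Claim 5 does not commit to a particular norm for the mapping distance; a pixel-wise mean squared error, as sketched below, is one common choice and is an assumption here.

```python
import torch.nn.functional as F

def second_loss(output, next_frame):
    # Mapping distance from the output to the determined next-frame texture image.
    return F.mse_loss(output, next_frame)
```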
6. The method according to any one of claims 1-5, wherein the convolutional neural network comprises at least one gated residual module for capturing the spatio-temporal information of a video stream, the gated residual module consisting of a gate branch and a convolutional-layer branch in parallel, together with an additive layer; the gate branch comprises a convolution module, and the convolutional-layer branch comprises at least one cascaded convolution module; each convolution module comprises a convolutional layer, an instance normalization layer, and an activation function layer.
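The module structure named in claim 6 can be sketched in PyTorch as follows: a convolution module is a convolutional layer, an instance normalization layer, and an activation layer; the gate branch and the cascaded convolutional-layer branch run in parallel and are merged with the input by an additive layer. The sigmoid gate, channel counts, kernel size, and ReLU are illustrative assumptions, not the patent's specified configuration.

```python
import torch
import torch.nn as nn

class ConvModule(nn.Module):
    """Convolutional layer + instance normalization layer + activation layer."""
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size, padding=kernel_size // 2),
            nn.InstanceNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

class GatedResidualModule(nn.Module):
    """Parallel gate branch and convolutional-layer branch, merged by an additive layer."""
    def __init__(self, channels, num_conv_modules=2):
        super().__init__()
        self.gate_branch = nn.Sequential(ConvModule(channels), nn.Sigmoid())
        self.conv_branch = nn.Sequential(
            *[ConvModule(channels) for _ in range(num_conv_modules)]  # cascaded conv modules
        )

    def forward(self, x):
        # Additive layer: the input plus the gated output of the conv branch
        # (the residual connection lets spatio-temporal detail pass through).
        return x + self.gate_branch(x) * self.conv_branch(x)
```

Stacking several such modules gives the network a gated, residual pathway through which per-frame spatial structure and frame-to-frame change can both be modeled.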
7. A dynamic texture video generation apparatus, comprising:
a texture image receiving unit, configured to receive an input texture image;
a texture image sequence generating unit, configured to generate a texture image sequence based on the texture image and a texture image synthesis model, wherein the first texture image in the texture image sequence is the received texture image, and, for any two adjacent texture images in the texture image sequence, the later frame is the output of the texture image synthesis model for the earlier frame;
wherein the texture image synthesis model is obtained by training a convolutional neural network with the objective of minimizing an average Euclidean distance; the average Euclidean distance is computed by: for each texture image sample of at least one texture image sample, computing the Euclidean distance from the convolutional neural network's output for that texture image sample to a video stream sample; and determining the average Euclidean distance from the sum of the Euclidean distances corresponding to the respective texture image samples and the number of texture images in the at least one texture image sample; a texture image sample is a texture image in the video stream sample, or the convolutional neural network's output for a texture image in the video stream sample.
8. The apparatus according to claim 7, further comprising a texture image synthesis model training unit, the texture image synthesis model training unit comprising:
a video stream sample acquiring unit, configured to obtain a video stream sample, the video stream sample consisting of at least one sequentially ordered texture image;
a texture image sample determining unit, configured to determine at least one texture image sample;
an output result determining unit, configured to obtain the output of a convolutional neural network for each texture image sample;
a Euclidean distance determining unit, configured to determine the Euclidean distance from the output to the video stream sample;
an average Euclidean distance determining unit, configured to determine an average Euclidean distance based on the determined Euclidean distances corresponding to the respective texture image samples;
a recursion unit, configured to update the parameters of the convolutional neural network with minimizing the average Euclidean distance as the training objective, until the recurrent convolutional neural network converges, to obtain the texture image synthesis model.
9. The apparatus according to claim 8, wherein the texture image synthesis model training unit further comprises a mapping distance determining unit and an average mapping distance determining unit,
the mapping distance determining unit being configured to determine the next-frame texture image in the video stream sample corresponding to the output, and to determine the mapping distance from the output to the determined next-frame texture image;
the average mapping distance determining unit being configured to determine an average mapping distance based on the determined mapping distances corresponding to the respective texture image samples;
the recursion unit being specifically configured to:
update the parameters of the convolutional neural network with minimizing the sum of the average Euclidean distance and the average mapping distance as the training objective, until the recurrent convolutional neural network converges, to obtain the texture image synthesis model.
10. The apparatus according to any one of claims 7-9, wherein the convolutional neural network comprises at least one gated residual module for capturing the spatio-temporal information of a video stream, the gated residual module consisting of a gate branch and a convolutional-layer branch in parallel, together with an additive layer; the gate branch comprises a convolution module, and the convolutional-layer branch comprises at least one cascaded convolution module; each convolution module comprises a convolutional layer, an instance normalization layer, and an activation function layer.
11. A server, comprising at least one processor and at least one memory, wherein the memory stores a program and the processor invokes the program stored in the memory, the program being configured to:
receive an input texture image;
generate a texture image sequence based on the texture image and a texture image synthesis model, wherein the first texture image in the texture image sequence is the received texture image, and, for any two adjacent texture images in the texture image sequence, the later frame is the output of the texture image synthesis model for the earlier frame;
wherein the texture image synthesis model is obtained by training a convolutional neural network with the objective of minimizing an average Euclidean distance; the average Euclidean distance is computed by: for each texture image sample of at least one texture image sample, computing the Euclidean distance from the convolutional neural network's output for that texture image sample to a video stream sample; and determining the average Euclidean distance from the sum of the Euclidean distances corresponding to the respective texture image samples and the number of texture images in the at least one texture image sample; a texture image sample is a texture image in the video stream sample, or the convolutional neural network's output for a texture image in the video stream sample.
12. A storage medium, wherein the storage medium stores a program executable by a processor, the program being configured to:
receive an input texture image;
generate a texture image sequence based on the texture image and a texture image synthesis model, wherein the first texture image in the texture image sequence is the received texture image, and, for any two adjacent texture images in the texture image sequence, the later frame is the output of the texture image synthesis model for the earlier frame;
wherein the texture image synthesis model is obtained by training a convolutional neural network with the objective of minimizing an average Euclidean distance; the average Euclidean distance is computed by: for each texture image sample of at least one texture image sample, computing the Euclidean distance from the convolutional neural network's output for that texture image sample to a video stream sample; and determining the average Euclidean distance from the sum of the Euclidean distances corresponding to the respective texture image samples and the number of texture images in the at least one texture image sample; a texture image sample is a texture image in the video stream sample, or the convolutional neural network's output for a texture image in the video stream sample.
CN201910838616.9A 2018-02-07 2018-02-07 Dynamic texture video generation method, device, server and storage medium Active CN110517335B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910838616.9A CN110517335B (en) 2018-02-07 2018-02-07 Dynamic texture video generation method, device, server and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810123812.3A CN110120085B (en) 2018-02-07 2018-02-07 Dynamic texture video generation method, device, server and storage medium
CN201910838616.9A CN110517335B (en) 2018-02-07 2018-02-07 Dynamic texture video generation method, device, server and storage medium

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201810123812.3A Division CN110120085B (en) 2018-02-07 2018-02-07 Dynamic texture video generation method, device, server and storage medium

Publications (2)

Publication Number Publication Date
CN110517335A true CN110517335A (en) 2019-11-29
CN110517335B CN110517335B (en) 2022-11-11

Family

ID=67520124

Family Applications (4)

Application Number Title Priority Date Filing Date
CN201910838616.9A Active CN110517335B (en) 2018-02-07 2018-02-07 Dynamic texture video generation method, device, server and storage medium
CN201810123812.3A Active CN110120085B (en) 2018-02-07 2018-02-07 Dynamic texture video generation method, device, server and storage medium
CN201910838614.XA Active CN110533749B (en) 2018-02-07 2018-02-07 Dynamic texture video generation method, device, server and storage medium
CN201910838615.4A Active CN110458919B (en) 2018-02-07 2018-02-07 Dynamic texture video generation method, device, server and storage medium

Family Applications After (3)

Application Number Title Priority Date Filing Date
CN201810123812.3A Active CN110120085B (en) 2018-02-07 2018-02-07 Dynamic texture video generation method, device, server and storage medium
CN201910838614.XA Active CN110533749B (en) 2018-02-07 2018-02-07 Dynamic texture video generation method, device, server and storage medium
CN201910838615.4A Active CN110458919B (en) 2018-02-07 2018-02-07 Dynamic texture video generation method, device, server and storage medium

Country Status (1)

Country Link
CN (4) CN110517335B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111882048A (en) * 2020-09-28 2020-11-03 深圳追一科技有限公司 Neural network structure searching method and related equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1498848A2 (en) * 2003-07-18 2005-01-19 Samsung Electronics Co., Ltd. GoF/GoP texture description, and texture-based GoF/GoP retrieval
US20100310159A1 (en) * 2009-06-04 2010-12-09 Honda Motor Co., Ltd. Semantic scene segmentation using random multinomial logit (rml)
CN107578017A (en) * 2017-09-08 2018-01-12 百度在线网络技术(北京)有限公司 Method and apparatus for generating image
CN107578455A (en) * 2017-09-02 2018-01-12 西安电子科技大学 Arbitrary dimension sample texture synthetic method based on convolutional neural networks

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5774125A (en) * 1993-11-18 1998-06-30 Sony Corporation Texture mapping method in which 3-D image data is transformed into 2-D data and mapped onto a surface of an object for display
KR100612852B1 (en) * 2003-07-18 2006-08-14 삼성전자주식회사 GoF/GoP Texture descriptor method, and Texture-based GoF/GoP retrieval method and apparatus using the GoF/GoP texture descriptor
CN101710945A (en) * 2009-11-30 2010-05-19 上海交通大学 Fluid video synthesizing method based on particle grain
US8811477B2 (en) * 2010-09-01 2014-08-19 Electronics And Telecommunications Research Institute Video processing method and apparatus based on multiple texture images using video excitation signals
KR20140147729A (en) * 2013-06-20 2014-12-30 (주)로딕스 Apparatus for dynamic texturing based on stream image in rendering system and method thereof
US9355464B2 (en) * 2014-05-30 2016-05-31 Apple Inc. Dynamic generation of texture atlases
CN107463949B (en) * 2017-07-14 2020-02-21 北京协同创新研究院 Video action classification processing method and device
CN107274381A (en) * 2017-07-20 2017-10-20 深圳市唯特视科技有限公司 A kind of dynamic texture synthetic method based on double identification stream convolutional networks

Also Published As

Publication number Publication date
CN110517335B (en) 2022-11-11
CN110120085A (en) 2019-08-13
CN110120085B (en) 2023-03-31
CN110533749B (en) 2022-11-11
CN110533749A (en) 2019-12-03
CN110458919B (en) 2022-11-08
CN110458919A (en) 2019-11-15

Similar Documents

Publication Publication Date Title
Wang et al. SaliencyGAN: Deep learning semisupervised salient object detection in the fog of IoT
CN111553480B (en) Image data processing method and device, computer readable medium and electronic equipment
WO2021227726A1 (en) Methods and apparatuses for training face detection and image detection neural networks, and device
CN111028330B (en) Three-dimensional expression base generation method, device, equipment and storage medium
CN110378381A (en) Object detecting method, device and computer storage medium
CN109086683A (en) A kind of manpower posture homing method and system based on cloud semantically enhancement
CN110349572A (en) A kind of voice keyword recognition method, device, terminal and server
CN110353675A (en) The EEG signals emotion identification method and device generated based on picture
CN107423398A (en) Exchange method, device, storage medium and computer equipment
CN107194158A (en) A kind of disease aided diagnosis method based on image recognition
CN110489582A (en) Personalization shows the generation method and device, electronic equipment of image
CN109902548A (en) A kind of object properties recognition methods, calculates equipment and system at device
CN111144483A (en) Image feature point filtering method and terminal
CN109766925A (en) Feature fusion, device, electronic equipment and storage medium
CN106776928A (en) Recommend method in position based on internal memory Computational frame, fusion social environment and space-time data
CN107194893A (en) Depth image ultra-resolution method based on convolutional neural networks
CN109446952A (en) A kind of piano measure of supervision, device, computer equipment and storage medium
CN110008961A (en) Text real-time identification method, device, computer equipment and storage medium
WO2022184124A1 (en) Physiological electrical signal classification and processing method and apparatus, computer device, and storage medium
JP2022530868A (en) Target object attribute prediction method based on machine learning, related equipment and computer programs
CN107066979A (en) A kind of human motion recognition method based on depth information and various dimensions convolutional neural networks
CN112330684A (en) Object segmentation method and device, computer equipment and storage medium
CN116541538B (en) Intelligent learning knowledge point mining method and system based on big data
CN114333074A (en) Human body posture estimation method based on dynamic lightweight high-resolution network
CN113269256A (en) Construction method and application of Misrc-GAN model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant