CN113781616A - Facial animation binding acceleration method based on neural network
- Publication number: CN113781616A (application number CN202111310642.8A)
- Authority: CN (China)
- Prior art keywords: controller, neural network, value, values, fully
- Prior art date: 2021-11-08
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T 13/40—3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
- G06F 18/2135—Feature extraction based on approximation criteria, e.g. principal component analysis
- G06N 3/045—Combinations of networks
- G06N 3/08—Learning methods
- G06T 17/20—Finite element generation, e.g. wire-frame surface description, tessellation
- G06T 2210/44—Morphing
- G06T 2213/04—Animation description language
- G06T 2213/08—Animation software package
Abstract
The invention provides a facial animation binding acceleration method based on a neural network, comprising the following steps: S1, generate training data: the training data are generated randomly on the basis of the basic facial expression library in the original binding file; S2, build a neural network: the network comprises several sequentially connected fully-connected layers and a PCA layer, where the PCA layer computes the principal components of the training data to obtain a set of blend deformers, and the fully-connected layers take the controller values as input and output a multi-dimensional vector composed of the coefficients of the blend deformers; S3, train the network model: a regression task is established and the model with minimized loss is obtained; S4, write a plug-in for the animation software, combine the plug-in with the controllers in the original binding file, and use the trained network model to form a new binding file. The invention approximates the controller-driven deformation of the facial Mesh, replacing the original complex binding method.
Description
Technical Field
The invention belongs to the technical field of animation binding, and particularly relates to a facial animation binding acceleration method based on a neural network.
Background
In order to generate realistic-looking character facial animation in high-fidelity digital film production, a rigging artist typically builds extremely complex binding controls and shape deformers for the character head.
In the prior art, complex binding controls directly cause the rig-evaluation node graph to become huge and complex and the scene to become too heavy, so the interaction speed of the software drops dramatically and the efficiency of the subsequent animation production stages suffers. In fact, the slowdown caused by the binding stage has long been a stubborn pain point in the animation production pipeline. Animation companies have tried to accelerate the binding evaluation process with various conventional methods, but owing to the limitations of the DCC software's own architecture, these attempts have not achieved significant breakthroughs. In the animation field, deep learning is often used for tasks such as facial expression generation. For example, patent publication No. CN112200894A discloses an automatic migration method for digital human facial expression animation based on a deep learning framework: expression migration is performed with a trained deep learning model; the controller parameter values of one digital human are input into the network model to obtain the controller parameter values of the same expression on another digital human, and the generated controller parameter values are applied to that digital human model to drive the mesh vertex positions in 3D space. The automatic migration replaces manual operation and greatly improves the production efficiency of virtual character animation. However, that patent only applies to expression migration, i.e. it solves the retargeting problem; there is still no outstanding improvement for accelerating the binding itself in the field of animation rigging.
Disclosure of Invention
The invention aims to provide a neural-network-based facial animation binding acceleration method that approximates the controller-driven deformation of the facial Mesh to replace the original complex binding method.
The invention provides the following technical solution:
The application provides a facial animation binding acceleration method based on a neural network, comprising the following steps:
S1, generate training data: the training data are generated randomly on the basis of the basic facial expression library in the original binding file and comprise the values of several groups of controllers and the corresponding Mesh vertex values;
S2, build a neural network: the network comprises several sequentially connected fully-connected layers and a PCA layer, where the PCA layer computes the principal components of the training data to obtain a set of blend deformers, and the fully-connected layers take the controller values as input and output a multi-dimensional vector composed of the coefficients of the blend deformers;
S3, train the network model: a regression task is established and the error is computed with a cost function to obtain the neural network model with minimized loss; the cost function is the mean square error;
S4, write a plug-in for the animation software, combine the plug-in with the controllers in the original binding file, and use the trained network model to form a new binding file.
Preferably, in step S1, the training data generation method specifically includes the following steps:
S11, acquire the facial controller list based on the basic facial expression library;
S12, obtain the value and the value range of each controller;
S13, randomly change the values of k controllers within their value ranges and repeat n times to generate n new groups of controller combinations;
S14, set k = k + 1 and n = n − i, and repeat step S13 until k equals the number of controllers;
S15, import all the randomly generated controller combinations into the animation software and obtain the corresponding Mesh vertex coordinates, so that the values of each group of controllers correspond one-to-one with the Mesh vertex values;
wherein k, n and i are integers greater than 0.
Preferably, in step S2, the PCA layer specifically performs the following steps:
S211, compute the mean $\bar{x}$ of the Mesh vertex data over the whole training set, and subtract $\bar{x}$ from each sample in the data set to obtain the difference values;
S212, perform principal component analysis on the difference values of step S211 to obtain the eigenvalues and eigenvectors of the principal components, and extract the eigenvectors corresponding to the first $n$ eigenvalues as $n$ blend deformers $b_1, b_2, \dots, b_n$; the new Mesh can then be represented as $M = \bar{x} + \sum_{j=1}^{n} w_j b_j$, where $w_j$ is the coefficient of each blend deformer, i.e. the output of the fully connected layers, $j$ is an integer greater than 0, and $n$ is the number of extracted leading eigenvalues.
Preferably, in step S2, the plurality of fully-connected layers are used to obtain the coefficients of the blend deformers, from which the predicted Mesh vertex coordinate values are computed; the input of the first fully-connected layer is a set of controller values and its output is the hidden-layer features, the input of each of the second to Mth fully-connected layers is the hidden-layer features of the preceding fully-connected layer, and the output of the last layer is the vector formed by the coefficients of the blend deformers, where M is the number of fully-connected layers and M is an integer greater than 0.
Preferably, in step S3, the values of the controllers are used as input and the Mesh vertex coordinates are used as labels to establish a regression task; the cost function is the mean square error, which minimizes the error between the Mesh vertex coordinate values and the network prediction; the model parameters with the lowest loss during training are saved, and the Adam algorithm is used for optimization.
Preferably, in step S4, the method for combining the binding files specifically includes the following steps:
S41, train with the PyTorch framework and use the libtorch library for forward computation; combine libtorch with the interface of the animation software and write a plug-in, where the plug-in takes the controller values as input and outputs the corresponding Mesh vertex coordinate values;
S42, retain the controller of the original binding file and combine it with the plug-in of step S41 to form a new binding file, completing the accelerated replacement of the binding file.
Preferably, the animation software comprises Maya and UE (Unreal Engine).
Preferably, when the UE is used as the animation software, the plug-in is written as follows:
Q1, train with the PyTorch framework, and use the libtorch interface provided by PyTorch for prediction;
Q2, integrate libtorch into the UE; the plug-in then operates in the following specific steps:
Q21, import into the UE the mesh vertex coordinates in the natural state, i.e. the mesh vertex coordinates of the expressionless face;
Q22, receive the controller data through the UE's Live Link;
Q23, feed the data into the libtorch library to obtain the mesh vertex coordinates;
Q24, subtract the natural-state coordinates from the obtained coordinate values to get the displacement of each vertex;
Q25, apply the deformation in the UE according to the displacement of each vertex.
The invention has the beneficial effects that:
1. A neural network model is constructed to approximate the controller-driven deformation of the facial Mesh, replacing the original complex binding method and accelerating the original binding process; computing the principal components of the samples inside the neural network reduces the training difficulty and makes the model converge more easily, and the larger the Mesh data volume, the better the convergence;
2. The data are generated randomly from the original binding file, so the artist does not need to provide additional training data; the model is generated objectively and has higher accuracy;
3. The trained neural network model does not depend on the animation software: it can run inside the corresponding animation software through the interfaces of the various animation packages, or run independently without any animation software.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a schematic diagram of a neural network model structure of the present invention.
Detailed Description
As shown in fig. 1, the present application provides a facial animation binding acceleration method based on a neural network, including the following steps:
S1, generate training data: the training data are generated randomly on the basis of the basic facial expression library in the original binding file and comprise the values of several groups of controllers and the corresponding Mesh vertex values.
Step S1 specifically includes the following steps:
S11, acquire the facial controller list on the basis of the basic facial expression library, where the basic facial expression library consists of the basic animation expressions adjusted by the animator; these may be the basic expressions approved by the director, such as happy and sad, or simply the expressions in the original binding file;
S12, obtain the value and the value range of each controller;
S13, randomly change the values of k controllers within their value ranges and repeat n times to generate n new groups of controller combinations;
S14, set k = k + 1 and n = n − i, and repeat step S13 until k equals the number of controllers;
S15, import all the randomly generated controller combinations into the animation software and obtain the corresponding Mesh vertex coordinates, so that the values of each group of controllers correspond one-to-one with the Mesh vertex values; wherein k, n and i are integers greater than 0.
A specific example is as follows:
On the basis of the basic facial expression library, randomly change the value of one controller to obtain a new controller combination, and repeat this operation a times to form a first group of controller combinations; then randomly change the values of two controllers and repeat b times to form a second group; continue in this way until the values of all the controllers are changed, generating c sets of data, where a > b > … > c. Import all (a + b + … + c) controller combinations into the animation software via scripts to obtain the corresponding character expressions, and collect the corresponding mesh vertex coordinates. This completes the random data generation. The parameters a, b, c and n are set according to the required data volume: more data gives a better result, but data production time and training time grow with it.
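The sampling loop of steps S13-S15 is straightforward to script. The following is a minimal Python sketch of that scheme; the helpers `set_controller` and `eval_mesh_vertices` are hypothetical stand-ins for the animation software's actual scripting API (e.g. maya.cmds), not part of the patent:

```python
import random

# Hypothetical helpers standing in for the DCC scripting API:
#   controllers:        list of (name, min_value, max_value, default_value)
#   set_controller:     pushes one controller value onto the rig
#   eval_mesh_vertices: returns the flattened Mesh vertex coordinates

def generate_samples(controllers, set_controller, eval_mesh_vertices, n, i):
    """Steps S13-S15: vary k controllers at random, k = 1 .. #controllers,
    repeating n times per k, with n shrinking by i after each round."""
    samples = []
    for k in range(1, len(controllers) + 1):
        for _ in range(max(n, 1)):
            # start from the neutral pose, then randomize k controllers
            values = {name: default for name, _, _, default in controllers}
            for name, lo, hi, _ in random.sample(controllers, k):
                values[name] = random.uniform(lo, hi)
            for name, v in values.items():
                set_controller(name, v)
            samples.append((list(values.values()), eval_mesh_vertices()))
        n -= i  # S14: fewer repetitions as more controllers vary at once
    return samples
```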
S2, build the neural network: the network comprises several sequentially connected fully-connected layers and a PCA layer, where the PCA layer computes the principal components of the training data to obtain a set of blend deformers. After the principal component analysis, the neural network only needs to learn the PCA coefficients rather than the coordinate position of every Mesh vertex, which reduces the training difficulty, makes the model converge more easily, and suits the case where the number of Mesh vertices is large.
In step S2, the PCA layer specifically performs the following steps:
S211, compute the mean $\bar{x}$ of the Mesh vertex data over the whole training set, and subtract $\bar{x}$ from each sample in the data set to obtain the difference values;
S212, perform principal component analysis on the difference values of step S211 to obtain the eigenvalues and eigenvectors of the principal components, and extract the eigenvectors corresponding to the first $n$ eigenvalues as $n$ blend deformers $b_1, b_2, \dots, b_n$; the new Mesh can then be represented as $M = \bar{x} + \sum_{j=1}^{n} w_j b_j$, where $w_j$ is the coefficient of each blend deformer, i.e. the output of the fully connected layers, $j$ is an integer greater than 0, and $n$ is the number of extracted leading eigenvalues.
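As a concrete illustration of steps S211-S212, here is a minimal sketch, assuming the PCA is fitted on the flattened Mesh vertex coordinates of the training samples (function and variable names are illustrative):

```python
import numpy as np

def fit_pca_layer(meshes, n_components):
    """meshes: (num_samples, num_vertices * 3) array of Mesh vertex data.
    Returns the data-set mean and the first n principal directions,
    which act as the n blend deformers b_j."""
    mean = meshes.mean(axis=0)            # S211: mean of the data set
    centered = meshes - mean              # S211: difference values
    # S212: PCA via SVD of the centered data; rows of vt are eigenvectors
    # of the covariance matrix, ordered by decreasing eigenvalue
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def reconstruct(mean, basis, coeffs):
    """New Mesh M = mean + sum_j w_j * b_j, with w_j from the network."""
    return mean + coeffs @ basis
```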
The plurality of fully-connected layers are used to obtain the coefficients of the blend deformers, from which the predicted Mesh vertex coordinate values are computed. The input of the first fully-connected layer is a set of controller values and its output is the hidden-layer features; the input of each subsequent fully-connected layer is the hidden-layer features of the preceding layer; the output of the last layer is the vector formed by the coefficients of the blend deformers, where M is the number of fully-connected layers and M is an integer greater than 0. As shown in fig. 1, the neural network consists of eight fully-connected layers and one PCA layer: the first fully-connected layer takes the values of one set of controllers as input and outputs hidden-layer features; the second to seventh fully-connected layers each take the hidden-layer features of the previous layer as input and output hidden-layer features that serve as the input of the next layer; the eighth fully-connected layer takes the hidden-layer features output by the seventh layer as input and outputs the vector composed of the coefficients of the blend deformers.
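A minimal PyTorch sketch of the network in fig. 1 follows. The hidden width and ReLU activation are assumptions, since the patent does not fix them; the PCA mean and basis are registered as fixed buffers so that only the fully-connected layers are trained:

```python
import torch
import torch.nn as nn

class RigApproxNet(nn.Module):
    """Eight fully-connected layers followed by a fixed PCA layer (fig. 1).
    `mean` and `basis` are tensors from the PCA fit; `hidden` is an assumption."""
    def __init__(self, num_controllers, n_components, mean, basis, hidden=256):
        super().__init__()
        layers, width = [], num_controllers
        for _ in range(7):                              # layers 1-7: hidden features
            layers += [nn.Linear(width, hidden), nn.ReLU()]
            width = hidden
        layers.append(nn.Linear(width, n_components))   # layer 8: coefficients
        self.fc = nn.Sequential(*layers)
        # PCA layer stored as buffers: moved with the model, never trained
        self.register_buffer("mean", mean)              # (num_vertices * 3,)
        self.register_buffer("basis", basis)            # (n_components, num_vertices * 3)

    def forward(self, controller_values):
        coeffs = self.fc(controller_values)             # blend deformer coefficients w_j
        return self.mean + coeffs @ self.basis          # predicted Mesh vertices
```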
S3, train the network model: a regression task is established and the error is computed with a cost function to obtain the neural network model with minimized loss; the cost function is the mean square error.
In step S3, the values of the controllers are used as input and the Mesh vertex coordinates are used as labels to establish a regression task; the cost function is the mean square error, which minimizes the error between the Mesh vertex coordinate values and the network prediction; the model parameters with the lowest loss during training are saved, and the Adam algorithm is used for optimization.
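A sketch of this training setup follows; the batch size, learning rate, epoch count and checkpoint file name are illustrative assumptions rather than values given in the patent:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def train(model, controller_values, mesh_vertices, epochs=200, lr=1e-3):
    """Regression task of step S3: controller values as input, Mesh vertex
    coordinates as labels; MSE cost, Adam optimizer, lowest-loss checkpoint."""
    loader = DataLoader(TensorDataset(controller_values, mesh_vertices),
                        batch_size=64, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = torch.nn.MSELoss()
    best = float("inf")
    for _ in range(epochs):
        total = 0.0
        for ctrls, verts in loader:
            optimizer.zero_grad()
            loss = criterion(model(ctrls), verts)
            loss.backward()
            optimizer.step()
            total += loss.item() * len(ctrls)
        epoch_loss = total / len(loader.dataset)
        if epoch_loss < best:              # save the lowest-loss parameters
            best = epoch_loss
            torch.save(model.state_dict(), "rig_approx_best.pt")
```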
S4, write a plug-in for the animation software, combine the plug-in with the controllers in the original binding file, and use the trained network model to form a new binding file.
In step S4, the method specifically includes the following steps:
S41, train with the PyTorch framework and use the libtorch library for forward computation; combine libtorch with the interface of the animation software and write a plug-in that takes the controller values as input and outputs the corresponding Mesh vertex coordinate values. The animation software includes Maya and UE (Unreal Engine).
S42, retain the controller of the original binding file and combine it with the plug-in of step S41 to form a new binding file, completing the accelerated replacement of the binding file.
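One way to hand the trained PyTorch network to a libtorch-based plug-in is TorchScript tracing. The following sketch assumes the `RigApproxNet` model from the earlier sketch and an illustrative file name; the patent itself only specifies that libtorch performs the forward computation:

```python
import torch

def export_for_libtorch(model, num_controllers, path="rig_approx.ts"):
    """Trace the trained network (e.g. RigApproxNet above) into TorchScript,
    which the plug-in can then load in C++ via torch::jit::load(path)."""
    model.eval()
    example = torch.zeros(1, num_controllers)   # dummy controller input
    torch.jit.trace(model, example).save(path)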
Taking the UE as an example, with C++ as the programming language, the method for writing the plug-in with the UE comprises the following steps:
Q1, train with the PyTorch framework, and use the libtorch interface provided by PyTorch for prediction;
Q2, integrate libtorch into the UE; the plug-in then operates in the following specific steps:
Q21, import into the UE the mesh vertex coordinates in the natural state, i.e. the mesh vertex coordinates of the expressionless face;
Q22, receive the controller data through the UE's Live Link;
Q23, feed the data into the libtorch library to obtain the mesh vertex coordinates;
Q24, subtract the natural-state coordinates from the obtained coordinate values to get the displacement of each vertex;
Q25, apply the deformation in the UE according to the displacement of each vertex.
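The real plug-in performs steps Q23-Q25 in C++ with libtorch inside the UE; the following Python sketch merely mirrors that data flow (names are illustrative) to make the displacement computation explicit:

```python
import torch

def drive_mesh(traced_model, neutral_vertices, controller_values):
    """Q23: run the network on the incoming controller data (1-D tensor);
    Q24: subtract the natural-state coordinates to get the displacements;
    Q25: the resulting (num_vertices, 3) deltas are applied in the engine."""
    with torch.no_grad():
        predicted = traced_model(controller_values.unsqueeze(0)).squeeze(0)
    displacement = predicted - neutral_vertices
    return displacement.reshape(-1, 3)
```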
Although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that changes may be made in the embodiments and/or equivalents thereof without departing from the spirit and scope of the invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (8)
1. A facial animation binding acceleration method based on a neural network, characterized in that the method comprises the following steps:
S1, generate training data: the training data are generated randomly on the basis of the basic facial expression library in the original binding file and comprise the values of several groups of controllers and the corresponding Mesh vertex values;
S2, build a neural network: the network comprises several sequentially connected fully-connected layers and a PCA layer, where the PCA layer computes the principal components of the training data to obtain a set of blend deformers, and the fully-connected layers take the controller values as input and output a multi-dimensional vector composed of the coefficients of the blend deformers;
S3, train the network model: a regression task is established and the error is computed with a cost function to obtain the neural network model with minimized loss; the cost function is the mean square error;
S4, write a plug-in for the animation software, combine the plug-in with the controllers in the original binding file, and use the trained network model to form a new binding file.
2. The neural network-based facial animation binding acceleration method according to claim 1, characterized in that in step S1, the training data generation method specifically includes the following steps:
S11, acquire the facial controller list based on the basic facial expression library;
S12, obtain the value and the value range of each controller;
S13, randomly change the values of k controllers within their value ranges and repeat n times to generate n new groups of controller combinations;
S14, set k = k + 1 and n = n − i, and repeat step S13 until k equals the number of controllers;
S15, import all the randomly generated controller combinations into the animation software and obtain the corresponding Mesh vertex coordinates, so that the values of each group of controllers correspond one-to-one with the Mesh vertex values;
wherein k, n and i are integers greater than 0.
3. The neural network-based facial animation binding acceleration method according to claim 1, characterized in that in step S2, the PCA layer specifically performs the following steps:
S211, compute the mean $\bar{x}$ of the Mesh vertex data over the whole training set, and subtract $\bar{x}$ from each sample in the data set to obtain the difference values;
S212, perform principal component analysis on the difference values of step S211 to obtain the eigenvalues and eigenvectors of the principal components, and extract the eigenvectors corresponding to the first $n$ eigenvalues as $n$ blend deformers $b_1, b_2, \dots, b_n$; the new Mesh can then be represented as $M = \bar{x} + \sum_{j=1}^{n} w_j b_j$, where $w_j$ is the coefficient of each blend deformer, i.e. the output of the fully connected layers, $j$ is an integer greater than 0, and $n$ is the number of extracted leading eigenvalues.
4. The neural network-based facial animation binding acceleration method according to claim 3, characterized in that in step S2, the plurality of fully-connected layers are used to obtain the coefficients of the blend deformers, from which the predicted Mesh vertex coordinate values are computed; the input of the first fully-connected layer is a set of controller values and its output is the hidden-layer features, the input of each of the second to Mth fully-connected layers is the hidden-layer features of the preceding fully-connected layer, and the output of the last layer is the vector formed by the coefficients of the blend deformers, where M is the number of fully-connected layers and M is an integer greater than 0.
5. The neural network-based facial animation binding acceleration method according to claim 1, characterized in that in step S3, the values of the controllers are used as input and the Mesh vertex coordinates are used as labels to establish a regression task; the cost function is the mean square error, which minimizes the error between the Mesh vertex coordinate values and the network prediction; the model parameters with the lowest loss during training are saved, and the Adam algorithm is used for optimization.
6. The neural network-based facial animation binding acceleration method according to claim 1, characterized in that in step S4, the method for combining the binding files specifically includes the following steps:
S41, train with the PyTorch framework and use the libtorch library for forward computation; combine libtorch with the interface of the animation software and write a plug-in, where the plug-in takes the controller values as input and outputs the corresponding Mesh vertex coordinate values;
S42, retain the controller of the original binding file and combine it with the plug-in of step S41 to form a new binding file, completing the accelerated replacement of the binding file.
7. The neural network-based facial animation binding acceleration method according to claim 6, characterized in that the animation software includes Maya and UE.
8. The neural network-based facial animation binding acceleration method according to claim 7, characterized in that the method for writing the plug-in with the UE as the animation software comprises the following steps:
Q1, train with the PyTorch framework, and use the libtorch interface provided by PyTorch for prediction;
Q2, integrate libtorch into the UE; the plug-in then operates in the following specific steps:
Q21, import into the UE the mesh vertex coordinates in the natural state, i.e. the mesh vertex coordinates of the expressionless face;
Q22, receive the controller data through the UE's Live Link;
Q23, feed the data into the libtorch library to obtain the mesh vertex coordinates;
Q24, subtract the natural-state coordinates from the obtained coordinate values to get the displacement of each vertex;
Q25, apply the deformation in the UE according to the displacement of each vertex.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111310642.8A CN113781616B (en) | 2021-11-08 | 2021-11-08 | Facial animation binding acceleration method based on neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113781616A true CN113781616A (en) | 2021-12-10 |
CN113781616B CN113781616B (en) | 2022-02-08 |
Family
ID=78956646
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111310642.8A Active CN113781616B (en) | 2021-11-08 | 2021-11-08 | Facial animation binding acceleration method based on neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113781616B (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110415323A (en) * | 2019-07-30 | 2019-11-05 | 成都数字天空科技有限公司 | A kind of fusion deformation coefficient preparation method, device and storage medium |
US20210233299A1 (en) * | 2019-12-26 | 2021-07-29 | Zhejiang University | Speech-driven facial animation generation method |
CN112200894A (en) * | 2020-12-07 | 2021-01-08 | 江苏原力数字科技股份有限公司 | Automatic digital human facial expression animation migration method based on deep learning framework |
CN112700524A (en) * | 2021-03-25 | 2021-04-23 | 江苏原力数字科技股份有限公司 | 3D character facial expression animation real-time generation method based on deep learning |
CN113077545A (en) * | 2021-04-02 | 2021-07-06 | 华南理工大学 | Method for reconstructing dress human body model from image based on graph convolution |
CN113205449A (en) * | 2021-05-21 | 2021-08-03 | 珠海金山网络游戏科技有限公司 | Expression migration model training method and device and expression migration method and device |
Non-Patent Citations (1)
Title |
---|
Wan Xianmei et al., "Research progress of realistic 3D facial expression synthesis technology", Journal of Computer-Aided Design & Computer Graphics (《计算机辅助设计与图形学学报》)
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114898020A (en) * | 2022-05-26 | 2022-08-12 | 唯物(杭州)科技有限公司 | 3D character real-time face driving method and device, electronic equipment and storage medium |
CN116311478A (en) * | 2023-05-16 | 2023-06-23 | 北京百度网讯科技有限公司 | Training method of face binding model, face binding method, device and equipment |
CN116311478B (en) * | 2023-05-16 | 2023-08-29 | 北京百度网讯科技有限公司 | Training method of face binding model, face binding method, device and equipment |
Also Published As
Publication number | Publication date |
---|---|
CN113781616B (en) | 2022-02-08 |
Legal Events
Date | Code | Title
---|---|---
| PB01 | Publication
| SE01 | Entry into force of request for substantive examination
| GR01 | Patent grant