CN109493403A - Method for realizing facial animation based on action unit expression mapping - Google Patents

Method for realizing facial animation based on action unit expression mapping

Info

Publication number
CN109493403A
CN109493403A (application CN201811348656.7A)
Authority
CN
China
Prior art keywords
face
expression
action unit
parameter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811348656.7A
Other languages
Chinese (zh)
Inventor
吕科
闫衍芙
薛健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Chung Chi Ning Technology Co Ltd
Original Assignee
Beijing Chung Chi Ning Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Chung Chi Ning Technology Co Ltd filed Critical Beijing Chung Chi Ning Technology Co Ltd
Priority to CN201811348656.7A
Publication of CN109493403A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation
    • G06T 13/20 3D [Three Dimensional] animation
    • G06T 13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiments of the invention disclose a method for realizing facial animation based on action unit expression mapping, relating to the technical fields of deep learning and facial animation. The method comprises: redefining 24 facial action units on the basis of the Facial Action Coding System; collecting video data, annotating the facial action units of each frame image in the video data with expression quantization software, and establishing a facial expression data set; performing face detection on the annotated facial expression data set and extracting features with a convolutional neural network; constructing, from a three-layer neural network combined with the extracted features, a regression network model that regresses the action unit parameters; and using the regression network model, together with the new expression coding scheme and an expression fusion model, to drive a virtual character and realize facial animation. The invention addresses the weak applicability of existing facial animation methods and their inability to portray facial expression changes accurately and directly from two-dimensional features.

Description

Method for realizing facial animation based on action unit expression mapping
Technical field
The present invention relates to the technical fields of deep learning and facial animation, and in particular to a method for realizing facial animation based on action unit expression mapping.
Background technique
In computer graphics and computer vision, facial animation technology aims to capture the facial expression of a source subject and map it onto the face of a virtual character. The two most common approaches are facial animation based on a depth camera and facial animation based on video images. The depth-camera approach mainly uses a dynamic expression model to capture the rigid and non-rigid parameters of the face in real time, estimates the facial expression data, and drives the animation from those data; however, depth cameras are expensive and their application scenarios are limited, so the approach has weak applicability.
The video-image approach mainly locates semantic facial feature points and uses their positions to regress the three-dimensional shape of the face, while adjusting the camera parameters to compute the pose parameters and expression coefficients of the face. However, regressing the three-dimensional face shape is time-consuming and laborious, and the resulting expression parameters are not sufficient to portray facial expression changes systematically and accurately.
Summary of the invention
The embodiments of the present invention aim to provide a method for realizing facial animation based on action unit expression mapping, to solve the weak applicability of existing facial animation methods and their inability to portray facial expression changes accurately and directly from two-dimensional features.
To achieve the above object, an embodiment of the present invention provides a method for realizing facial animation based on action unit expression mapping. The method comprises: redefining 24 facial action units on the basis of the Facial Action Coding System to form a new expression coding scheme; collecting video data, annotating the facial action units of each frame image in the video data with expression quantization software, and establishing a facial expression data set; performing face detection on the annotated facial expression data set and extracting features with a convolutional neural network; constructing, from a three-layer neural network combined with the extracted features, a regression network model that regresses the action unit parameters; and using the regression network model, together with the new expression coding scheme and an expression fusion model, to drive a virtual character and realize the facial animation.
As a preferred technical solution, the new expression coding scheme comprises 9 redefined symmetric action units, 10 asymmetric action units, 2 symmetric action descriptors and 2 asymmetric action descriptors.
As a preferred technical solution, the method of establishing the facial expression data set comprises: recording video of several participants with a camera in a natural environment, obtaining facial expression video sequences under different illumination, poses, age groups and genders; quantizing each action unit of each frame of face image as a floating-point value between 0 and 1; and annotating the action units involved in each frame of face image, finally establishing a facial expression data set containing several annotated expression images.
As a preferred technical solution, the parameter of an action unit portrays, for any particular expression, the degree to which that action unit deviates from its state in the neutral face. In the neutral state all action unit parameters are set to 0; the lower the deviation of an action unit, the smaller its parameter value (approaching 0), and the higher the deviation, the larger its parameter value (approaching 1).
As a preferred technical solution, the feature extraction method comprises: detecting the two-dimensional feature points of every image in the data set with a two-dimensional feature point regressor; cropping each image according to the interpupillary distance and normalizing the cropped image to the input size of the network; and then extracting the facial features with a deep convolutional neural network.
As a preferred technical solution, the construction of the regression network model comprises: jointly regressing the facial action unit parameters with a three-layer neural network, and measuring the regression result with a Euclidean distance loss; wherein the three-layer neural network comprises two fully connected layers each followed by a rectified linear unit, and dropout layers that reduce the action unit parameter regression to the optimal dimensionality.
As a preferred technical solution, the construction of the regression network model further comprises: training the regression network model with the open-source deep learning framework Caffe; initializing the network parameters with a pre-trained model provided with Caffe; optimizing the regression network model with stochastic gradient descent; and tuning a series of hyperparameters, including the number of iterations, the learning rate and the weight decay, to minimize the Euclidean distance loss, finally obtaining the optimally trained regression network model.
As a preferred technical solution, realizing the facial animation comprises: regressing, with the regression network model, the action unit parameters of each frame image of a video sequence captured in real time; simultaneously estimating the rigid transform of the head, including the rotation matrix and translation vector, with the EPnP algorithm; mapping the obtained action unit parameters onto the virtual character and, combining the character's neutral expression shape with the basic three-dimensional facial expression shapes corresponding to the 24 redefined facial action units, obtaining the corresponding facial expression of the animated character; and mapping the rigid head-motion parameters onto the virtual character to obtain the corresponding head pose, thereby forming the facial animation.
The embodiment of the present invention has the advantages that
(1) From a video sequence captured by a single camera, the present invention accurately regresses the facial action unit parameters that portray the facial expression, and maps those parameters onto the face of a virtual character to drive the facial expression movements of the animated character.
(2) The present invention has strong applicability: in any environment, an ordinary user can shoot with any device containing a monocular camera, such as a mobile phone or a computer; the action unit parameters of the face are obtained accurately from the whole face image and mapped onto the virtual character's face for expression animation.
(3) Based on a deep learning algorithm, the present invention regresses the expression parameters directly and more accurately from the two-dimensional image, without regressing a three-dimensional face shape from two-dimensional feature points to compute the expression parameters, and therefore achieves a better animation effect.
Detailed description of the invention
Fig. 1 is a flow chart of the method for realizing facial animation based on action unit expression mapping provided by Embodiment 1 of the present invention.
Specific embodiment
The embodiments of the present invention are illustrated below by specific examples; those skilled in the art can readily understand other advantages and effects of the present invention from the content disclosed in this specification.
It should be noted that the structures, proportions and sizes depicted in this specification are provided only to accompany the disclosed content for the understanding of those skilled in the art, and are not intended to limit the conditions under which the invention can be implemented; any structural modification, change of proportion or adjustment of size that does not affect the effects or purposes achievable by the invention shall still fall within the scope covered by the disclosed technical content. Likewise, terms such as "upper", "lower", "left", "right" and "middle" cited in this specification are merely for convenience of description rather than limits on the implementable scope of the invention; changes or adjustments of these relative relationships, without substantive change to the technical content, are also regarded as within the implementable scope of the invention.
Embodiment 1
This embodiment provides a method for realizing facial animation based on action unit expression mapping, comprising:
S1: redefine 24 facial action units on the basis of the Facial Action Coding System to form a new expression coding scheme;
S2: collect video data, annotate the facial action units of each frame image in the video data with expression quantization software, and establish a facial expression data set;
S3: perform face detection on the annotated facial expression data set and extract features with a convolutional neural network;
S4: construct, from a three-layer neural network combined with the extracted features, a regression network model that regresses the action unit parameters;
S5: use the regression network model, together with the new expression coding scheme and an expression fusion model, to drive a virtual character and realize the facial animation.
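For illustration only, the per-frame flow of these steps can be sketched as below; every helper name here (detect_face, extract_features, estimate_head_pose, the avatar interface) is a hypothetical placeholder rather than an API defined by this embodiment:

```python
# Illustrative per-frame sketch of S1-S5; all helper names are hypothetical
# placeholders, not functions defined by the patent.
def animate_frame(frame, regressor, avatar):
    """Map one captured video frame onto a virtual character."""
    face = detect_face(frame)           # S3: face detection on the input
    features = extract_features(face)   # S3: CNN feature extraction
    au_params = regressor(features)     # S4: regress the 24 AU parameters
    pose = estimate_head_pose(face)     # rigid head transform (see S5)
    avatar.apply_expression(au_params)  # S5: expression fusion model
    avatar.apply_pose(pose)             # S5: head pose of the character
```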
Specifically, this embodiment redefines 24 action units on the basis of the Facial Action Coding System (FACS), producing a new expression coding scheme for showing different facial expressions. FACS divides the face, according to its anatomical characteristics, into action units that are mutually independent yet interconnected, and analyses the motion characteristics of these action units, the main regions they control, and the associated expressions, thereby classifying the expressions that human beings show in everyday life. FACS is today the authoritative reference standard for the muscle movements of facial expression and is used by psychologists and animators alike. In this embodiment, the new expression coding scheme comprises 9 redefined symmetric action units, 10 asymmetric action units, 2 symmetric action descriptors and 2 asymmetric action descriptors.
Since the action units defined by FACS are mainly intended for facial expression analysis, and in order to make the expression fusion step of driving the facial animation more convenient (one action unit corresponding to one three-dimensional facial expression shape), this embodiment treats the left and right sides of some bilateral FACS action units as two different action units. For example, the eye-closure action unit is split into a left-eye-closure unit and a right-eye-closure unit, and the same split applies to the FACS eyelid raise, brow lower, jaw sideways, mouth-corner raise and mouth-corner stretch units. In addition, this embodiment splits the lip-suck action unit into an upper-lip-suck unit and a lower-lip-suck unit. The following table lists the numbers and definitions of the 24 action units (AUs) together with the numbers and definitions of the corresponding FACS units.
AU | Definition | Corresponding FACS number and definition
1 | Left eye closure | AU43 Eye closure
2 | Right eye closure | AU43 Eye closure
3 | Left eyelid raise | AU5 Upper lid raiser
4 | Right eyelid raise | AU5 Upper lid raiser
5 | Left brow lower | AU4 Brow lowerer
6 | Right brow lower | AU4 Brow lowerer
7 | Left brow raise | AU2 Outer brow raiser
8 | Right brow raise | AU2 Outer brow raiser
9 | Mouth open | AU26 Jaw drop
10 | Lower lip left | AD30 Jaw sideways
11 | Lower lip right | AD30 Jaw sideways
12 | Left mouth corner up | AU12 Lip corner puller
13 | Right mouth corner up | AU12 Lip corner puller
14 | Left mouth corner stretch | AU20 Lip stretcher
15 | Right mouth corner stretch | AU20 Lip stretcher
16 | Upper lip suck | AU28 Lip suck
17 | Lower lip suck | AU28 Lip suck
18 | Lower lip out | AD29 Jaw thrust
19 | Upper lip up | AU10 Upper lip raiser
20 | Lower lip down | AU16 Lower lip depressor
21 | Mouth corner down | AU17 Chin raiser
22 | Lip pucker | AU18 Lip pucker
23 | Cheek puff | AD34 Puff
24 | Nose wrinkle | AU9 Nose wrinkler
Further, in this embodiment the facial expression data set is established as follows: 122 participants are video-recorded in a natural environment with an ordinary monocular camera, with the face required to occupy at least 100,000 pixels in the image. Each video involves 4 to 29 expressions and lasts 10 s to 120 s; 123 facial expression video sequences are finally obtained under different illumination, poses, age groups and genders. No special camera is needed, so the method has strong applicability.
Each action unit is quantized as a floating-point value between 0 and 1, which facilitates the mapping of expression parameters between the source image and the three-dimensional character. The action units involved in each frame of face image in the video sequences are then annotated, accurate to two decimal places. In the neutral state all action unit parameters are set to 0; the parameter of each action unit portrays, for a particular expression, the degree to which that unit deviates from its state in the neutral face: the lower the deviation, the smaller the parameter value (approaching 0), and the higher the deviation, the larger the parameter value (approaching 1). Annotation efficiency is improved with expression quantization software, and a facial expression data set containing 99,356 annotated expression images is finally established.
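To make the annotation format concrete, the following is a minimal sketch, assuming a sparse per-frame record in which unlisted action units stay at their neutral value 0; the field names are illustrative only:

```python
# Hypothetical annotation record for one frame: each listed AU carries a
# floating-point intensity in [0, 1], quantized to two decimal places.
frame_label = {
    "frame": 1187,          # illustrative frame index
    "aus": {9: 0.65,        # AU 9 (mouth open), well away from neutral
            12: 0.30,       # AU 12 (left mouth corner up), slight
            13: 0.30},      # AU 13 (right mouth corner up), slight
}

def au_vector(label, n_aus=24):
    """Expand a sparse annotation into the dense 24-dim regression target."""
    v = [0.0] * n_aus                   # neutral face: all parameters are 0
    for au, value in label["aus"].items():
        v[au - 1] = round(min(max(value, 0.0), 1.0), 2)  # clamp and quantize
    return v
```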
Further, the method for feature extraction includes: to return device track human faces first with two dimensional character point, and orient 68 characteristic points of every image in human face expression data set later carry out image according to two interpupillary distances in image It cuts, then by the input size of the image normalization after reduction to network.Recycle depth convolutional neural networks to face characteristic It extracts, and gets the feature vector of 1000 dimensions from the last one full articulamentum of depth convolutional neural networks to portray people Face feature, for constructing Recurrent networks model.
Further, the method for constructing Recurrent networks model includes: jointly to be returned using one three layers of neural network Return 25 facial movement cell parameters, and measures regression result using Euclidean distance loss;Wherein, the list of three-layer neural network First number is respectively 1000,512 and 24, and the full articulamentum of the first two has been all connected with rectification linear unit to realize the non-linear of network Property, due to being not feature vector per the one-dimensional information that can be provided about face moving cell, so being tieed up Degree reduction, plus Dropout layers to allow network itself to determine for the optimal of moving cell parametric regression after full articulamentum Dimension, than manually carrying out dimension reduction more accurately and more efficiently, and dropout rate is followed successively by 0.4 and 0.3.
The whole network model is trained with the open-source deep learning framework Caffe. The network parameters are initialized with the bvlc_googlenet pre-trained model provided with Caffe, and the model is optimized with stochastic gradient descent over 130,000 iterations, with a base learning rate of 0.001, a momentum of 0.9 and a weight decay of 0.0002, finally yielding the optimally trained network model. On this basis, the regression results obtained with AlexNet and VGG-16 as feature extractors were compared; weighing both accuracy and speed, GoogLeNet was finally chosen as the feature extractor.
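For illustration, the reported solver settings translate into a training loop of the following shape; PyTorch SGD stands in for Caffe's solver here, and the data `loader` is an assumed object yielding (feature, AU-target) batches:

```python
# Training sketch mirroring the reported hyperparameters (SGD, base learning
# rate 0.001, momentum 0.9, weight decay 0.0002, 130,000 iterations); the
# patent itself trains with Caffe and a bvlc_googlenet pre-trained model.
import torch
import torch.nn as nn

model = AURegressor()                    # regression head sketched above
loss_fn = nn.MSELoss()                   # Euclidean distance loss
optimizer = torch.optim.SGD(model.parameters(), lr=0.001,
                            momentum=0.9, weight_decay=2e-4)

step = 0
while step < 130000:                     # reported iteration budget
    for features, au_targets in loader:  # `loader` is assumed, not defined
        optimizer.zero_grad()
        loss = loss_fn(model(features), au_targets)
        loss.backward()
        optimizer.step()
        step += 1
        if step >= 130000:
            break
```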
Further, the regression network model regresses the action unit parameters of each frame image of the video sequence captured in real time, while the EPnP algorithm estimates the rigid transform of the head, including the rotation matrix and translation vector. The obtained action unit parameters are mapped onto the virtual character as its three-dimensional dynamic expression parameters and combined with the character's neutral expression shape and the basic three-dimensional facial expression shapes corresponding to the 24 redefined facial action units, giving the corresponding facial expression of the animated character. The rigid head-motion parameters are then mapped onto the virtual character to obtain the corresponding head pose, thereby forming the facial animation.
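For the rigid head transform, a sketch with OpenCV's EPnP solver is shown below; the 3D reference landmark coordinates and the pinhole camera intrinsics are assumptions that depend on the head model and the capture device:

```python
# Head-pose sketch using OpenCV's EPnP solver, as named in the embodiment.
import cv2
import numpy as np

def head_pose(landmarks_2d, model_points_3d, frame_w, frame_h):
    """Estimate the rigid transform (rotation matrix, translation vector)."""
    focal = frame_w  # common pinhole approximation; not a patent value
    camera = np.array([[focal, 0, frame_w / 2],
                       [0, focal, frame_h / 2],
                       [0, 0, 1]], dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(model_points_3d, landmarks_2d, camera,
                                  None, flags=cv2.SOLVEPNP_EPNP)
    rotation, _ = cv2.Rodrigues(rvec)  # rotation vector -> rotation matrix
    return rotation, tvec
```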
Under a particular pose and a particular expression, the three-dimensional face shape of the virtual character can be summarized by the following function:

$F(\beta) = B_0 + \sum_{i=1}^{24} \beta_i (B_i - B_0)$

where $B_0$ is the three-dimensional shape of the neutral face, $B_i$ are the remaining three-dimensional face shapes corresponding to the action units, and $\beta = \{\beta_1, \beta_2, \ldots, \beta_{24}\}$ is the expression parameter vector, i.e. the action unit parameter vector regressed from the source video sequence.
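This function is a direct weighted blend of per-action-unit offsets, as the short sketch below shows; B0 and the Bi are assumed to be (n_vertices, 3) vertex arrays supplied by the character's rig:

```python
# Expression fusion: weighted blendshape combination of the AU shapes.
import numpy as np

def fuse_expression(B0, B, beta):
    """B0: neutral shape; B: list of 24 AU shapes; beta: regressed AU params."""
    shape = B0.copy()
    for b_i, beta_i in zip(B, beta):
        shape += beta_i * (b_i - B0)  # offset of AU i, weighted by beta_i
    return shape
```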
Based on a deep learning algorithm, the present invention regresses the expression parameters directly and more accurately from the two-dimensional image, without regressing a three-dimensional face shape from two-dimensional feature points to compute them; it accurately regresses the facial action unit parameters that portray the facial expression and maps them onto the face of the virtual character to drive the facial expression movements of the animated character, with a better animation effect.
Although the present invention has been described in detail above with general explanations and specific embodiments, some modifications or improvements can be made on the basis of the present invention, as will be apparent to those skilled in the art. Such modifications or improvements made without departing from the spirit of the present invention fall within the scope of the claimed invention.

Claims (8)

1. A method for realizing facial animation based on action unit expression mapping, characterized in that the method comprises:
redefining 24 facial action units on the basis of the Facial Action Coding System to form a new expression coding scheme;
collecting video data, annotating the facial action units of each frame image in the video data with expression quantization software, and establishing a facial expression data set;
performing face detection on the annotated facial expression data set, and extracting features with a convolutional neural network;
constructing, from a three-layer neural network combined with the extracted features, a regression network model that regresses the action unit parameters;
using the regression network model, together with the new expression coding scheme and an expression fusion model, to drive a virtual character and realize the facial animation.
2. The method for realizing facial animation based on action unit expression mapping of claim 1, characterized in that the new expression coding scheme comprises 9 redefined symmetric action units, 10 asymmetric action units, 2 symmetric action descriptors and 2 asymmetric action descriptors.
3. The method for realizing facial animation based on action unit expression mapping of claim 1, characterized in that establishing the facial expression data set comprises:
recording video of several participants with a camera in a natural environment, obtaining facial expression video sequences under different illumination, poses, age groups and genders;
quantizing each action unit of each frame of face image as a floating-point value between 0 and 1;
annotating the action units involved in each frame of face image, finally establishing a facial expression data set containing several annotated expression images.
4. The method for realizing facial animation based on action unit expression mapping of claim 3, characterized in that the parameter of an action unit portrays, for any particular expression, the degree to which that action unit deviates from its state in the neutral face; in the neutral state all action unit parameters are set to 0, so the lower the deviation of an action unit, the smaller its parameter value (approaching 0), and the higher the deviation, the larger its parameter value (approaching 1).
5. The method for realizing facial animation based on action unit expression mapping of claim 1, characterized in that the feature extraction comprises: detecting the two-dimensional feature points of every image in the data set with a two-dimensional feature point regressor; cropping each image according to the interpupillary distance and normalizing the cropped image to the input size of the network; and then extracting the facial features with a deep convolutional neural network.
6. The method for realizing facial animation based on action unit expression mapping of claim 1, characterized in that constructing the regression network model comprises:
jointly regressing the facial action unit parameters with a three-layer neural network;
measuring the regression result with a Euclidean distance loss;
wherein the three-layer neural network comprises two fully connected layers each followed by a rectified linear unit, and dropout layers that reduce the action unit parameter regression to the optimal dimensionality.
7. The method for realizing facial animation based on action unit expression mapping of claim 6, characterized in that constructing the regression network model further comprises: training the regression network model with the open-source deep learning framework Caffe; initializing the network parameters with a pre-trained model provided with Caffe; optimizing the regression network model with stochastic gradient descent; and tuning a series of hyperparameters, including the number of iterations, the learning rate and the weight decay, to minimize the Euclidean distance loss, finally obtaining the optimally trained regression network model.
8. The method for realizing facial animation based on action unit expression mapping of claim 1, characterized in that realizing the facial animation comprises:
regressing, with the regression network model, the action unit parameters of each frame image of a video sequence captured in real time;
simultaneously estimating the rigid transform of the head, including the rotation matrix and translation vector, with the EPnP algorithm;
mapping the obtained action unit parameters onto the virtual character and, combining the character's neutral expression shape with the basic three-dimensional facial expression shapes corresponding to the 24 redefined facial action units, obtaining the corresponding facial expression of the animated character;
mapping the rigid head-motion parameters onto the virtual character to obtain the corresponding head pose, thereby forming the facial animation.
CN201811348656.7A 2018-11-13 2018-11-13 Method for realizing facial animation based on action unit expression mapping Pending CN109493403A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811348656.7A CN109493403A (en) 2018-11-13 2018-11-13 Method for realizing facial animation based on action unit expression mapping


Publications (1)

Publication Number Publication Date
CN109493403A (en) 2019-03-19

Family

ID=65695730

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811348656.7A Pending CN109493403A (en) Method for realizing facial animation based on action unit expression mapping

Country Status (1)

Country Link
CN (1) CN109493403A (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103093490A (en) * 2013-02-02 2013-05-08 浙江大学 Real-time facial animation method based on single video camera
CN103942822A (en) * 2014-04-11 2014-07-23 浙江大学 Facial feature point tracking and facial animation method based on single video vidicon
CN106600667A (en) * 2016-12-12 2017-04-26 南京大学 Method for driving face animation with video based on convolution neural network
CN107633207A (en) * 2017-08-17 2018-01-26 平安科技(深圳)有限公司 AU characteristic recognition methods, device and storage medium
CN107862292A (en) * 2017-11-15 2018-03-30 平安科技(深圳)有限公司 Personage's mood analysis method, device and storage medium
CN108363973A (en) * 2018-02-07 2018-08-03 电子科技大学 A kind of unconfined 3D expressions moving method

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109922355A (en) * 2019-03-29 2019-06-21 广州虎牙信息科技有限公司 Virtual image live broadcasting method, virtual image live broadcast device and electronic equipment
CN109922355B (en) * 2019-03-29 2020-04-17 广州虎牙信息科技有限公司 Live virtual image broadcasting method, live virtual image broadcasting device and electronic equipment
CN109977925A (en) * 2019-04-22 2019-07-05 北京字节跳动网络技术有限公司 Expression determination method, apparatus and electronic device
CN110517339A (en) * 2019-08-30 2019-11-29 腾讯科技(深圳)有限公司 Animated character driving method and device based on artificial intelligence
CN110517339B (en) * 2019-08-30 2021-05-25 腾讯科技(深圳)有限公司 Animation image driving method and device based on artificial intelligence
US11941737B2 (en) 2019-08-30 2024-03-26 Tencent Technology (Shenzhen) Company Limited Artificial intelligence-based animation character control and drive method and apparatus
CN110599573A (en) * 2019-09-03 2019-12-20 电子科技大学 Method for realizing real-time human face interactive animation based on monocular camera
CN110942503A (en) * 2019-11-13 2020-03-31 中南大学 Micro-expression data generation method based on virtual face model
CN110942503B (en) * 2019-11-13 2022-02-11 中南大学 Micro-expression data generation method based on virtual face model
CN111460945A (en) * 2020-03-25 2020-07-28 亿匀智行(深圳)科技有限公司 Algorithm for acquiring 3D expression in RGB video based on artificial intelligence
CN111598977A (en) * 2020-05-21 2020-08-28 北京中科深智科技有限公司 Method and system for transferring and animating expression
CN111598977B (en) * 2020-05-21 2021-01-29 北京中科深智科技有限公司 Method and system for transferring and animating expression

Similar Documents

Publication Publication Date Title
CN109493403A (en) Method for realizing facial animation based on action unit expression mapping
US11790589B1 (en) System and method for creating avatars or animated sequences using human body features extracted from a still image
CN107358648B (en) Real-time full-automatic high quality three-dimensional facial reconstruction method based on individual facial image
CN101324961B (en) Human face portion three-dimensional picture pasting method in computer virtual world
JP6788264B2 (en) Facial expression recognition method, facial expression recognition device, computer program and advertisement management system
Terzopoulos et al. Analysis and synthesis of facial image sequences using physical and anatomical models
CN104008564B (en) A kind of human face expression cloning process
CN107610209A (en) Human face countenance synthesis method, device, storage medium and computer equipment
CN108596024A (en) A kind of illustration generation method based on human face structure information
CN108140105A (en) Head-mounted display with countenance detectability
CN109635727A (en) A kind of facial expression recognizing method and device
CN109584353A (en) A method of three-dimensional face expression model is rebuild based on monocular video
CN109410168A (en) For determining the modeling method of the convolutional neural networks model of the classification of the subgraph block in image
CN109034099A (en) A kind of expression recognition method and device
CN113496507A (en) Human body three-dimensional model reconstruction method
CN110363867A (en) Virtual dress up system, method, equipment and medium
Nguyen et al. Static hand gesture recognition using artificial neural network
CN101098241A (en) Method and system for implementing virtual image
CN108363973A (en) A kind of unconfined 3D expressions moving method
CN103761508A (en) Biological recognition method and system combining face and gestures
CN109087379A (en) The moving method of human face expression and the moving apparatus of human face expression
CN111145865A (en) Vision-based hand fine motion training guidance system and method
CN102567716A (en) Face synthetic system and implementation method
CN108932654A (en) A kind of virtually examination adornment guidance method and device
CN108908353B (en) Robot expression simulation method and device based on smooth constraint reverse mechanical model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190319