CN108335345A - Control method and device for a facial animation model, and computing device - Google Patents

Control method and device for a facial animation model, and computing device

Info

Publication number
CN108335345A
Authority
CN
China
Prior art keywords
model
key point
image
group
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810145903.7A
Other languages
Chinese (zh)
Other versions
CN108335345B (en)
Inventor
眭帆 (Sui Fan)
眭一帆 (Sui Yifan)
邱学侃 (Qiu Xuekan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Qihoo Technology Co Ltd
Original Assignee
Beijing Qihoo Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Qihoo Technology Co Ltd
Priority claimed from CN201810145903.7A
Publication of CN108335345A
Application granted
Publication of CN108335345B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00: Animation
    • G06T 13/20: 3D [Three Dimensional] animation
    • G06T 13/40: 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/174: Facial expression recognition
    • G06V 40/176: Dynamic expression


Abstract

The invention discloses a control method and device for a facial animation model, and a computing device. The method includes: determining first position information of each image key point contained in acquired image data, and second position information of each model key point contained in a facial animation model; determining, according to a preset key point mapping rule, the correspondence between the image key points and the model key points, and comparing the first position information of each image key point with the second position information of each model key point according to that correspondence; and controlling, according to the comparison result, the model key points contained in the facial animation model to be displaced, so that the expression of the facial animation model changes. With this method, a human facial expression can drive the facial animation model to make a corresponding expression or action, improving the interactivity between the model and the user and greatly increasing the entertainment value of the animation model.

Description

Control method and device for a facial animation model, and computing device
Technical field
The present invention relates to the field of image processing, and in particular to a control method and device for a facial animation model, and a computing device.
Background technology
Character animation based on 3D animation model technology has been widely applied in the various virtual scene systems of many industries. Lifelike creatures of all kinds can be modeled with 3D animation model technology, effectively improving the realism of virtual scenes. In the prior art, a virtual creature is mostly produced with an animation model, and the virtual creature corresponding to the animation model is then controlled by a preset program to make various expressions or actions.
However, the inventors have found in the course of implementing the present invention that controlling an animation model through a preset program is rather inflexible: the animation model can only change in the way set by its programmer, resulting in a poor user experience.
Summary of the invention
In view of the above problems, the present invention is proposed in order to provide a control method and device for a facial animation model, and a computing device, that overcome the above problems or at least partly solve them.
According to one aspect of the invention, there is provided a control method for a facial animation model, comprising:
determining first position information of each image key point contained in the acquired image data, and second position information of each model key point contained in a facial animation model;
determining, according to a preset key point mapping rule, the correspondence between the image key points and the model key points, and comparing the first position information of each image key point with the second position information of each model key point according to the correspondence;
controlling, according to the comparison result, the model key points contained in the facial animation model to be displaced, so that the expression of the facial animation model changes.
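The three claimed steps, obtaining both sets of key point positions, comparing them under a mapping rule, and displacing the model points according to the comparison result, can be sketched as follows. This is a minimal illustration under assumed names: the key point labels and the contents of KEY_POINT_MAP are invented, not taken from the patent.

```python
from typing import Dict, Tuple

Point = Tuple[float, float]

# Assumed key point mapping rule: image key point name -> model key point name.
KEY_POINT_MAP: Dict[str, str] = {"img_eye_l": "mdl_eye_l", "img_mouth": "mdl_mouth"}

def control_facial_model(image_pts: Dict[str, Point],
                         model_pts: Dict[str, Point]) -> Dict[str, Point]:
    """Drive model key points toward the detected image key points."""
    updated = dict(model_pts)
    for img_name, mdl_name in KEY_POINT_MAP.items():  # correspondence rule
        ix, iy = image_pts[img_name]                  # first position information
        mx, my = model_pts[mdl_name]                  # second position information
        if (ix, iy) != (mx, my):                      # comparison result
            updated[mdl_name] = (ix, iy)              # displace the model point
    return updated

image = {"img_eye_l": (1.0, 2.0), "img_mouth": (3.0, 1.0)}
model = {"mdl_eye_l": (0.0, 0.0), "mdl_mouth": (3.0, 1.0)}
print(control_facial_model(image, model))
```

Only the eye point moves in this example, because the mouth points already coincide.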
Optionally, the image key points contained in the image data further comprise: multiple groups of image part key points divided according to facial parts; and the model key points contained in the facial animation model further comprise: multiple groups of model part key points divided according to model parts;
the step of comparing the first position information of each image key point with the second position information of each model key point then specifically comprises:
for each group of image part key points, determining the group of model part key points corresponding to that group of image part key points, and determining a first part expression coefficient corresponding to the group of image part key points and a second part expression coefficient corresponding to the group of model part key points;
comparing the first part expression coefficient of each group of image part key points with the second part expression coefficient of the group of model part key points corresponding to that group.
Optionally, the step of determining a first part expression coefficient corresponding to the group of image part key points and a second part expression coefficient corresponding to the group of model part key points specifically comprises:
determining first distribution information of the group of image part key points according to the first position information of each image key point, and determining, according to a preset first expression calculation rule, the first part expression coefficient corresponding to the first distribution information;
determining second distribution information of the group of model part key points according to the second position information of each model key point, and determining, according to a preset second expression calculation rule, the second part expression coefficient corresponding to the second distribution information.
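The patent does not fix a concrete expression calculation rule, so the sketch below assumes one plausible reading: reduce the distribution of an eye part group to a height/width ratio (eye openness). Applying the same rule to image part key points and to model part key points yields the first and second part expression coefficients respectively.

```python
from typing import List, Tuple

Point = Tuple[float, float]

def part_expression_coefficient(part_pts: List[Point]) -> float:
    """Assumed expression calculation rule: reduce the distribution of one
    group of part key points to a height/width ratio (eye openness)."""
    xs = [x for x, _ in part_pts]
    ys = [y for _, y in part_pts]
    width = max(xs) - min(xs)
    return (max(ys) - min(ys)) / width if width else 0.0

open_eye = [(0.0, 0.0), (2.0, 0.0), (1.0, 0.8), (1.0, -0.8)]    # tall distribution
closed_eye = [(0.0, 0.0), (2.0, 0.0), (1.0, 0.1), (1.0, -0.1)]  # flat distribution
print(part_expression_coefficient(open_eye))    # 0.8
print(part_expression_coefficient(closed_eye))  # 0.1
```

An open eye (tall point distribution) yields a large coefficient; a nearly closed eye yields a small one.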
Optionally, the step of controlling, according to the comparison result, the model key points contained in the facial animation model to be displaced specifically comprises:
for each group of image part key points, judging whether the difference between the first part expression coefficient of the group of image part key points and the second part expression coefficient of the corresponding group of model part key points exceeds a preset threshold;
if so, determining the group of model part key points corresponding to the group of image part key points as target part key points, and controlling the target part key points to be displaced.
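The threshold test above can be sketched directly. The part names, coefficient values, and the 0.15 default threshold are illustrative assumptions only.

```python
from typing import Dict, List

def select_target_parts(image_coeffs: Dict[str, float],
                        model_coeffs: Dict[str, float],
                        threshold: float = 0.15) -> List[str]:
    """Return the part groups whose first and second part expression
    coefficients differ by more than the preset threshold; these groups
    become the target part key points to displace."""
    return [part for part, c in image_coeffs.items()
            if abs(c - model_coeffs[part]) > threshold]

image_coeffs = {"eye": 0.8, "mouth": 0.30}   # coefficients from the face image
model_coeffs = {"eye": 0.1, "mouth": 0.25}   # coefficients from the animation model
print(select_target_parts(image_coeffs, model_coeffs))  # ['eye']
```

Lowering the threshold makes the model track the face more sensitively, which matches the precision discussion later in the description.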
Optionally, the step of controlling the target part key points to be displaced specifically comprises:
determining a target expression coefficient of the target part key points according to the first part expression coefficient and a preset model linkage rule, and determining the displacement direction and/or displacement amplitude of the target part key points according to the target expression coefficient;
wherein the model linkage rule is used to set the correspondence between the first part expression coefficient of each group of image part key points and the target expression coefficient of the corresponding model part key points.
Optionally, the model linkage rule is further used to set the associated part points associated with each group of model part key points, and the associated displacement rule of those associated part points;
the step of controlling the target part key points to be displaced then further comprises: determining the associated part points associated with the target part key points and the associated displacement rule of those associated part points, and controlling the associated part points to be displaced according to the associated displacement rule.
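A model linkage rule of this kind can be pictured as a small table: per part, how the first part expression coefficient maps to a target expression coefficient, plus which associated part points follow and by what displacement rule. Every rule below (identity mapping for the eye, eyebrow displacement proportional to the coefficient gap) is an invented placeholder, not the patent's rule.

```python
from typing import Dict, List, Tuple

# Assumed model linkage rule: target coefficient mapping and associated parts.
LINKAGE_RULE = {
    "eye": {"target_coeff": lambda c: c,   # model simply mirrors the image here
            "associated": ["eyebrow"]},
}
# Assumed associated displacement rule: unit displacement per coefficient gap.
ASSOC_DISPLACEMENT: Dict[str, Tuple[float, float]] = {"eyebrow": (0.0, 0.5)}

def apply_linkage(part: str, image_coeff: float, model_coeff: float):
    """Return the target expression coefficient and the displacements of the
    associated part points, scaled by the coefficient gap."""
    rule = LINKAGE_RULE[part]
    target = rule["target_coeff"](image_coeff)
    gap = target - model_coeff
    moves: List[Tuple[str, Tuple[float, float]]] = []
    for name in rule["associated"]:
        dx, dy = ASSOC_DISPLACEMENT[name]
        moves.append((name, (dx * gap, dy * gap)))
    return target, moves

target, moves = apply_linkage("eye", image_coeff=0.8, model_coeff=0.1)
print(target, moves)
```

Here the eyebrow point rises along with the widening eye, which is the kind of linked displacement the claim describes.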
Optionally, the model part key points comprise eye part key points and/or face part key points; the associated part points comprise eyebrow part points, face part points and/or ear part points.
Optionally, the step of determining the first position information of each image key point contained in the acquired image data specifically comprises:
acquiring in real time the image data corresponding to the current frame image contained in a live video stream, and determining the first position information of each image key point contained in the image data corresponding to the current frame image.
Optionally, the step of determining the first position information of each image key point contained in the acquired image data specifically comprises:
acquiring in turn the image data corresponding to each frame image contained in a recorded video stream, and determining the first position information of each image key point contained in the image data corresponding to that frame image.
Optionally, the facial animation model is a skeletal animation model, and each model key point in the facial animation model corresponds to a preset skeletal part.
Optionally, the skeletal animation model further comprises an animal skeletal animation model, and the animal skeletal animation model includes a cat skeletal animation model, a horse skeletal animation model and a rabbit skeletal animation model.
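The correspondence between model key points and preset skeletal parts can be pictured as a simple binding table: displacing a key point drives the bone it is bound to. All key point and bone names below are hypothetical illustrations.

```python
from typing import Dict, List

# Hypothetical binding of model key points to bones of a skeletal animation model.
KEY_POINT_TO_BONE: Dict[str, str] = {
    "mdl_eye_l": "bone_eyelid_l",
    "mdl_eye_r": "bone_eyelid_r",
    "mdl_mouth": "bone_jaw",
}

def bones_to_animate(displaced_key_points: List[str]) -> List[str]:
    """Moving a key point drives the skeletal part it is bound to."""
    return [KEY_POINT_TO_BONE[k] for k in displaced_key_points]

print(bones_to_animate(["mdl_eye_l", "mdl_mouth"]))  # ['bone_eyelid_l', 'bone_jaw']
```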
According to another aspect of the present invention, there is provided a control device for a facial animation model, comprising:
a determining module, adapted to determine first position information of each image key point contained in acquired image data, and second position information of each model key point contained in a facial animation model;
a comparison module, adapted to determine, according to a preset key point mapping rule, the correspondence between the image key points and the model key points, and to compare the first position information of each image key point with the second position information of each model key point according to the correspondence;
a changing module, adapted to control, according to the comparison result, the model key points contained in the facial animation model to be displaced, so that the expression of the facial animation model changes.
Optionally, the image key points contained in the image data further comprise: multiple groups of image part key points divided according to facial parts; and the model key points contained in the facial animation model further comprise: multiple groups of model part key points divided according to model parts;
the comparison module is then specifically adapted to:
for each group of image part key points, determine the group of model part key points corresponding to that group of image part key points, and determine a first part expression coefficient corresponding to the group of image part key points and a second part expression coefficient corresponding to the group of model part key points;
compare the first part expression coefficient of each group of image part key points with the second part expression coefficient of the group of model part key points corresponding to that group.
Optionally, the comparison module is specifically adapted to:
determine first distribution information of the group of image part key points according to the first position information of each image key point, and determine, according to a preset first expression calculation rule, the first part expression coefficient corresponding to the first distribution information;
determine second distribution information of the group of model part key points according to the second position information of each model key point, and determine, according to a preset second expression calculation rule, the second part expression coefficient corresponding to the second distribution information.
Optionally, the changing module is specifically adapted to:
for each group of image part key points, judge whether the difference between the first part expression coefficient of the group of image part key points and the second part expression coefficient of the corresponding group of model part key points exceeds a preset threshold;
if so, determine the group of model part key points corresponding to the group of image part key points as target part key points, and control the target part key points to be displaced.
Optionally, the changing module is specifically adapted to:
determine a target expression coefficient of the target part key points according to the first part expression coefficient and a preset model linkage rule, and determine the displacement direction and/or displacement amplitude of the target part key points according to the target expression coefficient;
wherein the model linkage rule is used to set the correspondence between the first part expression coefficient of each group of image part key points and the target expression coefficient of the corresponding model part key points.
Optionally, the model linkage rule is further used to set the associated part points associated with each group of model part key points, and the associated displacement rule of those associated part points;
the changing module is then further adapted to: determine the associated part points associated with the target part key points and the associated displacement rule of those associated part points, and control the associated part points to be displaced according to the associated displacement rule.
Optionally, the model part key points comprise eye part key points and/or face part key points; the associated part points comprise eyebrow part points, face part points and/or ear part points.
Optionally, the determining module is specifically adapted to:
acquire in real time the image data corresponding to the current frame image contained in a live video stream, and determine the first position information of each image key point contained in the image data corresponding to the current frame image.
Optionally, the determining module is specifically adapted to:
acquire in turn the image data corresponding to each frame image contained in a recorded video stream, and determine the first position information of each image key point contained in the image data corresponding to that frame image.
Optionally, the facial animation model is a skeletal animation model, and each model key point in the facial animation model corresponds to a preset skeletal part.
Optionally, the skeletal animation model further comprises an animal skeletal animation model, and the animal skeletal animation model includes a cat skeletal animation model, a horse skeletal animation model and a rabbit skeletal animation model.
According to yet another aspect of the present invention, there is provided a computing device, comprising: a processor, a memory, a communication interface and a communication bus, where the processor, the memory and the communication interface communicate with one another through the communication bus;
the memory is used to store at least one executable instruction, and the executable instruction causes the processor to perform the operations corresponding to the above control method for a facial animation model.
According to still another aspect of the present invention, there is provided a computer storage medium having at least one executable instruction stored therein, the executable instruction causing a processor to perform the operations corresponding to the above control method for a facial animation model.
According to the control method and device for a facial animation model, and the computing device, provided by the present invention, the first position information of each image key point contained in the acquired image data and the second position information of each model key point contained in the facial animation model are determined; the first position information and the second position information are then compared according to the correspondence between the image key points and the model key points, and the model key points contained in the facial animation model are controlled according to the comparison result to be displaced, so that the expression of the facial animation model changes. With this method, various facial animation models can be controlled, according to the facial expression of the face in the acquired image data, to make a similar expression or any of various other expressions and actions, adding interest. It can be seen that with this method a human facial expression can drive the facial animation model to make a corresponding expression or action, improving the interactivity between the model and the user and greatly increasing the entertainment value of the animation model.
The above description is only an overview of the technical solution of the present invention. In order that the technical means of the present invention may be understood more clearly and implemented in accordance with the contents of the specification, and in order that the above and other objects, features and advantages of the present invention may be more readily apparent, specific embodiments of the present invention are set forth below.
Description of the drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art from reading the following detailed description of the preferred embodiments. The drawings are only for the purpose of illustrating the preferred embodiments and are not to be considered a limitation of the present invention. Throughout the drawings, the same reference numerals denote the same parts. In the drawings:
Fig. 1 shows a flow chart of a control method for a facial animation model according to an embodiment of the invention;
Fig. 2 shows a flow chart of a control method for a facial animation model according to another embodiment of the invention;
Fig. 3 shows a functional block diagram of a control device for a facial animation model according to an embodiment of the invention;
Fig. 4 shows a structural schematic diagram of a computing device according to an embodiment of the invention.
Detailed description of the embodiments
Exemplary embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited by the embodiments set forth herein. Rather, these embodiments are provided so that the present disclosure will be understood more thoroughly, and so that the scope of the present disclosure can be fully conveyed to those skilled in the art.
Fig. 1 shows a flow chart of a control method for a facial animation model according to an embodiment of the invention. As shown in Fig. 1, the control method for a facial animation model specifically comprises the following steps:
Step S101: determining first position information of each image key point contained in the acquired image data, and second position information of each model key point contained in the facial animation model.
The acquired image data may be the image data corresponding to the current frame image of a live video stream obtained in real time, or the image data corresponding to each frame image of a recorded video stream obtained in turn. The image key points contained in the acquired image data may include feature points corresponding to the facial features or the facial contour; these image key points may be obtained by deep learning or in some other way. Once the image key points have been obtained, the first position information of each image key point can be determined. The first position information may be expressed as the position coordinates of the image key points in a coordinate system set in the image data, where the specific placement and configuration of that coordinate system can be chosen by those skilled in the art according to the actual situation; besides position coordinates, the first position information may also be expressed in other forms, determined according to how the image key points are represented. Correspondingly, the model key points contained in the facial animation model may include feature points corresponding to the facial features or facial contour of the animation model. The second position information may be expressed as the position coordinates of each model key point in a coordinate system set in the animation model, or in other forms, which are not limited here.
Step S102: determining, according to a preset key point mapping rule, the correspondence between the image key points and the model key points, and comparing the first position information of each image key point with the second position information of each model key point according to the correspondence.
The image key points may include multiple groups of image part key points divided according to facial parts; correspondingly, the model key points contained in the facial animation model may include multiple groups of model part key points divided according to model parts. By the type of facial part, the image part key points may be divided into image eye part key points, image face part key points and so on; correspondingly, by the type of model facial part, the model part key points may be divided into model eye part key points, model face part key points and so on. Divisions other than the above are also possible. Since the facial animation model is either of the same size as each acquired image or of a size obtained from it by uniform scaling, the preset key point mapping rule may, for either case, be a rule that puts each group of image part key points in one-to-one correspondence with the model part key points of the same type, for example a rule that the image eye part key points correspond to the model eye part key points. According to this correspondence, the first position information of each image key point is compared with the second position information of each model key point. When making this comparison, each image or the facial animation model may be scaled so as to obtain first position information and second position information that can be compared directly, and the two are then compared. Alternatively, the first part expression coefficient corresponding to each group of image part key points and the second part expression coefficient corresponding to its group of model part key points are determined, and the first part expression coefficient of each group of image part key points is then compared with the second part expression coefficient of the group of model part key points corresponding to that group. Besides the above comparison methods, those skilled in the art may also compare the first position information of the image key points with the second position information of the model key points in other ways.
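The scaling mentioned in step S102 can be sketched as a normalization applied to both sets of key points before comparison. The centroid-and-width scheme below is one assumed normalization, chosen only for illustration.

```python
from typing import List, Tuple

Point = Tuple[float, float]

def normalize(points: List[Point]) -> List[Point]:
    """Center the key points on their centroid and scale by bounding-box
    width, so faces of different sizes become directly comparable."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    xs = [x for x, _ in points]
    width = (max(xs) - min(xs)) or 1.0
    return [((x - cx) / width, (y - cy) / width) for x, y in points]

# A model face that is an exact 2x scaling of the image face normalizes identically.
image_pts = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0), (0.0, 2.0)]
model_pts = [(0.0, 0.0), (4.0, 0.0), (4.0, 4.0), (0.0, 4.0)]
print(normalize(image_pts) == normalize(model_pts))  # True
```

After normalization, first and second position information can be compared point by point regardless of the original sizes.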
Step S103: controlling, according to the comparison result, the model key points contained in the facial animation model to be displaced, so that the expression of the facial animation model changes.
Specifically, take as an example the comparison method in which the first part expression coefficient of each group of image part key points is compared with the second part expression coefficient of the corresponding group of model part key points. Whenever the first part expression coefficient differs from the second part expression coefficient at all, the model key points contained in the facial animation model may be controlled to be displaced so that the expression of the facial animation model changes; alternatively, only when the difference between the first part expression coefficient and the second part expression coefficient exceeds some preset threshold are the model key points contained in the facial animation model controlled to be displaced. The preset threshold may be set according to the required control precision. When high control precision is required, the preset threshold may be set to a small value. Taking the first part expression coefficient of the image eye part key points and the corresponding second part expression coefficient of the model eye part key points as an example, as soon as the expression of the eyes in the image differs even slightly from the expression of the eyes in the face model, the eye key points contained in the facial animation model and the model key points associated with them can be controlled to be displaced, so that the expression of the facial animation model changes. When low control precision suffices, the preset threshold may be set to a larger value: with the same example, only when the expression of the eyes in the image differs considerably from the expression of the eyes in the face model are the eye key points contained in the facial animation model and the model key points associated with them controlled to be displaced, so that the expression of the facial animation model changes. If the comparison is made in some other way, the model key points contained in the facial animation model may likewise be controlled, in a corresponding manner, to be displaced according to the specific comparison result, so that the expression of the facial animation model changes.
In addition, when the model key points contained in the facial animation model are controlled to be displaced so that the expression of the facial animation model changes, they may be displaced so that the expression of the facial animation model is the same as or similar to the facial expression in the image, thereby imitating the face in the image: when the face in the image blinks, the facial animation model blinks along with it; when the face in the image opens or closes its mouth, the facial animation model correspondingly opens or closes its mouth as well. Besides this imitating mode of displacement, those skilled in the art may also define their own rules for displacing the model key points contained in the facial animation model, so that the expression of the facial animation model changes in an arbitrary way; for example, when the face in the image cries, the face of the facial animation model may be made to smile while its ears perk up.
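The non-imitating behavior described above, such as a crying face producing a smiling model with raised ears, amounts to a remapping table between detected expressions and rendered model actions. The table below is purely illustrative; the patent does not specify any particular mapping.

```python
from typing import Dict

# Hypothetical expression remapping: detected image expression -> model actions.
EXPRESSION_REMAP: Dict[str, Dict[str, str]] = {
    "blink": {"eyes": "blink"},                    # direct imitation
    "open_mouth": {"mouth": "open"},               # direct imitation
    "cry": {"mouth": "smile", "ears": "perk_up"},  # custom, non-imitating mapping
}

def model_actions(detected: str) -> Dict[str, str]:
    """Look up what the animation model should do for a detected expression."""
    return EXPRESSION_REMAP.get(detected, {})

print(model_actions("cry"))  # {'mouth': 'smile', 'ears': 'perk_up'}
```

Imitation is simply the identity rows of the table; arbitrary expression changes are rows that map one expression to a different set of model actions.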
According to the control method for a facial animation model provided in this embodiment, the first position information of each image key point contained in the acquired image data and the second position information of each model key point contained in the facial animation model are determined; the first position information and the second position information are then compared according to the correspondence between the image key points and the model key points, and the model key points contained in the facial animation model are controlled according to the comparison result to be displaced, so that the expression of the facial animation model changes. With this method, various facial animation models can be controlled, according to the facial expression of the face in the acquired image data, to make a similar expression or any of various other expressions and actions, adding interest. It can be seen that with this method a human facial expression can drive the facial animation model to make a corresponding expression or action, improving the interactivity between the model and the user and greatly increasing the entertainment value of the animation model.
Fig. 2 shows the flow charts of the control method of FA Facial Animation model according to an embodiment of the invention.Such as Fig. 2 institutes Show, the control method of FA Facial Animation model specifically comprises the following steps:
Step S201: determining the first position information of each image key point included in the acquired image data, and the second position information of each model key point included in the facial animation model.
Specifically, the image data corresponding to the current frame image included in a live video stream may be acquired in real time, and the first position information of each image key point included in the current frame image data is determined; or the image data corresponding to each frame image included in a recorded video stream is acquired in turn, and the first position information of each image key point included in the image data corresponding to that frame image is determined. The image data corresponding to each current frame image included in the live video stream acquired in real time may be uploaded to a cloud video platform server, such as iQIYI, Youku, or Kuaishou, so that the cloud video platform server displays the video data on the cloud video platform. Optionally, it may also be uploaded to a cloud live-broadcast server, so that the cloud live-broadcast server pushes the current frame image in real time to each viewing subscriber client. Optionally, it may also be uploaded to a cloud official-account server, so that the cloud official-account server pushes the current frame image to the clients following the official account. When a user follows the official account, the cloud official-account server pushes the above video data to the clients following the official account; further, the cloud official-account server may also push, according to the viewing habits of the users following the official account, video data that matches a user's habits to that user's client. In short, the image data in the present invention may be acquired in real time or not in real time, and the present invention does not limit the specific application scenario.
In addition, the facial animation model provided in this embodiment may be a skeletal animation model or another type of facial animation model. In the case of a skeletal animation model, each model key point in the animation model corresponds to a preset bone site; for example, each model key point of the eyes in the animation model corresponds to a preset eye bone. The skeletal animation model may be a human skeletal animation model or one of various animal skeletal animation models, for example a cat skeletal animation model, a horse skeletal animation model, or a rabbit skeletal animation model. By setting the skeletal animation model as the skeletal animation model of various creatures, the types of controllable facial animation models are enriched, so that when the face makes an expression, various facial animation models can be controlled to make the same or a similar expression, as well as all kinds of other expressions and actions, which improves the entertainment value.
Specifically, each of the above image key points may include: feature points corresponding to the facial features or the face contour. In addition, the image key points included in the image data may further include multiple groups of image region key points divided according to facial regions. The specific division may be made according to the type of facial feature or face contour, for example into eye key points, nose region key points, mouth region key points, and so on; it may also be made according to the degree to which a facial feature or contour can change. Correspondingly, the model key points included in the facial animation model further include: multiple groups of model region key points divided according to model regions, and the model region key points may include eye key points and/or mouth region key points. Each of the above image key points may be obtained by a deep-learning method, or in other ways. For example, 95 key points, or any other number, may be arranged in advance at the positions of the facial features and the face contour of a frame image, and a coordinate system is then established to obtain the coordinate information, i.e. the first position information, of each of the above key points of this frame image; the origin of the above coordinate system can be specifically set by those skilled in the art according to the actual situation, and is not limited here. The first position information of the image key points included in the image data corresponding to the other frame images can also be obtained according to this method. Correspondingly, 95 key points, or any other number, can likewise be arranged at the positions of the facial features and the face contour in the facial animation model. In an optional implementation, in order to achieve a better visual effect, the facial animation model is a three-dimensional model composed of a 3D mesh; correspondingly, by obtaining the distribution of each of the above model key points on the 3D mesh, the three-dimensional coordinate information of each model key point corresponding to the three-dimensional model can be determined, and the second position information of each model key point can then be obtained from this information. It can be seen that, according to this step, even without a depth camera, the first position information of each image key point included in the acquired image data and the second position information of each model key point included in the facial animation model can be obtained, and control of the facial animation model is achieved by executing the subsequent steps, thereby reducing the hardware requirements on the device camera and making the method simpler and more practical.
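The position-information step above can be sketched as follows. This is a minimal illustration under stated assumptions: the helper names and data layout are not from the patent, which only requires that 2D coordinates be recorded for the image key points and 3D mesh coordinates for the model key points.

```python
def first_position_info(landmarks_2d):
    """First position information: {index: (x, y)} for each image key
    point detected in the frame (e.g. 95 points on features and contour)."""
    return {i: (float(x), float(y)) for i, (x, y) in enumerate(landmarks_2d)}

def second_position_info(mesh_vertices, model_key_point_indices):
    """Second position information: {index: (x, y, z)} obtained by sampling
    the model key points' positions on the 3D mesh of the animation model."""
    return {i: tuple(map(float, mesh_vertices[v]))
            for i, v in enumerate(model_key_point_indices)}

# Two toy landmarks and a toy mesh stand in for a real detector and model.
image_info = first_position_info([(10, 20), (30, 40)])
model_info = second_position_info([(0, 0, 0), (1, 2, 3)], [1])
```

Note that, consistent with the step above, only 2D landmarks are needed on the image side, so no depth camera is required.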
Step S202: determining, according to a preset key point mapping rule, the correspondence between each image key point and each model key point.
The above preset key point mapping rule may be a one-to-one mapping rule between each image key point and the model key point of the same type, or a one-to-one mapping rule between each group of image region key points and the group of model region key points of the same type; for example, the image eye key points correspond one-to-one to the model eye region key points, and the image mouth region key points correspond one-to-one to the model mouth region key points. According to the above preset key point mapping rule, the correspondence between each image key point and each model key point, and the correspondence between each group of image region key points and each group of model region key points, can be determined.
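A group-to-group mapping rule of the kind just described can be encoded as a simple lookup table. The group names below are illustrative assumptions; the patent leaves the concrete encoding open.

```python
# Hypothetical group-level key point mapping rule: each image region group
# corresponds one-to-one to a model region group of the same type.
GROUP_MAPPING = {
    "image_eye_left":  "model_eye_left",
    "image_eye_right": "model_eye_right",
    "image_mouth":     "model_mouth",
}

def model_group_for(image_group, mapping=GROUP_MAPPING):
    """Return the model region group corresponding to an image region group."""
    return mapping[image_group]
```

Because the rule is just data, a cross-species mapping (e.g. human cheek to fish cheek, as mentioned later in the description) only requires changing the table, not the control logic.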
Step S203: for every group of image region key points, determining the group of model region key points corresponding to that group of image region key points, and respectively determining the first position expression coefficient corresponding to that group of image region key points and the second position expression coefficient corresponding to that group of model region key points.
Specifically, the first distribution information of a group of image region key points may be determined according to the first position information of each of the above image key points, and the first position expression coefficient corresponding to the above first distribution information is then determined according to a preset first expression calculation rule. For example, 21 image key points may be set at the mouth region; according to the first position information of each image key point on the mouth, the first distribution information of this group of image mouth region key points can be determined. For instance, when the lips are closed, the image key points distributed on the touching surfaces of the upper and lower lips essentially coincide; as the lips gradually open, the distances and positions of the image key points distributed on the lips change correspondingly with the opening of the lips. After the first distribution information of this group of image mouth region key points is determined, the first position expression coefficient corresponding to the above first distribution information of the mouth region key points is determined according to the preset first expression calculation rule. According to the second position information of each of the above model key points, the second distribution information of the group of model region key points can be determined, and the second position expression coefficient corresponding to the above second distribution information can be determined according to a preset second expression calculation rule. The above first expression calculation rule and second expression calculation rule may be rules that calculate the first expression coefficient and the second expression coefficient according to the mutual positional relationships and distance relationships of the key points in the first distribution information and the second distribution information. The above first expression coefficient and second expression coefficient may be values between 0 and 1, taking the expression coefficients of a reference expression system as the standard; the reference expression system may be a blendshape expression system or another type of expression system. For example, when the mouth is opened to its maximum, the corresponding expression coefficient is 1; when the mouth is closed, the corresponding expression coefficient is 0.
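The mouth example above can be sketched as one possible expression calculation rule: the coefficient is the lip-opening distance normalised by a maximum opening and clamped to the blendshape-style range [0, 1]. The vertical-distance measure and the normalisation constant are assumptions for illustration; the patent only requires that the coefficient be derived from the key points' mutual positions and distances.

```python
def mouth_open_coefficient(upper_lip_y, lower_lip_y, max_opening):
    """Expression coefficient for the mouth region: 0.0 when the lip key
    points coincide (mouth closed), 1.0 at maximum opening, clamped to [0, 1]."""
    opening = abs(lower_lip_y - upper_lip_y)
    return max(0.0, min(1.0, opening / max_opening))
```

The same rule applied to image key points yields the first position expression coefficient, and applied to model key points yields the second, which makes the two directly comparable in step S204.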
Step S204: comparing the first position expression coefficient of every group of image region key points with the second position expression coefficient of the group of model region key points corresponding to that group of image region key points.
After the first position expression coefficient corresponding to each group of image region key points and the second position expression coefficient corresponding to the corresponding group of model region key points are respectively determined in step S203, in this step the first position expression coefficient of every group of image region key points is compared with the second position expression coefficient of the group of model region key points corresponding to that group of image region key points.
Step S205: for every group of image region key points, judging whether the difference between the first position expression coefficient of that group of image region key points and the second position expression coefficient of the group of model region key points corresponding to that group of image region key points exceeds a preset threshold.
When the facial expression of the face in the image is exactly the same as, or differs little from, the current facial expression of the face model, the first position expression coefficient of each group of image region key points in the image and the second position expression coefficient of each group of model region key points in the face model are equal or differ little; in that case there is no need to displace the model region key points to change the expression of the facial animation model. Optionally, when the degree of change of one or more kinds of organs in the image is the same as, or differs little from, the degree of change of the corresponding organs in the face model, for example when the opening-and-closing degree of the mouth of the face in the image is the same as, or differs little from, the opening-and-closing degree of the mouth region in the animation model, the first position expression coefficient of that group of image region key points and the second expression coefficient of the corresponding group of model region key points are equal or differ very little, and the expression in the model already matches the expression in the image without displacing that group of model region key points. Therefore, in order to set the precision with which the facial animation model is controlled in the above situations, it can be judged, for every group of image region key points, whether the difference between the first position expression coefficient of that group of image region key points and the second position expression coefficient of the group of model region key points corresponding to that group of image region key points exceeds a preset threshold. The preset threshold can be set by those skilled in the art according to the required control precision. For example, when the required control precision is relatively high, the above preset threshold may be set to 0 or a small value, so that the face model is controlled even when the first position expression coefficient of a group of image region key points and the second expression coefficient of its corresponding group of model region key points differ very little; conversely, when the required control precision is relatively low, the above preset threshold may be set to a larger value, so that the face model is controlled only when the first position expression coefficient of a group of image region key points and the second position expression coefficient of its corresponding group of model region key points differ substantially. By implementing this step, full or partial control of the animation model can be achieved flexibly according to the required control precision, the facial animation model can be controlled more accurately, and unnecessary operations are reduced. Moreover, optionally, the above preset threshold may also be multiple thresholds corresponding respectively to different model region key points. Specifically, the corresponding threshold size may be determined according to the degree of deformability of each model region. For example, for the eye region and the mouth region, the changes are relatively obvious, so the threshold can be set smaller to improve control precision; as another example, for the ear region and the cheek region, the changes are not obvious, so the threshold can be set larger, to reduce the performance loss caused by adjustment operations that the user would not notice.
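The per-region threshold comparison of step S205 can be sketched as below. The threshold values are illustrative assumptions only; as stated above, they would be chosen according to each region's degree of deformability.

```python
# Assumed per-region thresholds: small for highly mobile regions (precision),
# large for subtle regions (skip imperceptible adjustments).
THRESHOLDS = {"eyes": 0.02, "mouth": 0.02, "ears": 0.2, "cheeks": 0.2}

def target_regions(image_coeffs, model_coeffs, thresholds=THRESHOLDS):
    """Return the regions whose first/second expression coefficient
    difference exceeds that region's threshold (the target regions of S206)."""
    return [region for region, c in image_coeffs.items()
            if abs(c - model_coeffs[region]) > thresholds[region]]
```

Here only the mouth would be displaced: its coefficients differ by 0.6, while the ears' 0.1 difference falls under the coarser ear threshold, so no ear adjustment is performed.

```python
target_regions({"mouth": 0.8, "ears": 0.1}, {"mouth": 0.2, "ears": 0.0})
```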
Step S206: if so, determining the group of model region key points corresponding to that group of image region key points as target region key points, and controlling the target region key points to be displaced, so that the expression of the facial animation model changes.
If, for a group of image region key points, it is judged that the difference between the first position expression coefficient of that group of image region key points and the second position expression coefficient of the group of model region key points corresponding to that group of image region key points exceeds the preset threshold, the group of model region key points corresponding to that group of image region key points can be determined as target region key points, and the target region key points are controlled to be displaced, so that the expression of the facial animation model changes.
Specifically, the target expression coefficient of the target key points can be determined according to the above first position expression coefficient and a preset model linkage rule, and the displacement direction and displacement amplitude of the target key points are determined according to the above target expression coefficient. The model linkage rule is used to set the correspondence between the first position expression coefficient of each group of image region key points and the target expression coefficient of the corresponding model region key points. This correspondence may be a one-to-one equality relationship, or any of various correspondences set as needed by those skilled in the art. For example, the first position expression coefficient of each group of image region key points and the target expression coefficient of the corresponding model region key points may be set to equal values, so that each group of model region key points in the face model is controlled synchronously with its corresponding group of image region key points; for instance, when the mouth of the face in the image opens, the mouth in the face model is correspondingly controlled to open in equal proportion, and when the eyes of the face in the image open, the eyes in the face model are correspondingly controlled to open in equal proportion. Alternatively, the first position expression coefficient of each group of image region key points and the target expression coefficient of the corresponding model region key points may be set to unequal values; in this way, when the human eyes open in the image, the eyeballs in the face model may be controlled to rotate, or the eyes may be controlled to close; and when the mouth opens in the image, the mouth in the face model may be controlled to close. In short, the model linkage rule sets the correspondence between the first position expression coefficient of each group of image region key points and the target expression coefficient of the corresponding model region key points, but the present invention does not limit the specific way of setting this correspondence. After the target expression coefficient of the target key points is determined according to the above first position expression coefficient and the preset model linkage rule, the corresponding displacement direction and displacement amplitude of the target key points can be determined according to the above target expression coefficient and the expression coefficient calculation rule, so that the target region key points are controlled to be displaced according to the above displacement direction and displacement amplitude, and the expression of the facial animation model changes.
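One possible shape of the linkage step is sketched below: a rule maps the image coefficient to a target model coefficient (identity for equal-proportion control, other scales for exaggerated or inverted responses), and the signed displacement follows from the coefficient change. The linear scaling rule and the amplitude constant are assumptions; the patent deliberately leaves the correspondence open.

```python
def target_coefficient(image_coeff, scale=1.0):
    """Model linkage rule (assumed linear): scale=1.0 reproduces the image
    expression in equal proportion; other scales give different responses.
    Result is clamped to the expression-coefficient range [0, 1]."""
    return max(0.0, min(1.0, image_coeff * scale))

def displacement(current_coeff, target_coeff, amplitude_per_unit=10.0):
    """Signed displacement of the target key points along their movement
    axis, in model units; the sign gives the displacement direction."""
    return (target_coeff - current_coeff) * amplitude_per_unit
```

For example, with an image mouth coefficient of 0.7 and a model mouth coefficient of 0.2, the model mouth key points would be moved in the opening direction by five model units under these assumed constants.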
In addition, the above model linkage rule can further be used to set the associated region points associated with each group of model region key points and the association displacement rule of those associated region points. The associated region points include: eyebrow region points, face region points and/or ear region points. When the above target region key points are controlled to be displaced, the associated region points associated with the above target region key points and the association displacement rule of the associated region points can be determined, and the above associated region points are controlled to be displaced according to the association displacement rule. Specifically, the above association displacement rule can be configured according to the connection relationships and linkage rules between each group of model region key points and the associated region points. Since the connection relationships between the bones in the skeletal animation model can be preset according to the connection relationships of the facial bones of a human face or of various animals, the movement law of each bone in the skeletal animation model conforms to the movement law of the actual facial skeleton of a human or an animal, which improves the simulation effect of the facial skeletal animation model. Therefore, when the model face makes a laughing expression, due to the linkage rule of its bones, as the corners of the mouth turn up, the cheeks also lift and the shape of the eyes bends; when the face cries, due to the linkage rule of its bones, as the mouth turns down, the cheeks also move downward and the eyebrows knit. Once the association displacement rule is set according to the above linkage rules, when the target key points are displaced, the associated region points associated with the above target region key points and the association displacement rule of the associated region points can be determined, and the associated region points are controlled to be displaced according to the association displacement rule. For example, when the model mouth is the target key point and the mouth turns up, it can be determined that the associated region points associated with the target key point are eyebrow region points, face region points, and so on, and the specific association displacement rule of the above associated region points is determined, for example that as the mouth turns up, the eyebrow region points lift and the face region points lift; the above associated region points are then controlled to be displaced according to the above association displacement rule.
Besides setting the association displacement rule in the above manner, those skilled in the art can also customize the associated region points associated with each group of model region key points and the association displacement rule of those associated region points. For example, the associated region points associated with the model mouth region key points may also include ear region points, in addition to eyebrow region points and face region points. In this way, when the face in the image makes a laughing expression, as the corners of the mouth turn up, the ears in the animation model can be controlled to perk up or to perform other actions; and when the face in the image makes a crying expression, as the corners of the mouth droop, the ears in the animation model can be controlled to droop or to perform other actions. By customizing the associated region points associated with each group of model region key points and the association displacement rule of those associated region points, the associated region points and their association displacement rules are no longer restricted to the connection relationships and linkage rules of the bones in the skeletal animation model, but become more diversified and richer, which greatly enhances the entertainment value.
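A customized association displacement rule of the kind just described can be sketched as a rule table keyed by the target region. The rule entries and the trigger value are illustrative assumptions, not from the patent.

```python
# Hypothetical user-defined association rules: (associated point, action,
# trigger coefficient).  Note "ears" need no skeletal link to the mouth.
ASSOCIATION_RULES = {
    "mouth": [("ears", "perk_up", 0.6), ("eyebrows", "raise", 0.6)],
}

def associated_actions(target_region, expression_coeff, rules=ASSOCIATION_RULES):
    """Return the (associated point, action) pairs triggered when the target
    region's expression coefficient reaches each rule's trigger value."""
    return [(point, action)
            for point, action, trigger in rules.get(target_region, [])
            if expression_coeff >= trigger]
```

With these assumed rules, a strong smile (mouth coefficient 0.8) perks up the ears and raises the eyebrows, while a faint one (0.3) moves nothing but the mouth itself.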
The associated region points may be key points or non-key points. Therefore, by setting the association displacement rule, regions of the model that would not otherwise move can also undergo expression changes, thereby enriching the animation effect of the model. For example, a person's ears do not normally move, so if no association displacement rule is set, the ears in the model will likewise remain stationary. However, for some cartoon models (such as a rabbit model), the change of the ears can vividly reflect the corresponding expressive features; therefore, by setting the association displacement rule, the present invention enables regions that could not otherwise change to change along with the expression changes of the human face, thereby improving the visual effect.
In addition, in essence, the facial features included in the facial animation model may also be inconsistent with those of a human face; for example, the model may be a small-fish animation model, and correspondingly, the human cheeks may be mapped to the cheeks of the fish through the key point mapping rule, so as to achieve a better entertainment effect. In short, the key point mapping rule can flexibly define the correspondence between image key points and model key points as needed. Similarly, the model linkage rule and the association displacement rules it includes can also be set flexibly.
According to the control method of a facial animation model provided in this embodiment, the first position information of each image key point included in the acquired image data and the second position information of each model key point included in the facial animation model are determined; for every group of image region key points, the group of model region key points corresponding to that group of image region key points is determined, and the first position expression coefficient corresponding to that group of image region key points and the second position expression coefficient corresponding to that group of model region key points are respectively determined. Then, by comparing the first position expression coefficient of every group of image region key points with the second position expression coefficient of the corresponding group of model region key points and judging whether their difference exceeds a preset threshold, full or partial control of the animation model can be achieved flexibly, with the control precision set as needed via the preset threshold, and the facial animation model can be controlled more accurately. Finally, if it is judged that the above difference exceeds the preset threshold, the group of model region key points corresponding to that group of image region key points is determined as target region key points, and the target region key points are controlled to be displaced, so that the expression of the facial animation model changes. With this method, various facial animation models can be controlled, according to the facial expression of the face in the acquired image data, to make a similar expression or various other expressions and actions, which increases the entertainment value. It can be seen that, according to this method, a human facial expression can be used to drive a facial animation model to make a corresponding expression or action, improving the interactivity between the model and the user and significantly enhancing the entertainment value of the animation model.
Fig. 3 shows a functional block diagram of a control device of a facial animation model according to an embodiment of the present invention. As shown in Fig. 3, the device includes:
a determining module 31, adapted to determine the first position information of each image key point included in the acquired image data, and the second position information of each model key point included in the facial animation model;
a comparison module 32, adapted to determine, according to a preset key point mapping rule, the correspondence between each image key point and each model key point, and to compare, according to the correspondence, the first position information of each image key point with the second position information of each model key point;
a change module 33, adapted to control each model key point included in the facial animation model to be displaced according to the comparison result, so that the expression of the facial animation model changes.
Optionally, the image key points included in the image data further comprise: multiple groups of image region key points divided according to facial regions, and the model key points included in the facial animation model further comprise: multiple groups of model region key points divided according to model regions;
the comparison module 32 is then particularly adapted to:
determine, for every group of image region key points, the group of model region key points corresponding to that group of image region key points, and respectively determine the first position expression coefficient corresponding to that group of image region key points and the second position expression coefficient corresponding to that group of model region key points;
compare the first position expression coefficient of every group of image region key points with the second position expression coefficient of the group of model region key points corresponding to that group of image region key points.
Optionally, the comparison module 32 is particularly adapted to:
determine the first distribution information of the group of image region key points according to the first position information of each image key point, and determine the first position expression coefficient corresponding to the first distribution information according to a preset first expression calculation rule;
determine the second distribution information of the group of model region key points according to the second position information of each model key point, and determine the second position expression coefficient corresponding to the second distribution information according to a preset second expression calculation rule.
Optionally, the change module 33 is particularly adapted to:
judge, for every group of image region key points, whether the difference between the first position expression coefficient of that group of image region key points and the second position expression coefficient of the group of model region key points corresponding to that group of image region key points exceeds a preset threshold;
if so, determine the group of model region key points corresponding to that group of image region key points as target region key points, and control the target region key points to be displaced.
Optionally, the change module 33 is particularly adapted to:
determine the target expression coefficient of the target region key points according to the first position expression coefficient and a preset model linkage rule, and determine the displacement direction and/or displacement amplitude of the target region key points according to the target expression coefficient;
wherein the model linkage rule is used to set the correspondence between the first position expression coefficient of each group of image region key points and the target expression coefficient of the corresponding model region key points.
Optionally, the model linkage rule is further used to set the associated region points associated with each group of model region key points and the association displacement rule of the associated region points;
the change module is then further adapted to: determine the associated region points associated with the target region key points and the association displacement rule of the associated region points, and control the associated region points to be displaced according to the association displacement rule.
Optionally, the model region key points include eye key points and/or mouth region key points; the associated region points include: eyebrow region points, face region points and/or ear region points.
Optionally, the determining module 31 is particularly adapted to:
acquire, in real time, the image data corresponding to the current frame image included in a live video stream, and determine the first position information of each image key point included in the image data corresponding to the current frame image.
Optionally, the determining module 31 is particularly adapted to:
acquire, in turn, the image data corresponding to each frame image included in a recorded video stream, and determine the first position information of each image key point included in the image data corresponding to that frame image.
Optionally, the facial animation model is a skeletal animation model, and each model key point in the facial animation model corresponds to a preset bone site.
Optionally, the skeletal animation model further comprises an animal skeletal animation model, and the animal skeletal animation model includes a cat skeletal animation model, a horse skeletal animation model, and a rabbit skeletal animation model.
Fig. 4 shows a structural schematic diagram of a computing device according to an embodiment of the present invention; the specific embodiments of the present invention do not limit the concrete implementation of the computing device.
As shown in Fig. 4, the computing device may include: a processor 402, a communications interface 404, a memory 406, and a communication bus 408.
Wherein:
The processor 402, the communication interface 404, and the memory 406 communicate with one another through the communication bus 408.
The communication interface 404 is used for communicating with network elements of other devices, such as clients or other servers.
The processor 402 is used for executing a program 410, and may specifically perform the relevant steps in the above embodiments of the control method of the facial animation model.
Specifically, the program 410 may include program code, and the program code includes computer operation instructions.
The processor 402 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present invention. The one or more processors included in the computing device may be of the same type, such as one or more CPUs, or of different types, such as one or more CPUs together with one or more ASICs.
The memory 406 is used for storing the program 410. The memory 406 may include a high-speed RAM memory, and may further include a non-volatile memory, for example at least one disk memory.
The program 410 may specifically be used to cause the processor 402 to perform the following operations:
determining the first position information of each image key point contained in the acquired image data and the second position information of each model key point contained in the facial animation model;
determining, according to a preset key point mapping rule, the correspondence between each image key point and each model key point, and comparing, according to the correspondence, the first position information of each image key point with the second position information of each model key point;
controlling, according to the comparison result, each model key point contained in the facial animation model to be displaced, so that the expression of the facial animation model changes.
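As a rough illustration of the mapping, comparison, and displacement operations, the control loop might look like the following minimal sketch. All names, the mapping table, and the gain parameter are invented for illustration; the patent does not disclose an implementation.

```python
# A minimal sketch of the claimed control loop: map image key points to
# model key points, compare their positions, and displace the model points.

# Hypothetical key point mapping rule: image key point index -> model index.
KEY_POINT_MAP = {0: 0, 1: 1, 2: 2}

def control_model(image_points, model_points, gain=1.0):
    """Displace each mapped model key point toward its image key point.

    image_points / model_points: dict index -> (x, y) position.
    Returns the updated model key point positions.
    """
    updated = dict(model_points)
    for img_idx, mdl_idx in KEY_POINT_MAP.items():
        ix, iy = image_points[img_idx]      # first position information
        mx, my = model_points[mdl_idx]      # second position information
        dx, dy = ix - mx, iy - my           # comparison result
        updated[mdl_idx] = (mx + gain * dx, my + gain * dy)
    return updated

img = {0: (1.0, 2.0), 1: (3.0, 4.0), 2: (5.0, 6.0)}
mdl = {0: (0.0, 0.0), 1: (3.0, 4.0), 2: (4.0, 6.0)}
new_mdl = control_model(img, mdl)
```

With `gain=1.0` every mapped model key point is moved exactly onto its image key point; a smaller gain would give a partial, smoothed displacement.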
In an optional mode, the image key points contained in the image data further comprise multiple groups of image part key points divided according to facial parts, and the model key points contained in the facial animation model further comprise multiple groups of model part key points divided according to model parts;
the program 410 may then specifically be further used to cause the processor 402 to perform the following operations:
for each group of image part key points, determining the group of model part key points corresponding to that group of image part key points, and determining the first expression coefficient corresponding to that group of image part key points and the second expression coefficient corresponding to that group of model part key points;
comparing the first expression coefficient of each group of image part key points with the second expression coefficient of the group of model part key points corresponding to that group of image part key points.
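The grouping and group-wise coefficient comparison can be sketched as follows. The part names, index lists, and coefficient values are all illustrative assumptions, not taken from the patent.

```python
# Hypothetical grouping of key points by facial part: each image group is
# paired with the model group of the same part, and their expression
# coefficients are compared group by group.
IMAGE_GROUPS = {"left_eye": [0, 1, 2], "mouth": [3, 4, 5]}
MODEL_GROUPS = {"left_eye": [10, 11, 12], "mouth": [13, 14, 15]}

def compare_groups(image_coeffs, model_coeffs):
    """Return, per part, the difference between the first (image) and
    second (model) expression coefficients."""
    return {part: image_coeffs[part] - model_coeffs[part]
            for part in IMAGE_GROUPS}

diffs = compare_groups({"left_eye": 0.8, "mouth": 0.1},
                       {"left_eye": 0.2, "mouth": 0.1})
```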
In an optional mode, the program 410 may specifically be further used to cause the processor 402 to perform the following operations:
determining the first distribution information of the group of image part key points according to the first position information of each image key point, and determining the first expression coefficient corresponding to the first distribution information according to a preset first expression computation rule;
determining the second distribution information of the group of model part key points according to the second position information of each model key point, and determining the second expression coefficient corresponding to the second distribution information according to a preset second expression computation rule.
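One plausible instance of such an "expression computation rule" derives an eye-openness coefficient from the spatial distribution of the eye key points. This formula is an assumption for illustration only; the patent does not disclose a concrete rule.

```python
# Illustrative expression computation rule: eye openness as the ratio of
# vertical to horizontal spread of the eye key points.
def eye_openness(points):
    """points: list of (x, y) eye key points. Larger return values mean
    a wider-open eye."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    width = max(xs) - min(xs)
    height = max(ys) - min(ys)
    return height / width if width else 0.0

# A closed eye has a flat distribution; an open eye a taller one.
closed_eye = [(0, 0), (2, 0.1), (4, 0)]
open_eye = [(0, 0), (2, 2.0), (4, 0)]
```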
In an optional mode, the program 410 may specifically be further used to cause the processor 402 to perform the following operations:
for each group of image part key points, judging whether the difference between the first expression coefficient of that group of image part key points and the second expression coefficient of the group of model part key points corresponding to that group of image part key points exceeds a preset threshold value;
if so, determining the group of model part key points corresponding to that group of image part key points as target part key points, and controlling the target part key points to be displaced.
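The threshold test that selects the target part key points might be sketched as below. The threshold value and part names are invented for illustration.

```python
# Only the parts whose image/model coefficient difference exceeds the
# preset threshold are selected as targets and displaced.
THRESHOLD = 0.3

def select_targets(first_coeffs, second_coeffs, threshold=THRESHOLD):
    """Return the parts whose coefficient difference exceeds the threshold."""
    return [part for part in first_coeffs
            if abs(first_coeffs[part] - second_coeffs[part]) > threshold]

targets = select_targets({"mouth": 0.9, "left_eye": 0.4},
                         {"mouth": 0.2, "left_eye": 0.35})
```

Skipping groups whose difference stays under the threshold avoids jittering the model on small detection noise.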
In an optional mode, the program 410 may specifically be further used to cause the processor 402 to perform the following operations:
determining the target expression coefficient of the target part key point according to the first expression coefficient and a preset model linkage rule, and determining the displacement direction and/or displacement amplitude of the target part key point according to the target expression coefficient;
wherein the model linkage rule is used to set the correspondence between the first expression coefficient of each group of image part key points and the target expression coefficient of the corresponding model part key points.
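A toy version of such a linkage rule could map the image-side coefficient to a target expression coefficient and derive a displacement direction and amplitude from it. The clamping rule and the amplitude scale are invented choices; the patent only requires some correspondence table.

```python
# Hypothetical model linkage rule: first (image) expression coefficient
# -> target expression coefficient, here simply clamped to [0, 1].
def linkage_rule(first_coeff):
    return max(0.0, min(1.0, first_coeff))

def displacement(first_coeff, current_coeff, max_amplitude=10.0):
    """Direction (+1/-1/0) and amplitude for the target part key point."""
    target = linkage_rule(first_coeff)
    delta = target - current_coeff
    direction = (delta > 0) - (delta < 0)   # sign of the required change
    return direction, abs(delta) * max_amplitude

d, a = displacement(first_coeff=1.4, current_coeff=0.5)
```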
In an optional mode, the model linkage rule is further used to set the associated part points associated with each group of model part key points and the association displacement rule of those associated part points;
the program 410 may then specifically be further used to cause the processor 402 to perform the following operations:
determining the associated part point associated with the target part key point and the association displacement rule of that associated part point, and controlling the associated part point to be displaced according to the association displacement rule.
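The association displacement rule can be illustrated as follows: when a model part key point moves, its associated part points move by a scaled copy of the same displacement. The eyebrow-follows-eye pairing and the 0.5 ratio are assumptions for the sketch, not values from the patent.

```python
# Hypothetical association table: part -> [(associated part, ratio)].
ASSOCIATIONS = {"left_eye": [("left_eyebrow", 0.5)]}

def displace_with_associates(part, dy, positions):
    """Apply a vertical displacement dy to `part` and a scaled dy to each
    of its associated part points. `positions`: part name -> y coordinate."""
    moved = dict(positions)
    moved[part] += dy
    for assoc, ratio in ASSOCIATIONS.get(part, []):
        moved[assoc] += ratio * dy
    return moved

pos = displace_with_associates("left_eye", 2.0,
                               {"left_eye": 0.0, "left_eyebrow": 5.0})
```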
In an optional mode, the model part key points include eye key points and/or mouth part key points; the associated part points include eyebrow part points, face part points, and/or ear part points.
In an optional mode, the program 410 may specifically be further used to cause the processor 402 to: obtain, in real time, the image data corresponding to the current frame image contained in a live video stream, and determine the first position information of each image key point contained in the image data corresponding to the current frame image.
In an optional mode, the program 410 may specifically be further used to cause the processor 402 to: obtain, in turn, the image data corresponding to each frame image contained in a recorded video stream, and determine the first position information of each image key point contained in the image data corresponding to that frame image.
In an optional mode, the facial animation model is a skeletal animation model, and each model key point in the facial animation model corresponds to a preset skeletal site.
In an optional mode, the skeletal animation model further comprises an animal skeletal animation model, and the animal skeletal animation model includes a cat skeletal animation model, a horse skeletal animation model, and a rabbit skeletal animation model.
The algorithms and displays provided herein are not inherently related to any particular computer, virtual system, or other apparatus. Various general-purpose systems may also be used with the teachings herein. The structure required to construct such systems is apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It should be understood that various programming languages may be used to implement the content of the invention described herein, and the above description of a specific language is given to disclose the best mode of the invention.
In the description provided here, numerous specific details are set forth. It is to be appreciated, however, that the embodiments of the present invention may be practiced without these specific details. In some instances, well-known methods, structures, and techniques have not been shown in detail so as not to obscure the understanding of this description.
Similarly, it should be understood that in the above description of exemplary embodiments of the present invention, in order to simplify the disclosure and help understand one or more of the various inventive aspects, the features of the invention are sometimes grouped together into a single embodiment, figure, or description thereof. However, the disclosed method should not be construed as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of the invention.
Those skilled in the art will appreciate that the modules in the device of an embodiment may be adaptively changed and arranged in one or more devices different from that of the embodiment. The modules, units, or components in an embodiment may be combined into one module, unit, or component, and may furthermore be divided into a plurality of sub-modules, sub-units, or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract, and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract, and drawings) may be replaced by an alternative feature serving the same, equivalent, or similar purpose.
Furthermore, those skilled in the art will appreciate that although some embodiments described herein include certain features included in other embodiments rather than others, combinations of features of different embodiments are meant to be within the scope of the present invention and to form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The component embodiments of the present invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will understand that a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of the device for real-time processing of video data according to embodiments of the present invention. The present invention may also be implemented as a device or apparatus program (for example, a computer program and a computer program product) for performing part or all of the methods described herein. Such a program implementing the present invention may be stored on a computer-readable medium, or may take the form of one or more signals. Such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above embodiments illustrate rather than limit the invention, and those skilled in the art may design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The present invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a unit claim enumerating several devices, several of these devices may be embodied by one and the same item of hardware. The use of the words first, second, and third does not indicate any ordering; these words may be interpreted as names.

Claims (10)

1. A control method of a facial animation model, comprising:
determining the first position information of each image key point contained in acquired image data and the second position information of each model key point contained in the facial animation model;
determining, according to a preset key point mapping rule, the correspondence between each image key point and each model key point, and comparing, according to the correspondence, the first position information of each image key point with the second position information of each model key point;
controlling, according to the comparison result, each model key point contained in the facial animation model to be displaced, so that the expression of the facial animation model changes.
2. The method according to claim 1, wherein the image key points contained in the image data further comprise multiple groups of image part key points divided according to facial parts, and the model key points contained in the facial animation model further comprise multiple groups of model part key points divided according to model parts;
the step of comparing the first position information of each image key point with the second position information of each model key point then specifically comprises:
for each group of image part key points, determining the group of model part key points corresponding to that group of image part key points, and determining the first expression coefficient corresponding to that group of image part key points and the second expression coefficient corresponding to that group of model part key points;
comparing the first expression coefficient of each group of image part key points with the second expression coefficient of the group of model part key points corresponding to that group of image part key points.
3. The method according to claim 2, wherein the step of determining the first expression coefficient corresponding to that group of image part key points and the second expression coefficient corresponding to that group of model part key points specifically comprises:
determining the first distribution information of the group of image part key points according to the first position information of each image key point, and determining the first expression coefficient corresponding to the first distribution information according to a preset first expression computation rule;
determining the second distribution information of the group of model part key points according to the second position information of each model key point, and determining the second expression coefficient corresponding to the second distribution information according to a preset second expression computation rule.
4. The method according to claim 2 or 3, wherein the step of controlling, according to the comparison result, each model key point contained in the facial animation model to be displaced specifically comprises:
for each group of image part key points, judging whether the difference between the first expression coefficient of that group of image part key points and the second expression coefficient of the group of model part key points corresponding to that group of image part key points exceeds a preset threshold value;
if so, determining the group of model part key points corresponding to that group of image part key points as target part key points, and controlling the target part key points to be displaced.
5. The method according to claim 4, wherein the step of controlling the target part key points to be displaced specifically comprises:
determining the target expression coefficient of the target part key point according to the first expression coefficient and a preset model linkage rule, and determining the displacement direction and/or displacement amplitude of the target part key point according to the target expression coefficient;
wherein the model linkage rule is used to set the correspondence between the first expression coefficient of each group of image part key points and the target expression coefficient of the corresponding model part key points.
6. The method according to claim 5, wherein the model linkage rule is further used to set the associated part points associated with each group of model part key points and the association displacement rule of those associated part points;
the step of controlling the target part key points to be displaced then further comprises: determining the associated part point associated with the target part key point and the association displacement rule of that associated part point, and controlling the associated part point to be displaced according to the association displacement rule.
7. The method according to claim 6, wherein the model part key points include eye key points and/or mouth part key points; the associated part points include eyebrow part points, face part points, and/or ear part points.
8. A control device of a facial animation model, comprising:
a determining module, adapted to determine the first position information of each image key point contained in acquired image data and the second position information of each model key point contained in the facial animation model;
a comparison module, adapted to determine, according to a preset key point mapping rule, the correspondence between each image key point and each model key point, and to compare, according to the correspondence, the first position information of each image key point with the second position information of each model key point;
a changing module, adapted to control, according to the comparison result, each model key point contained in the facial animation model to be displaced, so that the expression of the facial animation model changes.
9. A computing device, comprising: a processor, a memory, a communication interface, and a communication bus, wherein the processor, the memory, and the communication interface communicate with one another through the communication bus;
the memory is used for storing at least one executable instruction that causes the processor to perform the operations corresponding to the control method of the facial animation model according to any one of claims 1-7.
10. A computer storage medium having stored therein at least one executable instruction that causes a processor to perform the operations corresponding to the control method of the facial animation model according to any one of claims 1-7.
CN201810145903.7A 2018-02-12 2018-02-12 Control method and device of facial animation model and computing equipment Active CN108335345B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810145903.7A CN108335345B (en) 2018-02-12 2018-02-12 Control method and device of facial animation model and computing equipment

Publications (2)

Publication Number Publication Date
CN108335345A true CN108335345A (en) 2018-07-27
CN108335345B CN108335345B (en) 2021-08-24

Family

ID=62929265

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810145903.7A Active CN108335345B (en) 2018-02-12 2018-02-12 Control method and device of facial animation model and computing equipment

Country Status (1)

Country Link
CN (1) CN108335345B (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104205171A * 2012-04-09 2014-12-10 Intel Corp System and method for avatar generation, rendering and animation
CN105900144A * 2013-06-07 2016-08-24 Faceshift AG Online modeling for real-time facial animation
US20160275341A1 * 2015-03-18 2016-09-22 Adobe Systems Incorporated Facial Expression Capture for Character Animation
CN107291214A * 2016-04-01 2017-10-24 Zhangying Information Technology (Shanghai) Co., Ltd. A method for driving face movement and an electronic device
CN107154069A * 2017-05-11 2017-09-12 Shanghai Weiman Network Technology Co., Ltd. A data processing method and system based on a virtual role
CN107679519A * 2017-10-27 2018-02-09 Beijing Guangnian Wuxian Technology Co., Ltd. A multi-modal interaction processing method and system based on a virtual human

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109117770A * 2018-08-01 2019-01-01 Jilin Pangu Network Technology Co., Ltd. Facial animation acquisition method, device and terminal device
CN109147017A * 2018-08-28 2019-01-04 Baidu Online Network Technology (Beijing) Co., Ltd. Dynamic image generation method, device, equipment and storage medium
CN109191548A * 2018-08-28 2019-01-11 Baidu Online Network Technology (Beijing) Co., Ltd. Animation method, device, equipment and storage medium
CN109147012A * 2018-09-20 2019-01-04 Qilin Hesheng Network Technology Co., Ltd. Image processing method and device
CN109147012B * 2018-09-20 2023-04-14 Qilin Hesheng Network Technology Co., Ltd. Image processing method and device
CN111507143A * 2019-01-31 2020-08-07 Beijing ByteDance Network Technology Co., Ltd. Expression image effect generation method and device and electronic equipment
CN110321008A * 2019-06-28 2019-10-11 Beijing Baidu Netcom Science and Technology Co., Ltd. Interaction method, device, equipment and storage medium based on AR model
CN110321008B * 2019-06-28 2023-10-24 Beijing Baidu Netcom Science and Technology Co., Ltd. Interaction method, device, equipment and storage medium based on AR model
US20210118148A1 * 2019-10-17 2021-04-22 Beijing Dajia Internet Information Technology Co., Ltd. Method and electronic device for changing faces of facial image
WO2021083133A1 * 2019-10-29 2021-05-06 Guangzhou Huya Technology Co., Ltd. Image processing method and device, equipment and storage medium
CN110941332A * 2019-11-06 2020-03-31 Beijing Baidu Netcom Science and Technology Co., Ltd. Expression driving method and device, electronic equipment and storage medium
CN110941333A * 2019-11-12 2020-03-31 Beijing ByteDance Network Technology Co., Ltd. Interaction method, device, medium and electronic equipment based on eye movement
CN111553286A * 2020-04-29 2020-08-18 Beijing Youle Technology Co., Ltd. Method and electronic device for capturing ear animation characteristics
CN111553286B * 2020-04-29 2024-01-26 Beijing Youle Technology Co., Ltd. Method and electronic device for capturing ear animation features
CN111614925A * 2020-05-20 2020-09-01 Guangzhou Shiyuan Electronics Co., Ltd. Figure image processing method and device, corresponding terminal and storage medium

Also Published As

Publication number Publication date
CN108335345B (en) 2021-08-24


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant