CN107103646A - Facial expression synthesis method and device - Google Patents
Facial expression synthesis method and device
- Publication number
- CN107103646A CN107103646A CN201710271893.7A CN201710271893A CN107103646A CN 107103646 A CN107103646 A CN 107103646A CN 201710271893 A CN201710271893 A CN 201710271893A CN 107103646 A CN107103646 A CN 107103646A
- Authority
- CN
- China
- Prior art keywords
- expression
- face
- target
- expression data
- vertex
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/44—Morphing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/20—Indexing scheme for editing of 3D models
- G06T2219/2021—Shape modification
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Graphics (AREA)
- Software Systems (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Geometry (AREA)
- Architecture (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Processing Or Creating Images (AREA)
- Cosmetics (AREA)
Abstract
The embodiments of the present application disclose a facial expression synthesis method and device that improve the realism of a synthesized target face bearing a target expression. The method includes: obtaining expression data of a pending expression of a target face; obtaining expression data of a first expression of a standard face and expression data of a second expression of the standard face; obtaining, from the expression data of the first expression and the expression data of the second expression of the standard face, the expression data that satisfies a preset condition; obtaining, using that expression data, first expression data of the target face when it bears the target expression; building an objective function according to preset rules and, taking the first expression data as a constraint, obtaining second expression data of the target face bearing the target expression when the value of the objective function satisfies the preset condition; and synthesizing, from the first expression data and the second expression data of the target face, the target face bearing the target expression.
Description
Technical field
The present application relates to the field of animation technology, and in particular to a facial expression synthesis method and device.
Background technology
In social interaction, facial expressions convey important and rich information. With the rapid development of computer technology, expression synthesis has attracted growing attention from researchers in fields such as graphics and image processing and computer-aided design. It has important applications in game entertainment, media production, virtual reality design, remote virtual communication, telemedicine, virtual video conferencing, and interactive virtual characters.
Facial expression synthesis based on three-dimensional models is a current research hotspot. Synthesizing a facial expression is essentially synthesizing facial expression data. By modeling a face in three dimensions, the position coordinate of each vertex of the face in three-dimensional space can be obtained. Under different expressions, the position coordinates of the face's vertices differ. For example, when a person smiles, positions such as the mouth, eyes, and nose usually all change: the mouth corners rise, the eyelids contract, and the nostrils widen. Thus, when a person is smiling, the position coordinates of the vertices at these positions differ from the corresponding position coordinates under other expressions (for example a neutral, angry, or sad expression).
At present, facial expression data is generally synthesized using blend shapes (blend shape deformation). The principle of blend shapes is as follows: first obtain the difference between the expression data of a first expression of a standard face and the expression data of a second expression of the standard face, then add this difference to (or subtract it from) the expression data of the current expression of the target face to obtain the expression data of the target expression of the target face, where the second expression matches the current expression and the first expression matches the target expression. Here, the standard face is the face of a standard person, serving as a reference; it may be simulated from some real face or abstracted. The target face is the user's face. For example, suppose the first expression of the standard face is a smile and the second expression is neutral, while the current expression of the target face is also neutral and its target expression is a smile. Then the expression data of the target face's smile is obtained by adding the difference between the expression data of the standard face's smile and that of its neutral expression to the expression data of the target face's neutral expression.
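The blend-shape principle described above can be sketched as follows. This is a minimal illustration, assuming vertex positions are stored as N×3 NumPy arrays (one row per vertex); all names and values are invented for the example.

```python
import numpy as np

def blend_shape_transfer(std_first, std_second, target_current):
    """Classic blend-shape transfer: add the standard face's expression
    delta to the target face's current expression.

    std_first      -- standard face under the first expression  (N, 3)
    std_second     -- standard face under the second expression (N, 3)
    target_current -- target face under its current expression  (N, 3)
    """
    delta = std_first - std_second  # per-vertex offset between the two standard expressions
    return target_current + delta   # same offset applied to the target face

# Toy example: a "face" of 2 vertices; smiling raises vertex 0 by 0.2 in y.
std_neutral = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
std_smile   = np.array([[0.0, 0.2, 0.0], [1.0, 0.0, 0.0]])
tgt_neutral = np.array([[0.1, 0.0, 0.0], [1.1, 0.0, 0.0]])
tgt_smile = blend_shape_transfer(std_smile, std_neutral, tgt_neutral)
```

Note that the same delta is applied regardless of the target face's shape, which is exactly the limitation the next paragraph describes.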
Once the first and second expressions are chosen, the difference between the expression data of the first expression and the expression data of the second expression of the standard face is fixed, so regardless of the target face's features, the baseline from which its expression data is computed is the same. Yet the facial shape of different users generally differs from that of the standard person — eyes of different sizes, eyebrows at different heights, different facial contours, and so on. As a result, the target-expression data obtained with the prior-art blend-shape method may differ considerably from the expression the target face would truly show when making the target expression, and may even violate physiology.
Summary of the invention
To solve the technical problems in the prior art, the present application provides a facial expression synthesis method and device that improve the realism of a synthesized target face bearing a target expression.
The present application provides a facial expression synthesis method, the method including:
obtaining expression data of a pending expression of a target face, the expression data of the pending expression being the position coordinates of each vertex when the target face bears the pending expression;
obtaining expression data of a first expression of a standard face and expression data of a second expression of the standard face, the expression data of the first expression being the position coordinates of each vertex when the standard face bears the first expression, and the expression data of the second expression being the position coordinates of each vertex when the standard face bears the second expression, where the vertices of the target face correspond one-to-one to the vertices of the standard face and the first expression of the standard face is identical to the pending expression of the target face;
obtaining, from the expression data of the first expression of the standard face and the expression data of the second expression of the standard face, the expression data that satisfies a preset condition, the expression data satisfying the preset condition being the position coordinates of the vertices whose motion amplitude is less than or equal to a threshold;
obtaining, using the expression data satisfying the preset condition, first expression data of the target face when it bears the target expression, where the target expression is identical to the second expression and the vertices of the first expression data match the vertices of the expression data satisfying the preset condition;
building an objective function according to preset rules and, taking the first expression data as a constraint, obtaining second expression data of the target face bearing the target expression when the value of the objective function satisfies a preset condition, the second expression data being the position coordinates of the target face's vertices other than the vertices of the first expression data;
synthesizing, from the first expression data and the second expression data of the target face, the target face bearing the target expression.
Optionally, the preset rules include at least one of the following:
a first rule: the overall facial deformation of the target face when changing from the pending expression to the target expression tends to be consistent with the overall facial deformation of the standard face when changing from the first expression to the second expression;
a second rule: the target face formed from the first expression data and the second expression data of the target face is smooth;
a third rule: the shape of the muscle lines of the target face when bearing the target expression tends to be consistent with the shape of the muscle lines of the standard face when bearing the second expression;
a fourth rule: the positional relationship between non-muscle lines of the target face when bearing the target expression tends to be consistent with the positional relationship between non-muscle lines of the standard face when bearing the second expression.
Optionally, if the preset rules include the first rule, the objective function is obtained from the difference between the motion amplitude of each triangular facet of the target face when changing from the pending expression to the target expression and the motion amplitude of the corresponding triangular facet of the standard face when changing from the first expression to the second expression, where a triangular facet is a face formed by three vertices, the triangular facets of the target face combine into the target face, and the triangular facets of the standard face combine into the standard face.
Optionally, if the preset rules include the second rule, the objective function is obtained from the difference between the deformation degree of a first triangular facet and the deformation degree of a second triangular facet of the target face when it changes from the pending expression to the target expression, where the first triangular facet and the second triangular facet are faces each formed by three vertices, both combine into the target face, and the two facets are adjacent.
Optionally, if the preset rules include the third rule, the objective function is obtained from the difference in direction between a first vector formed by the vertex sequence corresponding to a muscle line of the target face when bearing the target expression and a second vector formed by the vertex sequence corresponding to the muscle line of the standard face when bearing the second expression.
Optionally, if the preset rules include the fourth rule, the objective function is obtained from a first position-coordinate difference of the target face and a second position-coordinate difference of the standard face when bearing the second expression; the first position-coordinate difference is the position-coordinate difference between corresponding vertices of the vertex sequence of a first non-muscle line and the vertex sequence of a second non-muscle line of the target face when bearing the target expression; the second position-coordinate difference is the position-coordinate difference between corresponding vertices of the vertex sequence of the first non-muscle line and the vertex sequence of the second non-muscle line of the standard face when bearing the second expression; the vertices of the vertex sequence of the first non-muscle line correspond one-to-one to the vertices of the vertex sequence of the second non-muscle line.
The present application also provides an expression synthesis device, the device including: a pending expression data acquiring unit, a standard face expression data acquiring unit, a preset condition expression data acquiring unit, a first expression data acquiring unit, a second expression data acquiring unit, and an expression synthesis unit; wherein,
the pending expression data acquiring unit is configured to obtain expression data of a pending expression of a target face, the expression data of the pending expression being the position coordinates of each vertex when the target face bears the pending expression;
the standard face expression data acquiring unit is configured to obtain expression data of a first expression of a standard face and expression data of a second expression of the standard face, the expression data of the first expression being the position coordinates of each vertex when the standard face bears the first expression, and the expression data of the second expression being the position coordinates of each vertex when the standard face bears the second expression, where the vertices of the target face correspond one-to-one to the vertices of the standard face and the first expression of the standard face is identical to the pending expression of the target face;
the preset condition expression data acquiring unit is configured to obtain, from the expression data of the first expression of the standard face and the expression data of the second expression of the standard face, the expression data that satisfies a preset condition, the expression data satisfying the preset condition being the position coordinates of the vertices whose motion amplitude is less than or equal to a threshold;
the first expression data acquiring unit is configured to obtain, using the expression data satisfying the preset condition, first expression data of the target face when it bears a target expression, where the target expression is identical to the second expression and the vertices of the first expression data match the vertices of the expression data satisfying the preset condition;
the second expression data acquiring unit is configured to build an objective function according to preset rules and, taking the first expression data as a constraint, obtain second expression data of the target face bearing the target expression when the value of the objective function satisfies a preset condition, the second expression data being the position coordinates of the target face's vertices other than the vertices of the first expression data;
the expression synthesis unit is configured to synthesize, from the first expression data and the second expression data of the target face, the target face bearing the target expression.
Optionally, the preset rules include at least one of the following:
a first rule: the overall facial deformation of the target face when changing from the pending expression to the target expression tends to be consistent with the overall facial deformation of the standard face when changing from the first expression to the second expression;
a second rule: the target face formed from the first expression data and the second expression data of the target face is smooth;
a third rule: the shape of the muscle lines of the target face when bearing the target expression tends to be consistent with the shape of the muscle lines of the standard face when bearing the second expression;
a fourth rule: the positional relationship between non-muscle lines of the target face when bearing the target expression tends to be consistent with the positional relationship between non-muscle lines of the standard face when bearing the second expression.
Optionally, if the preset rules include the first rule, the objective function is obtained from the difference between the motion amplitude of each triangular facet of the target face when changing from the pending expression to the target expression and the motion amplitude of the corresponding triangular facet of the standard face when changing from the first expression to the second expression, where a triangular facet is a face formed by three vertices, the triangular facets of the target face combine into the target face, and the triangular facets of the standard face combine into the standard face.
Optionally, if the preset rules include the second rule, the objective function is obtained from the difference between the deformation degree of a first triangular facet and the deformation degree of a second triangular facet of the target face when it changes from the pending expression to the target expression, where the first triangular facet and the second triangular facet are faces each formed by three vertices, both combine into the target face, and the two facets are adjacent.
Optionally, if the preset rules include the third rule, the objective function is obtained from the difference in direction between a first vector formed by the vertex sequence corresponding to a muscle line of the target face when bearing the target expression and a second vector formed by the vertex sequence corresponding to the muscle line of the standard face when bearing the second expression.
Optionally, if the preset rules include the fourth rule, the objective function is obtained from a first position-coordinate difference of the target face and a second position-coordinate difference of the standard face when bearing the second expression; the first position-coordinate difference is the position-coordinate difference between corresponding vertices of the vertex sequence of a first non-muscle line and the vertex sequence of a second non-muscle line of the target face when bearing the target expression; the second position-coordinate difference is the position-coordinate difference between corresponding vertices of the vertex sequence of the first non-muscle line and the vertex sequence of the second non-muscle line of the standard face when bearing the second expression; the vertices of the vertex sequence of the first non-muscle line correspond one-to-one to the vertices of the vertex sequence of the second non-muscle line.
In the present application, the expression data of the first expression of the standard face is compared with the expression data of its second expression to pick out the vertices whose motion amplitude is less than or equal to a threshold; the vertices of the target face matching those vertices of the standard face are then found, and the expression data corresponding to this part of the target face's vertices, i.e. the first expression data, is obtained. An objective function is then built according to preset rules and, with the first expression data as a constraint, the second expression data of the target face bearing the target expression is obtained when the value of the objective function satisfies a preset condition. Because the effect of the objective function is that, for each expression the target face makes, the trend of its change follows that of the standard face as closely as possible while still taking the characteristics of the particular target face into account, the target face bearing the target expression synthesized from the first expression data and the second expression data is, relative to the prior art, closer to the real target face bearing that expression.
Brief description of the drawings
To illustrate the embodiments of the present application or the technical solutions of the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a flow chart of a facial expression synthesis method provided by Embodiment 1 of the present application;
Fig. 2 is a schematic diagram of the standard face in Embodiment 1 when bearing the first expression;
Fig. 3 is a schematic diagram of the standard face in Embodiment 1 when bearing the second expression;
Fig. 4 is a schematic diagram of composing the target face or the standard face from triangular facets in Embodiment 1;
Fig. 5 is a structural block diagram of an expression synthesis device provided by Embodiment 2 of the present application.
Detailed description of the embodiments
To help those skilled in the art better understand the solution of the present application, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present application rather than all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
Embodiment 1:
Referring to Fig. 1, the figure is a flow chart of a facial expression synthesis method provided by Embodiment 1 of the present application.
The facial expression synthesis method provided by this embodiment includes the following steps:
Step S101: Obtain the expression data of the pending expression of the target face.
In this embodiment, the target face is the face of a target object. The target object may be any object that can bear an expression; it is not limited to a real person or real animal and may also be a virtual person, a virtual animal, and so on. For convenience, in this embodiment the target face is taken to be a real human face.
The pending expression of the target face can be regarded as the reference expression of the target face, because the target expression of the target face is synthesized from the expression data of the pending expression. The pending expression may be neutral, crying, laughing, angry, and so on. The pending expression and the target expression differ, but they may belong to the same kind of expression (for example, smiling and laughing both belong to laughter) or to different kinds.
In this embodiment, the target face can be built in a three-dimensional coordinate system; the target face is composed of a number of vertices, each of which has its own position coordinate in that coordinate system. The expression data of the pending expression is thus the position coordinates of each vertex when the target face bears the pending expression. How the target face is built belongs to the common knowledge of those skilled in the art and is not described here.
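As an illustrative sketch (the mini-mesh below is invented, not from the patent), the per-vertex expression data just described can be held as an N×3 array of position coordinates:

```python
import numpy as np

# Hypothetical mini-mesh: each row is the (x, y, z) position coordinate of
# one vertex of the target face under its pending expression.
pending_expression = np.array([
    [0.00, 0.00, 0.00],   # vertex 0, e.g. a mouth-corner vertex
    [0.50, 0.10, 0.05],   # vertex 1
    [1.00, 0.00, 0.00],   # vertex 2
])

n_vertices = pending_expression.shape[0]   # number of vertices in the mesh
```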
Step S102: Obtain the expression data of the first expression of the standard face and the expression data of the second expression of the standard face.
In this embodiment, the standard face is the face of a standard object, where a standard object is any object that can bear an expression, not limited to a real person or real animal and possibly a virtual person, animal, and so on. The standard face and the target face should match: if the target face is a human face, the standard face should also be a human face; if the target face is the face of a cat, the standard face should also be the face of a cat. For convenience, in this embodiment the standard face is taken to be a real human face.
The first expression of the standard face is identical to the pending expression of the target face, and the second expression of the standard face is identical to the target expression of the target face. The basic idea of this embodiment is to obtain the expression data of the target expression of the target face from the expression data of the first and second expressions of the standard face together with the expression data of the pending expression of the target face.
The standard face is likewise built in a three-dimensional coordinate system and is likewise composed of a number of vertices. The expression data of the first expression is the position coordinates of each vertex when the standard face bears the first expression, and the expression data of the second expression is the position coordinates of each vertex when the standard face bears the second expression. It should be noted that the vertices of the standard face and the vertices of the target face are in one-to-one correspondence. For example, if the nose of the standard face is composed of 50 vertices with particular positional relationships among them, then the nose of the target face should also be composed of 50 vertices whose positional relationships are similar to those among the standard face's vertices. In practice, each vertex can be assigned an identifier, for example a number, and correspondence can then be established by matching vertex numbers.
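The numbering-based correspondence described above can be sketched as follows; the 50-vertex nose is the patent's own example, while the code itself is purely illustrative.

```python
# Standard face and target face share the same vertex numbering, so
# correspondence is simply "same number on both meshes".
standard_nose_ids = list(range(50))   # e.g. vertices 0..49 form the standard face's nose
target_nose_ids   = list(range(50))   # the target face uses the same numbering

# Corresponding vertex pairs are therefore formed number-by-number.
pairs = list(zip(standard_nose_ids, target_nose_ids))
```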
Step S103: Obtain, from the expression data of the first expression of the standard face and the expression data of the second expression of the standard face, the expression data that satisfies the preset condition.
The expression data satisfying the preset condition is the position coordinates of the vertices whose motion amplitude is less than or equal to a threshold.
Step S104: Obtain, using the expression data satisfying the preset condition, the first expression data of the target face when it bears the target expression.
The target expression is identical to the second expression, and the vertices of the first expression data match the vertices of the expression data satisfying the preset condition.
In this embodiment, the expression data of the first expression of the standard face can be compared with the expression data of its second expression to pick out the vertices whose motion amplitude is less than or equal to the threshold; the vertices of the target face matching this part of the standard face's vertices are then found, and the expression data corresponding to this part of the target face's vertices, i.e. the first expression data, is obtained. In practice, the threshold is a small value. That is, when the standard face changes from the first expression to the second expression, if some vertices move only very slightly, the motion amplitudes of those vertices can simply be added to (or subtracted from) the expression data of the pending expression of the target face to obtain the expression data of this part of the vertices under the target expression. In other words, after the low-motion vertices are found, the expression data corresponding to this part of the pending expression's vertices is assigned directly to the same vertices of the target expression. The now-known expression data of this part of the target expression's vertices serves as the constraint term, and the expression data of the remaining vertices (i.e. the second expression data) is solved via the objective function described below.
For example, consider the j-th vertex v of the standard face. When the standard face changes from the first expression to the second expression, check whether the position coordinate of v changes; if the amplitude of change (i.e. the motion amplitude) v_diff is less than or equal to some threshold ε, vertex v is placed in the hard-constraint term C_hard. That is, v is constrained when

v_diff = ||v_j^{s2} − v_j^{s1}|| ≤ ε,

where v_j^{s2} denotes the position coordinate of the j-th vertex v of the standard face when bearing the second expression, and v_j^{s1} its position coordinate when bearing the first expression. Referring to Fig. 2(a), a schematic diagram of the standard face bearing the first expression, and Fig. 2(b), a schematic diagram of the standard face bearing the second expression.

Similarly, v_j^{t2} denotes the position coordinate of the j-th vertex v of the target face when bearing the target expression, and v_j^{t1} its position coordinate when bearing the pending expression. Referring to Fig. 3, a schematic diagram of the target face bearing the pending expression.

Placing v in C_hard means fixing the value of v_j^{t2}.

Alternatively, when v_diff ≤ ε, one may directly set v_j^{t2} = v_j^{t1}. Compared with this alternative, the former scheme involves more computation but produces a more accurate synthesized target face. It should be noted that the first expression data corresponding to the constraint term still participates in the computation of the objective function below, because these data influence the other data.
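A sketch of this constraint-selection step, under assumptions: positions are N×3 NumPy arrays, the threshold eps and all helper names are illustrative, and the first (more accurate) scheme is used, fixing each constrained vertex to its pending position plus the small standard-face motion.

```python
import numpy as np

def hard_constraint_vertices(std_first, std_second, eps):
    """Indices of vertices whose motion amplitude between the standard
    face's first and second expressions is at most eps."""
    vdiff = np.linalg.norm(std_second - std_first, axis=1)  # per-vertex motion amplitude
    return np.where(vdiff <= eps)[0]

def first_expression_data(std_first, std_second, tgt_pending, eps):
    """Fix each low-motion vertex of the target expression to its pending
    position plus the (small) standard-face motion; the remaining
    vertices are left for the objective function to solve."""
    idx = hard_constraint_vertices(std_first, std_second, eps)
    fixed = tgt_pending[idx] + (std_second[idx] - std_first[idx])
    return idx, fixed

# Toy data: vertex 0 moves a lot between the standard expressions,
# vertex 1 barely moves, so only vertex 1 becomes a hard constraint.
std_first   = np.array([[0.0, 0.0, 0.0], [1.0,   0.0, 0.0]])
std_second  = np.array([[0.0, 0.5, 0.0], [1.001, 0.0, 0.0]])
tgt_pending = np.array([[0.1, 0.0, 0.0], [1.1,   0.0, 0.0]])
idx, fixed = first_expression_data(std_first, std_second, tgt_pending, eps=0.01)
```

The simpler alternative scheme would return `tgt_pending[idx]` unchanged instead of adding the standard-face motion.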
Step S105: Build an objective function according to preset rules and, taking the first expression data as a constraint, obtain the second expression data of the target face bearing the target expression when the value of the objective function satisfies the preset condition.
In this embodiment, an objective function is built according to preset rules; the objective function is used to solve for the second expression data of the target face bearing the target expression, i.e. the position coordinates of the target face's vertices other than the vertices of the first expression data. The effect of the objective function is that, for each expression the target face makes, the trend of its change follows that of the standard face as closely as possible while still taking the characteristics of the particular target face into account.
In this embodiment, the preset rules can include at least one of the following four rules:
First rule: the overall facial deformation of the target face when changing from the pending expression to the target expression tends to be consistent with the overall facial deformation of the standard face when changing from the first expression to the second expression.
Referring to Fig. 4, besides being represented by vertices in a three-dimensional coordinate system, the target face and the standard face can also be represented by triangular facets, each composed of three vertices. These are not three arbitrary vertices but three vertices used to compose the target face or the standard face; that is, the triangular facets of the target face are used to compose the target face, and the triangular facets of the standard face are used to compose the standard face. Each edge of a triangular facet has exactly two vertices. The deformation of a face during an expression change can be regarded as realized by the motion of each triangular facet, so the overall deformation degree of a face during an expression change can be represented by the motion amplitudes of its triangular facets.
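The per-facet formulas below use an auxiliary fourth vertex. A common construction (shown here as a sketch, assuming the usual choice of offsetting vertex 1 along the facet's unit normal) yields a point whose connecting line is perpendicular to the facet at unit distance:

```python
import numpy as np

def fourth_vertex(v1, v2, v3):
    """Auxiliary vertex v4: offset v1 along the triangle's unit normal,
    so the line v4 - v1 is perpendicular to the facet with unit length."""
    n = np.cross(v2 - v1, v3 - v1)
    return v1 + n / np.linalg.norm(n)

v1, v2, v3 = np.array([0., 0., 0.]), np.array([1., 0., 0.]), np.array([0., 1., 0.])
v4 = fourth_vertex(v1, v2, v3)
print(v4)  # -> [0. 0. 1.]
```

The fourth vertex gives each facet three linearly independent edge vectors, so its motion can be described by an invertible 3×3 matrix.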
Making the overall facial deformation degree of the target face, when changing from the pending expression to the target expression, tend to be consistent with that of the standard face when changing from the first expression to the second expression aims at making the finally obtained target expression of the target face the same as the second expression of the standard face, for example, both smiling or both crying.
To embody this idea, the objective function can be obtained from the difference between the motion amplitude of each triangular facet of the target face when changing from the pending expression to the target expression and the motion amplitude of the corresponding triangular facet of the standard face when changing from the first expression to the second expression.
For example, the objective function can be:

E_motion = Σ ‖Q − T‖

where Q represents the motion amplitude of a triangular facet of the target face when changing from the pending expression to the target expression: Q = Ṽ (V′)⁻¹, with V′ = [v′₂ − v′₁, v′₃ − v′₁, v′₄ − v′₁] and Ṽ = [ṽ₂ − ṽ₁, ṽ₃ − ṽ₁, ṽ₄ − ṽ₁]. Here v′₁, v′₂ and v′₃ are the position coordinates of vertices 1, 2 and 3 of the triangular facet of the target face with the pending expression, and v′₄ is the position coordinates of vertex 4 of the target face with the pending expression; the line from vertex 4 to vertex 1 (or vertex 2, vertex 3) is perpendicular to the facet, and the distance between vertex 4 and vertex 1 (or vertex 2, vertex 3) is unit length. ṽ₁, ṽ₂, ṽ₃ and ṽ₄ are the position coordinates of the corresponding vertices of the target face with the target expression, with vertex 4 defined in the same way.

T represents the motion amplitude of the corresponding triangular facet of the standard face when changing from the first expression to the second expression: T = V̂ V⁻¹, with V = [v₂ − v₁, v₃ − v₁, v₄ − v₁] and V̂ = [v̂₂ − v̂₁, v̂₃ − v̂₁, v̂₄ − v̂₁]. Here v₁, v₂ and v₃ are the position coordinates of vertices 1, 2 and 3 of the triangular facet of the standard face with the first expression, v₄ is the position coordinates of vertex 4 of the standard face with the first expression, and v̂₁, v̂₂, v̂₃ and v̂₄ are the position coordinates of the corresponding vertices of the standard face with the second expression, with vertex 4 defined in the same way in each case.

E_motion = Σ ‖Q − T‖ means that the norms of the differences between the corresponding Q and T of all triangular facets are summed.
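A minimal sketch of the E_motion term, assuming the deformation of a facet is the 3×3 transform mapping its edge matrix in one pose to the other (names and test data are illustrative):

```python
import numpy as np

def edge_matrix(v1, v2, v3, v4):
    # Columns are the edge vectors [v2 - v1, v3 - v1, v4 - v1].
    return np.column_stack([v2 - v1, v3 - v1, v4 - v1])

def deformation(src, dst):
    """3x3 transform mapping a facet's edges in the source pose to the
    destination pose (src/dst are 4x3 arrays of vertices v1..v4)."""
    return edge_matrix(*dst) @ np.linalg.inv(edge_matrix(*src))

# One facet: the standard face doubles the facet, and the target facet
# undergoes the same motion, so Q = T and the E_motion penalty is zero.
src = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
Q = deformation(src, src * 2.0)   # target: pending -> target expression
T = deformation(src, src * 2.0)   # standard: first -> second expression
print(np.linalg.norm(Q - T))      # -> 0.0
```

Summing ‖Q − T‖ over all facets penalises target-face facets whose motion deviates from the corresponding standard-face facets.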
Second rule: the target face formed from the first expression data and the second expression data of the target face is smooth.
In the present embodiment, the purpose of smoothing the target face is to suppress noise mutations of pixels in the image. Smoothing the target face can also be carried out using triangular facets: if every pair of adjacent triangular facets is smooth, the target face spliced from them is also smooth. Adjacent triangular facets are two triangular facets that share an edge.
To embody this idea, the objective function can be obtained from the difference between the deformation degree of a first triangular facet of the target face and that of a second triangular facet when the target face changes from the pending expression to the target expression; the first and second triangular facets are each composed of three vertices, are used to compose the target face, and are adjacent to each other.

For example, the objective function can be:

E_smooth = Σ_{m=1}^{N_triangle} Σ_{n=adj(m)} ‖T_m − T_n‖

where n = adj(m) means that triangular facet n of the target face is adjacent to triangular facet m of the target face, T_m represents the T (mentioned above) corresponding to facet m, T_n represents the T corresponding to facet n, and N_triangle represents the number of triangular facets of the target face.
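The smoothness term can be sketched as a sum over each facet and its neighbours of the norm of the difference of their per-facet transforms (the adjacency map and test values are illustrative assumptions):

```python
import numpy as np

def e_smooth(transforms, adjacency):
    """Sum ||T_m - T_n|| over each facet m and its neighbours n = adj(m)."""
    total = 0.0
    for m, neighbours in adjacency.items():
        for n in neighbours:
            total += np.linalg.norm(transforms[m] - transforms[n])
    return total

# Facets 0 and 1 deform identically; facet 2 deforms twice as much,
# so the shared edges with facet 1 are penalised.
T = {0: np.eye(3), 1: np.eye(3), 2: 2 * np.eye(3)}
adj = {0: [1], 1: [0, 2], 2: [1]}
penalty = e_smooth(T, adj)
print(penalty)
```

A positive penalty here pulls adjacent facets toward similar deformations, which is exactly the splicing-smoothness property the second rule asks for.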
Third rule: the shape of the muscle lines of the target face with the target expression tends to be consistent with the shape of the muscle lines of the standard face with the second expression.
The muscle lines of the standard face with the second expression conform to physiological structure; for example, according to the physiological structure of a face, its muscle lines do not intersect one another. The purpose of making the muscle-line shape of the target face with the target expression tend to be consistent with that of the standard face with the second expression is that, after the target face with the target expression is synthesized, its muscle lines also conform to physiological structure, for example, do not intersect.
A muscle line of a face can be represented by a vertex sequence. Therefore, the objective function can be obtained from the direction difference between a first vector formed from the vertex sequence corresponding to a muscle line of the target face and a second vector formed from the vertex sequence corresponding to that muscle line of the standard face.
For example, suppose a vector (c₁, c₂, c₃, …, c_K) is formed from a vertex sequence Contour = {c₁, c₂, c₃, …, c_K}, where c₁, c₂, c₃, …, c_K are the position coordinates of the vertices in the sequence. The first vector, formed from the vertex sequence corresponding to a muscle line of the target face with the target expression, is (c̃₁, c̃₂, c̃₃, …, c̃_K); the second vector, formed from the vertex sequence corresponding to that muscle line of the standard face with the second expression, is (ĉ₁, ĉ₂, ĉ₃, …, ĉ_K).

Let

Dir_{k−1} = (c_k − c_{k−1}) / ‖c_k − c_{k−1}‖

Then the objective function is

E_muscle = Σ_{k=2}^{K} ‖D̃ir_{k−1} − D̂ir_{k−1}‖

where Dir_{k−1} is an intermediate parameter, c_k is the k-th vertex in the vertex sequence Contour, c_{k−1} is the (k−1)-th vertex in Contour, and 1 < k ≤ K. D̃ir_{k−1} refers to the Dir_{k−1} of the target face with the target expression, and D̂ir_{k−1} refers to the Dir_{k−1} of the standard face with the second expression.
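A sketch of the muscle-line term under one plausible reading of the direction vectors, namely unit directions between consecutive contour vertices (the contours below are illustrative):

```python
import numpy as np

def directions(contour):
    """Unit directions Dir_{k-1} = (c_k - c_{k-1}) / ||c_k - c_{k-1}||
    along a muscle-line vertex sequence (K x 3 in, (K-1) x 3 out)."""
    diffs = np.diff(contour, axis=0)
    return diffs / np.linalg.norm(diffs, axis=1, keepdims=True)

def e_muscle(target_contour, standard_contour):
    # Sum of norms of per-segment direction differences.
    d = directions(target_contour) - directions(standard_contour)
    return np.sum(np.linalg.norm(d, axis=1))

# Same line shape at a different scale: directions agree, penalty is zero,
# so the term constrains shape, not size.
std = np.array([[0., 0, 0], [1, 0, 0], [2, 0, 0]])
tgt = std * 3.0
val = e_muscle(tgt, std)
print(val)  # -> 0.0
```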
Fourth rule: the position relationship between non-muscle lines of the target face with the target expression tends to be consistent with the position relationship between non-muscle lines of the standard face with the second expression.
Besides muscle lines, a face also has some non-muscle lines, for example the upper eyelid, the lower eyelid, the upper and lower edge lines of the upper lip, and the upper and lower edge lines of the lower lip; the position relationships between these lines are extremely important. In the prior art, the expression data of the target expression of the target face is obtained by adding to or subtracting from the expression data of the current expression a fixed vertex motion amplitude, so non-muscle lines may intersect. For example, suppose the eyes of the standard face go from open to closed with a fixed vertex motion amplitude, such that in the closed state the upper and lower eyelids of the standard face essentially coincide. If the eyes of the target face are smaller than those of the standard face, then in the target face obtained with that fixed motion amplitude the eyes may over-close, i.e. the upper eyelid ends up below the lower eyelid; if the eyes of the target face are larger than those of the standard face, the eyes of the resulting target face may fail to close, i.e. a noticeable gap remains between the upper and lower eyelids. In either case, the synthesis of the target face is poor.
To avoid this phenomenon, the fourth rule prescribes that the position relationship between non-muscle lines of the target face with the target expression tends to be consistent with that of the standard face with the second expression. For example, with the eyes closed, the position relationship between the upper and lower eyelids of the target face should tend to be consistent with that of the standard face with the eyes closed.
In the present embodiment, non-muscle lines can also be represented by vertex sequences. The objective function can be obtained from a first position-coordinate difference of the target face and a second position-coordinate difference of the standard face with the second expression. The first position-coordinate difference is the difference of the position coordinates of corresponding vertices between the vertex sequence corresponding to a first non-muscle line and the vertex sequence corresponding to a second non-muscle line of the target face with the target expression; the second position-coordinate difference is the difference of the position coordinates of corresponding vertices between the vertex sequence corresponding to the first non-muscle line and the vertex sequence corresponding to the second non-muscle line of the standard face with the second expression. The vertices in the vertex sequence corresponding to the first non-muscle line correspond one-to-one to the vertices in the vertex sequence corresponding to the second non-muscle line.
For example, suppose the vertex sequence corresponding to the upper eyelid is Contour_eye_up = {u₁, u₂, u₃, …, u_K}, where u₁, u₂, u₃, …, u_K are the position coordinates of the vertices in the sequence, and the vertex sequence corresponding to the lower eyelid is Contour_eye_down = {d₁, d₂, d₃, …, d_K}, where d₁, d₂, d₃, …, d_K are the position coordinates of the vertices in the sequence.
The objective function is

E_cross = Σ_{k=1}^{K} ‖(ũ_k − d̃_k) − (û_k − d̂_k)‖

where ũ_k is the k-th vertex of Contour_eye_up of the target face with the target expression, d̃_k is the k-th vertex of Contour_eye_down of the target face with the target expression, û_k is the k-th vertex of Contour_eye_up of the standard face with the second expression, and d̂_k is the k-th vertex of Contour_eye_down of the standard face with the second expression, 1 ≤ k ≤ K.
Similarly, the objective function corresponding to the relative position relationship between the lower edge of the upper lip and the upper edge of the lower lip can also be obtained.
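The eyelid term can be sketched as below; the 2-D coordinates and the 0.1 gap are illustrative test values, not values from the text:

```python
import numpy as np

def e_cross(up_t, down_t, up_s, down_s):
    """Penalise differences between the target face's upper/lower-eyelid
    offsets (target expression) and the standard face's (second expression).
    Each argument is a K x 2 array of corresponding contour vertices."""
    return np.sum(np.linalg.norm((up_t - down_t) - (up_s - down_s), axis=1))

# Closed standard eye: the eyelids coincide. A target face whose lids stop
# 0.1 apart is penalised, pulling them toward the same closed relationship.
up_s = np.array([[0., 1.0], [1., 1.0]])
down_s = up_s.copy()
up_t = np.array([[0., 1.1], [1., 1.1]])
down_t = np.array([[0., 1.0], [1., 1.0]])
penalty = e_cross(up_t, down_t, up_s, down_s)
print(penalty)
```

Because only the offset between the two lines is penalised, the term adapts to eyes of any size, avoiding the over-closing and under-closing failures described above.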
Of the objective functions corresponding to the aforementioned four rules, one may be used alone, or several or all of them may be selected. If multiple objective functions are selected, they can be weighted to obtain a total objective function, for example:

E = α₁E_motion + α₂E_smooth + α₃E_muscle + α₄E_cross

where α₁, α₂, α₃ and α₄ are the weights of the objective functions E_motion, E_smooth, E_muscle and E_cross respectively. For different target expressions, different values can be set for α₁, α₂, α₃ and α₄ so that the synthesized target face with the target expression is more realistic.
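The weighted combination is straightforward; the alpha values below are illustrative defaults, since the text prescribes no particular weights:

```python
# Total objective as a weighted sum of the four per-rule terms.
def total_objective(e_motion, e_smooth, e_muscle, e_cross,
                    a1=1.0, a2=0.5, a3=0.5, a4=1.0):
    # Each term is a non-negative penalty; the weights trade them off
    # per target expression (e.g. raise a4 for eye-closing expressions).
    return a1 * e_motion + a2 * e_smooth + a3 * e_muscle + a4 * e_cross

total = total_objective(2.0, 1.0, 0.0, 0.5)
print(total)  # -> 3.0
```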
It should of course be understood that the aforementioned four rules and the design of each rule's corresponding objective function do not constitute a limitation of the present application; those skilled in the art can also design rules and corresponding objective functions themselves as the case may be.
The value of the objective function meeting the preset condition can mean that the value of the objective function is minimal, or slightly larger than the minimum; the present application is not specifically limited in this respect.
Step S106: synthesize the target face with the target expression from the first expression data and the second expression data of the target face.
In the present embodiment, the second expression data of the target face is obtained by solving the objective function with the first expression data as the constraint term. Together, these two parts of expression data are all the expression data of the target face with the target expression, so the target face with the target expression is obtained after synthesis.
In the present embodiment, the expression data of the first expression of the standard face is compared with the expression data of its second expression to pick out the vertices whose motion amplitude is less than or equal to the threshold; the vertices of the target face matching this part of the vertices of the standard face are then found, and the expression data corresponding to this part of the vertices of the target face, i.e. the first expression data, is obtained. Then an objective function is built according to preset rules and, with the first expression data as a constraint, the second expression data of the target face with the target expression is obtained when the value of the objective function meets a preset condition. Because the role of the objective function is to make the way the target face deforms across various expressions stay as consistent as possible with the standard face while taking the individual characteristics of each target face into account, the target face with the target expression synthesized from the first expression data and the second expression data is, relative to the prior art, closer to the real target face with the target expression.
Based on the facial expression synthesis method provided by the above embodiment, an embodiment of the present application further provides an expression synthesis device, whose operation principle is described in detail below with reference to the accompanying drawings.
Embodiment two
Referring to Fig. 5, which is a structural block diagram of an expression synthesis device provided by embodiment two of the present application.
The expression synthesis device provided by the present embodiment includes: a pending expression data acquiring unit 101, a standard face expression data acquiring unit 102, a preset condition expression data acquiring unit 103, a first expression data acquiring unit 104, a second expression data acquiring unit 105, and an expression synthesis unit 106, wherein:
the pending expression data acquiring unit 101 is configured to obtain the expression data of the pending expression of the target face, the expression data of the pending expression being the position coordinates of the vertices of the target face with the pending expression;
the standard face expression data acquiring unit 102 is configured to obtain the expression data of the first expression of the standard face and the expression data of the second expression of the standard face, the expression data of the first expression being the position coordinates of the vertices of the standard face with the first expression, the expression data of the second expression being the position coordinates of the vertices of the standard face with the second expression, the vertices of the target face corresponding one-to-one to the vertices of the standard face, and the first expression of the standard face being identical with the pending expression of the target face;
the preset condition expression data acquiring unit 103 is configured to obtain, from the expression data of the first expression of the standard face and the expression data of the second expression of the standard face, the expression data meeting the preset condition, the expression data meeting the preset condition being the position coordinates of the vertices whose motion amplitude is less than or equal to the threshold;
the first expression data acquiring unit 104 is configured to obtain, using the expression data meeting the preset condition, the first expression data of the target face with the target expression, the target expression being identical with the second expression, and the vertices of the first expression data matching the vertices of the expression data meeting the preset condition;
the second expression data acquiring unit 105 is configured to build an objective function according to preset rules and, with the first expression data as a constraint, obtain the second expression data of the target face with the target expression when the value of the objective function meets the preset condition, the second expression data being the position coordinates of the vertices of the target face other than the vertices of the first expression data;
the expression synthesis unit 106 is configured to synthesize the target face with the target expression from the first expression data and the second expression data of the target face.
In the present embodiment, the expression data of the first expression of the standard face is compared with the expression data of its second expression to pick out the vertices whose motion amplitude is less than or equal to the threshold; the vertices of the target face matching this part of the vertices of the standard face are then found, and the expression data corresponding to this part of the vertices of the target face, i.e. the first expression data, is obtained. Then an objective function is built according to preset rules and, with the first expression data as a constraint, the second expression data of the target face with the target expression is obtained when the value of the objective function meets the preset condition. Because the role of the objective function is to make the way the target face deforms across various expressions stay as consistent as possible with the standard face while taking the individual characteristics of each target face into account, the target face with the target expression synthesized from the first expression data and the second expression data is, relative to the prior art, closer to the real target face with the target expression.
Optionally, the preset rules include at least one of the following:
First rule: the overall facial deformation degree of the target face when changing from the pending expression to the target expression tends to be consistent with the overall facial deformation degree of the standard face when changing from the first expression to the second expression;
Second rule: the target face formed from the first expression data and the second expression data of the target face is smooth;
Third rule: the shape of the muscle lines of the target face with the target expression tends to be consistent with the shape of the muscle lines of the standard face with the second expression;
Fourth rule: the position relationship between non-muscle lines of the target face with the target expression tends to be consistent with the position relationship between non-muscle lines of the standard face with the second expression.
Optionally, if the preset rules include the first rule, the objective function is obtained from the difference between the motion amplitude of each triangular facet of the target face when changing from the pending expression to the target expression and the motion amplitude of the corresponding triangular facet of the standard face when changing from the first expression to the second expression; a triangular facet is a face composed of three vertices, the triangular facets of the target face being used to compose the target face, and the triangular facets of the standard face being used to compose the standard face.
Optionally, if the preset rules include the second rule, the objective function is obtained from the difference between the deformation degree of a first triangular facet of the target face and that of a second triangular facet when the target face changes from the pending expression to the target expression; the first and second triangular facets are each composed of three vertices, are used to compose the target face, and are adjacent to each other.
Optionally, if the preset rules include the third rule, the objective function is obtained from the direction difference between a first vector formed from the vertex sequence corresponding to a muscle line of the target face with the target expression and a second vector formed from the vertex sequence corresponding to that muscle line of the standard face with the second expression.
Optionally, if the preset rules include the fourth rule, the objective function is obtained from a first position-coordinate difference of the target face and a second position-coordinate difference of the standard face with the second expression; the first position-coordinate difference is the difference of the position coordinates of corresponding vertices between the vertex sequence corresponding to a first non-muscle line and the vertex sequence corresponding to a second non-muscle line of the target face with the target expression; the second position-coordinate difference is the difference of the position coordinates of corresponding vertices between the vertex sequence corresponding to the first non-muscle line and the vertex sequence corresponding to the second non-muscle line of the standard face with the second expression; the vertices in the vertex sequence corresponding to the first non-muscle line correspond one-to-one to the vertices in the vertex sequence corresponding to the second non-muscle line.
When introducing elements of various embodiments of the present application, the articles "a", "an", "the" and "said" are intended to indicate one or more elements. The words "comprising", "including" and "having" are all inclusive and mean that, besides the listed elements, there can also be other elements.
It should be noted that one of ordinary skill in the art will appreciate that all or part of the processes in the above method embodiments can be accomplished by a computer program instructing the relevant hardware. The program can be stored in a computer-readable storage medium and, when executed, can include the processes of the above method embodiments. The storage medium can be a magnetic disk, an optical disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), or the like.
The embodiments in this specification are described in a progressive manner; identical or similar parts of the embodiments can be referred to one another, and each embodiment focuses on its differences from the other embodiments. In particular, the device embodiment is described fairly simply because it is substantially similar to the method embodiment; for related parts, refer to the description of the method embodiment. The device embodiment described above is only schematic; the units and modules described as separate components may or may not be physically separate. Some or all of the units and modules can be selected according to actual needs to achieve the purpose of the scheme of this embodiment, which those of ordinary skill in the art can understand and implement without creative work.
The above is only an embodiment of the present application. It should be noted that, for those of ordinary skill in the art, some improvements and modifications can also be made without departing from the principle of the present application, and these improvements and modifications should also be regarded as falling within the protection scope of the present application.
Claims (12)
1. A facial expression synthesis method, characterized in that the method comprises:
obtaining expression data of a pending expression of a target face, the expression data of the pending expression being position coordinates of vertices of the target face with the pending expression;
obtaining expression data of a first expression of a standard face and expression data of a second expression of the standard face, the expression data of the first expression being position coordinates of vertices of the standard face with the first expression, the expression data of the second expression being position coordinates of vertices of the standard face with the second expression, the vertices of the target face corresponding one-to-one to the vertices of the standard face, and the first expression of the standard face being identical with the pending expression of the target face;
obtaining, from the expression data of the first expression of the standard face and the expression data of the second expression of the standard face, expression data meeting a preset condition, the expression data meeting the preset condition being position coordinates of vertices whose motion amplitude is less than or equal to a threshold;
obtaining, using the expression data meeting the preset condition, first expression data of the target face with a target expression, the target expression being identical with the second expression, and the vertices of the first expression data matching the vertices of the expression data meeting the preset condition;
building an objective function according to preset rules and, with the first expression data as a constraint, obtaining second expression data of the target face with the target expression when the value of the objective function meets a preset condition, the second expression data being position coordinates of vertices of the target face other than the vertices of the first expression data;
synthesizing the target face with the target expression from the first expression data and the second expression data of the target face.
2. The method according to claim 1, characterized in that the preset rules include at least one of the following:
a first rule: the overall facial deformation degree of the target face when changing from the pending expression to the target expression tends to be consistent with the overall facial deformation degree of the standard face when changing from the first expression to the second expression;
a second rule: the target face formed from the first expression data and the second expression data of the target face is smooth;
a third rule: the shape of the muscle lines of the target face with the target expression tends to be consistent with the shape of the muscle lines of the standard face with the second expression;
a fourth rule: the position relationship between non-muscle lines of the target face with the target expression tends to be consistent with the position relationship between non-muscle lines of the standard face with the second expression.
3. The method according to claim 2, characterized in that, if the preset rules include the first rule, the objective function is obtained from the difference between the motion amplitude of each triangular facet of the target face when changing from the pending expression to the target expression and the motion amplitude of the corresponding triangular facet of the standard face when changing from the first expression to the second expression, a triangular facet being a face composed of three vertices, the triangular facets of the target face being used to compose the target face, and the triangular facets of the standard face being used to compose the standard face.
4. The method according to claim 2, characterized in that, if the preset rules include the second rule, the objective function is obtained from the difference between the deformation degree of a first triangular facet of the target face and the deformation degree of a second triangular facet when the target face changes from the pending expression to the target expression, the first triangular facet and the second triangular facet each being a face composed of three vertices, being used to compose the target face, and being adjacent triangular facets.
5. The method according to claim 2, characterized in that, if the preset rules include the third rule, the objective function is obtained from the direction difference between a first vector formed from the vertex sequence corresponding to a muscle line of the target face with the target expression and a second vector formed from the vertex sequence corresponding to that muscle line of the standard face with the second expression.
6. The method according to claim 2, characterized in that, if the preset rules include the fourth rule, the objective function is obtained from a first position-coordinate difference of the target face and a second position-coordinate difference of the standard face with the second expression; the first position-coordinate difference is the difference of the position coordinates of corresponding vertices between the vertex sequence corresponding to a first non-muscle line and the vertex sequence corresponding to a second non-muscle line of the target face with the target expression; the second position-coordinate difference is the difference of the position coordinates of corresponding vertices between the vertex sequence corresponding to the first non-muscle line and the vertex sequence corresponding to the second non-muscle line of the standard face with the second expression; the vertices in the vertex sequence corresponding to the first non-muscle line correspond one-to-one to the vertices in the vertex sequence corresponding to the second non-muscle line.
7. An expression synthesis device, characterised in that the device comprises: a pending-expression data acquiring unit, a standard-face expression data acquiring unit, a precondition expression data acquiring unit, a first expression data acquiring unit, a second expression data acquiring unit and an expression synthesis unit; wherein,
the pending-expression data acquiring unit is configured to obtain the expression data of a pending expression of a target face, the expression data of the pending expression being the position coordinates of each vertex of the target face when the target face has the pending expression;
the standard-face expression data acquiring unit is configured to obtain the expression data of a first expression of a standard face and the expression data of a second expression of the standard face, the expression data of the first expression being the position coordinates of each vertex of the standard face when the standard face has the first expression, and the expression data of the second expression being the position coordinates of each vertex of the standard face when the standard face has the second expression; the vertices of the target face and the vertices of the standard face correspond one-to-one, and the first expression of the standard face is identical to the pending expression of the target face;
the precondition expression data acquiring unit is configured to obtain, from the expression data of the first expression of the standard face and the expression data of the second expression of the standard face, the expression data that meets a precondition, the expression data that meets the precondition being the position coordinates of the vertices whose motion amplitude is less than or equal to a threshold;
the first expression data acquiring unit is configured to use the expression data that meets the precondition to obtain the first expression data of the target face when the target face has a target expression, the target expression being identical to the second expression, and the vertices of the first expression data matching the vertices of the expression data that meets the precondition;
the second expression data acquiring unit is configured to construct an object function according to preset rules and, with the first expression data as a constraint, obtain the second expression data of the target face when the target face has the target expression once the value of the object function meets a preset condition, the second expression data being the position coordinates of the vertices of the target face other than the vertices of the first expression data;
the expression synthesis unit is configured to synthesize the target face having the target expression from the first expression data and the second expression data of the target face.
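The interplay of the claimed units can be pictured with a toy pipeline. The sketch below is not the patented algorithm: the second-expression-data step (minimizing the object function) is replaced by a naive displacement copy from the standard face, and the class and method names are invented for illustration.

```python
import numpy as np

class ExpressionSynthesizer:
    """Loose sketch of the claimed device; each method mirrors one
    acquiring unit. All (n, 3) arrays hold per-vertex coordinates with
    one-to-one vertex correspondence between the two faces."""

    def __init__(self, threshold):
        self.threshold = threshold

    def low_motion_vertices(self, std_first, std_second):
        """Precondition unit: indices of vertices whose motion amplitude
        between the standard face's two expressions is <= threshold."""
        amplitude = np.linalg.norm(std_second - std_first, axis=1)
        return np.where(amplitude <= self.threshold)[0]

    def first_expression_data(self, target_pending, idx):
        """First-expression-data unit: low-motion vertices keep their
        pending-expression coordinates on the target face."""
        return target_pending[idx]

    def synthesize(self, target_pending, std_first, std_second):
        """Expression-synthesis unit: constrained vertices keep their
        positions; the rest follow the standard face's displacement (a
        crude stand-in for minimizing the claimed object function)."""
        idx = self.low_motion_vertices(std_first, std_second)
        result = target_pending + (std_second - std_first)
        result[idx] = self.first_expression_data(target_pending, idx)
        return result
```

Even in this toy form, the division of labour matches the claim: constrained data first, then the remaining vertices, then assembly.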
8. The device according to claim 7, characterised in that the preset rules include at least one of the following:
the first rule: the overall facial deformation degree of the target face when it changes from the pending expression to the target expression tends to be consistent with the overall facial deformation degree of the standard face when it changes from the first expression to the second expression;
the second rule: the target face formed from the first expression data and the second expression data of the target face is smooth;
the third rule: the shape of a muscle region of the target face when it has the target expression tends to be consistent with the shape of the muscle region of the standard face when it has the second expression;
the fourth rule: the position relationship between non-muscle regions of the target face when it has the target expression tends to be consistent with the position relationship between the non-muscle regions of the standard face when it has the second expression.
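Since each preset rule contributes one penalty, the object function can be pictured as a weighted sum of rule energies evaluated at the free (second-expression-data) vertex coordinates, with the first expression data held fixed. The weights and the energy callables below are assumptions; the claims only state which geometric quantity each rule compares.

```python
import numpy as np

def object_function(free_coords, energies, weights):
    """Evaluate a candidate layout of the unconstrained vertices.
    `energies` is one callable per preset rule, each mapping the free
    vertex coordinates to a scalar penalty; the synthesis step would
    minimize this value until it meets the preset condition."""
    return sum(w * e(free_coords) for w, e in zip(weights, energies))
```

Any off-the-shelf minimizer could then drive this sum down while the constrained vertices stay pinned.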
9. The device according to claim 8, characterised in that, if the preset rules include the first rule, the object function is obtained from the difference between the motion amplitude of each triangular face of the target face when the target face changes from the pending expression to the target expression and the motion amplitude of the corresponding triangular face of the standard face when the standard face changes from the first expression to the second expression; a triangular face is a face formed by three vertices, the triangular faces of the target face are used to compose the target face, and the triangular faces of the standard face are used to compose the standard face.
10. The device according to claim 8, characterised in that, if the preset rules include the second rule, the object function is obtained from the difference, when the target face changes from the pending expression to the target expression, between the deformation degree of a first triangular face of the target face and the deformation degree of a second triangular face; the first triangular face and the second triangular face are faces formed by their respective three vertices, both are used to compose the target face, and the first triangular face and the second triangular face are adjacent.
11. The device according to claim 8, characterised in that, if the preset rules include the third rule, the object function is obtained from the direction difference between a first vector, formed by the vertex sequence corresponding to a muscle region of the target face when the target face has the target expression, and a second vector, formed by the vertex sequence corresponding to the muscle region of the standard face when the standard face has the second expression.
12. The device according to claim 8, characterised in that, if the preset rules include the fourth rule, the object function is obtained from a first position-coordinate difference of the target face when it has the target expression and a second position-coordinate difference of the standard face when it has the second expression; the first position-coordinate difference is the position-coordinate difference between corresponding vertices of the vertex sequence corresponding to a first non-muscle region of the target face and the vertex sequence corresponding to a second non-muscle region of the target face when the target face has the target expression; the second position-coordinate difference is the position-coordinate difference between corresponding vertices of the vertex sequence corresponding to the first non-muscle region and the vertex sequence corresponding to the second non-muscle region of the standard face when the standard face has the second expression; each vertex in the vertex sequence corresponding to the first non-muscle region corresponds one-to-one to a vertex in the vertex sequence corresponding to the second non-muscle region.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710271893.7A CN107103646B (en) | 2017-04-24 | 2017-04-24 | Expression synthesis method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107103646A true CN107103646A (en) | 2017-08-29 |
CN107103646B CN107103646B (en) | 2020-10-23 |
Family
ID=59656386
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710271893.7A Active CN107103646B (en) | 2017-04-24 | 2017-04-24 | Expression synthesis method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107103646B (en) |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1920880A (en) * | 2006-09-14 | 2007-02-28 | Zhejiang University | Video-stream-based facial expression hallucination method
CN101311966A (en) * | 2008-06-20 | 2008-11-26 | Zhejiang University | Three-dimensional face animation editing and synthesis based on operation propagation and Isomap analysis
KR20100090058A (en) * | 2009-02-05 | 2010-08-13 | Yonsei University Industry-Academic Cooperation Foundation | Iterative 3D head pose estimation method using a face normal vector
CN101976453A (en) * | 2010-09-26 | 2011-02-16 | Zhejiang University | GPU-based three-dimensional facial expression synthesis method
CN102479388A (en) * | 2010-11-22 | 2012-05-30 | Beijing Shengkai Interactive Technology Co., Ltd. | Expression interaction method based on face tracking and analysis
CN103035022A (en) * | 2012-12-07 | 2013-04-10 | Dalian University | Facial expression synthesis method based on feature points
CN103198508A (en) * | 2013-04-07 | 2013-07-10 | Hebei University of Technology | Facial expression animation generation method
CN104008564A (en) * | 2014-06-17 | 2014-08-27 | Hebei University of Technology | Facial expression cloning method
CN104346824A (en) * | 2013-08-09 | 2015-02-11 | Hanwang Technology Co., Ltd. | Method and device for automatically synthesizing a three-dimensional expression from a single face image
CN106157372A (en) * | 2016-07-25 | 2016-11-23 | Shenzhen Weiteshi Technology Co., Ltd. | Video-image-based 3D face mesh reconstruction method
CN106204750A (en) * | 2016-07-11 | 2016-12-07 | Xiamen Huanshi Network Technology Co., Ltd. | Method and device for editing a 3D target model based on a 3D source model
Non-Patent Citations (3)
Title |
---|
DANIEL VLASIC et al.: "Face transfer with multilinear models", ACM Transactions on Graphics *
LI Xudong et al.: "Facial expression synthesis from multiple expression sources", Journal of Computer-Aided Design & Computer Graphics *
GAO Yali: "Research and implementation of blendshape-based facial expression animation generation", Wanfang Data Knowledge Service Platform *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109829277A (en) * | 2018-12-18 | 2019-05-31 | Shenzhen OneConnect Smart Technology Co., Ltd. | Terminal unlocking method and device, computer equipment and storage medium |
CN111583372A (en) * | 2020-05-09 | 2020-08-25 | Tencent Technology (Shenzhen) Co., Ltd. | Method and device for generating facial expressions of a virtual character, storage medium and electronic equipment |
CN113470149A (en) * | 2021-06-30 | 2021-10-01 | Perfect World (Beijing) Software Technology Development Co., Ltd. | Expression model generation method and device, storage medium and computer equipment |
CN113470149B (en) * | 2021-06-30 | 2022-05-06 | Perfect World (Beijing) Software Technology Development Co., Ltd. | Expression model generation method and device, storage medium and computer equipment |
Also Published As
Publication number | Publication date |
---|---|
CN107103646B (en) | 2020-10-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR102241153B1 (en) | Method, apparatus, and system for generating a 3D avatar from a 2D image | |
CN107169455B (en) | Face attribute recognition method based on depth local features | |
CN104217454B (en) | Video-driven facial animation generation method | |
Chen et al. | Semantic component decomposition for face attribute manipulation | |
CN107820591A (en) | Control method, controller, Intelligent mirror and computer-readable recording medium | |
CN107103646A (en) | Expression synthesis method and device | |
US10297065B2 (en) | Methods and systems of enriching blendshape rigs with physical simulation | |
CN104217455A (en) | Animation production method for human face expressions and actions | |
CN114283052A (en) | Method and device for cosmetic transfer and training of cosmetic transfer network | |
Cao et al. | Difffashion: Reference-based fashion design with structure-aware transfer by diffusion models | |
CN115345773B (en) | Makeup migration method based on generation of confrontation network | |
Mao et al. | Unpaired multi-domain image generation via regularized conditional GANs | |
Esser et al. | A note on data biases in generative models | |
CN106446207B (en) | Cosmetics library building method, personalized makeup assistance method and device thereof | |
CN104933742A (en) | Automatic cartoon image generation method | |
Wang et al. | Wuju opera cultural creative products and research on visual image under VR technology | |
Sun et al. | Local facial makeup transfer via disentangled representation | |
Nguyen-Phuoc et al. | Alteredavatar: Stylizing dynamic 3d avatars with fast style adaptation | |
Karungaru et al. | Automatic human faces morphing using genetic algorithms based control points selection | |
CN110069716B (en) | Beautiful makeup recommendation method and system and computer-readable storage medium | |
Ma et al. | Application of virtual reality technology in the exhibition system of clothing museum | |
Hu et al. | Research on Current Situation of 3D face reconstruction based on 3D Morphable Models | |
Rowland | Computer graphic control over human face and head appearance, genetic optimisation of perceptual characteristics. | |
Fratarcangeli | Computational models for animating 3d virtual faces | |
Hailemariam et al. | Evolving 3D facial expressions using interactive genetic algorithms |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
TA01 | Transfer of patent application right | Effective date of registration: 20190227. Address after: 361000 Fujian Xiamen Torch High-tech Zone Software Park Innovation Building Area C 3F-A193. Applicant after: Xiamen Black Mirror Technology Co., Ltd. Address before: 9th Floor, Maritime Building, 16 Haishan Road, Huli District, Xiamen City, Fujian Province, 361000. Applicant before: XIAMEN HUANSHI NETWORK TECHNOLOGY CO., LTD. |
GR01 | Patent grant | |