CN107103646B - Expression synthesis method and device


Info

Publication number
CN107103646B
Authority
CN
China
Prior art keywords
expression
target
face
vertex
expression data
Prior art date
Legal status
Active
Application number
CN201710271893.7A
Other languages
Chinese (zh)
Other versions
CN107103646A (en)
Inventor
吴松城 (Wu Songcheng)
陈军宏 (Chen Junhong)
Current Assignee
Xiamen Black Mirror Technology Co Ltd
Original Assignee
Xiamen Black Mirror Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Xiamen Black Mirror Technology Co Ltd
Priority to CN201710271893.7A
Publication of CN107103646A
Application granted
Publication of CN107103646B

Classifications

    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics > G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects > G06T17/20 Finite element generation, e.g. wire-frame surface description, tesselation
    • G06T2210/00 Indexing scheme for image generation or computer graphics > G06T2210/44 Morphing
    • G06T2219/00 Indexing scheme for manipulating 3D models or images for computer graphics > G06T2219/20 Indexing scheme for editing of 3D models > G06T2219/2021 Shape modification


Abstract

The embodiments of the present application disclose an expression synthesis method and device, which improve the authenticity of a synthesized target face having a target expression. The method comprises the following steps: acquiring expression data of a to-be-processed expression of a target face; acquiring expression data of a first expression of a standard face and expression data of a second expression of the standard face; obtaining expression data meeting a preset condition according to the expression data of the first expression of the standard face and the expression data of the second expression of the standard face; obtaining first expression data of the target face with the target expression by using the expression data meeting the preset condition; constructing an objective function according to a preset rule, and taking the first expression data as a constraint condition to obtain second expression data of the target face with the target expression when the value of the objective function meets a preset condition; and synthesizing the target face having the target expression according to the first expression data and the second expression data of the target face.

Description

Expression synthesis method and device
Technical Field
The application relates to the technical field of animation, in particular to an expression synthesis method and device.
Background
In social interaction, facial expressions convey important and rich information. With the rapid development of computer technology, expression synthesis is receiving attention from more and more researchers in fields such as graphic image processing and computer-aided design. It has important applications in game entertainment, media production, virtual reality design, remote virtual communication, telemedicine, virtual video conferencing, virtual character interaction, and the like.
The synthesis of facial expressions based on three-dimensional space models is currently a research hotspot. The synthesis of a facial expression is essentially the synthesis of facial expression data. By modeling the face in three-dimensional space, the position coordinates of each vertex of the face in the three-dimensional space can be obtained. The position coordinates of the vertices of the face differ between expressions. For example, when a person smiles, the mouth, eyes, nose and so on change: the corners of the mouth rise, the eyelids contract, and the nostrils enlarge. Therefore, when a person smiles, the position coordinates of the vertices of these parts differ from their position coordinates under other expressions (e.g., no expression, anger, sadness, etc.).
Currently, the expression data of a human face is synthesized by Blend Shape (fusion deformation). The principle of fusion deformation is as follows: first obtain the difference between the expression data of a first expression of a standard face and the expression data of a second expression of the standard face, then add this difference to (or subtract it from) the expression data of the current expression of the target face to obtain the expression data of the target expression of the target face, where the second expression matches the current expression and the first expression matches the target expression. The standard face is a reference face; it can be obtained by modeling a certain actual face or can be an abstract face. The target face is the face of the user. For example, assume the first expression of the standard face is a smile and the second expression is no expression, while the current expression of the target face is also no expression and the target expression is a smile. The expression data corresponding to the smiling target face is then obtained by adding the difference between the expression data of the smiling standard face and the expression data of the expressionless standard face to the expression data of the expressionless target face.
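To make the fusion deformation concrete, the following is a minimal sketch, assuming expression data is stored as NumPy arrays of per-vertex position coordinates with one-to-one vertex correspondence (the function and variable names are illustrative, not from the patent):

```python
import numpy as np

def blend_shape(target_current, std_first, std_second):
    """Prior-art fusion deformation: transfer the standard face's fixed
    per-vertex displacement (second expression -> first expression) onto
    the target face's current expression.

    Each argument is an (N, 3) array of vertex position coordinates;
    vertex i means the same anatomical point in every array.
    """
    diff = std_first - std_second   # fixed displacement of the standard face
    return target_current + diff    # estimated target face with the target expression

# Example: std_first = smiling standard face, std_second = expressionless
# standard face, target_current = expressionless target face; the result
# approximates the smiling target face.
```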
Since the difference between the expression data of the first expression and of the second expression of the standard face is fixed once the two expressions are chosen, the basis of the expression data calculation is the same regardless of the characteristics of the target face. However, the facial shape of a user usually differs from that of the standard face: the eyes differ in size, the eyebrows in height, the facial contour in shape. As a result, the expression data of the target expression of the target face obtained by the prior-art fusion deformation method may differ considerably from the expression data the target face would actually have when making the target expression, and may even fail to conform to physiological phenomena.
Disclosure of Invention
In order to solve the technical problems in the prior art, the present application provides an expression synthesis method and device, which improve the authenticity of a synthesized target face having a target expression.
The application provides an expression synthesis method, which comprises the following steps:
acquiring expression data of a to-be-processed expression of a target face, wherein the expression data of the to-be-processed expression is position coordinates of each vertex when the target face has the to-be-processed expression;
acquiring expression data of a first expression of a standard face and expression data of a second expression of the standard face, wherein the expression data of the first expression is position coordinates of each vertex when the standard face has the first expression, the expression data of the second expression is position coordinates of each vertex when the standard face has the second expression, each vertex of a target face corresponds to each vertex of the standard face one by one, and the first expression of the standard face is the same as the to-be-processed expression of the target face;
obtaining expression data meeting a preset condition according to the expression data of the first expression of the standard face and the expression data of the second expression of the standard face, wherein the expression data meeting the preset condition is a position coordinate of a vertex of which the motion amplitude is smaller than or equal to a threshold value;
obtaining first expression data of the target face with the target expression by using the expression data meeting the preset condition, wherein the target expression is the same as the second expression, and the vertices of the first expression data match the vertices of the expression data meeting the preset condition;
constructing an objective function according to a preset rule, and taking the first expression data as a constraint condition to obtain second expression data of the target face with the target expression when the value of the objective function meets a preset condition, wherein the second expression data are the position coordinates of the vertices of the target face other than the vertices of the first expression data;
and synthesizing the target face with the target expression according to the first expression data and the second expression data of the target face.
Optionally, the preset rule includes at least one of the following:
The first rule: when the target face changes from the to-be-processed expression to the target expression, the overall deformation degree of the face is approximately consistent with the overall deformation degree of the face when the standard face changes from the first expression to the second expression;
The second rule: a target face formed from the first expression data and the second expression data of the target face is smooth;
The third rule: the muscle line shape of the target face when it has the target expression is approximately consistent with the muscle line shape of the standard face when it has the second expression;
The fourth rule: the positional relationship between the non-muscle lines of the target face when it has the target expression is approximately consistent with the positional relationship between the non-muscle lines of the standard face when it has the second expression.
Optionally, if the preset rule includes the first rule, the objective function is obtained according to the difference between the motion amplitude of a triangular face when the target face changes from the to-be-processed expression to the target expression and the motion amplitude of the corresponding triangular face when the standard face changes from the first expression to the second expression; a triangular face is a face formed by three vertices, the triangular faces of the target face compose the target face, and the triangular faces of the standard face compose the standard face.
Optionally, if the preset rule includes the second rule, the objective function is obtained according to the difference between the deformation degree of a first triangular face of the target face and the deformation degree of a second triangular face when the target face changes from the to-be-processed expression to the target expression; the first triangular face and the second triangular face are each formed by their respective three vertices, are used to compose the target face, and are adjacent triangular faces.
Optionally, if the preset rule includes the third rule, the objective function is obtained according to the direction difference between a first vector formed by the vertex sequence corresponding to a muscle line of the target face when it has the target expression and a second vector formed by the vertex sequence corresponding to the muscle line of the standard face when it has the second expression.
Optionally, if the preset rule includes the fourth rule, the objective function is obtained according to a first position coordinate difference of the target face and a second position coordinate difference of the standard face when it has the second expression; the first position coordinate difference is the position coordinate difference of corresponding vertices between the vertex sequence corresponding to a first non-muscle line and the vertex sequence corresponding to a second non-muscle line of the target face when it has the target expression; the second position coordinate difference is the position coordinate difference of corresponding vertices between the vertex sequence corresponding to the first non-muscle line and the vertex sequence corresponding to the second non-muscle line when the standard face has the second expression; and the vertices in the vertex sequence corresponding to the first non-muscle line correspond one-to-one to the vertices in the vertex sequence corresponding to the second non-muscle line.
The application also provides an expression synthesis device. The device includes: a to-be-processed expression data acquisition unit, a standard face expression data acquisition unit, a preset condition expression data acquisition unit, a first expression data acquisition unit, a second expression data acquisition unit and an expression synthesis unit; wherein,
the to-be-processed expression data acquisition unit is used for acquiring expression data of the to-be-processed expression of the target face, wherein the expression data of the to-be-processed expression is the position coordinates of each vertex when the target face has the to-be-processed expression;
the standard face expression data acquisition unit is used for acquiring expression data of a first expression of a standard face and expression data of a second expression of the standard face, wherein the expression data of the first expression is position coordinates of each vertex when the standard face has the first expression, the expression data of the second expression is position coordinates of each vertex when the standard face has the second expression, each vertex of the target face corresponds to each vertex of the standard face one by one, and the first expression of the standard face is the same as the to-be-processed expression of the target face;
the preset condition expression data acquisition unit is used for obtaining expression data meeting a preset condition according to the expression data of the first expression of the standard face and the expression data of the second expression of the standard face, wherein the expression data meeting the preset condition is a position coordinate of a vertex of which the motion amplitude is smaller than or equal to a threshold value;
the first expression data acquisition unit is used for acquiring first expression data when the target face has a target expression by using the expression data of the preset condition, the target expression is the same as the second expression, and the vertex of the first expression data is matched with the vertex of the expression data of the preset condition;
the second expression data acquisition unit is used for constructing an objective function according to a preset rule and, taking the first expression data as a constraint condition, obtaining second expression data of the target face with the target expression when the value of the objective function meets a preset condition, wherein the second expression data are the position coordinates of the vertices of the target face other than the vertices of the first expression data;
the expression synthesis unit is used for synthesizing the target face with the target expression according to the first expression data and the second expression data of the target face.
Optionally, the preset rule includes at least one of the following:
The first rule: when the target face changes from the to-be-processed expression to the target expression, the overall deformation degree of the face is approximately consistent with the overall deformation degree of the face when the standard face changes from the first expression to the second expression;
The second rule: a target face formed from the first expression data and the second expression data of the target face is smooth;
The third rule: the muscle line shape of the target face when it has the target expression is approximately consistent with the muscle line shape of the standard face when it has the second expression;
The fourth rule: the positional relationship between the non-muscle lines of the target face when it has the target expression is approximately consistent with the positional relationship between the non-muscle lines of the standard face when it has the second expression.
Optionally, if the preset rule includes the first rule, the objective function is obtained according to the difference between the motion amplitude of a triangular face when the target face changes from the to-be-processed expression to the target expression and the motion amplitude of the corresponding triangular face when the standard face changes from the first expression to the second expression; a triangular face is a face formed by three vertices, the triangular faces of the target face compose the target face, and the triangular faces of the standard face compose the standard face.
Optionally, if the preset rule includes the second rule, the objective function is obtained according to the difference between the deformation degree of a first triangular face of the target face and the deformation degree of a second triangular face when the target face changes from the to-be-processed expression to the target expression; the first triangular face and the second triangular face are each formed by their respective three vertices, are used to compose the target face, and are adjacent triangular faces.
Optionally, if the preset rule includes the third rule, the objective function is obtained according to the direction difference between a first vector formed by the vertex sequence corresponding to a muscle line of the target face when it has the target expression and a second vector formed by the vertex sequence corresponding to the muscle line of the standard face when it has the second expression.
Optionally, if the preset rule includes the fourth rule, the objective function is obtained according to a first position coordinate difference of the target face and a second position coordinate difference of the standard face when it has the second expression; the first position coordinate difference is the position coordinate difference of corresponding vertices between the vertex sequence corresponding to a first non-muscle line and the vertex sequence corresponding to a second non-muscle line of the target face when it has the target expression; the second position coordinate difference is the position coordinate difference of corresponding vertices between the vertex sequence corresponding to the first non-muscle line and the vertex sequence corresponding to the second non-muscle line when the standard face has the second expression; and the vertices in the vertex sequence corresponding to the first non-muscle line correspond one-to-one to the vertices in the vertex sequence corresponding to the second non-muscle line.
In the scheme of the present application, the expression data of the first expression of the standard face is compared with the expression data of the second expression of the standard face, the vertices whose motion amplitude is less than or equal to a threshold are selected, the vertices of the target face matching these vertices of the standard face are found, and the expression data corresponding to these vertices of the target face, namely the first expression data, is obtained. Then, an objective function is constructed according to a preset rule, and with the first expression data as a constraint condition, the second expression data of the target face with the target expression is obtained when the value of the objective function meets a preset condition. The objective function makes the deformation trend of the target face under various expressions as consistent as possible with that of the standard face while taking the characteristics of each individual target face into account. Therefore, compared with the prior art, the target face with the target expression synthesized from the first expression data and the second expression data is closer to the real target face having the target expression.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments described in the present application, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a flowchart of an expression synthesis method according to an embodiment of the present application;
FIG. 2 is a diagram illustrating a standard face with a first expression according to an embodiment of the present disclosure;
FIG. 3 is a diagram illustrating a standard face with a second expression according to an embodiment of the present application
FIG. 4 is a schematic diagram of a target face or a standard face spliced by triangular faces according to an embodiment of the present disclosure;
fig. 5 is a block diagram of a expression synthesis apparatus according to a second embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Embodiment one:
referring to fig. 1, the figure is a flowchart of an expression synthesis method provided in an embodiment of the present application.
The expression synthesis method provided by the embodiment comprises the following steps:
step S101: and acquiring expression data of the expression to be processed of the target face.
In this embodiment, the target face is the face of a target object, and the target object may be any object that has expressions; it is not limited to a real person and may also be a real animal, a virtual person, a virtual animal, or the like. For convenience of description, in this embodiment the target face is taken to be a real human face.
The to-be-processed expression of the target face may be regarded as a reference expression of the target face, because the target expression of the target face is synthesized from the expression data of the to-be-processed expression. The to-be-processed expression may be no expression, crying, laughing, anger, and so on. The to-be-processed expression differs from the target expression, but the two may belong to the same kind of expression (for example, smile and laugh both belong to smiling) or to different kinds of expressions.
In this embodiment, the target face may be constructed based on a three-dimensional space coordinate system, where the target face is composed of a plurality of vertices, and each vertex has its own position coordinate in the three-dimensional space coordinate system. Therefore, the expression data of the expression to be processed is the position coordinates of each vertex when the target face has the expression to be processed. How to construct the target face is common knowledge of those skilled in the art and will not be described in detail herein.
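For concreteness, the expression data discussed here can be pictured as follows (a sketch; the array layout and the toy values are assumptions of this illustration, not part of the method):

```python
import numpy as np

# Expression data of one expression = the (N, 3) position coordinates of the
# face's N vertices in the three-dimensional coordinate system. Vertex i of
# the target face corresponds one-to-one to vertex i of the standard face.
N = 4  # a toy face with 4 vertices; a real face model has many thousands
tgt_pending = np.array([   # target face, to-be-processed expression
    [0.0, 0.0, 0.0],       # vertex 0
    [1.0, 0.0, 0.0],       # vertex 1
    [0.0, 1.0, 0.0],       # vertex 2
    [0.5, 0.5, 1.0],       # vertex 3
])
assert tgt_pending.shape == (N, 3)
```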
Step S102: the method comprises the steps of obtaining expression data of a first expression of a standard face and expression data of a second expression of the standard face.
In this embodiment, the standard face is the face of a standard object, and the standard object may be any object that has expressions; it is not limited to a real person and may also be a real animal, a virtual person, a virtual animal, or the like. The standard face and the target face should match: if the target face is a human face, the standard face should also be a human face; if the target face is the face of a cat, the standard face should also be the face of a cat. For convenience of description, in this embodiment the standard face is taken to be a real human face.
The first expression of the standard face is the same as the to-be-processed expression of the target face, and the second expression of the standard face is the same as the target expression of the target face. The basic idea of this embodiment is to obtain the expression data of the target expression of the target face by using the expression data of the first expression and of the second expression of the standard face together with the expression data of the to-be-processed expression of the target face.
The standard face is constructed based on a three-dimensional coordinate system, and it is likewise composed of a plurality of vertices in that coordinate system. The expression data of the first expression is the position coordinates of each vertex when the standard face has the first expression, and the expression data of the second expression is the position coordinates of each vertex when the standard face has the second expression. It should be noted that the vertices of the standard face and the vertices of the target face are in one-to-one correspondence; for example, if the nose of the standard face is composed of 50 vertices in a specific positional relationship, then the nose of the target face should also be composed of 50 vertices in a similar positional relationship. In practical applications, each vertex may be assigned an identifier, such as a number, and the one-to-one correspondence may be established by matching vertex numbers.
Step S103: and obtaining expression data meeting preset conditions according to the expression data of the first expression of the standard face and the expression data of the second expression of the standard face.
And the expression data meeting the preset condition is the position coordinates of the vertex with the motion amplitude smaller than or equal to the threshold.
Step S104: and obtaining first expression data when the target face has the target expression by using the expression data of the preset condition.
The target expression is the same as the second expression, and the vertex of the first expression data is matched with the vertex of the expression data of the preset condition.
In this embodiment, the expression data of the first expression of the standard face and the expression data of the second expression of the standard face may be compared, the vertices whose motion amplitude is less than or equal to the threshold selected, and the vertices of the target face matching these vertices of the standard face found, so as to obtain the expression data corresponding to these vertices of the target face, namely the first expression data. In practice, the threshold is a small value. That is, when the standard face changes from the first expression to the second expression, if some vertices have a very small motion amplitude, their motion amplitudes can be directly added to (or subtracted from) the expression data of the to-be-processed expression of the target face to obtain the expression data of these vertices under the target expression. Alternatively, after finding the vertices with small motion amplitude, the expression data of these vertices under the to-be-processed expression can be directly assigned as their expression data under the target expression. In other words, the now-known expression data of these vertices under the target expression is used as a constraint term, and the expression data of the remaining vertices (i.e., the second expression data) is solved via the objective function described below.
For example, when the standard face changes from the first expression to the second expression, it is judged whether the position coordinate of the jth vertex v changes. If the change amplitude (namely the motion amplitude) $v_{\text{diff}}$ is less than or equal to a certain threshold, the vertex is set as a hard constraint term $C_{\text{hard}}$. Namely:

$$v_{\text{diff}} = \left\| v_j^{s,2} - v_j^{s,1} \right\|$$

and when $v_{\text{diff}} \le \text{threshold}$,

$$v_j^{t,2} = v_j^{t,1} + \left( v_j^{s,2} - v_j^{s,1} \right), \qquad C_{\text{hard}}:\; v_j^{t,2}\ \text{is fixed}$$

wherein $v_j^{s,2}$ refers to the position coordinates of the jth vertex v of the standard face when the standard face has the second expression, and $v_j^{s,1}$ refers to the position coordinates of the jth vertex v of the standard face when the standard face has the first expression. Referring to fig. 2(a), the figure is a schematic diagram of the standard face when it has the first expression; referring to fig. 2(b), the figure is a schematic diagram of the standard face when it has the second expression. $v_j^{t,2}$ refers to the position coordinates of the jth vertex v of the target face when it has the target expression, and $v_j^{t,1}$ refers to the position coordinates of the jth vertex v of the target face when it has the to-be-processed expression. Referring to fig. 3, the figure is a schematic diagram of the target face when it has the to-be-processed expression. The constraint $C_{\text{hard}}$ fixes the value of $v_j^{t,2}$.

Alternatively, when $v_{\text{diff}} \le \text{threshold}$, one may directly let $v_j^{t,2} = v_j^{t,1}$. Of course, although the former approach is more computationally intensive than this one, the target face it synthesizes is more accurate. It should be noted that the first expression data corresponding to the constraint terms also participate in the calculation of the objective function, since these data affect the other data.
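Expressed as code, the selection of the constraint vertices might look like this (a sketch in NumPy under the array conventions above; the threshold value and all names are illustrative):

```python
import numpy as np

def hard_constraints(std_first, std_second, tgt_pending, threshold=1e-3):
    """Select vertices whose motion amplitude on the standard face is small
    and fix their target-expression coordinates (the first expression data).

    All face arguments are (N, 3) vertex coordinate arrays in one-to-one
    vertex correspondence. Returns the constrained vertex indices and their
    fixed position coordinates under the target expression.
    """
    diff = std_second - std_first              # per-vertex displacement of the standard face
    amplitude = np.linalg.norm(diff, axis=1)   # motion amplitude v_diff per vertex
    idx = np.where(amplitude <= threshold)[0]  # vertices to set as C_hard
    fixed = tgt_pending[idx] + diff[idx]       # add the (small) displacement ...
    # ... or, as the cheaper alternative in the text: fixed = tgt_pending[idx]
    return idx, fixed
```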
Step S105: and constructing a target function according to a preset rule, and taking the first expression data as a constraint condition to obtain second expression data when the value of the target function meets the preset condition and the target face has the target expression.
In this embodiment, an objective function is constructed according to a preset rule, where the objective function is used to solve second expression data when a target face has a target expression, and the second expression data is position coordinates of other vertices of the target face except for a vertex of the first expression data. The target function has the functions that the change trend of the target face is consistent with that of a standard face as much as possible when the target face is in various expressions, and the characteristics of each target face are considered.
In this embodiment, the preset rule may include at least one of the following four rules:
a first rule: and the integral deformation degree of the face when the target face changes from the to-be-processed expression to the target expression is approximately consistent with the integral deformation degree of the face when the standard face changes from the first expression to the second expression.
Referring to fig. 4, in addition to the vertices located in the three-dimensional coordinate system, the target face and the standard face may be represented by triangular faces. A triangular face is composed of three vertices, not any three vertices, but three vertices used to compose the target face or the standard face; that is, the triangular faces of the target face compose the target face, and the triangular faces of the standard face compose the standard face. Each edge of a triangular face has only two vertices. The deformation of the face when the expression changes can be regarded as being realized through the movement of the triangular faces, so the overall deformation degree of the face when the expression changes can be represented by the motion amplitude of each triangular face.
Keeping the overall facial deformation degree of the target face from the to-be-processed expression to the target expression approximately consistent with that of the standard face from the first expression to the second expression aims to make the finally obtained target expression of the target face the same as the second expression of the standard face, for example both smiling or both crying.
In order to embody the above idea, the objective function may be obtained according to a difference between a motion amplitude of a triangular surface when the target face changes from the to-be-processed expression to the target expression and a motion amplitude of a corresponding triangular surface when the standard face changes from the first expression to the second expression.
For example, the objective function may be:

$$E_{\text{motion}} = \sum \left\| Q - T \right\|$$

wherein Q represents the motion amplitude of a triangular face when the standard face changes from the first expression to the second expression:

$$Q = \tilde{V} V^{-1}, \qquad V = \left[ v_2 - v_1,\; v_3 - v_1,\; v_4 - v_1 \right], \qquad \tilde{V} = \left[ \tilde{v}_2 - \tilde{v}_1,\; \tilde{v}_3 - \tilde{v}_1,\; \tilde{v}_4 - \tilde{v}_1 \right]$$

$v_1$, $v_2$ and $v_3$ are the position coordinates of vertex 1, vertex 2 and vertex 3 of the triangular face when the standard face has the first expression. $v_4$ is the position coordinate of a fourth vertex when the standard face has the first expression; the line connecting vertex 4 and vertex 1 (or vertex 2, or vertex 3) is perpendicular to the triangular face, and the distance between vertex 4 and vertex 1 (or vertex 2, or vertex 3) is one unit length. $\tilde{v}_1$, $\tilde{v}_2$ and $\tilde{v}_3$ are the position coordinates of vertex 1, vertex 2 and vertex 3 of the triangular face when the standard face has the second expression, and $\tilde{v}_4$ is the position coordinate of the corresponding vertex 4 when the standard face has the second expression, constructed in the same way.

T represents the motion amplitude of the corresponding triangular face when the target face changes from the to-be-processed expression to the target expression:

$$T = \tilde{V}' \left( V' \right)^{-1}, \qquad V' = \left[ v'_2 - v'_1,\; v'_3 - v'_1,\; v'_4 - v'_1 \right], \qquad \tilde{V}' = \left[ \tilde{v}'_2 - \tilde{v}'_1,\; \tilde{v}'_3 - \tilde{v}'_1,\; \tilde{v}'_4 - \tilde{v}'_1 \right]$$

$v'_1$, $v'_2$ and $v'_3$ are the position coordinates of vertex 1, vertex 2 and vertex 3 of the triangular face when the target face has the to-be-processed expression, and $v'_4$ is the position coordinate of the corresponding fourth vertex, constructed in the same way. $\tilde{v}'_1$, $\tilde{v}'_2$ and $\tilde{v}'_3$ are the position coordinates of vertex 1, vertex 2 and vertex 3 of the triangular face when the target face has the target expression, and $\tilde{v}'_4$ is the position coordinate of the corresponding fourth vertex.

$E_{\text{motion}} = \sum \left\| Q - T \right\|$ sums the norm of the difference between Q and T over all triangular faces.
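The per-triangle motion amplitude above is essentially a deformation gradient; a sketch of computing it follows (NumPy; the fourth vertex is placed one unit along the triangle normal as the text describes, and all names are illustrative):

```python
import numpy as np

def frame(p1, p2, p3):
    """3x3 matrix [v2-v1, v3-v1, v4-v1], where v4 = v1 + the unit normal."""
    n = np.cross(p2 - p1, p3 - p1)
    v4 = p1 + n / np.linalg.norm(n)  # one unit length along the normal
    return np.column_stack([p2 - p1, p3 - p1, v4 - p1])

def motion_amplitude(tri_before, tri_after):
    """Q (or T): V_tilde @ inv(V), mapping the triangle's frame before the
    expression change to its frame after the change."""
    return frame(*tri_after) @ np.linalg.inv(frame(*tri_before))

def e_motion(std_tris_1, std_tris_2, tgt_tris_1, tgt_tris_2):
    """E_motion: sum of ||Q - T|| over corresponding triangular faces.
    Each argument is a list of (p1, p2, p3) vertex-coordinate triples."""
    return sum(
        np.linalg.norm(motion_amplitude(s1, s2) - motion_amplitude(t1, t2))
        for s1, s2, t1, t2 in zip(std_tris_1, std_tris_2, tgt_tris_1, tgt_tris_2)
    )
```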
The second rule: the target face formed from the first expression data and the second expression data of the target face is smooth.
In this embodiment, smoothing the target face serves the same purpose as suppressing abrupt noise changes of pixels in an image. The target face can be smoothed via its triangular faces: if every pair of adjacent triangular faces deforms smoothly, the composed target face is also smooth. Adjacent triangular faces are two triangular faces that share an edge.
In order to embody the above idea, the objective function may be obtained according to the difference between the deformation degree of a first triangular face of the target face and the deformation degree of a second triangular face when the target face changes from the to-be-processed expression to the target expression; the first triangular face and the second triangular face are each formed by their respective three vertices, are used to compose the target face, and are adjacent triangular faces.
For example, the objective function may be:

$$E_{\text{smooth}} = \frac{1}{N_{\text{Triangle}}} \sum_{m} \sum_{n = \mathrm{adj}(m)} \left\| T_m - T_n \right\|$$

wherein $n = \mathrm{adj}(m)$ denotes that the triangular face n of the target face is adjacent to the triangular face m of the target face, $T_m$ denotes the T (mentioned above) corresponding to the triangular face m, $T_n$ denotes the T corresponding to the triangular face n, and $N_{\text{Triangle}}$ denotes the number of triangular faces of the target face.
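A sketch of evaluating this smoothness term, assuming the motion-amplitude matrices T from the previous sketch and a precomputed adjacency list (names illustrative):

```python
import numpy as np

def e_smooth(T, adjacency):
    """E_smooth: penalize differences between the deformation matrices of
    adjacent triangular faces of the target face.

    T: list of 3x3 matrices, one per triangular face of the target face.
    adjacency: adjacency[m] lists the indices of faces sharing an edge with m.
    """
    total = sum(
        np.linalg.norm(T[m] - T[n])
        for m in range(len(T))
        for n in adjacency[m]
    )
    return total / len(T)  # normalized by N_Triangle
```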
The third rule: the muscle line shape of the target face when it has the target expression is approximately consistent with the muscle line shape of the standard face when it has the second expression.
The muscle lines of the standard face when it has the second expression conform to the physiological configuration; for example, according to the physiological configuration of the human face, the muscle lines of the face do not intersect. Therefore, making the muscle line shape of the target face when it has the target expression approximately consistent with that of the standard face when it has the second expression aims to ensure that, after the target face with the target expression is synthesized, its muscle lines also conform to the physiological configuration and, for example, do not intersect.
The muscle lines of the face may be represented by a sequence of vertices. Therefore, the objective function may be obtained from a direction difference between a first vector formed by vertex sequences corresponding to muscle lines of the target face and a second vector formed by vertex sequences corresponding to muscle lines of the standard face.
For example, assume that a vertex sequence $\text{Contour} = \{ c_1, c_2, c_3, \ldots, c_K \}$ forms a vector sequence, where $c_1, c_2, c_3, \ldots, c_K$ are the position coordinates of the vertices in the sequence. When the target face has the target expression, the vertex sequence corresponding to a muscle line forms the first vectors; when the standard face has the second expression, the vertex sequence corresponding to the muscle line forms the second vectors. Let

$$\mathrm{Dir}_{k-1} = \frac{c_k - c_{k-1}}{\left\| c_k - c_{k-1} \right\|}$$

then the objective function is

$$E_{\text{muscle}} = \sum_{k=2}^{K} \left\| \widetilde{\mathrm{Dir}}_{k-1} - \mathrm{Dir}'_{k-1} \right\|$$

wherein $\mathrm{Dir}_{k-1}$ is an intermediate parameter, $c_k$ is the kth vertex in the vertex sequence Contour, $c_{k-1}$ is the (k-1)th vertex in the vertex sequence Contour, and $1 < k \le K$. $\widetilde{\mathrm{Dir}}_{k-1}$ is the $\mathrm{Dir}_{k-1}$ corresponding to the target face when it has the target expression, and $\mathrm{Dir}'_{k-1}$ is the $\mathrm{Dir}_{k-1}$ corresponding to the standard face when it has the second expression.
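A sketch of the muscle-line term (NumPy; reading Dir as the normalized segment direction is this sketch's interpretation of the reconstructed formula):

```python
import numpy as np

def directions(contour):
    """Unit direction of each segment c_{k-1} -> c_k of a (K, 3) vertex sequence."""
    seg = np.diff(contour, axis=0)  # c_k - c_{k-1}, shape (K-1, 3)
    return seg / np.linalg.norm(seg, axis=1, keepdims=True)

def e_muscle(tgt_contours, std_contours):
    """E_muscle: sum of direction differences between each muscle line of the
    target face (with the target expression) and the corresponding muscle line
    of the standard face (with the second expression)."""
    return sum(
        np.linalg.norm(directions(t) - directions(s), axis=1).sum()
        for t, s in zip(tgt_contours, std_contours)
    )
```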
The fourth rule: the positional relationship between the non-muscle lines of the target face when it has the target expression is approximately consistent with the positional relationship between the non-muscle lines of the standard face when it has the second expression.
Besides the muscle lines, there are some non-muscle lines, such as the upper eyelid, the lower eyelid, the upper and lower edge lines of the upper lip, and the upper and lower edge lines of the lower lip, and the positional relationships between these lines are very important. In the prior art, the expression data of the target expression of the target face is obtained by adding a fixed vertex motion amplitude to (or subtracting it from) the expression data of the current expression, so non-muscle lines may cross. For example, when the eyes of the standard face go from open to closed, the motion amplitude of the vertices is fixed, and the upper and lower eyelids of the standard face essentially coincide in the closed state. If the eyes of the target face are smaller than those of the standard face, the eyes of the target face obtained with this fixed motion amplitude may over-close, i.e., the upper eyelid ends up below the lower eyelid; if the eyes of the target face are larger than those of the standard face, the eyes obtained with the fixed motion amplitude may fail to close, i.e., a large gap remains between the upper and lower eyelids. In either case, the synthesis of the target face is poor.
To avoid this phenomenon, the fourth rule specifies that the positional relationship between the non-muscle lines of the target face when it has the target expression should be approximately consistent with that of the standard face when it has the second expression. For example, the positional relationship between the upper and lower eyelids of the target face in the closed-eye state should be approximately consistent with that of the standard face in the closed-eye state.
In this embodiment, non-muscle lines may also be represented by vertex sequences. The objective function can be obtained according to a first position coordinate difference of the target face and a second position coordinate difference of the standard face when it has the second expression; the first position coordinate difference is the position coordinate difference of corresponding vertices between the vertex sequence corresponding to a first non-muscle line and the vertex sequence corresponding to a second non-muscle line of the target face when it has the target expression; the second position coordinate difference is the position coordinate difference of corresponding vertices between the vertex sequence corresponding to the first non-muscle line and the vertex sequence corresponding to the second non-muscle line when the standard face has the second expression; and the vertices in the vertex sequence corresponding to the first non-muscle line correspond one-to-one to the vertices in the vertex sequence corresponding to the second non-muscle line.
For example, assume that the vertex sequence corresponding to the upper eyelid is $\text{Contour}_{\text{eye\_up}} = \{ u_1, u_2, u_3, \ldots, u_K \}$, where $u_1, u_2, u_3, \ldots, u_K$ are the position coordinates of the vertices in the sequence, and that the vertex sequence corresponding to the lower eyelid is $\text{Contour}_{\text{eye\_down}} = \{ d_1, d_2, d_3, \ldots, d_K \}$, where $d_1, d_2, d_3, \ldots, d_K$ are the position coordinates of the vertices in the sequence.

The objective function is

$$E_{\text{cross}} = \sum_{k} \left\| \left( \tilde{u}_k - \tilde{d}_k \right) - \left( u'_k - d'_k \right) \right\|$$

wherein $\tilde{u}_k$ is the kth vertex of the vertex sequence $\text{Contour}_{\text{eye\_up}}$ when the target face has the target expression, $\tilde{d}_k$ is the kth vertex of the vertex sequence $\text{Contour}_{\text{eye\_down}}$ when the target face has the target expression, $u'_k$ is the kth vertex of the vertex sequence $\text{Contour}_{\text{eye\_up}}$ when the standard face has the second expression, and $d'_k$ is the kth vertex of the vertex sequence $\text{Contour}_{\text{eye\_down}}$ when the standard face has the second expression, with $1 < k \le K$.
Similarly, an objective function corresponding to the relative position relationship between the lower edge of the upper lip and the upper edge of the lower lip can also be obtained.
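A sketch of this relative-position term for one pair of non-muscle lines such as the eyelids (NumPy; names illustrative):

```python
import numpy as np

def e_cross(tgt_up, tgt_down, std_up, std_down):
    """E_cross: keep the vertex-wise offset between two non-muscle lines of
    the target face (with the target expression) close to the corresponding
    offset on the standard face (with the second expression).

    All arguments are (K, 3) arrays whose kth rows correspond to one another.
    """
    tgt_offset = tgt_up - tgt_down  # first position coordinate difference
    std_offset = std_up - std_down  # second position coordinate difference
    return np.linalg.norm(tgt_offset - std_offset, axis=1).sum()
```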
Any one of the objective functions corresponding to the four rules may be selected, or some or all of them may be selected. If multiple objective functions are selected, they may be weighted to obtain a total objective function. For example, the total objective function can be:

$$E = \alpha_1 E_{\text{motion}} + \alpha_2 E_{\text{smooth}} + \alpha_3 E_{\text{muscle}} + \alpha_4 E_{\text{cross}}$$

wherein $\alpha_1$, $\alpha_2$, $\alpha_3$ and $\alpha_4$ are the weights of the objective functions $E_{\text{motion}}$, $E_{\text{smooth}}$, $E_{\text{muscle}}$ and $E_{\text{cross}}$, respectively. For different target expressions, $\alpha_1$, $\alpha_2$, $\alpha_3$ and $\alpha_4$ can be set to different values so that the synthesized target face having the target expression is more realistic.
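Putting the pieces together, the following sketches the weighted total objective and a generic solve step with the first expression data held fixed (the use of scipy.optimize with L-BFGS-B is an assumption of this sketch; the patent does not prescribe a particular solver):

```python
import numpy as np
from scipy.optimize import minimize

def assemble(x, n_verts, fixed_idx, fixed_coords):
    """Merge the free variables with the hard-constrained vertices
    (the first expression data) into a full (N, 3) vertex array."""
    verts = np.empty((n_verts, 3))
    free_idx = np.setdiff1d(np.arange(n_verts), fixed_idx)
    verts[free_idx] = x.reshape(-1, 3)
    verts[fixed_idx] = fixed_coords
    return verts

def total_energy(x, n_verts, fixed_idx, fixed_coords, weights, terms):
    """E = a1*E_motion + a2*E_smooth + a3*E_muscle + a4*E_cross.
    terms: callables that evaluate each E_* on the full vertex array."""
    verts = assemble(x, n_verts, fixed_idx, fixed_coords)
    return sum(a * term(verts) for a, term in zip(weights, terms))

# Solving for the second expression data (the free vertex coordinates):
# res = minimize(total_energy, x0,
#                args=(n_verts, fixed_idx, fixed_coords, (a1, a2, a3, a4), terms),
#                method="L-BFGS-B")
# second_expression_data = assemble(res.x, n_verts, fixed_idx, fixed_coords)
```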
Of course, it is understood that the design of the four rules and the objective function corresponding to each rule do not limit the present application, and those skilled in the art may design the rules and the objective functions corresponding to the rules according to specific situations.
The value of the objective function meeting the preset condition may mean that the value of the objective function is minimal, or slightly larger than the minimum; the present application does not specifically limit this.
Step S106: and synthesizing the target face with the target expression according to the first expression data and the second expression data of the target face.
In this embodiment, the first expression data is used as a constraint term and the second expression data of the target face is obtained by solving the objective function; together they constitute all the expression data of the target face when it has the target expression, so the synthesized target face having the target expression can be obtained.
In this embodiment, by comparing the expression data of the first expression of the standard face with the expression data of the second expression, the vertices whose motion amplitude is less than or equal to the threshold are selected, and the vertices of the target face matching these vertices of the standard face are found, so as to obtain the expression data corresponding to these vertices of the target face, namely the first expression data. Then, an objective function is constructed according to a preset rule, and with the first expression data as a constraint condition, the second expression data of the target face with the target expression is obtained when the value of the objective function meets the preset condition. The objective function makes the deformation trend of the target face under various expressions as consistent as possible with that of the standard face while taking the characteristics of each individual target face into account. Therefore, compared with the prior art, the target face with the target expression synthesized from the first expression data and the second expression data is closer to the real target face having the target expression.
Based on the expression synthesis method provided by the above embodiment, the embodiment of the present application further provides an expression synthesis device, and the working principle of the expression synthesis device is described in detail below with reference to the accompanying drawings.
Embodiment two:
Referring to fig. 5, this figure is a block diagram of an expression synthesis apparatus according to the second embodiment of the present application.
The expression synthesis apparatus provided in this embodiment includes: an expression data acquisition unit 101 to be processed, a standard face expression data acquisition unit 102, a preset condition expression data acquisition unit 103, a first expression data acquisition unit 104, a second expression data acquisition unit 105 and an expression synthesis unit 106; wherein,
the to-be-processed expression data acquisition unit 101 is configured to acquire expression data of the to-be-processed expression of the target face, where the expression data of the to-be-processed expression is the position coordinates of each vertex when the target face has the to-be-processed expression;
the standard face expression data acquiring unit 102 is configured to acquire expression data of a first expression of a standard face and expression data of a second expression of the standard face, where the expression data of the first expression is position coordinates of vertices of the standard face when the standard face has the first expression, the expression data of the second expression is position coordinates of vertices of the standard face when the standard face has the second expression, the vertices of the target face and the vertices of the standard face correspond to each other one by one, and the first expression of the standard face is the same as a to-be-processed expression of the target face;
the preset condition expression data acquisition unit 103 is configured to obtain expression data meeting a preset condition according to the expression data of the first expression of the standard face and the expression data of the second expression of the standard face, where the expression data meeting the preset condition is a position coordinate of a vertex of which the motion amplitude is smaller than or equal to a threshold;
the first expression data obtaining unit 104 is configured to obtain first expression data when the target face has a target expression by using the expression data of the preset condition, where the target expression is the same as the second expression, and a vertex of the first expression data matches a vertex of the expression data of the preset condition;
the second expression data acquisition unit 105 is configured to construct an objective function according to a preset rule and, taking the first expression data as a constraint condition, obtain the second expression data of the target face with the target expression when the value of the objective function satisfies a preset condition, where the second expression data are the position coordinates of the vertices of the target face other than the vertices of the first expression data;
the expression synthesizing unit 106 is configured to synthesize a target face having the target expression according to the first expression data and the second expression data of the target face.
In this embodiment, by comparing the expression data of the first expression of the standard face with the expression data of the second expression, the vertices whose motion amplitude is less than or equal to the threshold are selected, and the vertices of the target face matching these vertices of the standard face are found, so as to obtain the expression data corresponding to these vertices of the target face, namely the first expression data. Then, an objective function is constructed according to a preset rule, and with the first expression data as a constraint condition, the second expression data of the target face with the target expression is obtained when the value of the objective function meets the preset condition. The objective function makes the deformation trend of the target face under various expressions as consistent as possible with that of the standard face while taking the characteristics of each individual target face into account. Therefore, compared with the prior art, the target face with the target expression synthesized from the first expression data and the second expression data is closer to the real target face having the target expression.
Optionally, the preset rule includes at least one of the following:
The first rule: when the target face changes from the to-be-processed expression to the target expression, the overall deformation degree of the face is approximately consistent with the overall deformation degree of the face when the standard face changes from the first expression to the second expression;
The second rule: a target face formed from the first expression data and the second expression data of the target face is smooth;
The third rule: the muscle line shape of the target face when it has the target expression is approximately consistent with the muscle line shape of the standard face when it has the second expression;
The fourth rule: the positional relationship between the non-muscle lines of the target face when it has the target expression is approximately consistent with the positional relationship between the non-muscle lines of the standard face when it has the second expression.
Optionally, if the preset rule includes the first rule, the objective function is obtained according to the difference between the motion amplitude of a triangular face when the target face changes from the to-be-processed expression to the target expression and the motion amplitude of the corresponding triangular face when the standard face changes from the first expression to the second expression; a triangular face is a face formed by three vertices, the triangular faces of the target face compose the target face, and the triangular faces of the standard face compose the standard face.
Optionally, if the preset rule includes the second rule, the objective function is obtained according to the difference between the deformation degree of a first triangular face of the target face and the deformation degree of a second triangular face when the target face changes from the to-be-processed expression to the target expression; the first triangular face and the second triangular face are each formed by their respective three vertices, are used to compose the target face, and are adjacent triangular faces.
Optionally, if the preset rule includes the third rule, the objective function is obtained according to the direction difference between a first vector formed by the vertex sequence corresponding to a muscle line of the target face when it has the target expression and a second vector formed by the vertex sequence corresponding to the muscle line of the standard face when it has the second expression.
Optionally, if the preset rule includes the fourth rule, the target function is obtained from a first position coordinate difference of the target face and a second position coordinate difference of the standard face when the standard face has the second expression. The first position coordinate difference is the position coordinate difference between corresponding vertices of the vertex sequence of a first non-muscle line and the vertex sequence of a second non-muscle line of the target face when the target face has the target expression; the second position coordinate difference is the position coordinate difference between corresponding vertices of the vertex sequences of the same two non-muscle lines when the standard face has the second expression; each vertex in the vertex sequence of the first non-muscle line corresponds one-to-one to a vertex in the vertex sequence of the second non-muscle line.
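A sketch of the fourth rule follows directly from those definitions: both position coordinate differences are computed vertex by vertex over the paired sequences, and their mismatch is penalized. The equal-length index sequences line_a and line_b for the two non-muscle lines are assumptions of the example.

    def rule4_energy(tgt_target, std_second, line_a, line_b):
        # line_a, line_b: equal-length vertex-index sequences of the first and
        # second non-muscle lines, corresponding vertex by vertex.
        first_diff = tgt_target[line_a] - tgt_target[line_b]
        second_diff = std_second[line_a] - std_second[line_b]
        # The positional relation between the lines on the target face should
        # track the relation on the standard face under the second expression.
        return np.sum((first_diff - second_diff) ** 2)

Any subset of these rule energies could be summed into the target function passed to solve_second_expression in the first sketch, for example energy = lambda v: rule1_energy(tgt_pending, v, std_first, std_second, triangles) + rule2_energy(tgt_pending, v, triangles, adjacent_pairs).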
When introducing elements of various embodiments of the present application, the articles "a," "an," "the," and "said" are intended to mean that there are one or more of the elements. The terms "comprising," "including," and "having" are intended to be inclusive and mean that there may be additional elements other than the listed elements.
It should be noted that, as one of ordinary skill in the art will understand, all or part of the processes of the above method embodiments may be implemented by a computer program instructing related hardware. The computer program may be stored in a computer-readable storage medium, and when executed it may include the processes of the above method embodiments. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like.
The embodiments in the present specification are described in a progressive manner; for identical or similar parts among the embodiments, reference may be made to one another, and each embodiment focuses on its differences from the others. In particular, the apparatus embodiment is described relatively simply because it is substantially similar to the method embodiment, and reference may be made to the relevant descriptions of the method embodiment. The apparatus embodiments described above are merely illustrative, and the units and modules described as separate components may or may not be physically separate. Some or all of the units and modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment, which one of ordinary skill in the art can understand and implement without inventive effort.
The foregoing is directed to embodiments of the present application. Numerous modifications and adaptations may be made by those skilled in the art without departing from the principles of the present application, and such modifications and adaptations are intended to fall within its scope.

Claims (12)

1. An expression synthesis method, characterized in that the method comprises:
acquiring expression data of a to-be-processed expression of a target face, wherein the expression data of the to-be-processed expression is position coordinates of each vertex when the target face has the to-be-processed expression;
acquiring expression data of a first expression of a standard face and expression data of a second expression of the standard face, wherein the expression data of the first expression is position coordinates of each vertex when the standard face has the first expression, the expression data of the second expression is position coordinates of each vertex when the standard face has the second expression, each vertex of a target face corresponds to each vertex of the standard face one by one, and the first expression of the standard face is the same as the to-be-processed expression of the target face;
obtaining expression data meeting a preset condition according to the expression data of the first expression of the standard face and the expression data of the second expression of the standard face, wherein the expression data meeting the preset condition is a position coordinate of a vertex of which the motion amplitude is smaller than or equal to a threshold value;
obtaining first expression data when the target face has a target expression by using the expression data of the preset condition, wherein the target expression is the same as the second expression, and the vertex of the first expression data is matched with the vertex of the expression data of the preset condition;
constructing a target function according to a preset rule, and taking the first expression data as a constraint condition to obtain second expression data of the target face with the target expression when the value of the target function meets the preset condition, wherein the second expression data are position coordinates of the vertexes of the target face other than the vertexes of the first expression data; the value of the target function meeting the preset condition means that the difference between the value of the target function and the minimum value of the target function is below a preset threshold; and the target function is used for solving the second expression data when the target face has the target expression;
and synthesizing the target face with the target expression according to the first expression data and the second expression data of the target face.
2. The method of claim 1, wherein the preset rule comprises at least one of:
a first rule: when the target face changes from the to-be-processed expression to the target expression, the overall deformation degree of the face is approximately consistent with the overall deformation degree of the standard face when it changes from the first expression to the second expression;
a second rule: a target face formed from the first expression data and the second expression data of the target face is smooth;
a third rule: the muscle line shape of the target face with the target expression is approximately consistent with the muscle line shape of the standard face with the second expression;
a fourth rule: the positional relation between the non-muscle lines of the target face when the target face has the target expression is approximately consistent with the positional relation between the non-muscle lines of the standard face when the standard face has the second expression.
3. The method according to claim 2, wherein if the preset rule includes a first rule, the objective function is obtained according to a difference between a movement amplitude of a triangular surface of the target face when the target face changes from the to-be-processed expression to the target expression and a movement amplitude of a corresponding triangular surface of the standard face when the standard face changes from the first expression to the second expression, the triangular surface being a surface formed by three vertices, the triangular surface of the target face being used for composing the target face, and the triangular surface of the standard face being used for composing the standard face.
4. The method according to claim 2, wherein if the preset rule includes a second rule, the objective function is obtained from a difference between a degree of deformation of a first triangular surface and a degree of deformation of a second triangular surface of the target face when the target face changes from the to-be-processed expression to the target expression, the first triangular surface and the second triangular surface being surfaces formed by three respective vertexes, the first triangular surface and the second triangular surface being used for composing the target face, the first triangular surface and the second triangular surface being adjacent triangular surfaces.
5. The method according to claim 2, wherein if the preset rule includes a third rule, the objective function is obtained according to a direction difference between a first vector formed by a vertex sequence corresponding to a muscle line of the target face when the target face has the target expression and a second vector formed by a vertex sequence corresponding to the muscle line of the standard face when the standard face has the second expression.
6. The method according to claim 2, wherein if the preset rule includes a fourth rule, the objective function is obtained according to a first position coordinate difference of the target face and a second position coordinate difference of the standard face when the standard face has a second expression; the first position coordinate difference is the position coordinate difference of corresponding vertexes between a vertex sequence corresponding to a first non-muscle line and a vertex sequence corresponding to a second non-muscle line of the target face when the target face has the target expression; the second position coordinate difference is a position coordinate difference of a corresponding vertex between a vertex sequence corresponding to the first non-muscle line and a vertex sequence corresponding to the second non-muscle line when the standard face has a second expression; and each vertex in the vertex sequence corresponding to the first non-muscle line corresponds to each vertex in the vertex sequence corresponding to the second non-muscle line in a one-to-one manner.
7. An expression synthesis apparatus, characterized in that the apparatus comprises: the system comprises a to-be-processed expression data acquisition unit, a standard face expression data acquisition unit, a preset condition expression data acquisition unit, a first expression data acquisition unit, a second expression data acquisition unit and an expression synthesis unit; wherein,
the to-be-processed expression data acquisition unit is used for acquiring expression data of the to-be-processed expression of the target face, wherein the expression data of the to-be-processed expression is position coordinates of each vertex when the target face has the to-be-processed expression;
the standard face expression data acquisition unit is used for acquiring expression data of a first expression of a standard face and expression data of a second expression of the standard face, wherein the expression data of the first expression is position coordinates of each vertex when the standard face has the first expression, the expression data of the second expression is position coordinates of each vertex when the standard face has the second expression, each vertex of the target face corresponds to each vertex of the standard face one by one, and the first expression of the standard face is the same as the to-be-processed expression of the target face;
the preset condition expression data acquisition unit is used for obtaining expression data meeting a preset condition according to the expression data of the first expression of the standard face and the expression data of the second expression of the standard face, wherein the expression data meeting the preset condition is a position coordinate of a vertex of which the motion amplitude is smaller than or equal to a threshold value;
the first expression data acquisition unit is used for acquiring first expression data when the target face has a target expression by using the expression data of the preset condition, the target expression is the same as the second expression, and the vertex of the first expression data is matched with the vertex of the expression data of the preset condition;
the second expression data acquisition unit is used for constructing a target function according to a preset rule and taking the first expression data as a constraint condition to obtain second expression data of the target face with the target expression when the value of the target function meets a preset condition, wherein the second expression data are position coordinates of the vertexes of the target face other than the vertexes of the first expression data; the value of the target function meeting the preset condition means that the difference between the value of the target function and the minimum value of the target function is below a preset threshold; the target function is used for solving the second expression data when the target face has the target expression;
the expression synthesis unit is used for synthesizing the target face with the target expression according to the first expression data and the second expression data of the target face.
8. The apparatus of claim 7, wherein the preset rule comprises at least one of:
a first rule: when the target face changes from the to-be-processed expression to the target expression, the overall deformation degree of the face is approximately consistent with the overall deformation degree of the standard face when it changes from the first expression to the second expression;
a second rule: a target face formed from the first expression data and the second expression data of the target face is smooth;
a third rule: the muscle line shape of the target face with the target expression is approximately consistent with the muscle line shape of the standard face with the second expression;
a fourth rule: the positional relation between the non-muscle lines of the target face when the target face has the target expression is approximately consistent with the positional relation between the non-muscle lines of the standard face when the standard face has the second expression.
9. The apparatus according to claim 8, wherein if the preset rule includes a first rule, the objective function is obtained according to a difference between a motion amplitude of a triangular surface of the target face when the target face changes from the to-be-processed expression to the target expression and a motion amplitude of a corresponding triangular surface of the standard face when the standard face changes from the first expression to the second expression, the triangular surface being a surface formed by three vertices, the triangular surface of the target face being used for composing the target face, and the triangular surface of the standard face being used for composing the standard face.
10. The apparatus according to claim 8, wherein if the preset rule includes a second rule, the objective function is obtained from a difference between a degree of deformation of a first triangular surface and a degree of deformation of a second triangular surface of the target face when the target face changes from the to-be-processed expression to the target expression, the first triangular surface and the second triangular surface being surfaces formed by three respective vertexes, the first triangular surface and the second triangular surface being used for composing the target face, the first triangular surface and the second triangular surface being adjacent triangular surfaces.
11. The apparatus according to claim 8, wherein if the preset rule includes a third rule, the objective function is obtained according to a direction difference between a first vector formed by a vertex sequence corresponding to a muscle line of the target face when the target face has the target expression and a second vector formed by a vertex sequence corresponding to the muscle line of the standard face when the standard face has the second expression.
12. The apparatus according to claim 8, wherein if the preset rule includes a fourth rule, the objective function is obtained according to a first position coordinate difference of the target face and a second position coordinate difference of the standard face when the standard face has a second expression; the first position coordinate difference is the position coordinate difference of corresponding vertexes between a vertex sequence corresponding to a first non-muscle line and a vertex sequence corresponding to a second non-muscle line of the target face when the target face has the target expression; the second position coordinate difference is a position coordinate difference of a corresponding vertex between a vertex sequence corresponding to the first non-muscle line and a vertex sequence corresponding to the second non-muscle line when the standard face has a second expression; and each vertex in the vertex sequence corresponding to the first non-muscle line corresponds to each vertex in the vertex sequence corresponding to the second non-muscle line in a one-to-one manner.
CN201710271893.7A 2017-04-24 2017-04-24 Expression synthesis method and device Active CN107103646B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710271893.7A CN107103646B (en) 2017-04-24 2017-04-24 Expression synthesis method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710271893.7A CN107103646B (en) 2017-04-24 2017-04-24 Expression synthesis method and device

Publications (2)

Publication Number Publication Date
CN107103646A CN107103646A (en) 2017-08-29
CN107103646B (en) 2020-10-23

Family

ID=59656386

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710271893.7A Active CN107103646B (en) 2017-04-24 2017-04-24 Expression synthesis method and device

Country Status (1)

Country Link
CN (1) CN107103646B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109829277A (en) * 2018-12-18 2019-05-31 深圳壹账通智能科技有限公司 Terminal unlock method, device, computer equipment and storage medium
CN111583372B (en) * 2020-05-09 2021-06-25 腾讯科技(深圳)有限公司 Virtual character facial expression generation method and device, storage medium and electronic equipment
CN114913278A (en) * 2021-06-30 2022-08-16 完美世界(北京)软件科技发展有限公司 Expression model generation method and device, storage medium and computer equipment

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1920880A (en) * 2006-09-14 2007-02-28 浙江大学 Video flow based people face expression fantasy method
CN101311966A (en) * 2008-06-20 2008-11-26 浙江大学 Three-dimensional human face animations editing and synthesis a based on operation transmission and Isomap analysis
KR20100090058A (en) * 2009-02-05 2010-08-13 연세대학교 산학협력단 Iterative 3d head pose estimation method using a face normal vector
CN102479388A (en) * 2010-11-22 2012-05-30 北京盛开互动科技有限公司 Expression interaction method based on face tracking and analysis
CN103035022A (en) * 2012-12-07 2013-04-10 大连大学 Facial expression synthetic method based on feature points
CN103198508A (en) * 2013-04-07 2013-07-10 河北工业大学 Human face expression animation generation method
CN104008564A (en) * 2014-06-17 2014-08-27 河北工业大学 Human face expression cloning method
CN106157372A (en) * 2016-07-25 2016-11-23 深圳市唯特视科技有限公司 A kind of 3D face grid reconstruction method based on video image
CN106204750A (en) * 2016-07-11 2016-12-07 厦门幻世网络科技有限公司 A kind of method and device based on 3D source model editor's 3D object module

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101976453A (en) * 2010-09-26 2011-02-16 浙江大学 GPU-based three-dimensional face expression synthesis method
CN104346824A (en) * 2013-08-09 2015-02-11 汉王科技股份有限公司 Method and device for automatically synthesizing three-dimensional expression based on single facial image

Also Published As

Publication number Publication date
CN107103646A (en) 2017-08-29

Similar Documents

Publication Publication Date Title
KR102241153B1 (en) Method, apparatus, and system generating 3d avartar from 2d image
US11423556B2 (en) Methods and systems to modify two dimensional facial images in a video to generate, in real-time, facial images that appear three dimensional
US11170558B2 (en) Automatic rigging of three dimensional characters for animation
US10198845B1 (en) Methods and systems for animating facial expressions
CN112669447B (en) Model head portrait creation method and device, electronic equipment and storage medium
US8624901B2 (en) Apparatus and method for generating facial animation
US20220157004A1 (en) Generating textured polygon strip hair from strand-based hair for a virtual character
JP7456670B2 (en) 3D face model construction method, 3D face model construction device, computer equipment, and computer program
US11557076B2 (en) Computer generated hair groom transfer tool
CN101324961B (en) Human face portion three-dimensional picture pasting method in computer virtual world
KR102491140B1 (en) Method and apparatus for generating virtual avatar
US20160134840A1 (en) Avatar-Mediated Telepresence Systems with Enhanced Filtering
US20140198121A1 (en) System and method for avatar generation, rendering and animation
CN103208133A (en) Method for adjusting face plumpness in image
CN107103646B (en) Expression synthesis method and device
Onizuka et al. Landmark-guided deformation transfer of template facial expressions for automatic generation of avatar blendshapes
Chai et al. Efficient mesh-based face beautifier on mobile devices
WO2024053235A1 (en) Three-dimensional avatar generation device, three-dimensional avatar generation method, and three-dimensional avatar generation program
Gui et al. Real-Time 3D Facial Subtle Expression Control Based on Blended Normal Maps
Jiang et al. Animating arbitrary topology 3D facial model using the MPEG-4 FaceDefTables
de Carvalho Cruz et al. A review regarding the 3D facial animation pipeline
Liu et al. 2D image deformation based on guaranteed feature correspondence and mesh mapping
Bibliowicz An automated rigging system for facial animation
CN118691723A (en) Avatar generation method and apparatus, electronic device, storage medium, and product
CN117994395A (en) Digital human face asset generation method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20190227

Address after: 361000 Fujian Xiamen Torch High-tech Zone Software Park Innovation Building Area C 3F-A193

Applicant after: Xiamen Black Mirror Technology Co., Ltd.

Address before: 9th Floor, Maritime Building, 16 Haishan Road, Huli District, Xiamen City, Fujian Province, 361000

Applicant before: XIAMEN HUANSHI NETWORK TECHNOLOGY CO., LTD.

GR01 Patent grant