CN111445568B - Character expression editing method, device, computer storage medium and terminal - Google Patents

Character expression editing method, device, computer storage medium and terminal

Info

Publication number
CN111445568B
Authority
CN
China
Prior art keywords
dimensional
face model
perspective projection
expression
face
Prior art date
Legal status
Active
Application number
CN201811623613.5A
Other languages
Chinese (zh)
Other versions
CN111445568A (en)
Inventor
刘更代
Current Assignee
Bigo Technology Singapore Pte Ltd
Original Assignee
Guangzhou Baiguoyuan Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Baiguoyuan Information Technology Co Ltd
Priority to CN201811623613.5A
Publication of CN111445568A
Application granted
Publication of CN111445568B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three-dimensional] image rendering
    • G06T 15/10 Geometric effects
    • G06T 15/20 Perspective computation

Abstract

The invention provides a character expression editing method, a device, a computer storage medium and a terminal, wherein the character expression editing method comprises the following steps: obtaining a face model in a three-dimensional grid, and carrying out weak perspective projection on the face model to obtain a perspective projection on a two-dimensional plane, wherein the perspective projection comprises two-dimensional face feature point positions and two-dimensional anchor point positions corresponding to the two-dimensional face feature point positions; acquiring a two-dimensional adjustment instruction for adjusting a two-dimensional anchor point position, and adjusting the two-dimensional face feature point positions according to the two-dimensional adjustment instruction to obtain an adjusted perspective projection and the two-dimensional displacement of the corresponding two-dimensional anchor point; mapping the adjusted perspective projection and the two-dimensional displacement onto the face model to obtain a changed face model; and acquiring a three-dimensional adjustment instruction for the changed face model, and adjusting the changed face model according to the three-dimensional adjustment instruction to obtain a three-dimensional target expression. By the character expression editing method, a more real and natural three-dimensional target expression can be obtained.

Description

Character expression editing method, device, computer storage medium and terminal
Technical Field
The invention relates to the technical field of computers, in particular to a method and a device for editing a character expression, a storage medium and a terminal.
Background
With the popularization of image acquisition equipment, digital face images are easier to obtain, and people increasingly enjoy interesting and novel face images. Processing and transforming the face in a portrait picture so that it moves and produces various expressions is an interesting and challenging task. There are many ways of driving facial expressions: the expression of a real actor can directly drive the character in the picture, or an expression animation can be played back from a prefabricated parameter sequence, but the expressions generated in these ways are not real and vivid enough. Therefore, how to generate natural and vivid facial expressions is a current problem.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a character expression editing method, a device, a storage medium and a terminal, which are used for solving the problem of lack of realism of the character expression edited by the prior art.
The character expression editing method provided by the invention comprises the following steps:
obtaining a face model in a three-dimensional grid, and carrying out weak perspective projection on the face model to obtain perspective projection of the face model on a two-dimensional plane, wherein the perspective projection comprises two-dimensional face feature point positions and two-dimensional anchor point positions corresponding to the two-dimensional face feature point positions;
Acquiring a two-dimensional adjustment instruction for adjusting the position of the two-dimensional anchor point, and adjusting the position of the two-dimensional face feature point according to the two-dimensional adjustment instruction to obtain an adjusted perspective projection and a two-dimensional displacement of the corresponding two-dimensional anchor point;
mapping the adjusted perspective projection and the two-dimensional displacement to a face model in the three-dimensional grid to obtain a changed face model;
and acquiring a three-dimensional adjustment instruction for adjusting the changed face model, and adjusting the changed face model according to the three-dimensional adjustment instruction to obtain a three-dimensional target expression.
Further, the obtaining the face model in the three-dimensional grid includes:
acquiring a face model, and determining an expression matrix B and a natural expression mean value m of the face model in a three-dimensional grid according to a long vector x=m+Bw formed by coordinates of the face model in the three-dimensional grid; wherein w is the weight of the expression matrix B.
Further, mapping the adjusted perspective projection and the two-dimensional displacement onto the face model in the three-dimensional grid to obtain a changed face model, including:
according to the mapping relation between the perspective projection and the face model, acquiring a three-dimensional displacement d₃ corresponding to the two-dimensional displacement in the face model;
Obtaining the value of an objective function E:
E(w) = ‖B̂(w − w₀) − d₃‖₂² + α‖w − w₀‖₂² + β‖w‖₁
wherein B̂ is the augmentation matrix of the expression matrix B, α is a first coefficient for controlling the smoothing between the frames before and after the expression change, β is a second coefficient for controlling the reliability of the numerical solution, and w₀ is the weight of the expression matrix B of the previous frame;
determining the minimum value of the objective function E, and determining the value of the weight w according to the minimum value;
and obtaining the changed face model according to the minimum value and the value of the corresponding weight w.
Further, the obtaining, according to the mapping relationship between the perspective projection and the face model, of the three-dimensional displacement d₃ corresponding to the two-dimensional displacement in the face model comprises:
obtaining a displacement vector of the two-dimensional displacement on an XY plane;
according to the mapping relation between the perspective projection and the face model, obtaining the three-dimensional displacement d₃ of the two-dimensional displacement in XYZ space.
Further, the obtaining of the face model in the three-dimensional grid and the obtaining, according to weak perspective projection, of the perspective projection of the face model on the two-dimensional plane, wherein the perspective projection comprises a two-dimensional face feature point position and a two-dimensional anchor point position corresponding to the two-dimensional face feature point position, comprises the following steps:
Acquiring a face model in a three-dimensional grid, and determining a three-dimensional anchor point position corresponding to the three-dimensional face feature point position according to the face model;
and obtaining perspective projection of the face model on a two-dimensional plane according to weak perspective projection, wherein the perspective projection comprises a two-dimensional face characteristic point position corresponding to the three-dimensional face characteristic point position and a two-dimensional anchor point position corresponding to the three-dimensional anchor point position.
Further, before the weak perspective projection is performed on the face model, the method further includes:
a scaling factor of the weak perspective projection is determined.
Further, after the obtaining of a three-dimensional adjustment instruction for adjusting the changed face model and the adjusting of the changed face model according to the three-dimensional adjustment instruction to obtain a three-dimensional target expression, the method further includes:
carrying out weak perspective projection on the three-dimensional target expression to obtain a two-dimensional projection image;
and acquiring an adjustment instruction, and adjusting the two-dimensional projection image by using the adjustment instruction to obtain a two-dimensional target expression.
The invention also provides a character expression editing device, which comprises:
the projection module is used for acquiring a face model in the three-dimensional grid, carrying out weak perspective projection on the face model to obtain perspective projection of the face model on a two-dimensional plane, wherein the perspective projection comprises two-dimensional face feature point positions and two-dimensional anchor point positions corresponding to the two-dimensional face feature point positions;
The two-dimensional adjustment module is used for acquiring a two-dimensional adjustment instruction for adjusting the position of the two-dimensional anchor point, and adjusting the position of the two-dimensional face feature point according to the two-dimensional adjustment instruction to obtain an adjusted perspective projection and a two-dimensional displacement of the corresponding two-dimensional anchor point;
the mapping module is used for mapping the adjusted perspective projection and the two-dimensional displacement to the face model in the three-dimensional grid to obtain a changed face model;
the three-dimensional adjustment module is used for acquiring a three-dimensional adjustment instruction for adjusting the changed face model, and adjusting the changed face model according to the three-dimensional adjustment instruction to obtain a three-dimensional target expression.
Further, the projection module includes:
the three-dimensional model parameter determining unit is used for obtaining a face model and determining an expression matrix B and a natural expression mean value m of the face model in a three-dimensional grid according to a long vector x=m+Bw formed by coordinates of the face model in the three-dimensional grid; wherein w is the weight of the expression matrix B.
Further, the mapping module includes:
a three-dimensional displacement calculation unit, configured to obtain a three-dimensional displacement d₃ corresponding to the two-dimensional displacement in the face model according to the mapping relation between the perspective projection and the face model;
The objective function calculating unit is used for obtaining the value of the objective function E:
E(w) = ‖B̂(w − w₀) − d₃‖₂² + α‖w − w₀‖₂² + β‖w‖₁
wherein B̂ is the augmentation matrix of the expression matrix B, α is a first coefficient for controlling the smoothing between the frames before and after the expression change, β is a second coefficient for controlling the reliability of the numerical solution, and w₀ is the weight of the expression matrix B of the previous frame;
the objective function parameter determining unit is used for determining the minimum value of the objective function E and determining the value of the weight w according to the minimum value;
and the three-dimensional face model determining unit is used for obtaining a changed face model according to the minimum value and the value of the corresponding weight w.
Further, the three-dimensional displacement calculation unit includes:
a two-dimensional displacement acquisition subunit, configured to acquire a displacement vector of the two-dimensional displacement in an XY plane;
a three-dimensional displacement calculation subunit, configured to obtain the three-dimensional displacement d₃ of the two-dimensional displacement in XYZ space according to the mapping relationship between the perspective projection and the face model.
Further, the projection module includes:
the three-dimensional anchor point position determining unit is used for obtaining a face model in the three-dimensional grid and determining a three-dimensional anchor point position corresponding to the three-dimensional face characteristic point position according to the face model;
the two-dimensional face feature point and two-dimensional anchor point position determining unit is used for obtaining perspective projection of the face model on a two-dimensional plane according to weak perspective projection, wherein the perspective projection comprises a two-dimensional face feature point position corresponding to the three-dimensional face feature point position and a two-dimensional anchor point position corresponding to the three-dimensional anchor point position.
Further, the projection module further includes:
and the weak perspective projection parameter determining unit is used for determining a scaling coefficient of the weak perspective projection before the weak perspective projection is carried out on the face model.
Further, the character expression editing apparatus further includes:
the target expression projection module is used for carrying out weak perspective projection on the three-dimensional target expression to obtain a two-dimensional projection image;
and the two-dimensional expression correction module is used for acquiring an adjustment instruction, and adjusting the two-dimensional projection image by using the adjustment instruction to obtain a two-dimensional target expression.
The present invention also proposes a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the character expression editing method described in any one of the foregoing.
The invention also proposes a terminal comprising:
One or more processors;
storage means for storing one or more programs,
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the character expression editing method of any of the foregoing.
The invention has the following beneficial effects:
1. according to the invention, the three-dimensional face model is projected onto a two-dimensional plane, the facial expression is adjusted through the two-dimensional anchor point position of the two-dimensional plane, the adjusted two-dimensional perspective projection is mapped onto the three-dimensional face model, and the three-dimensional face model can be continuously adjusted, so that a more real and natural three-dimensional target expression is obtained.
2. Through the objective function E, the invention can also avoid the performance consumption caused by repeated matrix decomposition due to the continuous change of the head rotation matrix R when the face model rotates; moreover, a boundary constraint is added through the first coefficient α, so that the expression change of the face model is more continuous; and through the sparse constraint of the second coefficient β and the L1 regularization term, the obtained weight w has fewer non-zero values, so that the facial expression in the animation is not stiff, further improving the natural change effect of the facial expression.
Additional aspects and advantages of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a flowchart of a first embodiment of a method for editing a character expression according to the present invention;
FIG. 2a is an embodiment of a planar character expression to be edited;
FIG. 2b is an embodiment of three-dimensional meshing of the face model of FIG. 2a;
FIG. 2c is an embodiment of the face model of FIG. 2b with the head rotated;
FIG. 2d is an embodiment of editing a character expression with two-dimensional adjustment instructions;
FIG. 3 is a flowchart illustrating a method for editing a character expression according to another embodiment of the present invention;
FIG. 4 is a flowchart of another embodiment of a method for editing a character expression according to the present invention;
FIG. 5 is a block diagram of another embodiment of a device for editing a character expression according to the present invention;
FIG. 6 is a block diagram of another embodiment of a device for editing a character expression according to the present invention;
fig. 7 is a schematic structural diagram of a terminal embodiment of the present invention.
Detailed Description
Embodiments of the present invention are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative only and are not to be construed as limiting the invention.
It will be understood by those within the art that, unless expressly stated otherwise, the singular forms "a," "an," and "the" are intended to include the plural forms as well, and that "first" and "second" are used herein merely to distinguish between technical features and do not limit the order, quantity, etc. of those features. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It will be understood by those skilled in the art that all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs unless defined otherwise. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
As used herein, a "terminal" includes both a device of a wireless signal receiver having no transmitting capability and a device of receiving and transmitting hardware having receiving and transmitting hardware capable of performing bi-directional communications over a bi-directional communication link, as will be appreciated by those skilled in the art. Such a device may include: a cellular or other communication device having a single-line display or a multi-line display or a cellular or other communication device without a multi-line display; a PCS (Personal Communications Service, personal communication system) that may combine voice, data processing, facsimile and/or data communication capabilities; a PDA (Personal Digital Assistant ) that can include a radio frequency receiver, pager, internet/intranet access, web browser, notepad, calendar and/or GPS (Global Positioning System ) receiver; a conventional laptop and/or palmtop computer or other appliance that has and/or includes a radio frequency receiver. As used herein, "terminal," "terminal device" may be portable, transportable, installed in a vehicle (aeronautical, maritime, and/or land-based), or adapted and/or configured to operate locally and/or in a distributed fashion, to operate at any other location(s) on earth and/or in space. The "terminal" and "terminal device" used herein may also be a communication terminal, a network access terminal, and a music/video playing terminal, for example, may be a PDA, a MID (Mobile Internet Device ), and/or a mobile phone with a music/video playing function, and may also be a smart tv, a set top box, and other devices.
In existing video entertainment, it is often necessary to edit some dynamic character expressions from existing photographs or 3D models to achieve film production, game animation production, or to generate interesting and fun character expressions, etc. In order to edit a more realistic character expression, the invention provides a character expression editing method, as shown in fig. 1, comprising the following steps:
step S10: obtaining a face model in a three-dimensional grid, and carrying out weak perspective projection on the face model to obtain perspective projection of the face model on a two-dimensional plane, wherein the perspective projection comprises two-dimensional face feature point positions and two-dimensional anchor point positions corresponding to the two-dimensional face feature point positions;
step S20: acquiring a two-dimensional adjustment instruction for adjusting the position of the two-dimensional anchor point, and adjusting the position of the two-dimensional face feature point according to the two-dimensional adjustment instruction to obtain an adjusted perspective projection and a two-dimensional displacement of the corresponding two-dimensional anchor point;
step S30: mapping the adjusted perspective projection and the two-dimensional displacement to a face model in the three-dimensional grid to obtain a changed face model;
step S40: and acquiring a three-dimensional adjustment instruction for adjusting the changed face model, and adjusting the changed face model according to the three-dimensional adjustment instruction to obtain a three-dimensional target expression.
Wherein, each step is specifically as follows:
step S10: obtaining a face model in a three-dimensional grid, and carrying out weak perspective projection on the face model to obtain perspective projection of the face model on a two-dimensional plane, wherein the perspective projection comprises two-dimensional face feature point positions and two-dimensional anchor point positions corresponding to the two-dimensional face feature point positions.
Orthographic projection and perspective projection are two common projection approaches. Orthographic projection projects the coordinates of each vertex of a three-dimensional object vertically onto a designated plane, with projection lines perpendicular to the projection plane and parallel to each other; however, the projected result conveys no sense of distance. The basic model of weak perspective projection comprises two parts: a viewpoint and a view plane. The viewpoint represents the position of the user, or the angle at which the three-dimensional object is observed; the view plane is the two-dimensional plane on which a perspective view of the three-dimensional object is rendered. Weak perspective projection has characteristics such as vanishing points, a sense of distance, and regular size changes: distant objects appear smaller and near objects appear larger, which is a reasonable approximation of the real visual system. Therefore, weak perspective projection is adopted to project the three-dimensional face model onto a two-dimensional plane, obtaining the perspective projection of the face model on the two-dimensional plane.
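As a concrete illustration of this projection step, the following Python sketch applies a weak perspective projection to a set of model vertices. It is illustrative only, not part of the patented method: the function name, the use of NumPy, and all numeric values (pose R, t and scale s) are assumptions of this example.

```python
import numpy as np

def weak_perspective_project(vertices, R, t, s):
    """Project N x 3 mesh vertices onto a 2D view plane.

    Weak perspective: apply the rigid head pose (rotation R and
    translation t), then discard the depth coordinate and apply a
    uniform scale s. This closely approximates true perspective when
    the object's depth range is small relative to its distance.
    """
    posed = vertices @ R.T + t   # rigid transform of the model
    return s * posed[:, :2]      # keep x and y only, drop z

# Illustrative use: project three feature points with no head rotation.
R = np.eye(3)
t = np.array([0.0, 0.0, 5.0])
s = 0.8
feature_points_3d = np.array([[0.0, 1.0, 0.2],
                              [0.5, 0.9, 0.1],
                              [-0.5, 0.9, 0.1]])
feature_points_2d = weak_perspective_project(feature_points_3d, R, t, s)
print(feature_points_2d)   # 3 x 2 positions on the view plane
```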
The perspective projection comprises two-dimensional face feature point positions and two-dimensional anchor point positions corresponding to the two-dimensional face feature point positions, so that the two-dimensional face feature point positions in the two-dimensional plane can be adjusted through the two-dimensional anchor point positions, achieving the purpose of adjusting the facial expression in the two-dimensional plane. A feature of one part of the face may correspond to a plurality of feature points, and each feature point may correspond to its own anchor point; the two-dimensional face feature point positions and the two-dimensional anchor point positions can be in one-to-one correspondence, and when the user changes one of the two-dimensional anchor point positions, the corresponding two-dimensional face feature point position changes correspondingly, so that the user can adjust the expression in multiple directions through multiple two-dimensional anchor points. For example, the mouth shape of the face model may correspond to a plurality of feature points, from which a plurality of corresponding anchor points can be obtained, and the user may change the mouth shape of the face model through any one or more of these anchor points, thereby changing the expression of the face model.
The obtained face model in the three-dimensional grid can be a pre-stored three-dimensional face model or a face model established in the three-dimensional grid from a planar picture. In the prior art, the Blendshape technology, i.e. the shape fusion animation technology, is generally adopted to build and edit character expressions. This technology expresses a semantic model of a human face as a linear combination mathematical model, so that the fitting problem of the facial expression is converted into an optimization problem on the mathematical model. In Blendshape, the expression of the three-dimensional target object is controlled by a plurality of values corresponding to Blendshape deformations; for example, the face may be provided with several tens of Blendshape deformation values, each controlling only one facial detail, such as the corner of an eye, the mouth, or the corner of the mouth. For instance, a Blendshape deformation value of 0 for controlling an eyelid may indicate a closed eye, and a value of 100 a fully open eye. More complex facial expressions can be synthesized by combining tens of Blendshape deformation values. In one embodiment of the present invention, a face model may be pre-placed in a three-dimensional grid, and a semantic model of a facial expression may be defined as x = m + Bw, wherein x represents a long vector composed of the coordinates in the three-dimensional grid of the face model under any facial expression, m represents the natural expression mean value of the face model in the three-dimensional grid (which may also be called the face mean value), B is the expression matrix of the face model in the three-dimensional grid, and w is the weight of the expression matrix B. When the face model is built in the three-dimensional grid, a fixed expression matrix B and natural expression mean value m can be determined; that is, given a group of three-dimensional Blendshape models, a corresponding expression matrix B can be calculated. The expression change of the face model in the three-dimensional grid is then determined by the weight w: when the weight w changes dynamically over a process, the facial expression changes correspondingly, and each update of the value of the weight w changes the facial expression of the face model once.
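The linear blendshape model x = m + Bw can be sketched in a few lines of NumPy. The dimensions, the randomly generated matrix B, and the choice of which weight to drive are illustrative placeholders, not data from the patent:

```python
import numpy as np

# Illustrative dimensions: V mesh vertices flattened into a 3V-long
# vector, K blendshape basis expressions. Values are placeholders.
V, K = 1000, 46
rng = np.random.default_rng(0)
m = rng.standard_normal(3 * V)        # natural expression mean (face mean)
B = rng.standard_normal((3 * V, K))   # expression matrix: one column per blendshape
w = np.zeros(K)                       # weights; all zeros give the neutral face

w[3] = 0.7                            # e.g. drive one detail (such as an eyelid) to 70%
x = m + B @ w                         # long vector of the deformed model's coordinates
vertices = x.reshape(V, 3)            # back to per-vertex XYZ coordinates
```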
In order to adjust the expression of the face model in the three-dimensional grid conveniently, the invention also provides the following embodiment with reference to fig. 3: the obtaining the face model in the three-dimensional grid comprises the following steps:
step S11: acquiring a face model, and determining an expression matrix B and a natural expression mean value m of the face model in a three-dimensional grid according to a long vector x=m+Bw formed by coordinates of the face model in the three-dimensional grid; wherein w is the weight of the expression matrix B.
In this embodiment, the step S10 is split into the step S11 and the step S12, so that when the face model is built in the three-dimensional grid, the expression matrix B and the natural expression mean value m can be determined, so that the weight w is calculated and the corresponding face model after the change is obtained according to the obtained parameters in the subsequent adjustment step, and the three-dimensional model parameters of the face model are directly adjusted in the three-dimensional grid.
Step S20: and acquiring a two-dimensional adjustment instruction for adjusting the position of the two-dimensional anchor point, and adjusting the position of the two-dimensional face feature point according to the two-dimensional adjustment instruction to obtain the adjusted perspective projection and the two-dimensional displacement of the corresponding two-dimensional anchor point.
In connection with the character embodiment shown in figs. 2a to 2d, the present invention can build the three-dimensional grid shown in fig. 2b from the picture in fig. 2a; when the head in the three-dimensional grid rotates, the perspective projection of the head on the two-dimensional plane changes correspondingly, as shown in fig. 2c; and the user can adjust the positions of the two-dimensional face feature points through the two-dimensional anchor points on the two-dimensional plane, as shown in fig. 2d, thereby changing the character expression of the perspective projection.
When the user adjusts a two-dimensional anchor point position on the two-dimensional plane, the two-dimensional face feature point position corresponding to that anchor point changes correspondingly. The two-dimensional adjustment instruction can comprise one or more two-dimensional anchor point position adjustment instructions, and can further comprise instructions for adding or removing two-dimensional anchor points, so that the user can move the two-dimensional face feature points to the required positions as quickly as possible and achieve the required expression effect. The two-dimensional adjustment instruction can be input through external equipment such as a mouse and a keyboard, or through virtual means such as virtual keys or gesture instructions on a touch screen. When the user makes an adjustment, an adjusted perspective projection can be generated and the two-dimensional displacement d₂ of each two-dimensional anchor point recorded, for the subsequent mapping of the two-dimensional displacement d₂ into the three-dimensional grid. When the two-dimensional adjustment instruction is implemented by dragging a two-dimensional anchor point of the face area, an instant and vivid facial expression is obtained in a visible way.
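A minimal sketch of how a two-dimensional adjustment instruction might be handled, recording the displacement d₂ of each dragged anchor for later mapping into the three-dimensional grid; the class name and structure are assumptions of this example, not the patent's implementation:

```python
import numpy as np

class AnchorEditor:
    """Tracks two-dimensional anchor points and records the displacement
    d2 produced by each drag, for later mapping into the 3D grid."""

    def __init__(self, anchors_2d):
        self.anchors_2d = np.asarray(anchors_2d, dtype=float)
        self.displacements = np.zeros_like(self.anchors_2d)  # accumulated d2

    def drag(self, index, new_position):
        """Apply a two-dimensional adjustment instruction to one anchor."""
        d2 = np.asarray(new_position, dtype=float) - self.anchors_2d[index]
        self.displacements[index] += d2
        self.anchors_2d[index] = new_position
        return d2

# Illustrative use: drag the first anchor 4 pixels right and 2 pixels down.
editor = AnchorEditor([[120.0, 80.0], [140.0, 82.0]])
d2 = editor.drag(0, [124.0, 82.0])   # returns array([4., 2.])
```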
Step S30: and mapping the adjusted perspective projection and the two-dimensional displacement to the face model in the three-dimensional grid to obtain a changed face model.
The projection mathematical relationship between the face model and the perspective projection follows from step S10; that is, once the viewpoint and the view plane of the weak perspective projection are determined, the mapping relationship between the face model and the perspective projection is determined. For example, the projection relationship of one weak perspective projection may be q = HrL + t, where q corresponds to the position coordinates of a two-dimensional face feature point, H is a weak perspective projection matrix, L is the three-dimensional position coordinates of the corresponding feature point of the face model in the three-dimensional grid, r is a rotation matrix constructed from rotation Euler angles, and t is a translation vector. The mapping function model of this mapping relationship can refer to perspective projection models in the prior art and is not described here. When the face model in the corresponding three-dimensional grid is recovered from the perspective projection on the two-dimensional plane and the two-dimensional displacement d₂, a plurality of three-dimensional grid face models corresponding to the perspective projection may be obtained, because a dimension is added; in a specific application, constraint conditions for the optimal face model can be preset according to requirements, so as to obtain the face model that best meets the user's needs.
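For illustration, the relation q = HrL + t can be written out with a concrete weak perspective projection matrix H; the specific form of H used here (a uniform scale s folded into a 3-to-2 projection) and all values are assumptions of this example, not the patent's exact parameters:

```python
import numpy as np

# Illustrative form of the relation q = HrL + t under weak perspective:
# H folds the uniform scale s into a 3-to-2 projection that drops depth.
s = 0.8
H = s * np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])      # weak perspective projection matrix
r = np.eye(3)                            # rotation from Euler angles (identity here)
t = np.array([160.0, 120.0])             # translation on the view plane
L = np.array([0.5, 0.9, 0.1])            # 3D position of one feature point

q = H @ (r @ L) + t                      # 2D feature point position
```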
The changed face model can comprise a three-dimensional anchor point, the three-dimensional anchor point can be generated according to the changed face model, the three-dimensional anchor point can also be generated in the step S10, and corresponding updating is carried out according to the change of the position of the two-dimensional anchor point in the step S20 so as to obtain the position of the three-dimensional anchor point after the change.
Step S40: and acquiring a three-dimensional adjustment instruction for adjusting the changed face model, and adjusting the changed face model according to the three-dimensional adjustment instruction to obtain a three-dimensional target expression.
When the adjusted perspective projection is mapped to the projected face model formed in the three-dimensional grid, the face model formed by projection may not be in a state required by the user or in an optimal state according with the face expression, so that the user can continuously adjust the face model through a further three-dimensional adjustment instruction to obtain the required three-dimensional target expression. The three-dimensional adjustment instruction can be input through a three-dimensional anchor point on the face model, when a user adjusts the position of the three-dimensional anchor point, the three-dimensional feature point of the face model changes correspondingly, namely the user can see the expression of the obtained adjusted face model, until the three-dimensional target expression required by the user is obtained.
According to the embodiment of the invention, the three-dimensional face model is projected onto the two-dimensional plane, the facial expression is adjusted through the two-dimensional anchor point position of the two-dimensional plane, the adjusted two-dimensional perspective projection is mapped onto the three-dimensional face model, and the adjustment can be continuously carried out on the three-dimensional face model, so that the more real and natural expression is obtained.
In some expression editing models, the expression of the character model can be edited by directly moving a three-dimensional anchor point, and the three-dimensional face model can also be projected onto a two-dimensional plane for editing; in the two-dimensional editing process, however, the sum of the distances between the anchor points and the projections of the three-dimensional feature points of the face model onto the image plane needs to be minimized, so that the user can change the positions of the three-dimensional feature points by adjusting the anchor point positions on the two-dimensional plane, thereby editing the expression of the face model. The adjustment process may be expressed by the following equation:
E_fit = ‖PRB̂w + t − (q + d₂)‖₂²
wherein E_fit is the sum of the distance differences between the two-dimensional anchor points and the projections of the three-dimensional feature points of the face model onto the two-dimensional plane, P is the projection matrix from the three-dimensional face model onto the two-dimensional plane, R is the head rotation matrix, B̂ is the augmentation matrix of the expression matrix B, w is the weight of the expression matrix B, q is the two-dimensional face feature point position before adjustment, t is the translation on the two-dimensional plane, and d₂ is the displacement of the anchor point on the two-dimensional plane, such as a two-dimensional displacement input by the user through a drag instruction. When the expression of the three-dimensional face model is adjusted through this equation, the target three-dimensional expression can be obtained by quadratic programming or by solving a linear least squares problem. However, using this equation directly creates two problems: 1. after the three-dimensional face model rotates, the head rotation matrix R changes, and the matrix of the face model in the three-dimensional coordinate system changes with it, so that every time the user drags an anchor point, the matrix of the model must be decomposed again; when the user adjusts continuously, the computational load on the equipment is huge and the performance consumption is serious; 2. when the expression of the face model is edited, the weight w may jitter and overshoot (over-shoot), causing the optimization of the expression editing to fail.
The present invention proposes another embodiment: referring to fig. 3, the mapping the adjusted perspective projection and the two-dimensional displacement to the face model in the three-dimensional grid to obtain a changed face model includes:
step S31: according to the mapping relation between the perspective projection and the face model, acquiring a three-dimensional displacement d corresponding to the two-dimensional displacement in the face model 3
Step S32: obtaining the value of an objective function E:
E(w) = ‖B̂(w − w₀) − d₃‖₂² + α‖w − w₀‖₂² + β‖w‖₁
wherein B̂ is the augmentation matrix of the expression matrix B, α is a first coefficient for controlling the smoothing between the frames before and after the expression change, β is a second coefficient for controlling the reliability of the numerical solution, and w₀ is the weight of the expression matrix B of the previous frame;
step S33: determining the minimum value of the objective function E, and determining the value of the weight w according to the minimum value;
step S34: and obtaining the changed face model according to the minimum value and the value of the corresponding weight w.
Through the objective function E, this embodiment can avoid the performance consumption caused by repeated matrix decomposition due to the continuous change of the head rotation matrix R when the head of the face model rotates in three dimensions; moreover, a boundary constraint is added through the first coefficient α, and when the weight w ∈ [0, 1], the expression change of the face model is more continuous; and through the sparse constraint of the second coefficient β and the L1 regularization term, the obtained weight w has fewer non-zero values, so that the facial expression in the animation is not stiff, further improving the natural change effect of the facial expression.
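A sketch of how the weight w could be solved for under this objective, assuming E takes the quadratic-plus-L1 form reconstructed above; the use of SciPy, the function name, and the default values of α and β are assumptions of this example, not the patent's implementation. On the box w ∈ [0, 1] the L1 term reduces to a linear term, so a smooth bounded solver applies directly:

```python
import numpy as np
from scipy.optimize import minimize

def solve_weights(B_hat, d3, w0, alpha=0.5, beta=0.1):
    """Minimize E(w) = ||B_hat (w - w0) - d3||^2 + alpha ||w - w0||^2
    + beta ||w||_1 subject to the box constraint 0 <= w <= 1.

    Because w >= 0 on the feasible box, ||w||_1 equals sum(w), so the
    objective is smooth there and L-BFGS-B can be used directly.
    """
    def energy(w):
        r = B_hat @ (w - w0) - d3
        return r @ r + alpha * np.sum((w - w0) ** 2) + beta * np.sum(w)

    def grad(w):
        return (2 * B_hat.T @ (B_hat @ (w - w0) - d3)
                + 2 * alpha * (w - w0) + beta)

    res = minimize(energy, w0, jac=grad, method="L-BFGS-B",
                   bounds=[(0.0, 1.0)] * w0.size)
    return res.x   # updated blendshape weights for the changed face model
```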
In another embodiment of the invention: the three-dimensional displacement d corresponding to the two-dimensional displacement in the face model is obtained according to the mapping relation between the perspective projection and the face model 3 Comprising:
obtaining a displacement vector of the two-dimensional displacement on an XY plane;
according to the mapping relation between the perspective projection and the face model, obtaining the three-dimensional displacement d₃ of the two-dimensional displacement in XYZ space.
When the face model is projected onto the two-dimensional plane, the z-axis coordinate on the two-dimensional plane can be set to zero; when the perspective projection is mapped back onto the face model, the corresponding z-axis coordinate value is added. This simplifies the switching of operations between coordinate systems, further reducing device performance loss.
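A sketch of this lifting step under the stated convention (zero z-axis displacement in the view frame); the function name and the specific inverse mapping d₂/s are assumptions consistent with the weak perspective model sketched above, not the patent's exact formulas:

```python
import numpy as np

def lift_displacement(d2, R, s):
    """Map a two-dimensional anchor displacement d2 into XYZ space.

    Under weak perspective with scale s, a shift of d2 on the view
    plane corresponds to a shift of d2 / s in the rotated (camera)
    frame with zero depth change; applying R^T expresses the same
    displacement in model coordinates, giving d3.
    """
    d_cam = np.array([d2[0] / s, d2[1] / s, 0.0])  # z component set to zero
    return R.T @ d_cam

# Illustrative use: a drag of (10, 0) with scale 0.8 and no rotation.
d3 = lift_displacement(np.array([10.0, 0.0]), np.eye(3), 0.8)  # [12.5, 0., 0.]
```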
In yet another embodiment of the present invention, the obtaining of the face model in the three-dimensional grid and the obtaining, according to weak perspective projection, of the perspective projection of the face model on the two-dimensional plane, where the perspective projection includes two-dimensional face feature point positions and two-dimensional anchor point positions corresponding to the two-dimensional face feature point positions, includes:
acquiring a face model in a three-dimensional grid, and determining a three-dimensional anchor point position corresponding to the three-dimensional face feature point position according to the face model;
and obtaining perspective projection of the face model on a two-dimensional plane according to weak perspective projection, wherein the perspective projection comprises a two-dimensional face characteristic point position corresponding to the three-dimensional face characteristic point position and a two-dimensional anchor point position corresponding to the three-dimensional anchor point position.
The two-dimensional face feature point positions and their corresponding two-dimensional anchor point positions can be determined in the two-dimensional plane through image recognition; alternatively, the two-dimensional face feature point positions can be determined from the three-dimensional face feature point positions, and the two-dimensional anchor point positions from the three-dimensional anchor point positions. When the facial expression can be adjusted both in the two-dimensional plane and in the three-dimensional grid, determining the two-dimensional positions from the three-dimensional ones gives the anchor points of the two-dimensional plane and the three-dimensional grid a specific functional relationship, so that the adjusted perspective projection and two-dimensional displacement can be mapped into the three-dimensional grid more conveniently, reducing the image recognition performance required of the terminal.
In another embodiment of the present invention, before the performing weak perspective projection on the face model, the method further includes:
a scaling factor of the weak perspective projection is determined.
In the editing of the character expression, each change in an anchor position corresponds to a change in a vector. When the face model is projected onto the two-dimensional plane, or the perspective projection of the two-dimensional plane is mapped into the three-dimensional grid, the scaling factor affects the corresponding conversion between coordinate systems. In some embodiments, to facilitate adjusting the two-dimensional anchor point positions, the scaling factor may be made adjustable, so that the user can zoom the perspective projection in the two-dimensional plane in or out as desired.
In order to obtain the picture corresponding to the adjusted three-dimensional target expression, the invention also provides another embodiment: as shown in fig. 4, the obtaining a three-dimensional adjustment instruction for adjusting the changed face model, according to the three-dimensional adjustment instruction, after adjusting the changed face model to obtain the three-dimensional target expression, further includes:
step S50: carrying out weak perspective projection on the three-dimensional target expression to obtain a two-dimensional projection image;
step S60: and acquiring an adjustment instruction, and adjusting the two-dimensional projection image by using the adjustment instruction to obtain a two-dimensional target expression.
In this embodiment, after the three-dimensional target expression is obtained, weak perspective projection can be performed on it again to obtain a two-dimensional projection image, and the two-dimensional projection image can be further fine-tuned to obtain the two-dimensional target expression. Because the two-dimensional target expression is a two-dimensional image, it occupies less storage than the three-dimensional model of the three-dimensional target expression, its display model is simpler, and it places lower demands on the display performance of the terminal. This embodiment is therefore beneficial when the edited character expression is displayed on planar display equipment: the requirements on network transmission speed and terminal display performance are reduced, and it is convenient for the user to share the edited character expression.
The embodiment of the invention also provides a device for editing the character expression, as shown in fig. 5, which comprises:
the projection module 10 is configured to obtain a face model in a three-dimensional grid, and perform weak perspective projection on the face model to obtain perspective projection of the face model on a two-dimensional plane, where the perspective projection includes two-dimensional face feature point positions and two-dimensional anchor point positions corresponding to the two-dimensional face feature point positions;
the two-dimensional adjustment module 20 is configured to obtain a two-dimensional adjustment instruction for adjusting the position of the two-dimensional anchor point, and adjust the position of the two-dimensional face feature point according to the two-dimensional adjustment instruction, so as to obtain an adjusted perspective projection and a two-dimensional displacement of the corresponding two-dimensional anchor point;
the mapping module 30 is configured to map the adjusted perspective projection and the two-dimensional displacement onto a face model in the three-dimensional grid, so as to obtain a changed face model;
the three-dimensional adjustment module 40 is configured to obtain a three-dimensional adjustment instruction for adjusting the changed face model, and adjust the changed face model according to the three-dimensional adjustment instruction, so as to obtain a three-dimensional target expression.
In another embodiment of the device for editing a human expression, the projection module 10 includes:
The three-dimensional model parameter determining unit is used for obtaining a face model and determining an expression matrix B and a natural expression mean value m of the face model in a three-dimensional grid according to a long vector x=m+Bw formed by coordinates of the face model in the three-dimensional grid; wherein w is the weight of the expression matrix B.
In another embodiment of the device for editing a human expression, as shown in fig. 6, the mapping module 30 includes:
a three-dimensional displacement calculation unit 31, configured to obtain a three-dimensional displacement d₃ corresponding to the two-dimensional displacement in the face model according to the mapping relation between the perspective projection and the face model;
An objective function calculating unit 32, configured to obtain a value of an objective function E:
E(w) = ‖B̂(w − w₀) − d₃‖₂² + α‖w − w₀‖₂² + β‖w‖₁
wherein B̂ is the augmentation matrix of the expression matrix B, α is a first coefficient for controlling the smoothing between the frames before and after the expression change, β is a second coefficient for controlling the reliability of the numerical solution, and w₀ is the weight of the expression matrix B of the previous frame;
an objective function parameter determining unit 33, configured to determine a minimum value of the objective function E, and determine a value of the weight w according to the minimum value;
the three-dimensional face model determining unit 34 is configured to obtain a changed face model according to the minimum value and the corresponding value of the weight w.
In another embodiment of the human expression editing apparatus, the three-dimensional displacement calculation unit includes:
a two-dimensional displacement acquisition subunit, configured to acquire a displacement vector of the two-dimensional displacement in an XY plane;
a three-dimensional displacement calculation subunit, configured to obtain the three-dimensional displacement d₃ of the two-dimensional displacement in XYZ space according to the mapping relationship between the perspective projection and the face model.
In another embodiment of the personal expression editing apparatus, the projection module includes:
the three-dimensional anchor point position determining unit is used for obtaining a face model in the three-dimensional grid and determining a three-dimensional anchor point position corresponding to the three-dimensional face characteristic point position according to the face model;
the two-dimensional face feature point and two-dimensional anchor point position determining unit is used for obtaining perspective projection of the face model on a two-dimensional plane according to weak perspective projection, wherein the perspective projection comprises a two-dimensional face feature point position corresponding to the three-dimensional face feature point position and a two-dimensional anchor point position corresponding to the three-dimensional anchor point position.
In another embodiment of the personal expression editing apparatus, the projection module further includes:
and the weak perspective projection parameter determining unit is used for determining a scaling coefficient of the weak perspective projection before the weak perspective projection is carried out on the face model.
In another embodiment of the personal expression editing apparatus, the personal expression editing apparatus further includes:
the target expression projection module is used for carrying out weak perspective projection on the three-dimensional target expression to obtain a two-dimensional projection image;
and the two-dimensional expression correction module is used for acquiring an adjustment instruction, and adjusting the two-dimensional projection image by using the adjustment instruction to obtain a two-dimensional target expression.
The technical features of the above-mentioned character expression editing apparatus are the same as the corresponding technical features of the above-mentioned character expression editing method, and will not be described here again.
Embodiments of the present invention also provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the character expression editing method described in any one of the above. The storage medium includes, but is not limited to, any type of disk (including floppy disks, hard disks, optical disks, CD-ROMs, and magneto-optical disks), ROM (Read-Only Memory), RAM (Random Access Memory), EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), flash memory, magnetic cards, or optical cards. That is, a storage medium includes any medium that stores or transmits information in a form readable by a device (e.g., a computer), and may be a read-only memory, a magnetic disk, an optical disk, etc.
The embodiment of the invention also provides a terminal, which comprises:
one or more processors;
storage means for storing one or more programs,
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the character expression editing method described in any of the above.
As shown in fig. 7, for convenience of explanation, only the portions related to the embodiments of the present invention are shown, and specific technical details are not disclosed, please refer to the method portions of the embodiments of the present invention. The terminal may be any terminal device including a mobile phone, a tablet computer, a PDA (Personal Digital Assistant ), a POS (Point of Sales), a vehicle-mounted computer, a server, and the like, taking the mobile phone as an example of the terminal:
fig. 7 is a block diagram showing a part of the structure of a mobile phone related to a terminal provided by an embodiment of the present invention. Referring to fig. 7, the mobile phone includes: radio Frequency (RF) circuitry 1510, memory 1520, input unit 1530, display unit 1540, sensor 1550, audio circuitry 1560, wireless fidelity (wireless fidelity, wi-Fi) module 1570, processor 1580, power supply 1590, and the like. It will be appreciated by those skilled in the art that the handset construction shown in fig. 7 is not limiting of the handset and may include more or fewer components than shown, or may combine certain components, or a different arrangement of components.
The following describes the components of the mobile phone in detail with reference to fig. 7:
the RF circuit 1510 may be used for receiving and transmitting signals during a message or a call, and particularly, after receiving downlink information of a base station, the signal is processed by the processor 1580; in addition, the data of the design uplink is sent to the base station. Typically, RF circuitry 1510 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (Low Noise Amplifier, LNA), a duplexer, and the like. In addition, the RF circuitry 1510 may also communicate with networks and other devices through wireless communication. The wireless communications may use any communication standard or protocol including, but not limited to, global system for mobile communications (Global System of Mobile communication, GSM), general packet radio service (General Packet Radio Service, GPRS), code division multiple access (Code Division Multiple Access, CDMA), wideband code division multiple access (Wideband Code Division Multiple Access, WCDMA), long term evolution (Long Term Evolution, LTE), email, short message service (Short Messaging Service, SMS), and the like.
The memory 1520 may be used to store software programs and modules, and the processor 1580 performs various functional applications and data processing of the cellular phone by executing the software programs and modules stored in the memory 1520. The memory 1520 may mainly include a storage program area that may store an operating system, an application program required for at least one function (such as a character expression editing program, etc.), and a storage data area; the storage data area may store data created according to the use of the cellular phone (such as face model data information, etc.), and the like. In addition, memory 1520 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The input unit 1530 may be used to receive input numerical or character information and generate key signal inputs related to user settings and function control of the handset. In particular, the input unit 1530 may include a touch panel 1531 and other input devices 1532. The touch panel 1531, also referred to as a touch screen, may collect touch operations thereon or thereabout by a user (e.g., operations of the user on the touch panel 1531 or thereabout by using any suitable object or accessory such as a finger, a stylus, etc.), and drive the corresponding connection device according to a predetermined program. Alternatively, the touch panel 1531 may include two parts, a touch detection device and a touch controller. The touch detection device detects the touch azimuth of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts it into touch point coordinates, and sends the touch point coordinates to the processor 1580, and can receive and execute commands sent from the processor 1580. In addition, the touch panel 1531 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. The input unit 1530 may include other input devices 1532 in addition to the touch panel 1531. In particular, other input devices 1532 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, mouse, joystick, etc.
The display unit 1540 may be used to display information input by the user or information provided to the user, as well as the various menus of the mobile phone. The display unit 1540 may include a display panel 1541, which may optionally be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an organic light-emitting diode (Organic Light-Emitting Diode, OLED), or the like. Further, the touch panel 1531 may cover the display panel 1541; when the touch panel 1531 detects a touch operation on or near it, the operation is transferred to the processor 1580 to determine the type of touch event, and the processor 1580 then provides a corresponding visual output on the display panel 1541 according to that type. Although in fig. 7 the touch panel 1531 and the display panel 1541 are two separate components implementing the input and output functions of the mobile phone, in some embodiments the touch panel 1531 may be integrated with the display panel 1541 to implement the input and output functions.
The handset may also include at least one sensor 1550, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor, which may adjust the brightness of the display panel 1541 according to the brightness of the ambient light, and a proximity sensor, which may turn off the display panel 1541 and/or the backlight when the phone is moved to the ear. As one kind of motion sensor, an accelerometer can detect the magnitude of acceleration in all directions (generally along three axes) and, when stationary, the magnitude and direction of gravity; it can be used for applications that recognize the posture of the phone (such as landscape/portrait switching, related games, and magnetometer pose calibration) and for vibration-recognition functions (such as a pedometer or tap detection). Other sensors that may also be configured in the handset, such as a gyroscope, barometer, hygrometer, thermometer, and infrared sensor, are not described in detail herein.
The audio circuit 1560, a speaker 1561, and a microphone 1562 may provide an audio interface between the user and the phone. The audio circuit 1560 may transmit the electrical signal converted from received audio data to the speaker 1561, where it is converted into a sound signal for output; conversely, the microphone 1562 converts collected sound signals into electrical signals, which the audio circuit 1560 receives and converts into audio data. The audio data are then processed by the processor 1580 and sent, for example, to another cell phone via the RF circuit 1510, or output to the memory 1520 for further processing.
Wi-Fi is a short-range wireless transmission technology; through the Wi-Fi module 1570, the phone can help the user send and receive e-mail, browse web pages, access streaming media, and the like, providing wireless broadband Internet access. Although fig. 7 shows the Wi-Fi module 1570, it is to be understood that it is not an essential component of the phone and may be omitted entirely as desired without changing the essence of the invention.
The processor 1580 is the control center of the mobile phone: it connects the various parts of the entire phone using various interfaces and lines, and performs the phone's functions and processes its data by running or executing the software programs and/or modules stored in the memory 1520 and calling the data stored in the memory 1520, thereby monitoring the phone as a whole. Optionally, the processor 1580 may include one or more processing units; preferably, the processor 1580 may integrate an application processor, which primarily handles the operating system, user interfaces, and application programs, and a modem processor, which primarily handles wireless communication. It is to be appreciated that the modem processor need not be integrated into the processor 1580.
The handset further includes a power supply 1590 (e.g., a battery) for powering the various components; preferably, the power supply is logically connected to the processor 1580 via a power management system, so that charging, discharging, and power-consumption management are handled by the power management system.
Although not shown, the mobile phone may further include a camera, a bluetooth module, etc., which will not be described herein.
It should be understood that each functional unit in the embodiments of the present invention may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules.
The foregoing describes only some embodiments of the present invention. It should be noted that those skilled in the art may make modifications and adaptations without departing from the principles of the present invention, and such modifications and adaptations are intended to fall within the scope of the present invention.

Claims (10)

1. A character expression editing method, comprising:
acquiring a face model in a three-dimensional grid, comprising: acquiring a face model, and determining an expression matrix B and a natural expression mean value m of the face model in a three-dimensional grid according to a long vector x=m+Bw formed by coordinates of the face model in the three-dimensional grid; wherein w is the weight of the expression matrix B;
performing weak perspective projection on the face model to obtain a perspective projection of the face model on a two-dimensional plane, wherein the perspective projection comprises two-dimensional face feature point positions and two-dimensional anchor point positions corresponding to the two-dimensional face feature point positions;
acquiring a two-dimensional adjustment instruction for adjusting the position of the two-dimensional anchor point, and adjusting the position of the two-dimensional face feature point according to the two-dimensional adjustment instruction to obtain an adjusted perspective projection and a two-dimensional displacement of the corresponding two-dimensional anchor point;
mapping the adjusted perspective projection and the two-dimensional displacement to a face model in the three-dimensional grid to obtain a changed face model;
and acquiring a three-dimensional adjustment instruction for adjusting the changed face model, and adjusting the changed face model according to the three-dimensional adjustment instruction to obtain a three-dimensional target expression.
2. The method of claim 1, wherein mapping the adjusted perspective projection and the two-dimensional displacement onto the face model in the three-dimensional grid to obtain the changed face model comprises:
according to the mapping relation between the perspective projection and the face model, acquiring a three-dimensional displacement d₃ corresponding to the two-dimensional displacement in the face model;
obtaining the value of an objective function E:
E = ‖B̃w − d₃‖² + α‖w − w₀‖² + β‖w‖²
wherein B̃ is the augmentation matrix of the expression matrix B, α is a first coefficient controlling the smoothing of the expression between the previous and current frames, β is a second coefficient controlling the reliability of the numerical solution, and w₀ is the weight of the expression matrix B in the previous frame;
determining the minimum value of the objective function E, and determining the value of the weight w according to the minimum value;
and obtaining the changed face model according to the minimum value and the value of the corresponding weight w.
3. The method according to claim 2, wherein acquiring the three-dimensional displacement d₃ corresponding to the two-dimensional displacement in the face model according to the mapping relationship between the perspective projection and the face model comprises:
obtaining a displacement vector of the two-dimensional displacement on the XY plane;
obtaining, according to the mapping relation between the perspective projection and the face model, the three-dimensional displacement d₃ of the two-dimensional displacement in XYZ space.
4. The method according to claim 1, wherein acquiring the face model in the three-dimensional grid and obtaining, according to the weak perspective projection, the perspective projection of the face model on a two-dimensional plane, the perspective projection comprising two-dimensional face feature point positions and two-dimensional anchor point positions corresponding to the two-dimensional face feature point positions, comprises:
acquiring the face model in the three-dimensional grid, and determining, according to the face model, a three-dimensional anchor point position corresponding to a three-dimensional face feature point position;
and obtaining the perspective projection of the face model on the two-dimensional plane according to the weak perspective projection, wherein the perspective projection comprises a two-dimensional face feature point position corresponding to the three-dimensional face feature point position and a two-dimensional anchor point position corresponding to the three-dimensional anchor point position.
5. The method of claim 1, wherein before the weak perspective projection is performed on the face model, the method further comprises:
a scaling factor of the weak perspective projection is determined.
6. The method according to claim 1, wherein after the three-dimensional adjustment instruction for adjusting the changed face model is obtained and the changed face model is adjusted according to the three-dimensional adjustment instruction to obtain the three-dimensional target expression, the method further comprises:
carrying out weak perspective projection on the three-dimensional target expression to obtain a two-dimensional projection image;
and acquiring an adjustment instruction, and adjusting the two-dimensional projection image by using the adjustment instruction to obtain a two-dimensional target expression.
7. A character expression editing apparatus, comprising:
The projection module is used for obtaining a face model in a three-dimensional grid, carrying out weak perspective projection on the face model to obtain perspective projection of the face model on a two-dimensional plane, wherein the perspective projection comprises two-dimensional face feature point positions and two-dimensional anchor point positions corresponding to the two-dimensional face feature point positions, and the projection module comprises:
the three-dimensional model parameter determining unit is used for obtaining a face model and determining an expression matrix B and a natural expression mean value m of the face model in a three-dimensional grid according to a long vector x=m+Bw formed by coordinates of the face model in the three-dimensional grid; wherein w is the weight of the expression matrix B;
the two-dimensional adjustment module is used for acquiring a two-dimensional adjustment instruction for adjusting the position of the two-dimensional anchor point, and adjusting the position of the two-dimensional face feature point according to the two-dimensional adjustment instruction to obtain an adjusted perspective projection and a two-dimensional displacement of the corresponding two-dimensional anchor point;
the mapping module is used for mapping the adjusted perspective projection and the two-dimensional displacement to the face model in the three-dimensional grid to obtain a changed face model;
the three-dimensional adjustment module is used for acquiring a three-dimensional adjustment instruction for adjusting the changed face model, and adjusting the changed face model according to the three-dimensional adjustment instruction to obtain a three-dimensional target expression.
8. The character expression editing apparatus of claim 7, wherein the mapping module comprises:
a three-dimensional displacement calculation unit, used for acquiring, according to the mapping relationship between the perspective projection and the face model, the three-dimensional displacement d₃ corresponding to the two-dimensional displacement in the face model;
an objective function calculating unit, used for obtaining the value of the objective function E:
E = ‖B̃w − d₃‖² + α‖w − w₀‖² + β‖w‖²
wherein B̃ is the augmentation matrix of the expression matrix B, α is a first coefficient controlling the smoothing of the expression between the previous and current frames, β is a second coefficient controlling the reliability of the numerical solution, and w₀ is the weight of the expression matrix B in the previous frame;
the objective function parameter determining unit is used for determining the minimum value of the objective function E and determining the value of the weight w according to the minimum value;
and the three-dimensional face model determining unit is used for obtaining a changed face model according to the minimum value and the value of the corresponding weight w.
9. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the character expression editing method according to any one of claims 1 to 6.
10. A terminal, the terminal comprising:
One or more processors;
storage means for storing one or more programs,
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the character expression editing method according to any one of claims 1 to 6.
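By way of editorial illustration only (this sketch is not part of the patent text), the weak perspective projection step of claims 1, 4, and 5 can be written in a few lines of Python. All names here are hypothetical, and the scalar s plays the role of the scaling factor of claim 5:

import numpy as np

def weak_perspective_project(points3d, s, R, t):
    # points3d: (n, 3) array of three-dimensional face feature / anchor point positions
    # s: uniform scaling factor (cf. claim 5); R: (3, 3) rotation; t: (2,) image-plane translation
    rotated = points3d @ R.T       # rigid rotation of the mesh points
    return s * rotated[:, :2] + t  # drop the Z coordinate, scale uniformly, translate

Weak perspective replaces the per-point depth division of full perspective with a single uniform scale, which is a reasonable approximation when the depth range of the face is small compared with its distance from the camera; this is why a single precomputed scaling factor suffices.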
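Similarly, assuming the objective of claims 2 and 8 takes the regularized least-squares form E = ‖B̃w − d₃‖² + α‖w − w₀‖² + β‖w‖² (reconstructed above from the stated roles of α, β, and w₀; the patent drawing with the exact formula is not reproduced in this text), the minimizing weight vector has a closed form, sketched below with illustrative names:

import numpy as np

def solve_expression_weights(B_aug, d3, w0, alpha, beta):
    # Minimize E = ||B_aug w - d3||^2 + alpha ||w - w0||^2 + beta ||w||^2.
    # Setting the gradient to zero gives the normal equations
    #   (B_aug^T B_aug + (alpha + beta) I) w = B_aug^T d3 + alpha w0,
    # where d3 is the three-dimensional displacement lifted from the two-dimensional
    # anchor displacement through the projection mapping (cf. claim 3).
    k = B_aug.shape[1]
    A = B_aug.T @ B_aug + (alpha + beta) * np.eye(k)
    b = B_aug.T @ d3 + alpha * w0
    return np.linalg.solve(A, b)

# The changed face model of claim 2 is then recovered from the long vector of claim 1:
# x = m + B @ w

Here α trades temporal smoothness against responsiveness between the previous and current frames, while β keeps the linear system well conditioned, matching the descriptions of the two coefficients in claims 2 and 8.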
CN201811623613.5A 2018-12-28 2018-12-28 Character expression editing method, device, computer storage medium and terminal Active CN111445568B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811623613.5A CN111445568B (en) 2018-12-28 2018-12-28 Character expression editing method, device, computer storage medium and terminal

Publications (2)

Publication Number Publication Date
CN111445568A (en) 2020-07-24
CN111445568B (en) 2023-08-15

Family

ID=71652277

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811623613.5A Active CN111445568B (en) 2018-12-28 2018-12-28 Character expression editing method, device, computer storage medium and terminal

Country Status (1)

Country Link
CN (1) CN111445568B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112929619B (en) * 2021-02-03 2022-04-19 广州工程技术职业学院 Tracking display structure of facial feature points in animation character
CN113095134B (en) * 2021-03-08 2024-03-29 北京达佳互联信息技术有限公司 Facial expression extraction model generation method and device and facial image generation method and device
CN115426505B (en) * 2022-11-03 2023-03-24 北京蔚领时代科技有限公司 Preset expression special effect triggering method based on face capture and related equipment

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108242074A (en) * 2018-01-02 2018-07-03 中国科学技术大学 A kind of three-dimensional exaggeration human face generating method based on individual satire portrait painting
CN108765351A (en) * 2018-05-31 2018-11-06 Oppo广东移动通信有限公司 Image processing method, device, electronic equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130129159A1 (en) * 2011-11-22 2013-05-23 Ronald Huijgens Face recognition method and apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20231008

Address after: 31A, 15/F, Building 30, Mapletree Business City, Pasir Panjang Road, Singapore

Patentee after: Baiguoyuan Technology (Singapore) Co.,Ltd.

Address before: 511442 25 / F, building B-1, Wanda Plaza North, Wanbo business district, 79 Wanbo 2nd Road, Nancun Town, Panyu District, Guangzhou City, Guangdong Province

Patentee before: GUANGZHOU BAIGUOYUAN NETWORK TECHNOLOGY Co.,Ltd.