CN112102146B - Face image processing method, device, equipment and computer storage medium - Google Patents

Face image processing method, device, equipment and computer storage medium

Info

Publication number
CN112102146B
CN112102146B (application CN201910526812.2A)
Authority
CN
China
Prior art keywords
face image
face
parameters
expression
parameter
Prior art date
Legal status
Active
Application number
CN201910526812.2A
Other languages
Chinese (zh)
Other versions
CN112102146A (en)
Inventor
王山虎
覃威宁
张涛
朱威
唐杰
Current Assignee
Beijing Momo Information Technology Co Ltd
Original Assignee
Beijing Momo Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Momo Information Technology Co Ltd
Priority to CN201910526812.2A
Publication of CN112102146A
Application granted
Publication of CN112102146B
Legal status: Active

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 — Geometric image transformations in the plane of the image
    • G06T3/18 — Image warping, e.g. rearranging pixels individually
    • G06T2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T2207/20 — Special algorithmic details
    • G06T2207/20081 — Training; Learning
    • G06T2207/20084 — Artificial neural networks [ANN]
    • G06T2207/30 — Subject of image; Context of image processing
    • G06T2207/30196 — Human being; Person
    • G06T2207/30201 — Face

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a face image processing method, apparatus, device and computer storage medium. The method comprises the following steps: receiving a plurality of first face images of a first user and acquiring their key points; inputting the key points of the plurality of first face images into a 3DMM model for joint iterative solving to obtain a unified solution of the shape parameters corresponding to the first user's face; receiving a plurality of second face images of a second user and acquiring their key points; inputting the key points of the plurality of second face images into the 3DMM model for joint iterative solving to obtain a single solution of the expression parameters and pose parameters corresponding to each second face image; and substituting the unified solution of the shape parameters corresponding to the first user's face and the single solution of the expression parameters and pose parameters corresponding to a second face image into the 3DMM model to obtain a face-swapped image in which the facial contour has been replaced. According to the embodiments of the invention, facial contours can be replaced and the similarity after face swapping is improved.

Description

Face image processing method, device, equipment and computer storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a method, an apparatus, a device, and a computer storage medium for processing a face image.
Background
When the three feature parameters of shape, expression and pose are solved with a 3DMM model, a coupling problem arises: solving on a single image can achieve a good overall fit (the three feature parameters are accurate when viewed as a whole), but the pose parameters, expression parameters and shape parameters are each inaccurate when viewed individually. Consequently, the shape, expression and pose parameters computed by the 3DMM model cannot be used to exchange the shape parameters of different face images in order to replace facial contours.
Disclosure of Invention
Embodiments of the invention provide a face image processing method, apparatus, device and computer storage medium that can replace the contour of a face and improve the similarity after face swapping.
In one aspect, an embodiment of the present invention provides a facial image processing method, including:
receiving a plurality of first face images of a first user, and acquiring key points of each first face image;
inputting the key points of the plurality of first face images into a three-dimensional morphable model (3DMM) for joint iterative solving to obtain a unified solution of the shape parameters corresponding to the first user's face;
receiving a plurality of second face images of a second user, and acquiring key points of each second face image;
inputting the key points of the plurality of second face images into the 3DMM model for joint iterative solving to obtain a single solution of the expression parameters and pose parameters corresponding to each second face image;
substituting the unified solution of the shape parameters corresponding to the first user's face and the single solution of the expression parameters and pose parameters corresponding to a second face image into the 3DMM model to obtain a face-swapped image in which the facial contour in the second face image has been replaced with the facial contour of the first user.
In another aspect, an embodiment of the present invention provides a facial image processing apparatus, including:
a first receiving module, configured to receive a plurality of first face images of a first user and acquire key points of each first face image;
a first solving module, configured to input the key points of the plurality of first face images into a three-dimensional morphable model (3DMM) for joint iterative solving to obtain a unified solution of the shape parameters corresponding to the first user's face;
a second receiving module, configured to receive a plurality of second face images of a second user and acquire key points of each second face image;
a second solving module, configured to input the key points of the plurality of second face images into the 3DMM model for joint iterative solving to obtain a single solution of the expression parameters and pose parameters corresponding to each second face image;
and a first replacing module, configured to substitute the unified solution of the shape parameters corresponding to the first user's face and the single solution of the expression parameters and pose parameters corresponding to a second face image into the 3DMM model to obtain a face-swapped image in which the facial contour in the second face image has been replaced with the facial contour of the first user.
In still another aspect, an embodiment of the present invention provides a face image processing apparatus including:
a processor and a memory storing computer program instructions;
the processor, when executing the computer program instructions, implements a facial image processing method as described in any one of the above.
In yet another aspect, an embodiment of the present invention provides a computer storage medium having stored thereon computer program instructions that, when executed by a processor, implement a facial image processing method as described in any one of the above.
In another aspect, an embodiment of the present invention provides another face image processing method, including:
after acquiring the key points of each group of training data, inputting them into the 3DMM model for joint iterative solving to obtain a group of solving results, and combining the obtained groups of solving results into a training set; each group of training data comprises a plurality of face images of one user, and each group of solving results comprises a unified solution of the shape parameters and a single solution of the expression parameters and pose parameters corresponding to each face image;
receiving a first face image of a first user and a second face image of a second user, and inputting them into a deep neural network model to obtain a unified solution of the shape parameters corresponding to the first face image and a single solution of the expression parameters and pose parameters corresponding to the second face image, the deep neural network model having been trained on the training set;
substituting the unified solution of the shape parameters corresponding to the first face image and the single solution of the expression parameters and pose parameters corresponding to the second face image into the 3DMM model to obtain a face-swapped image in which the facial contour in the second face image has been replaced with the facial contour of the first user.
In another aspect, an embodiment of the present invention provides a facial image processing apparatus, including:
a training set construction module, configured to acquire the key points of each group of training data, input them into the 3DMM model for joint iterative solving to obtain a group of solving results, and combine the obtained groups of solving results into a training set; each group of training data comprises a plurality of face images of one user, and each group of solving results comprises a unified solution of the shape parameters and a single solution of the expression parameters and pose parameters corresponding to each face image;
a parameter regression module, configured to receive a first face image of a first user and a second face image of a second user and input them into the deep neural network model to obtain a unified solution of the shape parameters corresponding to the first face image and a single solution of the expression parameters and pose parameters corresponding to the second face image, the deep neural network model having been trained on the training set;
and a second replacing module, configured to substitute the unified solution of the shape parameters corresponding to the first face image and the single solution of the expression parameters and pose parameters corresponding to the second face image into the 3DMM model to obtain a face-swapped image in which the facial contour in the second face image has been replaced with the facial contour of the first user.
In still another aspect, an embodiment of the present invention provides a face image processing apparatus including:
a processor and a memory storing computer program instructions;
the processor, when executing the computer program instructions, implements a facial image processing method as described in any one of the above.
In yet another aspect, an embodiment of the present invention provides a computer storage medium having stored thereon computer program instructions that, when executed by a processor, implement a facial image processing method as described in any one of the above.
According to the face image processing method, apparatus, device and computer storage medium of the embodiments, a 3DMM model is used to perform joint iterative solving on a plurality of first face images of the first user, yielding a unified solution of the shape parameters corresponding to the first user's face. Applying the same procedure to a plurality of second face images of the second user yields a single solution of the expression parameters and pose parameters corresponding to each second face image. Because the facial contour is controlled by the shape parameters, replacing the shape parameters corresponding to a second face image with the unified solution of the shape parameters corresponding to the first user's face achieves facial contour replacement and improves the similarity after face swapping.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings used in the embodiments are briefly described below; a person skilled in the art can derive other drawings from them without inventive effort.
FIG. 1 is a flow chart of a face image processing method according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of the joint iterative solving in the 3DMM model according to an embodiment of the present invention;
FIG. 3 is a flow chart illustrating a recording process of correspondence between 2D points and 3D points according to an embodiment of the present invention;
FIG. 4 is a 2D image provided by an embodiment of the present invention;
FIG. 5 is a schematic view of the labeled 2D key points in the mouth region of FIG. 4;
FIG. 6 is a flowchart of a face image processing method according to another embodiment of the present invention;
fig. 7 is a schematic diagram of a face image processing apparatus according to another embodiment of the present invention;
fig. 8 is a schematic structural view of a face image processing apparatus provided in another embodiment of the present invention;
fig. 9 is a schematic structural view of a face image processing apparatus provided in still another embodiment of the present invention.
Detailed Description
Features and exemplary embodiments of various aspects of the present invention are described in detail below. In order to make the objects, technical solutions and advantages of the present invention more apparent, the invention is described in further detail with reference to the accompanying drawings and the detailed embodiments. It should be understood that the specific embodiments described herein are intended only to illustrate the invention, not to limit it. It will be apparent to one skilled in the art that the present invention may be practiced without some of these specific details. The following description of the embodiments is merely intended to provide a better understanding of the invention by showing examples of it.
It is noted that relational terms such as first and second are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article or apparatus. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article or apparatus that comprises it.
The 3DMM model (three-dimensional morphable model) is mainly used to reconstruct three-dimensional faces. The model stores 200 3D faces. When key points of an input 2D face are received, the weight coefficients of the 200 3D faces are adjusted and a linear weighted sum is computed, until the error between the projection of the summed result onto the 2D plane and the input 2D image is smaller than a preset threshold; the coefficients obtained at that point are then used to construct the 3D face corresponding to the 2D image, as the sketch below roughly illustrates.
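As a rough illustration only: the sketch below shows one way such a fit could be set up for a fixed frontal pose, solving linearly for basis weights so that the projected key points approach the input 2D key points. The array shapes, vertex counts, solver choice and all names are assumptions made for illustration, not the patent's implementation.

```python
import numpy as np

# Illustrative dimensions (assumed): 200 basis faces, V mesh vertices, K keypoints.
NUM_BASES, V, K = 200, 34650, 68

def fit_basis_weights(mean_face, basis_faces, kp_2d, kp_vertex_ids, threshold=1e-3):
    """Solve for weights w such that the orthographic projection of
    mean_face + sum_i w[i] * basis_faces[i] matches the 2D keypoints.

    mean_face:     (V, 3) average face vertices
    basis_faces:   (NUM_BASES, V, 3) stored 3D faces, as offsets from the mean
    kp_2d:         (K, 2) keypoints extracted from the input 2D face
    kp_vertex_ids: (K,) mesh vertex index matched to each 2D keypoint
    """
    # Orthographic projection of a frontal face keeps the x, y coordinates.
    A = basis_faces[:, kp_vertex_ids, :2].reshape(NUM_BASES, -1).T  # (2K, NUM_BASES)
    b = (kp_2d - mean_face[kp_vertex_ids, :2]).reshape(-1)          # (2K,)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    rmse = np.sqrt(np.mean((A @ w - b) ** 2))
    return w, rmse, rmse < threshold  # weights, fitting error, convergence flag
```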
The invention uses this model to replace facial contours. The 3DMM model is solved for three parameters: a shape parameter, a pose parameter and an expression parameter. The shape parameter controls the contour of the face, so in theory, if user A and user B want to perform face swapping, only the shape parameter solved from user A's image needs to be substituted into user B's image.
However, when the current 3DMM model solves the coefficients for a single 2D image in the usual way, the three resulting feature parameters are coupled: a change in one feature parameter causes the other two to change as well, the individual shape, expression and pose parameters are inaccurate, and there is no unique solution for a single 2D image. That is, the result obtained by the three feature parameters together satisfies the convergence condition, but the individual feature parameters do not necessarily satisfy their own convergence conditions. Therefore, when face swapping is to be realized by replacing the shape parameters, it cannot be determined which set of parameters should be selected. Moreover, if the shape parameters of the first face image are used to overwrite those of the second face image, the goal of replacing only the shape parameters cannot be achieved: once the shape parameters of the second face image are changed, they usually no longer match the original expression and pose parameters, so the three feature parameters as a whole no longer satisfy the convergence condition, and the replaced facial contour looks unnatural.
In order to solve the problems in the prior art, the embodiment of the invention provides a face image processing method, a device, equipment and a computer storage medium. The following first describes a face image processing method provided by an embodiment of the present invention.
Fig. 1 is a flowchart of a face image processing method according to an embodiment of the present invention. The method comprises the following steps:
step s101: receiving a plurality of first face images of a first user, and acquiring key points of each first face image;
step s102: inputting the key points of the plurality of first face images into a three-dimensional morphable model (3DMM) for joint iterative solving to obtain a unified solution of the shape parameters corresponding to the first user's face;
step s103: receiving a plurality of second face images of a second user, and acquiring key points of each second face image;
step s104: inputting the key points of the plurality of second face images into the 3DMM model for joint iterative solving to obtain a single solution of the expression parameters and pose parameters corresponding to each second face image;
step s105: substituting the unified solution of the shape parameters corresponding to the first user's face and the single solution of the expression parameters and pose parameters corresponding to a second face image into the 3DMM model to obtain a face-swapped image in which the facial contour in the second face image has been replaced with the facial contour of the first user.
Substituting the unified solution of the shape parameters corresponding to the first user's face and the single solution of the expression parameters and pose parameters corresponding to a second face image into the 3DMM model yields a 3D face; this 3D face is then projected onto the 2D plane to obtain a 2D image, namely the face-swapped image in which the facial contour in the second face image has been replaced with the facial contour of the first user.
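A minimal sketch of this substitution step, assuming the parameterization of the 3DMM relation spelled out below (mean face plus weighted shape and expression bases, then scaled rotation and orthographic projection); every name and array shape here is an illustrative assumption:

```python
import numpy as np

def render_face_swap(shape_a, expr_b, pose_b, mean_face, shape_basis, expr_basis):
    """Combine user A's unified shape solution with one of user B's per-image
    expression and pose solutions, then project the 3D face to 2D points.

    shape_a:    (m,) unified shape parameters solved jointly over A's images
    expr_b:     (n,) expression parameters solved for one of B's images
    pose_b:     (scale s, rotation R of shape (3, 3), translation t2d of shape (2,))
    mean_face:  (V, 3); shape_basis: (m, V, 3); expr_basis: (n, V, 3)
    """
    # 3D reconstruction: S = mean + sum_i alpha_i * s_i + sum_i beta_i * e_i
    verts = (mean_face
             + np.tensordot(shape_a, shape_basis, axes=1)
             + np.tensordot(expr_b, expr_basis, axes=1))   # (V, 3)
    s, R, t2d = pose_b
    P = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])                        # orthographic projection
    # X_projection = s * P * R * S + t2d, applied to every vertex.
    return s * (verts @ R.T @ P.T) + t2d                   # (V, 2) face-swapped points
```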
In this embodiment, a 3DMM model is used to perform joint iterative solving on a plurality of first face images of the first user, obtaining a unified solution of the shape parameters corresponding to the first user's face. The same procedure applied to a plurality of second face images of the second user yields a single solution of the expression parameters and pose parameters corresponding to each second face image. Because the facial contour is controlled by the shape parameters, replacing the shape parameters corresponding to a second face image with the unified solution of the shape parameters corresponding to the first user's face achieves facial contour replacement and improves the similarity after face swapping.
In addition, in the embodiments of the invention, the solving process with the 3DMM model integrates the features of multiple face images, so convergence can be reached after the joint iterative solving and a unique, converged solution is obtained. That is, the solved shape parameters and the expression and pose parameters each satisfy their own convergence conditions, so the shape parameters are decoupled from the expression and pose parameters and no longer affect one another. When the second user's shape parameters are replaced, mismatches between the parameters are thus avoided as far as possible, and the similarity of the face-swapped result is improved.
The relations solved in the 3DMM model are as follows:

$$S = \bar{S} + \sum_{i=1}^{m} \alpha_i s_i + \sum_{i=1}^{n} \beta_i e_i, \qquad X_{projection} = s \cdot P \cdot R \cdot S + t_{2d}$$

where $\bar{S}$ is the average face model; $s_i$ is the PCA (principal component) part corresponding to face shape and $\alpha_i$ is a shape parameter; $e_i$ is the PCA part corresponding to expression and $\beta_i$ is an expression parameter. Solving for $\alpha_i$ and $\beta_i$ fits the 200 average face models to the face in the photo. $X_{projection}$ is the projection of the three-dimensional model onto the two-dimensional plane; $P = [[1,0,0],[0,1,0]]$ is the orthographic projection matrix; $s$ is a scale factor; $R$ is a rotation matrix; $t_{2d}$ is a displacement matrix; and $(s, R, t_{2d})$ are the pose parameters. $X_{projection}$ is compared with the key points actually extracted from the 2D image, and if the comparison error meets the threshold requirement, the convergence condition is satisfied. $m$ is the number of shape basis face models and $n$ is the number of expression basis face models.
The process of mapping a 3D point to a 2D point is specifically: the coordinates $(x, y, z)$ of the 3D point are transformed by the projection matrix defined by $(s, R, t_{2d})$ to obtain the 2D point coordinates.
In a specific embodiment, referring to fig. 2, the process of joint iterative solving includes:
step s201: determining initial coordinates of the corresponding 3D points according to the key points of any one of the input face images; in the subsequent solving of the three feature parameters, the coordinates of the 3D points change;
step s202: solving according to the key points and 3D points of that face image to obtain initial values of the shape parameters, expression parameters and pose parameters;
step s203: repeatedly performing the following first iterative operation or the following second iterative operation on the input key points until the iteration end condition is met, and taking the latest values of the three feature parameters as the unified solution of the shape parameters and the single solution of the expression parameters and pose parameters corresponding to each face image;
wherein the first iterative operation comprises:
fixing the expression parameters and pose parameters at their latest values, and solving simultaneously over the key points of all input face images to obtain the latest values of the shape parameters;
fixing the shape parameters and pose parameters at their latest values, and solving the expression parameters separately for the key points of each face image to obtain the latest expression parameter values corresponding to each face image;
fixing the shape parameters and expression parameters at their latest values, and solving the pose parameters separately for the key points of each face image to obtain the latest pose parameter values corresponding to each face image;
the second iterative operation comprises:
fixing the expression parameters and pose parameters at their latest values, and solving simultaneously over the key points of all input face images to obtain the latest values of the shape parameters;
fixing the shape parameters and expression parameters at their latest values, and solving the pose parameters separately for the key points of each face image to obtain the latest pose parameter values corresponding to each face image;
and fixing the shape parameters and pose parameters at their latest values, and solving the expression parameters separately for the key points of each face image to obtain the latest expression parameter values corresponding to each face image.
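The alternating scheme above is essentially block-coordinate descent: the shape block is solved once over all images jointly, while the expression and pose blocks are solved per image. A schematic sketch, with the inner solvers left abstract (init_shape, solve_shape_jointly, solve_expr, solve_pose and iteration_end_condition are hypothetical helpers, not the patent's code):

```python
def joint_iterative_solve(kps_per_image, max_rounds=10):
    """kps_per_image: list of (K, 2) keypoint arrays, one per face image."""
    n_imgs = len(kps_per_image)
    shape = init_shape()                           # shared across all images
    exprs = [init_expr() for _ in range(n_imgs)]   # one per image
    poses = [init_pose() for _ in range(n_imgs)]   # one per image

    for _ in range(max_rounds):
        # Fix expression and pose; solve one shape over ALL images simultaneously.
        shape = solve_shape_jointly(kps_per_image, exprs, poses)
        # Fix shape and pose; solve expression separately per image (first variant;
        # the second variant swaps the order of the next two updates).
        exprs = [solve_expr(kps, shape, pose)
                 for kps, pose in zip(kps_per_image, poses)]
        # Fix shape and expression; solve pose separately per image.
        poses = [solve_pose(kps, shape, expr)
                 for kps, expr in zip(kps_per_image, exprs)]
        if iteration_end_condition(shape, exprs, poses, kps_per_image):
            break
    return shape, exprs, poses   # unified shape + per-image single solutions
```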
In step s202, the process of solving the three feature parameters from a single face image is as follows:
1.0. initialize the shape parameter shape and the expression parameter exp to 0;
The shape and expression parameters are set to 0 first because they pertain to the 3D face, which has no pose attributes such as viewing angle; the pose parameters describe the 2D image. In the process of solving the three feature parameters, the projection of the 3D face onto the 2D plane is brought ever closer to the actually input 2D key points, so shape and exp must first be initialized to 0 and then adjusted continuously.
2.0. fix the values of shape and exp, and solve the pose parameter pose;
3.0. solve exp from shape and the solved pose;
4.0. solve shape from the solved pose and exp;
then repeat steps 2.0-4.0 with the latest solved values until the result converges.
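Steps 1.0-4.0 might read as follows in code; this is a sketch with the same hypothetical inner solvers as above, not the patent's implementation:

```python
import numpy as np

def solve_single_image(kps, m, n, max_iters=10):
    """Initial per-image solve of step s202; m, n are the assumed numbers of
    shape and expression basis faces."""
    shape = np.zeros(m)   # 1.0: shape and expression start at 0 (the 3D face
    expr = np.zeros(n)    #      has no pose; pose is solved against the 2D image)
    pose = None
    for _ in range(max_iters):
        pose = solve_pose(kps, shape, expr)    # 2.0: fix shape, exp; solve pose
        expr = solve_expr(kps, shape, pose)    # 3.0: solve exp from shape and pose
        shape = solve_shape(kps, expr, pose)   # 4.0: solve shape from pose and exp
        if converged(kps, shape, expr, pose):  # repeat 2.0-4.0 until convergence
            break
    return shape, expr, pose
```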
Because this embodiment combines multiple face images of the same person, in the joint iterative solving the three feature parameters corresponding to every face image must all meet the iteration end condition. The convergence condition of the 3DMM model is: after the feature parameters obtained from a 2D image are substituted into the model to obtain a 3D face, the 3D points corresponding to the labeled 2D points are projected onto the 2D plane, and the root mean square error between these projected 2D points and the key points directly extracted from the 2D image must be smaller than a preset threshold.
In addition, since the facial contours in face images of the same person are identical while the expressions and poses differ, when the three feature parameters corresponding to these face images meet the convergence condition, the obtained shape parameters satisfy all the face images simultaneously, while the expression and pose parameters each satisfy their own corresponding face image. The shape parameters are thus decoupled from the expression and pose parameters, and they no longer interfere with each other. Consequently, after the shape parameters are replaced, no mismatch arises between the shape parameters and the expression and pose parameters, because after replacing the shape parameters for a face image the expression and pose parameters still meet the iteration end condition.
Furthermore, because multiple face images are combined, the shape parameters have a unique solution, and once the shape parameters are fixed, the subsequently solved expression and pose parameters also have unique solutions.
Based on the above idea, the iteration end condition corresponding to the first face images comprises: the shape parameters remain unchanged;
the iteration end condition corresponding to the second face images comprises: the shape parameters remain unchanged, and the root mean square error between the 2D points obtained from the pose and expression parameters of each second face image and the key points directly extracted from that second face image is smaller than a preset threshold.
In the face swapping process, the shape parameters of the second face image are replaced by the unified solution of the shape parameters corresponding to the first user's face, so during the joint iterative solving on the first face images it is only necessary to ensure that the shape parameters meet the convergence condition; whether the expression and pose parameters meet the convergence condition need not be constrained. The expression and pose in the second face images are all needed, however, so during the joint iterative solving on the second face images all three feature parameters must meet the convergence condition.
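The per-image convergence test described above reduces to a root-mean-square comparison; a minimal sketch, where the projected model keypoints are assumed to have been computed already:

```python
import numpy as np

def meets_convergence(projected_kps, extracted_kps, threshold):
    """projected_kps: (K, 2) model keypoints projected onto the 2D plane;
    extracted_kps: (K, 2) keypoints extracted directly from the 2D image.
    True if the RMSE between the two point sets is below the preset threshold."""
    rmse = np.sqrt(np.mean(np.sum((projected_kps - extracted_kps) ** 2, axis=1)))
    return rmse < threshold
```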
In a preferred embodiment, after facial contour replacement has been completed for the first n frames of second face images of the second user, the joint iterative solving of subsequent second face images of the second user further comprises:
setting the shape parameters of each subsequent second face image according to the unified solution of the shape parameters corresponding to the first n frames of second face images, where n is greater than 1.
When the second user's face in a video is to be replaced by the first user's face, there are many consecutive frames, e.g. hundreds. In this case the second face images are typically grouped, for example 10 frames per group; face swapping is performed on the 10 second face images of one group, then on the next group. For the second user the facial contour is fixed, i.e. the shape parameters are fixed, so after the first group of second face images has been jointly solved to obtain the unified solution of the shape parameters, the subsequent second face images need not be solved for the shape parameters again. Instead, the shape parameters of each second face image in the subsequent joint iterative solving are set directly from the unified solution obtained for the first group, which reduces the workload of the joint iterative solving.
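Reusing the hypothetical helpers from the sketches above (joint_iterative_solve, render_face_swap), plus assumed extract_keypoints and solve_expr_pose_with_fixed_shape functions and assumed model globals, the grouped video flow might look like the following sketch:

```python
def swap_video(frames_b, shape_a, group_size=10):
    """Face-swap a video of user B using user A's unified shape solution."""
    outputs, shape_b = [], None
    for start in range(0, len(frames_b), group_size):
        group = frames_b[start:start + group_size]
        kps = [extract_keypoints(f) for f in group]
        if shape_b is None:
            # First group: full joint solve, including B's shape parameters.
            shape_b, exprs, poses = joint_iterative_solve(kps)
        else:
            # Later groups: B's contour is fixed, so pin the unified shape
            # and solve only expression and pose per frame.
            exprs, poses = solve_expr_pose_with_fixed_shape(kps, shape_b)
        # Replace B's shape with A's unified shape when rendering each frame.
        outputs += [render_face_swap(shape_a, e, p, MEAN_FACE, SHAPE_BASIS, EXPR_BASIS)
                    for e, p in zip(exprs, poses)]
    return outputs
```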
According to actual experimental data, the three feature parameters generally tend to converge after one iteration round, i.e. the convergence condition is essentially met, so the iteration end condition corresponding to the first and second face images can simply be set to an iteration count of 2. The convergence condition is then satisfied when the iterations end, and the check for whether to stop iterating is simplified.
In addition, when using the 3DMM model, since its solving process drives the root mean square error between the key points projected from the 3D face onto the 2D plane and the key points of the actually input 2D image below a preset threshold, the number of 3D points used for projection must equal the number of key points of the input 2D image, with a one-to-one correspondence between them. Because the number of 3D points in the 3D face far exceeds the number of 2D key points, before the joint iterative solving the initial coordinates of the 3D points corresponding to the indices of the key points on the 2D image must first be determined.
Currently, the 3D points corresponding to the key point indices on the 2D image are determined by manual annotation, i.e. the 3D points matching the 2D key points are searched for by hand. This approach is labor intensive and inaccurate. The present embodiment therefore also provides a method that requires no manual annotation; referring to fig. 3, the process is as follows:
step s301: set the shape parameters to zero, set the expression parameters to preset expression coefficients corresponding to an open-mouth expression, set the texture to that of the average face, and substitute them into the 3DMM model to obtain a 3D face;
Because the 3D face has no pose attributes such as angle, size or position, and is determined only by expression and shape, a 3D face can be obtained by setting the shape and expression parameters: the shape parameters determine the facial contour and the expression parameters determine the expression. The expression parameters are set to preset coefficients corresponding to an open-mouth expression so that the resulting 3D face has an open mouth, which makes it convenient to label points inside the mouth. The preset expression coefficient is whatever value corresponds to an open-mouth face (e.g. 2); the invention places no limit on the mouth-opening amplitude.
step s302: set the rotation matrix in the pose parameters to 0, set the scale factor to a preset scale value, and set the displacement matrix to a preset displacement value, obtaining a projection matrix;
Because the angle, size and position of the face in the 2D image are set by the pose parameters, setting the pose parameters yields a correspondence from the 3D face to the 2D image, i.e. a projection matrix. A rotation matrix of 0 means the face is frontal; the scale factor determines the size of the face in the 2D image; the displacement matrix determines its position.
step s303: project the 3D face onto the 2D plane according to the projection matrix to obtain a 2D image; see fig. 4;
step s304: extract key points from the 2D image, obtaining a number of 2D points, and determine the label of each 2D point; see fig. 5;
step s305: compute, according to the projection matrix, the distance between the coordinates of each 2D point and the coordinates of each 3D point in the 3D face, and record the label of each 2D point together with the coordinates of the 3D point closest to it.
Accordingly, when the key points of a face image are subsequently received, the initial coordinates of the 3D points corresponding to the key points are determined from the recorded 2D point labels and the coordinates of their closest 3D points.
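A sketch of the automatic 2D-3D matching in steps s301-s305: project every mesh vertex with the fixed frontal projection, then assign each extracted 2D key point to its nearest projected vertex. All names and shapes are illustrative assumptions:

```python
import numpy as np

def build_correspondence(verts_3d, s, t2d, kps_2d):
    """verts_3d: (V, 3) open-mouth average-face mesh from the 3DMM;
    s, t2d: preset scale and displacement (rotation is 0, i.e. frontal);
    kps_2d: (K, 2) labeled keypoints extracted from the projected 2D image.
    Returns, per keypoint label, the coordinates of the closest 3D point."""
    projected = s * verts_3d[:, :2] + t2d            # frontal orthographic projection
    # Pairwise distances between each 2D keypoint and every projected vertex.
    d = np.linalg.norm(kps_2d[:, None, :] - projected[None, :, :], axis=2)  # (K, V)
    nearest = np.argmin(d, axis=1)                   # closest vertex per label
    return {label: verts_3d[v] for label, v in enumerate(nearest)}
```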
The key points are acquired as follows: the face image is input into a trained deep neural network, which extracts the key points and outputs their coordinates in a preset order; the key points can then be labeled according to the output order. The labeling here may be done automatically by the system or manually; the invention is not limited in this regard.
In this embodiment, a set of correspondence relations between 2D points and 3D points is obtained in advance and recorded, so that after the key points of a face image (a 2D image) are input into the 3DMM model, the initial coordinates of the 3D points corresponding to the key point labels can be found automatically from the record.
In other embodiments, the 2D point labels and their corresponding 3D point coordinates need not be stored as explicit one-to-one pairs. Since the number of 2D points and the positions at which the key points are acquired are fixed, only the coordinates of the 3D points corresponding to the 2D points need be recorded; for example, with 100 2D points, only 100 3D point coordinates need be recorded. Each time the 3DMM model is solved, the initial coordinates of the 3D points can then be determined from this record.
At present, when the 3DMM model is used to reconstruct a face, the number of acquired key points is usually 68. With so few key points, computing the feature parameters from them yields low solving accuracy. In a preferred embodiment, the number of acquired key points is increased from 68 to 1000, improving the solving accuracy of the feature parameters.
It should be noted that the key points in the present invention are not only key points of the facial contour but key points of the full face; replacing only the facial contour is accomplished by replacing only the shape parameters.
In addition, the invention provides an embodiment that solves the three feature parameters by means of a training set plus a deep neural network model. The deep neural network model here may be a convolutional neural network (CNN) regression model. Referring to fig. 6, the method comprises:
step s401: after acquiring the key points of each group of training data, inputting them into the 3DMM model for joint iterative solving to obtain a group of solving results, and combining the obtained groups of solving results into a training set; each group of training data comprises a plurality of face images of one user, and each group of solving results comprises a unified solution of the shape parameters and a single solution of the expression parameters and pose parameters corresponding to each face image;
step s402: receiving a first face image of a first user and a second face image of a second user, and inputting them into the deep neural network model to obtain a unified solution of the shape parameters corresponding to the first face image and a single solution of the expression parameters and pose parameters corresponding to the second face image, the deep neural network model having been trained on the training set;
step s403: substituting the unified solution of the shape parameters corresponding to the first face image and the single solution of the expression parameters and pose parameters corresponding to the second face image into the 3DMM model to obtain a face-swapped image in which the facial contour in the second face image has been replaced with the facial contour of the first user.
The deep neural network model is trained on the training set, so after it receives the first and second face images it can regress a unified solution of the shape parameters corresponding to the first face image and a single solution of the expression and pose parameters corresponding to the second face image that satisfy the convergence condition, facilitating the subsequent face-swapping operations.
Because the training set in this embodiment is obtained by joint iterative solving on multiple face images of the same user in the 3DMM model, the three feature parameters in each group's solving results are decoupled, and the shape parameters among them can be replaced. Therefore, in results regressed by a deep neural network model trained on this training set, the three feature parameters are likewise decoupled and satisfy the convergence condition.
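A schematic of such a regression model in PyTorch terms — a generic CNN backbone with three output heads, trained against the decoupled labels produced by the joint solve in step s401. The architecture, basis counts and pose encoding are assumptions for illustration, not the patent's network:

```python
import torch
import torch.nn as nn

class ParamRegressor(nn.Module):
    """Regresses shape, expression and pose parameters from a face image."""
    def __init__(self, m=199, n=29, pose_dim=7):  # assumed basis counts; pose as
        super().__init__()                        # scale + quaternion + 2D translation
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.shape_head = nn.Linear(64, m)
        self.expr_head = nn.Linear(64, n)
        self.pose_head = nn.Linear(64, pose_dim)

    def forward(self, img):                       # img: (B, 3, H, W)
        feat = self.backbone(img)
        return self.shape_head(feat), self.expr_head(feat), self.pose_head(feat)

# Training step sketch: the targets come from the joint iterative solve (step s401).
def train_step(model, opt, img, shape_gt, expr_gt, pose_gt):
    shape, expr, pose = model(img)
    loss = (nn.functional.mse_loss(shape, shape_gt)
            + nn.functional.mse_loss(expr, expr_gt)
            + nn.functional.mse_loss(pose, pose_gt))
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```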
In a preferred embodiment, the process of joint iterative solving comprises:
determining initial coordinates of the corresponding 3D points according to the key points of any one of the input face images;
solving according to the key points and 3D points of that face image to obtain initial values of the shape parameters, expression parameters and pose parameters;
repeatedly performing the following first iterative operation or the following second iterative operation on the input key points until the iteration end condition is met, obtaining the unified solution of the shape parameters and the single solution of the expression parameters and pose parameters corresponding to each face image;
wherein the first iterative operation comprises:
fixing the expression parameters and pose parameters at their latest values, and solving simultaneously over the key points of all input face images to obtain the latest values of the shape parameters;
fixing the shape parameters and pose parameters at their latest values, and solving the expression parameters separately for the key points of each face image to obtain the latest expression parameter values corresponding to each face image;
fixing the shape parameters and expression parameters at their latest values, and solving the pose parameters separately for the key points of each face image to obtain the latest pose parameter values corresponding to each face image;
the second iterative operation comprises:
fixing the expression parameters and pose parameters at their latest values, and solving simultaneously over the key points of all input face images to obtain the latest values of the shape parameters;
fixing the shape parameters and expression parameters at their latest values, and solving the pose parameters separately for the key points of each face image to obtain the latest pose parameter values corresponding to each face image;
and fixing the shape parameters and pose parameters at their latest values, and solving the expression parameters separately for the key points of each face image to obtain the latest expression parameter values corresponding to each face image.
Fig. 7 shows a face image processing apparatus, which comprises:
a first receiving module 1, configured to receive a plurality of first face images of a first user and acquire key points of each first face image;
a first solving module 2, configured to input the key points of the plurality of first face images into a three-dimensional morphable model (3DMM) for joint iterative solving to obtain a unified solution of the shape parameters corresponding to the first user's face;
a second receiving module 3, configured to receive a plurality of second face images of a second user and acquire key points of each second face image;
a second solving module 4, configured to input the key points of the plurality of second face images into the 3DMM model for joint iterative solving to obtain a single solution of the expression parameters and pose parameters corresponding to each second face image;
and a first replacing module 5, configured to substitute the unified solution of the shape parameters corresponding to the first user's face and the single solution of the expression parameters and pose parameters corresponding to a second face image into the 3DMM model to obtain a face-swapped image in which the facial contour in the second face image has been replaced with the facial contour of the first user.
The first solving module 2 comprises:
a first coordinate determining unit, configured to determine initial coordinates of the corresponding 3D points according to the key points of any one of the input first face images;
a first initial calculation unit, configured to solve according to the key points and 3D points of that first face image to obtain initial values of the shape parameters, expression parameters and pose parameters;
a first joint iteration unit, configured to repeatedly perform the first iterative operation or the second iterative operation on the input key points until the iteration end condition is met, obtaining the unified solution of the shape parameters and the single solution of the expression parameters and pose parameters corresponding to each first face image; the first and second iterative operations are the same as in the foregoing method embodiments and are not repeated here.
The second solving module 4 comprises:
a second coordinate determining unit, configured to determine initial coordinates of the corresponding 3D points according to the key points of any one of the input second face images;
a second initial calculation unit, configured to solve according to the key points and 3D points of that second face image to obtain initial values of the shape parameters, expression parameters and pose parameters;
a second joint iteration unit, configured to repeatedly perform the first iterative operation or the second iterative operation on the input key points until the iteration end condition is met, obtaining the unified solution of the shape parameters and the single solution of the expression parameters and pose parameters corresponding to each second face image; the first and second iterative operations are the same as in the foregoing method embodiments and are not repeated here.
Preferably, after facial contour replacement has been completed for the first n frames of second face images of the second user, the second joint iteration unit, in the joint iterative solving of subsequent second face images of the second user, further sets the shape parameters of each subsequent second face image according to the unified solution of the shape parameters corresponding to the first n frames of second face images, where n is greater than 1.
Preferably, the apparatus further comprises:
a 3D face setting unit, configured to set the shape parameters to zero, set the expression parameters to preset expression coefficients corresponding to an open-mouth expression, set the texture to that of the average face, and substitute them into the 3DMM model to obtain a 3D face;
a projection matrix setting unit, configured to set the rotation matrix in the pose parameters to 0, set the scale factor to a preset scale value and set the displacement matrix to a preset displacement value, obtaining a projection matrix;
a projection unit, configured to project the 3D face onto the 2D plane according to the projection matrix to obtain a 2D image;
an extraction unit, configured to extract key points from the 2D image, obtain a number of 2D points and determine the label of each 2D point;
a matching unit, configured to compute, according to the projection matrix, the distance between the coordinates of each 2D point and the coordinates of each 3D point in the 3D face, and record the label of each 2D point together with the coordinates of the 3D point closest to it.
Accordingly, the first and second coordinate determining units determine the initial coordinates of the 3D points corresponding to the key points from the recorded 2D point labels and the coordinates of their closest 3D points.
Fig. 8 shows a face image processing apparatus, which comprises:
a training set construction module 6, configured to acquire the key points of each group of training data, input them into the 3DMM model for joint iterative solving to obtain a group of solving results, and combine the obtained groups of solving results into a training set; each group of training data comprises a plurality of face images of one user, and each group of solving results comprises a unified solution of the shape parameters and a single solution of the expression parameters and pose parameters corresponding to each face image;
a parameter regression module 7, configured to receive a first face image of a first user and a second face image of a second user and input them into the deep neural network model to obtain a unified solution of the shape parameters corresponding to the first face image and a single solution of the expression parameters and pose parameters corresponding to the second face image, the deep neural network model having been trained on the training set;
and a second replacing module 8, configured to substitute the unified solution of the shape parameters corresponding to the first face image and the single solution of the expression parameters and pose parameters corresponding to the second face image into the 3DMM model to obtain a face-swapped image in which the facial contour in the second face image has been replaced with the facial contour of the first user.
Preferably, the training set construction module 6 comprises:
a third coordinate determining unit, configured to determine initial coordinates of the corresponding 3D points according to the key points of any one of the input third face images;
a third initial calculation unit, configured to solve according to the key points and 3D points of that third face image to obtain initial values of the shape parameters, expression parameters and pose parameters;
a third joint iteration unit, configured to repeatedly perform the first iterative operation or the second iterative operation on the input key points until the iteration end condition is met, obtaining the unified solution of the shape parameters and the single solution of the expression parameters and pose parameters corresponding to each third face image; the first and second iterative operations are the same as in the foregoing method embodiments and are not repeated here.
Fig. 9 shows a hardware configuration diagram of a face image processing apparatus provided by an embodiment of the present invention.
The facial image processing apparatus may include a processor 901 and a memory 902 in which computer program instructions are stored. The processor 901 implements any one of the face image processing methods of the above-described embodiments by reading and executing computer program instructions stored in the memory 902.
In particular, the processor 901 may include a Central Processing Unit (CPU), or an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), or may be configured as one or more integrated circuits that implement embodiments of the present invention.
Memory 902 may include mass storage for data or instructions. By way of example, and not limitation, the memory 902 may comprise a Hard Disk Drive (HDD), floppy Disk Drive, flash memory, optical Disk, magneto-optical Disk, magnetic tape, or universal serial bus (Universal Serial Bus, USB) Drive, or a combination of two or more of the foregoing. The memory 902 may include removable or non-removable (or fixed) media, where appropriate. The memory 902 may be internal or external to the integrated gateway disaster recovery device, where appropriate. In a particular embodiment, the memory 902 is a non-volatile solid state memory. In a particular embodiment, the memory 902 includes Read Only Memory (ROM). The ROM may be mask programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically Erasable PROM (EEPROM), electrically rewritable ROM (EAROM), or flash memory, or a combination of two or more of these, where appropriate.
In one example, the facial image processing device may also include a communication interface 903 and a bus 910. As shown in fig. 9, the processor 901, the memory 902, and the communication interface 903 are connected to each other via a bus 910, and communicate with each other.
The communication interface 903 is mainly used to implement communication between each module, device, unit, and/or apparatus in the embodiment of the present invention.
Bus 910 comprises hardware, software, or both, coupling the components of the face image processing device to each other. By way of example and not limitation, the bus may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a Front Side Bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association local (VLB) bus, another suitable bus, or a combination of two or more of the above. Bus 910 may comprise one or more buses, where appropriate. Although embodiments of the invention have been described and illustrated with respect to a particular bus, the invention contemplates any suitable bus or interconnect.
In addition, in combination with the face image processing method in the above embodiment, the embodiment of the present invention may be implemented by providing a computer storage medium. The computer storage medium has stored thereon computer program instructions; the computer program instructions, when executed by a processor, implement any of the facial image processing methods of the above embodiments.
The functional blocks shown in the above-described structural block diagrams may be implemented in hardware, software, firmware, or a combination thereof. When implemented in hardware, it may be, for example, an electronic circuit, an Application Specific Integrated Circuit (ASIC), suitable firmware, a plug-in, a function card, or the like. When implemented in software, the elements of the invention are the programs or code segments used to perform the required tasks. The program or code segments may be stored in a machine readable medium or transmitted over transmission media or communication links by a data signal carried in a carrier wave. A "machine-readable medium" may include any medium that can store or transfer information. Examples of machine-readable media include electronic circuitry, semiconductor memory devices, ROM, flash memory, erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber optic media, radio Frequency (RF) links, and the like. The code segments may be downloaded via computer networks such as the internet, intranets, etc.
It should also be noted that the exemplary embodiments mentioned in this disclosure describe some methods or systems based on a series of steps or devices. However, the present invention is not limited to the order of the above-described steps, that is, the steps may be performed in the order mentioned in the embodiments, or may be performed in a different order from the order in the embodiments, or several steps may be performed simultaneously.
In the foregoing, only the specific embodiments of the present invention are described, and it will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, modules and units described above may refer to the corresponding processes in the foregoing method embodiments, which are not repeated herein. It should be understood that the scope of the present invention is not limited thereto, and any equivalent modifications or substitutions can be easily made by those skilled in the art within the technical scope of the present invention, and they should be included in the scope of the present invention.

Claims (15)

1. A face image processing method, characterized by comprising:
receiving a plurality of first face images of a first user, and acquiring key points of each first face image;
inputting the key points of the plurality of first face images into a three-dimensional morphable model (3DMM) for joint iterative solution to obtain a unified solution of shape parameters corresponding to the face of the first user;
receiving a plurality of second face images of a second user, and acquiring key points of each second face image;
inputting the key points of the plurality of second face images into the 3DMM for joint iterative solution to obtain a single solution of expression parameters and pose parameters corresponding to each second face image;
substituting the unified solution of the shape parameters corresponding to the face of the first user and the single solution of the expression parameters and pose parameters corresponding to a second face image into the 3DMM to obtain a face-swapped image in which the facial contour in the second face image is replaced with the facial contour of the first user.
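By way of illustration only, the substitution step of claim 1 can be sketched in Python under the assumption of a linear 3DMM (vertices = mean shape + shape-basis offset + expression-basis offset); every name, dimension, and value below is a hypothetical stand-in, not the patented implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_v = 100                                  # toy vertex count
mean_face = rng.normal(size=(n_v, 3))      # stand-in for the 3DMM mean shape
S = rng.normal(size=(n_v, 3, 10))          # stand-in shape basis
E = rng.normal(size=(n_v, 3, 5))           # stand-in expression basis

alpha_first = rng.normal(size=10)          # unified shape solution (first user)
beta_second = rng.normal(size=5)           # expression solution (one second image)
R, s, t = np.eye(3), 1.0, np.zeros(3)      # pose solution (same second image)

# The substitution of claim 1: the first user's shape combined with the
# second image's expression, then placed with the second image's pose.
verts = mean_face + S @ alpha_first + E @ beta_second
posed = s * (verts @ R.T) + t
print(posed.shape)                         # (100, 3): geometry of the face swap
```

Rendering or projecting `posed` back into the second image would then yield the face-swapped result.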
2. The method of claim 1, wherein the process of joint iterative solution comprises:
determining initial coordinates of the corresponding 3D points according to the key points of any one of the input face images;
solving according to the key points of the face image and the 3D points to obtain initial values of the shape parameters, expression parameters, and pose parameters;
repeatedly executing the following first iterative operation or second iterative operation on the input key points until an iteration ending condition is met, so as to obtain the unified solution of the shape parameters and the single solution of the expression parameters and pose parameters corresponding to each face image;
wherein the first iterative operation comprises:
fixing the expression parameters and pose parameters at their latest obtained values, and solving simultaneously over the key points of all input face images to obtain the latest values of the shape parameters;
fixing the shape parameters and pose parameters at their latest obtained values, and solving the expression parameters separately according to the key points of each face image to obtain the latest values of the expression parameters corresponding to each face image;
fixing the shape parameters and expression parameters at their latest obtained values, and solving the pose parameters separately according to the key points of each face image to obtain the latest values of the pose parameters corresponding to each face image;
the second iterative operation comprises:
fixing the expression parameters and pose parameters at their latest obtained values, and solving simultaneously over the key points of all input face images to obtain the latest values of the shape parameters;
fixing the shape parameters and expression parameters at their latest obtained values, and solving the pose parameters separately according to the key points of each face image to obtain the latest values of the pose parameters corresponding to each face image;
and fixing the shape parameters and pose parameters at their latest obtained values, and solving the expression parameters separately according to the key points of each face image to obtain the latest values of the expression parameters corresponding to each face image.
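This alternation is a block-coordinate scheme: the shape block is solved jointly over all images, while the expression and pose blocks are solved per image with the other blocks frozen. A minimal runnable sketch of the first iterative operation, assuming a purely linear toy landmark model with stand-in bases and a translation-only stand-in for the pose (nothing here is the patented solver):

```python
import numpy as np

rng = np.random.default_rng(1)
n_lm, n_s, n_e, n_img = 136, 10, 5, 4      # 68 keypoints x 2 coords; toy sizes
B_s = rng.normal(size=(n_lm, n_s))         # stand-in shape basis
B_e = rng.normal(size=(n_lm, n_e))         # stand-in expression basis

# Synthetic observations sharing ONE shape vector across all images.
alpha_true = rng.normal(size=n_s)
obs = [B_s @ alpha_true + B_e @ rng.normal(size=n_e) + rng.normal()
       for _ in range(n_img)]

alpha = np.zeros(n_s)
betas = [np.zeros(n_e) for _ in range(n_img)]
ts = [0.0] * n_img

for _ in range(30):
    # Fix expression/pose; solve one shared shape from all images at once.
    rhs = np.concatenate([x - B_e @ b - t for x, b, t in zip(obs, betas, ts)])
    alpha = np.linalg.lstsq(np.vstack([B_s] * n_img), rhs, rcond=None)[0]
    # Fix shape/pose; solve the expression separately for each image.
    betas = [np.linalg.lstsq(B_e, x - B_s @ alpha - t, rcond=None)[0]
             for x, t in zip(obs, ts)]
    # Fix shape/expression; solve the (translation-only) pose per image.
    ts = [float(np.mean(x - B_s @ alpha - B_e @ b)) for x, b in zip(obs, betas)]

resid = np.mean([np.linalg.norm(x - B_s @ alpha - B_e @ b - t)
                 for x, b, t in zip(obs, betas, ts)])
print(f"mean residual after alternation: {resid:.2e}")   # shrinks toward 0
```

Swapping the order of the last two steps gives the second iterative operation.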
3. The method according to claim 2, further comprising, after the facial contour substitution has been completed for the first n frames of second face images of the second user, in the joint iterative solution process for each subsequent second face image:
setting the shape parameters of each subsequent second face image according to the unified solution of the shape parameters corresponding to the first n frames of second face images, wherein n is greater than 1.
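In effect, claim 3 describes a warm-start schedule: the shape is solved once on the first n frames and then frozen for the remainder of the stream, so later frames only iterate over expression and pose. A toy sketch of the schedule alone, with the actual solvers replaced by trivial stand-in functions:

```python
import numpy as np

def solve_joint(kp_frames):
    # Trivial stand-in for the joint solver of claim 2; returns a "shape".
    return np.mean(kp_frames, axis=0)

def solve_frame(kps, fixed_shape):
    # Trivial stand-in for a per-frame solve with the shape frozen.
    return kps - fixed_shape

rng = np.random.default_rng(2)
frames = [rng.normal(size=8) for _ in range(10)]   # stand-in keypoint vectors
n = 3                                              # warm-up frames, n > 1

shape = solve_joint(frames[:n])                    # unified shape, first n frames
stream = [solve_frame(f, shape) for f in frames[n:]]
print(len(stream))                                 # 7 later frames, shape reused
```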
4. The method of claim 2, wherein the iteration ending condition corresponding to the first face images comprises: the shape parameters remaining unchanged;
and the iteration ending condition corresponding to the second face images comprises: the shape parameters remaining unchanged, and the root mean square error between the 2D points obtained from the pose parameters and expression parameters of each second face image and the key points extracted directly from that second face image being smaller than a preset threshold.
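The stopping test of claim 4 compares 2D points reprojected from the current parameters against the key points detected in the image. A minimal sketch with made-up coordinates and a hypothetical threshold:

```python
import numpy as np

def rmse_2d(projected_pts, detected_kps):
    # Root mean square distance between reprojected points and detected keypoints.
    d = np.asarray(projected_pts) - np.asarray(detected_kps)
    return float(np.sqrt(np.mean(np.sum(d * d, axis=-1))))

proj = np.array([[10.0, 20.0], [30.0, 40.0]])   # reprojected from solved params
kps = np.array([[10.5, 19.5], [29.0, 41.0]])    # extracted from the image
THRESHOLD = 2.0                                 # hypothetical preset threshold
print(rmse_2d(proj, kps) < THRESHOLD)           # True: iteration may end
```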
5. The method of claim 2, wherein the iteration ending condition corresponding to the first face images comprises the number of iterations reaching 2, and the iteration ending condition corresponding to the second face images comprises the number of iterations reaching 2.
6. The method of claim 2, further comprising:
setting the shape parameters to zero, setting the expression parameters to preset expression coefficients corresponding to a mouth-open expression, and setting the texture to the texture of the average face, and substituting these into the 3DMM to obtain a 3D image;
setting the rotation matrix in the pose parameters to 0, setting the scale factor to a preset scale value, and setting the displacement matrix to a preset displacement value, to obtain a projection matrix;
projecting the 3D image onto a 2D plane according to the projection matrix to obtain a 2D image;
extracting key points of the 2D image to obtain a plurality of 2D points, and determining labels of the 2D points;
calculating, according to the projection matrix, the distance between the coordinates of each 2D point and the coordinates of each 3D point in the 3D image, and recording the label of each 2D point together with the coordinates of the 3D point closest to it;
and, after key points of a face image are received, determining initial coordinates of the 3D points corresponding to the key points according to the recorded labels of the 2D points and the coordinates of the closest 3D points.
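Claim 6 thus builds a one-off lookup table from 2D key-point labels to 3D vertices by projecting a canonical face and taking the nearest projected vertex for each detected key point. A self-contained toy version, assuming an orthographic stand-in for the projection matrix and random points in place of a real 3DMM mesh:

```python
import numpy as np

rng = np.random.default_rng(3)
verts3d = rng.normal(size=(500, 3))          # toy 3D "face" vertices

# Orthographic stand-in for the claim-6 projection (rotation = identity,
# preset scale and displacement); all values here are illustrative only.
scale, disp = 100.0, np.array([320.0, 240.0])
pts2d = scale * verts3d[:, :2] + disp        # every vertex projected to 2D

# Pretend a landmark detector returned these labelled 2D keypoints.
kp_idx_true = rng.choice(len(verts3d), size=68, replace=False)
keypoints = pts2d[kp_idx_true] + rng.normal(scale=0.1, size=(68, 2))

# For each labelled keypoint, record the nearest projected 3D vertex; the
# table later supplies initial 3D coordinates for incoming keypoints.
d2 = np.sum((keypoints[:, None, :] - pts2d[None, :, :]) ** 2, axis=-1)
nearest = d2.argmin(axis=1)
print(np.mean(nearest == kp_idx_true))       # ~1.0 on this toy example
```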
7. The method of claim 1, wherein the number of key points extracted from the first face image and the second face image is 1000.
8. A face image processing method, characterized by comprising:
after obtaining key points of each group of training data, inputting the key points into a three-dimensional morphable model (3DMM) for joint iterative solution to obtain a group of solution results, and combining the obtained groups of solution results into a training set, wherein each group of training data comprises a plurality of face images of one user, and each group of solution results comprises a unified solution of shape parameters and a single solution of expression parameters and pose parameters corresponding to each face image;
receiving a first face image of a first user and a second face image of a second user, and inputting each of them into a deep neural network model to obtain a unified solution of shape parameters corresponding to the first face image and a single solution of expression parameters and pose parameters corresponding to the second face image, the deep neural network model being trained on the training set;
substituting the unified solution of the shape parameters corresponding to the first face image and the single solution of the expression parameters and pose parameters corresponding to the second face image into the 3DMM to obtain a face-swapped image in which the facial contour in the second face image is replaced with the facial contour of the first user.
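In claim 8 the joint iterative solver acts as a label generator, and a network learns to regress the parameters from key points in a single forward pass. A minimal sketch of such a regressor in PyTorch (assumed available); the layer sizes, parameter counts, and random stand-in training pairs are all illustrative:

```python
import torch
import torch.nn as nn

N_KP, N_PARAM = 68 * 2, 10 + 5 + 7    # keypoint coords in; shape+expr+pose out

net = nn.Sequential(                  # small MLP stand-in for the regressor
    nn.Linear(N_KP, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, N_PARAM),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# Random stand-ins for {keypoints -> parameters solved by joint iteration}.
x = torch.randn(1024, N_KP)
y = torch.randn(1024, N_PARAM)

for _ in range(200):                  # plain MSE regression on the solver labels
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(x), y)
    loss.backward()
    opt.step()

params = net(torch.randn(1, N_KP))    # one forward pass replaces the iteration
print(params.shape)                   # torch.Size([1, 22])
```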
9. The method of claim 8, wherein the process of joint iterative solution comprises:
determining initial coordinates of the corresponding 3D points according to the key points of any one of the input face images;
solving according to the key points of the face image and the 3D points to obtain initial values of the shape parameters, expression parameters, and pose parameters;
repeatedly executing the following first iterative operation or second iterative operation on the input key points until an iteration ending condition is met, so as to obtain the unified solution of the shape parameters and the single solution of the expression parameters and pose parameters corresponding to each face image;
wherein the first iterative operation comprises:
fixing the expression parameters and pose parameters at their latest obtained values, and solving simultaneously over the key points of all input face images to obtain the latest values of the shape parameters;
fixing the shape parameters and pose parameters at their latest obtained values, and solving the expression parameters separately according to the key points of each face image to obtain the latest values of the expression parameters corresponding to each face image;
fixing the shape parameters and expression parameters at their latest obtained values, and solving the pose parameters separately according to the key points of each face image to obtain the latest values of the pose parameters corresponding to each face image;
the second iterative operation comprises:
fixing the expression parameters and pose parameters at their latest obtained values, and solving simultaneously over the key points of all input face images to obtain the latest values of the shape parameters;
fixing the shape parameters and expression parameters at their latest obtained values, and solving the pose parameters separately according to the key points of each face image to obtain the latest values of the pose parameters corresponding to each face image;
and fixing the shape parameters and pose parameters at their latest obtained values, and solving the expression parameters separately according to the key points of each face image to obtain the latest values of the expression parameters corresponding to each face image.
10. A face image processing device, characterized in that the device comprises:
a first receiving module, configured to receive a plurality of first face images of a first user and to acquire key points of each first face image;
a first solving module, configured to input the key points of the plurality of first face images into a three-dimensional morphable model (3DMM) for joint iterative solution to obtain a unified solution of shape parameters corresponding to the face of the first user;
a second receiving module, configured to receive a plurality of second face images of a second user and to acquire key points of each second face image;
a second solving module, configured to input the key points of the plurality of second face images into the 3DMM for joint iterative solution to obtain a single solution of expression parameters and pose parameters corresponding to each second face image;
and a first replacing module, configured to substitute the unified solution of the shape parameters corresponding to the face of the first user and the single solution of the expression parameters and pose parameters corresponding to a second face image into the 3DMM to obtain a face-swapped image in which the facial contour in the second face image is replaced with the facial contour of the first user.
11. A face image processing device, characterized in that the device comprises:
a training set construction module, configured to acquire key points of each group of training data and input the key points into a three-dimensional morphable model (3DMM) for joint iterative solution to obtain a group of solution results, and to combine the obtained groups of solution results into a training set, wherein each group of training data comprises a plurality of face images of one user, and each group of solution results comprises a unified solution of shape parameters and a single solution of expression parameters and pose parameters corresponding to each face image;
a parameter regression module, configured to receive a first face image of a first user and a second face image of a second user and to input each of them into a deep neural network model to obtain a unified solution of shape parameters corresponding to the first face image and a single solution of expression parameters and pose parameters corresponding to the second face image, the deep neural network model being trained on the training set;
and a second replacing module, configured to substitute the unified solution of the shape parameters corresponding to the first face image and the single solution of the expression parameters and pose parameters corresponding to the second face image into the 3DMM to obtain a face-swapped image in which the facial contour in the second face image is replaced with the facial contour of the first user.
12. A face image processing apparatus, characterized by comprising: a processor and a memory storing computer program instructions;
the processor, when executing the computer program instructions, implements the face image processing method of any one of claims 1-7.
13. A face image processing apparatus, characterized by comprising: a processor and a memory storing computer program instructions;
the processor, when executing the computer program instructions, implements the face image processing method of any one of claims 8-9.
14. A computer storage medium having computer program instructions stored thereon which, when executed by a processor, implement the face image processing method of any one of claims 1-7.
15. A computer storage medium having computer program instructions stored thereon which, when executed by a processor, implement the face image processing method of any one of claims 8-9.
CN201910526812.2A 2019-06-18 2019-06-18 Face image processing method, device, equipment and computer storage medium Active CN112102146B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910526812.2A CN112102146B (en) 2019-06-18 2019-06-18 Face image processing method, device, equipment and computer storage medium

Publications (2)

Publication Number Publication Date
CN112102146A (en) 2020-12-18
CN112102146B (en) 2023-11-03

Family

ID=73748772

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910526812.2A Active CN112102146B (en) 2019-06-18 2019-06-18 Face image processing method, device, equipment and computer storage medium

Country Status (1)

Country Link
CN (1) CN112102146B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113506220B (en) * 2021-07-16 2024-04-05 厦门美图之家科技有限公司 Face gesture editing method and system driven by 3D vertex and electronic equipment
CN113658035B (en) * 2021-08-17 2023-08-08 北京百度网讯科技有限公司 Face transformation method, device, equipment, storage medium and product
CN116152900B (en) * 2023-04-17 2023-07-18 腾讯科技(深圳)有限公司 Expression information acquisition method and device, computer equipment and storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107146199A (en) * 2017-05-02 2017-09-08 厦门美图之家科技有限公司 A kind of fusion method of facial image, device and computing device
WO2019033571A1 (en) * 2017-08-17 2019-02-21 平安科技(深圳)有限公司 Facial feature point detection method, apparatus and storage medium
CN108537126A (en) * 2018-03-13 2018-09-14 东北大学 A kind of face image processing system and method
CN109685873A (en) * 2018-12-14 2019-04-26 广州市百果园信息技术有限公司 A kind of facial reconstruction method, device, equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Survey of 3D facial expression acquisition and reconstruction techniques; Wang Shan et al.; Journal of System Simulation; Vol. 30, No. 7; full text *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant