CN112819947A - Three-dimensional face reconstruction method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN112819947A
CN112819947A (application CN202110151906.3A)
Authority
CN
China
Prior art keywords
face
dimensional
parameters
texture
shape
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110151906.3A
Other languages
Chinese (zh)
Inventor
俞云杰
黄晗
郭彦东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202110151906.3A priority Critical patent/CN112819947A/en
Publication of CN112819947A publication Critical patent/CN112819947A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/50 Lighting effects
    • G06T15/506 Illumination models

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Geometry (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a three-dimensional face reconstruction method and device, an electronic device, and a storage medium, relating to the technical field of electronic devices. The method comprises the following steps: obtaining shape parameters and texture parameters of a face to be reconstructed; inputting the shape parameters into a first model to obtain a three-dimensional face shape output by the first model; inputting the texture parameters into a second model to obtain a face texture map output by the second model, wherein at least one of the first model and the second model is obtained by training a generative adversarial network; and generating a target three-dimensional face based on the three-dimensional face shape and the face texture map, wherein the target three-dimensional face comprises texture information generated based on the face texture map. By generating the three-dimensional face shape and/or the face texture map with a trained generative adversarial network, the generated target three-dimensional face has rich detail features and the reconstruction effect of the three-dimensional face is improved.

Description

Three-dimensional face reconstruction method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of electronic device technologies, and in particular, to a method and an apparatus for reconstructing a three-dimensional face, an electronic device, and a storage medium.
Background
In the fields of computer vision and computer graphics, three-dimensional face reconstruction is a topic of steadily growing interest and can be widely applied to face recognition, face editing, human-computer interaction, expression-driven animation, augmented reality, virtual reality, and other fields. Three-dimensional face reconstruction based on a single RGB image, that is, reconstructing the shape and texture of a three-dimensional face from a single RGB image, is a long-standing problem. At present, a three-dimensional face reconstructed from a single RGB image has poor detail and lacks realism.
Disclosure of Invention
In view of the above problems, the present application provides a method, an apparatus, an electronic device, and a storage medium for reconstructing a three-dimensional face, so as to solve the above problems.
In a first aspect, an embodiment of the present application provides a method for reconstructing a three-dimensional face, where the method includes: acquiring shape parameters and texture parameters of a face to be reconstructed; inputting the shape parameters into a first model to obtain a three-dimensional face shape output by the first model; inputting the texture parameters into a second model to obtain a face texture map output by the second model, wherein at least one of the first model and the second model is obtained by training a generative adversarial network; and generating a target three-dimensional face based on the three-dimensional face shape and the face texture map, wherein the target three-dimensional face comprises texture information generated based on the face texture map.
In a second aspect, an embodiment of the present application provides an apparatus for reconstructing a three-dimensional face, where the apparatus includes: a parameter acquisition module, configured to acquire shape parameters and texture parameters of a face to be reconstructed; a face shape obtaining module, configured to input the shape parameters into a first model to obtain a three-dimensional face shape output by the first model; a texture map obtaining module, configured to input the texture parameters into a second model to obtain a face texture map output by the second model, wherein at least one of the first model and the second model is obtained by training a generative adversarial network; and a three-dimensional face generation module, configured to generate a target three-dimensional face based on the three-dimensional face shape and the face texture map, wherein the target three-dimensional face comprises texture information generated based on the face texture map.
In a third aspect, an embodiment of the present application provides an electronic device, including a memory and a processor, the memory being coupled to the processor and storing instructions that, when executed by the processor, cause the processor to perform the above method.
In a fourth aspect, the present application provides a computer-readable storage medium, in which a program code is stored, and the program code can be called by a processor to execute the above method.
The three-dimensional face reconstruction method, apparatus, electronic device, and storage medium provided by the embodiments of the present application acquire the shape parameters and texture parameters of a face to be reconstructed, input the shape parameters into a first model to obtain the three-dimensional face shape output by the first model, and input the texture parameters into a second model to obtain the face texture map output by the second model, wherein at least one of the first model and the second model is obtained by training a generative adversarial network; a target three-dimensional face is then generated based on the three-dimensional face shape and the face texture map, wherein the target three-dimensional face comprises texture information generated based on the face texture map. In this way, the three-dimensional face shape and/or the face texture map is generated by a trained generative adversarial network, so that the generated target three-dimensional face has rich detail features and the reconstruction effect of the three-dimensional face is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic flow chart illustrating a method for reconstructing a three-dimensional face according to an embodiment of the present application;
fig. 2 is a schematic flow chart illustrating a method for reconstructing a three-dimensional face according to another embodiment of the present application;
fig. 3 is a schematic flow chart illustrating a method for reconstructing a three-dimensional face according to still another embodiment of the present application;
fig. 4 is a flowchart illustrating a step S306 of the three-dimensional face reconstruction method illustrated in fig. 3 of the present application;
fig. 5 is a schematic flow chart illustrating a method for reconstructing a three-dimensional face according to another embodiment of the present application;
fig. 6 is a schematic flow chart illustrating a method for reconstructing a three-dimensional face according to yet another embodiment of the present application;
fig. 7 is a flowchart illustrating a method for reconstructing a three-dimensional face according to yet another embodiment of the present application;
fig. 8 is a schematic flow chart illustrating a method for reconstructing a three-dimensional face according to yet another embodiment of the present application;
fig. 9 shows a block diagram of a three-dimensional face reconstruction apparatus according to an embodiment of the present application;
fig. 10 is a block diagram of an electronic device for executing a method for reconstructing a three-dimensional face according to an embodiment of the present application;
fig. 11 illustrates a storage unit for storing or carrying program codes for implementing a three-dimensional face reconstruction method according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
At present, the classic method for three-dimensional face reconstruction based on a single RGB image is the three-dimensional deformable model (3D Morphable Model, 3DMM). Its basic idea is to collect three-dimensional faces in advance to build a database, perform principal component analysis on the shape vectors, expression vectors, and texture vectors of the faces to convert them into independent representations, and, on this basis, express any face as a linear combination of the average face and the principal components. The inventors have found through research that there are two important but often neglected problems in 3DMM: 1. after the original three-dimensional face scan data is collected, a pre-made template face is aligned to the original face data; however, the alignment is limited by the low quality of the template face (small numbers of vertices and patches), and the aligned face smooths out the original face data, so that some face detail information (such as acne, wrinkles, and the like) is lost; 2. existing 3DMM-based methods start from an average face and obtain the three-dimensional face appearance by a linear combination of the average face and principal components; the appearance obtained in this way is over-regularized, the generated face tends toward the average face, and the realism is poor.
In view of the above problems, the inventors, through long-term research, have proposed the three-dimensional face reconstruction method, apparatus, electronic device, and storage medium of the embodiments of the present application, which generate the three-dimensional face shape and/or the face texture map through a trained generative adversarial network, so that the generated target three-dimensional face has rich detail features and the reconstruction effect of the three-dimensional face is improved. The specific method for reconstructing a three-dimensional face is described in detail in the following embodiments.
Referring to fig. 1, fig. 1 is a schematic flowchart illustrating a method for reconstructing a three-dimensional face according to an embodiment of the present application. The method generates the three-dimensional face shape and/or the face texture map through a trained generative adversarial network, so that the generated target three-dimensional face has rich detail features and the reconstruction effect of the three-dimensional face is improved. In a specific embodiment, the method is applied to the three-dimensional face reconstruction apparatus 200 shown in fig. 9 and to the electronic device 100 (fig. 10) equipped with the apparatus 200. The following describes the specific process of this embodiment by taking an electronic device as an example; it is understood that the electronic device applied in this embodiment may be a smart phone, a tablet computer, a desktop computer, a wearable electronic device, and the like, which is not limited herein. As will be described in detail with respect to the flow shown in fig. 1, the method for reconstructing a three-dimensional face may specifically include the following steps:
step S101: and acquiring the shape parameters and texture parameters of the face to be reconstructed.
In this embodiment, the face to be reconstructed may be a three-dimensional face that needs to undergo three-dimensional face reconstruction. Obtaining the shape parameters and texture parameters of the face to be reconstructed may then include: acquiring the shape parameters and texture parameters of the three-dimensional face that needs three-dimensional face reconstruction. The shape parameters may include appearance parameters and expression parameters, in which case obtaining the shape parameters and texture parameters of the face to be reconstructed may include: acquiring the appearance parameters, expression parameters, and texture parameters of the three-dimensional face that needs three-dimensional face reconstruction.
As one mode, a two-dimensional color face may be acquired by an RGB camera, the acquired two-dimensional color face is processed to obtain a three-dimensional face corresponding to the two-dimensional color face as a face to be reconstructed, and a shape parameter and a texture parameter of the face to be reconstructed are obtained.
In some embodiments, after the shape parameters of the face to be reconstructed are obtained, the shape parameters may be initialized, and after the texture parameters of the face to be reconstructed are obtained, the texture parameters may be initialized.
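For illustration only (this sketch is not part of the disclosed method; the encoder architecture, parameter dimensions, and zero initialization below are assumptions), step S101 might obtain and initialize the parameters along the following lines:

```python
import torch
import torch.nn as nn

class ParamEncoder(nn.Module):
    """Illustrative encoder that regresses shape (appearance + expression) and
    texture parameters from a single RGB face image; all dimensions are assumptions."""
    def __init__(self, shape_dim=80, expr_dim=64, tex_dim=80):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, shape_dim + expr_dim + tex_dim)
        self.dims = [shape_dim, expr_dim, tex_dim]

    def forward(self, img):
        return torch.split(self.head(self.backbone(img)), self.dims, dim=1)

encoder = ParamEncoder()
rgb = torch.rand(1, 3, 224, 224)            # two-dimensional color face from an RGB camera
p_shape, p_expr, p_tex = encoder(rgb)       # appearance, expression and texture parameters
# Alternatively, the parameters may simply be initialized (e.g. to zeros)
# and refined later by the iterative optimization described in later steps.
p_shape_init = torch.zeros_like(p_shape)
```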
Step S102: and inputting the shape parameters into a first model to obtain a three-dimensional face shape output by the first model.
In some embodiments, the first model and the second model may be trained and set in advance, wherein at least one of the first model and the second model is obtained by training a generative adversarial network. Thus, as one approach, both the first model and the second model may be obtained by training generative adversarial networks. As another approach, the first model may be obtained by training a generative adversarial network, and the second model may be a principal component analysis statistical model. As yet another approach, the first model may be a principal component analysis statistical model, and the second model may be obtained by training a generative adversarial network.
In this embodiment, after obtaining the shape parameters of the face to be reconstructed, the shape parameters may be input into the first model, so as to obtain the three-dimensional face shape output by the first model. When the first model is a principal component analysis statistical model, the shape parameters can be input into the principal component analysis statistical model to obtain the three-dimensional face shape output by the principal component analysis statistical model.
Step S103: And inputting the texture parameters into a second model, and obtaining a face texture map output by the second model, wherein at least one of the first model and the second model is obtained by training a generative adversarial network.
In this embodiment, after obtaining the texture parameter of the face to be reconstructed, the texture parameter may be input into the second model, so as to obtain the face texture map output by the second model. When the second model is a principal component analysis statistical model, the texture parameters can be input into the principal component analysis statistical model to obtain the face texture mapping output by the principal component analysis statistical model.
Step S104: and generating a target three-dimensional face based on the three-dimensional face shape and the face texture map, wherein the target three-dimensional face comprises texture information generated based on the face texture map.
In this embodiment, after obtaining the three-dimensional face shape and the face texture map, a target three-dimensional face may be generated based on the three-dimensional face shape and the face texture map, where the target three-dimensional face includes texture information generated based on the face texture map, that is, the generated target three-dimensional face is a textured three-dimensional face.
In some embodiments, after obtaining the three-dimensional face shape and the face texture map, the three-dimensional face shape and the face texture map may be combined to generate the target three-dimensional face.
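As a hedged illustration of step S104 (the vertex-array and UV data layouts below are assumptions, not specified by this embodiment), combining the three-dimensional face shape with the face texture map could look like:

```python
import numpy as np

def apply_texture(vertices, uv_coords, texture_map):
    """Attach per-vertex colors sampled from the face texture map to the 3D face shape.

    vertices:    (N, 3) float array of 3D vertex positions (the three-dimensional face shape)
    uv_coords:   (N, 2) float array of per-vertex UV coordinates in [0, 1]
    texture_map: (H, W, 3) array, the face texture map
    """
    h, w = texture_map.shape[:2]
    # Nearest-neighbour sampling of the texture map at each vertex's UV location
    cols = np.clip(np.round(uv_coords[:, 0] * (w - 1)).astype(int), 0, w - 1)
    rows = np.clip(np.round((1.0 - uv_coords[:, 1]) * (h - 1)).astype(int), 0, h - 1)
    vertex_colors = texture_map[rows, cols]
    # The target three-dimensional face is the shape plus its texture information
    return {"vertices": vertices, "colors": vertex_colors}
```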
The three-dimensional face reconstruction method provided by this embodiment of the application acquires the shape parameters and texture parameters of a face to be reconstructed, inputs the shape parameters into a first model to obtain the three-dimensional face shape output by the first model, and inputs the texture parameters into a second model to obtain the face texture map output by the second model, wherein at least one of the first model and the second model is obtained by training a generative adversarial network; a target three-dimensional face is then generated based on the three-dimensional face shape and the face texture map, wherein the target three-dimensional face comprises texture information generated based on the face texture map. In this way, the three-dimensional face shape and/or the face texture map is generated by the trained generative adversarial network, so that the generated target three-dimensional face has rich detail features and the reconstruction effect of the three-dimensional face is improved.
Referring to fig. 2, fig. 2 is a schematic flowchart illustrating a method for reconstructing a three-dimensional face according to another embodiment of the present application. As will be described in detail with respect to the flow shown in fig. 2, in this embodiment the first model is a trained generative adversarial network, and the method for reconstructing a three-dimensional face may specifically include the following steps:
step S201: and acquiring the shape parameters and texture parameters of the face to be reconstructed.
For detailed description of step S201, please refer to step S101, which is not described herein again.
Step S202: And inputting the shape parameters into the trained generative adversarial network to obtain the face coordinate map output by the trained generative adversarial network.
In this embodiment, the first model is a trained generative adversarial network. As one way, a training data set may first be acquired, in which the attributes or features of one type of data differ from those of another type of data; the generative adversarial network is then trained and modeled on the acquired training data set according to a preset algorithm, so that rules are learned from the training data to obtain a trained generative adversarial network. In this embodiment, the training data set may be, for example, a plurality of shape parameters and a plurality of face coordinate maps having a one-to-one correspondence.
In some embodiments, after obtaining the shape parameters of the face to be reconstructed, the shape parameters may be input into the trained generative adversarial network, and the face coordinate map output by the trained generative adversarial network is obtained.
Step S203: and obtaining the three-dimensional face shape based on the face coordinate mapping.
In this embodiment, after obtaining the face coordinate map, a three-dimensional face shape may be obtained based on the face coordinate map. In some embodiments, after obtaining the face coordinate map, coordinates of each vertex of the three-dimensional face may be obtained based on the face coordinate map, and the three-dimensional face shape may be obtained based on the coordinates of each vertex of the three-dimensional face.
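One possible reading of steps S202 and S203, assuming the face coordinate map is a UV position map in which each pixel stores the (x, y, z) coordinates of one surface point (this layout and the uv_index lookup table are assumptions):

```python
import numpy as np

def shape_from_coordinate_map(coord_map, uv_index):
    """Recover per-vertex 3D coordinates from a face coordinate (UV position) map.

    coord_map: (H, W, 3) array output by the generative adversarial network,
               where each pixel stores the (x, y, z) coordinate of one surface point.
    uv_index:  (N, 2) integer array giving, for each mesh vertex, the (row, col)
               pixel at which its coordinate is stored.
    """
    vertices = coord_map[uv_index[:, 0], uv_index[:, 1]]   # (N, 3) vertex coordinates
    return vertices                                        # the three-dimensional face shape
```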
Step S204: and inputting the texture parameters into a second model to obtain the face texture mapping output by the second model.
Step S205: and generating a target three-dimensional face based on the three-dimensional face shape and the face texture map, wherein the target three-dimensional face comprises texture information generated based on the face texture map.
For detailed description of steps S204 to S205, please refer to steps S103 to S104, which are not described herein again.
Compared with the three-dimensional face reconstruction method shown in fig. 1, the three-dimensional face reconstruction method provided in this further embodiment of the present application additionally obtains a face coordinate map from the shape parameters through the trained generative adversarial network and obtains the three-dimensional face shape from the face coordinate map, thereby improving the accuracy of the obtained three-dimensional face shape.
Referring to fig. 3, fig. 3 is a schematic flowchart illustrating a method for reconstructing a three-dimensional face according to still another embodiment of the present application. As will be described in detail with respect to the flow shown in fig. 3, the method for reconstructing a three-dimensional face may specifically include the following steps:
step S301: and acquiring the shape parameters and texture parameters of the face to be reconstructed.
Step S302: and inputting the shape parameters into a first model to obtain a three-dimensional face shape output by the first model.
Step S303: And inputting the texture parameters into a second model, and obtaining a face texture map output by the second model, wherein at least one of the first model and the second model is obtained by training a generative adversarial network.
Step S304: and generating a target three-dimensional face based on the three-dimensional face shape and the face texture map, wherein the target three-dimensional face comprises texture information generated based on the face texture map.
For detailed description of steps S301 to S304, please refer to steps S101 to S104, which are not described herein again.
Step S305: and rendering the target three-dimensional face based on the rendering function, the camera parameter and the illumination parameter to generate a two-dimensional rendered face.
In this embodiment, after the target three-dimensional face is obtained, the target three-dimensional face may be rendered based on the rendering function, the camera parameter, and the illumination parameter, so as to generate a two-dimensional rendered face.
In some embodiments, based on a camera model, the camera parameters may be parameterized by the camera position [x_c, y_c, z_c], the camera direction [x'_c, y'_c, z'_c], and the focal length f_c, giving p_c = [x_c, y_c, z_c, x'_c, y'_c, z'_c]; based on an illumination model, the illumination parameters may be parameterized by the point light source position [x_l, y_l, z_l], the point light source color [r_l, g_l, b_l], and the ambient light color [r_a, g_a, b_a], giving p_l = [x_l, y_l, z_l, r_l, g_l, b_l, r_a, g_a, b_a]. Finally, the rendering of the whole image is completed by applying a rendering function R to the outputs of the two models together with the camera parameters and the illumination parameters, where S and g denote the first model and the second model respectively (for example, S denotes the first generative adversarial network and g denotes the second generative adversarial network), P denotes a function that maps the three-dimensional face shape to spatial coordinates and the face texture map to texture coordinates, and R denotes the rendering function that renders the three-dimensional face into a two-dimensional image, which may use weak perspective projection or perspective projection. Thus, as one approach, the target three-dimensional face (based on the three-dimensional face shape and the face texture map) may be rendered based on the rendering function R, the camera parameters p_c, and the illumination parameters p_l to generate the two-dimensional rendered face.
Step S306: and acquiring the posture of the face to be reconstructed.
In this embodiment, the pose of the face to be reconstructed may be obtained. As an implementable manner, the key point detection may be performed on the face to be reconstructed to obtain the face key points corresponding to the face to be reconstructed, and the pose of the face to be reconstructed is obtained based on the face key points.
Referring to fig. 4, fig. 4 is a flowchart illustrating a step S306 of the method for reconstructing a three-dimensional face shown in fig. 3 according to the present application. As will be explained in detail with respect to the flow shown in fig. 4, the method may specifically include the following steps:
step S3061: and carrying out key point detection on the face to be reconstructed to obtain a plurality of three-dimensional face key points of the face to be reconstructed as first face key points.
In some embodiments, the key point detection may be performed on the face to be reconstructed, and a plurality of three-dimensional face key points of the face to be reconstructed are obtained as the first face key points. In some embodiments, the key point detection may be performed on the face to be reconstructed, to obtain a plurality of three-dimensional face key points of the face to be reconstructed as first face key points, and to obtain a plurality of two-dimensional face key points of the face to be reconstructed as second face key points.
In this embodiment, a face to be reconstructed may be obtained, and a plurality of two-dimensional face key points and a plurality of three-dimensional face key points are detected by an advanced, pre-trained face key point detector and obtained as the second face key points and the first face key points, respectively. As one way, a face to be reconstructed may be obtained, and 68 two-dimensional face key points and 68 three-dimensional face key points are detected by an advanced, pre-trained face key point detector. As another way, a face to be reconstructed may be obtained, 68 two-dimensional face key points are detected by an advanced, pre-trained face key point detector, and these 68 two-dimensional face key points are then back-projected to generate 68 three-dimensional face key points.
Step S3062: and acquiring the posture of the face to be reconstructed based on the first face key point.
In some embodiments, after obtaining the first face key points, the pose of the face to be reconstructed may be obtained based on the first face key points. As one mode, after obtaining the first face key point, an euler angle may be obtained through calculation based on the first face key point, and the posture of the face to be reconstructed is determined based on the euler angle, where the euler angle includes a roll angle, a pitch angle, and a yaw angle.
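A hedged sketch of step S3062, assuming a canonical frontal template of the same key points is available (the Kabsch-based alignment and the Euler convention are illustrative choices, not prescribed by this embodiment):

```python
import numpy as np

def pose_from_landmarks(kpts_3d, frontal_template_3d):
    """Estimate roll, pitch and yaw (in degrees) from the first face key points by
    rigidly aligning a frontal landmark template to the detected 3D key points
    (Kabsch algorithm) and decomposing the resulting rotation."""
    p = kpts_3d - kpts_3d.mean(axis=0)                          # centred detected key points
    q = frontal_template_3d - frontal_template_3d.mean(axis=0)  # centred frontal template

    h = q.T @ p                                                 # 3x3 covariance, template -> detected
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T                     # rotation from the frontal pose to the detected pose

    # Decompose R = Rz(roll) @ Ry(yaw) @ Rx(pitch) into Euler angles
    pitch = np.arctan2(r[2, 1], r[2, 2])
    yaw = np.arcsin(-np.clip(r[2, 0], -1.0, 1.0))
    roll = np.arctan2(r[1, 0], r[0, 0])
    return np.degrees([roll, pitch, yaw])
```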
Step S307: and calculating the loss of key points of the face to be reconstructed and the target three-dimensional face based on the relation between the postures and the preset postures.
In some embodiments, a preset posture can be preset and stored, and the preset posture is used as a judgment basis for the posture of the human face to be reconstructed. Therefore, in this embodiment, after the pose of the face to be reconstructed is obtained, the pose may be compared with the preset pose to obtain a relationship between the pose and the preset pose, and the loss of the key points of the face to be reconstructed and the target three-dimensional face is calculated based on the relationship between the pose and the preset pose.
In some embodiments, a preset roll angle, a preset pitch angle, and a preset yaw angle may be preset and stored, where the preset roll angle is used as the basis for judging the roll angle of the face to be reconstructed, the preset pitch angle is used as the basis for judging the pitch angle of the face to be reconstructed, and the preset yaw angle is used as the basis for judging the yaw angle of the face to be reconstructed. As one mode, when the roll angle is greater than the preset roll angle, the pitch angle is greater than the preset pitch angle, and/or the yaw angle is greater than the preset yaw angle, it may be determined that the posture is greater than the preset posture; when the roll angle is not greater than the preset roll angle, the pitch angle is not greater than the preset pitch angle, and the yaw angle is not greater than the preset yaw angle, it may be determined that the posture is not greater than the preset posture.
Therefore, in this embodiment, after the first face key points are obtained, the roll angle of the face to be reconstructed may be calculated based on the first face key points, the roll angle is compared with the preset roll angle to obtain the relationship between the roll angle and the preset roll angle, and the key point loss of the face to be reconstructed and the target three-dimensional face is calculated based on this relationship; similarly, the pitch angle of the face to be reconstructed may be calculated based on the first face key points, the pitch angle is compared with the preset pitch angle, and the key point loss is calculated based on the relationship between the pitch angle and the preset pitch angle; and the yaw angle of the face to be reconstructed may be calculated based on the first face key points, the yaw angle is compared with the preset yaw angle, and the key point loss is calculated based on the relationship between the yaw angle and the preset yaw angle.
In some embodiments, when the relationship between the posture of the face to be reconstructed and the preset posture indicates that the posture is greater than the preset posture, a plurality of three-dimensional face key points of the target three-dimensional face may be obtained as third face key points, and the key point loss between the first face key points and the third face key points is calculated. As one mode, when the relationship indicates that the posture is greater than the preset posture, key point detection may be performed on the target three-dimensional face to obtain a plurality of three-dimensional face key points as the third face key points, and the key point loss between the first face key points and the third face key points is calculated.
In some embodiments, when the posture of the face to be reconstructed is not greater than the preset posture, a plurality of two-dimensional face key points corresponding to the plurality of three-dimensional face key points of the target three-dimensional face may be obtained as fourth face key points, and the key point loss between the second face key points and the fourth face key points is calculated. As one mode, when the relationship between the posture and the preset posture indicates that the posture is not greater than the preset posture, key point detection may be performed on the target three-dimensional face to obtain a plurality of three-dimensional face key points, the plurality of three-dimensional face key points are projected into the pixel coordinate system according to the camera parameters to obtain a plurality of corresponding two-dimensional face key points as the fourth face key points, and the key point loss between the second face key points and the fourth face key points is calculated.
In some embodiments, the key point loss of the face to be reconstructed and the target three-dimensional face may be calculated based on L_lan = ||M(I_0) - M'(p_s, p_e, p_c)||_2. In the above formula, M(I_0) denotes the first face key points or second face key points corresponding to the image I_0 to be reconstructed, M'(p_s, p_e, p_c) denotes the projected coordinates of the third face key points or fourth face key points corresponding to the target three-dimensional face, and ||·||_2 denotes the L2 norm.
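An illustrative computation of the key point loss together with the pose-dependent choice between the 3D and 2D key points (the 30-degree thresholds and the project_fn helper are assumptions):

```python
import numpy as np

def landmark_loss(kpts_ref, kpts_model):
    """L_lan = ||M(I_0) - M'(p_s, p_e, p_c)||_2 over corresponding key points."""
    return np.linalg.norm(kpts_ref - kpts_model)

def pose_aware_landmark_loss(pose, first_kpts_3d, second_kpts_2d,
                             model_kpts_3d, project_fn, preset=(30.0, 30.0, 30.0)):
    """Choose which key points enter the loss depending on the estimated posture."""
    roll, pitch, yaw = np.abs(pose)
    if roll > preset[0] or pitch > preset[1] or yaw > preset[2]:
        # large pose: compare 3D key points directly (first vs. third face key points)
        return landmark_loss(first_kpts_3d, model_kpts_3d)
    # near-frontal pose: project the model's 3D key points with the camera parameters
    # and compare in 2D (second vs. fourth face key points)
    return landmark_loss(second_kpts_2d, project_fn(model_kpts_3d))
```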
Step S308: and performing iterative optimization on the shape parameters based on the key point loss to obtain first optimized shape parameters.
In this embodiment, after the key point loss is obtained through calculation, the shape parameter may be iteratively optimized based on the key point loss to obtain a first optimized shape parameter. As an embodiment, after obtaining the key point loss through calculation, the gradient of the key point loss with respect to the shape parameter may be calculated by using a gradient descent algorithm, so as to perform iterative optimization on the shape parameter, where the iterative optimization process is continuously performed until the key point loss is almost converged or until the number of iterations exceeds a threshold number, so as to complete iterative optimization on the shape parameter, thereby obtaining the first optimized shape parameter.
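A minimal sketch of the iterative optimization of step S308, written with PyTorch autograd (the Adam optimizer, learning rate, and convergence test are assumptions):

```python
import torch

def optimize_shape_params(p_shape, loss_fn, lr=0.01, max_iters=200, tol=1e-6):
    """Iteratively refine the shape parameters by descending the key point loss.

    p_shape: initial shape parameter tensor
    loss_fn: callable mapping shape parameters to the scalar key point loss, i.e.
             it runs the first model, builds the target 3D face and evaluates L_lan.
    """
    p = p_shape.clone().detach().requires_grad_(True)
    optimizer = torch.optim.Adam([p], lr=lr)
    prev = float("inf")
    for _ in range(max_iters):            # stop when the loss converges or the budget is spent
        optimizer.zero_grad()
        loss = loss_fn(p)
        loss.backward()
        optimizer.step()
        if abs(prev - loss.item()) < tol:
            break
        prev = loss.item()
    return p.detach()                     # the first optimized shape parameter
```

The same loop, with the camera parameters substituted for the shape parameters, would cover the camera-parameter optimization of step S310.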
Step S309: inputting the first optimized shape parameter as a new shape parameter into the first model.
In some embodiments, after obtaining the first optimized shape parameter, the first optimized shape parameter may be input into the first model as a new shape parameter, so as to obtain a new three-dimensional face shape through the first model, and obtain a new target three-dimensional face through the new three-dimensional face shape, thereby improving the accuracy of the obtained three-dimensional face.
Step S310: and performing iterative optimization on the camera parameters based on the key point loss to obtain optimized camera parameters.
In this embodiment, after the key point loss is obtained through calculation, the camera parameters may be iteratively optimized based on the key point loss to obtain optimized camera parameters. As an embodiment, after the key point loss is obtained through calculation, the gradient of the key point loss with respect to the camera parameters may be calculated by using a gradient descent algorithm, so as to perform iterative optimization on the camera parameters, where the iterative optimization process is continuously performed until the key point loss has almost converged or until the number of iterations exceeds a threshold number, so as to complete iterative optimization on the camera parameters and obtain the optimized camera parameters.
Step S311: and rendering the target three-dimensional face by taking the optimized camera parameters as new camera parameters.
In some embodiments, after the optimized camera parameters are obtained, the optimized camera parameters may be used as new camera parameters to perform rendering on the target three-dimensional face, so as to obtain a new two-dimensional rendered face, and improve accuracy of the obtained two-dimensional rendered face.
Compared with the reconstruction method of the three-dimensional face shown in fig. 1, the reconstruction method of the three-dimensional face provided in the embodiment of the present application further improves the verisimilitude of the target three-dimensional face by calculating the loss of key points of the face to be reconstructed and the target three-dimensional face and optimizing the shape parameters and the camera parameters based on the loss of the key points.
Referring to fig. 5, fig. 5 is a schematic flowchart illustrating a method for reconstructing a three-dimensional face according to another embodiment of the present application. As will be described in detail with respect to the flow shown in fig. 5, the method for reconstructing a three-dimensional face may specifically include the following steps:
step S401: and acquiring the shape parameters and texture parameters of the face to be reconstructed.
Step S402: and inputting the shape parameters into a first model to obtain a three-dimensional face shape output by the first model.
Step S403: And inputting the texture parameters into a second model, and obtaining a face texture map output by the second model, wherein at least one of the first model and the second model is obtained by training a generative adversarial network.
Step S404: and generating a target three-dimensional face based on the three-dimensional face shape and the face texture map, wherein the target three-dimensional face comprises texture information generated based on the face texture map.
For detailed description of steps S401 to S404, please refer to steps S101 to S104, which are not described herein again.
Step S405: and rendering the target three-dimensional face based on the rendering function, the camera parameter and the illumination parameter to generate a two-dimensional rendered face.
For detailed description of step S405, please refer to step S305, which is not described herein again.
Step S406: and acquiring the biological characteristic information of the face to be reconstructed as first biological characteristic information, and acquiring the biological characteristic information of the two-dimensional rendering face as second biological characteristic information.
In some embodiments, an advanced face recognition network may be used to capture the biometric information of the face to be reconstructed, which is obtained as the first biometric information, and the same advanced face recognition network may be used to capture the biometric information of the two-dimensional rendered face, which is obtained as the second biometric information. The biometric information may include, but is not limited to, the fullness of the face, the length of the eyebrows, the size of the eyes, the height of the nose, the thickness of the lips, and the like. The face recognition network may be denoted F_n(I), where n indicates that the network has n convolutional layers.
Step S407: and calculating the biological characteristic loss of the face to be reconstructed and the two-dimensional rendering face based on the first biological characteristic information and the second biological characteristic information.
In this embodiment, after obtaining the first biometric information of the face to be reconstructed and the second biometric information of the two-dimensional rendered face, the biometric loss of the face to be reconstructed and the two-dimensional rendered face may be calculated based on the first biometric information and the second biometric information. In some embodiments, after obtaining the first biometric information of the face to be reconstructed and the second biometric information of the two-dimensional rendered face, a first face feature vector of the first biometric information and a second face feature vector of the second biometric information may be calculated, and then a biometric loss of the face to be reconstructed and the two-dimensional rendered face may be calculated based on the first face feature vector of the first biometric information and the second face feature vector of the second biometric information.
In some embodiments, the biometric loss of the face to be reconstructed and the two-dimensional rendered face may be calculated from F_n(I_0) and F_n(I_R), where F_n(I_0) represents the first face feature vector of the face to be reconstructed and F_n(I_R) represents the second face feature vector of the two-dimensional rendered face.
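The exact form of the biometric loss is not legible in the source text; a cosine distance between the two face feature vectors, as sketched below, is offered only as an assumed example:

```python
import torch
import torch.nn.functional as F

def biometric_loss(feat_ref, feat_render):
    """Distance between the first face feature vector F_n(I_0) and the second face
    feature vector F_n(I_R); a cosine distance is assumed here."""
    return 1.0 - F.cosine_similarity(feat_ref, feat_render, dim=-1).mean()
```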
Step S408: and performing iterative optimization on the shape parameter and the texture parameter based on the biological characteristic loss to obtain a second optimized shape parameter and a first optimized texture parameter.
In this embodiment, after the biometric loss is obtained through calculation, the shape parameter may be iteratively optimized based on the biometric loss to obtain a second optimized shape parameter. As an embodiment, after the biometric loss is obtained through calculation, the gradient of the biometric loss with respect to the shape parameter may be calculated by using a gradient descent algorithm, so as to perform iterative optimization on the shape parameter, where the iterative optimization process is continuously performed until the biometric loss has almost converged or until the number of iterations exceeds a threshold number, so as to complete iterative optimization on the shape parameter and obtain the second optimized shape parameter.
In this embodiment, after the biometric loss is obtained through calculation, iterative optimization may be performed on the texture parameter based on the biometric loss to obtain a first optimized texture parameter. As an embodiment, after the biometric loss is obtained through calculation, a gradient of the biometric loss with respect to the texture parameter may be calculated by using a gradient descent algorithm, so as to perform iterative optimization on the texture parameter, where the iterative optimization process is continuously performed until the biometric loss has almost converged or until the number of iterations exceeds a threshold number, so as to complete iterative optimization on the texture parameter and obtain the first optimized texture parameter.
Step S409: inputting the second optimized shape parameters as new shape parameters into the first model, and inputting the first optimized texture parameters as new texture parameters into the second model.
In some embodiments, after the second optimized shape parameter is obtained, the second optimized shape parameter may be input into the first model as a new shape parameter, so as to obtain a new three-dimensional face shape through the first model, and obtain a new target three-dimensional face through the new three-dimensional face shape, thereby improving the accuracy of the obtained three-dimensional face.
In some embodiments, after the first optimized texture parameter is obtained, the first optimized texture parameter may be input into the second model as a new texture parameter, so as to obtain a new face texture coordinate through the second model, and obtain a new target three-dimensional face through the new face texture coordinate, thereby improving the accuracy of the obtained three-dimensional face.
Compared with the three-dimensional face reconstruction method shown in fig. 1, the three-dimensional face reconstruction method provided in another embodiment of the present application further improves the verisimilitude of the target three-dimensional face by calculating the biological feature loss of the face to be reconstructed and the two-dimensional rendered face, and optimizing the shape parameter and the camera parameter based on the biological feature loss.
Referring to fig. 6, fig. 6 is a schematic flowchart illustrating a method for reconstructing a three-dimensional face according to yet another embodiment of the present application. As will be described in detail with respect to the flow shown in fig. 6, the method for reconstructing a three-dimensional face may specifically include the following steps:
step S501: and acquiring the shape parameters and texture parameters of the face to be reconstructed.
Step S502: and inputting the shape parameters into a first model to obtain a three-dimensional face shape output by the first model.
Step S503: And inputting the texture parameters into a second model, and obtaining a face texture map output by the second model, wherein at least one of the first model and the second model is obtained by training a generative adversarial network.
Step S504: and generating a target three-dimensional face based on the three-dimensional face shape and the face texture map, wherein the target three-dimensional face comprises texture information generated based on the face texture map.
For detailed description of steps S501 to S504, please refer to steps S101 to S104, which are not described herein again.
Step S505: and rendering the target three-dimensional face based on the rendering function, the camera parameter and the illumination parameter to generate a two-dimensional rendered face.
For a detailed description of step S505, please refer to step S305, which is not described herein again.
Step S506: and acquiring attribute content information of the face to be reconstructed as first attribute content information, and acquiring attribute content information of the two-dimensional rendering face as second attribute content information.
In some embodiments, attribute content information of the face to be reconstructed may be captured by using the middle-layer convolutional features of an advanced face recognition network, and the attribute content information of the face to be reconstructed is obtained as the first attribute content information; likewise, attribute content information of the two-dimensional rendered face may be captured by using the middle-layer convolutional features of the advanced face recognition network, and the attribute content information of the two-dimensional rendered face is obtained as the second attribute content information. The attribute content information may include, but is not limited to, expression, pose, illumination, and the like.
Step S507: and calculating the attribute content loss of the face to be reconstructed and the two-dimensional rendering face based on the first attribute content information and the second attribute content information.
In this embodiment, after obtaining the first attribute content information of the face to be reconstructed and the second attribute content information of the two-dimensional rendered face, the attribute content loss of the face to be reconstructed and the two-dimensional rendered face may be calculated based on the first attribute content information and the second attribute content information.
In some embodiments, the attribute content loss of the face to be reconstructed and the two-dimensional rendered face may be calculated from F_j(I_0) and F_j(I_R), where F_j(I_0) represents the middle-layer features of the face to be reconstructed and F_j(I_R) represents the middle-layer features of the two-dimensional rendered face.
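The attribute content loss formula is likewise not legible in the source text; an L2 distance between corresponding middle-layer features is one hedged possibility:

```python
import torch

def attribute_content_loss(mid_feats_ref, mid_feats_render):
    """Sum of L2 distances between middle-layer features F_j(I_0) and F_j(I_R)
    over the selected layers j (the choice of layers is an assumption)."""
    return sum(torch.norm(a - b) for a, b in zip(mid_feats_ref, mid_feats_render))
```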
Step S508: and performing iterative optimization on the shape parameter and the texture parameter based on the attribute content loss to obtain a third optimized shape parameter and a second optimized texture parameter.
In this embodiment, after the attribute content loss is obtained through calculation, iterative optimization may be performed on the shape parameter based on the attribute content loss to obtain a third optimized shape parameter. As an embodiment, after the attribute content loss is obtained through calculation, a gradient of the attribute content loss with respect to the shape parameter may be calculated by using a gradient descent algorithm, so as to perform iterative optimization on the shape parameter, where the iterative optimization process is continuously performed until the attribute content loss has almost converged or until the number of iterations exceeds a threshold number, so as to complete iterative optimization on the shape parameter and obtain the third optimized shape parameter.
In this embodiment, after the attribute content loss is obtained through calculation, iterative optimization may be performed on the texture parameter based on the attribute content loss to obtain a second optimized texture parameter. As an embodiment, after the attribute content loss is obtained through calculation, a gradient of the attribute content loss with respect to the texture parameter may be calculated by using a gradient descent algorithm, so as to perform iterative optimization on the texture parameter, where the iterative optimization process is continuously performed until the attribute content loss has almost converged or until the number of iterations exceeds a threshold number, so as to complete iterative optimization on the texture parameter and obtain the second optimized texture parameter.
Step S509: and inputting the third optimized shape parameter as a new shape parameter into the first model, and inputting the second optimized texture parameter as a new texture parameter into the second model.
In some embodiments, after obtaining the third optimized shape parameter, the third optimized shape parameter may be input into the first model as a new shape parameter, so as to obtain a new three-dimensional face shape through the first model, and obtain a new target three-dimensional face through the new three-dimensional face shape, thereby improving the accuracy of the obtained three-dimensional face.
In some embodiments, after the second optimized texture parameter is obtained, the second optimized texture parameter may be input into the second model as a new texture parameter, so as to obtain a new face texture coordinate through the second model, and obtain a new target three-dimensional face through the new face texture coordinate, thereby improving the accuracy of the obtained three-dimensional face.
Compared with the three-dimensional face reconstruction method shown in fig. 1, the three-dimensional face reconstruction method provided in another embodiment of the present application further improves the fidelity of the target three-dimensional face by calculating the attribute content information loss of the face to be reconstructed and the two-dimensional rendered face, and optimizing the shape parameter and the camera parameter based on the attribute content information loss.
Referring to fig. 7, fig. 7 is a schematic flowchart illustrating a method for reconstructing a three-dimensional face according to yet another embodiment of the present application. As will be described in detail with respect to the flow shown in fig. 7, the method for reconstructing a three-dimensional face may specifically include the following steps:
step S601: and acquiring the shape parameters and texture parameters of the face to be reconstructed.
Step S602: and inputting the shape parameters into a first model to obtain a three-dimensional face shape output by the first model.
Step S603: And inputting the texture parameters into a second model, and obtaining a face texture map output by the second model, wherein at least one of the first model and the second model is obtained by training a generative adversarial network.
Step S604: and generating a target three-dimensional face based on the three-dimensional face shape and the face texture map, wherein the target three-dimensional face comprises texture information generated based on the face texture map.
For detailed description of steps S601 to S604, please refer to steps S101 to S104, which are not described herein again.
Step S605: and rendering the target three-dimensional face based on the rendering function, the camera parameter and the illumination parameter to generate a two-dimensional rendered face.
For detailed description of step S605, please refer to step S305, which is not described herein again.
Step S606: And acquiring the pixel information of the face to be reconstructed as first pixel information, and acquiring the pixel information of the two-dimensional rendering face as second pixel information.
In some embodiments, pixel information of a face to be reconstructed is obtained as first pixel information, and pixel information of a two-dimensional rendered face is obtained as second pixel information. The pixel information of the face to be reconstructed and the pixel information of the two-dimensional rendered face may be obtained through the pixel extraction module, which is not limited herein.
Step S607: and calculating pixel loss of the face to be reconstructed and the two-dimensional rendering face based on the first pixel information and the second pixel information.
In this embodiment, after obtaining the first pixel information of the face to be reconstructed and the second pixel information of the two-dimensional rendered face, the pixel loss of the face to be reconstructed and the pixel loss of the two-dimensional rendered face may be calculated based on the first pixel information and the second pixel information.
In some embodiments, the pixel loss of the face to be reconstructed and the two-dimensional rendered face may be calculated based on L_pix = ||I_0 - I_R||_1. In the above formula, I_0 represents the face to be reconstructed, I_R represents the two-dimensional rendered face, and ||·||_1 represents the L1 norm.
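The pixel loss has a direct one-line translation (the images are assumed to be tensors of identical shape and value range):

```python
import torch

def pixel_loss(img_ref, img_render):
    """L_pix = ||I_0 - I_R||_1, the summed absolute per-pixel difference."""
    return torch.abs(img_ref - img_render).sum()
```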
Step S608: and performing iterative optimization on the illumination parameters based on the pixel loss to obtain optimized illumination parameters.
In this embodiment, after the pixel loss is obtained through calculation, the illumination parameters may be iteratively optimized based on the pixel loss to obtain optimized illumination parameters. As an embodiment, after the pixel loss is obtained through calculation, a gradient of the pixel loss with respect to the illumination parameters may be calculated by using a gradient descent algorithm, so as to perform iterative optimization on the illumination parameters, where the iterative optimization process is continuously performed until the pixel loss has almost converged or until the number of iterations exceeds a threshold number, so as to complete iterative optimization on the illumination parameters and obtain the optimized illumination parameters.
In some embodiments, after the pixel loss is obtained through calculation, iterative optimization may be performed on the shape parameter and the texture parameter based on the pixel loss, which is not described herein again.
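As an illustrative sketch only, the gradient-based refinement of the illumination parameters described above might look like the following; it assumes a differentiable rendering function, uses the Adam optimizer as one possible gradient descent variant, and all names and default values are assumptions rather than part of the embodiment.

```python
import torch


def optimize_lighting(render_fn, target_face, face_3d, camera_params,
                      lighting_params, steps=200, lr=1e-2, tol=1e-6):
    """Iteratively refine the illumination parameters by gradient descent
    on the L1 pixel loss. render_fn is assumed to be a differentiable
    rendering function taking (3D face, camera params, lighting params)
    and returning an image the same size as target_face."""
    lighting = lighting_params.detach().clone().requires_grad_(True)
    optimizer = torch.optim.Adam([lighting], lr=lr)  # one gradient-descent variant
    previous_loss = float("inf")
    for _ in range(steps):                 # stop when the iteration budget is exceeded
        rendered = render_fn(face_3d, camera_params, lighting)
        loss = (target_face - rendered).abs().sum()   # L_pix = ||I0 - IR||_1
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        if abs(previous_loss - loss.item()) < tol:    # or when the loss has converged
            break
        previous_loss = loss.item()
    return lighting.detach()
```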
Step S609: and rendering the target three-dimensional face by taking the optimized illumination parameters as new illumination parameters.
In some embodiments, after the optimized lighting parameters are obtained, the optimized lighting parameters can be used as new lighting parameters to perform rendering on the target three-dimensional face, so as to obtain a new two-dimensional rendered face, and improve the accuracy of the obtained two-dimensional rendered face.
Compared with the three-dimensional face reconstruction method shown in fig. 1, the three-dimensional face reconstruction method provided in another embodiment of the present application further improves the verisimilitude of the target three-dimensional face by calculating the pixel loss between the face to be reconstructed and the two-dimensional rendered face, and optimizing the illumination parameters based on the pixel loss.
Referring to fig. 8, fig. 8 is a schematic flowchart illustrating a method for reconstructing a three-dimensional face according to yet another embodiment of the present application. As will be described in detail with respect to the flow shown in fig. 8, the method for reconstructing a three-dimensional face may specifically include the following steps:
step S701: the method comprises the steps of obtaining a first training data set, wherein the first training data set comprises a plurality of shape parameters and a plurality of three-dimensional face shapes, and the shape parameters correspond to the three-dimensional face shapes one to one.
In this embodiment, a first training data set is first acquired. The first training data set may include a plurality of shape parameters and a plurality of three-dimensional face shapes, and the plurality of shape parameters and the plurality of three-dimensional face shapes are in one-to-one correspondence. As one way, the first training data set may also include a plurality of shape parameters and a plurality of face coordinate maps, where the plurality of shape parameters and the plurality of face coordinate maps correspond to one another.
In some embodiments, the first training data set may be stored locally in the electronic device, may be stored and transmitted to the electronic device by another device, may be stored and transmitted to the electronic device from a server, may be collected in real time by the electronic device, and the like, which is not limited herein.
Step S702: and training a generation countermeasure network by taking the shape parameters as input parameters and the three-dimensional human face shapes as output parameters to obtain the trained first generation countermeasure network.
As one approach, after the plurality of shape parameters and the plurality of three-dimensional face shapes are obtained, they may be used as the first training data set to train the generation countermeasure network, so as to obtain the trained first generation countermeasure network. In some embodiments, the plurality of shape parameters may be used as input parameters and the plurality of three-dimensional face shapes as output parameters to train the generation countermeasure network, so as to obtain the trained first generation countermeasure network. In addition, after the trained first generation countermeasure network is obtained, its accuracy may be verified by determining whether the three-dimensional face shape output by the trained first generation countermeasure network based on the input shape parameters meets a preset requirement. When the output three-dimensional face shape does not meet the preset requirement, the first training data set may be collected again to retrain the generation countermeasure network, or a plurality of first training data sets may be obtained again to correct the trained first generation countermeasure network, which is not limited herein.
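Purely as an illustration, one adversarial training step of such a generation countermeasure network might be sketched as follows; the generator and discriminator definitions, the batch handling, and the choice of a standard binary cross-entropy adversarial loss are assumptions and are not prescribed by the embodiment.

```python
import torch
import torch.nn.functional as F


def adversarial_step(generator, discriminator, g_opt, d_opt,
                     shape_params, real_shapes):
    """One adversarial training step: shape parameters are the generator
    input and real three-dimensional face shapes (face coordinate maps)
    are the real samples. The discriminator is assumed to output one
    logit per sample."""
    # Discriminator update: real shapes vs. generated shapes
    fake_shapes = generator(shape_params).detach()
    d_real = discriminator(real_shapes)
    d_fake = discriminator(fake_shapes)
    d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator update: try to make generated shapes classified as real
    d_fake = discriminator(generator(shape_params))
    g_loss = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```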
In the training process of the generation countermeasure network, a three-dimensional face shape (face coordinate map) with the lowest resolution is trained first, and the generator and the discriminator are trained until they are stable at that resolution. Training then transitions to the next higher resolution. A new convolutional layer is added and processed in a residual-block manner: the low-resolution image feature map is directly upsampled by a factor of 2 into a feature map at the next higher resolution, and this feature map is converted into an RGB map F1 by a 1 × 1 convolution; the feature map that has just been upsampled by a factor of 2 is also processed by a 3 × 3 convolution and then converted into an RGB map F2 by a new 1 × 1 convolution; F1 is multiplied by the weight 1-a and F2 by the weight a and the two are added, where the weight coefficient a gradually transitions from 0 to 1, ensuring a "fade-in" of the 3 × 3 convolutional layer, and the generator and discriminator are trained until they are stable at this resolution. Finally, training transitions to ever higher resolutions: each new convolutional layer is processed by adding a new residual block according to the method in the previous step, and the steps are repeated until the original resolution of the three-dimensional face shape (face coordinate map) is reached.
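The fade-in transition described above (blending the 1 × 1-convolved skip output F1 with the new 3 × 3-convolution branch F2 using the weight a) could, for instance, be sketched as the following module; the layer names, channel arguments, and the nearest-neighbour upsampling choice are assumptions for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FadeInBlock(nn.Module):
    """Residual-style transition to the next resolution: the upsampled
    low-resolution features go through a 1x1 convolution to give F1, the
    new 3x3 convolution branch followed by a 1x1 convolution gives F2,
    and the output is (1 - a) * F1 + a * F2 with a ramping from 0 to 1."""

    def __init__(self, in_channels, out_channels, img_channels=3):
        super().__init__()
        self.to_rgb_old = nn.Conv2d(in_channels, img_channels, kernel_size=1)   # skip path -> F1
        self.conv3x3 = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)
        self.to_rgb_new = nn.Conv2d(out_channels, img_channels, kernel_size=1)  # new path -> F2

    def forward(self, features, a):
        up = F.interpolate(features, scale_factor=2, mode="nearest")  # 2x upsampling
        f1 = self.to_rgb_old(up)
        f2 = self.to_rgb_new(self.conv3x3(up))
        return (1.0 - a) * f1 + a * f2  # "fade-in" of the new 3x3 layer
```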
Step S703: and acquiring a second training data set, wherein the second training data set comprises a plurality of texture parameters and a plurality of face texture maps, and the texture parameters correspond to the face texture maps one by one.
In this embodiment, a second training data set is first acquired. The second training data set may include a plurality of texture parameters and a plurality of face texture maps, and the plurality of texture parameters and the plurality of face texture maps correspond to one another.
In some embodiments, the second training data set may be stored locally by the electronic device, may be stored and transmitted to the electronic device by another device, may be stored and transmitted to the electronic device from a server, may be collected in real time by the electronic device, and the like, which is not limited herein.
In some embodiments, a face coordinate map of a sample face may be obtained by performing face registration on the sample face, the face coordinate map is added to the first training data set, a face texture map of the sample face is obtained, and the face texture map is added to the second training data set, and as a method, the process of performing face registration on the sample face may include the following steps:
S1, obtaining a sample face, wherein the sample face is a three-dimensional face with texture; projecting the scan data of the textured three-dimensional face onto a two-dimensional face image to generate a frontal face image; generating 68 two-dimensional face key points of the frontal face image by using a two-dimensional key point detector; and back-projecting the 68 two-dimensional face key points to generate 68 three-dimensional face key points.
S2, obtaining the 68 three-dimensional face key points of the original three-dimensional face scan data according to S1, and registering them with the 68 three-dimensional face key points of a standard face template through a Procrustes transformation (a sketch of this alignment is given after step S7 below). In this way, the pose and size of all original three-dimensional face scan data are aligned with the standard face template.
And S3, registering neutral face data in the original three-dimensional face scanning data with a standard face template by using an NICP algorithm, wherein the neutral face data refers to a natural face without any expression.
And S4, for the other 19 (or another number of) expressive face data in the original three-dimensional face scan data, using a deformation transfer algorithm to transfer the expressions of a set of template faces to the registered neutral face so as to generate the corresponding expressions (for example, if the neutral expression and the open-mouth expression of the template face are known, the open-mouth motion can be transferred to the registered neutral face).
And S5, registering the expression data in the original three-dimensional face scan data with the deformed expressive face template from S4 by using the NICP algorithm, thereby generating a more accurate expressive face mesh.
S6, using a sample-based face manipulation algorithm, constructing the 20 (or another number of) expressive face meshes generated in S3 and S5 for each person into a blend shape model of that person. The result is, for each face, a neutral expression and 46 FACS blend shapes, i.e., any expression H of a person can be expressed as a linear combination of the blend shapes:
H = B_0 + Σ_{i=1}^{46} a_i (B_i - B_0), where B_0 denotes the neutral face, B_i the FACS blend shapes, and a = (a_1, ..., a_46) the blend shape weight vector.
And S7, sampling the blend shape parameter vector a from a Gaussian distribution N(μ=0, σ=1), and normalizing it through an exponential (softmax-style) normalization function, so as to perform data augmentation on the three-dimensional faces. The augmented face data are converted into face coordinate maps (which can be converted into three-dimensional face shapes) and face texture maps.
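For step S2 above, a sketch of the Procrustes (similarity) alignment of the 68 three-dimensional key points to the standard face template could look as follows; this uses the classical Umeyama closed-form solution, and the array shapes and function name are assumptions.

```python
import numpy as np


def procrustes_align(src_pts: np.ndarray, dst_pts: np.ndarray):
    """Estimate scale s, rotation R, and translation t so that
    s * R @ x + t maps the source key points onto the target key points
    in the least-squares sense (inputs assumed to be (68, 3) arrays)."""
    mu_src, mu_dst = src_pts.mean(axis=0), dst_pts.mean(axis=0)
    src_c, dst_c = src_pts - mu_src, dst_pts - mu_dst
    cov = dst_c.T @ src_c / len(src_pts)        # cross-covariance matrix
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U @ Vt))          # handle reflection case
    D = np.diag([1.0, 1.0, d])
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / src_c.var(axis=0).sum()
    t = mu_dst - s * R @ mu_src
    return s, R, t
```

And for step S7, the Gaussian sampling and exponential normalization of the blend shape weights used for data augmentation might be sketched as below; the delta-blendshape combination, the number of samples, and all names are assumptions consistent with the description above rather than the literal implementation.

```python
import numpy as np


def augment_expressions(neutral, blendshapes, num_samples=100, rng=None):
    """Sample blend shape weight vectors a ~ N(0, 1), normalize them with
    an exponential (softmax) function, and form new expressive faces
    H = B0 + sum_i a_i * (B_i - B0).

    neutral:      (V, 3) neutral face mesh B0
    blendshapes:  (46, V, 3) FACS blend shapes B_i
    """
    if rng is None:
        rng = np.random.default_rng()
    deltas = blendshapes - neutral              # (46, V, 3) offsets from neutral
    faces = []
    for _ in range(num_samples):
        a = rng.standard_normal(len(blendshapes))
        a = np.exp(a) / np.exp(a).sum()         # exponential (softmax) normalization
        faces.append(neutral + np.tensordot(a, deltas, axes=1))
    return np.stack(faces)
```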
Step S704: and training the generation countermeasure network by taking the texture parameters as input parameters and the face texture maps as output parameters to obtain the trained second generation countermeasure network.
As one approach, after the plurality of texture parameters and the plurality of face texture maps are obtained, they may be used as the second training data set to train the generation countermeasure network, so as to obtain the trained second generation countermeasure network. In some embodiments, the plurality of texture parameters may be used as input parameters and the plurality of face texture maps as output parameters to train the generation countermeasure network, so as to obtain the trained second generation countermeasure network. In addition, after the trained second generation countermeasure network is obtained, its accuracy may be verified by determining whether the face texture map output by the trained second generation countermeasure network based on the input texture parameters meets a preset requirement. When the output face texture map does not meet the preset requirement, the second training data set may be collected again to retrain the generation countermeasure network, or a plurality of second training data sets may be obtained again to correct the trained second generation countermeasure network, which is not limited herein.
In the training process of the second generation countermeasure network, a face texture map with the lowest resolution is trained first, and the generator and the discriminator are trained until they are stable at that resolution. Training then transitions to the next higher resolution in the same residual-block, fade-in manner as described above for the first generation countermeasure network: the low-resolution image feature map is upsampled by a factor of 2 and converted into an RGB map F1 by a 1 × 1 convolution; the upsampled feature map is also processed by a 3 × 3 convolution and then converted into an RGB map F2 by a new 1 × 1 convolution; F1 is weighted by 1-a and F2 by a and the two are added, with the weight coefficient a gradually transitioning from 0 to 1 so that the new 3 × 3 convolutional layer "fades in", and the generator and discriminator are trained until they are stable at this resolution. Finally, new residual blocks are added according to the method in the previous step and the steps are repeated until the original resolution of the face texture map is reached.
Step S705: and acquiring the shape parameters and texture parameters of the face to be reconstructed.
Step S706: and inputting the shape parameters into a first model to obtain a three-dimensional face shape output by the first model.
Step S707: and inputting the texture parameters into a second model, and obtaining a face texture map output by the second model, wherein at least one of the first model and the second model is obtained based on generation of confrontation network training.
Step S708: and generating a target three-dimensional face based on the three-dimensional face shape and the face texture map, wherein the target three-dimensional face comprises texture information generated based on the face texture map.
For detailed description of steps S705 to S708, please refer to steps S101 to S104, which are not described herein again.
In the method for reconstructing a three-dimensional face according to another embodiment of the present application, the generation countermeasure network is trained according to a plurality of shape parameters and a plurality of three-dimensional face shapes, so as to obtain a trained first generation countermeasure network that can generate a three-dimensional face shape based on shape parameters. The generation countermeasure network is also trained according to a plurality of texture parameters and a plurality of face texture maps, so as to obtain a trained second generation countermeasure network that can generate a face texture map based on texture parameters, thereby improving the accuracy of the generated three-dimensional face shape and face texture map.
Referring to fig. 9, fig. 9 is a block diagram illustrating a three-dimensional face reconstruction apparatus according to an embodiment of the present disclosure. As will be explained below with reference to the block diagram shown in fig. 9, the apparatus 200 for reconstructing a three-dimensional human face includes: a parameter obtaining module 210, a face shape obtaining module 220, a texture map obtaining module 230, and a three-dimensional face generating module 240, wherein:
the parameter obtaining module 210 is configured to obtain shape parameters and texture parameters of a face to be reconstructed.
A face shape obtaining module 220, configured to input the shape parameter into the first model, and obtain a three-dimensional face shape output by the first model.
Further, the first model is a trained generated confrontation network, and the face shape obtaining module 220 includes: a face coordinate mapping obtaining sub-module and a face shape obtaining sub-module, wherein:
and the face coordinate mapping obtaining submodule is used for inputting the shape parameters into the trained generation confrontation network and obtaining the face coordinate mapping output by the trained generation confrontation network.
And the face shape obtaining submodule is used for obtaining the three-dimensional face shape based on the face coordinate mapping.
A texture map obtaining module 230, configured to input the texture parameter into a second model, and obtain a face texture map output by the second model, where at least one of the first model and the second model is obtained based on generative confrontation network training.
A three-dimensional face generating module 240, configured to generate a target three-dimensional face based on the three-dimensional face shape and the face texture map, where the target three-dimensional face includes texture information generated based on the face texture map.
Further, the apparatus 200 for reconstructing a three-dimensional human face further includes: a two-dimensional rendering face generation module, wherein:
and the two-dimensional rendering face generation module is used for rendering the target three-dimensional face based on a rendering function, a camera parameter and an illumination parameter to generate a two-dimensional rendering face.
Further, the apparatus 200 for reconstructing a three-dimensional human face further includes: the system comprises an attitude acquisition module, a key point loss calculation module, a first iterative optimization module and a first optimization parameter substitution module, wherein:
and the posture acquisition module is used for acquiring the posture of the face to be reconstructed.
Further, the gesture obtaining module comprises: the first face key point obtaining submodule and the posture obtaining submodule, wherein:
A first face key point obtaining submodule, configured to perform key point detection on the face to be reconstructed to obtain a plurality of three-dimensional face key points of the face to be reconstructed as first face key points.
further, the first face keypoint obtaining sub-module includes: a second face keypoint obtaining unit, wherein:
and the second face key point obtaining unit is used for carrying out key point detection on the face to be reconstructed, obtaining a plurality of three-dimensional face key points of the face to be reconstructed as first face key points, and obtaining a plurality of two-dimensional face key points of the face to be reconstructed as second face key points.
And the posture acquisition submodule is used for acquiring the posture of the face to be reconstructed based on the first face key point.
And the key point loss calculation module is used for calculating the loss of the key points of the face to be reconstructed and the target three-dimensional face based on the relation between the gesture and the preset gesture.
Further, the keypoint loss calculation module comprises: a first keypoint computation submodule and a second keypoint computation submodule, wherein:
and the first key point loss calculation submodule is used for acquiring a plurality of three-dimensional face key points of the target three-dimensional face as third face key points when the gesture is greater than the preset gesture, and calculating the key point loss of the first face key points and the third face key points.
And the second key point calculation sub-module is used for acquiring a plurality of two-dimensional face key points corresponding to the plurality of three-dimensional face key points of the target three-dimensional face as fourth face key points when the gesture is not greater than the preset gesture, and calculating the key point loss of the second face key points and the fourth face key points.
And the first iterative optimization module is used for performing iterative optimization on the shape parameters based on the key point loss to obtain first optimized shape parameters.
A first optimized parameter substitution module for first inputting the first optimized shape parameter as a new shape parameter into the first model.
Further, the apparatus 200 for reconstructing a three-dimensional human face further includes: an optimized camera parameter acquisition module and a second optimized parameter replacement module, wherein:
and the optimized camera parameter obtaining module is used for carrying out iterative optimization on the camera parameters based on the key point loss to obtain optimized camera parameters.
And the second optimization parameter replacing module is used for rendering the target three-dimensional face by taking the optimized camera parameters as new camera parameters.
Further, the apparatus 200 for reconstructing a three-dimensional human face further includes: the system comprises a biological characteristic information acquisition module, a biological characteristic loss calculation module, a second iterative optimization module and a third optimization parameter substitution module, wherein:
and the biological characteristic information acquisition module is used for acquiring the biological characteristic information of the face to be reconstructed as first biological characteristic information and acquiring the biological characteristic information of the two-dimensional rendering face as second biological characteristic information.
And the biological characteristic loss calculation module is used for calculating the biological characteristic loss of the face to be reconstructed and the two-dimensional rendering face based on the first biological characteristic information and the second biological characteristic information.
And the second iterative optimization module is used for performing iterative optimization on the shape parameter and the texture parameter based on the biological characteristic loss to obtain a second optimized shape parameter and a first optimized texture parameter.
A third optimized parameter substitution module for inputting the second optimized shape parameter as a new shape parameter into the first model and the first optimized texture parameter as a new texture parameter into the second model.
Further, the apparatus 200 for reconstructing a three-dimensional human face further includes: the attribute content information acquisition module, the attribute content loss calculation module, the third iterative optimization module and the fourth optimization parameter substitution module, wherein:
and the attribute content information acquisition module is used for acquiring the attribute content information of the face to be reconstructed as first attribute content information and acquiring the attribute content information of the two-dimensional rendering face as second attribute content information.
And the attribute content loss calculation module is used for calculating the attribute content loss of the face to be reconstructed and the two-dimensional rendering face based on the first attribute content information and the second attribute content information.
And the third iterative optimization module is used for performing iterative optimization on the shape parameters and the texture parameters based on the attribute content loss to obtain third optimized shape parameters and second optimized texture parameters.
And the fourth optimization parameter replacing module is used for inputting the third optimization shape parameter as a new shape parameter into the first model and inputting the second optimization texture parameter as a new texture parameter into the second model.
Further, the apparatus 200 for reconstructing a three-dimensional human face further includes: the pixel information acquisition module, the pixel loss calculation module, the fourth iterative optimization module and the fifth optimization parameter substitution module, wherein:
and the pixel information acquisition module is used for acquiring the pixel information of the face to be reconstructed as first pixel information and acquiring the pixel information based on the two-dimensional rendering face as second pixel information.
And the pixel loss calculation module is used for calculating the pixel loss of the face to be reconstructed and the two-dimensional rendering face based on the first pixel information and the second pixel information.
And the fourth iterative optimization module is used for performing iterative optimization on the illumination parameters based on the pixel loss to obtain optimized illumination parameters.
And the fifth optimized parameter replacing module is used for rendering the target three-dimensional face by taking the optimized illumination parameters as new illumination parameters.
Further, the apparatus 200 for reconstructing a three-dimensional human face further includes: a first training data set acquisition module and a first generation antagonizing network acquisition module, wherein:
the device comprises a first training data set acquisition module, a second training data set acquisition module and a third training data set acquisition module, wherein the first training data set comprises a plurality of shape parameters and a plurality of three-dimensional face shapes, and the shape parameters correspond to the three-dimensional face shapes one to one.
And the first generation countermeasure network obtaining module is used for training a generation countermeasure network by taking the shape parameters as input parameters and the three-dimensional human face shapes as output parameters to obtain the trained first generation countermeasure network.
Further, the apparatus 200 for reconstructing a three-dimensional human face further includes: a second training data set acquisition module and a second generative confrontation network acquisition module, wherein:
the second training data set acquisition module is used for acquiring a second training data set, wherein the second training data set comprises a plurality of texture parameters and a plurality of face texture maps, and the texture parameters correspond to the face texture maps one by one.
And the second generation countermeasure network obtaining module is used for training the countermeasure network by taking the texture parameters as input parameters and the face texture maps as output parameters to obtain the trained second generation countermeasure network.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and modules may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, the coupling between the modules may be electrical, mechanical or other type of coupling.
In addition, functional modules in the embodiments of the present application may be integrated into one processing module, or each of the modules may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode.
Referring to fig. 10, a block diagram of an electronic device 100 according to an embodiment of the present disclosure is shown. The electronic device 100 may be a smart phone, a tablet computer, an electronic book, or other electronic devices capable of running an application. The electronic device 100 in the present application may include one or more of the following components: a processor 110, a memory 120, a touch screen 130, and one or more applications, wherein the one or more applications may be stored in the memory 120 and configured to be executed by the one or more processors 110, the one or more programs configured to perform the methods as described in the aforementioned method embodiments.
The processor 110 may include one or more processing cores. The processor 110 connects various parts within the overall electronic device 100 using various interfaces and lines, and performs various functions of the electronic device 100 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 120 and calling data stored in the memory 120. Optionally, the processor 110 may be implemented in hardware using at least one of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 110 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, the application programs, and the like; the GPU is used for rendering and drawing the content to be displayed; and the modem is used to handle wireless communication. It is understood that the modem may also not be integrated into the processor 110 and may instead be implemented by a separate communication chip.
The memory 120 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). The memory 120 may be used to store instructions, programs, code sets, or instruction sets. The memory 120 may include a program storage area and a data storage area, wherein the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the various method embodiments described above, and the like. The data storage area may also store data created by the electronic device 100 in use, such as a phone book, audio and video data, chat log data, and the like.
The touch screen 130 is used for displaying information input by a user, information provided to the user, and various graphical user interfaces of the electronic device 100, which may be composed of graphics, text, icons, numbers, video, and any combination thereof, and in one example, the touch screen 130 may be a Liquid Crystal Display (LCD) or an Organic Light-Emitting Diode (OLED), which is not limited herein.
Referring to fig. 11, a block diagram of a computer-readable storage medium according to an embodiment of the present application is shown. The computer-readable medium 300 has stored therein a program code that can be called by a processor to execute the method described in the above-described method embodiments.
The computer-readable storage medium 300 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable programmable read only memory), an EPROM, a hard disk, or a ROM. Alternatively, the computer-readable storage medium 300 includes a non-volatile computer-readable storage medium. The computer readable storage medium 300 has storage space for program code 310 for performing any of the method steps of the method described above. The program code can be read from or written to one or more computer program products. The program code 310 may be compressed, for example, in a suitable form.
To sum up, the three-dimensional face reconstruction method and device, the electronic device, and the storage medium provided by the embodiments of the present application obtain the shape parameters and texture parameters of a face to be reconstructed, input the shape parameters into a first model to obtain a three-dimensional face shape output by the first model, and input the texture parameters into a second model to obtain a face texture map output by the second model, wherein at least one of the first model and the second model is obtained based on generation countermeasure network training. A target three-dimensional face is then generated based on the three-dimensional face shape and the face texture map, wherein the target three-dimensional face comprises texture information generated based on the face texture map. In this way, the three-dimensional face shape and/or the face texture map is generated by a trained generation countermeasure network, so that the generated target three-dimensional face has rich detail features and the reconstruction effect of the three-dimensional face is improved.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not necessarily depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (15)

1. A method for reconstructing a three-dimensional face, the method comprising:
acquiring shape parameters and texture parameters of a face to be reconstructed;
inputting the shape parameters into a first model to obtain a three-dimensional face shape output by the first model;
inputting the texture parameters into a second model, and obtaining a face texture map output by the second model, wherein at least one of the first model and the second model is obtained based on generation of confrontation network training;
and generating a target three-dimensional face based on the three-dimensional face shape and the face texture map, wherein the target three-dimensional face comprises texture information generated based on the face texture map.
2. The method of claim 1, wherein the first model is a trained generative confrontation network, and wherein inputting the shape parameters into the first model to obtain the three-dimensional face shape output by the first model comprises:
inputting the shape parameters into the trained generation countermeasure network to obtain a face coordinate mapping output by the trained generation countermeasure network;
and obtaining the three-dimensional face shape based on the face coordinate mapping.
3. The method of claim 1, wherein after generating the target three-dimensional face based on the three-dimensional face shape and the face texture map, further comprising:
and rendering the target three-dimensional face based on the rendering function, the camera parameter and the illumination parameter to generate a two-dimensional rendered face.
4. The method of claim 3, wherein the rendering the target three-dimensional face based on the rendering function, the camera parameters, and the lighting parameters, and after generating a two-dimensional rendered face, further comprises:
acquiring the posture of the face to be reconstructed;
calculating the loss of key points of the face to be reconstructed and the target three-dimensional face based on the relation between the postures and the preset postures;
performing iterative optimization on the shape parameters based on the key point loss to obtain first optimized shape parameters;
inputting the first optimized shape parameter as a new shape parameter into the first model.
5. The method of claim 4, further comprising:
iteratively optimizing the camera parameters based on the key point loss to obtain optimized camera parameters;
and rendering the target three-dimensional face by taking the optimized camera parameters as new camera parameters.
6. The method according to claim 4, wherein the obtaining the pose of the face to be reconstructed comprises:
performing key point detection on the face to be reconstructed to obtain a plurality of three-dimensional face key points of the face to be reconstructed as first face key points;
and acquiring the posture of the face to be reconstructed based on the first face key point.
7. The method according to claim 6, wherein the performing key point detection on the face to be reconstructed to obtain a plurality of three-dimensional face key points of the face to be reconstructed as first face key points comprises:
performing key point detection on the face to be reconstructed to obtain a plurality of three-dimensional face key points of the face to be reconstructed as first face key points, and obtaining a plurality of two-dimensional face key points of the face to be reconstructed as second face key points;
calculating the loss of key points of the face to be reconstructed and the target three-dimensional face based on the relationship between the gesture and the preset gesture, wherein the calculation comprises the following steps:
when the gesture is larger than the preset gesture, acquiring a plurality of three-dimensional face key points of the target three-dimensional face as third face key points, and calculating key point losses of the first face key points and the third face key points;
and when the gesture is not larger than the preset gesture, acquiring a plurality of two-dimensional face key points corresponding to a plurality of three-dimensional face key points of the target three-dimensional face as fourth face key points, and calculating the key point loss of the second face key points and the fourth face key points.
8. The method of claim 3, wherein the rendering the target three-dimensional face based on the rendering function, the camera parameters, and the lighting parameters, and after generating a two-dimensional rendered face, further comprises:
acquiring biological characteristic information of the face to be reconstructed as first biological characteristic information, and acquiring biological characteristic information of the two-dimensional rendering face as second biological characteristic information;
calculating the biological feature loss of the face to be reconstructed and the two-dimensional rendering face based on the first biological feature information and the second biological feature information;
performing iterative optimization on the shape parameter and the texture parameter based on the biological feature loss to obtain a second optimized shape parameter and a first optimized texture parameter;
inputting the second optimized shape parameters as new shape parameters into the first model, and inputting the first optimized texture parameters as new texture parameters into the second model.
9. The method of claim 3, wherein the rendering the target three-dimensional face based on the rendering function, the camera parameters, and the lighting parameters, and after generating a two-dimensional rendered face, further comprises:
acquiring attribute content information of the face to be reconstructed as first attribute content information, and acquiring attribute content information of the two-dimensional rendering face as second attribute content information;
calculating attribute content loss of the face to be reconstructed and the two-dimensional rendering face based on the first attribute content information and the second attribute content information;
performing iterative optimization on the shape parameter and the texture parameter based on the attribute content loss to obtain a third optimized shape parameter and a second optimized texture parameter;
and inputting the third optimized shape parameter as a new shape parameter into the first model, and inputting the second optimized texture parameter as a new texture parameter into the second model.
10. The method of claim 3, wherein the rendering the target three-dimensional face based on the rendering function, the camera parameters, and the lighting parameters, and after generating a two-dimensional rendered face, further comprises:
acquiring pixel information of the face to be reconstructed as first pixel information, and acquiring pixel information based on the two-dimensional rendering face as second pixel information;
calculating pixel loss of the face to be reconstructed and the two-dimensional rendering face based on the first pixel information and the second pixel information;
performing iterative optimization on the illumination parameters based on the pixel loss to obtain optimized illumination parameters;
and rendering the target three-dimensional face by taking the optimized illumination parameters as new illumination parameters.
11. The method according to any one of claims 1-10, wherein the first model is a trained first generation countermeasure network, and before the inputting the shape parameters into the first model to obtain the three-dimensional face shape output by the first model, the method further comprises:
acquiring a first training data set, wherein the first training data set comprises a plurality of shape parameters and a plurality of three-dimensional face shapes, and the shape parameters correspond to the three-dimensional face shapes one by one;
and training a generation countermeasure network by taking the shape parameters as input parameters and the three-dimensional human face shapes as output parameters to obtain the trained first generation countermeasure network.
12. The method according to any one of claims 1-10, wherein the second model is a trained second generative confrontation network, and wherein before inputting the texture parameter into the second model and obtaining the face texture map output by the second model, the method further comprises:
acquiring a second training data set, wherein the second training data set comprises a plurality of texture parameters and a plurality of face texture maps, and the texture parameters correspond to the face texture maps one by one;
and training the generation countermeasure network by taking the texture parameters as input parameters and the face texture maps as output parameters to obtain the trained second generation countermeasure network.
13. An apparatus for reconstructing a three-dimensional face, the apparatus comprising:
the parameter acquisition module is used for acquiring the shape parameters and the texture parameters of the face to be reconstructed;
the human face shape obtaining module is used for inputting the shape parameters into a first model to obtain a three-dimensional human face shape output by the first model;
the texture mapping obtaining module is used for inputting the texture parameters into a second model and obtaining a face texture mapping output by the second model, wherein at least one of the first model and the second model is obtained based on generation of confrontation network training;
and the three-dimensional face generation module is used for generating a target three-dimensional face based on the three-dimensional face shape and the face texture mapping, wherein the target three-dimensional face comprises texture information generated based on the face texture mapping.
14. An electronic device comprising a memory and a processor, the memory coupled to the processor, the memory storing instructions that, when executed by the processor, the processor performs the method of any of claims 1-12.
15. A computer-readable storage medium, having stored thereon program code that can be invoked by a processor to perform the method according to any one of claims 1 to 12.
CN202110151906.3A 2021-02-03 2021-02-03 Three-dimensional face reconstruction method and device, electronic equipment and storage medium Pending CN112819947A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110151906.3A CN112819947A (en) 2021-02-03 2021-02-03 Three-dimensional face reconstruction method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110151906.3A CN112819947A (en) 2021-02-03 2021-02-03 Three-dimensional face reconstruction method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112819947A true CN112819947A (en) 2021-05-18

Family

ID=75861151

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110151906.3A Pending CN112819947A (en) 2021-02-03 2021-02-03 Three-dimensional face reconstruction method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112819947A (en)


Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113592988A (en) * 2021-08-05 2021-11-02 北京奇艺世纪科技有限公司 Three-dimensional virtual character image generation method and device
CN113592988B (en) * 2021-08-05 2023-06-30 北京奇艺世纪科技有限公司 Three-dimensional virtual character image generation method and device
CN113723317B (en) * 2021-09-01 2024-04-09 京东科技控股股份有限公司 Reconstruction method and device of 3D face, electronic equipment and storage medium
CN113723317A (en) * 2021-09-01 2021-11-30 京东科技控股股份有限公司 Reconstruction method and device of 3D face, electronic equipment and storage medium
CN113762147A (en) * 2021-09-06 2021-12-07 网易(杭州)网络有限公司 Facial expression migration method and device, electronic equipment and storage medium
CN113838176A (en) * 2021-09-16 2021-12-24 网易(杭州)网络有限公司 Model training method, three-dimensional face image generation method and equipment
CN113838176B (en) * 2021-09-16 2023-09-15 网易(杭州)网络有限公司 Model training method, three-dimensional face image generation method and three-dimensional face image generation equipment
CN116012513A (en) * 2021-10-20 2023-04-25 腾讯科技(深圳)有限公司 Face model generation method, device, equipment and storage medium
CN114125273A (en) * 2021-11-05 2022-03-01 维沃移动通信有限公司 Face focusing method and device and electronic equipment
CN114125273B (en) * 2021-11-05 2023-04-07 维沃移动通信有限公司 Face focusing method and device and electronic equipment
CN114241102A (en) * 2021-11-11 2022-03-25 清华大学 Method and device for reconstructing and editing human face details based on parameterized model
CN114241102B (en) * 2021-11-11 2024-04-19 清华大学 Face detail reconstruction and editing method based on parameterized model
CN114339190A (en) * 2021-12-29 2022-04-12 中国电信股份有限公司 Communication method, device, equipment and storage medium
CN114339190B (en) * 2021-12-29 2023-06-23 中国电信股份有限公司 Communication method, device, equipment and storage medium
CN114299206A (en) * 2021-12-31 2022-04-08 清华大学 Three-dimensional cartoon face generation method and device, electronic equipment and storage medium
CN114783022B (en) * 2022-04-08 2023-07-21 马上消费金融股份有限公司 Information processing method, device, computer equipment and storage medium
CN114821404B (en) * 2022-04-08 2023-07-25 马上消费金融股份有限公司 Information processing method, device, computer equipment and storage medium
CN114821404A (en) * 2022-04-08 2022-07-29 马上消费金融股份有限公司 Information processing method and device, computer equipment and storage medium
CN114783022A (en) * 2022-04-08 2022-07-22 马上消费金融股份有限公司 Information processing method and device, computer equipment and storage medium
CN115205707A (en) * 2022-09-13 2022-10-18 阿里巴巴(中国)有限公司 Sample image generation method, storage medium, and electronic device
CN115439610A (en) * 2022-09-14 2022-12-06 中国电信股份有限公司 Model training method, training device, electronic equipment and readable storage medium
CN115439610B (en) * 2022-09-14 2024-04-26 中国电信股份有限公司 Training method and training device for model, electronic equipment and readable storage medium
CN116714251A (en) * 2023-05-16 2023-09-08 北京盈锋科技有限公司 Character three-dimensional printing system, method, electronic equipment and storage medium
CN116714251B (en) * 2023-05-16 2024-05-31 北京盈锋科技有限公司 Character three-dimensional printing system, method, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN112819947A (en) Three-dimensional face reconstruction method and device, electronic equipment and storage medium
WO2021093453A1 (en) Method for generating 3d expression base, voice interactive method, apparatus and medium
CN110136243B (en) Three-dimensional face reconstruction method, system, device and storage medium thereof
CN109961507B (en) Face image generation method, device, equipment and storage medium
CN111598998B (en) Three-dimensional virtual model reconstruction method, three-dimensional virtual model reconstruction device, computer equipment and storage medium
EP3992919B1 (en) Three-dimensional facial model generation method and apparatus, device, and medium
CN111710036B (en) Method, device, equipment and storage medium for constructing three-dimensional face model
CN112288665B (en) Image fusion method and device, storage medium and electronic equipment
CN111739167B (en) 3D human head reconstruction method, device, equipment and medium
CN108805979A (en) A kind of dynamic model three-dimensional rebuilding method, device, equipment and storage medium
CN113628327A (en) Head three-dimensional reconstruction method and equipment
CN110458924B (en) Three-dimensional face model establishing method and device and electronic equipment
EP3855386B1 (en) Method, apparatus, device and storage medium for transforming hairstyle and computer program product
CN111754622B (en) Face three-dimensional image generation method and related equipment
US20240161355A1 (en) Generation of stylized drawing of three-dimensional shapes using neural networks
KR20230085931A (en) Method and system for extracting color from face images
CN114202615A (en) Facial expression reconstruction method, device, equipment and storage medium
CN116342782A (en) Method and apparatus for generating avatar rendering model
CN115984447A (en) Image rendering method, device, equipment and medium
CN107203961B (en) Expression migration method and electronic equipment
CN113822965A (en) Image rendering processing method, device and equipment and computer storage medium
CN116977539A (en) Image processing method, apparatus, computer device, storage medium, and program product
CN113313631B (en) Image rendering method and device
CN113610864B (en) Image processing method, device, electronic equipment and computer readable storage medium
CN107194980A (en) Faceform's construction method, device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination