CN113506367B - Three-dimensional face model training method, three-dimensional face reconstruction method and related devices

Info

Publication number: CN113506367B (application published as CN113506367A)
Application number: CN202110973590.6A
Authority: CN (China)
Inventors: 芦爱余, 李志文
Assignee (current and original): Guangzhou Huya Technology Co Ltd
Application filed by: Guangzhou Huya Technology Co Ltd
Legal status: Active (granted)
Other languages: Chinese (zh)

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4007 Scaling of whole images or parts thereof based on interpolation, e.g. bilinear interpolation


Abstract

Embodiments of the invention provide a three-dimensional face model training method, a three-dimensional face reconstruction method and related devices. Sample images are augmented to generate new sample images, increasing the number of samples without increasing cost and thereby improving the accuracy of the model during training. First loss penalty term information is computed between the upper and lower eyelids among the facial-feature points of the three-dimensional face model to be adjusted, and second loss penalty term information is computed between those facial-feature points and the facial-feature points of the standard three-dimensional face model; both terms are calculated iteratively, strengthening the three-dimensional face reconstruction network's learning of eye closure so that the eye feature points of the constructed three-dimensional face model fit the standard eye feature points as closely as possible and the generated three-dimensional face special effects, digital humans and three-dimensional makeup are driven more naturally in expression.

Description

Three-dimensional face model training method, three-dimensional face reconstruction method and related devices
Technical Field
The invention relates to the technical field of three-dimensional face reconstruction, in particular to a three-dimensional face model training method, a three-dimensional face reconstruction method and a related device.
Background
Three-dimensional technology is now mature and is applied to three-dimensional face special effects, digital humans and three-dimensional makeup. Three-dimensional makeup in particular demands a high degree of fit and accuracy from the three-dimensional face, especially for certain facial details.
In the prior art, the eye-fitting effect of the generated three-dimensional face special effects, digital humans and three-dimensional makeup is poor, which impairs the expression driving of these results.
Disclosure of Invention
The invention aims to provide a three-dimensional face model training method and device that solve the problem of poor eye fit when generating three-dimensional effects.
In order to achieve the above purpose, the technical solution adopted in the embodiment of the present application is as follows:
in a first aspect, an embodiment of the present application provides a three-dimensional face model training method, where the method includes:
performing enhancement processing on eye feature points of the sample image to generate a new sample image;
generating a standard three-dimensional face model through a three-dimensional face reconstruction network according to the new sample image, and generating a three-dimensional face model to be adjusted through the three-dimensional face reconstruction network according to the unlabeled image;
respectively determining the facial-feature points to be adjusted in the three-dimensional face model to be adjusted and the standard facial-feature points in the standard three-dimensional face model;
calculating first loss penalty term information of the upper and lower eyelids among the facial-feature points to be adjusted, and second loss penalty term information between the facial-feature points to be adjusted and the standard facial-feature points;
inputting the first loss penalty term information and the second loss penalty term information into the three-dimensional face reconstruction network to obtain an updated three-dimensional face model to be adjusted;
and when the difference between the three-dimensional face reconstruction parameters of the updated three-dimensional face model to be adjusted and those of the standard three-dimensional face model is greater than a threshold, returning to execute the steps from calculating the first loss penalty term information and the second loss penalty term information through inputting them into the three-dimensional face reconstruction network to obtain an updated three-dimensional face model to be adjusted, until the latest first loss penalty term information and the latest second loss penalty term information satisfy the eye-fitting condition.
In an alternative embodiment, the step of performing enhancement processing on the eye feature points of the sample image to generate a new sample image includes:
for each sample image, determining eye feature points of a face in the sample image;
adjusting the coordinate points of the upper eyelid among the eye feature points so that they move toward the coordinate points of the lower eyelid;
determining all target images in which the coordinate points of the upper eyelid are at different positions during the movement;
and taking the sample image and the target image as new sample images.
In an alternative embodiment, the step of performing enhancement processing on the eye feature points of the sample image to generate a new sample image includes:
determining feature points of the face in the sample image and edge points of the sample image;
constructing a Delaunay triangulation network based on the feature points and the edge points;
adjusting, within the Delaunay triangulation network, the coordinate points of the upper eyelid among the eye feature points of the face feature points so that they move toward the coordinate points of the lower eyelid;
interpolating the Delaunay triangulation network during the movement to obtain all images in which the coordinate points of the upper eyelid are at different positions;
and taking all images from the movement process together with the sample image as new sample images.
In an alternative embodiment, the step of calculating the first loss penalty term information of the upper and lower eyelids among the facial-feature points to be adjusted includes:
determining the eye-closing labels of the left eye and the right eye among the facial-feature points to be adjusted;
respectively determining the upper- and lower-eyelid coordinates of the left eye and the upper- and lower-eyelid coordinates of the right eye among the facial-feature points to be adjusted;
and calculating the first loss penalty term information of the upper and lower eyelids among the facial-feature points to be adjusted based on the eye-closing labels of the left and right eyes, the upper- and lower-eyelid coordinates of the left eye and the upper- and lower-eyelid coordinates of the right eye.
In an alternative embodiment, the step of determining the eye-closing labels of the left eye and the right eye among the facial-feature points to be adjusted includes:
determining a first distance between the upper and lower eyelids of the left eye among the facial-feature points to be adjusted, and the eye distance;
calculating the ratio of the first distance to the eye distance as the left-eye eye-closing label;
determining a second distance between the upper and lower eyelids of the right eye in the three-dimensional face model to be adjusted;
and calculating the ratio of the second distance to the eye distance as the right-eye eye-closing label.
In an alternative embodiment, the first loss penalty term information satisfies the following formula:

loss_1 = flag_lefteye * ||predict_lefteye_upper_eyelid - predict_lefteye_lower_eyelid|| + flag_righteye * ||predict_righteye_upper_eyelid - predict_righteye_lower_eyelid||

where flag_lefteye is the eye-closing label of the right eye among the facial-feature points to be adjusted, flag_righteye is the eye-closing label of the left eye among the facial-feature points to be adjusted, predict_lefteye_upper_eyelid and predict_lefteye_lower_eyelid are the upper- and lower-eyelid coordinates of the right eye among the facial-feature points to be adjusted, and predict_righteye_upper_eyelid and predict_righteye_lower_eyelid are the upper- and lower-eyelid coordinates of the left eye among the facial-feature points to be adjusted (the lefteye/righteye subscripts apparently name the image side, which mirrors the subject's anatomical left and right).
In an alternative embodiment, the second loss penalty term information satisfies the following formula:

loss_2 = ||predict_kpt - label_kpt||

where predict_kpt are the coordinates of the facial-feature points to be adjusted, and label_kpt are the coordinates of the standard facial-feature points.
In a second aspect, an embodiment of the present application provides a three-dimensional face reconstruction method, in which three-dimensional face reconstruction is performed using a model trained by the above three-dimensional face model training method.
In a third aspect, an embodiment of the present application provides a three-dimensional face model training device, where the device includes:
the image processing module is used for carrying out enhancement processing on the eye feature points of the sample image to generate a new sample image;
the three-dimensional model reconstruction module is used for generating a standard three-dimensional face model through a three-dimensional face reconstruction network according to the new sample image, and generating a three-dimensional face model to be adjusted through the three-dimensional face reconstruction network according to the unlabeled image;
the determining module is used for respectively determining the facial-feature points to be adjusted in the three-dimensional face model to be adjusted and the standard facial-feature points in the standard three-dimensional face model;
the calculation module is used for calculating first loss penalty term information of the upper and lower eyelids among the facial-feature points to be adjusted, and second loss penalty term information between the facial-feature points to be adjusted and the standard facial-feature points;
the input module is used for inputting the first loss penalty term information and the second loss penalty term information into the three-dimensional face reconstruction network to obtain an updated three-dimensional face model to be adjusted;
and the execution module is used, when the difference between the three-dimensional face reconstruction parameters of the updated three-dimensional face model to be adjusted and those of the standard three-dimensional face model is greater than a threshold, for returning to execute the steps from calculating the first loss penalty term information and the second loss penalty term information through inputting them into the three-dimensional face reconstruction network to obtain an updated three-dimensional face model to be adjusted, until the latest first loss penalty term information and the latest second loss penalty term information satisfy the eye-fitting condition.
In a fourth aspect, an embodiment of the present application provides an electronic device, including a memory and a processor, where the memory stores a computer program, and the processor implements the steps of the three-dimensional face model training method when executing the computer program.
In a fifth aspect, embodiments of the present application provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the three-dimensional face model training method.
The application has the following beneficial effects:
according to the method and the device, the sample images are subjected to enhancement processing, new sample images are generated, the number of the sample images is increased as much as possible under the condition that the cost is not increased, and the accuracy of the model is improved when the model is trained. And repeatedly and iteratively calculating the first loss penalty term information and the second loss penalty term information through the first loss penalty term information of the five-element feature points in the three-dimensional face model to be adjusted and the five-element feature points in the standard three-dimensional face model to be adjusted, so that the learning of the three-dimensional face reconstruction network on eye closure is enhanced, the feature points of the eyes of the three-dimensional face model are attached to the standard eye feature points as much as possible, and the generated three-dimensional face special effect, digital people and three-dimensional makeup expression are driven more naturally.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic block diagram of an electronic device according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a three-dimensional face model training method according to an embodiment of the present invention;
FIG. 3 is a second flowchart of a three-dimensional face model training method according to an embodiment of the present invention;
FIG. 4 is a third flowchart of a three-dimensional face model training method according to an embodiment of the present invention;
FIG. 5 is a fourth flowchart of a three-dimensional face model training method according to an embodiment of the present invention;
fig. 6 is a structural block diagram of a three-dimensional face model training device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
In the description of the present invention, it should be noted that if terms such as "upper", "lower", "inner" and "outer" indicate an orientation or positional relationship, this is based on the orientation or positional relationship shown in the drawings, or the orientation or positional relationship in which the inventive product is conventionally used; it is merely for convenience of describing the present invention and simplifying the description, and does not indicate or imply that the apparatus or element referred to must have a specific orientation or be configured and operated in a specific orientation, and thus it should not be construed as limiting the present invention.
Furthermore, the terms "first," "second," and the like, if any, are used merely for distinguishing between descriptions and not for indicating or implying a relative importance.
In the description of the present application, it should also be noted that, unless explicitly specified and limited otherwise, the terms "disposed", "mounted", "connected" and "coupled" are to be construed broadly: a connection may, for example, be fixed, detachable or integral; mechanical or electrical; direct, or indirect through an intermediate medium, or internal communication between two elements. The specific meanings of these terms in this application will be understood by those of ordinary skill in the art according to the specific context.
Through extensive research, the inventors found that in the prior art the eye-fitting effect of generated three-dimensional face special effects, digital humans and three-dimensional makeup is poor, which impairs the expression driving of these results.
In view of the above problems, the present embodiment provides a three-dimensional face model training method, a three-dimensional face reconstruction method and related apparatus. Sample images are augmented to generate new sample images, increasing the number of samples without increasing cost, so that model accuracy improves during training. The first loss penalty term information of the facial-feature points in the three-dimensional face model to be adjusted and the second loss penalty term information against the facial-feature points in the standard three-dimensional face model are calculated iteratively, strengthening the three-dimensional face reconstruction network's learning of eye closure so that the eye feature points of the constructed three-dimensional face model fit the standard eye feature points as closely as possible. The scheme provided by this embodiment is explained in detail below.
The embodiment provides an electronic device capable of training a three-dimensional face model. In one possible implementation, the electronic device may be a user terminal; for example, the electronic device may be, but is not limited to, a server, a smart phone, a personal computer (Personal Computer, PC), a tablet, a personal digital assistant (Personal Digital Assistant, PDA), a mobile internet device (Mobile Internet Device, MID), an image capture device, and the like.
Referring to fig. 1, fig. 1 is a schematic structural diagram of an electronic device 100 according to an embodiment of the disclosure. The electronic device 100 may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1. The components shown in fig. 1 may be implemented in hardware, software, or a combination thereof.
The electronic device 100 includes a three-dimensional face model training apparatus 110, a memory 120, and a processor 130.
The memory 120 and the processor 130 are electrically connected to each other, directly or indirectly, to enable data transmission or interaction. For example, these components may be electrically connected to each other via one or more communication buses or signal lines. The three-dimensional face model training apparatus 110 includes at least one software functional module that may be stored in the memory 120 in the form of software or firmware or cured into the operating system (OS) of the electronic device 100. The processor 130 is configured to execute the executable modules stored in the memory 120, such as the software functional modules and computer programs included in the three-dimensional face model training apparatus 110.
The memory 120 may be, but is not limited to, a random access memory (Random Access Memory, RAM), a read-only memory (Read-Only Memory, ROM), a programmable read-only memory (Programmable Read-Only Memory, PROM), an erasable programmable read-only memory (Erasable Programmable Read-Only Memory, EPROM), an electrically erasable programmable read-only memory (Electrically Erasable Programmable Read-Only Memory, EEPROM), etc. The memory 120 is configured to store a program, and the processor 130 executes the program after receiving an execution instruction.
Referring to fig. 2, fig. 2 is a flowchart of a three-dimensional face model training method applied to the electronic device 100 of fig. 1, and the method includes various steps described in detail below.
Step 201: and performing enhancement processing on the eye feature points of the sample image to generate a new sample image.
Step 202: generating a standard three-dimensional face model through a three-dimensional face reconstruction network according to the new sample image, and generating a three-dimensional face model to be adjusted through the three-dimensional face reconstruction network according to the unlabeled image.
Step 203: respectively determining the facial-feature points to be adjusted in the three-dimensional face model to be adjusted and the standard facial-feature points in the standard three-dimensional face model.
Step 204: calculating first loss penalty term information of the upper and lower eyelids among the facial-feature points to be adjusted, and second loss penalty term information between the facial-feature points to be adjusted and the standard facial-feature points.
Step 205: and inputting the first loss penalty term information and the second loss penalty term information into a three-dimensional face reconstruction network to obtain an updated three-dimensional face model to be adjusted.
Step 206: when the difference between the three-dimensional face reconstruction parameters of the updated three-dimensional face model to be adjusted and those of the standard three-dimensional face model is greater than a threshold, returning to execute steps 204 to 205 until the latest first loss penalty term information and the latest second loss penalty term information satisfy the eye-fitting condition.
The aim is for the trained three-dimensional face reconstruction network to achieve a better eye-fitting effect, so that the eyes of the reconstructed three-dimensional face model can close when the eyes of the real face are closed.
Three-dimensional face data are difficult to acquire, especially data with high eye accuracy. Some high-quality data are therefore selected from the existing training data as the sample images to be trained. To enlarge the set of training sample images and improve the eye-fitting effect, data augmentation is applied to the eye feature points of the sample images, and the augmented images together with the pre-augmentation images are used as the new sample images.
A standard three-dimensional face model is generated through the three-dimensional face reconstruction network from the new sample image.
The unlabeled image is a two-dimensional image of a human face. It is input into the three-dimensional face reconstruction network, and the three-dimensional face model to be adjusted is generated based on the prediction parameters the network produces for it. Because the initial values of the three-dimensional face reconstruction network are inaccurate, the generated three-dimensional face model to be adjusted differs substantially from the corresponding standard three-dimensional face model.
The three-dimensional face reconstruction network may be a MobileNetV3 network model.
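As an illustration only, a parameter-regression head on a MobileNetV3 backbone might look like the following sketch (PyTorch/torchvision); the parameter split of 199 shape, 29 expression, 3 rotation, 3 translation and 1 scale values is an assumption inferred from the reconstruction formula given later, not a detail confirmed by the patent:

```python
import torch.nn as nn
from torchvision.models import mobilenet_v3_small

def build_reconstruction_net(n_params=199 + 29 + 3 + 3 + 1):
    """Hypothetical MobileNetV3-based regressor for the reconstruction
    parameters (a_id, a_exp, rotation, translation, scale)."""
    net = mobilenet_v3_small(weights=None)
    # swap the 1000-class ImageNet head for a parameter-regression layer
    in_features = net.classifier[3].in_features
    net.classifier[3] = nn.Linear(in_features, n_params)
    return net
```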
The visual content of the new sample image and of the unlabeled training image is the same.
The facial-feature points to be adjusted in the three-dimensional face model to be adjusted and the standard facial-feature points in the standard three-dimensional face model are determined; the facial-feature points include eye feature points, eyebrow feature points, nose feature points, mouth feature points and ear feature points.
The first loss penalty term information is calculated from the upper-eyelid and lower-eyelid feature points among the facial-feature points to be adjusted, and the second loss penalty term information is calculated between the facial-feature points to be adjusted and the standard facial-feature points. The first loss penalty term information is the difference between the coordinates of the upper and lower eyelids among the facial-feature points to be adjusted; the second loss penalty term information is the difference between the facial-feature points of the three-dimensional face model to be adjusted and those of the standard three-dimensional face model.
And inputting the first loss penalty term information and the second loss penalty term information into a three-dimensional face reconstruction network, and adjusting parameters in the three-dimensional face reconstruction network based on the first loss penalty term information and the second loss penalty term information to obtain an updated three-dimensional face model to be adjusted.
The difference between the three-dimensional face reconstruction parameters of the updated three-dimensional face model to be adjusted and those of the standard three-dimensional face model is calculated and compared with a threshold. When the difference is greater than the threshold, the updated model still deviates too much from the standard model, so the three-dimensional face reconstruction network is trained again: execution returns to steps 204 to 205 and training continues until the updated first loss penalty term information and second loss penalty term information satisfy the eye-fitting condition, at which point training of the three-dimensional face reconstruction model is complete.
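For concreteness, one iteration of this training loop might be sketched as follows (PyTorch); every name here (the feature-point extractor, the eyelid index mapping, the loss weights) is an assumption for illustration, not the patent's implementation:

```python
import torch

def train_step(net, to_feature_points, unlabeled_img, std_kpt,
               eye_flags, eye_idx, optimizer, w_eye=1.0, w_kpt=1.0):
    """One hypothetical training iteration combining both penalty terms.

    net               -- three-dimensional face reconstruction network
    to_feature_points -- maps regressed parameters to facial-feature points
    std_kpt           -- standard facial-feature points from the sample image
    eye_flags         -- (left, right) eye-closing labels, each 0 or 1
    eye_idx           -- assumed mapping from eyelid names to landmark rows
    """
    params = net(unlabeled_img)
    kpt = to_feature_points(params)
    # first loss penalty term: eyelid gap of eyes labelled as closed
    l_gap = (kpt[eye_idx["l_up"]] - kpt[eye_idx["l_low"]]).norm(dim=-1).mean()
    r_gap = (kpt[eye_idx["r_up"]] - kpt[eye_idx["r_low"]]).norm(dim=-1).mean()
    loss_eye = eye_flags[0] * l_gap + eye_flags[1] * r_gap
    # second loss penalty term: distance to the standard feature points
    loss_kpt = (kpt - std_kpt).norm(dim=-1).mean()
    loss = w_eye * loss_eye + w_kpt * loss_kpt
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```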
According to the method and the device, sample images are augmented to generate new sample images, increasing the number of samples without increasing cost, so that model accuracy improves during training. The first loss penalty term information, computed between the upper and lower eyelids among the facial-feature points of the three-dimensional face model to be adjusted, and the second loss penalty term information, computed between those facial-feature points and the facial-feature points of the standard three-dimensional face model, are calculated iteratively; this strengthens the three-dimensional face reconstruction network's learning of eye closure, so that the eye feature points of the constructed three-dimensional face model fit the standard eye feature points as closely as possible and the generated three-dimensional face special effects, digital humans and three-dimensional makeup are driven more naturally in expression.
In order to increase the number of training samples, in another embodiment of the present application, as shown in fig. 3, a three-dimensional face model training method is provided, which specifically includes the following steps:
step 201-1: for each sample image, eye feature points of a face in the sample image are determined.
Step 201-2: and adjusting coordinate points of the upper eyelid in the eye characteristic points so that the coordinate points of the upper eyelid move downwards to the coordinate points of the lower eyelid.
Step 201-3: and determining all target images of which the coordinate points of the upper eyelid are at different positions in the moving process.
Step 201-4: the sample image and the target image are taken as new sample images.
Different eye states of different people cannot be captured in real time, for example eyes closed by 5, 10 or 15 degrees. Therefore, to increase the number of training samples and improve the accuracy of the trained model in reconstructing a three-dimensional face, the eye feature points of the face are determined in each sample image, the coordinate points of the upper eyelid among the eye feature points are moved toward the coordinate points of the lower eyelid in unit steps, and after each unit step the current image is captured, until the coordinate points of the upper eyelid coincide with those of the lower eyelid. All target images in which the coordinate points of the upper eyelid are at different positions during the movement are determined, and the pre-movement sample image together with the target images are taken as new sample images. A sketch of this augmentation is shown below.
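A minimal NumPy sketch of this step-wise eye-closing augmentation on the landmark coordinates; the upper/lower eyelid index arrays and the step count are illustrative assumptions, not values from the patent:

```python
import numpy as np

def eye_closing_landmark_sets(landmarks, upper_ids, lower_ids, steps=5):
    """Yield one landmark set per unit step, with the upper-eyelid points
    moved linearly toward the matching lower-eyelid points until they
    coincide. `landmarks` is an (N, 2) array of face feature points."""
    upper = landmarks[upper_ids].astype(np.float64)
    lower = landmarks[lower_ids].astype(np.float64)
    for k in range(1, steps + 1):
        t = k / steps                          # fraction of full closure
        moved = landmarks.astype(np.float64).copy()
        moved[upper_ids] = (1.0 - t) * upper + t * lower
        yield moved                            # drives one target image
```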
In another embodiment of the present application, as shown in fig. 4, a three-dimensional face model training method is provided for enhancing the eye feature points of the sample image, which specifically includes the following steps:
step 201-5: and determining characteristic points of the human face in the sample image and edge points of the sample image.
Step 201-6: constructing a Delaunay triangulation network based on the feature points and the edge points.
Step 201-7: adjusting, within the Delaunay triangulation network, the coordinate points of the upper eyelid among the eye feature points of the face feature points so that they move toward the coordinate points of the lower eyelid.
Step 201-8: interpolating the Delaunay triangulation network during the movement to obtain all images in which the coordinate points of the upper eyelid are at different positions.
Step 201-9: taking all images from the movement process together with the sample image as new sample images.
Determining edge points in the sample image, wherein the edge points may be: upper left edge point, upper right edge point, lower left edge point, lower right edge point, upper edge point, lower edge point, left edge point, and right edge point. Face feature points in the sample image are determined, wherein the face feature points can be eye feature points, eyebrow feature points, mouth feature points, nose feature points and the like.
A Delaunay triangulation network is constructed based on the feature points and the edge points.
A Delaunay triangulation is a collection of connected but non-overlapping triangles whose circumscribed circles contain no other points of the region.
Since the present application needs to move the feature points of the eye region, the upper-eyelid coordinates among the eye feature points are adjusted, moving toward the lower-eyelid coordinates in unit steps until the two coincide. After each unit step, the moved Delaunay triangulation network is interpolated to obtain an image; after the second unit step it is interpolated again to obtain another image, and so on, so that all images are obtained, each with the upper-eyelid coordinate points at a different position. All images after movement and the sample image before movement are taken as new sample images.
Moving the coordinate points of the upper eyelid toward those of the lower eyelid with a Delaunay triangulation network adjusts only the upper-eyelid coordinate points, so the remaining pixels of the sample image are kept as unchanged as possible; this prevents the moved sample image from deforming, and the quality of the resulting new sample images is good. A piecewise-affine warping sketch follows.
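As a rough illustration only, such a triangulation-based warp can be sketched with scikit-image's piecewise-affine transform, which triangulates the control points internally in Delaunay fashion; the function and variable names are ours, not the patent's:

```python
import numpy as np
from skimage.transform import PiecewiseAffineTransform, warp

def warp_eye_closed(image, points, moved_points):
    """Warp `image` so the control points at `points` (face feature points
    plus image-edge points) land at `moved_points` (upper eyelid shifted
    toward the lower eyelid); pixels elsewhere move as little as possible."""
    tform = PiecewiseAffineTransform()
    # warp() treats the transform as an inverse map (output -> input),
    # so estimate it from the moved layout back to the original layout.
    tform.estimate(np.asarray(moved_points), np.asarray(points))
    return warp(image, tform, preserve_range=True).astype(image.dtype)
```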
The adjusted image is passed through the three-dimensional face reconstruction formula to obtain the three-dimensional face reconstruction parameters, and the standard three-dimensional face model is obtained based on these parameters.
The three-dimensional face reconstruction formula is:

V = f * Pr * R * (S_mean + A_id * a_id + A_exp * a_exp) + t_2d

where f is a scale factor, for example 0.001; Pr is the orthographic projection matrix of the camera, a 3x3 identity matrix; R is a 3x3 unit orthogonal matrix whose semantics are a face rotation matrix, for example

R = [ r00 r01 r02 ; r10 r11 r12 ; r20 r21 r22 ]

in which the three column vectors {r00, r10, r20}, {r01, r11, r21} and {r02, r12, r22} are unit vectors and mutually orthogonal; S_mean is the average face, which is known and provided by third-party modeling; A_id is the PCA orthogonal matrix of face shape, a 53215 x 3 x 199 matrix in the BFM topology, known and provided by third-party modeling; a_id is the regressed face shape coefficient, a 199 x 1 matrix; A_exp is the PCA orthogonal matrix of facial expression, a 53215 x 3 x 29 matrix in the BFM topology, known and provided by third-party modeling; a_exp is the regressed facial expression coefficient, a 29 x 1 matrix; and t_2d is the regressed offset matrix of the three-dimensional face model.
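Purely as an illustration of the arithmetic, the formula can be sketched in NumPy with the basis matrices flattened to two dimensions; the identity Pr follows the description above, while the flattened layout and argument shapes are assumptions:

```python
import numpy as np

def reconstruct_vertices(f, R, a_id, a_exp, S_mean, A_id, A_exp, t_2d,
                         Pr=None):
    """V = f * Pr * R * (S_mean + A_id a_id + A_exp a_exp) + t_2d.

    Assumed shapes: S_mean (3N,) mean face; A_id (3N, 199); a_id (199,);
    A_exp (3N, 29); a_exp (29,); R (3, 3); t_2d (3,) offset.
    """
    if Pr is None:
        Pr = np.eye(3)                        # orthographic projection
    shape = S_mean + A_id @ a_id + A_exp @ a_exp
    verts = shape.reshape(-1, 3)              # (N, 3) x, y, z per vertex
    return f * verts @ (Pr @ R).T + t_2d      # scale, rotate, project, offset
```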
In order to calculate the first loss penalty term information, in another embodiment of the present application, as shown in fig. 5, a three-dimensional face model training method is provided, specifically including the following steps:
step 204-1: and determining the eye closing labels of the left eye and the right eye in the feature points of the five sense organs to be adjusted.
Step 204-2: and respectively determining the coordinates of the upper eyelid and the lower eyelid of the left eye and the coordinates of the upper eyelid and the lower eyelid of the right eye in the feature points of the five sense organs to be adjusted.
Step 204-3: and calculating first loss penalty term information of the upper eyelid and the lower eyelid in the feature points of the five sense organs to be adjusted based on the eye closing labels of the left eye and the right eye, the coordinates of the upper eyelid and the lower eyelid of the left eye and the coordinates of the upper eyelid and the lower eyelid of the right eye.
Determining the eye-closing labels of the left and right eyes among the facial-feature points to be adjusted specifically includes: determining a first distance between the upper and lower eyelids of the left eye among the facial-feature points to be adjusted, and the eye distance; calculating the ratio of the first distance to the eye distance as the left-eye eye-closing label; determining a second distance between the upper and lower eyelids of the right eye in the three-dimensional face model to be adjusted; and calculating the ratio of the second distance to the eye distance as the right-eye eye-closing label. A sketch of this computation follows.
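A sketch of the ratio computation, assuming the common 68-point landmark convention; the indices and the closed-eye threshold used at the end are our assumptions, not values given in the patent:

```python
import numpy as np

def eye_closing_labels(lm, thresh=0.15):
    """Return (left, right) eye-closing labels from (N, 2) landmarks `lm`.
    Each label is 1 when the eyelid-gap / eye-width ratio is small."""
    def ratio(upper, lower, inner, outer):
        gap = np.linalg.norm(lm[upper] - lm[lower])      # eyelid distance
        width = np.linalg.norm(lm[inner] - lm[outer])    # eye distance
        return gap / width
    left = ratio(upper=37, lower=41, inner=36, outer=39)
    right = ratio(upper=43, lower=47, inner=42, outer=45)
    return int(left < thresh), int(right < thresh)
```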
Specifically, the first loss penalty term information satisfies the following formula:

loss_1 = flag_lefteye * ||predict_lefteye_upper_eyelid - predict_lefteye_lower_eyelid|| + flag_righteye * ||predict_righteye_upper_eyelid - predict_righteye_lower_eyelid||

where flag_lefteye is the eye-closing label of the right eye among the facial-feature points to be adjusted, flag_righteye is the eye-closing label of the left eye, predict_lefteye_upper_eyelid and predict_lefteye_lower_eyelid are the upper- and lower-eyelid coordinates of the right eye, and predict_righteye_upper_eyelid and predict_righteye_lower_eyelid are the upper- and lower-eyelid coordinates of the left eye among the facial-feature points to be adjusted.
The eye-closing label is 1 when the eye is closed and 0 otherwise.
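In code, the term can be sketched as below (PyTorch); the index mapping is an assumption:

```python
import torch

def first_loss_penalty(pred, flags, idx):
    """flags = (flag_lefteye, flag_righteye); pred holds (N, 3) predicted
    facial-feature points; idx maps eyelid names to landmark rows."""
    left = (pred[idx["lefteye_upper"]] - pred[idx["lefteye_lower"]]).norm(dim=-1)
    right = (pred[idx["righteye_upper"]] - pred[idx["righteye_lower"]]).norm(dim=-1)
    # penalise an open eyelid gap only for eyes labelled as closed
    return flags[0] * left.mean() + flags[1] * right.mean()
```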
Specifically, the second loss penalty term information satisfies the following formula:

loss_2 = ||predict_kpt - label_kpt||

where predict_kpt are the coordinates of the facial-feature points to be adjusted, and label_kpt are the coordinates of the standard facial-feature points.
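Equivalently, a one-line sketch (PyTorch):

```python
import torch

def second_loss_penalty(predict_kpt, label_kpt):
    """Mean distance between predicted and standard facial-feature points."""
    return (predict_kpt - label_kpt).norm(dim=-1).mean()
```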
The application also provides a three-dimensional face reconstruction method, in which three-dimensional face reconstruction is performed using a model trained by the above three-dimensional face model training method.
Referring to fig. 6, an embodiment of the present application further provides a three-dimensional face model training device 110 applied to the electronic device 100 described in fig. 1, where the three-dimensional face model training device 110 includes:
the image processing module 111 is configured to perform enhancement processing on the eye feature points of the sample image, and generate a new sample image.
In this embodiment, the image processing module 111 may be configured to perform step 201 shown in fig. 2, and for a specific description of the image processing module 111, reference may be made to the description of step 201.
The three-dimensional model reconstruction module 112 is configured to generate a standard three-dimensional face model through a three-dimensional face reconstruction network according to the new sample image, and generate a three-dimensional face model to be adjusted through the three-dimensional face reconstruction network according to the unlabeled image.
In this embodiment, the three-dimensional model reconstruction module 112 may be used to perform the step 202 shown in fig. 2, and for a specific description of the three-dimensional model reconstruction module 112, reference may be made to the description of the step 202.
The determining module 113 is configured to respectively determine the facial-feature points to be adjusted in the three-dimensional face model to be adjusted and the standard facial-feature points in the standard three-dimensional face model.
In this embodiment, the determining module 113 may be configured to perform the step 203 shown in fig. 2, and for a specific description of the determining module 113, reference may be made to a description of the step 203.
The calculation module 114 is configured to calculate first loss penalty term information of the upper and lower eyelids among the facial-feature points to be adjusted, and second loss penalty term information between the facial-feature points to be adjusted and the standard facial-feature points.
In this embodiment, the calculation module 114 may be configured to perform step 204 shown in fig. 2, and for a specific description of the calculation module 114, reference may be made to the description of step 204.
And the input module 115 is configured to input the first loss penalty term information and the second loss penalty term information into the three-dimensional face reconstruction network, so as to obtain an updated three-dimensional face model to be adjusted.
In this embodiment, the input module 115 may be used to perform the step 205 shown in fig. 2, and for a specific description of the input module 115, reference may be made to the description of the step 205.
The execution module 116 is configured, when the difference between the three-dimensional face reconstruction parameters of the updated three-dimensional face model to be adjusted and those of the standard three-dimensional face model is greater than a threshold, to return to execute the steps from calculating the first loss penalty term information and the second loss penalty term information through inputting them into the three-dimensional face reconstruction network to obtain an updated three-dimensional face model to be adjusted, until the latest first loss penalty term information and the latest second loss penalty term information satisfy the eye-fitting condition.
In this embodiment, the execution module 116 may be configured to execute the step 206 shown in fig. 2, and for a specific description of the execution module 116, reference may be made to the description of the step 206.
Optionally, in some possible embodiments, the image processing module 111 is specifically configured to:
for each sample image, determining eye feature points of a face in the sample image;
adjusting the coordinate points of the upper eyelid among the eye feature points so that they move toward the coordinate points of the lower eyelid;
determining all target images in which the coordinate points of the upper eyelid are at different positions during the movement;
and taking the sample image and the target image as new sample images.
Optionally, in some possible embodiments, the image processing module 111 is specifically configured to:
determining feature points of the face in the sample image and edge points of the sample image;
constructing a Delaunay triangulation network based on the feature points and the edge points;
adjusting, within the Delaunay triangulation network, the coordinate points of the upper eyelid among the eye feature points of the face feature points so that they move toward the coordinate points of the lower eyelid;
interpolating the Delaunay triangulation network during the movement to obtain all images in which the coordinate points of the upper eyelid are at different positions;
and taking all images from the movement process together with the sample image as new sample images.
Optionally, in some possible embodiments, the computing module 114 is specifically configured to:
the step of calculating the first loss penalty term information of the upper and lower eyelids among the facial-feature points to be adjusted includes:
determining the eye-closing labels of the left eye and the right eye among the facial-feature points to be adjusted;
respectively determining the upper- and lower-eyelid coordinates of the left eye and the upper- and lower-eyelid coordinates of the right eye among the facial-feature points to be adjusted;
and calculating the first loss penalty term information of the upper and lower eyelids among the facial-feature points to be adjusted based on the eye-closing labels of the left and right eyes, the upper- and lower-eyelid coordinates of the left eye and the upper- and lower-eyelid coordinates of the right eye.
Optionally, in some possible embodiments, the computing module 114 is specifically configured to:
determining a first distance between the upper and lower eyelids of the left eye among the facial-feature points to be adjusted, and the eye distance;
calculating the ratio of the first distance to the eye distance as the left-eye eye-closing label;
determining a second distance between the upper and lower eyelids of the right eye in the three-dimensional face model to be adjusted;
and calculating the ratio of the second distance to the eye distance as the right-eye eye-closing label.
Optionally, in some possible implementations, the first loss penalty term information satisfies the following formula:

loss_1 = flag_lefteye * ||predict_lefteye_upper_eyelid - predict_lefteye_lower_eyelid|| + flag_righteye * ||predict_righteye_upper_eyelid - predict_righteye_lower_eyelid||

where flag_lefteye is the eye-closing label of the right eye among the facial-feature points to be adjusted, flag_righteye is the eye-closing label of the left eye, predict_lefteye_upper_eyelid and predict_lefteye_lower_eyelid are the upper- and lower-eyelid coordinates of the right eye, and predict_righteye_upper_eyelid and predict_righteye_lower_eyelid are the upper- and lower-eyelid coordinates of the left eye among the facial-feature points to be adjusted.
Optionally, in some possible implementations, the second loss penalty term information satisfies the following formula:

loss_2 = ||predict_kpt - label_kpt||

where predict_kpt are the coordinates of the facial-feature points to be adjusted, and label_kpt are the coordinates of the standard facial-feature points.
In summary, the present application augments sample images to generate new sample images, increasing the number of samples without increasing cost, so that model accuracy improves during training. The first loss penalty term information, computed between the upper and lower eyelids among the facial-feature points of the three-dimensional face model to be adjusted, and the second loss penalty term information, computed between those facial-feature points and the facial-feature points of the standard three-dimensional face model, are calculated iteratively; this strengthens the three-dimensional face reconstruction network's learning of eye closure, so that the eye feature points of the constructed three-dimensional face model fit the standard eye feature points as closely as possible and the generated three-dimensional face special effects, digital humans and three-dimensional makeup are driven more naturally in expression.
The present application also provides an electronic device 100, the electronic device 100 comprising a processor 130 and a memory 120. The memory 120 stores computer executable instructions that, when executed by the processor 130, implement the three-dimensional face model training method.
The embodiment of the application further provides a computer readable storage medium, and the storage medium stores a computer program, and when the computer program is executed by the processor 130, the three-dimensional face model training method is implemented.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners as well. The apparatus embodiments described above are merely illustrative, for example, flow diagrams and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, the functional modules in the embodiments of the present application may be integrated together to form a single part, or each module may exist alone, or two or more modules may be integrated to form a single part. The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The foregoing is merely various embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the present application, and the changes and substitutions are intended to be covered in the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (11)

1. A method for training a three-dimensional face model, the method comprising:
performing enhancement processing on eye feature points of the sample image to generate a new sample image;
generating a standard three-dimensional face model through a three-dimensional face reconstruction network according to the new sample image, and generating a three-dimensional face model to be adjusted through the three-dimensional face reconstruction network according to the unlabeled image;
respectively determining the facial-feature points to be adjusted in the three-dimensional face model to be adjusted and the standard facial-feature points in the standard three-dimensional face model;
calculating first loss penalty term information of the upper and lower eyelids among the facial-feature points to be adjusted, and second loss penalty term information between the facial-feature points to be adjusted and the standard facial-feature points;
inputting the first loss penalty term information and the second loss penalty term information into the three-dimensional face reconstruction network to obtain an updated three-dimensional face model to be adjusted;
and when the difference between the three-dimensional face reconstruction parameters of the updated three-dimensional face model to be adjusted and those of the standard three-dimensional face model is greater than a threshold, returning to execute the steps from calculating the first loss penalty term information and the second loss penalty term information through inputting them into the three-dimensional face reconstruction network to obtain an updated three-dimensional face model to be adjusted, until the latest first loss penalty term information and the latest second loss penalty term information satisfy the eye-fitting condition.
2. The method of claim 1, wherein the step of performing enhancement processing on the eye feature points of the sample image to generate a new sample image comprises:
for each sample image, determining eye feature points of a face in the sample image;
adjusting the coordinate points of the upper eyelid in the eye feature points so that they move toward the coordinate points of the lower eyelid;
determining, as target images, all images in which the coordinate points of the upper eyelid occupy different positions during the movement;
and taking the sample image and the target images as new sample images.
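A minimal sketch of the augmentation in claim 2, assuming 2D eyelid landmarks and linear interpolation between the upper- and lower-eyelid coordinates:

```python
# Slide the upper-eyelid points toward the lower-eyelid points and keep
# each intermediate landmark set as the basis for a new target image.
import numpy as np

def eyelid_closing_steps(upper: np.ndarray, lower: np.ndarray, n_steps: int = 5):
    """upper/lower: (k, 2) arrays of upper/lower eyelid coordinates."""
    return [upper + t * (lower - upper) for t in np.linspace(0.0, 1.0, n_steps)]

upper = np.array([[10.0, 20.0], [12.0, 19.0], [14.0, 20.0]])
lower = np.array([[10.0, 26.0], [12.0, 27.0], [14.0, 26.0]])
for pts in eyelid_closing_steps(upper, lower):
    print(pts)  # upper-eyelid positions for one intermediate target image
```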
3. The method of claim 1, wherein the step of performing enhancement processing on the eye feature points of the sample image to generate a new sample image comprises:
determining feature points of a face in the sample image and edge points of the sample image;
constructing a Delaunay triangulation network based on the feature points and the edge points;
adjusting the coordinate points of the upper eyelid among the eye feature points of the face in the Delaunay triangulation network, so that the coordinate points of the upper eyelid move toward the coordinate points of the lower eyelid;
performing interpolation on the Delaunay triangulation network during the movement, so as to obtain all images in which the coordinate points of the upper eyelid occupy different positions during the movement;
and taking all the images obtained during the movement, together with the sample image, as new sample images.
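The triangulation step of claim 3 can be sketched with SciPy's Delaunay implementation; the landmark values, image size, and choice of edge points below are placeholders:

```python
# Triangulate face feature points plus image edge points, so that moving the
# upper-eyelid vertices deforms only the surrounding triangles.
import numpy as np
from scipy.spatial import Delaunay

h, w = 256, 256
landmarks = np.random.rand(68, 2) * [w, h]  # stand-in face feature points
edge_points = np.array([[0, 0], [w - 1, 0], [0, h - 1], [w - 1, h - 1],
                        [w // 2, 0], [w // 2, h - 1], [0, h // 2], [w - 1, h // 2]],
                       dtype=float)
points = np.vstack([landmarks, edge_points])

tri = Delaunay(points)          # Delaunay triangulation network
print(tri.simplices.shape)      # (n_triangles, 3): vertex indices per triangle

# Moving the upper-eyelid vertices and re-rendering each affected triangle
# (e.g. one affine warp per triangle) yields the intermediate images.
```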
4. The method according to claim 1, wherein the step of calculating first loss penalty term information of the upper eyelid and the lower eyelid in the feature points of the five sense organs to be adjusted comprises:
determining eye-closure labels of the left eye and the right eye in the feature points of the five sense organs to be adjusted;
respectively determining the coordinates of the upper eyelid and the lower eyelid of the left eye, and the coordinates of the upper eyelid and the lower eyelid of the right eye, in the feature points of the five sense organs to be adjusted;
and calculating the first loss penalty term information of the upper eyelid and the lower eyelid in the feature points of the five sense organs to be adjusted based on the eye-closure labels of the left eye and the right eye, the coordinates of the upper eyelid and the lower eyelid of the left eye, and the coordinates of the upper eyelid and the lower eyelid of the right eye.
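A sketch of the combination in claim 4, assuming the eye-closure labels are binarized (1 = closed) and that the penalty is the label-weighted gap between each eye's predicted upper and lower eyelids:

```python
import numpy as np

# Hypothetical first-loss combination: label-weighted eyelid gaps.
def first_loss(flag_left, flag_right,
               left_upper, left_lower, right_upper, right_lower):
    left_gap = np.linalg.norm(left_upper - left_lower)
    right_gap = np.linalg.norm(right_upper - right_lower)
    return flag_left * left_gap + flag_right * right_gap

loss = first_loss(1.0, 0.0,
                  np.array([30.0, 40.0]), np.array([30.0, 44.0]),
                  np.array([70.0, 40.0]), np.array([70.0, 47.0]))
print(loss)  # only the closed left eye contributes to the penalty
```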
5. The method according to claim 4, wherein the step of determining eye-closure labels of the left eye and the right eye in the feature points of the five sense organs to be adjusted comprises:
determining a first distance between the upper eyelid and the lower eyelid of the left eye in the feature points of the five sense organs to be adjusted, and an eye distance;
calculating the ratio of the first distance to the eye distance as the eye-closure label of the left eye;
determining a second distance between the upper eyelid and the lower eyelid of the right eye in the three-dimensional face model to be adjusted;
and calculating the ratio of the second distance to the eye distance as the eye-closure label of the right eye.
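A sketch of the label computation in claim 5; since the claim does not pin down whether the eye distance is the inter-eye distance or the eye's own width, the inter-pupil distance used below is an assumption:

```python
import numpy as np

# Eye-closure label as the ratio of the eyelid gap to an eye distance.
def eye_closure_label(upper_lid, lower_lid, left_eye_center, right_eye_center):
    gap = np.linalg.norm(upper_lid - lower_lid)
    eye_dist = np.linalg.norm(left_eye_center - right_eye_center)
    return gap / eye_dist  # small ratio -> eye nearly closed

label = eye_closure_label(np.array([30.0, 40.0]), np.array([30.0, 41.0]),
                          np.array([30.0, 40.0]), np.array([70.0, 40.0]))
print(f"left-eye closure label: {label:.3f}")
```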
6. The method of claim 4, wherein the first loss penalty term information satisfies the following formula:

loss_1 = flag_lefteye × ‖prediction_lefteye_upper_eyelid − prediction_lefteye_lower_eyelid‖ + flag_righteye × ‖prediction_righteye_upper_eyelid − prediction_righteye_lower_eyelid‖

wherein flag_lefteye is the eye-closure label of the left eye in the feature points of the five sense organs to be adjusted, flag_righteye is the eye-closure label of the right eye in the feature points of the five sense organs to be adjusted, prediction_lefteye_upper_eyelid and prediction_lefteye_lower_eyelid are the upper and lower eyelid coordinates of the left eye in the feature points of the five sense organs to be adjusted, and prediction_righteye_upper_eyelid and prediction_righteye_lower_eyelid are the upper and lower eyelid coordinates of the right eye in the feature points of the five sense organs to be adjusted.
7. The method of claim 6, wherein the second loss penalty term information satisfies the following formula:

loss_2 = ‖prediction_kpt − label_kpt‖

wherein prediction_kpt is the coordinates of the feature points of the five sense organs to be adjusted, and label_kpt is the coordinates of the feature points of the standard five sense organs.
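A sketch of the second loss penalty term in claim 7, assuming a mean L2 distance between the predicted and standard feature-point coordinates:

```python
import numpy as np

# Hypothetical second-loss form: mean per-point L2 distance.
def second_loss(prediction_kpt: np.ndarray, label_kpt: np.ndarray) -> float:
    return float(np.mean(np.linalg.norm(prediction_kpt - label_kpt, axis=-1)))

pred = np.random.rand(68, 3)    # feature points of the model to be adjusted
label = np.random.rand(68, 3)   # feature points of the standard model
print(second_loss(pred, label))
```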
8. A three-dimensional face reconstruction method, characterized in that three-dimensional face reconstruction is performed by means of the three-dimensional face reconstruction network trained by the three-dimensional face model training method according to any one of claims 1 to 7.
9. A three-dimensional face model training apparatus, the apparatus comprising:
the image processing module is used for carrying out enhancement processing on the eye feature points of the sample image to generate a new sample image;
the three-dimensional model reconstruction module is used for generating a standard three-dimensional face model through a three-dimensional face reconstruction network according to the new sample image, and generating a three-dimensional face model to be adjusted through the three-dimensional face reconstruction network according to the unlabeled image;
the determining module is used for respectively determining feature points of the five sense organs to be adjusted in the three-dimensional face model to be adjusted and feature points of the standard five sense organs in the standard three-dimensional face model;
the calculation module is used for calculating first loss penalty term information of the upper eyelid and the lower eyelid in the feature points of the five sense organs to be adjusted, and second loss penalty term information between the feature points of the five sense organs to be adjusted and the feature points of the standard five sense organs;
the input module is used for inputting the first loss penalty term information and the second loss penalty term information into the three-dimensional face reconstruction network so as to obtain an updated three-dimensional face model to be adjusted;
and the execution module is used for, when the difference between the three-dimensional face reconstruction parameters of the updated three-dimensional face model to be adjusted and the three-dimensional face reconstruction parameters of the standard three-dimensional face model is greater than a threshold, returning to the steps from calculating the first loss penalty term information of the upper eyelid and the lower eyelid in the feature points of the five sense organs to be adjusted and the second loss penalty term information between the feature points of the five sense organs to be adjusted and the feature points of the standard five sense organs, through inputting the first loss penalty term information and the second loss penalty term information into the three-dimensional face reconstruction network to obtain an updated three-dimensional face model to be adjusted, until the latest first loss penalty term information and second loss penalty term information satisfy the eye-fitting condition.
10. An electronic device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any of claims 1-8 when executing the computer program.
11. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, carries out the steps of the method according to any one of claims 1-8.
CN202110973590.6A 2021-08-24 2021-08-24 Three-dimensional face model training method, three-dimensional face reconstruction method and related devices Active CN113506367B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110973590.6A CN113506367B (en) 2021-08-24 2021-08-24 Three-dimensional face model training method, three-dimensional face reconstruction method and related devices

Publications (2)

Publication Number Publication Date
CN113506367A CN113506367A (en) 2021-10-15
CN113506367B true CN113506367B (en) 2024-02-27

Family

ID=78016090

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110973590.6A Active CN113506367B (en) 2021-08-24 2021-08-24 Three-dimensional face model training method, three-dimensional face reconstruction method and related devices

Country Status (1)

Country Link
CN (1) CN113506367B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114842123B (en) * 2022-06-28 2022-09-09 北京百度网讯科技有限公司 Three-dimensional face reconstruction model training and three-dimensional face image generation method and device
CN116109743B (en) * 2023-04-11 2023-06-20 广州智算信息技术有限公司 Digital person generation method and system based on AI and image synthesis technology
CN116993929B (en) * 2023-09-27 2024-01-16 北京大学深圳研究生院 Three-dimensional face reconstruction method and device based on human eye dynamic change and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107833197A (en) * 2017-10-31 2018-03-23 广东欧珀移动通信有限公司 Method, apparatus, computer-readable recording medium and the electronic equipment of image procossing
CN109118579A (en) * 2018-08-03 2019-01-01 北京微播视界科技有限公司 The method, apparatus of dynamic generation human face three-dimensional model, electronic equipment
CN110909680A (en) * 2019-11-22 2020-03-24 咪咕动漫有限公司 Facial expression recognition method and device, electronic equipment and storage medium
CN112529999A (en) * 2020-11-03 2021-03-19 百果园技术(新加坡)有限公司 Parameter estimation model training method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant