CN110807451A - Face key point detection method, apparatus, device, and storage medium - Google Patents

Face key point detection method, apparatus, device, and storage medium

Info

Publication number
CN110807451A
CN110807451A (application number CN202010018423.1A)
Authority
CN
China
Prior art keywords: dimensional face, point, dimensional, points, ith
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010018423.1A
Other languages
Chinese (zh)
Other versions
CN110807451B (en)
Inventor
林祥凯 (Lin Xiangkai)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202010018423.1A
Publication of CN110807451A
Application granted
Publication of CN110807451B
Current legal status: Active

Classifications

    • G06V 20/64 — Physics; Computing; Image or video recognition or understanding; Scenes, scene-specific elements; Type of objects; Three-dimensional objects
    • G06V 40/168 — Physics; Computing; Image or video recognition or understanding; Recognition of biometric, human-related or animal-related patterns; Human faces; Feature extraction; Face representation
    • G06V 40/172 — Physics; Computing; Image or video recognition or understanding; Recognition of biometric, human-related or animal-related patterns; Human faces; Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

Embodiments of the present application provide a face key point detection method, apparatus, device, and storage medium, relating to the field of artificial intelligence computer vision. The method comprises the following steps: acquiring a three-dimensional face model; projecting the three-dimensional face model from n different viewing angles to obtain n two-dimensional face images; detecting the corresponding two-dimensional face key points in each of the n two-dimensional face images; obtaining the key point mapping result corresponding to each of the n two-dimensional face images; and selecting, from the key point mapping results respectively corresponding to the n two-dimensional face images, the three-dimensional face key points corresponding to each projection viewing angle, and integrating them to obtain the positioning result of the three-dimensional face key points on the three-dimensional face model. The technical solution provided by the embodiments of the present application improves the accuracy of face key point detection on a three-dimensional face model.

Description

Face key point detection method, apparatus, device, and storage medium
Technical Field
Embodiments of the present application relate to the field of artificial intelligence computer vision, and in particular to a face key point detection method, apparatus, device, and storage medium.
Background
Face key point detection refers to detecting the key points of facial parts, such as the eyeball centers, eye corners, nose tip, mouth corners, and face contour, in a given face image.
In the related art, mature technologies already exist for detecting face key points in a two-dimensional face image, such as the ASM (Active Shape Model) algorithm, the AAM (Active Appearance Model) algorithm, the CPR (Cascaded Pose Regression) algorithm, and machine learning algorithms. These face key point detection algorithms process and analyze a two-dimensional face image to obtain the positions of the two-dimensional face key points in that image.
However, for detecting face key points on a three-dimensional face model, directly processing and analyzing the three-dimensional face model with such an algorithm yields a positioning result for the three-dimensional face key points that is not accurate enough.
Disclosure of Invention
Embodiments of the present application provide a face key point detection method, apparatus, device, and storage medium, which can improve the accuracy of face key point detection on a three-dimensional face model. The technical solution is as follows:
in one aspect, an embodiment of the present application provides a method for detecting a face key point, where the method includes:
acquiring a three-dimensional face model;
projecting the three-dimensional face model from n different viewing angles to obtain n two-dimensional face images, where n is an integer greater than 1;
detecting corresponding two-dimensional face key points from the n two-dimensional face images respectively;
obtaining key point mapping results corresponding to the n two-dimensional face images respectively, where the key point mapping result corresponding to the ith of the n two-dimensional face images comprises the three-dimensional face key points on the three-dimensional face model that correspond to the two-dimensional face key points in the ith two-dimensional face image, and i is a positive integer less than or equal to n;
and selecting, from the key point mapping results respectively corresponding to the n two-dimensional face images, the three-dimensional face key points corresponding to each projection viewing angle, and integrating them to obtain the positioning result of the three-dimensional face key points on the three-dimensional face model.
In another aspect, an embodiment of the present application provides a face key point detection apparatus, the apparatus comprising:
the model acquisition module is used for acquiring a three-dimensional face model;
the model projection module is used for projecting the three-dimensional face model from n different viewing angles to obtain n two-dimensional face images, where n is an integer greater than 1;
the key point detection module is used for respectively detecting corresponding two-dimensional face key points from the n two-dimensional face images;
the result acquisition module is used for obtaining key point mapping results corresponding to the n two-dimensional face images respectively, where the key point mapping result corresponding to the ith of the n two-dimensional face images comprises the three-dimensional face key points on the three-dimensional face model that correspond to the two-dimensional face key points in the ith two-dimensional face image, and i is a positive integer less than or equal to n;
and the result determining module is used for selecting, from the key point mapping results respectively corresponding to the n two-dimensional face images, the three-dimensional face key points corresponding to each projection viewing angle, and integrating them to obtain the positioning result of the three-dimensional face key points on the three-dimensional face model.
In another aspect, an embodiment of the present application provides a computer device, where the computer device includes a processor and a memory, where the memory stores at least one instruction, at least one program, a code set, or a set of instructions, and the at least one instruction, the at least one program, the code set, or the set of instructions is loaded and executed by the processor to implement the above-mentioned face keypoint detection method.
Optionally, the computer device is a terminal or a server.
In another aspect, an embodiment of the present application provides a computer-readable storage medium, where at least one instruction, at least one program, a code set, or a set of instructions is stored in the computer-readable storage medium, and the at least one instruction, the at least one program, the code set, or the set of instructions is loaded and executed by a processor to implement the above-mentioned face keypoint detection method.
In a further aspect, an embodiment of the present application provides a computer program product which, when executed by a processor, implements the above face key point detection method.
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
the method comprises the steps of projecting a three-dimensional face model from n different visual angles to obtain n two-dimensional face images, detecting corresponding two-dimensional face key points from the n two-dimensional face images respectively, obtaining key point mapping results corresponding to the n two-dimensional face images respectively, selecting the three-dimensional face key points corresponding to the projection visual angles from the key point mapping results corresponding to the n two-dimensional face images respectively, and integrating to obtain a positioning result of the three-dimensional face key points on the three-dimensional face model; on the other hand, the two-dimensional detection result is mapped to obtain a three-dimensional detection result, then three-dimensional face key points corresponding to the projection view angle are selected to be integrated, and finally the positioning result of the three-dimensional face key points on the whole three-dimensional face model is obtained, so that each area on the three-dimensional face model can be ensured to select an accurate three-dimensional detection result, and the accuracy of the finally obtained positioning result of the three-dimensional face key points is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
To more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present application; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a block diagram of a face key point detection process provided in the present application;
fig. 2 is a flowchart of a face key point detection method according to an embodiment of the present application;
FIG. 3 is a schematic projection diagram of a three-dimensional face model according to an embodiment of the present application;
FIG. 4 is a schematic diagram of two-dimensional face key points according to an embodiment of the present application;
fig. 5 is a schematic diagram of two-dimensional face key points in two-dimensional face images from 3 different viewing angles according to an embodiment of the present application;
fig. 6 is a flowchart of a face keypoint detection method according to another embodiment of the present application;
FIG. 7 is a schematic diagram of candidate three-dimensional contour points provided by one embodiment of the present application;
FIG. 8 is a schematic diagram of the contour point mapping result of the two-dimensional face image 52 in FIG. 5;
FIG. 9 is a schematic diagram of the contour point mapping result of the two-dimensional face image 53 in FIG. 5;
FIG. 10 is a schematic diagram of three-dimensional face key points provided in another embodiment of the present application;
fig. 11 is a block diagram of a face keypoint detection apparatus according to an embodiment of the present application;
fig. 12 is a block diagram of a face keypoint detection apparatus according to another embodiment of the present application;
fig. 13 is a block diagram of a computer device according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of methods consistent with aspects of the present application, as detailed in the appended claims.
AI (Artificial Intelligence) is a theory, method, technique, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use that knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive branch of computer science that attempts to understand the essence of intelligence and to produce new intelligent machines that can react in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines can perceive, reason, and make decisions.
Artificial intelligence is a comprehensive discipline that spans a broad range of fields, covering both hardware-level and software-level technologies. Basic artificial intelligence technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems, and mechatronics. Artificial intelligence software technologies mainly include computer vision, speech processing, natural language processing, and machine learning/deep learning.
Computer Vision technology (Computer Vision, CV): computer vision is the science of how to make machines "see"; it uses cameras and computers, instead of human eyes, to identify, track, and measure targets, and performs further image processing so that the result becomes an image more suitable for human observation or for transmission to an instrument for inspection. As a scientific discipline, computer vision studies theories and techniques for building artificial intelligence systems that can capture information from images or multidimensional data. Computer vision technologies generally include image processing, image recognition, image semantic understanding, image retrieval, OCR (Optical Character Recognition), video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D (three-dimensional) technology, virtual reality, augmented reality, and simultaneous localization and mapping, as well as common biometric technologies such as face recognition and fingerprint recognition.
With the research and progress of artificial intelligence technology, it has been developed and applied in many fields, such as smart homes, smart wearable devices, virtual assistants, smart speakers, smart marketing, autonomous driving, unmanned aerial vehicles, robots, smart medical care, and smart customer service.
The solution provided by the embodiments of the present application relates to the technical field of 3D face reconstruction: computer vision technology is used to detect the face key points of a three-dimensional face model, yielding the positioning results of all three-dimensional face key points on the model.
Referring to fig. 1, a block diagram of the face key point detection process provided in the present application is shown. The input data include a three-dimensional face model 11, and the output data include the positions of the three-dimensional face key points on the three-dimensional face model 11, as in the three-dimensional face model 11 labeled with the three-dimensional face key points 12 shown in fig. 1. After the three-dimensional face model 11 is obtained, it is projected to obtain a two-dimensional face image 13; two-dimensional face key points 14 are then detected in the two-dimensional face image 13; finally, according to the projection relationship between the three-dimensional face model 11 and the two-dimensional face image 13, the two-dimensional face key points 14 are mapped onto the three-dimensional face model 11, giving the positions of the three-dimensional face key points 12 on the three-dimensional face model 11.
In the method flow provided by the embodiments of the present application, each step may be executed by a terminal, such as a mobile phone, a tablet computer, a multimedia playback device, or a wearable device, or by a server. For convenience of description, in the following method embodiments each step is described as being executed by a computer device, which may be any electronic device with computing and storage capabilities, such as the terminal or server described above.
The technical solution of the present application will be described in detail with reference to several embodiments.
Please refer to fig. 2, which shows a flowchart of a face key point detection method according to an embodiment of the present application. The method comprises the following steps (201-205):
step 201, obtaining a three-dimensional face model.
The three-dimensional face model is a three-dimensional model of a face; it may be generated by face reconstruction from a face image of a target object captured by a camera.
In some possible embodiments, the three-dimensional face model may include a point cloud and triangular meshes. For example, point cloud data points on the surface of the three-dimensional face model are determined, and the collection of these point cloud data points is referred to as a point cloud; adjacent point cloud data points in the point cloud are connected to obtain a number of triangular meshes (also called a triangular topology); each triangular mesh is filled with color or texture to obtain triangular patches, and the surface formed by these triangular patches is the surface of the three-dimensional face model.
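As a rough illustration of this representation (a minimal sketch with illustrative names, not code from the patent), the point cloud, triangular topology, and per-vertex colors can be held in three arrays:

```python
import numpy as np

# N point cloud data points on the model surface, one (x, y, z) row each.
points = np.array([
    [0.0, 0.0, 1.0],
    [1.0, 0.0, 0.8],
    [0.0, 1.0, 0.9],
    [1.0, 1.0, 0.7],
], dtype=np.float64)

# Triangular topology: each row indexes three adjacent point cloud data
# points that are connected into one triangular mesh.
triangles = np.array([
    [0, 1, 2],
    [1, 3, 2],
], dtype=np.int64)

# Per-vertex RGB colors used when filling the triangular patches.
colors = np.full((points.shape[0], 3), 0.8)
```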
Step 202, projecting the three-dimensional face model from n different viewing angles to obtain n two-dimensional face images, where n is an integer greater than 1.
The three-dimensional face model can be projected onto a plane for each of the n different viewing angles, one plane per viewing angle, which yields n two-dimensional face images.
Please refer to fig. 3, which shows a projection diagram of a three-dimensional face model and the two-dimensional face images obtained by projecting it, according to an embodiment of the present application. As shown in fig. 3, in an exemplary embodiment, the n viewing angles may include a frontal face viewing angle, a left side face viewing angle, and a right side face viewing angle. The three-dimensional face model 31 at the frontal face viewing angle in fig. 3 may be projected to obtain a two-dimensional face image 34 at the frontal face viewing angle; the three-dimensional face model 32 at the left side face viewing angle (e.g., the left side face at 45 degrees) may be projected to obtain a two-dimensional face image 35 at the left side face viewing angle; and the three-dimensional face model 33 at the right side face viewing angle (e.g., the right side face at 45 degrees) may be projected to obtain a two-dimensional face image 36 at the right side face viewing angle.
In some possible embodiments, taking projection from the ith viewing angle as an example, the vertices of the triangular meshes of the three-dimensional face model are projected from the ith viewing angle to obtain projection points; adjacent projection points are connected to obtain triangular meshes in the plane; and the triangular meshes in the plane are filled with colors and textures to generate the two-dimensional face image corresponding to the ith viewing angle, where i is a positive integer less than or equal to n.
In some other possible embodiments, again taking projection from the ith viewing angle as an example, each edge of the triangular meshes of the three-dimensional face model is projected from the ith viewing angle to obtain triangular meshes in the plane, which are then filled with colors and textures to generate the two-dimensional face image corresponding to the ith viewing angle.
In some possible embodiments, the projection may be an orthographic projection or a perspective projection. In an orthographic projection, the lines connecting each projected point of the three-dimensional face model to its corresponding projection point are parallel to one another; in a perspective projection, those lines all intersect at a single point.
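The following minimal sketch (illustrative code, not from the patent; the function and parameter names are assumptions) shows both projection modes for a vertex array such as `points` above, with the frontal, 45-degree left, and 45-degree right viewing angles of fig. 3 expressed as rotations about the vertical axis:

```python
import numpy as np

def yaw_rotation(degrees):
    """Rotation about the vertical (y) axis used to express a viewing angle."""
    t = np.radians(degrees)
    return np.array([[np.cos(t), 0.0, np.sin(t)],
                     [0.0,       1.0, 0.0],
                     [-np.sin(t), 0.0, np.cos(t)]])

def orthographic_project(points, view_rotation):
    """Orthographic projection: all projection rays are parallel.
    Returns the (x, y) projection points and the per-point depth."""
    rotated = points @ view_rotation.T
    return rotated[:, :2], -rotated[:, 2]   # depth grows away from the viewer

def perspective_project(points, view_rotation, focal=1.0, camera_z=5.0):
    """Perspective projection: all projection rays meet at one center."""
    rotated = points @ view_rotation.T
    depth = camera_z - rotated[:, 2]        # distance from the projection center
    return focal * rotated[:, :2] / depth[:, None], depth

# The three viewing angles of fig. 3: frontal, 45° left, 45° right.
views = {"front": yaw_rotation(0.0),
         "left": yaw_rotation(45.0),
         "right": yaw_rotation(-45.0)}
```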
And 203, detecting corresponding two-dimensional face key points from the n two-dimensional face images respectively.
In each two-dimensional face image, points on a characteristic part of the face, or points on the edge of such a part, may be determined as two-dimensional face key points. Please refer to fig. 4, which shows a schematic diagram of two-dimensional face key points according to an embodiment of the present application. As shown in fig. 4, the characteristic parts of the face may include the eyes 41, nose 42, lips 43, ears 44, and eyebrows 45, as well as the face contour 46, the hairline 47, and the like.
Please refer to fig. 5, which shows schematic diagrams of two-dimensional face key points in two-dimensional face images from 3 different viewing angles. Fig. 5 (a) shows the two-dimensional face key points 54 in the two-dimensional face image 51 at the frontal face viewing angle, fig. 5 (b) shows the two-dimensional face key points 55 in the two-dimensional face image 52 at the left side face viewing angle, and fig. 5 (c) shows the two-dimensional face key points 56 in the two-dimensional face image 53 at the right side face viewing angle. In some possible embodiments, after the two-dimensional face key points are detected, they may be marked in the two-dimensional face image, and their position coordinates may also be recorded.
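Step 203 does not prescribe a particular detector; any mature two-dimensional face key point detector can be run on each projected image. As one hedged example (dlib's publicly distributed 68-point shape predictor, chosen here purely for illustration and not named in the patent):

```python
import dlib

detector = dlib.get_frontal_face_detector()
# Publicly distributed dlib model; the file path is an assumption of this sketch.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def detect_2d_keypoints(image):
    """Return the (x, y) two-dimensional face key points detected in one
    of the n projected two-dimensional face images (a numpy RGB array)."""
    faces = detector(image, 1)      # upsample once to find smaller faces
    if not faces:
        return []
    shape = predictor(image, faces[0])
    return [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]
```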
And 204, obtaining key point mapping results corresponding to the n two-dimensional face images respectively.
The key point mapping result corresponding to the ith of the n two-dimensional face images comprises the three-dimensional face key points on the three-dimensional face model that correspond to the two-dimensional face key points in the ith two-dimensional face image, where i is a positive integer less than or equal to n, and the ith two-dimensional face image is the image obtained by projecting the three-dimensional face model from the ith viewing angle.
Because each two-dimensional face image is obtained by projecting the three-dimensional face model, every point in the two-dimensional face image has a corresponding point on the three-dimensional face model; the points on the three-dimensional face model that correspond to the two-dimensional face key points are the three-dimensional face key points.
In some possible embodiments, the key point mapping result corresponding to the ith two-dimensional face image includes the positions, on the three-dimensional face model, of the point cloud data points corresponding to the two-dimensional face key points in the ith two-dimensional face image.
In some possible embodiments, a spatial coordinate system is established in the three-dimensional face model, and the position of the point cloud data point may be a coordinate of the point cloud data point in the spatial coordinate system.
Step 205, selecting, from the key point mapping results respectively corresponding to the n two-dimensional face images, the three-dimensional face key points corresponding to each projection viewing angle, and integrating them to obtain the positioning result of the three-dimensional face key points on the three-dimensional face model.
The positioning result of a three-dimensional face key point on the three-dimensional face model refers to its position on the model, which may be represented by the identifier of the point cloud data point or by the position coordinates of the point cloud data point; this is not limited in the embodiments of the present application. As described in step 204 above, the key point mapping result of each two-dimensional face image gives, for every two-dimensional face key point in that image, the identifier or position coordinates of the corresponding three-dimensional face key point on the three-dimensional face model; integrating the key point mapping results respectively corresponding to the n two-dimensional face images therefore yields the positioning results of all face key points on the three-dimensional face model.
To sum up, in the technical solution provided by the embodiments of the present application, n two-dimensional face images are obtained by projecting a three-dimensional face model from n different viewing angles, the corresponding two-dimensional face key points are detected in each image, the key point mapping results corresponding to the n images are obtained, and the three-dimensional face key points corresponding to each projection viewing angle are selected from the corresponding mapping results and integrated to obtain the positioning result of the three-dimensional face key points on the three-dimensional face model. On the one hand, mature and accurate two-dimensional key point detection can be applied to the projected images; on the other hand, the two-dimensional detection results are mapped into three-dimensional detection results, and only the three-dimensional face key points corresponding to each projection viewing angle are selected for integration into the positioning result for the whole three-dimensional face model, which ensures that every region of the three-dimensional face model takes an accurate three-dimensional detection result and improves the accuracy of the final positioning result of the three-dimensional face key points.
Please refer to fig. 6, which shows a flowchart of a face key point detection method according to another embodiment of the present application. The method comprises the following steps (601-606):
step 601, obtaining a three-dimensional face model.
Step 602, projecting the three-dimensional face model from n different viewing angles to obtain n two-dimensional face images.
In some possible embodiments, step 602 may include several sub-steps as follows:
1. For the ith of the n different viewing angles, project the point cloud data points of the three-dimensional face model's point cloud that are visible from the ith viewing angle into the ith two-dimensional face image, obtaining the projection points corresponding to the visible point cloud data points.
From the ith viewing angle, only part of the point cloud data points are visible. When the three-dimensional face model is projected from the ith viewing angle, the visible point cloud data points are projected and the invisible ones are not; equivalently, in the ith two-dimensional face image, the projection points of the visible point cloud data points are kept and those of the invisible point cloud data points are discarded.
In some possible embodiments, a z-buffer (depth buffering) algorithm is used to determine which point cloud data points are visible from the ith viewing angle.
In an exemplary embodiment, the depth of each point cloud data point relative to the projection plane corresponding to the ith viewing angle is recorded; this depth is the projection distance from the point cloud data point to that projection plane, i.e., the distance between the point cloud data point and its projection point at the ith viewing angle. If a pixel position in the ith two-dimensional face image corresponds to the projection points of k point cloud data points, the k depths of those points are compared, the projection point of the point cloud data point with the smallest depth is kept, and the projection points of the other point cloud data points are discarded; equivalently, only the point cloud data point with the smallest of the k depths is projected and the others are not, where k is an integer greater than 1.
2. Render the texture colors of the triangular patches formed by the projection points according to the texture colors of the triangular patches formed by the visible point cloud data points, obtaining the ith two-dimensional face image.
In some possible embodiments, the texture colors of the triangular patches formed by the projection points can be rendered with a rendering program preset on the computer, or drawn manually, according to the texture colors of the triangular patches formed by the visible point cloud data points.
Performing the above two sub-steps for each of the n different viewing angles yields the n two-dimensional face images, as illustrated by the sketch below.
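The following sketch (illustrative code under assumed names; a real renderer would also rasterize the triangular patches rather than individual points) shows the per-pixel minimum-depth rule described above: each pixel of the ith image keeps only the projection point of the point cloud data point nearest the projection plane.

```python
import numpy as np

def visible_point_indices(points, view_rotation, resolution=256):
    """z-buffer sketch for one viewing angle: quantize the projection
    points to pixels and keep, per pixel, the point with minimum depth."""
    rotated = points @ view_rotation.T          # orthographic, as above
    xy, depth = rotated[:, :2], -rotated[:, 2]  # smaller depth = nearer plane

    # Quantize projected coordinates to pixel positions.
    lo = xy.min(axis=0)
    span = xy.max(axis=0) - lo + 1e-9
    pix = ((xy - lo) / span * (resolution - 1)).astype(int)

    zbuffer = np.full((resolution, resolution), np.inf)
    winner = np.full((resolution, resolution), -1, dtype=int)
    for idx, ((px, py), d) in enumerate(zip(pix, depth)):
        if d < zbuffer[py, px]:                 # smallest depth wins the pixel
            zbuffer[py, px] = d
            winner[py, px] = idx

    visible = np.unique(winner[winner >= 0])    # indices of visible points
    return visible, pix
```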
Step 603, detecting corresponding two-dimensional face key points from the n two-dimensional face images respectively.
And step 604, obtaining key point mapping results corresponding to the n two-dimensional face images respectively.
In some possible embodiments, step 604 may include several sub-steps as follows:
1. For a two-dimensional face key point in the ith two-dimensional face image, obtain its adjacent projection point, i.e., the projection point, among the projection points of the three-dimensional face model's point cloud in the ith two-dimensional face image, that is closest to the two-dimensional face key point;
2. obtain the point cloud data point corresponding to the adjacent projection point according to a mapping relation, where the mapping relation records the correspondence between the point cloud and the projection points;
3. determine, from the point cloud data point corresponding to the adjacent projection point, the three-dimensional face key point on the three-dimensional face model that corresponds to the two-dimensional face key point;
4. collect the three-dimensional face key points on the three-dimensional face model corresponding to all the two-dimensional face key points in the ith two-dimensional face image, obtaining the key point mapping result corresponding to the ith two-dimensional face image.
In the ith two-dimensional face image, when a two-dimensional face key point does not coincide with any projection point, the point cloud data point corresponding to its adjacent projection point can be determined as the three-dimensional face key point corresponding to that two-dimensional face key point, as sketched below.
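A minimal sketch of sub-steps 1 to 3 (illustrative names, not from the patent: `projections_2d` holds the projection points of the visible point cloud data points, and `point_indices` is the recorded mapping relation back to the point cloud):

```python
import numpy as np
from scipy.spatial import cKDTree

def map_keypoints_to_3d(keypoints_2d, projections_2d, point_indices):
    """For each detected two-dimensional face key point, find the adjacent
    (nearest) projection point and return the index of the point cloud
    data point it was projected from."""
    tree = cKDTree(projections_2d)
    _, nearest = tree.query(np.asarray(keypoints_2d, dtype=float))
    return [point_indices[j] for j in np.atleast_1d(nearest)]
```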
In some possible embodiments, the two-dimensional face key points are two-dimensional contour points. In this case, determining the three-dimensional face key point on the three-dimensional face model corresponding to a two-dimensional face key point from the point cloud data point corresponding to its adjacent projection point comprises the following sub-steps:
3.1. Select, from the point cloud, at least one candidate three-dimensional contour point corresponding to the two-dimensional contour point, according to the point cloud data point corresponding to the adjacent projection point;
please refer to fig. 7, which illustrates a schematic diagram of a candidate three-dimensional contour point according to an embodiment of the present application. In fig. 7, each row shows a schematic diagram of a three-dimensional contour point 71 on the three-dimensional face model 70, wherein each candidate three-dimensional contour point corresponds to a two-dimensional contour point. As can be seen from fig. 7, the candidate three-dimensional contour points 71 corresponding to each two-dimensional contour point are located on the left and right sides of the point cloud data points corresponding to the adjacent projection points.
3.2. Determine the point located at the outermost side of the face among the candidate three-dimensional contour points as the three-dimensional contour point on the three-dimensional face model corresponding to the two-dimensional contour point.
With this technical solution, the point cloud data point corresponding to the adjacent projection point is determined first, and the candidate three-dimensional contour points are then selected from the point cloud data points on its left and right sides. Here, the left-right direction is the direction perpendicular to the plane of symmetry of the three-dimensional face model; that is, it is parallel to the left-right direction of the three-dimensional face model.
In an exemplary embodiment, if the point cloud data point corresponding to the adjacent projection point is located on the left side of the three-dimensional face model, the three-dimensional contour point on the model corresponding to the two-dimensional contour point is the leftmost of the candidate three-dimensional contour points; if that point cloud data point is located on the right side of the model, the corresponding three-dimensional contour point is the rightmost of the candidate three-dimensional contour points. A sketch of this selection follows.
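A hedged sketch of sub-steps 3.1 and 3.2 (the candidate band and its tolerance are assumptions of this sketch, not taken from the patent):

```python
import numpy as np

def refine_contour_point(points, anchor_idx, side, tol=0.01):
    """Gather candidate three-dimensional contour points lying to the left
    and right of the mapped point (along the x axis, perpendicular to the
    model's plane of symmetry), then keep the outermost one."""
    anchor = points[anchor_idx]
    # Candidates: points at roughly the same height (y) and depth (z)
    # as the anchor, differing mainly in the left-right (x) direction.
    near_yz = (np.abs(points[:, 1] - anchor[1]) < tol) & \
              (np.abs(points[:, 2] - anchor[2]) < tol)
    candidates = np.where(near_yz)[0]
    if side == "left":                                   # leftmost of the face
        return candidates[np.argmin(points[candidates, 0])]
    return candidates[np.argmax(points[candidates, 0])]  # rightmost of the face
```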
Step 605, selecting the three-dimensional face key points corresponding to the ith viewing angle from the key point mapping result corresponding to the ith two-dimensional face image.
Referring to fig. 8 and 9, fig. 8 shows the contour point mapping result of the two-dimensional face image 52 in fig. 5, and fig. 9 shows the contour point mapping result of the two-dimensional face image 53 in fig. 5. As shown in fig. 8, in the contour point mapping result of the two-dimensional face image 52 obtained from the left side face viewing angle, the mapping result for the contour points 801 of the left side face is relatively accurate, while the mapping error for the contour points 802 of the right side face is large; as shown in fig. 9, in the contour point mapping result of the two-dimensional face image 53 obtained from the right side face viewing angle, the mapping result for the contour points 901 of the right side face is accurate, while the mapping error for the contour points 902 of the left side face is large. The positions of key points mapped from different viewing angles thus differ considerably. Therefore, within the key point mapping result corresponding to the ith two-dimensional face image, the subset of three-dimensional face key points with the more accurate results can be taken as the three-dimensional face key points corresponding to the ith viewing angle.
In an exemplary embodiment, when the ith viewing angle is the frontal face viewing angle, the three-dimensional face key points corresponding to the middle part of the frontal face in the key point mapping result of the ith two-dimensional face image are determined as the three-dimensional face key points corresponding to the ith viewing angle; when the ith viewing angle is the left side face viewing angle, the three-dimensional face key points corresponding to the left side face are so determined; and when the ith viewing angle is the right side face viewing angle, the three-dimensional face key points corresponding to the right side face are so determined.
Step 606, integrating the three-dimensional face key points selected for the n viewing angles to obtain the positioning result of the three-dimensional face key points on the three-dimensional face model.
By integrating the three-dimensional face key points corresponding to the n viewing angles, the identifier or position coordinates of each three-dimensional face key point on the three-dimensional face model can be determined, giving the positioning result of each three-dimensional face key point on the model.
Please refer to fig. 10, which shows a schematic diagram of three-dimensional face key points according to another embodiment of the present application. As shown in fig. 10, in the exemplary embodiment, the three-dimensional face key points 1001 corresponding to the frontal face viewing angle, the three-dimensional face key points 1002 corresponding to the left side face viewing angle, and the three-dimensional face key points 1003 corresponding to the right side face viewing angle are integrated to obtain the positioning result of each three-dimensional face key point on the three-dimensional face model, as sketched below.
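A minimal sketch of the per-viewing-angle selection and integration of steps 605 and 606 (the region assignment in `view_regions` and the key point names are assumed configuration for illustration only):

```python
def integrate_keypoints(mapping_results, view_regions):
    """mapping_results: viewing angle -> {key point name: point cloud index}.
    view_regions: viewing angle -> set of key point names that this viewing
    angle maps most accurately (e.g. the left side face contour for the
    left viewing angle). Returns the integrated positioning result."""
    final = {}
    for view, result in mapping_results.items():
        for name, point_index in result.items():
            if name in view_regions[view]:   # keep only this view's region
                final[name] = point_index
    return final

# Example (hypothetical names): the frontal viewing angle contributes the
# middle-face key points, each side viewing angle its own contour.
view_regions = {"front": {"nose_tip", "mouth_left", "mouth_right"},
                "left": {"contour_left"},
                "right": {"contour_right"}}
```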
In summary, in the technical solution provided by the embodiments of the present application, the more accurate three-dimensional face key points in the mapping result of each viewing angle are selected as the three-dimensional face key points corresponding to that viewing angle, and the three-dimensional face key points selected for the n viewing angles are integrated to obtain the positioning result of each three-dimensional face key point on the three-dimensional face model, which improves the accuracy of the three-dimensional face key point detection result.
In the embodiments of the present application, the z-buffer (depth buffering) algorithm is used so that only the visible point cloud data points of the three-dimensional face model's point cloud are projected, which ensures the accuracy of the projected two-dimensional face images and further improves the accuracy of the three-dimensional face key point detection result.
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.
Referring to fig. 11, a block diagram of a face key point detection apparatus according to an embodiment of the present application is shown. The apparatus 1100 has the function of implementing the above face key point detection method examples; the function may be implemented by hardware, or by hardware executing corresponding software. The apparatus 1100 may be the computer device described above, or may be provided on a computer device. The apparatus 1100 may include: a model acquisition module 1110, a model projection module 1120, a key point detection module 1130, a result acquisition module 1140, and a result determination module 1150.
The model obtaining module 1110 is configured to obtain a three-dimensional face model.
The model projection module 1120 is configured to project the three-dimensional face model from n different viewing angles to obtain n two-dimensional face images, where n is an integer greater than 1.
The key point detecting module 1130 is configured to detect two-dimensional face key points from the n two-dimensional face images, respectively.
The result obtaining module 1140 is configured to obtain key point mapping results corresponding to the n two-dimensional face images respectively, where the key point mapping result corresponding to the ith of the n two-dimensional face images comprises the three-dimensional face key points on the three-dimensional face model that correspond to the two-dimensional face key points in the ith two-dimensional face image, and i is a positive integer less than or equal to n.
The result determining module 1150 is configured to select, from the key point mapping results corresponding to the n two-dimensional face images, the three-dimensional face key points corresponding to each projection viewing angle, and integrate them to obtain the positioning result of the three-dimensional face key points on the three-dimensional face model.
To sum up, in the technical solution provided by the embodiments of the present application, n two-dimensional face images are obtained by projecting a three-dimensional face model from n different viewing angles, the corresponding two-dimensional face key points are detected in each image, the key point mapping results corresponding to the n images are obtained, and the three-dimensional face key points corresponding to each projection viewing angle are selected from the corresponding mapping results and integrated to obtain the positioning result of the three-dimensional face key points on the three-dimensional face model. On the one hand, mature and accurate two-dimensional key point detection can be applied to the projected images; on the other hand, the two-dimensional detection results are mapped into three-dimensional detection results, and only the three-dimensional face key points corresponding to each projection viewing angle are selected for integration into the positioning result for the whole three-dimensional face model, which ensures that every region of the three-dimensional face model takes an accurate three-dimensional detection result and improves the accuracy of the final positioning result of the three-dimensional face key points.
In an exemplary embodiment, as shown in fig. 12, the result obtaining module 1140 includes: a proxel acquisition sub-module 1141, a data point acquisition sub-module 1142, a keypoint determination sub-module 1143, and a keypoint acquisition sub-module 1144.
The projection point obtaining sub-module 1141 is configured to, for a two-dimensional face key point in the ith two-dimensional face image, obtain the adjacent projection point corresponding to the two-dimensional face key point, where the adjacent projection point is the projection point, among the projection points of the three-dimensional face model's point cloud in the ith two-dimensional face image, that is closest to the two-dimensional face key point.
The data point obtaining sub-module 1142 is configured to obtain a point cloud data point corresponding to the adjacent projection point according to a mapping relationship, where the mapping relationship includes a mapping relationship between the point cloud and the projection point.
The key point determining submodule 1143 is configured to determine, according to the point cloud data point corresponding to the adjacent projection point, a three-dimensional face key point corresponding to the two-dimensional face key point on the three-dimensional face model.
The key point obtaining sub-module 1144 is configured to obtain a three-dimensional face key point corresponding to the two-dimensional face key point in the ith two-dimensional face image on the three-dimensional face model, and obtain a key point mapping result corresponding to the ith two-dimensional face image.
In an exemplary embodiment, the two-dimensional face key points are two-dimensional contour points; the key point determining submodule 1143 is configured to:
select, from the point cloud, at least one candidate three-dimensional contour point corresponding to the two-dimensional contour point according to the point cloud data point corresponding to the adjacent projection point, where the candidate three-dimensional contour points are located on the left and right sides of that point cloud data point;
and determine the point located at the outermost side of the face among the candidate three-dimensional contour points as the three-dimensional contour point on the three-dimensional face model corresponding to the two-dimensional contour point.
In an exemplary embodiment, the result determination module 1150 is configured to:
selecting the three-dimensional face key points corresponding to the ith viewing angle from the key point mapping result corresponding to the ith two-dimensional face image, where the ith two-dimensional face image is the image obtained by projecting the three-dimensional face model from the ith viewing angle;
integrating the three-dimensional face key points selected for the n viewing angles to obtain the positioning result of the three-dimensional face key points on the three-dimensional face model.
In an exemplary embodiment, the model projection module 1120 is configured to:
for the ith of the n different viewing angles, projecting the point cloud data points of the three-dimensional face model's point cloud that are visible from the ith viewing angle into the ith two-dimensional face image to obtain the projection points corresponding to the visible point cloud data points;
and rendering the texture color of the triangular patch formed by the projection points according to the texture color of the triangular patch formed by the visible point cloud data points to obtain the ith two-dimensional face image.
In an exemplary embodiment, the n viewing angles include: a frontal face viewing angle, a left side face viewing angle, and a right side face viewing angle.
It should be noted that when the apparatus provided in the foregoing embodiments implements its functions, only the above division into functional modules is used as an example; in practical applications, the functions may be allocated to different functional modules as needed, that is, the internal structure of the apparatus may be divided into different functional modules to implement all or part of the functions described above. In addition, the apparatus embodiments and method embodiments provided above belong to the same concept; for details of their specific implementation, refer to the method embodiments, which are not repeated here.
Referring to fig. 13, a schematic structural diagram of a computer device according to an embodiment of the present application is shown. Specifically, the method comprises the following steps:
the computer apparatus 1300 includes a CPU (Central Processing Unit) 1301, a system Memory 1304 including a RAM (Random Access Memory) 1302 and a ROM (Read Only Memory) 1303, and a system bus 1305 connecting the system Memory 1304 and the Central Processing Unit 1301. The computer device 1300 also includes a basic I/O (Input/Output) system 1306 to facilitate information transfer between devices within the computer, and a mass storage device 1307 for storing an operating system 1313, application programs 1314, and other program modules 1315.
The basic input/output system 1306 includes a display 1308 for displaying information and an input device 1309, such as a mouse, keyboard, etc., for a user to input information. Wherein the display 1308 and input device 1309 are connected to the central processing unit 1301 through an input-output controller 1310 connected to the system bus 1305. The basic input/output system 1306 may also include an input/output controller 1310 for receiving and processing input from a number of other devices, such as a keyboard, mouse, or electronic stylus. Similarly, input-output controller 1310 also provides output to a display screen, a printer, or other type of output device.
The mass storage device 1307 is connected to the central processing unit 1301 through a mass storage controller (not shown) connected to the system bus 1305. The mass storage device 1307 and its associated computer-readable media provide non-volatile storage for the computer device 1300. That is, the mass storage device 1307 may include a computer-readable medium (not shown) such as a hard disk or CD-ROM (Compact disk Read-Only Memory) drive.
Without loss of generality, the computer-readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes RAM, ROM, EPROM (Erasable Programmable Read Only Memory), flash Memory or other solid state Memory technology, CD-ROM or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices. Of course, those skilled in the art will appreciate that the computer storage media is not limited to the foregoing. The system memory 1304 and mass storage device 1307 described above may be collectively referred to as memory.
According to various embodiments of the present application, the computer device 1300 may also be run by connecting, through a network such as the Internet, to a remote computer on the network. That is, the computer device 1300 may be connected to the network 1312 through the network interface unit 1311, which is connected to the system bus 1305, or may be connected to other types of networks or remote computer systems (not shown) using the network interface unit 1311.
The memory also includes at least one instruction, at least one program, set of codes, or set of instructions stored in the memory and configured to be executed by one or more processors to implement the above-described method of face keypoint detection.
In an exemplary embodiment, there is also provided a computer readable storage medium having stored therein at least one instruction, at least one program, code set, or set of instructions which, when executed by a processor, implements the above-described face keypoint detection method.
Optionally, the computer-readable storage medium may include: a ROM, a RAM, an SSD (Solid State Drive), an optical disc, and the like. The RAM may include a ReRAM (Resistive Random Access Memory) and a DRAM (Dynamic Random Access Memory).
In an exemplary embodiment, there is also provided a computer program product which, when executed by a processor, implements the above face key point detection method.
It should be understood that reference to "a plurality" herein means two or more. Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the application disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (14)

1. A method for detecting face key points, characterized by comprising the following steps:
acquiring a three-dimensional face model;
projecting the three-dimensional face model from n different viewing angles to obtain n two-dimensional face images, wherein n is an integer greater than 1;
detecting corresponding two-dimensional face key points from the n two-dimensional face images respectively;
obtaining key point mapping results corresponding to the n two-dimensional face images respectively; the key point mapping result corresponding to the ith two-dimensional face image in the n two-dimensional face images comprises a three-dimensional face key point corresponding to the two-dimensional face key point in the ith two-dimensional face image on the three-dimensional face model, and i is a positive integer less than or equal to n;
and selecting, from the key point mapping results respectively corresponding to the n two-dimensional face images, the three-dimensional face key points corresponding to each projection viewing angle, and integrating them to obtain the positioning result of the three-dimensional face key points on the three-dimensional face model.
2. The method according to claim 1, wherein the obtaining of the key point mapping results corresponding to the n two-dimensional face images respectively comprises:
for a two-dimensional face key point in the ith two-dimensional face image, obtaining the adjacent projection point corresponding to the two-dimensional face key point, wherein the adjacent projection point is the projection point, among the projection points of the point cloud of the three-dimensional face model in the ith two-dimensional face image, that is closest to the two-dimensional face key point;
acquiring point cloud data points corresponding to the adjacent projection points according to a mapping relation, wherein the mapping relation records the correspondence between the point cloud and the projection points;
determining a three-dimensional face key point corresponding to the two-dimensional face key point on the three-dimensional face model according to the point cloud data point corresponding to the adjacent projection point;
and acquiring a three-dimensional face key point corresponding to the two-dimensional face key point in the ith two-dimensional face image on the three-dimensional face model to obtain a key point mapping result corresponding to the ith two-dimensional face image.
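Since only the visible part of the point cloud is projected, the mapping relation of claim 2 can be kept as a plain index array from projection points back to point cloud data points. A minimal sketch, assuming `visible_idx[j]` stores the cloud index of the j-th projection point (an illustrative representation, not mandated by the claim):

```python
import numpy as np

def map_keypoint_to_cloud(kp_2d, proj_pts, visible_idx, cloud):
    """Return the point cloud data point behind the projection point nearest kp_2d.

    kp_2d       -- (2,) detected 2D face key point
    proj_pts    -- (M, 2) projection points of the visible cloud subset
    visible_idx -- (M,) mapping relation: projection index -> cloud index
    cloud       -- (N, 3) full point cloud of the face model
    """
    nearest = int(np.argmin(np.linalg.norm(proj_pts - kp_2d, axis=1)))
    return cloud[visible_idx[nearest]]
```

Keeping the mapping as an index array makes the 2D-to-3D step an O(M) nearest-neighbour query followed by a constant-time lookup; a k-d tree could replace the linear scan for large clouds.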
3. The method of claim 2, wherein the two-dimensional face key points are two-dimensional contour points;
determining a three-dimensional face key point corresponding to the two-dimensional face key point on the three-dimensional face model according to the point cloud data point corresponding to the adjacent projection point, wherein the determining comprises the following steps:
selecting at least one candidate three-dimensional contour point corresponding to the two-dimensional contour point from the point cloud according to the point cloud data point corresponding to the adjacent projection point, wherein the candidate three-dimensional contour points are located on the left and right sides of the point cloud data point corresponding to the adjacent projection point;
and determining, among the candidate three-dimensional contour points, the point located at the outermost side of the face as the three-dimensional contour point corresponding to the two-dimensional contour point on the three-dimensional face model.
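Claim 3 refines contour (jaw-line) points: the nearest projection point may map to a cloud point slightly inside the silhouette, so candidates beside it are examined and the outermost one wins. A sketch under simplifying assumptions (the face midline lies at x = 0, "beside" means within a height band of width `band`; both are illustrative choices, not patent text):

```python
import numpy as np

def refine_contour_point(matched_pt, cloud, band=2.0):
    """Replace matched_pt with the outermost cloud point at a similar height."""
    same_height = np.abs(cloud[:, 1] - matched_pt[1]) < band    # left/right neighbours
    same_side = np.sign(cloud[:, 0]) == np.sign(matched_pt[0])  # same cheek
    cand = cloud[same_height & same_side]
    if len(cand) == 0:                           # nothing nearby: keep the match
        return matched_pt
    return cand[np.argmax(np.abs(cand[:, 0]))]   # farthest from the midline
```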
4. The method according to claim 1, wherein selecting the three-dimensional face key points corresponding to the respective projection view angles from the key point mapping results respectively corresponding to the n two-dimensional face images and integrating them to obtain the positioning result of the three-dimensional face key points on the three-dimensional face model comprises:
selecting the three-dimensional face key points corresponding to the ith view angle from the key point mapping result corresponding to the ith two-dimensional face image, wherein the ith two-dimensional face image is an image obtained by projecting the three-dimensional face model from the ith view angle;
and integrating the three-dimensional face key points respectively selected for the n view angles to obtain the positioning result of the three-dimensional face key points on the three-dimensional face model.
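The integration of claim 4 amounts to a fixed ownership map: each view keeps only the key points it is trusted for, e.g. the side views for the jaw contour they face and the frontal view for everything else. A sketch assuming the common 68-landmark indexing, which the patent does not prescribe:

```python
# which key point indices each projection view angle is responsible for
VIEW_OWNS = {
    0: range(17, 68),  # frontal face view: eyes, brows, nose, mouth
    1: range(0, 9),    # left side face view: left jaw contour
    2: range(9, 17),   # right side face view: right jaw contour
}

def integrate(per_view):
    """per_view[i][k] is the 3D key point k mapped back from view i."""
    final = {}
    for view, owned in VIEW_OWNS.items():
        for k in owned:
            final[k] = per_view[view][k]   # keep this view's trusted subset
    return final
```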
5. The method of claim 1, wherein said projecting the three-dimensional face model from n different view angles to obtain n two-dimensional face images comprises:
for the ith view angle in the n different view angles, projecting the point cloud data points of the three-dimensional face model that are visible from the ith view angle into the ith two-dimensional face image to obtain projection points corresponding to the visible point cloud data points;
and rendering the texture colors of the triangular patches formed by the projection points according to the texture colors of the triangular patches formed by the visible point cloud data points, to obtain the ith two-dimensional face image.
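Claim 5 presupposes a visibility test before projection. The patent does not fix the test, so the sketch below substitutes the standard back-face culling heuristic: a triangular patch counts as visible when its outward normal points toward the camera. Rasterizing the visible patches with their texture colors is omitted for brevity:

```python
import numpy as np

def visible_triangles(vertices, faces, view_dir):
    """Indices of triangular patches facing the camera.

    vertices -- (N, 3) point cloud data points
    faces    -- (M, 3) vertex indices of the triangular patches
    view_dir -- (3,) unit vector pointing from the model toward the camera
    """
    a, b, c = vertices[faces[:, 0]], vertices[faces[:, 1]], vertices[faces[:, 2]]
    normals = np.cross(b - a, c - a)              # per-patch outward normals
    return np.nonzero(normals @ view_dir > 0)[0]  # positive dot = camera-facing
```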
6. The method of any of claims 1 to 5, wherein the n view angles comprise: a frontal face view angle, a left side face view angle, and a right side face view angle.
7. A face key point detection apparatus, the apparatus comprising:
the model acquisition module is used for acquiring a three-dimensional face model;
the model projection module is used for projecting the three-dimensional face model from n different view angles to obtain n two-dimensional face images, wherein n is an integer greater than 1;
the key point detection module is used for respectively detecting corresponding two-dimensional face key points from the n two-dimensional face images;
the result acquisition module is used for acquiring key point mapping results corresponding to the n two-dimensional face images respectively; the key point mapping result corresponding to the ith two-dimensional face image in the n two-dimensional face images comprises a three-dimensional face key point corresponding to the two-dimensional face key point in the ith two-dimensional face image on the three-dimensional face model, and i is a positive integer less than or equal to n;
and the result determining module is used for selecting the three-dimensional face key points corresponding to the respective projection view angles from the key point mapping results respectively corresponding to the n two-dimensional face images, and integrating them to obtain a positioning result of the three-dimensional face key points on the three-dimensional face model.
8. The apparatus of claim 7, wherein the result obtaining module comprises:
a projection point obtaining sub-module, configured to obtain, for a two-dimensional face key point in the ith two-dimensional face image, an adjacent projection point corresponding to the two-dimensional face key point, wherein the adjacent projection point is, among the projection points of the point cloud of the three-dimensional face model in the ith two-dimensional face image, the projection point closest to the two-dimensional face key point;
the data point acquisition submodule is used for acquiring point cloud data points corresponding to the adjacent projection points according to a mapping relation, wherein the mapping relation records the correspondence between the point cloud and the projection points;
the key point determining submodule is used for determining a three-dimensional face key point corresponding to the two-dimensional face key point on the three-dimensional face model according to the point cloud data point corresponding to the adjacent projection point;
and the key point obtaining submodule is used for obtaining three-dimensional face key points corresponding to the two-dimensional face key points in the ith two-dimensional face image on the three-dimensional face model, and obtaining a key point mapping result corresponding to the ith two-dimensional face image.
9. The apparatus of claim 8, wherein the two-dimensional face key points are two-dimensional contour points;
the key point determination submodule is used for:
selecting at least one candidate three-dimensional contour point corresponding to the two-dimensional contour point from the point cloud according to the point cloud data point corresponding to the adjacent projection point, wherein the candidate three-dimensional contour points are located on the left and right sides of the point cloud data point corresponding to the adjacent projection point;
and determining, among the candidate three-dimensional contour points, the point located at the outermost side of the face as the three-dimensional contour point corresponding to the two-dimensional contour point on the three-dimensional face model.
10. The apparatus of claim 7, wherein the result determination module is configured to:
selecting the three-dimensional face key points corresponding to the ith view angle from the key point mapping result corresponding to the ith two-dimensional face image, wherein the ith two-dimensional face image is an image obtained by projecting the three-dimensional face model from the ith view angle;
and integrating the three-dimensional face key points respectively selected for the n view angles to obtain the positioning result of the three-dimensional face key points on the three-dimensional face model.
11. The apparatus of claim 7, wherein the model projection module is configured to:
for the ith view angle in the n different view angles, projecting the point cloud data points of the three-dimensional face model that are visible from the ith view angle into the ith two-dimensional face image to obtain projection points corresponding to the visible point cloud data points;
and rendering the texture colors of the triangular patches formed by the projection points according to the texture colors of the triangular patches formed by the visible point cloud data points, to obtain the ith two-dimensional face image.
12. The apparatus of any of claims 7 to 11, wherein the n view angles comprise: a frontal face view angle, a left side face view angle, and a right side face view angle.
13. A computer device comprising a processor and a memory, the memory having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by the processor to implement the face key point detection method of any one of claims 1 to 6.
14. A computer-readable storage medium having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by a processor to implement the face key point detection method of any one of claims 1 to 6.
CN202010018423.1A 2020-01-08 2020-01-08 Face key point detection method, device, equipment and storage medium Active CN110807451B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010018423.1A CN110807451B (en) 2020-01-08 2020-01-08 Face key point detection method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110807451A (en) 2020-02-18
CN110807451B (en) 2020-06-02

Family

ID=69493363

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010018423.1A Active CN110807451B (en) 2020-01-08 2020-01-08 Face key point detection method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110807451B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101082765A (en) * 2006-06-01 2007-12-05 高宏 Three-dimensional portrait photograph system and realizing method thereof
CN105913416A (en) * 2016-04-06 2016-08-31 中南大学 Method for automatically segmenting three-dimensional human face model area
CN106203400A (en) * 2016-07-29 2016-12-07 广州国信达计算机网络通讯有限公司 A kind of face identification method and device
CN109377544A (en) * 2018-11-30 2019-02-22 腾讯科技(深圳)有限公司 A kind of face three-dimensional image generating method, device and readable medium

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111539912A (en) * 2020-03-23 2020-08-14 中国科学院自动化研究所 Health index evaluation method and equipment based on face structure positioning and storage medium
CN111563959B (en) * 2020-05-06 2023-04-28 厦门美图之家科技有限公司 Updating method, device, equipment and medium of three-dimensional deformable model of human face
CN111563959A (en) * 2020-05-06 2020-08-21 厦门美图之家科技有限公司 Updating method, device, equipment and medium of three-dimensional deformable model of human face
CN111753644A (en) * 2020-05-09 2020-10-09 清华大学 Method and device for detecting key points on three-dimensional face scanning
CN111652974A (en) * 2020-06-15 2020-09-11 腾讯科技(深圳)有限公司 Method, device and equipment for constructing three-dimensional face model and storage medium
CN111652974B (en) * 2020-06-15 2023-08-25 腾讯科技(深圳)有限公司 Method, device, equipment and storage medium for constructing three-dimensional face model
CN111832648A (en) * 2020-07-10 2020-10-27 北京百度网讯科技有限公司 Key point marking method and device, electronic equipment and storage medium
CN111832648B (en) * 2020-07-10 2024-02-09 北京百度网讯科技有限公司 Key point labeling method and device, electronic equipment and storage medium
CN112270737A (en) * 2020-11-25 2021-01-26 浙江商汤科技开发有限公司 Texture mapping method and device, electronic equipment and storage medium
CN112990032A (en) * 2021-03-23 2021-06-18 中国人民解放军海军航空大学航空作战勤务学院 Face image processing method and device
CN112990032B (en) * 2021-03-23 2022-08-16 中国人民解放军海军航空大学航空作战勤务学院 Face image processing method and device
CN113223137A (en) * 2021-05-13 2021-08-06 广州虎牙科技有限公司 Generation method of perspective projection human face point cloud graph, application program and electronic equipment
CN113538655A (en) * 2021-06-23 2021-10-22 聚好看科技股份有限公司 Virtual face generation method and equipment
CN113538655B (en) * 2021-06-23 2023-08-04 聚好看科技股份有限公司 Virtual face generation method and equipment
CN115937964A (en) * 2022-06-27 2023-04-07 北京字跳网络技术有限公司 Method, device, equipment and storage medium for attitude estimation
CN115937964B (en) * 2022-06-27 2023-12-15 北京字跳网络技术有限公司 Method, device, equipment and storage medium for estimating gesture
CN115278080A (en) * 2022-07-28 2022-11-01 北京五八信息技术有限公司 Mask generation method, mask generation equipment and storage medium
CN115359166A (en) * 2022-10-20 2022-11-18 北京百度网讯科技有限公司 Image generation method and device, electronic equipment and medium
CN115984461A (en) * 2022-12-12 2023-04-18 广州紫为云科技有限公司 Face three-dimensional key point detection method based on RGBD camera

Also Published As

Publication number Publication date
CN110807451B (en) 2020-06-02

Similar Documents

Publication Publication Date Title
CN110807451B (en) Face key point detection method, device, equipment and storage medium
CN111325823B (en) Method, device and equipment for acquiring face texture image and storage medium
US11748934B2 (en) Three-dimensional expression base generation method and apparatus, speech interaction method and apparatus, and medium
CN109859296B (en) Training method of SMPL parameter prediction model, server and storage medium
CN111126272B (en) Posture acquisition method, and training method and device of key point coordinate positioning model
CN110807836B (en) Three-dimensional face model generation method, device, equipment and medium
CN111243093B (en) Three-dimensional face grid generation method, device, equipment and storage medium
Memo et al. Head-mounted gesture controlled interface for human-computer interaction
CN111710036B (en) Method, device, equipment and storage medium for constructing three-dimensional face model
CN111652974B (en) Method, device, equipment and storage medium for constructing three-dimensional face model
WO2022156640A1 (en) Gaze correction method and apparatus for image, electronic device, computer-readable storage medium, and computer program product
EP3992919B1 (en) Three-dimensional facial model generation method and apparatus, device, and medium
CN110363133B (en) Method, device, equipment and storage medium for sight line detection and video processing
CN111192223B (en) Method, device and equipment for processing face texture image and storage medium
US10380796B2 (en) Methods and systems for 3D contour recognition and 3D mesh generation
JP2023545190A (en) Image line-of-sight correction method, device, electronic device, and computer program
da Silveira et al. Dense 3D scene reconstruction from multiple spherical images for 3-DoF+ VR applications
CN112749611B (en) Face point cloud model generation method and device, storage medium and electronic equipment
CN113822965A (en) Image rendering processing method, device and equipment and computer storage medium
CN113822994B (en) Three-dimensional model construction method and device and storage medium
CN115994944A (en) Three-dimensional key point prediction method, training method and related equipment
CN113592990A (en) Three-dimensional effect generation method, device, equipment and medium for two-dimensional image
CN113706399A (en) Face image beautifying method and device, electronic equipment and storage medium
CN111581411B (en) Method, device, equipment and storage medium for constructing high-precision face shape library
CN116958394A (en) Image generation method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant