CN109902616B - Human face three-dimensional feature point detection method and system based on deep learning - Google Patents

Human face three-dimensional feature point detection method and system based on deep learning

Info

Publication number
CN109902616B
CN109902616B (application CN201910138641.6A)
Authority
CN
China
Prior art keywords
face
dimensional
network
dimensional feature
human face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910138641.6A
Other languages
Chinese (zh)
Other versions
CN109902616A (en)
Inventor
徐枫
王至博
杨东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN201910138641.6A priority Critical patent/CN109902616B/en
Publication of CN109902616A publication Critical patent/CN109902616A/en
Application granted granted Critical
Publication of CN109902616B publication Critical patent/CN109902616B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a method and a system for detecting three-dimensional face feature points based on deep learning. The method comprises the following steps: establishing a face data set, and processing the face images in the data set through three-dimensional face reconstruction to obtain the face geometry; designating specific vertices on a face template as feature points, and establishing a data set consisting of face images and their corresponding three-dimensional face feature points; training a deep neural network whose input is a face image and whose output is a heatmap of the distribution of the three-dimensional feature point coordinates; and, during training, employing a generative adversarial network with a discriminator network whose input is a face image together with a three-dimensional feature point distribution heatmap and whose output is a true/false value indicating whether the two match, so that detection results are obtained from the trained neural network. The method detects the three-dimensional coordinates of face feature points in an image, and the detected face edge points have a well-defined correspondence with the face model, making the face reconstruction result more accurate.

Description

Human face three-dimensional feature point detection method and system based on deep learning
Technical Field
The invention relates to the technical field of computer vision and graphics, and in particular to a method and a system for detecting three-dimensional face feature points based on deep learning.
Background
The concept of deep learning originates from research on artificial neural networks; a multi-layer perceptron with multiple hidden layers is a deep learning structure. Deep learning combines low-level features to form more abstract high-level representations (attribute categories or features), thereby discovering distributed feature representations of the data.
Face feature point detection has important applications in face recognition, face reconstruction and face tracking. Face reconstruction, face tracking and non-rigid face registration often require a specified correspondence between feature points and vertices on a face model template. In such applications, two-dimensional face feature points are inconvenient to use, and face edge feature points whose correspondence with the face model template is uncertain lead to inaccurate face reconstruction results and make the application more difficult.
Disclosure of Invention
The present invention is directed to solving, at least to some extent, one of the technical problems in the related art.
Therefore, one object of the invention is to provide a method for detecting three-dimensional face feature points based on deep learning. The method outputs three-dimensional face feature points from only a single input face image, and the feature points have a strong correspondence with the real geometry of the face.
The invention also aims to provide a system for detecting three-dimensional face feature points based on deep learning.
In order to achieve the above object, the present invention provides a method for detecting three-dimensional face feature points based on deep learning, comprising the following steps: establishing a face data set, and processing the face images in the face data set through three-dimensional face reconstruction to obtain the face geometry; designating specific vertices on a three-dimensional face template as feature points, and establishing a data set consisting of face images and their corresponding three-dimensional face feature points; training a deep neural network whose input is a face image and whose output is a heatmap of the distribution of the three-dimensional feature point coordinates; and, during training, employing a generative adversarial network with a discriminator network whose input is a face image and a three-dimensional feature point distribution heatmap and whose output is true or false indicating whether they match, so that detection results are obtained through the trained neural network.
According to the method for detecting three-dimensional face feature points based on deep learning of the embodiment of the invention, a network for detecting three-dimensional face feature points is obtained by training with a deep learning method, and face edge feature points with a well-defined correspondence to the face model template are obtained, so that the face reconstruction result is more accurate and the method is simple to apply.
In addition, the method for detecting the three-dimensional feature points of the human face based on the deep learning according to the above embodiment of the present invention may further have the following additional technical features:
further, in an embodiment of the present invention, the feature points are three-dimensional feature points, and there is a corresponding relationship between the three-dimensional feature points and the three-dimensional face template.
Further, in an embodiment of the present invention, the processing of the face images in the face data set through three-dimensional face reconstruction includes face edge point detection, which yields edge points on both sides of the face geometry rather than the face contour points visible in the image.
Further, in an embodiment of the present invention, the method further includes: during training, the true/false output forms a generative adversarial error, and an error term constructed from the three-dimensional face feature points in the data set and the three-dimensional feature point distribution heatmap produced by the generator network is also used for training.
Further, in one embodiment of the present invention, the error term is required within the generative adversarial network for training the neural network.
In order to achieve the above object, another aspect of the present invention provides a system for detecting three-dimensional face feature points based on deep learning, including: a processing module, configured to establish a face data set, process the face images in the data set through three-dimensional face reconstruction to obtain the face geometry, designate specific vertices on a three-dimensional face template as feature points, and establish a data set consisting of face images and their corresponding three-dimensional face feature points; a pre-training module, configured to train a deep neural network whose input is a face image and whose output is a heatmap of the distribution of the three-dimensional face feature point coordinates; and a generative adversarial training module, configured to employ, during training, a generative adversarial network with a discriminator network whose input is a face image and a three-dimensional feature point distribution heatmap and whose output is true or false indicating whether they match, so that detection results are obtained through the trained neural network.
The system for detecting three-dimensional face feature points based on deep learning of the embodiment of the invention obtains, by training with a deep learning method, a network for detecting three-dimensional face feature points, and obtains face edge feature points with a well-defined correspondence to the face model template, so that the face reconstruction result is more accurate and the system is simple to apply.
In addition, the system for detecting the three-dimensional feature points of the human face based on deep learning according to the above embodiment of the present invention may further have the following additional technical features:
further, in an embodiment of the present invention, the feature points are three-dimensional feature points, and there is a corresponding relationship between the three-dimensional feature points and the three-dimensional face template.
Further, in an embodiment of the present invention, the processing of the face image in the face data set through the three-dimensional face reconstruction is face edge point detection, so as to obtain edge points on two sides of the face geometry, instead of the face edge points displayed in the face image.
Further, in an embodiment of the present invention, the method further includes: and during training, outputting the result as true or false to form a generated countermeasure error, and training the three-dimensional feature points of the human face in the data set and an error item constructed by a three-dimensional feature point distribution heat map of a generation network.
Further, in one embodiment of the present invention, the error term is required in the generating opposing network for training of the neural network.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a flowchart of a method for detecting three-dimensional feature points of a human face based on deep learning according to an embodiment of the invention;
FIG. 2 is a flowchart of a three-dimensional face reconstruction method according to an embodiment of the invention;
FIG. 3 is a flowchart of a network training method for detecting three-dimensional feature points of a human face according to an embodiment of the invention;
FIG. 4 is a flow chart of the detection of three-dimensional feature points of a human face according to an embodiment of the invention;
fig. 5 is a schematic structural diagram of a system for detecting three-dimensional feature points of a human face based on deep learning according to an embodiment of the invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
The following describes a method and a system for detecting three-dimensional feature points of a human face based on deep learning, which are provided by the embodiment of the present invention, with reference to the accompanying drawings.
Fig. 1 is a flowchart of a method for detecting a three-dimensional feature point of a human face based on deep learning according to an embodiment of the present invention.
As shown in fig. 1, the method for detecting the three-dimensional feature points of the human face based on the deep learning comprises the following steps:
In step S101, a face data set is established, and the face images in the face data set are processed through three-dimensional face reconstruction to obtain the face geometry.
The feature points are three-dimensional feature points, and they have a defined correspondence with the three-dimensional face template.
It should be noted that the face images in the face data set are processed through three-dimensional face reconstruction, which includes face edge point detection so as to obtain edge points on both sides of the face geometry rather than the face contour points visible in the image.
In step S102, specific vertices on the three-dimensional face template are designated as feature points, and a data set consisting of face images and their corresponding three-dimensional face feature points is established.
That is, the face images in the data set are processed through three-dimensional face reconstruction to obtain the face geometry, and certain vertices of the face geometry are specified in advance as feature points, so that the coordinates of the face feature points are obtained.
Specifically, as shown in fig. 2, an input face image is processed by a three-dimensional face reconstruction method: the face template is deformed according to the input image to obtain a geometric model of the face in the image, and the specific vertices designated as feature points on the face template then yield the coordinates of the three-dimensional face feature points.
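For illustration only, the following minimal Python sketch shows how the designated template vertices yield the three-dimensional feature point coordinates once the template has been deformed to match the input image; the vertex indices and the mesh layout are hypothetical placeholders, not values taken from the patent.

import numpy as np

# Hypothetical indices of the template vertices designated as feature points;
# the real indices depend on the face template being used.
LANDMARK_VERTEX_IDS = np.array([3526, 1785, 4920, 338, 2711])  # placeholders

def extract_3d_landmarks(mesh_vertices: np.ndarray) -> np.ndarray:
    # mesh_vertices: (V, 3) coordinates of the deformed template produced by
    # the three-dimensional face reconstruction step.
    # Returns an (L, 3) array of three-dimensional feature point coordinates.
    return mesh_vertices[LANDMARK_VERTEX_IDS]

# Usage: landmarks = extract_3d_landmarks(reconstructed_mesh)  # here (5, 3)

Because the same vertex indices are used for every reconstructed face, each detected feature point keeps a fixed correspondence with the face model template.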
In step S103, a deep neural network is trained whose input is a face image and whose output is a heatmap of the distribution of the three-dimensional face feature point coordinates.
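As an illustrative sketch only, any network satisfying this input/output contract could be used; the patent does not specify the architecture, so the encoder-decoder layout below, the layer sizes, and the assumed 68 feature points are placeholders rather than the actual network.

import torch
import torch.nn as nn

class HeatmapGenerator(nn.Module):
    # Minimal encoder-decoder mapping a face image to per-feature-point
    # heatmaps; only the input/output contract follows the text above.
    def __init__(self, num_points: int = 68):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            # two output channels per point: distribution probability and depth
            nn.ConvTranspose2d(64, num_points * 2, 4, stride=2, padding=1),
        )

    def forward(self, image):                      # image: (B, 3, H, W)
        return self.decoder(self.encoder(image))   # (B, num_points * 2, H, W)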
The deep neural network is pre-trained first; the adversarial method is not used during pre-training, and only the training data are used to construct the training error.
Specifically, as shown in fig. 3, the training input is a face image and the training target is a set of three-dimensional face feature point distribution heatmaps, where each heatmap encodes the distribution probability of one three-dimensional feature point together with the depth of the corresponding point. The generator network is pre-trained: a face image is fed into the generator network, which outputs heatmaps of the three-dimensional face feature point distribution, and the pre-trained network is used to initialize the generator of the generative adversarial network. The combination of the generator's output and its input is taken as a negative sample, and the combination of its input and the corresponding standard output in the data set is taken as a positive sample; both serve as inputs to the discriminator network.
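The sketch below shows one way such target heatmaps could be rendered from the data set's three-dimensional feature points, assuming a Gaussian spatial probability map plus a separate depth channel per feature point; the heatmap resolution, the Gaussian width, and the channel layout are illustrative assumptions, not prescribed by the patent.

import torch

def make_target_heatmaps(points_2d, depths, size=64, sigma=2.0):
    # points_2d: (L, 2) pixel coordinates of the feature points in heatmap
    # space; depths: (L,) depth of each feature point.
    # Returns (L, 2, size, size): channel 0 is the spatial probability map,
    # channel 1 carries the depth at the same locations.
    ys, xs = torch.meshgrid(torch.arange(size, dtype=torch.float32),
                            torch.arange(size, dtype=torch.float32),
                            indexing="ij")
    maps = []
    for (x, y), d in zip(points_2d, depths):
        prob = torch.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
        maps.append(torch.stack([prob, prob * d]))  # depth weighted by the mass
    return torch.stack(maps)

During pre-training, a plain regression error between the generator's output and these targets (for example a mean squared error) can serve as the training error constructed from the training data alone.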
In step S104, during training, a generative adversarial network with a discriminator network is employed, where the input is a face image and a three-dimensional feature point distribution heatmap and the output is true or false indicating whether the input face image and the heatmap match, so that detection results are obtained through the trained neural network.
During training, the true/false output forms a generative adversarial error, and an error term constructed from the three-dimensional face feature points in the data set and the heatmaps produced by the generator network is also used for training. This error term is required within the generative adversarial network for training the neural network.
In other words, a discriminator network is constructed for the pre-trained generator network. The discriminator's input is a face image together with a heatmap of the three-dimensional face feature point distribution; when the input is the combination of a face image in the data set and its corresponding feature point distribution heatmap, the output is true, and otherwise it is false, which constructs the generative adversarial error. The error term constructed from the three-dimensional face feature points in the data set and the heatmaps produced by the generator network, already used in pre-training, is also used for training the network during adversarial training.
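A possible training step is sketched below. It combines the adversarial error (the discriminator's true/false decision on image-heatmap pairs) with the supervised heatmap error carried over from pre-training; the relative weighting, the binary cross-entropy formulation, and the two-argument discriminator interface are assumptions made for the sketch.

import torch
import torch.nn.functional as F

def training_step(generator, discriminator, g_opt, d_opt, image, gt_heatmaps,
                  adv_weight=0.01):
    # Discriminator step: (image, data-set heatmaps) should be judged true,
    # (image, generated heatmaps) should be judged false.
    with torch.no_grad():
        fake = generator(image)
    d_real = discriminator(image, gt_heatmaps)
    d_fake = discriminator(image, fake)
    d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: fool the discriminator (adversarial error) while still
    # matching the data-set heatmaps (the error term kept from pre-training).
    fake = generator(image)
    d_out = discriminator(image, fake)
    g_adv = F.binary_cross_entropy_with_logits(d_out, torch.ones_like(d_out))
    g_sup = F.mse_loss(fake, gt_heatmaps)
    g_loss = g_sup + adv_weight * g_adv
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()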
In detail, as shown in fig. 4, once the network is trained, feeding a face image into it yields a distribution heatmap for each three-dimensional feature point. Each pixel of a heatmap carries the probability that the feature point lies at that position together with the corresponding depth information. After the heatmap is obtained, the pixel with the highest probability is extracted, and the position of the feature point is computed by combining it with the depth information.
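For illustration, the sketch below decodes one feature point per heatmap by taking the pixel with the highest probability and reading the depth stored at the same location; the (probability, depth) channel layout matches the earlier sketches and is an assumption. Converting the resulting (x, y, depth) triple into camera-space coordinates would additionally require the camera intrinsics.

import torch

def decode_heatmaps(heatmaps):
    # heatmaps: (L, 2, H, W); channel 0 = probability, channel 1 = depth
    # (the layout follows the earlier sketches and is an assumption).
    # Returns (L, 3): the (x, y, depth) of each detected feature point.
    L, _, H, W = heatmaps.shape
    prob = heatmaps[:, 0].reshape(L, -1)
    idx = prob.argmax(dim=1)                       # highest-probability pixel
    ys, xs = idx // W, idx % W
    depth = heatmaps[:, 1].reshape(L, -1).gather(1, idx.unsqueeze(1)).squeeze(1)
    return torch.stack([xs.float(), ys.float(), depth], dim=1)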
According to the method for detecting three-dimensional face feature points based on deep learning provided by the embodiment of the invention, a network for detecting three-dimensional face feature points is obtained by training with a deep learning method, and face edge feature points with a well-defined correspondence to the face model template are obtained, so that the face reconstruction result is more accurate and the method is simple to apply.
Next, a three-dimensional face feature point detection system based on deep learning according to an embodiment of the present invention will be described with reference to the drawings.
Fig. 5 is a schematic structural diagram of a system for detecting three-dimensional feature points of a human face based on deep learning according to an embodiment of the present invention.
As shown in fig. 5, the system 10 includes: a processing module 100, a pre-training module 200, and a generative adversarial training module 300.
The processing module 100 is configured to establish a face data set, process the face images in the data set through three-dimensional face reconstruction to obtain the face geometry, designate specific vertices on a three-dimensional face template as feature points, and establish a data set consisting of face images and their corresponding three-dimensional face feature points.
Further, in an embodiment of the present invention, the feature points are three-dimensional feature points with a defined correspondence to the three-dimensional face template, and the face images in the face data set are processed through three-dimensional face reconstruction, which includes face edge point detection so as to obtain edge points on both sides of the face geometry rather than the face contour points visible in the image.
The pre-training module 200 is configured to train a deep neural network whose input is a face image and whose output is a heatmap of the distribution of the three-dimensional face feature point coordinates.
The generative adversarial training module 300 is configured to employ, during training, a generative adversarial network with a discriminator network whose input is a face image and a three-dimensional feature point distribution heatmap and whose output is true or false indicating whether they match, so that detection results are obtained through the trained neural network.
Further, in an embodiment of the present invention: during training, the true/false output forms a generative adversarial error, and an error term constructed from the three-dimensional face feature points in the data set and the heatmaps produced by the generator network is also used for training. This error term is required within the generative adversarial network for training the neural network.
It should be noted that the foregoing explanation of the embodiment of the method for detecting a three-dimensional feature point of a human face based on deep learning is also applicable to the system, and is not repeated here.
According to the system for detecting three-dimensional face feature points based on deep learning provided by the embodiment of the invention, a network for detecting three-dimensional face feature points is obtained by training with a deep learning method, and face edge feature points with a well-defined correspondence to the face model template are obtained, so that the face reconstruction result is more accurate and the system is simple to apply.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
In the present invention, unless otherwise expressly stated or limited, the terms "mounted," "connected," "secured," and the like are to be construed broadly: a connection may, for example, be a fixed connection, a detachable connection, or an integral formation; it may be mechanical or electrical; and it may be direct, indirect through an intermediate medium, or an internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific situation.
In the present invention, unless otherwise expressly stated or limited, a first feature being "on" or "under" a second feature may mean that the first and second features are in direct contact, or in indirect contact through an intermediate medium. Moreover, a first feature being "on," "over," or "above" a second feature may mean that the first feature is directly or obliquely above the second feature, or merely that the first feature is at a higher level than the second feature. A first feature being "under," "below," or "beneath" a second feature may mean that the first feature is directly or obliquely below the second feature, or merely that the first feature is at a lower level than the second feature.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (10)

1. A method for detecting three-dimensional face feature points based on deep learning, characterized by comprising the following steps:
establishing a face data set, and processing the face images in the face data set through three-dimensional face reconstruction to obtain the face geometry;
designating specific vertices on a three-dimensional face template as feature points, and establishing a data set consisting of face images and their corresponding three-dimensional face feature points;
training a deep neural network whose input is a face image and whose output is a heatmap of the distribution of the three-dimensional face feature point coordinates, wherein the deep neural network is pre-trained and used as the initial generator network of a generative adversarial network, a discriminator network is constructed for the initial generator network, the combination of the generator's output and its input is taken as a negative sample, and the combination of its input and the corresponding standard output in the data set is taken as a positive sample, both serving as inputs to the discriminator network; and
during training, employing the generative adversarial network with the discriminator network, wherein the input is a face image and a three-dimensional feature point distribution heatmap and the output is true or false indicating whether the input face image and the three-dimensional feature point distribution heatmap match, so as to obtain a detection result through the trained neural network; if they match, extracting the pixel with the highest probability from the three-dimensional feature point distribution heatmap, and computing the position of the current feature point by combining the depth information from the trained deep neural network.
2. The method for detecting three-dimensional face feature points based on deep learning according to claim 1, wherein the feature points are three-dimensional feature points, and the three-dimensional feature points have a defined correspondence with the three-dimensional face template.
3. The method according to claim 1, wherein the processing of the face images in the face data set through three-dimensional face reconstruction includes face edge point detection, which obtains edge points on both sides of the face geometry rather than the face contour points visible in the image.
4. The method for detecting three-dimensional face feature points based on deep learning according to claim 1, further comprising:
during training, forming a generative adversarial error from the true/false output, and training with an error term constructed from the three-dimensional face feature points in the data set and the three-dimensional feature point distribution heatmap produced by the generator network.
5. The method for detecting three-dimensional face feature points based on deep learning according to claim 4, wherein the error term is required within the generative adversarial network for training the neural network.
6. A system for detecting three-dimensional face feature points based on deep learning, characterized by comprising:
a processing module, configured to establish a face data set, process the face images in the face data set through three-dimensional face reconstruction to obtain the face geometry, designate specific vertices on a three-dimensional face template as feature points, and establish a data set consisting of face images and their corresponding three-dimensional face feature points;
a pre-training module, configured to train a deep neural network whose input is a face image and whose output is a heatmap of the distribution of the three-dimensional face feature point coordinates, wherein the deep neural network is pre-trained and used as the initial generator network of a generative adversarial network, a discriminator network is constructed for the initial generator network, the combination of the generator's output and its input is taken as a negative sample, and the combination of its input and the corresponding standard output in the data set is taken as a positive sample, both serving as inputs to the discriminator network; and
a generative adversarial training module, configured to employ, during training, the generative adversarial network with the discriminator network, wherein the input is a face image and a three-dimensional feature point distribution heatmap and the output is true or false indicating whether the input face image and the three-dimensional feature point distribution heatmap match, so as to obtain a detection result through the trained neural network; if they match, the pixel with the highest probability is extracted from the three-dimensional feature point distribution heatmap, and the position of the current feature point is computed by combining the depth information from the trained deep neural network.
7. The system for detecting three-dimensional face feature points based on deep learning according to claim 6, wherein the feature points are three-dimensional feature points, and the three-dimensional feature points have a defined correspondence with the three-dimensional face template.
8. The system according to claim 6, wherein the processing of the face images in the face data set through three-dimensional face reconstruction includes face edge point detection, which obtains edge points on both sides of the face geometry rather than the face contour points visible in the image.
9. The system for detecting three-dimensional face feature points based on deep learning according to claim 6, further comprising:
during training, forming a generative adversarial error from the true/false output, and training with an error term constructed from the three-dimensional face feature points in the data set and the three-dimensional feature point distribution heatmap produced by the generator network.
10. The system for detecting three-dimensional face feature points based on deep learning according to claim 9, wherein the error term is required within the generative adversarial network for training the neural network.
CN201910138641.6A 2019-02-25 2019-02-25 Human face three-dimensional feature point detection method and system based on deep learning Active CN109902616B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910138641.6A CN109902616B (en) 2019-02-25 2019-02-25 Human face three-dimensional feature point detection method and system based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910138641.6A CN109902616B (en) 2019-02-25 2019-02-25 Human face three-dimensional feature point detection method and system based on deep learning

Publications (2)

Publication Number Publication Date
CN109902616A (en) 2019-06-18
CN109902616B true CN109902616B (en) 2020-12-01

Family

ID=66945627

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910138641.6A Active CN109902616B (en) 2019-02-25 2019-02-25 Human face three-dimensional feature point detection method and system based on deep learning

Country Status (1)

Country Link
CN (1) CN109902616B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110569768B (en) * 2019-08-29 2022-09-02 四川大学 Construction method of face model, face recognition method, device and equipment
CN110516643A (en) * 2019-08-30 2019-11-29 电子科技大学 A kind of face 3D critical point detection method and system based on joint thermodynamic chart
CN110765976B (en) * 2019-11-01 2021-02-09 重庆紫光华山智安科技有限公司 Generation method of human face characteristic points, training method of data network and related device
CN113128292A (en) * 2019-12-31 2021-07-16 Tcl集团股份有限公司 Image identification method, storage medium and terminal equipment
CN111583422B (en) * 2020-04-17 2023-03-28 清华大学 Heuristic editing method and device for three-dimensional human body model
CN112883920A (en) * 2021-03-22 2021-06-01 清华大学 Point cloud deep learning-based three-dimensional face scanning feature point detection method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140185924A1 (en) * 2012-12-27 2014-07-03 Microsoft Corporation Face Alignment by Explicit Shape Regression

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106980812A (en) * 2016-12-14 2017-07-25 四川长虹电器股份有限公司 Three-dimensional face features' independent positioning method based on concatenated convolutional neutral net
CN107122705A (en) * 2017-03-17 2017-09-01 中国科学院自动化研究所 Face critical point detection method based on three-dimensional face model
CN107045631A (en) * 2017-05-25 2017-08-15 北京华捷艾米科技有限公司 Facial feature points detection method, device and equipment
CN109241910A (en) * 2018-09-07 2019-01-18 高新兴科技集团股份有限公司 A kind of face key independent positioning method returned based on the cascade of depth multiple features fusion
CN109063695A (en) * 2018-09-18 2018-12-21 图普科技(广州)有限公司 A kind of face critical point detection method, apparatus and its computer storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
"Look at Boundary: A Boundary-Aware Face Alignment Algorithm";Wayne Wu,Chen Qian et al.;《arXiv》;20180526;第1-10页 *
"Two-stage Convolutional Part Heatmap Regression for the 1st 3D Face Alignment in the Wild (3DFAW) Challenge";Adrian Bulat and Georgios Tzimiropoulos;《arXiv》;20160929;第1-9页 *
"基于深度学习的人脸多属性识别系统";杨俊钦,张雨楠,林实锋,徐伟臻;《图形图像》;20190215;第52-55页 *

Also Published As

Publication number Publication date
CN109902616A (en) 2019-06-18

Similar Documents

Publication Publication Date Title
CN109902616B (en) Human face three-dimensional feature point detection method and system based on deep learning
Pusztai et al. Accurate calibration of multi-lidar-multi-camera systems
CN101488187B (en) System and method for deformable object recognition
JP6708385B2 (en) Discriminator creating device, discriminator creating method, and program
CA2778651C (en) Method and system for evaluating the resemblance of a query object to reference objects
Yuan et al. An improved Otsu threshold segmentation method for underwater simultaneous localization and mapping-based navigation
EP2579210A1 (en) Face feature-point position correction device, face feature-point position correction method, and face feature-point position correction program
US20140219569A1 (en) Image recognition system and method for identifying similarities in different images
CN109753960A (en) The underwater unnatural object detection method of isolated forest based on fractal theory
CN108198172B (en) Image significance detection method and device
JP6185385B2 (en) Spatial structure estimation apparatus, spatial structure estimation method, and spatial structure estimation program
CN105938513A (en) Apparatus and method for providing reliability for computer aided diagnosis
Dey et al. Effective selection of variable point neighbourhood for feature point extraction from aerial building point cloud data
CN104303209B (en) Fingerprint ridge image synthesis system and fingerprint ridge image synthesis method
Le et al. Acquiring qualified samples for RANSAC using geometrical constraints
Haider et al. What can we learn from depth camera sensor noise?
Kang et al. Primitive fitting based on the efficient multibaysac algorithm
Fremont et al. Circular targets for 3d alignment of video and lidar sensors
Himri et al. 3D object recognition based on point clouds in underwater environment with global descriptors: A survey
Reiss et al. A low‐cost 3D reconstruction system using a single‐shot projection of a pattern matrix
Kang et al. Detecting maritime obstacles using camera images
CN104463825A (en) Apparatus and method for detecting objects in three-dimensional volumetric image
CN112883920A (en) Point cloud deep learning-based three-dimensional face scanning feature point detection method and device
CN116310837B (en) SAR ship target rotation detection method and system
CN110363863B (en) Input data generation method and system of neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant