CN109858433B - Method and device for identifying two-dimensional face picture based on three-dimensional face model - Google Patents


Info

Publication number
CN109858433B
CN109858433B
Authority
CN
China
Prior art keywords
face
dimensional
dimensional face
face image
model
Prior art date
Legal status
Active
Application number
CN201910082406.1A
Other languages
Chinese (zh)
Other versions
CN109858433A (en)
Inventor
傅可人
游志胜
Current Assignee
Sichuan University
Original Assignee
Sichuan University
Priority date
Filing date
Publication date
Application filed by Sichuan University filed Critical Sichuan University
Priority to CN201910082406.1A
Publication of CN109858433A
Application granted
Publication of CN109858433B

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a face recognition method for recognizing a two-dimensional face picture based on a three-dimensional face model, which comprises the following steps: preprocessing a face image to be recognized to obtain feature points and attitude angles of the face image to be recognized; performing in-plane alignment and out-of-plane alignment on the face image to be recognized to obtain a first multi-pose face image set; extracting the feature vectors of the first multi-pose face image set through a convolutional neural network, and computing a first average feature vector from them; establishing a three-dimensional face data set, and performing in-plane and out-of-plane alignment processing on each three-dimensional face model in the three-dimensional face data set to generate a second multi-pose face image set for each model; extracting the feature vectors of the second multi-pose face image set through a convolutional neural network, and computing a second average feature vector; and comparing the first average feature vector with the second average feature vector to obtain a face recognition result. The method performs two-dimensional face image recognition by using the rich pose information contained in the three-dimensional face, thereby further improving face recognition accuracy.

Description

Method and device for identifying two-dimensional face picture based on three-dimensional face model
Technical Field
The invention relates to the technical field of computer vision and pattern recognition, in particular to a method and a device for recognizing a two-dimensional face picture based on a three-dimensional face model.
Background
As one of the most widespread biometric techniques, face recognition has brought great convenience to people's daily lives. Traditional face recognition is based on two-dimensional (2D) pictures: both the registration end and the recognition end collect 2D images of the face, and face features are then extracted and compared to complete recognition. Current 2D face recognition achieves good results under constrained conditions (good illumination and frontal pose), but its performance drops sharply under unconstrained conditions, for example when the face undergoes large pose changes. 2D face recognition technology is gradually maturing and has reached a certain bottleneck, and further progress requires the introduction of additional feature information to overcome the shortcomings of 2D face recognition. Face recognition based on a three-dimensional (3D) face model is one of the trends of future development: a 3D face model carries richer information, such as the three-dimensional shape, than a 2D face picture and can improve recognition performance. However, this requires 3D sensors for capturing 3D faces at both the registration end and the recognition end, and replacing all of today's 2D cameras with 3D sensors cannot be achieved in the short term. A practical scheme is that the registration end collects a 3D face while the recognition end collects a 2D face for recognition, which involves the related technology of recognizing a two-dimensional face picture with a three-dimensional face model; this is also the background of the present invention.
At present, the technology for recognizing two-dimensional face images with a three-dimensional face model is lacking; most existing techniques still recognize three-dimensional faces with three-dimensional face models rather than recognizing two-dimensional face images with them. Chinese patent application publication No. CN108427871A discloses a 3D face rapid identity authentication method and apparatus, in which a three-dimensional face model is rotated to the same pose as the two-dimensional image to be recognized and projected to a two-dimensional image, and the projected two-dimensional image is then compared with the two-dimensional image to be recognized for recognition.
However, this approach projects the three-dimensional model only to the single pose corresponding to the two-dimensional image, and does not exploit the rich pose information contained in the three-dimensional face for two-dimensional face image recognition.
Disclosure of Invention
At least one of the objectives of the present invention is to overcome the above problems in the prior art and to provide a method and an apparatus for recognizing a two-dimensional face picture based on a three-dimensional face model. The method performs two-dimensional face image recognition using the rich pose information provided by the three-dimensional face, which alleviates the problem that single-pose recognition may fail or produce errors in complex environments and improves the robustness and fault tolerance of the system. Because the adopted method does not depend on the brightness of the photo, the influence of the illumination intensity of the shooting environment on the face recognition process is reduced, so that a more satisfactory face recognition effect can be achieved, the face recognition accuracy is effectively improved, and face recognition becomes more practical.
In order to achieve the above object, the present invention adopts the following aspects.
A face recognition method for recognizing a two-dimensional face picture based on a three-dimensional face model comprises the following steps:
step 101, acquiring a certain number of face images to be recognized, preprocessing the face images to be recognized to obtain feature points of the face images to be recognized, and acquiring attitude angles of the face images to be recognized;
102, respectively carrying out in-plane alignment processing and out-of-plane alignment processing on the face image to be recognized to obtain a first multi-pose face image set after alignment; the first multi-pose face image set is a two-dimensional face image set obtained by performing in-plane alignment processing and out-of-plane alignment processing on a face image to be recognized, and comprises two-dimensional face images of a plurality of faces under a plurality of poses;
103, extracting the feature vectors of the first multi-pose face image set through a convolutional neural network, and solving a first average feature vector of each face in the first multi-pose face image set under multiple poses;
104, acquiring a three-dimensional face data set, and aligning each three-dimensional face model in the three-dimensional face data set to different postures through out-of-plane alignment and in-plane alignment to generate a second multi-posture face image set which comprises two-dimensional images of each three-dimensional face model under multiple postures;
105, extracting the feature vectors of the second multi-pose face image set through a convolutional neural network, and solving a second average feature vector of the two-dimensional image of each three-dimensional face model in the second multi-pose face image set under multiple poses;
and 106, comparing a first average characteristic vector obtained according to the face image to be recognized with a second average characteristic vector obtained according to each three-dimensional face model in the three-dimensional face data set to obtain a face recognition result.
Preferably, in the face recognition method for recognizing a two-dimensional face picture based on a three-dimensional face model, the attitude angle is a yaw angle.
Preferably, in the face recognition method for recognizing a two-dimensional face image based on a three-dimensional face model, performing in-plane alignment processing on a two-dimensional face region image of the face image to be recognized includes:
and determining a similarity transformation relation between the two-dimensional face region image feature point coordinates and the template point coordinates, and obtaining a two-dimensional face image subjected to similarity transformation.
Preferably, in a face recognition method for recognizing a two-dimensional face image based on a three-dimensional face model, the performing out-of-plane alignment processing on a two-dimensional face region image of the face image to be recognized includes:
and generating a three-dimensional model from the two-dimensional face region image, determining a projection function according to the attitude angle, and projecting the generated three-dimensional model to the corresponding two-dimensional face image according to the attitude based on the projection function.
Preferably, in a face recognition method for recognizing a two-dimensional face picture based on a three-dimensional face model, the step 104 specifically includes:
acquiring a three-dimensional face data set, rotating a three-dimensional face model in the three-dimensional face data set to a corresponding attitude angle of a face image to be recognized, projecting the rotated three-dimensional face model to a two-dimensional image, and performing in-plane alignment processing on the projected two-dimensional image according to corresponding characteristic points; and performing out-of-plane alignment processing on the three-dimensional face model, namely projecting the three-dimensional face model to a corresponding two-dimensional face image according to the attitude angle to generate a second multi-attitude face image set after alignment.
Preferably, in the face recognition method for recognizing the two-dimensional face picture based on the three-dimensional face model, the convolutional neural network is one of Inception-v4, Inception-ResNet-v1 and Inception-ResNet-v2.
Preferably, in a face recognition method for recognizing a two-dimensional face picture based on a three-dimensional face model, the first average feature vector is compared with the second average feature vector by calculating cosine similarity or Euclidean distance between vectors.
An apparatus for recognizing a two-dimensional face picture based on a three-dimensional face model includes at least one processor, and a memory communicatively connected to the at least one processor; the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the above-described method.
In summary, due to the adoption of the technical scheme, the invention at least has the following beneficial effects:
the method has the advantages that the abundant attitude information contained in the three-dimensional face is utilized to carry out two-dimensional face image recognition, the problem that the single attitude recognition is possibly used in a complex environment and cannot be recognized or recognition errors are solved, the robustness and fault-tolerant capability of the system are improved, and the method is irrelevant to the brightness degree of a photo, so that the influence of the illumination intensity of a shooting environment in the face recognition process can be solved, the ideal face recognition effect can be achieved, the face recognition accuracy is effectively improved, and the face recognition is more practical.
Drawings
Fig. 1 is a flowchart of a face recognition method for recognizing a two-dimensional face picture based on a three-dimensional face model according to an exemplary embodiment of the present invention.
Fig. 2 is a schematic diagram of a face recognition method for recognizing a two-dimensional face picture based on a three-dimensional face model according to an exemplary embodiment of the present invention.
Fig. 3 is a schematic diagram of a three-dimensional face model face feature point according to an exemplary embodiment of the invention.
Fig. 4 is a schematic structural diagram of an apparatus for recognizing a two-dimensional face picture based on a three-dimensional face model according to an exemplary embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and embodiments, so that the objects, technical solutions and advantages of the present invention will be more clearly understood. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Fig. 1 and 2 illustrate a face recognition method for recognizing a two-dimensional face picture based on a three-dimensional face model according to an exemplary embodiment of the present invention. The method of this embodiment mainly includes:
step 101, acquiring a certain number of face images to be recognized, preprocessing the face images to be recognized to obtain face characteristic points of the face images to be recognized, and acquiring attitude angles of the face images to be recognized;
specifically, the face image to be recognized may be a two-dimensional face image (corresponding to a three-dimensional face model in a three-dimensional face database) acquired by screening, or may be a two-dimensional image captured by monitoring, a camera, or the like. Therefore, the image to be recognized is a two-dimensional image, and the three-dimensional face model is in the image library, and the face recognition question in this embodiment is to answer the identity of which three-dimensional face model in the library the image to be recognized belongs to, i.e., to recognize the two-dimensional face image by using the three-dimensional face model.
Preprocessing the face image to be recognized comprises: detecting the two-dimensional face image region with a face detection algorithm (there are many methods for extracting the two-dimensional face image region; here the face detection algorithm in the document "Joint Face Detection and Alignment Using Multitask Cascaded Convolutional Networks" is adopted); and extracting the face feature points of the two-dimensional face region in the image to be recognized with a face feature point detection algorithm (there are many methods for extracting feature points of a two-dimensional face region; here the face feature point detection algorithm proposed in the document "How far are we from solving the 2D & 3D Face Alignment problem? (and a dataset of 230,000 3D facial landmarks)" is adopted to obtain 68 feature points of the face).
The attitude angle (ω, θ, φ) of the two-dimensional face image region in the face image to be recognized is then estimated (the pose angle of the face is estimated here using the method proposed in the document "Fine-Grained Head Pose Estimation Without Keypoints"), where ω denotes the yaw angle, θ denotes the roll angle, and φ denotes the pitch angle. Meanwhile, it is generally accepted that the yaw angle ω has the most significant influence on face recognition, so in this embodiment only the influence of ω is considered, i.e. the attitude angle is the yaw angle. Moreover, because of the left-right symmetry of the face, only the case ω ≥ 0° is considered; if the yaw angle of an image does not satisfy ω ≥ 0°, the two-dimensional face region image is horizontally flipped so that the yaw angle satisfies ω ≥ 0°.
102, respectively carrying out in-plane alignment processing and out-of-plane alignment processing on the face image to be recognized to obtain a first multi-pose face image set after alignment; the first multi-pose face image set is a two-dimensional face image set obtained through in-plane alignment processing and out-of-plane alignment processing, and comprises two-dimensional face images of a plurality of faces under a plurality of poses;
specifically, in-plane alignment processing and out-of-plane alignment processing are performed on each face image to be recognized, and the in-plane alignment processing and the out-of-plane alignment processing can be performed simultaneously or can be performed in a certain sequence, so that the operation result is not affected. The in-plane alignment processing is to calculate similarity transformation from coordinates of feature points (fig. 3 is a schematic diagram of human face feature points) to coordinates of template points, and perform similarity transformation on a human face region by using the transformation relation. Representing the input face region image by I and using the function
Figure BDA0001960773260000066
Representing according to attitude angle
Figure BDA0001960773260000063
A determined similarity transformation operation, the transformed image being represented as
Figure BDA0001960773260000064
Figure BDA0001960773260000065
In the above formula, when the yaw angle ω of the face is smaller than 45 degrees, nine stable frontal feature points are used for alignment; when the yaw angle of the face is larger, 3 feature points, namely the centers of the two eyes and the tip of the nose, are used to calculate the similarity transformation. These feature points can be obtained directly from the 68 facial feature points. In addition, calculating the corresponding similarity transformation from the correspondence between feature points and template points is well known to those skilled in the art.
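A minimal sketch of this in-plane alignment step is given below: it estimates a similarity transformation from detected landmark coordinates to template coordinates and warps the face region accordingly. The output size and the 3-point template coordinates shown are assumed values for illustration, not values prescribed by this patent.

```python
import cv2
import numpy as np


def in_plane_align(face_img, landmarks, template_pts, out_size=(112, 112)):
    """Warp face_img so that `landmarks` (Nx2) map onto `template_pts` (Nx2).

    Corresponds to formula (1): a similarity transform (rotation, uniform
    scale, translation) estimated from feature-point correspondences.
    """
    src = np.asarray(landmarks, dtype=np.float32)
    dst = np.asarray(template_pts, dtype=np.float32)
    # estimateAffinePartial2D restricts the fit to a 4-DOF similarity transform
    M, _ = cv2.estimateAffinePartial2D(src, dst)
    return cv2.warpAffine(face_img, M, out_size)


# Example 3-point scheme (eye centers and nose tip) used for large yaw angles;
# the template coordinates below are illustrative only.
template_3pt = [(38.0, 51.0), (74.0, 51.0), (56.0, 71.0)]
```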
Further, the face image to be recognized is subjected to out-of-plane alignment processing, that is, a three-dimensional model is generated from the two-dimensional image and the generated three-dimensional model is projected to a two-dimensional image according to the pose. Specifically, a method for generating a three-dimensional model from a single two-dimensional image is denoted by F, the input face region image by I, and the generated three-dimensional face model by F(I); the projection function determined by the attitude angle (ω, θ, φ) is denoted by P_(ω,θ,φ)(·). In this embodiment, F generates the three-dimensional model from a single two-dimensional image using the 3DMM method proposed in the document "A Morphable Model For The Synthesis Of 3D Faces", and the result of projecting the three-dimensional model is designed as:

I_out = P_(ω,θ,φ)(F(I))    (2)
thereby obtaining a first multi-pose face image set after alignment; the first multi-pose face image set is a two-dimensional face image set obtained through in-plane alignment processing and out-of-plane alignment processing, and comprises two-dimensional face images of a plurality of faces under a plurality of poses.
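As a rough illustration of the out-of-plane alignment, the sketch below rotates a reconstructed 3D face point cloud about the vertical axis by a target yaw and projects it orthographically onto the image plane. The 3DMM fitting step F is assumed to be supplied by an external library, and the simple orthographic projection used here is only one possible choice for the projection function P_(ω,θ,φ)(·) in formula (2).

```python
import numpy as np


def project_to_yaw(vertices, yaw_deg):
    """Rotate an (N, 3) vertex array about the y-axis by yaw_deg degrees and
    drop the depth coordinate (orthographic projection), giving (N, 2) points.
    Plays the role of the projection in formula (2) for a pure-yaw pose."""
    a = np.deg2rad(yaw_deg)
    R_y = np.array([[np.cos(a), 0.0, np.sin(a)],
                    [0.0,       1.0, 0.0],
                    [-np.sin(a), 0.0, np.cos(a)]])
    rotated = vertices @ R_y.T
    return rotated[:, :2]
```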
103, extracting the feature vectors of the first multi-pose face image set through a convolutional neural network, and solving a first average feature vector of each face in the first multi-pose face image set under multiple poses;
specifically, the facial feature vectors of the series of multi-pose projection images (the first multi-pose facial image set) obtained in step 102 are extracted through corresponding convolutional neural networks, and the feature vectors obtained from each face are averaged to obtain the final feature vector. The convolutional neural network is one of common depth feature extraction neural networks of increment-v 4, increment-Resnet-v 1 and increment-Resnet-v 2, and is used for performing multiple residual error convolution processing on human image face data to extract depth feature vectors of images. And the convolutional neural network corresponding to a certain attitude should be subjected to the optimization training of the sample under the attitude, so that the corresponding convolutional neural network can extract the depth feature of the image under the attitude.
104, acquiring a three-dimensional face data set, and aligning each three-dimensional face model in the three-dimensional face data set to different postures through out-of-plane alignment and in-plane alignment to generate a second multi-posture face image set which comprises two-dimensional images of each three-dimensional face model under multiple postures;
specifically, a certain number of three-dimensional face models (acquired by a special three-dimensional image acquisition device or acquired from a corresponding registry) are acquired to establish a corresponding three-dimensional face data set. Aligning each three-dimensional face model in the data set to different postures through out-of-plane alignment and in-plane alignment, namely rotating the three-dimensional face model in the three-dimensional face data set to a corresponding posture angle of a face image to be recognized, projecting the rotated three-dimensional face model to a two-dimensional image, and performing in-plane alignment processing on the projected two-dimensional image according to corresponding characteristic points; and performing out-of-plane alignment processing on the three-dimensional face model, namely projecting the three-dimensional face model to a corresponding two-dimensional face image according to the attitude angle, wherein the finally obtained two-dimensional face image set subjected to in-plane alignment and out-of-plane alignment processing is an aligned second multi-attitude face image set, and the second multi-attitude face image set comprises two-dimensional images of each three-dimensional face model under multiple attitudes.
Wherein the inner alignment process comprises: firstly, the three-dimensional model G is rotated to the attitude angle (ω, θ, φ) of the face to be recognized and projected to a two-dimensional image, the projected image being denoted I'; in-plane alignment is then performed on I' according to the feature points. To maintain the consistency of face recognition, the same in-plane alignment operation S_(ω,θ,φ)(·) as in step 102 is used, namely formula (1).
The outer alignment process includes: the three-dimensional model G is projected to a two-dimensional image with the same projection P_(ω,θ,φ)(·) as in step 102, namely formula (2). The two-dimensional face image sets obtained by performing the inner and outer alignment processing on each three-dimensional face model in the three-dimensional face data set together form the second multi-pose face image set.
105, extracting the feature vectors of the second multi-pose face image set through a convolutional neural network, and solving a second average feature vector of the two-dimensional image of each three-dimensional face model in the second multi-pose face image set under multiple poses;
specifically, the convolutional neural network in step 103 is used to perform feature extraction on the second multi-pose face image set obtained in step 104, so as to obtain an average feature vector of two-dimensional images (multi-pose two-dimensional images obtained by outer alignment and inner alignment) corresponding to each three-dimensional face model in the second multi-pose face image set.
And 106, comparing a first average characteristic vector obtained according to the face image to be recognized with a second average characteristic vector obtained according to each three-dimensional face model in the three-dimensional face data set to obtain a face recognition result.
Specifically, the first average feature vector obtained by processing the set of face images to be recognized is compared with the second average feature vectors obtained from the three-dimensional face data set to obtain the face recognition result. Face feature vector comparison is a commonly used step in the field and can be performed by calculating the cosine similarity or the Euclidean distance between vectors. When the feature vector of a certain three-dimensional face model in the three-dimensional face data set has the highest similarity to, or the smallest Euclidean distance from, the feature vector of a face image to be recognized, that three-dimensional face model is the recognition result corresponding to the two-dimensional face image to be recognized.
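The cosine-similarity variant of this comparison can be sketched as follows; with L2-normalized vectors the cosine similarity reduces to a dot product, and the gallery structure mirrors the assumed layout in the earlier sketch.

```python
import numpy as np


def recognize(probe_vec, gallery):
    """Return the gallery model id whose average feature vector has the
    highest cosine similarity to the probe's average feature vector.
    Assumes all vectors are L2-normalized, so cosine similarity is a dot
    product."""
    best_id, best_sim = None, -np.inf
    for model_id, gal_vec in gallery.items():
        sim = float(np.dot(probe_vec, gal_vec))
        if sim > best_sim:
            best_id, best_sim = model_id, sim
    return best_id, best_sim
```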
In a further embodiment of the present invention, steps 102 to 103 (obtaining the average feature vector of the two-dimensional face image data to be recognized) and steps 104 to 105 (obtaining the average feature vectors of the three-dimensional face data set) may be performed in sequence or simultaneously; their order does not matter.
In this embodiment, two-dimensional face image recognition is performed by using the rich pose information contained in the three-dimensional face. This alleviates the problem that single-pose recognition may fail or produce errors in complex environments, and improves the robustness and fault tolerance of the system. Moreover, because the adopted method does not depend on the brightness of the photo, the influence of the illumination intensity of the shooting environment on the face recognition process is reduced, a more satisfactory face recognition effect can be achieved, the face recognition accuracy is effectively improved, and face recognition becomes more practical.
Fig. 4 illustrates an apparatus for recognizing a two-dimensional face picture based on a three-dimensional face model, namely an electronic device 310 (e.g., a computer server with program execution function) including at least one processor 311, a power supply 314, and a memory 312 and an input/output interface 313 communicatively connected to the at least one processor 311, according to an exemplary embodiment of the present invention; the memory 312 stores instructions executable by the at least one processor 311, the instructions being executable by the at least one processor 311 to enable the at least one processor 311 to perform a method disclosed in any one of the embodiments; the input/output interface 313 may include a display, a keyboard, a mouse, and a USB interface for inputting/outputting data; the power supply 314 is used to provide power to the electronic device 310.
Those skilled in the art will understand that: all or part of the steps for realizing the method embodiments can be completed by hardware related to program instructions, the program can be stored in a computer readable storage medium, and the program executes the steps comprising the method embodiments when executed; and the aforementioned storage medium includes: various media that can store program codes, such as a removable Memory device, a Read Only Memory (ROM), a magnetic disk, or an optical disk.
When the integrated unit of the present invention is implemented in the form of a software functional unit and sold or used as a separate product, it may also be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present invention may be essentially implemented or a part contributing to the prior art may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present invention. And the aforementioned storage medium includes: a removable storage device, a ROM, a magnetic or optical disk, or other various media that can store program code.
The foregoing is merely a detailed description of specific embodiments of the invention and is not intended to limit the invention. Various alterations, modifications and improvements will occur to those skilled in the art without departing from the spirit and scope of the invention.

Claims (8)

1. A face recognition method for recognizing a two-dimensional face picture based on a three-dimensional face model is characterized by comprising the following steps:
step 101, acquiring a certain number of face images to be recognized, preprocessing the face images to be recognized to obtain feature points of the face images to be recognized, and acquiring attitude angles of the face images to be recognized;
102, respectively carrying out in-plane alignment processing and out-of-plane alignment processing on the face image to be recognized to obtain a first multi-pose face image set after alignment; the first multi-pose face image set is a two-dimensional face image set obtained by performing in-plane alignment processing and out-of-plane alignment processing on a face image to be recognized, and comprises two-dimensional face images of a plurality of faces under a plurality of poses;
103, extracting the feature vectors of the first multi-pose face image set through a convolutional neural network, and solving a first average feature vector of each face in the first multi-pose face image set under multiple poses;
104, acquiring a three-dimensional face data set, and aligning each three-dimensional face model in the three-dimensional face data set to different postures through out-of-plane alignment and in-plane alignment to generate a second multi-posture face image set which comprises two-dimensional images of each three-dimensional face model under multiple postures;
105, extracting the feature vectors of the second multi-pose face image set through a convolutional neural network, and solving a second average feature vector of the two-dimensional image of each three-dimensional face model in the second multi-pose face image set under multiple poses;
and 106, comparing a first average characteristic vector obtained according to the face image to be recognized with a second average characteristic vector obtained according to each three-dimensional face model in the three-dimensional face data set to obtain a face recognition result.
2. The method of claim 1, wherein the attitude angle is a yaw angle.
3. The method of claim 1, wherein the in-plane alignment process comprises:
and determining a similarity transformation relation from the characteristic point coordinates of the face image to be recognized to the template point coordinates, and obtaining a two-dimensional face image subjected to similarity transformation.
4. The method of claim 1, wherein the out-of-plane alignment process comprises:
and generating a three-dimensional model of the face image to be recognized, determining a projection function according to the attitude angle, and projecting the generated three-dimensional model to a corresponding two-dimensional face image according to the attitude based on the projection function.
5. The method according to claim 1, wherein the step 104 specifically comprises:
acquiring a three-dimensional face data set, rotating a three-dimensional face model in the three-dimensional face data set to a corresponding attitude angle of a face image to be recognized, projecting the rotated three-dimensional face model to a two-dimensional image, and performing in-plane alignment processing on the projected two-dimensional image according to corresponding characteristic points; and performing out-of-plane alignment processing on the three-dimensional face model, namely projecting the three-dimensional face model to a corresponding two-dimensional face image according to the attitude angle to generate a second multi-attitude face image set after alignment.
6. The method of claim 1, wherein the convolutional neural network is one of Inception-v4, Inception-ResNet-v1 and Inception-ResNet-v2.
7. The method of claim 1, wherein the comparing of the first average eigenvector with the second average eigenvector is performed by calculating cosine similarity or Euclidean distance between the vectors.
8. An apparatus for recognizing a two-dimensional face picture based on a three-dimensional face model is characterized by comprising at least one processor and a memory which is in communication connection with the at least one processor; the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 7.
CN201910082406.1A 2019-01-28 2019-01-28 Method and device for identifying two-dimensional face picture based on three-dimensional face model Active CN109858433B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910082406.1A CN109858433B (en) 2019-01-28 2019-01-28 Method and device for identifying two-dimensional face picture based on three-dimensional face model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910082406.1A CN109858433B (en) 2019-01-28 2019-01-28 Method and device for identifying two-dimensional face picture based on three-dimensional face model

Publications (2)

Publication Number Publication Date
CN109858433A CN109858433A (en) 2019-06-07
CN109858433B true CN109858433B (en) 2020-06-30

Family

ID=66896606

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910082406.1A Active CN109858433B (en) 2019-01-28 2019-01-28 Method and device for identifying two-dimensional face picture based on three-dimensional face model

Country Status (1)

Country Link
CN (1) CN109858433B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110321821B (en) * 2019-06-24 2022-10-25 深圳爱莫科技有限公司 Human face alignment initialization method and device based on three-dimensional projection and storage medium
CN112528902B (en) * 2020-12-17 2022-05-24 四川大学 Video monitoring dynamic face recognition method and device based on 3D face model
CN113808274A (en) * 2021-09-24 2021-12-17 福建平潭瑞谦智能科技有限公司 Face recognition model construction method and system and recognition method
CN117333928B (en) * 2023-12-01 2024-03-22 深圳市宗匠科技有限公司 Face feature point detection method and device, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101021900A (en) * 2007-03-15 2007-08-22 上海交通大学 Method for making human face posture estimation utilizing dimension reduction method
CN105654048A (en) * 2015-12-30 2016-06-08 四川川大智胜软件股份有限公司 Multi-visual-angle face comparison method
CN107729875A (en) * 2017-11-09 2018-02-23 上海快视信息技术有限公司 Three-dimensional face identification method and device
CN108805977A (en) * 2018-06-06 2018-11-13 浙江大学 A kind of face three-dimensional rebuilding method based on end-to-end convolutional neural networks

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101021900A (en) * 2007-03-15 2007-08-22 上海交通大学 Method for making human face posture estimation utilizing dimension reduction method
CN105654048A (en) * 2015-12-30 2016-06-08 四川川大智胜软件股份有限公司 Multi-visual-angle face comparison method
CN107729875A (en) * 2017-11-09 2018-02-23 上海快视信息技术有限公司 Three-dimensional face identification method and device
CN108805977A (en) * 2018-06-06 2018-11-13 浙江大学 A kind of face three-dimensional rebuilding method based on end-to-end convolutional neural networks

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Fast 3D face reconstruction based on speckle stereo matching; Xie Yijiang et al.; Journal of Optoelectronics·Laser; 2019-01-15; vol. 30, no. 1; pp. 61-69 *
A survey of multi-pose face recognition; Zou Guofeng et al.; Pattern Recognition and Artificial Intelligence; 2015-07-31; vol. 28, no. 7; pp. 613-625 *

Also Published As

Publication number Publication date
CN109858433A (en) 2019-06-07

Similar Documents

Publication Publication Date Title
CN109858433B (en) Method and device for identifying two-dimensional face picture based on three-dimensional face model
Kang et al. Study of a full-view 3D finger vein verification technique
Min et al. Kinectfacedb: A kinect database for face recognition
Bronstein et al. Three-dimensional face recognition
Spreeuwers Fast and accurate 3D face recognition: using registration to an intrinsic coordinate system and fusion of multiple region classifiers
Prabhu et al. Unconstrained pose-invariant face recognition using 3D generic elastic models
Li et al. Morphable displacement field based image matching for face recognition across pose
JP5873442B2 (en) Object detection apparatus and object detection method
US8374422B2 (en) Face expressions identification
Xiong et al. Supervised descent method for solving nonlinear least squares problems in computer vision
US20140043329A1 (en) Method of augmented makeover with 3d face modeling and landmark alignment
CN104050475A (en) Reality augmenting system and method based on image feature matching
Niinuma et al. Automatic multi-view face recognition via 3D model based pose regularization
Vretos et al. 3D facial expression recognition using Zernike moments on depth images
Han et al. 3D face texture modeling from uncalibrated frontal and profile images
CN105335719A (en) Living body detection method and device
WO2016045711A1 (en) A face pose rectification method and apparatus
CN112528902B (en) Video monitoring dynamic face recognition method and device based on 3D face model
US20200065564A1 (en) Method for determining pose and for identifying a three-dimensional view of a face
CN111401157A (en) Face recognition method and system based on three-dimensional features
Colombo et al. Gappy PCA classification for occlusion tolerant 3D face detection
Wang et al. 3D face recognition by local shape difference boosting
CN108694348B (en) Tracking registration method and device based on natural features
Jiménez et al. Face tracking and pose estimation with automatic three-dimensional model construction
Jaiswal et al. Brief description of image based 3D face recognition methods

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant