CN111368581A - Face recognition method based on TOF camera module, face recognition device and electronic equipment


Info

Publication number
CN111368581A
Authority
CN
China
Prior art keywords
face
point cloud
features
processing
rgb image
Prior art date
Legal status
Pending
Application number
CN201811587458.6A
Other languages
Chinese (zh)
Inventor
刘静
陈文�
戴怡洁
豆彩霞
Current Assignee
Zhejiang Sunny Optical Intelligent Technology Co Ltd
Original Assignee
Zhejiang Sunny Optical Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Sunny Optical Intelligent Technology Co Ltd filed Critical Zhejiang Sunny Optical Intelligent Technology Co Ltd
Priority to CN201811587458.6A priority Critical patent/CN111368581A/en
Publication of CN111368581A publication Critical patent/CN111368581A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 - Feature extraction; Face representation
    • G06V 40/161 - Detection; Localisation; Normalisation

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a face recognition method based on a TOF camera module, a face recognition device and electronic equipment. The face recognition method comprises the following steps: obtaining an RGBD fusion image, an RGB image and depth point cloud data of a detected face; performing face detection on the RGBD fusion image of the detected face to obtain a face region frame, wherein the face region frame is used for defining a face region in the depth point cloud data of the detected face; extracting a face region point cloud in the depth point cloud data of the detected face based on the face region frame; processing the face area point cloud to obtain face features in the face area point cloud; processing the RGB image of the detected human face to obtain human face features in the RGB image; fusing the human face features in the RGB image and the human face features in the human face area point cloud to obtain fused human face features; and processing the fused face features through a classifier to obtain a face recognition result.

Description

Face recognition method based on TOF camera module, face recognition device and electronic equipment
Technical Field
The invention relates to the field of face recognition, in particular to a face recognition method based on a TOF camera module, a face recognition device and electronic equipment.
Background
In modern society, accurate and reliable personal identification is receiving increasing attention. Mainstream identification technologies are based on human biometric features such as fingerprints, irises and faces. Fingerprint- and iris-based authentication offers high accuracy and reliability because these features are unique, but its application is limited by factors such as the user cooperation required during capture. Face recognition, being natural and non-intrusive, therefore has broader application prospects.
Traditional two-dimensional face recognition confirms identity based on the brightness information of an image, but it is strongly affected by factors such as illumination, pose and makeup. In recent years, face recognition research has therefore begun to expand from two-dimensional images to three-dimensional space.
In the prior art, face recognition in three-dimensional space typically discards two-dimensional face information entirely; the computation is complex, the required precision of the three-dimensional face data is high, and the efficiency is low. For example, patents CN107247916A and CN106096503A, and CN106096503A in particular, measure the similarity between two facial surfaces by comparing feature point information at three-dimensional facial key points; the feature point extraction algorithm is complex, its accuracy remains to be verified, three-dimensional information is lost, and the resulting accuracy is not high.
Although three-dimensional information can describe the facial features of a human face more accurately, and some of the extracted features are invariant under rigid transformations, are less affected by changes in illumination and pose, and can effectively defend against spoofing by photos and videos, the amount of three-dimensional data is huge and the demands on the algorithm are high.
Disclosure of Invention
An object of the present invention is to provide a face recognition method based on a TOF camera module, a face recognition apparatus, and an electronic device, wherein the face recognition method performs face recognition based on both the two-dimensional and the three-dimensional information of the detected face so as to achieve better performance.
Additional advantages and features of the invention will be set forth in the detailed description which follows and in part will be apparent from the description, or may be learned by practice of the invention as set forth hereinafter.
According to an aspect of the present invention, the present application provides a face recognition method based on a TOF camera module, which includes:
acquiring an RGBD fusion image, an RGB image and depth point cloud data of a detected face through a TOF camera module;
performing face detection on the RGBD fusion image of the detected face to obtain a face region frame, wherein the face region frame is used for defining a face region in the depth point cloud data of the detected face;
extracting a face region point cloud in the depth point cloud data of the detected face based on the face region frame;
processing the face area point cloud to obtain face features in the face area point cloud;
processing the RGB image of the detected human face to obtain human face features in the RGB image;
fusing the human face features in the RGB image and the human face features in the human face area point cloud to obtain fused human face features; and
processing the fused face features through a classifier to obtain a face recognition result.
In an embodiment of the present invention, processing the point cloud of the face area to obtain the face features in the point cloud of the face area includes:
preprocessing the point cloud of the face area, wherein the preprocessing process comprises the following steps: filtering the point cloud of the face area; removing outliers in the point cloud of the face area; carrying out point cloud mesh reconstruction on the face region point cloud after filtering processing and outlier removing processing; performing non-rigid registration on the point cloud of the face region reconstructed by the point cloud grid; and
processing the preprocessed face region point cloud through a deep neural network model to obtain the face features in the face region point cloud.
In an embodiment of the present invention, processing the RGB image of the detected face to obtain the face features in the RGB image includes:
carrying out normalization processing on the RGB image;
aligning the RGB image after normalization processing to a standard template; and
processing the RGB image aligned to the standard template through a deep neural network model to obtain the face features in the RGB image.
In an embodiment of the present invention, before processing the point cloud of face regions to obtain face features in the point cloud of face regions, the method further includes:
performing face detection on the RGBD fusion image of the detected face to obtain face characteristic points in the RGBD fusion image;
based on the TOF camera module, obtaining three-dimensional coordinates of the human face characteristic points in the RGBD fusion image; and
determining whether the measured object bearing the detected face is a living body based on the three-dimensional coordinates of the face feature points.
In an embodiment of the present invention, determining whether the object to be measured is a living body based on the three-dimensional coordinates of the feature points of the human face includes:
fitting a reference plane based on the three-dimensional coordinates of the human face feature points;
obtaining the sum of the distances from each face characteristic point to the reference plane; and
determining that the measured object is a living body in response to the sum of the distances being larger than a preset threshold.
In an embodiment of the present invention, fusing the face features in the RGB image and the face features in the face region point cloud to obtain fused face features, including:
assigning a multiplication coefficient respectively to the data related to the face features in the RGB image and the data related to the face features in the face region point cloud, so as to fuse the face features in the RGB image with the face features in the face region point cloud.
According to another aspect of the present invention, there is also provided a face recognition apparatus based on a TOF camera module, including:
the image acquisition unit is used for acquiring an RGBD fusion image, an RGB image and depth point cloud data of a detected face;
the face detection unit is used for carrying out face detection on the RGBD fusion image of the detected face to obtain a face region frame, and the face region frame is used for defining a face region in the depth point cloud data of the detected face;
a face region point cloud extracting unit, configured to extract a face region point cloud in the depth point cloud data of the detected face based on the face region frame;
the face feature extraction unit is used for processing the face area point cloud to obtain face features in the face area point cloud; processing the RGB image of the detected human face to obtain human face features in the RGB image;
the fusion unit is used for fusing the human face features in the RGB images and the human face features in the human face area point cloud to obtain fused human face features; and
the face recognition unit is used for processing the fused face features through a classifier to obtain a face recognition result.
According to an embodiment of the present invention, the face feature extraction unit is further configured to:
preprocessing the point cloud of the face area, wherein the preprocessing process comprises the following steps: filtering the point cloud of the face area; removing outliers in the point cloud of the face area; carrying out point cloud mesh reconstruction on the face region point cloud after filtering processing and outlier removing processing; performing non-rigid registration on the point cloud of the face region reconstructed by the point cloud grid; and
processing the preprocessed face region point cloud through a deep neural network model to obtain the face features in the face region point cloud.
In an embodiment of the present invention, the face feature extraction unit is further configured to:
carrying out normalization processing on the RGB image;
aligning the RGB image after normalization processing to a standard template; and
processing the RGB image aligned to the standard template through a deep neural network model to obtain the face features in the RGB image.
In an embodiment of the present invention, the face recognition apparatus further includes: a living body detection unit for:
performing face detection on the RGBD fusion image of the detected face to obtain face characteristic points in the RGBD fusion image;
based on the TOF camera module, obtaining three-dimensional coordinates of the human face characteristic points in the RGBD fusion image; and
determining whether the measured object bearing the detected face is a living body based on the three-dimensional coordinates of the face feature points.
According to an embodiment of the present invention, the living body detecting unit is further configured to:
fitting a reference plane based on the three-dimensional coordinates of the human face feature points;
obtaining the sum of the distances from each face characteristic point to the reference plane; and
determining that the measured object is a living body in response to the sum of the distances being larger than a preset threshold.
In an embodiment of the present invention, the fusion unit is further configured to assign a multiplication coefficient to the data related to the face features in the RGB image and the data related to the face features in the face region point cloud respectively, so as to fuse the face features in the RGB image and the face features in the face region point cloud.
According to another aspect of the invention, there is provided an electronic device, comprising:
a processor; and
a memory having stored therein computer program instructions which, when executed by the processor, cause the processor to perform the face recognition method as described above.
According to another aspect of the present invention, there is also provided a computer readable storage medium having stored thereon computer program instructions operable, when executed by a computing apparatus, to perform the face recognition method as described above.
Further objects and advantages of the invention will be fully apparent from the ensuing description and drawings.
These and other objects, features and advantages of the present invention will become more fully apparent from the following detailed description, the accompanying drawings and the claims.
Drawings
Fig. 1 illustrates a flow chart of a face recognition method according to an embodiment of the present invention.
Fig. 2 is a schematic flow chart illustrating processing of the face area point cloud to obtain face features in the face area point cloud in the face recognition method according to the embodiment of the application.
Fig. 3 is a schematic flow chart illustrating processing of an RGB image of the detected face to obtain a face feature in the RGB image in the face recognition method according to the embodiment of the application.
Fig. 4 illustrates a flow chart of the living body detection in the face recognition method according to the embodiment of the invention.
Fig. 5 illustrates another flow chart of a face recognition method according to an embodiment of the present application.
Fig. 6 illustrates a block diagram schematic diagram of a face recognition apparatus according to an embodiment of the present application.
FIG. 7 illustrates a schematic diagram of an electronic device according to an embodiment of the application.
Detailed Description
The following description is presented to disclose the invention so as to enable any person skilled in the art to practice the invention. The preferred embodiments in the following description are given by way of example only, and other obvious variations will occur to those skilled in the art. The basic principles of the invention, as defined in the following description, may be applied to other embodiments, variations, modifications, equivalents, and other technical solutions without departing from the spirit and scope of the invention.
It will be understood by those skilled in the art that in the present disclosure, the terms "longitudinal," "lateral," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like are used in an orientation or positional relationship indicated in the drawings for ease of description and simplicity of description, and do not indicate or imply that the referenced devices or components must be in a particular orientation, constructed and operated in a particular orientation, and thus the above terms are not to be construed as limiting the present invention.
It should be understood that the terms "a" and "an" mean "at least one": they indicate the presence of the element concerned without limiting its number, and different embodiments may include one or more of that element.
Fig. 1 is a schematic flowchart illustrating a face recognition method according to an embodiment of the present invention, and as shown in fig. 1, the face recognition method according to the embodiment of the present invention includes: s110, acquiring an RGBD fusion image, an RGB image and depth point cloud data of the detected face through a TOF camera module; s120, performing face detection on the RGBD fusion image of the detected face to obtain a face region frame, wherein the face region frame is used for defining a face region in the depth point cloud data of the detected face; s130, extracting a face region point cloud in the depth point cloud data of the detected face based on the face region frame; s140, processing the face area point cloud to obtain face features in the face area point cloud; s150, processing the RGB image of the detected human face to obtain human face features in the RGB image; s160, fusing the face features in the RGB image and the face features in the face area point cloud to obtain fused face features; and S170, processing the fused face features through a classifier to obtain a face recognition result.
From the above, it can be understood that the face recognition method based on the TOF camera module fuses the face features of the RGB image of the detected face with the face features in the face region point cloud when extracting features of the detected face, so that face recognition according to the invention not only ensures recognition accuracy but also simplifies the data computation.
In step S110, the RGBD fusion image, the RGB image, and the depth point cloud data of the detected face acquired by the TOF camera module together represent the features of the detected face: the RGB image carries the two-dimensional features of the detected face, and the depth point cloud data represents its three-dimensional features.
In step S120, after face detection is performed on the RGBD fusion image of the detected face, a face region frame of the detected face is obtained, which provides the criterion for extracting the face region point cloud from the depth point cloud data of the detected face. Specifically, in step S130, the face region point cloud is extracted from the depth point cloud data based on the face region frame of the detected face; that is, points in the depth point cloud data that fall outside the face region frame are excluded.
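For illustration only, the extraction step described above can be sketched as follows. The sketch assumes an organized (H, W, 3) point cloud that is pixel-aligned with the RGBD fusion image; both the function name and the alignment assumption are the editor's illustration, not part of the disclosure.

```python
import numpy as np

def extract_face_region(point_cloud: np.ndarray, box: tuple) -> np.ndarray:
    """Keep only the points that fall inside the detected face region frame.

    point_cloud: (H, W, 3) organized depth point cloud, assumed pixel-aligned
                 with the RGBD fusion image.
    box:         (x0, y0, x1, y1) face region frame in pixel coordinates.
    Returns an (N, 3) array of points inside the frame with valid depth.
    """
    x0, y0, x1, y1 = box
    region = point_cloud[y0:y1, x0:x1].reshape(-1, 3)
    # Points with zero depth are TOF dropouts; discard them as well.
    return region[region[:, 2] > 0]
```

On a 4x4 cloud where only the central 2x2 block carries depth, a (1, 1, 3, 3) frame yields exactly those four points.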
Fig. 2 is a schematic flow chart illustrating processing of the face area point cloud to obtain face features in the face area point cloud in the face recognition method according to the embodiment of the application. As shown in fig. 2, the process includes: s210, preprocessing the point cloud of the face area, wherein the preprocessing process comprises the following steps: filtering the point cloud of the face area; removing outliers in the point cloud of the face area; carrying out point cloud mesh reconstruction on the face region point cloud after filtering processing and outlier removing processing; performing non-rigid registration on the point cloud of the face region reconstructed by the point cloud grid; and S220, processing the preprocessed face region point cloud through a deep neural network model to obtain the face features in the face region point cloud.
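Of the preprocessing steps listed above, the statistical removal of outliers can be sketched as follows. This is a minimal illustration under assumptions the patent does not fix: the patent names neither the filtering method nor its parameters, and the mesh reconstruction and non-rigid registration steps are beyond the scope of this sketch.

```python
import numpy as np

def remove_outliers(points: np.ndarray, k: int = 8, std_ratio: float = 2.0) -> np.ndarray:
    """Statistical outlier removal: drop points whose mean distance to their
    k nearest neighbours exceeds the cloud-wide average of that quantity by
    more than std_ratio standard deviations."""
    # Full pairwise distance matrix; fine for a face-sized cloud,
    # a KD-tree would be used for larger clouds.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d.sort(axis=1)
    mean_knn = d[:, 1:k + 1].mean(axis=1)  # skip column 0: distance to self
    thresh = mean_knn.mean() + std_ratio * mean_knn.std()
    return points[mean_knn <= thresh]
```

For example, a dense 3x3x3 grid of points survives intact while a single point far away from the grid is rejected.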
Fig. 3 is a schematic flow chart illustrating the processing of the RGB image of the detected face to obtain the face features in the RGB image in the face recognition method according to the embodiment of the application. As shown in fig. 3, the process includes: S310, normalizing the RGB image; S320, aligning the normalized RGB image to a standard template; and S330, processing the RGB image aligned to the standard template through a deep neural network model to obtain the face features in the RGB image.
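The normalization and alignment steps can be sketched as follows. Normalizing intensities to [0, 1] and aligning via a two-point (eye-centre) similarity transform are the editor's assumptions; the patent fixes neither the normalization scheme nor the alignment method.

```python
import numpy as np

def normalize_image(img: np.ndarray) -> np.ndarray:
    """Scale pixel intensities to [0, 1] (one common normalization choice)."""
    img = img.astype(np.float64)
    return (img - img.min()) / max(img.max() - img.min(), 1e-12)

def similarity_from_eyes(eyes: np.ndarray, template_eyes: np.ndarray) -> np.ndarray:
    """2x3 similarity transform (rotation + uniform scale + translation)
    mapping detected eye centres onto the standard template's eye positions."""
    (x1, y1), (x2, y2) = eyes
    (u1, v1), (u2, v2) = template_eyes
    # Solve for a, b, tx, ty in [[a, -b, tx], [b, a, ty]] from the two pairs:
    # u = a*x - b*y + tx,  v = b*x + a*y + ty.
    A = np.array([[x1, -y1, 1, 0], [y1, x1, 0, 1],
                  [x2, -y2, 1, 0], [y2, x2, 0, 1]], dtype=float)
    rhs = np.array([u1, v1, u2, v2], dtype=float)
    a, b, tx, ty = np.linalg.solve(A, rhs)
    return np.array([[a, -b, tx], [b, a, ty]])
```

The resulting 2x3 matrix can be fed to any affine warping routine to resample the face onto the template.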
In addition, before step S140, the face recognition method further includes a living body detection process. Fig. 4 illustrates a flow chart of the living body detection in the face recognition method according to the embodiment of the invention. As shown in fig. 4, the living body detection process includes:
s410, carrying out face detection on the RGBD fusion image of the detected face to obtain face characteristic points in the RGBD fusion image;
s420, obtaining three-dimensional coordinates of the face characteristic points in the RGBD fusion image based on the TOF camera module; and
S430, determining whether the measured object bearing the detected face is a living body based on the three-dimensional coordinates of the face feature points.
The purpose of the living body detection is to defend against spoofing by photos and videos and to prevent false recognition of faces.
More specifically, determining whether the object bearing the detected face is a living body based on the three-dimensional coordinates of the face feature points first includes fitting a reference plane to the three-dimensional coordinates of the face feature points; then obtaining the sum of the distances from each face feature point to the reference plane; and then, in response to the sum of the distances being larger than a preset threshold, determining that the measured object is a living body.
Specifically, when the sum of the distances is greater than the preset threshold, the detected face has a three-dimensional structure and the measured object is determined to be a living body; when the sum of the distances is smaller than the preset threshold, the detected face has a planar structure and may therefore be a photo, a video, or the like.
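The plane-fitting liveness check described above admits a direct sketch: fit a least-squares plane to the landmark coordinates and sum the point-to-plane distances. The SVD-based fit is one common choice; the patent does not specify the fitting method or the threshold.

```python
import numpy as np

def liveness_score(landmarks: np.ndarray) -> float:
    """Fit a least-squares reference plane to the (N, 3) face landmarks and
    return the sum of point-to-plane distances; a flat photo scores near 0."""
    centroid = landmarks.mean(axis=0)
    # The plane normal is the right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(landmarks - centroid)
    normal = vt[-1]
    return float(np.abs((landmarks - centroid) @ normal).sum())

def is_live(landmarks: np.ndarray, threshold: float) -> bool:
    return liveness_score(landmarks) > threshold
```

Four coplanar landmarks score essentially zero; adding a fifth landmark raised off the plane (a nose tip, say) pushes the score above any reasonable threshold.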
In step S160, the face features in the RGB image and the face features in the face region point cloud are fused to obtain fused face features. Here, a first weight and a second weight may be respectively assigned to the face feature in the RGB image and the face feature in the face region point cloud to obtain the fused face feature.
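The coefficient-weighted fusion can be sketched as follows. Weighting each modality by its coefficient and concatenating is one plausible reading of the multiplication-coefficient scheme, not a confirmed implementation detail; the coefficient values are placeholders.

```python
import numpy as np

def fuse_features(rgb_feat: np.ndarray, cloud_feat: np.ndarray,
                  w_rgb: float = 0.5, w_cloud: float = 0.5) -> np.ndarray:
    """Weight each modality's feature vector by its coefficient and
    concatenate into a single fused descriptor."""
    return np.concatenate([w_rgb * rgb_feat, w_cloud * cloud_feat])
```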
In step S170, the fused face features are processed by a classifier to obtain a face recognition result. Specifically, the fused face features may be processed by a KNN classifier to obtain the recognition result. Those skilled in the art will appreciate that the fused face features may also be processed by other types of classifiers, and the invention is not limited in this respect.
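A minimal KNN classifier over the fused features, for illustration: the patent names KNN but not its parameters, so the value of k and the Euclidean metric here are assumptions.

```python
import numpy as np

def knn_predict(gallery_feats: np.ndarray, gallery_labels, query: np.ndarray, k: int = 3):
    """Assign the query the majority label among its k closest gallery entries
    (Euclidean distance over the fused face features)."""
    dists = np.linalg.norm(gallery_feats - query, axis=1)
    nearest = np.asarray(gallery_labels)[np.argsort(dists)[:k]]
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]
```

With a gallery of two identities, a query near either cluster is assigned to that identity by majority vote among its three nearest neighbours.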
Fig. 5 illustrates another flow chart of a face recognition method according to an embodiment of the present application. As shown in fig. 5, in the face recognition method of the embodiment of the present application, first, an RGBD fusion image of the detected face is acquired by the TOF camera module; then, face detection is performed on the RGBD fusion image and the face feature points in it are extracted; further, the face region point cloud is extracted based on the face detection result and the three-dimensional information of the corresponding face feature points is obtained; then, living body detection is performed using the three-dimensional information of the corresponding face feature points. After the measured object is determined to be a living body, the point cloud data is processed to extract point cloud features, and the RGB image data is processed to extract RGB image features; feature fusion is then carried out and the face recognition result is obtained through a KNN classifier.
According to another aspect of the invention, a face recognition device is also provided.
Fig. 6 illustrates a block diagram schematic diagram of a face recognition apparatus according to an embodiment of the present application. As shown in fig. 6, the face recognition apparatus 600 according to the embodiment of the present application includes: the image acquisition unit 610 is used for acquiring an RGBD fusion image, an RGB image and depth point cloud data of a detected face; a face detection unit 620, configured to perform face detection on the RGBD fusion image of the detected face to obtain a face region frame, where the face region frame is used to define a face region in the depth point cloud data of the detected face; a face region point cloud extracting unit 630, configured to extract a face region point cloud in the depth point cloud data of the detected face based on the face region frame; a face feature extraction unit 640, configured to process the face region point cloud to obtain a face feature in the face region point cloud; processing the RGB image of the detected human face to obtain human face features in the RGB image; a fusion unit 650, configured to fuse the face features in the RGB image and the face features in the face region point cloud to obtain fused face features; and a face recognition unit 660, configured to process the fused face features through a classifier to obtain a face recognition result.
In an example, in the above-mentioned face recognition apparatus 600, the face feature extraction unit 640 is further configured to: preprocess the point cloud of the face area, wherein the preprocessing process comprises the following steps: filtering the point cloud of the face area; removing outliers in the point cloud of the face area; carrying out point cloud mesh reconstruction on the face region point cloud after the filtering and outlier removal; performing non-rigid registration on the face region point cloud after mesh reconstruction; and processing the preprocessed face region point cloud through a deep neural network model to obtain the face features in the face region point cloud.
In an example, in the above-mentioned face recognition apparatus 600, the face feature extraction unit 640 is further configured to: carrying out normalization processing on the RGB image; aligning the RGB image after normalization processing to a standard template; and processing the RGB image aligned to the standard template through a deep neural network model to obtain the human face features in the RGB image.
In one example, in the above-described face recognition apparatus 600, the living body detection unit 670 is configured to: performing face detection on the RGBD fusion image of the detected face to obtain face characteristic points in the RGBD fusion image; obtaining three-dimensional coordinates of human face characteristic points in the RGBD fusion image based on the depth point cloud data of the detected human face; and determining whether the measured object with the measured human face is a living body based on the three-dimensional coordinates of the human face feature point.
In one example, in the above-mentioned face recognition apparatus 600, the living body detection unit 670 is further configured to: fitting a reference plane based on the three-dimensional coordinates of the human face feature points; obtaining the sum of the distances from each face characteristic point to the reference plane; and determining the measured object as a living body in response to the sum of the distances being larger than a preset threshold value.
Here, it will be understood by those skilled in the art that the specific functions and operations of the respective units and modules in the above-described face recognition apparatus 600 have been described in detail in the TOF camera module-based face recognition method described above with reference to fig. 1 to 5, and therefore, a repeated description thereof will be omitted.
As described above, the face recognition apparatus according to the embodiment of the present application may be implemented in various terminal devices, for example, a face recognition server. In one example, the face recognition apparatus according to the embodiment of the present application may be integrated into the terminal device as a software module and/or a hardware module. For example, the face recognition means may be a software module in the operating system of the terminal device, or may be an application developed for the terminal device; of course, the face recognition device may also be one of many hardware modules of the terminal device.
Next, an electronic apparatus according to an embodiment of the present application is described with reference to fig. 7.
FIG. 7 illustrates a schematic diagram of an electronic device according to an embodiment of the application.
As shown in fig. 7, the electronic device 10 includes one or more processors 11 and memory 12.
The processor 11 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 10 to perform desired functions.
Memory 12 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 11 to implement the face recognition method based on the TOF camera module according to the embodiments of the present application described above and/or other desired functions. Various contents such as an RGB image, an RGBD image, etc. may also be stored in the computer readable storage medium.
In one example, the electronic device 10 may further include: an input device 13 and an output device 14, which are interconnected by a bus system and/or other form of connection mechanism (not shown).
The input device 13 may be, for example, a keyboard, a mouse, or the like.
The output device 14 may output various information including a face recognition result and the like to the outside. The output devices 14 may include, for example, a display, speakers, a printer, and a communication network and its connected remote output devices, among others.
Of course, for simplicity, only some of the components of the electronic device 10 relevant to the present application are shown in fig. 7, and components such as buses, input/output interfaces, and the like are omitted. In addition, the electronic device 10 may include any other suitable components depending on the particular application.
Illustrative computer program product
In addition to the above methods and apparatus, embodiments of the present application may also be a computer program product comprising computer program instructions that, when executed by a processor, cause the processor to perform the steps in the TOF camera module based face recognition method according to various embodiments of the present application described in the "exemplary methods" section of this specification above.
The computer program product may write program code for carrying out operations for embodiments of the present application in any combination of one or more programming languages, including an object oriented programming language such as Java or C++, as well as conventional procedural programming languages. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present application may also be a computer-readable storage medium having stored thereon computer program instructions that, when executed by a processor, cause the processor to perform the steps in the TOF camera module based face recognition method according to various embodiments of the present application described in the "exemplary methods" section above in this specification.
The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing describes the general principles of the present application in conjunction with specific embodiments, however, it is noted that the advantages, effects, etc. mentioned in the present application are merely examples and are not limiting, and they should not be considered essential to the various embodiments of the present application. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the foregoing disclosure is not intended to be exhaustive or to limit the disclosure to the precise details disclosed.
The block diagrams of devices, apparatuses, and systems referred to in this application are given only as illustrative examples and are not intended to require or imply that the connections, arrangements, or configurations must be made in the manner shown in the block diagrams. As will be appreciated by those skilled in the art, these devices, apparatuses, and systems may be connected, arranged, or configured in any manner. Words such as "including," "comprising," "having," and the like are open-ended words that mean "including, but not limited to," and are used interchangeably therewith. The word "or" as used herein means, and is used interchangeably with, the term "and/or," unless the context clearly dictates otherwise. The word "such as" is used herein to mean, and is used interchangeably with, the phrase "such as but not limited to."
It should also be noted that in the devices, apparatuses, and methods of the present application, the components or steps may be decomposed and/or recombined. These decompositions and/or recombinations are to be considered as equivalents of the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, the description is not intended to limit embodiments of the application to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.
It will be appreciated by persons skilled in the art that the embodiments of the invention described above and shown in the drawings are given by way of example only and are not limiting of the invention. The objects of the invention have been fully and effectively accomplished. The functional and structural principles of the present invention have been shown and described in the examples, and any variations or modifications of the embodiments of the present invention may be made without departing from the principles.

Claims (12)

1. A face recognition method based on a TOF camera module, comprising:
acquiring an RGBD fusion image, an RGB image and depth point cloud data of a detected face through a TOF camera module;
performing face detection on the RGBD fusion image of the detected face to obtain a face region frame, wherein the face region frame is used for defining a face region in the depth point cloud data of the detected face;
extracting a face region point cloud in the depth point cloud data of the detected face based on the face region frame;
processing the face region point cloud to obtain face features in the face region point cloud;
processing the RGB image of the detected face to obtain face features in the RGB image;
fusing the face features in the RGB image with the face features in the face region point cloud to obtain fused face features; and
processing the fused face features through a classifier to obtain a face recognition result.
2. The face recognition method of claim 1, wherein processing the face region point cloud to obtain the face features in the face region point cloud comprises:
preprocessing the face region point cloud, wherein the preprocessing comprises: filtering the face region point cloud; removing outliers from the face region point cloud; performing point cloud mesh reconstruction on the face region point cloud after the filtering and outlier removal; and performing non-rigid registration on the mesh-reconstructed face region point cloud; and
processing the preprocessed face region point cloud through a deep neural network model to obtain the face features in the face region point cloud.
3. The face recognition method of claim 1, wherein processing the RGB image of the detected face to obtain the face features in the RGB image comprises:
normalizing the RGB image;
aligning the normalized RGB image to a standard template; and
processing the RGB image aligned to the standard template through a deep neural network model to obtain the face features in the RGB image.
4. The face recognition method of any one of claims 1-3, further comprising, before processing the face region point cloud to obtain face features in the face region point cloud:
performing face detection on the RGBD fusion image of the detected face to obtain face feature points in the RGBD fusion image;
obtaining, based on the TOF camera module, three-dimensional coordinates of the face feature points in the RGBD fusion image; and
determining, based on the three-dimensional coordinates of the face feature points, whether the measured object bearing the detected face is a living body.
5. The face recognition method of claim 4, wherein determining whether the measured object is a living body based on the three-dimensional coordinates of the face feature points comprises:
fitting a reference plane based on the three-dimensional coordinates of the face feature points;
obtaining the sum of the distances from the face feature points to the reference plane; and
determining the measured object to be a living body in response to the sum of the distances being greater than a preset threshold.
6. A face recognition apparatus based on a TOF camera module, comprising:
an image acquisition unit for acquiring an RGBD fusion image, an RGB image and depth point cloud data of a detected face;
a face detection unit for performing face detection on the RGBD fusion image of the detected face to obtain a face region frame, wherein the face region frame is used for defining a face region in the depth point cloud data of the detected face;
a face region point cloud extraction unit for extracting a face region point cloud from the depth point cloud data of the detected face based on the face region frame;
a face feature extraction unit for processing the face region point cloud to obtain face features in the face region point cloud, and for processing the RGB image of the detected face to obtain face features in the RGB image;
a fusion unit for fusing the face features in the RGB image with the face features in the face region point cloud to obtain fused face features; and
a face recognition unit for processing the fused face features through a classifier to obtain a face recognition result.
7. The face recognition apparatus according to claim 6, wherein the face feature extraction unit is further configured to:
preprocessing the face region point cloud, wherein the preprocessing comprises: filtering the face region point cloud; removing outliers from the face region point cloud; performing point cloud mesh reconstruction on the face region point cloud after the filtering and outlier removal; and performing non-rigid registration on the mesh-reconstructed face region point cloud; and
processing the preprocessed face region point cloud through a deep neural network model to obtain the face features in the face region point cloud.
8. The face recognition apparatus according to claim 6, wherein the face feature extraction unit is further configured to:
normalizing the RGB image;
aligning the normalized RGB image to a standard template; and
processing the RGB image aligned to the standard template through a deep neural network model to obtain the face features in the RGB image.
9. The face recognition apparatus according to any one of claims 6 to 8, further comprising a living body detection unit for:
performing face detection on the RGBD fusion image of the detected face to obtain face feature points in the RGBD fusion image;
obtaining, based on the TOF camera module, three-dimensional coordinates of the face feature points in the RGBD fusion image; and
determining, based on the three-dimensional coordinates of the face feature points, whether the measured object bearing the detected face is a living body.
10. The face recognition device of claim 9, wherein the living body detection unit is further configured to:
fitting a reference plane based on the three-dimensional coordinates of the face feature points;
obtaining the sum of the distances from the face feature points to the reference plane; and
determining the measured object to be a living body in response to the sum of the distances being greater than a preset threshold.
11. An electronic device, comprising:
a processor; and
a memory having stored therein computer program instructions which, when executed by the processor, cause the processor to perform the face recognition method of any one of claims 1-5.
12. A computer readable storage medium having computer program instructions stored thereon, which, when executed by a computing device, are operable to perform the face recognition method of any one of claims 1-5.
CN201811587458.6A 2018-12-25 2018-12-25 Face recognition method based on TOF camera module, face recognition device and electronic equipment Pending CN111368581A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811587458.6A CN111368581A (en) 2018-12-25 2018-12-25 Face recognition method based on TOF camera module, face recognition device and electronic equipment


Publications (1)

Publication Number Publication Date
CN111368581A (en) 2020-07-03

Family

ID=71209764

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811587458.6A Pending CN111368581A (en) 2018-12-25 2018-12-25 Face recognition method based on TOF camera module, face recognition device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111368581A (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105205458A (en) * 2015-09-16 2015-12-30 北京邮电大学 Human face living detection method, device and system
CN105740781A (en) * 2016-01-25 2016-07-06 北京天诚盛业科技有限公司 Three-dimensional human face in-vivo detection method and device
CN107273875A (en) * 2017-07-18 2017-10-20 广东欧珀移动通信有限公司 Human face in-vivo detection method and Related product
CN107368778A (en) * 2017-06-02 2017-11-21 深圳奥比中光科技有限公司 Method for catching, device and the storage device of human face expression
CN108197587A (en) * 2018-01-18 2018-06-22 中科视拓(北京)科技有限公司 A kind of method that multi-modal recognition of face is carried out by face depth prediction
CN108549873A (en) * 2018-04-19 2018-09-18 北京华捷艾米科技有限公司 Three-dimensional face identification method and three-dimensional face recognition system


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
BILLY Y. L. LI 等: "Face recognition based on Kinect" *
LUO JIANG 等: "Robust RGB-D Face Recognition Using Attribute-Aware Loss" *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112163557A (en) * 2020-10-19 2021-01-01 南宁职业技术学院 Face recognition method and device based on 3D structured light
CN113239828A (en) * 2021-05-20 2021-08-10 清华大学深圳国际研究生院 Face recognition method and device based on TOF camera module
CN113239828B (en) * 2021-05-20 2023-04-07 清华大学深圳国际研究生院 Face recognition method and device based on TOF camera module
KR20230017984A (en) * 2021-07-29 2023-02-07 건국대학교 산학협력단 Face recognition and device using 3d lidar sensor
KR102633944B1 (en) * 2021-07-29 2024-02-06 건국대학교 산학협력단 Face recognition and device using 3d lidar sensor
CN114581978A (en) * 2022-02-28 2022-06-03 支付宝(杭州)信息技术有限公司 Face recognition method and system

Similar Documents

Publication Publication Date Title
US11030437B2 (en) Liveness detection method and liveness detection system
US9818023B2 (en) Enhanced face detection using depth information
CN111368581A (en) Face recognition method based on TOF camera module, face recognition device and electronic equipment
CN105335722B (en) Detection system and method based on depth image information
US10423848B2 (en) Method, system, and computer-readable recording medium for long-distance person identification
KR101322168B1 (en) Apparatus for real-time face recognition
WO2016150240A1 (en) Identity authentication method and apparatus
EP2907082B1 (en) Using a probabilistic model for detecting an object in visual data
US8649602B2 (en) Systems and methods for tagging photos
WO2019071664A1 (en) Human face recognition method and apparatus combined with depth information, and storage medium
WO2019033572A1 (en) Method for detecting whether face is blocked, device and storage medium
US8938092B2 (en) Image processing system, image capture apparatus, image processing apparatus, control method therefor, and program
WO2019033569A1 (en) Eyeball movement analysis method, device and storage medium
CN111144366A (en) Strange face clustering method based on joint face quality assessment
CN107610177B (en) The method and apparatus of characteristic point is determined in a kind of synchronous superposition
CN110610127B (en) Face recognition method and device, storage medium and electronic equipment
JP6351243B2 (en) Image processing apparatus and image processing method
KR102399025B1 (en) Improved data comparison method
JP2014044503A (en) Image recognition device, method, and program
WO2017131870A1 (en) Decoy-based matching system for facial recognition
Nambiar et al. Frontal gait recognition combining 2D and 3D data
Choi et al. A multimodal user authentication system using faces and gestures
KR20190018274A (en) Method and apparatus for recognizing a subject existed in an image based on temporal movement or spatial movement of a feature point of the image
CN110688878B (en) Living body identification detection method, living body identification detection device, living body identification detection medium, and electronic device
CN107798292B (en) Object recognition method, computer program, storage medium, and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination