CN112016495A - Face recognition method and device and electronic equipment


Info

Publication number
CN112016495A
CN112016495A (application CN202010917898.4A)
Authority
CN
China
Prior art keywords
face recognition
face
image
target
recognized
Prior art date
Legal status
Pending
Application number
CN202010917898.4A
Other languages
Chinese (zh)
Inventor
张为义
涂弘德
刘以勒
罗士杰
Current Assignee
Fujian Cook Intelligent Technology Co., Ltd.
Original Assignee
Fujian Cook Intelligent Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Fujian Cook Intelligent Technology Co., Ltd.
Priority: CN202010917898.4A
Publication: CN112016495A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/56 - Extraction of image or video features relating to colour
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation

Abstract

A face recognition method, a face recognition apparatus, and an electronic device are provided that can improve the recognition rate and the security performance of face recognition technology. The face recognition method includes the following steps: acquiring a color image of a target to be recognized; judging, according to an image brightness judgment model, whether the brightness of the color image is suitable for face recognition, where the image brightness judgment model is obtained by training on a plurality of color image samples of different brightness; and determining, according to the judgment result, whether to use the color image of the target to be recognized for face recognition. In the embodiments of the present application, the brightness information in the color image of the target to be recognized is checked by the image brightness judgment model, which prevents misjudgment or failed recognition when the color image is too bright or too dark, effectively improves the speed and accuracy of face recognition, and thus improves the recognition rate and security performance of the face recognition technology.

Description

Face recognition method and device and electronic equipment
Technical Field
The present application relates to the field of biometric detection technologies, and in particular, to a method and an apparatus for face recognition, and an electronic device.
Background
Face recognition (face recognition) is a biometric detection and recognition technology that identifies a person based on facial feature information. A camera or video camera is used to collect images or video streams containing human faces; the faces in the images are automatically detected and tracked, and a series of related techniques, such as image preprocessing, image feature extraction, and matching and recognition, are then applied to the detected faces. These related techniques are commonly called portrait recognition or facial recognition. With the rapid development of computer and network technologies, face recognition technology has been widely applied in many industries and fields such as intelligent access control, mobile terminals, public security, entertainment, and the military.
Most existing face recognition technologies take a color image as input and perform face recognition based on features extracted from that color image. Such recognition is easily affected by changes in light intensity, which lowers its accuracy; moreover, relying on a color image alone offers weak security, since forged objects such as photographs may be recognized successfully.
Therefore, how to improve the recognition rate and the security performance of the face recognition technology is a technical problem to be solved urgently.
Disclosure of Invention
The embodiment of the application provides a face recognition method, a face recognition device and electronic equipment, which can improve the recognition rate and the safety performance of a face recognition technology.
In a first aspect, a method for face recognition is provided, including: acquiring a color image of a target to be recognized; judging, according to an image brightness judgment model, whether the brightness of the color image is suitable for face recognition, where the image brightness judgment model is obtained by training on a plurality of color image samples of different brightness; and determining, according to the judgment result, whether to use the color image of the target to be recognized for face recognition.
In the embodiments of the present application, the brightness information in the color image of the target to be recognized is checked by the image brightness judgment model to determine whether the color image is suitable for face recognition, and whether to use the color image for face recognition is then decided according to the judgment result, rather than performing face recognition on the color image directly. This prevents misjudgment or failed recognition when the color image is too bright or too dark, effectively improves the speed and accuracy of face recognition, and improves the user experience.
Meanwhile, in the embodiments of the present application, rather than directly sensing the ambient light intensity with a light-sensing unit such as a sensor to decide whether the color image of the target to be recognized can be used for face recognition, the brightness of the color image itself is analyzed with the image brightness judgment model, so the brightness of the color image can be detected more accurately.
In some possible embodiments, the determining, according to the judgment result, whether to use the color image of the target to be recognized for face recognition includes: if the judgment result is suitable, performing face recognition with the color image of the target to be recognized and the three-dimensional image of the target to be recognized; and if the judgment result is not suitable, performing face recognition with the infrared image of the target to be recognized and the three-dimensional image of the target to be recognized.
In some possible embodiments, the performing face recognition with the color image of the target to be recognized and the three-dimensional image of the target to be recognized includes: performing face recognition on the color image of the target to be recognized to obtain a first face recognition result; performing face recognition on the three-dimensional image of the target to be recognized to obtain a third face recognition result; if the first face recognition result and the third face recognition result are both success, determining that face recognition of the target to be recognized has succeeded; and if at least one of the first face recognition result and the third face recognition result is failure, determining that face recognition of the target to be recognized has failed.
In some possible embodiments, the performing face recognition with the infrared image of the target to be recognized and the three-dimensional image of the target to be recognized includes: performing face recognition on the infrared image of the target to be recognized to obtain a second face recognition result; performing face recognition on the three-dimensional image of the target to be recognized to obtain a third face recognition result; if the second face recognition result and the third face recognition result are both success, determining that face recognition of the target to be recognized has succeeded; and if at least one of the second face recognition result and the third face recognition result is failure, determining that face recognition of the target to be recognized has failed.
In the embodiments of the present application, whether the brightness of the color image of the target to be recognized is suitable for face recognition is first judged according to the image brightness judgment model. When the color image is not suitable, the infrared image, which is unaffected by the intensity of ambient light, is used for face recognition instead. This prevents interference from changes in ambient light, so face recognition can still be performed with a high recognition rate whether the ambient light is strong, weak, or otherwise, allowing the technical solution of the embodiments of the present application to be applied to more face recognition scenarios.
Meanwhile, whether face recognition is performed with a color image or with an infrared image, a three-dimensional image of the target to be recognized is also acquired to obtain the three-dimensional structural features of the face for live face recognition, which prevents attacks by non-live faces (such as a photograph of the user) and improves the accuracy and security of face recognition.
In some possible embodiments, the performing face recognition on the three-dimensional image of the target to be recognized to obtain a third face recognition result includes: acquiring point cloud data of the target to be identified; converting the point cloud data into the three-dimensional image, wherein the three-dimensional image is a plane image comprising depth and angle information of the surface of the target to be recognized; and carrying out face recognition on the three-dimensional image to obtain a third face recognition result.
According to the technical solutions of the embodiments of the present application, the three-dimensional point cloud data is preprocessed into a planar three-dimensional image, which retains the three-dimensional information of the target to be recognized while simplifying the subsequent face recognition process, comprehensively improving the accuracy and efficiency of face recognition.
Compared with performing face recognition directly on a depth map, the three-dimensional image in the embodiments of the present application differs from a depth map in that, besides the depth information of the surface of the target to be recognized, it also contains the angle-change information of that surface, i.e., its degree of curvature. More of the three-dimensional spatial features of the target to be recognized are thus retained, which can further improve the accuracy of subsequent face recognition.
In some possible embodiments, the converting the point cloud data into the three-dimensional image includes: calculating a normal vector of each point cloud point in the point cloud data; and converting to obtain the three-dimensional image according to the normal vector and the depth value of each point cloud point, where the normal vector of each point cloud point includes angle information of the surface of the target to be recognized.
In some possible embodiments, the calculating a normal vector for each point cloud point in the point cloud data includes: aligning the point cloud data with the point cloud template data to obtain standard point cloud data; performing meshing processing on the standard point cloud data to obtain meshed point cloud data; in the gridding point cloud data, the average value of normal vectors of grid surfaces adjacent to each point cloud point is calculated, and the normal vector of each point cloud point is obtained.
In some possible embodiments, the converting to obtain the three-dimensional image according to the normal vector and the depth value of each point cloud point includes: projecting the gridded point cloud data into a two-dimensional coordinate plane according to the depth values; and taking the X-direction value and the Y-direction value of the normal vector of each point cloud point in the gridded point cloud data, together with its depth value, as the three channel values of the corresponding pixel in the three-dimensional image, so as to obtain the three-dimensional image by conversion.
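For illustration, a minimal sketch of this conversion is given below, assuming Python with NumPy; every name here (point_cloud_to_3ch_image, the 128 × 128 resolution) is hypothetical, and the per-point normal vectors are assumed to have already been estimated from the meshed point cloud as described above.

```python
# Illustrative sketch only: the patent publishes no code. Normals are assumed
# precomputed (e.g., per-point averages of adjacent mesh-face normals).
import numpy as np

def point_cloud_to_3ch_image(points, normals, height=128, width=128):
    """Project a gridded point cloud into a plane; encode (nx, ny, depth) per pixel.

    points:  (N, 3) array of x, y, z coordinates (z is the depth value).
    normals: (N, 3) array of unit normal vectors, one per point cloud point.
    """
    image = np.zeros((height, width, 3), dtype=np.float32)

    # Normalize x, y into pixel coordinates (orthographic projection).
    xy = points[:, :2]
    mins, maxs = xy.min(axis=0), xy.max(axis=0)
    uv = (xy - mins) / np.maximum(maxs - mins, 1e-6)
    cols = np.clip((uv[:, 0] * (width - 1)).astype(int), 0, width - 1)
    rows = np.clip((uv[:, 1] * (height - 1)).astype(int), 0, height - 1)

    # Normalize depth to [0, 1] so all three channels share a common scale.
    z = points[:, 2]
    depth = (z - z.min()) / max(z.max() - z.min(), 1e-6)

    # Three channel values per pixel: normal-x, normal-y, depth.
    image[rows, cols, 0] = normals[:, 0]
    image[rows, cols, 1] = normals[:, 1]
    image[rows, cols, 2] = depth
    return image
```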
In some possible embodiments, the performing face recognition on the three-dimensional image to obtain the third face recognition result includes: extracting the face features in the three-dimensional image to obtain a third face feature vector; comparing the third face feature vector with at least one third template feature vector to obtain a third face recognition result; and the at least one third template feature vector is a feature vector corresponding to the three-dimensional face image of the target user.
In some possible embodiments, the performing face recognition on the color image of the target to be recognized to obtain a first face recognition result includes: extracting the face features in the color image to obtain a first face feature vector; comparing the first face feature vector with at least one first template feature vector to obtain a first face recognition result; and the at least one first template feature vector is a feature vector corresponding to the face color image of the target user.
In some possible embodiments, the extracting the face features in the color image to obtain a first face feature vector includes: performing face detection on the color image to obtain a face region color image in the color image; adjusting the face region color image to a target size, and adjusting the positions of the facial features in the face region color image to target positions; and extracting the face features of the face region color image to obtain the first face feature vector.
In some possible embodiments, the performing face recognition on the infrared image of the target to be recognized to obtain a second face recognition result includes: extracting the face features in the infrared image to obtain a second face feature vector; comparing the second face feature vector with at least one second template feature vector to obtain a second face recognition result; and the at least one second template feature vector is a feature vector corresponding to the face infrared image of the target user.
In some possible embodiments, the extracting the face features in the infrared image to obtain a second face feature vector includes: performing face detection on the infrared image to obtain a face region infrared image in the infrared image; adjusting the face region infrared image to a target size, and adjusting the positions of the facial features in the face region infrared image to target positions; and extracting the face features of the face region infrared image to obtain the second face feature vector.
In some possible embodiments, the image brightness determination model includes: a machine learning model or a deep learning model.
In some possible embodiments, the image brightness judgment model is a neural network model, and the method further includes: performing neural network training according to sample images to obtain the image brightness judgment model, where the sample images include face images under different lighting conditions.
In some possible embodiments, the sample image includes a positive sample image and a negative sample image, the positive sample image is an image in which the result of face recognition is consistent with the real result, and the negative sample image is an image in which the result of face recognition is inconsistent with the real result.
In a second aspect, an apparatus for face recognition is provided, including: an acquisition unit configured to acquire a color image of a target to be recognized; an image brightness judgment unit configured to judge, according to an image brightness judgment model, whether the brightness of the color image is suitable for face recognition, where the image brightness judgment model is obtained by training on a plurality of color image samples of different brightness; and a face recognition unit configured to determine, according to the judgment result, whether to use the color image of the target to be recognized for face recognition.
In some possible embodiments, the face recognition unit is configured to: if the judgment result is suitable, adopting the color image of the target to be recognized and the three-dimensional image of the target to be recognized to perform face recognition; and if the judgment result is not suitable, performing face recognition by using the infrared image of the target to be recognized and the three-dimensional image of the target to be recognized.
In some possible embodiments, the face recognition unit is configured to: perform face recognition on the color image of the target to be recognized to obtain a first face recognition result; perform face recognition on the three-dimensional image of the target to be recognized to obtain a third face recognition result; if the first face recognition result and the third face recognition result are both success, determine that face recognition of the target to be recognized has succeeded; and if at least one of the first face recognition result and the third face recognition result is failure, determine that face recognition of the target to be recognized has failed.
In some possible embodiments, the face recognition unit is configured to: perform face recognition on the infrared image of the target to be recognized to obtain a second face recognition result; perform face recognition on the three-dimensional image of the target to be recognized to obtain a third face recognition result; if the second face recognition result and the third face recognition result are both success, determine that face recognition of the target to be recognized has succeeded; and if at least one of the second face recognition result and the third face recognition result is failure, determine that face recognition of the target to be recognized has failed.
In some possible embodiments, the face recognition unit comprises a third face recognition module configured to: acquiring point cloud data of the target to be identified; converting the point cloud data into the three-dimensional image, wherein the three-dimensional image is a plane image comprising depth and angle information of the surface of the target to be recognized; and carrying out face recognition on the three-dimensional image to obtain a third face recognition result.
In some possible embodiments, the third face recognition module is configured to: calculate a normal vector of each point cloud point in the point cloud data; and convert, according to the normal vector and the depth value of each point cloud point, to obtain the three-dimensional image, where the normal vector of each point cloud point includes angle information of the surface of the target to be recognized.
In some possible embodiments, the third face recognition module is configured to: aligning the point cloud data with the point cloud template data to obtain standard point cloud data; performing meshing processing on the standard point cloud data to obtain meshed point cloud data; in the gridding point cloud data, the average value of normal vectors of grid surfaces adjacent to each point cloud point is calculated, and the normal vector of each point cloud point is obtained.
In some possible embodiments, the third face recognition module is configured to: projecting the gridded point cloud data into a two-dimensional coordinate plane according to the depth value; and respectively taking the values in the X direction and the Y direction of the normal vector of each point cloud point in the gridding point cloud data and the depth value as three-channel values of each pixel in the three-dimensional image so as to obtain the three-dimensional image through conversion.
In some possible embodiments, the third face recognition module is configured to: extracting the face features in the three-dimensional image to obtain a third face feature vector; comparing the third face feature vector with at least one third template feature vector to obtain a third face recognition result; and the at least one third template feature vector is a feature vector corresponding to the three-dimensional face image of the target user.
In some possible embodiments, the face recognition unit comprises a first face recognition module configured to: extracting the face features in the color image to obtain a first face feature vector; comparing the first face feature vector with at least one first template feature vector to obtain a first face recognition result; and the at least one first template feature vector is a feature vector corresponding to the face color image of the target user.
In some possible embodiments, the first face recognition module is configured to: carrying out face detection on the color image to obtain a face region color image in the color image; adjusting the face region color image to a target size, and adjusting the positions of the facial features in the face region color image to target positions; and extracting the face features of the face region color image to obtain the first face feature vector.
In some possible embodiments, the face recognition unit comprises a second face recognition module configured to: extracting the face features in the infrared image to obtain a second face feature vector; comparing the second face feature vector with at least one second template feature vector to obtain a second face recognition result; and the at least one second template feature vector is a feature vector corresponding to the face infrared image of the target user.
In some possible embodiments, the second face recognition module is configured to: carrying out face detection on the infrared image to obtain a face region infrared image in the infrared image; adjusting the face region infrared image to a target size, and adjusting the positions of the facial features in the face region infrared image to target positions; and extracting the face features of the face region infrared image to obtain the second face feature vector.
In some possible embodiments, the image brightness determination model includes: a machine learning model or a deep learning model.
In some possible embodiments, the image brightness judgment unit is further configured to: perform neural network training according to sample images to obtain the image brightness judgment model, where the sample images include face images under different lighting conditions.
In some possible embodiments, the sample image includes a positive sample image and a negative sample image, the positive sample image is an image in which the result of face recognition is consistent with the real result, and the negative sample image is an image in which the result of face recognition is inconsistent with the real result.
In a third aspect, an electronic device is provided, including: the apparatus for face recognition in the second aspect or any possible implementation thereof.
In some possible embodiments, the electronic device further comprises: the device comprises a color image acquisition device, an infrared image acquisition device and a point cloud acquisition device.
In a fourth aspect, a computer-readable storage medium is provided, which is used for storing program instructions, and when the program instructions are executed by a computer, the computer executes the method for face recognition in the first aspect or any possible implementation manner of the first aspect.
In a fifth aspect, a computer program product is provided, which contains instructions that, when executed by a computer, cause the computer to perform the method for face recognition in the first aspect or any of the possible implementations of the first aspect.
In particular, the computer program product may be run on the electronic device of the above third aspect.
Drawings
Fig. 1 is a schematic structural diagram of a system architecture provided in the present application.
Fig. 2 is a schematic flow chart diagram of face recognition based on color images according to an embodiment of the present application.
Fig. 3 is a schematic flow chart of a face recognition method according to an embodiment of the present application.
Fig. 4 is a schematic flow chart of another face recognition method according to an embodiment of the application.
Fig. 5 is a schematic flow chart of face recognition using color images according to an embodiment of the present application.
Fig. 6 is a schematic block diagram of a first face recognition module according to an embodiment of the present application.
Fig. 7 is a schematic flow chart of face recognition using three-dimensional images according to an embodiment of the present application.
Fig. 8 is another schematic flow chart of face recognition using three-dimensional images according to an embodiment of the present application.
Fig. 9 is a schematic structural block diagram of a third face recognition module according to an embodiment of the present application.
Fig. 10 is a schematic structural block diagram of an apparatus for face recognition according to an embodiment of the present application.
Fig. 11 is a schematic structural block diagram of a face recognition unit according to an embodiment of the present application.
Fig. 12 is a schematic hardware structure diagram of a face recognition apparatus according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings.
The embodiments of the present application can be applied to face recognition systems, including but not limited to products based on optical face imaging. The face recognition system can be applied to various electronic devices equipped with an image acquisition apparatus (such as a camera); the electronic devices may be personal computers, computer workstations, smartphones, tablet computers, smart cameras, media consumption devices, wearable devices, set-top boxes, game consoles, augmented reality (AR)/virtual reality (VR) devices, vehicle-mounted terminals, and the like, and the embodiments of the present application are not limited thereto.
It should be understood that the specific examples are provided herein only to assist those skilled in the art in better understanding the embodiments of the present application and are not intended to limit the scope of the embodiments of the present application.
It should also be understood that, in the various embodiments of the present application, the sequence numbers of the processes do not mean the execution sequence, and the execution sequence of the processes should be determined by the functions and the inherent logic of the processes, and should not constitute any limitation to the implementation process of the embodiments of the present application.
It should also be understood that the various embodiments described in this specification can be implemented individually or in combination, and the examples in this application are not limited thereto.
Unless otherwise defined, all technical and scientific terms used in the examples of this application have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used in the present application is for the purpose of describing particular embodiments only and is not intended to limit the scope of the present application. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
For better understanding of the solution of the embodiment of the present application, a brief description is given below to a possible application scenario of the embodiment of the present application with reference to fig. 1.
As shown in fig. 1, the present embodiment provides a system architecture 100. In fig. 1, a data acquisition device 160 is used to acquire training data. For the method for face recognition of the embodiment of the present application, the training data may include a training image or a training video.
After the training data is collected, data collection device 160 stores the training data in database 130, and training device 120 trains target model/rule 101 based on the training data maintained in database 130.
The above target model/rule 101 can be used to implement the face recognition method of the embodiments of the present application; specifically, the target model/rule 101 may be a neural network. It should be noted that, in practical applications, the training data maintained in the database 130 need not all be collected by the data acquisition device 160 and may also be received from other devices. It should also be noted that the training device 120 does not necessarily train the target model/rule 101 on the training data maintained in the database 130; it may also obtain training data from the cloud or elsewhere for model training.
The target model/rule 101 obtained by training according to the training device 120 may be applied to different systems or devices, for example, the execution device 110 shown in fig. 1, where the execution device 110 may be a terminal, such as a mobile phone terminal, a tablet computer, a notebook computer, or the like, and may also be a server or a cloud. In fig. 1, the execution device 110 configures an input/output (I/O) interface 112 for data interaction with an external device, and a user may input data to the I/O interface 112 through the client device 140, where the input data may include: a pending video or a pending image input by the client device 140.
In some embodiments, the client device 140 may be the same device as the execution device 110, for example, the client device 140 may be a terminal device as the execution device 110.
In other embodiments, the client device 140 and the execution device 110 may be different devices; for example, the client device 140 is a terminal device while the execution device 110 is a cloud, a server, or the like. The client device 140 may interact with the execution device 110 through a communication network of any communication mechanism or standard; the communication network may be a wide area network, a local area network, a peer-to-peer connection, or the like, or any combination thereof.
The computing module 111 of the execution device 110 is configured to process according to input data (e.g., an image to be processed) received by the I/O interface 112. In the process of executing the relevant processing such as calculation by the calculation module 111 of the execution device 110, the execution device 110 may call data, codes, and the like in the data storage system 150 for corresponding processing, and may store data, instructions, and the like obtained by corresponding processing in the data storage system 150.
Finally, the I/O interface 112 returns the processing result, such as the face recognition result obtained as described above, to the client device 140, thereby providing it to the user.
It should be noted that the training device 120 may generate corresponding target models/rules 101 for different targets or different tasks based on different training data, and the corresponding target models/rules 101 may be used to achieve the targets or complete the tasks, so as to provide the user with the required results.
In the case shown in fig. 1, the user may manually provide the input data through an interface provided by the I/O interface 112. Alternatively, the client device 140 may automatically send the input data to the I/O interface 112; if the client device 140 must obtain the user's authorization before automatically sending the input data, the user may set the corresponding permissions in the client device 140. The user can view the result output by the execution device 110 at the client device 140, and the specific presentation form can be display, sound, action, and the like. The client device 140 may also serve as a data collection terminal, collecting the input data of the I/O interface 112 and the output results of the I/O interface 112 as new sample data and storing them in the database 130. Of course, the input data and output results of the I/O interface 112 may instead be stored in the database 130 as new sample data directly by the I/O interface 112 without being collected by the client device 140.
It should be noted that fig. 1 is only a schematic diagram of a system architecture provided by an embodiment of the present application, and the position relationship between the devices, modules, and the like shown in the diagram does not constitute any limitation, for example, in fig. 1, the data storage system 150 is an external memory with respect to the execution device 110, and in other cases, the data storage system 150 may also be disposed in the execution device 110.
As shown in fig. 1, the target model/rule 101 is obtained by training with the training device 120, where the target model/rule 101 may be the neural network of the embodiments of the present application; specifically, the neural network may be a convolutional neural network (CNN), a region-based convolutional neural network (RCNN), a faster region-based convolutional neural network (Faster RCNN), or another type of neural network, which the present application does not specifically limit.
In some embodiments, face recognition is performed based on a color image; for example, the execution device 110 may run the target model/rule to process the color image and obtain a face recognition result.
For ease of understanding, the process of face recognition based on color images will be briefly described below with reference to fig. 2.
As shown in fig. 2, when face recognition is required, the specific face recognition method 10 is as follows.
S110: and collecting a color image of the target to be identified.
Specifically, the target to be recognized may be the face of the target user, the face of another user, a photograph, a 3D model, or any other object. Light reflected from the surface of the target to be recognized is received by the image sensor 210 and converted into a color image.
S120: and processing the color image by adopting a neural network to obtain a feature vector.
As an example, the neural network may be a convolutional neural network, and the relevant parameters in the neural network are all parameters obtained after training.
Specifically, the color image is input into a convolutional neural network, and through the steps of convolution, excitation, pooling and the like, the face features of the color image are extracted to obtain feature vectors for representing the face features in the color image.
S130: and performing face recognition according to the feature vectors.
Specifically, the feature vector is matched against a plurality of template feature vectors in a template library, and face recognition is performed by judging whether the similarity between the feature vector and the template feature vectors is greater than a preset threshold. The template feature vectors in the template library are generated from face images of the target user.
In some cases, if the similarity between the feature vector and at least one template feature vector is greater than or equal to the preset threshold, it may be determined that the acquired color image includes a face image of the target user, and face recognition succeeds.
On this basis, the electronic device on which the face recognition apparatus is located can be unlocked, as can an application (APP) on the electronic device.
In other cases, if the similarity between the feature vector and every template feature vector is smaller than the preset threshold, it may be determined that the acquired color image does not include a face image of the target user, and face recognition fails.
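As a minimal sketch of this matching step (hypothetical names throughout; the patent does not fix a similarity measure, so cosine similarity is used here, and a Euclidean distance with an inverted threshold comparison would serve equally):

```python
import numpy as np

def match_face(feature, template_library, threshold=0.6):
    """Compare a face feature vector against the template library.

    Returns True if the highest similarity reaches the preset threshold,
    i.e., the image is recognized as the target user.
    """
    feature = feature / np.linalg.norm(feature)
    best = -1.0
    for template in template_library:               # one vector per enrolled image
        template = template / np.linalg.norm(template)
        best = max(best, float(np.dot(feature, template)))
    return best >= threshold
```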
In the above face recognition method, face recognition is performed by acquiring a color image of the face and judging whether it includes the face of the target user, so as to unlock the electronic device or an application on it. During unlocking, the face recognition apparatus relies on the color image alone and cannot tell whether the image comes from a live face or from a non-live object such as a photograph or video, so its security is low. The process is also easily affected by ambient light: when the ambient light is too strong or too dark, recognition efficiency and accuracy suffer to some extent, degrading the performance of the whole face recognition apparatus.
Based on the above problems, the present application provides a face recognition method and apparatus, which can improve the safety and accuracy of face recognition, thereby comprehensively improving the performance of face recognition.
Fig. 3 shows a schematic flow chart of a face recognition method 20 proposed in the embodiment of the present application. Alternatively, the execution subject of the face recognition method 20 may be the execution device 110 in fig. 1 above.
As shown in fig. 3, the face recognition method 20 may include the following steps.
S210: and acquiring a color image of the target to be recognized.
S220: and judging whether the brightness of the color image of the target to be recognized is suitable for face recognition or not according to the image brightness judgment model.
S230: and determining whether to adopt the color image of the target to be recognized for face recognition according to the judgment result.
In the embodiment of the present application, the object to be recognized includes, but is not limited to, a human face, a photograph, a video, a three-dimensional model, and any other objects. For example, the target to be recognized may be a face of a target user, faces of other users, a user photo, a curved surface model with a photo attached, and the like.
As an example, in some embodiments, after the image acquisition device acquires a color image of the target to be recognized, the color image is sent to a processing unit in the execution device for subsequent image processing work. Optionally, the image capturing device may be integrated into the execution device, or may be provided separately from the execution device. The image capture device includes, but is not limited to, a color camera, a video camera, or other type of image capture device.
After the color image of the target to be recognized is obtained, the step S220 is executed to determine whether the brightness of the color image of the target to be recognized is suitable for face recognition according to the image brightness determination model.
Specifically, in the embodiment of the present application, the image brightness determination model is trained according to a plurality of color image samples with different brightness.
Optionally, the image brightness judging model includes, but is not limited to, a machine learning (machine learning) model or a deep learning (deep learning) model.
As an example, the image brightness judgment model may be a neural network model, for example a convolutional neural network model, used to classify the input image into two classes, i.e., to judge whether the brightness of the input image is suitable for face recognition.
For example, in some embodiments, a convolutional neural network is used to extract luminance information in a color image to obtain a feature vector, and the feature vector is classified to obtain a result of whether the color image is suitable for face recognition.
Optionally, the image brightness judgment model may adopt any convolutional neural network structure, including at least one convolutional layer, excitation layer, pooling layer, and/or fully connected layer. The parameters in each structural layer are optimized parameters obtained through training, so that the classification result output by the convolutional neural network is consistent with the real result of the sample, improving the accuracy and robustness of the convolutional neural network's predictions.
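A minimal sketch of such a two-class convolutional network is given below, assuming PyTorch; the layer counts and channel widths are illustrative only, since the patent leaves the exact structure open.

```python
# Illustrative binary brightness classifier; architecture details are assumptions.
import torch
import torch.nn as nn

class BrightnessClassifier(nn.Module):
    """Classifies a color image as suitable / unsuitable for face recognition."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(          # convolution + excitation + pooling
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 2)      # two classes: suitable / unsuitable

    def forward(self, x):                       # x: (batch, 3, H, W) color images
        x = self.features(x).flatten(1)
        return self.classifier(x)               # logits over the two classes
```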
To train the neural network used as the image brightness judgment model, a large amount of positive and negative sample data is first obtained.
Specifically, a color sample image of a target to be recognized is input into a face recognition module, the sample image is determined to be a positive sample image or a negative sample image by comparing the module's face recognition result with the ground truth, and the sample image is labeled accordingly.
The positive sample images are those for which the face recognition result of the face recognition module matches the ground truth: either the module recognizes the sample image successfully and the face in it is indeed the target user's, or the module fails to recognize it and the face is indeed not the target user's. The brightness of a positive sample image is therefore suitable for the face recognition module, i.e., it lies within the target light intensity range.
Conversely, the negative sample images are those for which the face recognition result contradicts the ground truth (the module misjudges): either the module recognizes the sample image successfully although the face in it is not the target user's, or the module fails although the face is the target user's. The brightness of a negative sample image is unsuitable for recognition, being too strong or too dark and outside the target light intensity range.
The positive sample images and negative sample images are input into the image brightness judgment model of the embodiments of the present application for neural network training, minimizing the loss function of the image brightness judgment model to obtain its target parameters.
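A corresponding training sketch under the same PyTorch assumption; train_loader is a hypothetical data loader yielding the labeled sample images (label 1 for positive / brightness suitable, label 0 for negative).

```python
# Illustrative training loop for the BrightnessClassifier sketch above.
import torch
import torch.nn as nn
import torch.optim as optim

model = BrightnessClassifier()                  # from the sketch above
criterion = nn.CrossEntropyLoss()               # the loss function to be minimized
optimizer = optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(10):
    for images, labels in train_loader:         # hypothetical DataLoader of samples
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()                         # minimize the loss function
        optimizer.step()                        # update the target parameters
```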
It should be noted that the face recognition module used in determining positive and negative samples may be the face recognition module used in the subsequent face recognition process of the present application, or any face recognition module in the related art; it serves to judge whether the face image in an image is the face of the target user.
In the embodiments of the present application, the brightness information in the color image of the target to be recognized is checked by the image brightness judgment model to determine whether the color image is suitable for face recognition, and whether to use the color image for face recognition is then decided according to the judgment result, rather than performing face recognition on the color image directly. This prevents misjudgment or failed recognition when the color image is too bright or too dark, effectively improves the speed and accuracy of face recognition, and improves the user experience.
Meanwhile, in the embodiments of the present application, rather than directly sensing the ambient light intensity with a light-sensing unit such as a sensor to decide whether the color image of the target to be recognized can be used for face recognition, the brightness of the color image itself is analyzed with the image brightness judgment model, so the brightness of the color image can be detected more accurately.
Further, fig. 4 shows a flowchart of another face recognition method 200 proposed in the embodiment of the present application.
As shown in fig. 4, the above step S230 may include the following steps.
S231: and adopting the color image and the three-dimensional image of the target to be recognized to perform face recognition.
S232: and performing face recognition by adopting the infrared image and the three-dimensional image of the target to be recognized.
Specifically, in the embodiments of the present application, the three-dimensional image may be a three-dimensional image carrying spatial information of the target to be recognized; for example, it may be a depth image, an image formed based on a point cloud (point cloud), or another type of data or image. Using the three-dimensional image for face recognition can improve the security of face recognition and prevent attacks on face recognition by two-dimensional planar objects such as photographs.
Specifically, in step S231, if it is judged according to the image brightness judgment model that the brightness of the color image of the target to be recognized is suitable for face recognition, the color image of the target to be recognized is used for face recognition; in addition, a three-dimensional image of the target to be recognized is acquired and also used for face recognition. The final face recognition result is determined jointly from the face recognition result of the color image and that of the three-dimensional image.
In some embodiments, the color image and the three-dimensional image of the target to be recognized may be obtained simultaneously, and if the brightness of the color image of the target to be recognized is suitable for face recognition, the color image and the three-dimensional image of the target to be recognized may be subjected to face recognition synchronously.
In other embodiments, if the brightness of the color image of the target to be recognized is suitable for face recognition, face recognition is performed on the color image first; if it succeeds, face recognition is then performed on the three-dimensional image, and if it fails, face recognition is directly judged to have failed, which improves the efficiency and accuracy of face recognition.
Specifically, in step S232, if it is judged according to the image brightness judgment model that the brightness of the color image of the target to be recognized is not suitable for face recognition, the color image is not used; instead, an infrared image of the target to be recognized is acquired and used for face recognition, and a three-dimensional image of the target to be recognized is used for face recognition at the same time. The final face recognition result is determined jointly from the face recognition result of the infrared image and that of the three-dimensional image.
Similarly, in some embodiments, the infrared image and the three-dimensional image of the target to be recognized may be acquired simultaneously, and if the brightness of the color image of the target to be recognized is not suitable for face recognition, the infrared image and the three-dimensional image of the target to be recognized may be subjected to face recognition synchronously.
In other embodiments, if the brightness of the color image of the target to be recognized is not suitable for face recognition, face recognition is performed on the infrared image of the target to be recognized first; if it succeeds, face recognition is then performed on the three-dimensional image, and if it fails, face recognition is directly judged to have failed, which improves the efficiency and accuracy of face recognition.
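Putting the two branches together, a hedged sketch of the overall decision flow follows; every helper here (capture_color_image, brightness_suitable, recognize_color, and so on) is hypothetical, standing in for the modules described in the text.

```python
# Illustrative overall flow of method 200; all helper functions are hypothetical.
def recognize(target):
    """Brightness check, then dual-modality recognition with early exit."""
    color = capture_color_image(target)
    if brightness_suitable(color):                   # image brightness judgment model
        first = recognize_color(color)               # first face recognition result
        if not first:
            return False                             # early exit: skip the 3D step
        third = recognize_3d(capture_point_cloud(target))   # third result (3D image)
        return first and third                       # both must succeed
    else:
        second = recognize_infrared(capture_infrared_image(target))
        if not second:
            return False                             # early exit: skip the 3D step
        third = recognize_3d(capture_point_cloud(target))
        return second and third
```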
In the embodiments of the present application, whether the brightness of the color image of the target to be recognized is suitable for face recognition is first judged according to the image brightness judgment model. When the color image is not suitable, the infrared image, which is unaffected by the intensity of ambient light, is used for face recognition instead. This prevents interference from changes in ambient light, so face recognition can still be performed with a high recognition rate whether the ambient light is strong, weak, or otherwise, allowing the technical solution of the embodiments of the present application to be applied to more face recognition scenarios.
Meanwhile, whether face recognition is performed with a color image or with an infrared image, a three-dimensional image of the target to be recognized is also acquired to obtain the three-dimensional structural features of the face for live face recognition, which prevents attacks by non-live faces (such as a photograph of the user) and improves the accuracy and security of face recognition.
Next, a method for performing face recognition by using a color image, an infrared image, and a three-dimensional image of a target to be recognized in the embodiment of the present application will be described with reference to fig. 5 to 9.
Fig. 5 shows a flow diagram of a method 30 of face recognition using color images. Optionally, the method 30 of face recognition in the embodiments of the present application may be executed by the first face recognition module 300 shown in fig. 6.
As shown in fig. 5, the method 30 for face recognition using color images may include the following steps.
S310: and acquiring a color image of the target to be recognized.
Specifically, the step S310 may be the same as the step S210, and an image capturing device may be used to capture a color image of the target to be recognized.
S320: and performing face detection (face detection) by using the color image to obtain a face area in the color image.
In particular, the first face recognition module 300 may include a first face detection module 301. In some embodiments, the first face detection module 301 may include a convolutional layer network, and in this step, face detection is performed on the input color image based on the convolutional layer network to obtain a face region frame in the color image.
By way of example, the convolutional layer network is a multi-task convolutional neural network (MTCNN), which comprises three convolutional networks: P-Net, R-Net, and O-Net. Each convolutional network includes multiple convolutional layers that extract texture features from the color image, and the network finally outputs the following three parts of results:
The first part: face classification, i.e., whether the color image contains a face. Two 1 × 1 values (expressible as a 1 × 1 × 2 vector) are output, representing respectively the probability that the color image contains a face and the probability that it does not.
The second part: the location of the face region box (bounding box regression). In general, the position of a face region box can be represented by four numbers: the abscissa of its upper-left corner, the ordinate of its upper-left corner, its width, and its height. This part therefore outputs four 1 × 1 values (a 1 × 1 × 4 vector), representing respectively the relative shift of the abscissa of the upper-left corner of the face region box, the relative shift of its ordinate, the error of the box's width, and the error of the box's height.
The third part: the positions of 5 face key points (facial landmark localization). The 5 key points are: the position of the left eye, the right eye, the nose, the left mouth corner, and the right mouth corner. Each key point requires an abscissa and an ordinate, so this part outputs ten 1 × 1 values (a 1 × 1 × 10 vector).
It is to be understood that, in the embodiments of the present application, the first face detection module 301 may perform face detection on the input color image with a convolutional layer network, or with any deep learning or machine learning model from the related face detection art; the embodiments of the present application do not specifically limit the face detection model.
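For illustration, the same three-part output can be obtained with the open-source facenet-pytorch MTCNN implementation (an assumption for demonstration; the patent does not name any library):

```python
# Illustrative use of one public MTCNN implementation (facenet-pytorch);
# the patent's own detection module is not published.
from facenet_pytorch import MTCNN
from PIL import Image

mtcnn = MTCNN(keep_all=True)                    # P-Net / R-Net / O-Net cascade
img = Image.open("face.jpg")                    # hypothetical input image
boxes, probs, landmarks = mtcnn.detect(img, landmarks=True)
# boxes:     (n, 4)    face region boxes [x1, y1, x2, y2]
# probs:     (n,)      probability that each box contains a face
# landmarks: (n, 5, 2) left eye, right eye, nose, left/right mouth corners
```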
S330: and carrying out face alignment on the face area.
Optionally, the first face recognition module 300 further includes a first face alignment module 302. In this step, after the face region box in the color image and the key point positions of the face in it are obtained, the first face alignment module 302 performs a face alignment (face alignment) operation to adjust the position and size of the face region box. Specifically, an affine transformation can be used to uniformly calibrate the face region box according to the key points, eliminating as far as possible the errors caused by different poses.
After the face alignment processing, the positions of the facial features in the face region box essentially coincide with those in the sample face region box, and the size of the face region box matches that of the sample face region box, which facilitates the subsequent face recognition process.
It is to be understood that the face alignment in this step may use any one of face alignment methods in the related art, and this is not particularly limited in this embodiment of the application.
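As a concrete illustration of the affine-transform calibration described above, the following minimal sketch aligns a detected face to a fixed landmark template using OpenCV; the 112 × 112 template coordinates are an assumption of this sketch (one common convention), not values given by the patent:

```python
import cv2
import numpy as np

# Hypothetical template: canonical positions of the 5 key points in a
# 112 x 112 face crop (left eye, right eye, nose, mouth corners).
TEMPLATE = np.float32([[38.3, 51.7], [73.5, 51.5], [56.0, 71.7],
                       [41.5, 92.4], [70.7, 92.2]])

def align_face(image, landmarks, size=(112, 112)):
    """Warp `image` so its 5 detected key points match the template."""
    src = np.float32(landmarks)                        # 5 x 2 detected points
    M, _ = cv2.estimateAffinePartial2D(src, TEMPLATE)  # similarity transform
    return cv2.warpAffine(image, M, size)              # calibrated face crop
```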
S340: and extracting the face features in the face region to form a first face feature vector.
Optionally, the first face recognition module 300 further includes a first face feature extraction module 303. As an example, the first facial feature extraction module 303 may be a deep convolutional neural network.
In this step, a first face feature extraction module 303, for example, a deep convolutional neural network, may be used to extract face features from the face region image in the uniformly calibrated face region frame to form a feature vector. In other words, different face region images are converted into corresponding vectors through the processing of the step, so as to perform face recognition.
In general, for two face region images of the same person, the euclidean distance between the corresponding vectors should be relatively small. For two face region images of different persons, the euclidean distance between the corresponding vectors should be relatively large.
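This distance property can be illustrated with stand-in vectors (real embeddings would come from the feature extraction module; the numbers below are made up for demonstration):

```python
import numpy as np

def euclidean(a, b):
    """Euclidean distance between two face feature vectors."""
    return float(np.linalg.norm(np.asarray(a, float) - np.asarray(b, float)))

same_person_a = np.array([0.11, 0.52, -0.33])   # two captures of one person
same_person_b = np.array([0.12, 0.50, -0.35])
other_person  = np.array([-0.40, 0.10, 0.66])   # a different person

print(euclidean(same_person_a, same_person_b))  # relatively small
print(euclidean(same_person_a, other_person))   # relatively large
```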
It is understood that the first facial feature extraction module 303 in this step may be a network model obtained after neural network training of a color image sample, where relevant parameters in a layered structure of a convolutional layer, a pooling layer, and an excitation layer are target parameters after training, and the structure and parameters of the first facial feature extraction module 303 are not limited in this embodiment of the application.
It is further understood that, in addition to the first face feature extraction module 303 in this step, the other deep learning modules in the first face recognition module 300, for example the first face detection module 301, are also model modules obtained after training.
S350: and performing face recognition (face recognition) according to the first face feature vector.
As an example, after feature extraction by the deep convolutional neural network described above, a first face feature vector of 1 × 1 × N dimensions is formed, where N may be, but is not limited to, 512.
In this step, the first face feature vector is compared with at least one first template feature vector in a face template library to obtain a first face recognition result.
Optionally, the first face feature vector is compared with the at least one first template feature vector to obtain at least one similarity value. If the highest similarity value is greater than or equal to a preset threshold, the captured face image and the face image corresponding to that template feature vector are considered to belong to the same person, and the first face recognition result is that recognition succeeded. Otherwise, if the highest similarity value is less than the preset threshold, the two are not considered the same person, and the first face recognition result is that recognition failed.
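One possible realization of this comparison is sketched below; cosine similarity and the 0.75 threshold are assumptions chosen for illustration, since the embodiment does not fix the similarity measure or the threshold value:

```python
import numpy as np

def cosine_similarity(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def recognize(query_vec, template_vecs, threshold=0.75):
    """Compare a face feature vector against the face template library.

    Returns (success, best_index): success is True when the highest
    similarity value reaches the preset threshold, i.e. the captured face
    and the matched template are judged to be the same person.
    """
    sims = [cosine_similarity(query_vec, t) for t in template_vecs]
    best = int(np.argmax(sims))
    return sims[best] >= threshold, best
```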
Specifically, the at least one first template feature vector in the face template library may be a feature vector extracted by a face recognition module from a color image of the target user's face acquired when the target user registered; optionally, that face recognition module may be the first face recognition module 300.
It can be understood that, in the registration stage, a plurality of color images of the face of the target user at different angles and under different environments can be collected, and the plurality of color images are processed into a plurality of first template feature vectors one by one to form a face template library.
It will also be appreciated that the processing of the color image into the first template feature vector during the registration stage may be similar to the method 30 for face recognition using color images, with the only difference being that the color image processing during the registration stage does not include the final feature vector comparison process. The network model used in the face recognition stage may be the same as the network model used in the enrollment stage.
The method 30 for face recognition using color images was described above in conjunction with fig. 5 and 6, and it will be understood that the method for face recognition using infrared images is similar to the method for face recognition using color images, and the difference is only in the type of images processed.
Based on this, the method for performing face recognition by using infrared images in the embodiment of the present application may be executed according to the second face recognition module.
In some embodiments, the second face recognition module is used for extracting face features in the infrared image of the target to be recognized to obtain a second face feature vector; and comparing the second face feature vector with at least one second template feature vector to obtain a second face recognition result.
Optionally, the at least one second template feature vector is a feature vector extracted by a second face recognition module from the face of the target user.
As an example, the extracting, by the second face recognition module, the face features in the infrared image to obtain a second face feature vector may include:
carrying out face detection on the infrared image of the target to be recognized to obtain a face area infrared image in the infrared image;
adjusting the face region infrared image to a target size, and adjusting the positions of the facial features in the face region infrared image to target positions;
and extracting the face features of the infrared image of the face region to obtain the second face feature vector.
It is understood that, in the embodiment of the present application, the architecture of the second face recognition module may refer to the architecture of the first face recognition module 300, but the model parameters may be model parameters obtained by training infrared image samples, which may be different from the model parameters of the first face recognition module 300.
Specifically, the face recognition method in the embodiment of the present application may also refer to the above method for performing face recognition on a color image, and the related technical scheme may refer to the above related description, which is not described herein again.
A method 40 for face recognition using three-dimensional images is described below with reference to fig. 7 and 8. Optionally, the method 40 for face recognition in the embodiment of the present application may be executed according to a third face recognition module 400 shown in fig. 9.
As shown in fig. 7, the method 40 for face recognition using three-dimensional images may include the following steps.
S410: and acquiring point cloud data of the target to be identified.
S420: and converting the point cloud data into a three-dimensional image, wherein the three-dimensional image comprises a plane image of the depth and/or angle information of the target surface to be identified.
S430: and carrying out face recognition on the three-dimensional image.
Specifically, in the embodiment of the application, after the point cloud data of the target to be identified is acquired by the point cloud acquisition device, the point cloud data is sent to the processing unit, so that the processing unit acquires the point cloud data of the target to be identified.
Optionally, in this embodiment of the application, the three-dimensional image is an image obtained by converting the point cloud data, and may be a planar image carrying depth and/or angle information of the surface of the target to be recognized. It is understood that if the three-dimensional image includes only the depth information of the surface of the target to be recognized, it may be understood as a depth image of the target to be recognized.
Optionally, if the three-dimensional image includes depth and angle information of the surface of the target to be recognized, the three-dimensional image may be a multi-channel image, each pixel in the image is represented by a plurality of values, and optionally, the plurality of values in each pixel include depth and angle information of the surface of the target to be recognized.
As an example, as shown in fig. 8, the process of converting the point cloud data into the three-dimensional image in the above step S420 may include the following steps. Alternatively, this step S420 may be performed by the point cloud data conversion module 401 in fig. 9.
Firstly, a normal vector of each point cloud point in the point cloud data is calculated.
Specifically, in the embodiment of the present application, the normal vector of each point cloud point in the point cloud data includes angle information of the surface of the target to be identified.
Optionally, in some embodiments, the process of calculating the normal vector of each point cloud point in the point cloud data is as follows:
S421: and aligning the point cloud data with the point cloud template data to obtain standard point cloud data.
Specifically, the point cloud template data may be point cloud data of a template face, and the point cloud data is aligned with the point cloud template data, that is, positions of facial features in the point cloud data are aligned with positions of the facial features in the point cloud template data, so that the positions of the facial features in the aligned standard point cloud data are located at target positions. Through the alignment step, errors caused by different postures and directions of different human faces can be eliminated, and the subsequent human face recognition process is conveniently executed.
In some embodiments, the point cloud data of the target to be identified may be registered with the point cloud template data by an Iterative Closest Point (ICP) algorithm. Of course, in addition to the registration of the point cloud data and the point cloud template data by using the ICP algorithm, in the embodiment of the present application, other algorithms in the related art may be used to align the point cloud data and the point cloud template data to obtain the standard point cloud data, which is not specifically limited in the embodiment of the present application.
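For instance, with the Open3D library the ICP registration step could be sketched as follows; the correspondence distance and the point-to-point variant are illustrative assumptions:

```python
import numpy as np
import open3d as o3d

def align_to_template(points, template_points, max_corr_dist=5.0):
    """Register a raw face point cloud (N x 3) to the template cloud by ICP."""
    src = o3d.geometry.PointCloud(
        o3d.utility.Vector3dVector(np.asarray(points, float)))
    dst = o3d.geometry.PointCloud(
        o3d.utility.Vector3dVector(np.asarray(template_points, float)))
    result = o3d.pipelines.registration.registration_icp(
        src, dst, max_corr_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    src.transform(result.transformation)   # apply the estimated rigid motion
    return np.asarray(src.points)          # the standard point cloud data
```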
S422: and carrying out meshing processing on the standard point cloud data to obtain meshed point cloud data.
Specifically, meshes are mainly used in computer graphics, and come in many kinds, such as triangle meshes and quadrilateral meshes. For example, a triangle mesh connects groups of three points into planar triangular faces, each of which shares edges with its neighboring triangles. At present, most mesh processing in computer graphics is based on triangle meshes, which are widely used in graphics and three-dimensional modeling to approximate the surface of a complex object.
As an example, a triangulation (triangulation) algorithm may be used to triangulate the standard point cloud data, resulting in triangulated point cloud data. The triangulation algorithm includes, but is not limited to, greedy project triangulation.
Specifically, the point cloud points are projected along their normals into a two-dimensional coordinate plane, and the projected points are then triangulated in that plane to obtain the topological connection relation of each point; a spatial region-growing algorithm based on Delaunay triangulation is used for the in-plane triangulation. Finally, the topological connections among the original three-dimensional point cloud points are determined according to the connection relations of the projected points in the plane, thereby obtaining the meshed point cloud data.
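A much-simplified version of this project-then-triangulate idea can be written with SciPy's Delaunay triangulation; projecting along the Z axis (instead of along the per-point normals used above) is an assumption that holds roughly for a front-facing face cloud:

```python
import numpy as np
from scipy.spatial import Delaunay

def triangulate_point_cloud(points):
    """Mesh a roughly front-facing point cloud (N x 3, camera along Z).

    The points are projected onto the XY plane, triangulated there, and the
    in-plane connectivity is reused for the original 3D points.
    """
    tri = Delaunay(np.asarray(points, float)[:, :2])
    return tri.simplices   # (M x 3) vertex indices of the mesh triangles
```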
After the gridded point cloud data is obtained through the above steps, the following steps are executed:
S423: in the gridded point cloud data, the average value of the normal vectors of the grid surfaces adjacent to each point cloud point is calculated as the normal vector of that point.
In this step, a normal vector is obtained for each point cloud point. The normal vector is a three-dimensional vector whose components along the X, Y and Z directions describe the orientation of the local surface, and it may be used to describe the degree of curvature of the surface of the target to be recognized.
In addition, the average value of the normal vectors of the grid surface adjacent to each cloud point is used as the normal vector of each cloud point, so that the change trend of the normal vectors of the cloud points adjacent to each other can be smoothed.
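A minimal sketch of steps S422-S423 follows (face normals from cross products, then per-point averaging); real implementations may additionally weight faces by area or enforce a consistent orientation:

```python
import numpy as np

def vertex_normals(points, faces):
    """Average the normals of all mesh faces adjacent to each point (S423)."""
    points = np.asarray(points, float)
    normals = np.zeros_like(points)
    for i, j, k in faces:
        # Face normal from the cross product of two triangle edges.
        n = np.cross(points[j] - points[i], points[k] - points[i])
        n /= (np.linalg.norm(n) + 1e-12)
        for v in (i, j, k):               # accumulate on each adjacent vertex
            normals[v] += n
    # Normalizing the accumulated sum is equivalent to averaging.
    normals /= (np.linalg.norm(normals, axis=1, keepdims=True) + 1e-12)
    return normals
```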
S424: and converting to obtain a three-dimensional image according to the normal vector and the depth value of each cloud point.
In some embodiments, the gridded point cloud data is projected into the two-dimensional coordinate plane according to the depth values, and if the projections of some surfaces overlap, the surface with the closer (smaller) depth value is selected for projection.
Then, values in the X direction and the Y direction of the normal vector of each point cloud point in the gridded point cloud data and the depth value of each point cloud point are respectively used as three-channel values of each pixel in the plane image so as to obtain the three-dimensional image through conversion.
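Putting S424 together, the following sketch rasterizes each point into a three-channel image whose channels are (normal-X, normal-Y, depth); keeping the closer surface on overlap follows the description above, while the resolution and the min-max pixel mapping are assumptions of this sketch:

```python
import numpy as np

def point_cloud_to_3d_image(points, normals, width=128, height=128):
    """Rasterize (x, y, z) points into an H x W x 3 image: (nx, ny, depth)."""
    points = np.asarray(points, float)
    img = np.zeros((height, width, 3), dtype=np.float32)
    depth_buf = np.full((height, width), np.inf, dtype=np.float32)

    # Map x/y coordinates to pixel indices by min-max normalization.
    xy = points[:, :2]
    span = xy.max(0) - xy.min(0) + 1e-12
    uv = (xy - xy.min(0)) / span
    cols = np.clip((uv[:, 0] * (width - 1)).astype(int), 0, width - 1)
    rows = np.clip((uv[:, 1] * (height - 1)).astype(int), 0, height - 1)

    for r, c, p, n in zip(rows, cols, points, normals):
        if p[2] < depth_buf[r, c]:          # keep the closer surface
            depth_buf[r, c] = p[2]
            img[r, c] = (n[0], n[1], p[2])  # X, Y of the normal + depth
    return img
```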
According to the technical scheme of the embodiment of the application, the three-dimensional point cloud data is preprocessed to obtain the planar three-dimensional image, so that the subsequent face recognition process can be simplified while the three-dimensional information of the target to be recognized is kept, and the accuracy and the efficiency of face recognition are comprehensively improved.
Compared with performing face recognition directly on a depth map, the three-dimensional image in the embodiment of the present application contains, in addition to the depth information of the surface of the target to be recognized, the angle-change information of that surface, i.e., its degree of curvature. It therefore retains more of the target's three-dimensional spatial features, which can further improve the accuracy of subsequent face recognition.
Specifically, in this embodiment of the application, the three-dimensional image formed after the processing in step S420 is a planar image, so the method for performing face recognition on the three-dimensional image in step S430 may be similar to the method 30 for performing face recognition by using a color image shown in fig. 5 and 6, the difference being only the type of image processed.
Based on this, the method for performing face recognition by using three-dimensional images in the embodiment of the present application may be executed according to the third face detection module 402 and the third face feature extraction module 403 in the third face recognition module 400 in fig. 9.
In some embodiments, the third face detection module 402 and the third face feature extraction module 403 are used to extract the face features in the three-dimensional image of the target to be recognized to obtain a third face feature vector; and comparing the third face feature vector with at least one third template feature vector to obtain a third face recognition result.
Optionally, at least one of the third template feature vectors is a feature vector extracted from the face of the target user by the third face recognition module 400.
As an example, the extracting the face features in the three-dimensional image by using the third face detection module 402 and the third face feature extraction module 403 to obtain the third face feature vector may include:
performing face detection on the three-dimensional image by using a third face detection module 402 to obtain a face region image in the three-dimensional image;
the third facial feature extraction module 403 extracts facial features from the facial region image to obtain the third facial feature vector.
It is understood that, in the embodiment of the present application, the architectures of the third face detection module 402 and the third face feature extraction module 403 may refer to the architectures of the first face detection module 301 and the first face feature extraction module 303 in the first face recognition module 300, but the model parameters may be model parameters obtained through training of the point cloud data samples, which may be different from the model parameters of the first face recognition module 300.
Specifically, the face recognition method in the embodiment of the present application may also refer to the above method for performing face recognition on a color image, and the related technical scheme may refer to the above related description, which is not described herein again.
The embodiments of the face recognition method in the present application are described in detail above with reference to fig. 3 to 9, and the embodiments of the face recognition apparatus in the present application are described in detail below with reference to fig. 10 to 12.
Fig. 10 is a schematic block diagram of a face recognition apparatus 200 according to an embodiment of the present application, where the face recognition apparatus 200 corresponds to the face recognition method 20.
As shown in fig. 10, a face recognition apparatus 200 includes:
an acquisition unit 210 configured to acquire a color image of an object to be recognized;
an image brightness judging unit 220, configured to judge whether the brightness of the color image is suitable for face recognition according to an image brightness judging model, where the image brightness judging model is obtained by training a plurality of color image samples with different brightness;
and the face recognition unit 230 is configured to determine whether to perform face recognition by using the color image of the target to be recognized according to the determination result.
It is understood that, in the embodiment of the present application, the image brightness determination model may be the same as the image brightness determination model mentioned in the above method embodiment, and the related technical solution may be referred to the above related description.
In some possible embodiments, the image brightness determination model includes: a machine learning model or a deep learning model.
In some possible embodiments, the image brightness determination unit is further configured to:
according to the sample image, carrying out neural network training to obtain an image brightness judgment model;
the sample image comprises face images under different lighting conditions.
In some possible embodiments, the sample image includes a positive sample image and a negative sample image, the positive sample image is an image in which the result of face recognition is consistent with the real result, and the negative sample image is an image in which the result of face recognition is inconsistent with the real result.
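Purely as an illustration (the patent leaves the concrete model open), a small binary classifier for "brightness suitable / unsuitable" could be trained along these lines in PyTorch; the architecture, optimizer, and label convention are all assumptions of this sketch:

```python
import torch
import torch.nn as nn

# Tiny CNN judging whether a color image's brightness suits face recognition.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 2),          # two classes: suitable / unsuitable
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One training step.

    images: (B, 3, H, W) color samples under varied lighting conditions.
    labels: 1 for positive samples (recognition matched the true result),
            0 for negative samples (recognition contradicted the true result).
    """
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```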
In some possible embodiments, the face recognition unit 230 is specifically configured to:
if the judgment result is suitable, adopting the color image of the target to be recognized and the three-dimensional image of the target to be recognized to perform face recognition;
and if the judgment result is not suitable, performing face recognition by using the infrared image of the target to be recognized and the three-dimensional image of the target to be recognized.
If the judgment result is suitable, optionally, the face recognition unit 230 is specifically configured to:
carrying out face recognition on the color image of the target to be recognized to obtain a first face recognition result;
carrying out face recognition on the three-dimensional image of the target to be recognized to obtain a third face recognition result;
if the first face recognition result and the third face recognition result are both successful, judging that the face recognition result of the target to be recognized is successful;
and if at least one of the first face recognition result and the third face recognition result is failure, judging that the face recognition result of the target to be recognized is failed.
If the judgment result is unsuitable, optionally, the face recognition unit 230 is specifically configured to:
carrying out face recognition on the infrared image of the target to be recognized to obtain a second face recognition result;
carrying out face recognition on the three-dimensional image of the target to be recognized to obtain a third face recognition result;
if the second face recognition result and the third face recognition result are both successful, judging that the face recognition result of the target to be recognized is successful;
and if at least one of the second face recognition result and the third face recognition result is failure, judging that the face recognition result of the target to be recognized is failed.
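In either branch, the decision rule stated above reduces to a logical AND of the two branch results, e.g.:

```python
def fuse_results(result_a: bool, result_b: bool) -> bool:
    """Overall recognition succeeds only if both branch results succeed
    (color + 3D when brightness is suitable, infrared + 3D otherwise)."""
    return result_a and result_b
```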
Fig. 11 shows a schematic structural block diagram of a face recognition unit 230 according to an embodiment of the present application. As shown in fig. 11, the face recognition unit 230 may include: a first face recognition module 231, a second face recognition module 232, and a third face recognition module 233.
It is understood that, in the embodiment of the present application, the first face recognition module 231, the second face recognition module 232, and the third face recognition module 233 may be the same as the first face recognition module 300, the second face recognition module, and the third face recognition module 400 mentioned in the foregoing method embodiments, and related technical solutions may be referred to in the foregoing related description.
Optionally, the third face recognition module 233 is specifically configured to: acquiring point cloud data of the target to be identified; converting the point cloud data into the three-dimensional image, wherein the three-dimensional image is a plane image comprising depth and angle information of the surface of the target to be recognized; and carrying out face recognition on the three-dimensional image to obtain a third face recognition result.
In some possible embodiments, the third face recognition module 233 is specifically configured to:
calculating a normal vector of each point cloud point in the point cloud data;
and converting to obtain the three-dimensional image according to the normal vector and the depth value of each cloud point, wherein the normal vector of each cloud point comprises angle information of the surface of the target to be recognized.
Optionally, the third face recognition module 233 is specifically configured to:
aligning the point cloud data with the point cloud template data to obtain standard point cloud data;
performing meshing processing on the standard point cloud data to obtain meshed point cloud data;
in the gridding point cloud data, the average value of normal vectors of grid surfaces adjacent to each point cloud point is calculated, and the normal vector of each point cloud point is obtained.
Optionally, the third face recognition module 233 is configured to:
projecting the gridded point cloud data into a two-dimensional coordinate plane according to the depth value;
and respectively taking the values in the X direction and the Y direction of the normal vector of each point cloud point in the gridding point cloud data and the depth value as three-channel values of each pixel in the three-dimensional image so as to obtain the three-dimensional image through conversion.
Further, the third face recognition module 233 is specifically configured to:
extracting the face features in the three-dimensional image to obtain a third face feature vector;
comparing the third face feature vector with at least one third template feature vector to obtain a third face recognition result;
and the at least one third template feature vector is a feature vector corresponding to the three-dimensional face image of the target user.
In addition, as shown in fig. 11, in the embodiment of the present application, the first face recognition module 231 may be specifically configured to:
extracting the face features in the color image to obtain a first face feature vector;
comparing the first face feature vector with at least one first template feature vector to obtain a first face recognition result;
and the at least one first template feature vector is a feature vector corresponding to the face color image of the target user.
In some possible embodiments, the first face recognition module 231 is specifically configured to:
carrying out face detection on the color image to obtain a face area color image in the color image;
adjusting the color image of the face region to a target size, and adjusting the positions of the facial features in the color image of the face region to target positions;
and extracting the face features of the color image of the face region to obtain the first face feature vector.
As shown in fig. 11, in the embodiment of the present application, the second face recognition module 232 may be specifically configured to:
extracting the face features in the infrared image to obtain a second face feature vector;
comparing the second face feature vector with at least one second template feature vector to obtain a second face recognition result;
and the at least one second template feature vector is a feature vector corresponding to the face infrared image of the target user.
In some possible embodiments, the second face recognition module 232 is configured to:
carrying out face detection on the infrared image to obtain a face area infrared image in the infrared image;
adjusting the face region infrared image to a target size, and adjusting the positions of the facial features in the face region infrared image to target positions;
and extracting the face features of the infrared image of the face region to obtain a second face feature vector.
Fig. 12 is a schematic hardware structure diagram of a face recognition apparatus according to an embodiment of the present application. The face recognition apparatus 300 shown in fig. 12 (the face recognition apparatus 300 may be a computer device) includes a memory 310, a processor 320, a communication interface 330, and a bus 340. The memory 310, the processor 320 and the communication interface 330 are connected to each other through a bus 340.
The memory 310 may be a Read Only Memory (ROM), a static memory device, a dynamic memory device, or a Random Access Memory (RAM). The memory 310 may store a program, and the processor 320 and the communication interface 330 are used to perform the steps of the method of face recognition of the embodiments of the present application when the program stored in the memory 310 is executed by the processor 320.
The processor 320 may be a general Central Processing Unit (CPU), a microprocessor, an Application Specific Integrated Circuit (ASIC), a Graphics Processing Unit (GPU), or one or more integrated circuits, and is configured to execute related programs to implement functions required to be executed by modules in the face recognition device according to the embodiment of the present application, or to execute the face recognition method according to the embodiment of the present application.
Processor 320 may also be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the face recognition method of the present application may be completed by integrated logic circuits of hardware in the processor 320 or by instructions in the form of software. The processor 320 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components, and may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the methods disclosed in connection with the embodiments of the present application may be executed directly by a hardware decoding processor, or by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM or EEPROM, or a register. The storage medium is located in the memory 310; the processor 320 reads the information in the memory 310 and, in combination with its hardware, completes the functions required of the modules included in the face recognition device of the embodiments of the present application, or executes the face recognition method of the method embodiments of the present application.
Communication interface 330 enables communication between apparatus 300 and other devices or communication networks using transceiver means such as, but not limited to, a transceiver. For example, input data may be obtained through the communication interface 330.
Bus 340 may include a path that transfers information between various components of device 300 (e.g., memory 310, processor 320, communication interface 330).
It should be noted that although the apparatus 300 shown in fig. 12 shows only the memory 310, the processor 320, the communication interface 330 and the bus 340, in a specific implementation, those skilled in the art should understand that the apparatus 300 also includes other devices necessary for normal operation. Also, those skilled in the art will appreciate that the apparatus 300 may further include hardware components implementing other additional functions, according to particular needs. Furthermore, those skilled in the art will appreciate that the apparatus 300 may also include only the components necessary to implement the embodiments of the present application, and need not include all of the components shown in fig. 12.
It should be understood that the face recognition apparatus 300 may correspond to the face recognition apparatus 200 in fig. 10, the functions of the image brightness determination unit 220 and the face recognition unit 230 in the face recognition apparatus 200 may be implemented by the processor 320, and the function of the acquisition unit 210 may be implemented by the communication interface 330. To avoid repetition, detailed description is appropriately omitted here.
The embodiment of the application also provides a processing device, which comprises a processor and an interface; the processor is used for executing the face recognition method in any one of the above method embodiments.
It should be understood that the processing means may be a chip. For example, the processing device may be a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), a system on chip (SoC), a central processing unit (CPU), a network processor (NP), a digital signal processor (DSP), a microcontroller unit (MCU), a programmable logic device (PLD), or another integrated chip.
The embodiment of the application also provides a platform system which comprises the face recognition device.
The embodiments of the present application also provide a computer-readable medium, on which a computer program is stored, which, when executed by a computer, implements the method of any of the above-mentioned method embodiments.
The embodiment of the present application further provides a computer program product, and the computer program product implements the method of any one of the above method embodiments when executed by a computer.
The embodiment of the application also provides electronic equipment which can comprise the face recognition device in the embodiment of the application.
For example, the electronic device is a smart door lock, a mobile phone, a computer, an access control system, or the like, which requires face recognition. The face recognition device comprises software and hardware devices used for face recognition in electronic equipment.
Optionally, one or more of an infrared image acquisition device, a color image acquisition device and a point cloud acquisition device may be further included in the electronic device.
It is understood that the infrared image collecting device, the color image collecting device, and the point cloud collecting device may be any one of collecting devices in the related art, and this is not particularly limited in this embodiment of the present application.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
As used in this specification, the terms "unit," "module," "system," and the like are intended to refer to a computer-related entity, either hardware, firmware, a combination of hardware and software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a computing device and the computing device can be a component. One or more components can reside within a process and/or thread of execution and a component can be localized on one computer and/or distributed between 2 or more computers. In addition, these components can execute from various computer readable media having various data structures stored thereon. The components may communicate by way of local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from two components interacting with another component in a local system, distributed system, and/or across a network such as the internet with other systems by way of the signal).
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (36)

1. A method of face recognition, comprising:
acquiring a color image of a target to be identified;
judging whether the brightness of the color image is suitable for face recognition or not according to an image brightness judgment model, wherein the image brightness judgment model is obtained by training a plurality of color image samples with different brightness;
and determining whether to adopt the color image of the target to be recognized for face recognition according to the judgment result.
2. The method according to claim 1, wherein the determining whether to perform face recognition by using the color image of the target to be recognized according to the determination result comprises:
if the judgment result is suitable, adopting the color image of the target to be recognized and the three-dimensional image of the target to be recognized to perform face recognition;
and if the judgment result is not suitable, performing face recognition by adopting the infrared image of the target to be recognized and the three-dimensional image of the target to be recognized.
3. The method according to claim 2, wherein the face recognition of the color image of the target to be recognized and the three-dimensional image of the target to be recognized comprises:
carrying out face recognition on the color image of the target to be recognized to obtain a first face recognition result;
carrying out face recognition on the three-dimensional image of the target to be recognized to obtain a third face recognition result;
if the first face recognition result and the third face recognition result are both successful, judging that the face recognition result of the target to be recognized is successful;
and if at least one of the first face recognition result and the third face recognition result is failure, judging that the face recognition result of the target to be recognized is failed.
4. The method according to claim 2, wherein the performing face recognition on the infrared image of the target to be recognized and the three-dimensional image of the target to be recognized comprises:
carrying out face recognition on the infrared image of the target to be recognized to obtain a second face recognition result;
carrying out face recognition on the three-dimensional image of the target to be recognized to obtain a third face recognition result;
if the second face recognition result and the third face recognition result are both successful, judging that the face recognition result of the target to be recognized is successful;
and if at least one of the second face recognition result and the third face recognition result is failure, judging that the face recognition result of the target to be recognized is failed.
5. The method according to claim 3 or 4, wherein the performing face recognition on the three-dimensional image of the target to be recognized to obtain a third face recognition result comprises:
acquiring point cloud data of the target to be identified;
converting the point cloud data into the three-dimensional image, wherein the three-dimensional image is a plane image comprising depth and angle information of the surface of the target to be recognized;
and carrying out face recognition on the three-dimensional image to obtain a third face recognition result.
6. The method of claim 5, wherein said converting the point cloud data into the three-dimensional image comprises:
calculating a normal vector of each point cloud point in the point cloud data;
and converting to obtain the three-dimensional image according to the normal vector and the depth value of each cloud point, wherein the normal vector of each cloud point comprises angle information of the surface of the target to be identified.
7. The method of claim 6, wherein the computing a normal vector for each cloud point in the point cloud data comprises:
aligning the point cloud data with the point cloud template data to obtain standard point cloud data;
performing meshing processing on the standard point cloud data to obtain meshed point cloud data;
and calculating the average value of normal vectors of grid surfaces adjacent to each cloud point in the gridded point cloud data to obtain the normal vector of each cloud point.
8. The method according to claim 6 or 7, wherein the converting to obtain the three-dimensional image according to the normal vector and the depth value of each point cloud point comprises:
projecting the gridded point cloud data into a two-dimensional coordinate plane according to the depth value;
and respectively taking the values in the X direction and the Y direction of the normal vector of each point cloud point in the gridded point cloud data and the depth value as three-channel values of each pixel in the three-dimensional image so as to obtain the three-dimensional image through conversion.
9. The method according to any one of claims 5 to 8, wherein the performing face recognition on the three-dimensional image to obtain the third face recognition result comprises:
extracting the face features in the three-dimensional image to obtain a third face feature vector;
comparing the third face feature vector with at least one third template feature vector to obtain a third face recognition result;
and the at least one third template feature vector is a feature vector corresponding to the three-dimensional face image of the target user.
10. The method according to any one of claims 3 to 9, wherein the performing face recognition on the color image of the target to be recognized to obtain a first face recognition result comprises:
extracting the face features in the color image to obtain a first face feature vector;
comparing the first face feature vector with at least one first template feature vector to obtain a first face identification result;
and the at least one first template feature vector is a feature vector corresponding to a face color image of the target user.
11. The method according to claim 10, wherein the extracting the facial features in the color image to obtain a first facial feature vector comprises:
carrying out face detection on the color image to obtain a face area color image in the color image;
adjusting the face region color image to a target size, and adjusting the positions of the facial features in the face region color image to target positions;
and extracting the face features of the face region color image to obtain the first face feature vector.
12. The method according to any one of claims 4 to 11, wherein the performing face recognition on the infrared image of the target to be recognized to obtain a second face recognition result comprises:
extracting the face features in the infrared image to obtain a second face feature vector;
comparing the second face feature vector with at least one second template feature vector to obtain a second face recognition result;
and the at least one second template feature vector is a feature vector corresponding to the face infrared image of the target user.
13. The method of claim 12, wherein the extracting the facial features in the infrared image to obtain a second facial feature vector comprises:
carrying out face detection on the infrared image to obtain a face area infrared image in the infrared image;
adjusting the face region infrared image to a target size, and adjusting the positions of the facial features in the face region infrared image to target positions;
and extracting the face features of the infrared image of the face region to obtain the second face feature vector.
14. The method according to any one of claims 1 to 13, wherein the image brightness determination model comprises: a machine learning model or a deep learning model.
15. The method of claim 14, wherein the image brightness determination model is a neural network model, the method further comprising:
according to the sample image, carrying out neural network training to obtain the image brightness judgment model;
the sample images comprise face images under different lighting conditions.
16. The method according to claim 15, wherein the sample images include a positive sample image and a negative sample image, the positive sample image is an image in which the face recognition result is consistent with the true result, and the negative sample image is an image in which the face recognition result is inconsistent with the true result.
17. An apparatus for face recognition, comprising:
the device comprises an acquisition unit, a recognition unit and a processing unit, wherein the acquisition unit is used for acquiring a color image of a target to be recognized;
the image brightness judging unit is used for judging whether the brightness of the color image is suitable for face recognition or not according to an image brightness judging model, wherein the image brightness judging model is obtained by training a plurality of color image samples with different brightness;
and the face recognition unit is used for determining whether to adopt the color image of the target to be recognized for face recognition according to the judgment result.
18. The apparatus of claim 17, wherein the face recognition unit is configured to:
if the judgment result is suitable, adopting the color image of the target to be recognized and the three-dimensional image of the target to be recognized to perform face recognition;
and if the judgment result is not suitable, performing face recognition by adopting the infrared image of the target to be recognized and the three-dimensional image of the target to be recognized.
19. The apparatus of claim 18, wherein the face recognition unit is configured to:
carrying out face recognition on the color image of the target to be recognized to obtain a first face recognition result;
carrying out face recognition on the three-dimensional image of the target to be recognized to obtain a third face recognition result;
if the first face recognition result and the third face recognition result are both successful, judging that the face recognition result of the target to be recognized is successful;
and if at least one of the first face recognition result and the third face recognition result is failure, judging that the face recognition result of the target to be recognized is failed.
20. The apparatus of claim 18, wherein the face recognition unit is configured to:
carrying out face recognition on the infrared image of the target to be recognized to obtain a second face recognition result;
carrying out face recognition on the three-dimensional image of the target to be recognized to obtain a third face recognition result;
if the second face recognition result and the third face recognition result are both successful, judging that the face recognition result of the target to be recognized is successful;
and if at least one of the second face recognition result and the third face recognition result is failure, judging that the face recognition result of the target to be recognized is failed.
21. The apparatus according to claim 19 or 20, wherein the face recognition unit comprises a third face recognition module,
the third face recognition module is configured to:
acquiring point cloud data of the target to be identified;
converting the point cloud data into the three-dimensional image, wherein the three-dimensional image is a plane image comprising depth and angle information of the surface of the target to be recognized;
and carrying out face recognition on the three-dimensional image to obtain a third face recognition result.
22. The apparatus of claim 21, wherein the third face recognition module is configured to:
calculating a normal vector of each point cloud point in the point cloud data;
and converting to obtain the three-dimensional image according to the normal vector and the depth value of each cloud point, wherein the normal vector of each cloud point comprises angle information of the surface of the target to be identified.
23. The apparatus of claim 22, wherein the third face recognition module is configured to:
aligning the point cloud data with the point cloud template data to obtain standard point cloud data;
performing meshing processing on the standard point cloud data to obtain meshed point cloud data;
and calculating the average value of normal vectors of grid surfaces adjacent to each cloud point in the gridded point cloud data to obtain the normal vector of each cloud point.
24. The apparatus of claim 22 or 23, wherein the third face recognition module is configured to:
projecting the gridded point cloud data into a two-dimensional coordinate plane according to the depth value;
and respectively taking the values in the X direction and the Y direction of the normal vector of each point cloud point in the gridded point cloud data and the depth value as three-channel values of each pixel in the three-dimensional image so as to obtain the three-dimensional image through conversion.
25. The apparatus according to any of claims 21-24, wherein the third face recognition module is configured to:
extracting the face features in the three-dimensional image to obtain a third face feature vector;
comparing the third face feature vector with at least one third template feature vector to obtain a third face recognition result;
and the at least one third template feature vector is a feature vector corresponding to the three-dimensional face image of the target user.
26. The apparatus according to any of claims 19 to 25, wherein the face recognition unit comprises a first face recognition module,
the first face recognition module is configured to:
extracting the face features in the color image to obtain a first face feature vector;
comparing the first face feature vector with at least one first template feature vector to obtain a first face identification result;
and the at least one first template feature vector is a feature vector corresponding to a face color image of the target user.
27. The apparatus of claim 26, wherein the first facial recognition module is configured to:
carrying out face detection on the color image to obtain a face area color image in the color image;
adjusting the face region color image to a target size, and adjusting the positions of the facial features in the face region color image to target positions;
and extracting the face features of the face region color image to obtain the first face feature vector.
28. The apparatus according to any of the claims 20 to 27, wherein the face recognition unit comprises a second face recognition module,
the second face recognition module is configured to:
extracting the face features in the infrared image to obtain a second face feature vector;
comparing the second face feature vector with at least one second template feature vector to obtain a second face recognition result;
and the at least one second template feature vector is a feature vector corresponding to the face infrared image of the target user.
29. The apparatus of claim 28, wherein the second face recognition module is configured to:
carrying out face detection on the infrared image to obtain a face area infrared image in the infrared image;
adjusting the face region infrared image to a target size, and adjusting the positions of the facial features in the face region infrared image to target positions;
and extracting the face features of the infrared image of the face region to obtain the second face feature vector.
30. The apparatus according to any one of claims 17 to 29, wherein the image brightness determination model comprises: a machine learning model or a deep learning model.
31. The apparatus of claim 30, wherein the image brightness determination model is a neural network model, and the image brightness determination unit is further configured to:
according to the sample image, carrying out neural network training to obtain the image brightness judgment model;
the sample images comprise face images under different lighting conditions.
32. The apparatus according to claim 31, wherein the sample image comprises a positive sample image and a negative sample image, the positive sample image is an image whose face recognition result is consistent with the true result, and the negative sample image is an image whose face recognition result is inconsistent with the true result.
33. An electronic device, comprising:
an apparatus for face recognition according to any one of claims 17 to 32.
34. The electronic device of claim 33, further comprising:
a color image acquisition device, an infrared image acquisition device, and a point cloud acquisition device.
35. A computer-readable storage medium for storing program instructions which, when executed by a computer, cause the computer to perform a method of face recognition as claimed in any one of claims 1 to 16.
36. A computer program product containing instructions which, when executed by a computer, cause the computer to carry out the method of face recognition according to any one of claims 1 to 16.
CN202010917898.4A 2020-09-03 2020-09-03 Face recognition method and device and electronic equipment Pending CN112016495A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010917898.4A CN112016495A (en) 2020-09-03 2020-09-03 Face recognition method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN112016495A true CN112016495A (en) 2020-12-01

Family

ID=73516253

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010917898.4A Pending CN112016495A (en) 2020-09-03 2020-09-03 Face recognition method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN112016495A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112802068A (en) * 2020-12-24 2021-05-14 奥比中光科技集团股份有限公司 Device for synchronously acquiring depth, color and spectrum images and identification system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106469301A (en) * 2016-08-31 2017-03-01 北京天诚盛业科技有限公司 The adjustable face identification method of self adaptation and device
CN108090405A (en) * 2016-11-23 2018-05-29 腾讯科技(深圳)有限公司 A kind of face identification method and terminal
CN110458062A (en) * 2019-07-30 2019-11-15 深圳市商汤科技有限公司 Face identification method and device, electronic equipment and storage medium
US20190354746A1 (en) * 2018-05-18 2019-11-21 Beijing Sensetime Technology Development Co., Ltd Method and apparatus for detecting living body, electronic device, and storage medium
CN111095297A (en) * 2019-06-06 2020-05-01 深圳市汇顶科技股份有限公司 Face recognition device and method and electronic equipment

Similar Documents

Publication Publication Date Title
US9818023B2 (en) Enhanced face detection using depth information
US11727661B2 (en) Method and system for determining at least one property related to at least part of a real environment
US11232286B2 (en) Method and apparatus for generating face rotation image
US9747493B2 (en) Face pose rectification method and apparatus
US8942418B2 (en) Method of providing a descriptor for at least one feature of an image and method of matching features
US8971572B1 (en) Hand pointing estimation for human computer interaction
US10846844B1 (en) Collaborative disparity decomposition
US9525862B2 (en) Method for estimating a camera motion and for determining a three-dimensional model of a real environment
JP5417494B2 (en) Image processing method and system
US9679384B2 (en) Method of detecting and describing features from an intensity image
EP3499414B1 (en) Lightweight 3d vision camera with intelligent segmentation engine for machine vision and auto identification
CN110728196B (en) Face recognition method and device and terminal equipment
WO2016107638A1 (en) An image face processing method and apparatus
KR20170092533A (en) A face pose rectification method and apparatus
CN112036339B (en) Face detection method and device and electronic equipment
CN107766864B (en) Method and device for extracting features and method and device for object recognition
Jung et al. Object detection and tracking-based camera calibration for normalized human height estimation
CN112016495A (en) Face recognition method and device and electronic equipment
CN116051736A (en) Three-dimensional reconstruction method, device, edge equipment and storage medium
Niese et al. A stereo and color-based method for face pose estimation and facial feature extraction
CN115482285A (en) Image alignment method, device, equipment and storage medium
CN113837053B (en) Biological face alignment model training method, biological face alignment method and device
Hao et al. Development of 3D feature detection and on board mapping algorithm from video camera for navigation
Betta et al. Metrological characterization of 3D biometric face recognition systems in actual operating conditions
Zheng et al. Highly-automatic MI based multiple 2D/3D image registration using self-initialized geodesic feature correspondences

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination