CN111242097B - Face recognition method and device, computer readable medium and electronic equipment - Google Patents


Info

Publication number
CN111242097B
CN111242097B (application CN202010123695.8A)
Authority
CN
China
Prior art keywords
similarity
face
image
depth
color
Prior art date
Legal status
Active
Application number
CN202010123695.8A
Other languages
Chinese (zh)
Other versions
CN111242097A (en)
Inventor
沈鹏程
李绍欣
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202010123695.8A
Publication of CN111242097A
Application granted
Publication of CN111242097B
Legal status: Active


Classifications

    • G06V 40/161 - Human faces, e.g. facial parts, sketches or expressions: Detection; Localisation; Normalisation
    • G06V 40/168 - Human faces, e.g. facial parts, sketches or expressions: Feature extraction; Face representation
    • G06V 40/172 - Human faces, e.g. facial parts, sketches or expressions: Classification, e.g. identification
    • G06F 18/22 - Pattern recognition; Analysing: Matching criteria, e.g. proximity measures

Abstract

The embodiments of the present application provide a face recognition method and apparatus, a computer-readable medium, and an electronic device. The face recognition method includes: acquiring a color image and a depth image of a face to be recognized; determining a first similarity between the color image and a target face image in a face database and a second similarity between the depth image and the target face image; fusing the first similarity and the second similarity to obtain a fusion similarity; and determining, based on the fusion similarity, whether the face to be recognized matches the target face. According to the technical solution of the embodiments, the similarities between the database images and both the color image and the depth image of the face to be recognized are calculated, a fusion similarity is determined from the two calculated similarities, and whether the face to be recognized matches the target face is judged according to the fusion similarity. Multiple types of features in the face image are thus considered together, which improves the accuracy and comprehensiveness of face recognition.

Description

Face recognition method and device, computer readable medium and electronic equipment
Technical Field
The present application relates to the field of computer technologies, and in particular, to a face recognition method and apparatus, a computer-readable medium, and an electronic device.
Background
With the development of face recognition technology, it has been widely adopted in practical applications such as face payment and access control. As application demands grow, the accuracy required of face recognition algorithms keeps rising. However, face recognition is generally performed using only the color features of an image, and recognition accuracy is low in that case.
Disclosure of Invention
Embodiments of the present application provide a face recognition method and apparatus, a computer-readable medium, and an electronic device, so that the features of a face image can, at least to some extent, be considered comprehensively and the accuracy of face recognition can be improved.
Other features and advantages of the present application will be apparent from the following detailed description, or may be learned by practice of the application.
According to an aspect of an embodiment of the present application, there is provided a face recognition method, including: acquiring a color image and a depth image of a face to be recognized; determining a first similarity between the color image and a target face image in a face database and a second similarity between the depth image and the target face image; fusing the first similarity and the second similarity to obtain a fused similarity; and determining whether the face to be recognized is matched with the target face or not based on the fusion similarity.
According to an aspect of the embodiments of the present application, there is provided a face recognition apparatus, including: an acquisition unit, configured to acquire a color image and a depth image of a face to be recognized; a similarity unit, configured to determine a first similarity between the color image and a target face image in a face database and a second similarity between the depth image and the target face image; a fusion unit, configured to fuse the first similarity and the second similarity to obtain a fusion similarity; and a matching unit, configured to determine whether the face to be recognized matches the target face based on the fusion similarity.
In some embodiments of the present application, based on the foregoing scheme, the fusion unit includes: a first fusion unit, configured to take the first similarity as the fusion similarity if the first similarity is greater than a first threshold and the second similarity is greater than a second threshold; and a second fusion unit, configured to fuse the first similarity and the second similarity to obtain a fusion similarity if the first similarity is greater than the first threshold and the second similarity is less than or equal to a second threshold.
In some embodiments of the present application, based on the foregoing solution, the second fusion unit includes: a first calculating unit, configured to obtain a similarity difference value based on a difference between the second similarity and the second threshold; the second calculation unit is used for obtaining the penalty parameter based on the product between the similarity difference and the penalty coefficient; and the third calculation unit is used for determining the fusion similarity based on the difference between the first similarity and the penalty parameter.
In some embodiments of the present application, based on the foregoing solution, the second fusion unit includes: the construction function unit is used for constructing an increasing similarity fusion function according to the first similarity sample and the second similarity sample; and the function operation unit is used for inputting the first similarity and the second similarity into the similarity fusion function and acquiring the fusion similarity output by the similarity fusion function.
In some embodiments of the present application, based on the foregoing solution, the face recognition apparatus is further configured to: and if the first similarity is smaller than or equal to the first threshold, determining that the face to be recognized is not matched with the target face.
In some embodiments of the present application, based on the foregoing scheme, the similarity unit includes: the preprocessing unit is used for preprocessing the color image to obtain a preprocessed color image and preprocessing the depth image to obtain a preprocessed depth image; the extraction unit is used for extracting color features from the preprocessed color images and extracting depth features from the preprocessed depth images; and the calculating unit is used for calculating the similarity between the color feature and the target color feature of the target face image to obtain the first similarity, and calculating the similarity between the depth feature and the target depth feature of the target face image to obtain the second similarity.
In some embodiments of the present application, based on the foregoing solution, the preprocessing unit includes: the first detection unit is used for detecting a first face area in the color image to obtain a coordinate of the first face area; and the first registration unit is used for registering the first face area based on the coordinates of the first face area to obtain a preprocessed color image formed by the face characteristic points in the color image.
In some embodiments of the present application, based on the foregoing solution, the preprocessing unit includes: the second detection unit is used for detecting a second face area in the depth image to obtain the coordinates of the second face area; and the second registration unit is used for registering the second face area based on the coordinates of the second face area to obtain a single-channel depth map formed by key points of the face in the depth image, and converting the single-channel depth map to obtain the preprocessed depth image.
In some embodiments of the present application, based on the foregoing solution, the second registration unit includes: the conversion unit is used for carrying out normal projection conversion on the single-channel depth map to obtain normal projection maps of a set number of channels; and the restoration unit is used for carrying out image restoration processing on the normal projection drawing to obtain the preprocessed depth image.
In some embodiments of the present application, based on the foregoing scheme, the extraction unit includes: the first extraction unit is used for inputting the preprocessed color image into a first feature extraction network obtained by training, wherein the first feature extraction network is obtained by carrying out neural network supervision training on a labeled face color image; and the first acquisition unit is used for acquiring the color features output by the first feature extraction network.
In some embodiments of the present application, based on the foregoing scheme, the extraction unit includes: the second extraction unit is used for inputting the preprocessed depth image into a second feature extraction network obtained through training, and the second feature extraction network is obtained by carrying out neural network supervision training on the labeled face depth image; and the second acquisition unit is used for acquiring the depth features output by the second feature extraction network.
In some embodiments of the present application, based on the foregoing solution, the calculation unit includes: a fourth calculating unit, configured to calculate a cosine distance or a Euclidean distance between the color feature and the target color feature, and obtain the first similarity from that cosine distance or Euclidean distance; and a fifth calculating unit, configured to calculate a cosine distance or a Euclidean distance between the depth feature and the target depth feature, and obtain the second similarity from that cosine distance or Euclidean distance.
According to an aspect of the embodiments of the present application, there is provided a computer readable medium on which a computer program is stored, the computer program, when executed by a processor, implementing the face recognition method as described in the above embodiments.
According to an aspect of an embodiment of the present application, there is provided an electronic device including: one or more processors; a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the face recognition method as described in the above embodiments.
In the technical scheme provided by some embodiments of the application, the similarity between the color image and the depth image of the face to be recognized and the images in the database is calculated, the fusion similarity is determined based on the two calculated similarities, and finally whether the face to be recognized is matched with the target face is judged according to the fusion similarity, so that various types of features in the face image are comprehensively considered, and the accuracy and the comprehensiveness of the face recognition are improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application. It is obvious that the drawings in the following description are only some embodiments of the application, and that for a person skilled in the art, other drawings can be derived from them without inventive effort. In the drawings:
FIG. 1 shows a schematic diagram of an exemplary system architecture to which aspects of embodiments of the present application may be applied;
FIG. 2 schematically shows a flow diagram of a face recognition method according to an embodiment of the present application;
FIG. 3 schematically shows a flow chart of a similarity determination process according to an embodiment of the present application;
FIG. 4 schematically illustrates a flow diagram for face recognition based on color features and depth features according to an embodiment of the present application;
FIG. 5 schematically illustrates a schematic diagram of fusion similarity based face recognition according to an embodiment of the present application;
FIG. 6 schematically shows a flow chart for determining fusion similarity according to an embodiment of the present application;
FIG. 7 schematically illustrates a flow diagram for determining face matching based on fused similarity according to an embodiment of the present application;
FIG. 8 schematically illustrates a diagram applied to face payment according to an embodiment of the present application;
FIG. 9 schematically illustrates a schematic diagram applied to crowd detection according to an embodiment of the present application;
FIG. 10 schematically illustrates a block diagram of a face recognition apparatus according to an embodiment of the present application;
FIG. 11 illustrates a schematic structural diagram of a computer system suitable for use in implementing the electronic device of an embodiment of the present application.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the application. One skilled in the relevant art will recognize, however, that the embodiments of the present application can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known methods, devices, implementations, or operations have not been shown or described in detail to avoid obscuring aspects of the application.
The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities. I.e. these functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor means and/or microcontroller means.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and operations/steps, nor do they necessarily have to be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
Fig. 1 shows a schematic diagram of an exemplary system architecture to which the technical solution of the embodiments of the present application can be applied.
As shown in fig. 1, the system architecture may include a terminal device (e.g., one or more of a smartphone 101, a tablet computer 102, and a portable computer 103 shown in fig. 1, but may also be a desktop computer, etc.), a network 104, and a server 105. Network 104 is the medium used to provide communication links between terminal devices and server 105. Network 104 may include various connection types, such as wired communication links, wireless communication links, and so forth.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. For example, server 105 may be a server cluster comprised of multiple servers, or the like.
It should be understood that, besides the terminal devices shown in fig. 1, the terminal device may also be a camera, a surveillance camera, an image acquisition device, or the like, which is not limited here.
The terminal devices interact with the server 105 through the network 104 to receive or transmit messages or the like. The server 105 may be a server that provides various services. For example, the terminal device 103 (or the terminal device 101 or 102) uploads a color image and a depth image of a face to be recognized to the server 105, and the server 105 may obtain the color image and the depth image of the face to be recognized, determine a first similarity between the color image and a target face image in the face database, and a second similarity between the depth image and the target face image, fuse the first similarity and the second similarity to obtain a fusion similarity, and determine whether the face to be recognized matches the target face based on the fusion similarity. In the embodiment, the similarity between the color image and the depth image of the face to be recognized and the image in the database is calculated, the fusion similarity is determined based on the two similarities obtained through calculation, and whether the face to be recognized is matched with the target face is judged according to the fusion similarity, so that various types of features in the face image are comprehensively considered, and the accuracy and the comprehensiveness of the face recognition are improved.
It should be noted that the face recognition method provided in the embodiment of the present application is generally executed by the server 105, and accordingly, the face recognition apparatus is generally disposed in the server 105. However, in other embodiments of the present application, the terminal device may also have a similar function to the server, so as to execute the face recognition method provided in the embodiments of the present application.
The implementation details of the technical solution of the embodiment of the present application are set forth in detail below:
fig. 2 shows a flow diagram of a face recognition method according to an embodiment of the present application, which may be performed by a server, which may be the server shown in fig. 1. Referring to fig. 2, the face recognition method at least includes steps S210 to S240, and the following details are introduced:
in step S210, a color image and a depth image of a face to be recognized are acquired.
In one embodiment of the present application, when acquiring a color image and a depth image of a face to be recognized, the color image and the depth image may be captured in real time, or may be acquired from another storage device or a capturing device.
In an application scenario such as face recognition for pedestrians in an outdoor environment or face payment, the color image and the depth image of a face may be captured in real time, for example by a camera or by a Red Green Blue Depth (RGBD) sensor.
In one embodiment of the present application, a color image is an image containing Red Green Blue (RGB) information, for example an image with color information captured by a conventional camera. A depth image is an image containing depth information: it reflects the geometry of the visible surface of a scene by taking the distance (depth) from the image grabber to each point in the scene as the pixel value. The depth image can be transformed into coordinate data and processed as point cloud data; in each image frame provided by the depth data stream, the pixel at a specific (x, y) coordinate represents the distance, within the field of view of the depth sensor, from the closest object at that coordinate to the camera plane.
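As an illustration of the depth-to-point-cloud conversion described above, the following minimal sketch assumes a pinhole camera model whose intrinsics (fx, fy, cx, cy) and depth unit come from the sensor; none of these values are specified in the patent.

    import numpy as np

    def depth_to_point_cloud(depth, fx, fy, cx, cy, depth_scale=0.001):
        """Convert a single-channel depth map (e.g. in millimetres) into an N x 3 point cloud.

        Each pixel value is the distance from the closest object at that (x, y)
        coordinate to the camera plane, as described above.
        """
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel grid
        z = depth.astype(np.float32) * depth_scale       # depth in metres
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
        return points[points[:, 2] > 0]                  # drop pixels with no depth reading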
In step S220, a first similarity between the color image and the target face image in the face database and a second similarity between the depth image and the target face image are determined.
In an embodiment of the present application, after the color image and the depth image are obtained, the present embodiment obtains two similarities by respectively calculating similarities between the color image and the depth image and images in the face database corresponding to the color image and the depth image, so as to identify the identities of faces in the color image and the depth image based on the two similarities.
In an embodiment of the present application, as shown in fig. 3, the process of determining the first similarity between the color image and the target face image in the face database and the second similarity between the depth image and the target face image in step S220 includes the following steps S310 to S330, which are described in detail as follows:
in step S310, the color image is preprocessed to obtain a preprocessed color image, and the depth image is preprocessed to obtain a preprocessed depth image.
In one embodiment of the present application, the color image and the depth image are preprocessed before similarity matching. The preprocessed image is obtained by detecting and locating the face region in the image and then, once the face region is obtained, detecting the part formed by the facial feature points within it.
In an embodiment of the present application, the process of preprocessing the color image to obtain a preprocessed color image in step S310 includes the following steps:
detecting a first face area in the color image to obtain a coordinate of the first face area; and registering the first face area based on the coordinates of the first face area to obtain a preprocessed color image formed by the face characteristic points in the color image.
In an embodiment of the present application, when the color image is preprocessed, the original RGB image is registered against a standard face image to obtain a standardized RGB face image. Optionally, in this embodiment, the face detection and registration method may be a Multi-task Cascaded Convolutional Network (MTCNN) method or a RetinaFace method, which is not described in detail here.
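The registration step can be illustrated with a short sketch. It assumes a detector such as MTCNN or RetinaFace has already returned five facial key points, and aligns the face onto a canonical 112x112 template with a similarity transform; the template coordinates follow a common open-source convention and are not taken from the patent.

    import cv2
    import numpy as np

    # Canonical five-point template (eyes, nose tip, mouth corners) for a 112x112 crop.
    # These coordinates are an assumption borrowed from common practice, not from the patent.
    STANDARD_LANDMARKS = np.float32([
        [38.2946, 51.6963], [73.5318, 51.5014], [56.0252, 71.7366],
        [41.5493, 92.3655], [70.7299, 92.2041]])

    def align_face(image, landmarks, size=(112, 112)):
        """Register a detected face onto the standard template.

        `landmarks` is a 5x2 array of facial key points from a detector such as
        MTCNN or RetinaFace; the detection step itself is omitted here.
        """
        matrix, _ = cv2.estimateAffinePartial2D(np.float32(landmarks), STANDARD_LANDMARKS)
        return cv2.warpAffine(image, matrix, size)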
In an embodiment of the present application, the process of preprocessing the depth image in step S310 to obtain a preprocessed depth image includes the steps of:
detecting a second face area in the depth image to obtain a coordinate of the second face area; and registering the second face area based on the coordinates of the second face area to obtain a single-channel depth map formed by key points of the face in the depth image, and converting the single-channel depth map to obtain a preprocessed depth image.
In one embodiment of the application, when the depth image is preprocessed, a second face area in the depth image is detected based on a standard depth image, and coordinates of the second face area are obtained; and registering the second face area based on the coordinates of the second face area to obtain the information of the face key points in each area in the depth image.
Furthermore, each depth image has a color image corresponding to the depth image, so that the face detection result and the registration result of the corresponding color image can be directly used to determine the corresponding face area in the depth image, and the face area is corrected. In this embodiment, the face detection result includes coordinates of the face region frame, and the registration result includes coordinates of the face key point.
Further, in the process of converting the single-channel depth map to obtain the preprocessed depth image, the method comprises the following steps:
carrying out normal projection conversion on the single-channel depth map to obtain normal projection maps of a set number of channels; and carrying out image restoration processing on the normal projection image to obtain a preprocessed depth image.
Specifically, in this embodiment, the registered depth data is a single-channel depth map or three-dimensional (3D) point cloud data. The single-channel depth map, or its original 3D point cloud data, is converted into a multi-channel two-dimensional (2D) normal projection map through normal projection, and image restoration processing such as hole filling is then performed to obtain the preprocessed depth image. A preprocessed depth image obtained in this way contains rich pixel information, which helps to ensure the accuracy of face recognition.
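A minimal sketch of this conversion follows. The patent does not specify the projection formula or the restoration algorithm, so the sketch derives normals from image-space depth gradients and fills holes with a simple normalized-convolution smoothing; both choices are assumptions.

    import cv2
    import numpy as np

    def depth_to_normal_projection(depth):
        """Convert a registered single-channel face depth map into a 3-channel
        normal projection map, filling holes (zero-depth pixels) first."""
        depth = depth.astype(np.float32)
        hole_mask = depth <= 0                          # pixels with no depth reading
        if hole_mask.any():
            valid = depth.copy()
            valid[hole_mask] = 0.0
            weight = (~hole_mask).astype(np.float32)
            blurred = cv2.GaussianBlur(valid, (15, 15), 0)
            norm = cv2.GaussianBlur(weight, (15, 15), 0)
            depth = np.where(hole_mask, blurred / np.maximum(norm, 1e-6), depth)
        dz_dy, dz_dx = np.gradient(depth)               # image-space depth gradients
        normals = np.dstack([-dz_dx, -dz_dy, np.ones_like(depth)])
        normals /= np.linalg.norm(normals, axis=2, keepdims=True)
        # map each normal component from [-1, 1] into an 8-bit channel
        return ((normals + 1.0) * 127.5).astype(np.uint8)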
In step S320, color features are extracted from the pre-processed color image, and depth features are extracted from the pre-processed depth image.
In an embodiment of the present application, the process of extracting color features from the preprocessed color image in step S320 specifically includes:
inputting the preprocessed color images into a first feature extraction network obtained by training, wherein the first feature extraction network is obtained by carrying out neural network supervision training on the labeled face color images; and acquiring the color features output by the first feature extraction network.
In an embodiment of the application, the first feature extraction network is obtained by training on labeled face color pictures. Face color pictures can be gathered from a large amount of open-source data and do not depend on acquisition with an RGBD sensor, so a face feature extraction network with high recognition performance can be obtained through large-scale training.
In this embodiment, the first feature extraction network may be obtained through neural-network supervised training, and the training algorithm may be based on the open-source InsightFace ArcFace method. During training, a preprocessed color image with identity A is input into the color face feature extraction network to obtain a color face feature vector f_A-RGB; the extracted color face feature vector is compared with the label corresponding to the face image, the parameters of the face feature extraction network are adjusted according to the comparison result, and the first feature extraction network is finally obtained.
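The patent only states that the network is supervised-trained on labeled face color pictures using the open-source ArcFace method. The sketch below shows an additive-angular-margin classification head in that style; the use of PyTorch, the embedding dimension and the margin/scale defaults are assumptions, not values from the patent.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ArcMarginHead(nn.Module):
        """Additive angular margin classification head in the style of ArcFace."""
        def __init__(self, backbone_dim=512, num_identities=10000, s=64.0, m=0.5):
            super().__init__()
            self.weight = nn.Parameter(torch.randn(num_identities, backbone_dim))
            nn.init.xavier_uniform_(self.weight)
            self.s, self.m = s, m

        def forward(self, embeddings, labels):
            # cosine of the angle between each embedding and each identity centre
            cosine = F.linear(F.normalize(embeddings), F.normalize(self.weight))
            theta = torch.acos(cosine.clamp(-1 + 1e-7, 1 - 1e-7))
            # add the angular margin only to the logit of the true identity
            target = F.one_hot(labels, cosine.size(1)).bool()
            logits = torch.where(target, torch.cos(theta + self.m), cosine)
            return F.cross_entropy(self.s * logits, labels)

During training, the loss returned by this head for a batch of f_A-RGB style embeddings and their identity labels would be back-propagated through both the head and the backbone feature extractor, which plays the role of the first feature extraction network.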
In an embodiment of the present application, the process of extracting depth features from the preprocessed depth image in step S320 specifically includes:
inputting the preprocessed depth image into a second feature extraction network obtained through training, wherein the second feature extraction network is obtained by carrying out neural network supervision training on the labeled face depth image; and acquiring the depth features output by the second feature extraction network.
In an embodiment of the present application, in the process of training the second feature extraction network, because the number of face depth pictures with identity labels is generally small and the imaging quality is poorer than that of color images, the recognition performance of the trained face depth feature extraction network is also poorer than that of the face color feature extraction network. Therefore, during training, the samples of the depth images are derived to obtain a greater number of depth image samples, so as to train the second feature extraction network. In this way, the number and types of samples of the depth images participating in training are increased, and the accuracy of model training of the second feature extraction network is improved.
In one embodiment of the present application, the training algorithm of the model may likewise be based on the open-source InsightFace ArcFace method. A preprocessed face depth picture with identity A is derived into additional samples; the original depth picture and its derived pictures are input into the face depth feature extraction network to obtain a face depth feature vector f_A-Depth.
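The derivation of extra depth samples can be sketched as simple data augmentation. The patent does not list the operations used, so flips, small in-plane rotations, sensor-like noise and simulated holes are shown here purely as an illustration.

    import cv2
    import numpy as np

    def derive_depth_samples(depth, count=4, seed=None):
        """Derive extra training samples from one preprocessed face depth map."""
        rng = np.random.default_rng(seed)
        h, w = depth.shape[:2]
        samples = []
        for _ in range(count):
            out = depth.astype(np.float32)
            if rng.random() < 0.5:
                out = np.ascontiguousarray(out[:, ::-1])          # mirror the face
            angle = float(rng.uniform(-10, 10))                   # small in-plane rotation
            rot = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
            out = cv2.warpAffine(out, rot, (w, h))
            out += rng.normal(0.0, 2.0, out.shape).astype(np.float32)  # sensor-like noise
            out[rng.random(out.shape[:2]) < 0.02] = 0             # simulated missing readings
            samples.append(out)
        return samples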
In step S330, a similarity between the color feature and a target color feature of the target face image is calculated to obtain a first similarity, and a similarity between the depth feature and a target depth feature of the target face image is calculated to obtain a second similarity.
In an embodiment of the application, after color features are extracted from a color image and depth features are extracted from a depth image, a first similarity between the color features and target color features of a target face image is calculated, a second similarity between the depth features and the target depth features of the target face image is calculated, and a matching situation between a face to be recognized and the target face is determined based on the first similarity and the second similarity.
In an embodiment of the present application, a database storing all face features is preset, and the database may be divided into a color RGB feature library and a Depth feature library. The RGB feature library stores the color features of the faces, and the Depth feature library stores their depth features. In this embodiment, the color features and the depth features of the face to be recognized are respectively compared with the face features in the RGB feature library and the Depth feature library, the most similar face across the two libraries is determined to be the target face, and the first similarity and the second similarity are then calculated based on the color features and the depth features of the target face.
In an embodiment of the present application, the process of calculating a similarity between the color feature and the target color feature of the target face image in step S330 to obtain a first similarity, and calculating a similarity between the depth feature and the target depth feature of the target face image to obtain a second similarity specifically includes:
calculating a cosine distance or an Euclidean distance between the color features and the target color features, and obtaining a first similarity according to the cosine distance or the Euclidean distance;
and calculating the cosine distance or Euclidean distance between the depth feature and the target depth feature, and obtaining a second similarity according to the cosine distance or the Euclidean distance.
In an embodiment of the present application, the first similarity and the second similarity may be calculated as a cosine distance or a Euclidean distance. Specifically, better face recognition accuracy is obtained by exploiting the complementary discriminative information of a face of the same identity in the color modality and the depth modality and fusing the similarities of the face in the two modalities. Other calculation manners may also be used, which is not limited here.
For example, given the face color feature vector f_A-RGB of a user A and the face color feature vector f_B-RGB of a target user B, the similarity between the two face colors may be calculated as the cosine distance or the Euclidean distance between f_A-RGB and f_B-RGB. When the cosine distance is used, the similarity between the color feature of the face to be recognized and the color feature of the target face is: Sim1 = cosine(f_A-RGB, f_B-RGB). Similarly, given the face depth feature vector f_A-Dep of user A and the face depth feature vector f_B-Dep of target user B, the similarity of the two face depth features can be calculated as the cosine distance between f_A-Dep and f_B-Dep: Sim2 = cosine(f_A-Dep, f_B-Dep).
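A minimal sketch of the similarity computation follows. The mapping from a Euclidean distance to a similarity score is not specified in the patent, so 1 / (1 + d) is used here as one common choice.

    import numpy as np

    def cosine_similarity(feat_a, feat_b):
        """Cosine similarity between two feature vectors, e.g. f_A-RGB and f_B-RGB."""
        a = np.asarray(feat_a, dtype=np.float32)
        b = np.asarray(feat_b, dtype=np.float32)
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def euclidean_similarity(feat_a, feat_b):
        """Alternative: map the Euclidean distance to a similarity in (0, 1]."""
        dist = np.linalg.norm(np.asarray(feat_a, dtype=np.float32) -
                              np.asarray(feat_b, dtype=np.float32))
        return float(1.0 / (1.0 + dist))

    # Sim1 = cosine_similarity(f_A_RGB, f_B_RGB)
    # Sim2 = cosine_similarity(f_A_Dep, f_B_Dep)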
Fig. 4 is a flowchart of face recognition based on color features and depth features according to an embodiment of the present disclosure.
As shown in fig. 4, the above process of face recognition based on color features and depth features mainly includes the following steps:
step1: capturing a color RGB picture containing a face image of a user A and a corresponding Depth picture by an RGBD (red green blue) sensor;
step2: preprocessing the RGB picture and the Depth picture respectively to obtain pictures suitable for face feature extraction;
step3: extracting face characteristic vectors of the preprocessed RGB picture and the preprocessed Depth picture respectively by using an RGB face recognition network and a Depth face recognition network to obtain color characteristic fea _ A-RGB and Depth characteristic fea _ A-Dep;
step4: selecting a user B corresponding to a target face in a registered user library, and comparing RGB face features of the user A with face features of the user B in an RGB feature library to obtain similarity sim1; comparing the Depth face features of the user A with the face features of the user B in a Depth feature library to obtain a similarity sim2;
step5: fusing the face similarity Sim1 of the A and the B in the RGB mode and the face similarity Sim2 of the A and the B in the Depth mode to obtain a fused similarity Sim3, comparing the Sim3 with a preset threshold th, and if the Sim3 is higher than the th, considering the A and the B as the same person, otherwise, judging the A and the B as the different person;
in addition to the above steps, in order to ensure the accuracy of face recognition, it may be convenient to traverse all users in the registered user library, and execute Step4 and Step5 until the comparison of the face features in the RGB feature library and the Depth feature library is completed.
In step S230, the first similarity and the second similarity are fused to obtain a fused similarity.
In an embodiment of the application, after the first similarity and the second similarity are obtained through calculation, the first similarity and the second similarity are fused to obtain a fusion similarity based on a relationship between the first similarity and a first threshold and a relationship between the second similarity and a second threshold, so as to judge the matching condition of the face to be recognized based on the fusion similarity.
Fig. 5 is a schematic diagram of face recognition based on fusion similarity according to an embodiment of the present application.
As shown in fig. 5, in the embodiment, a color image 510 and a depth image 520 of a face to be recognized are obtained first; comparing the color image with the images in the color face database 530 to obtain a first similarity 540; and performing similarity comparison on the depth image 520 and the images in the depth face database 550 to obtain a second similarity 560, and finally fusing the first similarity 540 and the second similarity 560 to obtain a fusion similarity 570.
It should be noted that, in the face image database in this embodiment, the color image and the depth image may be stored separately, or the color image and the depth image belonging to the same face may be stored in association, where a storage manner of the images in the database is not limited.
In an embodiment of the present application, as shown in fig. 6, the process of fusing the first similarity and the second similarity in step S230 to obtain a fused similarity includes the following steps S610 to S620, which are described in detail as follows:
in step S610, if the first similarity is greater than the first threshold and the second similarity is greater than the second threshold, the first similarity is regarded as the fusion similarity.
In an embodiment of the application, the first similarity corresponds to the color features of the face; after it is calculated, it is compared against the set first threshold. A preliminary judgment of whether the identities are consistent is first made on the basis of the first similarity: a first similarity above the first threshold means that the two identities are highly similar in the RGB modality. The second similarity, which corresponds to the depth features, is then examined; if it is also above the second threshold, the first similarity is taken as the fusion similarity and the judgment is made on that basis.
In step S620, if the first similarity is greater than the first threshold and the second similarity is less than or equal to the second threshold, the first similarity and the second similarity are fused to obtain a fused similarity.
In an embodiment of the application, a preliminary judgment of whether the identities of the faces are consistent is first made on the basis of the first similarity; when the first similarity is above the first threshold, the two identities are highly similar in the RGB modality. The second similarity, corresponding to the depth features, is then examined; if it is less than or equal to the second threshold, the face features of the two identities are not sufficiently similar in the Depth modality. Therefore, during fusion, a penalty term negatively correlated with the second similarity is subtracted from the first similarity, the first similarity and the second similarity are thus fused into the fusion similarity, and the matching of the face to be recognized is judged comprehensively on that basis.
In the present embodiment, the first threshold and the second threshold are obtained by adjustment based on the set false recognition level and the recognition performance of the feature extraction network model itself, and specific numerical values are not limited here.
In this embodiment, step S610 and step S620 are parallel branches; neither is required to precede the other.
In an embodiment of the application, the process of fusing the first similarity and the second similarity in step S620 to obtain a fused similarity includes the following steps:
obtaining a similarity difference value based on the difference value between the second similarity and the second threshold value; obtaining a penalty parameter based on the product of the similarity difference and the penalty coefficient; and determining the fusion similarity based on the difference between the first similarity and the penalty parameter.
In one embodiment of the present application, when the similarities are fused, the difference (th2 - Sim2) between the second threshold and the second similarity is calculated; this difference is a positive number, i.e., the similarity difference. The product of the similarity difference and the penalty coefficient lambda, lambda * (th2 - Sim2), is then calculated as the penalty parameter, where lambda is a constant in [0, 1]. Finally, the fusion similarity is determined from the difference between the first similarity and the penalty parameter: Sim = Sim1 - lambda * (th2 - Sim2).
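A compact sketch of this fusion rule, covering both branches of steps S610/S620 and the early rejection case described after them, is given below; the threshold and lambda defaults are placeholders to be tuned as described above.

    def fuse_similarity(sim1, sim2, th1=0.6, th2=0.5, lam=0.3):
        """Fusion rule of steps S610/S620. Returns None when the color similarity
        alone already rules the pair out."""
        if sim1 <= th1:
            return None                      # faces do not match, no fusion needed
        if sim2 > th2:
            return sim1                      # depth agrees, keep the color similarity
        return sim1 - lam * (th2 - sim2)     # Sim = Sim1 - lambda * (th2 - Sim2)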
In addition to the way of calculating the fusion similarity in the above steps, the fusion similarity may also be calculated in the following way in this embodiment:
constructing an increasing similarity fusion function according to the first similarity sample and the second similarity sample; and inputting the first similarity and the second similarity into a similarity fusion function, and acquiring the fusion similarity output by the similarity fusion function.
In an embodiment of the present application, a similarity fusion function is constructed from the first similarity samples and the second similarity samples; the function is monotonically increasing, so it can serve as an alternative expression for similarity fusion. Inputting the first similarity and the second similarity into the similarity fusion function yields the fusion similarity that the function outputs.
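One possible construction of such an increasing fusion function is a non-negative weighted sum fitted on labelled similarity sample pairs, as sketched below; the functional form and the fitting method are assumptions, since the patent only requires the function to be increasing.

    import numpy as np

    def fit_increasing_fusion(sim1_samples, sim2_samples, labels):
        """Fit f(Sim1, Sim2) = w1*Sim1 + w2*Sim2 + b on labelled sample pairs
        (label 1 = same person, 0 = different person). Clamping the weights to be
        non-negative keeps the function increasing in both similarities."""
        X = np.column_stack([sim1_samples, sim2_samples, np.ones(len(labels))])
        w, *_ = np.linalg.lstsq(X, np.asarray(labels, dtype=np.float64), rcond=None)
        w[:2] = np.maximum(w[:2], 0.0)       # enforce the increasing property
        return lambda s1, s2: w[0] * s1 + w[1] * s2 + w[2]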
Further, in addition to steps S610 and S620, in which the first similarity and the second similarity are compared with their respective thresholds and handled accordingly, the method further includes the following step:
and if the first similarity is smaller than or equal to the first threshold, determining that the face to be recognized is not matched with the target face.
In an embodiment of the application, when the first similarity is less than or equal to the first threshold, the first similarity is already low and the face to be recognized differs substantially from the target face; there is no need to further consider the depth information of the face to be recognized, and it can be directly determined that the face to be recognized does not match the target face.
In step S240, it is determined whether the face to be recognized matches the target face based on the fusion similarity.
In an embodiment of the present application, after the fusion similarity is determined, based on a magnitude relationship between the fusion similarity and a third threshold, it may be determined whether the face to be recognized and the target face are matched. Namely, when the fusion similarity is greater than a third threshold, judging that the face to be recognized is matched with the target face; and when the fusion similarity is smaller than or equal to the third threshold, judging that the face to be recognized is not matched with the target face.
The third threshold in this embodiment is obtained by adjustment based on the set misrecognition level and the recognition performance of the feature extraction network model itself, and specific numerical values are not limited here.
Fig. 7 is a flowchart for determining face matching based on fusion similarity according to the embodiment of the present application.
As shown in fig. 7, in the process of determining face matching based on the fusion similarity, a preliminary judgment of whether the identities are consistent is first made on the basis of the RGB feature similarity. When the RGB feature similarity is above the threshold th1, the two identities are highly similar in the RGB modality, and the corresponding Depth feature similarity is then examined. When the Depth feature similarity is below the set threshold th2, the face features of the two identities are not sufficiently similar in the Depth modality; during fusion, a penalty term negatively correlated with the Depth feature similarity is subtracted from the RGB similarity, i.e., the fusion similarity Sim = Sim1 - lambda * (th2 - Sim2), where lambda is a constant in [0, 1]. Finally, whether the two users are the same person is determined by comparing the fusion similarity Sim with the set threshold th3. The thresholds th1, th2, and th3 are adjusted based on the set misrecognition level and the recognition performance of the recognition model itself.
Fig. 8 is a schematic diagram applied to face payment according to an embodiment of the present application.
In an embodiment of the present application, the face recognition method in this embodiment may be used in a face payment scenario. Before the user makes a face payment, the user registers and signs up in the payment application, and the payment server collects the user's registration information together with a color image and a depth image of the user's face. When the user pays by face, the camera terminal 810 acquires the color image and the depth image of the user's current face, and the two acquired images are compared with the face images in the database respectively in the manner of the above embodiments; after the matching succeeds, the funds in the user's account are deducted and the payment is completed. Performing face payment through the face recognition approach of this embodiment can improve both the efficiency of face-scanning payment and the security of the user's account.
Fig. 9 is a schematic diagram of people group detection according to an embodiment of the present disclosure.
In an embodiment of the present application, the face recognition method in this embodiment may be used for face detection in a crowd. The face information of each user is collected in advance through the server and stored in the database. When identifying the identity of a certain person in a crowd, the monitoring device 910 acquires an image or video of the crowd, locks the target in an image frame, and simultaneously acquires a current face image of the target, and compares the face image with an image in a database in the manner of the embodiment of the present application to obtain an identification result, so as to determine the identity information of the target. By the face recognition mode in the embodiment, the identity information of the target object in the crowd is detected, and the recognition efficiency and accuracy of the target object can be improved.
Embodiments of the apparatus of the present application are described below, which may be used to perform the face recognition method in the above embodiments of the present application. It will be appreciated that the apparatus may be a computer program (comprising program code) running on a computer device, for example an application software; the apparatus may be configured to perform corresponding steps in the methods provided in the embodiments of the present application. For details that are not disclosed in the embodiments of the apparatus of the present application, please refer to the embodiments of the face recognition method described above in the present application.
FIG. 10 shows a block diagram of a face recognition apparatus according to an embodiment of the present application.
Referring to fig. 10, a face recognition apparatus 1000 according to an embodiment of the present application includes: an acquiring unit 1010, configured to acquire a color image and a depth image of a face to be recognized; a similarity unit 1020, configured to determine a first similarity between the color image and a target face image in the face database, and a second similarity between the depth image and the target face image; a fusion unit 1030, configured to fuse the first similarity and the second similarity to obtain a fusion similarity; and the matching unit 1040 is configured to determine whether the face to be recognized is matched with the target face based on the fusion similarity.
In some embodiments of the present application, based on the foregoing solution, the fusion unit 1030 includes: the first fusion unit is used for taking the first similarity as the fusion similarity if the first similarity is larger than a first threshold and the second similarity is larger than a second threshold; and the second fusion unit is used for fusing the first similarity and the second similarity to obtain the fusion similarity if the first similarity is greater than the first threshold and the second similarity is less than or equal to the second threshold.
In some embodiments of the present application, based on the foregoing solution, the second fusion unit includes: a first calculation unit configured to obtain a similarity difference value based on a difference between the second similarity and a second threshold; the second calculation unit is used for obtaining a penalty parameter based on the product between the similarity difference and the penalty coefficient; and the third calculating unit is used for determining the fusion similarity based on the difference between the first similarity and the penalty parameter.
In some embodiments of the present application, based on the foregoing scheme, the second fusion unit comprises: the construction function unit is used for constructing an increasing similarity fusion function according to the first similarity sample and the second similarity sample; and the function operation unit is used for inputting the first similarity and the second similarity into the similarity fusion function and acquiring the fusion similarity output by the similarity fusion function.
In some embodiments of the present application, based on the foregoing solution, the face recognition apparatus 1000 is further configured to: and if the first similarity is smaller than or equal to the first threshold, determining that the face to be recognized is not matched with the target face.
In some embodiments of the present application, based on the foregoing scheme, the similarity unit 1020 includes: the preprocessing unit is used for preprocessing the color image to obtain a preprocessed color image and preprocessing the depth image to obtain a preprocessed depth image; the extraction unit is used for extracting color features from the preprocessed color image and extracting depth features from the preprocessed depth image; and the calculating unit is used for calculating the similarity between the color feature and the target color feature of the target face image to obtain a first similarity, and calculating the similarity between the depth feature and the target depth feature of the target face image to obtain a second similarity.
In some embodiments of the present application, based on the foregoing solution, the preprocessing unit includes: the first detection unit is used for detecting a first face area in the color image to obtain a coordinate of the first face area; and the first registration unit is used for registering the first face area based on the coordinates of the first face area to obtain a preprocessed color image formed by the face characteristic points in the color image.
In some embodiments of the present application, based on the foregoing solution, the preprocessing unit includes: the second detection unit is used for detecting a second face area in the depth image to obtain the coordinate of the second face area; and the second registration unit is used for registering the second face area based on the coordinates of the second face area to obtain a single-channel depth map formed by key points of the face in the depth image, and converting the single-channel depth map to obtain a preprocessed depth image.
In some embodiments of the present application, based on the foregoing solution, the second registration unit includes: the conversion unit is used for carrying out normal projection conversion on the single-channel depth map to obtain normal projection maps of a set number of channels; and the restoration unit is used for carrying out image restoration processing on the normal projection image to obtain a preprocessed depth image.
In some embodiments of the present application, based on the foregoing scheme, the extraction unit includes: the first extraction unit is used for inputting the preprocessed color images into a first feature extraction network obtained through training, wherein the first feature extraction network is obtained by carrying out neural network supervision training on the labeled face color images; and the first acquisition unit is used for acquiring the color characteristics output by the first characteristic extraction network.
In some embodiments of the present application, based on the foregoing scheme, the extraction unit includes: the second extraction unit is used for inputting the preprocessed depth image into a second feature extraction network obtained through training, and the second feature extraction network is obtained by carrying out neural network supervision training on the labeled face depth image; and the second acquisition unit is used for acquiring the depth features output by the second feature extraction network.
In some embodiments of the present application, based on the foregoing solution, the calculation unit includes: the fourth calculating unit is used for calculating a cosine distance or an Euclidean distance between the color features and the target color features and obtaining a first similarity according to the cosine distance or the Euclidean distance; and the fifth calculation unit is used for calculating the cosine distance or the Euclidean distance between the depth feature and the target depth feature and obtaining the second similarity according to the cosine distance or the Euclidean distance.
FIG. 11 illustrates a schematic structural diagram of a computer system suitable for use in implementing the electronic device of an embodiment of the present application.
It should be noted that the computer system 1100 of the electronic device shown in fig. 11 is only an example, and should not bring any limitation to the functions and the application scope of the embodiments of the present application.
As shown in fig. 11, a computer system 1100 includes a Central Processing Unit (CPU) 1101, which can perform various appropriate actions and processes, such as performing the methods described in the above embodiments, according to a program stored in a Read-Only Memory (ROM) 1102 or a program loaded from a storage section 1108 into a Random Access Memory (RAM) 1103. In the RAM 1103, various programs and data necessary for system operation are also stored. The CPU 1101, ROM 1102, and RAM 1103 are connected to each other by a bus 1104. An Input/Output (I/O) interface 1105 is also connected to bus 1104.
The following components are connected to the I/O interface 1105: an input portion 1106 including a keyboard, mouse, and the like; an output section 1107 including a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, as well as a speaker and the like; a storage section 1108 including a hard disk and the like; and a communication section 1109 including a Network interface card such as a LAN (Local Area Network) card, a modem, or the like. The communication section 1109 performs communication processing via a network such as the internet. A driver 1110 is also connected to the I/O interface 1105 as necessary. A removable medium 1111 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 1110 as necessary, so that a computer program read out therefrom is mounted into the storage section 1108 as necessary.
In particular, according to embodiments of the present application, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, an embodiment of the present application includes a computer program product comprising a computer program carried on a computer readable medium, the computer program containing program code for performing the method illustrated by the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 1109 and/or installed from the removable medium 1111. When the computer program is executed by the Central Processing Unit (CPU) 1101, the various functions defined in the system of the present application are executed.
It should be noted that the computer readable medium shown in the embodiments of the present application may be a computer readable signal medium, a computer readable storage medium, or any combination of the two. A computer readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM), a flash memory, an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. A computer readable signal medium, by contrast, may include a propagated data signal with a computer program embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electromagnetic signals, optical signals, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium, other than a computer readable storage medium, that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. The computer program embodied on the computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, and the like, or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. Each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or by hardware, and the described units may also be disposed in a processor. The names of these units do not, in any case, constitute a limitation on the units themselves.
As another aspect, the present application also provides a computer-readable medium, which may be contained in the electronic device described in the above embodiment; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by an electronic device, cause the electronic device to implement the method described in the above embodiments.
It should be noted that although several modules or units of the device for action execution are mentioned in the above detailed description, this division is not mandatory. Indeed, according to embodiments of the present application, the features and functions of two or more modules or units described above may be embodied in a single module or unit. Conversely, the features and functions of one module or unit described above may be further divided among a plurality of modules or units.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software alone or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present application may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a removable hard disk, or the like) or on a network, and which includes several instructions to enable a computing device (which may be a personal computer, a server, a touch terminal, a network device, or the like) to execute the method according to the embodiments of the present application. Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the embodiments disclosed herein. This application is intended to cover any variations, uses, or adaptations that follow its general principles and include such departures from the present disclosure as come within known or customary practice in the art.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (12)

1. A face recognition method, comprising:
acquiring a color image and a depth image of a face to be recognized;
determining a first similarity between the color image and a target face image in a face database and a second similarity between the depth image and the target face image;
if the first similarity is greater than a first threshold and the second similarity is greater than a second threshold, taking the first similarity as a fusion similarity;
if the first similarity is greater than the first threshold and the second similarity is less than or equal to the second threshold, obtaining a similarity difference value based on a difference value between the second similarity and the second threshold;
obtaining a penalty parameter based on the product of the similarity difference and a penalty coefficient;
determining a fusion similarity based on a difference between the first similarity and the penalty parameter;
and determining whether the face to be recognized is matched with the target face or not based on the fusion similarity.
2. The method of claim 1, further comprising:
and if the first similarity is smaller than or equal to the first threshold, determining that the face to be recognized is not matched with the target face.
3. The method of claim 1, wherein determining a first similarity of the color image to a target face image in a face database and a second similarity of the depth image to the target face image comprises:
preprocessing the color image to obtain a preprocessed color image, and preprocessing the depth image to obtain a preprocessed depth image;
extracting color features from the preprocessed color image, and extracting depth features from the preprocessed depth image;
and calculating the similarity between the color feature and the target color feature of the target face image to obtain the first similarity, and calculating the similarity between the depth feature and the target depth feature of the target face image to obtain the second similarity.
4. The method of claim 3, wherein pre-processing the color image to obtain a pre-processed color image comprises:
detecting a first face area in the color image to obtain coordinates of the first face area;
and registering the first face area based on the coordinates of the first face area to obtain a preprocessed color image formed by the face feature points in the color image.
5. The method of claim 3, wherein pre-processing the depth image to obtain a pre-processed depth image comprises:
detecting a second face area in the depth image to obtain coordinates of the second face area;
and registering the second face area based on the coordinates of the second face area to obtain a single-channel depth map formed by key points of the face in the depth image, and converting the single-channel depth map to obtain the preprocessed depth image.
6. The method of claim 5, wherein converting the single-channel depth map to obtain the pre-processed depth image comprises:
carrying out normal projection conversion on the single-channel depth map to obtain normal projection maps of a set number of channels;
and carrying out image restoration processing on the normal projection image to obtain the preprocessed depth image.
7. The method of claim 3, wherein extracting color features from the pre-processed color image comprises:
inputting the preprocessed color image into a first feature extraction network obtained by training, wherein the first feature extraction network is obtained by carrying out neural network supervision training on a labeled face color image;
and acquiring the color features output by the first feature extraction network.
8. The method of claim 3, wherein extracting depth features from the pre-processed depth image comprises:
inputting the preprocessed depth image into a second feature extraction network obtained through training, wherein the second feature extraction network is obtained by carrying out neural network supervision training on the labeled face depth image;
and acquiring the depth features output by the second feature extraction network.
9. The method of claim 3, wherein calculating the similarity between the color feature and a target color feature of the target face image to obtain the first similarity, and calculating the similarity between the depth feature and a target depth feature of the target face image to obtain the second similarity comprises:
calculating a cosine distance or a Euclidean distance between the color feature and the target color feature, and obtaining the first similarity according to the cosine distance or the Euclidean distance;
and calculating a cosine distance or a Euclidean distance between the depth feature and the target depth feature, and obtaining the second similarity according to the cosine distance or the Euclidean distance.
10. A face recognition apparatus, comprising:
the acquiring unit is used for acquiring a color image and a depth image of a face to be recognized;
the similarity unit is used for determining a first similarity between the color image and a target face image in a face database and a second similarity between the depth image and the target face image;
a fusion unit, configured to: take the first similarity as a fusion similarity if the first similarity is greater than a first threshold and the second similarity is greater than a second threshold; and, if the first similarity is greater than the first threshold and the second similarity is less than or equal to the second threshold, obtain a similarity difference value based on a difference value between the second similarity and the second threshold, obtain a penalty parameter based on the product of the similarity difference and a penalty coefficient, and determine a fusion similarity based on a difference between the first similarity and the penalty parameter;
and the matching unit is used for determining whether the face to be recognized is matched with the target face or not based on the fusion similarity.
11. A computer-readable medium, on which a computer program is stored, which, when being executed by a processor, carries out the face recognition method according to any one of claims 1 to 9.
12. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement a face recognition method as claimed in any one of claims 1 to 9.
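For readers who prefer code to claim language, the following sketch restates the fusion rule recited in claims 1, 2, and 10 in Python. The threshold values, the penalty coefficient, the sign convention for the similarity difference, and the final matching threshold are all assumptions, since the claims leave them open.

```python
from typing import Optional

def fuse_similarity(first_similarity: float,
                    second_similarity: float,
                    first_threshold: float = 0.70,    # assumed value
                    second_threshold: float = 0.60,   # assumed value
                    penalty_coefficient: float = 0.5  # assumed value
                    ) -> Optional[float]:
    """Return the fusion similarity, or None when the face is rejected outright."""
    if first_similarity <= first_threshold:
        return None  # claim 2: color-side similarity too low, no match
    if second_similarity > second_threshold:
        return first_similarity  # claim 1: both thresholds cleared
    # Claim 1, penalty branch: similarity difference between the second
    # similarity and the second threshold (taken here as a non-negative shortfall).
    similarity_difference = second_threshold - second_similarity
    penalty_parameter = penalty_coefficient * similarity_difference
    return first_similarity - penalty_parameter

fused = fuse_similarity(0.85, 0.55)
# Final match decision on the fused score; the matching threshold is an assumption.
is_match = fused is not None and fused > 0.70
```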
CN202010123695.8A 2020-02-27 2020-02-27 Face recognition method and device, computer readable medium and electronic equipment Active CN111242097B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010123695.8A CN111242097B (en) 2020-02-27 2020-02-27 Face recognition method and device, computer readable medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN111242097A CN111242097A (en) 2020-06-05
CN111242097B true CN111242097B (en) 2023-04-18

Family

ID=70866414

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010123695.8A Active CN111242097B (en) 2020-02-27 2020-02-27 Face recognition method and device, computer readable medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN111242097B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111968152B (en) * 2020-07-15 2023-10-17 桂林远望智能通信科技有限公司 Dynamic identity recognition method and device
CN111914769B (en) * 2020-08-06 2024-01-26 腾讯科技(深圳)有限公司 User validity determination method, device, computer readable storage medium and equipment
CN114170640B (en) * 2020-08-19 2024-02-02 腾讯科技(深圳)有限公司 Face image processing method, device, computer readable medium and equipment
CN112052830B (en) * 2020-09-25 2022-12-20 北京百度网讯科技有限公司 Method, device and computer storage medium for face detection
CN112381782B (en) * 2020-11-11 2022-09-09 腾讯科技(深圳)有限公司 Human face image quality evaluation method and device, computer equipment and storage medium
CN113572913B (en) * 2021-01-18 2022-08-05 腾讯科技(深圳)有限公司 Image encryption method, device, medium and electronic equipment
CN113139465A (en) * 2021-04-23 2021-07-20 北京华捷艾米科技有限公司 Face recognition method and device
CN113657178A (en) * 2021-07-22 2021-11-16 浙江大华技术股份有限公司 Face recognition method, electronic device and computer-readable storage medium
CN113837105A (en) * 2021-09-26 2021-12-24 北京的卢深视科技有限公司 Face recognition method, face recognition system, electronic equipment and storage medium
CN115082371B (en) * 2022-08-19 2022-12-06 深圳市灵明光子科技有限公司 Image fusion method and device, mobile terminal equipment and readable storage medium
CN117011945B (en) * 2023-10-07 2024-03-19 之江实验室 Action capability assessment method, action capability assessment device, computer equipment and readable storage medium

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103426016A (en) * 2013-08-14 2013-12-04 湖北微模式科技发展有限公司 Method and device for authenticating second-generation identity card
KR20170000748A (en) * 2015-06-24 2017-01-03 삼성전자주식회사 Method and apparatus for face recognition
WO2019080579A1 (en) * 2017-10-26 2019-05-02 深圳奥比中光科技有限公司 3d face identity authentication method and apparatus
CN108197587A (en) * 2018-01-18 2018-06-22 中科视拓(北京)科技有限公司 A kind of method that multi-modal recognition of face is carried out by face depth prediction
CN109034102A (en) * 2018-08-14 2018-12-18 腾讯科技(深圳)有限公司 Human face in-vivo detection method, device, equipment and storage medium
CN109961062A (en) * 2019-04-16 2019-07-02 北京迈格威科技有限公司 Image-recognizing method, device, terminal and readable storage medium storing program for executing
CN110378209A (en) * 2019-06-11 2019-10-25 深圳市锐明技术股份有限公司 A kind of driver identification verification method and device
CN110796101A (en) * 2019-10-31 2020-02-14 广东光速智能设备有限公司 Face recognition method and system of embedded platform

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Learning rich features from RGB-D images for object detection and segmentation; S. Gupta et al.; Computer Vision - ECCV 2014; 2014-07-22; pp. 345-360 *
RGB-D Face Recognition via Deep Complementary and Common Feature Learning; Hao Zhang et al.; 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018); 2019-06-07; pp. 8-15 *
A face recognition algorithm based on color + depth; Yuan Shuaiying et al.; Journal of Test and Measurement Technology; 2017-06-30; Vol. 31, No. 03, pp. 241-249 *
Comparative study of fusion methods for multimodal face recognition; Ye Jianhua et al.; Computer Engineering and Applications; 2009-07-01; Vol. 45, No. 19, pp. 153-156 *

Also Published As

Publication number Publication date
CN111242097A (en) 2020-06-05

Similar Documents

Publication Publication Date Title
CN111242097B (en) Face recognition method and device, computer readable medium and electronic equipment
CN108269254B (en) Image quality evaluation method and device
US11393256B2 (en) Method and device for liveness detection, and storage medium
CN104599287B (en) Method for tracing object and device, object identifying method and device
CN109086834B (en) Character recognition method, character recognition device, electronic equipment and storage medium
CN108229375B (en) Method and device for detecting face image
CN108509994B (en) Method and device for clustering character images
CN111553947A (en) Target object positioning method and device
CN114926766A (en) Identification method and device, equipment and computer readable storage medium
CN108229494B (en) Network training method, processing method, device, storage medium and electronic equipment
CN111563398A (en) Method and device for determining information of target object
CN110941978B (en) Face clustering method and device for unidentified personnel and storage medium
CN114279433A (en) Map data automatic production method, related device and computer program product
CN110688878B (en) Living body identification detection method, living body identification detection device, living body identification detection medium, and electronic device
CN114842466A (en) Object detection method, computer program product and electronic device
CN111783677B (en) Face recognition method, device, server and computer readable medium
CN112052730B (en) 3D dynamic portrait identification monitoring equipment and method
CN111666884A (en) Living body detection method, living body detection device, computer-readable medium, and electronic apparatus
CN114581978A (en) Face recognition method and system
CN114332993A (en) Face recognition method and device, electronic equipment and computer readable storage medium
CN113837174A (en) Target object identification method and device and computer equipment
CN111582278A (en) Portrait segmentation method and device and electronic equipment
CN108694347B (en) Image processing method and device
CN112016348A (en) Face authenticity identification method and device
CN114842414B (en) Image processing method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40024382

Country of ref document: HK

GR01 Patent grant