WO2022166532A1 - Face recognition method and apparatus, electronic device, and storage medium - Google Patents

Face recognition method and apparatus, electronic device, and storage medium

Info

Publication number
WO2022166532A1
Authority
WO
WIPO (PCT)
Prior art keywords
attribute
face
category
face image
similarity
Prior art date
Application number
PCT/CN2022/071091
Other languages
English (en)
French (fr)
Inventor
王�义
陶训强
何苗
郭彦东
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司
Publication of WO2022166532A1

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 - Classification, e.g. identification
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G06V40/166 - Detection; Localisation; Normalisation using acquisition arrangements
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation

Definitions

  • the present application relates to the technical field of face recognition, and more particularly, to a face recognition method, apparatus, electronic device and storage medium.
  • Face recognition is a biometric identification technology based on facial feature information. It covers a series of related techniques that collect images or video streams containing human faces and automatically detect and track the faces in them, and it is also commonly called portrait recognition or facial recognition.
  • In existing approaches, the face recognition result is usually determined by computing the similarity of face features between different images and then comparing that similarity against a similarity threshold; when a fixed threshold is used for all faces, the accuracy of face recognition is low.
  • In view of the above, the present application proposes a face recognition method, apparatus, electronic device, and storage medium.
  • In a first aspect, an embodiment of the present application provides a face recognition method. The method includes: obtaining a first face image to be recognized; obtaining the attribute category corresponding to the first face image as a first category and the attribute category corresponding to a second face image to be compared as a second category, where an attribute category is the category to which a specified face attribute belongs; obtaining a corresponding similarity threshold based on the first category and the second category; obtaining the similarity between the face features of the first face image and the face features of the second face image as a target similarity; and, when the target similarity is greater than the similarity threshold, determining that the first face image matches the second face image.
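  • The steps of the first aspect can be sketched as follows. This is a minimal illustration only, not the claimed implementation: the feature vectors are assumed to come from a face feature extraction model, the categories from an attribute classifier, and `get_threshold` is a hypothetical stand-in for the category-based threshold lookup.

```python
import numpy as np

def recognize(feat1, cat1, feat2, cat2, get_threshold):
    """Decide whether two face images match.

    feat1/feat2: face feature vectors of the first and second face image.
    cat1/cat2:   their attribute categories (e.g. "male", "female").
    get_threshold: returns the similarity threshold for a category pair.
    """
    # Similarity threshold obtained from the first and second categories.
    threshold = get_threshold(cat1, cat2)
    # Target similarity: cosine similarity of the two feature vectors.
    sim = float(np.dot(feat1, feat2) /
                (np.linalg.norm(feat1) * np.linalg.norm(feat2)))
    # Match when the target similarity exceeds the threshold.
    return sim > threshold
```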
  • In a second aspect, an embodiment of the present application provides a face recognition apparatus. The apparatus includes an image acquisition module, a category acquisition module, a threshold acquisition module, a similarity acquisition module, and a result acquisition module. The image acquisition module is used to obtain a first face image to be recognized; the category acquisition module is used to obtain the attribute category corresponding to the first face image as a first category and the attribute category corresponding to a second face image to be compared as a second category, where an attribute category is the category to which a specified face attribute belongs; the threshold acquisition module is configured to acquire a corresponding similarity threshold based on the first category and the second category; the similarity acquisition module is used to obtain the similarity between the face features of the first face image and the face features of the second face image as a target similarity; and the result acquisition module is used to determine the face recognition result based on the comparison between the target similarity and the similarity threshold.
  • In a third aspect, embodiments of the present application provide an electronic device, comprising: one or more processors; a memory; and one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, and the one or more programs are configured to execute the face recognition method provided in the first aspect above.
  • In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where program code is stored in the computer-readable storage medium, and the program code can be invoked by a processor to execute the face recognition method provided in the first aspect above.
  • FIG. 1 shows a flowchart of a face recognition method according to an embodiment of the present application.
  • FIG. 2 shows a flowchart of a face recognition method according to another embodiment of the present application.
  • FIG. 3 shows a flowchart of a face recognition method according to still another embodiment of the present application.
  • FIG. 4 shows a flowchart of a face recognition method according to still another embodiment of the present application.
  • Fig. 5 shows a block diagram of a face recognition apparatus according to an embodiment of the present application.
  • FIG. 6 is a block diagram of an electronic device for executing a face recognition method according to an embodiment of the present application.
  • FIG. 7 shows a storage unit for storing or carrying program code for implementing a face recognition method according to an embodiment of the present application.
  • Current face recognition solutions basically follow this pipeline: face detection, face alignment, face feature extraction, face feature comparison, and similarity judgment against a given threshold. When determining the threshold, a large-scale data set containing different face attribute information is usually used for the calculation.
  • In related approaches, a single similarity threshold is usually set, and this uniform threshold is used to judge the similarity of all faces in order to determine the face recognition result.
  • However, faces differ in attributes such as gender, skin color, and age. When face recognition for all of them is performed under the same similarity threshold, metrics such as the false acceptance rate (FAR, False Accept Rate) vary across these groups, and the accuracy of face recognition is reduced.
  • Therefore, the inventor proposes the face recognition method, apparatus, electronic device, and storage medium provided by the embodiments of the present application, which dynamically determine the similarity threshold based on the face attributes of different face images during face recognition, avoid the inaccurate recognition results caused by using a fixed threshold, and improve the accuracy of face recognition.
  • the specific face recognition method will be described in detail in the following embodiments.
  • FIG. 1 shows a schematic flowchart of a face recognition method provided by an embodiment of the present application.
  • The face recognition method is applied to the face recognition apparatus 400 shown in FIG. 5 and the electronic device 100 (FIG. 6) equipped with the face recognition apparatus 400.
  • the following will take an electronic device as an example to describe the specific process of this embodiment.
  • The electronic device applied in this embodiment may be a smartphone, a tablet computer, a smart watch, smart glasses, a notebook computer, or the like.
  • the process shown in FIG. 1 will be described in detail below, and the face recognition method may specifically include the following steps:
  • Step S110 Obtain a first face image to be recognized.
  • the electronic device may acquire a face image to be subjected to face recognition as the first face image.
  • the first face image is an image including a face region.
  • When the electronic device is a mobile terminal equipped with a camera, such as a smartphone, tablet computer, or smart watch, an image of a person's face can be captured by a front or rear camera, and the captured image is used as the face image to be recognized, that is, the first face image.
  • Alternatively, the electronic device may obtain the first face image locally, that is, from a file stored on the device. For example, the face image may be read from the photo album: the electronic device collects the face image through the camera in advance, or downloads it from the network in advance, and stores it in the local album; then, when face recognition needs to be performed, the first face image is read from the album.
  • As another alternative, the first face image can be downloaded from the network; that is, the electronic device can download the required first face image from a corresponding server through a wireless network, a data network, or the like.
  • The electronic device can also receive the first face image through an input operation performed by the user on another device, so as to obtain the face image on which face recognition is to be performed.
  • the specific manner in which the electronic device obtains the first face image may not be limited.
  • Step S120 Obtain the attribute category corresponding to the first face image as the first category, and the attribute category corresponding to the second face image to be compared as the second category, where the attribute category is the category to which the specified face attribute belongs.
  • The electronic device can identify the face attributes of the first face image, obtain the attribute category corresponding to the first face image, and take this attribute category as the first category.
  • Face attributes are a series of biological characteristics that characterize facial features; they have strong stability and individual distinctiveness and can be used to identify a person's identity.
  • Face attributes include attributes of multiple dimensions such as gender, skin color, age, and expression.
  • the attribute category may be a category to which the specified face attribute belongs, and the specified face attribute may be one or more of attributes of multiple dimensions included in the face attribute.
  • For example, when the specified face attribute is gender, its categories include male and female; when the specified face attribute is skin color, its categories include yellow, white, black, and brown.
  • The above is only an example and does not constitute a limitation on the categories included in the specified face attribute.
  • The electronic device can perform face attribute recognition on the first face image through a pre-trained face attribute recognition model to obtain its corresponding attribute category; similarly, face attribute recognition can be performed on the second face image to obtain its corresponding attribute category.
  • the face attribute recognition model may be a neural network model, a generative adversarial network, an encoding-decoding model, etc., and the specific type of the model may not be limited.
  • the face attribute recognition model can be pre-trained to recognize face attributes in multiple dimensions, or it can be trained to recognize only specified face attributes.
  • the face image to be compared may be a pre-stored face image.
  • In one implementation, the face image to be compared is a face image pre-recorded by a registered user. Since the face images to be compared are pre-stored, the attribute categories corresponding to them can be acquired and stored in advance, so that they do not need to be computed in real time during face recognition. In this way, the attribute category of a face image to be compared can be read directly, which reduces the processing load and improves face recognition efficiency; acquiring the attribute categories of all face images to be compared in advance, rather than during face recognition, saves processing resources.
  • Step S130 Obtain a corresponding similarity threshold based on the first category and the second category.
  • After acquiring the attribute category corresponding to the first face image and the attribute category corresponding to the second face image, the electronic device, taking into account the influence of face attributes on the accuracy of comparing the similarity between face images against the similarity threshold, can obtain the similarity threshold corresponding to the first category and the second category.
  • the electronic device may acquire the similarity thresholds corresponding to the first category and the second category based on the pre-stored correspondence between the similarity thresholds and the attribute categories of the two face images.
  • Each similarity threshold may be determined in advance: for each possible pairing of attribute categories of two face images, sample images of that pairing are matched against each other, and the similarity threshold whose resulting accuracy satisfies the required accuracy condition is selected. That is to say, for each pairing of attribute categories, the accuracy of face recognition can be verified under candidate similarity thresholds until the required accuracy condition is met, and the similarity threshold at that point is used as the pre-stored similarity threshold.
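  • As a sketch of how such a pre-stored threshold could be selected for one attribute-category pairing, assuming similarity scores are available for sample pairs of the same person (genuine) and of different people (impostor), and using an upper bound on the false acceptance rate as the required accuracy condition:

```python
def calibrate_threshold(genuine_sims, impostor_sims, max_far=0.001):
    """Pick the smallest candidate similarity threshold whose false
    acceptance rate (fraction of impostor pairs scoring above it)
    satisfies the required accuracy condition max_far."""
    candidates = sorted(set(genuine_sims) | set(impostor_sims))
    for threshold in candidates:
        far = sum(s > threshold for s in impostor_sims) / len(impostor_sims)
        if far <= max_far:
            return threshold
    return candidates[-1]  # the largest candidate always yields FAR == 0
```

Running this once per attribute-category combination and storing the results yields the pre-stored correspondence described above.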
  • Step S140 Obtain the similarity between the face feature of the first face image and the face feature of the second face image as the target similarity.
  • When recognizing whether the first face image and the second face image correspond to the same face, the electronic device can obtain the face features of the first face image and the face features of the second face image, then compute the similarity between these face features, and use that similarity as the target similarity.
  • A pre-trained face feature extraction model may be used: the first face image is input into the model to obtain the face features of the first face image, and the second face image is input into the same model to obtain the face features of the second face image.
  • the facial feature extraction model may be a neural network model, an encoding model, a generative adversarial network, etc.
  • the facial feature extraction model may be ResNet100, etc., and the specific facial feature extraction model may not be limited.
  • In this way, the similarity between the face features of the first face image and the face features of the second face image can be acquired to determine whether the first face image matches the second face image.
  • the cosine similarity between the face feature of the first face image and the face feature of the second face image can be obtained, and its value range can be -1 to 1;
  • the face features can be represented by feature vectors, and the Euclidean distance between the feature vectors of the face features can be obtained to determine the similarity between the face features of the first face image and the face features of the second face image.
  • the manner of obtaining the similarity between the facial features may not be limited.
  • the quantification standard of the similarity threshold may be the same as the quantification standard of the similarity between face features during face recognition, so as to accurately perform face recognition.
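  • Both similarity measures mentioned above can be sketched as follows; as a working assumption here, the Euclidean distance is mapped into (0, 1] so that, like cosine similarity, larger values mean more similar and the same kind of threshold comparison applies:

```python
import numpy as np

def cosine_similarity(a, b):
    # Value range: -1 (opposite directions) to 1 (same direction).
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def euclidean_similarity(a, b):
    # Convert Euclidean distance to a similarity: identical vectors give 1.
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return 1.0 / (1.0 + float(np.linalg.norm(a - b)))
```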
  • Step S150 When the target similarity is greater than the similarity threshold, determine that the first face image matches the second face image.
  • After the electronic device obtains the similarity between the face features of the first face image and the face features of the second face image, together with the similarity threshold determined from the attribute category corresponding to the first face image and the attribute category corresponding to the second face image, the obtained target similarity can be compared with the similarity threshold. If the target similarity is greater than the similarity threshold, it can be determined that the first face image matches the second face image; if the target similarity is less than or equal to the similarity threshold, it can be determined that the first face image and the second face image do not match.
  • In this way, the similarity threshold is dynamically determined according to the attribute category of the face attribute of the face image to be recognized and the attribute category corresponding to the face image used for comparison, which avoids determining face recognition results with a fixed threshold and the resulting influence of face attributes on accuracy, thereby improving the accuracy of face recognition.
  • FIG. 2 shows a schematic flowchart of a face recognition method provided by another embodiment of the present application.
  • the face recognition method is applied to the above-mentioned electronic equipment, and the flow shown in FIG. 2 will be described in detail below, and the face recognition method may specifically include the following steps:
  • Step S210 Obtain the first face image to be recognized.
  • For step S210, reference may be made to the content of the foregoing embodiments; details are not repeated here.
  • Step S220 Obtain the attribute category corresponding to the first face image as the first category, and the attribute category corresponding to the second face image to be compared as the second category, where the attribute category is the category to which the specified face attribute belongs.
  • When obtaining the attribute category corresponding to the first face image and the attribute category corresponding to the second face image, the electronic device can input the first face image and the second face image respectively into a pre-trained face attribute classification model to obtain the attribute category corresponding to the first face image as the first category and the attribute category corresponding to the second face image as the second category.
  • When training the face attribute classification model, face sample images can be labeled with the categories of one or more face attributes, for example gender, age, and skin color, to obtain label data. Each face sample image is input into an initial model to obtain its output, a loss value is computed from the output and the label data corresponding to that sample image, and the model parameters of the initial model are adjusted according to the loss value until the loss value satisfies the loss condition, thereby obtaining the face attribute classification model.
  • the initial model may be a neural network, etc., and the specific model may not be limited.
  • In some implementations, the specified face attribute can be determined according to how reliably each face attribute is recognized: from the various face attributes, those whose attribute category needs to be determined are selected as the specified face attributes.
  • Specifically, the electronic device can obtain an attribute score for each dimension of the face attributes of the first face image, where the attribute score represents the accuracy of identifying the face attribute of that dimension. Based on the attribute scores corresponding to the face attributes of each dimension, the face attributes whose attribute scores are greater than a specified score are taken as the specified face attributes.
  • the face attributes of multiple dimensions refer to multiple types of face attributes, such as gender, skin color, age, etc., and a face attribute of one dimension represents one type of face attribute.
  • The model used to obtain the attribute category (e.g., the above face attribute classification model) can output the probability of each category for each face attribute. The category with the maximum probability, or a category whose probability is greater than a preset probability threshold, can be taken as the attribute category of the corresponding face attribute. When determining the attribute category of each face attribute, the greater the probability corresponding to the determined attribute category, the more accurate the identified attribute category. Therefore, the probability corresponding to the attribute category to which each face attribute belongs can be used to score that face attribute, so as to obtain the attribute score corresponding to the face attribute of each dimension.
  • For example, a standard probability can be set for the face attribute of each dimension; then, according to the standard probability, the probability corresponding to the attribute category to which the face attribute of each dimension belongs is quantified to obtain the attribute score for that dimension. Specifically, the ratio of the probability corresponding to the attribute category to which the face attribute of each dimension belongs to the corresponding standard probability can be taken as the attribute score.
  • the above is only an example, and does not represent a limitation on the acquisition method of the attribute score.
  • Of course, the face attribute with the highest attribute score may also be determined as the specified face attribute based on the attribute scores corresponding to the face attributes of each dimension.
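  • A small, hypothetical illustration of this scoring scheme, assuming the classifier's per-category probabilities and the per-dimension standard probabilities are given:

```python
def attribute_scores(category_probs, standard_probs):
    """Score each face-attribute dimension as the probability of its
    predicted category divided by that dimension's standard probability."""
    scores = {}
    for attr, probs in category_probs.items():
        best_prob = max(probs.values())  # probability of the chosen category
        scores[attr] = best_prob / standard_probs[attr]
    return scores

def specified_attributes(scores, min_score):
    # Keep the attributes whose score exceeds the specified score.
    return [attr for attr, score in scores.items() if score > min_score]
```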
  • Since the second face image is obtained in advance, it can be ensured that the attribute categories of the face images used for comparison are relatively accurate. For example, when the electronic device is a mobile terminal and a face image for comparison is pre-recorded, the attribute scores of each dimension of the face image are determined; if the attribute score of a face attribute of some dimension is not greater than the specified score, the user can be prompted to re-enter the face image until the attribute scores of the acquired face image in each dimension are all greater than the specified score.
  • Step S230 Obtain the attribute category combination formed by the first category and the second category as a target category combination.
  • Since the similarity threshold corresponds to the attribute categories of the two face images, each similarity threshold may correspond to an attribute category combination, and the similarity threshold corresponding to each attribute category combination can be stored in the electronic device in advance. Therefore, after obtaining the attribute category corresponding to the first face image and the attribute category corresponding to the second face image, the electronic device can obtain the attribute category combination formed by the first category and the second category as the target category combination.
  • For example, when the specified face attribute is gender, the attribute category combinations can include male-male, female-female, and male-female.
  • When the specified face attribute is skin color, the attribute category combinations include: yellow-white, yellow-black, yellow-yellow, white-black, white-white, and black-black.
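  • Because a combination such as male-female is the same regardless of which image contributes which category, the pre-stored thresholds can be keyed by an order-insensitive pair. A minimal sketch follows; the numeric thresholds are placeholders, not values from this application:

```python
def combo_key(cat1, cat2):
    # Order-insensitive key: ("male", "female") and ("female", "male") match.
    return tuple(sorted((cat1, cat2)))

# Placeholder similarity thresholds, one per attribute category combination.
threshold_table = {
    combo_key("yellow", "yellow"): 0.60,
    combo_key("yellow", "white"): 0.55,
    combo_key("white", "black"): 0.54,
}

def lookup_threshold(cat1, cat2):
    return threshold_table[combo_key(cat1, cat2)]
```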
  • Step S240 Obtain the similarity threshold corresponding to the target category combination from the similarity thresholds corresponding to the multiple attribute category combinations.
  • After acquiring the attribute category combination formed by the attribute category corresponding to the first face image and the attribute category corresponding to the second face image, the electronic device can obtain, from the similarity thresholds corresponding to the multiple attribute category combinations, the similarity threshold corresponding to the target category combination.
  • the above-specified face attributes may include gender.
  • When the target category combination is female and female, a first threshold is obtained as the similarity threshold; when the target category combination is male and male, a second threshold is obtained as the similarity threshold; when the target category combination is male and female, a third threshold is obtained as the similarity threshold, where the magnitudes of the first threshold, the second threshold, and the third threshold decrease in sequence, that is, first threshold > second threshold > third threshold. It can be understood that the size of the similarity threshold is related to the distribution of face similarities: when the similarities of the faces in a set are larger on average, the corresponding threshold also increases.
  • the specific size of the similarity threshold can be determined according to the required face recognition correct rate (correct acceptance rate and false acceptance rate).
  • The specified face attributes may include multiple face attributes; when multiple face attributes are included, the number of category combinations increases, and the number of similarity thresholds increases accordingly.
  • For example, when the specified face attributes include gender and skin color, an attribute category combination is formed by the gender and skin color of the first face image together with the gender and skin color of the second face image.
  • Step S250 Obtain the similarity between the face features of the first face image and the face features of the second face image as the target similarity.
  • Step S260 When the target similarity is greater than the similarity threshold, determine that the first face image matches the second face image.
  • After the electronic device obtains the similarity between the face features of the first face image and the face features of the second face image, together with the similarity threshold determined from the attribute category corresponding to the first face image and the attribute category corresponding to the second face image, the obtained target similarity can be compared with the similarity threshold. If the target similarity is greater than the similarity threshold, it can be determined that the first face image matches the second face image; if the target similarity is less than or equal to the similarity threshold, it can be determined that the first face image and the second face image do not match.
  • It should be noted that attribute category combinations in which the two face images have different attribute categories, such as the male-female combination, are also considered; in this way, a face image for comparison is not excluded merely because an attribute category was identified incorrectly, avoiding the situation where the matching face image cannot be identified due to a wrong attribute category.
  • In addition, since the attribute category of the second face image is pre-stored, the first face image can be labeled according to the attribute category of the matching second face image in the base library, and the face attribute classification model can then be further trained on it. The user can also be prompted to input the attribute category of the first face image, the first face image is labeled with the input attribute category, and the face attribute classification model is then further trained to improve its accuracy.
  • In the face recognition method provided by this embodiment, the attribute category combination is determined according to the attribute category of the face attribute of the face image to be recognized and the attribute category corresponding to the face image used for comparison, and the similarity threshold is dynamically determined according to the attribute category combination, which avoids determining face recognition results with a fixed threshold and the resulting influence of face attributes on accuracy, thereby improving the accuracy of face recognition. In addition, the specified face attribute whose attribute category needs to be obtained is determined according to the attribute scores of the face attributes of the face image, which avoids the inaccuracy that face attribute recognition would otherwise introduce into the similarity threshold and further improves the accuracy of face recognition.
  • FIG. 3 shows a schematic flowchart of a face recognition method provided by another embodiment of the present application.
  • the face recognition method is applied to the above-mentioned electronic equipment, and the flow shown in FIG. 3 will be described in detail below.
  • the face recognition method may specifically include the following steps:
  • Step S310 For different attribute category combinations, obtain the similarity threshold and correct acceptance rate TAR under different false acceptance rates FAR during face recognition, and obtain multiple sets of index data corresponding to each attribute category combination.
  • The electronic device may, for different attribute category combinations, determine in advance the similarity threshold corresponding to each attribute category combination according to the false acceptance rate (FAR, False Accept Rate) and the correct acceptance rate (TAR, True Accept Rate) of face recognition.
  • the electronic device can obtain similarity thresholds and correct acceptance rates under different FARs during face recognition for different attribute category combinations, so as to obtain multiple sets of index data corresponding to each attribute category combination.
  • each set of index data may include a similarity threshold, FAR, and TAR.
  • The above multiple sets of index data can be obtained by matching the face images in a test set against each other.
  • The false acceptance rate can be determined as follows: in one test run, when face images of different people are compared, the number of times the resulting similarity exceeds the similarity threshold is taken as a first count, the total number of comparisons between face images of different people is taken as a second count, and the ratio of the first count to the second count is the false acceptance rate.
  • The correct acceptance rate can be determined as follows: in one test run, when face images of the same person are compared, the number of times the resulting similarity exceeds the similarity threshold is taken as a third count, the total number of comparisons between face images of the same person is taken as a fourth count, and the ratio of the third count to the fourth count is the correct acceptance rate.
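The two counting rules above can be sketched in a few lines. This is an illustrative example, not code from the patent; the names `genuine_scores` (similarities between images of the same person) and `impostor_scores` (similarities between images of different people) are assumptions.

```python
def far_tar(genuine_scores, impostor_scores, threshold):
    # FAR: fraction of different-person comparisons whose similarity
    # exceeds the threshold (first count / second count).
    far = sum(s > threshold for s in impostor_scores) / len(impostor_scores)
    # TAR: fraction of same-person comparisons whose similarity
    # exceeds the threshold (third count / fourth count).
    tar = sum(s > threshold for s in genuine_scores) / len(genuine_scores)
    return far, tar
```

Sweeping `threshold` over a range of values yields the multiple sets of (threshold, FAR, TAR) index data described in step S310.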
  • For example, suppose the specified face attribute is gender. Multiple sets of index data can then be obtained for the three gender combinations: male-male, female-female, and male-female.
  • Specifically, for the male-male combination, male face images can be matched against male face images to obtain the FAR, similarity threshold, and TAR; for the female-female combination, female face images can be matched against female face images to obtain the FAR, similarity threshold, and TAR; and for the male-female combination, male face images can be matched against female face images to obtain the FAR, similarity threshold, and TAR.
  • Step S320: From the multiple sets of index data corresponding to each attribute category combination, acquire target index data in which the FAR satisfies a first acceptance rate condition and the TAR satisfies a second acceptance rate condition.
  • After obtaining the multiple sets of index data corresponding to each attribute category combination, the electronic device can acquire, from those sets, the target index data whose FAR satisfies the first acceptance rate condition and whose TAR satisfies the second acceptance rate condition.
  • the first acceptance rate condition may include: FAR is less than the first acceptance rate, or the FAR is the smallest; the second acceptance rate condition may include: the TAR is greater than the second acceptance rate, or the TAR is the largest.
  • the first acceptance rate condition and the second acceptance rate condition may be determined according to the false acceptance rate and the correct acceptance rate required in the actual scene.
  • For example, in an access control scenario, where sensitive control is required, the false acceptance rate can be greater than in other scenarios; in a payment scenario, where security requirements are higher, the false acceptance rate can be smaller than in other scenarios.
  • It is possible that at least two sets of index data corresponding to a target attribute category combination both qualify, that is, the FAR of each of the at least two sets satisfies the first acceptance rate condition and the TAR satisfies the second acceptance rate condition. In that case, if the weight of the false acceptance rate is greater than the weight of the correct acceptance rate, the set of index data with the smallest false acceptance rate is taken as the target index data; if the weight of the correct acceptance rate is greater than the weight of the false acceptance rate, the set of index data with the largest correct acceptance rate is taken as the target index data.
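The tie-breaking rule above can be sketched as follows. This is a hedged illustration under assumed data shapes: each set of index data is modeled as a `(threshold, far, tar)` tuple, and the two weights are assumed inputs.

```python
def pick_target(index_sets, far_weight, tar_weight):
    # index_sets: qualifying sets of index data, each (threshold, far, tar).
    if far_weight > tar_weight:
        # False acceptance rate matters more: smallest FAR wins.
        return min(index_sets, key=lambda s: s[1])
    # Correct acceptance rate matters more: largest TAR wins.
    return max(index_sets, key=lambda s: s[2])
```

For example, with two qualifying sets, a larger FAR weight selects the lower-FAR set, while a larger TAR weight selects the higher-TAR set.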
  • Step S330: Obtain the similarity threshold in the target index data corresponding to each attribute category combination as the similarity threshold corresponding to that combination, and store the obtained similarity thresholds corresponding to the multiple attribute category combinations.
  • After the target index data is determined, the similarity threshold in the target index data corresponding to each attribute category combination can be taken as the similarity threshold for that combination. Once obtained, the correspondence between each attribute category combination and its similarity threshold can be stored, making it easy to determine the applicable threshold during face recognition.
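The stored correspondence above could be represented as a simple mapping. This is an illustrative sketch only: the threshold values are made-up placeholders, not figures from the patent, and a `frozenset` key is one possible way to make the combination order-free so that, e.g., (male, female) and (female, male) resolve to the same entry.

```python
# Placeholder thresholds for the gender example; values are assumptions.
thresholds = {
    frozenset(["female"]): 0.72,          # female-female combination
    frozenset(["male"]): 0.68,            # male-male combination
    frozenset(["male", "female"]): 0.60,  # male-female combination
}

def lookup_threshold(first_category, second_category):
    # The unordered pair of categories identifies the combination.
    return thresholds[frozenset([first_category, second_category])]
```

At recognition time (steps S350-S370), the first and second categories are simply looked up in this table.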
  • Step S340 Obtain the first face image to be recognized.
  • Step S350: Obtain the attribute category corresponding to the first face image as a first category, and the attribute category corresponding to the second face image to be compared as a second category, where the attribute category is the category to which the specified face attribute belongs.
  • Step S360 Acquire an attribute category combination formed by the first category and the second category as a target category combination.
  • Step S370 Obtain the similarity threshold corresponding to the target category combination from the similarity thresholds corresponding to the multiple attribute category combinations.
  • Step S380 Obtain the similarity between the face feature of the first face image and the face feature of the second face image as the target similarity.
  • Step S390: When the target similarity is greater than the similarity threshold, determine that the first face image matches the second face image.
  • In this embodiment, during face recognition the attribute category combination is determined from the attribute category of the face attribute of the face image to be recognized and the attribute category corresponding to the face image used for comparison.
  • The similarity threshold is dynamically determined according to this attribute category combination, avoiding both the use of a fixed threshold to determine the face recognition result and the impact of face attributes on recognition accuracy, thereby improving the accuracy of face recognition.
  • In addition, the similarity threshold corresponding to each attribute category combination is obtained in advance according to the required false acceptance rate and correct acceptance rate, ensuring that the accuracy of face recognition meets the requirements for every attribute category combination.
  • FIG. 4 shows a schematic flowchart of a face recognition method provided by still another embodiment of the present application.
  • The face recognition method is applied to the above-mentioned electronic device; the flow shown in FIG. 4 will be described in detail below.
  • the face recognition method may specifically include the following steps:
  • Step S410 Obtain the first face image to be recognized.
  • For step S410, reference may be made to the content of the foregoing embodiments; details are not repeated here.
  • Step S420 Acquire the attribute category corresponding to the first face image as the first category.
  • Step S430 Determine a face image library corresponding to the first category from a plurality of face image libraries, wherein the attribute categories corresponding to each face image library are different.
  • When performing face recognition on the first face image to be recognized, the electronic device may determine the face images used for comparison according to the attribute category of the first face image. Specifically, face images with the same attribute category can be obtained for comparison, avoiding the inaccuracy introduced in subsequent recognition when face attributes differ.
  • Specifically, the electronic device may acquire in advance a plurality of face images for comparison against face images to be recognized, that is, the face images of the base library, and obtain the attribute category corresponding to each of them. According to these attribute categories, the face images are divided into different face image groups serving as the face image libraries corresponding to different attribute categories, where each face image group corresponds to a different attribute category.
  • a face image library corresponding to the first category may be acquired, that is, a face image library whose attribute category is also the first category.
  • Step S440: Acquire a second face image to be compared from the face image library, where the second category is the same as the first category.
  • After determining the face image library, the electronic device may acquire the second face image to be compared from it. Since every image in that library shares the first category, the attribute category of the second face image (the second category) is the same as the first category.
  • Step S450 Obtain a corresponding similarity threshold based on the first category and the second category.
  • Step S460 Obtain the similarity between the face feature of the first face image and the face feature of the second face image as the target similarity.
  • Step S470 When the target similarity is greater than the similarity threshold, determine that the first face image matches the second face image.
  • Because the face images obtained for comparison share the attribute category of the first face image to be recognized, an inaccurate face attribute recognition could cause a miss. Therefore, after the first face image has been matched against the face images in the library determined above, if it is determined that the first face image matches none of them, face images to be compared can be obtained from the face image libraries corresponding to the other attribute categories and matched against the first face image, avoiding the failure to identify a matching face image when the face attribute is recognized incorrectly.
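The fallback search described above could be sketched as follows. This is a simplified illustration under stated assumptions: `libraries` maps each attribute category to a list of stored face features, and `similarity` is a caller-supplied comparison function; none of these names come from the patent.

```python
def find_match(face_feature, libraries, first_category, threshold, similarity):
    # Try the library matching the recognized category first, then fall
    # back to the other categories in case the attribute was mis-recognized.
    order = [first_category] + [c for c in libraries if c != first_category]
    for category in order:
        for stored_feature in libraries[category]:
            if similarity(face_feature, stored_feature) > threshold:
                return stored_feature
    return None  # no library contains a matching face
```

In a fuller implementation the threshold would itself depend on the category pair being compared, as described in steps S230-S240.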
  • Prompt information can also be output to prompt the user to confirm the attribute category (the first category) recognized this time, or to input the correct attribute category. The face images in the face image library corresponding to the input attribute category are then matched against the first face image to determine the face recognition result.
  • Furthermore, the electronic device can perform correction training on the above face attribute classification model according to the attribute category input by the user and the first face image, further improving the accuracy of the face attribute classification model.
  • In this way, the similarity threshold is dynamically determined according to the attribute category combination, avoiding both the use of a fixed threshold to determine the face recognition result and the influence of face attributes on recognition accuracy, thereby improving the accuracy of face recognition.
  • FIG. 5 shows a structural block diagram of a face recognition apparatus 400 provided by an embodiment of the present application.
  • The face recognition apparatus 400 is applied to the above electronic device, and includes: an image acquisition module 410, a category acquisition module 420, a threshold acquisition module 430, a similarity acquisition module 440, and a result acquisition module 450.
  • the image acquisition module 410 is used to acquire the first face image to be identified;
  • The category acquisition module 420 is used to acquire the attribute category corresponding to the first face image as the first category, and the attribute category corresponding to the second face image to be compared as the second category, the attribute category being the category to which the specified face attribute belongs;
  • The threshold acquisition module 430 is configured to obtain the corresponding similarity threshold based on the first category and the second category;
  • the similarity obtaining module 440 is used to obtain the similarity of the facial features of the first face image and the facial features of the second face image as the target similarity;
  • The result acquisition module 450 is configured to determine a face recognition result based on the comparison result between the target similarity and the similarity threshold.
  • In some embodiments, the threshold acquisition module 430 may include a combination acquisition unit and a threshold determination unit. The combination acquisition unit is used to acquire the attribute category combination formed by the first category and the second category as the target category combination; the threshold determination unit is used to obtain, from the similarity thresholds corresponding to the plurality of attribute category combinations, the similarity threshold corresponding to the target category combination.
  • the specified face attribute includes gender.
  • The threshold determination unit may be specifically configured to: when the target category combination is female-female, obtain a first threshold as the similarity threshold; when it is male-male, obtain a second threshold as the similarity threshold; and when it is male-female, obtain a third threshold as the similarity threshold, where the first threshold, the second threshold, and the third threshold decrease in that order.
  • the face recognition apparatus 400 may further include: a data acquisition module, a data screening module, and a threshold value storage module.
  • The data acquisition module is used to obtain, before the corresponding similarity threshold is obtained based on the first category and the second category, the similarity thresholds and correct acceptance rates TAR under different false acceptance rates FAR during face recognition for different attribute category combinations, yielding multiple sets of index data corresponding to each attribute category combination; the data screening module is used to acquire, from the multiple sets of index data corresponding to each attribute category combination, the target index data whose FAR satisfies the first acceptance rate condition and whose TAR satisfies the second acceptance rate condition; the threshold storage module is used to obtain the similarity threshold in the target index data corresponding to each attribute category combination as the similarity threshold for that combination, and to store the obtained similarity thresholds corresponding to the multiple attribute category combinations.
  • the category acquisition module 420 may include: an attribute identification unit, a graphic library determination unit, and an image acquisition unit.
  • The attribute identification unit is used to obtain the attribute category corresponding to the first face image as the first category; the graphic library determination unit is used to determine the face image library corresponding to the first category from multiple face image libraries, where the attribute categories corresponding to the face image libraries are different; the image acquisition unit is used to obtain a second face image to be compared from that face image library, the second category being the same as the first category.
  • the face recognition apparatus 400 may further include: a base image acquisition module, a base image identification module, and a base image grouping module.
  • the base image acquisition module is used to obtain multiple face images for comparing the face images to be recognized;
  • the base image identification module is used to obtain the attribute category corresponding to each face image in the multiple face images;
  • The base image grouping module is used to divide the multiple face images into different face image groups according to the attribute category corresponding to each face image, as the face image libraries corresponding to different attribute categories, where each face image group corresponds to a different attribute category.
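The grouping performed by the base image grouping module can be sketched in a few lines. This is an illustrative assumption-based example: enrolled images are modeled as `(image, category)` pairs rather than any structure the patent specifies.

```python
from collections import defaultdict

def build_libraries(images_with_categories):
    # Assign each enrolled face image to the library of its attribute category.
    libraries = defaultdict(list)
    for image, category in images_with_categories:
        libraries[category].append(image)
    return dict(libraries)
```

The resulting per-category libraries are what steps S430-S440 consult when selecting comparison images.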
  • the category obtaining module 420 may be specifically configured to: input the first face image and the second face image into a pre-trained face attribute classification model respectively, and obtain the first face The attribute category corresponding to the image is taken as the first category, and the attribute category corresponding to the second face image is taken as the second category.
  • the face recognition apparatus 400 may further include: an attribute score acquisition module and an attribute screening module.
  • The attribute score acquisition module is used to obtain the attribute score corresponding to each of the multiple dimensions of face attributes of the first face image, where the attribute score characterizes the accuracy with which the face attributes of the multiple dimensions are recognized; the attribute screening module is configured to, based on the attribute score corresponding to each dimension of face attribute, take the face attributes whose attribute scores are greater than a specified score as the specified face attributes.
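The screening step above reduces to a simple filter. This is a hedged sketch under assumed data shapes: `attribute_scores` maps each attribute name to its score, and the cut-off value is an illustrative input.

```python
def select_specified_attributes(attribute_scores, specified_score):
    # Keep only the face attributes recognized reliably enough
    # (score above the specified score) to drive threshold selection.
    return [name for name, score in attribute_scores.items()
            if score > specified_score]
```

Only the attributes that survive this filter then contribute categories to the attribute category combination used for threshold lookup.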
  • the coupling between the modules may be electrical, mechanical or other forms of coupling.
  • each functional module in each embodiment of the present application may be integrated into one processing module, or each module may exist physically alone, or two or more modules may be integrated into one module.
  • the above-mentioned integrated modules can be implemented in the form of hardware, and can also be implemented in the form of software function modules.
  • In summary, the attribute category corresponding to the first face image is obtained as the first category, the attribute category corresponding to the second face image to be compared is obtained as the second category, and the attribute category is the category to which the specified face attribute belongs. Based on the first category and the second category, the corresponding similarity threshold is obtained; the similarity between the face features of the first face image and those of the second face image is taken as the target similarity and compared with the similarity threshold. When the target similarity is greater than the similarity threshold, it is determined that the first face image matches the second face image. The similarity threshold can thus be dynamically determined from the face attributes of different face images during recognition, avoiding the inaccurate recognition results caused by using a fixed threshold and improving the accuracy of face recognition.
  • The electronic device 100 may be an electronic device capable of running application programs, such as a smartphone, tablet computer, smartwatch, smart glasses, or notebook computer.
  • The electronic device 100 in the present application may include one or more of the following components: a processor 110, a memory 120, and one or more application programs, where the one or more application programs may be stored in the memory 120 and configured to be executed by the one or more processors 110, the one or more programs being configured to perform the methods described in the foregoing method embodiments.
  • the processor 110 may include one or more processing cores.
  • The processor 110 uses various interfaces and lines to connect the various parts of the entire electronic device 100, and performs various functions of the electronic device 100 and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 120 and calling the data stored in the memory 120.
  • The processor 110 may be implemented in hardware using at least one of digital signal processing (DSP), a field-programmable gate array (FPGA), or a programmable logic array (PLA).
  • the processor 110 may integrate one or a combination of a central processing unit (Central Processing Unit, CPU), a graphics processing unit (Graphics Processing Unit, GPU), a modem, and the like.
  • the CPU mainly handles the operating system, user interface and application programs, etc.
  • the GPU is used for rendering and drawing of the display content
  • The modem is used to handle wireless communication. It can be understood that the modem may alternatively not be integrated into the processor 110 and instead be implemented by a separate communication chip.
  • The memory 120 may include random access memory (RAM) or read-only memory (ROM). The memory 120 may be used to store instructions, programs, code, code sets, or instruction sets.
  • The memory 120 may include a stored program area and a stored data area, where the stored program area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playback function, an image playback function, etc.), instructions for implementing the following method embodiments, and the like.
  • the storage data area may also store data (such as phone book, audio and video data, chat record data) created by the electronic device 100 during use.
  • FIG. 7 shows a structural block diagram of a computer-readable storage medium provided by an embodiment of the present application.
  • the computer-readable medium 800 stores program codes, and the program codes can be invoked by the processor to execute the methods described in the above method embodiments.
  • the computer readable storage medium 800 may be an electronic memory such as flash memory, EEPROM (Electrically Erasable Programmable Read Only Memory), EPROM, hard disk, or ROM.
  • the computer-readable storage medium 800 includes a non-transitory computer-readable storage medium.
  • Computer readable storage medium 800 has storage space for program code 810 to perform any of the method steps in the above-described methods. These program codes can be read from or written to one or more computer program products.
  • Program code 810 may, for example, be compressed in a suitable form.


Abstract

A face recognition method, apparatus, electronic device, and storage medium. The face recognition method includes: acquiring a first face image to be recognized (S110); acquiring an attribute category corresponding to the first face image as a first category, and an attribute category corresponding to a second face image to be compared as a second category, the attribute category being the category to which a specified face attribute belongs (S120); obtaining a corresponding similarity threshold based on the first category and the second category (S130); obtaining the similarity between the face features of the first face image and the face features of the second face image as a target similarity (S140); and when the target similarity is greater than the similarity threshold, determining that the first face image matches the second face image (S150). The method can dynamically determine the similarity threshold according to the two face images to be compared, improving the accuracy of face recognition.

Description

Face Recognition Method, Apparatus, Electronic Device, and Storage Medium
Cross-Reference to Related Applications
This application claims priority to Chinese Application No. 202110179741.0, filed on February 7, 2021, the entire contents of which are hereby incorporated by reference for all purposes.
Technical Field
This application relates to the technical field of face recognition, and more particularly, to a face recognition method, apparatus, electronic device, and storage medium.
Background
Face recognition is a biometric technology that performs identity recognition based on a person's facial feature information. It refers to a series of related techniques that collect images or video streams containing faces, automatically detect and track faces in the images, and then recognize the detected faces; it is also commonly called portrait recognition or facial recognition. In traditional face recognition technology, the similarity between the face features of different images is usually obtained and then compared with a similarity threshold to determine the face recognition result. However, when the face recognition result is determined by comparing the similarity of face features against a similarity threshold, there is a technical problem of low face recognition accuracy.
Summary
In view of the above problems, this application proposes a face recognition method, apparatus, electronic device, and storage medium.
In a first aspect, an embodiment of this application provides a face recognition method, the method including: acquiring a first face image to be recognized; acquiring an attribute category corresponding to the first face image as a first category, and an attribute category corresponding to a second face image to be compared as a second category, the attribute category being the category to which a specified face attribute belongs; obtaining a corresponding similarity threshold based on the first category and the second category; obtaining the similarity between the face features of the first face image and the face features of the second face image as a target similarity; and when the target similarity is greater than the similarity threshold, determining that the first face image matches the second face image.
In a second aspect, an embodiment of this application provides a face recognition apparatus, the apparatus including: an image acquisition module, a category acquisition module, a threshold acquisition module, a similarity acquisition module, and a result acquisition module. The image acquisition module is used to acquire a first face image to be recognized; the category acquisition module is used to acquire an attribute category corresponding to the first face image as a first category, and an attribute category corresponding to a second face image to be compared as a second category, the attribute category being the category to which a specified face attribute belongs; the threshold acquisition module is used to obtain a corresponding similarity threshold based on the first category and the second category; the similarity acquisition module is used to obtain the similarity between the face features of the first face image and the face features of the second face image as a target similarity; and the result acquisition module is used to determine a face recognition result based on the comparison result between the target similarity and the similarity threshold.
In a third aspect, an embodiment of this application provides an electronic device, including: one or more processors; a memory; and one or more application programs, where the one or more application programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs being configured to perform the face recognition method provided in the first aspect.
In a fourth aspect, an embodiment of this application provides a computer-readable storage medium storing program code that can be invoked by a processor to perform the face recognition method provided in the first aspect.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of this application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of this application; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 shows a flowchart of a face recognition method according to one embodiment of this application.
FIG. 2 shows a flowchart of a face recognition method according to another embodiment of this application.
FIG. 3 shows a flowchart of a face recognition method according to yet another embodiment of this application.
FIG. 4 shows a flowchart of a face recognition method according to still another embodiment of this application.
FIG. 5 shows a block diagram of a face recognition apparatus according to one embodiment of this application.
FIG. 6 is a block diagram of an electronic device for performing the face recognition method according to an embodiment of this application.
FIG. 7 is a storage unit for saving or carrying program code implementing the face recognition method according to an embodiment of this application.
Detailed Description
To enable those skilled in the art to better understand the solutions of this application, the technical solutions in the embodiments of this application are described clearly and completely below with reference to the accompanying drawings.
Current face recognition solutions basically follow this pipeline: face detection, face alignment, face feature extraction, face feature comparison, and similarity judgment against a given threshold. When determining the threshold, a large-scale dataset containing different face attribute information is usually used for computation.
In traditional face recognition solutions, a similarity threshold is usually set in advance, and the same threshold is used for the similarity judgment of all faces to determine the face recognition result. However, when face attributes (such as gender, skin color, or age) differ, performing face recognition under the same similarity threshold affects recognition accuracy. For example, when genders differ, at the same threshold the false acceptance rate (FAR, False Accept Rate) for women is 10 times that for men; that is, if a face recognition system uses a single threshold for the similarity judgment of all faces, then over the same number of comparisons, different women's faces will be judged to come from the same person 10 times as often as men's. Therefore, a face recognition system that ignores face attribute information will have reduced accuracy.
In view of the above problems, the inventors propose the face recognition method, apparatus, electronic device, and storage medium provided by the embodiments of this application, which can dynamically determine the similarity threshold based on the face attributes of different face images during face recognition, avoid the inaccurate recognition results caused by using a fixed threshold, and improve the accuracy of face recognition. The specific face recognition method is described in detail in the following embodiments.
Referring to FIG. 1, FIG. 1 shows a schematic flowchart of a face recognition method provided by one embodiment of this application. In a specific embodiment, the face recognition method is applied to the face recognition apparatus 400 shown in FIG. 5 and to the electronic device 100 (FIG. 6) configured with the face recognition apparatus 400. The following describes the specific flow of this embodiment taking an electronic device as an example; it can be understood that the electronic device to which this embodiment applies may be a smartphone, tablet computer, smartwatch, smart glasses, notebook computer, or the like, without limitation here. The flow shown in FIG. 1 is described in detail below, and the face recognition method may specifically include the following steps:
Step S110: Acquire a first face image to be recognized.
In this embodiment of the application, the electronic device may acquire a face image on which face recognition is to be performed as the first face image, where the first face image is an image containing a face region.
As one implementation, when the electronic device is a mobile terminal equipped with a camera, such as a smartphone, tablet computer, or smartwatch, it can capture images of a person's face through the front camera or the rear camera to obtain a face image. For example, the electronic device can capture a person's face image through the rear camera and take the obtained image as the face image to be recognized, that is, the first face image.
As another implementation, the electronic device can obtain the first face image to be recognized locally, that is, from locally stored files. For example, when the electronic device is a mobile terminal, it can obtain the face image to be recognized from an album: the device captures a face image through the camera and stores it in the local album in advance, or downloads a face image from the network in advance and stores it locally, and then reads the first face image to be recognized from the album when face recognition is needed.
As yet another implementation, when the electronic device is a mobile terminal or a computer, it can also download the first face image to be recognized from the network; for example, it can download the required first face image from a corresponding server over a wireless or data network.
As still another implementation, the electronic device can also receive the first face image to be recognized input through a user's operation on another device, thereby obtaining the face image on which face recognition is to be performed. Of course, the specific way the electronic device obtains the first face image is not limited.
Step S120: Acquire the attribute category corresponding to the first face image as a first category, and the attribute category corresponding to a second face image to be compared as a second category, the attribute category being the category to which a specified face attribute belongs.
In this embodiment, after acquiring the first face image to be recognized, the electronic device can recognize the face attributes of the first face image, obtain the attribute category corresponding to the first face image, and take this attribute category as the first category. Face attributes are a series of biological characteristics that characterize a face; they have strong inherent stability and individual variability and identify a person. Face attributes include attributes of multiple dimensions, such as gender, skin color, age, and expression. The attribute category can be the category to which a specified face attribute belongs, and the specified face attribute can be one or more of the multiple dimensions of face attributes. For example, when the specified face attribute is gender, its categories include male and female; when the specified face attribute is skin color, its categories include yellow, white, black, and brown. Of course, the above are only examples and do not constitute a specific limitation on the categories of the specified face attribute.
In some implementations, the electronic device can perform face attribute recognition on the first face image through a pre-trained face attribute recognition model to obtain its corresponding attribute category, and can likewise perform face attribute recognition on the second face image through the face attribute recognition model to obtain its corresponding attribute category. The face attribute recognition model can be a neural network model, a generative adversarial network, an encoder-decoder model, or the like; the specific type of model is not limited. The face attribute recognition model can be pre-trained to recognize face attributes of multiple dimensions, or trained to recognize only the specified face attribute.
Optionally, the face image to be compared can be a pre-stored face image, for example, a face image previously enrolled by a registered user. Since the face image to be compared is pre-stored, in order to avoid obtaining its attribute category in real time during face recognition, the attribute category corresponding to the face image to be compared can be obtained and stored in advance. In this way, when the face image to be compared is used against a face image to be recognized, its attribute category can be obtained directly, reducing the processing load and improving recognition efficiency; it also avoids obtaining the attribute category of the comparison image at every recognition, saving processing resources.
Step S130: Obtain a corresponding similarity threshold based on the first category and the second category.
In this embodiment, after obtaining the attribute category corresponding to the first face image and the attribute category corresponding to the second face image, and considering the influence of face attributes on accuracy when the similarity between face images is compared with a similarity threshold, the electronic device can obtain the similarity threshold corresponding to the first category and the second category.
In some implementations, the electronic device can obtain the similarity threshold corresponding to the first category and the second category based on a pre-stored correspondence between similarity thresholds and the attribute categories of two face images. Optionally, each similarity threshold can be obtained in advance for each different case of attribute categories of two face images by matching face features using sample images for that case, as the similarity threshold at which the resulting accuracy satisfies the required accuracy condition. That is, for each case of attribute categories, the accuracy of face recognition can be verified under various similarity thresholds until the required accuracy condition is satisfied, at which point the current similarity threshold is taken as the pre-stored similarity threshold.
Step S140: Obtain the similarity between the face features of the first face image and the face features of the second face image as a target similarity.
In this embodiment, when recognizing whether the first face image and the second face image correspond to the same face, the electronic device can obtain the face features of the first face image and the face features of the second face image, then obtain the similarity between the face features and take this similarity as the target similarity.
In some implementations, a pre-trained face feature extraction model can be used: the first face image is input into the model to obtain the face features of the first face image, and the second face image is input into the model to obtain the face features of the second face image. The face feature extraction model can be a neural network model, an encoding model, a generative adversarial network, or the like; for example, it can be ResNet100. The specific face feature extraction model is not limited.
In some implementations, after the face features of the first face image and the second face image are obtained, the similarity between them can be computed to determine whether the first face image matches the second face image. Optionally, the cosine similarity between the two sets of face features can be obtained, with values ranging from -1 to 1. Optionally, the face features can be represented as feature vectors, and the Euclidean distance between the feature vectors can be obtained to determine the similarity between the face features of the two images. Of course, the way of obtaining the similarity between face features is not limited. In addition, since similarity may be quantified on different scales, the quantification scale of the above similarity threshold should match that of the similarity computed between face features during recognition, so that face recognition is performed accurately.
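The cosine similarity mentioned above can be sketched as follows. This is an assumed implementation for illustration, not code from the patent; for unit-norm feature vectors it reduces to a dot product, and its value lies in [-1, 1].

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine of the angle between two feature vectors; ranges from -1 to 1.
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Whichever measure is used (cosine similarity or a distance-derived score), the stored similarity thresholds must be expressed on the same scale.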
Step S150: When the target similarity is greater than the similarity threshold, determine that the first face image matches the second face image.
In this embodiment, after obtaining the similarity between the face features of the first face image and the second face image and determining the similarity threshold from the attribute categories corresponding to the two images, the electronic device can compare the obtained target similarity with the similarity threshold to determine whether the target similarity is greater than the similarity threshold. If the target similarity is greater than the similarity threshold, it can be determined that the first face image matches the second face image; if the target similarity is less than or equal to the similarity threshold, it can be determined that the first face image does not match the second face image.
In the face recognition method provided by this embodiment of the application, during face recognition the similarity threshold is dynamically determined according to the attribute category of the face attribute of the face image to be recognized and the attribute category corresponding to the face image used for comparison, avoiding both the use of a fixed threshold to determine the face recognition result and the impact of face attributes on recognition accuracy, thereby improving the accuracy of face recognition.
Referring to FIG. 2, FIG. 2 shows a schematic flowchart of a face recognition method provided by another embodiment of this application. The face recognition method is applied to the above electronic device. The flow shown in FIG. 2 is described in detail below, and the face recognition method may specifically include the following steps:
Step S210: Acquire a first face image to be recognized.
In this embodiment, for step S210 reference may be made to the content of the foregoing embodiments; details are not repeated here.
Step S220: Acquire the attribute category corresponding to the first face image as a first category, and the attribute category corresponding to a second face image to be compared as a second category, the attribute category being the category to which a specified face attribute belongs.
In this embodiment, when obtaining the attribute category corresponding to the first face image and the attribute category corresponding to the second face image, the electronic device can input the first face image and the second face image separately into a pre-trained face attribute classification model to obtain the attribute category of the first face image as the first category and the attribute category of the second face image as the second category. When training the face attribute classification model, face sample images can be labeled with the categories of one or more face attributes, for example gender, age, and skin color, to obtain face sample images labeled with the categories of one or more face attributes, thereby completing the construction of the training set. The face sample images are then input into an initial model to obtain the model's output results; a loss value is computed from the outputs and the label data corresponding to the sample images; and the model parameters of the initial model are adjusted according to the loss value until the computed loss value satisfies the loss condition, yielding the face attribute classification model. The initial model can be a neural network or the like; the specific model is not limited.
In some implementations, considering that the expressiveness of the various face attributes in a captured face image differs and affects the accuracy of attribute recognition, when obtaining the attribute categories used to determine the similarity threshold, the expressiveness of the face attributes can be consulted to determine the specified face attribute, and the specified face attribute is selected from the multiple face attributes as the face attribute whose attribute category needs to be determined. Specifically, the electronic device can obtain the attribute score corresponding to each of the multiple dimensions of face attributes of the first face image, where the attribute score characterizes the accuracy with which the face attributes of the multiple dimensions are recognized; based on the attribute score corresponding to each dimension, the face attributes whose attribute scores are greater than a specified score are taken as the specified face attributes. Face attributes of multiple dimensions refer to multiple kinds of face attributes, such as gender, skin color, and age; a face attribute of one dimension represents one kind of face attribute.
Optionally, when obtaining the attribute category of a face attribute of a face image, the model used to obtain attribute categories (for example, the above face attribute classification model) can output the probability of each category of each face attribute. For each face attribute, the category with the largest probability can be taken as its attribute category, or a category whose probability is greater than a preset probability threshold can be taken. Although an attribute category is ultimately obtained for each face attribute either way, the larger the probability corresponding to the determined attribute category, the more accurate the recognized category. Therefore, the probability corresponding to the attribute category determined for each face attribute can be used to score that face attribute, yielding the attribute score corresponding to each dimension of face attribute.
As one approach, a standard probability can be set for each dimension of face attribute; the probability of the attribute category to which each dimension of face attribute belongs is then quantified against this standard probability to obtain the attribute score of that dimension. Optionally, the attribute score can be obtained as the ratio of the probability of the attribute category of each dimension to its corresponding standard probability. For example, for the gender attribute, if the determined attribute category is male with a probability of 75%, and the standard probability for gender is 80%, the attribute score for gender is 75% / 80% = 0.9375. Of course, this is only an example and does not limit how attribute scores are obtained.
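The quantification above is a single ratio; the worked gender example can be reproduced directly. This is an illustrative sketch, with the standard probabilities treated as assumed per-attribute inputs.

```python
def attribute_score(predicted_probability, standard_probability):
    # Score of one attribute: probability of its predicted category
    # divided by the per-attribute standard probability.
    return predicted_probability / standard_probability
```

With a predicted probability of 75% for "male" and a standard probability of 80% for the gender attribute, the score is 0.9375, matching the example in the text.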
In the above approach, after the attribute score corresponding to each dimension of face attribute is obtained, the face attribute with the highest attribute score can also be determined as the specified face attribute based on the scores of the individual dimensions.
In this way, the attribute category of a face attribute that is recognized relatively accurately can be determined and used to determine the similarity threshold, avoiding the influence that inaccurate face attribute recognition would have on the determination of the similarity threshold.
In some implementations, since the second face image is obtained in advance, face images whose attribute scores in every dimension are greater than the specified score can be obtained in advance, so that the attribute categories of the comparison face images available during recognition are all relatively accurate. For example, when the electronic device is a mobile terminal, it determines the attribute scores of each dimension when a comparison face image is enrolled in advance; if the attribute score of any dimension is not greater than the specified score, the user can be prompted to re-enroll the face image until the attribute scores of all dimensions of the obtained face image are greater than the specified score.
Step S230: Acquire the attribute category combination formed by the first category and the second category as a target category combination.
In this embodiment, since the similarity threshold corresponds to the attribute categories of the two face images, it can correspond to the attribute category combination formed by those categories, and the similarity threshold corresponding to each attribute category combination can be pre-stored in the electronic device. Therefore, after obtaining the attribute category corresponding to the first face image and the attribute category corresponding to the second face image, the electronic device can obtain the attribute category combination formed by the first category and the second category. For example, when the specified face attribute is gender, the combinations can include male-male, female-female, and male-female. As another example, when the specified face attribute is skin color with categories yellow, white, and black, the attribute category combinations include: yellow-white, yellow-black, yellow-yellow, white-black, white-white, and black-black.
Step S240: Obtain the similarity threshold corresponding to the target category combination from the similarity thresholds corresponding to the multiple attribute category combinations.
In this embodiment, after obtaining the attribute category combination formed by the attribute category of the first face image and the attribute category of the second face image, the electronic device can obtain the similarity threshold corresponding to the target category combination from the similarity thresholds corresponding to the multiple attribute category combinations.
In some implementations, the specified face attribute can include gender. When the target category combination is female-female, a first threshold is obtained as the similarity threshold; when it is male-male, a second threshold is obtained; when it is male-female, a third threshold is obtained, where the first, second, and third thresholds decrease in that order. It can be understood that the magnitude of the similarity threshold is related to the distribution of face similarities: when the face similarities in a set are larger on average, the corresponding threshold also increases, and since different female faces are more easily mistaken for the same face, the relationship between the thresholds can be: first threshold > second threshold > third threshold. Of course, the specific values of the similarity thresholds can be determined according to the required face recognition accuracy (correct acceptance rate and false acceptance rate).
In some implementations, the specified face attribute can include multiple face attributes, in which case the number of category combinations, and hence of similarity thresholds, increases. For example, if the specified face attributes include gender and skin color, the attribute category combinations consist of the combinations of the gender and skin color of the first face image with the gender and skin color of the second face image.
步骤S250:获取所述第一人脸图像的人脸特征与所述第二人脸图像的人脸特征的相似度作为目标相似度获取所述第一人脸图像的人脸特征与所述第二人脸图像的人脸特征的相似度作为目标相似度。
步骤S260:当所述目标相似度大于所述相似度阈值时,确定所述第一人脸图像与所述第二人脸图像匹配。
在本申请实施例中,电子设备在获取到第一人脸图像的人脸特征与第二人脸图像的人脸特征之间的相似度,并根据第一人脸图像对应的属性类别以及第二人脸图像对应的属性类别确定出相似度阈值之后,则可以将获取的目标相似度与相似度阈值进行比较,以确定目标相似度是否大于相似度阈值;若目标相似度大于相似度阈值,则可以确定第一人脸图像与第二人脸图像匹配;若目标相似度小于或等于相似度阈值,则可以确定第一人脸图像与第二人脸图像不匹配。
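The match decision above can be sketched as follows. Cosine similarity is used here as an assumed similarity measure (the embodiments do not fix a particular one), and the feature vectors are toy values:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def is_match(feat1, feat2, threshold):
    # Match only when the target similarity strictly exceeds the threshold;
    # equal-to-threshold counts as a non-match, as in step S260.
    return cosine_similarity(feat1, feat2) > threshold

f1 = [0.6, 0.8, 0.0]
f2 = [0.6, 0.79, 0.02]
print(round(cosine_similarity(f1, f2), 4))
print(is_match(f1, f2, 0.75))   # True for these nearly parallel vectors
```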
In some implementations, considering that the attribute category may be misrecognized, attribute category combinations in which the attribute categories of the two face images differ are also taken into account; for example, the male-female combination is also considered, and during face recognition the face images used for comparison may be obtained at random, so that an attribute category error does not make it impossible to find a matching face image. In addition, if the first face image and the second face image are recognized as having different attribute categories but are ultimately determined to match, the attribute category can be considered erroneous; therefore, in the case where the attribute category of the second face image is stored in advance, the first face image may be labeled according to the attribute category of the second face image in the gallery, and the face attribute classification model may then undergo corrective training to improve its accuracy. Of course, the user may also be prompted to input the attribute category of the first face image, the first face image is labeled according to the input attribute category, and the face attribute classification model then undergoes corrective training to improve its accuracy.
In the face recognition method provided by this embodiment of the present application, during face recognition, the attribute category combination is determined according to the attribute category of the face attribute of the face image to be recognized and the attribute category corresponding to the face image used for comparison, and the similarity threshold is determined dynamically according to the attribute category combination. This avoids using a fixed threshold to determine the face recognition result, and avoids the impact of face attributes on recognition accuracy, thereby improving the accuracy of face recognition. In addition, when determining the attribute category, the designated face attribute whose attribute category needs to be obtained is determined according to the attribute scores of the face attributes of the face image, avoiding the inaccuracy that face attribute recognition would introduce into the similarity threshold and further improving the accuracy of face recognition.
Referring to FIG. 3, FIG. 3 is a schematic flowchart of a face recognition method provided by yet another embodiment of the present application. The face recognition method is applied to the electronic device described above; the flow shown in FIG. 3 is described in detail below. The face recognition method may specifically include the following steps:
Step S310: for different attribute category combinations, obtain the similarity threshold and the true accept rate TAR under different false accept rates FAR during face recognition, to obtain multiple groups of indicator data corresponding to each attribute category combination.
In this embodiment of the present application, the electronic device may, for different attribute category combinations, determine in advance the similarity threshold corresponding to each attribute category combination according to the false accept rate (FAR) and the true accept rate (TAR) of face recognition. Specifically, for different attribute category combinations, the electronic device may obtain the similarity threshold and the true accept rate under different FARs during face recognition, thereby obtaining multiple groups of indicator data corresponding to each attribute category combination. In these groups, each group of indicator data may include a similarity threshold, a FAR, and a TAR.
The above groups of indicator data can be obtained by testing, matching the face images in a test set against one another. The false accept rate may be determined as follows: in one test run, the number of times the obtained similarity is greater than the similarity threshold when face images of different people are compared is taken as a first count, the total number of comparisons between face images of different people is taken as a second count, and the ratio of the first count to the second count is the false accept rate. The true accept rate may be determined as follows: in one test run, the number of times the obtained similarity is greater than the similarity threshold when face images of the same person are compared is taken as a third count, the total number of comparisons between face images of the same person is taken as a fourth count, and the ratio of the third count to the fourth count is the true accept rate.
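The FAR/TAR counting just described can be sketched directly. The similarity values below are invented test-set results; each threshold yields one group of indicator data:

```python
def far_tar(impostor_sims, genuine_sims, threshold):
    """impostor_sims: similarities from different-person comparisons;
    genuine_sims: similarities from same-person comparisons.
    FAR = (# impostor similarities above threshold) / (# impostor comparisons);
    TAR = (# genuine similarities above threshold) / (# genuine comparisons)."""
    far = sum(s > threshold for s in impostor_sims) / len(impostor_sims)
    tar = sum(s > threshold for s in genuine_sims) / len(genuine_sims)
    return far, tar

impostor = [0.2, 0.4, 0.55, 0.62, 0.71]   # different people compared
genuine = [0.66, 0.72, 0.81, 0.9, 0.95]   # same person compared
for thr in (0.6, 0.7, 0.8):
    far, tar = far_tar(impostor, genuine, thr)
    print(thr, far, tar)   # one (threshold, FAR, TAR) group per threshold
```

Raising the threshold lowers both rates, which is why each combination needs several groups of indicator data to choose from.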
Illustratively, when the designated face attribute is gender, multiple groups of indicator data can be obtained for each of the three gender combinations: male and male, female and female, and male and female. Specifically, for the male-male combination, male face images are matched against male face images, and the FAR, similarity threshold, and TAR are obtained; for the female-female combination, female face images are matched against female face images, and the FAR, similarity threshold, and TAR are obtained; for the male-female combination, male face images are matched against female face images, and the FAR, similarity threshold, and TAR are obtained.
Step S320: from the multiple groups of indicator data corresponding to each attribute category combination, obtain target indicator data whose FAR satisfies a first acceptance-rate condition and whose TAR satisfies a second acceptance-rate condition.
In this embodiment of the present application, after obtaining the multiple groups of indicator data corresponding to each attribute category combination, the electronic device can obtain from them the target indicator data whose FAR satisfies the first acceptance-rate condition and whose TAR satisfies the second acceptance-rate condition. The first acceptance-rate condition may include: the FAR is less than a first acceptance rate, or the FAR is the minimum; the second acceptance-rate condition may include: the TAR is greater than a second acceptance rate, or the TAR is the maximum. The specific first and second acceptance-rate conditions can be determined according to the false accept rate and true accept rate required in the actual scenario. For example, in an access-control scenario, where the gate must be controlled sensitively, the false accept rate may be greater than in other scenarios; as another example, in a payment scenario, where the security requirement is high, the false accept rate may be smaller than in other scenarios.
In some implementations, if, for any one attribute category combination (for example, a target attribute category combination), at least two groups of indicator data satisfying the above conditions are obtained, that is, among the multiple groups of indicator data corresponding to the target attribute category combination there exist at least two groups, each of whose FAR satisfies the first acceptance-rate condition and whose TAR satisfies the second acceptance-rate condition, the selection can be made according to the weight of the false accept rate and the weight of the true accept rate: if the weight of the false accept rate is greater than that of the true accept rate, the group of indicator data with the smallest false accept rate is taken as the target indicator data; if the weight of the true accept rate is greater than that of the false accept rate, the group with the largest true accept rate is taken as the target indicator data.
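The selection rule above can be sketched as follows. The acceptance-rate limits and weights are assumed values, and each group is a (threshold, FAR, TAR) triple:

```python
def select_target(groups, far_limit, tar_floor, far_weight, tar_weight):
    """groups: list of (threshold, far, tar) triples for one combination.
    Keep the groups meeting both acceptance-rate conditions, then break
    ties by whichever rate carries the greater weight."""
    ok = [g for g in groups if g[1] < far_limit and g[2] > tar_floor]
    if not ok:
        return None
    if far_weight > tar_weight:
        return min(ok, key=lambda g: g[1])   # smallest FAR wins
    return max(ok, key=lambda g: g[2])       # largest TAR wins

groups = [(0.70, 0.010, 0.97), (0.75, 0.005, 0.95), (0.80, 0.002, 0.90)]
# Two groups satisfy FAR < 0.02 and TAR > 0.93; the weights decide.
print(select_target(groups, far_limit=0.02, tar_floor=0.93,
                    far_weight=0.7, tar_weight=0.3))   # FAR-dominant choice
print(select_target(groups, far_limit=0.02, tar_floor=0.93,
                    far_weight=0.3, tar_weight=0.7))   # TAR-dominant choice
```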
Step S330: obtain the similarity threshold in the target indicator data corresponding to each attribute category combination as the similarity threshold corresponding to that combination, and store the obtained similarity thresholds corresponding to the multiple attribute category combinations.
In this embodiment of the present application, after the target indicator data corresponding to each attribute category combination is determined, the similarity threshold in that target indicator data can be obtained as the similarity threshold corresponding to that combination. After the similarity threshold corresponding to each attribute category combination is obtained, the correspondence between the similarity thresholds and the attribute category combinations can be stored, so that the corresponding similarity threshold can be determined conveniently during face recognition.
Step S340: obtain a first face image to be recognized.
Step S350: obtain the attribute category corresponding to the first face image as a first category, and the attribute category corresponding to a second face image to be compared as a second category, the attribute category being the classification to which a designated face attribute belongs.
Step S360: obtain the attribute category combination formed by the first category and the second category as a target category combination.
Step S370: obtain the similarity threshold corresponding to the target category combination from the similarity thresholds corresponding to multiple attribute category combinations.
Step S380: obtain the similarity between the face features of the first face image and the face features of the second face image as a target similarity.
Step S390: when the target similarity is greater than the similarity threshold, determine that the first face image matches the second face image.
In this embodiment of the present application, for steps S340 to S390, reference may be made to the content of the other embodiments; details are not repeated here.
In the face recognition method provided by this embodiment of the present application, during face recognition, the attribute category combination is determined according to the attribute category of the face attribute of the face image to be recognized and the attribute category corresponding to the face image used for comparison, and the similarity threshold is determined dynamically according to the attribute category combination. This avoids using a fixed threshold to determine the face recognition result, and avoids the impact of face attributes on recognition accuracy, thereby improving the accuracy of face recognition. In addition, the similarity threshold corresponding to each attribute category combination is obtained in advance for the required false accept rate and true accept rate, ensuring that the accuracy of face recognition meets the requirements for every attribute category combination.
Referring to FIG. 4, FIG. 4 is a schematic flowchart of a face recognition method provided by a further embodiment of the present application. The face recognition method is applied to the electronic device described above; the flow shown in FIG. 4 is described in detail below. The face recognition method may specifically include the following steps:
Step S410: obtain a first face image to be recognized.
In this embodiment of the present application, for step S410, reference may be made to the content of the foregoing embodiments; details are not repeated here.
Step S420: obtain the attribute category corresponding to the first face image as a first category.
In this embodiment of the present application, for the way the electronic device obtains the attribute category of the first face image, reference may be made to the content of the other embodiments; details are not repeated here.
Step S430: determine, from multiple face image libraries, the face image library corresponding to the first category, where each face image library corresponds to a different attribute category.
In this embodiment of the present application, when performing face recognition on the first face image to be recognized, the electronic device may determine the face images used for comparison according to the attribute category of the first face image. Specifically, face images to be compared that have the same attribute category can be obtained, avoiding the inaccuracy of later recognizing faces with different face attributes as the same face during the recognition process.
In some implementations, the electronic device may obtain in advance multiple face images used for comparison with the face image to be recognized, that is, the gallery face images; then obtain the attribute category corresponding to each of the multiple face images; and, according to the attribute category corresponding to each face image, divide the multiple face images into different face image groups serving as the face image libraries corresponding to different attribute categories, where each face image group corresponds to a different attribute category. During face recognition, when obtaining the second face image to be compared, the face image library corresponding to the first category, that is, the library whose attribute category is also the first category, can be obtained.
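The gallery partitioning above can be sketched as a simple grouping step. Here the "images" are toy (name, category) pairs and `classify` is a stand-in for the attribute classification model:

```python
from collections import defaultdict

def build_libraries(gallery, classify):
    """Group gallery face images into per-attribute-category libraries.
    classify(img) -> attribute category (e.g. output of the classifier)."""
    libraries = defaultdict(list)
    for img in gallery:
        libraries[classify(img)].append(img)
    return dict(libraries)

# Toy gallery: (name, category) pairs stand in for enrolled images;
# classify simply reads the stored label.
gallery = [("a", "male"), ("b", "female"), ("c", "male"), ("d", "female")]
libs = build_libraries(gallery, classify=lambda img: img[1])
print(sorted(libs))      # the distinct attribute categories
print(libs["male"])      # the library a male query would be compared against
```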
Step S440: obtain a second face image to be compared from the face image library, the second category being the same as the first category.
In this embodiment of the present application, after determining the face image library corresponding to the attribute category of the first face image, the electronic device can obtain the second face image to be compared from that library. The attribute category of the second face image should be the same as the first category; that is, the second category is the same as the first category.
Step S450: obtain the corresponding similarity threshold based on the first category and the second category.
Step S460: obtain the similarity between the face features of the first face image and the face features of the second face image as a target similarity.
Step S470: when the target similarity is greater than the similarity threshold, determine that the first face image matches the second face image.
In this embodiment of the present application, for steps S450 to S470, reference may be made to the content of the other embodiments; details are not repeated here.
In some implementations, since the face images obtained for comparison are those whose attribute category is the same as that of the first face image to be recognized, and since face attribute recognition may be inaccurate, if, after matching the first face image against the face images in the library determined above, it is determined that the first face image matches none of them, face images to be compared can be obtained from the face image libraries corresponding to other attribute categories and matched against the first face image, so that a face attribute recognition error does not prevent a matching face image from being found.
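The fallback search above can be sketched as follows. The scalar "features", the similarity function, and the single 0.9 threshold are illustrative assumptions that keep the example small:

```python
def search_with_fallback(query_feat, query_category, libraries, threshold_for, similarity):
    """Search the library matching the query's attribute category first;
    on failure, fall back to the remaining libraries, guarding against
    attribute misclassification."""
    order = [query_category] + [c for c in libraries if c != query_category]
    for category in order:
        for name, feat in libraries.get(category, []):
            if similarity(query_feat, feat) > threshold_for(query_category, category):
                return name
    return None

# Toy setup: features are scalars, similarity is 1 - |difference|,
# and every category combination shares one assumed threshold of 0.9.
libraries = {"male": [("tom", 0.30)], "female": [("amy", 0.62)]}
threshold_for = lambda q, c: 0.9
similarity = lambda a, b: 1.0 - abs(a - b)

# A query misclassified as "male" still finds its match in the female library.
print(search_with_fallback(0.60, "male", libraries, threshold_for, similarity))
```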
In some implementations, if, after matching the first face image against the face images in the library determined above, it is determined that the first face image matches none of them, prompt information may also be output to ask the user to confirm the attribute category recognized this time (the first category). If the user inputs information confirming that the attribute category recognition was inaccurate, the user may be further prompted to input the correct attribute category, after which the face images in the library corresponding to the input attribute category are matched against the first face image to determine the face recognition result.
Optionally, since the attribute category was misrecognized, the electronic device may also perform corrective training on the face attribute classification model according to the attribute category input by the user and the first face image, to further improve the accuracy of the face attribute classification model.
In the face recognition method provided by this embodiment of the present application, during face recognition, the face images used for comparison are obtained from the corresponding face image library according to the attribute category of the face attribute of the face image to be recognized, and the similarity threshold is then determined dynamically according to the attribute category combination. This avoids using a fixed threshold to determine the face recognition result, and avoids the impact of face attributes on recognition accuracy, thereby improving the accuracy of face recognition.
Referring to FIG. 5, a structural block diagram of a face recognition apparatus 400 provided by an embodiment of the present application is shown. The face recognition apparatus 400 is applied to the electronic device described above and includes: an image obtaining module 410, a category obtaining module 420, a threshold obtaining module 430, a similarity obtaining module 440, and a result obtaining module 450. The image obtaining module 410 is configured to obtain a first face image to be recognized; the category obtaining module 420 is configured to obtain the attribute category corresponding to the first face image as a first category, and the attribute category corresponding to a second face image to be compared as a second category, the attribute category being the classification to which a designated face attribute belongs; the threshold obtaining module 430 is configured to obtain the corresponding similarity threshold based on the first category and the second category; the similarity obtaining module 440 is configured to obtain the similarity between the face features of the first face image and the face features of the second face image as a target similarity; and the result obtaining module 450 is configured to determine the face recognition result based on the result of comparing the target similarity with the similarity threshold.
In some implementations, the threshold obtaining module 430 may include a combination obtaining unit and a threshold determining unit. The combination obtaining unit is configured to obtain the attribute category combination formed by the first category and the second category as a target category combination; the threshold determining unit is configured to obtain the similarity threshold corresponding to the target category combination from the similarity thresholds corresponding to multiple attribute category combinations.
Optionally, the designated face attribute includes gender. The threshold determining unit may be specifically configured to: when the target category combination is female and female, obtain a first threshold as the similarity threshold; when the target category combination is male and male, obtain a second threshold as the similarity threshold; when the target category combination is male and female, obtain a third threshold as the similarity threshold, where the first threshold, the second threshold, and the third threshold decrease in magnitude in that order.
Optionally, the face recognition apparatus 400 may further include: a data obtaining module, a data filtering module, and a threshold storage module. The data obtaining module is configured to, before the corresponding similarity threshold is obtained based on the first category and the second category, obtain for different attribute category combinations the similarity threshold and the true accept rate TAR under different false accept rates FAR during face recognition, to obtain multiple groups of indicator data corresponding to each attribute category combination; the data filtering module is configured to obtain, from the multiple groups of indicator data corresponding to each attribute category combination, target indicator data whose FAR satisfies a first acceptance-rate condition and whose TAR satisfies a second acceptance-rate condition; the threshold storage module is configured to obtain the similarity threshold in the target indicator data corresponding to each attribute category combination as the similarity threshold corresponding to that combination, and to store the obtained similarity thresholds corresponding to the multiple attribute category combinations.
In some implementations, the category obtaining module 420 may include: an attribute recognition unit, a library determining unit, and an image obtaining unit. The attribute recognition unit is configured to obtain the attribute category corresponding to the first face image as a first category; the library determining unit is configured to determine, from multiple face image libraries, the face image library corresponding to the first category, where each library corresponds to a different attribute category; and the image obtaining unit is configured to obtain a second face image to be compared from that library, the second category being the same as the first category.
In this implementation, the face recognition apparatus 400 may further include: a gallery image obtaining module, a gallery image recognition module, and a gallery image grouping module. The gallery image obtaining module is configured to obtain multiple face images used for comparison with the face image to be recognized; the gallery image recognition module is configured to obtain the attribute category corresponding to each of the multiple face images; and the gallery image grouping module is configured to divide the multiple face images, according to the attribute category corresponding to each face image, into different face image groups serving as the face image libraries corresponding to different attribute categories, where each group corresponds to a different attribute category.
In some implementations, the category obtaining module 420 may be specifically configured to input the first face image and the second face image separately into a pre-trained face attribute classification model, to obtain the attribute category corresponding to the first face image as the first category and the attribute category corresponding to the second face image as the second category.
In some implementations, the face recognition apparatus 400 may further include: an attribute score obtaining module and an attribute filtering module. The attribute score obtaining module is configured to obtain the attribute score corresponding to each of multiple dimensions of face attributes of the first face image, the attribute score characterizing the accuracy with which the face attributes of the multiple dimensions are recognized; the attribute filtering module is configured to take, based on the attribute score of each dimension of face attribute, a face attribute whose attribute score is greater than a specified value as the designated face attribute.
Those skilled in the art can clearly understand that, for convenience and brevity of description, reference may be made to the corresponding processes in the foregoing method embodiments for the specific working processes of the apparatus and modules described above; details are not repeated here.
In the several embodiments provided in the present application, the coupling between modules may be electrical, mechanical, or of another form.
In addition, the functional modules in the embodiments of the present application may be integrated into one processing module, may each exist physically separately, or two or more modules may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module.
In summary, in the solution provided by the present application, a first face image to be recognized is obtained; the attribute category corresponding to the first face image is obtained as a first category and the attribute category corresponding to a second face image to be compared as a second category, the attribute category being the classification to which a designated face attribute belongs; the corresponding similarity threshold is obtained based on the first category and the second category; the similarity between the face features of the first face image and the face features of the second face image is obtained as a target similarity and compared with the similarity threshold; and when the target similarity is greater than the similarity threshold, it is determined that the first face image matches the second face image. In this way, during face recognition, the similarity threshold can be determined dynamically based on the face attributes of the different face images, avoiding the inaccurate recognition results caused by using a fixed threshold to determine the face recognition result, and improving the accuracy of face recognition.
Referring to FIG. 6, a structural block diagram of an electronic device provided by an embodiment of the present application is shown. The electronic device 100 may be a smartphone, a tablet computer, a smart watch, smart glasses, a laptop computer, or another electronic device capable of running application programs. The electronic device 100 of the present application may include one or more of the following components: a processor 110, a memory 120, and one or more application programs, where the one or more application programs may be stored in the memory 120 and configured to be executed by the one or more processors 110, the one or more programs being configured to perform the methods described in the foregoing method embodiments.
The processor 110 may include one or more processing cores. Using various interfaces and lines to connect the parts of the electronic device 100, the processor 110 performs the various functions of the electronic device 100 and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 120 and by invoking the data stored in the memory 120. Optionally, the processor 110 may be implemented in at least one of the hardware forms of digital signal processing (DSP), field-programmable gate array (FPGA), and programmable logic array (PLA). The processor 110 may integrate one or a combination of a central processing unit (CPU), a graphics processing unit (GPU), a modem, and the like, where the CPU mainly handles the operating system, the user interface, applications, and so on; the GPU is responsible for rendering and drawing display content; and the modem handles wireless communication. It can be understood that the modem may also not be integrated into the processor 110 and may instead be implemented separately by a communication chip.
The memory 120 may include random access memory (RAM) or read-only memory (ROM). The memory 120 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 120 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing the operating system, instructions for implementing at least one function (such as a touch function, an audio playback function, and an image playback function), instructions for implementing the various method embodiments described herein, and so on. The data storage area may also store data created by the electronic device 100 in use (such as a phone book, audio and video data, and chat record data).
Referring to FIG. 7, a structural block diagram of a computer-readable storage medium provided by an embodiment of the present application is shown. The computer-readable medium 800 stores program code that can be invoked by a processor to perform the methods described in the foregoing method embodiments.
The computer-readable storage medium 800 may be an electronic memory such as flash memory, EEPROM (electrically erasable programmable read-only memory), EPROM, a hard disk, or ROM. Optionally, the computer-readable storage medium 800 includes a non-transitory computer-readable storage medium. The computer-readable storage medium 800 has storage space for program code 810 that performs any of the method steps of the methods above. The program code can be read from or written to one or more computer program products. The program code 810 may, for example, be compressed in an appropriate form.
Finally, it should be noted that the above embodiments are merely intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements of some of the technical features therein, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (20)

  1. A face recognition method, characterized in that the method comprises:
    obtaining a first face image to be recognized;
    obtaining an attribute category corresponding to the first face image as a first category, and an attribute category corresponding to a second face image to be compared as a second category, the attribute category being the classification to which a designated face attribute belongs;
    obtaining a corresponding similarity threshold based on the first category and the second category;
    obtaining a similarity between face features of the first face image and face features of the second face image as a target similarity;
    when the target similarity is greater than the similarity threshold, determining that the first face image matches the second face image.
  2. The method according to claim 1, characterized in that the obtaining a corresponding similarity threshold based on the first category and the second category comprises:
    obtaining an attribute category combination formed by the first category and the second category as a target category combination;
    obtaining, from similarity thresholds corresponding to multiple attribute category combinations, the similarity threshold corresponding to the target category combination.
  3. The method according to claim 2, characterized in that the designated face attribute comprises gender, and the obtaining, from similarity thresholds corresponding to multiple attribute category combinations, the similarity threshold corresponding to the target category combination comprises:
    when the target category combination is female and female, obtaining a first threshold as the similarity threshold;
    when the target category combination is male and male, obtaining a second threshold as the similarity threshold;
    when the target category combination is male and female, obtaining a third threshold as the similarity threshold, wherein the first threshold, the second threshold, and the third threshold decrease in magnitude in that order.
  4. The method according to claim 2 or 3, characterized in that before the obtaining a corresponding similarity threshold based on the first category and the second category, the method further comprises:
    for different attribute category combinations, obtaining the similarity threshold and the true accept rate TAR under different false accept rates FAR during face recognition, to obtain multiple groups of indicator data corresponding to each attribute category combination;
    obtaining, from the multiple groups of indicator data corresponding to each attribute category combination, target indicator data whose FAR satisfies a first acceptance-rate condition and whose TAR satisfies a second acceptance-rate condition;
    obtaining the similarity threshold in the target indicator data corresponding to each attribute category combination as the similarity threshold corresponding to that attribute category combination, and storing the obtained similarity thresholds corresponding to the multiple attribute category combinations.
  5. The method according to claim 4, characterized in that the first acceptance-rate condition comprises: the FAR is less than a first acceptance rate, or the FAR is the minimum.
  6. The method according to claim 4 or 5, characterized in that the second acceptance-rate condition comprises: the TAR is greater than a second acceptance rate, or the TAR is the maximum.
  7. The method according to any one of claims 4-6, characterized in that the obtaining, from the multiple groups of indicator data corresponding to each attribute category combination, target indicator data whose FAR satisfies a first acceptance-rate condition and whose TAR satisfies a second acceptance-rate condition comprises:
    if, among the multiple groups of indicator data corresponding to a target attribute category combination, there exist at least two groups of indicator data, wherein the target attribute category combination is any attribute category combination, and the FAR of each of the at least two groups satisfies the first acceptance-rate condition and the TAR satisfies the second acceptance-rate condition:
    obtaining the weights corresponding to the FAR and the TAR;
    if the weight corresponding to the FAR is greater than the weight corresponding to the TAR, obtaining, from the at least two groups of indicator data, the group with the smallest FAR as the target indicator data;
    if the weight corresponding to the TAR is greater than the weight corresponding to the FAR, obtaining, from the at least two groups of indicator data, the group with the largest TAR as the target indicator data.
  8. The method according to any one of claims 1-7, characterized in that the obtaining an attribute category corresponding to the first face image as a first category, and an attribute category corresponding to a second face image to be compared as a second category comprises:
    obtaining the attribute category corresponding to the first face image as the first category;
    determining, from multiple face image libraries, the face image library corresponding to the first category, wherein each face image library corresponds to a different attribute category;
    obtaining the second face image to be compared from the face image library, the second category being the same as the first category.
  9. The method according to claim 8, characterized in that before the determining, from multiple face image libraries, the face image library corresponding to the first category, the method further comprises:
    obtaining multiple face images used for comparison with the face image to be recognized;
    obtaining the attribute category corresponding to each of the multiple face images;
    dividing, according to the attribute category corresponding to each face image, the multiple face images into different face image groups serving as the face image libraries corresponding to different attribute categories, wherein each face image group corresponds to a different attribute category.
  10. The method according to any one of claims 1-7, characterized in that the obtaining an attribute category corresponding to the first face image as a first category, and an attribute category corresponding to a second face image to be compared as a second category comprises:
    inputting the first face image and the second face image separately into a pre-trained face attribute classification model, to obtain the attribute category corresponding to the first face image as the first category and the attribute category corresponding to the second face image as the second category.
  11. The method according to claim 10, characterized in that the training process of the face attribute classification model comprises:
    obtaining face sample images labeled with the categories of one or more face attributes;
    inputting the face sample images into an initial model to obtain the result output by the initial model;
    obtaining a loss value according to the output result and the labeled data corresponding to the face sample images;
    adjusting the model parameters of the initial model according to the loss value until the loss value satisfies a loss condition, to obtain the face attribute classification model.
  12. The method according to any one of claims 1-11, characterized in that before the obtaining an attribute category corresponding to the first face image as a first category, and an attribute category corresponding to a second face image to be compared as a second category, the method further comprises:
    obtaining an attribute score corresponding to each of multiple dimensions of face attributes of the first face image, the attribute score characterizing the accuracy with which the face attributes of the multiple dimensions are recognized;
    taking, based on the attribute score corresponding to each dimension of face attribute, a face attribute whose attribute score is greater than a specified value as the designated face attribute.
  13. The method according to any one of claims 1-11, characterized in that before the obtaining an attribute category corresponding to the first face image as a first category, and an attribute category corresponding to a second face image to be compared as a second category, the method further comprises:
    obtaining an attribute score corresponding to each of multiple dimensions of face attributes of the first face image, the attribute score characterizing the accuracy with which the face attributes of the multiple dimensions are recognized;
    taking, based on the attribute score corresponding to each dimension of face attribute, the face attribute with the highest attribute score as the designated face attribute.
  14. The method according to claim 12 or 13, characterized in that the attribute category corresponding to the first face image is obtained through recognition by a pre-trained face attribute classification model, and the obtaining an attribute score corresponding to each of multiple dimensions of face attributes of the first face image comprises:
    obtaining the attribute score corresponding to each dimension of face attribute based on the probability, output by the face attribute classification model, of the attribute category to which each face attribute belongs.
  15. The method according to claim 14, characterized in that the obtaining the attribute score corresponding to each dimension of face attribute based on the probability of the attribute category to which each face attribute belongs comprises:
    quantifying, based on the standard probability corresponding to each face attribute, the probability of the attribute category to which each face attribute belongs, to obtain the attribute score corresponding to each dimension of face attribute.
  16. The method according to claim 15, characterized in that the quantifying, based on the standard probability corresponding to each face attribute, the probability of the attribute category to which each face attribute belongs, to obtain the attribute score corresponding to each dimension of face attribute comprises:
    obtaining the ratio of the probability of the attribute category to which each face attribute belongs to its corresponding standard probability, to obtain the attribute score corresponding to each dimension of face attribute.
  17. The method according to any one of claims 1-16, characterized in that the method further comprises:
    when the target similarity is less than or equal to the similarity threshold, determining that the first face image does not match the second face image.
  18. A face recognition apparatus, characterized in that the apparatus comprises: an image obtaining module, a category obtaining module, a threshold obtaining module, a similarity obtaining module, and a result obtaining module, wherein
    the image obtaining module is configured to obtain a first face image to be recognized;
    the category obtaining module is configured to obtain an attribute category corresponding to the first face image as a first category, and an attribute category corresponding to a second face image to be compared as a second category, the attribute category being the classification to which a designated face attribute belongs;
    the threshold obtaining module is configured to obtain a corresponding similarity threshold based on the first category and the second category;
    the similarity obtaining module is configured to obtain a similarity between face features of the first face image and face features of the second face image as a target similarity;
    the result obtaining module is configured to determine a face recognition result based on the result of comparing the target similarity with the similarity threshold.
  19. An electronic device, characterized by comprising:
    one or more processors;
    a memory;
    one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs being configured to perform the method according to any one of claims 1-17.
  20. A computer-readable storage medium, characterized in that the computer-readable storage medium stores program code, the program code being invocable by a processor to perform the method according to any one of claims 1-17.
PCT/CN2022/071091 2021-02-07 2022-01-10 Face recognition method and apparatus, electronic device, and storage medium WO2022166532A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110179741.0A CN112836661A (zh) 2021-02-07 2021-02-07 Face recognition method and apparatus, electronic device, and storage medium
CN202110179741.0 2021-02-07

Publications (1)

Publication Number Publication Date
WO2022166532A1 true WO2022166532A1 (zh) 2022-08-11




Also Published As

Publication number Publication date
CN112836661A (zh) 2021-05-25


Legal Events

121 EP: the EPO has been informed by WIPO that EP was designated in this application (ref document number 22748811, country of ref document EP, kind code A1)
NENP: non-entry into the national phase (ref country code DE)
122 EP: PCT application non-entry in European phase (ref document number 22748811, country of ref document EP, kind code A1)