CN113822189A - Face recognition method and device, computer readable storage medium and processor - Google Patents

Face recognition method and device, computer readable storage medium and processor

Info

Publication number
CN113822189A
Authority
CN
China
Prior art keywords
face
face image
attribute
images
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111083091.6A
Other languages
Chinese (zh)
Inventor
张逸清
陈高
陈彦宇
马雅奇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Gree Electric Appliances Inc of Zhuhai
Zhuhai Lianyun Technology Co Ltd
Original Assignee
Gree Electric Appliances Inc of Zhuhai
Zhuhai Lianyun Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Gree Electric Appliances Inc of Zhuhai, Zhuhai Lianyun Technology Co Ltd filed Critical Gree Electric Appliances Inc of Zhuhai
Priority to CN202111083091.6A
Publication of CN113822189A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/213 - Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F 18/2135 - Feature extraction based on approximation criteria, e.g. principal component analysis
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/22 - Matching criteria, e.g. proximity measures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 - Machine learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a face recognition method and device, a computer readable storage medium and a processor. The method comprises: encoding attribute feature data of a first face image to obtain a face attribute code of the first face image, the first face image being an image to be subjected to face recognition; determining a predetermined distance between the face attribute code and each predetermined code in a predetermined storage medium; determining a plurality of second face images in the predetermined storage medium based on the predetermined distances, the second face images being images stored in advance in the predetermined storage medium for assisting face matching; determining, among the plurality of second face images, the face image with the greatest similarity to the first face image as a target face image; and obtaining a recognition result of the first face image based on the target face image. The invention solves the technical problems of low face recognition speed and poor stability in the related art.

Description

Face recognition method and device, computer readable storage medium and processor
Technical Field
The invention relates to the field of face recognition, in particular to a face recognition method, a face recognition device, a computer readable storage medium and a processor.
Background
Currently, face recognition technology is widely used in fields such as payment, security and attendance. As application scenarios expand and the volume of face data grows, the demand for fast face recognition over large data sets is becoming ever higher.
Current face recognition technology usually matches an acquired face image against the entire face database: faces whose matching scores fall outside the confidence threshold are eliminated, and the remaining face images are ranked by similarity to give the face recognition result. When the data volume is large, matching takes a long time, which degrades the user experience in practical applications. To avoid this problem, the face database needs to be ordered according to some rule so that a face is matched against the corresponding data in the database earlier, reducing the number of matching operations and increasing recognition speed.
In the prior art, some approaches reorder the database using user features other than the face itself, for example arranging the database in descending order of user activity, or rearranging the database within specific time windows according to the user's habitual face-swiping times. Others reorder the database by simple combinations of attribute features, for example excluding data that falls outside the combined class during matching so as to reduce the amount of data. Although these methods improve classification speed, their overall classification ability is weak and their stability is poor.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiment of the invention provides a face recognition method, a face recognition device, a computer readable storage medium and a processor, which are used for at least solving the technical problems of low face recognition speed and poor stability in the related technology.
According to an aspect of an embodiment of the present invention, there is provided a face recognition method, including: encoding attribute feature data of a first face image to obtain a face attribute code of the first face image, wherein the first face image is an image to be subjected to face recognition; determining a predetermined distance between the face attribute code and each predetermined code in a predetermined storage medium; determining a plurality of second face images in the predetermined storage medium based on the predetermined distance, wherein the plurality of second face images are images stored in the predetermined storage medium in advance and used for assisting face matching; determining, among the plurality of second face images, the face image with the maximum similarity to the first face image as a target face image; and obtaining a recognition result of the first face image based on the target face image.
Optionally, the encoding the attribute feature data of the first face image to obtain the face attribute code of the first face image includes: acquiring the first face image; analyzing the first face image to obtain the attribute feature data; and coding the attribute feature data to obtain the face attribute code of the first face image.
Optionally, analyzing the first face image to obtain the attribute feature data includes: inputting the first face image into a face attribute analysis model, wherein the face attribute analysis model is obtained by using multiple groups of training data in advance through machine learning training, and each group of training data in the multiple groups of training data comprises: the face image comprises a face image and a face image with attribute labels; acquiring an output result of the face attribute analysis model; and obtaining the attribute feature data based on the output result.
Optionally, before inputting the first face image into the face attribute analysis model, the method further comprises: collecting face images; labeling the face images based on an attribute feature table to obtain face images with attribute labels, wherein the attribute feature table is generated based on the attribute feature distribution; and training on a first number of the face images and the first number of face images with attribute labels to obtain the face attribute analysis model.
Optionally, the method further comprises: testing the face attribute analysis model by using the face images other than the first number of face images and the attribute-labeled face images other than the first number of attribute-labeled face images.
Optionally, determining the predetermined distance between the face attribute code and each of the predetermined codes in the predetermined storage medium respectively includes: acquiring each preset code in the preset storage medium; and determining the preset distance between the face attribute code and each preset code respectively.
Optionally, determining a plurality of second facial images in the predetermined storage medium based on the predetermined distance includes: obtaining a distance threshold; comparing the preset distances with the distance threshold respectively to obtain a first comparison result; and filtering out partial face images with the distances larger than the distance threshold value based on the first comparison result to obtain a plurality of second face images.
Optionally, determining, as a target face image, the face image with the maximum similarity to the first face image among the plurality of second face images includes: dividing the feature vectors of the plurality of second face images and of the first face image into segments to obtain a plurality of face feature vector segments of each of the second face images and a plurality of face feature vector segments of the first face image; comparing the face feature vector segments of each second face image with the face feature vector segments of the first face image to obtain a second comparison result; based on the second comparison result, filtering out those second face images whose face feature vector segments have a similarity to the face feature vector segments of the first face image not greater than a preset threshold, so as to obtain a predetermined number of second face images; and determining, among the predetermined number of second face images, the face image with the maximum similarity to the first face image as the target face image.
According to another aspect of the embodiments of the present invention, there is also provided a face recognition apparatus, including: the encoding unit is used for encoding attribute feature data of a first face image to obtain a face attribute code of the first face image, wherein the first face image is an image to be subjected to face recognition; a first determining unit, configured to determine a predetermined distance between the face attribute code and each of predetermined codes in a predetermined storage medium; a second determining unit configured to determine a plurality of second face images in the predetermined storage medium based on the predetermined distance, wherein the plurality of second face images are images stored in the predetermined storage medium in advance and used for assisting face matching; a third determining unit, configured to determine, as a target face image, a face image with a maximum similarity to the first face image in the plurality of second face images; and the identification unit is used for obtaining an identification result of the first human face image based on the target human face image.
Optionally, the encoding unit further includes an acquisition module, an analysis module and an encoding module, wherein the acquisition module is configured to acquire the first face image; the analysis module is used for analyzing the first face image to obtain the attribute feature data; the coding module is used for coding the attribute feature data to obtain a face attribute code of the first face image.
Optionally, the analysis module further includes an input sub-module, an acquisition sub-module, and a generation sub-module, where the input sub-module is configured to input the first face image into a face attribute analysis model, where the face attribute analysis model is obtained by machine learning training using multiple sets of training data in advance, and each set of training data in the multiple sets of training data includes: the face image comprises a face image and a face image with attribute labels; the acquisition submodule is used for acquiring an output result of the face attribute analysis model; the generation submodule is used for obtaining the attribute feature data based on the output result.
Optionally, the apparatus further comprises an acquisition unit, an annotation unit and a training unit, wherein the acquisition unit is configured to acquire the face image before inputting the first face image into a face attribute analysis model; the labeling unit is used for labeling the face image based on an attribute feature table to obtain the face image with the attribute label, wherein the attribute feature table is generated based on an attribute feature distribution state; the training unit is used for training a first number of the face images and the first number of the face images with the attribute labels to obtain the face attribute analysis model.
Optionally, the apparatus further includes a testing unit, configured to test the face attribute analysis model by using the face images other than the first number of face images and the labeled face images other than the first number of labeled face images.
Optionally, the first determining unit further includes a first obtaining module and a first determining module, where the first obtaining module is configured to obtain each predetermined code in the predetermined storage medium; the first determining module is used for determining the preset distance between the face attribute code and each preset code.
Optionally, the second determining unit further includes a second obtaining module, a first comparing module and a second filtering module, where the second obtaining module is configured to obtain the distance threshold; the first comparison module is used for comparing the preset distance with the distance threshold respectively to obtain a first comparison result; and the second filtering module is used for filtering out partial face images with the distances larger than the distance threshold value based on the first comparison result to obtain a plurality of second face images.
Optionally, the third determining unit further includes a segmentation module, a second comparison module, a second filtering module, and a second determining module, where the segmentation module is configured to divide the feature vectors of the plurality of second face images and of the first face image into segments to obtain a plurality of face feature vector segments of each of the second face images and a plurality of face feature vector segments of the first face image; the second comparison module is configured to compare the face feature vector segments of each second face image with the face feature vector segments of the first face image to obtain a second comparison result; the second filtering module is configured to filter out, based on the second comparison result, those second face images whose face feature vector segments have a similarity to the face feature vector segments of the first face image not greater than a preset threshold, so as to obtain a predetermined number of second face images; the second determining module is configured to determine, among the predetermined number of second face images, the face image with the maximum similarity to the first face image as the target face image.
According to still another aspect of the embodiments of the present invention, there is also provided a computer-readable storage medium including a stored program, wherein the program executes any one of the face recognition methods.
According to still another aspect of the embodiments of the present invention, there is further provided a processor, where the processor is configured to execute a program, where the program executes any one of the face recognition methods when running.
In the face recognition method provided in the embodiment of the present invention, attribute feature data of a first face image is first encoded to obtain a face attribute code of the first face image; a predetermined distance is then determined from the face attribute code and each predetermined code in a predetermined storage medium; a plurality of second face images in the predetermined storage medium are then determined according to the predetermined distance; and finally the face image with the maximum similarity to the first face image among the plurality of second face images is determined as a target face image, from which the recognition result of the first face image is obtained. In this scheme, because the predetermined distance is determined from the face attribute code and the predetermined codes in the predetermined storage medium, and the second face images are selected according to that distance, similar data is searched preferentially during face recognition, which reduces the number of matching operations and increases the face recognition speed; and because the face image with the maximum similarity to the first face image among the second face images is determined as the target face image, the face recognition is highly stable. The technical problems of low face recognition speed and poor stability in the related art are thereby solved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a flow chart of a face recognition method according to an embodiment of the invention;
FIG. 2 is a flow diagram of an alternative face attribute analysis in accordance with embodiments of the present invention;
FIG. 3 is a flow diagram of an alternative method of training a face attribute analysis model according to an embodiment of the invention;
FIG. 4 is a flow diagram of an alternative face recognition according to an embodiment of the present invention;
fig. 5 is a schematic diagram of a face recognition apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
To address these technical problems, the inventors screen face attribute features according to the actual user population. Statistics on the actual user population are collected and analysed to study the attribute distribution of that population, and attributes with poor classification ability for the population are eliminated using principal component analysis. The remaining attributes are summarised into a face attribute table used to label face attribute features. Training data is labelled according to the face attribute table of the actual user population, and a neural network model for face recognition and attribute analysis is trained. Face attribute data is obtained through the trained model, and a corresponding string of binary feature codes is generated to represent the attribute feature table of the face. Using the face attribute codes, the similarity between feature codes is analysed before face recognition; the feature codes are sorted in descending order of similarity, the face data in the database is matched in that order, and matching stops once a match succeeds. This ensures that a face is preferentially matched against similar matching data, reducing the number of face matching operations and thus increasing the matching speed. Because face attribute coding is used rather than a simple combination of attribute features, only the order of face matching is changed and no matching data is removed.
With this method, when matching faces against a large amount of data, the matching data is ordered using the feature codes of the face attributes, so that a face is preferentially matched against similar matching data, reducing the number of face matching operations and increasing the matching speed. Because face attribute coding is used rather than a simple combination of attribute features, only the matching order is changed and no matching data is removed. The method therefore has good stability: it improves classification speed while guaranteeing matching stability, and is suitable for most use cases.
The following is a detailed description of specific embodiments.
Example 1
In accordance with an embodiment of the present invention, an embodiment of a face recognition method is provided. It should be noted that the steps illustrated in the flowcharts of the drawings may be performed in a computer system, such as one executing a set of computer-executable instructions, and that although a logical order is shown in the flowcharts, in some cases the steps illustrated or described may be performed in an order different from the one here.
Fig. 1 is a flowchart of a face recognition method according to an embodiment of the present invention, as shown in fig. 1, the method includes the following steps:
Step S102, encoding attribute feature data of a first face image to obtain a face attribute code of the first face image, wherein the first face image is an image to be subjected to face recognition.
Step S104, determining a predetermined distance between the face attribute code and each predetermined code in a predetermined storage medium.
Optionally, the predetermined storage medium is a storage medium that stores predetermined codes; it may specifically be a database, but it is not limited to a database and may also be another storage medium. The predetermined codes are a plurality of face attribute codes pre-stored in the predetermined storage medium, and the predetermined distance includes, but is not limited to: cosine distance, Jaccard distance, Levenshtein distance, Hamming distance, etc.
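For illustration only (not part of the patent text), the following Python sketch shows how one of the listed options, the Hamming distance, could be computed between two binary face attribute codes; the 12-bit code strings are made-up examples.

def hamming_distance(code_a: str, code_b: str) -> int:
    """Count the attribute bits on which two face attribute codes differ."""
    if len(code_a) != len(code_b):
        raise ValueError("attribute codes must have the same length")
    return sum(a != b for a, b in zip(code_a, code_b))

# Example: a query code and one predetermined code from the storage medium.
query_code = "110100110010"
stored_code = "110100110110"
print(hamming_distance(query_code, stored_code))  # -> 1, i.e. one attribute differs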
Step S106, determining a plurality of second face images in the predetermined storage medium based on the predetermined distance, wherein the plurality of second face images are images stored in the predetermined storage medium in advance and used for assisting face matching.
Step S108, determining the face image with the maximum similarity to the first face image among the plurality of second face images as a target face image.
Step S110, obtaining a recognition result of the first face image based on the target face image.
As can be seen from the above, in the embodiment of the present invention, attribute feature data of a first face image is first encoded to obtain a face attribute code of the first face image; a predetermined distance is then determined from the face attribute code and each predetermined code in a predetermined storage medium; a plurality of second face images in the predetermined storage medium are then determined according to the predetermined distance; and finally the face image with the maximum similarity to the first face image among the plurality of second face images is determined as a target face image, from which the recognition result of the first face image is obtained. In this scheme, because the predetermined distance is determined from the face attribute code and the predetermined codes in the predetermined storage medium, and the second face images are selected according to that distance, similar data is searched preferentially during face recognition, which reduces the number of matching operations and increases the face recognition speed; and because the face image with the maximum similarity to the first face image among the second face images is determined as the target face image, the face recognition is highly stable. The technical problems of low face recognition speed and poor stability in the related art are thereby solved.
As an alternative embodiment, in step S102, the encoding of the attribute feature data of the first face image to obtain the face attribute code of the first face image includes: collecting the first face image; analyzing the first face image to obtain the attribute feature data; and coding the attribute feature data to obtain the face attribute code of the first face image. In the scheme, the collected first face image is analyzed to obtain attribute feature data, and the attribute feature data is subsequently encoded, so that the face attribute encoding can be obtained more efficiently and accurately.
In the above optional embodiment, 12 commonly used and highly robust face attributes are generally used as the attribute feature data, namely whether the face is: male, round-faced, small-eyed, young, with a high hairline, bearded, thick-lipped, big-nosed, with high cheekbones, with under-eye bags, with a double chin, and with thick eyebrows.
Specifically, as shown in fig. 2, a first face image is acquired and face detection is performed. If the first face image contains a face, the face position coordinates are obtained and attribute analysis is performed on the face within that coordinate range to obtain the attribute feature data of the face. The attribute feature data is then encoded to obtain the face attribute code. Since each attribute result is a binary classification, a binary string of length at most 12 is obtained, in which each bit corresponds in order to one attribute of the face. For example, with the 12 attribute features above, a user who is male, round-faced, big-eyed and young, with a low hairline, no beard, thick lips, a big nose, low cheekbones, no under-eye bags, a double chin and sparse eyebrows is encoded as: 110100110010. Finally, the obtained face attribute code is stored in the database.
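For illustration only (not part of the patent text), the following Python sketch reproduces this encoding step; the attribute names, their order and the input dictionary are assumptions chosen to match the 12 attributes and the example code above.

ATTRIBUTES = [
    "male", "round_face", "small_eyes", "young", "high_hairline", "beard",
    "thick_lips", "big_nose", "high_cheekbones", "eye_bags", "double_chin",
    "thick_eyebrows",
]

def encode_attributes(attribute_results: dict) -> str:
    """Map binary attribute predictions to a face attribute code, one bit per attribute."""
    return "".join("1" if attribute_results.get(name, False) else "0" for name in ATTRIBUTES)

# The example user from the description: male, round face, big eyes, young,
# low hairline, no beard, thick lips, big nose, low cheekbones, no eye bags,
# double chin, sparse eyebrows.
sample = {"male": True, "round_face": True, "young": True,
          "thick_lips": True, "big_nose": True, "double_chin": True}
print(encode_attributes(sample))  # -> 110100110010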
In order to obtain the attribute feature data more efficiently and accurately, as an optional embodiment, the analyzing the first face image to obtain the attribute feature data includes: inputting the first face image into a face attribute analysis model, wherein the face attribute analysis model is obtained by using multiple sets of training data in advance through machine learning training, and each set of training data in the multiple sets of training data comprises: the face image comprises a face image and a face image with attribute labels; obtaining an output result of the human face attribute analysis model; and obtaining the attribute feature data based on the output result.
In this embodiment, the face attribute analysis model is obtained in advance by machine learning on multiple sets of training data; training continues until the resulting model can match faces and recognise face attribute feature data with satisfactory accuracy. In practice, the face attribute analysis model may be a multitask deep convolutional neural network classifier capable of both face recognition and face attribute analysis, but it is not limited to such a classifier and may be any other classifier capable of performing face recognition and face attribute analysis.
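For illustration only, the following PyTorch sketch (not part of the patent text) shows the general shape of such a multitask model: a shared convolutional backbone feeding a face-embedding head for recognition and a 12-way attribute head for attribute analysis. The layer sizes, the input resolution and the 512-dimensional embedding are assumptions.

import torch
import torch.nn as nn

class FaceAttributeNet(nn.Module):
    """Toy multitask network: shared backbone, embedding head and attribute head."""

    def __init__(self, embedding_dim: int = 512, num_attributes: int = 12):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.embedding_head = nn.Linear(64, embedding_dim)   # features for face matching
        self.attribute_head = nn.Linear(64, num_attributes)  # binary attribute logits

    def forward(self, x: torch.Tensor):
        shared = self.backbone(x)
        return self.embedding_head(shared), torch.sigmoid(self.attribute_head(shared))

model = FaceAttributeNet()
embedding, attribute_probs = model(torch.randn(1, 3, 112, 112))
print(embedding.shape, attribute_probs.shape)  # torch.Size([1, 512]) torch.Size([1, 12])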
In an optional embodiment, before the first face image is input into the face attribute analysis model, the face recognition method further includes: collecting face images; labeling the face images based on an attribute feature table to obtain attribute-labeled face images, where the attribute feature table is generated from the attribute feature distribution; and training on a first number of the face images together with the corresponding attribute-labeled face images to obtain the face attribute analysis model. In this embodiment, the collected face images are labeled according to the attribute feature table and a first number of face images, together with their attribute labels, are used for training, so that the training output can be compared against the attribute labels; the face attribute analysis model can then be adjusted according to the comparison result, and a more accurate face attribute analysis model is obtained more quickly.
In this embodiment, the face attribute feature distribution of the actual user population is counted and analysed to obtain the attribute feature data of that population. Principal component analysis is then used to reduce the dimensionality of the population's face attribute features and to eliminate attribute features with poor classification performance, thereby reducing the amount of attribute matching data and improving the matching speed. For example, in a home for the elderly, where the users are essentially all elderly, the age attribute is eliminated; in a girls' school, where the users are essentially all female, the gender attribute is eliminated.
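For illustration only, the following scikit-learn sketch (not part of the patent text) shows how principal component analysis could be used to examine which attribute directions carry little variance for a given population; the random attribute matrix and any thresholding choice are assumptions.

import numpy as np
from sklearn.decomposition import PCA

# Rows are sampled users from the target population, columns are the 12
# candidate binary attributes (1 = attribute present, 0 = absent).
rng = np.random.default_rng(0)
population_attributes = rng.integers(0, 2, size=(1000, 12)).astype(float)

pca = PCA()
pca.fit(population_attributes)

# Components with a negligible explained-variance ratio point to attribute
# directions that barely vary in this population (e.g. "young" in a home for
# the elderly) and are candidates for removal from the attribute feature table.
print(pca.explained_variance_ratio_.round(3))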
Specifically, in practice, as shown in fig. 3, an attribute feature table is first established according to the feature distribution and used for labeling the face attribute features. After the face training data and matching data are collected (a face attribute data set such as CelebA can be used), all face images are attribute-labeled according to the attribute feature table. A face attribute analysis model is then built and trained on a first number of face images together with their attribute labels.
In order to verify the accuracy of the face attribute analysis model more quickly, as an optional embodiment, the face recognition method further includes: testing the face attribute analysis model using the face images, and the corresponding labeled face images, other than the first number used for training.
Specifically, in practice, the first number of face images may be referred to as training samples, and the remaining face images as test samples. In the above embodiment, training and test samples are allocated at a ratio of 4:1 to construct a face data set suited to the task. Of course, the split can be adjusted to the actual amount of data: when the data volume is small, a 7:3 split of training and test samples, or a 6:2:2 split of training, validation and test samples, can be used; when the data volume is very large, a 98:1:1 split of training, validation and test samples can be used.
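For illustration only, the following scikit-learn sketch (not part of the patent text) shows the 4:1 split described above; the placeholder file names and labels are assumptions.

from sklearn.model_selection import train_test_split

# Placeholder data: in practice these would be the annotated face images and
# their attribute labels from a CelebA-style data set.
images = [f"face_{i:04d}.jpg" for i in range(100)]
labels = [i % 2 for i in range(100)]

# 4:1 ratio, i.e. 80% training samples and 20% test samples.
train_images, test_images, train_labels, test_labels = train_test_split(
    images, labels, test_size=0.2, random_state=0)
print(len(train_images), len(test_images))  # 80 20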
In an alternative embodiment, the step S104 of determining the predetermined distance between the face attribute code and each of the predetermined codes in the predetermined storage medium includes: acquiring each preset code in the preset storage medium; and determining the preset distance between the face attribute code and each preset code. In the embodiment, each preset code in the preset storage medium is firstly acquired, then the preset distance is determined according to the face attribute code and each preset code, so that the preset distance can be obtained efficiently and accurately, and then a plurality of second face images in the preset storage medium are determined according to the obtained preset distance, so that the plurality of second face images can be further determined efficiently and accurately.
Specifically, as shown in fig. 4, when face recognition is actually performed, face detection is carried out first, and if a face is determined to exist in the image, the face position coordinates are obtained. Attribute analysis is performed on the face within that coordinate range using the face attribute analysis model to obtain the attribute feature data, which is then encoded to obtain the face attribute code. The predetermined distance is then determined from the face attribute code and each predetermined code in the predetermined storage medium. The smaller the predetermined distance, the more similar the face attribute code and the predetermined code, i.e. the closer the attributes of the two faces. Finally, the first face image is matched against the second face images.
In order to obtain a plurality of second facial images more efficiently, as an alternative embodiment, in step S106, the determining a plurality of second facial images in the predetermined storage medium based on the predetermined distance includes: obtaining a distance threshold; comparing the preset distances with the distance threshold respectively to obtain a first comparison result; and filtering out part of the face images with the distances larger than the distance threshold value based on the first comparison result to obtain a plurality of second face images.
Specifically, in practice, the predetermined distance may be a cosine distance and the predetermined storage medium may be a database. When determining the plurality of second face images, the face attribute codes in the database are first sorted by cosine distance from small to large, and a threshold is set; face data whose attribute codes have a cosine distance larger than the threshold are excluded and take no part in subsequent face recognition, while the remaining second face images are feature-matched against the detected first face image in the sorted order.
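For illustration only, the following Python sketch (not part of the patent text) shows this candidate-selection step on binary attribute codes; the example database, codes and threshold value are assumptions.

import numpy as np

def cosine_distance(code_a: str, code_b: str) -> float:
    """Cosine distance between two binary face attribute codes."""
    a = np.array([int(c) for c in code_a], dtype=float)
    b = np.array([int(c) for c in code_b], dtype=float)
    return 1.0 - float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

query = "110100110010"
database = {"user_a": "110100110110", "user_b": "001011001101", "user_c": "110100110010"}
threshold = 0.5

# Sort stored codes by cosine distance to the query and drop those beyond the
# threshold; the survivors are matched against the query face in this order.
candidates = sorted((cosine_distance(query, code), name) for name, code in database.items())
candidates = [(round(d, 3), name) for d, name in candidates if d <= threshold]
print(candidates)  # nearest attribute codes come first: user_c, then user_a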
In an alternative embodiment, the step S108 of determining, as the target face image, the face image with the highest similarity to the first face image among the plurality of second face images includes: dividing the feature vectors of the plurality of second face images and of the first face image into segments to obtain a plurality of face feature vector segments of each second face image and a plurality of face feature vector segments of the first face image; comparing the face feature vector segments of each second face image with those of the first face image to obtain a second comparison result; based on the second comparison result, filtering out those second face images whose face feature vector segments have a similarity to the corresponding segments of the first face image not greater than a preset threshold, so as to obtain a predetermined number of second face images; and determining, among that predetermined number of face images, the face image with the maximum similarity to the first face image as the target face image. In this scheme, determining the most similar of the predetermined number of face images as the target face image allows the target face image, and hence the recognition result, to be determined more efficiently.
In this embodiment, during matching, the 512-dimensional feature vectors of the detected first face image and of all screened second face images are each divided into 8 segments of equal length. The similarities of the 8 segments are compared in turn from front to back, and at each segment the second face images with large differences are eliminated according to a preset threshold (usually most of the second face images are eliminated after the first two segments). After all 8 segments have been compared, the face image whose similarity is the highest and exceeds the preset threshold is taken as the analysis result and the corresponding face data is returned; if no matching data remains, the recognition fails.
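For illustration only, the following Python sketch (not part of the patent text) shows this segmented comparison with early pruning; the random feature vectors and both threshold values are assumptions.

import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def match(query: np.ndarray, candidates: dict, seg_threshold=0.3, final_threshold=0.6):
    """Prune candidates segment by segment, then return the best remaining match."""
    query_segments = np.split(query, 8)            # 8 segments of length 64
    remaining = dict(candidates)
    for i, q_seg in enumerate(query_segments):
        remaining = {name: vec for name, vec in remaining.items()
                     if cosine_sim(q_seg, np.split(vec, 8)[i]) > seg_threshold}
        if not remaining:
            return None                            # recognition failed
    best = max(remaining, key=lambda name: cosine_sim(query, remaining[name]))
    return best if cosine_sim(query, remaining[best]) > final_threshold else None

rng = np.random.default_rng(0)
gallery = {f"user_{i}": rng.normal(size=512) for i in range(5)}
query_vec = gallery["user_3"] + 0.05 * rng.normal(size=512)  # noisy copy of user_3
print(match(query_vec, gallery))  # -> user_3 under these assumptions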
As can be seen from the above, the face recognition method provided by the embodiment of the present invention effectively solves the following technical problems: 1) slow face matching under large data volumes, since the larger the data volume, the more data must be traversed during matching, leading to long matching times, low speed and a poor user experience; 2) poor face matching stability in some existing methods, which rearrange the database using features other than the face itself and therefore lack stability and rigour, making them unsuitable for most use cases. At the same time, the following beneficial effects are achieved: 1) feature screening and dimensionality reduction via principal component analysis are used, and classification relies on commonly used, highly robust face attributes, which improves the stability of face matching; 2) attribute coding optimizes the face matching order, reducing the number of matching operations and increasing the matching speed.
Example 2
According to another aspect of the embodiments of the present invention, a face recognition apparatus is also provided. Fig. 5 is a schematic diagram of the face recognition apparatus according to the embodiments of the present invention. As shown in fig. 5, the face recognition apparatus includes: an encoding unit 50, a first determining unit 52, a second determining unit 54, a third determining unit 56, and a recognition unit 58. The face recognition apparatus is explained below.
An encoding unit 50, configured to encode attribute feature data of a first face image to obtain a face attribute code of the first face image, where the first face image is an image to be subjected to face recognition;
a first determining unit 52, configured to determine a predetermined distance between the face attribute code and each of the predetermined codes in a predetermined storage medium;
a second determining unit 54 configured to determine, based on the predetermined distance, a plurality of second face images stored in the predetermined storage medium in advance and used for assisting face matching;
a third determining unit 56, configured to determine, as the target face image, a face image with the largest similarity to the first face image in the plurality of second face images;
a recognition unit 58, configured to obtain a recognition result of the first face image based on the target face image.
It should be noted here that the encoding unit 50, the first determining unit 52, the second determining unit 54, the third determining unit 56, and the recognition unit 58 correspond to steps S102 to S110 in Embodiment 1, and these units implement the same examples and application scenarios as the corresponding steps, but are not limited to what is disclosed in Embodiment 1. It should also be noted that the above modules, as parts of an apparatus, may be implemented in a computer system, such as one executing a set of computer-executable instructions.
As can be seen from the above, in the embodiment of the present invention, the encoding unit 50 encodes attribute feature data of a first face image to obtain a face attribute code of the first face image, the first face image being an image to be subjected to face recognition; the first determining unit 52 then determines a predetermined distance between the face attribute code and each predetermined code in a predetermined storage medium; the second determining unit 54 then determines, according to the predetermined distance, a plurality of second face images in the predetermined storage medium, which are images stored in advance for assisting face matching; the third determining unit 56 then determines, among the plurality of second face images, the face image with the maximum similarity to the first face image as a target face image; and finally the recognition unit 58 obtains the recognition result of the first face image based on the target face image. In this scheme, because the predetermined distance is determined from the face attribute code and the predetermined codes in the predetermined storage medium, and the second face images are selected according to that distance, similar data is searched preferentially during face recognition, which reduces the number of matching operations and increases the face recognition speed; and because the face image with the maximum similarity to the first face image among the second face images is determined as the target face image, the face recognition is highly stable. The technical problems of low face recognition speed and poor stability in the related art are thereby solved.
Optionally, the encoding unit further includes an acquisition module, an analysis module, and an encoding module, where the acquisition module is configured to acquire the first face image; the analysis module is used for analyzing the first face image to obtain the attribute feature data; the coding module is used for coding the attribute feature data to obtain a face attribute code of the first face image.
Optionally, the analysis module further includes an input sub-module, an obtaining sub-module, and a generating sub-module, where the input sub-module is configured to input the first face image into a face attribute analysis model, where the face attribute analysis model is obtained by machine learning training using multiple sets of training data in advance, and each set of training data in the multiple sets of training data includes: the face image comprises a face image and a face image with attribute labels; the acquisition submodule is used for acquiring an output result of the face attribute analysis model; the generation submodule is used for obtaining the attribute feature data based on the output result.
Optionally, the apparatus further includes an acquisition unit, a labeling unit, and a training unit, where the acquisition unit is configured to acquire the face image before inputting the first face image into a face attribute analysis model; the labeling unit is used for labeling the face image based on an attribute feature table to obtain the face image with the attribute label, wherein the attribute feature table is generated based on an attribute feature distribution state; the training unit is configured to train a first number of the face images and a first number of the face images with attribute labels to obtain the face attribute analysis model.
Optionally, the apparatus further includes a testing unit, configured to test the face attribute analysis model by using the face images other than the first number and the face images with labels other than the first number.
Optionally, the first determining unit further includes a first obtaining module and a first determining module, where the first obtaining module is configured to obtain each predetermined code in the predetermined storage medium; the first determining module is configured to determine a predetermined distance between the face attribute code and each of the predetermined codes.
Optionally, the second determining unit further includes a second obtaining module, a first comparing module and a second filtering module, where the second obtaining module is configured to obtain the distance threshold; the first comparison module is used for comparing the preset distances with the distance threshold respectively to obtain a first comparison result; the second filtering module is used for filtering out partial face images with distances larger than the distance threshold value based on the first comparison result to obtain a plurality of second face images.
Optionally, the third determining unit further includes a segmentation module, a second comparison module, a second filtering module and a second determining module, where the segmentation module is configured to divide the feature vectors of the plurality of second face images and of the first face image into segments to obtain a plurality of face feature vector segments of each second face image and a plurality of face feature vector segments of the first face image; the second comparison module is configured to compare the face feature vector segments of each second face image with those of the first face image to obtain a second comparison result; the second filtering module is configured to filter out, based on the second comparison result, those second face images whose face feature vector segments have a similarity to the corresponding segments of the first face image not greater than a preset threshold, so as to obtain a predetermined number of second face images; and the second determining module is configured to determine, among the predetermined number of face images, the face image with the largest similarity to the first face image as the target face image.
Example 3
According to another aspect of the embodiments of the present invention, there is also provided a computer-readable storage medium including a stored computer program, wherein when the computer program is executed by a processor, the apparatus where the computer storage medium is located is controlled to execute the face recognition method of any one of the above.
Example 4
According to another aspect of the embodiments of the present invention, there is also provided a processor, configured to execute a computer program, where the computer program executes to perform the face recognition method of any one of the above.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
The foregoing is only a preferred embodiment of the present invention. It should be noted that, for those skilled in the art, various modifications and improvements can be made without departing from the principle of the present invention, and such modifications and improvements should also be regarded as falling within the protection scope of the present invention.

Claims (11)

1. A face recognition method, comprising:
encoding attribute feature data of a first face image to obtain a face attribute code of the first face image, wherein the first face image is an image to be subjected to face recognition;
determining a preset distance between the face attribute code and each preset code in a preset storage medium;
determining a plurality of second face images in the predetermined storage medium based on the predetermined distance, wherein the plurality of second face images are images which are stored in the predetermined storage medium in advance and are used for assisting face matching;
determining a face image with the maximum similarity with the first face image in the plurality of second face images as a target face image;
and obtaining the recognition result of the first face image based on the target face image.
2. The method according to claim 1, wherein encoding attribute feature data of a first face image to obtain a face attribute code of the first face image comprises:
acquiring the first face image;
analyzing the first face image to obtain the attribute feature data;
and coding the attribute feature data to obtain the face attribute code of the first face image.
3. The method of claim 2, wherein analyzing the first face image to obtain the attribute feature data comprises:
inputting the first face image into a face attribute analysis model, wherein the face attribute analysis model is obtained by using multiple groups of training data in advance through machine learning training, and each group of training data in the multiple groups of training data comprises: the face image comprises a face image and a face image with attribute labels;
acquiring an output result of the face attribute analysis model;
and obtaining the attribute feature data based on the output result.
4. The method of claim 3, wherein prior to inputting the first face image into a face attribute analysis model, the method further comprises:
collecting the face image;
labeling the face image based on an attribute feature table to obtain the face image with the attribute label, wherein the attribute feature table is generated based on an attribute feature distribution state;
and training a first number of the face images and the first number of the face images with attribute labels to obtain the face attribute analysis model.
5. The method of claim 4, further comprising:
and testing the human face attribute analysis model by using the human face images except the first number of human face images and the human face images except the first number of human face images with the labels.
6. The method of claim 1, wherein determining the predetermined distance between the face attribute code and each of the predetermined codes in the predetermined storage medium comprises:
acquiring each preset code in the preset storage medium;
and determining the preset distance between the face attribute code and each preset code respectively.
7. The method of claim 1, wherein determining a plurality of second facial images in the predetermined storage medium based on the predetermined distance comprises:
obtaining a distance threshold;
comparing the preset distances with the distance threshold respectively to obtain a first comparison result;
and filtering out partial face images with the distances larger than the distance threshold value based on the first comparison result to obtain a plurality of second face images.
8. The method according to claim 1, wherein determining the face image with the largest similarity with the first face image in the plurality of second face images as a target face image comprises:
equally dividing the feature vectors of the plurality of second face images and the feature vector of the first face image to obtain a plurality of face feature vector segments of each of the plurality of second face images and a plurality of face feature vector segments of the first face image;
comparing the plurality of face feature vector segments of each face image with the plurality of face feature vector segments of the first face image to obtain a second comparison result;
filtering out partial face images of which the similarity between the face feature vector segments in the plurality of second face images and the face feature vector segments of the first face images is not more than a preset threshold value based on the second comparison result to obtain a preset number of second face images;
and determining the face image with the maximum similarity with the first face image in the preset number of face images as the target face image.
9. A face recognition apparatus, comprising:
the encoding unit is used for encoding attribute feature data of a first face image to obtain a face attribute code of the first face image, wherein the first face image is an image to be subjected to face recognition;
a first determining unit, configured to determine a predetermined distance between the face attribute code and each of predetermined codes in a predetermined storage medium;
a second determining unit configured to determine a plurality of second face images in the predetermined storage medium based on the predetermined distance, wherein the plurality of second face images are images stored in the predetermined storage medium in advance and used for assisting face matching;
a third determining unit, configured to determine, as a target face image, a face image with a maximum similarity to the first face image in the plurality of second face images;
and the identification unit is used for obtaining an identification result of the first human face image based on the target human face image.
10. A computer-readable storage medium characterized by comprising a stored program, wherein the program executes the face recognition method according to any one of claims 1 to 8.
11. A processor, characterized in that the processor is configured to run a program, wherein the program is configured to execute the face recognition method according to any one of claims 1 to 8 when running.
CN202111083091.6A (priority date 2021-09-15, filing date 2021-09-15): Face recognition method and device, computer readable storage medium and processor. Status: Pending. Publication: CN113822189A.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111083091.6A CN113822189A (en) 2021-09-15 2021-09-15 Face recognition method and device, computer readable storage medium and processor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111083091.6A CN113822189A (en) 2021-09-15 2021-09-15 Face recognition method and device, computer readable storage medium and processor

Publications (1)

Publication Number Publication Date
CN113822189A 2021-12-21

Family

ID=78914577

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111083091.6A Pending CN113822189A (en) 2021-09-15 2021-09-15 Face recognition method and device, computer readable storage medium and processor

Country Status (1)

Country Link
CN (1) CN113822189A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105404877A (en) * 2015-12-08 2016-03-16 商汤集团有限公司 Human face attribute prediction method and apparatus based on deep study and multi-task study
CN108009465A (en) * 2016-10-31 2018-05-08 杭州海康威视数字技术股份有限公司 A kind of face identification method and device
CN109815775A (en) * 2017-11-22 2019-05-28 深圳市祈飞科技有限公司 A kind of face identification method and system based on face character
CN110443120A (en) * 2019-06-25 2019-11-12 深圳英飞拓科技股份有限公司 A kind of face identification method and equipment
CN111738194A (en) * 2020-06-29 2020-10-02 深圳力维智联技术有限公司 Evaluation method and device for similarity of face images
CN112084904A (en) * 2020-08-26 2020-12-15 武汉普利商用机器有限公司 Face searching method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination