CN110069989B - Face image processing method and device and computer readable storage medium - Google Patents


Info

Publication number: CN110069989B
Application number: CN201910196223.2A
Authority: CN (China)
Prior art keywords: face, image, images, face image, class
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN110069989A
Inventors: 宗博文, 解宇涵, 温舒, 张俊
Original and current assignee: Shanghai Ppdai Finance Information Service Co ltd
Application filed by Shanghai Ppdai Finance Information Service Co ltd; publication of CN110069989A; application granted; publication of CN110069989B.

Classifications

    • G06F18/22 — Physics; Computing; Electric digital data processing; Pattern recognition; Analysing; Matching criteria, e.g. proximity measures
    • G06F18/23 — Physics; Computing; Electric digital data processing; Pattern recognition; Analysing; Clustering techniques
    • G06V40/16 — Physics; Computing; Image or video recognition or understanding; Recognition of biometric, human-related or animal-related patterns; Human or animal bodies; Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 — Physics; Computing; Image or video recognition or understanding; Recognition of biometric, human-related or animal-related patterns; Human or animal bodies; Human faces; Classification, e.g. identification

Abstract

A face image processing method and device and a computer-readable storage medium are provided. The face image processing method comprises the following steps: acquiring an image to be processed, wherein the image to be processed comprises a face image; performing face vectorization on the face image to obtain a face feature vector corresponding to the face image; clustering the face images based on their quality scores in combination with the corresponding face feature vectors; determining the category of each face image and the class center point corresponding to each category; and performing face recognition on the face images corresponding to the class center points of the categories. By adopting this scheme, face image recognition efficiency can be improved.

Description

Face image processing method and device and computer readable storage medium
Technical Field
Embodiments of the invention relate to the technical field of image processing, and in particular to a face image processing method and device and a computer-readable storage medium.
Background
Face recognition is a biometric technique for identifying a person's identity based on facial feature information. With the improvement of the communication and computing capabilities of Internet-of-Things terminals, face recognition applications are no longer limited to processing a single picture. In some application scenarios, face recognition needs to be performed on a video stream to obtain the face images it contains.
The current scheme for performing face recognition on a video stream is to regard the video as a sequence of time-ordered pictures, perform face recognition on each picture, compare all obtained faces with a face library, and finally aggregate the recognition results.
However, this image processing method is inefficient, since face recognition must be performed on every picture in the video stream.
Disclosure of Invention
The embodiment of the invention solves the technical problem of low face image recognition efficiency.
In order to solve the above technical problem, an embodiment of the present invention provides a face image processing method, including: acquiring an image to be processed, wherein the image to be processed comprises a face image; performing face vectorization on the face image to obtain a face feature vector corresponding to the face image; clustering the face images based on the quality scores of the face images in combination with the face feature vectors corresponding to the face images; determining the category of each face image and the class center point corresponding to each category; and performing face recognition on the face images corresponding to the class center points of the categories.
Optionally, the performing face vectorization on the face image to obtain a face feature vector corresponding to the face image includes: carrying out image standardization on the face image to obtain a standardized face image; and performing face vectorization processing on the standardized face image by adopting a face vectorization algorithm, and acquiring a first face feature vector with a preset dimension as a face feature vector corresponding to the face image.
Optionally, after the image normalization is performed on the face image to obtain a normalized face image, the method further includes: mirroring the standardized face image to obtain a mirrored face image; performing face vectorization processing on the mirrored face image by adopting a face vectorization algorithm to obtain a second face feature vector with a preset dimension; and calculating the mean value of the first face feature vector and the second face feature vector, and taking the calculated mean value as the face feature vector corresponding to the face image.
Optionally, the face images are clustered by using either of the following algorithms: a non-maximum suppression algorithm, or a community discovery algorithm. Optionally, the determining a class center point corresponding to each category includes: when the face images are clustered by the non-maximum suppression algorithm, taking the face image with the highest quality score in each category as the class center point of the corresponding category; and when the face images are clustered by the community discovery algorithm, taking the face image with the most edges in each category as the class center point of the corresponding category.
Optionally, the clustering the face images by using a non-maximum suppression algorithm includes: arranging the face images in descending order of quality score, recorded as a face quality score queue; calculating the similarity of each face image to every other face image according to the face feature vectors, recorded as a similarity matrix; in the i-th iteration, taking the face image with the highest quality score as the class center point of class Ci; classifying into class Ci, according to the similarities between face images in the similarity matrix, the face images whose similarity to the class center point of class Ci reaches a preset threshold; deleting the face images in class Ci from the face quality score queue Ti to obtain an updated face quality score queue T(i+1); deleting the similarities between the face images in class Ci and the class center point of class Ci from the similarity matrix Mi to obtain an updated similarity matrix M(i+1); and when the updated face quality score queue T(i+1) is not empty, continuing to select the face image with the highest quality score from T(i+1) as the class center point of class C(i+1).
Optionally, the similarity between each face image and the other face images is calculated by using either of the following algorithms: a cosine distance algorithm or a Euclidean distance algorithm.
Optionally, the clustering the face images by using a community discovery algorithm includes: dividing all the face images into different categories to form a set of categories SET(C)0; calculating the face similarity between all pairs of face images and comparing it with a preset second threshold, the edge weight being 1 when the face similarity is higher than the second threshold and 0 when it is lower, to obtain a set of edges SET(E)0; in the i-th iteration, calculating the clustering stability Qi in the current clustering state from the corresponding class set SET(C)i and edge set SET(E)i; merging two adjacent categories into the same category to obtain a new category set SET(C)(i+1); calculating the sum of the weights of the edges within each merged class and the sum of the weights of the edges between classes to obtain an edge set SET(E)(i+1); calculating the merged clustering stability Q(i+1); when the clustering stability Q(i+1) after merging is greater than the clustering stability Qi before merging, accepting the merge, emptying the set of erroneous partition states, and proceeding to the (i+1)-th iteration; when Q(i+1) is less than or equal to Qi, rejecting the merge and adding the current partition to the set of erroneous partition states; judging whether all merging possibilities have been tried, and when they have, taking the clustering of the i-th iteration as the optimal clustering result; when not all merging possibilities have been tried, continuing the merge attempts.
An embodiment of the present invention further provides a face image processing apparatus, including: an acquisition unit adapted to acquire an image to be processed, the image to be processed comprising a face image; a face vectorization unit adapted to perform face vectorization on the face image to obtain a face feature vector corresponding to the face image; a clustering unit adapted to cluster the face images based on the quality scores of the face images in combination with the face feature vectors corresponding to the face images; a determining unit adapted to determine the categories of the face images and the class center points corresponding to the categories; and a face recognition unit adapted to perform face recognition on the face images corresponding to the class center points of the categories.
Optionally, the face vectorization unit is adapted to perform image normalization on the face image to obtain a normalized face image; and performing face vectorization processing on the standardized face image by adopting a face vectorization algorithm, and acquiring a first face feature vector with a preset dimension as a face feature vector corresponding to the face image.
Optionally, the face vectorization unit is further adapted to mirror the standardized face image to obtain a mirrored face image; performing face vectorization processing on the mirrored face image by adopting a face vectorization algorithm to obtain a second face feature vector with a preset dimension; and calculating the mean value of the first face feature vector and the second face feature vector, and taking the calculated mean value as the face feature vector corresponding to the face image.
Optionally, the clustering unit is adapted to cluster the face images by using any one of the following algorithms: non-maximum suppression algorithm, community discovery algorithm.
Optionally, the determining unit is adapted to, when the face images are clustered by using a non-maximum suppression algorithm, take the face image with the highest quality score in each category as a class center point of the corresponding category; and when the face images are clustered by adopting a community discovery algorithm, the face image with the most edges in each class is taken as the class center point of the corresponding class.
Optionally, the clustering unit is adapted to, when clustering the face images by using a non-maximum suppression algorithm: arrange the face images in descending order of quality score, recorded as a face quality score queue; calculate the similarity of each face image to every other face image according to the face feature vectors, recorded as a similarity matrix; in the i-th iteration, take the face image with the highest quality score as the class center point of class Ci; classify into class Ci, according to the similarities between face images in the similarity matrix, the face images whose similarity to the class center point of class Ci reaches a preset threshold; delete the face images in class Ci from the face quality score queue Ti to obtain an updated face quality score queue T(i+1); delete the similarities between the face images in class Ci and the class center point of class Ci from the similarity matrix Mi to obtain an updated similarity matrix M(i+1); and when the updated face quality score queue T(i+1) is not empty, continue to select the face image with the highest quality score from T(i+1) as the class center point of class C(i+1).
Optionally, the clustering unit is adapted to calculate the similarity between each face image and the other face images by using either of the following algorithms: a cosine distance algorithm or a Euclidean distance algorithm.
Optionally, the clustering unit is adapted to, when clustering the face images by using a community discovery algorithm: divide all the face images into different categories to form a set of categories SET(C)0; calculate the face similarity between all pairs of face images and compare it with a preset second threshold, the edge weight being 1 when the face similarity is higher than the second threshold and 0 when it is lower, to obtain a set of edges SET(E)0; in the i-th iteration, calculate the clustering stability Qi in the current clustering state from the corresponding class set SET(C)i and edge set SET(E)i; merge two adjacent categories into the same category to obtain a new category set SET(C)(i+1); calculate the sum of the weights of the edges within each merged class and the sum of the weights of the edges between classes to obtain an edge set SET(E)(i+1); calculate the merged clustering stability Q(i+1); when the clustering stability Q(i+1) after merging is greater than the clustering stability Qi before merging, accept the merge, empty the set of erroneous partition states, and proceed to the (i+1)-th iteration; when Q(i+1) is less than or equal to Qi, reject the merge and add the current partition to the set of erroneous partition states; judge whether all merging possibilities have been tried, and when they have, take the clustering of the i-th iteration as the optimal clustering result; when not all merging possibilities have been tried, continue the merge attempts.
An embodiment of the present invention further provides a face image processing device, comprising a memory and a processor, wherein the memory stores computer instructions executable on the processor, and the processor, when running the computer instructions, performs the steps of any one of the above face image processing methods.
The embodiment of the present invention further provides a computer-readable storage medium, which is a non-volatile storage medium or a non-transitory storage medium, and on which computer instructions are stored, and when the computer instructions are executed, the steps of any one of the above-mentioned face image processing methods are executed.
Compared with the prior art, the technical scheme of the embodiment of the invention has the following beneficial effects:
Face feature vectorization is performed on the face images in the images to be processed, and the face images are clustered according to their quality scores and the corresponding face feature vectors, yielding the categories to which the face images belong and the class center points of the categories. During face recognition, only the face image corresponding to the class center point of each category is identified, so the number of face images that must be processed in the face recognition process is reduced and face image recognition efficiency is improved.
Furthermore, normalizing the face image before face vectorization improves the fault tolerance of face detection and the precision and efficiency of subsequent image processing.
Furthermore, the obtained standardized face image is mirrored to obtain a mirrored face image, and the face feature vector corresponding to the standardized face image and the face feature vector corresponding to the mirrored face image are averaged to be used as the face feature vector corresponding to the face image, so that the accuracy of the obtained face feature vector can be improved.
Drawings
Fig. 1 is a flowchart of a face image processing method in an embodiment of the present invention;
FIG. 2 is a flow chart of clustering face images using a non-maxima suppression algorithm in an embodiment of the present invention;
FIG. 3 is a flowchart of clustering face images using a community discovery algorithm according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a face image processing apparatus in an embodiment of the present invention.
Detailed Description
As described above, the current scheme for performing face recognition on a video stream regards the video as a sequence of time-ordered pictures, performs face recognition on each picture, compares all obtained faces with a face library, and finally aggregates the recognition results. However, this image processing method is inefficient, since face recognition must be performed on every picture in the video stream.
In the embodiment of the invention, the face features of the face images in the images to be processed are vectorized, the face images are clustered according to the corresponding quality scores of the face images and the corresponding face feature vectors, the categories to which the face images belong and the category center points of the categories are obtained, and the face images corresponding to the category center points in the categories are identified during face identification, so that the number of the face images needing to be processed in the face identification process can be reduced, and the face image identification efficiency can be improved.
In order to make the aforementioned objects, features and advantages of the embodiments of the present invention more comprehensible, specific embodiments accompanied with figures are described in detail below.
Referring to fig. 1, a flowchart of a face image processing method according to an embodiment of the present invention is shown, where the face image processing method may include the following steps.
And step 11, acquiring an image to be processed, wherein the image to be processed comprises a human face image.
In a specific implementation, the image to be processed may be obtained from a video stream. In the embodiment of the present invention, images of corresponding frames may be extracted from the video stream according to a preset image extraction rule, and an image including a face image is obtained from the extracted images as an image to be processed.
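The "preset image extraction rule" is not fixed by the patent; a minimal sketch, assuming a simple every-k-th-frame rule (the function name and parameters are illustrative, not from the patent):

```python
def frames_to_sample(total_frames, step):
    """One possible 'preset image extraction rule': keep every step-th
    frame index of the video stream. With OpenCV one would then seek to
    each index (cv2.VideoCapture.set with CAP_PROP_POS_FRAMES), decode
    the frame, and keep it as an image to be processed if a face
    detector finds a face in it."""
    if step < 1:
        raise ValueError("step must be >= 1")
    return list(range(0, total_frames, step))
```

For a 250-frame clip sampled every 25 frames, this yields 10 candidate images, each then checked for a face.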
And step 12, carrying out face vectorization on the face image to obtain a face feature vector corresponding to the face image.
In specific implementation, in order to facilitate subsequent face recognition, image normalization may be performed on a face image in the image to be processed to obtain a normalized face image.
In a specific implementation, the image to be processed may include one face or may include a plurality of faces. When the image to be processed comprises a plurality of faces, respectively intercepting the plurality of face areas, and correspondingly obtaining a plurality of face images.
In the embodiment of the present invention, normalizing the face image may include the following steps: normalizing the size of the face image to obtain a face image of a preset size; and adjusting the face orientation in the resulting image by operations such as rotation, so that key points of the face, such as the eyes, nose tip and mouth corners, are at preset positions. This adjustment puts the face in the preset-size image into an upright, non-tilted state. Normalizing the face images in the image to be processed improves the fault tolerance of face detection and the precision and efficiency of subsequent image processing.
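The rotation-and-resizing step above can be realized as a two-point similarity transform. A minimal sketch, assuming the two eye centers are the key points used and assuming hypothetical template coordinates (the patent only says "preset positions"):

```python
import numpy as np

def align_eyes(left_eye, right_eye, target_left=(30.0, 51.0), target_right=(66.0, 51.0)):
    """Compute a 2x3 similarity transform (rotation + uniform scale +
    translation) mapping detected eye centers onto preset template
    positions. The target coordinates are hypothetical values for a
    small face crop, not taken from the patent."""
    src = np.array([left_eye, right_eye], dtype=float)
    dst = np.array([target_left, target_right], dtype=float)
    d_src, d_dst = src[1] - src[0], dst[1] - dst[0]
    # scale: ratio of inter-eye distances; angle: eye-line angle difference
    scale = np.hypot(*d_dst) / np.hypot(*d_src)
    angle = np.arctan2(d_dst[1], d_dst[0]) - np.arctan2(d_src[1], d_src[0])
    c, s = scale * np.cos(angle), scale * np.sin(angle)
    R = np.array([[c, -s], [s, c]])
    t = dst[0] - R @ src[0]
    return np.hstack([R, t[:, None]])  # usable with cv2.warpAffine
```

Applying the returned matrix to the detected eye points maps them onto the template positions, leaving the face upright in the preset-size crop.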
In a specific implementation, after the normalized face image is obtained, a face vectorization algorithm may be used to perform face vectorization on it, and a first face feature vector of a preset dimension is obtained as the face feature vector corresponding to the normalized face image. A face image can be regarded as a 3-dimensional pixel matrix; face vectorization converts it into a corresponding 1-dimensional vector whose length can be set according to actual requirements. For example, a lightweight face vectorization algorithm with a small fully connected layer may be used to vectorize the normalized face image, converting the 3-dimensional face image into a 1-dimensional vector, with a 512-dimensional vector adopted as the face feature vector corresponding to the face image.
In order to further improve the accuracy of the obtained face feature vector, in another embodiment of the present invention, after the standardized face image is obtained, the standardized face is mirrored to obtain a mirrored face image. And performing face vectorization processing on the mirrored face image by adopting a face vectorization algorithm to obtain a second face feature vector with a preset dimension, calculating an average value of the first face feature vector and the second face feature vector, and taking the calculated average value as a face feature vector corresponding to the face image.
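The mirror-and-average step can be sketched as follows. The `embed` function below is a stand-in for the patent's (unspecified) face vectorization network; flatten-and-normalize is used purely so the example runs:

```python
import numpy as np

def embed(img):
    """Stand-in for the face vectorization algorithm: any model mapping
    a normalized H x W x 3 crop to a fixed-length vector would do here.
    This toy version just flattens and L2-normalizes the pixels."""
    v = np.asarray(img, dtype=float).ravel()
    return v / (np.linalg.norm(v) + 1e-12)

def face_feature(img):
    """Average the embedding of the image and of its horizontal mirror,
    as in the embodiment described above."""
    mirrored = np.asarray(img)[:, ::-1]  # flip left-right
    return (embed(img) + embed(mirrored)) / 2.0
```

For a perfectly symmetric face crop the mirrored embedding coincides with the original, so averaging changes nothing; for real, slightly asymmetric faces it smooths out pose-dependent noise.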
And step 13, clustering the face images based on the quality scores of the face images and by combining the face characteristic vectors corresponding to the face images.
When the image to be processed is acquired in step 11, face detection may be performed on the image to determine whether the image is a face image. When the face detection is carried out, the coordinate position and the quality score of the face area can be obtained.
In a specific implementation, the quality score of a face image is the confidence that a certain region contains a face: the higher the confidence, the more acceptable the assumption that the region contains a face. The quality score is related to factors such as image sharpness, the position of the face in the image, and the face pose (frontal face, profile, etc.); the sharper the face image and the closer the pose is to a frontal face, the higher the quality score.
In particular implementations, the face images may be clustered in a variety of ways. For example, a non-maximum suppression algorithm is used to cluster the face images. For another example, the face images are clustered using a community finding (Fast Unfolding) algorithm.
Referring to fig. 2, a flowchart of clustering face images by using a non-maximum suppression algorithm in an embodiment of the present invention is shown, which may include the following steps.
And step 21, arranging the face images according to the quality scores in a reverse order, and recording the face images as a face quality score queue.
In a specific implementation, the face images may be arranged in a reverse order from high to low according to the quality scores, so as to obtain a face quality score queue T0. It can be understood that the face images can also be arranged in a positive order from low to high according to the quality scores, and the ordering can be performed according to the actual application requirements.
For example, the normalized face images are arranged in descending order of quality score. As another example, the mirrored face images are arranged in descending order of quality score.
And step 22, calculating the similarity of each face image and other face images according to the face feature vectors of the face images, and recording the similarity as a similarity matrix.
In a specific implementation, the similarity between each face image and every other face image in the face quality score queue T0 may be calculated in turn from the face feature vectors, giving the pairwise similarities between face images and the similarity matrix M0.
In an embodiment of the invention, a cosine distance algorithm is adopted to calculate the similarity between two face images. In another embodiment of the invention, the similarity between two face images is calculated by adopting a Euclidean distance algorithm. It is understood that in practical applications, other algorithms may be used to calculate the similarity between two face images.
When the cosine distance algorithm is adopted to calculate the similarity of the face image, the calculated similarity can be recorded in a two-dimensional matrix M0, and the cosine distance between the face image a and the face image b, that is, the similarity between the face image a and the face image b, is recorded at the (a, b) position in the matrix.
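Recording the pairwise cosine similarities into the two-dimensional matrix M0 can be sketched as follows (names are illustrative):

```python
import numpy as np

def similarity_matrix(vectors):
    """Pairwise cosine similarity: M[a, b] holds the similarity between
    face image a and face image b, as described for matrix M0."""
    V = np.asarray(vectors, dtype=float)
    V = V / (np.linalg.norm(V, axis=1, keepdims=True) + 1e-12)
    return V @ V.T  # dot products of unit vectors = cosine similarities
```

The matrix is symmetric with ones on the diagonal (every face is maximally similar to itself), which is why the NMS loop later deletes rows/columns of processed faces rather than recomputing.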
In a specific implementation, there is no necessary logical order between step 21 and step 22. In the embodiment of the present invention, step 21 may be performed first, and then step 22 may be performed; step 22 may be performed first, and then step 21 may be performed; step 21 and step 22 may be performed simultaneously.
And step 23, taking the face image with the highest quality score as a class center point of the class Ci during the ith iteration.
In a specific implementation, when the ith iteration is performed, the corresponding quality queue is Ti, and the corresponding similarity matrix is Mi. And taking the face image with the highest quality score in the Ti as the class center point of the class Ci.
And 24, generating a similarity matrix Mi.
And step 25, classifying into the class Ci, according to the similarities between face images in the similarity matrix, the face images whose similarity to the class center point of the class Ci reaches the preset threshold.
That is, according to the similarities in the similarity matrix Mi, the face images whose similarity to the face image at the class center point of class Ci is higher than the preset threshold are identified and classified into class Ci.
And 26, deleting the face images in the category Ci from the face quality queue Ti to obtain an updated face quality queue T (i + 1).
And 27, deleting the similarity between the face image in the category Ci and the class center point of the category Ci from the similarity matrix Mi.
And deleting the similarity between the face image in the category Ci and the class center point of the category Ci from the similarity matrix Mi to obtain an updated similarity matrix M (i + 1).
In particular embodiments, there is no required logical ordering between steps 26 and 27. In the embodiment of the present invention, step 26 may be performed first, and then step 27 may be performed; step 27 may be performed first, and then step 26 may be performed; step 26 and step 27 may also be performed simultaneously.
And step 28, judging whether the updated face quality queue T (i +1) is empty or not.
And when the judgment result is negative, namely the updated face quality queue T (i +1) is not empty, respectively executing the step 23 and the step 24, and entering the cycle of the (i +1) th time. And in the (i +1) th iteration, continuously selecting the face image with the highest quality score from the updated quality score queue T (i +1) as the class center point of the class C (i +1), and clustering the corresponding face images in the face quality score queue T (i +1) according to the similarity in the similarity matrix M (i + 1).
And when the judgment result is yes, namely the updated face quality queue T (i +1) is empty, executing step 14, determining the class of the face image, and taking the face image with the highest quality score in each class as a class center point.
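The non-maximum-suppression clustering loop of steps 21–28 can be sketched as follows, assuming quality scores and the similarity matrix are already computed (the function name and threshold value are illustrative):

```python
import numpy as np

def nms_cluster(scores, sim, threshold):
    """NMS clustering per Fig. 2: repeatedly take the highest-scoring
    remaining face as a class center point and absorb all remaining
    faces whose similarity to it reaches the threshold. Returns a list
    of (center_index, member_indices) pairs."""
    remaining = list(np.argsort(scores)[::-1])  # quality queue, high -> low
    clusters = []
    while remaining:  # loop ends when the quality queue T(i+1) is empty
        center = remaining[0]
        members = [j for j in remaining
                   if j == center or sim[center][j] >= threshold]
        clusters.append((center, members))
        # delete clustered faces from the queue (steps 26-27)
        remaining = [j for j in remaining if j not in members]
    return clusters
```

With faces 0 and 1 near-identical and face 2 distinct, two classes result, each headed by its highest-quality member.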
Referring to fig. 3, a flowchart illustrating a process of clustering face images by using a community discovery algorithm according to an embodiment of the present invention is shown, and may include the following steps.
Regard all faces as points, and form an edge between any two faces whose similarity is higher than the second threshold, thereby constructing a graph. Measure the clustering result by its stability, and iterate until the stability of the partition state can no longer be improved. The process of clustering the face images by the community discovery algorithm is as follows:
step 301, dividing all face images into different categories to form a set of categories set (c) 0.
In a specific implementation, each face image may be taken as a category when first divided. For example, there are 10 face images, and each face image corresponds to one category, which is 10 categories in total.
Upon initialization, the set formed by the erroneous partition states is empty:

SET(S)0 = ∅
Step 302, calculating the similarity between every two face images and comparing it with a preset second threshold, a pair whose similarity is higher than the second threshold being given a weight of 1 and a pair whose similarity is lower than the second threshold being given a weight of 0, so as to obtain a set of edges SET(E)0.
In a specific implementation, when the similarity of two face images is higher than the second threshold, the weight is recorded as 1 and a connecting line, namely an edge, is formed between the two face images. When the similarity of two face images is lower than the second threshold, the weight is recorded as 0 and there is no edge between the two face images.
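The graph construction of step 302 might be sketched as follows. The helper name `build_edges` is hypothetical; pairs at or below the second threshold simply have no entry, which encodes their weight of 0.

```python
import itertools

def build_edges(vectors, similarity, second_threshold):
    """Build the edge set SET(E)0: an edge of weight 1 joins every pair of
    faces whose similarity exceeds the second threshold; all other pairs
    have weight 0 and are omitted (no edge)."""
    edges = {}
    for i, j in itertools.combinations(range(len(vectors)), 2):
        if similarity(vectors[i], vectors[j]) > second_threshold:
            edges[(i, j)] = 1          # weight 1: the pair is connected
    return edges
```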
Step 303, in the ith iteration, calculating the clustering stability Qi in the current clustering state.
In a specific implementation, the ith iteration has a corresponding set of categories SET(C)i and set of edges SET(E)i. Based on the current clustering state, namely SET(C)i and SET(E)i, the clustering stability Qi can be calculated using formula (1).
Qi = (1/(2m)) · Σ_{i,j} [A_{i,j} − (k_i · k_j)/(2m)] · δ(c_i, c_j)    (1)

wherein 2m = Σ_{i,j} A_{i,j} represents the sum of all weights in the graph network formed by the vertices and edges; A_{i,j} represents the weight between vertex i and vertex j; k_i = Σ_j A_{i,j} represents the sum of the weights of the edges connected to vertex i; c_i represents the category to which vertex i is assigned; and δ(c_i, c_j) is used to judge whether vertex i and vertex j are divided into the same category, returning 1 if so, and 0 otherwise.
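Formula (1) is the standard graph modularity measure, and its computation can be sketched directly; the function name `modularity` and the adjacency-matrix/label-list representation are our choices, not mandated by the text.

```python
def modularity(A, labels):
    """Compute the clustering stability Q of formula (1).

    A: symmetric weight matrix, A[i][j] = weight between vertices i and j
    labels: labels[i] = category c_i to which vertex i is assigned
    """
    n = len(A)
    # 2m: the sum of all weights in the graph (each edge counted twice).
    two_m = sum(A[i][j] for i in range(n) for j in range(n))
    # k_i: the sum of the weights of the edges connected to vertex i.
    k = [sum(row) for row in A]
    q = 0.0
    for i in range(n):
        for j in range(n):
            if labels[i] == labels[j]:          # delta(c_i, c_j) = 1
                q += A[i][j] - k[i] * k[j] / two_m
    return q / two_m
```

On a graph of two disconnected edges, splitting the vertices along those edges gives Q = 0.5, while lumping all vertices into one category gives Q = 0, matching the intuition that the former partition is more stable.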
Step 304, merging two adjacent categories into the same category to obtain a new set of categories SET(C)(i+1).
In a specific implementation, the merged set of categories SET(C)(i+1) must not already be contained in SET(S)i, the set formed by the erroneous partition states.
Step 305, calculating the sum of the weights of the edges within each merged class and the sum of the weights of the edges between the merged classes, to obtain a set of edges SET(E)(i+1).
Based on the set of categories SET(C)(i+1) obtained by the new clustering, the weights of the individual edges are merged, and the sum of the edge weights within each class and the sum of the edge weights between classes are calculated. The sum of all edge weights within a class is that class's self-connection weight, and the sum of the weights of all edges between two classes is the weight of the connection between the two classes.
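The weight aggregation of step 305 can be sketched as follows. The edge keys, the `mapping` dictionary from old class ids to merged class ids, and the use of a self-loop key `(c, c)` to hold a class's self-connection weight are representational choices of ours.

```python
def merge_edges(edges, mapping):
    """Aggregate edge weights after classes are merged.

    edges: {(a, b): w} with a <= b, where a and b are class ids
    mapping: old class id -> new (merged) class id
    A key (c, c) in the result is class c's self-connection weight; a key
    (c, d) with c != d is the total weight connecting classes c and d.
    """
    merged = {}
    for (a, b), w in edges.items():
        na, nb = sorted((mapping[a], mapping[b]))
        merged[(na, nb)] = merged.get((na, nb), 0) + w
    return merged
```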
Step 306, calculating the merged clustering stability Q(i+1).
Based on the merged SET(C)(i+1) and SET(E)(i+1), the merged clustering stability Q(i+1) is calculated; specifically, reference may be made to the calculation method and process of Qi.
Step 307, judging whether Q(i+1) − Qi > 0 holds.
When the judgment result is yes, step 308 is executed; when the judgment result is no, step 310 is executed.
Step 308, accepting the merge, and emptying the set formed by the erroneous partition states.
After step 308, step 309 is performed.
Step 309, taking SET(C)(i+1) and SET(E)(i+1) as the current set of categories and set of edges.
The (i+1)th iteration is entered, after which step 303 continues.
Step 310, not accepting the merge, and adding the current partition to the set formed by the erroneous partition states.
Step 311, judging whether the length lenC of SET(S)i, the set formed by the erroneous partition states, is equal to the number of possible pairwise category merges, namely whether

lenC = n(n − 1)/2

where n is the current number of categories.
When the judgment result is yes, that is, all category merging possibilities have been tried, the current partition is the stable and optimal partition, namely the optimal clustering, and step 14 is performed.
When the judgment result is no, step 312 is performed.
Step 312, keeping SET(C)i and SET(E)i as the current set of categories and set of edges.
The process stays in the ith iteration and continues with step 303, trying other possible category merges.
And step 14, determining the categories of the face images and the category center points corresponding to the categories.
In specific implementation, when the face images are clustered by adopting a community discovery algorithm, the face image with the most edges in each category is used as a class center point of the corresponding category. And when clustering the face images by adopting a non-maximum value suppression algorithm, taking the face image with the highest quality score in each category as a class center point of the corresponding category.
And step 15, carrying out face recognition on the face images corresponding to the class center points of all classes.
In a specific implementation, face recognition may be performed only on the face images corresponding to the class center points of the classes, while the other images belonging to the same class as a class center point are not subjected to face recognition. Clustering the face images thus deduplicates them, which can effectively reduce the number of face images requiring face recognition and thereby improve face image recognition efficiency, in particular the face recognition efficiency for video stream images.
According to the scheme, the face features of the face images in the images to be processed are vectorized, the face images are clustered according to the corresponding quality scores of the face images and the corresponding face feature vectors, the categories to which the face images belong and the class central points of the categories are obtained, and the face images corresponding to the class central points in the categories are identified during face identification.
In order to facilitate better understanding and implementation of the present invention for those skilled in the art, the embodiment of the present invention further provides a face image processing apparatus.
Referring to fig. 4, a schematic structural diagram of a face image processing apparatus according to an embodiment of the present invention is shown. The face image processing apparatus 40 may include: an obtaining unit 41, a face vectorization unit 42, a clustering unit 43, a determination unit 44, and a face recognition unit 45, wherein:
an obtaining unit 41, adapted to obtain an image to be processed, where the image to be processed includes a face image;
a face vectorization unit 42, adapted to perform face vectorization on the face image to obtain a face feature vector corresponding to the face image;
a clustering unit 43, adapted to cluster the face images based on the quality scores of the face images and in combination with the face feature vectors corresponding to the face images;
a determining unit 44, adapted to determine the categories of the face images and the class center points corresponding to the categories;
the face recognition unit 45 is adapted to perform face recognition on the face images corresponding to the class center points of the classes.
In a specific implementation, the face vectorization unit 42 is adapted to perform image normalization on the face image to obtain a normalized face image; and performing face vectorization processing on the standardized face image by adopting a face vectorization algorithm, and acquiring a first face feature vector with a preset dimension as a face feature vector corresponding to the face image.
In a specific implementation, the face vectorization unit 42 is further adapted to mirror the standardized face image to obtain a mirrored face image; performing face vectorization processing on the mirrored face image by adopting a face vectorization algorithm to obtain a second face feature vector with a preset dimension; and calculating the mean value of the first face feature vector and the second face feature vector, and taking the calculated mean value as the face feature vector corresponding to the face image.
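The mirror-and-average procedure of the face vectorization unit can be sketched as follows. `vectorize` stands in for the unspecified face vectorization algorithm, and the list-of-rows image representation is an assumption made for illustration only.

```python
def face_vector(normalized_img, vectorize):
    """Return the face feature vector as the element-wise mean of the
    vector of the standardized image and the vector of its horizontally
    mirrored copy.

    normalized_img: standardized face image as a list of pixel rows
    vectorize: callable returning a fixed-dimension list of floats
    """
    v1 = vectorize(normalized_img)                 # first face feature vector
    mirrored = [list(reversed(row)) for row in normalized_img]  # mirror image
    v2 = vectorize(mirrored)                       # second face feature vector
    return [(a + b) / 2.0 for a, b in zip(v1, v2)] # mean of the two vectors
```

Averaging the two vectors makes the resulting feature less sensitive to horizontal asymmetries such as pose or lighting.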
In a specific implementation, the clustering unit 43 is adapted to cluster the face images by using any one of the following algorithms: non-maximum suppression algorithm, community discovery algorithm.
In a specific implementation, the determining unit 44 is adapted to, when the facial images are clustered by using a non-maximum suppression algorithm, take the facial image with the highest quality score in each category as a class center point of the corresponding category; and when the face images are clustered by adopting a community discovery algorithm, the face image with the most edges in each class is taken as the class center point of the corresponding class.
In a specific implementation, when the non-maximum suppression algorithm is used to cluster the face images, the clustering unit 43 is adapted to sort the face images in reverse order according to their quality scores and record them as a face quality queue; calculate the similarity of each face image with the other face images according to the face feature vector of each face image, and record the similarities as a similarity matrix; in the ith iteration, take the face image with the highest quality score as the class center point of the class Ci; according to the similarities between the face images in the similarity matrix, classify into the class Ci the face images whose similarity with the class center point of the class Ci reaches a preset threshold; delete the face images in the class Ci from the face quality queue Ti to obtain an updated face quality queue T(i+1); delete the similarities between the face images in the class Ci and the class center point of the class Ci from the similarity matrix Mi to obtain an updated similarity matrix M(i+1); and when the updated face quality queue T(i+1) is not empty, continue to select the face image with the highest quality score from the updated face quality queue T(i+1) as the class center point of the class C(i+1).
In a specific implementation, the clustering unit 43 is adapted to calculate the similarity between each face image and other face images by using any one of the following algorithms: cosine distance algorithm, euclidean distance algorithm.
In a specific implementation, when the community discovery algorithm is used to cluster the face images, the clustering unit 43 is adapted to divide all the face images into different categories to form a set of categories SET(C)0; calculate the similarity between every two face images and compare it with a preset second threshold, a pair whose similarity is higher than the second threshold being given a weight of 1 and a pair whose similarity is lower than the second threshold being given a weight of 0, to obtain a set of edges SET(E)0; in the ith iteration, calculate the clustering stability Qi in the current clustering state by using the corresponding set of categories SET(C)i and set of edges SET(E)i; merge two adjacent categories into the same category to obtain a new set of categories SET(C)(i+1); calculate the sum of the weights of the edges within each merged class and the sum of the weights of the edges between classes to obtain a set of edges SET(E)(i+1); calculate the merged clustering stability Q(i+1); when the merged clustering stability Q(i+1) is greater than the clustering stability Qi before merging, accept the merge, empty the set formed by the erroneous partition states, and enter the (i+1)th iteration; when the merged clustering stability Q(i+1) is less than or equal to the clustering stability Qi before merging, not accept the merge and add the current partition to the set formed by the erroneous partition states; judge whether all merging possibilities have been tried, and when all merging possibilities have been tried, take the clustering in the ith iteration as the optimal clustering result; when all merging possibilities have not yet been tried, continue the merge attempts.
In a specific implementation, the working principle and the process of the facial image processing apparatus 40 may refer to descriptions in any one of the facial image processing methods provided in the above embodiments of the present invention, and are not described herein again.
The embodiment of the present invention further provides a face image processing apparatus, which includes a memory and a processor, where the memory stores a computer instruction that can be executed on the processor, and the processor executes any of the steps of the face image processing method provided in the embodiment of the present invention when executing the computer instruction.
The embodiment of the present invention further provides a computer-readable storage medium, where the computer-readable storage medium is a non-volatile storage medium or a non-transitory storage medium, and has computer instructions stored thereon, and when the computer instructions are executed, the computer instructions perform any of the steps of the above-mentioned facial image processing method provided in the embodiment of the present invention.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing the relevant hardware, where the program may be stored in any computer-readable storage medium, and the storage medium may include: a ROM, a RAM, a magnetic disk, an optical disk, and the like.
Although the present invention is disclosed above, the present invention is not limited thereto. Various changes and modifications may be effected therein by one skilled in the art without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (12)

1. A face image processing method is characterized by comprising the following steps:
acquiring an image to be processed, wherein the image to be processed comprises a face image;
carrying out face vectorization on the face image to obtain a face feature vector corresponding to the face image;
based on the quality scores of the face images, and combining face feature vectors corresponding to the face images, clustering the face images;
determining the category of the face image and a category center point corresponding to each category;
carrying out face recognition on the face images corresponding to the class center points of all classes;
clustering the face images by adopting a non-maximum suppression algorithm;
the clustering of the face images by adopting a non-maximum suppression algorithm comprises the following steps:
arranging the face images in reverse order according to the quality scores, and recording them as a face quality queue;
calculating the similarity of each face image and other face images according to the face feature vector of each face image, and recording as a similarity matrix;
in the ith iteration, the face image with the highest quality score is used as a class center point of the class Ci;
classifying into the class Ci, according to the similarities between the face images in the similarity matrix, the face images whose similarity with the class center point of the class Ci reaches a preset threshold;
deleting the face images in the category Ci from a face quality queue Ti to obtain an updated face quality queue T(i+1);
deleting the similarities between the face images in the category Ci and the class center point of the category Ci from the similarity matrix Mi to obtain an updated similarity matrix M(i+1);
and when the updated face quality queue T(i+1) is not empty, continuing to select the face image with the highest quality score from the updated face quality queue T(i+1) as the class center point of the class C(i+1).
2. The method for processing a face image according to claim 1, wherein the performing face vectorization on the face image to obtain a face feature vector corresponding to the face image comprises:
carrying out image standardization on the face image to obtain a standardized face image;
and performing face vectorization processing on the standardized face image by adopting a face vectorization algorithm, and acquiring a first face feature vector with a preset dimension as a face feature vector corresponding to the face image.
3. The method of claim 2, wherein after the image normalization of the face image to obtain a normalized face image, the method further comprises:
mirroring the standardized face image to obtain a mirrored face image;
performing face vectorization processing on the mirrored face image by adopting a face vectorization algorithm to obtain a second face feature vector with a preset dimension;
and calculating the mean value of the first face feature vector and the second face feature vector, and taking the calculated mean value as the face feature vector corresponding to the face image.
4. The method of claim 1, wherein the determining the class center point corresponding to each class comprises: and when clustering the face images by adopting a non-maximum value suppression algorithm, taking the face image with the highest quality score in each category as a class center point of the corresponding category.
5. The face image processing method according to claim 1, wherein the similarity of each face image with other face images is calculated by using any one of the following algorithms:
cosine distance algorithm, euclidean distance algorithm.
6. A face image processing apparatus, comprising:
the device comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is suitable for acquiring an image to be processed, and the image to be processed comprises a human face image;
the human face vectorization unit is suitable for carrying out human face vectorization on the human face image to obtain a human face feature vector corresponding to the human face image;
the clustering unit is suitable for clustering the face images based on the quality scores of the face images and in combination with the face characteristic vectors corresponding to the face images;
the determining unit is suitable for determining the categories of the face images and the category center points corresponding to the categories;
the face recognition unit is suitable for carrying out face recognition on the face images corresponding to the class center points of all classes;
the clustering unit is suitable for clustering the face images by adopting a non-maximum suppression algorithm;
the clustering unit is adapted to, when the non-maximum suppression algorithm is used to cluster the face images, arrange the face images in reverse order according to their quality scores and record them as a face quality queue; calculate the similarity of each face image with the other face images according to the face feature vector of each face image, and record the similarities as a similarity matrix; in the ith iteration, take the face image with the highest quality score as the class center point of the class Ci; according to the similarities between the face images in the similarity matrix, classify into the class Ci the face images whose similarity with the class center point of the class Ci reaches a preset threshold; delete the face images in the class Ci from the face quality queue Ti to obtain an updated face quality queue T(i+1); delete the similarities between the face images in the class Ci and the class center point of the class Ci from the similarity matrix Mi to obtain an updated similarity matrix M(i+1); and when the updated face quality queue T(i+1) is not empty, continue to select the face image with the highest quality score from the updated face quality queue T(i+1) as the class center point of the class C(i+1).
7. The facial image processing apparatus according to claim 6, wherein said face vectorization unit is adapted to perform image normalization on said facial image to obtain a normalized facial image; and performing face vectorization processing on the standardized face image by adopting a face vectorization algorithm, and acquiring a first face feature vector with a preset dimension as a face feature vector corresponding to the face image.
8. The facial image processing apparatus according to claim 7, wherein the face vectorization unit is further adapted to mirror the standardized facial image to obtain a mirrored facial image; performing face vectorization processing on the mirrored face image by adopting a face vectorization algorithm to obtain a second face feature vector with a preset dimension; and calculating the mean value of the first face feature vector and the second face feature vector, and taking the calculated mean value as the face feature vector corresponding to the face image.
9. The apparatus according to claim 6, wherein the determining unit is adapted to, when clustering the face images using a non-maximum suppression algorithm, take the face image with the highest quality score in each class as the class center point of the corresponding class.
10. The facial image processing apparatus according to claim 6, wherein said clustering unit is adapted to calculate the similarity of each facial image with other facial images by using any one of the following algorithms: cosine distance algorithm, euclidean distance algorithm.
11. A facial image processing apparatus comprising a memory and a processor, the memory having stored thereon computer instructions executable on the processor, wherein the processor executes the computer instructions to perform the steps of the facial image processing method of any one of claims 1 to 5.
12. A computer-readable storage medium, which is a non-volatile storage medium or a non-transitory storage medium, having computer instructions stored thereon, wherein the computer instructions are executed to perform the steps of the facial image processing method according to any one of claims 1 to 5.
CN201910196223.2A 2019-03-15 2019-03-15 Face image processing method and device and computer readable storage medium Active CN110069989B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910196223.2A CN110069989B (en) 2019-03-15 2019-03-15 Face image processing method and device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910196223.2A CN110069989B (en) 2019-03-15 2019-03-15 Face image processing method and device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN110069989A CN110069989A (en) 2019-07-30
CN110069989B true CN110069989B (en) 2021-07-30

Family

ID=67366142

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910196223.2A Active CN110069989B (en) 2019-03-15 2019-03-15 Face image processing method and device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN110069989B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110443297B (en) * 2019-07-30 2022-06-07 浙江大华技术股份有限公司 Image clustering method and device and computer storage medium
CN111128369A (en) * 2019-11-18 2020-05-08 创新工场(北京)企业管理股份有限公司 Method and device for evaluating Parkinson's disease condition of patient
CN111694979A (en) * 2020-06-11 2020-09-22 重庆中科云从科技有限公司 Archive management method, system, equipment and medium based on image
CN111738120B (en) * 2020-06-12 2023-12-05 北京奇艺世纪科技有限公司 Character recognition method, character recognition device, electronic equipment and storage medium
CN112528809A (en) * 2020-12-04 2021-03-19 东方网力科技股份有限公司 Method, device and equipment for identifying suspect and storage medium
CN113920353B (en) * 2021-11-04 2022-07-29 厦门市美亚柏科信息股份有限公司 Unsupervised face image secondary clustering method, unsupervised face image secondary clustering device and unsupervised face image secondary clustering medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013063736A1 (en) * 2011-10-31 2013-05-10 Hewlett-Packard Development Company, L.P. Temporal face sequences
CN104036261A (en) * 2014-06-30 2014-09-10 北京奇虎科技有限公司 Face recognition method and system
CN105404863A (en) * 2015-11-13 2016-03-16 小米科技有限责任公司 Figure feature recognition method and system
CN105868309A (en) * 2016-03-24 2016-08-17 广东微模式软件股份有限公司 Image quick finding and self-service printing method based on facial image clustering and recognizing techniques
CN106503633A (en) * 2016-10-10 2017-03-15 上海电机学院 The method for building up in face characteristic storehouse in a kind of video image
CN109063580A (en) * 2018-07-09 2018-12-21 北京达佳互联信息技术有限公司 Face identification method, device, electronic equipment and storage medium

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8600174B2 (en) * 2005-09-28 2013-12-03 Facedouble, Inc. Method and system for attaching a metatag to a digital image
CN102930553B (en) * 2011-08-10 2016-03-30 中国移动通信集团上海有限公司 Bad video content recognition method and device
CN104966304B (en) * 2015-06-08 2019-07-16 深圳市赛为智能股份有限公司 Multi-target detection tracking based on Kalman filtering and nonparametric background model
CN105512685B (en) * 2015-12-10 2019-12-03 小米科技有限责任公司 Object identification method and device
CN106570178B (en) * 2016-11-10 2020-09-29 重庆邮电大学 High-dimensional text data feature selection method based on graph clustering
DE102016122649B3 (en) * 2016-11-24 2018-03-01 Bioid Ag Biometric method
CN107239736A (en) * 2017-04-28 2017-10-10 北京智慧眼科技股份有限公司 Method for detecting human face and detection means based on multitask concatenated convolutional neutral net
CN108875522B (en) * 2017-12-21 2022-06-10 北京旷视科技有限公司 Face clustering method, device and system and storage medium
CN109388727A (en) * 2018-09-12 2019-02-26 中国人民解放军国防科技大学 BGP face rapid retrieval method based on clustering

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013063736A1 (en) * 2011-10-31 2013-05-10 Hewlett-Packard Development Company, L.P. Temporal face sequences
CN104036261A (en) * 2014-06-30 2014-09-10 北京奇虎科技有限公司 Face recognition method and system
CN105404863A (en) * 2015-11-13 2016-03-16 小米科技有限责任公司 Figure feature recognition method and system
CN105868309A (en) * 2016-03-24 2016-08-17 广东微模式软件股份有限公司 Image quick finding and self-service printing method based on facial image clustering and recognizing techniques
CN106503633A (en) * 2016-10-10 2017-03-15 上海电机学院 The method for building up in face characteristic storehouse in a kind of video image
CN109063580A (en) * 2018-07-09 2018-12-21 北京达佳互联信息技术有限公司 Face identification method, device, electronic equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Ranking, clustering and fusing the normalized LBP temporal facial features for face recognition in video sequences; P. Ithaya Rani et al.; Multimedia Tools and Applications; 27 February 2017; vol. 77, no. 5; pp. 5785-5802 *
Research on a spectral clustering algorithm with automatically determined cluster centers; Chen Jinyin et al.; Journal of Chinese Computer Systems; August 2018 (No. 8); pp. 1729-1736 *

Also Published As

Publication number Publication date
CN110069989A (en) 2019-07-30

Similar Documents

Publication Publication Date Title
CN110069989B (en) Face image processing method and device and computer readable storage medium
Valle et al. A deeply-initialized coarse-to-fine ensemble of regression trees for face alignment
CN109271870B (en) Pedestrian re-identification method, device, computer equipment and storage medium
US8064653B2 (en) Method and system of person identification by facial image
US8254645B2 (en) Image processing apparatus and method, and program
Wong et al. An efficient algorithm for human face detection and facial feature extraction under different conditions
JP5406705B2 (en) Data correction apparatus and method
WO2019011165A1 (en) Facial recognition method and apparatus, electronic device, and storage medium
WO2010147137A1 (en) Pattern processing device, method therefor, and program
WO2016138838A1 (en) Method and device for recognizing lip-reading based on projection extreme learning machine
CN103996052B (en) Three-dimensional face gender classification method based on three-dimensional point cloud
Li et al. Efficient 3D face recognition handling facial expression and hair occlusion
JP6071002B2 (en) Reliability acquisition device, reliability acquisition method, and reliability acquisition program
KR101558547B1 (en) Age Cognition Method that is powerful to change of Face Pose and System thereof
WO2019102608A1 (en) Image processing device, image processing method, and image processing program
WO2018100668A1 (en) Image processing device, image processing method, and image processing program
CN111125390A (en) Database updating method and device, electronic equipment and computer storage medium
WO2017167313A1 (en) Expression recognition method and device
CN112651321A (en) File processing method and device and server
KR20120080629A (en) Method of computing global-to-local metrics for recognition
JP2009129237A (en) Image processing apparatus and its method
CN113158777A (en) Quality scoring method, quality scoring model training method and related device
Alsawwaf et al. In your face: person identification through ratios and distances between facial features
Shindo et al. An optimization of facial feature point detection program by using several types of convolutional neural network
US20210042565A1 (en) Method and device for updating database, electronic device, and computer storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant