CN108829900B - Face image retrieval method and device based on deep learning and terminal

Face image retrieval method and device based on deep learning and terminal

Info

Publication number
CN108829900B
CN108829900B
Authority
CN
China
Prior art keywords
attribute
face image
feature vector
attribute group
face
Prior art date
Legal status (assumed; not a legal conclusion)
Expired - Fee Related
Application number
CN201810856269.8A
Other languages
Chinese (zh)
Other versions
CN108829900A (en)
Inventor
史方
王标
隆刚
Current Assignee (listed assignees may be inaccurate)
Chengdu Shiguan Tianxia Technology Co., Ltd.
Original Assignee
Chengdu Shiguan Tianxia Technology Co., Ltd.
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by Chengdu Shiguan Tianxia Technology Co., Ltd.
Priority to CN201810856269.8A
Publication of CN108829900A
Application granted
Publication of CN108829900B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The embodiments of the present application disclose a face image retrieval method, apparatus, and terminal based on deep learning. The method comprises the following steps: inputting a preprocessed face image to be retrieved into a trained multi-task convolutional neural network model based on local sharing to obtain an identity feature vector and a plurality of attribute group feature vectors of the face image to be retrieved; comparing the identity feature vector and the attribute group feature vectors with the identity feature vector and attribute group feature vectors of each face image stored in a database, to obtain an identity feature vector comparison result, a global attribute group feature vector comparison result, and a local attribute feature vector comparison result; and screening a target face image out of the database according to the identity feature vector comparison result, the global attribute group feature vector comparison result, and the local attribute feature vector comparison result. The technical solution provided by the embodiments of the present application can effectively reduce the probability of false recognition in face image retrieval.

Description

Face image retrieval method and device based on deep learning and terminal
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method, an apparatus, and a terminal for retrieving a face image based on deep learning.
Background
With the rapid development of the internet, social media, and intelligent multimedia devices, the generation, processing, and acquisition of multimedia data, represented by images and videos, have become increasingly convenient; multimedia applications are increasingly widespread, and the volume of data is growing explosively. How to accurately find images of interest in massive image data at low cost has become an important research hotspot in the fields of multimedia and information retrieval in recent years.
Deep learning is a class of machine learning methods based on feature learning from data, which can automatically learn feature representations from big data. Among such methods, the convolutional neural network (CNN) is a deep feedforward artificial neural network with strong learning capability and efficient feature expression, able to extract information layer by layer from pixel-level raw data up to abstract semantic concepts. This gives CNNs an outstanding advantage in extracting global features and contextual information from images, and they are therefore widely applied in many fields.
As a biometric recognition technology, face recognition is contactless and easy to capture, giving it good development and application prospects, and it plays an important role in many application scenarios. Taking its use in public security systems as an example: some criminal suspects exploit the large-scale movement of today's population and the possibility of forged identity cards to disguise or impersonate their real identities, which greatly complicates case handling and investigation. A human face, however, is difficult to counterfeit, so in such cases the face picture of a suspect can be compared against the key populations in a face repository to confirm the suspect's real identity and improve case-handling efficiency. Unlike ordinary natural images, however, face images are relatively hard to distinguish by their features. In unconstrained environments, interference from factors such as illumination, pose, expression, and accessories, and even appearance changes caused by age, easily lead to situations in which different images of the same face have low similarity while images of different faces have high similarity. These phenomena make face retrieval even more difficult. Moreover, as the size of the face database grows, the probability of similar faces appearing in it also rises, and the probability of false recognition greatly increases.
To solve this problem, one prior-art solution extracts face identity information and simultaneously classifies several face attributes with a multi-task convolutional neural network. On top of the conventional ranking based on the similarity of face identity information, re-ranking by the similarity scores of the face attribute classification results is used as an aid, mitigating the false recognition caused by the limited discriminative power of identity information alone.
However, owing to the diversity of human faces themselves and the complexity of imaging interference in unconstrained environments, face attribute classification is still not robust enough. When an attribute classification result is incorrect, it does not necessarily help face verification. In addition, the face attributes used in the prior art are usually global attributes directly related to face identity, such as age, gender, and race, which do little to handle the partial occlusion of the face or other local interference (accessories and the like) found in real environments. A better face image retrieval method is therefore urgently needed.
Disclosure of Invention
The embodiments of the present application provide a face image retrieval method, apparatus, and terminal based on deep learning, so as to solve the problem of the high probability of false recognition in face image retrieval in the prior art.
In a first aspect, an embodiment of the present application provides a face image retrieval method based on deep learning, including:
inputting a preprocessed face image to be retrieved into a trained multi-task convolutional neural network model based on local sharing to obtain an identity feature vector and a plurality of attribute group feature vectors of the face image to be retrieved, wherein the plurality of attribute group feature vectors comprise a global attribute group feature vector and at least one local attribute group feature vector;
comparing the identity feature vector and the attribute group feature vectors of the face image to be retrieved with the identity feature vector and the attribute group feature vectors of each face image stored in a database, to obtain an identity feature vector comparison result, a global attribute group feature vector comparison result, and a local attribute feature vector comparison result between the face image to be retrieved and the face images stored in the database;
and screening a target face image out of the database according to the identity feature vector comparison result, the global attribute group feature vector comparison result, and the local attribute feature vector comparison result.
Optionally, screening the target face image out of the database according to the identity feature vector comparison result, the global attribute group feature vector comparison result, and the local attribute feature vector comparison result comprises:
screening a candidate image set in the database according to the identity feature vector comparison result and the global attribute group feature vector comparison result;
and screening the target face image out of the candidate image set according to the identity feature vector comparison result and the local attribute feature vector comparison result, or according to the identity feature vector comparison result, the global attribute group feature vector comparison result, and the local attribute feature vector comparison result.
Optionally, the identity feature vector comparison result, the global attribute group feature vector comparison result, and the local attribute feature vector comparison result are, respectively, an identity feature vector similarity score, a global attribute group feature vector similarity score, and a local attribute feature vector similarity score corresponding to each face image stored in the database;
screening the candidate image set in the database according to the identity feature vector comparison result and the global attribute group feature vector comparison result comprises:
screening the face images meeting a first screening condition in the database to obtain a coarsely screened face image set, wherein the first screening condition is that the identity feature vector similarity score of a face image is greater than or equal to a preset identity feature vector similarity threshold and its global attribute group feature vector similarity score is greater than or equal to a preset global attribute group feature vector similarity threshold;
and screening the face images meeting a second screening condition out of the coarsely screened face image set to obtain the candidate image set, wherein the second screening condition selects the N1 face images with the largest first fused similarity score Score1 in the coarsely screened face image set, Score1 = (Score_id + Score_gAttrib)/2, Score_id is the identity feature vector similarity score, Score_gAttrib is the global attribute group feature vector similarity score, N1 ≤ N0, and N0 is the number of face images in the coarsely screened face image set.
Optionally, screening the target face image out of the candidate image set according to the identity feature vector comparison result and the local attribute feature vector comparison result, or according to the identity feature vector comparison result, the global attribute group feature vector comparison result, and the local attribute feature vector comparison result, comprises:
screening the N2 face images with the largest second fused similarity score Score2 out of the candidate image set as target face images, where N2 ≥ 1;
wherein Score2 = w1*Score_id + w2*Score_loc, w1 and w2 are weights, w1 > 0, w2 > 0, and w1 + w2 = 1;
if, among the at least one local attribute group feature vector corresponding to a face image, there are local attribute group feature vectors meeting a threshold screening condition, the maximum local attribute group feature vector similarity score among them is taken as Score_loc; if none of the local attribute group feature vectors corresponding to the face image meets the threshold screening condition, the global attribute group feature vector similarity score is taken as Score_loc; the threshold screening condition is that the local attribute group feature vector similarity score is greater than or equal to a preset local attribute group feature vector similarity threshold.
Optionally, the at least one local attribute group feature vector includes:
an upper face attribute group feature vector, a middle face attribute group feature vector, and a lower face attribute group feature vector.
Optionally, the attributes characterized by the upper face attribute group feature vector include an eyebrow attribute, an eye attribute, a hair color attribute, a hairstyle attribute, and/or an upper-face accessory attribute;
the attributes characterized by the middle face attribute group feature vector include a nose attribute, a cheek attribute, a cheekbone attribute, a sideburn attribute, and/or a middle-face accessory attribute;
the attributes characterized by the lower face attribute group feature vector include a lip attribute, a chin attribute, a beard attribute, a mouth attribute, and/or a lower-face accessory attribute;
the attributes characterized by the global attribute group feature vector include a gender attribute, an expression attribute, a face shape attribute, a complexion attribute, a hairstyle attribute, and/or an age attribute.
Specifically, the attributes characterized by the upper face attribute group feature vector include bushy eyebrows, arched eyebrows, bags under the eyes, narrow eyes, black hair, gray hair, bangs, baldness, a receding hairline, a hat, and/or glasses;
the attributes characterized by the middle face attribute group feature vector include a big nose, a pointy nose, rosy cheeks, high cheekbones, sideburns, and/or earrings;
the attributes characterized by the lower face attribute group feature vector include big lips, a double chin, stubble, a goatee, a slightly open mouth, a mustache, and/or lipstick;
the attributes characterized by the global attribute group feature vector include male, smiling, attractive, a round face, an oval face, heavy makeup, a pale face, curly hair, and/or young.
Optionally, the process of building the database comprises:
inputting face images to be registered into the trained multi-task convolutional neural network model based on local sharing to obtain an identity feature vector and a plurality of attribute group feature vectors corresponding to each of the face images to be registered;
and storing the identity feature vector and the plurality of attribute group feature vectors corresponding to each face image in the database.
Optionally, the preprocessing operation on the face image to be retrieved comprises:
detecting the face position and the key point positions in the face image to be retrieved;
and performing pose correction and illumination correction on the face image to be retrieved according to its face position and key point positions.
In a second aspect, an embodiment of the present application provides a face image retrieval device based on deep learning, including:
a feature extraction module, configured to input the preprocessed face image to be retrieved into a trained multi-task convolutional neural network model based on local sharing to obtain an identity feature vector and a plurality of attribute group feature vectors of the face image to be retrieved, wherein the plurality of attribute group feature vectors comprise a global attribute group feature vector and at least one local attribute group feature vector;
a comparison and analysis module, configured to compare the identity feature vector and the attribute group feature vectors of the face image to be retrieved with the identity feature vector and the attribute group feature vectors of each face image stored in a database, to obtain an identity feature vector comparison result, a global attribute group feature vector comparison result, and a local attribute feature vector comparison result between the face image to be retrieved and the face images stored in the database;
and a screening module, configured to screen a target face image out of the database according to the identity feature vector comparison result, the global attribute group feature vector comparison result, and the local attribute feature vector comparison result.
In a third aspect, an embodiment of the present application provides a terminal, including:
a processor;
a memory for storing instructions for execution by the processor;
wherein the processor is configured to perform the method of any of the first aspect above.
The face image retrieval method described above has the following advantages:
1. The multi-task convolutional neural network model provided by the embodiments of the present application can extract a plurality of feature vectors, one per task, and compare these feature vectors directly with the database samples instead of processing attribute classification results derived from them, thereby avoiding the influence of unreliable attribute classification and achieving higher robustness.
2. The invention designs a fault-tolerant mechanism for similarity scores in the process of comparing the plurality of feature vectors. First, the identity feature similarity score and the full-face attribute similarity score are combined for coarse screening, which reduces the omissions caused by comparison based on identity feature similarity alone. Second, ranking by a fused similarity score that combines the identity feature similarity score and the full-face attribute similarity score excludes candidate images with a low probability of belonging to the same face as the query image. Finally, a locally salient feature score selected from the local face attribute similarity scores serves as a strong supplement to the identity feature similarity score and assists the face comparison to obtain the final candidate face image set. Because this fault-tolerant mechanism takes the local attribute features of the face image into account, retrieval performance is improved when the face is affected by partial occlusion or other local interference.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It will be apparent to those skilled in the art that other drawings can be derived from these drawings without creative effort.
Fig. 1 is a schematic structural diagram of a local sharing unit according to an embodiment of the present disclosure;
fig. 2 is a schematic structural diagram of a multitask convolutional neural network model based on local sharing according to an embodiment of the present application;
fig. 3 is a schematic flow chart of a face image retrieval method based on deep learning according to an embodiment of the present application;
fig. 4 is a schematic flow chart of a face image screening method provided in the embodiment of the present application;
fig. 5 is a schematic structural diagram of a face image retrieval device based on deep learning according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a terminal according to an embodiment of the present application.
Detailed Description
In order to help those skilled in the art better understand the technical solutions in the present application, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments derived by a person of ordinary skill in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
The embodiments of the present application adopt a multi-task convolutional neural network model based on local sharing. To help those skilled in the art better understand the technical solutions in the present application, this neural network model is first briefly described.
The neural network model involved in the embodiments of the present application introduces a local sharing structure. Fig. 1 is a schematic structural diagram of a local sharing unit provided in an embodiment of the present application. As shown in Fig. 1, the local sharing unit sets up one convolutional neural network for each attribute group, called a specific network. In addition, a separate shared network is used to learn the information shared between the different attribute groups. The local sharing unit concatenates the previous-layer features of all specific networks with the previous-layer features of the shared network as the input to the next layer of the shared network; the output of that shared layer is then concatenated with each specific network's previous-layer features as the input to the next layer of that specific network. The local sharing unit thus lets the shared network extract shared information at a given layer, while each specific network can further use the shared information to extract complementary information, promoting the learning of its corresponding attribute group.
Furthermore, applying the local sharing unit at every level of the neural network forms the multi-task convolutional neural network based on local sharing. Fig. 2 is a schematic structural diagram of the multi-task convolutional neural network model based on local sharing according to an embodiment of the present application. In the network structure shown in Fig. 2, the face attributes are divided into 4 attribute groups according to spatial information, forming 4 subtasks, and a convolutional neural network (a specific network, TSNet) is configured for each group for feature learning. Meanwhile, a shared network, SNet, is configured separately and connected to the 4 TSNets by the local sharing unit shown in Fig. 1. Finally, each TSNet takes the classification loss ALoss of its attribute group as its loss function, and SNet takes the identity discrimination loss IDLoss as its loss function, guiding the training of the network so as to output 4 attribute group feature vectors and 1 identity feature vector. In short, each TSNet corresponds to one attribute group and learns specific features, while SNet learns shared features and mines the associations between attribute groups.
In the embodiments of the present application, the features of the upper face, middle face, lower face, and full-face attribute groups are learned by the 4 TSNets respectively. Of course, those skilled in the art can make corresponding adjustments according to actual needs, all of which fall within the protection scope of the present application. It should be noted that the multi-task convolutional neural network model provided by the embodiments of the present application can extract a plurality of feature vectors, one per task, whereas a traditional fully shared multi-task convolutional neural network model provides only one feature vector, on the basis of which both identity classification and attribute classification are performed.
In addition, the neural network model provided in the embodiments of the present application includes an input layer, a plurality of convolutional layers, a plurality of concat layers, a plurality of fully connected layers, and an output layer, which are not described in detail here.
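To make the wiring of the local sharing unit concrete, the following is a minimal PyTorch sketch assuming one convolutional layer per stage; the channel sizes, 3x3 kernels, ReLU activations, and the names LocalSharingUnit and n_specific are illustrative assumptions, not details fixed by the present application:

```python
import torch
import torch.nn as nn

class LocalSharingUnit(nn.Module):
    """One locally shared stage: a shared branch plus one branch per
    attribute group (a sketch; sizes and activations are assumed)."""
    def __init__(self, in_ch, out_ch, n_specific=4):
        super().__init__()
        # The shared layer consumes its own previous-layer features
        # concatenated with those of every specific network.
        self.shared = nn.Conv2d(in_ch * (n_specific + 1), out_ch, 3, padding=1)
        # Each specific layer consumes its own previous-layer features
        # concatenated with the new shared output.
        self.specific = nn.ModuleList([
            nn.Conv2d(in_ch + out_ch, out_ch, 3, padding=1)
            for _ in range(n_specific)
        ])
        self.act = nn.ReLU(inplace=True)

    def forward(self, shared_feat, specific_feats):
        s = self.act(self.shared(torch.cat([shared_feat, *specific_feats], dim=1)))
        outs = [self.act(conv(torch.cat([f, s], dim=1)))
                for conv, f in zip(self.specific, specific_feats)]
        return s, outs
```

Stacking such units and terminating the shared branch with IDLoss and each specific branch with its group's ALoss would yield the 1 identity feature vector and 4 attribute group feature vectors described above.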
Before the network model is used, it needs to be trained. Specifically, face images of a preset standard size are input into the multi-task convolutional neural network, and the network is trained until the model converges, at which point training stops.
The face image retrieval method provided by the embodiments of the present application also requires a database for storing reference information. To keep the subsequent description coherent, the database is described first. It is constructed as follows: the face images to be registered are input in sequence into the trained multi-task convolutional neural network, forward computation is performed to obtain the corresponding outputs, and the outputs are stored in the database. Each output comprises an identity feature vector and a plurality of attribute group feature vectors (such as an upper face attribute group feature vector, a middle face attribute group feature vector, a lower face attribute group feature vector, and a full-face attribute group feature vector), all of which are one-dimensional floating-point vectors.
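A minimal registration sketch follows; the model interface (returning one identity vector and a list of attribute group vectors with the full-face group first) and the in-memory dict standing in for the database are illustrative assumptions:

```python
import numpy as np

def register_faces(model, face_images, database):
    """Run forward computation on each face to be registered and store
    the resulting one-dimensional float vectors in the database."""
    for image_id, image in face_images:
        id_vec, attrib_vecs = model.forward(image)  # assumed interface
        database[image_id] = {
            "id": np.asarray(id_vec, dtype=np.float32),
            # index 0: full-face (global) group; 1-3: upper/middle/lower
            "attrib": [np.asarray(v, dtype=np.float32) for v in attrib_vecs],
        }
```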
Fig. 3 is a schematic flowchart of the face image retrieval method based on deep learning according to an embodiment of the present application. As shown in Fig. 3, the method mainly includes the following steps.
Step S301: perform the preprocessing operation on the face image to be retrieved, which comprises the following steps:
a. Detect the face position and the key point positions in the face image to be retrieved.
It should be noted that, in the embodiments of the present application, an existing face detection algorithm may be used to detect the face position in the face image, and the key point positions of the face are then obtained from the detected face position. The face position refers to the location of the face in the image and usually comprises the pixel coordinates of the upper-left corner (or center point) of the face together with the face's length and width. The key point positions are the coordinates of preset face key points, which usually cover important parts of the face such as the eyes and the facial contour. The face position information thus indicates where the face is in the image, while the key point positions indicate the pose and expression of the face; this information is used to correct the face image and obtain a normalized face image that facilitates later face feature extraction.
In the embodiments of the present application, the face detection algorithm may be a face detector, such as one based on Haar-like features and AdaBoost, or a neural-network-based object detector such as R-CNN or FCN.
b. Perform pose correction and illumination correction on the face image to be retrieved according to its face position and key point positions.
In the embodiments of the present application, the key point positions and illumination conditions of a standard face are predefined, and the key points of the face image are aligned to those of the standard face by a preset image transformation algorithm, thereby correcting the face pose.
Illumination correction is then applied to the aligned face image by a preset image processing algorithm, so that its illumination conditions are brought to those of the standard face (i.e., the aligned face image is made consistent with the standard face through illumination correction; for example, a gamma value can be used to adjust the image pixel values so that the processed image has suitable contrast and the facial details are clearly visible).
In a specific implementation, predefining the key point positions and illumination conditions of a standard face works as follows: the key point positions and illumination conditions of the standard face (i.e., the average face) can be obtained by averaging the key point position information and illumination conditions of a number of face images.
It should be noted that the number of pose correction and illumination correction operations is not limited, and their order can be adjusted. The illumination conditions refer to the lighting environment in which a face image (including face pictures and videos) was captured, which manifests in the image as brightness (which can be understood simply as the pixel gray values). The preset standard illumination conditions of a face image generally ensure that the facial features are clearly visible, neither too bright nor too dark.
In a specific implementation, the preset image transformation algorithm may be a basic image transformation such as a similarity transformation or an affine transformation, or a combination of these basic transformations.
In the embodiments of the present application, the "alignment" operation refers to correcting a posed face image to the standard face, which is typically a frontal face image. According to the key point positions of the acquired face image, alignment is achieved by applying operations such as similarity and affine transformations to the image, so that the positions of the facial features of the aligned face image are basically consistent with those of the standard face.
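The following sketch illustrates this preprocessing with OpenCV. The five-point standard-face template, the 112x112 output size, and the default gamma value are illustrative assumptions; the embodiments above do not fix these particulars:

```python
import cv2
import numpy as np

# Assumed standard-face key point template on a 112x112 canvas.
STANDARD_POINTS = np.float32([
    [38.3, 51.7], [73.5, 51.5],   # eye centers
    [56.0, 71.7],                 # nose tip
    [41.5, 92.4], [70.7, 92.2],   # mouth corners
])

def preprocess(image, keypoints, gamma=1.0):
    """Pose correction via a similarity transform onto the standard
    face, then a simple gamma-based illumination correction."""
    m, _ = cv2.estimateAffinePartial2D(np.float32(keypoints), STANDARD_POINTS)
    aligned = cv2.warpAffine(image, m, (112, 112))
    # Gamma adjustment of the pixel values for suitable contrast.
    table = ((np.arange(256) / 255.0) ** (1.0 / gamma) * 255).astype(np.uint8)
    return cv2.LUT(aligned, table)
```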
Step S302: input the preprocessed face image to be retrieved into the trained multi-task convolutional neural network model based on local sharing to obtain an identity feature vector and a plurality of attribute group feature vectors of the face image to be retrieved, wherein the plurality of attribute group feature vectors comprise a global attribute group feature vector and at least one local attribute group feature vector.
In the embodiments of the present application, the preprocessed face image is input into the trained multi-task convolutional neural network, and its forward computation produces a plurality of corresponding outputs comprising an identity feature vector and a plurality of attribute group feature vectors, where the attribute group feature vectors comprise a global attribute group feature vector and at least one local attribute group feature vector. Specifically, the global attribute group feature vector may be the full-face attribute group feature vector, and the local attribute group feature vectors may include the upper face, middle face, and lower face attribute group feature vectors. The attribute groups are defined as shown in Table 1, and each attribute group may be defined by one, several, or all of the attributes in the corresponding attribute range in the table.
Table 1:

Attribute group | Attribute range
Upper face attribute group | bushy eyebrows, arched eyebrows, bags under the eyes, narrow eyes, black hair, gray hair, bangs, baldness, receding hairline, hat, glasses
Middle face attribute group | big nose, pointy nose, rosy cheeks, high cheekbones, sideburns, earrings
Lower face attribute group | big lips, double chin, stubble, goatee, slightly open mouth, mustache, lipstick
Full-face attribute group (global) | male, smiling, attractive, round face, oval face, heavy makeup, pale face, curly hair, young
Step S303: compare the identity feature vector and the attribute group feature vectors of the face image to be retrieved with the identity feature vector and the attribute group feature vectors of each face image stored in the database, to obtain an identity feature vector comparison result, a global attribute group feature vector comparison result, and a local attribute feature vector comparison result between the face image to be retrieved and the face images stored in the database.
In the embodiments of the present application, the identity feature vector of each candidate face image prestored in the database is compared one by one with the identity feature vector of the face image to be retrieved, and the identity feature similarity is computed. In a specific implementation, the similarity can be computed from the cosine distance between the feature vectors, yielding an identity feature vector similarity score between the face image to be retrieved and each candidate face image prestored in the database; the higher the similarity score, the more similar the two face images, and the higher the probability that they belong to the same person.
The attribute group feature vectors of each candidate face image prestored in the database are likewise compared one by one with the attribute group feature vectors of the face image to be retrieved, and the feature similarity of each attribute group is computed.
In a specific implementation, because each attribute group feature vector, like the identity feature vector, is a one-dimensional floating-point vector, the similarity can be computed from the cosine distance between attribute group feature vectors; the higher the similarity, the more similar the two face images are in the category of that attribute group. The upper face, middle face, and lower face attribute groups are local attribute groups, and the full-face attribute group is the global attribute group; correspondingly, the upper face, middle face, and lower face attribute group feature vectors are local attribute group feature vectors, and the full-face attribute group feature vector is the global attribute group feature vector. When the similarity of any local attribute group feature vector of two images is high, the two images are relatively close in the local attribute group category corresponding to that feature vector. When the global attribute group feature vector similarity of the two images is high, the two images are relatively close in the global attribute group category. If the similarity of each local attribute group feature vector and that of the global attribute group feature vector are all high, the probability that the two images belong to the same person is high.
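A minimal sketch of this comparison step, assuming the database entries produced by register_faces above and using cosine similarity (higher scores meaning more similar vectors):

```python
import numpy as np

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two one-dimensional float vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def compare(query, entry):
    """Scores of the query against one database entry: the identity
    score, the global (full-face) score, and the local group scores."""
    score_id = similarity(query["id"], entry["id"])
    score_gattrib = similarity(query["attrib"][0], entry["attrib"][0])
    score_attrib = [similarity(q, e)
                    for q, e in zip(query["attrib"][1:], entry["attrib"][1:])]
    return score_id, score_gattrib, score_attrib
```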
Step S304: screen the target face image out of the database according to the identity feature vector comparison result, the global attribute group feature vector comparison result, and the local attribute feature vector comparison result.
In the embodiments of the present application, the identity feature vector comparison result, the global attribute group feature vector comparison result, and the local attribute feature vector comparison result are, respectively, the identity feature vector similarity score, the global attribute group feature vector similarity score, and the local attribute feature vector similarity score corresponding to each face image stored in the database. This step specifically comprises the following steps:
step S401, screening a candidate image set in the database according to the identity feature vector similarity score and the global attribute group feature vector similarity score.
In the embodiment of the application, firstly, the face images stored in the database are screened according to the identity feature similarity and the global attribute feature similarity (full-face attribute feature similarity). For convenience of illustration, the identity feature vector similarity Score is designated as Score _ id, the global attribute feature vector similarity Score is designated as Score _ gAttrib, the identity feature vector similarity threshold is designated as T _ id, and the global attribute feature vector similarity threshold is designated as T _ gAttrib, where T _ id >0 and T _ gAttrib > 0.
Firstly, face images meeting first screening conditions are screened out from the database, and a rough screening face image set is obtained. The first screening conditions were: score _ id is more than or equal to T _ id, and Score _ gAttrib is more than or equal to T _ gAttrib, the rough-screen face image set is recorded as L0, and the number of face images in L0 is N0.
Then, performing secondary screening on the coarse screening face image set L0, and screening face images meeting second screening conditions in the coarse screening face image set to obtain a candidate image set. The second screening conditions were: and (2) taking the fusion similarity Score Score1 ═ Score _ id + Score _ gAttrib)/2 as a Score standard, sorting the Score standard according to the order of Score1 from large to small, and taking the largest N1 as a candidate image set L1, wherein N1 is less than or equal to N0.
In summary, the face images meeting the first screening condition are screened in the database to obtain the coarsely screened face image set, where the first screening condition is that the identity feature vector similarity score of a face image is greater than or equal to the preset identity feature vector similarity threshold and its global attribute group feature vector similarity score is greater than or equal to the preset global attribute group feature vector similarity threshold;
and the face images meeting the second screening condition are screened out of the coarsely screened face image set to obtain the candidate image set, where the second screening condition selects the N1 face images with the largest first fused similarity score Score1 in the coarsely screened face image set.
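A sketch of this coarse screening, continuing from compare above; T_id, T_gAttrib, and N1 are tunable parameters whose values the embodiments do not prescribe:

```python
def coarse_screen(query, database, t_id, t_gattrib, n1):
    """First screening by the two thresholds, then keep the N1 images
    with the largest fused score Score1 = (Score_id + Score_gAttrib)/2."""
    l0 = []
    for image_id, entry in database.items():
        score_id, score_gattrib, score_attrib = compare(query, entry)
        if score_id >= t_id and score_gattrib >= t_gattrib:
            score1 = (score_id + score_gattrib) / 2
            l0.append((score1, image_id, score_id, score_gattrib, score_attrib))
    l0.sort(key=lambda item: item[0], reverse=True)
    return l0[:n1]  # the candidate image set L1
```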
Step S402: screen the target face image out of the candidate image set according to the identity feature vector similarity score and the local attribute feature vector similarity score, or according to the identity feature vector similarity score, the global attribute group feature vector similarity score, and the local attribute feature vector similarity score.
In the embodiments of the present application, the local attribute group feature vectors include the upper face attribute group feature vector, the middle face attribute group feature vector, and the lower face attribute group feature vector. Correspondingly, the local attribute group feature vector similarity scores include an upper face attribute group feature vector similarity score, a middle face attribute group feature vector similarity score, and a lower face attribute group feature vector similarity score.
For convenience, the upper face, middle face, and lower face attribute group feature vector similarity scores are denoted Score_Attrib1, Score_Attrib2, and Score_Attrib3 respectively, and the corresponding similarity thresholds T_Attrib1, T_Attrib2, and T_Attrib3, where T_Attrib1 > 0, T_Attrib2 > 0, and T_Attrib3 > 0.
Each face image in the candidate image set L1 is traversed to determine whether the following 3 conditions are satisfied. Condition 1: Score_Attrib1 ≥ T_Attrib1; condition 2: Score_Attrib2 ≥ T_Attrib2; condition 3: Score_Attrib3 ≥ T_Attrib3. If conditions 1-3 are all met, the locally salient feature score of the current candidate face image is Score_loc = max(Score_Attrib1, Score_Attrib2, Score_Attrib3); if only two conditions i and j are met, Score_loc = max(Score_Attribi, Score_Attribj), where i, j ∈ {1, 2, 3}; if only one condition i is met, Score_loc = Score_Attribi, where i ∈ {1, 2, 3}; if none of the three conditions is met, Score_loc = 0.
If Score_loc of the current candidate image is not 0, its final fused similarity score is Score2 = w1*Score_id + w2*Score_loc; otherwise Score2 = w1*Score_id + w2*Score_gAttrib, where w1 and w2 are weights, w1 > 0, w2 > 0, and w1 + w2 = 1. After the final fused similarity score has been computed for all N1 candidate face images in the candidate image set L1, they are re-sorted in descending order of Score2, and the N2 images with the largest Score2 are selected as the final retrieval result images.
In summary, the embodiments of the present application select the face images with the largest second fused similarity score Score2 from the candidate image set as the target face images. If, among the at least one local attribute group feature vector corresponding to a face image, there are local attribute group feature vectors meeting the threshold screening condition, the maximum local attribute group feature vector similarity score among them is taken as Score_loc; if none of them meets the threshold screening condition, the global attribute group feature vector similarity score is taken as Score_loc. The threshold screening condition is that the local attribute group feature vector similarity score is greater than or equal to the preset local attribute group feature vector similarity threshold.
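A sketch of this fine screening, continuing from coarse_screen above; the thresholds t_attrib and the weights w1 and w2 are assumed tunable parameters:

```python
def fine_screen(candidates, t_attrib, w1, w2, n2):
    """Rank candidates by Score2 = w1*Score_id + w2*Score_loc, falling
    back to the global score when no local group passes its threshold."""
    ranked = []
    for score1, image_id, score_id, score_gattrib, score_attrib in candidates:
        passing = [s for s, t in zip(score_attrib, t_attrib) if s >= t]
        score_loc = max(passing) if passing else score_gattrib
        score2 = w1 * score_id + w2 * score_loc
        ranked.append((score2, image_id))
    ranked.sort(key=lambda item: item[0], reverse=True)
    return ranked[:n2]  # the target face images
```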
The face image retrieval method described above has the following advantages:
1. The multi-task convolutional neural network model provided by the embodiments of the present application can extract a plurality of feature vectors, one per task, and compare these feature vectors directly with the database samples instead of processing attribute classification results derived from them, thereby avoiding the influence of unreliable attribute classification and achieving higher robustness.
2. The invention designs a fault-tolerant mechanism for similarity scores in the process of comparing the plurality of feature vectors. First, the identity feature similarity score and the full-face attribute similarity score are combined for coarse screening, which reduces the omissions caused by comparison based on identity feature similarity alone. Second, ranking by a fused similarity score that combines the identity feature similarity score and the full-face attribute similarity score excludes candidate images with a low probability of belonging to the same face as the query image. Finally, a locally salient feature score selected from the local face attribute similarity scores serves as a strong supplement to the identity feature similarity score and assists the face comparison to obtain the final candidate face image set.
For example, when a face image with glasses is compared with a face image of the same person without glasses in the database, the upper face attribute group feature similarity score may not be high, but the middle face and lower face attribute groups are unaffected and can still produce a high locally salient feature score to improve the final comparison. Likewise, when a face image with a mask is compared with a face image of the same person without a mask in the database, a high locally salient feature score can be produced from the upper face attribute group features, improving the final comparison. Moreover, even when faces of the same person at different ages are compared and the identity feature similarity is insufficient, some attributes are little affected by age change, so a high locally salient feature score can be produced from a local attribute group feature with high similarity, or a high score can be produced from the global attribute group features, to assist the identity feature comparison and improve the success rate of comparing the same face across different ages.
Corresponding to the above method embodiments, the present application further provides a face image retrieval device based on deep learning. Fig. 5 is a schematic structural diagram of the face image retrieval device based on deep learning provided by an embodiment of the present application. As shown in Fig. 5, the device mainly includes the following modules.
The preprocessing module 501 is configured to perform the preprocessing operation on the face image to be retrieved. It is specifically configured to:
a. detect the face position and the key point positions in the face image to be retrieved;
b. perform pose correction and illumination correction on the face image to be retrieved according to its face position and key point positions.
The feature extraction module 502 is configured to input the preprocessed face image to be retrieved into the trained multi-task convolutional neural network model based on local sharing to obtain an identity feature vector and a plurality of attribute group feature vectors of the face image to be retrieved, where the plurality of attribute group feature vectors comprise a global attribute group feature vector and at least one local attribute group feature vector.
In the embodiments of the present application, the preprocessed face image is input into the trained multi-task convolutional neural network, and its forward computation produces a plurality of corresponding outputs comprising an identity feature vector and a plurality of attribute group feature vectors, where the attribute group feature vectors comprise a global attribute group feature vector and at least one local attribute group feature vector.
The comparison and analysis module 503 is configured to compare the identity feature vector and the attribute group feature vectors of the face image to be retrieved with the identity feature vector and the attribute group feature vectors of each face image stored in the database, to obtain an identity feature vector comparison result, a global attribute group feature vector comparison result, and a local attribute feature vector comparison result between the face image to be retrieved and the face images stored in the database.
In a specific implementation, the similarity can be computed from the cosine distance between the feature vectors, yielding an identity feature vector similarity score between the face image to be retrieved and each candidate face image prestored in the database; the higher the similarity score, the more similar the two face images, and the higher the probability that they belong to the same person. In addition, because each attribute group feature vector, like the identity feature vector, is a one-dimensional floating-point vector, the similarity can likewise be computed from the cosine distance between attribute group feature vectors; the higher the similarity, the more similar the two face images are in the category of that attribute group.
The screening module 504 is configured to screen the target face image out of the database according to the identity feature vector comparison result, the global attribute group feature vector comparison result, and the local attribute feature vector comparison result. It is specifically configured to:
a. screen a candidate image set in the database according to the identity feature vector similarity score and the global attribute group feature vector similarity score:
screening the face images meeting the first screening condition in the database to obtain a coarsely screened face image set, where the first screening condition is that the identity feature vector similarity score of a face image is greater than or equal to the preset identity feature vector similarity threshold and its global attribute group feature vector similarity score is greater than or equal to the preset global attribute group feature vector similarity threshold;
and screening the face images meeting the second screening condition out of the coarsely screened face image set to obtain the candidate image set, where the second screening condition selects the N1 face images with the largest first fused similarity score Score1 in the coarsely screened face image set, Score1 = (Score_id + Score_gAttrib)/2, Score_id is the identity feature vector similarity score, Score_gAttrib is the global attribute group feature vector similarity score, N1 ≤ N0, and N0 is the number of face images in the coarsely screened face image set.
b. screen the target face image out of the candidate image set according to the identity feature vector similarity score and the local attribute feature vector similarity score, or according to the identity feature vector similarity score, the global attribute group feature vector similarity score, and the local attribute feature vector similarity score:
screening the N2 face images with the largest second fused similarity score Score2 out of the candidate image set as the target face images, where N2 ≥ 1 (for example, N2 may be 1, 5, or 10; those skilled in the art may set it according to actual requirements);
wherein Score2 = w1*Score_id + w2*Score_loc, w1 and w2 are weights, w1 > 0, w2 > 0, and w1 + w2 = 1;
and if, among the at least one local attribute group feature vector corresponding to a face image, there are local attribute group feature vectors meeting the threshold screening condition, the maximum local attribute group feature vector similarity score among them is taken as Score_loc; if none of them meets the threshold screening condition, the global attribute group feature vector similarity score is taken as Score_loc; the threshold screening condition is that the local attribute group feature vector similarity score is greater than or equal to the preset local attribute group feature vector similarity threshold.
The face image retrieval device provided by the embodiments of the present application has the following advantages:
1. The multi-task convolutional neural network model provided by the embodiments of the present application can extract a plurality of feature vectors, one per task, and compare these feature vectors directly with the database samples instead of processing attribute classification results derived from them, thereby avoiding the influence of unreliable attribute classification and achieving higher robustness.
2. The invention designs a fault-tolerant mechanism for similarity scores in the process of comparing the plurality of feature vectors. First, the identity feature similarity score and the full-face attribute similarity score are combined for coarse screening, which reduces the omissions caused by comparison based on identity feature similarity alone. Second, ranking by a fused similarity score that combines the identity feature similarity score and the full-face attribute similarity score excludes candidate images with a low probability of belonging to the same face as the query image. Finally, a locally salient feature score selected from the local face attribute similarity scores serves as a strong supplement to the identity feature similarity score and assists the face comparison to obtain the final candidate face image set.
Corresponding to the above embodiments, the present application further provides a terminal for face image retrieval. Fig. 6 is a schematic structural diagram of a terminal according to an embodiment of the present application. As shown in Fig. 6, the terminal 600 may include a processor 610, a memory 620, and a communication unit 630. These components communicate via one or more buses. Those skilled in the art will appreciate that the structure shown in the figure does not limit the application: it may be a bus architecture or a star architecture, and it may include more or fewer components than shown, combine certain components, or arrange the components differently.
The communication unit 630 is configured to establish a communication channel so that the storage device can communicate with other devices, receiving user data sent by other devices or sending user data to other devices.
The processor 610, as the control center of the storage device, connects the various parts of the entire electronic device using various interfaces and lines, and performs the various functions of the electronic device and/or processes data by running or executing the software programs and/or modules stored in the memory 620 and calling the data stored in the memory. The processor may be composed of integrated circuits (ICs), for example a single packaged IC, or several packaged ICs with the same or different functions connected together. For example, the processor 610 may include only a central processing unit (CPU). In the embodiments of the present application, the CPU may have a single computing core or may include multiple computing cores.
The memory 620 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or a magnetic or optical disk.
The executable instructions in memory 620, when executed by processor 610, enable terminal 600 to perform some or all of the steps in the above-described method embodiments.
In specific implementations, the present application further provides a computer storage medium. The computer storage medium may store a program which, when executed, performs some or all of the steps of the embodiments provided in the present application. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), or a random access memory (RAM).
Those skilled in the art will clearly understand that the techniques in the embodiments of the present application may be implemented by software together with a necessary general-purpose hardware platform. Based on this understanding, the technical solutions in the embodiments of the present application, or the part of them that contributes over the prior art, may essentially be embodied as a software product. The computer software product may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, or an optical disk, and includes several instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the embodiments, or in parts of the embodiments, of the present application.
The embodiments in this specification may refer to one another for their common and similar parts. In particular, since the terminal embodiment is substantially similar to the method embodiment, its description is relatively brief; for the relevant details, refer to the description of the method embodiment.
The above-described embodiments of the present application do not limit the scope of the present application.

Claims (7)

1. A face image retrieval method based on deep learning is characterized by comprising the following steps:
inputting a preprocessed face image to be retrieved into a trained multi-task convolutional neural network model based on local sharing, to obtain an identity feature vector and a plurality of attribute group feature vectors of the face image to be retrieved, wherein the plurality of attribute group feature vectors comprise a global attribute group feature vector and at least one local attribute group feature vector;
comparing the identity feature vector and the attribute group feature vectors of the face image to be retrieved with the identity feature vector and the attribute group feature vectors of each face image stored in a database, respectively, to obtain an identity feature vector comparison result, a global attribute group feature vector comparison result and a local attribute feature vector comparison result between the face image to be retrieved and the face images stored in the database;
screening a target face image from the database according to the identity feature vector comparison result, the global attribute group feature vector comparison result and the local attribute feature vector comparison result;
wherein the screening of the target face image from the database according to the identity feature vector comparison result, the global attribute group feature vector comparison result and the local attribute feature vector comparison result comprises:
screening a candidate image set in the database according to the identity characteristic vector comparison result and the global attribute group characteristic vector comparison result;
screening a target face image from the candidate image set according to the identity characteristic vector comparison result and the local attribute characteristic vector comparison result, or according to the identity characteristic vector comparison result, the global attribute group characteristic vector comparison result and the local attribute characteristic vector comparison result;
the identity feature vector comparison result, the global attribute group feature vector comparison result and the local attribute feature vector comparison result are respectively an identity feature vector similarity score, a global attribute group feature vector similarity score and a local attribute feature vector similarity score corresponding to the face image stored in the database;
the screening of the candidate image set in the database according to the identity feature vector comparison result and the global attribute group feature vector comparison result comprises:
screening, in the database, face images meeting a first screening condition to obtain a coarse-screened face image set, wherein the first screening condition comprises that the identity feature vector similarity score of a face image is greater than or equal to a preset identity feature vector similarity threshold and its global attribute group feature vector similarity score is greater than or equal to a preset global attribute group feature vector similarity threshold;
screening face images meeting a second screening condition from the coarse-screened face image set to obtain the candidate image set, wherein the second screening condition selects the N1 face images with the largest first fusion similarity score Score1 in the coarse-screened face image set, where Score1 = (Score_id + Score_gAttrib) / 2, Score_id is the identity feature vector similarity score, Score_gAttrib is the global attribute group feature vector similarity score, N1 ≤ N0, and N0 is the number of face images in the coarse-screened face image set;
the screening of the target face image from the candidate image set according to the identity feature vector comparison result and the local attribute feature vector comparison result, or according to the identity feature vector comparison result, the global attribute group feature vector comparison result and the local attribute feature vector comparison result, comprises:
screening out the face image with the largest second fusion similarity score Score2 from the candidate image set as the target face image;
wherein Score2 = w1 * Score_id + w2 * Score_loc, w1 and w2 are weight values, w1 > 0, w2 > 0, and w1 + w2 = 1;
if, among the at least one local attribute group feature vector corresponding to a face image, there is a local attribute group feature vector meeting a threshold screening condition, the maximum local attribute group feature vector similarity score among those meeting the condition is taken as Score_loc; if none of the local attribute group feature vectors corresponding to the face image meets the threshold screening condition, the global attribute group feature vector similarity score is taken as Score_loc, wherein the threshold screening condition is that the local attribute group feature vector similarity score is greater than or equal to a preset local attribute group feature vector similarity threshold.
2. The method of claim 1, wherein the at least one local attribute group feature vector comprises:
an upper-face attribute group feature vector, a middle-face attribute group feature vector and a lower-face attribute group feature vector.
3. The method of claim 2, wherein:
the attributes characterized by the upper-face attribute group feature vector comprise an eyebrow attribute, an eye attribute, a hair color attribute, a hairstyle attribute and/or an upper-face accessory attribute;
the attributes characterized by the middle-face attribute group feature vector comprise a nose attribute, a cheek attribute, a cheekbone attribute, a temple attribute and/or a middle-face accessory attribute;
the attributes characterized by the lower-face attribute group feature vector comprise a lip attribute, a chin attribute, a beard attribute, a mouth attribute and/or a lower-face accessory attribute;
and the attributes characterized by the global attribute group feature vector comprise a gender attribute, an expression attribute, a face shape attribute, a complexion attribute, a hairstyle attribute and/or an age attribute.
4. The method of claim 1, wherein the database building process comprises:
inputting face images to be registered into the trained multi-task convolutional neural network model based on local sharing, to obtain an identity feature vector and a plurality of attribute group feature vectors corresponding to each of the face images to be registered;
and storing the identity feature vector and the plurality of attribute group feature vectors corresponding to each face image in the database.
5. The method according to claim 1, wherein the preprocessing operation on the face image to be retrieved comprises:
detecting the face position and the key point positions in the face image to be retrieved;
and performing pose correction and light correction on the face image to be retrieved according to the detected face position and key point positions.
6. A face image retrieval device based on deep learning is characterized by comprising:
a feature extraction module, configured to input the preprocessed face image to be retrieved into a trained multi-task convolutional neural network model based on local sharing, to obtain an identity feature vector and a plurality of attribute group feature vectors of the face image to be retrieved, wherein the plurality of attribute group feature vectors comprise a global attribute group feature vector and at least one local attribute group feature vector;
a comparison analysis module, configured to compare the identity feature vector and the attribute group feature vectors of the face image to be retrieved with the identity feature vector and the attribute group feature vectors of each face image stored in a database, respectively, to obtain an identity feature vector comparison result, a global attribute group feature vector comparison result and a local attribute feature vector comparison result between the face image to be retrieved and the face images stored in the database;
and a screening module, configured to screen a target face image from the database according to the identity feature vector comparison result, the global attribute group feature vector comparison result and the local attribute feature vector comparison result.
7. A terminal, comprising:
a processor;
a memory for storing instructions for execution by the processor;
wherein the processor is configured to perform the method of any one of claims 1-5.
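By way of illustration of the preprocessing recited in claim 5, the following hedged sketch uses OpenCV, a toolchain the application does not prescribe. Face and key point detection are assumed to be supplied by an external detector (for example, an MTCNN-style network), so only the two eye positions are taken as inputs; an eye-levelling rotation stands in for the pose correction, and luminance equalization for the light correction:

```python
import cv2
import numpy as np

def preprocess(img_bgr, left_eye, right_eye, out_size=(112, 112)):
    """Illustrative pose and light correction: rotate the face so the
    eyes are horizontal, then equalize the luminance channel."""
    # Pose correction: rotate about the midpoint between the eyes.
    dy = right_eye[1] - left_eye[1]
    dx = right_eye[0] - left_eye[0]
    angle = float(np.degrees(np.arctan2(dy, dx)))
    center = ((left_eye[0] + right_eye[0]) / 2.0,
              (left_eye[1] + right_eye[1]) / 2.0)
    rot = cv2.getRotationMatrix2D(center, angle, 1.0)
    h, w = img_bgr.shape[:2]
    aligned = cv2.warpAffine(img_bgr, rot, (w, h))
    # Light correction: equalize luminance only, preserving color.
    ycrcb = cv2.cvtColor(aligned, cv2.COLOR_BGR2YCrCb)
    ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])
    corrected = cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
    return cv2.resize(corrected, out_size)
```

The corrected, resized crop would then be fed to the locally shared multi-task network to extract the identity and attribute group feature vectors.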
CN201810856269.8A 2018-07-31 2018-07-31 Face image retrieval method and device based on deep learning and terminal Expired - Fee Related CN108829900B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810856269.8A CN108829900B (en) 2018-07-31 2018-07-31 Face image retrieval method and device based on deep learning and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810856269.8A CN108829900B (en) 2018-07-31 2018-07-31 Face image retrieval method and device based on deep learning and terminal

Publications (2)

Publication Number Publication Date
CN108829900A CN108829900A (en) 2018-11-16
CN108829900B (en) 2020-11-10

Family

ID=64152313

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810856269.8A Expired - Fee Related CN108829900B (en) 2018-07-31 2018-07-31 Face image retrieval method and device based on deep learning and terminal

Country Status (1)

Country Link
CN (1) CN108829900B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109583403A (en) * 2018-12-06 2019-04-05 联想(北京)有限公司 Image processing method, processor and electronic equipment
CN109658194A (en) * 2018-12-20 2019-04-19 焦点科技股份有限公司 A kind of lead referral method and system based on video frequency tracking
CN109711357A (en) * 2018-12-28 2019-05-03 北京旷视科技有限公司 A kind of face identification method and device
CN109993102B (en) * 2019-03-28 2021-09-17 北京达佳互联信息技术有限公司 Similar face retrieval method, device and storage medium
CN110287890A (en) * 2019-06-26 2019-09-27 银河水滴科技(北京)有限公司 A kind of recognition methods and device based on gait feature and pedestrian's weight identification feature
CN111368101B (en) * 2020-03-05 2021-06-18 腾讯科技(深圳)有限公司 Multimedia resource information display method, device, equipment and storage medium
CN111985360A (en) * 2020-08-05 2020-11-24 上海依图网络科技有限公司 Face recognition method, device, equipment and medium
CN114120386A (en) * 2020-08-31 2022-03-01 腾讯科技(深圳)有限公司 Face recognition method, device, equipment and storage medium
CN112241689A (en) * 2020-09-24 2021-01-19 北京澎思科技有限公司 Face recognition method and device, electronic equipment and computer readable storage medium
CN112364827B (en) * 2020-11-30 2023-11-10 腾讯科技(深圳)有限公司 Face recognition method, device, computer equipment and storage medium
CN112417197B (en) * 2020-12-02 2022-02-25 云从科技集团股份有限公司 Sorting method, sorting device, machine readable medium and equipment
CN112417198A (en) * 2020-12-07 2021-02-26 武汉柏禾智科技有限公司 Face image retrieval method
CN112949599B (en) * 2021-04-07 2022-01-14 青岛民航凯亚系统集成有限公司 Candidate content pushing method based on big data
CN113269125B (en) * 2021-06-10 2024-05-14 北京中科闻歌科技股份有限公司 Face recognition method, device, equipment and storage medium
CN113688764A (en) * 2021-08-31 2021-11-23 瓴盛科技有限公司 Training method and device for face optimization model and computer readable medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014229012A (en) * 2013-05-21 2014-12-08 沖電気工業株式会社 Person attribute estimation apparatus, and person attribute estimation method and program
CN105404877A (en) * 2015-12-08 2016-03-16 商汤集团有限公司 Human face attribute prediction method and apparatus based on deep study and multi-task study
CN106815566A (en) * 2016-12-29 2017-06-09 天津中科智能识别产业技术研究院有限公司 A kind of face retrieval method based on multitask convolutional neural networks
CN106874877A (en) * 2017-02-20 2017-06-20 南通大学 A kind of combination is local and global characteristics without constraint face verification method
CN107145857A (en) * 2017-04-29 2017-09-08 深圳市深网视界科技有限公司 Face character recognition methods, device and method for establishing model

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106022317A (en) * 2016-06-27 2016-10-12 北京小米移动软件有限公司 Face identification method and apparatus
US10289822B2 (en) * 2016-07-22 2019-05-14 Nec Corporation Liveness detection for antispoof face recognition
CN107330359A (en) * 2017-05-23 2017-11-07 深圳市深网视界科技有限公司 A kind of method and apparatus of face contrast

Also Published As

Publication number Publication date
CN108829900A (en) 2018-11-16

Similar Documents

Publication Publication Date Title
CN108829900B (en) Face image retrieval method and device based on deep learning and terminal
CN111310624B (en) Occlusion recognition method, occlusion recognition device, computer equipment and storage medium
WO2021077984A1 (en) Object recognition method and apparatus, electronic device, and readable storage medium
CN106815566B (en) Face retrieval method based on multitask convolutional neural network
US10402632B2 (en) Pose-aligned networks for deep attribute modeling
WO2019128508A1 (en) Method and apparatus for processing image, storage medium, and electronic device
CN106778450B (en) Face recognition method and device
EP3989104A1 (en) Facial feature extraction model training method and apparatus, facial feature extraction method and apparatus, device, and storage medium
WO2020140723A1 (en) Method, apparatus and device for detecting dynamic facial expression, and storage medium
CN112364827B (en) Face recognition method, device, computer equipment and storage medium
CN111368672A (en) Construction method and device for genetic disease facial recognition model
CN105335719A (en) Living body detection method and device
CN109858375A (en) Living body faces detection method, terminal and computer readable storage medium
CN111368751A (en) Image processing method, image processing device, storage medium and electronic equipment
CN107316029A (en) A kind of live body verification method and equipment
CN111553838A (en) Model parameter updating method, device, equipment and storage medium
Zhang et al. Facial component-landmark detection with weakly-supervised lr-cnn
CN113468925B (en) Occlusion face recognition method, intelligent terminal and storage medium
CN112766065A (en) Mobile terminal examinee identity authentication method, device, terminal and storage medium
CN111460416A (en) WeChat applet platform-based human face feature and dynamic attribute authentication method
CN112101479B (en) Hair style identification method and device
Shukla et al. Deep Learning Model to Identify Hide Images using CNN Algorithm
CN115830720A (en) Living body detection method, living body detection device, computer equipment and storage medium
Kainz et al. Students’ Attendance Monitoring through the Face Recognition
CN112149598A (en) Side face evaluation method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20201110