CN106815566B - Face retrieval method based on multitask convolutional neural network - Google Patents

Face retrieval method based on multitask convolutional neural network

Info

Publication number
CN106815566B
CN106815566B
Authority
CN
China
Prior art keywords
face
face image
neural network
convolutional neural
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201611242736.5A
Other languages
Chinese (zh)
Other versions
CN106815566A (en)
Inventor
孙哲南
赫然
谭铁牛
宋凌霄
曹冬
李琦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin Zhongke Intelligent Identification Co ltd
Original Assignee
Tianjin Zhongke Intelligent Identification Industry Technology Research Institute Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin Zhongke Intelligent Identification Industry Technology Research Institute Co ltd filed Critical Tianjin Zhongke Intelligent Identification Industry Technology Research Institute Co ltd
Priority to CN201611242736.5A priority Critical patent/CN106815566B/en
Publication of CN106815566A publication Critical patent/CN106815566A/en
Application granted granted Critical
Publication of CN106815566B publication Critical patent/CN106815566B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a face retrieval method based on a multitask convolutional neural network, which comprises the following steps: for any face image, detecting the face position and the key point positions; preprocessing the face image; pre-establishing a multitask convolutional neural network and then training it; inputting the preprocessed face image into the trained multitask convolutional neural network; pre-establishing a face feature database; computing identity-feature similarity between the face image and the face feature database to obtain a candidate face image list; computing the similarity of a plurality of attribute feature expression vectors between the face image and the candidate face images; normalizing and fusing the similarity scores; and sorting by the fused similarity scores to obtain the retrieval result. The invention ensures high-quality recognition of face images while quickly and effectively determining the corresponding user identities for large numbers of face images, meeting people's requirements for the face recognition function.

Description

Face retrieval method based on multitask convolutional neural network
Technical Field
The invention relates to the technical fields of artificial intelligence, pattern recognition, digital image processing and the like, and in particular to a face retrieval method based on a multitask convolutional neural network.
Background
At present, with the continuous development of science and technology, face recognition technology is increasingly widespread in people's daily life. It has long been a leading-edge and popular technology and occupies an important position in artificial intelligence research and public safety applications.
As a class of biometric recognition technology, face recognition has good development and application prospects owing to its non-contact nature and ease of acquisition. Face recognition technology plays an important role in various application scenarios, such as airport security inspection and border inspection clearance. With the rapid development of internet finance in recent years, face recognition technology also has great application advantages in mobile payment. The purpose of face recognition is to determine the identity of a user from an acquired face image or video of the user. At present, face recognition technology still cannot meet practical requirements in outdoor, uncontrolled environments; its main difficulties are illumination changes, changes in user pose and expression, changes in age and body shape, and occlusion.
In recent years, deep learning has achieved remarkable results in many fields of machine vision. The most notable model is the convolutional neural network, which uses multiple convolutional layers and pooling layers to extract effective hierarchical features from image or video data and achieves strong nonlinear expressiveness. Convolutional neural networks clearly outperform traditional methods in object classification, action recognition, image segmentation, face recognition and other fields. In some low-level vision problems, such as image denoising, image super-resolution enhancement and image deblurring, deep learning techniques have also achieved good results. In the field of face recognition, methods based on neural networks and deep learning have likewise attracted attention for their excellent performance, and the leading face recognition algorithms at home and abroad are now mostly based on deep learning models. A deep-learning-based face recognition method generally consists of two steps: first, a neural network model computes a feature expression of the input face image; then, the recognition result is obtained according to the similarity between feature expressions.
However, with the advent of the big data age, the amount of data to be processed is often very large. As database capacity grows, the probability of similar faces appearing in the database increases. When face recognition is performed with existing methods, the probability of false recognition in one-to-one face verification rises sharply; the requirement for fast and accurate recognition cannot be met, the overall working efficiency and quality of face recognition are low, and the user experience suffers.
Therefore, there is an urgent need to develop a technology that can quickly and effectively identify and judge the user identities corresponding to a large number of face images while ensuring high-quality identification of the face images (including the face images and videos), so as to meet the requirements of people on the face identification function, improve the overall work efficiency and the work quality of the face identification, and save precious time of people.
Disclosure of Invention
In view of the above, the present invention provides a face retrieval method based on a multitask convolutional neural network, which can perform high-quality identification on a face image, and at the same time, quickly and effectively perform corresponding user identity identification and judgment on a large number of face images, so as to meet the requirements of people on the face identification function, improve the overall work efficiency and work quality of face identification, save precious time of people, facilitate improvement of product use experience of people, and have great production practice significance.
Therefore, the invention provides a face retrieval method based on a multitask convolutional neural network, which comprises the following steps:
the first step is as follows: detecting and acquiring the face position of any one face image needing face recognition, and detecting and acquiring the key point position of the face image according to the acquired face position;
the second step is that: preprocessing the face image according to the face position and the key point position of the face image;
the third step: the method comprises the steps of establishing a multitask convolutional neural network in advance, inputting a human face image with a preset standard size into the multitask convolutional neural network, and training the multitask convolutional neural network until a model of the multitask convolutional neural network is converged to finish training;
the fourth step: inputting the preprocessed face image into the trained multitask convolutional neural network to obtain an identity feature expression vector and a plurality of attribute feature expression vectors corresponding to the face image;
the fifth step: a face feature database is established in advance, and a plurality of candidate face images with known user identities, identity feature expression vectors corresponding to each candidate face image and a plurality of attribute feature expression vectors are stored in the face feature database in advance;
and a sixth step: comparing the identity feature expression vectors corresponding to the face images with the identity feature expression vectors of each candidate face image prestored in the face feature database one by one, calculating the similarity in real time, correspondingly sorting the obtained similarity according to the sequence of the similarity from large to small, and outputting a plurality of candidate face images prestored in the face feature database so as to obtain a candidate face image list;
the seventh step: comparing the attribute feature expression vectors corresponding to the face image with the attribute feature expression vectors corresponding to each candidate face image in the candidate face image list one by one and calculating the similarity in real time to obtain similarity scores between the face image and each candidate face image;
eighth step: respectively carrying out normalization processing on a plurality of similarity scores between the face image and each candidate face image and then carrying out score fusion to obtain a fusion similarity score between the face image and each candidate face image;
the ninth step: and reordering a plurality of candidate face images in the candidate face image list according to the sequence from large to small of the fusion similarity score, wherein the candidate face image list obtained after reordering is the result of face recognition retrieval of the face images.
Wherein the second step specifically comprises the steps of:
the method comprises the steps of predefining a key point position and an illumination condition of a standard face;
aligning the key point position of the face image to the key point position of a standard face through a preset image transformation algorithm;
and performing light ray correction on the aligned face image through a preset image processing algorithm, so that the illumination condition of the aligned face image is changed to the illumination condition of the standard face.
The method specifically comprises the following steps of predefining the positions of key points and lighting conditions of a standard face:
and obtaining the key point position and the illumination condition of the standard human face by averaging calculation according to the key point position information and the illumination condition of a plurality of human face images in advance.
The multitask convolutional neural network comprises an input layer, a plurality of preset convolutional layers, a plurality of preset pooling layers, a plurality of preset fully connected layers and an output layer, which sequentially process the input face image.
In the third step, the step of inputting the face image with the preset standard size into the multitask convolutional neural network and training the multitask convolutional neural network comprises the following substeps:
inputting a human face image with a preset standard size into a plurality of preset human face attribute feature classification loss functions, and then calculating loss values of a plurality of human face attribute features corresponding to the human face image with the preset standard size;
and jointly using the loss values of the plurality of face attribute features corresponding to the face image of the preset standard size to adjust, through back propagation, all the weights in the multitask convolutional neural network, so that the weighted sum of the losses is minimized.
The plurality of preset face attribute feature classification loss functions comprise preset face identity classification loss functions, face gender classification loss functions and face age classification loss functions;
the face identity classification loss function is a softmax loss function, the face gender classification loss function is a hinge loss function, and the face age classification loss function is a square loss function.
Wherein the formula of the softmax loss function is as follows:
L_{Identity} = -\sum_{i=1}^{N} y_i^{Identity} \log \hat{y}_i^{Identity}(x)

where N is the number of categories, x is the input face image, y^{Identity} ∈ R^{N×1} is a category vector representing the category of the face image, and \hat{y}_i^{Identity}(x) represents the output of the i-th node of the face identity classifier learned by the multitask convolutional neural network;
the formula of the hinge loss function is as follows:

L_{Gender} = \max\left(0,\; 1 - y^{Gender}\,\hat{y}^{Gender}(x)\right)

where y^{Gender} ∈ {-1, +1} is a label representing the gender of the face image, and \hat{y}^{Gender}(x) is the gender prediction output by the multitask convolutional neural network for the input face image;
the formula of the square loss function is as follows:
L_{Age} = \left(y^{Age} - \hat{y}^{Age}(x)\right)^{2}

where y^{Age} is the true age value of the face image, and \hat{y}^{Age}(x) is the age prediction output by the multitask convolutional neural network for the input face image.
In the third step, the step of training the convolutional neural network specifically includes:
inputting any one preset human face image with standard size and corresponding human face identity and attribute label information into an input layer of the convolutional neural network, extracting a characteristic value of the human face image by a convolutional layer and a pooling layer of the convolutional neural network, and then outputting the characteristic value from an output layer;
sending the feature values of the face image of the preset standard size into classifiers for classification to obtain the face identity and attribute label information judged by the convolutional neural network, and calculating the loss values of the feature values of the face image by comparing the judged label information with the true face identity and attribute label information of the face image;
and reversely adjusting all weights in the convolutional neural network by using the loss value of the characteristic value of the human face image with the preset standard size, and finishing the training of the convolutional neural network.
Wherein, in the eighth step, the formula of the normalization process is as follows:
s_{new} = \frac{s - \mu}{\sigma}

where s is a similarity score, μ is the mean of the similarity scores, σ is the variance of the scores, and s_{new} is the similarity score after normalization.
Compared with the prior art, the human face retrieval method based on the multitask convolutional neural network can quickly and effectively identify and judge the user identities of a large number of human face images while ensuring high-quality identification of the human face images, meets the requirements of people on human face identification functions, improves the overall working efficiency and working quality of human face identification, saves precious time of people, is beneficial to improving the product use experience of people, and has great production practice significance.
Drawings
FIG. 1 is a flow chart of a face retrieval method based on a multitask convolutional neural network provided by the invention;
fig. 2 is a schematic structural diagram of an embodiment of each component in a constructed multitask convolutional neural network in the human face retrieval method based on the multitask convolutional neural network provided by the invention;
fig. 3 is a table diagram of the recognition accuracy of face recognition on the existing public world face recognition evaluation database LFW based on the face retrieval method based on the multitask convolutional neural network provided by the invention.
Detailed Description
In order that those skilled in the art will better understand the technical solution of the present invention, the following detailed description of the present invention is provided in conjunction with the accompanying drawings and embodiments.
FIG. 1 is a flow chart of a face retrieval method based on a multitask convolutional neural network provided by the invention;
referring to fig. 1, the face retrieval method based on the multitask convolutional neural network provided by the invention comprises the following steps:
the first step is as follows: detecting and acquiring the face position of any face image (including a face image and a video) needing face recognition, and detecting and acquiring the key point position of the face image according to the acquired face position;
in the present invention, in the first step, it should be noted that the existing face detection algorithm may be applied to detect the face position in the face image, and the key point position of the face image is obtained according to the face position obtained by the detection.
In the present invention, the key points of the face image are predefined key points, and may be, for example, two eyes, a nose tip, a mouth contour, a face periphery contour, and the like.
In the present invention, it should be noted that the existing face detection algorithm may be an existing face detector, such as a face detector based on Haar-like features and AdaBoost proposed by Viola et al, or an object detector based on a neural network, such as R-CNN or FCN. Existing face detection algorithms can be used to detect whether a face is present in an image.
In the present invention, it should be noted that the face position refers to information of a position of a face in an image, and generally includes pixel coordinates of an upper left corner (or a center point) of the face in the image, and a length and a width of the face. The key point positions are coordinate values of some preset human face key points, and usually the key points comprise important parts on human face features such as eyes and facial contours. Therefore, the face position information can be used for indicating the position of the face in the image, the key point position can be used for indicating the posture and expression of the face, the face image is corrected by using the information, and a normalized face image is obtained so as to facilitate the later face feature extraction.
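As a purely illustrative sketch of this first step (not part of the patent), the snippet below detects a face box with OpenCV's classical Haar cascade detector, one of the detectors mentioned above, and treats the key-point detector as a pluggable component; the landmark_model interface and the fallback landmark positions are assumptions made only for illustration.

```python
# Hypothetical sketch of the first step: face box detection plus key points.
# The Haar cascade is one of the classical detectors mentioned above; the
# landmark_model interface and the fallback positions are assumptions.
import cv2
import numpy as np

def detect_face_and_landmarks(image_bgr, landmark_model=None):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(boxes) == 0:
        return None, None                                  # no face in the image
    x, y, w, h = max(boxes, key=lambda b: b[2] * b[3])     # keep the largest face
    if landmark_model is not None:
        landmarks = landmark_model(gray, (x, y, w, h))     # assumed interface
    else:
        # Crude fixed positions (two eyes, nose tip, two mouth corners),
        # expressed relative to the detected box, purely as placeholders.
        landmarks = np.float32([[x + 0.30 * w, y + 0.40 * h],
                                [x + 0.70 * w, y + 0.40 * h],
                                [x + 0.50 * w, y + 0.60 * h],
                                [x + 0.35 * w, y + 0.80 * h],
                                [x + 0.65 * w, y + 0.80 * h]])
    return (x, y, w, h), landmarks
```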
The second step is that: preprocessing the face image according to the face position and the key point position of the face image;
in the present invention, the second step specifically includes the steps of:
the method comprises the steps of predefining a key point position and an illumination condition of a standard face;
aligning the key point position of the face image to the key point position of a standard face through a preset image transformation algorithm so as to achieve the aim of correcting the face posture;
and performing light correction on the aligned face image through a preset image processing algorithm, so that the illumination condition of the aligned face image is changed to the illumination condition of the standard face (namely, the aligned face image is consistent with the standard face through the light correction, for example, a gamma value can be used for correcting and adjusting the image pixel value, so that the processed image has proper contrast, and the details of the face are clear and visible).
In the second step, in terms of specific implementation, the pre-defining the key point position and the illumination condition of a standard face specifically includes: the key point position and the illumination condition of the standard face (i.e., the average face) can be obtained by averaging calculation according to the key point position information and the illumination condition of the plurality of face images.
In the second step, it should be noted that the operation times of the posture correction and the ray correction are not limited, and the sequence can be adjusted.
In the present invention, it should be noted that the illumination condition refers to an illumination environment of a face image (including a face image and a video) during shooting, which is represented in the image or the video, that is, brightness of the image (which can be simply understood as a pixel gray value). The preset standard illumination condition of the face image generally ensures that five sense organs of the face are clearly visible and cannot be too bright or too dark.
In a specific implementation of the present invention, the preset image transformation algorithm may be a basic image transformation method such as similarity transformation and affine transformation, or may be a combination of these basic image transformations.
In the present invention, the "alignment" operation refers to correcting a face image with pose or expression variation to the standard face. The standard face is typically a frontal, expressionless face image. According to the acquired key point positions of the face image, alignment is realized by applying operations such as similarity transformation and affine transformation to the image, so that the positions of the facial features of the aligned face image are basically consistent with those of the standard face.
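A minimal sketch of this second-step preprocessing, assuming a five-point landmark layout and a 128×128 standard face; the template coordinates and the gamma-based light correction are illustrative choices, not values taken from the patent.

```python
# Illustrative preprocessing: similarity-transform alignment to a predefined
# standard face, followed by a simple gamma-based illumination correction.
import cv2
import numpy as np

# Assumed 5-point standard-face template for a 128x128 crop (pixel coords).
STANDARD_FACE = np.float32([[43, 52], [85, 52], [64, 76], [48, 100], [80, 100]])

def preprocess_face(image_bgr, landmarks, size=(128, 128)):
    src = np.float32(landmarks)
    # A partial affine estimate is a similarity transform (rotation + uniform
    # scale + translation); it maps the detected key points onto the template.
    matrix, _ = cv2.estimateAffinePartial2D(src, STANDARD_FACE)
    aligned = cv2.warpAffine(image_bgr, matrix, size)
    # Gamma correction moves the mean brightness toward mid-gray, standing in
    # for the "standard illumination condition" described in the text.
    mean = np.clip(cv2.cvtColor(aligned, cv2.COLOR_BGR2GRAY).mean(), 1.0, 254.0) / 255.0
    gamma = np.log(0.5) / np.log(mean)
    corrected = np.clip(((aligned / 255.0) ** gamma) * 255.0, 0, 255)
    return corrected.astype(np.uint8)
```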
The third step: the method comprises the steps that a multitask convolutional neural network is established in advance, the multitask convolutional neural network comprises an input layer, a plurality of preset convolutional layers, a plurality of preset pooling layers, a plurality of preset full-connection layers and an output layer, wherein the input layer, the plurality of preset convolutional layers, the plurality of preset pooling layers, the plurality of preset full-connection layers and the output layer are sequentially processed on a face image, referring to the figure 2, then the face image with a preset standard size is input into the multitask convolutional neural network, and the multitask convolutional neural network is trained until a model of the multitask convolutional neural network is converged to finish training;
in the present invention, as shown in fig. 2, in a specific implementation, the multitask convolutional neural network includes a plurality of convolutional layers (e.g., convolutional layer 1, convolutional layer 2, convolutional layer 3, convolutional layer 4, convolutional layer 5, and convolutional layer 6 shown in fig. 2), a plurality of pooling layers (e.g., five pooling layers shown in fig. 2), and a plurality of fully-connected layers, where between any two adjacent layers, an output of a previous layer is an input of a next layer. The convolution layer is used for performing convolution operation on input data (namely input face images), the convolution filter realizes that the same parameters are shared at all positions of the image, the parameter quantity of the network model is reduced, and the characteristic response image of the input face image can be obtained through the convolution layer. The pooling layer is used for performing maximum pooling operation on input data (namely, the input face image), so that data dimensionality is effectively reduced, and meanwhile, the spatial invariance of the extracted features is enhanced. The input and output of the full-connection layer are all connected one by one, more global information can be extracted, and high-dimensional features can be converted into compact one-dimensional feature vectors in a full-connection mode.
For the multitask convolution neural network, the first layer is a convolution layer, and the preprocessed human face image is used as input. The tail end of the multitask convolution neural network is provided with a plurality of full connection layers which respectively correspond to loss functions of a plurality of tasks.
In the present invention, it should be noted that the multitask convolutional neural network uses the preprocessed face image as input, and can simultaneously obtain the face identity feature expression and various attribute information (i.e. face identity and attribute label information) of the face image. The identity features are expressed as one-dimensional feature vectors, and the similarity of the feature vectors can be used for measuring the similarity of the human face. The attribute information includes gender, age, ethnicity, and other auxiliary attribute information related to the identity information. The automatic learning of the convolutional neural network overcomes the defect that the manual design of a filter in the traditional method is time-consuming and labor-consuming. The multitask convolutional neural network can share most bottom layer connections, so that network parameters are effectively reduced, and calculation cost is reduced. Meanwhile, information among a plurality of tasks is mutually supplemented, so that the multi-task convolutional neural network can autonomously learn more robust and effective information.
It should be noted that the output of the trained multi-task convolutional neural network is an identity feature expression vector and a plurality of attribute feature expression vectors (e.g., gender feature vector, age feature vector) corresponding to the input face image.
It should be further noted that, for the multitask convolutional neural network, joint learning of the neural network is performed through joint supervision of multiple tasks, so that the accuracy and the robustness of the neural network can be improved. Specifically, the multi-task convolutional neural network takes the classification of the face images as a main task, takes gender estimation, age estimation, ethnic classification and the like as auxiliary tasks, and the information of a plurality of tasks is supplemented with each other, so that the overall effect is improved.
Specifically, the optimization goal of the multitask convolutional neural network is to minimize the weighted sum of the subtask losses.
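To make the shared-trunk/multi-head structure concrete, the sketch below shows one possible PyTorch layout; the layer counts, channel widths and feature dimension are assumptions for illustration, not the configuration specified by fig. 2.

```python
# Illustrative multitask CNN: a shared stack of convolution/pooling layers
# followed by fully connected heads for identity, gender and age.
import torch.nn as nn

class MultiTaskFaceNet(nn.Module):
    def __init__(self, num_identities, feat_dim=256):
        super().__init__()
        self.trunk = nn.Sequential(                      # shared bottom layers
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(256, feat_dim), nn.ReLU(),         # identity feature expression
        )
        self.identity_head = nn.Linear(feat_dim, num_identities)  # softmax loss
        self.gender_head = nn.Linear(feat_dim, 1)                 # hinge loss
        self.age_head = nn.Linear(feat_dim, 1)                    # square loss

    def forward(self, x):
        feat = self.trunk(x)
        return feat, self.identity_head(feat), self.gender_head(feat), self.age_head(feat)
```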
In the present invention, in the third step, the step of inputting the face image with a preset standard size into the multitask convolutional neural network and training the multitask convolutional neural network includes the following substeps:
inputting a human face image with a preset standard size into a plurality of preset human face attribute characteristic classification loss functions such as a preset human face identity classification loss function, a human face gender classification loss function and a human face age classification loss function, and then calculating loss values of a plurality of human face attribute characteristics corresponding to the human face image with the preset standard size;
and reversely adjusting all weights (namely the weights in a plurality of preset face attribute feature classification loss functions, namely the weights of each sub-classification task) in the multitask convolutional neural network by using the loss values of the plurality of face attribute features corresponding to the face image with the preset standard size together so as to minimize the weighted loss sum of all the weights.
In a specific implementation aspect of the present invention, in the third step, the step of inputting a face image with a preset standard size into the multitask convolutional neural network and training the multitask convolutional neural network may specifically include the following sub-steps:
inputting any one preset human face image with standard size and corresponding human face identity and attribute label information into an input layer of the convolutional neural network, extracting a characteristic value of the human face image by a convolutional layer and a pooling layer of the convolutional neural network, and then outputting the characteristic value from an output layer;
sending the feature values of the face image of the preset standard size into classifiers for classification to obtain the face identity and attribute label information judged by the convolutional neural network, and calculating the loss values of the feature values of the face image by comparing the judged label information with the true face identity and attribute label information of the face image;
and reversely adjusting all weights in the convolutional neural network by using the loss value of the characteristic value of the human face image with the preset standard size, and finishing the training of the convolutional neural network.
In the invention, in concrete implementation, the face image classification task divides the face images into different categories according to different individual identities, and the task can use a series of classification loss functions represented by softmax loss functions as optimization targets, namely the face identity classification loss functions can be softmax loss functions. The formula of the softmax loss function may be as follows:
L_{Identity} = -\sum_{i=1}^{N} y_i^{Identity} \log \hat{y}_i^{Identity}(x)

where N is the number of categories, x is the input face image, y^{Identity} ∈ R^{N×1} is a category vector representing the category of the face image, and \hat{y}_i^{Identity}(x) represents the output of the i-th node of the face identity classifier learned by the multitask convolutional neural network.
In the invention, in concrete implementation, the face gender estimation task divides face images into two categories according to gender, and this task can use a binary classification loss function represented by the hinge loss as the optimization target, i.e. the face gender classification loss function can be a hinge loss function. The formula of the hinge loss function may be as follows:

L_{Gender} = \max\left(0,\; 1 - y^{Gender}\,\hat{y}^{Gender}(x)\right)

where y^{Gender} ∈ {-1, +1} is a label representing the gender of the face image, and \hat{y}^{Gender}(x) is the gender prediction output by the multitask convolutional neural network for the input face image.
In the present invention, in a concrete implementation, the human face age estimation task is to predict the age of a human face according to a human face image, which is a regression task. The task may use a series of regression loss functions, represented by square loss, as the optimization target, i.e., the face age classification loss function may be a square loss function. The formula of the square loss function may be as follows:
L_{Age} = \left(y^{Age} - \hat{y}^{Age}(x)\right)^{2}

where y^{Age} is the true age value of the face image, and \hat{y}^{Age}(x) is the age prediction output by the multitask convolutional neural network for the input face image.
It should be noted that, in the present invention, the identity classification, gender classification, and age estimation task are not the only task composition form of the multitask convolutional neural network, and subtasks may be replaced by ethnic classification, hair style identification, and the like. The subtasks of the multitask convolutional neural network are also not limited to three, but may be any number of combinations of a plurality. The optimization goal of the entire multitask convolutional neural network is the weighted sum of the subtasks, as follows:
L = \lambda_{I} L_{I} + \lambda_{G} L_{G} + \lambda_{A} L_{A} + \cdots

where each λ is the loss weight of the corresponding subtask.
It should be further noted that the value of the weight λ of each subtask is not fixed and may be determined according to attribute values of the input face image. Taking the weight λ_G of the gender estimation task as an example, the magnitude of λ_G is related to the age of the individual in the input image; for an infant the weight λ_G is lower than for an adult.
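A sketch of the weighted objective above, assuming the cross-entropy form of the softmax loss, a hinge loss on ±1 gender labels, and a squared loss on age; the default weight values are placeholders, not values taken from the patent.

```python
# Weighted multitask loss L = w_id*L_I + w_gender*L_G + w_age*L_A (sketch).
import torch
import torch.nn.functional as F

def multitask_loss(id_logits, gender_score, age_pred,
                   id_label, gender_label, age_label,
                   w_id=1.0, w_gender=0.3, w_age=0.3):
    loss_id = F.cross_entropy(id_logits, id_label)                    # softmax loss
    margins = 1.0 - gender_label.float() * gender_score.squeeze(1)    # hinge loss
    loss_gender = torch.clamp(margins, min=0).mean()
    loss_age = F.mse_loss(age_pred.squeeze(1), age_label.float())     # square loss
    return w_id * loss_id + w_gender * loss_gender + w_age * loss_age
```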
The fourth step: inputting the preprocessed face image into the trained multitask convolutional neural network to obtain corresponding output (specifically, the output can be obtained by executing forward calculation of the multitask convolutional neural network), wherein the output is an identity feature expression vector and a plurality of attribute feature expression vectors corresponding to the face image;
the fifth step: a face feature database is established in advance, and a plurality of candidate face images with known user identities, identity feature expression vectors corresponding to each candidate face image and a plurality of attribute feature expression vectors (such as gender feature vectors and age feature vectors) are stored in the face feature database in advance;
in the present invention, it should be noted that the pre-established face feature database is a face database pre-established by a user, and may be, for example, a world face recognition and evaluation database LFW.
In the fifth step, the construction process of the face feature database is specifically as follows: the face image to be registered is input into the multitask convolutional neural network, forward calculation is executed, and the output of the multitask convolutional neural network is stored in the feature database. The identity feature expression vector is a one-dimensional floating-point vector, while the form of each attribute feature expression vector differs slightly depending on the specific task: for classification tasks such as gender classification and ethnicity classification, the feature vector is an integer value characterizing the class, whereas for the age estimation regression task, the feature vector is a floating-point value indicating the age. If information such as the age and gender of the registrant can be acquired when the database is built, the attribute feature vectors of the registrant can be constructed directly from this real information.
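The registration (database construction) procedure described above might be sketched as follows; the dictionary layout and the thresholding of the gender score are assumptions for illustration, and preprocess is assumed to align the image and convert it to a normalized tensor.

```python
# Illustrative construction of the face feature database (fifth step).
import torch

def build_feature_database(model, registered_images, identities, preprocess):
    model.eval()
    database = []
    with torch.no_grad():
        for img, person_id in zip(registered_images, identities):
            x = preprocess(img).unsqueeze(0)                 # 1 x C x H x W tensor
            feat, _, gender_score, age_pred = model(x)
            database.append({
                "identity": person_id,                        # known user identity
                "id_feature": feat.squeeze(0).cpu().numpy(),  # identity feature vector
                "gender": int(gender_score.item() > 0),       # attribute feature
                "age": float(age_pred.item()),                # attribute feature
            })
    return database
```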
And a sixth step: comparing the identity feature expression vectors corresponding to the face images with the identity feature expression vectors of each candidate face image prestored in the face feature database one by one, calculating the similarity in real time, correspondingly sorting the obtained similarity according to the sequence of the similarity from large to small, and outputting a plurality of candidate face images prestored in the face feature database so as to obtain a candidate face image list;
in the present invention, in particular, the similarity of the identity feature expression vector is expressed by using a similarity score value. The similarity score can be calculated in a variety of ways, such as from the cosine distance or the negative Euclidean distance of the feature vector. The higher the similarity score is, the more similar the two face images are, and the more likely it is that the two face images belong to the same person.
In the present invention, in the sixth step, for the identity feature vector, the calculation of the similarity s usually uses a cosine distance or a negative euclidean distance.
The cosine distance score is calculated as:

s(x_1, x_2) = \frac{f_1^{\top} f_2}{\lVert f_1 \rVert \, \lVert f_2 \rVert}

where f_1, f_2 ∈ R^d are the d-dimensional identity feature vectors of the two face images x_1 and x_2, respectively.
The negative Euclidean distance score is calculated as:

s(x_1, x_2) = -\lVert f_1 - f_2 \rVert

where f_1, f_2 ∈ R^d are the d-dimensional identity feature vectors of the two face images x_1 and x_2, respectively.
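The identity-feature similarity and the candidate-list retrieval of the sixth step can be sketched as below; cosine similarity is the default and the negative Euclidean distance is shown as the alternative, and the top-50 cut-off follows the embodiment described later in the text.

```python
# Identity-feature similarity scores and candidate-list retrieval (sketch).
import numpy as np

def cosine_score(f1, f2):
    return float(np.dot(f1, f2) / (np.linalg.norm(f1) * np.linalg.norm(f2) + 1e-12))

def neg_euclidean_score(f1, f2):
    return -float(np.linalg.norm(f1 - f2))

def candidate_list(query_feature, database, top_k=50, score_fn=cosine_score):
    scored = [(entry, score_fn(query_feature, entry["id_feature"]))
              for entry in database]
    scored.sort(key=lambda pair: pair[1], reverse=True)  # larger = more similar
    return scored[:top_k]                                # candidate face image list
```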
The seventh step: comparing the plurality of attribute feature expression vectors (obtained through the fourth step) corresponding to the face image with the plurality of attribute feature expression vectors corresponding to each candidate face image in the candidate face image list one by one and calculating the similarity in real time to obtain a plurality of similarity scores (namely the similarity scores between the plurality of attribute feature expression vectors) between the face image and each candidate face image;
in the present invention, it should be noted that the similarity of the attribute features is expressed by using the similarity score value. The higher the similarity score is, the more similar the two face images are in a certain attribute, and the more likely the two face images belong to the same person.
It should be noted that each face image has a plurality of attribute feature vectors. The similarity is calculated by each attribute vector.
For the age estimation feature vector, the similarity s uses a negative squared distance, calculated as:

s(x_1, x_2) = -\left(a_1 - a_2\right)^{2}

where a_1 and a_2 are the age estimates (age estimation feature vectors) of the two face images x_1 and x_2, respectively.
For ethnicity or gender attribute feature vectors, the similarity s is determined directly by whether the two images belong to the same class, calculated as:

s(x_1, x_2) = \begin{cases} 1, & c_1 = c_2 \\ 0, & c_1 \neq c_2 \end{cases}

where c_1 and c_2 are the estimates of the attribute class of the two face images x_1 and x_2, respectively.
Eighth step: respectively carrying out normalization processing on a plurality of similarity scores between the face image and each candidate face image, and then carrying out score fusion to obtain a fusion similarity score (also called actual similarity score defined by the application) between the face image and each candidate face image;
in the present invention, in the eighth step, the similarity scores s are normalized, and the formula of the normalization process is as follows:

s_{new} = \frac{s - \mu}{\sigma}

where μ is the mean of the similarity scores, σ is the variance of the scores, and s_{new} is the similarity score after normalization.
In the present invention, it should be noted that the similarity scores between large-scale face images are counted, and the mean and variance of the similarity scores can be calculated. Mu is the mean of the similarity scores and sigma is the variance of the scores.
In the present invention, the purpose of the normalization processing is to ensure that the similarity scores of each subtask (identity feature expression, age, gender, etc.) have a substantially uniform distribution, and avoid that the similarity scores of some tasks are too high or too low, which may result in too large or too small an effect on determining whether the faces are the same.
In the invention, the score fusion is a similarity score obtained by integrating a plurality of tasks to obtain a final similarity. Each face image has a plurality of feature vectors including identity feature vectors and attribute feature vectors. The similarity is calculated for each vector. Considering that various tasks have different effectiveness and reliability for identity information judgment, the similarity is summed according to different weights (for example, compared with gender classification, the ethnic classification task has lower accuracy, lower reliability and smaller weight), and the final similarity is obtained by fusion. The fused similarity is also a numerical value. The larger the value of the fusion similarity is, the higher the probability that the two faces are from the same person is represented.
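A sketch of this normalization and fusion, together with the re-ranking of the ninth step described next, is given below; it assumes per-task score statistics (μ, σ) have been estimated on a large score sample as described above, that the attribute similarity scores of the seventh step are already computed, and that the fusion weights are illustrative placeholders.

```python
# Score normalization, weighted fusion and re-ranking (eighth/ninth steps, sketch).
def z_normalize(score, mean, std):
    return (score - mean) / std                      # s_new = (s - mu) / sigma

def fuse_and_rerank(candidates, attribute_scores, stats, weights=None):
    """candidates: list of (entry, identity_score) from the candidate list;
    attribute_scores[i]: {"gender": s, "age": s} for the i-th candidate;
    stats[name]: (mean, std) for that score type; weights are illustrative."""
    weights = weights or {"identity": 1.0, "gender": 0.3, "age": 0.3}
    fused = []
    for (entry, id_score), attrs in zip(candidates, attribute_scores):
        total = weights["identity"] * z_normalize(id_score, *stats["identity"])
        for name in ("gender", "age"):
            total += weights[name] * z_normalize(attrs[name], *stats[name])
        fused.append((entry, total))
    fused.sort(key=lambda pair: pair[1], reverse=True)  # final retrieval result
    return fused
```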
The ninth step: according to the sequence from large to small of the fusion similarity scores, a plurality of candidate face images in the candidate face image list are reordered, and the candidate face image list obtained after reordering is the result of face recognition retrieval of the face images (generally, the user identity corresponding to the candidate face image with the maximum fusion similarity is the user identity of the face image needing face recognition).
It should be noted that, for the present invention, in order to improve the accuracy of the face recognition algorithm and ensure the rapidity and high efficiency of the recognition algorithm, the present invention provides a face retrieval method based on a multitask convolutional neural network, which considers that a face image database usually records attribute information of gender, age, ethnicity, etc. of a user while being constructed, and the information can play a role of auxiliary recognition during face recognition, so that the present invention can improve the accuracy and robustness of the neural network by jointly supervising the joint learning of the neural network by a face image classification task and auxiliary tasks such as gender estimation, age estimation, ethnicity classification, etc. The information of a plurality of tasks is mutually supplemented, and the overall effect of the recognition/retrieval algorithm is improved. Meanwhile, due to the specific connection mode of the multitask convolutional neural network, most part of neural network connection is shared among a plurality of tasks, so that the trained multitask convolutional neural network can simultaneously complete different tasks, the calculation cost is effectively reduced, and the operation speed is increased.
The invention provides a face retrieval method based on a multitask convolutional neural network, aiming at the problems of face recognition and face retrieval. Compared with the traditional face recognition method, the method integrates multiple items of identity information for recognition, and meanwhile, the arrangement of a public network can ensure that the calculation cost is low and the calculation speed is high.
Example (b): in order to explain the specific implementation mode of the invention in detail and verify the effectiveness of the invention, the method provided by the invention is applied to an open face database, namely an LFW face database. The database contains a total of 13233 images of 5749 individuals.
In an embodiment, the present invention employs the BLUFR protocol for LFW data sets to demonstrate the effectiveness of the present invention. The BLUFR test protocol specifies two test scenarios: 10-fold cross validation face comparison experiments and open set face recognition experiments. In the face comparison experiment, 9708 face images of 4249 persons are included in each verification on average (the face images and videos are collectively called face images). In open set face recognition, the registered image library contains 1000 different person images, the retrieval database contains 4249 persons, wherein only 1000 person images have corresponding images in the registered library, and the images of the rest 3249 persons do not appear in the registered library.
The method comprises the following specific steps:
firstly, the training process can be as follows: a large number of face images are collected as training data and a neural network model is designed. In particular, the multitask convolutional neural network model used in the present invention includes five convolutional layers and four pooling layers, and the network structure is shown in fig. 2. The multitask convolutional neural network performs three tasks simultaneously: face image classification, face gender classification and age estimation. The loss functions used by the three tasks are the softmax loss, the hinge loss and the square loss, respectively, and the loss weights are 1, 0.3 and 0.3, respectively. The learning rate is continuously adjusted as training proceeds until the training loss no longer decreases, thereby obtaining the final model.
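One training iteration under the loss weights just stated (1, 0.3, 0.3) could look like the following sketch, which reuses the multitask_loss sketch from earlier; the optimizer, momentum, learning-rate schedule and the layout of the data loader are assumptions and not values disclosed in the patent.

```python
# Illustrative training loop; hyperparameters are assumed, not from the patent.
import torch

def train(model, loader, epochs=30, lr=0.01):
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)
    for _ in range(epochs):
        for images, id_label, gender_label, age_label in loader:
            _, id_logits, gender_score, age_pred = model(images)
            loss = multitask_loss(id_logits, gender_score, age_pred,
                                  id_label, gender_label, age_label,
                                  w_id=1.0, w_gender=0.3, w_age=0.3)
            optimizer.zero_grad()
            loss.backward()       # back-propagate through all shared weights
            optimizer.step()
        scheduler.step()          # lower the learning rate as training proceeds
    return model
```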
The test process can comprise the following steps:
step S1: firstly, carrying out face detection and key point detection on all input face images to obtain face position information and key point position information of all input images;
step S2: and carrying out preprocessing operations such as posture correction and illumination balance on the face image according to the face position information and the key point information acquired in the last step. Specifically, for the LFW dataset, the present invention corrects the input face image to a frontal face using rotation and scaling;
step S3: taking the preprocessed face image obtained in the step S2 as the input of a multitask convolutional neural network, and executing the forward calculation of a multitask convolutional neural network model to obtain the identity characteristic expression, the gender characteristic expression and the predicted age of the face image in the test set;
step S4: for the face comparison scene, this step is omitted. And for the face retrieval scene, calculating the cosine distance between the retrieved face image and the face identity characteristics of each image in the registry as the similarity of the retrieved face image and each image. According to the similarity, sorting from big to small, and taking the first 50 face images to form a candidate list;
step S5: and for the face comparison scene, calculating face identity similarity, gender similarity and age similarity between the image pairs. And then, summing and fusing all the similarity scores, sequencing and updating the list from large to small, wherein the finally obtained list is the retrieval result of the input face image. And for the face retrieval scene, calculating the gender similarity and the age similarity of each image in the candidate list and the retrieval image. And then, the face identity similarity obtained in the step S4 is added and fused, the list is updated in an order from big to small, and the finally obtained list is the retrieval result.
Fig. 3 shows the correct passing rate of the method of the present invention when the false recognition rate is 0.1% in the face comparison scene, and the top1 accuracy rate when the false recognition rate is 1% in the open set face recognition scene.
The invention discloses a face retrieval method based on a multitask convolutional neural network. The invention provides a multitask neural network structure aiming at the problem that the probability of face false recognition is increased along with the increase of the capacity of a database in face recognition. And combining the human face image comparison and attribute analysis tasks to obtain more robust feature representation and attribute features of the human face image, and integrating various information to perform human face retrieval. The invention utilizes a multi-task learning mechanism, and the obtained model can simultaneously complete different recognition tasks. The specific method comprises the following steps: preprocessing the input face image, and correcting the angle and expression of the face image; extracting the characteristics and attribute labels of the corrected face images (namely the face images and the videos) by using a multitask convolutional neural network; calculating the similarity between the search image and the registered image in the database according to the characteristic expression of the search image to obtain a search result; and reordering the retrieval result according to the similarity and the attribute similarity between the retrieval image and a plurality of registered images (namely candidate face images with a plurality of known user identities) in a pre-established face feature database.
In summary, compared with the prior art, the invention provides a face retrieval method based on a multitask convolutional neural network, which can quickly and effectively perform corresponding user identity identification judgment on a large number of face images while ensuring high-quality identification of the face images, meets the requirements of people on the face identification function, improves the overall working efficiency and working quality of face identification, saves precious time of people, is beneficial to improving the product use experience of people, and has great production practice significance.
By using the technology provided by the invention, the convenience of work and life of people can be greatly improved, and the living standard of people is greatly improved.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, various modifications and decorations can be made without departing from the principle of the present invention, and these modifications and decorations should also be regarded as the protection scope of the present invention.

Claims (9)

1. A face retrieval method based on a multitask convolutional neural network is characterized by comprising the following steps:
the first step is as follows: detecting and acquiring the face position of any one face image needing face recognition, and detecting and acquiring the key point position of the face image according to the acquired face position;
the second step is that: preprocessing the face image according to the face position and the key point position of the face image;
the third step: the method comprises the steps of establishing a multitask convolutional neural network in advance, inputting a human face image with a preset standard size into the multitask convolutional neural network, and training the multitask convolutional neural network until a model of the multitask convolutional neural network is converged to finish training;
the fourth step: inputting the preprocessed face image into the trained multitask convolutional neural network to obtain an identity feature expression vector and a plurality of attribute feature expression vectors corresponding to the face image;
the fifth step: a face feature database is established in advance, and a plurality of candidate face images with known user identities, identity feature expression vectors corresponding to each candidate face image and a plurality of attribute feature expression vectors are stored in the face feature database in advance;
and a sixth step: comparing the identity feature expression vectors corresponding to the face images with the identity feature expression vectors of each candidate face image prestored in the face feature database one by one, calculating the similarity in real time, correspondingly sorting the obtained similarity according to the sequence of the similarity from large to small, and outputting a plurality of candidate face images prestored in the face feature database so as to obtain a candidate face image list;
the seventh step: comparing the attribute feature expression vectors corresponding to the face image with the attribute feature expression vectors corresponding to each candidate face image in the candidate face image list one by one and calculating the similarity in real time to obtain similarity scores between the face image and each candidate face image;
eighth step: respectively carrying out normalization processing on a plurality of similarity scores between the face image and each candidate face image and then carrying out score fusion to obtain a fusion similarity score between the face image and each candidate face image;
the ninth step: and reordering a plurality of candidate face images in the candidate face image list according to the sequence from large to small of the fusion similarity score, wherein the candidate face image list obtained after reordering is the result of face recognition retrieval of the face images.
2. The method according to claim 1, characterized in that said second step comprises in particular the steps of:
the method comprises the steps of predefining a key point position and an illumination condition of a standard face;
aligning the key point position of the face image to the key point position of a standard face through a preset image transformation algorithm;
and performing light ray correction on the aligned face image through a preset image processing algorithm, so that the illumination condition of the aligned face image is changed to the illumination condition of the standard face.
3. The method of claim 2, wherein the predefined key point positions and lighting conditions of a standard face are specifically:
and obtaining the key point position and the illumination condition of the standard human face by averaging calculation according to the key point position information and the illumination condition of a plurality of human face images in advance.
4. The method of claim 1, wherein the multitask convolutional neural network comprises an input layer, a plurality of convolutional layers, a plurality of pooling layers, a plurality of fully connected layers and an output layer, which sequentially process the input face image.
5. The method as claimed in claim 1, wherein in the third step, the step of inputting the face image with the preset standard size into the multitask convolutional neural network and training the multitask convolutional neural network comprises the following sub-steps:
inputting the face image of the preset standard size into a plurality of preset face attribute feature classification loss functions, and calculating the loss values of the plurality of face attribute features corresponding to the face image of the preset standard size;
and jointly using the loss values of the plurality of face attribute features corresponding to the face image of the preset standard size to adjust all weights in the multitask convolutional neural network by back propagation, so that the weighted sum of the losses is minimized.
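A single joint update step can illustrate these sub-steps: each task loss is computed, the weighted sum is formed, and the sum is back-propagated through all shared weights. The task weights, optimizer, and label encodings below (identity as class indices, gender as -1/+1 floats, age as a real value) are assumptions, not part of the claim.

```python
import torch
import torch.nn.functional as F

def joint_training_step(model, optimizer, images, id_labels, gender_labels, age_labels,
                        w_id=1.0, w_gender=0.5, w_age=0.5):
    """One update: the weighted sum of all task losses adjusts all shared weights."""
    _, id_logits, gender_out, age_out = model(images)

    loss_id = F.cross_entropy(id_logits, id_labels)                                      # softmax loss
    loss_gender = torch.clamp(1 - gender_labels * gender_out.squeeze(1), min=0).mean()   # hinge loss
    loss_age = F.mse_loss(age_out.squeeze(1), age_labels)                                # square loss

    total = w_id * loss_id + w_gender * loss_gender + w_age * loss_age
    optimizer.zero_grad()
    total.backward()   # back-propagates jointly through all weights
    optimizer.step()
    return total.item()
```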
6. The method of claim 5, wherein the plurality of preset face attribute feature classification loss functions comprise a preset face identity classification loss function, a face gender classification loss function, and a face age classification loss function;
the face identity classification loss function is a softmax loss function, the face gender classification loss function is a hinge loss function, and the face age classification loss function is a square loss function.
7. The method of claim 6, wherein the softmax loss function has the formula:
L_{Identity}(x) = -\sum_{i=1}^{N} y_{Identity,i} \log \frac{e^{f_i(x)}}{\sum_{j=1}^{N} e^{f_j(x)}}
where N is the number of categories, x is the input face image, y_{Identity} \in R^{N \times 1} is the category vector representing the category of the face image, and f_i(x) represents the output of the i-th node of the face identity classifier learned by the multitask convolutional neural network;
the formula of the hinge loss function is as follows:
L_{Gender}(x) = \max\left(0,\; 1 - y_{Gender} \cdot \hat{y}_{Gender}(x)\right)
where y_{Gender} \in \{-1, +1\} is the label representing the gender of the face image, and \hat{y}_{Gender}(x) is the gender prediction output by the multitask convolutional neural network for the input face image;
the formula of the square loss function is as follows:
L_{Age}(x) = \left(y_{Age} - \hat{y}_{Age}(x)\right)^2
where y_{Age} is the true age value of the face image, and \hat{y}_{Age}(x) is the age prediction output by the multitask convolutional neural network for the input face image.
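Under the standard forms given above for the three losses, the per-sample loss values can be written out directly; the function names and the tiny numerical example are illustrative only.

```python
import numpy as np

def softmax_loss(logits, y_onehot):
    # L_Identity = -sum_i y_i * log(exp(f_i) / sum_j exp(f_j))
    p = np.exp(logits - np.max(logits))
    p = p / p.sum()
    return float(-np.sum(y_onehot * np.log(p + 1e-12)))

def hinge_loss(y_gender, y_pred):
    # L_Gender = max(0, 1 - y * y_hat), with y in {-1, +1}
    return max(0.0, 1.0 - y_gender * y_pred)

def square_loss(y_age, y_pred):
    # L_Age = (y - y_hat)^2
    return (y_age - y_pred) ** 2

# Illustrative values only.
print(softmax_loss(np.array([2.0, 0.5, -1.0]), np.array([1.0, 0.0, 0.0])))
print(hinge_loss(+1, 0.3), square_loss(25.0, 27.5))
```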
8. The method according to claim 1, characterized in that, in the third step, the step of training the convolutional neural network is specifically:
inputting any face image of the preset standard size, together with its corresponding face identity and attribute label information, into the input layer of the convolutional neural network, extracting feature values of the face image through the convolutional layers and pooling layers of the convolutional neural network, and then outputting the feature values from the output layer;
sending the feature values of the face image of the preset standard size into a classifier for classification to obtain the face identity and attribute label information judged by the convolutional neural network, and calculating the loss value of the feature values of the face image by comparing, through a comparison algorithm, the judged label information with the face identity and attribute label information of the face image;
and using the loss value of the feature values of the face image of the preset standard size to adjust all weights in the convolutional neural network by back propagation, thereby completing the training of the convolutional neural network.
9. The method according to any one of claims 1 to 7, wherein in the eighth step, the normalization process is formulated as follows:
s_{new} = \frac{s - \mu}{\sigma}
where \mu is the mean of the similarity scores, \sigma is the variance of the similarity scores, s is the similarity score between the face image and each candidate face image, and s_{new} is the similarity score after the normalization processing.
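Read as a z-score, the normalization of claim 9 can be applied to each set of similarity scores before fusion; the sketch below assumes that the mean and spread are computed over the scores being normalized, which the claim does not state explicitly.

```python
import numpy as np

def normalize_scores(scores):
    """s_new = (s - mu) / sigma, applied to a set of similarity scores."""
    scores = np.asarray(scores, dtype=np.float64)
    mu, sigma = scores.mean(), scores.std()
    return (scores - mu) / (sigma + 1e-12)

# Example: raw similarity scores for three candidate face images.
print(normalize_scores([0.92, 0.55, 0.31]))
```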
CN201611242736.5A 2016-12-29 2016-12-29 Face retrieval method based on multitask convolutional neural network Active CN106815566B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611242736.5A CN106815566B (en) 2016-12-29 2016-12-29 Face retrieval method based on multitask convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611242736.5A CN106815566B (en) 2016-12-29 2016-12-29 Face retrieval method based on multitask convolutional neural network

Publications (2)

Publication Number Publication Date
CN106815566A CN106815566A (en) 2017-06-09
CN106815566B true CN106815566B (en) 2021-04-16

Family

ID=59110449

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611242736.5A Active CN106815566B (en) 2016-12-29 2016-12-29 Face retrieval method based on multitask convolutional neural network

Country Status (1)

Country Link
CN (1) CN106815566B (en)

Families Citing this family (100)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107220990B (en) * 2017-06-22 2020-09-08 成都品果科技有限公司 Hair segmentation method based on deep learning
CN109359499A (en) * 2017-07-26 2019-02-19 虹软科技股份有限公司 A kind of method and apparatus for face classifier
CN107390722A (en) * 2017-07-28 2017-11-24 上海瞬动科技有限公司合肥分公司 A kind of unmanned plane during flying angle intelligent control method
US11341631B2 (en) 2017-08-09 2022-05-24 Shenzhen Keya Medical Technology Corporation System and method for automatically detecting a physiological condition from a medical image of a patient
CN107609493B (en) * 2017-08-25 2021-04-13 广州视源电子科技股份有限公司 Method and device for optimizing human face image quality evaluation model
CN107609497B (en) * 2017-08-31 2019-12-31 武汉世纪金桥安全技术有限公司 Real-time video face recognition method and system based on visual tracking technology
CN107682216B (en) * 2017-09-01 2018-06-05 南京南瑞集团公司 A kind of network traffics protocol recognition method based on deep learning
CN108875489A (en) * 2017-09-30 2018-11-23 北京旷视科技有限公司 Method for detecting human face, device, system, storage medium and capture machine
CN107992795B (en) * 2017-10-27 2021-08-31 江西高创保安服务技术有限公司 Group partner based on population information base and real name call record and its head and target identification method
CN107862383B (en) * 2017-11-09 2021-09-17 睿魔智能科技(深圳)有限公司 Multitask deep learning method and system for human visual perception
CN108229308A (en) 2017-11-23 2018-06-29 北京市商汤科技开发有限公司 Recongnition of objects method, apparatus, storage medium and electronic equipment
CN107844781A (en) * 2017-11-28 2018-03-27 腾讯科技(深圳)有限公司 Face character recognition methods and device, electronic equipment and storage medium
CN107766850B (en) * 2017-11-30 2020-12-29 电子科技大学 Face recognition method based on combination of face attribute information
CN108875515A (en) * 2017-12-11 2018-11-23 北京旷视科技有限公司 Face identification method, device, system, storage medium and capture machine
CN109918976B (en) * 2017-12-13 2021-04-02 航天信息股份有限公司 Portrait comparison algorithm fusion method and device thereof
CN107895160A (en) * 2017-12-21 2018-04-10 曙光信息产业(北京)有限公司 Human face detection and tracing device and method
CN108171692B (en) * 2017-12-26 2021-03-26 安徽科大讯飞医疗信息技术有限公司 Lung image retrieval method and device
CN109978754A (en) * 2017-12-28 2019-07-05 广东欧珀移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN108875533B (en) * 2018-01-29 2021-03-05 北京旷视科技有限公司 Face recognition method, device, system and computer storage medium
CN110110734B (en) * 2018-02-01 2023-04-07 富士通株式会社 Open set identification method, information processing apparatus, and storage medium
CN108492200B (en) * 2018-02-07 2022-06-17 中国科学院信息工程研究所 User attribute inference method and device based on convolutional neural network
CN108108499B (en) * 2018-02-07 2023-05-26 腾讯科技(深圳)有限公司 Face retrieval method, device, storage medium and equipment
CN110147796A (en) * 2018-02-12 2019-08-20 杭州海康威视数字技术股份有限公司 Image matching method and device
CN108334863B (en) * 2018-03-09 2020-09-04 百度在线网络技术(北京)有限公司 Identity authentication method, system, terminal and computer readable storage medium
CN108491773B (en) * 2018-03-12 2022-11-08 中国工商银行股份有限公司 Identification method and system
CN108596839A (en) * 2018-03-22 2018-09-28 中山大学 A kind of human-face cartoon generation method and its device based on deep learning
CN108664989B (en) * 2018-03-27 2019-11-01 北京达佳互联信息技术有限公司 Image tag determines method, apparatus and terminal
CN110263603B (en) * 2018-05-14 2021-08-06 桂林远望智能通信科技有限公司 Face recognition method and device based on central loss and residual error visual simulation network
CN110490295B (en) * 2018-05-15 2022-04-05 华为技术有限公司 Data processing method and processing device
CN110516512B (en) * 2018-05-21 2023-08-25 北京中科奥森数据科技有限公司 Training method of pedestrian attribute analysis model, pedestrian attribute identification method and device
CN108765014A (en) * 2018-05-30 2018-11-06 中海云智慧(北京)物联网科技有限公司 A kind of intelligent advertisement put-on method based on access control system
CN108875016A (en) * 2018-06-20 2018-11-23 上海百林通信网络科技服务股份有限公司 A kind of sample technology of sharing and evaluation method based on face recognition application
CN108921209A (en) * 2018-06-21 2018-11-30 杭州骑轻尘信息技术有限公司 Image identification method, device and electronic equipment
CN108932314A (en) * 2018-06-21 2018-12-04 南京农业大学 A kind of chrysanthemum image content retrieval method based on the study of depth Hash
CN108920622B (en) * 2018-06-29 2021-07-20 北京奇艺世纪科技有限公司 Training method, training device and recognition device for intention recognition
CN109308495B (en) * 2018-07-05 2021-07-02 科亚医疗科技股份有限公司 Apparatus and system for automatically predicting physiological condition from medical image of patient
CN108960167B (en) * 2018-07-11 2023-08-18 腾讯科技(深圳)有限公司 Hairstyle identification method, device, computer readable storage medium and computer equipment
CN109190470B (en) * 2018-07-27 2022-09-27 北京市商汤科技开发有限公司 Pedestrian re-identification method and device
CN108829900B (en) * 2018-07-31 2020-11-10 成都视观天下科技有限公司 Face image retrieval method and device based on deep learning and terminal
CN109063656B (en) * 2018-08-08 2021-08-24 厦门市美亚柏科信息股份有限公司 Method and device for carrying out face query by using multiple face engines
CN109344855B (en) * 2018-08-10 2021-09-24 华南理工大学 Depth model face beauty evaluation method based on sequencing guided regression
CN109344703B (en) * 2018-08-24 2021-06-25 深圳市商汤科技有限公司 Object detection method and device, electronic equipment and storage medium
CN109117808B (en) * 2018-08-24 2020-11-03 深圳前海达闼云端智能科技有限公司 Face recognition method and device, electronic equipment and computer readable medium
CN109344713B (en) * 2018-08-31 2021-11-02 电子科技大学 Face recognition method of attitude robust
CN109190561B (en) * 2018-09-04 2022-03-22 四川长虹电器股份有限公司 Face recognition method and system in video playing
CN109726627B (en) * 2018-09-29 2021-03-23 初速度(苏州)科技有限公司 Neural network model training and universal ground wire detection method
CN109271957B (en) * 2018-09-30 2020-10-20 厦门市巨龙信息科技有限公司 Face gender identification method and device
CN109544737A (en) * 2018-11-01 2019-03-29 深圳市靓工创新应用科技有限公司 User's passing method and system
CN109583831A (en) * 2018-11-09 2019-04-05 考拉征信服务有限公司 Background check method and device
CN109583569B (en) * 2018-11-30 2021-08-31 熵基科技股份有限公司 Multi-mode feature fusion method and device based on convolutional neural network
CN109829371B (en) * 2018-12-26 2022-04-26 深圳云天励飞技术有限公司 Face detection method and device
EP3674974A1 (en) * 2018-12-31 2020-07-01 Samsung Electronics Co., Ltd. Apparatus and method with user verification
US10635918B1 (en) * 2019-01-30 2020-04-28 StradVision, Inc. Method and device for managing smart database for face recognition based on continual learning
CN109829433B (en) * 2019-01-31 2021-06-25 北京市商汤科技开发有限公司 Face image recognition method and device, electronic equipment and storage medium
CN109829520B (en) * 2019-01-31 2021-12-21 北京字节跳动网络技术有限公司 Image processing method and device
CN111523652B (en) * 2019-02-01 2023-05-02 阿里巴巴集团控股有限公司 Processor, data processing method thereof and image pickup device
CN109978016B (en) * 2019-03-06 2022-08-23 重庆邮电大学 Network user identity identification method
CN109800744B (en) * 2019-03-18 2021-08-20 深圳市商汤科技有限公司 Image clustering method and device, electronic equipment and storage medium
CN109993102B (en) * 2019-03-28 2021-09-17 北京达佳互联信息技术有限公司 Similar face retrieval method, device and storage medium
CN110188615B (en) * 2019-04-30 2021-08-06 中国科学院计算技术研究所 Facial expression recognition method, device, medium and system
CN112149449A (en) * 2019-06-26 2020-12-29 北京华捷艾米科技有限公司 Face attribute recognition method and system based on deep learning
CN110263756A (en) * 2019-06-28 2019-09-20 东北大学 A kind of human face super-resolution reconstructing system based on joint multi-task learning
CN110490057B (en) * 2019-07-08 2020-10-27 光控特斯联(上海)信息科技有限公司 Self-adaptive identification method and system based on human face big data artificial intelligence clustering
CN110348416A (en) * 2019-07-17 2019-10-18 北方工业大学 Multi-task face recognition method based on multi-scale feature fusion convolutional neural network
CN110414489A (en) * 2019-08-21 2019-11-05 五邑大学 A kind of face beauty prediction technique based on multi-task learning
CN110826402B (en) * 2019-09-27 2024-03-29 深圳市华付信息技术有限公司 Face quality estimation method based on multitasking
CN110674770A (en) * 2019-09-29 2020-01-10 上海依图网络科技有限公司 System and method for facial expression detection
CN110728234A (en) * 2019-10-12 2020-01-24 爱驰汽车有限公司 Driver face recognition method, system, device and medium
CN110866466B (en) * 2019-10-30 2023-12-26 平安科技(深圳)有限公司 Face recognition method, device, storage medium and server
CN110826525B (en) * 2019-11-18 2023-05-26 天津高创安邦技术有限公司 Face recognition method and system
CN112825119A (en) * 2019-11-20 2021-05-21 北京眼神智能科技有限公司 Face attribute judgment method and device, computer readable storage medium and equipment
TWI781408B (en) * 2019-11-27 2022-10-21 靜宜大學 Artificial intelligence based cell detection method by using hyperspectral data analysis technology
CN110929099B (en) * 2019-11-28 2023-07-21 杭州小影创新科技股份有限公司 Short video frame semantic extraction method and system based on multi-task learning
CN110874587B (en) * 2019-12-26 2020-07-28 浙江大学 Face characteristic parameter extraction system
CN111339869A (en) * 2020-02-18 2020-06-26 北京拙河科技有限公司 Face recognition method, face recognition device, computer readable storage medium and equipment
CN111401171B (en) * 2020-03-06 2023-09-22 咪咕文化科技有限公司 Face image recognition method and device, electronic equipment and storage medium
CN111368772B (en) * 2020-03-11 2023-08-22 杭州海康威视系统技术有限公司 Identity recognition method, device, equipment and storage medium
CN111339990B (en) * 2020-03-13 2023-03-24 乐鑫信息科技(上海)股份有限公司 Face recognition system and method based on dynamic update of face features
CN111401294B (en) * 2020-03-27 2022-07-15 山东财经大学 Multi-task face attribute classification method and system based on adaptive feature fusion
CN111539351B (en) * 2020-04-27 2023-11-03 广东电网有限责任公司广州供电局 Multi-task cascading face frame selection comparison method
CN111582141B (en) * 2020-04-30 2023-05-09 京东方科技集团股份有限公司 Face recognition model training method, face recognition method and device
CN111753641B (en) * 2020-05-07 2023-07-18 中山大学 Gender prediction method based on high-dimensional characteristics of human face
CN111310743B (en) * 2020-05-11 2020-08-25 腾讯科技(深圳)有限公司 Face recognition method and device, electronic equipment and readable storage medium
CN111680595A (en) * 2020-05-29 2020-09-18 新疆爱华盈通信息技术有限公司 Face recognition method and device and electronic equipment
CN111726264B (en) * 2020-06-18 2021-11-19 中国电子科技集团公司第三十六研究所 Network protocol variation detection method, device, electronic equipment and storage medium
CN112633051A (en) * 2020-09-11 2021-04-09 博云视觉(北京)科技有限公司 Online face clustering method based on image search
CN112163497B (en) * 2020-09-22 2023-08-04 广东工业大学 Construction site accident prediction method and device based on image recognition
CN112418078B (en) * 2020-11-20 2021-11-09 北京云从科技有限公司 Score modulation method, face recognition device and medium
CN112487222B (en) * 2020-11-30 2021-11-30 江苏正赫通信息科技有限公司 Method for quickly searching and effectively storing similar human faces
CN112417197B (en) * 2020-12-02 2022-02-25 云从科技集团股份有限公司 Sorting method, sorting device, machine readable medium and equipment
CN112686851B (en) * 2020-12-25 2022-02-08 合肥联宝信息技术有限公司 Image detection method, device and storage medium
CN112699846B (en) * 2021-01-12 2022-06-07 武汉大学 Specific character and specific behavior combined retrieval method and device with identity consistency check function
CN112906508B (en) * 2021-02-01 2024-05-28 四川观想科技股份有限公司 Face living body detection method based on convolutional neural network
CN112949599B (en) * 2021-04-07 2022-01-14 青岛民航凯亚系统集成有限公司 Candidate content pushing method based on big data
CN113298156A (en) * 2021-05-28 2021-08-24 有米科技股份有限公司 Neural network training method and device for image gender classification
CN113378951B (en) * 2021-06-22 2024-06-21 中海石油(中国)有限公司 Visual analogy method, system, readable medium and equipment for oilfield portrait
CN113361506B (en) * 2021-08-11 2022-04-29 中科南京智能技术研究院 Face recognition method and system for mobile terminal
CN113743379B (en) * 2021-11-03 2022-07-12 杭州魔点科技有限公司 Light-weight living body identification method, system, device and medium for multi-modal characteristics
CN114332914A (en) * 2021-11-29 2022-04-12 中国电子科技集团公司电子科学研究院 Personnel feature identification method, device and computer-readable storage medium
CN116610922A (en) * 2023-07-13 2023-08-18 浙江大学滨江研究院 Non-invasive load identification method and system based on multi-strategy learning


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101162499A (en) * 2006-10-13 2008-04-16 上海银晨智能识别科技有限公司 Method for using human face formwork combination to contrast
CN101201894B (en) * 2007-11-06 2010-08-11 重庆大学 Method for recognizing human face from commercial human face database based on gridding computing technology
CN105224929A (en) * 2015-10-10 2016-01-06 北京邮电大学 A kind of method of searching human face photo
CN106022317A (en) * 2016-06-27 2016-10-12 北京小米移动软件有限公司 Face identification method and apparatus

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101510257A (en) * 2009-03-31 2009-08-19 华为技术有限公司 Human face similarity degree matching method and device
CN104112125A (en) * 2014-07-24 2014-10-22 大连大学 Method for identity recognition based on palm print and finger crease feature fusion
CN105117463A (en) * 2015-08-24 2015-12-02 北京旷视科技有限公司 Information processing method and information processing device
CN105740808A (en) * 2016-01-28 2016-07-06 北京旷视科技有限公司 Human face identification method and device
CN105760859A (en) * 2016-03-22 2016-07-13 中国科学院自动化研究所 Method and device for identifying reticulate pattern face image based on multi-task convolutional neural network
CN105956518A (en) * 2016-04-21 2016-09-21 腾讯科技(深圳)有限公司 Face identification method, device and system
CN106127170A (en) * 2016-07-01 2016-11-16 重庆中科云丛科技有限公司 A kind of merge the training method of key feature points, recognition methods and system
CN106203395A (en) * 2016-07-26 2016-12-07 厦门大学 Face character recognition methods based on the study of the multitask degree of depth

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Application of multi-task learning and convolutional neural networks in face recognition; 邵蔚元 et al.; Computer Engineering and Applications; 2016-07-31; Vol. 52, No. 13; pp. 32-37 *

Also Published As

Publication number Publication date
CN106815566A (en) 2017-06-09

Similar Documents

Publication Publication Date Title
CN106815566B (en) Face retrieval method based on multitask convolutional neural network
CN107194341B (en) Face recognition method and system based on fusion of Maxout multi-convolution neural network
CN106372581B (en) Method for constructing and training face recognition feature extraction network
CN108520226B (en) Pedestrian re-identification method based on body decomposition and significance detection
CN108829900B (en) Face image retrieval method and device based on deep learning and terminal
CN111339990B (en) Face recognition system and method based on dynamic update of face features
CN103443804B (en) Method of facial landmark detection
Tang et al. Facial landmark detection by semi-supervised deep learning
CN111178208A (en) Pedestrian detection method, device and medium based on deep learning
CN111274916A (en) Face recognition method and face recognition device
CN107341688A (en) The acquisition method and system of a kind of customer experience
CN111209818A (en) Video individual identification method, system, equipment and readable storage medium
CN105335719A (en) Living body detection method and device
Liu et al. An end-to-end deep model with discriminative facial features for facial expression recognition
CN111028216A (en) Image scoring method and device, storage medium and electronic equipment
CN109325408A (en) A kind of gesture judging method and storage medium
CN115223239A (en) Gesture recognition method and system, computer equipment and readable storage medium
CN114299279A (en) Unmarked group rhesus monkey motion amount estimation method based on face detection and recognition
CN105550642A (en) Gender identification method and system based on multi-scale linear difference characteristic low-rank expression
CN107220612B (en) Fuzzy face discrimination method taking high-frequency analysis of local neighborhood of key points as core
Liu et al. GDMN: Group decision-making network for person re-identification
CN114998966A (en) Facial expression recognition method based on feature fusion
CN111178141B (en) LSTM human body behavior identification method based on attention mechanism
Galiyawala et al. Dsa-pr: discrete soft biometric attribute-based person retrieval in surveillance videos
Zhang et al. Object detection based on deep learning and b-spline level set in color images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: 300457 unit 1001, block 1, msd-g1, TEDA, No.57, 2nd Street, Binhai New Area Economic and Technological Development Zone, Tianjin

Patentee after: Tianjin Zhongke intelligent identification Co.,Ltd.

Address before: 300457 No. 57, Second Avenue, Economic and Technological Development Zone, Binhai New Area, Tianjin

Patentee before: TIANJIN ZHONGKE INTELLIGENT IDENTIFICATION INDUSTRY TECHNOLOGY RESEARCH INSTITUTE Co.,Ltd.
