WO2019119505A1 - Method and device for face recognition, computer device and storage medium - Google Patents

Method and device for face recognition, computer device and storage medium

Info

Publication number
WO2019119505A1
WO2019119505A1 (application PCT/CN2017/119465)
Authority
WO
WIPO (PCT)
Prior art keywords
face image
feature vector
preset
samples
face
Prior art date
Application number
PCT/CN2017/119465
Other languages
English (en)
Chinese (zh)
Inventor
严蕤
牟永强
Original Assignee
深圳云天励飞技术有限公司
Priority date
Filing date
Publication date
Application filed by 深圳云天励飞技术有限公司 filed Critical 深圳云天励飞技术有限公司
Publication of WO2019119505A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features

Definitions

  • the invention belongs to the field of image processing, and in particular relates to a method and device for face recognition, a computer device and a storage medium.
  • Face recognition is a biometric technology based on human facial feature information for identification. It is widely used in the fields of identity verification, security monitoring, access control and attendance systems, and judicial criminal investigation. Face recognition mainly includes processes such as face detection, face alignment, face feature extraction, and face similarity determination. Among them, the determination of face similarity is an important part of face recognition, which can directly affect the result of face recognition.
  • the existing methods for determining face similarity mainly include: (1) methods that determine face similarity based on a distance, such as the Euclidean distance, cosine distance or Mahalanobis distance; however, such methods are ineffective, as it is difficult for them to distinguish samples that lie close together in the feature-space distribution.
  • (2) methods that determine face similarity based on classification, such as a support vector machine classifier.
  • however, the complexity of such a model increases with the amount of training data, resulting in high computational complexity and low computational efficiency, which in turn leads to poor performance and low efficiency of the subsequent face recognition.
  • in short, the existing methods of face recognition suffer from poor effect and low efficiency.
  • the invention provides a method and device for face recognition, a computer device and a storage medium, and aims to solve the problem that the existing face recognition method has poor effect and low efficiency.
  • a first aspect of the present invention provides a method for face recognition, the method comprising: extracting the feature vectors of any two samples in a preset training set according to a preset facial feature extraction model; normalizing the feature vectors of the two samples respectively; acquiring a fused feature vector of the two samples; acquiring a reference similarity of the two samples; traversing in turn all pairs of mutually different samples in the preset training set, and obtaining the fused feature vectors and reference similarities of all such pairs; training a regression model according to the fused feature vectors and the reference similarities, and determining the trained regression model; and identifying the face image to be recognized by using the trained regression model.
  • the acquiring the fused feature vector of the any two samples includes:
  • the preset training set includes a category identifier corresponding to the sample, and the reference similarity for acquiring the any two samples includes:
  • if the category identifiers of the two samples are the same, the reference similarity of the two samples is the sum of the cosine distance and a preset constant;
  • if the category identifiers of the two samples are different, the reference similarity of the two samples is the difference between the cosine distance and the preset constant.
  • training the regression model according to the fused feature vectors and the reference similarities of the pairs of mutually different samples in the preset training set, and determining the trained regression model, includes:
  • the regression model includes at least a first fully connected layer and a second fully connected layer, and the first fully connected layer and the second fully connected layer respectively perform a feature mapping transformation on the fused feature vector by using an activation function;
  • the parameters of the first fully connected layer and of the second fully connected layer of the regression model are adjusted by a process of back propagation using stochastic gradient descent;
  • the above iterative process is repeated until the error satisfies a preset convergence condition, and the parameters of the first fully connected layer and of the second fully connected layer from the last iteration before the preset convergence condition was met are used as the parameters of the first fully connected layer and of the second fully connected layer of the regression model, thereby determining the trained regression model.
  • the preset convergence condition includes:
  • the error is less than or equal to a preset error threshold or the error percentage corresponding to the error is less than or equal to a preset error percentage.
  • the using the trained regression model to identify the face image to be recognized includes:
  • otherwise, it is determined that the first face image and the second face image are not face images of the same person.
  • the using the trained regression model to identify the face image to be recognized includes:
  • the method further includes:
  • the face images included in the preset search database are arranged in descending order of cosine distance, and the face images ranked in the top N are used as a candidate set, where N is a positive integer;
  • determining the fused feature vectors of the feature vector of the target face image and the feature vector of each face image included in the preset search database respectively includes:
  • acquiring the similarity between the target face picture and each face picture included in the preset search database includes:
  • using the arranged face images as the search result includes:
  • a second aspect of the present invention provides a device for recognizing a face, the device comprising:
  • a feature vector extraction module configured to extract feature vectors of any two samples in the preset training set according to the preset facial feature extraction model
  • a normalization module configured to respectively normalize feature vectors of any two samples
  • a fusion feature vector acquisition module configured to acquire a fusion feature vector of any two samples
  • a reference similarity obtaining module configured to acquire a reference similarity of the any two samples
  • a traversing acquisition module configured to sequentially traverse all the two samples that are different from each other in the preset training set, and obtain a fusion feature vector and a reference similarity of all the two samples that are different from each other in the preset training set;
  • a training module configured to determine a regression model after training according to the fusion feature vector of the two samples that are different from each other in the preset training set and the reference similarity training regression model;
  • the identification module is configured to identify the face image to be recognized by using the trained regression model.
  • a third aspect of the present invention provides a computer apparatus, comprising: a processor, wherein the processor is configured to implement a method of face recognition according to any of the above embodiments when executing a computer program stored in a memory.
  • a fourth aspect of the present invention provides a computer readable storage medium having stored thereon a computer program, the computer program being executed by a processor to implement the method for face recognition according to any of the above embodiments.
  • in the present invention, the feature vectors of pairs of mutually different samples in the preset training set are fused, and the trained regression model is determined by training the regression model according to the fused feature vectors and the reference similarities of those pairs.
  • the fused feature vector includes the texture feature and the dynamic mode feature of the face image; therefore, the trained regression model can effectively distinguish samples with different category marks, and using the trained regression model to identify the face image to be recognized can effectively improve the effect and accuracy of face recognition.
  • FIG. 1 is a flowchart of an implementation of a method for face recognition according to an embodiment of the present invention
  • FIG. 2 is a flowchart of an implementation of step S106 in the method for face recognition according to an embodiment of the present invention;
  • FIG. 3 is a flowchart of an implementation of step S107 in the method for face recognition according to an embodiment of the present invention;
  • FIG. 4 is a flowchart of another implementation of step S107 in the method for face recognition according to the embodiment of the present invention.
  • FIG. 5 is a flowchart of still another implementation of step S107 in the method for face recognition according to the embodiment of the present invention.
  • FIG. 6 is a functional block diagram of a device for face recognition according to an embodiment of the present invention.
  • FIG. 7 is a structural block diagram of a training module 106 in a device for face recognition according to an embodiment of the present invention.
  • FIG. 8 is a structural block diagram of an identification module 107 in a device for recognizing a face according to an embodiment of the present invention.
  • FIG. 9 is a block diagram showing another structure of the identification module 107 in the device for recognizing a face according to an embodiment of the present invention.
  • FIG. 10 is a block diagram showing another structure of the identification module 107 in the device for recognizing a face according to an embodiment of the present invention.
  • FIG. 11 is a schematic structural diagram of a computer apparatus according to an embodiment of the present invention.
  • FIG. 1 shows an implementation flow of a method for face recognition according to an embodiment of the present invention.
  • the order of the steps in the flowchart may be changed according to different requirements, and some steps may be omitted.
  • For convenience of description, only the parts related to the embodiment of the present invention are shown, which are detailed as follows:
  • a method for face recognition includes:
  • Step S101 Extract feature vectors of any two samples in the preset training set according to the preset facial feature extraction model.
  • the preset facial feature extraction model is a pre-trained facial feature extraction model. Specifically, a large number of face images can be used to learn facial feature extraction through a convolutional neural network, thereby establishing the trained facial feature extraction model; this is not detailed further here.
  • the preset training set is a preset training set containing a large number of face images. It is assumed that the preset training set includes M samples (that is, face pictures) and the category mark corresponding to each sample, where M is a positive integer greater than 1.
  • the category mark of a sample is preset according to whether samples belong to the same person: if two samples are face images of the same person, the two samples share one category mark; if the two samples are face images of different people, the two samples carry different category marks. One category mark may correspond to one or more samples.
  • any two samples include a first sample and a second sample, and the first sample and the second sample are different two samples, and the first sample and The second sample is described as an example.
  • the feature vectors of the first sample and the second sample are denoted x_i = (x_1, x_2, x_3, ..., x_{d-2}, x_{d-1}, x_d) and y_j = (y_1, y_2, y_3, ..., y_{d-2}, y_{d-1}, y_d), respectively;
  • the category mark of the first sample and the category mark of the second sample are z_i and z_j, respectively.
  • the value of d is the dimension of the feature vector and is a positive integer greater than 1. Specifically, it may be set when the preset facial feature extraction model is trained, and is not particularly limited herein.
  • Step S102 normalizing the feature vectors of the arbitrary two samples respectively.
  • each feature vector is normalized such that each element of the normalized feature vector is the ratio of the element of the corresponding dimension to the modulus (length) of the original feature vector.
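As a minimal sketch of this normalization step (the function name and the small epsilon guard against zero-length vectors are illustrative additions, not from the patent):

```python
import numpy as np

def l2_normalize(v, eps=1e-12):
    # Each element of the normalized vector is the ratio of the
    # corresponding element to the modulus (L2 norm) of the vector.
    return np.asarray(v, dtype=np.float64) / (np.linalg.norm(v) + eps)
```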
  • Step S103 Acquire a fusion feature vector of any two samples.
  • the feature vector of the normalized first sample and the feature vector of the normalized second sample are fused to obtain the fused feature vector of the first sample and the second sample.
  • step S103 acquiring the fusion feature vector of the any two samples includes:
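The concrete fusion operation is elided in this excerpt. Purely as an assumption for illustration, a common choice is to concatenate the two normalized feature vectors; the later sketches reuse this hypothetical `fuse` helper:

```python
def fuse(x, y):
    # Assumed fusion: concatenation of the two normalized feature
    # vectors (the excerpt does not specify the fusion operation).
    return np.concatenate([l2_normalize(x), l2_normalize(y)])
```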
  • Step S104 Acquire a reference similarity of any two samples.
  • the reference similarity of the first sample and the second sample may be obtained according to the normalized first sample's feature vector and the normalized second sample's feature vector.
  • step S104 obtaining the reference similarity of the any two samples includes:
  • a cosine distance between the normalized feature vector of the first sample and the normalized feature vector of the second sample is determined.
  • the cosine distance is computed as cos(x_i, y_j) = (x_i · y_j) / (||x_i||_2 · ||y_j||_2), where x_i · y_j represents the dot product of the feature vector x_i and the feature vector y_j, and ||x_i||_2 and ||y_j||_2 represent the two-norms of the two vectors; the two-norm of a vector is the square root of the sum of the squares of its elements.
  • the cosine distance, also known as the cosine similarity, measures the difference between two individuals by the cosine of the angle between their vectors in the vector space, and can be used to characterize the similarity of the first sample and the second sample.
  • the range of the cosine distance is [-1, +1]; the closer the distance is to 1, the closer the two vectors are to the same direction, that is, positively correlated; the closer the distance is to -1, the closer the directions of the two vectors are to opposite, that is, negatively correlated.
  • if the category marks of the two samples are the same, the reference similarity of the first sample and the second sample is the sum of the cosine distance and a preset constant, that is, cos(x_i, y_j) + ε.
  • the preset constant ε is a constant set in advance; in a preferred embodiment, the preset constant is 0.5.
  • if the category marks of the two samples are different, the reference similarity of the first sample and the second sample is the difference between the cosine distance and the preset constant, that is, cos(x_i, y_j) - ε.
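The reference-similarity rule above can be sketched compactly as follows (variable names are illustrative; ε = 0.5 follows the preferred embodiment):

```python
def cosine_distance(x, y):
    # cos(x, y) = (x · y) / (||x||_2 * ||y||_2), in the range [-1, +1].
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

def reference_similarity(x, y, same_person, eps=0.5):
    # Same category mark: cosine distance plus the preset constant;
    # different category marks: cosine distance minus the preset constant.
    c = cosine_distance(x, y)
    return c + eps if same_person else c - eps
```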
  • Step S105 sequentially traverse all the two samples that are different from each other in the preset training set, and obtain a fusion feature vector and a reference similarity of all the two samples that are different from each other in the preset training set.
  • steps S101 to S104 are repeated to obtain the fused feature vectors and the reference similarities of all pairs of mutually different samples in the preset training set; two samples that are different from each other means that the two samples are not the same sample.
  • since the preset training set includes M samples, two arbitrary samples are extracted from the preset training set each time until this has been repeated M*(M-1)/2 times, which completes the extraction of all pairs of mutually different samples from the preset training set; that is, by repeating steps S101 to S104 M*(M-1)/2 times, the fused feature vectors and reference similarities of all pairs of mutually different samples in the preset training set can be obtained.
  • the obtained fused feature vectors and reference similarities of the pairs of mutually different samples are used as the data for training the regression model; at this point, the construction of the regression model training data is complete (a sketch of this construction follows), and the regression model is trained next.
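Enumerating all M*(M-1)/2 pairs can be sketched as below (`samples` and `labels` are assumed container names, not names from the patent; the helpers come from the earlier sketches):

```python
from itertools import combinations

def build_training_data(samples, labels):
    # One (fused feature vector, reference similarity) entry per
    # unordered pair of distinct samples: M*(M-1)/2 entries in total.
    data = []
    for i, j in combinations(range(len(samples)), 2):
        fused = fuse(samples[i], samples[j])
        ref = reference_similarity(samples[i], samples[j],
                                   labels[i] == labels[j])
        data.append((fused, ref))
    return data
```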
  • Step S106 Determine a regression model after training according to the fusion feature vector of the two samples that are different from each other in the preset training set and the reference similarity training regression model.
  • the fused feature vectors and the reference similarities of all pairs of mutually different samples in the preset training set can be used to train the regression model, and after training terminates, the trained regression model is determined.
  • Step S107 using the trained regression model to identify the face image to be recognized.
  • the trained regression model can be used to identify the face image to be recognized.
  • the recognition of the face image to be recognized mainly includes face verification and face retrieval.
  • face verification determines whether two face images to be verified are face images of the same person; face retrieval searches a face database for face images that belong to the same person as a target face image, or whose similarity to the target face image is high.
  • in the embodiment of the present invention, the feature vectors of all pairs of mutually different samples in the preset training set are fused, and the regression model is trained according to the fused feature vectors and the reference similarities of those pairs to determine the trained regression model.
  • the fused feature vector includes the texture feature and the dynamic mode feature of the face image; therefore, the trained regression model can effectively distinguish samples with different category marks, and using the trained regression model to identify the face image to be recognized can effectively improve the effect and accuracy of face recognition.
  • FIG. 2 shows an implementation flow of step S106 in the method for face recognition according to the embodiment of the present invention.
  • the order of the steps in the flowchart may be changed according to different requirements, and some steps may be omitted.
  • For convenience of description, only the parts related to the embodiment of the present invention are shown, which are detailed as follows:
  • step S106, training the regression model according to the fused feature vectors and the reference similarities of the pairs of mutually different samples in the preset training set and determining the trained regression model, includes:
  • Step S1061 Acquire any fusion feature vector of the preset training set.
  • any fused feature vector of the preset training set is first acquired; the fused feature vector is any one of the fused feature vectors of the pairs of mutually different samples in the preset training set.
  • Step S1062 Input the fused feature vector into the regression model to obtain the training similarity of the two samples corresponding to the fused feature vector, wherein the regression model includes at least a first fully connected layer and a second fully connected layer, and the first fully connected layer and the second fully connected layer respectively perform a feature mapping transformation on the fused feature vector by using an activation function.
  • that is, the regression model includes at least a first fully connected layer and a second fully connected layer, both of which use an activation function to perform a feature mapping transformation on the fused feature vector.
  • in a preferred embodiment, both the first fully connected layer and the second fully connected layer perform the feature mapping transformation on the fused feature vector using a ReLU activation function.
  • the first fully connected layer and the second fully connected layer may also adopt a variant of the ReLU activation function, such as the Leaky-ReLU activation function, the P-ReLU (Parametric ReLU) activation function, or the R-ReLU (Randomized ReLU) activation function, and so on.
  • Step S1063 Determine, by using a loss function, the error between the training similarity of the two samples corresponding to the fused feature vector and the reference similarity of the two samples corresponding to the fused feature vector.
  • specifically, the L2 loss function may be used to determine the error between the training similarity of the two samples corresponding to the fused feature vector and the reference similarity of those two samples, where the L2 (squared error) loss function is used to evaluate the degree of inconsistency between the predicted value and the true value; here, the L2 loss function evaluates the degree of inconsistency between the training similarity and the reference similarity.
  • Step S1064 If the error does not satisfy the preset convergence condition, the parameters of the first fully connected layer and of the second fully connected layer of the regression model are adjusted by a process of back propagation using stochastic gradient descent.
  • the preset convergence condition is a pre-set convergence condition.
  • the preset convergence condition includes: The error is less than or equal to the preset error threshold or the error percentage corresponding to the error is less than or equal to the preset error percentage.
  • the preset error threshold and the preset error percentage are set in advance, and are not particularly limited herein.
  • stochastic gradient descent is mainly used to perform weight updates in a neural network model: the parameters of the model are updated and adjusted in the direction that minimizes the loss function.
  • stochastic gradient descent randomly selects one sample from the training set at a time (in the embodiment of the present invention, one fused feature vector) to learn from.
  • back propagation computes the products of the input signals and their corresponding weights during forward propagation, applies the activation function to the sums of these products, and then propagates the resulting error backwards through the network model; the weights are updated with stochastic gradient descent by computing the gradient of the error function with respect to the weight parameters and moving the weight parameters in the direction opposite to the gradient of the loss function.
  • in this way, the parameters of the first fully connected layer and of the second fully connected layer of the regression model are adjusted by a process of back propagation using stochastic gradient descent.
  • after adjusting the parameters of the first fully connected layer and the parameters of the second fully connected layer, the process returns to step S1061, and steps S1061 to S1063 are repeated until the error satisfies the preset convergence condition.
  • Step S1065 If the error satisfies the preset convergence condition, the parameters of the first fully connected layer and of the second fully connected layer from the last iteration before the preset convergence condition was met are used as the parameters of the first fully connected layer and of the second fully connected layer of the regression model, thereby determining the trained regression model.
  • that is, training of the regression model is stopped, and the parameters of the first fully connected layer and of the second fully connected layer from the last iteration before the preset convergence condition are used as the parameters of the regression model's first and second fully connected layers, determining the trained regression model; the training of the regression model is thus completed.
  • since the fused feature vectors of the preset training set include the texture features and the dynamic mode features of the face images, and the regression model is trained using the fused feature vectors of the preset training set with its parameters adjusted by a process of back propagation using stochastic gradient descent to determine the trained regression model, the trained regression model can effectively distinguish samples with different category marks; when the trained regression model is used to identify the face image to be recognized, the effect and accuracy of face recognition can be effectively improved. A training sketch follows.
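A minimal PyTorch sketch of such a regression model and its training loop, under the assumptions stated in this excerpt (two fully connected layers with ReLU feature-mapping transformations, L2 i.e. squared-error loss, and stochastic gradient descent with back propagation). The scalar output layer, the layer sizes, the learning rate and the epoch count are illustrative assumptions, not values from the patent:

```python
import torch
import torch.nn as nn

class SimilarityRegressor(nn.Module):
    def __init__(self, fused_dim=1024, hidden_dim=256):
        super().__init__()
        self.fc1 = nn.Linear(fused_dim, hidden_dim)   # first fully connected layer
        self.fc2 = nn.Linear(hidden_dim, hidden_dim)  # second fully connected layer
        self.out = nn.Linear(hidden_dim, 1)           # assumed scalar similarity head

    def forward(self, fused):
        h = torch.relu(self.fc1(fused))  # ReLU feature mapping transformation
        h = torch.relu(self.fc2(h))      # ReLU feature mapping transformation
        return self.out(h).squeeze(-1)   # training similarity

def train_regressor(model, fused_vectors, ref_similarities, epochs=10, lr=0.01):
    # L2 (squared-error) loss between training similarity and reference
    # similarity, minimized by stochastic gradient descent.
    criterion = nn.MSELoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for fused, ref in zip(fused_vectors, ref_similarities):
            optimizer.zero_grad()
            loss = criterion(model(fused), ref)
            loss.backward()   # back propagation of the error
            optimizer.step()  # gradient-descent parameter update
    return model
```

Here `fused_vectors` is assumed to be a sequence of 1-D float tensors and `ref_similarities` a sequence of scalar tensors; in practice a convergence test on the loss (the preset convergence condition) would replace the fixed epoch count.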
  • FIG. 3 shows an implementation flow of step S107 in the method for face recognition according to the embodiment of the present invention.
  • the order of the steps in the flowchart may be changed according to different requirements, and some steps may be omitted.
  • For convenience of description, only the parts related to the embodiment of the present invention are shown, which are detailed as follows:
  • step S107 includes:
  • Step S201 Acquire a fusion feature vector of the first face image and the second face image to be verified.
  • specifically, the fused feature vector of the first face image and the second face image is extracted in the same way as the fused feature vector of the first sample and the second sample described above: the feature vectors of the first face image and the second face image are first extracted according to the preset facial feature extraction model, then the feature vectors of the first face image and the second face image are normalized, and finally the fused feature vector of the first face image and the second face image is obtained.
  • Step S202 Input the fused feature vector of the first face image and the second face image to the trained regression model, and acquire the similarity between the first face image and the second face image.
  • when the similarity between the first face image and the second face image is obtained by using the trained regression model according to the fused feature vector of the first face image and the second face image, reference may be made to the content of step S1062 above; details are not repeated here.
  • Step S203 If the similarity between the first face image and the second face image is greater than or equal to a preset similarity threshold, it is determined that the first face image and the second face image are face images of the same person.
  • the preset similarity threshold is a similarity value set in advance, and is not particularly limited herein.
  • that is, if the similarity between the first face image and the second face image is greater than or equal to the preset similarity threshold, the first face image and the second face image may be determined to be face images of the same person.
  • Step S204 Otherwise, if the similarity is less than the preset similarity threshold, it is determined that the first face image and the second face image are not face images of the same person.
  • the trained regression model can effectively distinguish face images with different category marks; using the trained regression model to verify the first face image and the second face image can effectively determine the similarity between the first face image and the second face image, and thereby determine whether the first face image and the second face image are face images of the same person. In this way, the effect and accuracy of face recognition can be further improved. A verification sketch follows.
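Putting the verification flow together (a hedged sketch reusing the assumed helpers above; the value of the preset similarity threshold is illustrative, as the patent does not fix it):

```python
def verify(model, feat1, feat2, threshold=0.5):
    # Fuse the two normalized feature vectors, predict their similarity
    # with the trained regression model, and compare with the threshold.
    fused = torch.from_numpy(fuse(feat1, feat2)).float()
    with torch.no_grad():
        similarity = model(fused).item()
    return similarity >= threshold  # True: face images of the same person
```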
  • FIG. 4 shows another implementation flow of step S107 in the method for face recognition according to the embodiment of the present invention.
  • the order of the steps in the flowchart may be changed according to different requirements, and some steps may be omitted.
  • For convenience of description, only the parts related to the embodiment of the present invention are shown, which are detailed as follows:
  • step S107 includes:
  • Step S301 Acquire a target face image to be retrieved.
  • the target face image to be retrieved may be acquired by an image acquisition device, such as a camera or a video camera, or the target face image to be retrieved may be obtained through a network; the way in which the target face image to be retrieved is obtained is not particularly limited herein.
  • Step S302 Extract the feature vector of the target face image and the feature vector of the face image included in the preset search database by using the preset face feature extraction model.
  • the preset retrieval database is a preset retrieval database, which includes a large number of face images. For details, refer to the content of step S101 above, and details are not described herein again.
  • before the fused feature vectors are determined, step S107 further includes: respectively normalizing the feature vector of the target face image and the feature vectors of the face images included in the preset search database.
  • for the normalization of the feature vectors, reference may be made to the content of step S102 above; details are not repeated here.
  • Step S303 Respectively determine the fused feature vectors of the feature vector of the target face image and the feature vector of each face image included in the preset search database.
  • when the fused feature vectors of the feature vector of the target face image and the feature vector of each face image included in the preset search database are determined, reference may be made to the content of step S103; details are not repeated here.
  • Step S304 Respectively input the fused feature vectors of the feature vector of the target face image and the feature vector of each face image included in the preset search database to the trained regression model, and acquire the similarity between the target face image and each face image included in the preset search database.
  • when the similarity between the target face image and each face image included in the preset search database is obtained, reference may be made to the content of step S1062; details are not repeated here.
  • Step S305 Arrange the face images included in the preset search database according to the similarity between the target face image and each face image included in the preset search database, and use the arranged face images as the search result.
  • each face image included in the preset search database may be arranged in descending order according to the similarity between the target face image and each face image included in the preset search database.
  • the arranged face images are returned as the retrieval result, for example, displayed on a display screen.
  • since the target face image is identified by using the fused feature vectors and the trained regression model, and the face images included in the preset search database are arranged in descending order of the similarity between the target face image and each face image with the arranged face images used as the search result, the accuracy of face retrieval can be improved, thereby improving the effect and accuracy of face recognition. A retrieval sketch follows.
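A minimal sketch of this retrieval flow (`db_feats`, a list of database feature vectors, is a hypothetical name; the helpers come from the earlier sketches):

```python
def retrieve(model, target_feat, db_feats):
    # Score every database face against the target with the trained
    # regression model, then sort in descending order of similarity.
    scored = []
    for idx, feat in enumerate(db_feats):
        fused = torch.from_numpy(fuse(target_feat, feat)).float()
        with torch.no_grad():
            scored.append((model(fused).item(), idx))
    scored.sort(reverse=True)
    return [idx for _, idx in scored]  # indices, most similar first
```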
  • FIG. 5 shows still another implementation flow of step S107 in the method for face recognition according to the embodiment of the present invention.
  • the order of the steps in the flowchart may be changed according to different requirements, and some steps may be omitted.
  • For convenience of description, only the parts related to the embodiment of the present invention are shown, which are detailed as follows:
  • step S107 further includes:
  • Step S306 respectively determining a cosine distance of the feature vector of the target face image and the feature vector of each face image included in the preset search database.
  • the cosine distance between the feature vector of the target face image and the feature vector of each face image included in the preset search database may first be determined, to preliminarily represent the similarity between the target face image and each face image included in the preset search database.
  • for determining the cosine distance between the feature vector of the target face image and the feature vector of each face image included in the preset search database, reference may be made to the method of determining the cosine distance between the normalized feature vector of the first sample and the normalized feature vector of the second sample in step S104; details are not repeated here.
  • Step S307 The face images included in the preset search database are arranged in descending order of cosine distance, and the face images ranked in the top N are used as a candidate set, where N is a positive integer.
  • specifically, the face images included in the preset search database may be arranged according to the cosine distance from largest to smallest, and the face images ranked in the top N used as the candidate set, so as to narrow the search range, reduce the computation required for face retrieval and subsequent face recognition, and improve the efficiency of face retrieval and subsequent recognition.
  • the positive integer N can be set as needed.
  • in a preferred embodiment, the positive integer N is 100; that is, the face images ranked in the top 100 are used as the candidate set, so that the similarity between the target face picture and the 100 face pictures in the candidate set is subsequently determined.
  • in this case, step S303, respectively determining the fused feature vectors of the feature vector of the target face image and the feature vector of each face image included in the preset search database, includes:
  • Step S3031 Respectively determine the fused feature vectors of the feature vector of the target face image and the feature vector of each face image included in the candidate set.
  • the fusion feature vector of the feature vector of the target face image and the feature vector of each face image included in the candidate set may be determined. For details, refer to the content of step S303 above, and details are not described herein again.
  • likewise, step S304, respectively inputting the fused feature vectors of the feature vector of the target face image and the feature vector of each face image included in the preset search database to the trained regression model and acquiring the similarity between the target face image and each face image included in the preset search database, includes:
  • Step S3041 Respectively input the fused feature vectors of the feature vector of the target face image and the feature vector of each face image included in the candidate set to the trained regression model, and acquire the similarity between the target face picture and each face picture included in the candidate set.
  • for step S3041, reference may be made to the content of step S304 above; details are not repeated here.
  • and step S305, arranging the face images included in the preset search database in descending order of the similarity between the target face picture and each face image included in the preset search database and using the arranged face images as the search result, includes:
  • Step S3051 Arrange the face images included in the candidate set in descending order of the similarity between the target face picture and each face picture included in the candidate set, and use the arranged face images as the search result.
  • for step S3051, reference may be made to the content of step S305 above; details are not repeated here.
  • in this way, the face images included in the preset search database are first arranged by cosine distance and the face images ranked in the top N are used as the candidate set; then the face images included in the candidate set are arranged in descending order of the similarity between the target face image and each face image in the candidate set, and the arranged face images are used as the search result.
  • since the cosine distance can preliminarily characterize the similarity between pictures, computing the cosine distance first screens out the top-ranked face pictures most similar to the target face picture as a candidate set for subsequent retrieval; therefore, the scope of the search can be narrowed, the retrieval speed improved, and the efficiency of face recognition improved. A sketch of this two-stage retrieval follows.
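The two-stage flow, a cosine-distance pre-filter followed by regression re-ranking, might look like this (an illustrative sketch reusing the assumed helpers above; N = 100 follows the preferred embodiment):

```python
def retrieve_with_candidates(model, target_feat, db_feats, n=100):
    # Stage 1: rank the whole database by cosine distance and keep
    # the face images ranked in the top N as the candidate set.
    order = sorted(range(len(db_feats)),
                   key=lambda i: cosine_distance(target_feat, db_feats[i]),
                   reverse=True)
    candidates = order[:n]
    # Stage 2: re-rank only the candidate set with the trained
    # regression model, in descending order of predicted similarity.
    reranked = retrieve(model, target_feat, [db_feats[i] for i in candidates])
    return [candidates[i] for i in reranked]
```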
  • FIG. 6 is a functional block diagram of a device for face recognition according to an embodiment of the present invention. For convenience of description, only parts related to the embodiment of the present invention are shown, which are described in detail as follows:
  • each module included in the apparatus 10 for face recognition is used to perform various steps in the corresponding embodiment of FIG. 1 .
  • the device 10 for face recognition includes a feature vector extraction module 101, a normalization module 102, a fusion feature vector acquisition module 103, a reference similarity acquisition module 104, a traversal acquisition module 105, a training module 106, and Identification module 107.
  • the feature vector extraction module 101 is configured to extract feature vectors of any two samples in the preset training set according to the preset face feature extraction model.
  • the normalization module 102 is configured to normalize the feature vectors of the any two samples separately.
  • the fusion feature vector obtaining module 103 is configured to acquire a fusion feature vector of the arbitrary two samples.
  • the reference similarity obtaining module 104 is configured to obtain a reference similarity of the any two samples.
  • the traversal obtaining module 105 is configured to sequentially traverse all the two samples that are different from each other in the preset training set, and obtain a fusion feature vector and a reference similarity of all the two samples that are different from each other in the preset training set.
  • the training module 106 is configured to determine a regression model after training according to the fusion feature vector of the two samples that are different from each other in the preset training set and the reference similarity training regression model.
  • the identification module 107 is configured to identify the face image to be recognized by using the trained regression model.
  • the fused feature vector obtaining module 103 fuses the feature vectors of pairs of mutually different samples in the preset training set, and the training module 106 trains the regression model according to the fused feature vectors and the reference similarities of those pairs to determine the trained regression model.
  • the fused feature vector includes the texture feature and the dynamic mode feature of the face image; therefore, the trained regression model can effectively distinguish samples with different category marks, and using the trained regression model to identify the face image to be recognized can effectively improve the effect and accuracy of face recognition.
  • the fusion feature vector obtaining module 103 is specifically configured to:
  • FIG. 7 shows the structure of the training module 106 in the device for face recognition according to the embodiment of the present invention. For the convenience of description, only the parts related to the embodiment of the present invention are shown, which are as follows:
  • each unit included in the training module 106 is used to perform various steps in the corresponding embodiment of FIG. 2.
  • the training module 106 includes a first obtaining unit 1061, a second obtaining unit 1062, an error determining unit 1063, a parameter adjusting unit 1064, and a regression model determining unit 1065.
  • the first acquiring unit 1061 is configured to acquire any fusion feature vector of the preset training set.
  • the second obtaining unit 1062 is configured to input any of the fusion feature vectors to a regression model, and obtain training similarity of two samples corresponding to any of the fusion feature vectors, wherein the regression model includes at least a first fully connected layer and a second fully connected layer, and the first fully connected layer and the second fully connected layer both perform feature mapping transformation on the any of the merged feature vectors by using an activation function.
  • the error determining unit 1063 is configured to determine, by using the loss function, the error between the training similarity of the two samples corresponding to the fused feature vector and the reference similarity of the two samples corresponding to the fused feature vector.
  • the parameter adjustment unit 1064 is configured to adjust the parameters of the first fully connected layer and of the second fully connected layer of the regression model by a process of back propagation using stochastic gradient descent if the error does not satisfy the preset convergence condition.
  • the regression model determining unit 1065 is configured to, when the error satisfies the preset convergence condition, use the parameters of the first fully connected layer and of the second fully connected layer from the last iteration before the preset convergence condition was met as the parameters of the first fully connected layer and of the second fully connected layer of the regression model, thereby determining the trained regression model.
  • the preset convergence condition includes:
  • the error is less than or equal to a preset error threshold or the error percentage corresponding to the error is less than or equal to a preset error percentage.
  • since the fused feature vectors of the preset training set include the texture features and the dynamic mode features of the face images, and the regression model is trained using the fused feature vectors of the preset training set with its parameters adjusted by a process of back propagation using stochastic gradient descent to determine the trained regression model, the trained regression model can effectively distinguish samples with different category marks; when the trained regression model is used to identify the face image to be recognized, the effect and accuracy of face recognition can be effectively improved.
  • FIG. 8 shows the structure of the identification module 107 in the device for face recognition according to the embodiment of the present invention. For the convenience of description, only parts related to the embodiment of the present invention are shown, which are described in detail as follows:
  • each unit included in the identification module 107 is used to perform various steps in the corresponding embodiment of FIG. 3.
  • the identification module 107 includes a fusion feature vector acquisition unit 201, a first similarity acquisition unit 202, and a determination unit 203.
  • the fused feature vector obtaining unit 201 is configured to acquire a fused feature vector of the first face image and the second face image to be verified.
  • the first similarity obtaining unit 202 is configured to input the fused feature vector of the first face image and the second face image to the trained regression model to obtain the similarity between the first face image and the second face image.
  • the determining unit 203 is configured to determine that the first face image and the second face image are face images of the same person if the similarity between the first face image and the second face image is greater than or equal to a preset similarity threshold.
  • the determining unit 203 is further configured to: if the similarity between the first face image and the second face image is less than a preset similarity threshold, determine the first face image and the second person The face image is not the face image of the same person.
  • the fused feature vector obtaining unit 201 acquires the fused feature vector of the first face image and the second face image to be verified, the first similarity obtaining unit 202 obtains the similarity between the two images according to the fused feature vector, and the determining unit 203 compares the similarity with the preset similarity threshold, thereby determining whether the first face image and the second face image are face images of the same person.
  • since the first similarity obtaining unit 202 determines the similarity between the first face image and the second face image according to the fused feature vector, and the determining unit 203 determines whether the first face image and the second face image are face images of the same person, the effect and accuracy of face recognition can be further improved.
  • FIG. 9 shows another structure of the identification module 107 in the device for face recognition according to the embodiment of the present invention. For convenience of description, only parts related to the embodiment of the present invention are shown, which are described in detail as follows:
  • each unit included in the identification module 107 is used to perform various steps in the corresponding embodiment of FIG. 4.
  • the identification module 107 includes a target face image acquisition unit 301, a feature vector extraction unit 302, a fusion feature vector determination unit 303, a second similarity acquisition unit 304, and a retrieval result determination unit 305.
  • the target face image obtaining unit 301 is configured to acquire a target face image to be retrieved.
  • the feature vector extracting unit 302 is configured to separately extract a feature vector of the target face image and a feature vector of a face image included in the preset search database by using the preset face feature extraction model.
  • the fused feature vector determining unit 303 is configured to respectively determine the fused feature vectors of the feature vector of the target face image and the feature vector of each face image included in the preset search database.
  • the second similarity obtaining unit 304 is configured to input the fused feature vectors of the feature vector of the target face image and the feature vector of each face image included in the preset search database to the trained regression model, and obtain the similarity between the target face image and each face image included in the preset search database.
  • the search result determining unit 305 is configured to arrange the face images included in the preset search database according to the similarity between the target face image and each face image included in the preset search database, and use the arranged face images as the search result.
  • since the target face image is identified by using the fused feature vectors and the trained regression model, and the search result determining unit 305 arranges the face images included in the preset search database in descending order of the similarity between the target face image and each face image and uses the arranged face images as the search result, the effect and accuracy of face recognition can be further improved.
  • FIG. 10 shows still another structure of the identification module 107 in the device for face recognition according to the embodiment of the present invention. For the convenience of description, only parts related to the embodiment of the present invention are shown, which are described in detail as follows:
  • each unit or subunit included in the identification module 107 is used to perform various steps in the corresponding embodiment of FIG. 5.
  • the identification module 107 further includes a cosine distance determining unit 306 and a candidate set determining unit 307 on the basis of the structure shown in FIG. 9 .
  • the fused feature vector determining unit 303 includes a fused feature vector determining subunit 3031;
  • the second similarity obtaining unit 304 includes a similarity obtaining subunit 3041;
  • the retrieval result determining unit 305 includes a retrieval result determining subunit 3051.
  • the cosine distance determining unit 306 is configured to respectively determine a cosine distance of a feature vector of the target face image and a feature vector of each face image included in the preset search database.
  • the candidate set determining unit 307 is configured to arrange the face images included in the preset search database according to the cosine distance from largest to smallest, and use the face images ranked in the top N as the candidate set, where N is a positive integer.
  • the fused feature vector determining sub-unit 3031 is configured to respectively determine a fused feature vector of a feature vector of the target face image and a feature vector of each face image included in the candidate set.
  • the similarity obtaining subunit 3041 is configured to respectively input the fused feature vectors of the feature vector of the target face image and the feature vector of each face image included in the candidate set to the trained regression model, and acquire the similarity between the target face picture and each face picture included in the candidate set.
  • the search result determining subunit 3051 is configured to arrange the face images included in the candidate set in descending order of the similarity between the target face picture and each face picture included in the candidate set, and use the arranged face images as the search result.
  • the cosine distance determining unit 306 determines the cosine distances between the feature vector of the target face image and the feature vectors of the face images included in the preset search database, the candidate set determining unit 307 arranges the face images included in the preset search database in descending order of cosine distance and uses the face images ranked in the top N as the candidate set, and the search result determining subunit 3051 then arranges the candidate set in descending order of its similarity to the target face image; the scope of the search can thus be narrowed and the efficiency of face recognition improved.
  • FIG. 11 is a schematic structural diagram of a computer apparatus 1 according to a preferred embodiment of a method for implementing face recognition according to an embodiment of the present invention.
  • the computer device 1 includes a memory 11, a processor 12, and an input/output device 13.
  • the computer device 1 is a device capable of automatically performing numerical calculation and/or information processing according to instructions set or stored in advance; its hardware includes, but is not limited to, a microprocessor, an application specific integrated circuit (ASIC), a Field-Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), embedded devices, and so on.
  • the computer device 1 can be any electronic product that can interact with a user, such as a personal computer, a tablet computer, a smart phone, a personal digital assistant (PDA), a game machine, an interactive network television ( Internet Protocol Television (IPTV), smart wearable devices, etc.
  • the computer device 1 may be a server, including but not limited to a single network server, a server group composed of a plurality of network servers, or a cloud-computing-based cloud composed of a large number of hosts or network servers, wherein cloud computing is a type of distributed computing: a super virtual computer consisting of a cluster of loosely coupled computers.
  • the network in which the computer device 1 is located includes, but is not limited to, the Internet, a wide area network, a metropolitan area network, a local area network, a virtual private network (VPN), and the like.
  • the memory 11 is used to store programs of various methods of face recognition and various data, and realizes high-speed, automatic completion of access of programs or data during the operation of the computer device 1.
  • the memory 11 may be an external storage device and/or an internal storage device of the computer device 1. Further, the memory 11 may be a circuit having a storage function in a physical form, such as a RAM (Random-Access Memory), a FIFO (First In First Out), or the like, or the memory 11 It may be a storage device having a physical form, such as a memory stick, a TF card (Trans-flash Card), or the like.
  • the processor 12 can be a Central Processing Unit (CPU).
  • the CPU is a very large-scale integrated circuit, which is the computing core (Core) and the Control Unit of the computer device 1.
  • the processor 12 can execute the operating system of the computer device 1 and various installed applications and program code, such as the code of the modules and units in the device 10 for face recognition, so as to implement the methods of face recognition.
  • the input/output device 13 is mainly used to implement an input/output function of the computer device 1, such as transceiving input digital or character information, or displaying information input by a user or information provided to a user and various menus of the computer device 1.
  • the modules/units integrated in the computer device 1 can be stored in a computer readable storage medium if implemented in the form of software functional units and sold or used as stand-alone products. Based on such understanding, the present invention implements all or part of the processes in the foregoing embodiments, which may also be completed by instructing the relevant hardware through a computer program.
  • the computer program may be stored in a computer readable storage medium. The steps of the various method embodiments described above may be implemented when the program is executed by the processor.
  • the computer program comprises computer program code, which may be in the form of source code, object code form, executable file or some intermediate form.
  • the computer readable medium may include any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), electrical carrier signals, telecommunications signals, and software distribution media. It should be noted that the content contained in the computer readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer readable media do not include electrical carrier signals and telecommunications signals.
  • the above-described characteristic means of the present invention can also be realized by an integrated circuit, which controls the functions of the face recognition method described in any of the above embodiments; that is, the integrated circuit of the present invention is mounted in the computer device 1 so that the computer device 1 performs the following function:
  • the face image to be recognized is identified by using the trained regression model.
  • by means of the integrated circuit of the present invention, the functions of the face recognition method in any of the embodiments can be installed in the computer device 1, so that the computer device 1 can perform the face recognition method in any of the embodiments; the functions so implemented are not detailed here. A minimal illustrative sketch of this recognition step follows this list.
  • modules described as separate components may or may not be physically separated, and components displayed as modules may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of an embodiment.
  • each functional module in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of hardware plus software function modules.
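
For illustration only, the following minimal sketch shows the recognition step described above: the feature vectors of a face image to be recognized and of an enrolled face image are normalized, fused, and scored by the trained regression model. It assumes concatenation as the fusion operation and an sklearn-style regressor; all names (l2_normalize, fuse, recognize) are hypothetical, not the claimed implementation.

```python
# Illustrative sketch only; names and the fusion choice are assumptions,
# not the patented implementation.
import numpy as np

def l2_normalize(v: np.ndarray) -> np.ndarray:
    # Normalization step applied to each extracted feature vector.
    return v / (np.linalg.norm(v) + 1e-12)

def fuse(f1: np.ndarray, f2: np.ndarray) -> np.ndarray:
    # One plausible fusion feature vector: concatenation of the two
    # normalized feature vectors.
    return np.concatenate([l2_normalize(f1), l2_normalize(f2)])

def recognize(probe_feature, gallery_features, regressor, threshold=0.5):
    # Score the probe against each enrolled face with the trained
    # regression model; keep the best match above a preset threshold.
    scores = [float(regressor.predict(fuse(probe_feature, g).reshape(1, -1))[0])
              for g in gallery_features]
    if not scores:
        return None, None
    best = int(np.argmax(scores))
    return (best, scores[best]) if scores[best] >= threshold else (None, None)
```

Under these assumptions, a probe face is matched by scoring its fusion vector against each enrolled feature vector and retaining the highest score that exceeds the preset threshold.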

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a face recognition method and device, a computer device, and a storage medium. The method comprises the steps of: extracting the feature vectors of any two samples in a preset training set according to a preset face feature extraction model (S101); normalizing the feature vectors of the two samples respectively (S102); acquiring a fusion feature vector of the two samples (S103); acquiring a reference similarity of the two samples (S104); sequentially traversing every two mutually different samples in the preset training set, and obtaining the fusion feature vector and reference similarity of every two mutually different samples in the preset training set (S105); training a regression model according to the fusion feature vectors and reference similarities of every two mutually different samples in the preset training set (S106); and recognizing a face image to be recognized by using the trained regression model (S107). In the present invention, the regression model is trained according to all the fusion feature vectors and reference similarities in the preset training set, so the trained regression model can effectively distinguish samples with different category markers, thereby improving the effect and accuracy of face recognition when recognizing a face image to be recognized.
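
To make this training flow concrete, here is a minimal sketch of steps S101 to S106 under stated assumptions: a user-supplied extract_features callable stands in for the preset face feature extraction model, the fusion feature vector is taken to be the concatenation of the two normalized vectors, the reference similarity is set to 1 for samples sharing a category marker and 0 otherwise, and a gradient-boosted regressor from scikit-learn stands in for the regression model. None of these choices is asserted to be the patented implementation.

```python
# Hedged sketch of S101-S106; extract_features and the choice of
# GradientBoostingRegressor are illustrative assumptions.
from itertools import combinations
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def l2_normalize(v):
    return v / (np.linalg.norm(v) + 1e-12)

def train_similarity_regressor(samples, labels, extract_features):
    """samples: face images in the preset training set;
    labels: category marker (identity) of each sample;
    extract_features: the preset face feature extraction model (S101)."""
    X, y = [], []
    for i, j in combinations(range(len(samples)), 2):     # S105: traverse all pairs
        f1 = l2_normalize(extract_features(samples[i]))   # S101 + S102
        f2 = l2_normalize(extract_features(samples[j]))
        X.append(np.concatenate([f1, f2]))                # S103: fusion feature vector
        y.append(1.0 if labels[i] == labels[j] else 0.0)  # S104: reference similarity (assumed)
    model = GradientBoostingRegressor()                   # S106: train the regression model
    model.fit(np.array(X), np.array(y))
    return model
```
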
PCT/CN2017/119465 2017-12-18 2017-12-28 Procédé et dispositif de reconnaissance faciale, dispositif informatique et support d'enregistrement WO2019119505A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711366133.0A CN108197532B (zh) 2017-12-18 2017-12-18 人脸识别的方法、装置及计算机装置
CN201711366133.0 2017-12-18

Publications (1)

Publication Number Publication Date
WO2019119505A1 (fr)

Family

ID=62574509

Family Applications (2)

Application Number Title Priority Date Filing Date
PCT/CN2017/119465 WO2019119505A1 (fr) 2017-12-18 2017-12-28 Procédé et dispositif de reconnaissance faciale, dispositif informatique et support d'enregistrement
PCT/CN2018/120513 WO2019120115A1 (fr) 2017-12-18 2018-12-12 Procédé et appareil de reconnaissance faciale et dispositif informatique

Country Status (2)

Country Link
CN (1) CN108197532B (fr)
WO (2) WO2019119505A1 (fr)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108197532B (zh) * 2017-12-18 2019-08-16 深圳励飞科技有限公司 人脸识别的方法、装置及计算机装置
CN108921071A (zh) * 2018-06-24 2018-11-30 深圳市中悦科技有限公司 人脸活体检测方法、装置、存储介质及处理器
CN109190654A (zh) * 2018-07-09 2019-01-11 上海斐讯数据通信技术有限公司 人脸识别模型的训练方法和装置
CN109063698B (zh) * 2018-10-23 2022-03-08 深圳大学 一种非负特征提取及人脸识别应用方法、系统及存储介质
CN109583332B (zh) * 2018-11-15 2021-07-27 北京三快在线科技有限公司 人脸识别方法、人脸识别系统、介质及电子设备
CN110070046B (zh) * 2019-04-23 2024-05-24 北京市商汤科技开发有限公司 人脸图像识别方法及装置、电子设备和存储介质
CN110427870B (zh) * 2019-06-10 2024-06-18 腾讯医疗健康(深圳)有限公司 眼部图片识别方法、目标识别模型训练方法及装置
CN110415424B (zh) * 2019-06-17 2022-02-11 众安信息技术服务有限公司 一种防伪鉴定方法、装置、计算机设备和存储介质
CN112445992B (zh) * 2019-09-03 2024-02-20 阿里巴巴集团控股有限公司 信息处理方法及装置
CN111091080A (zh) * 2019-12-06 2020-05-01 贵州电网有限责任公司 人脸识别方法及系统
CN111144240B (zh) * 2019-12-12 2023-02-07 深圳数联天下智能科技有限公司 图像处理方法及相关设备
CN111368644B (zh) * 2020-02-14 2024-01-05 深圳市商汤科技有限公司 图像处理方法、装置、电子设备及存储介质
CN111339884B (zh) * 2020-02-19 2023-06-06 浙江大华技术股份有限公司 图像识别方法以及相关设备、装置
CN111598818B (zh) * 2020-04-17 2023-04-28 北京百度网讯科技有限公司 人脸融合模型训练方法、装置及电子设备
CN111555889A (zh) * 2020-04-27 2020-08-18 深圳壹账通智能科技有限公司 电子签名的验证方法、装置、计算机设备和存储介质
CN111709303A (zh) * 2020-05-21 2020-09-25 北京明略软件系统有限公司 一种人脸图像的识别方法和装置
CN114372205B (zh) * 2022-03-22 2022-06-10 腾讯科技(深圳)有限公司 特征量化模型的训练方法、装置以及设备

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7356168B2 (en) * 2004-04-23 2008-04-08 Hitachi, Ltd. Biometric verification system and method utilizing a data classifier and fusion model
CN104715254B (zh) * 2015-03-17 2017-10-10 东南大学 一种基于2d和3d sift特征融合的一般物体识别方法
CN107292146B (zh) * 2016-03-30 2019-12-13 中国联合网络通信集团有限公司 用户特征向量选取方法及系统
CN108197532B (zh) * 2017-12-18 2019-08-16 深圳励飞科技有限公司 人脸识别的方法、装置及计算机装置

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104978550A (zh) * 2014-04-08 2015-10-14 上海骏聿数码科技有限公司 基于大规模人脸数据库的人脸识别方法及系统
US20160070956A1 (en) * 2014-09-05 2016-03-10 Huawei Technologies Co., Ltd. Method and Apparatus for Generating Facial Feature Verification Model
CN106250858A (zh) * 2016-08-05 2016-12-21 重庆中科云丛科技有限公司 一种融合多种人脸识别算法的识别方法及系统

Cited By (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112241664A (zh) * 2019-07-18 2021-01-19 顺丰科技有限公司 人脸识别方法、装置、服务器及存储介质
CN110414588A (zh) * 2019-07-23 2019-11-05 广东小天才科技有限公司 图片标注方法、装置、计算机设备和存储介质
US11881052B2 (en) 2019-08-15 2024-01-23 Huawei Technologies Co., Ltd. Face search method and apparatus
CN112395448A (zh) * 2019-08-15 2021-02-23 华为技术有限公司 一种人脸检索方法及装置
CN110674748A (zh) * 2019-09-24 2020-01-10 腾讯科技(深圳)有限公司 图像数据处理方法、装置、计算机设备以及可读存储介质
CN110674748B (zh) * 2019-09-24 2024-02-13 腾讯科技(深圳)有限公司 图像数据处理方法、装置、计算机设备以及可读存储介质
CN111209860A (zh) * 2020-01-06 2020-05-29 上海海事大学 基于深度学习与强化学习的视频考勤系统及方法
CN111209860B (zh) * 2020-01-06 2023-04-07 上海海事大学 基于深度学习与强化学习的视频考勤系统及方法
CN113269010B (zh) * 2020-02-14 2024-03-26 深圳云天励飞技术有限公司 一种人脸活体检测模型的训练方法和相关装置
CN113269010A (zh) * 2020-02-14 2021-08-17 深圳云天励飞技术有限公司 一种人脸活体检测模型的训练方法和相关装置
CN111325156A (zh) * 2020-02-24 2020-06-23 北京沃东天骏信息技术有限公司 人脸识别方法、装置、设备和存储介质
CN111325156B (zh) * 2020-02-24 2023-08-11 北京沃东天骏信息技术有限公司 人脸识别方法、装置、设备和存储介质
CN111860165B (zh) * 2020-06-18 2023-11-03 盛视科技股份有限公司 一种基于视频流的动态人脸识别方法和装置
CN111860165A (zh) * 2020-06-18 2020-10-30 盛视科技股份有限公司 一种基于视频流的动态人脸识别方法和装置
CN111968152A (zh) * 2020-07-15 2020-11-20 桂林远望智能通信科技有限公司 一种动态身份识别方法及装置
CN111968152B (zh) * 2020-07-15 2023-10-17 桂林远望智能通信科技有限公司 一种动态身份识别方法及装置
CN112101172B (zh) * 2020-09-08 2023-09-08 平安科技(深圳)有限公司 基于权重嫁接的模型融合的人脸识别方法及相关设备
CN112101172A (zh) * 2020-09-08 2020-12-18 平安科技(深圳)有限公司 基于权重嫁接的模型融合的人脸识别方法及相关设备
CN112418303A (zh) * 2020-11-20 2021-02-26 浙江大华技术股份有限公司 一种识别状态模型的训练方法、装置及计算机设备
CN112633297A (zh) * 2020-12-28 2021-04-09 浙江大华技术股份有限公司 目标对象的识别方法、装置、存储介质以及电子装置
CN112633297B (zh) * 2020-12-28 2023-04-07 浙江大华技术股份有限公司 目标对象的识别方法、装置、存储介质以及电子装置
CN112991154B (zh) * 2021-03-17 2023-10-17 福建库克智能科技有限公司 混合物的制作方法、混合物及人脸面具的图片的生成方法
CN112991154A (zh) * 2021-03-17 2021-06-18 福建库克智能科技有限公司 混合物的制作方法、混合物及人脸面具的图片的生成方法
CN112990090A (zh) * 2021-04-09 2021-06-18 北京华捷艾米科技有限公司 一种人脸活体检测方法及装置
CN113177449A (zh) * 2021-04-20 2021-07-27 北京百度网讯科技有限公司 人脸识别的方法、装置、计算机设备及存储介质
CN113177449B (zh) * 2021-04-20 2024-02-02 北京百度网讯科技有限公司 人脸识别的方法、装置、计算机设备及存储介质
CN113657178A (zh) * 2021-07-22 2021-11-16 浙江大华技术股份有限公司 人脸识别方法、电子设备和计算机可读存储介质
CN114139013A (zh) * 2021-11-29 2022-03-04 深圳集智数字科技有限公司 图像搜索方法、装置、电子设备及计算机可读存储介质
CN114581978A (zh) * 2022-02-28 2022-06-03 支付宝(杭州)信息技术有限公司 人脸识别的方法和系统
CN114565979A (zh) * 2022-03-04 2022-05-31 中国科学技术大学 一种行人重识别方法、系统、设备及存储介质
CN114565979B (zh) * 2022-03-04 2024-03-29 中国科学技术大学 一种行人重识别方法、系统、设备及存储介质
CN115690443A (zh) * 2022-09-29 2023-02-03 北京百度网讯科技有限公司 特征提取模型训练方法、图像分类方法及相关装置

Also Published As

Publication number Publication date
WO2019120115A1 (fr) 2019-06-27
CN108197532B (zh) 2019-08-16
CN108197532A (zh) 2018-06-22

Similar Documents

Publication Publication Date Title
WO2019120115A1 (fr) Procédé et appareil de reconnaissance faciale et dispositif informatique
WO2019200781A1 (fr) Procédé et dispositif de reconnaissance de reçu, et support de stockage
WO2021159613A1 (fr) Procédé et appareil d'analyse de similitude sémantique de texte et dispositif informatique
WO2021218060A1 (fr) Procédé et dispositif de reconnaissance faciale basée sur l'apprentissage profond
WO2019109526A1 (fr) Procédé et dispositif de reconnaissance de l'âge de l'image d'un visage, et support de stockage
US20210374386A1 (en) Entity recognition from an image
WO2019200782A1 (fr) Procédé de classification de données d'échantillon, procédé d'entraînement de modèle, dispositif électronique et support de stockage
WO2022105118A1 (fr) Procédé et appareil d'identification d'état de santé basés sur une image, dispositif et support de stockage
CN111062871A (zh) 一种图像处理方法、装置、计算机设备及可读存储介质
CN108197592B (zh) 信息获取方法和装置
CN110503076B (zh) 基于人工智能的视频分类方法、装置、设备和介质
US20230087657A1 (en) Assessing face image quality for application of facial recognition
US11126827B2 (en) Method and system for image identification
JP2022141931A (ja) 生体検出モデルのトレーニング方法及び装置、生体検出の方法及び装置、電子機器、記憶媒体、並びにコンピュータプログラム
CN108550065B (zh) 评论数据处理方法、装置及设备
WO2020238353A1 (fr) Procédé et appareil de traitement de données, support de stockage et dispositif électronique
Haji et al. Real time face recognition system (RTFRS)
WO2023123923A1 (fr) Procédé d'identification de poids de corps humain, dispositif d'identification de poids de corps humain, dispositif informatique, et support
US8086616B1 (en) Systems and methods for selecting interest point descriptors for object recognition
WO2023019927A1 (fr) Procédé et appareil de reconnaissance faciale, support de stockage et dispositif électronique
WO2021051602A1 (fr) Procédé et système de reconnaissance faciale à base de mot de passe lu avec les lèvres, dispositif, et support de stockage
CN116311370A (zh) 一种基于多角度特征的牛脸识别方法及其相关设备
CN110175500B (zh) 指静脉比对方法、装置、计算机设备及存储介质
CN112070744A (zh) 一种人脸识别的方法、系统、设备及可读存储介质
CN109815353B (zh) 一种基于类中心的人脸检索方法及系统

Legal Events

Code Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 17935688; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
32PN Ep: public notification in the EP bulletin as the address of the addressee cannot be established (Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 21/09/2020))
122 Ep: PCT application non-entry in the European phase (Ref document number: 17935688; Country of ref document: EP; Kind code of ref document: A1)