CN111476145A - A convolutional neural network-based 1:N face recognition method


Info

Publication number
CN111476145A
Authority
CN
China
Prior art keywords
data set
face
features
convolutional neural
index
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010258239.4A
Other languages
Chinese (zh)
Inventor
周光亮
胡长晖
荆晓远
吴飞
虞建
景慎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Posts and Telecommunications filed Critical Nanjing University of Posts and Telecommunications
Priority to CN202010258239.4A priority Critical patent/CN111476145A/en
Publication of CN111476145A publication Critical patent/CN111476145A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; Face representation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50: Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/51: Indexing; Data structures therefor; Storage structures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The invention discloses a convolutional neural network-based 1:N face recognition method, which comprises the following steps: (1) construct a data set and crop the pictures to uniform sizes based on the constructed data set; (2) extract the face features in the constructed data set with two different convolutional neural networks, Facenet and InsightFace, and output the extracted features in a uniform one-dimensional structure; (3) construct a normalized dictionary from the feature faces extracted by Facenet and, based on the constructed dictionary, determine an index by iterating the homotopy algorithm for basis pursuit denoising; (4) compare the face features extracted by the different convolutional neural networks with nearest neighbor analysis and determine the corresponding indexes; (5) with an integration method, take the index occurring most often as the final index to obtain the recognition result. The invention can overcome the influence of other factors on the recognition effect and improve recognition accuracy and recognition effect.

Description

A convolutional neural network-based 1:N face recognition method
Technical Field
The invention belongs to the field of face recognition, and particularly relates to a convolutional neural network-based 1:N face recognition method.
Background
Face recognition based on deep convolutional neural networks has reached very high accuracy on 1:1 verification. For example, Facenet achieves a 99.65% recognition rate on the LFW dataset using the Inception ResNet v1 network combined with a Softmax loss function, and InsightFace achieves a 99.77% recognition rate on LFW 1:1 face verification through the LResNet100E-IR structure combined with the Angular Margin Loss function. Criminal face recognition is different: taking 1:N face recognition at a traffic intersection as an example, the influence of many factors must be handled and different recognition modes considered, because the face image captured in the actual scene must be compared with the photos of N different identity cards, with image quality, age difference and the like fully taken into account. Such cross-scene 1:N face recognition, which is closer to face retrieval, is therefore far more common in real applications than 1:1 verification, and considerably more challenging.
Disclosure of Invention
In order to solve the above problems, the present invention provides a convolutional neural network-based 1:N face recognition method.
To achieve the above technical purpose and technical effects, the invention is realized through the following technical scheme:
a convolutional neural network-based 1: the N face recognition method comprises the following steps:
(1) constructing a dataset and cropping the picture to a uniform size based on the constructed dataset, the constructed dataset comprising
1, data set: the face image is collected, and the face image is collected,
and N, data set: n images of different faces of a person,
and an auxiliary set: a plurality of face images of the same person which have been classified;
(2) extracting the face features in the constructed data set with two different convolutional neural networks, Facenet and InsightFace, and outputting the extracted features in a uniform one-dimensional structure;
(3) constructing a normalized dictionary from the feature faces extracted by Facenet and, based on the constructed normalized dictionary, determining an index by iterating the homotopy algorithm for basis pursuit denoising (BPDN);
(4) comparing the face features extracted by the different convolutional neural networks with nearest neighbor analysis and determining the corresponding indexes;
(5) with an integration method, taking the index occurring most often among those obtained in steps (3) and (4) as the final index, then finding the label of the corresponding N-data-set feature face through the final index and comparing it with the face feature label of the 1 data set to obtain the recognition result.
As a further improvement of the present invention, in step (1), MTCNN cropping is adopted to crop the pictures in the 1 data set and the N data set to two different sizes, and the pictures in the auxiliary set are cropped to either one of those two sizes.
As a further improvement of the present invention, network structure A is used to extract features from the pictures of the size shared by all three data sets, and network structure B is used to extract features from the pictures of the second size, present only in the 1 data set and the N data set.
As a further improvement of the invention, Facenet is used for feature extraction on the 1 data set, the N data set and the auxiliary set, and InsightFace is used for feature extraction on the 1 data set and the N data set.
As a further improvement of the invention, constructing the normalized dictionary in step (3) comprises the following steps:
(3-1) extracting the features of the auxiliary set with Facenet and summing and averaging the feature faces of each subject to obtain each subject's average face;
(3-2) subtracting each subject's average face from the corresponding auxiliary-set face features to obtain de-meaned face features, and normalizing them so the data lie between 0 and 1, the normalized matrix being obtained by the following formula:
$$X_{\mathrm{norm}} = \frac{X - X_{\min}}{X_{\max} - X_{\min}}$$
(3-3) normalizing the N data set by the same method as steps (3-1) and (3-2);
(3-4) concatenating the normalized auxiliary-set features and N-data-set features row-wise, with the N data set first and the auxiliary set after, to construct the normalized dictionary.
As a further improvement of the invention, step (5) further comprises: when the indexes obtained by the different methods all differ, or the vote is tied 2:2, the index obtained by the Facenet method is taken as the final result.
As a further improvement of the invention, in step (3), the face features in the 1 data set are traversed, the coefficient matrix of each image is obtained with the homotopy algorithm and the normalized dictionary, and index 4 is determined by the minimum coefficient.
As a further improvement of the invention, step (4) comprises performing nearest neighbor analysis with the 1-data-set and N-data-set face features extracted by InsightFace: the face features in the 1 data set are traversed, the cosine distances to all faces of the N data set are computed, and index 1 and index 2 are determined by the minimum value.
As a further improvement of the present invention, step (4) further comprises performing nearest neighbor analysis with the 1-data-set and N-data-set face features extracted by Facenet: the face features in the 1 data set are traversed, the Euclidean distances to all faces of the N data set are computed, and index 3 is determined by the minimum value.
The invention has the beneficial effects that:
(1) A normalized dictionary is constructed from convolutional features, realizing sparse-representation face recognition based on convolutional features. This obtains the convolutional features during recognition while also gaining the robustness contributed by the auxiliary set; combining convolutional features with sparse representation enables 1:N face recognition.
(2) The recognition method provided by the invention extracts features with multiple methods, compares them, and finally determines the final index by majority voting over the indexes obtained by those methods through an integration method, so the result has a certain fault tolerance, the influence of other factors on the recognition effect is overcome to some extent, and recognition accuracy and effect are improved.
Drawings
FIG. 1 is a schematic diagram of the structure of the invention;
FIG. 2 is a diagram illustrating MTCNN cropping effects;
FIG. 3 is a diagram illustrating the steps of constructing a normalized dictionary.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The following detailed description of the principles of the invention is provided in connection with the accompanying drawings.
In the specific implementation, considering personal privacy and the channels through which face images can be acquired, 1:N face data sets available for testing are relatively scarce, especially data sets pairing actual-scene face images with multiple identification photos. Therefore, a 1:N face data set is constructed on the basis of the LFW (Labeled Faces in the Wild) database, and an auxiliary set is constructed on the basis of the CASIA-WebFace (CASIA for short) database.
Step 1: construct the 1:N data sets from the LFW database;
step 1.1, selecting the first picture of all people in L FW data sets, and putting the first picture, which is 5749 pictures in total, under the same folder to serve as an N data set;
step 1.2, selecting a folder containing two or more faces in an L FW database, selecting a second face, and 1680 faces in total, constructing a 1 data set and reserving subfolders;
step 1.3: in the CASIA database, a certain number of face images are reserved for each person, a certain number of persons are selected to construct an auxiliary set, the size of the auxiliary set can be adjusted, for example, 2500 objects are selected, and 10 pictures of each person are used as the auxiliary set.
Step 2: cutting 1 data set and N data set in batches by MTCNN cutting, and setting the picture size to 160 × 160 and 112 × 112 respectively; and (5) cutting the auxiliary set, and setting the face picture as 160 x 160.
Step 3: extract features with the different convolutional neural networks;
step 3.1: the method comprises the following steps of utilizing an inclusion ResNet v1 network structure applying Facenet to combine with a softmax loss function to extract features of a data set 1, a data set N and an auxiliary set, wherein the input size of a picture is 160 × 160, and the output features are as follows: 1 x 512;
step 3.2, respectively extracting the characteristics of a 1 data set and an N data set by utilizing a network structure of L ResNet100E-IR and L eResnet50E-IR of the insight face and combining an Angular Margin L oss loss function, wherein the input size of the picture is 112 x 112, and the output characteristic is 1 x 512;
and 4, step 4: constructing a normalized dictionary by using the feature face extracted by Facenet, wherein the step of constructing the normalized dictionary is shown in fig. 3;
step 4.1: extracting the features of the auxiliary set by using Facenet, summing the feature faces of each object, averaging, and calculating the average face of each object;
step 4.2: subtracting the average face of each object from the face features of each object in the auxiliary set to obtain the average face feature;
step 4.3: and (3) carrying out normalization processing on the average removed human face features, and controlling the data size to be between 0 and 1, wherein the normalization formula is as follows:
Figure BDA0002438270920000041
wherein XmaxIs the maximum value of the matrix, XminIs the minimum value of the matrix, XnormIs made ofNormalized matrix
Step 4.4: apply the same normalization to the N-data-set features extracted by Facenet;
step 4.5: and splicing the auxiliary set characteristics and the N data set characteristics after the normalization processing according to lines, wherein the N data set is in front, and the auxiliary set is behind to construct a normalization dictionary.
Step 5: determine an index by iterating the Homotopy algorithm for Basis Pursuit De-Noising (BPDN); the BPDN method solves the following problem:
$$\min_{x}\ \tfrac{1}{2}\,\lVert y - Ax \rVert_2^2 \quad \text{subject to} \quad \lVert x \rVert_1 \le t$$
where A denotes the normalized dictionary, y a 1-data-set feature, x the coefficient matrix, and t a non-negative real parameter. Traverse the face features in the 1 data set, obtain the coefficient matrix of each image with the homotopy algorithm and the normalized dictionary, and determine index 4 by the minimum coefficient.
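One way to solve this in practice: scikit-learn's `lars_path` with `method="lasso"` traces the same solution path as the homotopy algorithm for the lasso form of BPDN. The sketch below assumes the dictionary layout of step 4.5 (N-data-set rows first) and follows the minimum-coefficient rule stated above literally:

```python
import numpy as np
from sklearn.linear_model import lars_path

N_SIZE = 5749  # number of N-data-set rows at the front of the dictionary

def bpdn_index(dictionary, probe, alpha_min=1e-2):
    """Solve the lasso form of BPDN along its homotopy (LARS-lasso) path.

    dictionary : (n_atoms, 512) normalized dictionary from step 4.5
    probe      : (512,) one face feature from the 1 data set
    """
    # Atoms are dictionary rows, so the design matrix is the transpose.
    _, _, coefs = lars_path(dictionary.T, probe,
                            method="lasso", alpha_min=alpha_min)
    x = coefs[:, -1]  # coefficients at the end of the path
    # Index 4 is determined by the minimum coefficient over the
    # N-data-set atoms, as described in the text.
    return int(np.argmin(x[:N_SIZE]))
```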
Step 6: determine the indexes for the different network structures and loss functions by nearest neighbor analysis:
step 6.1, utilizing a network structure of L ResNet100E-IR and L eResnet50E-IR of the insight face to obtain a 1 data set and N data set human face features by combining an Angular Margin L oss loss function to carry out nearest neighbor analysis, wherein the human face features in the 1 data set are traversed, the cosine distances between the human face features and all human faces in the N data set are calculated, and an index 1 and an index 2 are determined through the minimum value;
step 6.2: and (3) performing nearest neighbor analysis on the data set 1 and the data set N obtained by combining the initiation ResNet v1 network structure of Facenet with the softmax loss function: traversing the face features in the data set 1, calculating Euclidean distances between the face features and all faces in the data set N, and determining an index 3 through a minimum value;
and 7: and obtaining a final index by using an integration method, and taking the index with the largest occurrence number as the final index by adopting a hard integration strategy. The concrete expression is as follows:
$$\text{index} = M\left(C_{\text{Facenet}}, C_{\text{InsightFace}}, \ldots, C_{\text{BPDN}}\right)$$
where M denotes the mode and $(C_{\text{Facenet}}, C_{\text{InsightFace}}, \ldots, C_{\text{BPDN}})$ are the final index positions of the different convolutional features.
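A sketch of the hard vote, with the tie-breaking rule described below (all four indexes differing, or a 2:2 split, falls back to the Facenet index):

```python
from collections import Counter

def hard_vote(idx_facenet, idx_insight100, idx_insight50, idx_bpdn):
    """Hard voting over the four candidate indexes (step 7)."""
    votes = [idx_facenet, idx_insight100, idx_insight50, idx_bpdn]
    counts = Counter(votes).most_common()
    # All four differ, or a 2:2 tie: fall back to the Facenet index,
    # the strongest standalone method of the four.
    if counts[0][1] == 1 or (counts[0][1] == 2 and counts[1][1] == 2):
        return idx_facenet
    return counts[0][0]  # otherwise the mode wins
```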
Step 8: find the label of the corresponding N-data-set feature face through the final index and compare it with the 1-data-set face feature label; if they are the same, the recognition is correct. Count the correct recognitions and compute the final recognition rate. Table 1 shows the experimental results.
Table 1. 1:N face recognition comparison of the ensemble method

Method                     Accuracy
BPDN                       81.20%
Facenet                    81.67%
InsightFace (50E-IR)       81.49%
InsightFace (100E-IR)      81.54%
Ensemble method            82.98%
In the BPDN method, with an auxiliary set of 2500 subjects and 10 pictures each, 1364 faces are recognized, a recognition rate of 81.20%. In the Facenet method, 1372 of the 1680 faces in the 1 data set are correctly recognized among the 5749 faces of the N data set, a recognition rate of 81.67%. With InsightFace combined with the LResNet50E-IR network structure, 1369 faces in the 1 data set are correctly recognized, a rate of 81.49%; with InsightFace combined with LResNet100E-IR, 1370 faces in the 1 data set are correctly recognized, a rate of 81.54%.
The final index is determined by majority voting over the indexes obtained from the four convolutional neural networks. If the four indexes all differ, or the result is a 2:2 tie, the index from the Facenet method is adopted, since it has the highest recognition rate of the four methods. After the final output is determined by majority voting, 1394 faces are recognized, a recognition rate of 82.98%. The final result of the proposed integration method is thus higher than each of the four methods above.
The foregoing shows and describes the general principles and broad features of the present invention and its advantages. It will be understood by those skilled in the art that the invention is not limited to the embodiments described above, which are given only to illustrate its principles; various changes and modifications may be made without departing from the spirit and scope of the invention, and such changes and modifications fall within the scope of the invention as claimed. The scope of the invention is defined by the appended claims and their equivalents.

Claims (9)

1. A convolutional neural network-based 1:N face recognition method, characterized by comprising the following steps:
(1) constructing a data set and cropping the pictures to uniform sizes based on the constructed data set, the constructed data set comprising
the 1 data set: the collected face images,
the N data set: N face images of different persons,
and the auxiliary set: a plurality of face images of the same person that have already been classified;
(2) extracting the face features in the constructed data set with two different convolutional neural networks, Facenet and InsightFace, and outputting the extracted features in a uniform one-dimensional structure;
(3) constructing a normalized dictionary from the feature faces extracted by Facenet and, based on the constructed normalized dictionary, determining an index by iterating the homotopy algorithm for basis pursuit denoising (BPDN);
(4) comparing the face features extracted by the different convolutional neural networks with nearest neighbor analysis and determining the corresponding indexes;
(5) with an integration method, taking the index occurring most often among those obtained in steps (3) and (4) as the final index, then finding the label of the corresponding N-data-set feature face through the final index and comparing it with the face feature label of the 1 data set to obtain the recognition result.
2. The convolutional neural network-based 1:N face recognition method according to claim 1, characterized in that: in step (1), MTCNN cropping is adopted to crop the pictures in the 1 data set and the N data set to two different sizes, and the pictures in the auxiliary set are cropped to either one of those two sizes.
3. The convolutional neural network-based 1:N face recognition method according to claim 1, characterized in that: network structure A is used to extract features from the pictures of the size shared by all three data sets, and network structure B is used to extract features from the pictures of the second size, present only in the 1 data set and the N data set.
4. The convolutional neural network-based 1:N face recognition method according to claim 1, characterized in that: Facenet is used for feature extraction on the 1 data set, the N data set and the auxiliary set, and InsightFace is used for feature extraction on the 1 data set and the N data set.
5. The convolutional neural network-based 1:N face recognition method according to claim 1, characterized in that constructing the normalized dictionary in step (3) comprises the following steps:
(3-1) extracting the features of the auxiliary set with Facenet and summing and averaging the feature faces of each subject to obtain each subject's average face;
(3-2) subtracting each subject's average face from the corresponding auxiliary-set face features to obtain de-meaned face features, and normalizing them so the data lie between 0 and 1, the normalized matrix being obtained by the following formula:
$$X_{\mathrm{norm}} = \frac{X - X_{\min}}{X_{\max} - X_{\min}}$$
(3-3) normalizing the N data set by the same method as steps (3-1) and (3-2);
(3-4) concatenating the normalized auxiliary-set features and N-data-set features row-wise, with the N data set first and the auxiliary set after, to construct the normalized dictionary.
6. The convolutional neural network-based 1:N face recognition method according to claim 1, characterized in that step (5) further comprises: when the indexes obtained by the different methods all differ, or the vote is tied 2:2, the index obtained by the Facenet method is taken as the final result.
7. The convolutional neural network-based 1:N face recognition method according to claim 1, characterized in that: in step (3), the face features in the 1 data set are traversed, the coefficient matrix of each image is obtained with the homotopy algorithm and the normalized dictionary, and index 4 is determined by the minimum coefficient.
8. The convolutional neural network-based 1:N face recognition method according to claim 1, characterized in that step (4) comprises performing nearest neighbor analysis with the 1-data-set and N-data-set face features extracted by InsightFace: traversing the face features of the 1 data set, computing the cosine distances to all faces of the N data set, and determining index 1 and index 2 by the minimum value.
9. The convolutional neural network-based 1:N face recognition method according to claim 1, characterized in that step (4) further comprises performing nearest neighbor analysis with the 1-data-set and N-data-set face features extracted by Facenet: traversing the face features of the 1 data set, computing the Euclidean distances to all faces of the N data set, and determining index 3 by the minimum value.
CN202010258239.4A 2020-04-03 2020-04-03 A convolutional neural network-based 1:N face recognition method Pending CN111476145A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010258239.4A CN111476145A (en) 2020-04-03 2020-04-03 A convolutional neural network-based 1:N face recognition method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010258239.4A CN111476145A (en) 2020-04-03 2020-04-03 A convolutional neural network-based 1:N face recognition method

Publications (1)

Publication Number Publication Date
CN111476145A true CN111476145A (en) 2020-07-31

Family

ID=71750575

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010258239.4A Pending CN111476145A (en) 2020-04-03 2020-04-03 A convolutional neural network-based 1:N face recognition method

Country Status (1)

Country Link
CN (1) CN111476145A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107330383A (en) * 2017-06-18 2017-11-07 天津大学 A kind of face identification method based on depth convolutional neural networks
CN108898105A (en) * 2018-06-29 2018-11-27 成都大学 It is a kind of based on depth characteristic and it is sparse compression classification face identification method
CN110378382A (en) * 2019-06-18 2019-10-25 华南师范大学 Novel quantization transaction system and its implementation based on deeply study
CN110807420A (en) * 2019-10-31 2020-02-18 天津大学 Facial expression recognition method integrating feature extraction and deep learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
GUANGLIANG ZHOU et al.: "The face recognition based on ensemble method" *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112364784A (en) * 2020-11-13 2021-02-12 湖北师范大学 Rapid sparse representation single-sample face recognition method

Similar Documents

Publication Publication Date Title
CN108764041B (en) Face recognition method for lower shielding face image
US11017215B2 (en) Two-stage person searching method combining face and appearance features
US11263435B2 (en) Method for recognizing face from monitoring video data
CN109344727B (en) Identity card text information detection method and device, readable storage medium and terminal
EP3229171A1 (en) Method and device for determining identity identifier of human face in human face image, and terminal
EP3149611A1 (en) Learning deep face representation
JP2006338313A (en) Similar image retrieving method, similar image retrieving system, similar image retrieving program, and recording medium
CN111126307B (en) Small sample face recognition method combining sparse representation neural network
US10216786B2 (en) Automatic identity enrolment
CN110570443B (en) Image linear target extraction method based on structural constraint condition generation model
CN108805027B (en) Face recognition method under low resolution condition
CN116030396B (en) Accurate segmentation method for video structured extraction
AU2011252761A1 (en) Automatic identity enrolment
Zhang et al. Face detection algorithm based on improved AdaBoost and new haar features
CN111368867A (en) Archive classification method and system and computer readable storage medium
CN110991554B (en) Improved PCA (principal component analysis) -based deep network image classification method
CN111476145A (en) A convolutional neural network-based 1:N face recognition method
CN112257689A (en) Training and recognition method of face recognition model, storage medium and related equipment
CN112200174A (en) Face frame detection method and module and living body face verification method and system
Wijaya et al. Real time face recognition using DCT coefficients based face descriptor
Liu et al. Finger-vein recognition with modified binary tree model
CN114049675A (en) Facial expression recognition method based on light-weight two-channel neural network
Divakar Multimodal biometric system using index based algorithm for fast search
Kundu et al. New hamming score based correlation method for fingerprint identification
CN113312959B (en) Sign language video key frame sampling method based on DTW distance

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination