CN111310732A - High-precision face authentication method, system, computer equipment and storage medium - Google Patents

High-precision face authentication method, system, computer equipment and storage medium

Info

Publication number
CN111310732A
Authority
CN
China
Prior art keywords
face
layer
image
convolutional
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010196121.3A
Other languages
Chinese (zh)
Inventor
龚汝洪
杜振锋
周晓清
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Etonedu Co ltd
Original Assignee
Guangdong Etonedu Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Etonedu Co ltd filed Critical Guangdong Etonedu Co ltd
Priority to CN202010196121.3A priority Critical patent/CN111310732A/en
Publication of CN111310732A publication Critical patent/CN111310732A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Abstract

The invention discloses a high-precision face authentication method, system, computer device and storage medium. The method comprises the following steps: performing face detection on an image to be detected with a multi-task cascaded convolutional neural network to obtain a face image; extracting features from the face image with a 101-layer residual network to obtain a 512-dimensional face depth feature, used as the face feature of the person to be authenticated; and comparing the face feature of the person to be authenticated with the face features of persons in a database to obtain a face authentication result. The method has strong robustness and high precision: it suppresses the influence of noise such as profile views and positional offsets on the face feature descriptor, quickly locates the face region, and can accurately authenticate personal identity.

Description

High-precision face authentication method, system, computer equipment and storage medium
Technical Field
The invention relates to a high-precision face authentication method, a high-precision face authentication system, computer equipment and a storage medium, and belongs to the field of deep learning and image processing.
Background
In recent years, the development of the internet has brought people into the era of big data and with it an explosive growth in the amount of information. Traditional dynamic passwords suffer from problems such as information leakage, theft and loss. Biometric recognition, by contrast, is difficult to counterfeit and unique to each person, and is therefore regarded as the core of next-generation information security. Face recognition, one such biometric technology, perceives and identifies people mainly through optical imaging of the face, and is currently applied in fields such as criminal investigation, surveillance systems, clock-in attendance and secure payment.
The classic face recognition pipeline is divided into three main steps: face detection and facial landmark localization, feature extraction, and classifier design; the performance of feature selection, feature extraction and the classifier must be considered together. In recent years, with the introduction of deep convolutional neural networks (DCNN), face recognition accuracy has improved by leaps and bounds. Although face recognition is now widely applied in many fields, the recognition rate is still affected by shooting pose, illumination differences, expression changes, occlusions, ageing and the like, so many difficulties remain to be overcome.
Traditional face recognition methods, such as recognition based on geometric features, hidden Markov models, Bayesian decision theory or support vector machines, lose much of their accuracy under varying illumination or when the face is slightly occluded or turned. In real-time scenarios in particular (such as face-based attendance for teachers and students in schools, or security checks at high-speed rail and railway stations), they are easily disturbed by environmental noise, which can lead to misidentification.
Disclosure of Invention
In view of the above, the present invention provides a high-precision face authentication method, system, computer device and storage medium with strong robustness and high precision, which suppress the influence of noise such as profile views and positional offsets on the face feature descriptor, quickly locate the face region, and can accurately authenticate the identity of a person.
The invention aims to provide a high-precision face authentication method.
A second object of the present invention is to provide a high-precision face authentication system.
It is a third object of the invention to provide a computer apparatus.
It is a fourth object of the present invention to provide a storage medium.
The first purpose of the invention can be achieved by adopting the following technical scheme:
a high-precision face authentication method, comprising:
carrying out face detection on an image to be detected by utilizing a multitask cascade convolution neural network to obtain a face image;
extracting features of the face image based on a 101-layer residual error network to obtain 512-dimensional face depth features serving as face features of a person to be authenticated;
and comparing the face characteristics of the person to be authenticated with the face characteristics of the person in the database to obtain a face authentication result.
Further, the performing face detection on the image to be detected by using the multitask cascade convolution neural network to obtain a face image specifically includes:
zooming the image to be detected into images with different sizes according to different zooming ratios to form a characteristic pyramid of the images;
inputting the feature pyramid into a candidate proposing network to obtain a first face classification result, a first candidate frame and a first face contour key point;
returning the first candidate frame to the image to be detected in a coordinate mode, and inputting the first candidate frame into an improvement network to obtain a second face classification result, a second candidate frame and a second face contour key point;
and returning the second candidate frame to the image to be detected in a coordinate mode, inputting the second candidate frame to an output network, and obtaining a third face classification result, a third candidate frame and a third face contour key point so as to obtain the face image.
Further, the inputting the feature pyramid into the candidate proposed network to obtain a first face classification result, a first candidate frame and a first face contour key point specifically includes:
inputting the feature pyramid into a first layer of the candidate proposed network, and generating 10 feature maps of 5 × 5 through 10 convolution kernels of 3 × 3 and a maximum pooling operation of 2 × 2;
inputting the 10 feature maps of 5 × 5 into a second layer of the candidate proposed network, and generating 16 feature maps of 3 × 3 through 16 convolution kernels of 3 × 3 × 10;
inputting the 16 feature maps of 3 × 3 into a third layer of the candidate proposed network, and generating 32 feature maps of 1 × 1 through 32 convolution kernels of 3 × 3 × 16;
outputting the first face classification result from the 32 feature maps of 1 × 1 through 2 convolution kernels of 1 × 1 × 32; outputting the first candidate frame through 4 convolution kernels of 1 × 1 × 32; and outputting the first face contour key points through 10 convolution kernels of 1 × 1 × 32.
Further, the returning the first candidate frame to the image to be detected in the form of coordinates and inputting the first candidate frame to the improvement network to obtain a second face classification result, a second candidate frame and a second face contour key point specifically includes:
returning the first candidate frame to the image to be detected in coordinate form, cropping the face region image from the image to be detected and converting it into a 24 × 24 face region image, inputting it into a first layer of the improvement network, and generating 28 feature maps of 11 × 11 through 28 convolution kernels of 3 × 3 and a maximum pooling operation of 3 × 3;
inputting the 28 feature maps of 11 × 11 into a second layer of the improvement network, and generating 48 feature maps of 4 × 4 through 48 convolution kernels of 3 × 3 × 28 and a maximum pooling operation of 3 × 3;
inputting the 48 feature maps of 4 × 4 into a third layer of the improvement network, and generating 64 feature maps of 3 × 3 through 64 convolution kernels of 2 × 2 × 48;
and mapping the 3 × 3 × 64 feature maps to a fully connected layer of size 128, and outputting the second face classification result, the second candidate frame and the second face contour key points.
Further, the returning the second candidate frame to the image to be detected in coordinate form and inputting it into the output network to obtain a third face classification result, a third candidate frame and a third face contour key point specifically includes:
returning the second candidate frame to the image to be detected in coordinate form, cropping the face region image from the image to be detected and converting it into a 48 × 48 face region image, inputting it into a first layer of the output network, and generating 32 feature maps of 23 × 23 through 32 convolution kernels of 3 × 3 and a maximum pooling operation of 3 × 3;
inputting the 32 feature maps of 23 × 23 into a second layer of the output network, and generating 64 feature maps of 10 × 10 through 64 convolution kernels of 3 × 3 × 32 and a maximum pooling operation of 3 × 3;
inputting the 64 feature maps of 10 × 10 into a third layer of the output network, and generating 64 feature maps of 4 × 4 through 64 convolution kernels of 3 × 3 × 64 and a maximum pooling operation of 3 × 3;
inputting the 64 feature maps of 4 × 4 into a fourth layer of the output network, and generating 128 feature maps of 3 × 3 through 128 convolution kernels of 2 × 2 × 64;
and mapping the 3 × 3 × 128 feature maps to a fully connected layer of size 256, and outputting the third face classification result, the third candidate frame and the third face contour key points.
Further, the feature extraction is performed on the face image based on the 101-layer residual error network to obtain 512-dimensional face depth features, and the 512-dimensional face depth features are used as face features of a person to be authenticated, and specifically include:
inputting the face image into the 101-layer residual network, performing feature extraction on the face image layer by layer through five groups of convolutional layers in sequence, and outputting a 512-dimensional face depth feature through a fully connected layer as the face feature of the person to be authenticated;
wherein the five groups of convolutional layers are respectively a first group, a second group, a third group, a fourth group and a fifth group of convolutional layers;
the first group of convolutional layers comprises a first convolutional layer and a maximum pooling layer, and the first convolutional layer comprises 64 convolution kernels of 7 × 7;
the second group of convolutional layers comprises three block convolutions, each block convolution comprising a second convolutional layer with 64 1 × 1 convolution kernels, a third convolutional layer with 64 3 × 3 convolution kernels, and a fourth convolutional layer with 256 1 × 1 convolution kernels;
the third group of convolutional layers comprises four block convolutions, each block convolution comprising a fifth convolutional layer with 128 1 × 1 convolution kernels, a sixth convolutional layer with 128 3 × 3 convolution kernels, and a seventh convolutional layer with 512 1 × 1 convolution kernels;
the fourth group of convolutional layers comprises twenty-three block convolutions, each block convolution comprising an eighth convolutional layer with 256 1 × 1 convolution kernels, a ninth convolutional layer with 256 3 × 3 convolution kernels, and a tenth convolutional layer with 1024 1 × 1 convolution kernels;
the fifth group of convolutional layers comprises three block convolutions, each block convolution comprising an eleventh convolutional layer with 512 1 × 1 convolution kernels, a twelfth convolutional layer with 512 3 × 3 convolution kernels, and a thirteenth convolutional layer with 2048 1 × 1 convolution kernels;
the training loss function adopted by the 101-layer residual error network is an additional angular amplitude loss function, and the following formula is adopted:
$$L=-\frac{1}{N}\sum_{i=1}^{N}\log\frac{e^{s\cos(\theta_{y_i}+m)}}{e^{s\cos(\theta_{y_i}+m)}+\sum_{j=1,\,j\neq y_i}^{n}e^{s\cos\theta_j}}$$

wherein N is the batch size, n is the number of classes, y_i is the class label of the i-th sample, θ_j is the angle between the sample feature and the class-j weight vector, m is the additive angular margin penalty, and s is the feature scale factor.
Further, the comparing the human face features of the person to be authenticated with the human face features of the person in the database to obtain the personal authentication result specifically includes:
measuring the similarity between the human face features of the person to be authenticated and the human face features of the database personnel by using the Euclidean distance;
if the Euclidean distance obtained by the feature comparison is smaller than a preset threshold value, returning the corresponding database personnel ID, and outputting a personnel authentication result of successful comparison;
and if the Euclidean distances obtained after the characteristics of all the personnel in the database are compared are larger than or equal to a preset threshold value, outputting the result of the personnel authentication with failed comparison.
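Taken together, the scheme above is a three-stage pipeline: detect a face, embed it as a 512-dimensional feature, and compare by distance. The following is a minimal sketch of that pipeline in Python; it borrows the facenet-pytorch library as a stand-in (its MTCNN matches the detector described here, and its InceptionResnetV1 also emits 512-dimensional features, whereas the invention itself uses a 101-layer residual network trained with an additive angular margin loss). File names, identities and the threshold value are illustrative assumptions.

```python
# A minimal end-to-end sketch of the three steps: detect -> embed -> compare.
# facenet-pytorch is a stand-in embedder, not the patented 101-layer network.
import torch
from PIL import Image
from facenet_pytorch import MTCNN, InceptionResnetV1

mtcnn = MTCNN(image_size=160)                        # multi-task cascaded CNN detector
embedder = InceptionResnetV1(pretrained='vggface2').eval()

def embed(path: str) -> torch.Tensor:
    face = mtcnn(Image.open(path))                   # aligned face crop, or None
    if face is None:
        raise ValueError(f'no face detected in {path}')
    with torch.no_grad():
        return embedder(face.unsqueeze(0)).squeeze(0)  # 512-dimensional feature

# Hypothetical enrolled database and probe image.
database = {'id_001': embed('alice.jpg'), 'id_002': embed('bob.jpg')}
probe = embed('probe.jpg')

THRESHOLD = 1.0                                      # preset distance threshold (tunable)
distances = {pid: torch.dist(probe, feat).item() for pid, feat in database.items()}
best_id, best_dist = min(distances.items(), key=lambda kv: kv[1])
print(best_id if best_dist < THRESHOLD else 'authentication failed')
```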
The second purpose of the invention can be achieved by adopting the following technical scheme:
a high accuracy face authentication system, the system comprising:
the face detection module is used for carrying out face detection on an image to be detected by utilizing the multitask cascade convolution neural network to obtain a face image;
the human face feature extraction module is used for extracting features of the human face image based on a 101-layer residual error network to obtain 512-dimensional human face depth features serving as human face features of the person to be authenticated;
and the face authentication module is used for comparing the face characteristics of the person to be authenticated with the face characteristics of the person in the database to obtain a face authentication result.
The third purpose of the invention can be achieved by adopting the following technical scheme:
a computer device comprises a processor and a memory for storing a program executable by the processor, and when the processor executes the program stored by the memory, the high-precision face authentication method is realized.
The fourth purpose of the invention can be achieved by adopting the following technical scheme:
a storage medium stores a program that, when executed by a processor, implements the above-described high-precision face authentication method.
Compared with the prior art, the invention has the following beneficial effects:
1. The method uses a multi-task cascaded convolutional neural network to perform face detection on the image to be detected to obtain a face image, then extracts features from the face image with a 101-layer residual network to obtain a 512-dimensional face depth feature, and finally compares this depth feature with the face features of persons in the database to realize face authentication.
2. The 101-layer residual network iteratively trains its model parameters with an additive angular margin loss in place of the traditional softmax loss; this loss enhances intra-class sample similarity and inter-class sample diversity, increasing the distance between classes and thereby maximizing the classification margin.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the structures shown in the drawings without creative efforts.
Fig. 1 is a flowchart of a high-precision face authentication method according to embodiment 1 of the present invention.
Fig. 2 is a frame diagram of face detection in embodiment 1 of the present invention.
Fig. 3 is a flow chart of face detection in embodiment 1 of the present invention.
Fig. 4 is a 101-layer residual network framework diagram according to embodiment 1 of the present invention.
Fig. 5 is a flowchart of face authentication in embodiment 1 of the present invention.
Fig. 6 is a block diagram of a high-precision face authentication system according to embodiment 2 of the present invention.
Fig. 7 is a block diagram of a face detection module according to embodiment 2 of the present invention.
Fig. 8 is a block diagram of a face authentication module according to embodiment 2 of the present invention.
Fig. 9 is a block diagram of a computer device according to embodiment 3 of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer and more complete, the technical solutions in the embodiments of the present invention will be described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some embodiments of the present invention, but not all embodiments, and all other embodiments obtained by a person of ordinary skill in the art without creative efforts based on the embodiments of the present invention belong to the protection scope of the present invention.
Example 1:
as shown in fig. 1, the embodiment provides a high-precision face authentication method, which can be applied to the fields of face attendance by teachers and students in schools, security check of high-speed rails or railway station personnel, and the like, and includes the following steps:
s101, carrying out face detection on an image to be detected by utilizing a multitask cascade convolution neural network to obtain a face image.
The face detection mainly uses a multi-task cascaded convolutional neural network (MTCNN) to generate candidate face windows and select the best among them.
Further, as shown in fig. 2 and 3, the step S101 specifically includes:
s1011, zooming the image to be detected into images with different sizes according to different zooming proportions to form a characteristic pyramid of the image.
And S1012, inputting the feature pyramid into the candidate proposing network to obtain a first face classification result, a first candidate frame and a first face contour key point.
The candidate proposed network (Pnet) is a fully convolutional network whose input can be an image of any size, used to pass in the image to be inferred. In that case its output is not a 1 × 1 feature map but a W × H map, where each cell of the map corresponds to a first face classification result, a first candidate frame (coordinate information) and first face contour key points.
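Before Pnet runs, step S1011 builds the image pyramid that feeds it. A small sketch follows; the 0.709 scale factor and the 12-pixel minimum (Pnet's input window) are the conventional MTCNN choices, assumed here since they are not fixed above.

```python
# Sketch of the feature pyramid of S1011: shrink the image by a fixed factor
# until it falls below PNet's 12x12 input window. Factor 0.709 is an assumption.
from PIL import Image

def build_pyramid(img: Image.Image, min_size: int = 12, factor: float = 0.709):
    scales, s = [], 1.0
    w, h = img.size
    while min(w, h) * s >= min_size:
        scales.append(s)
        s *= factor
    return [img.resize((int(w * sc), int(h * sc))) for sc in scales]
```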
Further, step S1012 specifically includes:
(1) The feature pyramid is input into the first layer of the candidate proposed network, and 10 feature maps of 5 × 5 are generated by 10 convolution kernels of 3 × 3 and a 2 × 2 max pooling operation.
(2) The 10 feature maps of 5 × 5 are input into the second layer of the candidate proposed network, and 16 feature maps of 3 × 3 are generated by 16 convolution kernels of 3 × 3 × 10.
(3) The 16 feature maps of 3 × 3 are input into the third layer of the candidate proposed network, and 32 feature maps of 1 × 1 are generated by 32 convolution kernels of 3 × 3 × 16.
(4) From the 32 feature maps of 1 × 1, 2 feature maps of 1 × 1 are generated by 2 convolution kernels of 1 × 1 × 32 for two-class face classification, i.e. the first face classification result (the probability that the window contains a face) is output; 4 feature maps of 1 × 1 are generated by 4 convolution kernels of 1 × 1 × 32 for candidate frame regression, i.e. the first candidate frame is output; and 10 feature maps of 1 × 1 are generated by 10 convolution kernels of 1 × 1 × 32 for locating the face contour key points, i.e. the first face contour key points are output. The face contour key points include the eyebrows, ears, eyes, mouth, nose and the like.
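These four layers map directly onto a small PyTorch module. The sketch below reproduces the kernel counts and sizes just described; the PReLU activations are an assumption carried over from the original MTCNN paper, since no activation is named here.

```python
# PNet sketch: fully convolutional, so any input size works; on a 12x12 window
# the three heads each produce a 1x1 output cell.
import torch
import torch.nn as nn

class PNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 10, kernel_size=3), nn.PReLU(10),   # 12x12 -> 10x10
            nn.MaxPool2d(2, stride=2, ceil_mode=True),       # -> 5x5
            nn.Conv2d(10, 16, kernel_size=3), nn.PReLU(16),  # -> 3x3
            nn.Conv2d(16, 32, kernel_size=3), nn.PReLU(32),  # -> 1x1
        )
        self.cls = nn.Conv2d(32, 2, kernel_size=1)    # face / non-face
        self.box = nn.Conv2d(32, 4, kernel_size=1)    # candidate frame regression
        self.lmk = nn.Conv2d(32, 10, kernel_size=1)   # 5 contour keypoints (x, y)

    def forward(self, x):
        f = self.backbone(x)
        return self.cls(f), self.box(f), self.lmk(f)

probs, boxes, landmarks = PNet()(torch.randn(1, 3, 12, 12))  # each 1 x C x 1 x 1
```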
And S1013, returning the first candidate frame to the image to be detected in a coordinate mode, and inputting the first candidate frame to an improvement network to obtain a second face classification result, a second candidate frame and a second face contour key point.
Training data for the improved network (Rnet) is mainly generated by candidate frames and face contour key points (i.e. the first candidate frame and the first face contour key points) output by Pnet, and the data generation manner of the face contour key points of Rnet is similar to Pnet, but the size of the corresponding candidate frame is 24 × 24.
Further, step S1013 specifically includes:
(1) The first candidate frame is returned to the image to be detected in coordinate form, the face region image is cropped from the image to be detected and converted into a 24 × 24 face region image, which is input into the first layer of the improvement network; 28 feature maps of 11 × 11 are generated by 28 convolution kernels of 3 × 3 and a 3 × 3 max pooling operation.
The face region is cropped from the image to be detected as a square whose side equals the longer side of the candidate frame, which avoids deformation and retains more detail.
(2) The 28 feature maps of 11 × 11 are input into the second layer of the improvement network, and 48 feature maps of 4 × 4 are generated by 48 convolution kernels of 3 × 3 × 28 and a 3 × 3 max pooling operation.
(3) The 48 feature maps of 4 × 4 are input into the third layer of the improvement network, and 64 feature maps of 3 × 3 are generated by 64 convolution kernels of 2 × 2 × 48.
(4) The 3 × 3 × 64 feature maps are mapped to a fully connected layer of size 128, and the second face classification result, the second candidate frame and the second face contour key points are output.
A fully connected layer of output size 2 performs the two-class face classification, i.e. outputs the second face classification result; a fully connected layer of output size 4 performs candidate frame regression, i.e. outputs the second candidate frame; and a fully connected layer of output size 10 locates the face contour key points, i.e. outputs the second face contour key points.
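A corresponding sketch of the improvement network with the sizes above (24 × 24 input, 28/48/64 kernels, 128-wide fully connected layer); PReLU activations are again assumed from the MTCNN paper.

```python
# RNet sketch: refines PNet's candidates from 24x24 crops.
import torch
import torch.nn as nn

class RNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 28, 3), nn.PReLU(28),                 # 24 -> 22
            nn.MaxPool2d(3, stride=2, ceil_mode=True),         # -> 11
            nn.Conv2d(28, 48, 3), nn.PReLU(48),                # -> 9
            nn.MaxPool2d(3, stride=2, ceil_mode=True),         # -> 4
            nn.Conv2d(48, 64, 2), nn.PReLU(64),                # -> 3
            nn.Flatten(),
            nn.Linear(64 * 3 * 3, 128), nn.PReLU(128),
        )
        self.cls = nn.Linear(128, 2)    # second face classification result
        self.box = nn.Linear(128, 4)    # second candidate frame
        self.lmk = nn.Linear(128, 10)   # second face contour key points

    def forward(self, x):
        f = self.backbone(x)
        return self.cls(f), self.box(f), self.lmk(f)

probs, boxes, landmarks = RNet()(torch.randn(1, 3, 24, 24))
```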
And S1014, returning the second candidate frame to the image to be detected in a coordinate mode, inputting the second candidate frame to an output network, and obtaining a third face classification result, a third candidate frame and a third face contour key point so as to obtain the face image.
The output network (Onet) is the last network in the multi-task cascaded convolutional neural network and produces the network's final output. Its training data are generated in a manner similar to Rnet's, from the candidate frames and face contour key points output by Rnet (i.e. the second candidate frame and the second face contour key points).
Further, step S1014 specifically includes:
(1) The second candidate frame is returned to the image to be detected in coordinate form, the face region image is cropped from the image to be detected and converted into a 48 × 48 face region image, which is input into the first layer of the output network; 32 feature maps of 23 × 23 are generated by 32 convolution kernels of 3 × 3 and a 3 × 3 max pooling operation.
(2) The 32 feature maps of 23 × 23 are input into the second layer of the output network, and 64 feature maps of 10 × 10 are generated by 64 convolution kernels of 3 × 3 × 32 and a 3 × 3 max pooling operation.
(3) The 64 feature maps of 10 × 10 are input into the third layer of the output network, and 64 feature maps of 4 × 4 are generated by 64 convolution kernels of 3 × 3 × 64 and a 3 × 3 max pooling operation.
(4) The 64 feature maps of 4 × 4 are input into the fourth layer of the output network, and 128 feature maps of 3 × 3 are generated by 128 convolution kernels of 2 × 2 × 64.
(5) The 3 × 3 × 128 feature maps are mapped to a fully connected layer of size 256, and the third face classification result, the third candidate frame and the third face contour key points are output.
A fully connected layer of output size 2 performs the two-class face classification, i.e. outputs the third face classification result; a fully connected layer of output size 4 performs candidate frame regression, i.e. outputs the third candidate frame; and a fully connected layer of output size 10 locates the face contour key points, i.e. outputs the third face contour key points.
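An Onet sketch with the sizes above (48 × 48 input, 32/64/64/128 kernels, 256-wide fully connected layer); the PReLU activations are once more an assumption.

```python
# ONet sketch: the final stage, operating on 48x48 crops.
import torch
import torch.nn as nn

class ONet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3), nn.PReLU(32),             # 48 -> 46
            nn.MaxPool2d(3, stride=2, ceil_mode=True),     # -> 23
            nn.Conv2d(32, 64, 3), nn.PReLU(64),            # -> 21
            nn.MaxPool2d(3, stride=2, ceil_mode=True),     # -> 10
            nn.Conv2d(64, 64, 3), nn.PReLU(64),            # -> 8
            nn.MaxPool2d(3, stride=2, ceil_mode=True),     # -> 4
            nn.Conv2d(64, 128, 2), nn.PReLU(128),          # -> 3
            nn.Flatten(),
            nn.Linear(128 * 3 * 3, 256), nn.PReLU(256),
        )
        self.cls = nn.Linear(256, 2)    # third face classification result
        self.box = nn.Linear(256, 4)    # third candidate frame
        self.lmk = nn.Linear(256, 10)   # third face contour key points

    def forward(self, x):
        f = self.backbone(x)
        return self.cls(f), self.box(f), self.lmk(f)

probs, boxes, landmarks = ONet()(torch.randn(1, 3, 48, 48))
```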
S102, extracting the features of the face image based on the 101-layer residual error network to obtain 512-dimensional face depth features serving as the face features of the person to be authenticated.
In this embodiment, a 101-layer residual network (Resnet-101) is used as the basic framework to extract features from the face image obtained in step S101, yielding a 512-dimensional face depth feature.
A convolutional neural network (CNN) can extract low-level, mid-level and high-level features; the more layers the network has, the richer the features of different levels it can extract, and the more abstract and semantically meaningful the features from its deeper layers become. Simply increasing the number of layers, however, causes gradients to vanish or explode, and the residual network (Resnet) resists this well. Its main idea is to add shortcut connections to the network, in the spirit of highway networks: earlier architectures applied a nonlinear transformation to their input, whereas highway networks allow a proportion of the output of previous layers to be preserved, and ResNet likewise allows the original input information to be passed directly to later layers. The framework of the 101-layer residual network is shown in Fig. 4 and comprises five groups of convolutional layers and a fully connected layer. The face image is input into the 101-layer residual network, features are extracted layer by layer through the five groups of convolutional layers in sequence, and the fully connected layer outputs a 512-dimensional face depth feature as the face feature of the person to be authenticated.
The five groups of convolutional layers are the first group conv1, the second group conv2_x, the third group conv3_x, the fourth group conv4_x and the fifth group conv5_x.
The first group of convolutional layers are input layers and comprise a first convolutional layer and a maximum pooling layer, the convolutional kernel size of the first convolutional layer is 7 x 7, the number of the convolutional layers is 64, the step size is 2, and the step size of the maximum pooling layer is 2.
The second group of convolutional layers are feature extraction layers and comprise three block convolutions, each block convolution comprises a second convolutional layer, a third convolutional layer and a fourth convolutional layer, the convolutional kernel size of the second convolutional layer is 1 x 1, the number of the convolutional kernels is 64, the convolutional kernel size of the third convolutional layer is 3 x 3, the number of the convolutional kernels is 64, the convolutional kernel size of the fourth convolutional layer is 1 x 1, and the number of the convolutional kernels is 256.
The third group of convolutional layers are feature extraction layers and comprise four block convolutions, each block convolution comprises a fifth convolutional layer, a sixth convolutional layer and a seventh convolutional layer, the convolutional kernel size of the fifth convolutional layer is 1 × 1, the number of the convolutional layers is 128, the convolutional kernel size of the sixth convolutional layer is 3 × 3, the number of the convolutional layers is 128, the convolutional kernel size of the seventh convolutional layer is 1 × 1, and the number of the convolutional layers is 512.
And the fourth group of convolutional layers are feature extraction layers and comprise twenty-three block convolutions, each block convolution comprises an eighth convolutional layer, a ninth convolutional layer and a tenth convolutional layer, the convolution kernel of the eighth convolutional layer is 1 × 1 in size and 256 in number, the convolution kernel of the ninth convolutional layer is 3 × 3 in size and 256 in number, and the convolution kernel of the tenth convolutional layer is 1 × 1 in size and 1024 in number.
The fifth group of convolutional layers is a feature extraction layer comprising three block convolutions; each block convolution comprises an eleventh, a twelfth and a thirteenth convolutional layer, where the convolution kernels of the eleventh convolutional layer are 1 × 1 in size and 512 in number, those of the twelfth are 3 × 3 in size and 512 in number, and those of the thirteenth are 1 × 1 in size and 2048 in number.
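These five groups (conv1 plus 3/4/23/3 bottleneck blocks) are the standard ResNet-101 layout, so a sketch of the feature extractor only needs torchvision's resnet101 with its classifier head swapped for a 512-dimensional embedding layer. The single-linear-layer head and the 112 × 112 input resolution are assumptions; the patent states only the 512-dimensional output.

```python
# Sketch of the 512-dim feature extractor on a stock ResNet-101 backbone.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet101(weights=None)              # conv1 + conv2_x..conv5_x
backbone.fc = nn.Linear(backbone.fc.in_features, 512)  # 2048 -> 512-dim feature

face = torch.randn(1, 3, 112, 112)                     # aligned face crop (size assumed)
feature = backbone(face)                               # shape: (1, 512)
```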
The softmax loss function used in a conventional residual network, shown in equation (1) below, does not explicitly optimize the features so that positive pairs score higher in similarity and negative pairs score lower, which leaves a performance gap. The additive angular margin loss enhances intra-class sample similarity and inter-class sample diversity, increasing the distance between classes and thereby maximizing the classification margin. This embodiment therefore iteratively trains the model parameters with the additive angular margin loss of equation (2) in place of the traditional softmax loss:
$$L_1=-\frac{1}{N}\sum_{i=1}^{N}\log\frac{e^{W_{y_i}^{T}x_i+b_{y_i}}}{\sum_{j=1}^{n}e^{W_{j}^{T}x_i+b_j}}\qquad(1)$$

$$L_2=-\frac{1}{N}\sum_{i=1}^{N}\log\frac{e^{s\cos(\theta_{y_i}+m)}}{e^{s\cos(\theta_{y_i}+m)}+\sum_{j=1,\,j\neq y_i}^{n}e^{s\cos\theta_j}}\qquad(2)$$

wherein N is the batch size, n is the number of classes, y_i is the class label of the i-th sample with feature x_i, W_j and b_j are the classifier weight and bias for class j, θ_j is the angle between the normalised feature and the normalised class-j weight, m is the additive angular margin penalty, and s is the feature scale factor.
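A sketch of this training head in PyTorch follows; it applies cos(θ_{y_i} + m) to the target-class logit exactly as in equation (2), with features and class-centre weights L2-normalised. The default margin m = 0.5 and scale s = 64 are the usual ArcFace settings, assumed here, and the random weight initialisation is a placeholder.

```python
# Additive angular margin (ArcFace-style) loss head, a sketch under the
# assumptions stated in the lead-in.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdditiveAngularMarginLoss(nn.Module):
    def __init__(self, feat_dim=512, num_classes=1000, s=64.0, m=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.s, self.m = s, m

    def forward(self, features, labels):
        # cos(theta_j) between the normalised feature and every class centre
        cosine = F.linear(F.normalize(features), F.normalize(self.weight))
        theta = torch.acos(cosine.clamp(-1 + 1e-7, 1 - 1e-7))
        # add the margin m only to the target class angle: cos(theta_y + m)
        target = F.one_hot(labels, cosine.size(1)).bool()
        logits = torch.where(target, torch.cos(theta + self.m), cosine)
        return F.cross_entropy(self.s * logits, labels)

loss = AdditiveAngularMarginLoss(num_classes=10)(
    torch.randn(4, 512), torch.randint(0, 10, (4,)))
```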
S103, comparing the face features of the person to be authenticated with the face features of the person in the database to obtain a face authentication result.
Further, as shown in fig. 5, the step S103 specifically includes:
and S1031, measuring the similarity between the human face features of the person to be authenticated and the human face features of the person in the database by using the Euclidean distance.
Specifically, the Euclidean distance is used to measure the similarity between the face feature of the person to be authenticated and a database face feature, as in equation (3):

$$d(A,B)=\sqrt{\sum_{i=1}^{n}(a_i-b_i)^2}\qquad(3)$$

wherein A represents the face feature of the person to be authenticated and B represents the face feature of a person in the database.
Since the extracted face features are 512-dimensional, equation (3) expands into equation (4):

$$d(A,B)=\sqrt{(a_1-b_1)^2+(a_2-b_2)^2+\cdots+(a_{512}-b_{512})^2}\qquad(4)$$
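Steps S1032 and S1033 below reduce this to a nearest-neighbour search under a threshold. A short sketch, where the identities, feature values and threshold are illustrative:

```python
# Sketch of S1031-S1033: Euclidean distance (eqs. (3)/(4)) against every
# enrolled 512-dim feature; nearest entry under the threshold wins.
import numpy as np

def authenticate(probe: np.ndarray, database: dict, threshold: float):
    dists = {pid: float(np.linalg.norm(probe - feat)) for pid, feat in database.items()}
    best_id, best_d = min(dists.items(), key=lambda kv: kv[1])
    if best_d < threshold:
        return best_id            # comparison succeeded: return the person ID
    return None                   # all distances >= threshold: comparison failed

db = {'id_001': np.random.rand(512), 'id_002': np.random.rand(512)}
print(authenticate(np.random.rand(512), db, threshold=1.2))
```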
s1032, if the Euclidean distance obtained by the feature comparison is smaller than a preset threshold value, returning the corresponding database personnel ID, and outputting a personnel authentication result of successful comparison.
And S1033, if the Euclidean distances obtained after the characteristics of all the persons are compared are larger than or equal to a preset threshold value, outputting the authentication result of the persons with failed comparison.
Those skilled in the art will appreciate that all or part of the steps in the method for implementing the above embodiments may be implemented by a program to instruct associated hardware, and the corresponding program may be stored in a computer-readable storage medium.
It should be noted that although the method operations of the above-described embodiments are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Rather, the depicted steps may change the order of execution. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
Example 2:
as shown in fig. 6, this embodiment provides a high-precision face authentication system, which includes a face detection module 601, a face feature extraction module 602, and a face authentication module 603, and the specific functions of each module are as follows:
the face detection module 601 is configured to perform face detection on an image to be detected by using a multitask cascaded convolutional neural network to obtain a face image.
The face feature extraction module 602 is configured to perform feature extraction on a face image based on a 101-layer residual error network to obtain 512-dimensional face depth features, which are used as face features of a person to be authenticated.
The face authentication module 603 is configured to compare the face features of the person to be authenticated with the face features of the person in the database to obtain a face authentication result.
Further, as shown in fig. 7, the face detection module 601 specifically includes:
the scaling unit 6011 is configured to scale the image to be detected into images of different sizes according to different scaling ratios, so as to form a feature pyramid of the image.
The first detecting unit 6012 is configured to input the feature pyramid into the candidate proposed network, so as to obtain a first face classification result, a first candidate frame, and a first face contour key point.
The second detecting unit 6013 is configured to return the first candidate frame to the image to be detected in the form of coordinates, and input the first candidate frame to the improvement network to obtain a second face classification result, a second candidate frame, and a second face contour key point.
And a third detecting unit 6014, configured to return the second candidate frame to the image to be detected in the form of coordinates and input it into the output network to obtain a third face classification result, a third candidate frame and third face contour key points, so as to obtain the face image.
Further, as shown in fig. 8, the face authentication module 603 specifically includes:
a measurement unit 6031 configured to measure similarity between the face features of the person to be authenticated and the face features of the person in the database by using the euclidean distance;
a first output unit 6032, configured to, if the euclidean distance obtained by the feature comparison is smaller than a preset threshold, return the corresponding database person ID, and output a person authentication result that the comparison is successful;
a second output unit 6033, configured to output a result of the person authentication that the comparison fails if the euclidean distances obtained after the feature comparison of all the persons in the database are greater than or equal to the preset threshold.
The specific implementation of each module in this embodiment may refer to embodiment 1, which is not described herein any more; it should be noted that the system provided in this embodiment is only illustrated by the division of the functional modules, and in practical applications, the functions may be distributed by different functional modules according to needs, that is, the internal structure is divided into different functional modules to complete all or part of the functions described above.
Example 3:
the present embodiment provides a computer device, which may be a computer, as shown in fig. 9, and includes a processor 902, a memory, an input device 903, a display 904, and a network interface 905 connected by a system bus 901, where the processor is used to provide computing and control capabilities, the memory includes a nonvolatile storage medium 906 and an internal memory 907, the nonvolatile storage medium 906 stores an operating system, computer programs, and a database, the internal memory 907 provides an environment for the operating system and the computer programs in the nonvolatile storage medium to run, and when the processor 902 executes the computer programs stored in the memory, the high-precision face authentication method of the above embodiment 1 is implemented, as follows:
carrying out face detection on an image to be detected by utilizing a multitask cascade convolution neural network to obtain a face image;
extracting features of the face image based on a 101-layer residual error network to obtain 512-dimensional face depth features serving as face features of a person to be authenticated;
and comparing the face characteristics of the person to be authenticated with the face characteristics of the person in the database to obtain a face authentication result.
Further, the performing face detection on the image to be detected by using the multitask cascade convolution neural network to obtain a face image specifically includes:
zooming the image to be detected into images with different sizes according to different zooming ratios to form a characteristic pyramid of the images;
inputting the feature pyramid into a candidate proposing network to obtain a first face classification result, a first candidate frame and a first face contour key point;
returning the first candidate frame to the image to be detected in a coordinate mode, and inputting the first candidate frame into an improvement network to obtain a second face classification result, a second candidate frame and a second face contour key point;
and returning the second candidate frame to the image to be detected in a coordinate mode, inputting the second candidate frame to an output network, and obtaining a third face classification result, a third candidate frame and a third face contour key point so as to obtain the face image.
Further, the feature extraction is performed on the face image based on the 101-layer residual error network to obtain 512-dimensional face depth features, and the 512-dimensional face depth features are used as face features of a person to be authenticated, and specifically include:
inputting the face image into a residual error network of 101 layers, sequentially passing through five groups of convolution layers to carry out feature extraction on the face image layer by layer, and outputting 512-dimensional face depth features as face features of the person to be authenticated by a full connection layer.
Further, the comparing the human face features of the person to be authenticated with the human face features of the person in the database to obtain the person authentication result specifically includes:
measuring the similarity between the human face features of the person to be authenticated and the human face features of the database personnel by using the Euclidean distance;
if the Euclidean distance obtained by the feature comparison is smaller than a preset threshold value, returning the corresponding database personnel ID, and outputting a personnel authentication result of successful comparison;
and if the Euclidean distances obtained after the characteristics of all the personnel in the database are compared are larger than or equal to a preset threshold value, outputting the result of the personnel authentication with failed comparison.
Example 4:
the present embodiment provides a storage medium, which is a computer-readable storage medium, and stores a computer program, and when the computer program is executed by a processor, the high-precision face authentication method of the above embodiment 1 is implemented as follows:
carrying out face detection on an image to be detected by utilizing a multitask cascade convolution neural network to obtain a face image;
extracting features of the face image based on a 101-layer residual error network to obtain 512-dimensional face depth features serving as face features of a person to be authenticated;
and comparing the face characteristics of the person to be authenticated with the face characteristics of the person in the database to obtain a face authentication result.
Further, the performing face detection on the image to be detected by using the multitask cascade convolution neural network to obtain a face image specifically includes:
zooming the image to be detected into images with different sizes according to different zooming ratios to form a characteristic pyramid of the images;
inputting the feature pyramid into a candidate proposing network to obtain a first face classification result, a first candidate frame and a first face contour key point;
returning the first candidate frame to the image to be detected in a coordinate mode, and inputting the first candidate frame into an improvement network to obtain a second face classification result, a second candidate frame and a second face contour key point;
and returning the second candidate frame to the image to be detected in a coordinate mode, inputting the second candidate frame to an output network, and obtaining a third face classification result, a third candidate frame and a third face contour key point so as to obtain the face image.
Further, the feature extraction is performed on the face image based on the 101-layer residual error network to obtain 512-dimensional face depth features, and the 512-dimensional face depth features are used as face features of a person to be authenticated, and specifically include:
inputting the face image into a residual error network of 101 layers, sequentially passing through five groups of convolution layers to carry out feature extraction on the face image layer by layer, and outputting 512-dimensional face depth features as face features of the person to be authenticated by a full connection layer.
Further, the comparing the human face features of the person to be authenticated with the human face features of the person in the database to obtain the person authentication result specifically includes:
measuring the similarity between the human face features of the person to be authenticated and the human face features of the database personnel by using the Euclidean distance;
if the Euclidean distance obtained by the feature comparison is smaller than a preset threshold value, returning the corresponding database personnel ID, and outputting a personnel authentication result of successful comparison;
and if the Euclidean distances obtained after the characteristics of all the personnel in the database are compared are larger than or equal to a preset threshold value, outputting the result of the personnel authentication with failed comparison.
The storage medium described in this embodiment may be a magnetic disk, an optical disk, a computer Memory, a Random Access Memory (RAM), a usb disk, a removable hard disk, or other media.
In summary, the invention uses the multitask cascade convolution neural network to perform face detection on an image to be detected to obtain a face image, then performs feature extraction on the face image based on 101 layers of residual error networks to obtain 512-dimensional face depth features, and finally compares the face depth features with face features of database personnel to realize face authentication.
The above description covers only preferred embodiments of the present invention, but the protection scope of the present invention is not limited thereto; any substitution or change that a person skilled in the art could make based on the technical solution and inventive concept of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. A high-precision face authentication method is characterized by comprising the following steps:
carrying out face detection on an image to be detected by utilizing a multitask cascade convolution neural network to obtain a face image;
extracting features of the face image based on a 101-layer residual error network to obtain 512-dimensional face depth features serving as face features of a person to be authenticated;
and comparing the face characteristics of the person to be authenticated with the face characteristics of the person in the database to obtain a face authentication result.
2. The high-precision face authentication method according to claim 1, wherein the performing face detection on the image to be detected by using the multitask cascade convolution neural network to obtain the face image specifically comprises:
zooming the image to be detected into images with different sizes according to different zooming ratios to form a characteristic pyramid of the images;
inputting the feature pyramid into a candidate proposing network to obtain a first face classification result, a first candidate frame and a first face contour key point;
returning the first candidate frame to the image to be detected in a coordinate mode, and inputting the first candidate frame into an improvement network to obtain a second face classification result, a second candidate frame and a second face contour key point;
and returning the second candidate frame to the image to be detected in a coordinate mode, inputting the second candidate frame to an output network, and obtaining a third face classification result, a third candidate frame and a third face contour key point so as to obtain the face image.
3. The method according to claim 2, wherein the inputting the feature pyramid into the candidate proposed network to obtain a first face classification result, a first candidate frame, and a first face contour key point specifically comprises:
inputting the feature pyramid into a first layer of the candidate proposed network, and generating 10 feature maps of 5 × 5 through 10 convolution kernels of 3 × 3 and a maximum pooling operation of 2 × 2;
inputting the 10 feature maps of 5 × 5 into a second layer of the candidate proposed network, and generating 16 feature maps of 3 × 3 through 16 convolution kernels of 3 × 3 × 10;
inputting the 16 feature maps of 3 × 3 into a third layer of the candidate proposed network, and generating 32 feature maps of 1 × 1 through 32 convolution kernels of 3 × 3 × 16;
outputting the first face classification result from the 32 feature maps of 1 × 1 through 2 convolution kernels of 1 × 1 × 32; outputting the first candidate frame through 4 convolution kernels of 1 × 1 × 32; and outputting the first face contour key points through 10 convolution kernels of 1 × 1 × 32.
4. The method according to claim 2, wherein the step of returning the first candidate frame to the image to be detected in a coordinate form and inputting the first candidate frame to an improvement network to obtain a second face classification result, a second candidate frame and a second face contour key point specifically comprises:
returning the first candidate frame to the image to be detected in coordinate form, cropping the face region image from the image to be detected and converting it into a 24 × 24 face region image, inputting it into a first layer of the improvement network, and generating 28 feature maps of 11 × 11 through 28 convolution kernels of 3 × 3 and a maximum pooling operation of 3 × 3;
inputting the 28 feature maps of 11 × 11 into a second layer of the improvement network, and generating 48 feature maps of 4 × 4 through 48 convolution kernels of 3 × 3 × 28 and a maximum pooling operation of 3 × 3;
inputting the 48 feature maps of 4 × 4 into a third layer of the improvement network, and generating 64 feature maps of 3 × 3 through 64 convolution kernels of 2 × 2 × 48;
and mapping the 3 × 3 × 64 feature maps to a fully connected layer of size 128, and outputting the second face classification result, the second candidate frame and the second face contour key points.
5. The high-precision face authentication method according to claim 2, wherein the returning the second candidate frame to the image to be detected in coordinate form and inputting it into the output network to obtain a third face classification result, a third candidate frame and a third face contour key point specifically comprises:
returning the second candidate frame to the image to be detected in coordinate form, cropping the face region image from the image to be detected and converting it into a 48 × 48 face region image, inputting it into a first layer of the output network, and generating 32 feature maps of 23 × 23 through 32 convolution kernels of 3 × 3 and a maximum pooling operation of 3 × 3;
inputting the 32 feature maps of 23 × 23 into a second layer of the output network, and generating 64 feature maps of 10 × 10 through 64 convolution kernels of 3 × 3 × 32 and a maximum pooling operation of 3 × 3;
inputting the 64 feature maps of 10 × 10 into a third layer of the output network, and generating 64 feature maps of 4 × 4 through 64 convolution kernels of 3 × 3 × 64 and a maximum pooling operation of 3 × 3;
inputting the 64 feature maps of 4 × 4 into a fourth layer of the output network, and generating 128 feature maps of 3 × 3 through 128 convolution kernels of 2 × 2 × 64;
and mapping the 3 × 3 × 128 feature maps to a fully connected layer of size 256, and outputting the third face classification result, the third candidate frame and the third face contour key points.
6. The high-precision face authentication method according to any one of claims 1 to 5, wherein the feature extraction is performed on the face image based on the 101-layer residual error network to obtain 512-dimensional face depth features, which are used as face features of a person to be authenticated, and specifically comprises:
inputting the face image into the 101-layer residual network, performing feature extraction on the face image layer by layer through five groups of convolutional layers in sequence, and outputting a 512-dimensional face depth feature through a fully connected layer as the face feature of the person to be authenticated;
wherein the five groups of convolutional layers are respectively a first group, a second group, a third group, a fourth group and a fifth group of convolutional layers;
the first group of convolutional layers comprises a first convolutional layer and a maximum pooling layer, and the first convolutional layer comprises 64 convolution kernels of 7 × 7;
the second group of convolutional layers comprises three block convolutions, each block convolution comprising a second convolutional layer with 64 1 × 1 convolution kernels, a third convolutional layer with 64 3 × 3 convolution kernels, and a fourth convolutional layer with 256 1 × 1 convolution kernels;
the third group of convolutional layers comprises four block convolutions, each block convolution comprising a fifth convolutional layer with 128 1 × 1 convolution kernels, a sixth convolutional layer with 128 3 × 3 convolution kernels, and a seventh convolutional layer with 512 1 × 1 convolution kernels;
the fourth group of convolutional layers comprises twenty-three block convolutions, each block convolution comprising an eighth convolutional layer with 256 1 × 1 convolution kernels, a ninth convolutional layer with 256 3 × 3 convolution kernels, and a tenth convolutional layer with 1024 1 × 1 convolution kernels;
the fifth group of convolutional layers comprises three block convolutions, each block convolution comprising an eleventh convolutional layer with 512 1 × 1 convolution kernels, a twelfth convolutional layer with 512 3 × 3 convolution kernels, and a thirteenth convolutional layer with 2048 1 × 1 convolution kernels;
the training loss function adopted by the 101-layer residual error network is an additional angular amplitude loss function, and the following formula is adopted:
$$L=-\frac{1}{N}\sum_{i=1}^{N}\log\frac{e^{s\cos(\theta_{y_i}+m)}}{e^{s\cos(\theta_{y_i}+m)}+\sum_{j=1,\,j\neq y_i}^{n}e^{s\cos\theta_j}}$$

wherein N is the batch size, n is the number of classes, y_i is the class label of the i-th sample, θ_j is the angle between the sample feature and the class-j weight vector, m is the additive angular margin penalty, and s is the feature scale factor.
7. The method according to any one of claims 1 to 5, wherein the comparing the face features of the person to be authenticated with the face features of persons in the database to obtain a person authentication result specifically comprises:
measuring the similarity between the face features of the person to be authenticated and the face features of each person in the database by using the Euclidean distance;
if a Euclidean distance obtained by the feature comparison is smaller than a preset threshold value, returning the ID of the corresponding database person and outputting a person authentication result indicating a successful comparison;
and if the Euclidean distances obtained by comparison against the features of all persons in the database are greater than or equal to the preset threshold value, outputting a person authentication result indicating a failed comparison.
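Claim 7 amounts to a nearest-neighbour search with a rejection threshold. A minimal NumPy sketch, assuming a hypothetical threshold value and returning the closest match under it (the claim itself only requires some distance below the threshold):

```python
import numpy as np

def authenticate(probe: np.ndarray, gallery: dict):
    """Return the ID of the closest enrolled feature when its Euclidean
    distance to the probe is below the threshold; None on failure.
    THRESHOLD is an assumed value -- tune it on a validation set."""
    THRESHOLD = 1.1
    best_id, best_dist = None, float("inf")
    for person_id, feature in gallery.items():
        dist = float(np.linalg.norm(probe - feature))  # Euclidean distance
        if dist < best_dist:
            best_id, best_dist = person_id, dist
    return best_id if best_dist < THRESHOLD else None
```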
8. A high-precision face authentication system, the system comprising:
a face detection module, configured to perform face detection on an image to be detected by using a multitask cascaded convolutional neural network to obtain a face image;
a face feature extraction module, configured to extract features of the face image based on a 101-layer residual network to obtain 512-dimensional face depth features as the face features of a person to be authenticated;
and a face authentication module, configured to compare the face features of the person to be authenticated with the face features of persons in the database to obtain a face authentication result.
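Hypothetical glue code (all names assumed) showing how the three claimed modules could be wired together; `detector` and `embedder` could be the OutputNet-stage detector and FaceEmbedder sketched earlier, and `authenticate` is the comparison function sketched under claim 7.

```python
class FaceAuthSystem:
    def __init__(self, detector, embedder, gallery):
        self.detector = detector    # face detection module
        self.embedder = embedder    # face feature extraction module
        self.gallery = gallery      # enrolled ID -> 512-d feature

    def verify(self, image):
        face = self.detector(image)            # crop the detected face
        feature = self.embedder(face)          # 512-d face depth feature
        return authenticate(feature, self.gallery)  # compare against gallery
```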
9. A computer device comprising a processor and a memory for storing a program executable by the processor, wherein the processor implements the high-precision face authentication method according to any one of claims 1 to 7 when executing the program stored in the memory.
10. A storage medium storing a program, wherein the program, when executed by a processor, implements the high-precision face authentication method according to any one of claims 1 to 7.
CN202010196121.3A 2020-03-19 2020-03-19 High-precision face authentication method, system, computer equipment and storage medium Pending CN111310732A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010196121.3A CN111310732A (en) 2020-03-19 2020-03-19 High-precision face authentication method, system, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111310732A (en) 2020-06-19

Family

ID=71162257

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010196121.3A Pending CN111310732A (en) 2020-03-19 2020-03-19 High-precision face authentication method, system, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111310732A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107577990A (en) * 2017-08-09 2018-01-12 武汉世纪金桥安全技术有限公司 A kind of extensive face identification method for accelerating retrieval based on GPU
CN108229381A (en) * 2017-12-29 2018-06-29 湖南视觉伟业智能科技有限公司 Face image synthesis method, apparatus, storage medium and computer equipment
CN110197099A (en) * 2018-02-26 2019-09-03 腾讯科技(深圳)有限公司 The method and apparatus of across age recognition of face and its model training
WO2019200264A1 (en) * 2018-04-12 2019-10-17 Georgia Tech Research Corporation Privacy preserving face-based authentication
CN110852703A (en) * 2019-10-22 2020-02-28 佛山科学技术学院 Attendance checking method, system, equipment and medium based on side face multi-feature fusion face recognition

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LIU, Changwei: "Face Recognition Based on MTCNN and Facenet", Posts and Telecommunications Design Techniques, no. 02, 20 February 2020 (2020-02-20), pages 32-38 *
QU, Liangsheng: "Theory and Methods in Machinery Monitoring and Diagnosis: Collected Papers of Qu Liangsheng", Xi'an Jiaotong University Press, page 810 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112330696A (en) * 2020-12-02 2021-02-05 青岛大学 Face segmentation method, face segmentation device and computer-readable storage medium
CN112330696B (en) * 2020-12-02 2022-08-09 青岛大学 Face segmentation method, face segmentation device and computer-readable storage medium
CN112766065A (en) * 2020-12-30 2021-05-07 山东山大鸥玛软件股份有限公司 Mobile terminal examinee identity authentication method, device, terminal and storage medium
CN113343955A (en) * 2021-08-06 2021-09-03 北京惠朗时代科技有限公司 Face recognition intelligent tail box application method based on depth pyramid

Similar Documents

Publication Publication Date Title
William et al. Face recognition using facenet (survey, performance test, and comparison)
CN111310732A (en) High-precision face authentication method, system, computer equipment and storage medium
Zhou et al. A method of facial expression recognition based on Gabor and NMF
CN110222780A (en) Object detecting method, device, equipment and storage medium
US11348364B2 (en) Method and system for neural fingerprint enhancement for fingerprint recognition
CN109670559A (en) Recognition methods, device, equipment and the storage medium of handwritten Chinese character
Nathwani Online signature verification using bidirectional recurrent neural network
Kutzner et al. Writer identification using handwritten cursive texts and single character words
CN110969073A (en) Facial expression recognition method based on feature fusion and BP neural network
Kim et al. Spatio-temporal representation for face authentication by using multi-task learning with human attributes
Jadhav et al. HDL-PI: hybrid DeepLearning technique for person identification using multimodal finger print, iris and face biometric features
CN103942545A (en) Method and device for identifying faces based on bidirectional compressed data space dimension reduction
Chen et al. STRAN: Student expression recognition based on spatio-temporal residual attention network in classroom teaching videos
CN111626132A (en) Model generation method, face recognition method, system, device and medium
CN112149747A (en) Hyperspectral image classification method based on improved Ghost3D module and covariance pooling
Yadav et al. In-browser attendance system using face recognition and serverless edge computing
Kadhim et al. A multimodal biometric database and case study for face recognition based deep learning
Li et al. Memory-augmented autoencoder based continuous authentication on smartphones with conditional transformer gans
Long et al. High discriminant features for writer-independent online signature verification
Sharma et al. A performance analysis of face and speech recognition in the video and audio stream using machine learning classification techniques
CN113343898B (en) Mask shielding face recognition method, device and equipment based on knowledge distillation network
Elssaedi et al. Comparing the effectiveness of different classifiers of data mining for signature recognition system
Dikii et al. Online handwritten signature verification system based on neural network classification
Shipurkar et al. End to End System for Handwritten Text Recognition and Plagiarism Detection using CNN & BLSTM
Li et al. Recognition algorithm of athletes' partially occluded face based on a deep learning algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination