CN107315995B - Face recognition method based on Laplace logarithmic face and convolutional neural network - Google Patents

Face recognition method based on Laplace logarithmic face and convolutional neural network Download PDF

Info

Publication number
CN107315995B
CN107315995B
Authority
CN
China
Prior art keywords
face
layer
size
convolutional
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710354814.9A
Other languages
Chinese (zh)
Other versions
CN107315995A (en)
Inventor
丁园园
王艳
刘华巍
常玉超
李宝清
袁晓兵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Institute of Microsystem and Information Technology of CAS
Original Assignee
Shanghai Institute of Microsystem and Information Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Institute of Microsystem and Information Technology of CAS filed Critical Shanghai Institute of Microsystem and Information Technology of CAS
Priority to CN201710354814.9A priority Critical patent/CN107315995B/en
Publication of CN107315995A publication Critical patent/CN107315995A/en
Application granted granted Critical
Publication of CN107315995B publication Critical patent/CN107315995B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/478Contour-based spectral representations or scale-space representations, e.g. by Fourier analysis, wavelet analysis or curvature scale-space [CSS]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Abstract

The invention provides a face recognition method based on the Laplacian log-face and a convolutional neural network, comprising the following steps: S1, acquiring a face image to be recognized and preprocessing it; S2, judging whether the number of face images in the database reaches a preset value; if not, executing S3, otherwise executing S4; S3, extracting face features from the preprocessed face image to be recognized with the Laplacian log-face algorithm, calculating the chi-square distance between the extracted features and the features corresponding to each face image in the database, and outputting the face image with the smallest chi-square distance; S4, extracting face features from the preprocessed face image to be recognized with a pre-trained convolutional neural network, calculating the cosine distance between the extracted features and the features corresponding to each face image in the database, and outputting the face image with the smallest cosine distance. The invention achieves fast face recognition with high accuracy, which is of practical importance for surveillance, counter-terrorism and similar applications.

Description

Face recognition method based on Laplace logarithmic face and convolutional neural network
Technical Field
The invention relates to the technical field of face recognition, in particular to a face recognition method based on a Laplace logarithmic face and a convolutional neural network.
Background
Biometric identification is an important mode of human-computer interaction: inherent physical attributes or behavioral characteristics of the human body are processed and analyzed by computer to identify a person. Biometric traits are difficult to forge, always carried with the person and convenient to use, and therefore provide a unique, highly reliable and stable means of verification. The technologies currently most widely studied and applied in this field are fingerprint and palm-print recognition, iris recognition, face recognition, behavior and action recognition, and voice recognition. Fingerprint, palm-print and iris recognition achieve high accuracy, but they are active contact-type modalities that require the cooperation of the person being identified; the user experience is poor and they meet considerable resistance in practical applications. Compared with other biometric technologies, face recognition offers convenient use, high recognition accuracy, intuitive images and widely available acquisition equipment, and the human-computer interaction it supports is convenient and friendly.
In practical applications, face recognition must cope with illumination changes in the face image, facial expressions, noise introduced during image acquisition, occlusion, pose variation and similar disturbances. Image noise in particular strongly affects the extraction of face features and can severely reduce the recognition rate. For an active (cooperative) face recognition system, the influence of occlusion, expression and pose can be reduced to some extent by manual intervention, but illumination during image acquisition is difficult to control, and illumination variation is therefore a recurring problem for face recognition systems.
Disclosure of Invention
To solve the above technical problems, the invention provides a face recognition method based on the Laplacian log-face and a convolutional neural network, which achieves fast face recognition with strong robustness to illumination and high recognition accuracy.
To achieve this purpose, the invention adopts the following technical scheme:
A face recognition method based on the Laplacian log-face and a convolutional neural network, used for comparing a face image to be recognized with face images pre-stored in a database and finding the face with the highest similarity, comprising the following steps:
s1, acquiring a face image to be recognized and preprocessing the face image;
s2, judging whether the number of the face images in the database reaches a preset value, if not, executing a step S3, otherwise, executing a step S4;
s3, extracting face features from the preprocessed face image to be recognized by using a Laplacian logarithm face algorithm, then calculating chi-square distances between the extracted face features and the face features corresponding to the face images in the database, and taking the face image with the smallest chi-square distance as the face with the highest similarity with the face image to be recognized;
s4, extracting face features from the preprocessed face image to be recognized by using a pre-trained convolutional neural network; and then calculating cosine distances between the extracted face features and the face features corresponding to the face images in the database, and taking the face image with the minimum cosine distance as the face with the highest similarity with the face image to be recognized.
Preferably, the preprocessing in step S1 includes face rectification and image cropping.
Further, the Laplacian log-face algorithm in step S3 includes the following steps:
firstly, sequentially transforming a preprocessed face image to be recognized into a logarithm domain and a Laplace domain;
and then, extracting the face features from the face image to be recognized in the Laplace domain by adopting an LBP algorithm.
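For illustration only, a minimal Python sketch of this feature-extraction chain is given below. It assumes OpenCV for the Laplacian operator and scikit-image for the LBP descriptor; the LBP neighborhood (8 points, radius 1) and the 4 × 4 block-histogram layout are illustrative choices, not values fixed by the patent.

import numpy as np
import cv2
from skimage.feature import local_binary_pattern

def ll_face_features(face, grid=(4, 4), lbp_points=8, lbp_radius=1):
    """LL-face features: logarithm domain -> Laplacian domain -> block-wise LBP histograms."""
    face = face.astype(np.float64)
    log_face = np.log1p(face)                        # transform into the logarithm domain
    lap_face = cv2.Laplacian(log_face, cv2.CV_64F)   # transform into the Laplacian domain
    lbp = local_binary_pattern(lap_face, lbp_points, lbp_radius, method="uniform")
    n_bins = lbp_points + 2                          # number of uniform LBP patterns
    h, w = lbp.shape
    hists = []
    for i in range(grid[0]):                         # concatenate per-block histograms so the
        for j in range(grid[1]):                     # descriptor keeps some spatial layout
            block = lbp[i * h // grid[0]:(i + 1) * h // grid[0],
                        j * w // grid[1]:(j + 1) * w // grid[1]]
            hist, _ = np.histogram(block, bins=n_bins, range=(0, n_bins), density=True)
            hists.append(hist)
    return np.concatenate(hists)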
Further, the convolutional neural network in step S4 extracts the face features by the following steps:
firstly, extracting face features at different scales from the preprocessed face image to be recognized;
and then fusing the facial features of different scales.
Further, the convolutional neural network comprises a convolutional layer Conv1, a pooling layer Pool1, a convolutional layer Conv2, a pooling layer Pool2, a convolutional layer Conv31, a convolutional layer Conv322, a connecting layer Conc1, a connecting layer Conc2, a pooling layer Pool51, a connecting layer Conc3 and a fully connected layer Fc5 which are connected in sequence; a convolutional layer Conv321 and a convolutional layer Conv323 which are respectively connected between the pooling layer Pool2 and the connecting layer Conc1; a pooling layer Pool3 connected between the pooling layer Pool2 and the convolutional layer Conv321; a convolutional layer Conv4 and a pooling layer Pool4 which are respectively connected between the connecting layer Conc1 and the connecting layer Conc2; a pooling layer Pool52 connected between the pooling layer Pool2 and the connecting layer Conc3; and a pooling layer Pool53 connected between the connecting layer Conc1 and the connecting layer Conc3.
Further, the size/step of convolutional layer Conv1 was set to 5 × 5/1, the size/step of pooling layer Pool1 was set to 3 × 3/3, the size/step of convolutional layer Conv2 was set to 3 × 3/1, the size/step of pooling layer Pool2 was set to 2 × 2/2, the size/step of convolutional layer Conv31 was set to 3 × 3/1, the size/step of pooling layer Pool3 was set to 2 × 2/2, the size/step of convolutional layer Conv321 was set to 1 × 1/1, the size/step of convolutional layer Conv322 was set to 3 × 3/1, the size/step of convolutional layer Conv323 was set to 5 × 5/1, the size/step of convolutional layer Conv4 was set to 3 × 3/1, the size/step of pooling layer Pool4 was set to 2 × 2/1, the size/step of pooling layer Pool51 was set to 3 × 3/1, the size/step of pooling layer Pool52 was set to 5 × 5/5, and the size/step of pooling layer Pool53 was set to 3 × 3/2.
Preferably, step S1 and step S3 are implemented on a mobile terminal, and step S4 is implemented on a PC.
Due to the adoption of the technical scheme, compared with the prior art, the invention has the following advantages and positive effects:
aiming at the condition that face images are few in a database, the method adopts the illumination robust Laplacian logarithmic face algorithm to extract the face features, so that the illumination influence is eliminated; and aiming at the condition that a plurality of face images exist in the database, the convolutional neural network is adopted to extract the face features, so that the recognition effect of face recognition can be further improved. Therefore, the method can realize rapid face recognition, has high recognition accuracy, and has important significance for monitoring, anti-terrorism and the like.
Drawings
FIG. 1 is a flow chart of a face recognition method based on Laplace logarithmic face and convolutional neural network of the present invention;
FIG. 2 shows Yale B database samples and the corresponding LL-face feature maps;
FIG. 3 is the basic framework of one embodiment of the convolutional neural network of the present invention.
Detailed Description
The invention will be further illustrated with reference to the following specific examples. It should be understood that these examples are for illustrative purposes only and are not intended to limit the scope of the present invention. Furthermore, it should be understood that various changes and modifications can be made by one skilled in the art after reading the description of the invention, and equivalents fall within the scope of the invention defined by the claims appended to the application.
The invention provides a face recognition method based on a Laplace logarithmic face and a convolutional neural network, which aims to compare a face image to be recognized with a face image pre-stored in a database and find out a face with the highest similarity. In the embodiment shown in fig. 1, the method of the present invention comprises the steps of:
firstly, a face image is collected through a face detection system of the mobile terminal, and preprocessing such as face image segmentation and extraction, face correction and image cutting is carried out. And then, judging whether the database is a large database or a small database, namely judging whether the number of the face images pre-stored in the database reaches a preset value (for example, 500), if not, judging that the database is the small database, directly extracting the face features of the preprocessed face images at the mobile terminal and identifying the face features, and if the number of the face images reaches the preset value, judging that the database is the large database, and considering the limitation of computing resources of the mobile terminal, transmitting the preprocessed face images to a PC (personal computer) terminal to extract and identify the face features.
When the database is a small database, the mobile terminal extracts face features from the preprocessed face image to be recognized with the Laplacian log-face (LL-face) algorithm, calculates the chi-square distance between the extracted features and the features corresponding to each face image in the database, and takes the face image with the smallest chi-square distance as the face most similar to the image to be recognized. The LL-face algorithm proceeds as follows: the preprocessed face image is first transformed into the logarithm domain, then into the Laplacian domain, and finally the classical LBP (local binary pattern) algorithm is applied to the image in the Laplacian domain to extract the face features. The recognition accuracy of the LL-face algorithm was compared with other illumination-robust face recognition algorithms on the CMU-PIE database created by Carnegie Mellon University; the results are shown in Table 1:
TABLE 1
(Table 1 is reproduced as an image in the original publication.)
In Table 1, MSR, SQI, LG-face, W-face and G-face denote, respectively, the multi-scale homomorphic filtering algorithm, the self-quotient image algorithm, the local weight face algorithm, the Weber illumination face algorithm and the gradient face algorithm; the values in brackets are the variance of the recognition accuracy. As can be seen from Table 1, the LL-face algorithm proposed by the invention outperforms the other typical illumination-robust face recognition algorithms.
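A sketch of the chi-square matching used in this small-database branch is given below, under the assumption that the LL-face descriptors are non-negative histogram vectors (as produced by the LBP histogram sketch above); the gallery layout is illustrative.

import numpy as np

def chi_square_distance(h1, h2, eps=1e-10):
    """Chi-square distance between two non-negative histogram feature vectors."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def match_small_database(probe_feat, gallery_feats):
    """gallery_feats: {identity: LL-face feature vector}; returns (best identity, its distance)."""
    ident = min(gallery_feats, key=lambda k: chi_square_distance(probe_feat, gallery_feats[k]))
    return ident, chi_square_distance(probe_feat, gallery_feats[ident])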
FIG. 2 shows Yale B database samples and the corresponding LL-face features. The Yale B database is a standard database for measuring the performance of illumination-robust algorithms. Several pictures were picked at random and their LL-face features extracted; as can be seen from FIG. 2, the LL-face features clearly capture the illumination-robust information while preserving facial detail.
When the database is a large database, the mobile terminal transmits the preprocessed face image to the PC, and the PC extracts the face features from the preprocessed face image to be recognized with a pre-trained convolutional neural network (referred to as NR-Network); it then calculates the cosine distance between the extracted features and the features corresponding to each face image in the database, and takes the face image with the smallest cosine distance as the face most similar to the image to be recognized.
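The large-database branch can be sketched in the same way. The cosine distance below is standard; the feature-extraction stub only indicates, in comments, how the Fc5 activations of a trained network might be read out through pycaffe — the blob names and files are assumptions, not published details of NR-Network.

import numpy as np

def cosine_distance(f1, f2, eps=1e-10):
    """Cosine distance = 1 - cosine similarity; smaller means more similar."""
    return 1.0 - np.dot(f1, f2) / (np.linalg.norm(f1) * np.linalg.norm(f2) + eps)

def cnn_features(face, net=None):
    """Stub for the NR-Network forward pass (pycaffe is one possible back end)."""
    # With a loaded caffe.Net, something along these lines would read the Fc5 activations:
    #   net.blobs["data"].data[...] = preprocess(face)
    #   net.forward()
    #   return net.blobs["fc5"].data[0].copy()
    raise NotImplementedError("load the pre-trained network and run a forward pass here")

def match_large_database(probe_feat, gallery_feats):
    """gallery_feats: {identity: CNN feature vector}; returns the identity with the smallest cosine distance."""
    return min(gallery_feats, key=lambda k: cosine_distance(probe_feat, gallery_feats[k]))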
The convolutional neural network is an improvement on the classical GoogLeNet network (a deep neural network built by Google) and on the deep neural network built by the Chinese University of Hong Kong specifically for face recognition. It adopts a multi-input structure (Multi-inputs structure) at the fully connected layer, so that multi-level face features are also drawn from the lower layers of the network, which improves the noise robustness of the face recognition system. The basic structure of one embodiment of the convolutional neural network is shown in FIG. 3: three feature-extraction modules (labeled ①, ② and ③ in the figure) extract face features at different scales, and their outputs are fused at the fully connected layer at the top of the network; pooling layers and 1 × 1 convolutional layers are also used to reduce the size of the feature maps. The detailed configuration of each layer is listed in Table 2:
TABLE 2
(Table 2 is reproduced as an image in the original publication.)
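The published text does not make the full connectivity of FIG. 3 legible, so the following Caffe NetSpec snippet is not the claimed NR-Network; it is only a hedged sketch of the pattern the description relies on — a stem followed by parallel 1 × 1 / 3 × 3 / 5 × 5 branches whose outputs are concatenated (a Conc layer) and fed to a fully connected layer. Input size, channel counts and fillers are placeholders.

import caffe
from caffe import layers as L, params as P

n = caffe.NetSpec()
n.data = L.Input(shape=dict(dim=[1, 1, 128, 128]))      # grayscale face crop; size is illustrative

# Stem: Conv1 5x5/1 followed by Pool1 3x3/3, matching the sizes quoted in the description.
n.conv1 = L.Convolution(n.data, kernel_size=5, stride=1, num_output=64,
                        weight_filler=dict(type="xavier"))
n.relu1 = L.ReLU(n.conv1, in_place=True)
n.pool1 = L.Pooling(n.conv1, kernel_size=3, stride=3, pool=P.Pooling.MAX)

# One multi-scale module: parallel convolutions with different receptive fields, all stride 1
# and padded so the spatial size is preserved, concatenated along the channel axis.
n.b1x1 = L.Convolution(n.pool1, kernel_size=1, num_output=32, weight_filler=dict(type="xavier"))
n.b3x3 = L.Convolution(n.pool1, kernel_size=3, pad=1, num_output=32, weight_filler=dict(type="xavier"))
n.b5x5 = L.Convolution(n.pool1, kernel_size=5, pad=2, num_output=32, weight_filler=dict(type="xavier"))
n.conc1 = L.Concat(n.b1x1, n.b3x3, n.b5x5)

# The fused multi-scale features feed a fully connected layer analogous to Fc5.
n.fc5 = L.InnerProduct(n.conc1, num_output=256, weight_filler=dict(type="xavier"))

with open("train_val_sketch.prototxt", "w") as f:
    f.write(str(n.to_proto()))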
The network is built with the Caffe toolkit. Caffe is a clear and efficient deep-learning framework that supports a command-line interface as well as Python and MATLAB interfaces. Training the network requires a network definition file train_val.prototxt, written according to Table 2, and a solver.prototxt file; training is then executed with the Caffe training tool. Both files follow the standard Caffe format, and the specific structure of each layer is as given in Table 2.
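To make the training step concrete, the snippet below writes a minimal solver file and launches the Caffe command-line training tool. All hyper-parameter values are illustrative placeholders (the patent does not publish the ones used for NR-Network), and the caffe binary is assumed to be on the PATH.

import subprocess

solver = """\
net: "train_val.prototxt"
base_lr: 0.01
lr_policy: "step"
gamma: 0.1
stepsize: 100000
momentum: 0.9
weight_decay: 0.0005
max_iter: 450000
snapshot: 10000
snapshot_prefix: "nr_network"
solver_mode: GPU
"""
with open("solver.prototxt", "w") as f:
    f.write(solver)

# Train with Caffe's command-line tool; train_val.prototxt must define the data and loss layers.
subprocess.run(["caffe", "train", "--solver=solver.prototxt"], check=True)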
Table 3 compares the NR-Network of the present invention with other noise-robust algorithms:
TABLE 3
(Table 3 is reproduced as images in the original publication.)
In the table, sigma denotes the variance of the Gaussian noise added to the test pictures. FLBP (fuzzy LBP), NRLBP (noise-robust LBP), NRLBP+ and NRLBP++ are classical noise-robust face feature extraction algorithms. BN1 and BN2 are two reference networks trained to verify the multi-input structure of NR-Network; their main structure is the same as that of NR-Network, except that BN1 has two inputs and BN2 has only one. As can be seen from Table 3, the face recognition performance of the proposed NR-Network is clearly superior to that of the other algorithms, and good recognition accuracy is obtained even under heavy noise contamination.
In summary, the invention extracts face features both on the PC and on the handheld terminal, and effectively addresses two challenges in face recognition: the influence of uneven illumination during image acquisition and of the noise introduced during image transmission on the performance of the recognition algorithm. In addition, different recognition modes are used depending on the size of the matching database, which greatly improves both recognition speed and accuracy; for databases with a large volume of data the speed advantage is particularly pronounced.
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes and modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention. The details of the present invention which are not described in detail are well known in the art.

Claims (6)

1. A face recognition method based on a Laplace logarithm face and a convolution neural network is used for comparing a face image to be recognized with a face image pre-stored in a database and finding out a face with the highest similarity, and is characterized by comprising the following steps:
s1, acquiring a face image to be recognized and preprocessing the face image;
s2, judging whether the number of the face images in the database reaches a preset value, if not, executing a step S3, otherwise, executing a step S4;
s3, extracting face features from the preprocessed face image to be recognized by using a Laplacian logarithm face algorithm, then calculating chi-square distances between the extracted face features and the face features corresponding to the face images in the database, and taking the face image with the smallest chi-square distance as the face with the highest similarity with the face image to be recognized;
s4, extracting face features from the preprocessed face image to be recognized by using a pre-trained convolutional neural network; calculating cosine distances between the extracted face features and the face features corresponding to the face images in the database, and taking the face image with the minimum cosine distance as the face with the highest similarity with the face image to be recognized;
the laplacian log-face algorithm in the step S3 includes the following steps:
firstly, sequentially transforming a preprocessed face image to be recognized into a logarithm domain and a Laplace domain;
and then, extracting the face features from the face image to be recognized in the Laplace domain by adopting an LBP algorithm.
2. The laplacian-log-face and convolutional neural network based face recognition method of claim 1, wherein the preprocessing in step S1 includes face rectification and image cropping.
3. The laplacian log-face and convolutional neural network based face recognition method of claim 1, wherein the convolutional neural network in step S4 extracts the face features by:
firstly, extracting human face features with different scales from a preprocessed human face image to be recognized;
and then fusing the facial features of different scales.
4. A Laplace logarithmic face and convolutional neural network based face recognition method as claimed in claim 3, characterized in that the convolutional neural network comprises a convolutional layer Conv1, a pooling layer Pool1, a convolutional layer Conv2, a pooling layer Pool2, a convolutional layer Conv31, a convolutional layer Conv322, a connecting layer Conc1, a connecting layer Conc2, a pooling layer Pool51, a connecting layer Conc3 and a fully connected layer Fc5 which are connected in sequence, and further comprises a convolutional layer Conv321 and a convolutional layer Conv323 respectively connected between the pooling layer Pool2 and the connecting layer Conc1, a pooling layer Pool3 connected between the pooling layer Pool2 and the convolutional layer Conv321, a convolutional layer Conv4 and a pooling layer Pool4 respectively connected between the connecting layer Conc1 and the connecting layer Conc2, a pooling layer Pool52 connected between the pooling layer Pool2 and the connecting layer Conc3, and a pooling layer Pool53 connected between the connecting layer Conc1 and the connecting layer Conc3.
5. A laplacian log-face and convolutional neural network based face recognition method as claimed in claim 4, characterized in that the size/step size of convolutional layer Conv1 is set to 5 × 5/1, the size/step size of pooling layer Pool1 is set to 3 × 3/3, the size/step size of convolutional layer Conv2 is set to 3 × 3/1, the size/step size of pooling layer Pool2 is set to 2 × 2/2, the size/step size of convolutional layer Conv31 is set to 3 × 3/1, the size/step size of pooling layer Pool3 is set to 2 × 2/2, the size/step size of convolutional layer Conv321 is set to 1 × 1/1, the size/step size of convolutional layer Conv322 is set to 3 × 3/1, the size/step size of convolutional layer Conv323 is set to 5 × 5/1, the size/step size of convolutional layer Conv4 is set to 3 × 3/1, the size/step size of pooling layer Pool4 is set to 2 × 2/1, the size/step size of pooling layer Pool51 is set to 3 × 3/1, the size/step size of pooling layer Pool52 is set to 5 × 5/5, and the size/step size of pooling layer Pool53 is set to 3 × 3/2.
6. The laplacian log-face and convolutional neural network based face recognition method of claim 1, wherein the steps S1 and S3 are implemented based on a mobile terminal, and the step S4 is implemented based on a PC.
CN201710354814.9A 2017-05-18 2017-05-18 Face recognition method based on Laplace logarithmic face and convolutional neural network Active CN107315995B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710354814.9A CN107315995B (en) 2017-05-18 2017-05-18 Face recognition method based on Laplace logarithmic face and convolutional neural network


Publications (2)

Publication Number Publication Date
CN107315995A CN107315995A (en) 2017-11-03
CN107315995B true CN107315995B (en) 2020-07-31

Family

ID=60182173

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710354814.9A Active CN107315995B (en) 2017-05-18 2017-05-18 Face recognition method based on Laplace logarithmic face and convolutional neural network

Country Status (1)

Country Link
CN (1) CN107315995B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111523533B (en) * 2019-02-01 2023-07-07 阿里巴巴集团控股有限公司 Method and device for determining area of object from image
CN109858467B (en) * 2019-03-01 2021-05-07 北京视甄智能科技有限公司 Face recognition method and device based on key point region feature fusion
CN109948796B (en) * 2019-03-13 2023-07-04 腾讯科技(深圳)有限公司 Self-encoder learning method, self-encoder learning device, computer equipment and storage medium
CN110895797B (en) * 2019-04-04 2020-07-31 李雪梅 Intelligent network transceiving platform
CN111178187A (en) * 2019-12-17 2020-05-19 武汉迈集信息科技有限公司 Face recognition method and device based on convolutional neural network

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000268184A (en) * 1999-03-17 2000-09-29 Sony Corp Image processing device and method and recording medium
CN101187986A (en) * 2007-11-27 2008-05-28 海信集团有限公司 Face recognition method based on supervisory neighbour keeping inlaying and supporting vector machine
CN105760833A (en) * 2016-02-14 2016-07-13 北京飞搜科技有限公司 Face feature recognition method
CN105787432A (en) * 2016-01-15 2016-07-20 浙江工业大学 Method for detecting human face shielding based on structure perception
CN105844132A (en) * 2016-03-17 2016-08-10 中国科学院上海微系统与信息技术研究所 Mobile terminal-based human face identification method and system
CN105930382A (en) * 2016-04-14 2016-09-07 严进龙 Method for searching for 3D model with 2D pictures
CN106650694A (en) * 2016-12-30 2017-05-10 江苏四点灵机器人有限公司 Human face recognition method taking convolutional neural network as feature extractor

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8064697B2 (en) * 2007-10-12 2011-11-22 Microsoft Corporation Laplacian principal components analysis (LPCA)


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Face recognition based on Laplacian Eigenmaps; Weiqun Luo; 2011 International Conference on Computer Science and Service System (CSSS); 2011-08-04; pp. 416-419 *


Similar Documents

Publication Publication Date Title
CN107315995B (en) Face recognition method based on Laplace logarithmic face and convolutional neural network
CN107437074B (en) Identity authentication method and device
CN106920206B (en) Steganalysis method based on antagonistic neural network
CN111639558A (en) Finger vein identity verification method based on ArcFace Loss and improved residual error network
CN110472652B (en) Small sample classification method based on semantic guidance
CN108564040B (en) Fingerprint activity detection method based on deep convolution characteristics
CN104346628B (en) License plate Chinese character recognition method based on multiple dimensioned multi-direction Gabor characteristic
Ghanem et al. A survey on sign language recognition using smartphones
CN105117708A (en) Facial expression recognition method and apparatus
CN113515988B (en) Palm print recognition method, feature extraction model training method, device and medium
Jiang A review of the comparative studies on traditional and intelligent face recognition methods
Bansal et al. Statistical feature extraction based iris recognition system
Stojanović et al. Latent overlapped fingerprint separation: a review
Wang et al. Fingerprint pore extraction using U-Net based fully convolutional network
CN104679967A (en) Method for judging reliability of psychological test
CN108830217B (en) Automatic signature distinguishing method based on fuzzy mean hash learning
Narang et al. Robust face recognition method based on SIFT features using Levenberg-Marquardt Backpropagation neural networks
Thiyaneswaran et al. Iris Recognition using Left and Right Iris Feature of the Human Eye for Biometric Security System
Lv et al. Research on fingerprint feature recognition of access control based on deep learning
Qin et al. Partial fingerprint matching via phase-only correlation and deep convolutional neural network
Yuan et al. Fingerprint liveness detection adapted to different fingerprint sensors based on multiscale wavelet transform and rotation-invarient local binary pattern
CN109710062B (en) Cross-individual control method based on electroencephalogram and gesture signal fusion
CN114373212A (en) Face recognition model construction method, face recognition method and related equipment
Yan et al. A novel bimodal identification approach based on hand-print
Rafik et al. A Model Of A Biometric Recognition System Based On The Hough Transform Of Libor Masek and 1-D Log-Gabor Filter

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant