CN116092134A - Fingerprint living body detection method based on deep learning and feature fusion - Google Patents

Fingerprint living body detection method based on deep learning and feature fusion

Info

Publication number: CN116092134A
Application number: CN202310147726.7A
Authority: CN (China)
Prior art keywords: fingerprint, neural network, network model, deep neural, feature
Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Inventors: 孟凡清, 李宗军
Current assignee: Jilin Institute of Chemical Technology
Original assignee: Jilin Institute of Chemical Technology
Application filed by Jilin Institute of Chemical Technology
Priority to CN202310147726.7A, published as CN116092134A


Classifications

    • G06V40/1365 — Fingerprints or palmprints: matching; classification
    • G06N3/08 — Neural networks: learning methods
    • G06V10/467 — Encoded features or binary features, e.g. local binary patterns [LBP]
    • G06V10/54 — Extraction of image or video features relating to texture
    • G06V10/56 — Extraction of image or video features relating to colour
    • G06V10/764 — Recognition using pattern recognition or machine learning: classification, e.g. of video objects
    • G06V10/765 — Classification using rules for classification or partitioning the feature space
    • G06V10/766 — Recognition using regression, e.g. by projecting features on hyperplanes
    • G06V10/774 — Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V10/806 — Fusion of extracted features at the sensor, preprocessing, feature extraction or classification level
    • G06V10/82 — Recognition or understanding using neural networks
    • G06V40/1388 — Detecting the live character of the finger using image processing
    • G06V40/1394 — Detecting the live character of the finger using acquisition arrangements
    • G06V40/40 — Spoof detection, e.g. liveness detection
    • Y02T10/40 — Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Human Computer Interaction (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Collating Specific Patterns (AREA)
  • Image Analysis (AREA)

Abstract

A fingerprint living body detection method based on deep learning and feature fusion comprises the following steps: 1) Establishing a basic data set: a large fingerprint image data set is created, comprising live fingerprint images and false fingerprint images. 2) Constructing a deep neural network model: taking the MobileNet V2 model as the base network, its structure is fine-tuned to design a lightweight deep neural network model suitable for fingerprint living body detection. 3) Feature extraction: a gray value map, a direction field map and a Local Binary Pattern (LBP) map of the fingerprint are prepared, the three maps are input into the deep neural network model, and features are extracted from each of them by the model. 4) Feature fusion: fusion calculation is performed on the features extracted in step 3) across the different feature layers. 5) Completing deep neural network model training: the deep neural network model is trained with the training set of the basic data set in step 1) until its performance is optimal. 6) Classifying true and false fingerprints: the test set of the basic data set in step 1) is classified with the optimal deep neural network model trained in step 5), finally obtaining an accurate fingerprint living body detection result.
The invention provides a fingerprint living body detection method based on deep learning and feature fusion which, without increasing product hardware cost, uses a deep learning algorithm to design a convolutional neural network with feature fusion: different feature maps are fused by a cascade fusion function in the convolutional layers, and multi-layer information features of the fingerprint image are extracted by fully exploiting their complementary information, so as to ensure identity accuracy and security and to detect live human fingerprints quickly and accurately.

Description

Fingerprint living body detection method based on deep learning and feature fusion
Technical Field
The invention relates to the fields of image processing, fingerprint identification technology and the like, in particular to a fingerprint living body detection method based on deep learning and feature fusion.
Background
At present, fingerprint identification technology is being rolled out in fields such as finance, telecommunications, information security and e-government, covering many application scenarios such as information security and identity authentication. However, as technology advances, authentication schemes relying on fingerprints alone are no longer secure enough when advanced techniques such as living body detection are lacking. On the one hand, fingerprint living body detection can effectively avoid the security risks of ordinary fingerprint authentication, prevent database-collision (credential stuffing) attacks, and reduce the probability of fingerprint information leakage; on the other hand, it can also be used for identity detection, identification of illegal intrusion behaviour and attack resistance, meeting the quality requirements of security management. Fingerprint living body detection is an important technology that provides users with more accurate and reliable security verification and helps ensure information security and the integrity of personal privacy, so its importance cannot be neglected.
Existing fingerprint living body detection methods can be classified into hardware detection methods and software detection methods. Hardware detection methods sense characteristics of the finger through additional hardware, which increases product cost, is inconvenient to operate and hinders popularization. Compared with hardware detection, software detection is a cheaper, image-processing-based approach that is easier to implement and whose capability can be improved through software updates. Most current software-level algorithms are based on shallow hand-crafted features combined with an SVM (support vector machine); the number of extracted features is small and the feature types are single, so living fingerprint detection accuracy is low and detection speed is slow. Algorithms combining a convolutional neural network with an SVM have also been proposed, but they split feature extraction and classification into two separate parts, so their performance on fingerprint living body detection cannot be optimized end to end.
Disclosure of Invention
The invention aims to solve the problems of the existing living fingerprint detection technology: few extracted features, a single feature type, low accuracy, slow detection speed and poor performance. The invention provides a fingerprint living body detection method based on deep learning and feature fusion which, without increasing product hardware cost, uses a deep learning algorithm to design a convolutional neural network with feature fusion: different feature maps are fused by a cascade fusion function in the convolutional layers, and multi-layer information features of the fingerprint image are extracted by fully exploiting their complementary information, so as to ensure identity accuracy and security and to detect live human fingerprints quickly and accurately.
The invention provides the following technical scheme: a fingerprint living body detection method based on deep learning and feature fusion comprises the following steps:
1) Establishing a basic data set: a large fingerprint image data set is created, comprising live fingerprint images and false fingerprint images.
2) Constructing a deep neural network model: taking the MobileNet V2 model as the base network, its structure is fine-tuned to design a lightweight deep neural network model suitable for fingerprint living body detection.
3) Feature extraction: a gray value map, a direction field map and a Local Binary Pattern (LBP) map of the fingerprint are prepared, the three maps are input into the deep neural network model, and features are extracted from each of them by the model.
4) Feature fusion: fusion calculation is performed on the features extracted in step 3) across the different feature layers.
5) Completing deep neural network model training: the deep neural network model is trained with the training set of the basic data set in step 1) until its performance is optimal.
6) Classifying true and false fingerprints: the test set of the basic data set in step 1) is classified with the optimal deep neural network model trained in step 5), finally obtaining an accurate fingerprint living body detection result.
Further, in step 1), the basic data set is established as follows: the fingerprint data sets used here come from LivDet2013 and LivDet2015, official data sets of the international Fingerprint Liveness Detection Competition (LivDet). LivDet2013 contains 4 sub-data sets: Biometrika, CrossMatch, Italdata and Swipe. LivDet2015 contains 4 sub-data sets: CrossMatch, Digital_Persona, GreenBit and Hi_Scan. Each LivDet data set has two parts, a training set and a test set.
In step 2), the deep neural network model is constructed as follows: the choice of network structure is critical for the accuracy, real-time performance and generalization capability of fingerprint living body detection. First, the input size and the number of convolution layers of the single-path convolution network are determined; the structure is then fine-tuned based on MobileNet V2, the feature fusion idea is added, and a lightweight network model suitable for fingerprint living body detection is designed. The model comprises convolution layer 1 (Convolution1), inverted residual block 1 (bottleneck1), inverted residual block 2 (bottleneck2), inverted residual block 3 (bottleneck3), inverted residual block 4 (bottleneck4), a global average pooling layer (PoolingAVE), convolution layer 2 (Convolution2) and an output layer (Softmax).
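The layer sequence above can be sketched as a configuration table. The layer names come from the patent text; all channel counts, strides and expansion factors below are illustrative assumptions, since the patent does not publish these hyperparameters.

```python
# Layer sequence named in the patent; hyperparameter values are assumptions
# for illustration only (the patent does not list them).
LAYERS = [
    ("Convolution1", {"out_channels": 32, "stride": 2}),
    ("bottleneck1",  {"out_channels": 16, "stride": 1, "expansion": 1}),
    ("bottleneck2",  {"out_channels": 24, "stride": 2, "expansion": 6}),
    ("bottleneck3",  {"out_channels": 32, "stride": 2, "expansion": 6}),
    ("bottleneck4",  {"out_channels": 64, "stride": 2, "expansion": 6}),
    ("PoolingAVE",   {"kind": "global_average"}),
    ("Convolution2", {"out_channels": 2,  "stride": 1}),  # 2 classes: live / fake
    ("Softmax",      {}),
]

def layer_names():
    """Return the ordered layer names of the sketched model."""
    return [name for name, _ in LAYERS]
```

The table makes the data flow explicit: two inverted residual blocks extract per-map features, two more process the fused map, and the pooling/convolution/softmax tail produces the two-class output.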
In step 3), features are extracted as follows: features are extracted from the gray value map, the direction field map and the Local Binary Pattern (LBP) map of the fingerprint using inverted residual block 1 (bottleneck1) and inverted residual block 2 (bottleneck2), respectively. The three maps are prepared as follows.
Gray value map: the gray information of a fingerprint is very important for fingerprint identification. The gray value map is obtained from the original fingerprint image by tiling its pixels directly; the extracted column vector serves as the fingerprint feature.
Direction field map: a fingerprint image has a relatively clear direction field, which describes the orientation pattern of the fingerprint ridges and valleys. As a global and reliable feature of fingerprints, the direction field is a key element of current mainstream fingerprint identification technology. Many methods exist for estimating the fingerprint direction field; the invention adopts a gradient-based method, with the following algorithm:
a) The fingerprint image I is divided into a series of non-overlapping blocks of size W × W.
b) The gradient vector $[G_x(x,y), G_y(x,y)]^T$ of each point in a block is computed along the X and Y directions according to formula (1) (the original equation images are unreadable placeholders; formulas (1)–(3) are reconstructed here following the standard squared-gradient method):

$$\begin{bmatrix} G_x(x,y) \\ G_y(x,y) \end{bmatrix} = \begin{bmatrix} \partial I(x,y)/\partial x \\ \partial I(x,y)/\partial y \end{bmatrix} \tag{1}$$

c) The block gradient vector $[G_{Bx}, G_{By}]^T$ of each block is computed according to formula (2), and converted into a block direction θ (0 ≤ θ < π) according to formula (3):

$$G_{Bx} = \sum_{(x,y)} \bigl(G_x^2(x,y) - G_y^2(x,y)\bigr), \qquad G_{By} = \sum_{(x,y)} 2\,G_x(x,y)\,G_y(x,y) \tag{2}$$

$$\theta = \frac{1}{2}\arctan\frac{G_{By}}{G_{Bx}} + \frac{\pi}{2} \tag{3}$$
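The three-step procedure a)–c) can be sketched in NumPy as follows. The block size `W=16` and the reading of formulas (1)–(3) as the squared-gradient method are assumptions, since the original equation images are not legible in this text.

```python
import numpy as np

def orientation_field(I, W=16):
    """Block-wise fingerprint direction field, theta in [0, pi).

    Steps: per-pixel gradients (formula (1)), squared-gradient block
    summation (formula (2)), half-angle conversion (formula (3)).
    """
    I = np.asarray(I, dtype=float)
    Gy, Gx = np.gradient(I)                     # gradients along rows (Y) and columns (X)
    nby, nbx = I.shape[0] // W, I.shape[1] // W
    theta = np.zeros((nby, nbx))
    for bi in range(nby):
        for bj in range(nbx):
            gx = Gx[bi * W:(bi + 1) * W, bj * W:(bj + 1) * W]
            gy = Gy[bi * W:(bi + 1) * W, bj * W:(bj + 1) * W]
            gbx = np.sum(gx ** 2 - gy ** 2)     # formula (2)
            gby = np.sum(2.0 * gx * gy)
            # formula (3): half the doubled angle, shifted into [0, pi)
            theta[bi, bj] = (0.5 * np.arctan2(gby, gbx) + np.pi / 2) % np.pi
    return theta
```

On a synthetic image with horizontal ridges this yields orientations near 0; with vertical ridges, near π/2, matching the convention that θ is the ridge direction rather than the gradient direction.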
Local Binary Pattern (LBP) map: the LBP operator is widely applied for describing the local texture characteristics of an image because of its low computational complexity, gray-scale invariance and rotation invariance. It can be expressed as formula (4) (the original equation images are unreadable placeholders; formulas (4)–(5) are reconstructed here from the standard LBP definition):

$$LBP(x_c, y_c) = \sum_{i=0}^{P-1} s(p_i - p_c)\,2^i \tag{4}$$

where $(x_c, y_c)$ denotes the centre position, $p_c$ the pixel value of the centre point, $p_i$ the pixel value of a surrounding point, and $P$ the number of surrounding pixel points, with

$$s(x) = \begin{cases} 1, & x \ge 0 \\ 0, & x < 0 \end{cases} \tag{5}$$
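Formulas (4)–(5) can be sketched for the common P = 8, radius-1 case as follows. The clockwise neighbour ordering is an assumption; the patent does not specify it.

```python
import numpy as np

def lbp_map(img):
    """8-neighbour Local Binary Pattern (formulas (4)-(5)): for each interior
    pixel, threshold the P = 8 neighbours against the centre value and pack
    the bits s(p_i - p_c) into an integer code in [0, 255]."""
    img = np.asarray(img).astype(int)
    c = img[1:-1, 1:-1]                              # centre pixels p_c
    # neighbours p_i, enumerated clockwise from the top-left (assumed order)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for i, (dy, dx) in enumerate(offsets):
        p = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code += ((p - c) >= 0).astype(int) << i      # s(p_i - p_c) * 2^i
    return code
```

A perfectly uniform patch produces the code 255 everywhere, since s(0) = 1 by formula (5).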
in the step 4), the feature fusion method comprises the following steps: and 3) carrying out feature fusion on different feature layers in the step 3), and then further extracting distinguishing information from the fused feature map by using a reverse residual block 3 (bottleneck 3) and a reverse residual block 4 (bottleneck 4), and positioning the information to a position with specific features as a reference for fingerprint living body detection so as to ensure identity accuracy. Through fusion of the feature layers, multiple information can be fully utilized, the defect of single information is overcome, and the accuracy and generalization capability of the model are improved.
In step 5), deep neural network model training is completed as follows: global mean pooling is applied to the feature vectors fused in step 4), condensing the feature map of the last convolution layer; a fully connected layer is then attached, followed by a Softmax logistic regression classification layer to perform classification. This structure connects the convolution layers with a traditional neural network layer: the convolution layers act as a feature extractor, the traditional neural network layer classifies the obtained features, and an accurate fingerprint living body detection result is finally obtained. The deep neural network model is trained with the training set of the basic data set in step 1) until its performance is optimal.
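The classification head described above can be sketched as follows. The `weights` and `bias` parameters are illustrative stand-ins for values that would come from training; they are not given in the patent.

```python
import numpy as np

def classify_head(feature_map, weights, bias):
    """Global average pooling over H x W, then a fully connected layer and
    softmax, mirroring step 5.  `weights` (C x 2) and `bias` (2,) are
    assumed trained parameters, not values from the patent."""
    pooled = feature_map.mean(axis=(0, 1))   # global mean pooling -> (C,)
    logits = pooled @ weights + bias         # fully connected layer
    e = np.exp(logits - logits.max())        # numerically stable softmax
    return e / e.sum()                       # probabilities: [live, fake]
```

Because pooling collapses the spatial dimensions before the fully connected layer, the head's parameter count is independent of the input image size, which suits a lightweight embedded deployment.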
The method has the following advantages. First, convolution operations effectively reduce the amount of computation and improve detection speed. Second, the invention takes the MobileNet V2 model as the base network; because MobileNet V2 is a lightweight network, performance is preserved while its structure is fine-tuned and the feature fusion idea is added, so the model can run in real time on an embedded platform with little computation, improving the accuracy, real-time performance and generalization capability of fingerprint living body detection. Finally, the invention performs feature extraction and classification simultaneously: features are extracted directly by the deep learning model, and features learned in a data-driven manner are more general, so the method adapts to various spoofing attacks, which greatly improves its robustness and generalization capability and makes its fingerprint living body detection performance optimal.
Drawings
FIG. 1 is a table of parameters for LivDet 2013;
FIG. 2 is a table of parameters for LivDet 2015;
FIG. 3 is a block diagram of a deep neural network of the present invention;
fig. 4 is a flowchart of an implementation of the overall algorithm.
Detailed Description
Example 1
For a better understanding, the present invention is further described below with reference to the accompanying drawings.
Referring to fig. 1 to 4, a fingerprint living body detection method based on deep learning and feature fusion includes the steps of:
1) Establishing a basic data set: the fingerprint data sets used herein are from LivDet2013 and LivDet2015, which are official data sets of the global fingerprint biopsy contest. Wherein LivDet2013 contains 4 sub-data sets: biometrika, crossMatch, italdata and Swipe. LivDet2015 contains 4 sub-data sets: crossMatch, digital _ Persona, greenBit, hi _Scan. The LivDet dataset has two parts, a training set and a testing set. Fig. 1 and 2 show parameters of LivDet2013 and LivDet2015, respectively, including image size of the fingerprint, number of true samples, number of false samples, and number of materials for preparing the false fingerprint.
2) Constructing a deep neural network model: the determination of the network structure is important for the accuracy, real-time and generalization capability of fingerprint live detection. Firstly, the input size and the number of convolution layers of a single-path convolution network are required to be determined, the structure of the single-path convolution network is finely adjusted based on the structure of the MobileNet V2, a feature fusion idea is added, and a lightweight network model suitable for fingerprint living body detection is designed. The model includes Convolution layer 1 (Convolition 1), inverse residual block 1 (bottlebeck 1), inverse residual block 2 (bottlebeck 2), inverse residual block 3 (bottlebeck 3), inverse residual block 4 (bottlebeck 4), global average Pooling layer (Pooling AVE), convolution layer 2 (Convoltion 2), and output layer (Softmax). Fig. 3 is a diagram of a deep neural network according to the present invention.
3) Feature extraction: a gray value map, a direction field map and a Local Binary Pattern (LBP) map of the fingerprint are prepared, respectively, the three maps are input to a deep neural network model, and features are extracted from the gray value map, the direction field map and the Local Binary Pattern (LBP) map of the fingerprint by using a back residual block 1 (bottleneck 1) and a back residual block 2 (bottleneck 2), respectively. Wherein, the liquid crystal display device comprises a liquid crystal display device,
gray value map: the gray information of the fingerprint is very important to fingerprint identification, and the gray value graph adopts the original fingerprint image to be directly tiled by pixels, and column vectors are extracted to be used as the extracted fingerprint characteristics.
Directional field diagram: the fingerprint image has a relatively clear direction field, and the direction field describes the direction mode information of fingerprint ridges and bone lines. As a global and reliable feature of fingerprints, the direction field is a very important ring in the existing mainstream fingerprint identification technology. Many methods are used to estimate the fingerprint direction field, and the invention adopts a gradient-based method to estimate the direction field, and the specific algorithm process is as follows:
a) The fingerprint image I is divided into a series of non-overlapping blocks of size W × W.
b) The gradient vector $[G_x(x,y), G_y(x,y)]^T$ of each point in a block is computed along the X and Y directions according to formula (1) (the original equation images are unreadable placeholders; formulas (1)–(3) are reconstructed here following the standard squared-gradient method):

$$\begin{bmatrix} G_x(x,y) \\ G_y(x,y) \end{bmatrix} = \begin{bmatrix} \partial I(x,y)/\partial x \\ \partial I(x,y)/\partial y \end{bmatrix} \tag{1}$$

c) The block gradient vector $[G_{Bx}, G_{By}]^T$ of each block is computed according to formula (2), and converted into a block direction θ (0 ≤ θ < π) according to formula (3):

$$G_{Bx} = \sum_{(x,y)} \bigl(G_x^2(x,y) - G_y^2(x,y)\bigr), \qquad G_{By} = \sum_{(x,y)} 2\,G_x(x,y)\,G_y(x,y) \tag{2}$$

$$\theta = \frac{1}{2}\arctan\frac{G_{By}}{G_{Bx}} + \frac{\pi}{2} \tag{3}$$
Local Binary Pattern (LBP) map: the LBP operator is widely applied for describing the local texture characteristics of an image because of its low computational complexity, gray-scale invariance and rotation invariance. It can be expressed as formula (4) (the original equation images are unreadable placeholders; formulas (4)–(5) are reconstructed here from the standard LBP definition):

$$LBP(x_c, y_c) = \sum_{i=0}^{P-1} s(p_i - p_c)\,2^i \tag{4}$$

where $(x_c, y_c)$ denotes the centre position, $p_c$ the pixel value of the centre point, $p_i$ the pixel value of a surrounding point, and $P$ the number of surrounding pixel points, with

$$s(x) = \begin{cases} 1, & x \ge 0 \\ 0, & x < 0 \end{cases} \tag{5}$$
4) Feature fusion: and 3) carrying out feature fusion on different feature layers in the step 3), and then further extracting distinguishing information from the fused feature map by using a reverse residual block 3 (bottleneck 3) and a reverse residual block 4 (bottleneck 4), and positioning the information to a position with specific features as a reference for fingerprint living body detection so as to ensure identity accuracy. Through fusion of the feature layers, multiple information can be fully utilized, the defect of single information is overcome, and the accuracy and generalization capability of the model are improved.
5) And (3) completing deep neural network model training: and (3) global mean pooling is adopted for the feature vectors after fusion in the step (4), the feature map of the last convolution layer is quantized, then the convolution layer is connected with a full connection layer, and then a Softmax logistic regression classification layer is connected to realize classification. The network structure enables the convolution layer and the traditional neural network layer to be connected together, the convolution layer can be regarded as a feature extractor, the obtained features are classified by the traditional neural network layer, and finally the accurate result of fingerprint living body detection is obtained. Training the deep neural network model by using the training set of the basic data set in the step 1), so that the performance of the deep neural network model is optimal.
6) Classifying true and false fingerprints: the test set of the basic data set in step 1) is classified with the optimal deep neural network model trained in step 5); the final test accuracy achieves a good result.

Claims (5)

1. A fingerprint living body detection method based on deep learning and feature fusion, characterized by comprising the following steps:
1) Establishing a basic data set: a large fingerprint image data set is created, comprising live fingerprint images and false fingerprint images.
2) Constructing a deep neural network model: taking the MobileNet V2 model as the base network, its structure is fine-tuned to design a lightweight deep neural network model suitable for fingerprint living body detection.
3) Feature extraction: a gray value map, a direction field map and a Local Binary Pattern (LBP) map of the fingerprint are prepared, the three maps are input into the deep neural network model, and features are extracted from each of them by the model.
4) Feature fusion: fusion calculation is performed on the features extracted in step 3) across the different feature layers.
5) Completing deep neural network model training: the deep neural network model is trained with the training set of the basic data set in step 1) until its performance is optimal.
6) Classifying true and false fingerprints: the test set of the basic data set in step 1) is classified with the optimal deep neural network model trained in step 5), finally obtaining an accurate fingerprint living body detection result.
2. The fingerprint living body detection method based on deep learning and feature fusion according to claim 1, characterized in that in step 2) the deep neural network model is constructed as follows: the choice of network structure is critical for the accuracy, real-time performance and generalization capability of fingerprint living body detection. First, the input size and the number of convolution layers of the single-path convolution network are determined; the structure is then fine-tuned based on MobileNet V2, the feature fusion idea is added, and a lightweight network model suitable for fingerprint living body detection is designed. The model comprises convolution layer 1 (Convolution1), inverted residual block 1 (bottleneck1), inverted residual block 2 (bottleneck2), inverted residual block 3 (bottleneck3), inverted residual block 4 (bottleneck4), a global average pooling layer (PoolingAVE), convolution layer 2 (Convolution2) and an output layer (Softmax).
3. The fingerprint living body detection method based on deep learning and feature fusion according to claim 1, wherein in step 3), features are extracted as follows: features are extracted from the gray value map, the direction field map, and the Local Binary Pattern (LBP) map of the fingerprint using inverted residual block 1 (bottleneck1) and inverted residual block 2 (bottleneck2), respectively.
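The direction field map mentioned here is typically estimated block-wise from squared gradients; the claims do not fix a method, so the classical least-squares estimate serves as one plausible sketch:

```python
import numpy as np

def orientation_field(gray, block=8):
    """Block-wise fingerprint orientation field (illustrative sketch).

    Uses the classical squared-gradient least-squares estimate:
    theta = 0.5 * atan2(2 * sum(gx*gy), sum(gx^2 - gy^2)) per block.
    """
    gy, gx = np.gradient(gray.astype(float))  # row- and column-gradients
    h, w = gray.shape
    bh, bw = h // block, w // block
    theta = np.zeros((bh, bw))
    for i in range(bh):
        for j in range(bw):
            sx = gx[i * block:(i + 1) * block, j * block:(j + 1) * block]
            sy = gy[i * block:(i + 1) * block, j * block:(j + 1) * block]
            num = 2.0 * np.sum(sx * sy)
            den = np.sum(sx ** 2 - sy ** 2)
            theta[i, j] = 0.5 * np.arctan2(num, den)
    return theta
```

For a purely horizontal intensity ramp the estimated angle is 0 in every block, as expected.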
4. The fingerprint living body detection method based on deep learning and feature fusion according to claim 1, wherein in step 4), feature fusion is performed as follows: the different feature layers from step 3) are fused, and inverted residual block 3 (bottleneck3) and inverted residual block 4 (bottleneck4) then extract further discriminative information from the fused feature map, locating regions with distinctive features as the basis for fingerprint living body detection, so as to ensure identity accuracy. Fusing the feature layers makes full use of multiple sources of information, overcomes the limitations of any single source, and improves the accuracy and generalization capability of the model.
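The claim does not name the fusion operator. Channel-wise concatenation of the per-branch feature maps is one common choice and is sketched here under that assumption:

```python
import numpy as np

def fuse_features(branch_maps):
    """Fuse per-branch feature maps of shape (H, W, C_i) by
    channel-wise concatenation -- one common fusion operator;
    the claims do not specify which one is used."""
    return np.concatenate(branch_maps, axis=-1)
```

Concatenating the gray-value, direction-field, and LBP branches of shape (H, W, C) yields a single (H, W, 3C) map for bottleneck3 and bottleneck4 to process.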
5. The fingerprint living body detection method based on deep learning and feature fusion according to claim 1, wherein in step 5), the deep neural network model is trained as follows: global average pooling is applied to the fused feature vector from step 4) to quantize the feature map of the last convolution layer; this is connected to a fully connected layer, followed by a Softmax logistic regression classification layer. This structure joins the convolution layers to a traditional neural network layer: the convolution layers act as a feature extractor, the traditional neural network layer classifies the extracted features, and an accurate fingerprint living body detection result is finally obtained. The deep neural network model is trained with the training set of the basic data set in step 1) until its performance is optimal.
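The classification head described above (global average pooling, a fully connected layer, then Softmax over the live/fake classes) can be sketched as follows; the weight and bias arguments are illustrative placeholders, not the patent's trained parameters:

```python
import numpy as np

def classify_head(feature_map, w_fc, b_fc):
    """Global average pooling over the spatial dimensions, a fully
    connected layer, then Softmax over two classes (live / fake).

    feature_map: (H, W, C); w_fc: (C, 2); b_fc: (2,)
    """
    pooled = feature_map.mean(axis=(0, 1))   # (C,) global average pooling
    logits = pooled @ w_fc + b_fc            # (2,) fully connected layer
    e = np.exp(logits - logits.max())        # numerically stable softmax
    return e / e.sum()
```

The output is a probability pair summing to 1; the larger entry gives the live-vs-fake decision.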
CN202310147726.7A 2023-02-22 2023-02-22 Fingerprint living body detection method based on deep learning and feature fusion Pending CN116092134A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310147726.7A CN116092134A (en) 2023-02-22 2023-02-22 Fingerprint living body detection method based on deep learning and feature fusion

Publications (1)

Publication Number Publication Date
CN116092134A true CN116092134A (en) 2023-05-09

Family

ID=86188005

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310147726.7A Pending CN116092134A (en) 2023-02-22 2023-02-22 Fingerprint living body detection method based on deep learning and feature fusion

Country Status (1)

Country Link
CN (1) CN116092134A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116721441A (en) * 2023-08-03 2023-09-08 厦门瞳景智能科技有限公司 Block chain-based access control security management method and system
CN116721441B (en) * 2023-08-03 2024-01-19 厦门瞳景智能科技有限公司 Block chain-based access control security management method and system
CN117037221A (en) * 2023-10-08 2023-11-10 腾讯科技(深圳)有限公司 Living body detection method, living body detection device, computer equipment and storage medium
CN117037221B (en) * 2023-10-08 2023-12-29 腾讯科技(深圳)有限公司 Living body detection method, living body detection device, computer equipment and storage medium

Similar Documents

Publication Publication Date Title
Zhu et al. AR-Net: Adaptive attention and residual refinement network for copy-move forgery detection
CN109558832B (en) Human body posture detection method, device, equipment and storage medium
US11830230B2 (en) Living body detection method based on facial recognition, and electronic device and storage medium
CN110059589B (en) Iris region segmentation method in iris image based on Mask R-CNN neural network
CN111274916B (en) Face recognition method and face recognition device
CN116092134A (en) Fingerprint living body detection method based on deep learning and feature fusion
US20230021661A1 (en) Forgery detection of face image
CN111401384A (en) Transformer equipment defect image matching method
CN110222572B (en) Tracking method, tracking device, electronic equipment and storage medium
US11430255B2 (en) Fast and robust friction ridge impression minutiae extraction using feed-forward convolutional neural network
CN111079514A (en) Face recognition method based on CLBP and convolutional neural network
AU2021101613A4 (en) Multi-modal feature fusion–based fingerprint liveness detection method
CN111767879A (en) Living body detection method
Manjunatha et al. Deep learning-based technique for image tamper detection
CN112052830A (en) Face detection method, device and computer storage medium
KR20230169104A (en) Personalized biometric anti-spoofing protection using machine learning and enrollment data
CN115830531A (en) Pedestrian re-identification method based on residual multi-channel attention multi-feature fusion
Liu et al. Iris recognition in visible spectrum based on multi-layer analogous convolution and collaborative representation
CN110263726B (en) Finger vein identification method and device based on deep correlation feature learning
CN113033305B (en) Living body detection method, living body detection device, terminal equipment and storage medium
CN113763274A (en) Multi-source image matching method combining local phase sharpness orientation description
CN113139544A (en) Saliency target detection method based on multi-scale feature dynamic fusion
Yu et al. An identity authentication method for ubiquitous electric power Internet of Things based on dynamic gesture recognition
Park et al. Patch-based fake fingerprint detection using a fully convolutional neural network with a small number of parameters and an optimal threshold
Stojanović et al. Deep learning‐based approach to latent overlapped fingerprints mask segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination