WO2024032277A1 - Personalized face biometric key generation method based on deep neural network coding - Google Patents

Personalized face biometric key generation method based on deep neural network coding

Info

Publication number
WO2024032277A1
WO2024032277A1 (PCT/CN2023/105657)
Authority
WO
WIPO (PCT)
Prior art keywords
face
feature
feature map
neural network
biometric key
Prior art date
Application number
PCT/CN2023/105657
Other languages
English (en)
French (fr)
Inventor
吴震东
黄炎华
Original Assignee
杭州电子科技大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 杭州电子科技大学
Publication of WO2024032277A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • G06F21/32User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/45Structures or tools for the administration of authentication
    • G06F21/46Structures or tools for the administration of authentication by designing passwords or checking the strength of passwords
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/096Transfer learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/771Feature selection, e.g. selecting representative features from a multi-dimensional feature space
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • The invention belongs to the technical field of biometric key generation and relates to a personalized face biometric key generation method based on deep neural network coding.
  • Biometric authentication (fingerprints, iris, face, etc.) has gradually become part of everyday life.
  • Protecting biometric templates is therefore crucial.
  • Biometric template protection generally requires three basic properties: unlinkability, irreversibility and revocability.
  • Unlinkability means the same feature template cannot be used to match against different databases.
  • Irreversibility means the original biometric features cannot be recovered from the protected template.
  • Revocability means the protected template can be replaced frequently to guard against leakage.
  • Traditional biometric protection templates use symmetric or public-key cryptography for encryption and decryption. The key itself, however, burdens its owner: once it leaks, the biometric features are directly exposed. Biometric-key methods have therefore been widely studied in recent years; generating a key directly from biometric features both secures the features and removes the difficulty of memorizing a key. Research on face-based biometric keys is still limited. Traditional face biometric keys use fuzzy commitment and fuzzy vault schemes, which tolerate a limited number of errors during matching via fuzzy extraction, but both perform unsatisfactorily in recognition accuracy and false acceptance rate.
  • End-to-end methods that directly learn random binary codes lack interpretability and are difficult to apply to a thousand-person face system.
  • End-to-end face biometric key generation is also weak in key strength, generally below 512 bits, and its accuracy typically drops as key strength increases.
  • Existing biometric key generation techniques mainly include: (1) Chinese Patent No. 201410075104.9 discloses a frontal-face-image biometric key generation method: the frontal face image is projected into a high-dimensional space after a feature-space transform, the facial feature information is stabilized within an acceptable fluctuation range in that space, a digital sequence is extracted from the stabilized feature vector, and the biometric key is encoded from the digital sequence.
  • Using the traditional PCA feature extraction method to obtain the face feature vector yields a short stable key (generally 256 bits) and unsatisfactory accuracy.
  • (2) Chinese Patent No. 202110350155.8 discloses a fingerprint biometric key generation method based on deep neural network coding: classic fingerprint image processing is combined with deep neural networks to extract, more accurately, the stable feature components of different samples of the same fingerprint, and the deep network further stabilizes the fingerprint features and feature values.
  • Fingerprints, however, contain few stable features, so the final key bit sequence is short (generally around 512 bits); as the bit length grows, recognition accuracy drops significantly and the false acceptance rate rises.
  • The present invention addresses the problems common to current face biometric key methods: low strength, low recognition accuracy and a high false acceptance rate.
  • A personalized face biometric key generation method based on deep neural network coding is proposed.
  • The face image is preprocessed; the personalized feature extractor Feature_Extraction converts the preprocessed face image into a feature map with stability ≥80%; the Stabilizer then outputs a face biometric key with stability ≥99.5%.
  • Error correction coding and fuzzy extraction are combined to obtain a face biometric key with stability >99.5%.
  • The personalized feature extractor comprises the personally trained deep neural network module Fmap_E, a feature point screening module and a fault-tolerant binary quantization module.
  • The stabilizer is a feature-stabilizing convolutional network based on an encoder-decoder structure. After preprocessing, the facial feature map passes in turn through personalized feature point extraction, screening and fault-tolerant binary quantization; the feature points are stabilized, and a stable face biometric key is obtained after fuzzy extraction of the biometric key.
  • Through the personally trained deep neural network module Fmap_E, facial features are learned per person and a feature map containing feature points is produced.
  • Feature point screening removes the features common to all facial feature maps and selects the features unique to a specific face.
  • Fault-tolerant binary quantization selects and quantizes the feature points in the feature map on the basis of the feature point screening.
  • Feature point screening picks the face's unique features and binary quantization amplifies them, which benefits the stabilizer's stabilization.
  • The stabilizer converts the screened feature map into a more stable face biometric key. After error correction codes correct the stabilized key against the target biometric key, fuzzy extraction extracts the final biometric key.
  • the present invention specifically includes the following steps:
  • The images preprocessed in step 1 are used to pre-train a deep neural network.
  • The pre-trained network can accurately identify the faces in the pre-training data set.
  • Step 3 Personalized feature extraction.
  • Step 3.1 Construct the deep neural network module Fmap_E based on personalized training:
  • The personal face atlas to be trained is preprocessed as in step 1 and then fed into the deep neural network pre-trained in step 2 for transfer learning.
  • A part of the transfer-learned deep neural network is cut out to form the personally trained deep neural network module Fmap_E, used to extract personalized facial feature maps.
  • Step 3.2 Feature map screening:
  • The features common to all facial feature maps are removed, and the features unique to a single face class are selected from its feature maps.
  • The stability of the feature map after feature point screening is ≥60%.
  • The average feature map of a single face class is obtained from that class's feature atlas; binarizing the average feature map yields a binarization matrix, according to which the class's feature atlas is processed.
  • The binarized average feature map of a single face class is regarded as the face biometric key; after quantization its stability is ≥80%.
  • the stabilizer is a feature-stabilizing convolutional network with an encoder-decoder structure.
  • The face biometric key with stability ≥80% output by step 3 is fed into the stabilizer for stabilization learning, which outputs a face biometric key with stability ≥99.5%.
  • the biometric key output in step 4 is corrected by error correction coding through the error correction coding module, and fuzzy extraction is used to obtain a face biometric key with a stability of >99.5%.
  • the deep neural network described in step 2 uses the residual network Resnet in the convolutional neural network.
  • step 3.2 includes global screening and local screening:
  • Global screening uses all feature maps Faces_featuremap and averages each pixel cumulatively to obtain the global feature map all_Fm = ΣFaces_featuremap / len(Faces_featuremap), where all_Fm is the global feature map and len() is the number of images in the data set.
  • Local screening cumulatively averages the i-th face class's feature atlas Face_featuremap_i to obtain the local feature map i_Fm = ΣFace_featuremap_i / len(Face_featuremap_i), where i_Fm is the local feature map of the i-th face class and len() is the number of images in the data set.
  • Feature point selection yields the screened feature atlas F_m_i and the feature point selection matrix m_i.
  • m_i is a binary matrix of the same size as Face_featuremap_i.
  • F_m_i = choose(Face_featuremap_i, m_i), where choose() is the feature point selection operation, F_m_i is the feature point screening result of the i-th face class, Face_featuremap_i is the feature atlas of the i-th face class, and m_i is the feature point selection matrix of the i-th face class.
  • The specific steps of step 3.3 are:
  • The feature atlas F_m_i is first summed and averaged to obtain the average feature map avg_i = ΣF_m_i / len(F_m_i), where avg_i is the average feature map of the screened data set F_m_i of the i-th face class and len(F_m_i) is the number of face images in F_m_i.
  • A combined global and local binarization operation is applied to avg_i to obtain the preliminary face biometric key B_K_i.
  • The global threshold T_all_i is obtained with the global OTSU binarization method.
  • avg_i is divided into blocks of size s×s; for each block the mean and standard deviation are computed. If the mean is below threshold θ3 and the standard deviation is below threshold θ4, local binarization is applied to the block; otherwise the block is binarized with the global threshold T_all_i.
  • Traversing avg_i yields the binarization threshold matrix T_i, and T_i is used to binarize the feature maps in F_m_i.
  • Binary quantization then yields b_k_i, the to-be-stabilized biometric key of the binarized F_m_i.
  • B_K_i = GLBinarization(avg_i, T_i);
  • b_k_i = GLBinarization(F_m_i, T_i);
  • GLBinarization() is the binary quantization operation, and T_i is the binarization threshold matrix of the i-th face class.
  • The stabilizer in step 4 is a deep neural network UNet with an encoder-decoder structure.
  • A learning matrix W is added on top of UNet; by learning the relationships between points, W improves the stabilizer's effect.
  • step 5 is specifically:
  • Reed-Solomon error correction is used; the Reed-Solomon encoding process is as follows:
  • B_K_i is divided into 8 blocks of 512 bits. Each block is first XORed with a 512-bit random binary sequence Random binary sequence_n and then Reed-Solomon encoded with 64 bits of error correction per block, yielding 8 blocks N_i^n.
  • The Reed-Solomon decoding process is as follows:
  • Matching is likewise performed block by block.
  • B_K'_i, produced when b_k_i passes through the stabilizer Stabilizer, is also divided into 8 blocks; after XOR with the 8 blocks generated during encoding, Reed-Solomon decoding is performed, and decoding yields Random binary sequence'_n.
  • Random binary sequence'_n is a random binary code, and Reed-solomon-recode() is the Reed-Solomon error correction encoding operation.
  • Fuzzy extraction Fuzzy_extraction() is performed after matching Random binary sequence'_n against the original Random binary sequence_n; if the fuzzy extraction condition is met, the target biometric key B_K_i is extracted. The number of fully matching blocks λ is the fuzzy extraction requirement: if λ ≥ n5, the requirement is met and B_K_i can be extracted:
  • B_K_i = Fuzzy_extraction(Random binary sequence'_n, Random binary sequence_n);
  • Random binary sequence_n is the original random binary sequence, Random binary sequence'_n is the binary sequence after Reed-Solomon decoding, and Fuzzy_extraction() is the fuzzy extraction operation.
  • Personalized training of the deep neural network produces a feature map rich in an individual's facial features.
  • The feature points in the map, however, are scattered and unstable and hard to use directly. A fault-tolerant feature map method is therefore proposed that uses feature selection and quantization to extract facial feature maps containing a person's unique feature points.
  • A convolutional network with an encoder-decoder structure is designed to stabilize the feature map.
  • Through convolutions with different stride lengths, the encoder-decoder network learns both shallow and deep features of the feature map and stabilizes the feature map into a biometric key.
  • Fuzzy extraction is applied to the key stabilized by the encoder-decoder, generating a face biometric key with high strength, high recognition accuracy and a low false acceptance rate.
  • Compared with existing face biometric key generation methods, the present invention has the following advantages:
  • Combining a feature extractor with a stabilizer is more accurate than end-to-end face biometric key generation that directly fuzzy-extracts the deep network's output feature vector.
  • End-to-end accuracy is <85%, while the accuracy of the combined feature extractor and stabilizer is >99.5%.
  • The deep neural network and matrix operations in the face biometric key generation process are all irreversible, so the face biometric key satisfies irreversibility.
  • The face biometric key can be used directly as a key in symmetric cryptosystems (such as AES) and public-key cryptosystems (such as RSA), meeting the requirements of revocability and unlinkability.
  • Figure 1 is a flow chart of facial biometric key generation according to the present invention.
  • Figure 2 is a structural diagram of the deep neural network module Fmap_E based on personalized training
  • Figure 3 is a flow chart of feature point screening
  • Figure 4 shows the results of different binarization methods for feature maps
  • Figure 5 is the stabilizer network structure diagram
  • Figure 6 shows the error correction code encoding
  • Figure 7 shows error correction code decoding and fuzzy extraction.
  • A personalized face biometric key generation method based on deep neural network coding specifically includes the following steps:
  • Step 1 Preprocess the pre-training data set.
  • The pre-training face data set is a public face data set containing a large amount of data.
  • The MTCNN method is a common face image preprocessing method in this field.
  • Step 2 Pre-train the deep neural network E1.
  • E1 is the residual network Resnet, a convolutional neural network.
  • The preprocessed pre-training face data set is used to pre-train Resnet with the loss function Loss below; after pre-training, Resnet can accurately identify the faces in the public data set.
  • θ_yi is the angle between the feature vector x_i and the learned weight vector W_j, x_i is the output feature vector of E1, and i is the classification category.
  • Step 3 Build the personalized feature extractor Feature_Extraction to extract personalized features and convert the face image into a feature map with stability ≥80%:
  • Stability is the ratio, over all samples of one image class, of pixel positions whose value is identical in every sample to the total number of pixels: stability = num_equal(img) / size_pixel(img). Images with stability ≥80% give better results after learning on the stabilizer.
  • stability is the stability of the image set img of one class, size_pixel() is the number of pixels an image contains, and num_equal() is the number of pixel positions at which all samples in the set share the same value.
  • Step 3.1 Construct the personally trained deep neural network module Fmap_E by transfer learning on the pre-trained deep neural network E1.
  • Transfer learning is performed on the pre-trained deep neural network E1 (Resnet in this embodiment).
  • The transfer learning method used is standard in this field and makes the deep neural network accurately identify the faces in Faces.
  • The deep neural network E2 is obtained.
  • E2 is a Resnet network with the Resnet50 structure.
  • The transfer-learned deep neural network E2 is truncated to form Fmap_E (the last two layers of Resnet50, the average pooling layer Avgpool and the fully connected layer Liner, are removed; the remaining part is used as the feature extractor Fmap_E), and the facial feature maps in Faces are extracted through Fmap_E.
  • After transfer learning, E2 identifies the faces in Faces more effectively, allowing Fmap_E to extract face feature maps with more personalized characteristics.
  • Faces is passed through the feature extractor Fmap_E to obtain the feature map set Faces_featuremap:
  • Fmap_E is the personally trained deep neural network module.
  • Resnet is the network after transfer learning.
  • Liner() is the last, fully connected layer of Resnet.
  • Avgpool() is the penultimate, average pooling layer of Resnet.
  • Faces is the face data set for transfer learning.
  • Faces_featuremap is the face feature atlas.
  • Step 3.2 Filter feature points and filter the feature map Faces_featuremap:
  • The feature map set Faces_featuremap is screened for feature points, using both global screening and local screening.
  • Global screening uses all feature maps Faces_featuremap and averages each pixel cumulatively to obtain the global feature map all_Fm = ΣFaces_featuremap / len(Faces_featuremap).
  • all_Fm is the global feature map and len() is the number of images in the data set.
  • Local screening cumulatively averages the i-th face class's feature atlas Face_featuremap_i to obtain the local feature map i_Fm = ΣFace_featuremap_i / len(Face_featuremap_i).
  • i_Fm is the local feature map of the i-th face class and len() is the number of images in the data set.
  • Feature point selection yields the screened feature atlas F_m_i and the feature point selection matrix m_i.
  • m_i is a binary matrix of the same size as Face_featuremap_i.
  • F_m_i = choose(Face_featuremap_i, m_i), where choose() is the feature point selection operation.
  • F_m_i is the feature point screening result of the i-th face class.
  • Face_featuremap_i is the feature atlas of the i-th face class.
  • m_i is the feature point selection matrix of the i-th face class.
  • The feature atlas F_m_i (containing multiple different face samples of the same person) is first summed and averaged to obtain the average feature map avg_i.
  • avg_i is the average feature map of the screened data set F_m_i of the i-th face class.
  • len(F_m_i) is the number of face images in F_m_i; applying the GLB binarization operation to avg_i yields the preliminary face biometric key B_K_i.
  • GLB binarization combines global and local binarization. As shown in Figure 4, image points are too sparsely distributed after global binarization alone, which raises the false acceptance rate of subsequent recognition, while local binarization alone leaves them too dense, which lowers recognition accuracy. A binarization method combining the global and the local therefore quantizes the feature points better.
  • The global threshold T_all_i is obtained with the global OTSU binarization method.
  • The OTSU method is a standard method for global image binarization.
  • A block that qualifies for local binarization is processed with a Gaussian filter; the filter's output is that block's threshold matrix, i.e. T_s×s = Gauss filter(avg_i s×s), where Gauss filter is a Gaussian filter.
  • Binary quantization then yields b_k_i, the to-be-stabilized biometric key of the binarized F_m_i.
  • B_K_i = GLBinarization(avg_i, T_i)
  • b_k_i = GLBinarization(F_m_i, T_i),
  • GLBinarization() is the binary quantization operation, and T_i is the binarization threshold matrix of the i-th face class.
  • Step 4 The stabilizer learns the stabilization from b_k_i to B_K_i, where B_K_i is the preset biometric key target value.
  • The stabilizer Stabilizer is a deep neural network with an encoder-decoder structure.
  • UNet is used; unlike other encoder-decoder structures, UNet reuses the down-sampled data during up-sampling, which helps restore the image.
  • A plain UNet cannot learn an effective stabilization from b_k_i to B_K_i, so a learning matrix W (the black filled box in Figure 5) is added on top of UNet; by learning the relationships between points, W greatly increases the stabilization ability of the network.
  • Step 5 Error correction coding and fuzzy extraction stabilize B_K'_i to B_K_i.
  • B_K_i is divided into 8 blocks of 512 bits. Each block is first XORed with a 512-bit random binary sequence Random binary sequence_n and then Reed-Solomon encoded with 64 bits of error correction per block, yielding 8 blocks. Adding the random binary code increases the randomness of the face biometric key.
  • Matching is likewise performed block by block.
  • B_K'_i, produced when b_k_i passes through the stabilizer Stabilizer, is also divided into 8 blocks; after XOR with the 8 blocks generated during encoding, Reed-Solomon decoding is performed, and decoding yields Random binary sequence'_n.
  • Random binary sequence'_n is a random binary code, and Reed-solomon-recode() is the Reed-Solomon error correction encoding operation.
  • Fuzzy extraction Fuzzy_extraction() is performed after matching Random binary sequence'_n against the original Random binary sequence_n; if the fuzzy extraction condition is met, the target biometric key B_K_i is extracted. Fuzzy extraction is a standard method in this field that obtains the key while allowing partial errors.
  • Random binary sequence_n is the original random binary sequence, Random binary sequence'_n is the binary sequence after Reed-Solomon decoding, and Fuzzy_extraction() is the fuzzy extraction operation.
  • The face biometric key extraction process is as follows: a new face F_i belonging to the i-th class of the data set Faces passes through the trained personalized feature extractor Feature_Extraction to obtain its personalized feature map; the personalized feature map produced by Feature_Extraction is then stabilized with the trained stabilizer Stabilizer; finally the face biometric key is obtained through Reed-Solomon decoding and fuzzy extraction.
  • Feature_Extraction comprises extracting the face feature map with Fmap_E, selecting feature points with the feature point selection matrix m_i, and binary quantization with the binarization threshold matrix T_i.
  • Transfer learning is used: the convolutional network first learns facial features from pre-training on a large face data atlas; the pre-trained network is then transfer-learned on the actually required face database, so that the convolutional network learns the target faces' characteristics more accurately.
  • Feature screening and binarization select each person's unique features.
  • The face biometric key is extracted and stabilized by a stabilizer with an encoder-decoder structure; finally, error correction coding and fuzzy extraction are combined to extract a biometric key that meets the target.
  • There are currently three end-to-end generation methods: an end-to-end biometric key generation method based on SeNet, one based on VGGNet, and one based on binarization fault tolerance; all three are known in the art.
  • The SeNet-based end-to-end method stacks several SeNet network modules and fully connected layers to learn the mapping between the network's output feature vector and a random binary code. SeNet is a standard network in this field.
  • The VGGnet-based end-to-end method appends several fully connected layers to VGGnet to learn the mapping from the output feature vector to a random binary code.
  • VGGnet is a standard network in this field. Both methods learn a mapping between feature vectors and random binary codes; in practice, because of the randomness of the key itself and the near-randomness of biometric features, both perform poorly.
  • The end-to-end method based on binarization fault tolerance directly appends a Sigmoid layer and a binarization layer with threshold p after the feature extraction network; p is generally 0.5.
  • The three end-to-end methods were tested on the same data as the present invention, namely Faces. The experimental results compare as follows:
  • The personalized feature extraction proposes a large number of feature points, and the generated face biometric key has a bit strength ≥4096 bits, outperforming end-to-end face biometric key generation that directly fuzzy-extracts the convolutional network's output feature vector; the accuracy of the end-to-end methods is <85%.
  • The combination of feature extractor and stabilizer achieved an accuracy >99.5% and a false acceptance rate <0.1% in the experiments.
  • The generated biometric key is a binary sequence and therefore contains no biometric information of the original face, effectively preventing privacy leakage.
  • The generated biometric key can be applied directly in symmetric key systems (such as AES) and public-key cryptosystems (such as RSA).
  • Combination with these key systems guarantees the revocability and unlinkability of the biometric key, and the deep neural network and matrix operations in the final key extraction process are irreversible, so the face biometric key satisfies irreversibility.
  • The proposed face biometric key method effectively improves the security and flexibility of facial features.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Security & Cryptography (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Hardware Design (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

A personalized face biometric key generation method based on deep neural network coding. The method comprises: preprocessing a face image; converting the preprocessed face image into a feature map with stability ≥80% by means of the personalized feature extractor Feature_Extraction; outputting a face biometric key with stability ≥99% by means of the stabilizer Stabilizer; and, combining error correction codes with fuzzy extraction, correcting the output biometric key by means of an error correction coding module and then obtaining, by fuzzy extraction, a face biometric key with stability >99.5%. The face biometric key finally generated by the method has high bit strength and high recognition accuracy, and can be used directly as a key in symmetric cryptosystems (such as AES) and public-key cryptosystems (such as RSA), satisfying revocability and unlinkability.

Description

Personalized face biometric key generation method based on deep neural network coding
Technical Field
The invention belongs to the technical field of biometric key generation and relates to a personalized face biometric key generation method based on deep neural network coding.
Background Art
In today's big data era, with the rapid development of biometric recognition technology, biometric authentication (fingerprints, iris, face, etc.) has gradually become part of everyday life. As the accuracy of biometric authentication has improved, however, attention has turned to the privacy of biometric data. More and more biometric features are collected and stored directly in databases, and these features bear directly on personal property security. If such unprotected biometric databases are attacked and leaked, the privacy risk is severe. Biometric template protection is therefore crucial. Biometric template protection generally requires three basic properties: unlinkability, irreversibility and revocability. Unlinkability means the same feature template cannot be used to match against different databases; irreversibility means the original biometric features cannot be recovered from the protected template; revocability means the protected template can be replaced frequently to guard against leakage.
Traditional biometric protection templates use symmetric or public-key cryptosystems for encryption and decryption, but the key burdens its owner: once it leaks, the biometric features are directly exposed. Biometric-key methods have therefore been widely studied in recent years; generating a key directly from biometric features both secures the features and eases the burden of memorizing a key. Research on face-based biometric keys is still limited. Traditional face biometric keys use fuzzy commitment and fuzzy vault schemes, which tolerate a limited number of errors during matching via fuzzy extraction, but both perform unsatisfactorily in recognition accuracy and false acceptance rate. With the development of deep neural networks, researchers have turned to end-to-end learning of the mapping from facial features to random binary codes. End-to-end learning of random binary codes, however, lacks interpretability and is difficult to apply directly to a thousand-person face system. Moreover, end-to-end face biometric keys are weak in key strength, generally below 512 bits, and accuracy usually drops as key strength increases.
Existing biometric key generation techniques mainly include: (1) Chinese Patent No. 201410075104.9 discloses a frontal-face-image biometric key generation method: the frontal face image is projected into a high-dimensional space after a feature-space transform, the facial feature information is stabilized within an acceptable fluctuation range in that space, a digital sequence is extracted from the stabilized feature vector, and the biometric key is encoded from the digital sequence. Using the traditional PCA feature extraction method to obtain the face feature vector yields a short stable key (generally 256 bits) and unsatisfactory accuracy. (2) Chinese Patent No. 202110350155.8 discloses a fingerprint biometric key generation method based on deep neural network coding: classic fingerprint image processing is combined with deep neural networks to extract, more accurately, the stable feature components of different samples of the same fingerprint, and the deep network further stabilizes the fingerprint features and feature values. Fingerprints, however, contain few stable features, so the final key bit sequence is short (generally around 512 bits); as the bit length grows, recognition accuracy drops markedly and the false acceptance rate rises.
Summary of the Invention
The present invention addresses the problems common to current face biometric key methods, namely low strength, low recognition accuracy and a high false acceptance rate, and proposes a personalized face biometric key generation method based on deep neural network coding.
To achieve the object of the invention, the following technical solution is adopted:
A face image is preprocessed; the personalized feature extractor Feature_Extraction converts the preprocessed face image into a feature map with stability ≥80%; the stabilizer Stabilizer outputs a face biometric key with stability ≥99.5%; combining error correction codes and fuzzy extraction, the error correction coding module corrects the output biometric key, after which fuzzy extraction yields a face biometric key with stability >99.5%.
The personalized feature extractor comprises the personally trained deep neural network module Fmap_E, a feature point screening module and a fault-tolerant binary quantization module. The stabilizer is a feature-stabilizing convolutional network based on an encoder-decoder structure. After preprocessing, the facial feature map passes in turn through personalized feature point extraction, screening and fault-tolerant binary quantization; the feature points are stabilized, and a stable face biometric key is obtained after fuzzy extraction of the biometric key.
Preprocessing locates the face within a relatively complex image and removes as much environmental and other extraneous information as possible. The personally trained deep neural network module Fmap_E learns facial features per person and produces a feature map containing feature points. Feature point screening removes the features common to all facial feature maps and selects the features unique to a specific face. On that basis, fault-tolerant binary quantization selects and quantizes the feature points in the map: screening picks the face's unique features and binary quantization amplifies them, which benefits the stabilizer's stabilization. The stabilizer Stabilizer converts the screened feature map into a more stable face biometric key. After error correction codes correct the stabilized key against the target biometric key, fuzzy extraction extracts the final biometric key.
The invention specifically comprises the following steps:
Step 1. Image preprocessing:
A public face data set serves as the pre-training data set, and the images in it are preprocessed: the face region is segmented out of each image, and the segmented face region images are scaled to the same size, giving the face feature images.
Step 2. Pre-training:
The images preprocessed in step 1 are used to pre-train a deep neural network; the pre-trained network can accurately identify the faces in the pre-training data set.
Step 3. Personalized feature extraction. To improve how well the images learn on the stabilizer, the face image is converted into a face biometric key with stability ≥80%:
Step 3.1. Construct the personally trained deep neural network module Fmap_E:
The personal face atlas to be trained is preprocessed as in step 1 and then fed into the deep neural network pre-trained in step 2 for transfer learning. A part of the transfer-learned network is cut out to form the personally trained deep neural network module Fmap_E, used to extract personalized facial feature maps.
Step 3.2 Feature map screening:
The features common to all facial feature maps are removed, and the features unique to a single face class are selected from its feature maps; the stability of the feature map after feature point screening is ≥60%.
Step 3.3 Binary quantization:
The average feature map of a single face class is obtained from that class's feature atlas; binarizing the average feature map gives a binarization matrix, according to which the class's feature atlas is processed. The binarized average feature map of the class is regarded as the face biometric key; after quantization its stability is ≥80%.
Step 4. Stabilizer learning:
The stabilizer is a feature-stabilizing convolutional network with an encoder-decoder structure. The face biometric key with stability ≥80% output by step 3 is fed into the stabilizer for stabilization learning, and a face biometric key with stability ≥99.5% is output.
Step 5. Fuzzy extraction:
Combining error correction coding and fuzzy extraction, the error correction coding module corrects the biometric key output by step 4, after which fuzzy extraction yields a face biometric key with stability >99.5%.
Further, the deep neural network in step 2 uses the residual network Resnet from among convolutional neural networks.
Further, step 3.2 comprises global screening and local screening:
Global screening uses all feature maps Faces_featuremap and averages each pixel cumulatively to obtain the global feature map all_Fm = ΣFaces_featuremap / len(Faces_featuremap), where all_Fm is the global feature map and len() is the number of images in the data set.
Local screening cumulatively averages the i-th face class's feature atlas Face_featuremap_i to obtain the local feature map i_Fm = ΣFace_featuremap_i / len(Face_featuremap_i), where i_Fm is the local feature map of the i-th face class and len() is the number of images in the data set.
In each feature map of the i-th class's feature atlas Face_featuremap_i, the feature points that simultaneously satisfy the screening conditions are taken as the map's preliminary screening points; for a feature point at abscissa a and ordinate b that meets the selection requirements, the corresponding entry of the feature point selection matrix m_i is set to m_i(a,b)=1, otherwise m_i(a,b)=0; θ1 and θ2 are the set thresholds.
After feature point selection, the screened feature atlas F_m_i and the feature point selection matrix m_i are obtained. m_i is a binary matrix of the same size as Face_featuremap_i. choose() is the selection process over the feature points of Face_featuremap_i: each image is traversed point by point, a feature point is kept where m_i(a,b)=1 and discarded where m_i(a,b)=0.
F_m_i = choose(Face_featuremap_i, m_i), where choose() is the feature point selection operation, F_m_i is the feature point screening result of the i-th face class, Face_featuremap_i is the feature atlas of the i-th face class, and m_i is the feature point selection matrix of the i-th face class.
Further, step 3.3 is specifically:
Before binary quantization, the feature atlas is summed and averaged to obtain the average feature map avg_i, the average feature map of the screened data set F_m_i of the i-th face class, where len(F_m_i) is the number of face images in F_m_i; a combined global and local binarization operation on avg_i gives the preliminary face biometric key B_K_i.
First the global threshold T_all_i is obtained with the global OTSU binarization method. Then avg_i is divided into blocks of size s×s, and each block's mean and standard deviation are computed. If the mean is below threshold θ3 and the standard deviation is below threshold θ4, local binarization is applied to the block: a Gaussian filter processes the block and the filter's output is the block's threshold matrix, i.e. T_s×s = Gauss filter(avg_i s×s), where Gauss filter is a Gaussian filter. Otherwise the block is binarized with the global threshold T_all_i, so that T_s×s = T_all_i. Traversing avg_i yields the binarization threshold matrix T_i, and T_i is used to binarize the feature maps in F_m_i.
Binary quantization then yields b_k_i, the to-be-stabilized biometric key of the binarized F_m_i. Using the feature binarization threshold matrix T_i, avg_i > T_i is evaluated: if avg_i(a,b) > T_i(a,b) then B_K_i = 255, otherwise B_K_i = 0, where a and b are the abscissa and ordinate.
B_K_i = GLBinarization(avg_i, T_i);
b_k_i = GLBinarization(F_m_i, T_i);
GLBinarization() is the binary quantization operation, and T_i is the binarization threshold matrix of the i-th face class.
Further, the stabilizer in step 4 is a deep neural network Unet with an encoder-decoder structure; a learning matrix W is added on top of UNet, and by learning the relationships between points W improves the stabilizer's effect.
The combination of cross-entropy and Sigmoid is used as the loss function, Loss = BCE(Sigmoid(), B_K_i); the to-be-stabilized biometric key b_k_i is the stabilizer's input and the preliminary face biometric key B_K_i is the learning target.
Further, step 5 is specifically:
Reed-Solomon error correction is used; the Reed-Solomon encoding process is as follows:
B_K_i is divided into 8 blocks of 512 bits. Each block is first XORed with a 512-bit random binary sequence Random binary sequence_n and then Reed-Solomon encoded with 64 bits of error correction per block, yielding 8 blocks N_i^n.
The Reed-Solomon decoding process is as follows:
Matching is likewise performed block by block. B_K'_i, produced when b_k_i passes through the stabilizer Stabilizer, is also divided into 8 blocks; after XOR with the 8 blocks generated during encoding, Reed-Solomon decoding is performed, and decoding yields Random binary sequence'_n.
Random binary sequence'_n is a random binary code, and Reed-solomon-recode() is the Reed-Solomon error correction encoding operation.
Fuzzy extraction Fuzzy_extraction() is performed after matching Random binary sequence'_n against the original Random binary sequence_n; if the fuzzy extraction condition is met, the target biometric key B_K_i is extracted. The number of fully matching blocks λ is the fuzzy extraction requirement: if λ ≥ n5, the requirement is met and B_K_i can be extracted.
B_K_i = Fuzzy_extraction(Random binary sequence'_n, Random binary sequence_n);
Random binary sequence_n is the original random binary sequence, Random binary sequence'_n is the binary sequence after Reed-Solomon decoding, and Fuzzy_extraction() is the fuzzy extraction operation.
Personalized training of the deep neural network produces a feature map rich in an individual's facial features, but the feature points in it are scattered and unstable and hard to use directly. A fault-tolerant feature map method is therefore proposed: feature selection and quantization extract facial feature maps containing a person's unique feature points. A convolutional network with an encoder-decoder structure is then designed to stabilize the feature map; through convolutions with different stride lengths it learns the map's shallow and deep features and stabilizes the feature map into a biometric key. Finally, fuzzy extraction is applied to the key stabilized by the encoder-decoder, generating a face biometric key of high strength, high recognition accuracy and low false acceptance rate.
Compared with existing face biometric key generation methods, the present invention has the following advantages:
1. The personalized feature extraction proposes many feature points, and the bit strength of the final face biometric key is ≥4096 bits.
2. Combining a feature extractor with a stabilizer is more accurate than end-to-end face biometric key generation that directly fuzzy-extracts the deep network's output feature vector: end-to-end accuracy is <85%, while the combined method's accuracy is >99.5%.
3. The deep neural network and matrix operations in the face biometric key generation process are all irreversible, so the face biometric key satisfies irreversibility. The face biometric key can be used directly as a key in symmetric cryptosystems (such as AES) and public-key cryptosystems (such as RSA), meeting revocability and unlinkability.
Brief Description of the Drawings
Figure 1 is the flow chart of face biometric key generation according to the invention;
Figure 2 is the structure diagram of the personally trained deep neural network module Fmap_E;
Figure 3 is the flow chart of feature point screening;
Figure 4 shows the results of different binarization methods on feature maps;
Figure 5 is the stabilizer network structure diagram;
Figure 6 shows error correction code encoding;
Figure 7 shows error correction code decoding and fuzzy extraction.
Detailed Description
For a better understanding of the present invention, detailed embodiments and specific operating procedures are given below with reference to the drawings.
As shown in Figure 1, a personalized face biometric key generation method based on deep neural network coding specifically includes the following steps:
Step 1. Preprocess the pre-training data set. The pre-training face data set is a selected public face data set with a large amount of data. Faces are located with the MTCNN method and finally normalized to face images of size N1×M1, generally N1 = M1 = 160. The MTCNN method is a common face image preprocessing method in this field.
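A minimal sketch of this preprocessing step follows, assuming the facenet-pytorch implementation of MTCNN (the embodiment names only "the MTCNN method", so the library choice and the file path are assumptions):

    # Face location and normalization to 160x160, as in step 1 of the embodiment.
    # Assumption: facenet-pytorch's MTCNN; the patent names only "MTCNN".
    from PIL import Image
    from facenet_pytorch import MTCNN

    mtcnn = MTCNN(image_size=160)      # N1 = M1 = 160 as in this embodiment

    def preprocess(path):
        """Return a 3x160x160 aligned face crop, or None if no face is found."""
        img = Image.open(path).convert('RGB')
        return mtcnn(img)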
Step 2. Pre-train the deep neural network E1; in this embodiment E1 is the residual network Resnet, a convolutional neural network. The preprocessed pre-training face data set is used to pre-train Resnet with the loss function Loss below; after pre-training, Resnet can accurately identify the faces in the public data set.
Here s and m are constants, chosen as s=30 and m=0.35 in this embodiment; θ_yi is the angle between the feature vector x_i and the learned weight vector W_j, x_i is the output feature vector of E1, and i is the classification category.
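The loss formula itself is not reproduced in this text (it was carried by an image). Given the scale s, the margin m and the angle θ_yi between x_i and the class weight vector, an additive angular-margin softmax of the ArcFace type is a plausible reading; the sketch below implements that assumption, not a verbatim copy of the patent's loss:

    # Hedged sketch: ArcFace-style additive angular-margin softmax, assumed
    # from the constants s=30, m=0.35 and the angle theta_yi described above.
    import torch
    import torch.nn.functional as F

    def margin_softmax_loss(x, W, labels, s=30.0, m=0.35):
        # x: (B, d) features from E1; W: (d, C) class weight vectors; labels: (B,)
        cos = F.normalize(x, dim=1) @ F.normalize(W, dim=0)   # cos(theta_j)
        theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
        target = F.one_hot(labels, cos.size(1)).bool()
        logits = s * torch.where(target, torch.cos(theta + m), cos)
        return F.cross_entropy(logits, labels)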
Step 3. Build the personalized feature extractor Feature_Extraction to extract personalized features and convert the face image into a feature map with stability ≥80%:
Stability is the ratio, over all samples of one image class, of pixel positions whose value is identical in every sample to the total number of pixels; images with stability ≥80% give better results after learning on the stabilizer:
stability is the stability of the image set img of one class, size_pixel() is the number of pixels an image contains, and num_equal() is the number of pixel positions at which all samples in the set share the same value, so stability = num_equal(img) / size_pixel(img).
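A small sketch of this stability measure (the array layout is an assumption):

    # stability = num_equal(img) / size_pixel(img) for one image class.
    import numpy as np

    def stability(img_set):
        """img_set: uint8 array (n_samples, H, W), all samples of one class."""
        equal_everywhere = np.all(img_set == img_set[0], axis=0)  # (H, W) bool
        return equal_everywhere.sum() / equal_everywhere.size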
Step 3.1. Construct the personally trained deep neural network module Fmap_E by transfer learning on the pre-trained deep neural network E1.
The data set Faces that actually requires face biometric key generation (its data may be face images captured by a mobile phone, camera, etc.) is preprocessed in the same way as the pre-training data set, and transfer learning is then performed on the pre-trained deep neural network E1 (Resnet in this embodiment). The transfer learning method used is standard in this field and makes the deep neural network accurately identify the faces in Faces; learning yields the deep neural network E2.
In this embodiment E2 is a Resnet network with the Resnet50 structure. As shown in Figure 2, the transfer-learned deep neural network E2 is truncated to form Fmap_E (the last two layers of Resnet50, the average pooling layer Avgpool and the fully connected layer Liner, are removed, and the remaining part serves as the feature extractor Fmap_E), and the facial feature maps in Faces are extracted through Fmap_E. After transfer learning, E2 identifies the faces in Faces more effectively, so Fmap_E can extract face feature maps with more personalized characteristics.
Faces is passed through the feature extractor Fmap_E to obtain the feature map set Faces_featuremap:
Here Fmap_E is the personally trained deep neural network module, Resnet is the network after transfer learning, Liner() is the last, fully connected layer of Resnet, Avgpool() is the penultimate, average pooling layer of Resnet, Faces is the face data set for transfer learning, and Faces_featuremap is the face feature atlas.
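The truncation of E2 into Fmap_E can be sketched as follows, assuming a torchvision ResNet-50 stands in for the transfer-learned E2 (the patent does not name an implementation):

    # Drop Resnet50's trailing Avgpool and fully connected ("Liner") layers;
    # what remains maps a 3x160x160 face to a spatial feature map.
    import torch
    import torch.nn as nn
    from torchvision.models import resnet50

    E2 = resnet50()                                      # stand-in for E2
    Fmap_E = nn.Sequential(*list(E2.children())[:-2])    # remove avgpool + fc

    feature_map = Fmap_E(torch.randn(1, 3, 160, 160))    # e.g. (1, 2048, 5, 5)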
Step 3.2. Feature point screening: screen the feature map set Faces_featuremap:
As shown in Figure 3, after the feature maps are extracted, the set Faces_featuremap is screened for feature points, using both global screening and local screening.
Global screening uses all feature maps Faces_featuremap and averages each pixel cumulatively to obtain the global feature map all_Fm:
all_Fm = ΣFaces_featuremap / len(Faces_featuremap), where all_Fm is the global feature map and len() is the number of images in the data set.
Local screening cumulatively averages the i-th face class's feature atlas Face_featuremap_i to obtain the local feature map i_Fm:
i_Fm = ΣFace_featuremap_i / len(Face_featuremap_i), where i_Fm is the local feature map of the i-th face class and len() is the number of images in the data set.
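A minimal sketch of the two averages (array shapes are assumptions):

    # Global and per-class cumulative averages of the feature maps.
    import numpy as np

    def global_map(all_maps):            # all_maps: (N, H, W), every class
        return all_maps.mean(axis=0)     # all_Fm

    def local_map(class_maps):           # class_maps: (n_i, H, W), class i
        return class_maps.mean(axis=0)   # i_Fm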
Two thresholds θ1 = n1 and θ2 = n2 are chosen; in this embodiment n1 = 2 and n2 = 50.
In each feature map of the i-th class's feature atlas Face_featuremap_i, the feature points that simultaneously satisfy the screening conditions are taken as the map's preliminary screening points; for a feature point at abscissa a and ordinate b that meets the selection requirements, the corresponding entry of the feature point selection matrix m_i is set to m_i(a,b)=1, otherwise m_i(a,b)=0.
After feature point selection, the screened feature atlas F_m_i and the feature point selection matrix m_i are obtained. m_i is a binary matrix of the same size as Face_featuremap_i. choose() is the selection process over the feature points of Face_featuremap_i: each image is traversed point by point, a feature point is kept where m_i(a,b)=1 and discarded where m_i(a,b)=0.
F_m_i = choose(Face_featuremap_i, m_i), where choose() is the feature point selection operation, F_m_i is the feature point screening result of the i-th face class, Face_featuremap_i is the feature atlas of the i-th face class, and m_i is the feature point selection matrix of the i-th face class.
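The screening inequality governed by θ1 and θ2 is not reproduced in this text (it was carried by an image), so the sketch below encodes one plausible reading, labelled explicitly as an assumption: a point survives when the class's samples agree closely at that position (spread ≤ θ1) and the class average stands out from the global average (gap ≥ θ2):

    # Hedged sketch of m_i and choose(); the exact theta1/theta2 condition is
    # an assumption, since the patent's inequality is not reproduced here.
    import numpy as np

    def select_points(class_maps, all_Fm, theta1=2, theta2=50):
        i_Fm = class_maps.mean(axis=0)
        spread = np.abs(class_maps - i_Fm).max(axis=0)  # within-class agreement
        gap = np.abs(i_Fm - all_Fm)                     # distinct from everyone
        return ((spread <= theta1) & (gap >= theta2)).astype(np.uint8)  # m_i

    def choose(class_maps, m_i):
        # keep the selected feature points, zero the rest, for every sample
        return class_maps * m_i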
Step 3.3. Binary quantization:
Before binary quantization, the feature atlas F_m_i (containing multiple different face samples of the same person) is summed and averaged to obtain the average feature map avg_i, the average feature map of the screened data set F_m_i of the i-th face class, where len(F_m_i) is the number of face images in F_m_i; applying the GLB binarization operation to avg_i gives the preliminary face biometric key B_K_i.
GLB binarization combines global and local binarization. As shown in Figure 4, image points are too sparsely distributed after global binarization alone, which raises the false acceptance rate of subsequent recognition, while local binarization alone leaves them too dense, which lowers recognition accuracy. A binarization method combining the global and the local therefore quantizes the feature points better.
First the global threshold T_all_i is obtained with the global OTSU binarization method; OTSU is a standard method for global image binarization. Then avg_i is divided into blocks of size s×s (s is generally 3 or 5), and each block's mean and standard deviation are computed. If the mean is below threshold θ3 = n3 (a set value) and the standard deviation is below threshold θ4 = n4 (a set value), local binarization is applied to the block: a Gaussian filter processes the block and the filter's output is the block's threshold matrix, i.e. T_s×s = Gauss filter(avg_i s×s), where Gauss filter is a Gaussian filter, a standard method in the field. Otherwise the block is binarized with the global threshold T_all_i, so that T_s×s = T_all_i. Traversing avg_i yields the binarization threshold matrix T_i, and T_i is used to binarize the feature maps in F_m_i.
Binary quantization then yields b_k_i, the to-be-stabilized biometric key of the binarized F_m_i. Using the feature binarization threshold matrix T_i, avg_i > T_i is evaluated: if avg_i(a,b) > T_i(a,b) then B_K_i = 255, otherwise B_K_i = 0, where a and b are the abscissa and ordinate.
B_K_i = GLBinarization(avg_i, T_i)
b_k_i = GLBinarization(F_m_i, T_i),
GLBinarization() is the binary quantization operation, and T_i is the binarization threshold matrix of the i-th face class.
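A sketch of GLB binarization with OpenCV follows; the library choice and the θ3/θ4 values are assumptions, since the patent leaves them as set values:

    # OTSU supplies the global threshold; "flat" s x s blocks (low mean, low
    # standard deviation) take a Gaussian-filtered local threshold instead.
    import cv2
    import numpy as np

    def glb_threshold(avg_i, s=3, theta3=100.0, theta4=20.0):
        a8 = avg_i.astype(np.uint8)
        T_all, _ = cv2.threshold(a8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        T = np.full(avg_i.shape, float(T_all), dtype=np.float32)
        local = cv2.GaussianBlur(avg_i.astype(np.float32), (s, s), 0)
        H, W = avg_i.shape
        for y in range(0, H - s + 1, s):
            for x in range(0, W - s + 1, s):
                blk = avg_i[y:y+s, x:x+s]
                if blk.mean() < theta3 and blk.std() < theta4:
                    T[y:y+s, x:x+s] = local[y:y+s, x:x+s]   # local threshold
        return T                                            # T_i

    def glb_binarize(img, T):                               # GLBinarization()
        return np.where(img > T, 255, 0).astype(np.uint8)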
The personally trained deep neural network module Fmap_E obtained in steps 3.1-3.3, the feature screening matrix m_i and the binary quantization matrix T_i together constitute the personalized feature extractor Feature_Extraction.
Step 4. The stabilizer learns the stabilization from b_k_i to B_K_i, where B_K_i is the preset biometric key target value.
As shown in Figure 5, the stabilizer Stabilizer is a deep neural network with an encoder-decoder structure, in this embodiment a Unet. UNet differs from other encoder-decoder structures in that it reuses the down-sampled data during up-sampling, which helps restore the image. A plain UNet, however, cannot learn an effective stabilization from b_k_i to B_K_i, so a learning matrix W (the black filled box in Figure 5) is added on top of UNet; by learning the relationships between points, W greatly increases the stabilization ability of the network. b_k_i is the stabilizer's input and B_K_i the learning target. The combination of binary cross-entropy BCE (Binary CrossEntropy) and Sigmoid is used as the loss function, Loss = BCE(Sigmoid(), B_K_i); cross-entropy and the Sigmoid function are standard in this field, and the cross-entropy is computed between the stabilizer's Sigmoid-processed output and the target value B_K_i. The learned stabilizer's output is denoted B_K'_i = Stabilizer(b_k_i); in general B_K'_i does not equal B_K_i, and a certain gap remains between them.
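A minimal sketch of the Stabilizer follows; channel counts and depth are assumptions, since the patent fixes only the UNet-style encoder-decoder with skip connections, the top-layer learning matrix W, and the BCE/Sigmoid loss:

    # Tiny UNet-like stabilizer with a learnable matrix W on the flattened
    # output, trained with BCE-with-logits toward the target key B_K_i
    # (scaled from {0,255} to {0,1}).
    import torch
    import torch.nn as nn

    class TinyStabilizer(nn.Module):
        def __init__(self, size=64):
            super().__init__()
            self.enc  = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
            self.down = nn.Conv2d(16, 32, 3, stride=2, padding=1)
            self.up   = nn.ConvTranspose2d(32, 16, 2, stride=2)
            self.out  = nn.Conv2d(32, 1, 3, padding=1)
            self.W    = nn.Parameter(torch.eye(size * size))  # learning matrix W

        def forward(self, x):                    # x: (B, 1, size, size) = b_k_i
            e = self.enc(x)
            d = torch.relu(self.up(torch.relu(self.down(e))))
            y = self.out(torch.cat([e, d], dim=1))       # UNet skip connection
            B, _, H, Wd = y.shape
            y = (y.flatten(1) @ self.W).view(B, 1, H, Wd)  # point-to-point W
            return y                             # logits; Sigmoid is in the loss

    loss_fn = nn.BCEWithLogitsLoss()             # Loss = BCE(Sigmoid(out), B_K_i)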
Step 5. Error correction coding and fuzzy extraction stabilize B_K'_i to B_K_i.
Reed-Solomon error correction is chosen; as shown in Figure 6, block-wise error correction is used, and the Reed-Solomon encoding process is as follows:
B_K_i is divided into 8 blocks of 512 bits. Each block is first XORed with a 512-bit random binary sequence Random binary sequence_n and then Reed-Solomon encoded with 64 bits of error correction per block, yielding 8 blocks. Adding the random binary code increases the randomness of the face biometric key.
As shown in Figure 7, the Reed-Solomon decoding process is as follows:
Matching is likewise performed block by block. B_K'_i, produced when b_k_i passes through the stabilizer Stabilizer, is also divided into 8 blocks; after XOR with the 8 blocks generated during encoding, Reed-Solomon decoding is performed, and decoding yields Random binary sequence'_n.
Random binary sequence'_n is a random binary code, and Reed-solomon-recode() is the Reed-Solomon error correction encoding operation.
Fuzzy extraction Fuzzy_extraction() is performed after matching Random binary sequence'_n against the original Random binary sequence_n; if the fuzzy extraction condition is met, the target biometric key B_K_i is extracted. Fuzzy extraction is a standard method in this field that obtains the key while allowing partial errors. The number of fully matching blocks λ is the fuzzy extraction requirement: if λ ≥ n5, the requirement is met and B_K_i can be extracted; in this embodiment n5 ≥ 6 is set:
B_K_i = Fuzzy_extraction(Random binary sequence'_n, Random binary sequence_n);
Random binary sequence_n is the original random binary sequence, Random binary sequence'_n is the binary sequence after Reed-Solomon decoding, and Fuzzy_extraction() is the fuzzy extraction operation.
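The block scheme can be sketched with the reedsolo package (a library assumption). The translation is ambiguous about whether the random sequence or the XORed block is RS-encoded, so the sketch follows the classic fuzzy-commitment layout, with 16 parity bytes per 64-byte block as one reading of "64 bits of error correction per block":

    # Hedged sketch: fuzzy commitment per 512-bit (64-byte) block of B_K_i.
    import os
    from reedsolo import RSCodec, ReedSolomonError

    rsc = RSCodec(16)                        # 16 parity bytes per block

    def enroll_block(bk_block):              # bk_block: 64 bytes of B_K_i
        r = os.urandom(64)                   # Random binary sequence_n
        cw = bytes(rsc.encode(r))            # 80-byte RS codeword of r
        helper = bytes(a ^ b for a, b in zip(cw, bk_block + b'\x00' * 16))
        return helper, r                     # store helper; r checks matching

    def recover_block(helper, bkp_block):    # bkp_block: 64 bytes of B_K'_i
        noisy = bytes(a ^ b for a, b in zip(helper, bkp_block + b'\x00' * 16))
        try:
            return bytes(rsc.decode(noisy)[0])   # r' == r if errors are small
        except ReedSolomonError:
            return None                      # this block failed to match

    def fuzzy_extract(recovered, originals, n5=6):
        lam = sum(rp == r for rp, r in zip(recovered, originals))
        return lam >= n5                     # release B_K_i when lambda >= n5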
The face biometric key extraction process is: a new face F_i belonging to the i-th class of the data set Faces passes through the trained personalized feature extractor Feature_Extraction to obtain its personalized feature map; the personalized feature map produced by Feature_Extraction is then stabilized with the trained stabilizer Stabilizer; finally the face biometric key is obtained through Reed-Solomon decoding and fuzzy extraction. Within Feature_Extraction, Fmap_E extracts the face feature map, the feature point selection matrix m_i performs feature point selection, and the binarization threshold matrix T_i performs binary quantization. Formally:
B_K_i = Fuzzy_extraction(Reed-solomon(Stabilizer(Feature_Extraction(F_i))))
Feature_Extraction(F_i) = GLBinarization(choose(Fmap_E(F_i), m_i), T_i)
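Under the assumption that the earlier sketches stand in for the patent's components, the formalized pipeline reads roughly as:

    # Hypothetical composition of the sketches above; split_blocks is an
    # assumed helper that cuts the flattened key into 8 blocks of 64 bytes.
    def extract_key(F_i, fmap_E, m_i, T_i, stabilizer, helpers, originals):
        fm  = choose(fmap_E(F_i), m_i)             # Feature_Extraction, part 1
        bk  = glb_binarize(fm, T_i)                # GLBinarization -> b_k_i
        bkp = stabilizer(bk)                       # Stabilizer -> B_K'_i
        recovered = [recover_block(h, blk)
                     for h, blk in zip(helpers, split_blocks(bkp))]
        return fuzzy_extract(recovered, originals) # releases B_K_i if it holds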
With the above method, transfer learning is used: pre-training on a large face data atlas lets the convolutional network learn facial features, and the pre-trained network is then transfer-learned on the actually required face database, so that the convolutional network learns the target faces' characteristics more precisely. Feature screening and binarization select each person's unique features; after screening and quantization the face biometric key is extracted, stabilized by the encoder-decoder-structured stabilizer, and finally error correction coding and fuzzy extraction are combined to extract a biometric key that meets the target.
Finally, the method is compared against end-to-end direct biometric key generation. There are currently three end-to-end generation methods: an end-to-end biometric key generation method based on SeNet, one based on VGGNet, and one based on binarization fault tolerance; all three are known in the art. The SeNet-based method stacks several SeNet network modules and fully connected layers to learn the mapping between the network's output feature vector and a random binary code; SeNet is a standard network in this field. The VGGnet-based method appends several fully connected layers to VGGnet to learn the mapping from the output feature vector to a random binary code; VGGnet is a standard network in this field. Both learn a mapping between feature vectors and random binary codes; in practice, because of the randomness of the key itself and the near-randomness of biometric features, both perform poorly. The end-to-end method based on binarization fault tolerance directly appends a Sigmoid layer and a binarization layer with threshold p (generally 0.5) after the feature extraction network. The three end-to-end methods were tested on the same data as the present invention, namely Faces. The experimental results compare as follows:
The personalized feature extraction proposes many feature points and generates a face biometric key with bit strength ≥4096 bits, outperforming end-to-end face biometric key generation that directly fuzzy-extracts the convolutional network's output feature vector; the accuracy of the end-to-end methods is <85%. The combination of feature extractor and stabilizer achieved an accuracy >99.5% and a false acceptance rate <0.1% in the experiments. The generated biometric key is a binary sequence and therefore contains no biometric information of the original face, effectively preventing privacy leakage. It can be applied directly in symmetric key systems (such as AES) and public-key cryptosystems (such as RSA); combination with these key systems guarantees the key's revocability and unlinkability. The deep neural network and matrix operations in the final key extraction process are irreversible, so the face biometric key satisfies irreversibility. The proposed face biometric key method effectively improves the security and flexibility of facial features.
The foregoing shows and describes the basic principles, main features and advantages of the present invention. Those skilled in the art should understand that the invention is not limited by the above embodiments; the embodiments and the description merely illustrate the principles of the invention, and various changes and improvements may be made without departing from its spirit and scope, all of which fall within the claimed scope of the invention. The scope of protection of the invention is defined by the appended claims and their equivalents.

Claims (6)

  1. A personalized face biometric key generation method based on deep neural network coding, characterized by specifically comprising the following steps:
    Step 1. Image preprocessing:
    a public face data set serves as the pre-training data set, and the images in it are preprocessed: the face region is segmented out of each image, and the segmented face region images are scaled to the same size, giving the face feature images;
    Step 2. Pre-training:
    the images preprocessed in step 1 are used to pre-train a deep neural network; the pre-trained network can accurately identify the faces in the pre-training data set;
    Step 3. Personalized feature extraction: to improve how well the images learn on the stabilizer, the face image is converted into a face biometric key with stability ≥80%:
    Step 3.1. Constructing the personally trained deep neural network module Fmap_E:
    the personal face atlas to be trained is preprocessed as in step 1 and then fed into the deep neural network pre-trained in step 2 for transfer learning; a part of the transfer-learned network is cut out to form the personally trained deep neural network module Fmap_E, used to extract personalized facial feature maps;
    Step 3.2 Feature map screening:
    the features common to all facial feature maps are removed, and the features unique to a single face class are selected from its feature maps; the stability of the feature map after feature point screening is ≥60%;
    Step 3.3 Binary quantization:
    the average feature map of a single face class is obtained from that class's feature atlas; binarizing the average feature map gives a binarization matrix, according to which the class's feature atlas is processed; the binarized average feature map of the class is regarded as the face biometric key, and after quantization its stability is ≥80%;
    Step 4. Stabilizer learning:
    the stabilizer is a feature-stabilizing convolutional network with an encoder-decoder structure; the face biometric key with stability ≥80% output by step 3 is fed into the stabilizer for stabilization learning, and a face biometric key with stability ≥99% is output;
    Step 5. Fuzzy extraction:
    combining error correction coding and fuzzy extraction, the error correction coding module corrects the biometric key output by step 4, after which fuzzy extraction yields a face biometric key with stability >99.5%.
  2. The personalized face biometric key generation method based on deep neural network coding according to claim 1, characterized in that the deep neural network in step 2 uses the residual network Resnet from among convolutional neural networks.
  3. The personalized face biometric key generation method based on deep neural network coding according to claim 1, characterized in that step 3.2 comprises global screening and local screening:
    global screening uses all feature maps Faces_featuremap and averages each pixel cumulatively to obtain the global feature map all_Fm = ΣFaces_featuremap / len(Faces_featuremap), where all_Fm is the global feature map and len() is the number of images in the data set;
    local screening cumulatively averages the i-th face class's feature atlas Face_featuremap_i to obtain the local feature map i_Fm = ΣFace_featuremap_i / len(Face_featuremap_i), where i_Fm is the local feature map of the i-th face class and len() is the number of images in the data set;
    in each feature map of the i-th class's feature atlas Face_featuremap_i, the feature points that simultaneously satisfy the screening conditions are taken as the map's preliminary screening points; for a feature point at abscissa a and ordinate b that meets the selection requirements, the corresponding entry of the feature point selection matrix m_i is set to m_i(a,b)=1, otherwise m_i(a,b)=0; θ1 and θ2 are the set thresholds;
    after feature point selection, the screened feature atlas F_m_i and the feature point selection matrix m_i are obtained; m_i is a binary matrix of the same size as Face_featuremap_i; choose() is the selection process over the feature points of Face_featuremap_i: each image is traversed point by point, a feature point is kept where m_i(a,b)=1 and discarded where m_i(a,b)=0;
    F_m_i = choose(Face_featuremap_i, m_i), where choose() is the feature point selection operation, F_m_i is the feature point screening result of the i-th face class, Face_featuremap_i is the feature atlas of the i-th face class, and m_i is the feature point selection matrix of the i-th face class.
  4. The personalized face biometric key generation method based on deep neural network coding according to claim 1, characterized in that step 3.3 is specifically:
    before binary quantization, the feature atlas is summed and averaged to obtain the average feature map avg_i, the average feature map of the screened data set F_m_i of the i-th face class, where len(F_m_i) is the number of face images in F_m_i; a combined global and local binarization operation on avg_i gives the preliminary face biometric key B_K_i;
    first the global threshold T_all_i is obtained with the global OTSU binarization method; then avg_i is divided into blocks of size s×s, and each block's mean and standard deviation are computed; if the mean is below threshold θ3 and the standard deviation is below threshold θ4, local binarization is applied to the block: a Gaussian filter processes the block and the filter's output is the block's threshold matrix, i.e. T_s×s = Gauss filter(avg_i s×s), where Gauss filter is a Gaussian filter; otherwise the block is binarized with the global threshold T_all_i, so that T_s×s = T_all_i; traversing avg_i yields the binarization threshold matrix T_i, and T_i is used to binarize the feature maps in F_m_i;
    binary quantization then yields b_k_i, the to-be-stabilized biometric key of the binarized F_m_i; using the feature binarization threshold matrix T_i, avg_i > T_i is evaluated: if avg_i(a,b) > T_i(a,b) then B_K_i = 255, otherwise B_K_i = 0, where a and b are the abscissa and ordinate;
    B_K_i = GLBinarization(avg_i, T_i);
    b_k_i = GLBinarization(F_m_i, T_i);
    GLBinarization() is the binary quantization operation, and T_i is the binarization threshold matrix of the i-th face class.
  5. The personalized face biometric key generation method based on deep neural network coding according to claim 1, characterized in that the stabilizer in step 4 is a deep neural network Unet with an encoder-decoder structure; a learning matrix W is added on top of UNet, and by learning the relationships between points W improves the stabilizer's effect;
    the combination of cross-entropy and Sigmoid is used as the loss function, Loss = BCE(Sigmoid(), B_K_i); the to-be-stabilized biometric key b_k_i is the stabilizer's input and the preliminary face biometric key B_K_i is the learning target.
  6. The personalized face biometric key generation method based on deep neural network coding according to claim 1, characterized in that step 5 is specifically:
    Reed-Solomon error correction is used; the Reed-Solomon encoding process is as follows:
    B_K_i is divided into 8 blocks of 512 bits; each block is first XORed with a 512-bit random binary sequence Random binary sequence_n and then Reed-Solomon encoded with 64 bits of error correction per block, yielding 8 blocks;
    the Reed-Solomon decoding process is as follows:
    matching is likewise performed block by block; B_K'_i, produced when b_k_i passes through the stabilizer Stabilizer, is also divided into 8 blocks; after XOR with the 8 blocks generated during encoding, Reed-Solomon decoding is performed, and decoding yields Random binary sequence'_n;
    Random binary sequence'_n is a random binary code, and Reed-solomon-recode() is the Reed-Solomon error correction encoding operation;
    fuzzy extraction Fuzzy_extraction() is performed after matching Random binary sequence'_n against the original Random binary sequence_n; if the fuzzy extraction condition is met, the target biometric key B_K_i is extracted; the number of fully matching blocks λ is the fuzzy extraction requirement, and if λ ≥ n5 the requirement is met and B_K_i can be extracted;
    B_K_i = Fuzzy_extraction(Random binary sequence'_n, Random binary sequence_n);
    Random binary sequence_n is the original random binary sequence, Random binary sequence'_n is the binary sequence after Reed-Solomon decoding, and Fuzzy_extraction() is the fuzzy extraction operation.
PCT/CN2023/105657 2023-05-26 2023-07-04 Personalized face biometric key generation method based on deep neural network coding WO2024032277A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202310606687.2 2023-05-26
CN202310606687.2A CN116628660B (zh) 2023-05-26 Personalized face biometric key generation method based on deep neural network coding

Publications (1)

Publication Number Publication Date
WO2024032277A1 true WO2024032277A1 (zh) 2024-02-15

Family

ID=87616688

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/105657 WO2024032277A1 (zh) 2023-05-26 2023-07-04 Personalized face biometric key generation method based on deep neural network coding

Country Status (2)

Country Link
CN (1) CN116628660B (zh)
WO (1) WO2024032277A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100749380B1 (ko) * 2006-03-10 2007-08-16 연세대학교 산학협력단 Method for generating a unique biometric code in an encrypted biometric recognition system
CN101452526A (zh) * 2008-10-31 2009-06-10 电子科技大学 Two-dimensional barcode identity authentication method based on fingerprint and face
CN102111418A (zh) * 2011-03-02 2011-06-29 北京工业大学 Online identity authentication method based on face feature key generation
CN110543822A (zh) * 2019-07-29 2019-12-06 浙江理工大学 Finger vein recognition method based on a convolutional neural network and supervised discrete hashing
CN112906527A (zh) * 2021-02-05 2021-06-04 杭州电子科技大学 Finger vein biometric key generation method based on deep neural network coding
US20230095182A1 (en) * 2021-03-08 2023-03-30 Tencent Technology (Shenzhen) Company Limited Method and apparatus for extracting biological features, device, medium, and program product

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101976321B (zh) * 2010-09-21 2013-04-10 北京工业大学 Encryption method based on face feature key generation
US10606353B2 (en) * 2012-09-14 2020-03-31 Interaxon Inc. Systems and methods for collecting, analyzing, and sharing bio-signal and non-bio-signal data
CN111222434A (zh) * 2019-12-30 2020-06-02 深圳市爱协生科技有限公司 Synthetic face image forensics method based on local binary patterns and deep learning
CN113128364B (zh) * 2021-03-31 2024-02-02 杭州电子科技大学 Fingerprint biometric key generation method based on deep neural network coding
CN115168633A (zh) * 2022-07-20 2022-10-11 杭州电子科技大学 Face recognition privacy protection method enabling strong scrambling


Also Published As

Publication number Publication date
CN116628660B (zh) 2024-01-30
CN116628660A (zh) 2023-08-22

Similar Documents

Publication Publication Date Title
WO2022073452A1 (zh) Hyperspectral remote sensing image classification method based on a self-attention context network
Soleymani et al. Multi-level feature abstraction from convolutional neural networks for multimodal biometric identification
Yu et al. Super-resolving very low-resolution face images with supplementary attributes
Pandya et al. Fingerprint classification using a deep convolutional neural network
CN111444881A (zh) Method and device for detecting forged face videos
Zhang et al. Generative steganography by sampling
CN109325915B (zh) Super-resolution reconstruction method for low-resolution surveillance video
CN111738058B (zh) Reconstruction attack method against biometric template protection based on generative adversarial networks
US20240087343A1 (en) License plate classification method, license plate classification apparatus, and computer-readable storage medium
CN103714326A (zh) Single-sample face recognition method
Jha et al. Automation of cheque transaction using deep learning and optical character recognition
Hebbar et al. Transfer learning approach for splicing and copy-move image tampering detection.
CN113128364B (zh) Fingerprint biometric key generation method based on deep neural network coding
CN112580011B (zh) Portrait encryption and decryption system for biometric privacy protection
CN116383470B (zh) Privacy-preserving image search method
WO2024032277A1 (zh) Personalized face biometric key generation method based on deep neural network coding
CN110852239B (zh) Face recognition system
Choudhary et al. Multimodal biometric-based authentication with secured templates
Khudher LSB steganography strengthen footprint biometric template
Al-Saidi et al. Iris features via fractal functions for authentication protocols
Han et al. Low resolution facial manipulation detection
Zhang et al. Domain embedded multi-model generative adversarial networks for image-based face inpainting
Tan et al. Privacy Protection for Medical Images Based on DenseNet and Coverless Steganography.
CN112906637B (zh) Fingerprint image recognition method, apparatus and electronic device based on deep learning
Martey et al. AI-Based Palm Print Recognition System for High-security Applications

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23851503

Country of ref document: EP

Kind code of ref document: A1