WO2022236877A1 - Deep learning-based fingerprint texture extraction method, system, device and storage medium - Google Patents
Deep learning-based fingerprint texture extraction method, system, device and storage medium
- Publication number
- WO2022236877A1 (PCT/CN2021/095967; CN2021095967W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- fingerprint
- image
- deep learning
- texture extraction
- texture
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/12—Fingerprints or palmprints
- G06V40/1347—Preprocessing; Feature extraction
- G06V40/1359—Extracting features related to ridge properties; Determining the fingerprint type, e.g. whorl or loop
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/14—Image acquisition
- G06V30/148—Segmentation of character regions
- G06V30/153—Segmentation of character regions using recognition of characters or words
Definitions
- the present invention relates to the technical field of fingerprint identification, in particular to a deep learning-based fingerprint texture extraction method, system, device and storage medium.
- fingerprint recognition is widely accepted and applied because of its convenience and reliability.
- the traditional fingerprint identification method first extracts the ridge texture map of the fingerprint, then extracts feature points from the texture map and matches them to measure similarity.
- fingerprint texture extraction is a very important link, which directly affects the accuracy of subsequent feature point extraction and matching.
- although the traditional method can obtain a high recognition rate on high-quality fingerprint images, in practice fingerprints may be blurred, damaged, or stained during collection, making it difficult for traditional methods to extract the complete fingerprint texture and resulting in poor recognition. In particular, when a fingerprint is severely damaged, the traditional method cannot restore the fingerprint texture at all, so the accuracy of fingerprint recognition cannot be improved.
- one of the purposes of the present invention is to provide a fingerprint texture extraction method based on deep learning, which can improve the accuracy of fingerprint recognition.
- the second object of the present invention is to provide a fingerprint texture extraction system based on deep learning.
- the third object of the present invention is to provide a fingerprint texture extraction device.
- the fourth object of the present invention is to provide a storage medium.
- a fingerprint texture extraction method based on deep learning comprising:
- Step S1: Obtain the fingerprint image, and preprocess it so that its size meets the input requirements of the fingerprint texture extraction model;
- Step S2: Input the preprocessed fingerprint image into the fingerprint texture extraction model to output a texture map;
- Step S3: Crop the output texture map to obtain a texture map that matches the size of the original fingerprint image.
- the method for preprocessing the fingerprint image is:
- the preset image size requires both the length and width of the image to be multiples of 8, so that the width and height of the padded image are:
- W = ⌈w/8⌉ × 8, H = ⌈h/8⌉ × 8 (Formula 1)
- W is the width of the padded image, H is the height of the padded image, w is the width of the original image, and h is the height of the original image.
- the pixel value used to pad the surroundings of the image is 128, and the numbers of pixels padded on each side are given by Formula 2 (splitting W - w between left and right and H - h between top and bottom);
- the padded image is then normalized as F(x,y) = f(x,y)/255 - 0.5 (Formula 3), where f(x,y) is the pixel value before normalization and F(x,y) is the pixel value after normalization, with F(x,y) ∈ [-0.5, 0.5].
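The padding and normalization just described can be sketched in NumPy. This is a minimal illustrative sketch, not the patent's code; in particular, the even split of the padding between top/bottom and left/right is an assumption, since Formula 2 is not reproduced in this text.

```python
import numpy as np

def preprocess(gray):
    """Pad a grayscale fingerprint image with value 128 so both dimensions
    become multiples of 8 (Formula 1), then normalize to [-0.5, 0.5] (Formula 3)."""
    h, w = gray.shape
    H = -(-h // 8) * 8              # ceil to the next multiple of 8
    W = -(-w // 8) * 8
    top = (H - h) // 2              # assumed even split (Formula 2 not shown)
    left = (W - w) // 2
    padded = np.pad(gray.astype(np.float32),
                    ((top, H - h - top), (left, W - w - left)),
                    constant_values=128)
    # record pad counts so the output texture map can be cropped back later
    return padded / 255.0 - 0.5, (top, H - h - top, left, W - w - left)

img = np.random.randint(0, 256, (250, 163), dtype=np.uint8)
out, pads = preprocess(img)
# out.shape == (256, 168); all values lie in [-0.5, 0.5]
```

Keeping the returned pad counts makes Step S3 (cropping back to the original size) a pure bookkeeping operation.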
- the fingerprint texture extraction model adopts a segmentation network model with an encoder-decoder structure, and a feature fusion layer is provided between the encoder part and the decoder part to fuse the features of the encoder and the decoder.
- the encoder includes:
- a plurality of downsampling layers, used to downsample the image;
- a plurality of NB1D layers, used to convolve the downsampled image to extract image features, wherein the NB1D layers use 3*1 and 1*3 grouped convolutions.
- a fingerprint texture extraction system based on deep learning including:
- the preprocessing module is used to preprocess the obtained fingerprint image so that the size of the fingerprint image meets the input requirements of the fingerprint texture extraction model;
- the model analysis module is used for inputting the fingerprint image after preprocessing into the fingerprint texture extraction model to output the texture map;
- the post-processing module is used for cropping the output texture map to obtain a texture map matching the size of the original fingerprint image.
- a fingerprint texture extraction device comprising:
- a memory for storing the program
- the processor is used to load the program to execute the fingerprint texture extraction method based on deep learning as described above.
- a storage medium storing a program which, when executed by a processor, implements the deep learning-based fingerprint texture extraction method described above.
- the present invention provides a method based on deep learning to extract fingerprint texture images.
- the proposed semantic segmentation model is used to directly extract fingerprint texture images; this method exploits the powerful learning ability of deep learning, giving the model good texture extraction ability and adaptability to various damage situations, thereby improving the accuracy of fingerprint texture extraction.
- Fig. 1 is a schematic flow chart of the fingerprint texture extraction method based on deep learning of the present invention
- Fig. 2 is the structural representation of fingerprint texture extraction model of the present invention
- Fig. 3 is a schematic structural diagram of the downsampling layer of the present invention.
- Fig. 4 is a schematic structural diagram of the feature fusion layer of the present invention.
- Fig. 5 is the structural representation of NB1D layer of the present invention.
- Fig. 6 is a schematic structural diagram of the NB1D_Dx layer of the present invention.
- Fig. 7 is the effect drawing of fingerprint texture extraction of the present invention.
- Fig. 8 is a schematic block diagram of the modules of the deep learning-based fingerprint texture extraction system of the present invention.
- This embodiment provides a fingerprint texture extraction method based on deep learning.
- the method of this embodiment can extract fingerprint textures in various situations very well, and can greatly improve the accuracy of fingerprint recognition.
- the fingerprint texture extraction method of the present embodiment specifically includes the following steps:
- Step S1: Obtain the fingerprint image, and preprocess it so that its size meets the input requirements of the fingerprint texture extraction model;
- Step S2: Input the preprocessed fingerprint image into the fingerprint texture extraction model to output a texture map;
- Step S3: Crop the output texture map to obtain a texture map that matches the size of the original fingerprint image.
- the fingerprint image can be obtained through the Internet or fingerprint collection equipment. Since the collected fingerprint image may be uneven in size and proportion, it is necessary to preprocess the collected fingerprint image.
- the fingerprint image is preprocessed.
- the preprocessing mainly includes two steps. The first is to convert the collected fingerprint image to grayscale, so that all fingerprint images are unified as grayscale images, and then to pad pixels around the fingerprint image according to the preset image ratio and size; in this embodiment, the padded fingerprint image reaches the preset size and ratio, the padding pixel value is uniformly 128, and the width and height of the padded image are both multiples of 8. With original image width w and original height h, the width W and height H of the padded image are given by Formula 1:
- W = ⌈w/8⌉ × 8, H = ⌈h/8⌉ × 8 (Formula 1)
- the preprocessing of the fingerprint image also includes a second step: performing pixel value normalization processing on the image filled with pixels.
- the normalization divides each pixel value by 255 and then subtracts 0.5, expressed as:
- F(x,y) = f(x,y)/255 - 0.5 (Formula 3)
- f(x,y) is the pixel value before normalization
- F(x,y) is the pixel value after normalization, with F(x,y) ∈ [-0.5, 0.5]
- the size and specification of the fingerprint image can be unified, which is convenient for subsequent input of the image into the fingerprint texture extraction model for texture extraction.
- the fingerprint image is then imported into the fingerprint texture extraction model, which is an improvement of the ERFNet semantic segmentation model adapted to the segmentation and extraction of fingerprint texture.
- the fingerprint texture extraction model of this embodiment adopts the encoder-decoder segmentation network structure, wherein the encoder is the encoder part, which is used to gradually reduce the spatial dimension of the input data; the decoder is the decoder part, which is used to gradually restore the details of the target and the corresponding space dimension; and there is a feature fusion layer (merge layer) between the encoder and the decoder, which is used to fuse the features of the encoder and the decoder to fuse the shallow and deep features.
- the network framework of the fingerprint texture extraction model in this embodiment is shown in Figure 2, where the left side of the figure is the encoder part, and the right side is the decoder part.
- the encoder part includes stacked downsampling layers, NB1D layers, and NB1D-Dx layers.
- the downsampling layer Down uses the Downsampler block from ERFNet; the 2x, 4x, and 8x prefixes indicate that the current downsampled feature map is 1/2, 1/4, or 1/8 of the original input size. This significantly reduces the feature-map size, lowering the computational complexity and memory usage of the network and improving the real-time performance of segmentation.
- the input fingerprint image is first downsampled once, halving its size; this removes a large amount of redundancy in the visual information, saves substantial computation, and further improves the running speed of the network model.
- the structure of the downsampling layer in this embodiment is shown in Figure 3: a MaxPooling with size 2*2 and stride 2 and a 3*3 convolution with stride 2 are combined as the downsampling output. For example, an input image of size 1024*512*3 passes through the 3*3 convolution with stride 2 to obtain 13 channels, while the MaxPooling branch on the right yields 3 channels; after the concat operation, 16 channels are obtained, making the final feature map 512*256*16.
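The channel and size bookkeeping of this Downsampler block (a stride-2 convolution branch concatenated with a stride-2 MaxPooling branch) can be checked with a small helper. The function below is an illustrative shape calculation, not code from the patent.

```python
def downsampler_shape(w, h, c_in, c_out):
    """Shape arithmetic of the ERFNet Downsampler block: a 3*3 stride-2
    convolution produces (c_out - c_in) channels in parallel with a 2*2
    stride-2 MaxPooling that keeps c_in channels; concat yields c_out."""
    conv_channels = c_out - c_in      # e.g. 16 - 3 = 13
    pool_channels = c_in              # e.g. 3
    return w // 2, h // 2, conv_channels + pool_channels

# 1024*512*3 input -> 512*256*16 output, as in the embodiment
print(downsampler_shape(1024, 512, 3, 16))   # (512, 256, 16)
```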
- after one layer of downsampling, the input image is processed through the NB1D layer; the NB1D layer in this embodiment adds BatchNorm and dropout layers on top of Non-bottleneck-1D (Non-bt-1D).
- the structure of the NB1D layer is shown in Figure 5.
- the two 3*3 convolution kernels are decomposed into two groups of 3*1 and 1*3 one-dimensional convolutions to reduce computation; after each group of 3*1 and 1*3 one-dimensional convolutions, a BatchNorm layer performs BN normalization, which pulls the distribution of the output of any neuron in each layer back to a standard normal distribution with mean 0 and variance 1.
- finally, a dropout layer is applied to reduce overfitting.
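The factorized convolution described above can be verified numerically: for a separable kernel, a 3*1 convolution followed by a 1*3 convolution reproduces the full 3*3 convolution while using 6 weights instead of 9 per channel pair. This is a minimal NumPy sketch of that identity (BatchNorm and dropout omitted), not the patent's implementation.

```python
import numpy as np

def conv2d_valid(img, k):
    """Naive 2D 'valid' correlation, sufficient to check the factorization."""
    kh, kw = k.shape
    H, W = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))
col = rng.standard_normal((3, 1))    # 3*1 kernel
row = rng.standard_normal((1, 3))    # 1*3 kernel
full = col @ row                     # the equivalent separable 3*3 kernel

# Applying 3*1 then 1*3 matches the single 3*3 convolution,
# with 3 + 3 = 6 weights instead of 9.
assert np.allclose(conv2d_valid(conv2d_valid(img, col), row),
                   conv2d_valid(img, full))
```

In the real network the 3*3 kernels are not constrained to be separable; the factorization trades a small loss of expressiveness for roughly a one-third reduction in weights and computation.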
- after one downsampling operation, the data passes through 2 NB1D layers, then 1 downsampling layer, 5 NB1D layers, and 1 more downsampling layer, before entering the NB1D-Dx layers; in this embodiment, the NB1D-Dx layers are stacked in sequence as NB1D-D2, NB1D-D4, NB1D-D8, and NB1D-D16.
- the NB1D-Dx layer is the NB1D layer with dilated convolution (also called atrous convolution) applied to its last two convolution layers, where the x in Dx is the dilation value (expansion coefficient).
- the structure of the NB1D-Dx layer in this embodiment is shown in Figure 6; the NB1D-Dx layer preserves the spatial information of the image while enlarging the local receptive field of the network, thereby improving semantic segmentation accuracy.
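The receptive-field growth that motivates the NB1D-D2/D4/D8/D16 stack can be sketched with the standard formula for stacked stride-1 convolutions. The helper below assumes a plain 3-tap kernel per layer, a simplification of the 3*1/1*3 factorized layers, and is illustrative only.

```python
def receptive_field(dilations, kernel=3):
    """Receptive field of a stack of stride-1 dilated convolutions:
    each layer with dilation d widens the span by (kernel - 1) * d."""
    rf = 1
    for d in dilations:
        rf += (kernel - 1) * d
    return rf

# The NB1D-Dx stack of the embodiment: dilations 2, 4, 8, 16
print(receptive_field([2, 4, 8, 16]))   # 61
```

Compared with four plain (dilation-1) layers, whose receptive field would be 9, the dilated stack sees a 61-pixel-wide context at the same cost, which is why dilation enlarges the receptive field without losing spatial resolution.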
- the decoder in this embodiment includes stacked NB1D-Dx layers, upsampling layers, and NB1D layers, where each layer in the decoder has the same structure as the corresponding layer in the encoder, except that the stacking order of the decoder layers is the reverse of the encoder; in this embodiment, after the image data from the encoder enters the decoder, it passes in sequence through the NB1D-D2, NB1D-D4, NB1D-D8, and NB1D-D16 layers, then through 1 upsampling layer, 2 NB1D layers, 1 upsampling layer, 2 NB1D layers, and 1 upsampling layer, and the model finally outputs a fingerprint texture image of the same size W*H as the input.
- the Merge feature fusion module is used to fuse the corresponding feature maps between the encoder and the decoder to fuse shallow and deep features to better fit the fingerprint texture.
- the merge layer is shown in FIG. 4 , after merging the features of the encoder and the decoder, a 1*1 convolution is performed.
- the output texture map size is W×H; the pixels padded on the top, bottom, left, and right of the image during preprocessing are cropped away to obtain a texture map of the same size as the original image.
- the top, bottom, left, and right crop values are calculated by Formula 2.
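The cropping step is the inverse of the padding: remove the top, bottom, left, and right pad counts recorded during preprocessing. A minimal sketch follows; the pad values (3, 3, 2, 3) are hypothetical example values, since Formula 2 is not reproduced in this text.

```python
import numpy as np

def crop_to_original(texture, top, bottom, left, right):
    """Remove the padding added during preprocessing so the texture map
    matches the original fingerprint image size (the inverse of Formula 2)."""
    H, W = texture.shape
    return texture[top:H - bottom, left:W - right]

tex = np.zeros((256, 168))           # model output of padded size W*H
out = crop_to_original(tex, 3, 3, 2, 3)
# out.shape == (250, 163), the assumed original image size
```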
- the quality of fingerprint texture extraction directly affects the extraction and matching of fingerprint feature points, and is the most important link in fingerprint identification.
- in the present invention, the deep learning model (the fingerprint texture extraction model) is used to directly extract the fingerprint texture image from the collected fingerprint image.
- as shown in Figure 7, the method extracts fingerprint textures well in various situations, solves the problem of texture extraction when fingerprints are blurred, damaged, or severely stained, and can directly improve the accuracy of fingerprint recognition.
- This method makes use of the powerful learning ability of deep learning, which makes the model have good texture extraction ability and adaptability to various damage situations.
- This embodiment provides a fingerprint texture extraction system based on deep learning, and executes the fingerprint texture extraction method based on deep learning described in Embodiment 1.
- the fingerprint texture extraction system of this embodiment includes at least the following modules:
- the preprocessing module is used to preprocess the obtained fingerprint image so that the size of the fingerprint image meets the input requirements of the fingerprint texture extraction model;
- the model analysis module is used for inputting the fingerprint image after preprocessing into the fingerprint texture extraction model to output the texture map;
- the post-processing module is used for cropping the output texture map to obtain a texture map matching the size of the original fingerprint image.
- This embodiment provides a fingerprint texture extraction device, including:
- a memory for storing the program
- the processor is configured to load the program to execute the fingerprint texture extraction method based on deep learning described in Embodiment 1.
- this embodiment also provides a storage medium storing a program which, when executed by a processor, implements the deep learning-based fingerprint texture extraction method described above.
- the device and storage medium in this embodiment are two aspects of the same inventive concept as the method in the previous embodiment.
- since the implementation of the method has been described in detail above, those skilled in the art can clearly understand the structure and implementation of the device in this embodiment from the foregoing description; for the sake of brevity, details are not repeated here.
- the above embodiment is only a preferred embodiment of the present invention and cannot be used to limit its scope of protection; any insubstantial changes and substitutions made by those skilled in the art on the basis of the present invention fall within the scope of protection claimed by the present invention.
Abstract
A deep learning-based fingerprint texture extraction method, specifically comprising: Step S1: obtaining a fingerprint image and preprocessing it so that its size meets the input requirements of a fingerprint texture extraction model; Step S2: inputting the preprocessed fingerprint image into the fingerprint texture extraction model to output a texture map; Step S3: cropping the output texture map to obtain a texture map matching the size of the original fingerprint image. Also disclosed are a deep learning-based fingerprint texture extraction system, device, and storage medium. The method can extract fingerprint textures well in various situations and can greatly improve the accuracy of fingerprint recognition.
Description
The present invention relates to the technical field of fingerprint identification, and in particular to a deep learning-based fingerprint texture extraction method, system, device and storage medium.
With the continuous development of the information society, people urgently need more reliable identification technologies for identity authentication, and using biometric features for identity authentication has become a current trend. Among these, fingerprint recognition is widely accepted and applied because of its convenience and reliability. The traditional fingerprint identification method first extracts the ridge texture map of the fingerprint, then extracts feature points from the texture map and matches them to measure similarity. Fingerprint texture extraction is a very important link, which directly affects the accuracy of subsequent feature point extraction and matching.
However, although traditional methods can obtain a high recognition rate on high-quality fingerprint images, in practical applications fingerprints may be blurred, damaged or stained during collection, making it difficult for traditional methods to extract the complete fingerprint texture and resulting in poor fingerprint recognition. In particular, when a fingerprint is severely damaged, traditional methods cannot restore the fingerprint texture at all, so the accuracy of fingerprint recognition cannot be improved.
Summary of the Invention
To overcome the deficiencies of the prior art, one object of the present invention is to provide a deep learning-based fingerprint texture extraction method that can improve the accuracy of fingerprint recognition.
A second object of the present invention is to provide a deep learning-based fingerprint texture extraction system.
A third object of the present invention is to provide a fingerprint texture extraction device.
A fourth object of the present invention is to provide a storage medium.
The first object of the present invention is achieved by the following technical solution:
A deep learning-based fingerprint texture extraction method, comprising:
Step S1: obtaining a fingerprint image, and preprocessing the fingerprint image so that its size meets the input requirements of a fingerprint texture extraction model;
Step S2: inputting the preprocessed fingerprint image into the fingerprint texture extraction model to output a texture map;
Step S3: cropping the output texture map to obtain a texture map matching the size of the original fingerprint image.
Further, the method for preprocessing the fingerprint image is:
converting the collected fingerprint image to grayscale, and then padding pixels around the fingerprint image according to a preset image ratio and size;
normalizing the pixel values of the padded image.
Further, the preset image size requires both the length and width of the image to be multiples of 8, so that the width and height of the padded image are:
W = ⌈w/8⌉ × 8, H = ⌈h/8⌉ × 8 (Formula 1)
Further, the pixel value used to pad the surroundings of the image is 128, and the numbers of padded pixels are given by Formula 2.
Further, the formula for normalizing the pixel values of the padded image is:
F(x,y) = f(x,y)/255 - 0.5 (Formula 3)
where f(x,y) is the pixel value before normalization, F(x,y) is the pixel value after normalization, and F(x,y) ∈ [-0.5, 0.5].
Further, the fingerprint texture extraction model adopts a segmentation network model with an encoder-decoder structure, and a feature fusion layer is provided between the encoder part and the decoder part to fuse the features of the encoder and the decoder.
Further, the encoder comprises:
a plurality of downsampling layers for downsampling the image;
a plurality of NB1D layers for convolving the downsampled image to extract image features, wherein the NB1D layers use 3*1 and 1*3 grouped convolutions.
The second object of the present invention is achieved by the following technical solution:
A deep learning-based fingerprint texture extraction system, comprising:
a preprocessing module for preprocessing the obtained fingerprint image so that its size meets the input requirements of the fingerprint texture extraction model;
a model analysis module for inputting the preprocessed fingerprint image into the fingerprint texture extraction model to output a texture map;
a post-processing module for cropping the output texture map to obtain a texture map matching the size of the original fingerprint image.
The third object of the present invention is achieved by the following technical solution:
A fingerprint texture extraction device, comprising:
a program;
a memory for storing the program;
a processor for loading the program to execute the deep learning-based fingerprint texture extraction method described above.
The fourth object of the present invention is achieved by the following technical solution:
A storage medium storing a program which, when executed by a processor, implements the deep learning-based fingerprint texture extraction method described above.
Compared with the prior art, the beneficial effects of the present invention are:
The present invention provides a deep-learning-based method for extracting fingerprint texture images: the proposed semantic segmentation model is used to directly extract the fingerprint texture image from the collected fingerprint image. The method exploits the powerful learning ability of deep learning, giving the model good texture extraction ability and adaptability to various damage situations, thereby improving the accuracy of fingerprint texture extraction.
Fig. 1 is a schematic flow chart of the deep learning-based fingerprint texture extraction method of the present invention;
Fig. 2 is a schematic structural diagram of the fingerprint texture extraction model of the present invention;
Fig. 3 is a schematic structural diagram of the downsampling layer of the present invention;
Fig. 4 is a schematic structural diagram of the feature fusion layer of the present invention;
Fig. 5 is a schematic structural diagram of the NB1D layer of the present invention;
Fig. 6 is a schematic structural diagram of the NB1D_Dx layer of the present invention;
Fig. 7 shows the fingerprint texture extraction results of the present invention;
Fig. 8 is a schematic block diagram of the modules of the deep learning-based fingerprint texture extraction system of the present invention.
The present invention is further described below with reference to the accompanying drawings and specific embodiments. It should be noted that, provided there is no conflict, the embodiments or technical features described below may be combined arbitrarily to form new embodiments.
Embodiment 1
This embodiment provides a deep learning-based fingerprint texture extraction method, which can extract fingerprint textures well in various situations and can greatly improve the accuracy of fingerprint recognition.
As shown in Fig. 1, the fingerprint texture extraction method of this embodiment specifically includes the following steps:
Step S1: obtaining a fingerprint image, and preprocessing the fingerprint image so that its size meets the input requirements of the fingerprint texture extraction model;
Step S2: inputting the preprocessed fingerprint image into the fingerprint texture extraction model to output a texture map;
Step S3: cropping the output texture map to obtain a texture map matching the size of the original fingerprint image.
The fingerprint image can be obtained via the Internet or a fingerprint collection device. Since the collected fingerprint images may vary in size and proportion, they need to be preprocessed. In this embodiment, the preprocessing mainly includes two steps. The first is to convert the collected fingerprint image to grayscale, so that all fingerprint images are unified as grayscale images, and then to pad pixels around the fingerprint image according to the preset image ratio and size, so that the padded fingerprint image reaches the preset size and proportion. In this embodiment, the padding pixel value is uniformly 128, and the width and height of the padded fingerprint image are both multiples of 8. Letting the original width of the image be w and the original height be h, the width W and height H of the padded image are as shown in Formula 1:
W = ⌈w/8⌉ × 8, H = ⌈h/8⌉ × 8 (Formula 1)
Obviously,
(W-w)∈[0,1,...,7], (H-h)∈[0,1,...,7]
Then the numbers of pixels padded on the top, bottom, left and right of the transformed image are given by Formula 2.
The second preprocessing step is to normalize the pixel values of the padded image: each pixel is divided by 255 and then 0.5 is subtracted, expressed as:
F(x,y) = f(x,y)/255 - 0.5 (Formula 3)
where f(x,y) is the pixel value before normalization, F(x,y) is the pixel value after normalization, and F(x,y) ∈ [-0.5, 0.5].
After the collected fingerprint image is padded and normalized, the size and specification of the fingerprint images are unified, which facilitates subsequently feeding the image into the fingerprint texture extraction model for texture extraction.
After preprocessing, the fingerprint image is imported into the fingerprint texture extraction model, which is an improvement of the ERFNet semantic segmentation model adapted to the segmentation and extraction of fingerprint texture.
The fingerprint texture extraction model of this embodiment adopts an encoder-decoder segmentation network structure, in which the encoder part gradually reduces the spatial dimensions of the input data, and the decoder part gradually restores the details of the target and the corresponding spatial dimensions; a feature fusion layer (merge layer) between the encoder and the decoder fuses the features of the two, combining shallow and deep features. The network framework of the fingerprint texture extraction model of this embodiment is shown in Fig. 2, with the encoder part on the left and the decoder part on the right.
The encoder part includes stacked downsampling layers, NB1D layers and NB1D-Dx layers.
Downsampling is essential for semantic segmentation tasks. The downsampling layer Down of this embodiment uses the Downsampler block from ERFNet; the 2x, 4x and 8x prefixes indicate that the current downsampled feature map is 1/2, 1/4 or 1/8 of the original input size, which significantly reduces the feature-map size, lowers the computational complexity and memory usage of the network, and improves the real-time performance of segmentation. Since the initial input image is large and contains considerable redundant information, leading to high memory usage and computational complexity that slow the network down, in this embodiment the input fingerprint image is first downsampled once, halving its size; this removes a large amount of redundancy in the visual information, saves substantial computation, and further improves the running speed of the network model.
The structure of the downsampling layer of this embodiment is shown in Fig. 3: a MaxPooling with size 2*2 and stride 2 and a convolution kernel with filter 3*3 and stride 2 are combined as the downsampling output. For example, an input image of size 1024*512*3 passes through the 3*3 convolution with stride 2 to obtain 13 channels, while the MaxPooling branch on the right yields 3 channels; after the concat operation, 16 channels are obtained, making the final feature map 512*256*16.
In this embodiment, after one layer of downsampling, the input image is processed by NB1D layers. The NB1D layer of this embodiment adds BatchNorm and dropout layers on top of Non-bottleneck-1D (Non-bt-1D); its structure is shown in Fig. 5. The two 3*3 convolution kernels are decomposed into two groups of 3*1 and 1*3 one-dimensional convolutions to reduce computation, and after each group of 3*1 and 1*3 one-dimensional convolutions a BatchNorm layer performs BN normalization, which pulls the distribution of the output of any neuron in each layer back to a standard normal distribution with mean 0 and variance 1. Finally, a dropout layer is applied to reduce overfitting.
In this embodiment, the data after one downsampling operation passes through 2 NB1D layers, then 1 downsampling layer, 5 NB1D layers and 1 more downsampling layer, before entering the NB1D-Dx layers; the NB1D-Dx layers are stacked in sequence as NB1D-D2, NB1D-D4, NB1D-D8 and NB1D-D16. The NB1D-Dx layer is the NB1D layer with dilated convolution (also called atrous convolution) applied to its last two convolution layers, where the x in Dx is the dilation value (expansion coefficient). The structure of the NB1D-Dx layer of this embodiment is shown in Fig. 6; it preserves the spatial information of the image while enlarging the local receptive field of the network, thereby improving semantic segmentation accuracy.
After being processed by the encoder, the fingerprint image enters the decoder part for upsampling and convolution to complete the fingerprint texture extraction. The decoder of this embodiment includes stacked NB1D-Dx layers, upsampling layers and NB1D layers, where each layer in the decoder has the same structure as the corresponding layer in the encoder, except that the stacking order of the decoder layers is the reverse of the encoder. In this embodiment, after the image data from the encoder enters the decoder, it passes in sequence through the NB1D-D2, NB1D-D4, NB1D-D8 and NB1D-D16 layers, then through 1 upsampling layer, 2 NB1D layers, 1 upsampling layer, 2 NB1D layers and 1 upsampling layer, and the model finally outputs a fingerprint texture image of the same size W*H as the input. During decoding, the Merge feature fusion module fuses the corresponding feature maps between the encoder and the decoder to combine shallow and deep features and better fit the fingerprint texture. The merge layer of this embodiment is shown in Fig. 4: after the features of the encoder and decoder are merged, a 1*1 convolution is performed.
After the fingerprint texture image is extracted by the fingerprint texture extraction model, the output texture map has size W×H; cropping away the pixels padded on the top, bottom, left and right during preprocessing yields a texture map of the same size as the original image, where the top, bottom, left and right crop values are calculated by Formula 2.
The quality of fingerprint texture extraction directly affects the extraction and matching of fingerprint feature points and is the most important link in fingerprint identification. In the present invention, the deep learning model (the fingerprint texture extraction model) directly extracts the fingerprint texture image from the collected fingerprint image; as shown in Fig. 7, it extracts fingerprint textures well in various situations, solves the problem of texture extraction when fingerprints are blurred, damaged or severely stained, and can directly improve the accuracy of fingerprint recognition. The method exploits the powerful learning ability of deep learning, giving the model good texture extraction ability and adaptability to various damage situations.
Embodiment 2
This embodiment provides a deep learning-based fingerprint texture extraction system that executes the deep learning-based fingerprint texture extraction method described in Embodiment 1. As shown in Fig. 8, the fingerprint texture extraction system of this embodiment includes at least the following modules:
a preprocessing module for preprocessing the obtained fingerprint image so that its size meets the input requirements of the fingerprint texture extraction model;
a model analysis module for inputting the preprocessed fingerprint image into the fingerprint texture extraction model to output a texture map;
a post-processing module for cropping the output texture map to obtain a texture map matching the size of the original fingerprint image.
Embodiment 3
This embodiment provides a fingerprint texture extraction device, comprising:
a program;
a memory for storing the program;
a processor for loading the program to execute the deep learning-based fingerprint texture extraction method described in Embodiment 1.
In addition, this embodiment also provides a storage medium storing a program which, when executed by a processor, implements the deep learning-based fingerprint texture extraction method described above.
The device and storage medium of this embodiment are two aspects of the same inventive concept as the method of the previous embodiment. Since the implementation of the method has been described in detail above, those skilled in the art can clearly understand the structure and implementation of the device of this embodiment from the foregoing description; for the sake of brevity, details are not repeated here. The above embodiment is only a preferred embodiment of the present invention and cannot be used to limit its scope of protection; any insubstantial changes and substitutions made by those skilled in the art on the basis of the present invention fall within the scope of protection claimed by the present invention.
Claims (10)
- A deep learning-based fingerprint texture extraction method, characterized by comprising: Step S1: obtaining a fingerprint image, and preprocessing the fingerprint image so that its size meets the input requirements of a fingerprint texture extraction model; Step S2: inputting the preprocessed fingerprint image into the fingerprint texture extraction model to output a texture map; Step S3: cropping the output texture map to obtain a texture map matching the size of the original fingerprint image.
- The deep learning-based fingerprint texture extraction method according to claim 1, characterized in that the method for preprocessing the fingerprint image is: converting the collected fingerprint image to grayscale, then padding pixels around the fingerprint image according to a preset image ratio and size; and normalizing the pixel values of the padded image.
- The deep learning-based fingerprint texture extraction method according to claim 2, characterized in that the formula for normalizing the pixel values of the padded image is: F(x,y) = f(x,y)/255 - 0.5 (Formula 3), where f(x,y) is the pixel value before normalization, F(x,y) is the pixel value after normalization, and F(x,y) ∈ [-0.5, 0.5].
- The deep learning-based fingerprint texture extraction method according to claim 1, characterized in that the fingerprint texture extraction model adopts a segmentation network model with an encoder-decoder structure, and a feature fusion layer is provided between the encoder part and the decoder part to fuse the features of the encoder and the decoder.
- The deep learning-based fingerprint texture extraction method according to claim 6, characterized in that the encoder comprises: a plurality of downsampling layers for downsampling the image; and a plurality of NB1D layers for convolving the downsampled image to extract image features, wherein the NB1D layers use 3*1 and 1*3 grouped convolutions.
- A deep learning-based fingerprint texture extraction system, characterized by comprising: a preprocessing module for preprocessing the obtained fingerprint image so that its size meets the input requirements of the fingerprint texture extraction model; a model analysis module for inputting the preprocessed fingerprint image into the fingerprint texture extraction model to output a texture map; and a post-processing module for cropping the output texture map to obtain a texture map matching the size of the original fingerprint image.
- A deep learning-based fingerprint texture extraction device, characterized by comprising: a program; a memory for storing the program; and a processor for loading the program to execute the deep learning-based fingerprint texture extraction method according to any one of claims 1-7.
- A storage medium storing a program, characterized in that, when the program is executed by a processor, the deep learning-based fingerprint texture extraction method according to any one of claims 1-7 is implemented.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110528813.8 | 2021-05-14 | ||
CN202110528813.8A CN113239808B (zh) | 2021-05-14 | Deep learning-based fingerprint texture extraction method, system, device and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022236877A1 (zh) | 2022-11-17 |
Family
ID=77134382
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/095967 WO2022236877A1 (zh) | 2021-05-14 | 2021-05-26 | Deep learning-based fingerprint texture extraction method, system, device and storage medium |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2022236877A1 (zh) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6111978A (en) * | 1996-12-13 | 2000-08-29 | International Business Machines Corporation | System and method for determining ridge counts in fingerprint image processing |
CN106709450A (zh) * | 2016-12-23 | 2017-05-24 | 上海斐讯数据通信技术有限公司 | 一种指纹图像识别方法及系统 |
CN108960214A (zh) * | 2018-08-17 | 2018-12-07 | 中控智慧科技股份有限公司 | 指纹图像增强二值化方法、装置、设备、系统及存储介质 |
CN109145810A (zh) * | 2018-08-17 | 2019-01-04 | 中控智慧科技股份有限公司 | 指纹细节点检测方法、装置、设备、系统及存储介质 |
CN109840458A (zh) * | 2017-11-29 | 2019-06-04 | 杭州海康威视数字技术股份有限公司 | 一种指纹识别方法及指纹采集设备 |
CN112733670A (zh) * | 2020-12-31 | 2021-04-30 | 北京海鑫科金高科技股份有限公司 | 指纹特征提取方法、装置、电子设备及存储介质 |
- 2021-05-26: WO application PCT/CN2021/095967 (publication WO2022236877A1, zh), status: active Application Filing
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6111978A (en) * | 1996-12-13 | 2000-08-29 | International Business Machines Corporation | System and method for determining ridge counts in fingerprint image processing |
CN106709450A (zh) * | 2016-12-23 | 2017-05-24 | 上海斐讯数据通信技术有限公司 | 一种指纹图像识别方法及系统 |
CN109840458A (zh) * | 2017-11-29 | 2019-06-04 | 杭州海康威视数字技术股份有限公司 | 一种指纹识别方法及指纹采集设备 |
CN108960214A (zh) * | 2018-08-17 | 2018-12-07 | 中控智慧科技股份有限公司 | 指纹图像增强二值化方法、装置、设备、系统及存储介质 |
CN109145810A (zh) * | 2018-08-17 | 2019-01-04 | 中控智慧科技股份有限公司 | 指纹细节点检测方法、装置、设备、系统及存储介质 |
CN112733670A (zh) * | 2020-12-31 | 2021-04-30 | 北京海鑫科金高科技股份有限公司 | 指纹特征提取方法、装置、电子设备及存储介质 |
Non-Patent Citations (2)
Title |
---|
MENG LU: "An Identification Method of High-speed Railway Sign Based on Convolutional Neural Network", ACTA AUTOMATICA SINICA, KEXUE CHUBANSHE, BEIJING, CN, vol. 46, no. 3, 1 January 2020 (2020-01-01), CN , pages 518 - 530, XP093003833, ISSN: 0254-4156, DOI: 10.16383/j.aas.c190182 * |
ROMERA EDUARDO, ALVAREZ JOSE M., BERGASA LUIS M., ARROYO ROBERTO: "ERFNet: Efficient Residual Factorized ConvNet for Real-Time Semantic Segmentation", IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, IEEE, PISCATAWAY, NJ, USA, vol. 19, no. 1, 1 January 2018 (2018-01-01), Piscataway, NJ, USA , pages 263 - 272, XP093003835, ISSN: 1524-9050, DOI: 10.1109/TITS.2017.2750080 * |
Also Published As
Publication number | Publication date |
---|---|
CN113239808A (zh) | 2021-08-10 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21941446 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 21941446 Country of ref document: EP Kind code of ref document: A1 |