CN107480649B - Fingerprint sweat pore extraction method based on full convolution neural network - Google Patents


Info

Publication number
CN107480649B
CN107480649B
Authority
CN
China
Prior art keywords
sweat pore
neural network
fingerprint
convolution neural
full convolution
Prior art date
Legal status
Active
Application number
CN201710733540.4A
Other languages
Chinese (zh)
Other versions
CN107480649A (en)
Inventor
王海霞
杨熙丞
陈朋
梁荣华
马灵涛
Current Assignee
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN201710733540.4A
Publication of CN107480649A
Application granted
Publication of CN107480649B

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12 - Fingerprints or palmprints
    • G06V40/1347 - Preprocessing; Feature extraction
    • G06V40/1353 - Extracting features related to minutiae or pores
    • G06V40/1365 - Matching; Classification
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G06N3/08 - Learning methods

Abstract

A fingerprint sweat pore extraction method based on a full convolution neural network comprises the following steps. First, high-definition fingerprint images are acquired, the sweat pore and ridge line positions in each fingerprint image are marked, and data augmentation is performed on the marked fingerprint images to form a labeled data set. Second, a full convolution neural network model is constructed, initial parameters and a loss function are set, and the model is trained with the labeled data set to obtain a trained full convolution neural network model. Third, the trained full convolution neural network model predicts the preliminary region probabilities of sweat pores and ridge lines for a test fingerprint picture. Fourth, false sweat pore areas are removed from the preliminary sweat pore areas by using the characteristics of sweat pores, yielding the real sweat pore areas and their center coordinates. The invention learns and extracts sweat pore features of different shapes and sizes through the full convolution neural network, thereby improving the accuracy of sweat pore extraction.

Description

Fingerprint sweat pore extraction method based on full convolution neural network
Technical Field
The invention relates to the field of fingerprint identification, in particular to a fingerprint sweat pore extraction method based on a full convolution neural network.
Background
Because of the uniqueness and permanence of fingerprints, fingerprint features are the most widely used biometric features for personal identification. Current automatic fingerprint recognition systems (AFRS) generally perform identification with the minutiae features of fingerprints. Although such systems achieve good accuracy, the continually rising public demand for personal security requires automatic fingerprint recognition systems to use more fingerprint features to improve accuracy. The sweat pore features of a fingerprint belong to its third-level features, and have been proven to possess the same uniqueness and permanence as the minutiae features.
the fingerprint sweat pore extraction technology is a key step of using the characteristics of the fingerprint sweat pores, and at present, some traditional methods can extract the fingerprint sweat pores, such as a Gaussian difference filtering method, a dynamic anisotropy extraction method and the like; however, due to the fact that the sizes and the forms of sweat pores are various, only a part of sweat pore characteristics can be extracted by the traditional method, and the defects of low accuracy, low robustness and the like exist.
Disclosure of Invention
In order to solve the problems of the existing sweat pore extraction technology, namely the widely varying forms of sweat pore features and inaccurate detection results, the invention provides a fingerprint sweat pore extraction method based on a full convolution neural network.
To achieve this purpose, the invention adopts the following technical scheme:
a fingerprint sweat pore extraction method based on a full convolution neural network comprises the following steps:
1) acquiring high-definition fingerprint images with a resolution of 1200 dpi, manually marking the positions of the sweat pore areas and ridge line areas in each fingerprint image to obtain a marked picture corresponding to each fingerprint image, and preprocessing and augmenting the marked fingerprint images to form the labeled data set required for training the full convolution neural network model;
2) constructing a full convolution neural network model, setting training parameters and a loss function, and training the full convolution neural network model with the labeled data set to obtain the trained full convolution neural network model;
3) predicting the preliminary region probability of sweat pores and ridge lines of the test fingerprint picture through a trained full convolution neural network model;
4) screening and removing pseudo sweat pore areas from the preliminary sweat pore areas, using the facts that the sizes and shapes of sweat pores follow certain rules and that sweat pores exist only on ridge lines, to obtain the real sweat pore areas and center coordinates.
Further, in the step 1), the fingerprint image augmentation process includes the following steps:
(11) rotating the fingerprint image clockwise by 90 degrees, 180 degrees and 270 degrees respectively to obtain new fingerprint images;
(12) cutting the fingerprint image into four sub-pictures, each 1/4 the size of the original image, and enlarging each sub-picture by 4 times in area to expand it back to the original image size;
(13) normalizing the fingerprint image, wherein the normalization operation is as follows:

$$I^{*}(m,n) = \frac{I(m,n) - \min(I)}{\max(I) - \min(I)}$$

wherein $I$ represents the fingerprint image, $m$ and $n$ represent the row and column indices of the fingerprint image matrix, $\min(I)$ and $\max(I)$ represent the minimum and maximum pixel values in the fingerprint image matrix, and $I^{*}$ represents the normalized fingerprint image.
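For illustration, these augmentation and normalization steps can be sketched in Python (an assumption: the patent specifies no implementation language; NumPy and OpenCV are used here for rotation and resizing, and the helper name is hypothetical):

```python
# A minimal sketch of steps (11)-(13): rotations, quarter-crops enlarged
# back to full size, and min-max normalization to [0, 1].
import numpy as np
import cv2

def augment_and_normalize(img):
    """img: 2-D grayscale fingerprint image as a NumPy array."""
    samples = [img]
    # (11) rotate clockwise by 90, 180 and 270 degrees
    # (np.rot90 rotates counter-clockwise, so k = 3, 2, 1)
    for k in (3, 2, 1):
        samples.append(np.rot90(img, k))
    # (12) cut into four sub-pictures, each 1/4 the original size,
    # then enlarge each by 4x in area (2x per side) back to full size
    h, w = img.shape
    for y in (0, h // 2):
        for x in (0, w // 2):
            sub = img[y:y + h // 2, x:x + w // 2]
            samples.append(cv2.resize(sub, (w, h)))
    # (13) min-max normalization (small epsilon avoids division by zero)
    return [(s - s.min()) / (s.max() - s.min() + 1e-12) for s in samples]
```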
Still further, the step 2) comprises the following steps:
(21) constructing a full convolution neural network model, wherein the whole full convolution neural network comprises five parts; the first part consists of two convolutional layers and one pooling layer, wherein the input picture size is 240 × 320 × 1, each convolutional layer applies 64 convolution kernels of size 3 × 3 followed by a ReLU activation function, giving an output feature of 236 × 316 × 64, and the pooling layer takes the maximum over each 2 × 2 pixel block, giving an output feature of size 118 × 158 × 64;
the second part consists of two convolutional layers and one pooling layer, wherein the input feature size is 118 × 158 × 64, each convolutional layer applies 128 convolution kernels of size 3 × 3 followed by a ReLU activation function, giving an output feature of 114 × 154 × 128, and the pooling layer takes the maximum over each 2 × 2 pixel block, giving an output feature of size 57 × 77 × 128;
the third part consists of two convolutional layers, wherein the input feature size is 57 × 77 × 128 and each convolutional layer applies 256 convolution kernels of size 3 × 3 followed by a ReLU activation function, giving an output feature of 53 × 73 × 256;
the fourth part consists of one upsampling layer and two convolutional layers, wherein the input feature size is 53 × 73 × 256, the upsampling layer performs a deconvolution with 128 convolution kernels of size 2 × 2, giving an output feature of 106 × 146 × 128, and each convolutional layer then applies 128 convolution kernels of size 3 × 3 followed by a ReLU activation function, giving an output feature of size 102 × 142 × 128;
the fifth part consists of one upsampling layer and three convolutional layers, wherein the input feature size is 102 × 142 × 128, the upsampling layer performs a deconvolution with 64 convolution kernels of size 2 × 2, giving an output feature of 204 × 284 × 64, the first two convolutional layers apply 64 convolution kernels of size 3 × 3 followed by a ReLU activation function, and the last convolutional layer applies 3 convolution kernels of size 1 × 1 followed by a ReLU activation function, giving an output feature of size 200 × 280 × 3 covering 3 classes: ridge line, sweat pore and background;
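The five parts above translate directly into a network definition. The following Keras sketch is an illustration only: the framework choice is an assumption (the patent names none), and 'valid' padding is assumed so the feature-map sizes match the text.

```python
# Hedged sketch of the five-part full convolution network described above.
# All 3x3 convolutions use 'valid' padding so the sizes match the text:
# 240x320x1 -> 236x316x64 -> 118x158x64 -> ... -> 200x280x3.
from tensorflow.keras import layers, models

def build_fcn(input_shape=(240, 320, 1), num_classes=3):
    inp = layers.Input(shape=input_shape)
    # Part 1: two 3x3 convolutions (64 kernels) + 2x2 max pooling
    x = layers.Conv2D(64, 3, activation='relu')(inp)
    x = layers.Conv2D(64, 3, activation='relu')(x)       # 236x316x64
    x = layers.MaxPooling2D(2)(x)                         # 118x158x64
    # Part 2: two 3x3 convolutions (128 kernels) + 2x2 max pooling
    x = layers.Conv2D(128, 3, activation='relu')(x)
    x = layers.Conv2D(128, 3, activation='relu')(x)       # 114x154x128
    x = layers.MaxPooling2D(2)(x)                         # 57x77x128
    # Part 3: two 3x3 convolutions (256 kernels)
    x = layers.Conv2D(256, 3, activation='relu')(x)
    x = layers.Conv2D(256, 3, activation='relu')(x)       # 53x73x256
    # Part 4: 2x2 deconvolution (128 kernels) + two 3x3 convolutions
    x = layers.Conv2DTranspose(128, 2, strides=2)(x)      # 106x146x128
    x = layers.Conv2D(128, 3, activation='relu')(x)
    x = layers.Conv2D(128, 3, activation='relu')(x)       # 102x142x128
    # Part 5: 2x2 deconvolution (64 kernels) + two 3x3 convolutions,
    # then a 1x1 convolution producing the 3 output classes
    x = layers.Conv2DTranspose(64, 2, strides=2)(x)       # 204x284x64
    x = layers.Conv2D(64, 3, activation='relu')(x)
    x = layers.Conv2D(64, 3, activation='relu')(x)        # 200x280x64
    out = layers.Conv2D(num_classes, 1, activation='relu')(x)  # 200x280x3
    return models.Model(inp, out)
```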
(22) determining the parameters of the full convolution neural network, loading the pictures in the training set into the full convolution neural network model in batches of 32 pictures, and obtaining the trained network after 100 training iterations.
Further, in the step (22), the parameter updates of each network layer are computed with a mini-batch stochastic gradient descent algorithm (mini-batch SGD) with a momentum term, wherein the value of the momentum term is 0.2.
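Continuing the sketch above, the training configuration might look as follows (the learning rate is an assumption; the patent states only the batch size of 32, 100 iterations, momentum 0.2 and, in the embodiment, Gaussian weight initialization):

```python
# Hedged sketch of step (22). The softmax of the loss section below is
# applied inside the loss, so the raw class outputs are treated as logits.
import tensorflow as tf

model = build_fcn()  # from the architecture sketch above
model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.2),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
# x_train: (N, 240, 320, 1) normalized images; y_train: (N, 200, 280) labels
# model.fit(x_train, y_train, batch_size=32, epochs=100)
```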
In the step (22), a cross entropy loss function based on softmax is used; the softmax function is of the form:

$$p_k(x) = \frac{\exp(a_k(x))}{\sum_{k'=1}^{K} \exp(a_{k'}(x))}$$

where $a_k(x)$ represents the output value of the $k$-th class at point $x$, the denominator sums the output values of all $K$ classes at point $x$, and $p_k(x)$ represents the probability of class $k$ at point $x$;

the cross entropy loss function is of the form:

$$E = -\sum_{x} w(x) \log\bigl(p_{l(x)}(x)\bigr)$$

where $w(x)$ represents the weight parameter of the model, $l(x)$ represents the true class of point $x$, $p_{l(x)}(x)$ represents the probability of the true class at point $x$, and $E$ represents the loss value of the cross entropy function.
Further, the step 3) comprises the following steps:
(31) in order to match the input picture size of the trained full convolution neural network, a sliding window 1/4 the size of the test fingerprint picture is used to extract a series of sub-pictures from the test fingerprint picture; the sub-pictures are enlarged and input into the trained full convolution neural network, which outputs a sweat pore pixel probability map P and a ridge line pixel probability map J, wherein the value of each pixel in P lies in the range 0-1 and represents the probability that the pixel is a sweat pore, the value of each pixel in J lies in the range 0-1 and represents the probability that the pixel is a ridge line, and the ridge lines are extracted in order to remove false sweat pores;
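A rough sketch of this sliding-window prediction follows; the class-channel order, the border handling of the 200 × 280 network output, and the use of OpenCV resizing are all assumptions not fixed by the patent:

```python
# Hedged sketch of step (31): four quarter-size sub-pictures, each enlarged
# to the 240x320 network input; a softmax turns raw outputs into the
# per-pixel probability maps P (sweat pore) and J (ridge line).
import numpy as np
import cv2

def predict_probability_maps(img, model):
    H, W = img.shape
    h, w = H // 2, W // 2                    # window 1/4 the picture size
    P = np.zeros((H, W)); J = np.zeros((H, W))
    for y in (0, h):
        for x in (0, w):
            sub = cv2.resize(img[y:y + h, x:x + w], (320, 240))
            out = model.predict(sub[None, ..., None])[0]    # (200, 280, 3)
            e = np.exp(out - out.max(axis=-1, keepdims=True))
            prob = e / e.sum(axis=-1, keepdims=True)
            # assumed channel order: 0 = ridge line, 1 = sweat pore
            J[y:y + h, x:x + w] = cv2.resize(prob[..., 0], (w, h))
            P[y:y + h, x:x + w] = cv2.resize(prob[..., 1], (w, h))
    return P, J
```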
(32) setting the threshold thr to 0.3 and binarizing the sweat pore pixel probability map and the ridge line pixel probability map as follows:

$$M(x) = \begin{cases} 1, & P(x) \ge thr \\ 0, & P(x) < thr \end{cases} \qquad N(x) = \begin{cases} 1, & J(x) \ge thr \\ 0, & J(x) < thr \end{cases}$$

wherein M is the binarized preliminary sweat pore area map and N is the binarized preliminary ridge line area map.
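In code, this thresholding is one comparison per map; a minimal NumPy sketch:

```python
# Minimal sketch of step (32): binarize both probability maps at thr = 0.3.
import numpy as np

def binarize(P, J, thr=0.3):
    M = (P >= thr).astype(np.uint8)   # preliminary sweat pore area map
    N = (J >= thr).astype(np.uint8)   # preliminary ridge line area map
    return M, N
```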
The process of the step 4) is as follows: the preliminary sweat pore area map still contains some pseudo sweat pore areas. Using the facts that sweat pore size is bounded within certain limits and that sweat pores exist only on ridge lines, sweat pore areas larger than 30 pixels or smaller than 3 pixels are removed from the preliminary sweat pore area map M, as are sweat pore areas in M that do not lie on the preliminary ridge line area map N. This yields the final sweat pore areas and the center coordinate of each sweat pore. Finally, the sweat pore center coordinates obtained from each sub-picture are mapped back to the original fingerprint picture, giving the sweat pore areas and center coordinates of the fingerprint picture under test.
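One way to realize this screening is connected-component analysis; the SciPy sketch below is illustrative (testing the on-ridge condition at the component centroid is an assumption, as the patent does not specify the test):

```python
# Sketch of step 4): keep components of M whose area is within [3, 30]
# pixels and whose centroid lies on the ridge map N; return pore centers.
from scipy import ndimage

def filter_pores(M, N, min_area=3, max_area=30):
    labeled, count = ndimage.label(M)            # label pore candidates
    centers = []
    for i in range(1, count + 1):
        region = labeled == i
        area = int(region.sum())
        if not (min_area <= area <= max_area):   # size rule from the text
            continue
        cy, cx = ndimage.center_of_mass(region)  # component centroid
        if N[int(round(cy)), int(round(cx))]:    # must lie on a ridge line
            centers.append((cy, cx))
    return centers
```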
Compared with the prior art, the advantages of the invention are as follows: the method improves the accuracy of fingerprint sweat pore detection through the full convolution neural network, reduces the false recognition rate of sweat pore detection, has good robustness, and can accurately extract the sweat pore features of a fingerprint under different conditions of illumination, humidity and the like.
Drawings
FIG. 1 is a flow chart of the algorithm of the present invention;
FIG. 2 is a diagram of a full convolution neural network of the present invention;
FIG. 3 shows the fingerprint sweat pore extraction results of the algorithm of the present invention, wherein a is the original fingerprint picture, b is the ridge line probability map predicted by the full convolution neural network, c is the sweat pore probability map predicted by the full convolution neural network, and d is the final sweat pore extraction result.
Detailed Description
The invention will be further described with reference to the following figures and embodiments:
referring to fig. 1 to 3, a fingerprint sweat pore extraction method based on a full convolution neural network includes the following steps:
1) acquiring a high-definition fingerprint image, manually marking sweat pores and ridges, and performing data augmentation operation on the marked fingerprint image to form a marked data set required by full convolution neural network model training; the method specifically comprises the following steps:
(11) acquiring high-definition fingerprint images with the resolution of 1200dpi, and manually marking the positions of sweat pore areas and ridge line areas in each fingerprint image;
(12) rotating the marked fingerprint image clockwise by 90 degrees, 180 degrees and 270 degrees respectively to obtain new fingerprint images;
(13) cutting the fingerprint image into four sub-pictures, each 1/4 the size of the original image, and enlarging each sub-picture by 4 times in area to expand it back to the original image size.
(14) Normalizing the fingerprint image, wherein the normalization operation is as follows:

$$I^{*}(m,n) = \frac{I(m,n) - \min(I)}{\max(I) - \min(I)}$$

wherein $I$ represents the fingerprint image, $m$ and $n$ represent the row and column indices of the fingerprint image matrix, $\min(I)$ and $\max(I)$ represent the minimum and maximum pixel values in the fingerprint image matrix, and $I^{*}$ represents the normalized fingerprint image.
2) constructing a full convolution neural network model, setting initial parameters and a loss function, and training the full convolution neural network model with the labeled data set to obtain the trained full convolution neural network model, which specifically comprises the following steps:
(21) referring to fig. 2, a full convolution neural network model is constructed, wherein the whole full convolution neural network comprises five parts; the first part consists of two convolutional layers and one pooling layer, wherein the input picture size is 240 × 320 × 1, each convolutional layer applies 64 convolution kernels of size 3 × 3 followed by a ReLU activation function, giving an output feature of 236 × 316 × 64, and the pooling layer takes the maximum over each 2 × 2 pixel block, giving an output feature of size 118 × 158 × 64;
the second part consists of two convolutional layers and one pooling layer, wherein the input feature size is 118 × 158 × 64, each convolutional layer applies 128 convolution kernels of size 3 × 3 followed by a ReLU activation function, giving an output feature of 114 × 154 × 128, and the pooling layer takes the maximum over each 2 × 2 pixel block, giving an output feature of size 57 × 77 × 128;
the third part consists of two convolutional layers, wherein the input feature size is 57 × 77 × 128 and each convolutional layer applies 256 convolution kernels of size 3 × 3 followed by a ReLU activation function, giving an output feature of 53 × 73 × 256;
the fourth part consists of one upsampling layer and two convolutional layers, wherein the input feature size is 53 × 73 × 256, the upsampling layer performs a deconvolution with 128 convolution kernels of size 2 × 2, giving an output feature of 106 × 146 × 128, and each convolutional layer then applies 128 convolution kernels of size 3 × 3 followed by a ReLU activation function, giving an output feature of size 102 × 142 × 128;
the fifth part consists of one upsampling layer and three convolutional layers, wherein the input feature size is 102 × 142 × 128, the upsampling layer performs a deconvolution with 64 convolution kernels of size 2 × 2, giving an output feature of 204 × 284 × 64, the first two convolutional layers apply 64 convolution kernels of size 3 × 3 followed by a ReLU activation function, and the last convolutional layer applies 3 convolution kernels of size 1 × 1 followed by a ReLU activation function, giving an output feature of size 200 × 280 × 3;
(22) determining the parameters of the full convolution neural network: the weight parameters are initialized with a Gaussian normal distribution; the pictures in the training set are loaded into the full convolution neural network model in batches of 32 pictures and trained for 100 iterations; the parameter updates of each network layer are computed with a mini-batch stochastic gradient descent algorithm (mini-batch SGD) with a momentum term, wherein the value of the momentum term is 0.2;
(23) the invention uses a softmax-based cross entropy loss function; the softmax function is of the form:

$$p_k(x) = \frac{\exp(a_k(x))}{\sum_{k'=1}^{K} \exp(a_{k'}(x))}$$

where $a_k(x)$ represents the output value of the $k$-th class at point $x$, the denominator sums the output values of all $K$ classes at point $x$, and $p_k(x)$ represents the probability of class $k$ at point $x$;

the cross entropy loss function is of the form:

$$E = -\sum_{x} w(x) \log\bigl(p_{l(x)}(x)\bigr)$$

where $w(x)$ represents the weight parameter of the model, $l(x)$ represents the true class of point $x$, $p_{l(x)}(x)$ represents the probability of the true class at point $x$, and $E$ represents the loss value of the cross entropy function.
3) Predicting the preliminary region probability of sweat pores and ridge lines of the test fingerprint picture through a trained full convolution neural network model; the method specifically comprises the following steps:
(31) a sliding window 1/4 the size of the test fingerprint picture is used to extract a series of sub-pictures from the test fingerprint picture; the sub-pictures are enlarged and input into the trained full convolution neural network, which outputs a sweat pore pixel probability map P and a ridge line pixel probability map J, wherein the value of each pixel in P lies in the range 0-1 and represents the probability that the pixel is a sweat pore, and the value of each pixel in J lies in the range 0-1 and represents the probability that the pixel is a ridge line;
(32) setting the threshold thr to 0.3 and binarizing the sweat pore pixel probability map and the ridge line pixel probability map as follows:

$$M(x) = \begin{cases} 1, & P(x) \ge thr \\ 0, & P(x) < thr \end{cases} \qquad N(x) = \begin{cases} 1, & J(x) \ge thr \\ 0, & J(x) < thr \end{cases}$$

wherein M is the binarized preliminary sweat pore area map and N is the binarized preliminary ridge line area map.
4) Removing the false sweat pore areas from the preliminary sweat pore areas by using the characteristics of sweat pores to obtain the real sweat pore areas and center coordinates. The process is as follows: sweat pore areas larger than 30 pixels or smaller than 3 pixels are removed from the preliminary sweat pore area map M, and sweat pore areas in M that do not lie on the preliminary ridge line area map N are removed, yielding the final sweat pore areas and the center coordinate of each sweat pore; finally, the sweat pore center coordinates obtained from each sub-picture are mapped back to the original fingerprint picture, giving the sweat pore areas and center coordinates of the fingerprint picture to be detected.

Claims (6)

1. A fingerprint sweat pore extraction method based on a full convolution neural network is characterized by comprising the following steps:
1) acquiring high-definition fingerprint images with a resolution of 1200 dpi, manually marking the positions of the sweat pore areas and ridge line areas in each fingerprint image to obtain a marked picture corresponding to each fingerprint image, and preprocessing and augmenting the marked fingerprint images to form the pictures and label data set required for training the full convolution neural network model;
2) constructing a full convolution neural network model, setting initial parameters and a loss function, and training the model of the full convolution neural network by using the labeled data set to obtain a trained full convolution neural network model;
3) predicting the preliminary region probability of sweat pores and ridge lines of the test fingerprint picture through a trained full convolution neural network model; the step (3) comprises the following steps:
(31) carrying out a sliding window extraction operation on the test fingerprint picture using a window 1/4 the size of the picture to obtain a series of sub-pictures, enlarging the sub-pictures and inputting them into the trained full convolution neural network, and outputting a sweat pore pixel probability map P and a ridge line pixel probability map J, wherein the value of each pixel in P lies in the range 0-1 and represents the probability that the pixel is a sweat pore, and the value of each pixel in J lies in the range 0-1 and represents the probability that the pixel is a ridge line;
(32) setting the threshold thr to 0.3 and binarizing the sweat pore pixel probability map and the ridge line pixel probability map as follows:

$$M(x) = \begin{cases} 1, & P(x) \ge thr \\ 0, & P(x) < thr \end{cases} \qquad N(x) = \begin{cases} 1, & J(x) \ge thr \\ 0, & J(x) < thr \end{cases}$$

wherein M is the binarized preliminary sweat pore area map and N is the binarized preliminary ridge line area map;
4) removing the false sweat pore areas from the preliminary sweat pore areas by using the characteristics of sweat pores to obtain the real sweat pore areas and the center coordinates.
2. The method for extracting the sweat pore of the fingerprint based on the full convolution neural network as claimed in claim 1, wherein in the step 1), the fingerprint image augmentation process comprises the following steps:
(11) rotating the fingerprint image clockwise by 90 degrees, 180 degrees and 270 degrees respectively to obtain new fingerprint images;
(12) cutting the fingerprint image into four sub-pictures, each 1/4 the size of the original image, and enlarging each sub-picture by 4 times in area to expand it back to the original image size;
(13) normalizing the fingerprint image, wherein the normalization operation is as follows:

$$I^{*}(m,n) = \frac{I(m,n) - \min(I)}{\max(I) - \min(I)}$$

wherein $I$ represents the fingerprint image, $m$ and $n$ represent the row and column indices of the fingerprint image matrix, $\min(I)$ and $\max(I)$ represent the minimum and maximum pixel values in the fingerprint image matrix, and $I^{*}$ represents the normalized fingerprint image.
3. The method for extracting the fingerprint sweat pore based on the full convolution neural network as claimed in claim 1 or 2, wherein the step 2) comprises the following steps:
(21) constructing a full convolution neural network model, wherein the whole full convolution neural network comprises five parts: the first part and the second part each consist of two convolutional layers and one pooling layer, the third part consists of two convolutional layers, the fourth part consists of one upsampling layer and two convolutional layers, and the fifth part consists of one upsampling layer and three convolutional layers;
(22) determining the parameters of the full convolution neural network, and loading the pictures in the training set into the full convolution neural network model in batches of 32 pictures for training over 100 iterations.
4. The method for extracting the fingerprint sweat pore based on the full convolution neural network as claimed in claim 3, wherein in the step (22), a mini-batch stochastic gradient descent algorithm (mini-batch SGD) with a momentum term is adopted to calculate the parameter updates of each network layer, wherein the value of the momentum term is 0.2.
5. The full convolution neural network-based fingerprint sweat pore extraction method according to claim 3, wherein a softmax-based cross entropy loss function is used in the step (22); the softmax function is of the form:

$$p_k(x) = \frac{\exp(a_k(x))}{\sum_{k'=1}^{K} \exp(a_{k'}(x))}$$

where $a_k(x)$ represents the output value of the $k$-th class at point $x$, the denominator sums the output values of all $K$ classes at point $x$, and $p_k(x)$ represents the probability of class $k$ at point $x$;

the cross entropy loss function is of the form:

$$E = -\sum_{x} w(x) \log\bigl(p_{l(x)}(x)\bigr)$$

where $w(x)$ represents the weight parameter of the model, $l(x)$ represents the true class of point $x$, $p_{l(x)}(x)$ represents the probability of the true class at point $x$, and $E$ represents the loss value of the cross entropy function.
6. The method for extracting the fingerprint sweat pore based on the full convolution neural network as claimed in claim 1 or 2, wherein the process of the step 4) is as follows: removing sweat pore areas larger than 30 pixels or smaller than 3 pixels from the preliminary sweat pore area map M, removing sweat pore areas in M that do not lie on the preliminary ridge line area map N to obtain the final sweat pore areas and the center coordinates of each sweat pore, and finally mapping the sweat pore center coordinates obtained from each sub-picture back to the original fingerprint picture to obtain the sweat pore areas and center coordinates of the fingerprint picture to be detected.
CN201710733540.4A 2017-08-24 2017-08-24 Fingerprint sweat pore extraction method based on full convolution neural network Active CN107480649B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710733540.4A CN107480649B (en) 2017-08-24 2017-08-24 Fingerprint sweat pore extraction method based on full convolution neural network


Publications (2)

Publication Number Publication Date
CN107480649A CN107480649A (en) 2017-12-15
CN107480649B (en) 2020-08-18

Family

ID=60602576

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710733540.4A Active CN107480649B (en) 2017-08-24 2017-08-24 Fingerprint sweat pore extraction method based on full convolution neural network

Country Status (1)

Country Link
CN (1) CN107480649B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109145810A (en) * 2018-08-17 2019-01-04 中控智慧科技股份有限公司 Details in fingerprint point detecting method, device, equipment, system and storage medium
CN110969566A (en) * 2018-09-29 2020-04-07 北京嘉楠捷思信息技术有限公司 Deconvolution processing method and device, and image processing method and device
CN109657567B (en) * 2018-11-30 2022-09-02 深圳大学 Weak supervision characteristic analysis method and system based on 3D fingerprint image
CN109919022A (en) * 2019-01-29 2019-06-21 浙江工业大学 A kind of adaptive inside and outside OCT fingerprint extraction method
CN110334566B (en) * 2019-03-22 2021-08-03 浙江工业大学 OCT (optical coherence tomography) internal and external fingerprint extraction method based on three-dimensional full-convolution neural network
WO2020254857A1 (en) 2019-06-18 2020-12-24 Uab Neurotechnology Fast and robust friction ridge impression minutiae extraction using feed-forward convolutional neural network
CN110472501B (en) * 2019-07-10 2022-08-30 南京邮电大学 Neural network-based fingerprint sweat pore coding classification method
CN111079626B (en) * 2019-12-11 2023-08-01 深圳市迪安杰智能识别科技有限公司 Living body fingerprint identification method, electronic equipment and computer readable storage medium
CN111597895A (en) * 2020-04-15 2020-08-28 浙江工业大学 OCT fingerprint anti-counterfeiting method based on resnet50
CN111666813B (en) * 2020-04-29 2023-06-30 浙江工业大学 Subcutaneous sweat gland extraction method of three-dimensional convolutional neural network based on non-local information
CN111652308B (en) * 2020-05-13 2024-02-23 三峡大学 Flower identification method based on ultra-lightweight full convolutional neural network
CN111879508B (en) * 2020-07-28 2022-06-10 无锡迈斯德智能测控技术有限公司 Method and device for estimating instantaneous rotating speed of rotating machine based on time-frequency transformation and storage medium
CN113011361B (en) * 2021-03-29 2023-11-07 福建师范大学 OCT fingerprint-based internal maximum intensity projection imaging method
CN113657145B (en) * 2021-06-30 2023-07-14 深圳市人工智能与机器人研究院 Fingerprint retrieval method based on sweat pore characteristics and neural network
CN113705519A (en) * 2021-09-03 2021-11-26 杭州乐盯科技有限公司 Fingerprint identification method based on neural network

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101425137A (en) * 2008-11-10 2009-05-06 北方工业大学 Face Image Fusion Method Based on Laplacian Pyramid
CN106203298A * 2016-06-30 2016-12-07 北京集创北方科技股份有限公司 Biological feature recognition method and device
CN106650725B (en) * 2016-11-29 2020-06-26 华南理工大学 Candidate text box generation and text detection method based on full convolution neural network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Comparative analysis of windowed Fourier filtering and coherence-enhancing diffusion in fringe denoising; Wang Haixia et al.; 《实验力学》 (Experimental Mechanics); 2017-05-31 (No. 05); pp. 1-2 *

Also Published As

Publication number Publication date
CN107480649A (en) 2017-12-15

Similar Documents

Publication Publication Date Title
CN107480649B (en) Fingerprint sweat pore extraction method based on full convolution neural network
CN108510467B (en) SAR image target identification method based on depth deformable convolution neural network
CN107358260B (en) Multispectral image classification method based on surface wave CNN
CN111325748B (en) Infrared thermal image nondestructive testing method based on convolutional neural network
CN109684922B (en) Multi-model finished dish identification method based on convolutional neural network
CN107516316B (en) Method for segmenting static human body image by introducing focusing mechanism into FCN
CN110163213B (en) Remote sensing image segmentation method based on disparity map and multi-scale depth network model
CN108960404B (en) Image-based crowd counting method and device
CN108446707B (en) Remote sensing image airplane detection method based on key point screening and DPM confirmation
CN107832797B (en) Multispectral image classification method based on depth fusion residual error network
CN109190458B (en) Method for detecting head of small person based on deep learning
CN109360179B (en) Image fusion method and device and readable storage medium
CN110705565A (en) Lymph node tumor region identification method and device
CN111738114B (en) Vehicle target detection method based on anchor-free accurate sampling remote sensing image
CN106372624A (en) Human face recognition method and human face recognition system
CN107705323A (en) A kind of level set target tracking method based on convolutional neural networks
CN111612747A (en) Method and system for rapidly detecting surface cracks of product
CN104966054A (en) Weak and small object detection method in visible image of unmanned plane
CN113282905A (en) Login test method and device
CN111178405A (en) Similar object identification method fusing multiple neural networks
CN112232249B (en) Remote sensing image change detection method and device based on depth characteristics
CN111666813B (en) Subcutaneous sweat gland extraction method of three-dimensional convolutional neural network based on non-local information
CN107358625B (en) SAR image change detection method based on SPP Net and region-of-interest detection
CN113610024A (en) Multi-strategy deep learning remote sensing image small target detection method
WO2018137226A1 (en) Fingerprint extraction method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant