CN110334566A - Method for extracting internal and external OCT fingerprints based on a three-dimensional fully convolutional neural network - Google Patents

Method for extracting internal and external OCT fingerprints based on a three-dimensional fully convolutional neural network

Info

Publication number
CN110334566A
CN110334566A
Authority
CN
China
Prior art keywords
oct
neural networks
convolutional neural
fingerprint
dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910219860.7A
Other languages
Chinese (zh)
Other versions
CN110334566B (en)
Inventor
梁荣华
丁宝进
陈朋
王海霞
张怡龙
刘义鹏
蒋莉
崔静静
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN201910219860.7A priority Critical patent/CN110334566B/en
Publication of CN110334566A publication Critical patent/CN110334566A/en
Application granted granted Critical
Publication of CN110334566B publication Critical patent/CN110334566B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/60: Type of objects
    • G06V20/64: Three-dimensional objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12: Fingerprints or palmprints
    • G06V40/1347: Preprocessing; Feature extraction
    • G06V40/1353: Extracting features related to minutiae or pores

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Human Computer Interaction (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A method for extracting internal and external OCT fingerprints based on a three-dimensional fully convolutional neural network, comprising the following steps: 1) manually annotate the stratum corneum region and the papillary-layer region in every fingerprint OCT image to obtain an annotation picture corresponding to each OCT image, perform ROI extraction and data augmentation, and assemble the annotated data set; 2) build a three-dimensional fully convolutional neural network model, set the training parameters and loss function, and train the model with the annotated data set; 3) predict the stratum corneum and papillary layer of unannotated OCT images with the trained fully convolutional neural network model; 4) from the stratum corneum and papillary layer of all OCT images, obtain the external and internal fingerprints of the OCT fingerprint by stitching according to relative depth and the spatial order of the OCT images. The present invention learns to extract stratum-corneum and papillary-layer features of OCT images with a three-dimensional fully convolutional neural network, thereby generating accurate internal and external fingerprints.

Description

Method for extracting internal and external OCT fingerprints based on a three-dimensional fully convolutional neural network
Technical field
The present invention relates to the field of fingerprint recognition, and in particular to a method for extracting internal and external OCT fingerprints based on a three-dimensional fully convolutional neural network.
Background technique
Because of the uniqueness and permanence of fingerprints, fingerprint features have become the most commonly used biometric for personal identification. A fingerprint recognition system captures the ridge pattern of the fingertip surface in a two-dimensional image and then recognizes certain features of it (ridge and valley lines, minutiae, etc.). However, when the finger surface is dirty, sweaty, or irreparably damaged, the fingerprint is corrupted and the recognition task cannot be completed. In addition, fake fingerprint films made of materials such as silica gel can often successfully deceive these systems.
Studies have shown that the fingerprint ridges and valleys on the finger epidermis originate from the papillary layer (papilla) at the junction of the dermis and the epidermis, which is the source of the fingerprint pattern. The fingerprint on the finger epidermis, i.e., the external fingerprint, is an exact replica of the relief of this layer. It follows that the internal fingerprint, which is obtained from the papillary-layer contour and not easily destroyed, is a strong complement to the external fingerprint. At the same time, optical coherence tomography (OCT), a non-invasive imaging technique, can acquire information 1–3 mm below the surface of human skin and obtain 3D volume data of the finger, which makes it possible to capture high-resolution three-dimensional internal fingerprints.
Existing internal/external OCT fingerprint extraction is typically based on gray-value jumps, using tangent-plane or clustering methods to locate the stratum corneum and the papillary layer in OCT images and generate the external and internal fingerprints respectively. These methods require many preset parameters and can hardly adapt to the complex and variable appearance of the stratum corneum and the papillary layer in OCT fingerprint data. With the development of deep learning, convolutional neural networks are increasingly applied to image recognition, semantic classification, and other fields.
Summary of the invention
To overcome the poor robustness of existing internal/external OCT fingerprint extraction, the present invention proposes a method for extracting internal and external OCT fingerprints based on a three-dimensional fully convolutional neural network, which learns to extract the stratum corneum and the papillary layer and thereby generates accurate internal and external fingerprints.
To achieve the above goal, the technical solution adopted by the present invention is as follows:
A method for extracting internal and external fingerprints based on a three-dimensional fully convolutional neural network, comprising the following steps:
1) Let the OCT fingerprint volume data have size W × H × N, i.e., it consists of N OCT images of resolution W × H, representing N spatially consecutive vertical sections of the fingerprint. Select several groups of consecutive OCT images, manually annotate the stratum corneum region and the papillary-layer region in every fingerprint image to obtain an annotation picture corresponding to each OCT image, and perform ROI extraction and data augmentation on the annotated OCT images to form the annotated data set required for training the three-dimensional fully convolutional neural network model;
2) Build a three-dimensional fully convolutional neural network model, set the training parameters and loss function, and train the model with the annotated data set to obtain a trained three-dimensional fully convolutional neural network model;
3) Predict the stratum corneum and the papillary layer of unannotated OCT images with the trained fully convolutional neural network model;
4) From the stratum corneum and papillary layer of all OCT images, obtain the external and internal fingerprints of the OCT fingerprint by stitching according to relative depth and the spatial order of the OCT images.
Further, the OCT fingerprint data augmentation in step 1) comprises the following steps:
1.1) First manually annotate several groups of consecutive OCT images, marking the stratum corneum and papillary-layer regions in each picture;
1.2) Since most of an OCT image is background and only part of it is sub-surface finger structure, perform ROI extraction on every OCT image as follows: with a rectangular box of fixed size 240 × 80 (height 240, width 80), successively crop the regions containing the stratum corneum and the papillary layer from each picture, and apply the same crops to the annotation pictures to obtain matching annotations. This not only improves algorithm efficiency but also yields more OCT training data;
1.3) Rotate the OCT ROI images and their corresponding annotation pictures clockwise by 90, 180, and 270 degrees and flip them horizontally to obtain more training samples.
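The cropping and augmentation of steps 1.2)–1.3) can be sketched as follows (a minimal NumPy illustration; the function name and the use of `np.rot90`/`np.fliplr` are assumptions for the sketch, not part of the patent):

```python
import numpy as np

def augment_roi(roi, label):
    """Given one 240 x 80 ROI crop and its matching annotation map, return
    the original pair plus clockwise rotations by 90, 180 and 270 degrees
    and a horizontal flip, mirroring steps 1.2)-1.3)."""
    pairs = [(roi, label)]
    for k in (3, 2, 1):  # np.rot90 rotates counter-clockwise, so k=3 is 90 deg clockwise
        pairs.append((np.rot90(roi, k), np.rot90(label, k)))
    pairs.append((np.fliplr(roi), np.fliplr(label)))
    return pairs
```

Applied to every 240 × 80 crop, this multiplies the annotated training set by five, since the same geometric transform is applied to the image and to its annotation.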
Further, step 2) comprises the following steps:
2.1) Build a three-dimensional fully convolutional neural network model. In consecutive OCT images, the positions of the stratum corneum and of the papillary layer are both spatially continuous (they vary slowly), so a three-dimensional fully convolutional network is constructed. Whereas an ordinary fully convolutional network takes a single picture as input (leaving the batch dimension aside), the three-dimensional fully convolutional network takes a group of consecutive OCT images as input, so as to take the spatial relationship between adjacent images into account. The input size is 240 × 80 × 8 (8 consecutive ROI images), and the layers of the whole three-dimensional fully convolutional network comprise 8 parts:
Parts I to III each consist of two three-dimensional convolutional layers and one three-dimensional pooling layer. In part i, 1 ≤ i ≤ 3, each three-dimensional convolutional layer applies 16·2^i convolution kernels of size 3 × 3 × 3 followed by a Rectified Linear Unit (ReLU) activation and Batch Normalization (BN); the three-dimensional pooling layer merges every 2 × 2 × 2 pixels into one pixel by taking their maximum. The output feature size of part i is (16·2^i) × (240·2^-i) × (80·2^-i) × (8·2^-i), so the output of part III is 128 × 30 × 10 × 1;
Part IV consists of two convolutional layers. Its input feature size is 128 × 30 × 10 × 1; each convolutional layer applies 256 convolution kernels of size 3 × 3 × 3 followed by ReLU activation and BN, giving an output feature of size 256 × 30 × 10 × 1;
Parts V to VII each consist of one three-dimensional deconvolution kernel and two three-dimensional convolutional layers. In part i, 5 ≤ i ≤ 7, let t = i − 4; the three-dimensional deconvolution produces an output feature of size (256·2^-t) × (30·2^t) × (10·2^t) × (1·2^t). This size equals the feature size after the two three-dimensional convolutions of part 8 − i, so the two results are concatenated, giving a feature of size (256·2^-t·2) × (30·2^t) × (10·2^t) × (1·2^t). This is followed by two three-dimensional convolutional layers, each applying 256·2^-t convolution kernels of size 3 × 3 × 3 with ReLU activation and BN, giving a feature of size (256·2^-t) × (30·2^t) × (10·2^t) × (1·2^t); the output of part VII is thus 32 × 240 × 80 × 8;
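The feature sizes stated for the encoder and decoder parts can be checked with a little arithmetic (a sketch of the size formulas only, not of the network itself; the function names are illustrative):

```python
def encoder_out(i):
    """Output feature size of encoder part i (1 <= i <= 3), per the
    description: (16*2^i) x (240*2^-i) x (80*2^-i) x (8*2^-i)."""
    return (16 * 2**i, 240 // 2**i, 80 // 2**i, 8 // 2**i)

def decoder_out(i):
    """Output feature size of decoder part i (5 <= i <= 7) with t = i - 4:
    (256*2^-t) x (30*2^t) x (10*2^t) x (1*2^t)."""
    t = i - 4
    return (256 // 2**t, 30 * 2**t, 10 * 2**t, 1 * 2**t)
```

For example, `encoder_out(3)` gives the 128 × 30 × 10 × 1 feature that enters part IV, and `decoder_out(7)` gives the 32 × 240 × 80 × 8 output of part VII, matching the sizes stated above.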
Part VIII, the last part, consists of one convolutional layer and a softmax function. The convolutional layer contains only 3 convolution kernels of size 3 × 3 × 3, giving an output feature of size 3 × 240 × 80 × 8; the softmax function then produces a probability prediction map, also of size 3 × 240 × 80 × 8, where 240 × 80 × 8 corresponds to the 8 input ROI pictures. For every ROI picture, 3 probability maps are generated, representing the probabilities that each pixel belongs to the stratum corneum, the papillary layer, or the background; whichever class probability is highest is the prediction for that pixel. For a given pixel, the probability p_l that it belongs to class l is computed as
p_l = exp(h_l) / Σ_k exp(h_k),   (1)
where h_l is the input to the softmax and 1 ≤ l ≤ 3;
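The per-pixel softmax of part VIII can be sketched as follows (NumPy illustration; subtracting the per-pixel maximum is a standard numerical-stability detail assumed here, not stated in the patent):

```python
import numpy as np

def softmax_probs(h):
    """h: class scores of shape (3, 240, 80, 8) -- one score per class
    (stratum corneum, papillary layer, background) per pixel. Returns
    p_l = exp(h_l) / sum_k exp(h_k) along the class axis."""
    e = np.exp(h - h.max(axis=0, keepdims=True))  # stabilize before exponentiating
    return e / e.sum(axis=0, keepdims=True)
```

The predicted class of each pixel is then the argmax of the probabilities over the first axis, exactly as the description states.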
2.2) Determine the parameters of the fully convolutional network. The batch size is set to 2: the pictures in the training set are grouped 8 to a stack, 2 stacks are loaded into the fully convolutional network model per training step, and the trained network is obtained after 100 iterations;
The parameters of each network layer are updated with the mini-batch stochastic gradient descent algorithm with a momentum term (mini-batch SGD), where the value of the momentum term is set to 0.2;
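A single parameter update of mini-batch SGD with a momentum term of 0.2 can be sketched as follows (the learning rate below is an assumed placeholder; the patent does not state one):

```python
def sgd_momentum_step(w, grad, v, lr=0.01, momentum=0.2):
    """One update of SGD with momentum: the velocity v accumulates a
    decaying sum of past gradients and the parameter w moves along it."""
    v = momentum * v - lr * grad
    return w + v, v
```

With momentum 0.2 the velocity retains only a fifth of its previous value at each step, so updates follow the current mini-batch gradient closely while still smoothing out some noise.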
The Dice loss function is used; its functional form is as follows:
L_dice = 1 − (2 Σ_x Σ_l p_l(x) g_l(x)) / (Σ_x Σ_l (p_l(x) + g_l(x)))   (2)
In the above formula, p_l(x) and g_l(x) respectively denote the predicted probability and the ground-truth probability that x belongs to class l.
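One common form of the Dice loss consistent with this description can be sketched as follows (a hedged illustration: the exact variant used in the patent, e.g. squared versus plain denominator terms, is not reproduced in the text):

```python
import numpy as np

def dice_loss(p, g, eps=1e-6):
    """p: predicted probabilities, g: one-hot ground truth, both of shape
    (classes, ...). Returns 1 - 2*intersection / (sum(p) + sum(g)); 0 means
    perfect overlap, values near 1 mean no overlap. eps avoids 0/0."""
    inter = float((p * g).sum())
    return 1.0 - 2.0 * inter / (float(p.sum()) + float(g.sum()) + eps)
```

Unlike per-pixel cross-entropy, this overlap-based loss is insensitive to the heavy class imbalance between the thin stratum-corneum/papillary-layer regions and the large background, which is presumably why it was chosen here.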
Further, in step 3), to match the input picture size of the trained three-dimensional fully convolutional network, the OCT images to be predicted are split into a series of sub-pictures of size 240 × 80; the sub-pictures are fed into the trained three-dimensional fully convolutional network to obtain the corresponding stratum corneum and papillary layer, and the sub-pictures are then restored to the original OCT image size.
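The splitting of an OCT image into 240 × 80 sub-pictures for prediction can be sketched as follows (boundary handling for images whose size is not an exact multiple of the patch size is an assumption; the patent does not specify it):

```python
import numpy as np

def split_into_patches(img, ph=240, pw=80):
    """Split one OCT image (H x W) into non-overlapping ph x pw sub-pictures,
    returning the patches and their top-left coordinates so that the
    per-patch predictions can be placed back at the original positions."""
    H, W = img.shape
    patches, coords = [], []
    for y in range(0, H - ph + 1, ph):
        for x in range(0, W - pw + 1, pw):
            patches.append(img[y:y + ph, x:x + pw])
            coords.append((y, x))
    return patches, coords
```

Restoring "the original OCT image size" then amounts to writing each predicted patch back at its recorded coordinate.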
Further, step 4) comprises the following operations:
4.1) Since the stratum corneum and papillary layer predicted from an OCT image may be discontinuous, nearest-neighbor interpolation is used to make them continuous;
4.2) Let the upper-surface contour curve of the stratum corneum be L_su(w), where 0 ≤ w < W represents the horizontal axis. Likewise, let the lower-surface contour curve of the stratum corneum be L_sd(w) and the upper-surface contour of the papillary layer be L_pu(w). Then, according to relative depth, two curves L_E(w) and L_I(w) are obtained by the following formulas:
L_E(w) = |L_su(w) − L_sd(w)|   (3)
L_I(w) = |L_su(w) − L_pu(w)|   (4)
Each image thus yields two lines of width W; finally, the two sets of lines of a group of OCT images are each stitched together in spatial order to obtain the internal and external fingerprints, where L_E(w) generates the external fingerprint and L_I(w) generates the internal fingerprint. The resolution of each fingerprint image is then W × N.
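Equations (3)–(4) and the stitching of one group of B-scans can be sketched as follows (NumPy illustration; the array layout and function names are assumptions):

```python
import numpy as np

def fingerprint_rows(L_su, L_sd, L_pu):
    """Per-image fingerprint lines from the three contour curves (length W):
    L_E(w) = |L_su(w) - L_sd(w)| for the external fingerprint,
    L_I(w) = |L_su(w) - L_pu(w)| for the internal fingerprint."""
    L_su, L_sd, L_pu = (np.asarray(a, dtype=float) for a in (L_su, L_sd, L_pu))
    return np.abs(L_su - L_sd), np.abs(L_su - L_pu)

def stitch(rows):
    """Stack the N per-image lines in spatial order into a W x N fingerprint."""
    return np.stack(rows, axis=1)
```

Stitching the N external lines gives the W × N external fingerprint image, and likewise for the internal lines.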
Compared with the prior art, the beneficial effects of the present invention are: the robustness of internal/external OCT fingerprint extraction is improved by the three-dimensional fully convolutional neural network; meanwhile, compared with ordinary (two-dimensional) convolution, the three-dimensional convolution operation fully considers the spatial relationship of consecutive OCT images and achieves better results.
Detailed description of the invention
Fig. 1 is the flow chart of the internal/external fingerprint extraction method based on a fully convolutional neural network according to the present invention;
Fig. 2 is the structure diagram of the three-dimensional fully convolutional neural network of the present invention;
Fig. 3(a) is an OCT image; Fig. 3(b) is the finally obtained stratum corneum; Fig. 3(c) is the finally obtained papillary layer; Fig. 3(d) marks the upper- and lower-surface contour curves of the stratum corneum and the upper-surface contour curve of the papillary layer;
Fig. 4 is an extracted external fingerprint image;
Fig. 5 is an extracted internal fingerprint image.
Specific embodiment
The invention is further described below with embodiments and with reference to the accompanying drawings:
Referring to Figs. 1 to 5, a method for extracting internal and external fingerprints based on a fully convolutional neural network comprises the following steps:
1) Let the OCT fingerprint volume data have size W × H × N, i.e., it consists of N OCT images of resolution W × H, representing N spatially consecutive vertical sections of the fingerprint. Select several groups of consecutive OCT images, manually annotate the stratum corneum region and the papillary-layer region in every fingerprint image to obtain an annotation picture corresponding to each OCT image, and perform ROI extraction and data augmentation on the annotated OCT images to form the annotated data set required for training the three-dimensional fully convolutional neural network model, comprising the following steps:
1.1) First manually annotate several groups of consecutive OCT images, marking the stratum corneum and papillary-layer regions in each picture;
1.2) Since most of an OCT image is background and only part of it is sub-surface finger structure, perform ROI extraction on every OCT image as follows: with a rectangular box of fixed size 240 × 80 (height 240, width 80), successively crop the regions containing the stratum corneum and the papillary layer from each picture, and apply the same crops to the annotation pictures to obtain matching annotations; this not only improves algorithm efficiency but also yields more OCT training data;
1.3) Rotate the OCT ROI images and their corresponding annotation pictures clockwise by 90, 180, and 270 degrees and flip them horizontally to obtain more training samples;
2) Build a three-dimensional fully convolutional neural network model, set the training parameters and loss function, and train the model with the annotated data set to obtain a trained three-dimensional fully convolutional neural network model, comprising the following steps:
2.1) Build a three-dimensional fully convolutional neural network model. In consecutive OCT images, the positions of the stratum corneum and of the papillary layer are both spatially continuous (they vary slowly), so a three-dimensional fully convolutional network is constructed. Whereas an ordinary fully convolutional network takes a single picture as input (leaving the batch dimension aside), the three-dimensional fully convolutional network takes a group of consecutive OCT images as input, so as to take the spatial relationship between adjacent images into account. The input size is 240 × 80 × 8 (8 consecutive ROI images), and the layers of the whole three-dimensional fully convolutional network comprise 8 parts:
Parts I to III each consist of two three-dimensional convolutional layers and one three-dimensional pooling layer. In part i (1 ≤ i ≤ 3), each three-dimensional convolutional layer applies 16·2^i convolution kernels of size 3 × 3 × 3 followed by a Rectified Linear Unit (ReLU) activation and Batch Normalization (BN); the three-dimensional pooling layer merges every 2 × 2 × 2 pixels into one pixel by taking their maximum. The output feature size of part i is (16·2^i) × (240·2^-i) × (80·2^-i) × (8·2^-i), so the output of part III is 128 × 30 × 10 × 1;
Part IV consists of two convolutional layers. Its input feature size is 128 × 30 × 10 × 1; each convolutional layer applies 256 convolution kernels of size 3 × 3 × 3 followed by ReLU activation and BN, giving an output feature of size 256 × 30 × 10 × 1;
Parts V to VII each consist of one three-dimensional deconvolution kernel and two three-dimensional convolutional layers. In part i (5 ≤ i ≤ 7), let t = i − 4; the three-dimensional deconvolution produces an output feature of size (256·2^-t) × (30·2^t) × (10·2^t) × (1·2^t). This size equals the feature size after the two three-dimensional convolutions of part 8 − i, so the two results are concatenated, giving a feature of size (256·2^-t·2) × (30·2^t) × (10·2^t) × (1·2^t). This is followed by two three-dimensional convolutional layers, each applying 256·2^-t convolution kernels of size 3 × 3 × 3 with ReLU activation and BN, giving a feature of size (256·2^-t) × (30·2^t) × (10·2^t) × (1·2^t); the output of part VII is thus 32 × 240 × 80 × 8;
Part VIII, the last part, consists of one convolutional layer and a softmax function. The convolutional layer contains only 3 convolution kernels of size 3 × 3 × 3, giving an output feature of size 3 × 240 × 80 × 8; the softmax function then produces a probability prediction map, also of size 3 × 240 × 80 × 8, where 240 × 80 × 8 corresponds to the 8 input ROI pictures. For every ROI picture, 3 probability maps are generated, representing the probabilities that each pixel belongs to the stratum corneum, the papillary layer, or the background; whichever class probability is highest is the prediction for that pixel. For a given pixel, the probability p_l that it belongs to class l is computed as
p_l = exp(h_l) / Σ_k exp(h_k),   (1)
where h_l is the input to the softmax and 1 ≤ l ≤ 3;
2.2) Determine the parameters of the fully convolutional network. The batch size is set to 2: the pictures in the training set are grouped 8 to a stack, 2 stacks are loaded into the fully convolutional network model per training step, and the trained network is obtained after 100 iterations;
The parameters of each network layer are updated with the mini-batch stochastic gradient descent algorithm with a momentum term (mini-batch SGD), where the value of the momentum term is set to 0.2;
The Dice loss function is used; its functional form is as follows:
L_dice = 1 − (2 Σ_x Σ_l p_l(x) g_l(x)) / (Σ_x Σ_l (p_l(x) + g_l(x)))   (2)
In the above formula, p_l(x) and g_l(x) respectively denote the predicted probability and the ground-truth probability that x belongs to class l.
3) Predict the stratum corneum and the papillary layer of unannotated OCT images with the trained fully convolutional neural network model, as follows:
To match the input picture size of the trained three-dimensional fully convolutional network, the OCT images to be predicted are split into a series of sub-pictures of size 240 × 80; the sub-pictures are fed into the trained three-dimensional fully convolutional network to obtain the corresponding stratum corneum and papillary layer, and the sub-pictures are then restored to the original OCT image size.
4) From the stratum corneum and papillary layer of all OCT images, obtain the external and internal fingerprints of the OCT fingerprint by stitching according to relative depth and the spatial order of the OCT images, specifically comprising the following steps:
4.1) Since the stratum corneum and papillary layer predicted from an OCT image may be discontinuous, nearest-neighbor interpolation is used to make them continuous.
4.2) Let the upper-surface contour curve of the stratum corneum be L_su(w), where 0 ≤ w < W represents the horizontal axis. Likewise, let the lower-surface contour curve of the stratum corneum be L_sd(w) and the upper-surface contour of the papillary layer be L_pu(w). Then, according to relative depth, two curves L_E(w) and L_I(w) are obtained by the following formulas:
L_E(w) = |L_su(w) − L_sd(w)|
L_I(w) = |L_su(w) − L_pu(w)|
Each image thus yields two lines of width W; finally, the two sets of lines of a group of OCT images are each stitched together in spatial order to obtain the internal and external fingerprints, where L_E(w) generates the external fingerprint and L_I(w) generates the internal fingerprint. The resolution of each fingerprint image is then W × N.

Claims (5)

1. A method for extracting internal and external OCT fingerprints based on a three-dimensional fully convolutional neural network, characterized in that the method comprises the following steps:
1) Let the OCT fingerprint volume data have size W × H × N, i.e., it consists of N OCT images of resolution W × H, representing N spatially consecutive vertical sections of the fingerprint; select several groups of consecutive OCT images, manually annotate the stratum corneum region and the papillary-layer region in every fingerprint image to obtain an annotation picture corresponding to each OCT image, and perform ROI extraction and data augmentation on the annotated OCT images to form the annotated data set required for training the three-dimensional fully convolutional neural network model;
2) Build a three-dimensional fully convolutional neural network model, set the training parameters and loss function, and train the model with the annotated data set to obtain a trained three-dimensional fully convolutional neural network model;
3) Predict the stratum corneum and the papillary layer of unannotated OCT images with the trained fully convolutional neural network model;
4) From the stratum corneum and papillary layer of all OCT images, obtain the external and internal fingerprints of the OCT fingerprint by stitching according to relative depth and the spatial order of the OCT images.
2. The method for extracting internal and external OCT fingerprints based on a three-dimensional fully convolutional neural network according to claim 1, characterized in that, in step 1), the OCT fingerprint data augmentation process comprises the following steps:
1.1) First manually annotate several groups of consecutive OCT images, marking the stratum corneum and papillary-layer regions in each picture;
1.2) Since most of an OCT image is background and only part of it is sub-surface finger structure, perform ROI extraction on every OCT image as follows: with a rectangular box of fixed size 240 × 80, successively crop the regions containing the stratum corneum and the papillary layer from each picture, and apply the same crops to the annotation pictures to obtain matching annotations; this not only improves algorithm efficiency but also yields more OCT training data;
1.3) Rotate the OCT ROI images and their corresponding annotation pictures clockwise by 90, 180, and 270 degrees and flip them horizontally to obtain more training samples.
3. The method for extracting internal and external OCT fingerprints based on a three-dimensional fully convolutional neural network according to claim 1 or 2, characterized in that step 2) comprises the following steps:
2.1) Build a three-dimensional fully convolutional neural network model. In consecutive OCT images, the positions of the stratum corneum and of the papillary layer are both spatially continuous, so a three-dimensional fully convolutional network is constructed. Whereas an ordinary fully convolutional network takes a single picture as input, the three-dimensional fully convolutional network takes a group of consecutive OCT images as input, so as to take the spatial relationship between adjacent images into account; the input size is 240 × 80 × 8, i.e., 8 consecutive ROI images. The layers of the whole three-dimensional fully convolutional network comprise 8 parts:
Parts I to III each consist of two three-dimensional convolutional layers and one three-dimensional pooling layer. In part i, 1 ≤ i ≤ 3, each three-dimensional convolutional layer applies 16·2^i convolution kernels of size 3 × 3 × 3 followed by a ReLU activation and BN; the three-dimensional pooling layer merges every 2 × 2 × 2 pixels into one pixel by taking their maximum. The output feature size of part i is (16·2^i) × (240·2^-i) × (80·2^-i) × (8·2^-i), so the output of part III is 128 × 30 × 10 × 1;
Part IV consists of two convolutional layers. Its input feature size is 128 × 30 × 10 × 1; each convolutional layer applies 256 convolution kernels of size 3 × 3 × 3 followed by ReLU activation and BN, giving an output feature of size 256 × 30 × 10 × 1;
Parts V to VII each consist of one three-dimensional deconvolution kernel and two three-dimensional convolutional layers. In part i, 5 ≤ i ≤ 7, let t = i − 4; the three-dimensional deconvolution produces an output feature of size (256·2^-t) × (30·2^t) × (10·2^t) × (1·2^t). This size equals the feature size after the two three-dimensional convolutions of part 8 − i, so the two results are concatenated, giving a feature of size (256·2^-t·2) × (30·2^t) × (10·2^t) × (1·2^t). This is followed by two three-dimensional convolutional layers, each applying 256·2^-t convolution kernels of size 3 × 3 × 3 with ReLU activation and BN, giving a feature of size (256·2^-t) × (30·2^t) × (10·2^t) × (1·2^t); the output of part VII is thus 32 × 240 × 80 × 8;
Part VIII, the last part, consists of one convolutional layer and a softmax function. The convolutional layer contains only 3 convolution kernels of size 3 × 3 × 3, giving an output feature of size 3 × 240 × 80 × 8; the softmax function then produces a probability prediction map, also of size 3 × 240 × 80 × 8, where 240 × 80 × 8 corresponds to the 8 input ROI pictures. For every ROI picture, 3 probability maps are generated, representing the probabilities that each pixel belongs to the stratum corneum, the papillary layer, or the background; whichever class probability is highest is the prediction for that pixel. For a given pixel, the probability p_l that it belongs to class l is computed as
p_l = exp(h_l) / Σ_k exp(h_k),   (1)
where h_l is the input to the softmax and 1 ≤ l ≤ 3;
2.2) Determine the parameters of the fully convolutional network. The batch size is set to 2: the pictures in the training set are grouped 8 to a stack, 2 stacks are loaded into the fully convolutional network model per training step, and the trained network is obtained after 100 iterations;
The parameters of each network layer are updated with the mini-batch stochastic gradient descent algorithm with a momentum term (mini-batch SGD), where the value of the momentum term is set to 0.2;
The Dice loss function is used; its functional form is as follows:
L_dice = 1 − (2 Σ_x Σ_l p_l(x) g_l(x)) / (Σ_x Σ_l (p_l(x) + g_l(x)))   (2)
In the above formula, p_l(x) and g_l(x) respectively denote the predicted probability and the ground-truth probability that x belongs to class l.
4. The method for extracting internal and external OCT fingerprints based on a three-dimensional fully convolutional neural network according to claim 1 or 2, characterized in that, in step 3), to match the input picture size of the trained three-dimensional fully convolutional network, the OCT images to be predicted are split into a series of sub-pictures of size 240 × 80; the sub-pictures are fed into the trained three-dimensional fully convolutional network to obtain the corresponding stratum corneum and papillary layer, and the sub-pictures are then restored to the original OCT image size.
5. The method for extracting internal and external fingerprints from OCT based on a three-dimensional fully convolutional neural network according to claim 1 or 2, characterized in that step 4) comprises the following steps:
4.1) since the stratum corneum and papillary layer predicted from the OCT image may be discontinuous, nearest-neighbor interpolation is used to make them continuous;
4.2) let the upper-surface contour curve of the stratum corneum be Lsu(w), where 0 ≤ w < W and w denotes the horizontal axis; likewise, let the lower-surface contour curve of the stratum corneum be Lsd(w) and the upper-surface contour of the papillary layer be Lpu(w). Then, according to relative depth, two curves LE(w) and LI(w) are obtained by the following formulas:
LE(w)=| Lsu(w)-Lsd(w)| (3)
LI(w)=| Lsu(w)-Lpu(w)| (4)
In this way, each image yields two lines of width W. Finally, the two sets of lines from one group of N OCT images are stitched together in spatial order to obtain the external and internal fingerprints, where LE(w) is used to generate the external fingerprint and LI(w) the internal fingerprint; the resolution of the fingerprint images is then W × N.
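Steps 4.1 and 4.2 can be sketched together in NumPy: fill gaps in a contour with the nearest valid value, then apply equations (3) and (4) to one B-scan. Representing gaps as NaN and the flat synthetic contours are illustrative assumptions:

```python
import numpy as np

def fill_nearest(contour):
    """Step 4.1: fill NaN gaps in a 1-D contour with the nearest valid value."""
    contour = np.asarray(contour, dtype=float)
    valid = np.where(~np.isnan(contour))[0]
    idx = np.arange(len(contour))
    # For each position, index of the nearest valid sample.
    nearest = valid[np.argmin(np.abs(idx[:, None] - valid[None, :]), axis=1)]
    return contour[nearest]

def fingerprint_rows(l_su, l_sd, l_pu):
    """Step 4.2: equations (3) and (4) for one B-scan of width W."""
    l_e = np.abs(l_su - l_sd)   # external fingerprint row
    l_i = np.abs(l_su - l_pu)   # internal fingerprint row
    return l_e, l_i

# One synthetic B-scan of width W = 5 with a gap in the corneum upper contour.
l_su = fill_nearest([5.0, np.nan, 5.0, 6.0, 6.0])
l_sd = np.array([9.0, 9.0, 9.0, 9.0, 10.0])
l_pu = np.array([12.0, 12.0, 12.0, 13.0, 13.0])
l_e, l_i = fingerprint_rows(l_su, l_sd, l_pu)
# Stitching the rows of N B-scans (e.g. with np.stack) yields the W x N
# external and internal fingerprint images.
```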
CN201910219860.7A 2019-03-22 2019-03-22 OCT (optical coherence tomography) internal and external fingerprint extraction method based on three-dimensional full-convolution neural network Active CN110334566B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910219860.7A CN110334566B (en) 2019-03-22 2019-03-22 OCT (optical coherence tomography) internal and external fingerprint extraction method based on three-dimensional full-convolution neural network

Publications (2)

Publication Number Publication Date
CN110334566A true CN110334566A (en) 2019-10-15
CN110334566B CN110334566B (en) 2021-08-03

Family

ID=68139549

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910219860.7A Active CN110334566B (en) 2019-03-22 2019-03-22 OCT (optical coherence tomography) internal and external fingerprint extraction method based on three-dimensional full-convolution neural network

Country Status (1)

Country Link
CN (1) CN110334566B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1768067A2 (en) * 2005-09-23 2007-03-28 Neuricam S.P.A. Electro-optical device for counting persons, or other, based on stereoscopic vision, and relative method
CN107480649A (en) * 2017-08-24 2017-12-15 浙江工业大学 A kind of fingerprint pore extracting method based on full convolutional neural networks
CN107563364A (en) * 2017-10-23 2018-01-09 清华大学深圳研究生院 The discriminating conduct of the fingerprint true and false and fingerprint identification method based on sweat gland
CN109154961A (en) * 2018-02-26 2019-01-04 深圳市汇顶科技股份有限公司 Optics fingerprint sensing in LCD screen based on the optical imagery for utilizing lens-pinhole module and other optical designs

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WANG HAIXIA: "External and internal fingerprint extraction based on optical coherence tomography", SPIE *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111317462A (en) * 2020-03-20 2020-06-23 佛山科学技术学院 Blood flow imaging method and device based on U-net neural network
CN111317462B (en) * 2020-03-20 2023-11-03 佛山科学技术学院 Blood flow imaging method and device based on U-net neural network
CN111666813A (en) * 2020-04-29 2020-09-15 浙江工业大学 Subcutaneous sweat gland extraction method based on three-dimensional convolutional neural network of non-local information
CN111666813B (en) * 2020-04-29 2023-06-30 浙江工业大学 Subcutaneous sweat gland extraction method of three-dimensional convolutional neural network based on non-local information
CN113034475A (en) * 2021-03-30 2021-06-25 浙江工业大学 Finger OCT (optical coherence tomography) volume data denoising method based on lightweight three-dimensional convolutional neural network
CN113034475B (en) * 2021-03-30 2024-04-19 浙江工业大学 Finger OCT (optical coherence tomography) volume data denoising method based on lightweight three-dimensional convolutional neural network
CN112991232A (en) * 2021-04-30 2021-06-18 深圳阜时科技有限公司 Training method of fingerprint image restoration model, fingerprint identification method and terminal equipment
CN112991232B (en) * 2021-04-30 2021-07-23 深圳阜时科技有限公司 Training method of fingerprint image restoration model, fingerprint identification method and terminal equipment

Also Published As

Publication number Publication date
CN110334566B (en) 2021-08-03

Similar Documents

Publication Publication Date Title
CN111476292B (en) Small sample element learning training method for medical image classification processing artificial intelligence
CN112766160B (en) Face replacement method based on multi-stage attribute encoder and attention mechanism
CN110334566A (en) Fingerprint extraction method inside and outside a kind of OCT based on three-dimensional full convolutional neural networks
CN105957066B (en) CT image liver segmentation method and system based on automatic context model
CN107977969B (en) Endoscope fluorescence image segmentation method, device and storage medium
CN105069746B (en) Video real-time face replacement method and its system based on local affine invariant and color transfer technology
CN105139004B (en) Facial expression recognizing method based on video sequence
CN109903301B (en) Image contour detection method based on multistage characteristic channel optimization coding
CN109389584A (en) Multiple dimensioned rhinopharyngeal neoplasm dividing method based on CNN
CN103761536B (en) Human face beautifying method based on non-supervision optimal beauty features and depth evaluation model
CN108257135A (en) The assistant diagnosis system of medical image features is understood based on deep learning method
CN109584251A (en) A kind of tongue body image partition method based on single goal region segmentation
CN110427832A (en) A kind of small data set finger vein identification method neural network based
CN109859233A (en) The training method and system of image procossing, image processing model
CN111429460A (en) Image segmentation method, image segmentation model training method, device and storage medium
CN107851194A (en) Visual representation study for brain tumor classification
CN110188792A (en) The characteristics of image acquisition methods of prostate MRI 3-D image
CN109410168A (en) For determining the modeling method of the convolutional neural networks model of the classification of the subgraph block in image
CN107424145A (en) The dividing method of nuclear magnetic resonance image based on three-dimensional full convolutional neural networks
CN104077742B (en) Human face sketch synthetic method and system based on Gabor characteristic
CN108564120A (en) Feature Points Extraction based on deep neural network
CN111242956A (en) U-Net-based ultrasonic fetal heart and fetal lung deep learning joint segmentation method
CN110110808A (en) A kind of pair of image carries out the method, apparatus and computer readable medium of target mark
CN109636910A (en) A kind of cranium face restored method generating confrontation network based on depth
CN112750531A (en) Automatic inspection system, method, equipment and medium for traditional Chinese medicine

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant