CN105469032A - Infrared image identification method - Google Patents

Infrared image identification method

Info

Publication number
CN105469032A
CN105469032A
Authority
CN
China
Prior art keywords
image
subimage
vein image
vein
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510785610.1A
Other languages
Chinese (zh)
Inventor
赖真霖
文君
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Sixiang Lianchuang Technology Co Ltd
Original Assignee
Chengdu Sixiang Lianchuang Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Sixiang Lianchuang Technology Co Ltd filed Critical Chengdu Sixiang Lianchuang Technology Co Ltd
Priority to CN201510785610.1A priority Critical patent/CN105469032A/en
Publication of CN105469032A publication Critical patent/CN105469032A/en
Pending legal-status Critical Current

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12 Fingerprints or palmprints
    • G06V40/1365 Matching; Classification

Abstract

The invention provides an infrared image identification method comprising the steps of binarizing and enhancing an infrared palm vein image and identifying the infrared palm vein image by extracting image features. The method effectively improves the identification range, recognition speed, and accuracy for low-quality palm vein acquisition images.

Description

Infrared image recognition
Technical field
The present invention relates to image recognition, and in particular to an infrared image recognition method.
Background technology
With the development of biometrics, face and fingerprint recognition can no longer meet growing security requirements. In recent years, identification based on palm vein features has attracted widespread attention in the biometric recognition field. Palm vein images are easy to acquire and require little storage space, so their study has significant practical value, and authentication products based on palm vein recognition will play an important role in network security authentication. Existing palm vein recognition systems can only handle samples captured under favorable conditions; for dark, low-sharpness palmprint samples the recognition rate drops. Moreover, the algorithms commonly applied are computationally expensive, making real-time recognition difficult.
Summary of the invention
To solve the above problems in the prior art, the present invention proposes an infrared image recognition method, comprising:
binarizing and enhancing the infrared palm vein image;
identifying the infrared palm vein image by extracting image features.
Preferably, identifying the infrared palm vein image by extracting image features further comprises:
partitioning the enhanced vein image into vein subimages f(x, y) of size 64 × 64 and first applying the following transform to each subimage:
F(u, v) = Σ_{x=0}^{63} Σ_{y=0}^{63} f(x, y) e^{-j2π(ux/64 + vy/64)}
The result of the transform is again a 64 × 64 matrix. The interval [0, 2π) is divided into 126 equal sections, corresponding to 126 direction angles, and linear interpolation is used to estimate the values at the radial coordinates along each direction:
F_1 = F_{l,u}(x_u - x) + F_{l,d}(x - x_d)
F_2 = F_{r,u}(x_u - x) + F_{r,d}(x - x_d)
F(i, j) = F_1(y_u - y_{ij}) + F_2(y_{ij} - y_d)
Here, F(i, j) is the value at the j-th radial coordinate along direction i, and the subscripts (u, l), (d, l), (u, r), (d, r) denote the values at the points to the upper left, lower left, upper right, and lower right of coordinate (x, y), respectively. The inverse Fourier transform is then applied to F(i, j) and the result is denoted f'(i, j), where i = 1, 2, ..., 126 carries the direction information and j = 1, 2, ..., 64 carries the radial (intercept) information.
The decomposition coefficients of the subimages at the same position across all samples in the training set are assembled into a sample matrix:
Here, n is the number of samples in the vein image sample library and k_{ij} is the j-th wavelet decomposition coefficient of the i-th sample. Principal component analysis is performed to obtain a vector of eigenvalues (ω_1, ω_2, ..., ω_126). From the 126 components, the first p components are selected as principal components when the fraction of the total variance accounted for by their variance exceeds a given value; the ratio of the principal component variance to the total variance is computed as follows:
α_p = (Σ_{i=1}^{p} ω_i) / (Σ_{i=1}^{126} ω_i)
The image feature vector obtained from this analysis is
Here, C is the principal component coefficient matrix, and the j-th column of matrix Y is the j-th principal component. When α_p exceeds the given threshold, (y_{i,1}, y_{i,2}, ..., y_{i,p})^T is taken as the feature vector of the i-th image.
The feature vectors of the vein objects in the palm vein image sample library are classified. When a new vein image is obtained, the subimage at each position is classified, the number of times the subimage is assigned to each object in the sample library is recorded, and the object with the largest count is taken as the matching result for this subimage.
After the Q subimages of a test sample are each matched against the known classes, the classification correctness probabilities [R_{A1,1}, ..., R_{Aq,q}] of the subimages at the different positions are obtained, where A_{i,j} is the subimage at position (i, j) of the partitioned test sample and Q = q^2.
All subimage classification results of the candidate matching vein image are merged to identify which class of vein image in the library this image matches:
C_p = Σ A_{i,j}
C_p is the probability coefficient that the image to be matched belongs to class p. An appropriate threshold T is set: if C_p > T, the image to be matched is determined to belong to vein image class p; if C_p < T, the image to be matched is determined to be a new vein image not present in the sample library.
Compared with the prior art, the present invention has the following advantages:
The proposed infrared image recognition method effectively improves the identification range, recognition speed, and accuracy for low-quality captured images.
Brief description of the drawings
Fig. 1 is a flowchart of the infrared image recognition method according to an embodiment of the present invention.
Detailed description of the embodiments
A detailed description of one or more embodiments of the invention is provided below together with the accompanying drawings that illustrate the principles of the invention. The invention is described in connection with such embodiments, but is not limited to any particular embodiment. The scope of the invention is defined only by the claims, and the invention covers many alternatives, modifications, and equivalents. Numerous specific details are set forth in the following description to provide a thorough understanding of the invention; these details are provided for illustrative purposes, and the invention may be practiced according to the claims without some or all of them.
One aspect of the present invention provides an infrared image recognition method. Fig. 1 is a flowchart of the infrared image recognition method according to an embodiment of the present invention.
Palm vein imaging exploits the near-infrared absorption characteristic of the hemoglobin in human veins. The palm vein acquisition device uses near-infrared light-emitting diodes with wavelengths in the 700-1100 nm range as the light source, because light in this band readily penetrates the bone and muscle tissue of the palm, together with a corresponding image sensor. During acquisition, if the palm is placed at too large an angle, the gradient field at the palm boundary of the palm vein image may be small, so that the palm boundary extraction is incomplete and the cropping of the palm vein region of interest (ROI) is affected.
Palm vein image identification optionally comprises the following processes:
Palm vein image segmentation, which separates the vein texture in the vein image from the background region to improve the accuracy and speed of image feature extraction.
Palm vein image enhancement, which highlights the vein texture information of the image.
Feature extraction, which extracts features from the preprocessed image to obtain a geometric feature template or a data feature template of the palm vein image.
Matching and recognition, which derives a sample template from the acquired user palm vein image and matches it against the templates previously enrolled in the database to identify the user's identity.
Further, for the above image segmentation, the present invention adopts the following process: the number of directions is chosen according to the directional distribution; the size and frequency range of the filter are determined from the gray-level and curvature distributions of the vein image cross-section; a single filter is then constructed for the image information in each direction of the palm vein; the filtered subimages are reconstructed by setting weighting coefficients for the subimages filtered in each direction; finally, the 4-direction neighborhood mean of the reconstructed image is computed, and the segmented image is obtained by comparing the difference between the resulting mean image and the reconstructed image.
First, considering the characteristics of palm vein images, the present invention computes the directional distribution map of the palm vein image, analyzes the directional distribution of the original image, and determines the filter directions according to this distribution. Template operators T_k (k = 1, ..., 8) are predefined at angular intervals of π/8. Given a palm vein image F with a pixel centered at coordinates (i, j), F(i, j) is the gray value of the original vein image at (i, j).
The directional distribution map is computed as follows:
(1) The 8-direction gray-level convolution values are computed pixel by pixel. The 5×5 template operators T_k are used to compute the gray-level convolution sums S_k(i, j) (k = 1, ..., 8) at pixel (i, j); the following formula, applied within a 9×9 block, gives the 8-direction gray-level convolution sums of the window center pixel (i, j):
S_k(i, j) = Σ_{x=-4}^{4} Σ_{y=-4}^{4} F(i+x, j+y) · T_k(x, y)
Here, x and y are the offsets by which the template is slid over the image, T_k(x, y) is the coefficient of the template for the corresponding direction, and S_k(i, j) is the value of the center pixel after the convolution, defined here as the gray-level convolution sum of the center pixel.
(2) From the 8 gray-level convolution sums S_k(i, j) (k = 1, ..., 8) of the center pixel, the maximum convolution sum is selected, and the subscript of the maximum convolution sum is taken as the direction of the center pixel:
S_{kmax} = max(S_k(i, j)), k = 1, ..., 8
k_{max} = arg(S_{kmax})
Here, max denotes taking the maximum value within the group of gray-level convolution sums, and arg denotes taking the subscript of S_{kmax}.
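As an illustration only, the following is a minimal Python sketch of steps (1) and (2), assuming the 8 directional template operators are supplied as odd-sized 2-D arrays; the use of scipy.ndimage.convolve and its boundary handling are implementation assumptions, not part of the patented method.

```python
import numpy as np
from scipy.ndimage import convolve

def direction_map(image, templates):
    """Per-pixel gray-level convolution sums S_k and direction index k_max
    for a list of 8 directional template operators (hypothetical inputs)."""
    sums = np.stack([convolve(image.astype(float), t, mode="nearest")
                     for t in templates])        # S_k(i, j), shape (8, H, W)
    s_kmax = sums.max(axis=0)                    # maximum convolution sum per pixel
    k_max = sums.argmax(axis=0) + 1              # direction subscript in 1..8
    return s_kmax, k_max
```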
The center frequency of the filter and the standard deviation of its envelope determine the filtering effect on the palm vein image: the spacing between palm vein textures determines the filter center frequency f, and the width of the vein texture determines the filter size δ_x, δ_y. To reduce the influence that differing values of δ_x and δ_y would have on the image enhancement, the present invention uses a single standard deviation, i.e., the filter scale δ = δ_x = δ_y. δ and f satisfy the following relation:
δ f = (1/π) √(ln2 / 2) · (2^B + 1) / (2^B - 1)
Here, B is the spatial-domain bandwidth of the filter, taking values in [0.5, 2.5].
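For illustration, a short sketch that evaluates this relation to obtain the filter scale δ from the center frequency f and bandwidth B, assuming the standard Gabor bandwidth relation as reconstructed above; the example values in the last line are purely illustrative assumptions.

```python
import math

def filter_scale(f, B):
    """Filter scale delta from center frequency f and spatial bandwidth B
    (octaves), using the delta-f relation given above."""
    return (math.sqrt(math.log(2) / 2) / (math.pi * f)) * (2 ** B + 1) / (2 ** B - 1)

# e.g. vein spacing of about 8 pixels (f = 1/8) and B = 1.5 octaves
print(filter_scale(1 / 8, 1.5))
```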
(3) The 8 filtered images are reconstructed. A set of weights λ_k ∈ [0, 1] is defined, and the reconstruction of the 8 images can be defined as:
R = Σ_{k=1}^{8} λ_k S_k, where Σ_{k=1}^{8} λ_k = 1
R denotes the reconstructed image.
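A minimal sketch of this weighted reconstruction, assuming the 8 filtered subimages are stacked in a single array; the uniform default weights are an illustrative assumption.

```python
import numpy as np

def reconstruct(filtered, weights=None):
    """R = sum_k lambda_k * S_k over the 8 filtered images (step (3));
    the weights must lie in [0, 1] and sum to 1."""
    filtered = np.asarray(filtered, dtype=float)               # shape (8, H, W)
    if weights is None:
        weights = np.full(len(filtered), 1.0 / len(filtered))  # uniform default
    weights = np.asarray(weights, dtype=float)
    assert np.isclose(weights.sum(), 1.0), "weights must sum to 1"
    return np.tensordot(weights, filtered, axes=1)             # reconstructed image R
```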
(4) The reconstructed image R is smoothed with a 4-direction averaging template. A 4-direction averaging template T_a of size 7 × 7 is defined; starting from the horizontal, one direction is taken every π/4, so the direction angles covered by the template lie in [0, π). The computation is as follows:
First, for each pixel, the gray-level mean R_l (l = 1, ..., 4) of the pixels along each of the 4 directions centered on the pixel is computed, and the sum of the gray-level means over the 4 directions is averaged. The process can be defined as:
R' = (R ⊗ T_a) / (4 × 7)
Here, ⊗ denotes two-dimensional convolution and R' denotes the 4-direction mean image.
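A sketch of this smoothing step, assuming T_a is the sum of four 7-pixel line templates (horizontal, vertical, and the two diagonals) through the center; the exact template layout is an assumption drawn from the description.

```python
import numpy as np
from scipy.ndimage import convolve

def four_direction_mean(R):
    """4-direction mean image R' = (R (x) T_a) / (4 x 7), where T_a sums a
    7-pixel line through the centre in each of the 4 directions (step (4))."""
    T_a = np.zeros((7, 7))
    T_a[3, :] += 1                            # horizontal line
    T_a[:, 3] += 1                            # vertical line
    T_a[np.arange(7), np.arange(7)] += 1      # main diagonal
    T_a[np.arange(7), 6 - np.arange(7)] += 1  # anti-diagonal
    return convolve(R.astype(float), T_a, mode="nearest") / (4 * 7)
```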
(5) Image segmentation. To simplify binarization, the comparison of the difference between the reconstructed image R and the 4-direction mean image R' is used as the segmentation criterion, and the segmentation can be defined as:
E(i, j) = 1 if R(i, j) - R'(i, j) > 0, and E(i, j) = 0 if R(i, j) - R'(i, j) ≤ 0
R(i, j) is the gray value of the reconstructed image R at (i, j), R'(i, j) is the gray value of the 4-direction mean image R' at (i, j), and E is the segmented image.
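A one-line sketch of this segmentation rule, assuming R and R' are same-sized floating-point arrays:

```python
import numpy as np

def segment(R, R_prime):
    """E(i, j) = 1 where R(i, j) - R'(i, j) > 0, otherwise 0 (step (5))."""
    return (R - R_prime > 0).astype(np.uint8)
```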
After the segmented binary image is obtained, the present invention further applies the following process in the above image enhancement to thin the image and highlight the features of the palmprint image:
(1) The iteration counter N is initialized to 1. According to the value of N, and when the corresponding threshold conditions of processes 1 to 4 below are satisfied, processes 1 to 4 are selected to thin the upper-left, lower-right, lower-left, and upper-right borders of the image, respectively: if N mod 4 = 1, execute process 1; if N mod 4 = 2, execute process 2; if N mod 4 = 3, execute process 3; if N mod 4 = 0, execute process 4. The respective threshold conditions of processes 1 to 4 are:
The threshold condition of process 1 is:
2≤A(P)≤6;
P2·P4·P8=1;
P2·P4·P6=1;
Here, A(P) is the number of pixels with value 1 among the 8 neighbors of the current pixel P; P2, P4, P6, and P8 are the values of the pixels above, to the left of, to the right of, and below the current pixel P, respectively.
The threshold condition of process 2 is:
2≤A(P)≤6;
P4·P6·P8=0;
P2·P6·P8=0;
The threshold condition of process 3 is:
2≤A(P)≤6;
P2·P4·P6=0;
P2·P6·P8=0;
The threshold condition of process 4 is:
P2·P4·P8=0;
P4·P6·P8=0;
(2) Each time an iteration of thinning is performed, N is incremented by 1. Whether the current thinning result is identical to the previous result is then checked: if it is identical, i.e., the thinning no longer changes, go to step (7); otherwise continue with step (3).
(3) The value of N mod 4 is checked; when it is 1, process 1 is executed. In process 1, if A(P) = 2 for the current pixel P, pixel P is deleted; otherwise pixel P is retained. If pixel P does not satisfy all the conditions of process 1, pixel P is retained. Go to step (2).
(4) The value of N mod 4 is checked; when it is 2, process 2 is executed. In process 2, if A(P) = 2 for the current pixel P, pixel P is deleted; otherwise pixel P is retained. If pixel P does not satisfy all the conditions of process 2, pixel P is retained. Go to step (2).
(5) The value of N mod 4 is checked; when it is 3, process 3 is executed. In process 3, if A(P) = 2 for the current pixel P, pixel P is deleted; otherwise pixel P is retained. If pixel P does not satisfy all the conditions of process 3, pixel P is retained. Go to step (2).
(6) The value of N mod 4 is checked; when it is 0, process 4 is executed. In process 4, if A(P) = 2 for the current pixel P, pixel P is deleted; otherwise pixel P is retained. If pixel P does not satisfy all the conditions of process 4, pixel P is retained. Go to step (2).
(7) If pixel P satisfies the following two conditions, pixel P is deleted as a redundant pixel:
P2P6=1 and P7=0;
P6P8=1 and P1=0;
Here, P1 and P7 are the upper-left and lower-left neighboring pixels of pixel P, respectively.
(8) If pixel P satisfies the following conditions, pixel P is deleted as a redundant pixel:
P2P6=1 and P3=0;
P6P8=1 and P9=0;
Here, P3 and P9 are the upper-right and lower-right neighboring pixels of pixel P, respectively.
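A hedged Python sketch of the iterative thinning in steps (1) to (6) above; the neighbor numbering (P1 upper-left, P2 above, P3 upper-right, P4 left, P6 right, P7 lower-left, P8 below, P9 lower-right) and the raster scan order are assumptions where the text leaves them implicit, the conditions are mirrored exactly as listed, and the redundant-pixel post-processing of steps (7) and (8) is omitted for brevity.

```python
import numpy as np

def neighbours(img, i, j):
    """P1..P9 neighbourhood of pixel (i, j) under the assumed numbering
    (P5 is the pixel itself and is omitted)."""
    return {
        "P1": img[i - 1, j - 1], "P2": img[i - 1, j], "P3": img[i - 1, j + 1],
        "P4": img[i, j - 1],                          "P6": img[i, j + 1],
        "P7": img[i + 1, j - 1], "P8": img[i + 1, j], "P9": img[i + 1, j + 1],
    }

# Threshold conditions of processes 1-4 as listed in step (1)
CONDITIONS = {
    1: lambda n, a: 2 <= a <= 6 and n["P2"] * n["P4"] * n["P8"] == 1
                                and n["P2"] * n["P4"] * n["P6"] == 1,
    2: lambda n, a: 2 <= a <= 6 and n["P4"] * n["P6"] * n["P8"] == 0
                                and n["P2"] * n["P6"] * n["P8"] == 0,
    3: lambda n, a: 2 <= a <= 6 and n["P2"] * n["P4"] * n["P6"] == 0
                                and n["P2"] * n["P6"] * n["P8"] == 0,
    4: lambda n, a: n["P2"] * n["P4"] * n["P8"] == 0
                                and n["P4"] * n["P6"] * n["P8"] == 0,
}

def thin(binary):
    """Iterative border thinning: pass N applies process N mod 4, a pixel is
    deleted when the process conditions hold and A(P) == 2, and iteration
    stops once a pass no longer changes the image."""
    img = (binary > 0).astype(np.uint8).copy()
    N = 1
    while True:
        prev = img.copy()
        proc = ((N - 1) % 4) + 1                 # N mod 4 = 1,2,3,0 -> process 1..4
        for i in range(1, img.shape[0] - 1):
            for j in range(1, img.shape[1] - 1):
                if img[i, j] != 1:
                    continue
                n = neighbours(img, i, j)
                a = int(sum(n.values()))         # A(P): number of 1-valued neighbours
                if CONDITIONS[proc](n, a) and a == 2:
                    img[i, j] = 0                # delete pixel P
        N += 1
        if np.array_equal(img, prev):            # no change after this pass
            break
    return img
```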
After the enhanced palmprint vein image is obtained, the present invention further adopts the following process for the above image feature extraction:
The features of a vein image are mainly reflected in the extension trend of its texture; for a binarized gray-level image, this extension trend is in turn embodied in the direction of the high-frequency information at the vein boundaries. The present invention focuses on describing and extracting features from the bending angles of the vein curves.
The enhanced vein image is partitioned into vein subimages f(x, y) of size 64 × 64, and the following transform is first applied to each subimage:
F(u, v) = Σ_{x=0}^{63} Σ_{y=0}^{63} f(x, y) e^{-j2π(ux/64 + vy/64)}
The result of the transform is again a 64 × 64 matrix. The interval [0, 2π) is divided into 126 equal sections, corresponding to 126 direction angles, and linear interpolation is used to estimate the values at the radial coordinates along each direction:
F_1 = F_{l,u}(x_u - x) + F_{l,d}(x - x_d)
F_2 = F_{r,u}(x_u - x) + F_{r,d}(x - x_d)
F(i, j) = F_1(y_u - y_{ij}) + F_2(y_{ij} - y_d)
Here, F(i, j) is the value at the j-th radial coordinate along direction i, and the subscripts (u, l), (d, l), (u, r), (d, r) denote the values at the points to the upper left, lower left, upper right, and lower right of coordinate (x, y), respectively. The inverse Fourier transform is then applied to F(i, j) and the result is denoted f'(i, j), where i = 1, 2, ..., 126 carries the direction information and j = 1, 2, ..., 64 carries the radial (intercept) information.
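A sketch of this transform step in Python, assuming the spectrum is resampled onto 126 directions by 64 radial positions with bilinear interpolation; the spectrum centering, the radial step, and the use of the magnitude are implementation assumptions, and the subsequent per-direction inverse transform is omitted.

```python
import numpy as np

def directional_spectrum(sub, n_dirs=126, n_radii=64):
    """64x64 DFT of a vein sub-image followed by bilinear resampling of the
    spectrum onto n_dirs direction angles x n_radii radial positions."""
    F = np.fft.fft2(sub, s=(64, 64))
    mag = np.abs(np.fft.fftshift(F))               # centred spectrum magnitude
    c = 32.0                                       # spectrum centre
    out = np.zeros((n_dirs, n_radii))
    for i, theta in enumerate(np.linspace(0.0, 2 * np.pi, n_dirs, endpoint=False)):
        for j in range(n_radii):
            x = c + 0.5 * j * np.cos(theta)        # sample point along direction i
            y = c + 0.5 * j * np.sin(theta)
            x0, y0 = int(np.floor(x)), int(np.floor(y))
            x1, y1 = min(x0 + 1, 63), min(y0 + 1, 63)
            dx, dy = x - x0, y - y0
            out[i, j] = (mag[y0, x0] * (1 - dx) * (1 - dy)   # bilinear interpolation
                         + mag[y0, x1] * dx * (1 - dy)
                         + mag[y1, x0] * (1 - dx) * dy
                         + mag[y1, x1] * dx * dy)
    return out
```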
The decomposition coefficients of the subimages at the same position across all samples in the training set are assembled into a sample matrix:
Here, n is the number of samples in the vein image sample library and k_{ij} is the j-th wavelet decomposition coefficient of the i-th sample. Principal component analysis is performed to obtain a vector of eigenvalues (ω_1, ω_2, ..., ω_126). From the 126 components, the first p components are selected as principal components when the fraction of the total variance accounted for by their variance exceeds a given value; the ratio of the principal component variance to the total variance is computed as follows:
α_p = (Σ_{i=1}^{p} ω_i) / (Σ_{i=1}^{126} ω_i)
The image feature vector obtained from this analysis is
Here, C is the principal component coefficient matrix, and the j-th column of matrix Y is the j-th principal component. When α_p exceeds the given threshold, (y_{i,1}, y_{i,2}, ..., y_{i,p})^T is taken as the feature vector of the i-th image.
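A compact sketch of this PCA step, assuming the sample matrix is arranged with one row per sample and one column per coefficient; the 0.9 variance threshold is an illustrative assumption.

```python
import numpy as np

def pca_features(K, threshold=0.9):
    """Select the first p principal components whose cumulative variance
    ratio alpha_p exceeds `threshold` and project each sample onto them."""
    K = np.asarray(K, dtype=float)
    Kc = K - K.mean(axis=0)                    # centre the coefficients
    w, C = np.linalg.eigh(np.cov(Kc, rowvar=False))
    order = np.argsort(w)[::-1]                # eigenvalues in descending order
    w, C = w[order], C[:, order]
    alpha = np.cumsum(w) / np.sum(w)           # alpha_p for p = 1, 2, ...
    p = int(np.searchsorted(alpha, threshold)) + 1
    Y = Kc @ C[:, :p]                          # one feature vector per image
    return Y, p
```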
The feature vectors of the vein objects in the palm vein image sample library are classified. When a new vein image is obtained, the subimage at each position is classified, the number of times the subimage is assigned to each object in the sample library is recorded, and the object with the largest count is taken as the matching result for this subimage.
After the Q subimages of a test sample are each matched against the known classes, the classification correctness probabilities [R_{A1,1}, ..., R_{Aq,q}] of the subimages at the different positions are obtained, where A_{i,j} is the subimage at position (i, j) of the partitioned test sample and Q = q^2.
All subimage classification results of the candidate matching vein image are merged to identify which class of vein image in the library this image matches:
C_p = Σ A_{i,j}
C_p is the probability coefficient that the image to be matched belongs to class p. An appropriate threshold T is set: if C_p > T, the image to be matched is determined to belong to vein image class p; if C_p < T, the image to be matched is determined to be a new vein image not present in the sample library.
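A small sketch of this final fusion and decision step; representing the sub-image matching results as class indices and comparing the raw count C_p with the threshold T are assumptions about details the text does not fix.

```python
import numpy as np

def fuse_subimage_votes(votes, n_classes, T):
    """C_p counts the sub-images matched to class p; the image is assigned
    to the class with the largest C_p if it exceeds T, otherwise it is
    treated as a new vein image absent from the sample library."""
    C = np.bincount(np.asarray(votes, dtype=int), minlength=n_classes)
    best = int(np.argmax(C))
    return best if C[best] > T else None       # None: new vein image

# e.g. 16 sub-image results over 5 classes with threshold T = 8
print(fuse_subimage_votes([2, 2, 2, 1, 2, 2, 2, 0, 2, 2, 4, 2, 2, 2, 3, 2], 5, 8))
```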
In summary, the present invention proposes an infrared image recognition method that effectively improves the identification range, recognition speed, and accuracy for low-quality captured images.
Obviously, those skilled in the art should appreciate that each module or step of the present invention described above may be implemented with a general-purpose computing system; they may be concentrated on a single computing system or distributed over a network formed by multiple computing systems; optionally, they may be implemented with program code executable by a computing system, so that they may be stored in a storage system and executed by the computing system. Thus, the present invention is not restricted to any specific combination of hardware and software.
It should be understood that the above embodiments of the present invention are only intended to illustrate or explain the principles of the present invention and are not to be construed as limiting the present invention. Therefore, any modification, equivalent replacement, improvement, and the like made without departing from the spirit and scope of the present invention shall be included within the protection scope of the present invention. Furthermore, the appended claims are intended to cover all changes and modifications that fall within the scope and boundaries of the claims or the equivalents of such scope and boundaries.

Claims (2)

1. An infrared image recognition method for identifying an acquired infrared palm vein image, characterized by comprising:
binarizing and enhancing the infrared palm vein image;
identifying the infrared palm vein image by extracting image features.
2. The method according to claim 1, characterized in that identifying the infrared palm vein image by extracting image features further comprises:
partitioning the enhanced vein image into vein subimages f(x, y) of size 64 × 64 and first applying the following transform to each subimage:
F(u, v) = Σ_{x=0}^{63} Σ_{y=0}^{63} f(x, y) e^{-j2π(ux/64 + vy/64)}
The result of the transform is again a 64 × 64 matrix. The interval [0, 2π) is divided into 126 equal sections, corresponding to 126 direction angles, and linear interpolation is used to estimate the values at the radial coordinates along each direction:
F_1 = F_{l,u}(x_u - x) + F_{l,d}(x - x_d)
F_2 = F_{r,u}(x_u - x) + F_{r,d}(x - x_d)
F(i, j) = F_1(y_u - y_{ij}) + F_2(y_{ij} - y_d)
Here, F(i, j) is the value at the j-th radial coordinate along direction i, and the subscripts (u, l), (d, l), (u, r), (d, r) denote the values at the points to the upper left, lower left, upper right, and lower right of coordinate (x, y), respectively. The inverse Fourier transform is then applied to F(i, j) and the result is denoted f'(i, j), where i = 1, 2, ..., 126 carries the direction information and j = 1, 2, ..., 64 carries the radial (intercept) information;
the decomposition coefficients of the subimages at the same position across all samples in the training set are assembled into a sample matrix:
where n is the number of samples in the vein image sample library and k_{ij} is the j-th wavelet decomposition coefficient of the i-th sample; principal component analysis is performed to obtain a vector of eigenvalues (ω_1, ω_2, ..., ω_126), and from the 126 components the first p components are selected as principal components when the fraction of the total variance accounted for by their variance exceeds a given value, the ratio of the principal component variance to the total variance being computed as follows:
α_p = (Σ_{i=1}^{p} ω_i) / (Σ_{i=1}^{126} ω_i)
the image feature vector obtained from this analysis is
where C is the principal component coefficient matrix and the j-th column of matrix Y is the j-th principal component; when α_p exceeds the given threshold, (y_{i,1}, y_{i,2}, ..., y_{i,p})^T is taken as the feature vector of the i-th image;
the feature vectors of the vein objects in the palm vein image sample library are classified; when a new vein image is obtained, the subimage at each position is classified, the number of times the subimage is assigned to each object in the sample library is recorded, and the object with the largest count is taken as the matching result for this subimage;
after the Q subimages of a test sample are each matched against the known classes, the classification correctness probabilities [R_{A1,1}, ..., R_{Aq,q}] of the subimages at the different positions are obtained, where A_{i,j} is the subimage at position (i, j) of the partitioned test sample and Q = q^2;
all subimage classification results of the candidate matching vein image are merged to identify which class of vein image in the library this image matches:
C_p = Σ A_{i,j}
C_p is the probability coefficient that the image to be matched belongs to class p; an appropriate threshold T is set: if C_p > T, the image to be matched is determined to belong to vein image class p; if C_p < T, the image to be matched is determined to be a new vein image not present in the sample library.
CN201510785610.1A 2015-11-16 2015-11-16 Infrared image identification method Pending CN105469032A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510785610.1A CN105469032A (en) 2015-11-16 2015-11-16 Infrared image identification method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510785610.1A CN105469032A (en) 2015-11-16 2015-11-16 Infrared image identification method

Publications (1)

Publication Number Publication Date
CN105469032A true CN105469032A (en) 2016-04-06

Family

ID=55606703

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510785610.1A Pending CN105469032A (en) 2015-11-16 2015-11-16 Infrared image identification method

Country Status (1)

Country Link
CN (1) CN105469032A (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104318213A (en) * 2014-10-21 2015-01-28 沈阳大学 Method for using human body palm biology information to identify identities
CN104504361A (en) * 2014-11-10 2015-04-08 深圳云派思科技有限公司 Method for extracting principal direction characteristics of palm veins on the basis of direction characteristics

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104318213A (en) * 2014-10-21 2015-01-28 沈阳大学 Method for using human body palm biology information to identify identities
CN104504361A (en) * 2014-11-10 2015-04-08 深圳云派思科技有限公司 Method for extracting principal direction characteristics of palm veins on the basis of direction characteristics

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
孙磊 (Sun Lei): "手指静脉图像的特征提取算法" [Feature extraction algorithms for finger vein images], China Masters' Theses Full-text Database *
贾旭 (Jia Xu): "基于多特性融合的手背静脉识别关键算法研究" [Research on key algorithms for dorsal hand vein recognition based on multi-feature fusion], China Doctoral Dissertations Full-text Database *
赵丹丹 (Zhao Dandan): "手指静脉图像的增强和细化算法及其在身份识别中的应用" [Enhancement and thinning algorithms for finger vein images and their application in identity recognition], China Masters' Theses Full-text Database *

Similar Documents

Publication Publication Date Title
CN108009520B (en) Finger vein identification method and system based on convolution variational self-encoder network
Justino et al. Reconstructing shredded documents through feature matching
CN107862282A (en) A kind of finger vena identification and safety certifying method and its terminal and system
CN102663393B (en) Method for extracting region of interest of finger vein image based on correction of rotation
Joshi et al. Latent fingerprint enhancement using generative adversarial networks
CN107729820B (en) Finger vein identification method based on multi-scale HOG
CN102254188B (en) Palmprint recognizing method and device
Du et al. Wavelet domain local binary pattern features for writer identification
CN102332084B (en) Identity identification method based on palm print and human face feature extraction
CN104680127A (en) Gesture identification method and gesture identification system
CN104834922A (en) Hybrid neural network-based gesture recognition method
CN105320950A (en) A video human face living body detection method
CN106022218A (en) Palm print palm vein image layer fusion method based on wavelet transformation and Gabor filter
CN101246543A (en) Examiner identity appraising system based on bionic and biological characteristic recognition
CN105069807A (en) Punched workpiece defect detection method based on image processing
CN104951940A (en) Mobile payment verification method based on palmprint recognition
CN110555382A (en) Finger vein identification method based on deep learning and Wasserstein distance measurement
CN108334875A (en) Vena characteristic extracting method based on adaptive multi-thresholding
CN105373781A (en) Binary image processing method for identity authentication
CN115223211B (en) Identification method for converting vein image into fingerprint image
CN110163182A (en) A kind of hand back vein identification method based on KAZE feature
CN112597812A (en) Finger vein identification method and system based on convolutional neural network and SIFT algorithm
CN114821682B (en) Multi-sample mixed palm vein identification method based on deep learning algorithm
CN110147769B (en) Finger vein image matching method
CN109934102B (en) Finger vein identification method based on image super-resolution

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20160406