CN107358198A - Iris recognition method based on segmented feature selection - Google Patents

Iris recognition method based on segmented feature selection

Info

Publication number
CN107358198A
CN107358198A
Authority
CN
China
Prior art keywords
iris
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710567044.6A
Other languages
Chinese (zh)
Inventor
郑慧诚 (Zheng Huicheng)
陈佳捷 (Chen Jiajie)
欧阳楚旭 (Ouyang Chuxu)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sun Yat Sen University
Original Assignee
Sun Yat Sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sun Yat Sen University filed Critical Sun Yat Sen University
Priority to CN201710567044.6A priority Critical patent/CN107358198A/en
Publication of CN107358198A publication Critical patent/CN107358198A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/193 Preprocessing; Feature extraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/197 Matching; Classification

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Ophthalmology & Optometry (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

In the iris recognition method based on segmented feature selection provided by the invention, all feature vectors are divided into segments, so that the feature vectors in each segment that effectively discriminate intra-class and inter-class samples can be selected more accurately. The feature vectors selected from every segment are combined into the final iris feature description. The experimental results show that the feature vectors selected by the present invention achieve better recognition performance than classical feature selection methods.

Description

Iris recognition method based on segmented feature selection
Technical field
The present invention relates to the field of biometric recognition technology, and more particularly to an iris recognition method based on segmented feature selection.
Background technology
The Internet has permeated almost every aspect of life today, problems of network information security have become increasingly prominent, and the security and difficulty of identity authentication are also gradually increasing. In the Internet era, the mainstream method of identity authentication is the password. However, using passwords for identity authentication not only carries the risk of theft and cracking, but also often faces the problem of forgotten passwords. Biometric recognition technology, with its high reliability, high security and portability, has therefore emerged. Iris recognition is among the most accurate technologies in current biometric recognition, offering high reliability, high security, high stability and high resistance to forgery. Improving the recognition rate of iris recognition is therefore of important practical significance.
A key factor in iris recognition is feature selection; popular existing feature selection methods include boosting methods and lasso methods. Boosting methods achieve better recognition by adjusting the weight of each weak classifier for the samples, but they do not guarantee that the finally obtained feature set is globally optimal, and they are prone to overfitting when the training data are poorly designed. Lasso methods construct a penalty function using the idea of regularization and achieve feature selection by setting part of the feature weights to zero; however, because their objective function is not linear, they are not very efficient to implement, the quadratic squared-error term in the objective function makes the feature selection very sensitive to outliers, and since the model categories are only +1 and -1, a maximum margin of the model cannot be guaranteed, so their generalization ability is also weak.
Summary of the invention
To overcome the technical deficiencies of the iris recognition methods provided by the above prior art, namely that the recognition accuracy is not high and the generalization ability is weak, the present invention provides an iris recognition method based on segmented feature selection.
To achieve the above object of the invention, the technical scheme adopted is as follows:
An iris recognition method based on segmented feature selection comprises the following steps:
S1. Input training sample pairs, perform iris localization on the two iris pictures of each training sample pair to obtain the annular region where the iris lies, and then unroll the annular region of the iris into a rectangular image;
S2. Divide the rectangular image of the iris into multiple non-overlapping sub-regions, and build multiple multi-lobe differential filters with different filtering parameters;
S3. After convolving each sub-region with a multi-lobe differential filter, obtain a filter result; 0-1 code each pixel of the filter result according to the sign of its value, a positive sign being coded as 1 and a negative sign as 0; concatenate the codes of the pixels of the filter result in order to obtain a multi-dimensional vector as a feature vector of the sub-region;
S4. After each sub-region performs the operation of step S3 with each of the multiple multi-lobe differential filters of step S2, obtain a feature set composed of multiple feature vectors;
S5. Each sub-region obtains its corresponding feature set after performing the operations of steps S3 and S4;
S6. Divide all the feature vectors contained in the feature sets of all sub-regions into T segments, each segment having D features;
S7. Apply the processing of S1~S6 to each training sample pair;
S8. For each segment of feature vectors over all training sample pairs, select feature vectors with the method of linear programming;
S9. Train a support vector machine on the Hamming distances between the feature vectors selected for each training sample pair; then, after processing a test sample pair according to steps S1~S7 and obtaining the selected feature vectors according to S8, input the Hamming distances between the selected feature vectors obtained from the test sample pair to the trained support vector machine, so as to carry out iris recognition.
In this scheme, the iris recognition method provided by the present invention uses segmented feature selection to find the feature vectors that most effectively describe the iris and discriminate between classes, improving the feature vectors' ability to characterize the iris and their discriminability for classification. The iris recognition method provided by the invention can therefore improve the accuracy with which the classifier carries out iris recognition.
Compared with prior art, the beneficial effects of the invention are as follows:
In the iris recognition method provided by the invention, all feature vectors are divided into segments, so that the feature vectors in each segment that effectively discriminate intra-class and inter-class samples can be picked out more accurately. The feature vectors selected from every segment are combined into the final iris feature description. The experimental results show that the feature vectors selected by the present invention achieve better recognition performance than classical feature selection methods.
Brief description of the drawings
Fig. 1 is a detailed flowchart of iris localization and normalization.
Fig. 2 is a schematic diagram of the recognition method.
Detailed description of the embodiments
The accompanying drawings are provided for illustrative purposes only and shall not be construed as limiting this patent.
The present invention is further elaborated below with reference to the drawings and embodiments.
Embodiment 1
As shown in Figures 1 and 2, the iris recognition method provided by the invention mainly comprises two parts: iris localization with normalization, and iris recognition. The pictures used in the present invention are images containing only the eye region, acquired in the infrared spectrum.
1. Iris localization and normalization
(1) Iris localization
Iris localization consists of locating the outer iris boundary and the inner iris boundary, where locating the inner boundary amounts to locating the pupil boundary. Through the connected component analysis of the following steps (sketched in code below), the pupil position can be coarsely localized:
1. Divide the image into sub-blocks of 20 × 20 pixels, compute the average gray value of each sub-block, and take the minimum average gray value as the initial threshold T_0 of the iteration.
2. Find the pupil region according to the following iterative process:
1) Binarize the image with the current threshold T_0;
2) Traverse all connected regions in the binary image and compute, for each, its area S_t, eccentricity C_t and region aspect ratio η_t;
3) Check whether each connected region satisfies the conditions: S_t greater than a preset s, C_t less than a preset c, and |η_t − 1| less than a preset ε;
4) If exactly one connected region satisfies the conditions in 3), that connected region is the pupil region P; if several connected regions satisfy them, take the connected region with the minimum eccentricity C_t as the pupil region P; the iteration then ends.
5) If no connected region satisfies the conditions in 3), set T_0 = T_0 + ΔT; if T_0 exceeds the maximum gray value of the image the iteration ends, otherwise go to 1) and continue iterating.
3. If no qualifying connected region has been found when the iteration ends, search all iteration results for the connected region whose area is greater than s and whose eccentricity is the global minimum, and take that region as the pupil region P.
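For illustration, this coarse localization loop can be sketched in Python with OpenCV as follows; this is a non-limiting sketch in which the thresholds s, c, ε, the step ΔT and the moment-based eccentricity computation are assumptions not prescribed by the text:

```python
import cv2
import numpy as np

def eccentricity(mask):
    """Eccentricity of a binary region from its central image moments."""
    m = cv2.moments(mask.astype(np.uint8), binaryImage=True)
    common = np.sqrt(4 * m["mu11"] ** 2 + (m["mu20"] - m["mu02"]) ** 2)
    lam1 = (m["mu20"] + m["mu02"] + common) / 2  # major-axis variance
    lam2 = (m["mu20"] + m["mu02"] - common) / 2  # minor-axis variance
    return np.sqrt(1 - lam2 / lam1) if lam1 > 0 else 1.0

def coarse_pupil_localization(gray, s=400.0, c=0.7, eps=0.3, delta_t=10):
    """Steps 1-3 above: iterative thresholding + connected component analysis.
    Returns (eccentricity, centroid, L) of the pupil region P, where L is the
    mean of the region's length and width (used later for the ROI side)."""
    # Step 1: initial threshold = minimum mean gray value over 20x20 blocks
    h, w = gray.shape
    blocks = gray[:h - h % 20, :w - w % 20].reshape(h // 20, 20, w // 20, 20)
    t0 = int(blocks.mean(axis=(1, 3)).min())

    fallback = None  # step 3: area > s with global-minimum eccentricity
    while t0 <= int(gray.max()):
        # 1) binarize (the pupil is dark, so invert the threshold)
        _, binary = cv2.threshold(gray, t0, 255, cv2.THRESH_BINARY_INV)
        # 2) traverse all connected regions and compute S_t, C_t, eta_t
        n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
        candidates = []
        for k in range(1, n):  # label 0 is the background
            area = stats[k, cv2.CC_STAT_AREA]
            bw, bh = stats[k, cv2.CC_STAT_WIDTH], stats[k, cv2.CC_STAT_HEIGHT]
            ecc = eccentricity(labels == k)
            # 3) area, eccentricity and aspect-ratio conditions
            if area > s and ecc < c and abs(bw / bh - 1) < eps:
                candidates.append((ecc, centroids[k], (bw + bh) / 2))
            if area > s and (fallback is None or ecc < fallback[0]):
                fallback = (ecc, centroids[k], (bw + bh) / 2)
        if candidates:
            # 4) unique candidate, or the one with minimum eccentricity
            return min(candidates, key=lambda cand: cand[0])
        t0 += delta_t  # 5) raise the threshold and iterate
    return fallback
```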
After the coarse localization of the pupil, the pupil center and radius can be roughly estimated and used to reduce the parameter search space, enabling fast and accurate pupil localization. The concrete process is as follows (see the sketch below):
1. Obtain the pupil region of interest (ROI). Take the centroid coordinates of the connected region P as the estimate of the pupil center. Let L be the average of the length and width of the connected region, and use L as the side length of the square ROI. According to the center position and the ROI side length, crop a square area from the original image as the ROI containing the pupil.
2. Extract the boundary reference point set. Apply Canny edge detection to the ROI to obtain a point set containing the pupil edge.
3. Accurately locate the pupil boundary with Hough circle detection. Taking [L/2, L] as the range of the pupil radius, accumulate votes over the edge point set with the Hough circle transform; when the accumulator reaches its maximum, the resulting circle's center position and radius are the parameters of the pupil boundary, i.e. the parameters of the inner iris boundary.
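A minimal sketch of this ROI cropping and Hough voting, with the centroid and L returned by coarse_pupil_localization above. OpenCV's HoughCircles serves as the accumulator; with HOUGH_GRADIENT it recomputes Canny edges internally (param1 is the high Canny threshold), and the parameter values shown are assumptions:

```python
import cv2

def fine_pupil_localization(gray, centroid, L):
    """Steps 1-3 above: crop a square ROI of side L around the coarse pupil
    centroid, extract the edge point set, then vote with the Hough circle
    transform for radii in [L/2, L]."""
    side = int(round(L))
    cx, cy = int(round(centroid[0])), int(round(centroid[1]))
    x0, y0 = max(cx - side // 2, 0), max(cy - side // 2, 0)
    roi = gray[y0:y0 + side, x0:x0 + side]

    # Step 2: boundary reference point set. HoughCircles recomputes Canny
    # edges internally, so this call only materializes the edge point set.
    edges = cv2.Canny(roi, 75, 150)

    # Step 3: Hough voting restricted to radii [L/2, L]; the accumulator
    # maximum gives the pupil circle, i.e. the inner iris boundary.
    circles = cv2.HoughCircles(roi, cv2.HOUGH_GRADIENT, dp=1, minDist=side,
                               param1=150, param2=10,
                               minRadius=side // 2, maxRadius=side)
    if circles is None:
        return None
    x, y, r = circles[0, 0]            # strongest accumulator peak
    return x + x0, y + y0, r           # back to full-image coordinates
```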
The localization of the outer iris boundary is similar to the pupil localization described above. Since the pupil center is close to the iris center, and based on the relation in formula (1) between the inner iris boundary radius r_1 and the outer iris boundary radius r_0, the range of the outer boundary radius r_0 can be taken as [1.8 r_1, 4.0 r_1], with 8.0 r_1 as the side length of the square ROI for the outer boundary. The center and radius are likewise used to reduce the parameter search space, achieving fast localization of the outer iris boundary.
(2) Iris normalization
Iris localization yields the annular region where the iris lies. This embodiment normalizes it with the Daugman rubber sheet model [1], which unrolls the annular iris region into a rectangular image. After normalization, a rectangular image with a resolution of 72 × 512 pixels is obtained.
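A minimal sketch of the rubber sheet unrolling, under the simplifying assumption that each radial sampling line is interpolated linearly between the detected inner and outer circles (the full Daugman model also accommodates non-concentric boundaries):

```python
import numpy as np
import cv2

def rubber_sheet_normalize(gray, pupil, iris, height=72, width=512):
    """Daugman rubber-sheet normalization: sample the iris annulus at
    `height` radial and `width` angular positions, unrolling it into a
    72 x 512 rectangular image. `pupil` and `iris` are (x, y, r) circles."""
    xp, yp, rp = pupil
    xi, yi, ri = iris
    theta = np.linspace(0, 2 * np.pi, width, endpoint=False)
    rho = np.linspace(0, 1, height)

    # For each angle, interpolate between the inner and outer boundary so
    # that pupil dilation is factored out of the unrolled image.
    inner_x, inner_y = xp + rp * np.cos(theta), yp + rp * np.sin(theta)
    outer_x, outer_y = xi + ri * np.cos(theta), yi + ri * np.sin(theta)

    map_x = (1 - rho)[:, None] * inner_x[None, :] + rho[:, None] * outer_x[None, :]
    map_y = (1 - rho)[:, None] * inner_y[None, :] + rho[:, None] * outer_y[None, :]
    return cv2.remap(gray, map_x.astype(np.float32),
                     map_y.astype(np.float32), cv2.INTER_LINEAR)
```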
2. Iris recognition
Iris recognition is divided into three stages: (1) feature extraction; (2) feature selection with the method of segmented linear programming; (3) training and prediction with a support vector machine after feature normalization.
(1) Feature extraction
1. Divide the normalized iris image into non-overlapping sub-regions of 8 × 32 pixels, which produces 9 × 16 = 144 sub-regions in total.
2. For every sub-region, extract features with the multi-lobe differential filter (MLDF) of formula (2). This method uses a two-dimensional Gaussian function of the random variable X as the kernel of the MLDF:

$$f(X) = C_p \sum_{i=1}^{N_p} \frac{1}{\sqrt{2\pi\sigma_{pi}}} \exp\!\left[\frac{-(X-\mu_{pi})^2}{2\sigma_{pi}^2}\right] - C_n \sum_{j=1}^{N_n} \frac{1}{\sqrt{2\pi\sigma_{nj}}} \exp\!\left[\frac{-(X-\mu_{nj})^2}{2\sigma_{nj}^2}\right] \qquad (2)$$

where the variables μ_pi and σ_pi denote the center and scale of the i-th positive Gaussian lobe, the variables μ_nj and σ_nj denote the center and scale of the j-th negative Gaussian lobe, N_p and N_n denote the numbers of positive and negative lobes, and C_p, C_n > 0 denote the weights of the positive and negative lobes respectively.
3. Set multiple groups of filtering parameters to form multiple MLDF filters. Two-lobe and three-lobe filters are used. Within each kind, the lobes are separated by a distance and a direction, where the distance d ∈ {1, 2, 3, ..., 21} pixels gives 21 choices and the direction θ ∈ {0, π/4, π/2, 3π/4} gives 4 choices. Each lobe is represented by a two-dimensional filtering template of either 5 × 5 or 9 × 9 pixels. Altogether this generates 336 kinds of MLDF filters.
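One way to realize such a filter bank is sketched below; equal lobe weights (C_p = C_n = 1), the lobe scale derived from the template size, and the three-lobe layout (positive center, two negative flanks) are assumptions, since the text fixes only the distances, directions, template sizes and lobe counts. Smoothing with one lobe Gaussian and then differencing shifted copies is equivalent to convolving with the difference-of-Gaussians kernel of formula (2):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift

def mldf_response(region, d, theta, size, n_lobes=2):
    """Response of one multi-lobe differential filter: each lobe is a 2-D
    Gaussian template (5x5 or 9x9) and the lobes are spaced d pixels apart
    along direction theta."""
    sigma = size / 6.0  # assumed lobe scale for the given template size
    smooth = gaussian_filter(region.astype(np.float64), sigma)
    dy, dx = d * np.sin(theta), d * np.cos(theta)
    if n_lobes == 2:    # one positive and one negative lobe
        pos = shift(smooth, (-dy / 2, -dx / 2), order=1)
        neg = shift(smooth, (dy / 2, dx / 2), order=1)
        return pos - neg
    # three lobes: positive center, two negative flanks (assumed layout)
    neg1 = shift(smooth, (-dy, -dx), order=1)
    neg2 = shift(smooth, (dy, dx), order=1)
    return smooth - 0.5 * (neg1 + neg2)

def mldf_bank():
    """All 336 parameter combinations:
    21 distances x 4 directions x 2 template sizes x 2 lobe counts."""
    return [(d, theta, size, n)
            for d in range(1, 22)
            for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)
            for size in (5, 9)
            for n in (2, 3)]
```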
4. After convolving each sub-region with one of the multi-lobe differential filters, a filter result is obtained. Each pixel is 0-1 coded according to the sign of its value: a positive sign is coded as 1 and a negative sign as 0. The 8 × 32 sub-region is then concatenated row by row into a 256-dimensional vector of 0-1 values, which serves as one candidate feature vector of the sub-region. Each filter yields one feature, so after filtering with all 336 MLDF filters the sub-region yields its full candidate feature set, and the feature sets of all local sub-regions are combined to form the complete expression of the iris features. In this experiment, 144 × 336 = 48384 candidate feature vectors are produced in total. Because there are many candidate features, feature selection is required to select the optimal iris feature representation.
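Putting the tiling and the sign coding together, the candidate feature extraction for one normalized 72 × 512 image might look as follows (reusing the mldf_response and mldf_bank helpers sketched above):

```python
import numpy as np

def encode_subregion(subregion, filter_bank):
    """Candidate features of one 8 x 32 sub-region: filter with each of the
    336 MLDFs, binarize the response by sign (positive -> 1, negative -> 0)
    and flatten row-wise into a 256-dimensional 0-1 vector."""
    features = []
    for d, theta, size, n in filter_bank:
        response = mldf_response(subregion, d, theta, size, n)
        code = (response > 0).astype(np.uint8).reshape(-1)  # 256-dim 0-1 code
        features.append(code)
    return features  # 336 candidate feature vectors for this sub-region

def encode_iris(norm_img, filter_bank):
    """All 144 x 336 = 48384 candidate feature vectors of one normalized
    72 x 512 iris image, tiled into non-overlapping 8 x 32 sub-regions."""
    feats = []
    for r in range(0, 72, 8):
        for c in range(0, 512, 32):
            feats.extend(encode_subregion(norm_img[r:r + 8, c:c + 32],
                                          filter_bank))
    return np.array(feats)  # shape (48384, 256)
```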
(2) Feature selection with the method of segmented linear programming
The 48384 candidate features are divided into T segments, each segment containing D features. As shown in formula (3), feature selection is carried out on each segment of features with the method of linear programming:

$$\min_{\{\omega_{ti}\}} \sum_{t=1}^{T}\left\{\frac{\lambda^{+}}{N^{+}}\sum_{j=1}^{N^{+}}\delta_{tj}^{+} + \frac{\lambda^{-}}{N^{-}}\sum_{k=1}^{N^{-}}\delta_{tk}^{-} + \sum_{i=1}^{D} P_{ti}\,\omega_{ti}\right\} \qquad (3)$$

subject to the following constraints:

$$\sum_{i=1}^{D}\omega_{ti}\,x_{tij}^{+} \le \alpha_{t} + \delta_{tj}^{+},\quad j=1,2,\ldots,N^{+},\ t=1,2,\ldots,T \qquad (4)$$

$$\sum_{i=1}^{D}\omega_{ti}\,x_{tik}^{-} \ge \beta_{t} - \delta_{tk}^{-},\quad k=1,2,\ldots,N^{-},\ t=1,2,\ldots,T \qquad (5)$$

$$\delta_{tj}^{+} \ge 0,\quad j=1,2,\ldots,N^{+},\ t=1,2,\ldots,T \qquad (6)$$

$$\delta_{tk}^{-} \ge 0,\quad k=1,2,\ldots,N^{-},\ t=1,2,\ldots,T \qquad (7)$$

$$\omega_{ti} \ge 0,\quad i=1,2,\ldots,D,\ t=1,2,\ldots,T \qquad (8)$$
N⁺ and N⁻ denote the total numbers of intra-class and inter-class sample pairs respectively. ω_ti is the weight of the Hamming distance between the i-th feature vectors of segment t; the higher the value of ω_ti, the more important that Hamming distance. P_ti is the prior information characterizing the importance of the Hamming distance between the i-th feature vectors of segment t. δ⁺_tj and δ⁻_tk are the slack variables regulating the intra-class and inter-class discrimination thresholds of the features of segment t. λ⁺ and λ⁻ are constant terms; these parameters balance the sparsity of the result against the recognition accuracy. α_t and β_t are the means of the Hamming distances of all intra-class and all inter-class matched samples over the feature vectors of segment t, and serve respectively as the intra-class and inter-class discrimination thresholds in the loss function. x⁺_tij denotes the Hamming distance between the i-th feature vectors of segment t in the j-th intra-class matched pair, and x⁻_tik denotes the Hamming distance between the i-th feature vectors of segment t in the k-th inter-class matched pair. The Hamming distance between feature vectors is computed by formula (9):

$$HD(r_1, r_2) = \sum_{i=1}^{256} r_1(i) \oplus r_2(i) \qquad (9)$$

where r_1(i), r_2(i) are the i-th values of the two 256-dimensional feature vectors of 0-1 values; the Hamming distance between feature vectors can thus be obtained by applying XOR element-wise to the 0-1 sequences in the feature vectors.
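In code, the XOR-and-count of formula (9) is a one-liner over the 0-1 codes, e.g.:

```python
import numpy as np

def hamming_distance(r1, r2):
    # HD(r1, r2) = sum_i r1(i) XOR r2(i) over the 0-1 feature codes
    return int(np.count_nonzero(r1 != r2))
```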
The basic idea of the segmented linear programming model is, for each segment of features, to solve for a sparse representation of the model under the principle of maximizing the class margin. For each segment, the objective function consists of two parts: a loss function and a sparsity constraint. The loss function (λ⁺/N⁺) Σ_j δ⁺_tj + (λ⁻/N⁻) Σ_k δ⁻_tk is the penalty term for the intra-class and inter-class misclassification error within the features of segment t. Features that satisfy the maximum-margin principle (i.e. whose intra-class score is below α_t or whose inter-class score is above β_t) do not affect the loss function, while features that violate the maximum-margin principle are constrained by it. λ⁺ and λ⁻ in the penalty term are the penalty coefficients of the misclassification error: the larger their values, the more strongly the loss function constrains misclassified features and the more accurate the classification, but the lower the sparsity of the solution; conversely, smaller values lower the classification performance but raise the sparsity of the solution. In this method, λ⁺ = λ⁻ = 6. The other part of the objective function is the sparsity constraint Σ_i P_ti ω_ti, which ensures the sparsity of the solution. In addition, unlike the traditional Lasso sparse selection method, this function introduces the prior P_ti, so that the sparse selection process can take the importance of the features into account. Here the reciprocal of the discriminability index D_ti is used as the prior information P_ti of the Hamming distance between the i-th feature vectors of segment t; the calculation of D_ti is shown in formula (10):

$$D_{ti} = \frac{\left|m_{ti}^{+} - m_{ti}^{-}\right|}{\sqrt{\left((\sigma_{ti}^{+})^2 + (\sigma_{ti}^{-})^2\right)/2}} \qquad (10)$$

where m⁺_ti and σ⁺_ti denote the mean and standard deviation of the Hamming distance between the i-th feature vectors of segment t over all intra-class sample pairs, and m⁻_ti and σ⁻_ti denote the mean and standard deviation of the Hamming distance between the i-th feature vectors of segment t over all inter-class sample pairs. The larger the value of D_ti, the better the performance of this feature vector and the more effectively it can discriminate inter-class and intra-class samples. The higher D_ti, the smaller P_ti, the weaker the sparsity constraint on this feature vector, and hence the higher the probability that this feature vector is selected during learning.
The 48384 feature vectors are divided into T segments, and feature vector selection is carried out on each segment with the linear programming method, so that the feature vectors in each segment that effectively discriminate inter-class and intra-class samples can be accurately selected. Here T is set to 336 segments, so the number D of feature vectors per segment is 144.
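Since the segments share no variables and no constraints, the model of formula (3) can be solved one segment at a time. A sketch with scipy.optimize.linprog follows, folding in the thresholds α_t, β_t and the prior P_ti of formula (10); the small regularizing constants and the selection cutoff are assumptions:

```python
import numpy as np
from scipy.optimize import linprog

def select_segment_features(X_intra, X_inter, lam=6.0):
    """LP feature selection for ONE segment of D features.
    X_intra: (N+, D) Hamming distances of intra-class pairs,
    X_inter: (N-, D) Hamming distances of inter-class pairs.
    Features with non-zero weight omega are the selected ones."""
    n_pos, D = X_intra.shape
    n_neg = X_inter.shape[0]
    alpha, beta = X_intra.mean(), X_inter.mean()  # thresholds alpha_t, beta_t

    # Prior P_ti = 1 / D_ti with the discriminability index of formula (10)
    m_p, s_p = X_intra.mean(0), X_intra.std(0)
    m_n, s_n = X_inter.mean(0), X_inter.std(0)
    d_index = np.abs(m_p - m_n) / np.sqrt((s_p**2 + s_n**2) / 2 + 1e-12)
    prior = 1.0 / (d_index + 1e-12)

    # Variable layout: [omega (D) | delta+ (N+) | delta- (N-)], all >= 0
    c = np.concatenate([prior, np.full(n_pos, lam / n_pos),
                        np.full(n_neg, lam / n_neg)])
    # Formula (4): sum_i omega_i x+_ij - delta+_j <= alpha_t
    A_pos = np.hstack([X_intra, -np.eye(n_pos), np.zeros((n_pos, n_neg))])
    # Formula (5): -sum_i omega_i x-_ik - delta-_k <= -beta_t
    A_neg = np.hstack([-X_inter, np.zeros((n_neg, n_pos)), -np.eye(n_neg)])
    A_ub = np.vstack([A_pos, A_neg])
    b_ub = np.concatenate([np.full(n_pos, alpha), np.full(n_neg, -beta)])

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
    omega = res.x[:D]
    return np.flatnonzero(omega > 1e-8), omega  # indices of selected features
```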
(3) Training and prediction with a support vector machine after feature normalization
(1) Normalize the training data; the normalization formula is as follows:

$$X_i = \frac{x_i - \mu_i}{\sigma_i} \qquad (11)$$

where μ_i denotes the mean of the Hamming distance between the i-th feature vectors over all training sample pairs, σ_i denotes the standard deviation of the Hamming distance between the i-th feature vectors over all training sample pairs, x_i denotes the Hamming distance between the i-th feature vectors before normalization, and X_i denotes the Hamming distance between the i-th feature vectors after normalization.
(2) Train a support vector machine (SVM) on the normalized Hamming distances between the selected feature vectors;
(3) Predict with the trained support vector machine model.
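A sketch of this normalization-plus-SVM stage with scikit-learn; the linear kernel is an assumption, as the text does not specify one:

```python
import numpy as np
from sklearn.svm import SVC

def train_matcher(H_train, y_train):
    """H_train: (n_pairs, n_selected) Hamming distances between the selected
    feature vectors of each training pair; y_train: +1 intra-class, -1
    inter-class. Normalizes per feature as in formula (11), then trains
    an SVM on the normalized distances."""
    mu, sigma = H_train.mean(axis=0), H_train.std(axis=0) + 1e-12
    svm = SVC(kernel="linear")  # kernel choice is an assumption
    svm.fit((H_train - mu) / sigma, y_train)
    return svm, mu, sigma

def predict_pair(svm, mu, sigma, h_pair):
    """Decide whether a test pair of irises comes from the same eye."""
    return svm.predict(((h_pair - mu) / sigma).reshape(1, -1))[0]
```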
Embodiment 2
This embodiment carries out a concrete experiment with the method of Embodiment 1. The experimental data come from the CASIA-Iris-Thousand database, which collects iris pictures of the left and right eyes of 1000 people, with 10 pictures per iris, i.e. 20000 iris pictures in total. The training stage uses the iris pictures of the first 50 people as training samples, comprising 2250 intra-class pairs and 4500 inter-class pairs. Feature vector selection is first carried out with the method of formula (3) to pick out the important feature vectors; after the data are normalized according to formula (11), a support vector machine is trained to obtain the model. In the prediction stage, the experiment of this embodiment uses 87750 intra-class pairs and 950000 inter-class pairs and predicts with the trained model. The experimental results are measured with two indices, the equal error rate (EER) and the area under the curve (AUC), where the ordinate of the curve is the false rejection rate (FRR) and the abscissa is the false acceptance rate (FAR). The results are shown in Table 1; the smaller the indices EER and AUC, the better the performance of the model. From the experimental results it can be seen that the method of the invention performs better overall than the classical Lasso and LP feature selection methods. (A sketch of how these indices can be computed is given after Table 1.)
Table 1. Performance comparison of the inventive method with classical feature selection methods

        Lasso [2]   LP [3]    The inventive method
EER     5.1%        4.2%      3.79%
AUC     0.0187      0.0123    0.0104
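For reference, the two indices of Table 1 could be computed from per-pair SVM decision scores roughly as follows (the convention that a higher score means a more likely match is an assumption):

```python
import numpy as np

def eer_and_auc(scores_intra, scores_inter):
    """Equal error rate and area under the FRR-vs-FAR curve from the decision
    scores of intra-class and inter-class test pairs."""
    thresholds = np.sort(np.concatenate([scores_intra, scores_inter]))
    far = np.array([(scores_inter >= t).mean() for t in thresholds])  # false accepts
    frr = np.array([(scores_intra < t).mean() for t in thresholds])   # false rejects
    eer_idx = np.argmin(np.abs(far - frr))
    eer = (far[eer_idx] + frr[eer_idx]) / 2
    order = np.argsort(far)
    auc = np.trapz(frr[order], far[order])  # area under the FRR-FAR curve
    return eer, auc
```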
Obviously, the above embodiments of the present invention are merely examples intended to clearly illustrate the present invention, and are not a limitation of the embodiments of the present invention. For those of ordinary skill in the art, other changes in different forms may also be made on the basis of the above description. There is no need and no possibility to exhaust all the embodiments. Any modification, equivalent replacement and improvement made within the spirit and principle of the present invention shall be included within the protection scope of the claims of the present invention.

Claims (9)

  1. An iris recognition method based on segmented feature selection, characterized by comprising the following steps:
    S1. Input training sample pairs, perform iris localization on the two iris pictures of each training sample pair to obtain the annular region where the iris lies, and then unroll the annular region of the iris into a rectangular image;
    S2. Divide the rectangular image of the iris into multiple non-overlapping sub-regions, and build multiple multi-lobe differential filters with different filtering parameters;
    S3. After convolving each sub-region with a multi-lobe differential filter, obtain a filter result; 0-1 code each pixel of the filter result according to the sign of its value, a positive sign being coded as 1 and a negative sign as 0; concatenate the codes of the pixels of the filter result in order to obtain a multi-dimensional vector as a feature vector of the sub-region;
    S4. After each sub-region performs the operation of step S3 with each of the multiple multi-lobe differential filters of step S2, obtain a feature set composed of multiple feature vectors;
    S5. Each sub-region obtains its corresponding feature set after performing the operations of steps S3 and S4;
    S6. Divide all the feature vectors contained in the feature sets of all sub-regions into T segments, each segment having D features;
    S7. Apply the processing of S1~S6 to each training sample pair;
    S8. For each segment of feature vectors over all training sample pairs, select feature vectors with the method of linear programming;
    S9. Train a support vector machine on the Hamming distances between the feature vectors selected for each training sample pair; then, after processing a test sample pair according to steps S1~S7 and obtaining the selected feature vectors according to S8, input the Hamming distances between the selected feature vectors obtained from the test sample pair to the trained support vector machine, so as to carry out iris recognition.
  2. The iris recognition method based on segmented feature selection according to claim 1, characterized in that: the iris localization of step S1 comprises two processes, locating the outer iris boundary and locating the inner iris boundary, wherein the concrete process of locating the inner iris boundary is as follows:
    S11. Divide each of the two iris pictures of a training sample pair into sub-blocks, compute the average gray value of each sub-block, and take the minimum average gray value among the sub-blocks as the initial threshold T_0 of the iteration;
    S12. Binarize the image with the threshold T_0;
    S13. Traverse all connected regions in the binary image and compute the area S_t, eccentricity C_t and region aspect ratio η_t of each connected region;
    S14. Check whether each connected region satisfies the conditions: S_t greater than a preset s, C_t less than a preset c, and |η_t − 1| less than a preset ε;
    S15. If exactly one connected region satisfies the conditions in S14, that connected region is the pupil region P; if several connected regions satisfy the conditions, take the connected region with the minimum eccentricity C_t as the pupil region P; the iteration ends;
    S16. If no connected region satisfies the conditions in S14, set T_0 = T_0 + ΔT and iterate steps S12~S15 until T_0 exceeds the maximum gray value of the image;
    S17. If the pupil region P has still not been found when the iteration ends, search all iteration results for the connected region whose area is greater than s and whose eccentricity is minimal, and take that connected region as the pupil region P;
    S18. Take the centroid coordinates of the connected region P as the estimate of the pupil center; let L be the average of the length and width of the connected region and take L as the side length of the pupil region of interest (ROI); according to the center position and the side length of the pupil ROI, crop a square area from the image as the ROI containing the pupil;
    S19. Apply Canny edge detection to the ROI to obtain the edge point set containing the pupil;
    S20. Taking [L/2, L] as the range of the pupil radius, accumulate votes over the edge point set with Hough circle detection; when the accumulator reaches its maximum, the center position and radius of the resulting circle are the parameters of the pupil boundary, i.e. the parameters of the inner iris boundary;
    the concrete process of locating the outer iris boundary is as follows:
    S21. Take the centroid coordinates of the connected region P as the estimate of the iris center; let r_1 be the radius of the pupil obtained in step S20 and take 8.0 r_1 as the side length of the iris region of interest (ROI); according to the center position and the side length of the iris ROI, crop a square area from the picture as the ROI containing the iris;
    S22. Apply Canny edge detection to the ROI to obtain the edge point set containing the iris;
    S23. Taking [1.8 r_1, 4.0 r_1] as the range of the iris radius, accumulate votes over the edge point set with Hough circle detection; when the accumulator reaches its maximum, the center position and radius of the resulting circle are the parameters of the outer iris boundary;
    the annular region where the iris lies is obtained from the above processes of locating the inner iris boundary and the outer iris boundary.
  3. The iris recognition method based on segmented feature selection according to claim 2, characterized in that: after step S1 obtains the annular region where the iris lies, the annular region is normalized with the Daugman rubber sheet model, and the annular region of the iris is then unrolled into a rectangular image.
  4. The iris recognition method based on segmented feature selection according to claim 2, characterized in that: the multi-lobe differential filters built in step S2 use a two-dimensional Gaussian function of the random variable X as the kernel:

$$f(X) = C_p \sum_{i=1}^{N_p} \frac{1}{\sqrt{2\pi\sigma_{pi}}} \exp\!\left[\frac{-(X-\mu_{pi})^2}{2\sigma_{pi}^2}\right] - C_n \sum_{j=1}^{N_n} \frac{1}{\sqrt{2\pi\sigma_{nj}}} \exp\!\left[\frac{-(X-\mu_{nj})^2}{2\sigma_{nj}^2}\right]$$

    where the variables μ_pi and σ_pi denote the center and scale of the i-th positive Gaussian lobe, the variables μ_nj and σ_nj denote the center and scale of the j-th negative Gaussian lobe, N_p and N_n denote the numbers of positive and negative lobes, and C_p, C_n > 0 denote the weights of the positive and negative lobes respectively.
  5. The iris recognition method based on segmented feature selection according to claim 1, characterized in that: the concrete process by which step S8 selects features from each segment of feature vectors with the method of linear programming is as follows:

$$\min_{\{\omega_{ti}\}} \sum_{t=1}^{T}\left\{\frac{\lambda^{+}}{N^{+}}\sum_{j=1}^{N^{+}}\delta_{tj}^{+} + \frac{\lambda^{-}}{N^{-}}\sum_{k=1}^{N^{-}}\delta_{tk}^{-} + \sum_{i=1}^{D} P_{ti}\,\omega_{ti}\right\} \qquad (1)$$

    the constraints of formula (1) being as follows:

$$\sum_{i=1}^{D}\omega_{ti}\,x_{tij}^{+} \le \alpha_{t} + \delta_{tj}^{+},\quad j=1,2,\ldots,N^{+},\ t=1,2,\ldots,T \qquad (2)$$

$$\sum_{i=1}^{D}\omega_{ti}\,x_{tik}^{-} \ge \beta_{t} - \delta_{tk}^{-},\quad k=1,2,\ldots,N^{-},\ t=1,2,\ldots,T \qquad (3)$$

$$\delta_{tj}^{+} \ge 0,\quad j=1,2,\ldots,N^{+},\ t=1,2,\ldots,T \qquad (4)$$

$$\delta_{tk}^{-} \ge 0,\quad k=1,2,\ldots,N^{-},\ t=1,2,\ldots,T \qquad (5)$$

$$\omega_{ti} \ge 0,\quad i=1,2,\ldots,D,\ t=1,2,\ldots,T \qquad (6)$$

    where N⁺ and N⁻ denote the total numbers of intra-class and inter-class sample pairs respectively; ω_ti is the weight of the Hamming distance between the i-th feature vectors of segment t of a sample pair; P_ti is the prior information characterizing the importance of the Hamming distance between the i-th feature vectors of segment t of a sample pair; δ⁺_tj and δ⁻_tk are the slack variables regulating the intra-class and inter-class discrimination thresholds of the feature vectors of segment t; λ⁺ and λ⁻ are constant terms; α_t and β_t are the means of the Hamming distances of all intra-class and all inter-class matched samples over the feature vectors of segment t; x⁺_tij denotes the Hamming distance between the i-th feature vectors of segment t in the j-th intra-class matched pair, and x⁻_tik denotes the Hamming distance between the i-th feature vectors of segment t in the k-th inter-class matched pair.
  6. The iris recognition method based on segmented feature selection according to claim 5, characterized in that: the Hamming distance between feature vectors is computed as follows:

$$HD(r_1, r_2) = \sum_{i=1}^{M} r_1(i) \oplus r_2(i)$$

    where r_1(i), r_2(i) are the i-th values of the two feature vectors respectively, and M is the dimension of the feature vectors.
  7. The iris recognition method based on segmented feature selection according to claim 5, characterized in that: P_ti is expressed by the reciprocal of the discriminability index D_ti, where D_ti is expressed as:

$$D_{ti} = \frac{\left|m_{ti}^{+} - m_{ti}^{-}\right|}{\sqrt{\left((\sigma_{ti}^{+})^2 + (\sigma_{ti}^{-})^2\right)/2}}$$

    where m⁺_ti and σ⁺_ti denote the mean and standard deviation of the intra-class Hamming distance between the i-th feature vectors of segment t over all sample pairs, and m⁻_ti and σ⁻_ti denote the mean and standard deviation of the inter-class Hamming distance between the i-th feature vectors of segment t over all sample pairs.
  8. The iris recognition method based on segmented feature selection according to any one of claims 1~7, characterized in that: after step S8 carries out the feature selection, the Hamming distances between the selected feature vectors are normalized, and the support vector machine is then built on the normalized Hamming distances between the feature vectors.
  9. The iris recognition method based on segmented feature selection according to claim 8, characterized in that: the concrete process of the normalization is as follows:

$$X_i = \frac{x_i - \mu_i}{\sigma_i}$$

    where μ_i denotes the mean of the Hamming distance between the i-th feature vectors over all training sample pairs, σ_i denotes the standard deviation of the Hamming distance between the i-th feature vectors over all training sample pairs, x_i denotes the Hamming distance between the i-th feature vectors before normalization, and X_i denotes the Hamming distance between the i-th feature vectors after normalization.
CN201710567044.6A 2017-07-12 2017-07-12 Iris recognition method based on segmented feature selection Pending CN107358198A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710567044.6A CN107358198A (en) 2017-07-12 2017-07-12 Iris recognition method based on segmented feature selection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710567044.6A CN107358198A (en) 2017-07-12 2017-07-12 Iris recognition method based on segmented feature selection

Publications (1)

Publication Number Publication Date
CN107358198A true CN107358198A (en) 2017-11-17

Family

ID=60293289

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710567044.6A Pending CN107358198A (en) 2017-07-12 2017-07-12 Iris recognition method based on segmented feature selection

Country Status (1)

Country Link
CN (1) CN107358198A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108182401A (en) * 2017-12-27 2018-06-19 武汉理工大学 A kind of safe iris identification method based on polymerization block message
CN110427804A (en) * 2019-06-18 2019-11-08 中山大学 A kind of iris auth method based on secondary migration study
CN112580714A (en) * 2020-12-15 2021-03-30 电子科技大学中山学院 Method for dynamically optimizing loss function in error-cause strengthening mode

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101539990A (en) * 2008-03-20 2009-09-23 中国科学院自动化研究所 Method for selecting and rapidly comparing robust features of iris images
CN102902980A (en) * 2012-09-13 2013-01-30 中国科学院自动化研究所 Linear programming model based method for analyzing and identifying biological characteristic images
CN103745242A (en) * 2014-01-30 2014-04-23 中国科学院自动化研究所 Cross-equipment biometric feature recognition method
US20180165517A1 (en) * 2016-12-13 2018-06-14 Samsung Electronics Co., Ltd. Method and apparatus to recognize user

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101539990A (en) * 2008-03-20 2009-09-23 中国科学院自动化研究所 Method for selecting and rapidly comparing robust features of iris images
CN102902980A (en) * 2012-09-13 2013-01-30 中国科学院自动化研究所 Linear programming model based method for analyzing and identifying biological characteristic images
CN103745242A (en) * 2014-01-30 2014-04-23 中国科学院自动化研究所 Cross-equipment biometric feature recognition method
US20180165517A1 (en) * 2016-12-13 2018-06-14 Samsung Electronics Co., Ltd. Method and apparatus to recognize user

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
马宏伟 (Ma Hongwei): "Research on iris recognition methods with limited user cooperation", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108182401A (en) * 2017-12-27 2018-06-19 武汉理工大学 A kind of safe iris identification method based on polymerization block message
CN108182401B (en) * 2017-12-27 2021-09-03 武汉理工大学 Safe iris identification method based on aggregated block information
CN110427804A (en) * 2019-06-18 2019-11-08 中山大学 A kind of iris auth method based on secondary migration study
CN110427804B (en) * 2019-06-18 2022-12-09 中山大学 Iris identity verification method based on secondary transfer learning
CN112580714A (en) * 2020-12-15 2021-03-30 电子科技大学中山学院 Method for dynamically optimizing loss function in error-cause strengthening mode
CN112580714B (en) * 2020-12-15 2023-05-30 电子科技大学中山学院 Article identification method for dynamically optimizing loss function in error-cause reinforcement mode

Similar Documents

Publication Publication Date Title
CN109800648B (en) Face detection and recognition method and device based on face key point correction
CN107977609B (en) Finger vein identity authentication method based on CNN
Rodriguez-Damian et al. Automatic detection and classification of grains of pollen based on shape and texture
CN111639558B (en) Finger vein authentication method based on ArcFace Loss and improved residual error network
CN109815801A (en) Face identification method and device based on deep learning
CN109902590A (en) Pedestrian&#39;s recognition methods again of depth multiple view characteristic distance study
CN108427921A (en) A kind of face identification method based on convolutional neural networks
CN105138973A (en) Face authentication method and device
CN105354866A (en) Polygon contour similarity detection method
CN111401145B (en) Visible light iris recognition method based on deep learning and DS evidence theory
CN104102920A (en) Pest image classification method and pest image classification system based on morphological multi-feature fusion
CN109344856B (en) Offline signature identification method based on multilayer discriminant feature learning
CN102194114A (en) Method for recognizing iris based on edge gradient direction pyramid histogram
CN110334715A (en) A kind of SAR target identification method paying attention to network based on residual error
CN107358198A (en) A kind of iris identification method based on segmentation feature selection
Somasundaram Machine learning approach for homolog chromosome classification
CN109145971A (en) Based on the single sample learning method for improving matching network model
Huang et al. Design and Application of Face Recognition Algorithm Based on Improved Backpropagation Neural Network.
CN112183237A (en) Automatic white blood cell classification method based on color space adaptive threshold segmentation
Huo et al. An effective feature descriptor with Gabor filter and uniform local binary pattern transcoding for Iris recognition
Shaheed et al. A hybrid proposed image quality assessment and enhancement framework for finger vein recognition
CN106250814B (en) A kind of finger venous image recognition methods based on hypersphere granulation quotient space model
Chen et al. A finger vein recognition algorithm based on deep learning
CN106022348A (en) Finger retrieving method base on specific point direction field and fingerprint projection
CN112861871A (en) Infrared target detection method based on target boundary positioning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20171117