CN102436591A - Discrimination method of forged iris image - Google Patents
- Publication number
- CN102436591A (application CN201110362103A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Abstract
The invention discloses a method for discriminating forged iris images. The method comprises a construction phase and a discrimination phase. The construction phase consists of building a tree-structured visual dictionary and building a fake-iris discrimination classifier: the tree-structured visual dictionary is obtained by processing the sample iris images in a sample iris image library, and the fake-iris discrimination classifier is then constructed on top of this dictionary using a sparse coding method. In the discrimination phase, the classifier is used to judge whether a target iris image is genuine or forged. The invention can effectively discriminate many kinds of forged iris images; it is fast, accurate, robust, reliable and secure, and can be applied in identity authentication systems to improve their overall performance.
Description
Technical field
The invention belongs to the fields of computer vision, digital image processing and pattern recognition, and specifically relates to a method for discriminating forged iris images. It can be applied in biometric identity authentication systems to improve their overall performance.
Background technology
Biometric identification has attracted the attention of governments worldwide and has penetrated many aspects of daily life. Among the many biometric traits, the iris offers high uniqueness, strong stability and non-invasiveness. These advantages make the iris particularly suitable for personal authentication and identification; it has received growing attention over the past decade or more, and related research and technology have developed rapidly. Iris recognition is applied not only in e-commerce, finance, information security, transportation, public security and justice, but has also risen to the level of national strategic defense.
As iris recognition moves from the laboratory into practice and is deployed in many security applications, iris recognition systems themselves face security problems, and various forms of attack on such systems have appeared. Among them, fake irises pose a serious threat. There are many ways to attack a system with a fake iris, such as an iris image printed on paper, an iris image shown on a display screen, contact lenses printed with color iris patterns, or a synthetic eye with rich iris texture. A fake iris may cause an iris recognition system to make a wrong decision. For example, in a self-service border clearance system whose watch-list mode matches against a database of suspects' irises, a person wearing printed contact lenses may prevent the system from recognizing that the user is registered, so the system no longer provides the intended security. Effective fake-iris discrimination is therefore an important part of improving the reliability of iris recognition systems.
At present, although the security of iris recognition systems has become a focus of attention, practical fake-iris discrimination methods are few, and most are security strategies tailored to a particular system. John Daugman (U.S. Pat. No. 5,291,560) uses the spectral characteristics of the Fourier transform of the iris image for fake-iris discrimination, which can distinguish an iris image printed on paper from a genuine iris. Shi Pengfei et al. (CN101059837A) use the contrast and angular second moment features of the gray-level co-occurrence matrix as fake-iris discrimination features; this method is effective against printed contact lenses.
As iris recognition systems develop toward user-friendliness and convenience of use, the requirements on user cooperation are relaxed, which can degrade iris image quality and makes genuine/fake iris discrimination even more challenging. Fake irises are also becoming more diverse and of higher quality. Existing fake-iris discrimination algorithms still leave room for improvement, and how to discriminate fake irises quickly and effectively in an iris recognition system remains an urgent open problem.
Summary of the invention
(1) Technical problem to be solved
The technical problem to be solved by the invention is to provide a method for discriminating forged iris images that improves on existing methods, raising the precision, robustness and reliability of fake-iris discrimination and thereby improving the security of iris authentication systems.
(2) Technical solution
To solve the above technical problem, the invention provides a method for discriminating forged iris images. The method comprises a construction phase and a discrimination phase. The construction phase consists of building a tree-structured visual dictionary and building a fake-iris discrimination classifier. In the former, the sample iris images in a sample iris image library are processed to obtain the tree-structured visual dictionary; in the latter, the fake-iris discrimination classifier is constructed based on this dictionary and a sparse coding method. In the discrimination phase, the classifier is used to judge whether a target iris image is genuine or forged.
The construction of the tree-structured visual dictionary comprises the following steps:
Step S11: establish the sample iris image library so that it contains a number of genuine sample iris images and a number of forged sample iris images. Step S12: preprocess the genuine and forged sample iris images in the library to obtain normalized sample iris images. Step S13: extract low-level features from the normalized sample iris images; a low-level feature here means a feature extracted directly from the image by an operator. Step S14: build the tree-structured visual dictionary from the extracted low-level features. The dictionary is structured as follows: it comprises a root node and several layers, each layer containing a number of nodes. The root node is a virtual node representing the whole feature space. The first layer contains k1 nodes, the children of the root node, corresponding to k1 feature cluster centers. The second layer contains the children of the first-layer nodes, each first-layer node having k2 children. Each subsequent layer consists of the children of the layer above it, each parent node having at most ki children, where i > 2.
The construction of the fake-iris discrimination classifier comprises the following steps:
Step S15: starting from the root node of the tree-structured visual dictionary and descending through it, encode the low-level features layer by layer in cascade to obtain the feature vector of the low-level features. Step S16: using the feature vectors of the genuine and forged sample iris images as positive and negative samples respectively, train a support vector machine to obtain the fake-iris discrimination classifier.
The discrimination phase comprises the following steps:
Step S21: preprocess the target iris image to obtain a normalized target iris image. Step S22: extract low-level features from the normalized target iris image. Step S23: using the same tree-structured visual dictionary as in step S15, descend from the root node and encode the low-level features layer by layer in cascade to obtain their feature vector. Step S24: input the feature vector obtained in step S23 into the fake-iris classifier and judge from the classifier output whether the target iris image is a forged iris image.
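The discrimination-phase steps S21 to S24 can be sketched as a single pipeline. The following minimal Python sketch is illustrative only: every function name and body is a hypothetical stand-in (the patent defines no code), with each stage stubbed so the flow is runnable end to end.

```python
import numpy as np

# Hypothetical stand-ins for the construction-phase components; the
# patent names no functions, so these names are illustrative only.
def preprocess(eye_image):
    return eye_image[:32, :256]             # pretend polar normalization

def extract_low_level_features(norm_image):
    return norm_image.reshape(-1, 16)       # pretend dense descriptors

def cascade_encode(features, tree_dictionary):
    return features.mean(axis=0)            # pretend tree-LLC feature vector

def classify(feature_vector, svm):
    return int(feature_vector.sum() >= 0)   # pretend SVM: 1 genuine, 0 fake

def discriminate(eye_image, tree_dictionary=None, svm=None):
    """Discrimination phase, steps S21-S24: preprocess, extract
    low-level features, cascade-encode along the tree dictionary,
    classify; 0 means a forged iris and triggers an alert."""
    norm = preprocess(eye_image)
    feats = extract_low_level_features(norm)
    vec = cascade_encode(feats, tree_dictionary)
    label = classify(vec, svm)
    if label == 0:
        print("alert: forged iris image")
    return label

print(discriminate(np.ones((64, 512))))     # 1 -> accepted as genuine
```

Only the control flow here mirrors the patent; each stub would be replaced by the concrete operations of steps S12 to S16.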
The preprocessing in step S12 may comprise the following steps:
segmenting the sample iris image to obtain the iris region; fitting the boundaries of the pupil and the iris within this region; and transforming the iris region to polar coordinates to complete the normalization of the sample iris image.
In step S13, a densely sampled scale-invariant feature transform (SIFT) descriptor may be used for low-level feature extraction.
In step S14, the extracted low-level features may be used as the input of a cascaded k-means clustering, and the tree-structured visual dictionary is learned through this cascaded k-means clustering.
In step S15, the encoding may adopt locality-constrained linear coding (LLC).
In step S15, for each sample iris image, count the number of paths passing through each node of the tree-structured visual dictionary during the encoding of its low-level features, and concatenate these counts with the LLC coding results of every layer of the dictionary to form the feature vector. In each layer's LLC coding, the coding result of any node that was not a candidate word is recorded as 0.
In step S16, the feature vectors of the genuine and forged sample iris images are used as positive and negative samples respectively, and a support vector machine is trained to obtain the fake-iris discrimination classifier.
The preprocessing in step S21 comprises the following steps:
segmenting the target iris image to obtain the iris region; fitting the boundaries of the pupil and the iris within this region; and transforming the iris region to polar coordinates to complete the normalization of the iris image.
In step S22, a densely sampled scale-invariant feature transform (SIFT) descriptor may be used for low-level feature extraction.
In step S23, the encoding may adopt locality-constrained linear coding (LLC).
In step S23, for each target iris image, count the number of paths passing through each node of the tree-structured visual dictionary during the encoding of its low-level features, and concatenate these counts with the LLC coding results of every layer of the dictionary to form the feature vector. In each layer's LLC coding, the coding result of any node that was not a candidate word is recorded as 0.
In step S24, when the fake-iris classifier judges the target iris image to be a forged iris image, an alarm signal is issued.
(3) Beneficial effects
The fake-iris discrimination method proposed by the invention adopts a cascaded tree-structured visual dictionary with locality-constrained sparse coding. Its benefits are reflected in the following aspects:
1. The tree-structured visual dictionary of the invention uses overlapping partitions of the feature space and takes the relationships between visual words into account, so that coding along the cascaded tree dictionary has a smaller quantization error; this improves the precision of the fake-iris discrimination method.
2. The sparse coding of each layer adopts a locality-constrained scheme that extracts the salient characteristics of the features in the tree-structured visual dictionary. Coding along multiple paths of the tree dictionary reduces the dependence on any single layer, and thus the accumulated quantization error of the cascaded coding, improving the robustness and reliability of the method.
3. The invention uses a support vector machine as the classifier, trained automatically. Training an SVM on sparse coding features as classification features gives good generalization ability, which is particularly suitable when sample iris images are scarce and fake irises come in many forms.
4. The invention places no special requirements on hardware, can discriminate many kinds of fake irises, and involves no complex computation, so it is easy to deploy in real systems and improves the security of iris recognition systems.
In summary, the invention can effectively discriminate forged iris images from genuine ones and improves the precision, robustness and reliability of iris recognition systems, thereby improving the security of iris authentication systems. The invention can discriminate fake-iris attacks such as paper-printed irises, printed contact lenses, synthetic eyes and synthesized iris images; it can work together with various iris recognition systems and be applied in fields such as national defense, finance and criminal investigation, as well as in any other field requiring identity authentication.
Description of drawings
Fig. 1a is a flowchart of the construction phase of the fake-iris discrimination method;
Fig. 1b is a flowchart of the discrimination phase of the fake-iris discrimination method;
Fig. 2a shows sample images of genuine irises;
Fig. 2b shows sample images of fake irises;
Fig. 3a to Fig. 3c are schematic diagrams of the iris image preprocessing steps;
Fig. 4 is a flowchart of the training of the tree-structured visual dictionary;
Fig. 5 is a schematic diagram of the cascaded coding process based on the tree dictionary.
Embodiment
To make the objectives, technical solutions and advantages of the invention clearer, the invention is further explained below with reference to specific embodiments and the accompanying drawings.
In general, an iris recognition system is divided into two major modules, hardware and software: the iris image acquisition device and the iris recognition algorithm. The iris recognition algorithm comprises three main steps: image preprocessing, feature extraction and pattern matching. The method proposed by the invention is applied before the matching part of the iris recognition software module and can be used with many kinds of iris recognition systems to improve their security.
Specifically, the invention proposes a method for discriminating forged iris images, realized through a cascaded tree-structured visual dictionary and locality-constrained sparse coding.
The method comprises two phases, a construction phase and a discrimination phase; the construction phase in turn comprises the construction of the tree-structured visual dictionary and the construction of the fake-iris discrimination classifier. The concrete steps are described below.
The construction of the tree-structured visual dictionary comprises: collecting the sample iris images of the sample iris image library, which include genuine sample iris images and forged sample iris images; preprocessing the collected sample iris images, including segmenting the iris region, fitting the boundaries of the pupil and the iris within this region, and normalizing the iris image; extracting low-level features from the normalized genuine and forged sample iris images; and learning a cascaded k-means clustering on the extracted low-level features to obtain the tree-structured visual dictionary.
The construction of the fake-iris discrimination classifier comprises: descending from the root node of the tree-structured visual dictionary and encoding the low-level features in cascade, each layer's encoding using locality-constrained linear sparse coding over the corresponding word set of that layer of the tree dictionary; concatenating the coding results of the iris image into the feature vector used for fake-iris discrimination and for training a support vector machine (SVM); and constructing the fake-iris discrimination classifier according to how the texture features of the genuine and forged sample iris images are distributed in the feature space. The support vector machine is a machine learning method based on statistical learning theory developed in the mid-1990s; it improves the generalization ability of the learning machine by seeking the minimum of the structural risk, minimizing both the empirical risk and the confidence interval. For details see, e.g., Christopher J. C. Burges, "A Tutorial on Support Vector Machines for Pattern Recognition", Data Mining and Knowledge Discovery 2:121-167, 1998.
The discrimination phase comprises: acquiring the target iris image, which may be a genuine or a forged iris image; preprocessing this image, including segmenting the iris region, fitting the boundaries of the pupil and the iris within this region, and normalizing the iris image; extracting low-level features from the resulting normalized iris image; descending from the root node of the tree-structured visual dictionary and encoding the low-level features in cascade, each layer using locality-constrained linear sparse coding over that layer's word set; concatenating the coding results of the iris image into a feature vector; inputting it into the trained fake-iris discrimination classifier; and judging from the classifier's output whether the target iris image comes from a fake iris. If so, an alarm is raised and the subsequent system protection measures are taken; if not, the image is accepted and passed on to the subsequent iris recognition process.
The key steps involved in the invention are explained one by one below. The concrete form of each basic step of the method is as follows.
First, the construction phase covers feature description based on the cascaded tree-structured visual dictionary with locality-constrained sparse coding, and the construction of the support vector machine based on statistical learning. To accurately describe the texture differences between forged and genuine iris images, scale-invariant feature transform (SIFT) descriptors are extracted as low-level features; the low-level features are described by sparse coding based on the cascaded tree-structured visual dictionary with locality constraints, and the sparse coding results along the coding paths of the tree dictionary are concatenated into a feature vector. Finally, a classifier is trained with the support vector machine for discriminating genuine from forged iris images. The detailed process is as follows:
Step S11: establish a sample iris image library containing a number of genuine sample iris images and a number of forged sample iris images; the forged sample iris images serve as negative samples and the genuine ones as positive samples. Fig. 2a gives examples of genuine sample iris images in the library, and Fig. 2b gives examples of forged sample iris images.
Step S12: preprocess the genuine and forged sample iris images in the library. Since the preprocessing operates identically on genuine and forged iris images, both are referred to simply as iris images in the description of step S12 below, and this usage is kept in steps S13 and S14. Fig. 3a to Fig. 3c are schematic diagrams of the iris image preprocessing steps. The iris image shown in Fig. 3a contains not only the iris but also the pupil, sclera, eyelids and eyelashes. The first preprocessing step is to segment the iris region out of the iris image. The contours of the pupil and the iris are both close to circular, so the task is to find the coordinates and radii of the circles fitting the pupil and iris boundaries, and then transform the iris region to polar coordinates to normalize the iris. The gray level of the human pupil is far lower than that of the surrounding region, so a thresholding method can isolate the pupil region; the centroid of this region serves as the preliminary pupil center, around which templates of variable size are used to fit the pupil edge, the best fit giving the pupil localization result. The center of the iris is close to the center of the pupil, so the same approach finds the center and radius of the iris. Fig. 3b shows the result of iris localization for the image in Fig. 3a; the circles represent the fitted outer boundaries of the pupil and the iris. Taking the pupil center as the origin, the genuine or forged iris image is transformed from rectangular to polar coordinates; under polar coordinates the iris image is scaled to a uniform size, realizing the normalization of the iris image, with an annular iris region taken as the region of interest. The image obtained after this transformation is called the normalized iris image. Fig. 3c is the normalized iris image of the image in Fig. 3a.
Step S13: extract low-level features from the normalized iris image, using a densely sampled scale-invariant feature transform (SIFT) descriptor. A low-level feature here means a feature extracted directly from the image by an operator (here the SIFT descriptor). Regions of 16 × 16 pixels are chosen at equal intervals, and a 128-dimensional SIFT description is extracted from each 16 × 16 region as the low-level feature. The computation of the SIFT descriptor is disclosed in patent document US6711293B1 and is not repeated here.
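The dense grid sampling of step S13 can be sketched as follows. A full 128-D SIFT descriptor is out of scope here, so this sketch substitutes a single gradient-orientation histogram per 16 × 16 patch as a simplified stand-in (real SIFT further splits each patch into 4 × 4 sub-cells); the step size and bin count are assumptions for illustration.

```python
import numpy as np

def dense_descriptors(img, patch=16, step=16, bins=8):
    """Extract a gradient-orientation-histogram descriptor from every
    patch x patch cell on a regular grid; a simplified stand-in for
    densely sampled SIFT."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)
    feats = []
    for y in range(0, img.shape[0] - patch + 1, step):
        for x in range(0, img.shape[1] - patch + 1, step):
            a = ang[y:y + patch, x:x + patch].ravel()
            m = mag[y:y + patch, x:x + patch].ravel()
            # magnitude-weighted orientation histogram, L2-normalized
            hist, _ = np.histogram(a, bins=bins, range=(0, 2 * np.pi),
                                   weights=m)
            n = np.linalg.norm(hist)
            feats.append(hist / n if n > 0 else hist)
    return np.array(feats)

rng = np.random.default_rng(0)
norm_iris = rng.random((32, 256))   # stands in for a normalized iris image
F = dense_descriptors(norm_iris)
print(F.shape)                      # (32, 8): a 2x16 grid of 8-D histograms
```

Each row of `F` plays the role of one low-level feature vector fed into the dictionary learning of step S14.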
Step S14: take the extracted low-level features as the input of a cascaded k-means clustering and learn the tree-structured visual dictionary. The dictionary is structured as follows: it comprises a root node and several layers, each layer containing a number of nodes. The root node is a virtual node representing the whole feature space. The first layer contains k1 nodes, the children of the root node, corresponding to k1 feature cluster centers. The second layer contains the children of the first-layer nodes, each first-layer node having k2 children. Each subsequent layer consists of the children of the layer above it, each parent node having at most ki children, where i > 2.
The cascade in this step means that the learning of the tree dictionary proceeds layer by layer down the tree, each layer's k-means being constrained by the nodes of the layer above.
The learning process of the tree-structured visual dictionary is as follows: k-means clustering is run on all input low-level features, yielding the k1 visual words of the first layer; all low-level features are labeled with one of the first-layer visual words by the nearest-neighbor rule, forming k1 feature subsets. For each node of the first layer, k-means clustering on its corresponding feature subset yields k2 visual words as its children in the second layer of the dictionary, and so on, giving the multi-layer tree-structured visual dictionary. This learning process is the cascaded k-means clustering. In learning layer i of the dictionary, if the k-means clustering produces an empty cluster, the corresponding parent node has fewer than ki children. Learning stops either when the ratio of empty nodes to the total number of nodes in some layer exceeds a threshold, for example 20%, or when the preset maximum number of layers kmax of the tree dictionary is reached. Fig. 4 gives the flowchart of the training process of the tree-structured visual dictionary.
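The cascaded k-means learning above can be sketched in a few lines of NumPy. The branching factor, depth and stopping rule below are simplified assumptions for illustration; in particular, the patent's empty-node-ratio threshold is replaced here by a simple minimum-points check.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means; returns (centres, labels)."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centres[j] = X[labels == j].mean(0)
    return centres, labels

def build_tree(X, branching, depth, min_pts=None):
    """Cascaded k-means: each node's assigned features are re-clustered
    to form its children, so every layer is constrained by the layer
    above. Returns a nested dict {centre, children}."""
    min_pts = min_pts or branching
    node = {"centre": X.mean(0), "children": []}
    if depth == 0 or len(X) < min_pts:
        return node
    _, labels = kmeans(X, branching)
    for j in range(branching):
        sub = X[labels == j]
        if len(sub):                 # k-means may leave a cluster empty
            node["children"].append(build_tree(sub, branching, depth - 1))
    return node

rng = np.random.default_rng(1)
X = rng.random((500, 8))             # stands in for low-level features
tree = build_tree(X, branching=4, depth=2)
print(len(tree["children"]))         # first-layer visual words (up to 4)
```

Each non-root node's `centre` plays the role of a visual word; the nesting realizes the layer-by-layer constraint of the cascade.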
Step S15: starting from the root node of the tree-structured visual dictionary obtained in step S14 and descending through it, encode the low-level features in cascade; each layer uses locality-constrained linear coding (LLC) over the corresponding word set of that layer of the tree dictionary. Cascade here means that the coding proceeds layer by layer down the tree, each layer's coding being constrained by the coding result of the layer above. For the details of LLC see, e.g., Wang J., Yang J., Yu K., Lv F., Huang T., Gong Y., "Locality-constrained linear coding for image classification", CVPR 2010: The Twenty-Third IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, June 13-18, 2010. The locality-constrained linear sparse coding amounts to minimizing the energy function

$$\min_{C} \sum_{i=1}^{N} \lVert x_i - B c_i \rVert^2 + \lambda \lVert d_i \odot c_i \rVert^2, \quad \text{s.t. } \mathbf{1}^{\top} c_i = 1,\ \forall i$$

where $X = [x_1, x_2, \ldots, x_N] \in R^{D \times N}$ are the N D-dimensional features extracted from one image, $B \in R^{D \times M}$ is the visual dictionary of M visual words used for coding, $C = [c_1, c_2, \ldots, c_N]$ denotes the coding result of X, $\odot$ denotes element-wise multiplication of the corresponding vector entries, $d_i$ is a vector measuring the distance between feature $x_i$ and each visual word of B, and $\lambda$ is a parameter between 0 and 1.
Fig. 5 illustrates the coding process based on the tree-structured visual dictionary, in which hollow circles represent candidate words (corresponding to nodes of the tree). At each layer of the dictionary, the LLC method encodes the low-level feature description. A check mark "√" marks the words with the largest response values in a layer; the children of these marked nodes become the candidate nodes of the next layer. The visual dictionary used for coding the first layer consists of all nodes of that layer; in subsequent layers the dictionary is chosen according to the coding result of the previous layer, the dictionary used at layer j being composed of the children of the k words with the largest response values in the layer j−1 coding result (i.e., the k words corresponding to the k largest absolute values of $C_{j-1}$). The cascaded LLC coding based on the tree dictionary can then be solved layer by layer as

$$\min_{c_j} \lVert x - B_j c_j \rVert^2 + \lambda \sum_{l=1}^{m_j} \left( \exp\!\left( \frac{\mathrm{dist}(x, b_l)}{\sigma} \right) c_{j,l} \right)^2, \quad \text{s.t. } \mathbf{1}^{\top} c_j = 1$$

where $l = 1, 2, \ldots, m_j$ indexes the candidate visual words of layer j, $m_j$ is the number of candidate visual words of layer j, $B_j$ is the dictionary they form, x is a low-level feature, and $\lambda$ and $\sigma$ are the two parameters of the LLC method.
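The per-layer LLC step admits the analytical approximation given in the cited Wang et al. paper: code each feature with only its k nearest candidate words and renormalize so the coefficients sum to one. A minimal sketch under assumed parameter values:

```python
import numpy as np

def llc_code(x, B, knn=5, lam=1e-4):
    """Locality-constrained linear coding via the analytical
    approximation of Wang et al. (CVPR 2010): solve on the knn
    nearest words; all other coefficients stay 0."""
    d = np.linalg.norm(B - x, axis=1)
    idx = np.argsort(d)[:knn]               # local words = nearest neighbours
    z = B[idx] - x                          # shift words to the feature
    C = z @ z.T                             # local covariance
    C += lam * np.trace(C) * np.eye(knn)    # regularize for stability
    c_hat = np.linalg.solve(C, np.ones(knn))
    code = np.zeros(len(B))
    code[idx] = c_hat / c_hat.sum()         # enforce sum-to-one constraint
    return code

rng = np.random.default_rng(2)
B = rng.random((64, 8))                     # one layer's candidate words
x = rng.random(8)                           # one low-level feature
c = llc_code(x, B)
print(np.count_nonzero(c), round(c.sum(), 6))   # 5 1.0: sparse, sums to 1
```

In the cascaded scheme, the nonzero (largest-response) entries of `c` at layer j−1 would select which children form the dictionary `B` at layer j.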
For each sample iris image, count the number of paths passing through each node of the tree-structured visual dictionary during the encoding of its low-level features, and concatenate these counts with the LLC coding results of every layer of the dictionary into the feature vector. In each layer's LLC coding, the coding result of any node that was not a candidate word is recorded as 0.
Step S16: Take the feature vectors of the genuine iris images and the fake iris images as positive samples and negative samples respectively, and train a support vector machine (SVM) to obtain the fake-iris discrimination classifier. The support vector machine is a machine learning method based on statistical learning theory; it improves the generalization ability of the learning machine by seeking the minimum structural risk, minimizing the empirical risk and the confidence interval together. For a concrete treatment see Christopher J. C. Burges, "A Tutorial on Support Vector Machines for Pattern Recognition", Data Mining and Knowledge Discovery 2:121-167, 1998. Fake-iris discrimination is a two-class problem — a binary classification between genuine iris images and fake iris images — so a single support vector machine suffices. The decision function of the support vector machine is:

f(x) = sign( Σ_{i=1…n} a_i y_i K(x_i, x) + b )
Here, x_i is a training sample; y_i is the class label of training sample i; i is the index of the training sample; n is the number of training samples; x is the sample to be classified; K(x_i, x) is a kernel function satisfying the Mercer condition, corresponding to an inner product in some transformed space; and sign(·) is the indicator function, which outputs 1 when its argument is ≥ 0 and 0 otherwise. The coefficient a_i of a support vector is nonzero, while the a_i of a non-support vector is 0. The Mercer condition is the condition that the kernel function of a support vector machine must satisfy: the necessary and sufficient condition for a symmetric kernel function K(x, z) to have an inner-product form in a Hilbert space is that ∫ K(x, z) g(x) g(z) dx dz ≥ 0 holds for every square-integrable function g; such a K(x, z) is called a Mercer kernel. The feature vectors extracted in the preceding steps serve as training samples: the features extracted from forged sample iris images are given class label 0, the features extracted from genuine sample iris images are given class label 1, and the classifier is trained.
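Once the coefficients a_i and b are known, the decision function can be evaluated directly. Below is a NumPy sketch using an RBF kernel as one Mercer-kernel choice; the toy support vectors and coefficient values are illustrative, not trained values from the patent.

```python
import numpy as np

def svm_decision(x, support_x, support_ay, b=0.0, gamma=0.5):
    """Evaluate the SVM decision function described in the text,
    f(x) = sign(sum_i a_i * y_i * K(x_i, x) + b), with the RBF kernel
    K(x_i, x) = exp(-gamma * ||x_i - x||^2).  support_ay holds the
    products a_i * y_i of the support vectors (non-support vectors
    have a_i = 0 and drop out of the sum).  Following the sign
    convention in the text, the output is 1 when the argument is >= 0
    and 0 otherwise."""
    K = np.exp(-gamma * np.sum((support_x - x) ** 2, axis=1))
    return 1 if float(support_ay @ K) + b >= 0 else 0

# toy model: one genuine-class and one fake-class support vector
support_x = np.array([[0.0, 0.0], [2.0, 2.0]])
support_ay = np.array([1.0, -1.0])
```

A sample near the genuine-class support vector is classified 1, one near the fake-class support vector 0, matching the rule that a classifier output of 0 indicates a fake iris image.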
After the training of the construction stage, if the classifier output for a given iris image is 0, that iris image is a fake iris image; otherwise it is a genuine iris image.
Second, in the discrimination stage for fake iris images, the classifier built in the construction stage is used to discriminate the target iris image, determining whether the target iris image comes from a fake iris.
The target iris image may be a genuine iris image or a fake iris image. The target iris image is first preprocessed to obtain a normalized iris image; on this normalized iris image, low-level features are extracted in the same way as in the construction stage and encoded with the same tree-structured visual dictionary; finally, the fake-iris discrimination classifier obtained in the construction stage is used to determine whether the target iris image comes from a genuine iris. The detailed process is as follows:
Step S21: Preprocess the target iris image. Figs. 3a to 3c are schematic diagrams of the iris image preprocessing steps. The target iris image shown in Fig. 3a contains not only the iris but also the pupil, the sclera, the eyelids, the eyelashes, and so on. The first step of preprocessing is to separate the iris region from the target iris image. The outer boundaries of the pupil and the iris are both very nearly circular, so the task is to find the center coordinates and radii of the circles fitting the pupil and iris boundaries, and then to transform the iris region into polar coordinates to normalize the iris. The gray level of the human pupil is far lower than that of the surrounding region, so the pupil region can be isolated with a threshold method; the centroid of this region serves as a preliminary pupil center, near which templates of variable size are used to fit the pupil edge, and the best fitting result is the pupil localization result. The center of the iris is close to the center of the pupil, so the same method can be used to find the center and radius of the iris. Fig. 3b shows the result of iris localization on the iris image of Fig. 3a, where the circles represent the fitted outer boundaries of the pupil and the iris. Taking the pupil center as the origin, the genuine and fake iris images are transformed from rectangular coordinates into a polar coordinate system; under polar coordinates the iris image is scaled to a uniform size, completing the normalization, with a full iris annulus taken as the region of interest. The iris image transformed into polar coordinates after normalization is called the normalized iris image. Fig. 3c is the normalized iris image of the image in Fig. 3a.
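The thresholding-plus-centroid pupil estimate and the polar unwrapping described above might be sketched as follows. The threshold value, output size, and nearest-neighbor sampling are assumed simplifications; the patent's template-based boundary fitting is not reproduced here.

```python
import numpy as np

def pupil_center(img, thresh=50):
    """Preliminary pupil center: the centroid of the dark region
    obtained by thresholding, exploiting the fact that the pupil is
    far darker than the surrounding region."""
    ys, xs = np.nonzero(img < thresh)
    return float(xs.mean()), float(ys.mean())

def normalize_iris(img, cx, cy, r_pupil, r_iris, h=32, w=256):
    """Unwrap the iris annulus, pupil center as origin, into a fixed
    h x w polar image (the normalization step).  (cx, cy), r_pupil,
    r_iris would come from the circle-fitting step."""
    rows, cols = img.shape
    theta = np.linspace(0.0, 2.0 * np.pi, w, endpoint=False)
    out = np.zeros((h, w), dtype=img.dtype)
    for i, r in enumerate(np.linspace(r_pupil, r_iris, h)):
        xs = np.clip(np.round(cx + r * np.cos(theta)).astype(int), 0, cols - 1)
        ys = np.clip(np.round(cy + r * np.sin(theta)).astype(int), 0, rows - 1)
        out[i] = img[ys, xs]          # nearest-neighbor sampling
    return out

# synthetic eye: a dark pupil disk on a bright background
img = np.full((100, 100), 200, dtype=np.uint8)
yy, xx = np.mgrid[0:100, 0:100]
img[(xx - 50) ** 2 + (yy - 40) ** 2 < 10 ** 2] = 10
cx, cy = pupil_center(img)
polar = normalize_iris(img, cx, cy, r_pupil=10, r_iris=30)
```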
Step S22: Extract low-level features from the normalized iris image, using the densely sampled scale-invariant feature transform (SIFT) descriptor for low-level feature extraction. Here a low-level feature means a feature extracted directly from the image with some operator (in this case, the SIFT descriptor). Regions of 16 × 16 pixels are selected at equal intervals, and a 128-dimensional SIFT descriptor is extracted from each 16 × 16 pixel region as a low-level feature. The sampling interval is kept identical to that used in step S13. The computation of the SIFT descriptor is disclosed in patent document US6711293B1 and is not repeated here.
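The equally spaced 16 × 16 sampling grid can be sketched as follows. The step size of 8 pixels is an assumed value — the patent only requires equal intervals — and computing the actual 128-d SIFT descriptor for each region would be delegated to a standard SIFT implementation (e.g. OpenCV's SIFT with keypoints placed at these centers), which is not reproduced here.

```python
import numpy as np

def dense_grid_centers(img, patch=16, step=8):
    """Centers of the 16 x 16 regions chosen at equal intervals over
    the normalized iris image; one 128-d SIFT descriptor would then
    be computed per region."""
    h, w = img.shape
    return [(y + patch // 2, x + patch // 2)
            for y in range(0, h - patch + 1, step)
            for x in range(0, w - patch + 1, step)]

# a 32 x 256 normalized iris image yields a 3 x 31 grid of regions
centers = dense_grid_centers(np.zeros((32, 256)))
```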
Step S23: Using the same tree-structured visual dictionary as in step S15, descend from the root node of the tree-structured visual dictionary and encode the low-level features in cascade; the word set of each layer of the tree-structured dictionary is encoded with locality-constrained linear coding (LLC). "Cascade" means that the encoding proceeds downward layer by layer along the tree dictionary, each layer's encoding being constrained by the result of the layer above. The locality-constrained linear coding process reduces to minimizing the energy function
min_C Σ_{i=1…N} ||x_i − B c_i||² + λ ||d_i ⊙ c_i||²,  subject to 1ᵀc_i = 1 for all i,

where X = [x_1, x_2, …, x_N] ∈ R^(D×N) are the N D-dimensional features extracted from one image; B ∈ R^(D×M) is the visual dictionary containing the M visual words used for encoding; C = [c_1, c_2, …, c_N] denotes the encoding result of X; ⊙ denotes element-wise multiplication of the corresponding vector elements; d_i is the vector of distances between feature x_i and each visual word in B; and λ is a parameter between 0 and 1.
Fig. 5 illustrates the encoding process based on the tree-structured visual dictionary, where hollow circles denote candidate words (corresponding to nodes in the tree structure). At each layer of the tree-structured visual dictionary, the LLC method is used to encode the low-level feature descriptors. "√" marks the words with the largest response values within a layer; the child nodes of these marked nodes serve as the candidate nodes of the next layer. The visual dictionary used to encode the first layer consists of all nodes of that layer. In each subsequent layer the dictionary is chosen according to the encoding result of the previous layer: the visual dictionary used at layer j is composed of the child nodes of the k words having the largest response values in the layer j−1 encoding result (i.e., the k words corresponding to the k largest absolute values of the layer j−1 encoding result C_{j−1}). The cascaded LLC encoding based on the tree-structured visual dictionary can be solved analytically as follows:
where l = 1, 2, …, m_j indexes the candidate visual words of layer j; m_j is the number of candidate visual words at layer j; x is a low-level feature; and λ and σ are the two parameters of the LLC method.
For any target iris image, count the number of paths that pass through each node of the tree-structured visual dictionary during the encoding of its low-level features, and concatenate these counts with the LLC encoding results of every layer of the tree-structured visual dictionary to form the feature vector. In each layer's LLC encoding, the encoding result of a node that does not serve as a candidate word is recorded as 0.
Step S24: Input the feature vector obtained in step S23 into the fake-iris discrimination classifier built in the construction stage; determine from the classifier output whether the target iris image is a fake iris image, and, when it is judged to be a fake iris image, issue an alert signal.
Examples of concrete applications of the above embodiments of the present invention are as follows:
In one example, the present invention is suitable for preventing the use of a fake iris to gain illegal entry into a restricted area, such as a classified zone that controls access rights with iris recognition technology, where the persons allowed to enter are registered in the system. Li Si, who has no right to enter this classified zone, attempts to enter it: he has stolen an iris image of Wang Wu, who is registered in the system, and has made this image into a plastic eyeball. When Li Si holds the plastic eyeball bearing Wang Wu's iris texture and attempts to deceive the iris recognition system in order to enter the classified zone, the system captures the iris image of the forged plastic eyeball, automatically determines that the image comes from a fake iris, and raises an alarm, prompting the staff to verify Li Si's identity and protecting Wang Wu's iris feature template.
In another example, the present invention is suitable for iris recognition systems used in suspect screening; for instance, a self-service immigration clearance system at an airport employs iris recognition technology and is linked to the public security bureau's iris database of fugitive suspects. A suspect, Zhang San, is wanted for arrest by the public security organs, and his iris information has been entered into this fugitive-suspect iris database (blacklist). Zhang San disguises himself, wears color-printed contact lenses, and prepares to flee by air using a forged passport. When he uses the self-service clearance system, the system captures Zhang San's iris image, automatically determines that he is wearing color-printed contact lenses, and raises an alarm, prompting the staff to require Zhang San to remove the lenses for re-discrimination and to check his identity closely. Although Zhang San forged documents and disguised himself to trick the iris recognition system, with the help of the present invention he is still arrested and brought to justice.
The present invention can effectively improve the overall performance of iris recognition systems in aspects such as security and stability, and is a key technology for the next generation of iris recognition.
The specific embodiments described above further explain the objectives, technical solutions, and beneficial effects of the present invention. It should be understood that the above are merely specific embodiments of the present invention and do not limit it; any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.
Claims (15)
1. A method of discriminating fake iris images, characterized in that it comprises a construction stage and a discrimination stage, the construction stage comprising a construction stage of a tree-structured visual dictionary and a construction stage of a fake-iris discrimination classifier, wherein:
in the construction stage of the tree-structured visual dictionary, the sample iris images in a sample iris image library are processed to obtain the tree-structured visual dictionary;
in the construction stage of the fake-iris discrimination classifier, the fake-iris discrimination classifier is constructed based on said tree-structured visual dictionary and a sparse coding method; and
in the discrimination stage, said fake-iris discrimination classifier is used to judge whether a target iris image is genuine or fake.
2. The method of discriminating fake iris images according to claim 1, wherein the construction stage of the tree-structured visual dictionary comprises the steps of:
Step S11: establishing said sample iris image library so that it comprises a plurality of genuine sample iris images and a plurality of forged sample iris images;
Step S12: preprocessing the plurality of genuine sample iris images and the plurality of forged sample iris images in said sample image library to obtain normalized sample iris images;
Step S13: extracting low-level features from said normalized sample iris images, a low-level feature being a feature extracted directly from the image with an operator;
Step S14: establishing a tree-structured visual dictionary from the extracted low-level features, the tree-structured visual dictionary being configured as follows: it comprises a root node and a plurality of layers, each layer comprising a number of nodes; the root node is a virtual node representing the whole feature space; the first layer comprises k_1 nodes, which serve as the child nodes of the root node and correspond to k_1 feature cluster centers; the second layer comprises the child nodes of the first-layer nodes, each node of the first layer having k_2 child nodes; and every subsequent layer consists of the child nodes of the layer above, each parent node having at most k_i child nodes, where i > 2.
3. The method of discriminating fake iris images according to claim 2, wherein the construction stage of the fake-iris discrimination classifier comprises the steps of:
Step S15: descending from the root node of said tree-structured visual dictionary and encoding the low-level features in cascade at every layer to obtain the feature vectors of said low-level features;
Step S16: taking the feature vectors of the genuine sample iris images and the forged sample iris images as positive samples and negative samples respectively, and training a support vector machine to obtain the fake-iris discrimination classifier.
4. The method of discriminating fake iris images according to claim 3, wherein the discrimination stage comprises the steps of:
Step S21: preprocessing the target iris image to obtain a normalized target iris image;
Step S22: extracting low-level features from said normalized target iris image;
Step S23: using the same tree-structured visual dictionary as in step S15, descending from the root node and encoding the low-level features in cascade at every layer to obtain the feature vectors of said low-level features;
Step S24: inputting the feature vectors obtained in step S23 into said fake-iris classifier, and determining from the classifier output whether the target iris image is a fake iris image.
5. The method of discriminating fake iris images according to claim 4, wherein the preprocessing in step S12 comprises the steps of:
segmenting said sample iris image to obtain an iris region;
fitting the boundaries of the pupil and the iris in the iris region; and
transforming the iris region into polar coordinates to complete the normalization of said sample iris image.
6. The method of discriminating fake iris images according to claim 4, wherein in step S13 the densely sampled scale-invariant feature transform (SIFT) descriptor is used for low-level feature extraction.
7. The method of discriminating fake iris images according to claim 4, wherein in step S14 the extracted low-level features are taken as the input of cascaded k-means clustering, and said tree-structured visual dictionary is obtained by learning through cascaded k-means clustering.
8. The method of discriminating fake iris images according to claim 4, wherein in step S15 said encoding process uses locality-constrained linear coding (LLC).
9. The method of discriminating fake iris images according to claim 4, wherein in step S15, for each sample iris image, the number of paths passing through each node of the tree-structured visual dictionary during the encoding of its low-level features is counted and concatenated with the LLC encoding results of every layer of the tree-structured visual dictionary to form the feature vector, wherein the encoding result of a node that does not serve as a candidate word in a layer's LLC encoding is recorded as 0.
10. The method of discriminating fake iris images according to claim 4, wherein in step S16 the feature vectors of the genuine sample iris images and the forged sample iris images are taken as positive samples and negative samples respectively, and a support vector machine is trained to obtain the fake-iris discrimination classifier.
11. The method of discriminating fake iris images according to claim 4, wherein the preprocessing in step S21 comprises the steps of:
segmenting said target iris image to obtain an iris region;
fitting the boundaries of the pupil and the iris in the iris region; and
transforming the iris region into polar coordinates to complete the normalization of said iris image.
12. The method of discriminating fake iris images according to claim 4, wherein in step S22 the densely sampled scale-invariant feature transform (SIFT) descriptor is used for low-level feature extraction.
13. The method of discriminating fake iris images according to claim 4, wherein in step S23 said encoding process uses locality-constrained linear coding (LLC).
14. The method of discriminating fake iris images according to claim 4, wherein in step S23, for each target iris image, the number of paths passing through each node of the tree-structured visual dictionary during the encoding of its low-level features is counted and concatenated with the LLC encoding results of every layer of the tree-structured visual dictionary to form the feature vector, wherein the encoding result of a node that does not serve as a candidate word in a layer's LLC encoding is recorded as 0.
15. The method of discriminating fake iris images according to claim 4, wherein in step S24, when said fake-iris classifier judges said target iris image to be a fake iris image, an alert signal is issued.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN 201110362103 CN102436591B (en) | 2011-11-15 | 2011-11-15 | Discrimination method of forged iris image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN102436591A (en) | 2012-05-02 |
CN102436591B (en) | 2013-09-25 |
Family
ID=45984643
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN 201110362103 Active CN102436591B (en) | 2011-11-15 | 2011-11-15 | Discrimination method of forged iris image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102436591B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101008985A (en) * | 2007-01-18 | 2007-08-01 | 章毅 | Seal identification system and controlling method thereof |
US20080240514A1 (en) * | 2007-03-26 | 2008-10-02 | The Hong Kong Polytechnic University | Method of personal recognition using hand-shape and texture |
CN101923640A (en) * | 2010-08-04 | 2010-12-22 | 中国科学院自动化研究所 | Method for distinguishing false iris images based on robust texture features and machine learning |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104537292A (en) * | 2012-08-10 | 2015-04-22 | 眼验有限责任公司 | Method and system for spoof detection for biometric authentication |
US9971920B2 (en) | 2012-08-10 | 2018-05-15 | EyeVerify LLC | Spoof detection for biometric authentication |
CN104537292B (en) * | 2012-08-10 | 2018-06-05 | 眼验有限责任公司 | The method and system detected for the electronic deception of biological characteristic validation |
CN104036012A (en) * | 2014-06-24 | 2014-09-10 | 中国科学院计算技术研究所 | Dictionary learning method, visual word bag characteristic extracting method and retrieval system |
CN104036012B (en) * | 2014-06-24 | 2017-06-30 | 中国科学院计算技术研究所 | Dictionary learning, vision bag of words feature extracting method and searching system |
CN107220598A (en) * | 2017-05-12 | 2017-09-29 | 中国科学院自动化研究所 | Iris Texture Classification based on deep learning feature and Fisher Vector encoding models |
CN107220598B (en) * | 2017-05-12 | 2020-11-10 | 中国科学院自动化研究所 | Iris image classification method based on deep learning features and Fisher Vector coding model |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN101923640B (en) | Method for distinguishing false iris images based on robust texture features and machine learning | |
CN101833646B (en) | In vivo iris detection method | |
Zhao et al. | Dynamic texture recognition using volume local binary count patterns with an application to 2D face spoofing detection | |
US9064145B2 (en) | Identity recognition based on multiple feature fusion for an eye image | |
Sun et al. | Improving iris recognition accuracy via cascaded classifiers | |
Perez et al. | Methodological improvement on local Gabor face recognition based on feature selection and enhanced Borda count | |
Tome et al. | The 1st competition on counter measures to finger vein spoofing attacks | |
Hu et al. | Iris liveness detection using regional features | |
CN101142584B (en) | Method for facial features detection | |
CN107967458A (en) | A kind of face identification method | |
CN106650693A (en) | Multi-feature fusion identification algorithm used for human face comparison | |
CN110516616A (en) | A kind of double authentication face method for anti-counterfeit based on extensive RGB and near-infrared data set | |
TW201039248A (en) | Method and system for identifying image and outputting identification result | |
CN103049736A (en) | Face identification method based on maximum stable extremum area | |
CN107220598B (en) | Iris image classification method based on deep learning features and Fisher Vector coding model | |
Pant et al. | Off-line Nepali handwritten character recognition using Multilayer Perceptron and Radial Basis Function neural networks | |
Pal et al. | Off-line signature identification using background and foreground information | |
Pal et al. | Off-line Bangla signature verification | |
El-Sayed et al. | Identity verification of individuals based on retinal features using Gabor filters and SVM | |
CN102436591B (en) | Discrimination method of forged iris image | |
Pal et al. | Off-line English and Chinese signature identification using foreground and background features | |
CN108133187B (en) | The one-to-one iris identification method of dimensional variation invariant feature and the voting of more algorithms | |
Gopane et al. | Indian counterfeit banknote detection using support vector machine | |
Patil et al. | Fake currency detection using image processing | |
Chen et al. | Iris recognition using 3D co-occurrence matrix |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant |