CN1928886A - Iris identification method based on image segmentation and two-dimensional wavelet transformation - Google Patents


Info

Publication number
CN1928886A
Authority
CN
China
Prior art keywords: iris, image, gray, circle, zone
Legal status: Granted
Application number: CN 200610021266
Other languages: Chinese (zh)
Other versions: CN100373396C (en)
Inventors: 马争 (Ma Zheng), 董自信 (Dong Zixin)
Current Assignee: University of Electronic Science and Technology of China
Original Assignee: University of Electronic Science and Technology of China
Application filed by University of Electronic Science and Technology of China
Priority to CNB200610021266XA
Publication of CN1928886A
Application granted
Publication of CN100373396C
Status: Expired - Fee Related

Abstract

The iris recognition method based on image segmentation and 2D wavelet transformation comprises: splitting iris localization into key-point inner-edge localization and outer-edge localization; normalizing the localized image into a fixed-size gray matrix by mapping from Cartesian coordinates to polar coordinates; segmenting the image twice, finally into 18 sub-regions; applying a 2D wavelet transform and extracting the wavelet-coefficient mean and variance of the main wavelet channels as feature values; and, in the matching and recognition algorithm, using reciprocal-variance sums with different weights to obtain the final result. Compared with the prior art, this invention offers good noise immunity and a high recognition rate without sacrificing real-time performance.

Description

Iris identification method based on image segmentation and two-dimensional wavelet transformation
Technical field
The iris identification method based on image segmentation and two-dimensional wavelet transformation belongs to biometric pattern recognition technology, in particular to iris feature recognition methods.
Background technology
With the development of information technology and the wide adoption of e-commerce, information security has become an important and urgent problem. Biometric identification technology, which can be used for identity authentication and for protecting information security, is attracting more and more attention. Biometric identification combines computing with optics, acoustics, biostatistics and other high-tech means, and uses intrinsic physiological and behavioral characteristics of the human body to verify personal identity. Iris feature recognition, in particular, uses differences in the texture of the human iris to identify individuals.
The human iris contains abundant information. Its surface shows textures resembling filaments, spots, vortices and crown-like shapes. These iris textures are unique: different people have different iris texture characteristics, and even for the same person the textures of the left and right eyes differ. Identification based on these textural characteristics is therefore highly accurate. The iris texture is essentially determined by the embryonic environment, and because the iris is isolated from the outside world by the transparent cornea, a fully developed iris is unlikely to be damaged or changed by external factors, which gives iris recognition high reliability. In addition, the pupil dilates and contracts with the intensity of light, causing the shape of the iris to change accordingly; this can be used to verify that the iris sample being recognized comes from a living eye, so iris recognition is also hard to counterfeit. Because of these advantages of the iris texture, identity recognition based on iris features has broad application prospects in finance, e-commerce, security and many other areas.
Identity recognition based on iris features has developed vigorously abroad, and the commercialization of iris recognition has gradually proceeded. The first iris recognition system was developed at the University of Cambridge. In 1993, John G. Daugman of Cambridge presented a fairly complete iris identification method; it is accurate and fast, forms the theoretical basis of almost all commercial iris recognition systems in the world today, and his pioneering work made automatic iris recognition possible. In 1996, Wildes et al. of Princeton developed an iris recognition system based on area image registration. In 1998, Boles et al. of the University of Queensland proposed an iris identification method based on the zero-crossing wavelet transform. See: J. G. Daugman. High Confidence Visual Recognition of Persons by a Test of Statistical Independence. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1993, 15(11): 1148-1161; R. P. Wildes, J. C. Asmuth, G. L. Green, et al. A Machine-Vision System for Iris Recognition. Machine Vision and Applications, 1996, 9(1): 1-8; and W. Boles, B. Boashash. A Human Identification Technique Using Images of the Iris and Wavelet Transform. IEEE Transactions on Signal Processing, 1998, 46(4): 1185-1188.
Domestic research on iris recognition technology started late, but it too has developed quickly in recent years, though a gap remains compared with the flourishing iris recognition industry abroad. At present, the Institute of Automation of the Chinese Academy of Sciences has completed laboratory-stage research on iris recognition and has applied for a patent on an iris capture device. Shanghai Jiao Tong University, Zhejiang University, Huazhong University of Science and Technology and others are also carrying out related research and have all obtained certain results. See: Wang Yunhong, Zhu Yong, Tan Tieniu. Identity authentication based on iris recognition. Acta Automatica Sinica, 2002, 28(1): 1-10; Ying Jinhua, Xu Guozhi. Iris recognition technology based on wavelet-transform zero-crossing detection. Journal of Shanghai Jiao Tong University, 2002, 36(3): 355-358; and Chen Liangzhou, Ye Hunian. Research on a new iris recognition algorithm. Journal of Test and Measurement Technology, 2000, 14(4): 211-216.
At present, among the iris identification methods that have been proposed, those that have obtained good recognition results in practical applications, both at home and abroad, include:
1. Daugman's iris identification method based on phase analysis. It encodes the phase characteristics of the iris by Gabor wavelet filtering. The 2D Gabor function achieves good localization in both the frequency and spatial domains; in other words, it combines good frequency and orientation selectivity with spatial localization. By computing 2D Gabor phase coefficients, continuous and discontinuous texture information can be extracted effectively from the texture. See: J. G. Daugman. High Confidence Visual Recognition of Persons by a Test of Statistical Independence. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1993, 15(11): 1148-1161.
2. Boles's zero-crossing detection method. It applies a one-dimensional wavelet transform to a sampling curve along a circle concentric with the iris, detects the zero crossings, and completes the classification of iris features with two custom similarity functions. Its theoretical basis is Mallat's zero-crossing signal description and reconstruction theory. See: W. Boles, B. Boashash. A Human Identification Technique Using Images of the Iris and Wavelet Transform. IEEE Transactions on Signal Processing, 1998, 46(4): 1185-1188.
3. The texture-analysis iris identification method of Tan Tieniu, Wang Yunhong et al. This method treats the iris as a kind of random texture and extracts local iris features from the viewpoint of texture analysis. It extracts iris texture features with Gabor filtering and with a two-dimensional wavelet transform using the Daubechies-4 wavelet as the basis, and performs feature matching with a variance-reciprocal weighted Euclidean distance, obtaining good recognition results. See: Wang Yunhong, Zhu Yong, Tan Tieniu. Identity authentication based on iris recognition. Acta Automatica Sinica, 2002, 28(1): 1-10.
Although all three of the above methods achieve good recognition results, each is imperfect. Method 1 achieves high recognition accuracy, but the iris feature vector it extracts has a high dimension, 2048, so it places high demands on the sharpness of the captured iris image. Method 2 overcomes the limitations that drift, rotation and scaling impose on earlier systems, is insensitive to brightness changes and noise, and does not require high image quality; but because the algorithm uses only part of the iris texture information, it does not reach a high correct recognition rate. Method 3 adopts a different texture-analysis strategy to extract texture features and runs faster, but its description of the iris texture is coarse, and its correct recognition rate is still not very high.
Two relatively important indices evaluate an iris identification method: recognition rate and running speed. These two indices usually conflict. A good iris identification method maximizes the recognition rate while still satisfying the running-speed requirement of a real-time system.
Summary of the invention
The iris identification method proposed by the present invention achieves a higher iris recognition rate while meeting the real-time requirement. Through image segmentation it largely excludes the noise in the iris texture image; through the two-dimensional wavelet transform it extracts the feature information of the two-dimensional iris texture image more comprehensively; and it finally obtains the recognition result by variance-reciprocal weighted feature summation.
For convenience of describing the content of the present invention, the following terms are first defined:
1. Iris: the tissue of the eyeball lying between the pupil and the sclera. This tissue contains unique and abundant texture information that can be used for identification. Its geometric shape is an annulus.
2. Inner and outer edges of the iris: the boundary between the iris and the pupil is called the inner edge of the iris; it is a circle. The boundary between the iris and the sclera is called the outer edge of the iris; it is also a circle.
3. Iris image acquisition device: a device that captures the iris image as a digital signal.
4. Gray-level image: an image that contains only brightness information and no color information.
5. Median filtering: a nonlinear signal-processing method and a typical low-pass filter. It sorts the pixels of a neighborhood by gray level and takes the median as the output pixel value. The effect of median filtering depends on two factors: the spatial extent of the neighborhood and the number of pixels involved in the median calculation.
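As an illustration, the 9-pixel median smoothing described here can be sketched in Python with NumPy (the function name and the border handling are illustrative, not part of the patent):

```python
import numpy as np

def median_filter_3x3(img):
    """Smooth a gray matrix with a 3x3 (9-pixel) median filter.

    Border pixels are left unchanged in this simple sketch.
    """
    out = img.copy()
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            # median of the 9 pixels in the 3x3 neighborhood
            out[i, j] = np.median(img[i - 1:i + 2, j - 1:j + 2])
    return out
```

A 3 × 3 window contains exactly the 9 discrete pixels mentioned in step 2 of the method below; a single impulse-noise pixel is removed because it can never be the median of its neighborhood.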
6. Gray-level histogram: the gray-level histogram is a function of the gray level; it gives the number of pixels of each gray level in an image and reflects the frequency with which each gray level occurs. Its horizontal axis is the gray level and its vertical axis the frequency of that gray level; it is the most basic statistical feature of an image.
7. Image binarization: set a threshold and compare the gray value of every pixel in the image with it; when a pixel's gray value is greater than the threshold, set it to 1, and when it is less than the threshold, set it to 0. The process of reducing the pixel gray values of an image to only 0 and 1 is called image binarization.
8. Iris localization: the process of accurately locating the geometric position of the annular iris within an image containing the pupil, sclera and eyelashes.
9. Roberts operator: a common edge-detection operator. The Roberts operator approximates the gradient with the diagonal differences of the image. It is implemented jointly by two 2 × 2 templates, [1 0; 0 −1] and [0 1; −1 0]. Its computation formula is R(i, j) = sqrt( (f(i, j) − f(i+1, j+1))² + (f(i, j+1) − f(i+1, j))² ).
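A minimal NumPy sketch of this operator, assuming a 2-D gray matrix:

```python
import numpy as np

def roberts(img):
    """Roberts cross gradient magnitude R(i, j) from the two 2x2 templates."""
    f = img.astype(float)
    d1 = f[:-1, :-1] - f[1:, 1:]   # template [[1, 0], [0, -1]]
    d2 = f[:-1, 1:] - f[1:, :-1]   # template [[0, 1], [-1, 0]]
    return np.sqrt(d1 ** 2 + d2 ** 2)
```

A constant image yields zero response everywhere, while a step edge yields a nonzero gradient magnitude along the step.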
10. Gray projection P(x) in the x direction: in the gray matrix B(x, y), for each value of x, add together the gray values of all corresponding pixels, i.e. P(x) = Σ_y B(x, y).
11. Gray projection P(y) in the y direction: in the gray matrix B(x, y), for each value of y, add together the gray values of all corresponding pixels, i.e. P(y) = Σ_x B(x, y).
12. Circular edge detector: its basic mathematical operator is max_(r, x0, y0) | ∂/∂r ∮_(r, x0, y0) I(x, y)/(2πr) ds |. Its basic idea is to iterate continuously over the values (r, x0, y0) of the parameter space; since each parameter value (r, x0, y0) corresponds to a circle, iterating over (r, x0, y0) is iterating over circles. On each circle's circumference the circular integral of the gray values is computed. As (r, x0, y0) varies, the circular gray integral varies accordingly, and the circle for which the integral changes the most is the circle to be detected.
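The iteration can be sketched as follows. This toy version samples each candidate circle at discrete angles and picks the radius where the circular gray integral jumps most, with the center held fixed; the full search over (r, x0, y0) would wrap this in two more loops. Function names are illustrative:

```python
import numpy as np

def circle_mean_gray(img, x0, y0, r, n=360):
    """Mean gray value sampled along the circle (r, x0, y0)."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    xs = np.clip(np.round(x0 + r * np.cos(t)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip(np.round(y0 + r * np.sin(t)).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs].mean()

def detect_circle_radius(img, x0, y0, radii):
    """Return the radius at which the circular gray integral changes most."""
    g = np.array([circle_mean_gray(img, x0, y0, r) for r in radii])
    jumps = np.abs(np.diff(g))  # change of the integral between adjacent radii
    return radii[int(np.argmax(jumps)) + 1]
```

On a synthetic dark disk on a bright background, the largest jump in the mean gray value occurs as the sampled circle crosses the disk boundary.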
13. Iris image normalization: the position of the iris differs every time an iris image is captured, and the illumination of the acquisition system causes the pupil to dilate or contract, which changes the size of the iris accordingly. The located annular iris region therefore cannot be used directly for feature extraction; it must be converted into a gray matrix image of fixed size. This process is image normalization.
14. Histogram equalization: transforming the histogram of the original image into a uniform one, which increases the dynamic range of the pixel gray values and strengthens the contrast of the whole image. After equalization the clarity of the image improves markedly and the target information is highlighted. Histogram equalization solves the problem of uneven illumination affecting the iris result.
15. Two-dimensional wavelet transform: applying a two-dimensional wavelet transform to an iris image extracts the detail coefficients in the horizontal, vertical and diagonal directions. It is well suited to analyzing two-dimensional image signals.
16. Haar wavelet: a relatively simple wavelet basis. Its wavelet function is ψ(t) = 1 for t ∈ [0, 1/2), −1 for t ∈ [1/2, 1), and 0 otherwise.
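For illustration, the Haar wavelet function can be written directly (taking ψ to be 0 outside [0, 1), a common convention):

```python
def haar_psi(t):
    """Haar mother wavelet: 1 on [0, 1/2), -1 on [1/2, 1), 0 elsewhere."""
    if 0.0 <= t < 0.5:
        return 1.0
    if 0.5 <= t < 1.0:
        return -1.0
    return 0.0
```

As with any wavelet, the positive and negative halves cancel, so the function integrates to zero over its support.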
17. Wavelet channel: a complete wavelet decomposition of an image yields a series of wavelet coefficients. The sub-images formed by these decomposition coefficients are usually called wavelet decomposition channels. A 1st-order two-dimensional wavelet transform of an image yields four wavelet channels: LL, LH, HL and HH. Each channel characterizes the information of the original image at a particular spatial frequency and direction.
18. Mean: the mean value of the wavelet coefficients in a wavelet channel. It characterizes the energy of the channel. Its computation formula is E_n = (1/(M×N)) Σ_{i=1..M} Σ_{j=1..N} |x(i, j)|.
19. Sample variance: it measures how far the wavelet coefficients in a channel depart from the mean. Its computation formula is D_n = Σ_{i=1..M} Σ_{j=1..N} (|x(i, j)| − E_n)² / (M×N − 1).
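A small sketch computing both feature values of definitions 18 and 19 for one wavelet channel (the function name is illustrative):

```python
import numpy as np

def channel_features(coeffs):
    """Mean E_n and sample variance D_n of a wavelet channel's coefficients
    (absolute values), per the formulas in definitions 18 and 19."""
    a = np.abs(np.asarray(coeffs, dtype=float))
    E = a.mean()                          # E_n: mean of |x(i, j)|
    D = np.sum((a - E) ** 2) / (a.size - 1)  # D_n: sample variance
    return E, D
```

Note the M×N − 1 denominator, which makes D_n the sample (not population) variance.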
20. System learning stage: the system extracts the complete set of feature values from each captured iris image and stores them in the iris sample database as standard samples for matching and recognition.
21. System recognition stage: the system reads an iris image of unknown identity, extracts half of the feature values, compares them with the feature values in the iris sample database according to a matching and recognition algorithm, and finally produces the recognition result for the unknown iris image.
The detailed technical scheme of the present invention is as follows:
The iris identification method based on image segmentation and two-dimensional wavelet transformation is characterized by comprising the following steps:
Step 1: capture the iris image.
An iris image is captured with the iris image acquisition device, yielding an iris image gray matrix H(x, y) for further processing.
Step 2: median filtering of the iris image.
Smooth the iris image gray matrix H(x, y) obtained in step 1 to obtain the gray matrix I(x, y). The smoothing uses a nonlinear median filter whose sample window covers 9 discrete pixels.
Step 3: iris inner-edge localization. This comprises the following steps:
Step 1): image binarization
Compute the gray-level histogram of the gray matrix I(x, y) and find the gray value M corresponding to the histogram peak within the range 20~125. Adding a safety coefficient D to M gives the threshold Y used for binarizing the gray image; D is usually a number between 3 and 7. Binarize the iris gray image I(x, y) with threshold Y to obtain the binary image B(x, y).
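A sketch of this thresholding rule, assuming an 8-bit gray image held in a NumPy array (the function name is illustrative):

```python
import numpy as np

def binarize_from_histogram(img, D=5):
    """Binarize a gray image using the histogram peak in [20, 125] plus
    a safety coefficient D, as described in step 3, step 1)."""
    hist = np.bincount(img.ravel(), minlength=256)
    M = 20 + int(np.argmax(hist[20:126]))  # peak gray value within 20..125
    Y = M + D                              # binarization threshold
    return (img > Y).astype(np.uint8), Y
```

The dark pupil falls below the threshold and becomes 0, while brighter regions become 1, which is what the projection search in the next step relies on.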
Step 2): find the rough center of the inner edge
Compute the gray projection P(x) of the binary matrix B in the x direction and the gray projection P(y) in the y direction. Find the minimum of the one-dimensional array P(x) and the value x1 at which it occurs; likewise find the minimum of P(y) and the corresponding y1. The point (x1, y1) is the rough center of the iris inner edge.
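The projection search of this step can be sketched as follows, assuming the binarized image from step 1) with the pupil as the 0-region (the function name is illustrative):

```python
import numpy as np

def rough_inner_center(bw):
    """Rough pupil (inner-edge) center from the projection minima of a
    binary image in which the dark pupil is 0 and the rest is 1."""
    P_x = bw.sum(axis=0)   # gray projection in the x direction (per column)
    P_y = bw.sum(axis=1)   # gray projection in the y direction (per row)
    x1 = int(np.argmin(P_x))
    y1 = int(np.argmin(P_y))
    return x1, y1
```

Because the pupil is the only large 0-region, the column and row passing through its center contain the most zeros and hence give the projection minima.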
Step 3): edge detection
Apply the Roberts operator to the binarized image B(x, y) to detect edges, obtaining a binary edge image BW(x, y).
Step 4): divide the edge points into 4 quadrants
In BW(x, y), set up a coordinate system with the rough inner-edge center (x1, y1) obtained in step 2) as the origin, dividing the edge points into four quadrants. In each quadrant, within the annular sector between 30 and 50 pixels from the origin, randomly pick 3 edge-point pixels that are more than 10 units apart. Doing the same in the other quadrants gives 12 pixels over the 4 quadrants.
Step 5): joint accurate localization over the 4 quadrants
From the 12 points of step 4), select 3 points that are not on the same straight line; these 3 points determine a circle. Compute the distance d of the remaining 9 points to this circle. The 12 points can form at most 220 circles, giving 220 distances d; the circle corresponding to the smallest d among the 220 is the inner edge of the iris. Let the center of the resulting iris inner edge be (xa, ya) and its radius ra.
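The basic geometric routine of this step, fitting a circle through 3 of the 12 edge points, can be sketched as follows; choosing among the C(12,3) = 220 candidate circles by the distances of the remaining 9 points is then a loop over combinations (not shown). Names are illustrative:

```python
import numpy as np

def circle_from_3_points(p1, p2, p3):
    """Center and radius of the circle through three non-collinear points,
    via the perpendicular-bisector linear system."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Equidistance of the center from the points gives two linear equations.
    A = np.array([[x2 - x1, y2 - y1],
                  [x3 - x1, y3 - y1]], dtype=float) * 2.0
    b = np.array([x2**2 - x1**2 + y2**2 - y1**2,
                  x3**2 - x1**2 + y3**2 - y1**2], dtype=float)
    cx, cy = np.linalg.solve(A, b)
    r = np.hypot(x1 - cx, y1 - cy)
    return (cx, cy), r
```

The distance of any other point (px, py) to this circle is then |hypot(px − cx, py − cy) − r|, which is what the 220-circle comparison minimizes.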
Step 4: iris outer-edge localization. This comprises the following steps:
Step 1): limit the iteration range of the circular edge-detection template
In the I(x, y) obtained in step 2, iterate the circular edge detector to compute gray integral values. In the iteration, the inner-edge center (xa, ya) is taken as the initial value of (x0, y0), the search range of (x0, y0) is limited to the rectangle with vertices (xa−10, ya−10), (xa−10, ya+10), (xa+10, ya−10) and (xa+10, ya+10), and the search range of r is limited to 70~110 pixels. During the search, the gray integral is not taken over the whole circle; instead, in the rectangular coordinate system set up at (x0, y0), it is taken over the circular arcs spanning the angles −45°~45° and 135°~225°.
Step 2): find the outer edge in the iteration
Iterate the parameter-space values (r, x0, y0) within the range of step 1); the circle for which the gray integral changes the most is the outer edge of the iris. The corresponding parameter values (rb, xb, yb) are the radius and center of the iris outer edge. The localization result is shown in Figure 1.
Step 5: iris image normalization. This comprises the following steps:
Step 1): set up the coordinate-conversion model and normalize the iris image
The circle parameters (ra, xa, ya) and (rb, xb, yb) of the inner and outer edges of the iris are obtained from steps 3 and 4. With the inner circle's center (xa, ya) as the origin, a mathematical model converting rectangular coordinates to polar coordinates is established (as shown in Figure 2). In this model a ray is drawn from the origin at angle θ to the horizontal; it intersects the inner and outer boundaries at one point each, denoted B(xi, yi) and A(xo, yo) respectively. The coordinates (x, y) of any point on the ray between the two intersections A and B can be represented as a linear combination of A(xo(θ), yo(θ)) and B(xi(θ), yi(θ)):
x(r, θ) = (1 − r)·xi(θ) + r·xo(θ)
y(r, θ) = (1 − r)·yi(θ) + r·yo(θ).
In this way the iris image is normalized into a fixed-size gray matrix P(x, y) with θ as the horizontal axis and r as the vertical axis; the normalization result is shown in Figure 3.
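A simplified sketch of this normalization; for brevity it takes the boundary intersection at angle θ to be the point at angle θ on each circle, which is exact only when the two circles are concentric. Parameter names are illustrative:

```python
import numpy as np

def normalize_iris(img, inner, outer, n_r=64, n_theta=1024):
    """Unwrap the annular iris into an n_r x n_theta gray matrix using the
    linear-combination model x(r,t) = (1-r)*x_i(t) + r*x_o(t) (likewise
    for y). `inner` and `outer` are (radius, x0, y0) circle parameters."""
    ra, xa, ya = inner
    rb, xb, yb = outer
    out = np.zeros((n_r, n_theta), dtype=img.dtype)
    for j, t in enumerate(np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)):
        # points at angle t on the inner and outer boundary circles
        xi, yi = xa + ra * np.cos(t), ya + ra * np.sin(t)
        xo, yo = xb + rb * np.cos(t), yb + rb * np.sin(t)
        for i, r in enumerate(np.linspace(0.0, 1.0, n_r)):
            x = (1 - r) * xi + r * xo
            y = (1 - r) * yi + r * yo
            out[i, j] = img[int(round(y)) % img.shape[0],
                            int(round(x)) % img.shape[1]]
    return out
```

On a radial-gradient test image, the first row of the output samples the inner boundary and the last row the outer boundary, regardless of the original iris size.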
Step 2): histogram equalization
Apply histogram equalization to the gray matrix P(x, y) obtained in step 1) to obtain the normalized gray matrix PI(x, y).
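Histogram equalization of an 8-bit gray matrix can be sketched via the cumulative gray-level distribution (the function name is illustrative):

```python
import numpy as np

def hist_equalize(img):
    """Histogram equalization of an 8-bit gray image via the cumulative
    distribution of its gray levels."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    cdf /= cdf[-1]                       # normalized cumulative distribution
    return (cdf[img] * 255).astype(np.uint8)
```

A low-contrast image whose gray values span only a couple of levels is stretched toward the full 0~255 range, which is the contrast enhancement this step relies on.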
Step 6: first image segmentation of the normalized gray matrix
In PI(x, y), take the top 16 × 1024 part of the gray matrix as iris texture region A. Take rows 17~48 with columns 1~128, 384~640 and 896~1024, 3 small blocks in all, as iris texture region B. Take rows 49~64 with columns 1~64, 448~576 and 980~1024, 3 small blocks in all, as iris texture region C. The first segmentation of the iris image is shown in Figure 4.
Step 7: second image segmentation of the normalized gray matrix.
Divide texture region A of step 6 into 8 small gray-matrix blocks: rows 1~16 with columns 1~128, 129~256, 257~384, 385~512, 513~640, 641~768, 769~896 and 897~1024. Likewise divide iris texture region B into 8 small blocks: rows 17~32 with columns 1~128, 385~512, 513~640 and 897~1024, and rows 33~48 with columns 1~128, 385~512, 513~640 and 897~1024. Divide iris texture region C into 2 small blocks: the combination of the two gray-matrix regions with rows 49~64 and columns 1~64 and 961~1024, and the gray-matrix region with rows 49~64 and columns 449~578. The second segmentation of the iris image is shown in Figure 5.
Step 8: apply the two-dimensional wavelet transform to each segmented region
Using the Haar wavelet as the basis, apply a 3rd-order two-dimensional wavelet transform to each small segmented region of step 7, obtaining 10 wavelet channels in all, denoted LL3, LH3, HL3, HH3, LH2, HL2, HH2, LH1, HL1, HH1. The three channels HH1, HH2 and HH3 represent the information of the iris image at high frequency in both the horizontal and vertical directions; they contain a large amount of noise and are unfavorable for extracting iris features. These three channels are discarded and only the remaining 7 are kept, as shown in Figure 6.
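One level of the Haar decomposition can be sketched with plain NumPy averaging and differencing; iterating it three times on the LL channel yields the 10 channels named above (the LH/HL naming here follows one common convention):

```python
import numpy as np

def haar2d_level(a):
    """One level of a 2D Haar wavelet transform: returns the LL, LH, HL, HH
    channels of an even-sized array (unnormalized averaging/differencing)."""
    a = a.astype(float)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0   # column-pair average
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0   # column-pair difference
    LL = (lo[0::2] + lo[1::2]) / 2.0
    LH = (lo[0::2] - lo[1::2]) / 2.0
    HL = (hi[0::2] + hi[1::2]) / 2.0
    HH = (hi[0::2] - hi[1::2]) / 2.0
    return LL, LH, HL, HH
```

Each level yields one smooth channel and three detail channels, so 3 levels give 3 × 3 + 1 = 10 channels, matching the count in this step; on a constant block only LL is nonzero.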
Step 9: extract the wavelet-coefficient mean and variance as feature values
For each wavelet channel of a small segmented region after the two-dimensional wavelet transform, extract its mean and sample variance, E_n = (1/(M×N)) Σ_{i=1..M} Σ_{j=1..N} |x(i, j)| and D_n = Σ_{i=1..M} Σ_{j=1..N} (|x(i, j)| − E_n)² / (M×N − 1), as feature values. Each wavelet channel of the region thus yields two feature values, so its 7 wavelet channels yield 14 feature values. Repeat this process for every small segmented region: each of the 18 regions yields 14 feature values, giving 252 feature values in all.
Step 10: matching and recognition with variance-reciprocal weighted mean differences. This comprises the following steps:
Step 1): extract the complete features as iris feature samples
In the learning stage of the system, every iris image to be learned is processed through steps 2 to 9. Each iris image yields 252 feature values, which are stored in the iris sample database as the sample of that image for the later recognition decision.
Step 2): extract the mean features for recognition
In the recognition stage of the system, an iris image of unknown identity is processed through steps 2 to 9, but in step 9 only the mean E_n is extracted as the feature value. An iris image used for recognition therefore needs only 126 feature values (half of 252).
Step 3): match and recognize the 3 parts separately
In the matching and recognition process, the 252 feature values of a sample are divided, according to the three texture regions of step 6, into 3 parts EA, EB and EC, where EA, EB and EC are the feature values extracted by transforming iris texture regions A, B and C respectively. Likewise, the 126 feature values of the iris image to be identified are divided into 3 parts eA, eB and eC. The matching and recognition computation based on variance-reciprocal weighted summation is applied to each of the 3 parts: P_j = Σ_{i=1..N} (e_ji − E_ji)² / D_ji, j = A, B, C.
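The per-region matching sum, together with the weighted combination described in the next step, can be sketched as follows (function names illustrative; the default weights are those given below):

```python
import numpy as np

def match_part(e, E, D):
    """Variance-reciprocal weighted sum P_j = sum_i (e_i - E_i)^2 / D_i
    for one texture region (j = A, B or C).

    e: mean features of the unknown iris; E, D: stored sample means and
    sample variances for the same region."""
    e, E, D = (np.asarray(v, dtype=float) for v in (e, E, D))
    return np.sum((e - E) ** 2 / D)

def final_score(PA, PB, PC, a=0.7, b=0.2, c=0.1):
    """Weighted final recognition result P = a*P_A + b*P_B + c*P_C."""
    return a * PA + b * PB + c * PC
```

Only the stored sample's variances D appear in the formula, which is why the unknown image needs just its mean features; a perfect match gives P = 0.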
Step 4): weight the 3 recognition results with different coefficients
Step 3) yields PA, PB and PC. Multiplying each part's result by its weighting coefficient gives the final recognition result P = a·PA + b·PB + c·PC. The weighting coefficients are usually allocated according to the distribution of the iris texture as a : b : c = 7 : 2 : 1; in the specific implementation of the present invention, a = 0.7, b = 0.2 and c = 0.1.
Step 5): set a threshold T and make the recognition decision
Set a threshold T. When P < T, the iris image to be identified and the current iris sample in the database are judged to come from the same eye; when P > T, they are judged to come from different eyes.
Through the above 10 steps, real-time identification can be carried out from a captured iris image.
It should be noted that:
1. The purpose of introducing the safety coefficient in step 1) of step 3 is to avoid introducing noise such as eyelashes during the binarization of the iris image. The safety coefficient should be neither too large nor too small; a value of 5 is usually suitable.
2. The purpose of finding the rough center in step 2) of step 3 is to establish, in step 4) of step 3, a coordinate system with this center as the origin, and then to determine the four quadrants in which circles can be searched for.
3. Step 3) of step 3 selects the Roberts operator from among the many edge-detection operators because a well-chosen binarization threshold makes the inner edge distinct, so the simple Roberts operator detects the inner edge accurately; and because its computation is simple, it is also very fast.
4. Step 2) of step 3 already gives a rough estimate of the center and radius of the iris inner edge from the accumulated gray values. Steps 3), 4) and 5) are still needed because most of the iris texture lies near the inner edge, so a rough estimate would lower the recognition rate; the inner edge must be located accurately, which steps 3), 4) and 5) accomplish.
5, why location iris outer peripheral process is not both because iris outward flange and sclera gray scale difference value are not very big with the method for step 3 location inward flange in the step 4, can not locate fast with the method for binaryzation.And, also be subjected to ciliary interference easily, so outer peripheral location does not need very high degree of accuracy because it is less to distribute near outer peripheral iris region texture.Adopt the circular edge detecting device to locate outward flange and reached the locating accuracy requirement.
6, why will do normalized by step 5 is because each the position of iris all can be different when gathering iris image.And because the influence of acquisition system illumination can cause the amplification or the contraction of pupil, this can make the size of iris also and then change.Can not directly be used for feature extraction so orient the iris region of annular, must convert the annular iris to a fixed-size gray matrix image.Though when gathering iris image, all can cause the change of the absolute position of iris, the relative position of iris texture generally can not change at every turn.Still adopt the method for polar coordinate transform that iris is carried out normalization.Because the outer edge of iris is not concentric usually, so this polar coordinate transform neither be concentric.
7, the step 3) in the step 5 is the purpose that the gray level image after the normalization is reached the figure image intensifying by histogram equalization.This step can solve in the image acquisition process because the problem of the iris image intensity profile inequality that inhomogeneous illumination causes.
8. Step 6 performs the first segmentation of the normalized iris gray matrix, dividing it into three parts A, B and C according to the density of the iris texture distribution.
9. Step 7 is the second segmentation: on the basis of the first one, it further divides the iris image into 18 regions that are less likely to be affected by noise.
10. In step 8, each segmented region yields 10 wavelet channels after a 3-level two-dimensional wavelet transform. The 3 channels containing the diagonal detail are discarded because they are the most likely to be corrupted by noise and would lower the recognition rate.
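A sketch of this decomposition with a hand-rolled Haar step (averaging normalization; orthonormal variants scale by 1/sqrt(2) instead of 1/2; region sides must be divisible by 8 for 3 levels, which the patent's region sizes satisfy):

```python
import numpy as np

def haar2d(a):
    """One level of the 2-D Haar transform: returns LL, LH, HL, HH."""
    a = a.astype(np.float64)
    lo = (a[0::2, :] + a[1::2, :]) / 2.0   # row lowpass
    hi = (a[0::2, :] - a[1::2, :]) / 2.0   # row highpass
    ll = (lo[:, 0::2] + lo[:, 1::2]) / 2.0
    lh = (lo[:, 0::2] - lo[:, 1::2]) / 2.0
    hl = (hi[:, 0::2] + hi[:, 1::2]) / 2.0
    hh = (hi[:, 0::2] - hi[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def wavelet_channels(block, levels=3):
    """3-level decomposition -> 10 channels LL3, LH1..3, HL1..3, HH1..3;
    the diagonal channels HH1..HH3 are dropped as in step 8, leaving 7."""
    kept = {}
    cur = np.asarray(block)
    for k in range(1, levels + 1):
        ll, lh, hl, hh = haar2d(cur)
        kept["LH%d" % k] = lh
        kept["HL%d" % k] = hl
        cur = ll               # HH of level k discarded: most noise-sensitive
    kept["LL%d" % levels] = cur
    return kept
```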
11. In step 9, the mean measures the energy of a wavelet channel, and the variance measures how far the wavelet coefficients of the channel deviate from that mean.
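The En/Dn feature pair of step 9 can be sketched directly from the formulas (mean of absolute coefficients, sample variance with the N-1 denominator); `region_features` is an illustrative wrapper:

```python
import numpy as np

def channel_features(ch):
    """En = mean of |coefficients|; Dn = sample variance of |coefficients|
    about En with the (M*N - 1) denominator, as in step 9 of claim 1."""
    a = np.abs(np.asarray(ch, dtype=np.float64)).ravel()
    en = a.mean()
    dn = ((a - en) ** 2).sum() / (a.size - 1)
    return en, dn

def region_features(channels):
    """(En, Dn) per kept channel: 7 channels -> 14 features per sub-region,
    and 18 regions -> the 252 features of the patent."""
    feats = []
    for name in sorted(channels):
        feats.extend(channel_features(channels[name]))
    return feats
```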
12. In step 2) of step 10, only half of the feature values need to be extracted at the recognition stage because the matching algorithm is a variance-inverse weighted sum. The variances it uses are those of the iris samples stored in the database; the variance features of the image to be identified are not needed. At recognition time, therefore, only the mean features of the wavelet coefficients after the two-dimensional wavelet transform are extracted from the unknown image, not the variance features. This raises the computing speed of iris recognition and improves the real-time behavior of the recognition system.
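The variance-inverse weighted sum itself is one line; note that only the database side carries variances D, which is why the unknown iris needs only its mean features:

```python
import numpy as np

def match_score(e, E, D):
    """Variance-inverse weighted sum Pj = sum_i (e_i - E_i)^2 / D_i of
    step 10: e are the unknown iris means, E and D the stored sample
    means and variances. Smaller P means a closer match."""
    e, E, D = (np.asarray(v, dtype=np.float64) for v in (e, E, D))
    return float(np.sum((e - E) ** 2 / D))
```

Dividing by the per-feature variance down-weights features that fluctuate strongly across captures of the same eye, which is the intuition behind the weighting.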
13. In step 5) of step 10, different thresholds T can be set according to the security requirements: a lower T for applications with higher security requirements, and a higher T for applications with lower ones.
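The final fusion and decision of step 10 can be sketched as follows; the weights follow the a:b:c = 7:2:1 ratio suggested in claim 5, and T = 2.0 is the empirical threshold reported in the embodiment:

```python
def fuse_and_decide(p_a, p_b, p_c, weights=(0.7, 0.2, 0.1), t=2.0):
    """Final result P = a*P_A + b*P_B + c*P_C compared against the
    threshold T; P < T is judged 'same eye'."""
    a, b, c = weights
    p = a * p_a + b * p_b + c * p_c
    return p, p < t   # (score, True if judged to be the same eye)
```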
The present invention divides iris localization into inner-edge and outer-edge localization, with the emphasis on the inner edge. After localization, the image is normalized into a fixed gray matrix by the mapping from rectangular to polar coordinates. The normalized image is segmented twice, finally into 18 small regions. A two-dimensional wavelet transform is applied to each small segmented region of the normalized iris image, and the means and variances of the wavelet coefficients of the main wavelet channels are extracted as feature values. In the matching algorithm, the 3 sub-regions of the first segmentation are each matched by the variance-inverse weighted-sum algorithm, giving 3 recognition results, which are then weighted with different confidence coefficients to produce the final recognition result. Identification with the iris recognition algorithm of the present invention achieves high recognition accuracy and good noise immunity at a running speed that satisfies the real-time requirements of the system.
The innovative parts of the present invention are:
1. In iris localization, the inner edge is located before the outer edge: the iris image is first binarized, edges are detected with the Roberts operator, and the inner edge is then located accurately using the idea of the Hough transform. The outer edge is located with the circular edge detection template proposed by Daugman; the search range of the template is narrowed using the parameters of the already-located inner edge, which raises the computing speed.
2. The normalized matrix is segmented twice: the first segmentation divides it into 3 parts, the second into 18 small regions. The two segmentations largely avoid the influence of eyelids and eyelashes and fully prepare the image for the subsequent feature extraction. The normalized gray matrix is also enhanced by histogram equalization, avoiding the large brightness differences caused by uneven illumination.
3. In the feature extraction phase, each of the 18 small segmented regions undergoes a 3-level two-dimensional wavelet transform with the Haar wavelet as basis, producing 10 wavelet channels. The 3 noise-prone channels that contain both horizontal and vertical high-frequency information are discarded and the remaining 7 are kept. The means and variances of the two-dimensional wavelet coefficients of these 7 channels serve as feature values.
4. The matching algorithm is a variance-inverse weighted sum. The variances it uses already exist in the database, so at the recognition stage only the wavelet-coefficient means of the iris to be identified are extracted as features; that is, the recognition stage extracts only half as many feature values as the learning stage. This accelerates the system and improves its real-time performance.
Description of drawings
Fig. 1 is a schematic diagram of the iris localization result
The original iris image in this figure is from the CASIA iris database (version 1.0). It is a gray-level image matrix 280 pixels high and 320 pixels wide. The region enclosed between the two white circles is the annular iris region.
Fig. 2 is a schematic diagram of the coordinate-system transformation model
Fig. 3 is the iris normalization result
The annular iris region of Fig. 1 is normalized into a fixed gray matrix, 64 pixels high and 1024 pixels wide.
Fig. 4 shows the first segmentation of the iris image
According to the texture distribution of the iris, the normalized gray matrix is divided into 3 regions. Region A has the densest texture and contains 70% of the whole iris texture; region B contains 20%; region C contains 10%.
Fig. 5 shows the second segmentation of the iris image
According to the likelihood of noise corruption, the image of Fig. 4 is further divided into 18 small regions ((1) to (18) in the figure). These 18 small regions suffer little interference from eyelids and eyelashes.
Fig. 6 is a schematic diagram of the wavelet-channel selection for the two-dimensional transform
The dark parts of the figure are the wavelet channels that are kept; the white parts are the channels that are discarded. LL3 is the information of the iris image at horizontal low frequency and vertical low frequency after the 3-level two-dimensional wavelet transform. LH1, LH2, LH3 are the information at horizontal low frequency and vertical high frequency after the 1-, 2- and 3-level transforms, respectively. HL1, HL2, HL3 are the information at horizontal high frequency and vertical low frequency, and HH1, HH2, HH3 the information at horizontal high frequency and vertical high frequency, after the 1-, 2- and 3-level transforms.
Fig. 7 is a flow chart of the present invention.
Embodiment
The algorithm of the present invention was tested on the CASIA iris database (version 1.0). We randomly selected 100 groups of iris images from the database and took four images from each group, 400 iris images in all, for the experiment. In the learning phase, 252 mean and variance feature values were extracted from each of the 4 images in a group; the feature values of the 4 images were then averaged and stored in the sample database as the final sample features of that group of iris images. The sample features of all 100 groups were extracted and stored in the same way. In the recognition phase, each of the 400 iris images was matched against every group of sample features in the sample database, 40000 (400 x 100) pattern-matching recognition operations in all, yielding 40000 matching results P. The threshold T was set according to the empirical values of P. A good correct recognition rate is obtained at T = 2.0, where the correct recognition rate is 98.6%.

Claims (5)

  1. An iris identification method based on image segmentation and two-dimensional wavelet transformation, characterized in that it comprises the following steps:
    Step 1: acquire the iris image
    Acquire an iris image with an iris image acquisition device, obtaining an iris image gray matrix H(x, y) that can be used for further processing;
    Step 2: median filtering of the iris image
    Smooth the iris image gray matrix H(x, y) obtained in step 1 to obtain the gray matrix I(x, y);
    Step 3: locate the iris inner edge, specifically comprising the following steps:
    Step 1): image binarization
    Compute the gray histogram of the gray matrix I(x, y); find the gray value M of the histogram peak within the range (20~125); add a safety coefficient D to M to obtain the threshold Y for binarizing the gray image; binarize the iris gray image I(x, y) with threshold Y to obtain the binary image B(x, y);
    Step 2): find the rough center of the inner edge
    Compute the gray projection P(x) of the binary matrix B in the x direction and the gray projection P(y) in the y direction; find the minimum of the one-dimensional array P(x) and the x1 corresponding to it; likewise find the minimum of P(y) and the y1 corresponding to it; (x1, y1) is the rough center of the iris inner edge;
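The projection-minimum search can be sketched in Python (assuming the binarized image maps the dark pupil to 0, so the column and row sums bottom out over the pupil):

```python
import numpy as np

def rough_pupil_center(bw):
    """Rough pupil center from the gray projections of the binary image:
    P(x) sums each column, P(y) sums each row; the minima fall on the
    dark (zero) pupil blob."""
    px = bw.sum(axis=0)        # projection P(x), one value per column
    py = bw.sum(axis=1)        # projection P(y), one value per row
    return int(np.argmin(px)), int(np.argmin(py))   # (x1, y1)
```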
    Step 3): edge detection
    Detect the edges of the binarized image B(x, y) with the Roberts operator, obtaining a binary image BW(x, y) containing the edges;
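A minimal sketch of the Roberts cross operator on a binary image; the zero threshold on the gradient magnitude is an assumption that suits binary input:

```python
import numpy as np

def roberts_edges(img, thresh=0):
    """Roberts cross operator: gx = I(i,j) - I(i+1,j+1),
    gy = I(i+1,j) - I(i,j+1); |gx| + |gy| > thresh marks an edge pixel."""
    a = img.astype(np.int32)
    gx = a[:-1, :-1] - a[1:, 1:]
    gy = a[1:, :-1] - a[:-1, 1:]
    edges = np.zeros(a.shape, dtype=np.uint8)
    edges[:-1, :-1] = (np.abs(gx) + np.abs(gy) > thresh).astype(np.uint8)
    return edges
```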
    Step 4): divide the edge points into 4 quadrants
    In BW(x, y), set up a coordinate system with the rough inner-edge center (x1, y1) obtained in step 2) as the origin, dividing the scattered edge points into four quadrants; in each quadrant, within the sector between 30 and 50 pixels from the origin, randomly pick 3 edge pixels that are more than 10 edge points apart from one another; doing the same in the other quadrants gives 12 pixels over the 4 quadrants;
    Step 5): joint accurate localization over the 4 quadrants
    From the 12 points of step 4), select 3 points that are not on the same straight line; they determine a circle; compute the distance d of the remaining 9 points from this circle; the 12 points can form at most 220 circles, and hence 220 distances d; the circle corresponding to the smallest of these 220 distances is the inner edge of the iris; denote the resulting center of the iris inner edge by (xa, ya) and its radius by ra;
    Step 4: locate the iris outer edge, specifically comprising the following steps:
    Step 1): limit the iteration range of the circular edge detection template
    In the I(x, y) obtained in step 2, iterate with the circular edge detector to compute gray integral values; in the iteration, use the inner-edge center (xa, ya) as the initial value of (x0, y0), limit the search range of (x0, y0) to the rectangle with vertices (xa-10, ya-10), (xa-10, ya+10), (xa+10, ya-10) and (xa+10, ya+10), and limit the search range of r to 70~110 pixels; during the search, the gray integral is not taken around the whole circle but, in the rectangular coordinate system set up at (x0, y0), only over the circular arcs at angles -45°~45° and 135°~225°;
    Step 2): find the outer edge by iteration
    Iterate the parameter-space values (r, x0, y0) over the ranges of step 1); the circle with the maximum change of the gray integral is the outer edge of the iris, and the corresponding parameter values (r, x0, y0) are the radius and center of the outer edge (rb, xb, yb);
    Step 5: iris image normalization, specifically comprising the following steps:
    Step 1): set up the coordinate transformation model and normalize the iris image
    From steps 3 and 4, obtain the circle parameters of the inner and outer iris edges (ra, xa, ya) and (rb, xb, yb); with the inner center (xa, ya) as the origin of the coordinate system, set up the mathematical model that converts rectangular to polar coordinates; in this model, draw from the origin a ray at angle θ to the horizontal; it intersects the inner and outer boundaries at one point each, denoted B(xi, yi) and A(x0, y0) respectively; the coordinates (x, y) of any point on the ray between the two intersections A and B can be expressed as a linear combination of A(x0(θ), y0(θ)) and B(xi(θ), yi(θ)):
    x(r, θ) = (1 - r) · xi(θ) + r · x0(θ)
    y(r, θ) = (1 - r) · yi(θ) + r · y0(θ)
    so the iris image can be normalized into a gray matrix P(x, y) of fixed size with θ as the horizontal axis and r as the vertical axis;
    Step 2): histogram equalization
    Apply histogram equalization to the gray matrix P(x, y) obtained in step 1) to obtain the normalized gray matrix PI(x, y);
    Step 6: first segmentation of the normalized gray matrix
    In PI(x, y), take the top 16 x 1024 part of the gray matrix as iris texture region (A); take rows 17~48 with the 3 small column ranges 1~128, 384~640 and 896~1024 as iris texture region (B); take rows 49~64 with the 3 small column ranges 1~64, 448~576 and 980~1024 as iris texture region (C);
    Step 7: second segmentation of the normalized gray matrix.
    Divide texture region (A) of step 6 into 8 small gray matrices: rows 1~16, columns 1~128, 129~256, 257~384, 385~512, 513~640, 641~768, 769~896 and 897~1024; divide iris texture region (B) into 8 small gray matrices: rows 17~32 with columns 1~128, 385~512, 513~640 and 897~1024, and rows 33~48 with columns 1~128, 385~512, 513~640 and 897~1024; divide iris texture region (C) into 2 small gray matrices: rows 49~64 with the two column ranges 1~64 and 961~1024 combined into one region, and rows 49~64 with columns 449~578;
    Step 8: apply the two-dimensional wavelet transform to each segmented region
    Apply a 3-level two-dimensional wavelet transform with the Haar wavelet as basis to each small segmented region of step 7, obtaining 10 wavelet channels in all, denoted LL3, LH3, HL3, HH3, LH2, HL2, HH2, LH1, HL1 and HH1 respectively; discard the three channels HH1, HH2 and HH3 and keep only the remaining 7;
    Step 9: extract wavelet-coefficient means and variances as feature values
    For each wavelet channel of a small segmented region after the two-dimensional wavelet transform, extract its mean and sample variance:
    En = (1/(M×N)) Σ_{i=1..M} Σ_{j=1..N} |x(i, j)|,  Dn = Σ_{i=1..M} Σ_{j=1..N} (|x(i, j)| - En)² / (M×N - 1)
    as feature values; each wavelet channel of the small region thus yields two feature values, so the 7 wavelet channels yield 14; repeating this process for every small region gives 252 feature values in all;
    Step 10: match and recognize with the variance-inverse weighted mean difference, specifically comprising the following steps:
    Step 1): extract the complete feature set as the iris feature sample
    In the learning phase of the system, process every iris image to be learned through steps 2 to 9; each iris image yields 252 feature values, which are stored in the iris sample database as the sample of that iris image for the later recognition decision;
    Step 2): extract the mean features for recognition
    In the recognition phase of the system, process an unknown iris image through steps 2 to 9, but in step 9 extract only the 126 means En for recognition;
    Step 3): match the 3 parts separately
    In the matching process, divide the 252 feature values of the sample into 3 parts EA, EB, EC according to the three texture regions of step 6: EA are the feature values extracted by the transform from iris texture region (A), EB from region (B), and EC from region (C); likewise divide the 126 feature values of the iris image to be identified into 3 parts eA, eB, eC; match each of the 3 parts with the variance-inverse weighted-sum matching algorithm, i.e. Pj = Σ_{i=1..N} (e_ji - E_ji)² / D_ji, j = A, B, C;
    Step 4): weight the 3 recognition results with different coefficients
    Step 3) yields PA, PB and PC; multiplying the result of each part by its weighting coefficient gives the final recognition result P, i.e. P = a·PA + b·PB + c·PC;
    Step 5): set the threshold T and make the recognition decision
    Set a threshold T; when P < T, judge that the iris image to be identified and the current iris sample in the database come from the same eyes; when P > T, judge that they come from different eyes.
  2. The iris identification method based on image segmentation and two-dimensional wavelet transformation according to claim 1, characterized in that, in the smoothing of the iris image gray matrix H(x, y) of step 1 in step 2, the filter is a nonlinear median filter with a sample window of 9 discrete pixels.
  3. The iris identification method based on image segmentation and two-dimensional wavelet transformation according to claim 1, characterized in that, in the image binarization of step 1) of step 3, the safety coefficient D usually takes a value between 3 and 7.
  4. The iris identification method based on image segmentation and two-dimensional wavelet transformation according to claim 3, characterized in that the safety coefficient D is 5.
  5. The iris identification method based on image segmentation and two-dimensional wavelet transformation according to claim 1, characterized in that the weighting coefficients in step 4) of step 10 are usually decided according to the iris texture distribution, and a concrete ratio can be a:b:c = 7:2:1.
CNB200610021266XA 2006-06-27 2006-06-27 Iris identification method based on image segmentation and two-dimensional wavelet transformation Expired - Fee Related CN100373396C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB200610021266XA CN100373396C (en) 2006-06-27 2006-06-27 Iris identification method based on image segmentation and two-dimensional wavelet transformation


Publications (2)

Publication Number Publication Date
CN1928886A true CN1928886A (en) 2007-03-14
CN100373396C CN100373396C (en) 2008-03-05

Family

ID=37858846

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB200610021266XA Expired - Fee Related CN100373396C (en) 2006-06-27 2006-06-27 Iris identification method based on image segmentation and two-dimensional wavelet transformation

Country Status (1)

Country Link
CN (1) CN100373396C (en)

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101847260A (en) * 2009-03-25 2010-09-29 索尼公司 Image processing equipment, image processing method and program
CN102081739A (en) * 2011-01-13 2011-06-01 山东大学 Iris characteristic extracting method based on FIR (Finite Impulse Response) filter and downsampling
CN102136072A (en) * 2010-01-21 2011-07-27 索尼公司 Learning apparatus, leaning method and process
CN102324032A (en) * 2011-09-08 2012-01-18 北京林业大学 Texture feature extraction method for gray level co-occurrence matrix in polar coordinate system
CN103198301A (en) * 2013-04-08 2013-07-10 北京天诚盛业科技有限公司 Iris positioning method and iris positioning device
CN104166848A (en) * 2014-08-28 2014-11-26 武汉虹识技术有限公司 Matching method and system applied to iris recognition
CN104700386A (en) * 2013-12-06 2015-06-10 富士通株式会社 Edge extraction method and device of tongue area
CN103246871B (en) * 2013-04-25 2015-12-02 山东师范大学 A kind of imperfect exterior iris boundary localization method strengthened based on image non-linear
CN105550661A (en) * 2015-12-29 2016-05-04 北京无线电计量测试研究所 Adaboost algorithm-based iris feature extraction method
CN106447600A (en) * 2016-07-06 2017-02-22 河北箱变电器有限公司 Electric power client demand drafting system
CN106910434A (en) * 2017-02-13 2017-06-30 武汉随戈科技服务有限公司 A kind of exhibitions conference service electronics seat card
CN107134025A (en) * 2017-04-13 2017-09-05 奇酷互联网络科技(深圳)有限公司 Iris lock control method and device
CN107195079A (en) * 2017-07-20 2017-09-22 长江大学 A kind of dining room based on iris recognition is swiped the card method and system
CN107895157A (en) * 2017-12-01 2018-04-10 沈海斌 A kind of pinpoint method in low-resolution image iris center
CN108334438A (en) * 2018-03-04 2018-07-27 王昆 The method that intelligence prevents eye injury
CN108470171A (en) * 2018-07-27 2018-08-31 上海聚虹光电科技有限公司 The asynchronous coding comparison method of two dimension
CN109409223A (en) * 2018-09-21 2019-03-01 昆明理工大学 A kind of iris locating method
CN109501721A (en) * 2017-09-15 2019-03-22 南京志超汽车零部件有限公司 A kind of vehicle user identifying system based on iris recognition
CN110619272A (en) * 2019-08-14 2019-12-27 中山市奥珀金属制品有限公司 Iris image segmentation method
CN111161276A (en) * 2019-11-27 2020-05-15 天津中科智能识别产业技术研究院有限公司 Iris normalized image forming method
CN112699874A (en) * 2020-12-30 2021-04-23 中孚信息股份有限公司 Character recognition method and system for image in any rotation direction
CN115166120A (en) * 2022-06-23 2022-10-11 中国科学院苏州生物医学工程技术研究所 Spectral peak identification method, device, medium and product
CN115546236A (en) * 2022-11-24 2022-12-30 阿里巴巴(中国)有限公司 Image segmentation method and device based on wavelet transformation

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101408977B (en) * 2008-11-24 2012-04-18 东软集团股份有限公司 Method and apparatus for dividing candidate barrier region

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1092372C (en) * 1997-05-30 2002-10-09 王介生 Iris recoganizing method
KR100374707B1 (en) * 2001-03-06 2003-03-04 에버미디어 주식회사 Method of recognizing human iris using daubechies wavelet transform


Also Published As

Publication number Publication date
CN100373396C (en) 2008-03-05


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20080305

Termination date: 20100627