CN101201893A - Iris recognizing preprocessing method based on grey level information - Google Patents
Abstract
The invention provides a preprocessing method for iris images based on gray-level information. A rough pupil center (x_o, y_o) is located through binarization, mathematical morphology, gray projection, and related operations. On several rows near (x_o, y_o), searching outward from the center to both sides, the first point whose gray value exceeds a threshold T is taken as a pupil boundary point; curve fitting then yields the accurate center and radius of the pupil. The horizontal first-order gray difference is computed along the rows containing the searched pixels, and within the range where the outer iris boundary can occur, the point maximizing the sum of horizontal first-order differences is taken as an outer iris boundary point; curve fitting yields the accurate center and radius of the outer iris boundary. Finally, image quality is judged from the normalized image by computing the point sharpness and the number of effective pixels. The preprocessing method of the invention not only reduces the time consumed by repeated iteration during localization, but also judges the quality of the extracted iris image quickly and accurately.
Description
Technical field
The invention belongs to the technical field of image processing, and relates in particular to iris-based identity recognition in biometric authentication.
Background technology
In the current information age, accurately identifying a person and protecting information security is a crucial social problem that must be solved. Biometric authentication has therefore risen rapidly and become a frontier research topic in the field of information security. Biometric authentication identifies a person using intrinsic physiological or behavioral characteristics of the human body. Iris recognition is one branch of biometric authentication: it applies computer image processing and pattern recognition to personal identification, and because of its high stability and high accuracy it has become a popular research direction in biometrics in recent years. Automatic iris recognition is widely applied in banking, public security, airports, networks, and other areas, and has great economic and practical significance. It has been used in border control, passport authentication, cash withdrawal, information management, and building security, freeing people from memorizing cumbersome credit card numbers, account numbers, ID numbers, and login credentials. With the development of digital signal processing and image processing technology, iris recognition systems are maturing. See: John G. Daugman, "How Iris Recognition Works," IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, no. 1, pp. 21-30, 2004; and John G. Daugman, "High Confidence Recognition of Persons by Iris Patterns," Proceedings of the IEEE 35th International Carnahan Conference on Security Technology, pp. 254-263, 2001.
In iris recognition, image preprocessing is the key to the whole technique; it comprises iris localization and iris image quality assessment. Iris localization is the first step of iris recognition, and its execution time and precision directly affect the speed and accuracy of the whole system. In practice, because the iris region is often occluded by eyelids and eyelashes, the accuracy and robustness of localization algorithms still need further improvement. How to locate the iris quickly and accurately in low-quality images with eyelash and eyelid occlusion, and to describe its boundary or position with a mathematical model, is the main problem we study. See: John G. Daugman, "High Confidence Visual Recognition of Persons by a Test of Statistical Independence," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, no. 11, pp. 1148-1161, 1993. Iris image quality assessment is an important link in an automatic iris recognition system: it guarantees that the images acquired meet a quality standard. In reality, focus problems of the capture device, momentary eyeball rotation, and partial occlusion of the iris by eyelids and eyelashes often make the collected image unusable for subsequent feature extraction. No effective iris image quality assessment model has yet been proposed among existing algorithms, so we aim to establish a generally feasible assessment model. See: Chen Ji, Hu Guangshu, "Iris Image Quality Evaluation Based on Wavelet Packet Decomposition," Journal of Tsinghua University (Sci & Tech), vol. 43, no. 3, pp. 377-380, 2003.
Commonly used iris localization methods at present include:
(1) Two-step iris localization based on gray gradient. It first finds the approximate position of the inner and outer iris boundaries by coarse localization, and then applies a circular detector in a small neighborhood of that position for fine localization, thereby finding the exact boundary positions. In practice, however, this method requires repeated iterative search, its computational load is large, and its efficiency is low. See: Li Qingrong, Ma Zheng, "An Iris Location Algorithm," Journal of UEST of China, vol. 31, no. 1, pp. 7-9.
(2) Iris localization based on the Hough transform. It extracts edge points from the iris image with some operator and then searches for the circular curve passing through the most edge points. Its drawback is that edge-point extraction usually introduces noise, which makes the localization result inaccurate. See: Richard P. Wildes, "Iris Recognition: An Emerging Biometric Technology," Proceedings of the IEEE, vol. 85, pp. 1348-1363, 1997.
Existing iris quality assessment methods include:
(1) Methods based on the fast Fourier transform. A two-dimensional fast Fourier transform is applied to the pixels in two rectangular blocks of the iris region, and the statistics of the high-, middle-, and low-frequency energy indicate whether the image is clear and whether eyelashes occlude it. This model does not generalize well: clear iris images with little texture are easily misjudged as low quality. See: Li Ma, Tieniu Tan, Yunhong Wang, Dexin Zhang, "Personal Identification Based on Iris Texture Analysis," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 12, pp. 1519-1533.
(2) based on the method for WAVELET PACKET DECOMPOSITION.It is chosen the texture high fdrequency component and distributes the most concentrated sub-band as the feature sub-band, with the criterion of its energy as differentiation picture quality.The shortcoming of this method is can't judge because of eyelashes to block in-problem iris image.See document for details: Chen Ji, Hu Guangshu, " Iris Image Quality Evaluation based onWavelet Packet Decomposition, " Journal of Tsinghua University (Sci ﹠amp; Tech), volume 43, no.3, pp.377-380,2003.
All of the above iris image preprocessing algorithms have problems to some extent: the localization algorithms are time-consuming, easily disturbed by eyelash occlusion, and not very stable; and the quality assessment methods do not generalize well.
Summary of the invention
The task of the present invention is to provide an iris localization method based on gray gradient and curve fitting that locates accurately even under eyelash occlusion, and on this basis to establish an iris image quality assessment model with strong generality.
To describe the content of the invention conveniently, some terms are first defined.
Definition 1: iris. The center of the eyeball is the black pupil; the annular tissue between the pupil and the outer boundary is the iris. It presents interlaced texture features resembling spots, filaments, stripes, and crypts. A person's iris hardly changes over a lifetime, and the irises of different people are completely different.
Definition 2: gray-level image. An image that contains only luminance information and no other color information.
Definition 3: binarization threshold. The gray threshold chosen when an image is binarized.
Definition 4: binarization. The process of converting all values of an image into only two values, generally 0 and 1 or 0 and 255. When a pixel value is greater than or equal to the binarization threshold, it is set to 1 (or 255); when it is less than the threshold, it is set to 0.
Definition 5: mathematical morphology. Image analysis and recognition that measures and extracts corresponding shapes in an image with a structuring element of a given form. Mathematical morphology has four basic operations: dilation, erosion, opening, and closing. Dilation and erosion are defined as:

A ⊕ B = { x | (B̂)_x ∩ A ≠ Ø } and A Θ B = { x | (B)_x ⊆ A }

Opening and closing are defined as:

A ○ B = (A Θ B) ⊕ B and A • B = (A ⊕ B) Θ B

where A is the image set, B is the structuring element, B̂ denotes the reflection of B about the origin, ( )_x denotes translation by x, ∩ denotes intersection, Ø denotes the empty set, ⊆ denotes inclusion, ⊕ is the dilation operator, Θ is the erosion operator, ○ is the opening operator, and • is the closing operator.
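The four operations of Definition 5 can be sketched as follows on small binary images stored as lists of 0/1 rows. The structuring element is given as a set of (dx, dy) offsets; the 3 × 3 cross used here is an illustrative choice, not the 7 × 7 element the patent specifies, and since it is symmetric the reflection B̂ equals B.

```python
CROSS = {(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)}  # symmetric 3x3 cross

def dilate(img, se=CROSS):
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # A dilated by B: the pixel is set if any translated element of B hits A
            if any(0 <= y + dy < h and 0 <= x + dx < w and img[y + dy][x + dx]
                   for dx, dy in se):
                out[y][x] = 1
    return out

def erode(img, se=CROSS):
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # A eroded by B: the pixel survives only if B translated to it fits inside A
            if all(0 <= y + dy < h and 0 <= x + dx < w and img[y + dy][x + dx]
                   for dx, dy in se):
                out[y][x] = 1
    return out

def opening(img, se=CROSS):
    return dilate(erode(img, se), se)   # erosion followed by dilation

def closing(img, se=CROSS):
    return erode(dilate(img, se), se)   # dilation followed by erosion
```

Closing a dark binary blob fills small interior holes, which is exactly how step 3 below uses it to remove specular-highlight cavities inside the binarized pupil.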
Definition 6: gray projection. Gray projection maps the two-dimensional space to the one-dimensional space and is divided into horizontal gray projection and vertical gray projection. The horizontal gray projection accumulates the gray values of the two-dimensional image into a one-dimensional signal indexed by the abscissa:

S_h(x) = Σ_{y=1}^{N} I(x, y)

The vertical gray projection accumulates them into a one-dimensional signal indexed by the ordinate:

S_v(y) = Σ_{x=1}^{M} I(x, y)

where S_h(x) is the gray projection value at abscissa x, S_v(y) is the gray projection value at ordinate y, M and N are the width and height of the image, and I(x, y) is the gray value of the pixel at position (x, y).
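A minimal sketch of Definition 6, plus the way steps 4-5 below use it: because the pupil is the darkest region, the rough pupil center can be read off as the minima of the two projections. The image is a list of rows; function names are illustrative.

```python
def gray_projections(img):
    """Return (S_h, S_v): column sums indexed by x and row sums indexed by y."""
    height, width = len(img), len(img[0])
    s_h = [sum(img[y][x] for y in range(height)) for x in range(width)]
    s_v = [sum(img[y][x] for x in range(width)) for y in range(height)]
    return s_h, s_v

def rough_pupil_center(img):
    """Rough pupil center (x_o, y_o): abscissa/ordinate of the projection minima."""
    s_h, s_v = gray_projections(img)
    return s_h.index(min(s_h)), s_v.index(min(s_v))
```

On a bright image with a dark block, both minima fall on the dark block, which is the coarse localization the patent relies on before any boundary search.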
Definition 7: pupil boundary points. The points located on the outer edge of the pupil, i.e., the inner edge of the iris.
Definition 8: circle fitting. Given the coordinates of a series of points, establish a circular curve equation reflecting their positions. Specifically, the equation of the circle is x² + y² + c·x + d·y + e = 0, where c, d, and e are parameters determining the radius and center of the circle and (x, y) is a point on the curve. The best-fitting circle with respect to these points is the one that minimizes the sum of squared errors:

ε² = Σ_i (x_i² + y_i² + c·x_i + d·y_i + e)²

where ε² is the sum of squared errors and (x_i, y_i) are the coordinates of the known points.
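Because the residual of Definition 8 is linear in c, d, and e, the fit reduces to a 3 × 3 linear system. The sketch below assembles the normal equations and solves them with plain Gaussian elimination; it is a pure-Python illustration under that least-squares formulation, not production code.

```python
import math

def fit_circle(points):
    """Least-squares fit of x^2 + y^2 + c*x + d*y + e = 0 to (x, y) points.
    Returns the center (a, b) and radius r of the fitted circle."""
    # Normal equations M @ [c, d, e] = rhs for residuals x^2 + y^2 + c*x + d*y + e.
    M = [[0.0] * 3 for _ in range(3)]
    rhs = [0.0] * 3
    for x, y in points:
        row = [x, y, 1.0]
        t = -(x * x + y * y)
        for i in range(3):
            for j in range(3):
                M[i][j] += row[i] * row[j]
            rhs[i] += row[i] * t
    # Gaussian elimination with partial pivoting on the 3x3 system.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for j in range(col, 3):
                M[r][j] -= f * M[col][j]
            rhs[r] -= f * rhs[col]
    sol = [0.0] * 3
    for r in (2, 1, 0):
        sol[r] = (rhs[r] - sum(M[r][j] * sol[j] for j in range(r + 1, 3))) / M[r][r]
    c, d, e = sol
    cx, cy = -c / 2.0, -d / 2.0          # center from c, d
    return (cx, cy), math.sqrt(cx * cx + cy * cy - e)   # radius from c, d, e
```

Four exact points of the circle centered at (3, 4) with radius 5 recover those parameters, which is the operation steps 8 and 13 apply to the detected boundary points.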
Definition 9: horizontal first-order difference. Within a row of an image, subtracting the gray value of a preceding pixel from that of a following pixel (or vice versa) gives the horizontal first-order difference of that row. The horizontal first-order difference emphasizes the vertical edge information of the image and facilitates edge extraction.
Definition 10: outer iris boundary points. The iris is an annular region; the points located on its outer edge are called outer iris boundary points.
Definition 11: normalization. Stretching the annular iris region into a rectangular region of fixed size, to eliminate the influence of factors such as shooting distance and pupil contraction on recognition. The computation is:

x(r, θ) = (1 − r)·x_p(θ) + r·x_i(θ)
y(r, θ) = (1 − r)·y_p(θ) + r·y_i(θ)

where r is distributed over the interval [0, 1], θ is distributed over the interval [0, 2π], and (x_p(θ), y_p(θ)) and (x_i(θ), y_i(θ)) are the inner and outer iris boundary points in the direction θ, respectively.
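Definition 11 can be sketched by sampling the annulus between the two fitted circles on a small (r, θ) grid. Nearest-neighbour sampling and the output size are illustrative assumptions; the patent leaves the sampling scheme open.

```python
import math

def normalize_iris(img, pupil, r_p, iris, r_i, n_theta=8, n_r=4):
    """Unwrap the annulus between the pupil circle (center `pupil`, radius r_p)
    and the outer iris circle (center `iris`, radius r_i) into an
    n_r x n_theta rectangle via the linear interpolation of Definition 11."""
    h, w = len(img), len(img[0])
    out = []
    for k in range(n_r):
        r = k / (n_r - 1) if n_r > 1 else 0.0          # r in [0, 1]
        row = []
        for t in range(n_theta):
            theta = 2.0 * math.pi * t / n_theta        # theta in [0, 2*pi)
            xp = pupil[0] + r_p * math.cos(theta)      # inner boundary point
            yp = pupil[1] + r_p * math.sin(theta)
            xi = iris[0] + r_i * math.cos(theta)       # outer boundary point
            yi = iris[1] + r_i * math.sin(theta)
            x = (1 - r) * xp + r * xi                  # x(r, theta)
            y = (1 - r) * yp + r * yi                  # y(r, theta)
            xn = min(w - 1, max(0, round(x)))          # nearest-neighbour sample
            yn = min(h - 1, max(0, round(y)))
            row.append(img[yn][xn])
        out.append(row)
    return out
```

On a synthetic gradient image, the first output row samples the inner circle and the last row the outer circle, as the interpolation formula prescribes.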
Definition 12: normalized iris image. The rectangular image obtained after normalizing the original iris image.
Definition 13: 8-neighborhood. For a pixel p at coordinates (x, y), its four horizontal and vertical neighbors are (x+1, y), (x−1, y), (x, y+1), and (x, y−1), and its four diagonal neighbors are (x+1, y+1), (x+1, y−1), (x−1, y+1), and (x−1, y−1). These eight pixels together are called the 8-neighborhood of p.
Definition 14: point sharpness. An operator used to estimate the clarity of a digital image, with the mathematical form:

f = (1 / (m·n)) · Σ_pixels Σ_{8-neighborhood} |dI| / dx

where dI is the difference between the gray value of a pixel in a point's 8-neighborhood and the gray value of that point, dx is the distance between the adjacent points, and m and n are respectively the height and width of the image.
Definition 15: effective pixels. The pixels located within the iris region of the normalized iris image, as distinguished from invalid pixels such as eyelashes and eyelids. An effective pixel must satisfy V_lash ≤ I(x, y) ≤ V_eyelid, where V_lash and V_eyelid are the thresholds deciding whether a pixel lies in the eyelash region or the eyelid region, and I(x, y) is the gray value of the image.
Definition 16: visibility. The degree to which the iris texture is visible in the original iris image; the main factor affecting visibility is occlusion of the iris region by eyelids and eyelashes.
The iris image preprocessing algorithm according to the invention comprises the following steps:

Step 1. Capture an image of the iris of a human eye with a camera device, obtaining an original gray-level image containing the iris.

Step 2. Choose a fixed threshold V_b and binarize the original iris image.
Step 3. Apply the morphological closing operation to the binary image obtained in step 2 to eliminate small holes in it. Specifically, the closing operation is:

A • B = (A ⊕ B) Θ B

i.e., the original image A is first dilated with the structuring element B and then eroded. The structuring element B is a 7 × 7 matrix whose elements inside the central approximately circular area are 1 and whose remaining elements are 0. Here • is the closing operator, ⊕ is the dilation operator, and Θ is the erosion operator.
Step 4. Compute the horizontal and vertical gray projections of the image obtained in step 3. The horizontal projection is:

S_h(x) = Σ_{y=1}^{N} I(x, y)

and the vertical gray projection is:

S_v(y) = Σ_{x=1}^{M} I(x, y)

where S_h(x) is the gray projection value at abscissa x, S_v(y) is the gray projection value at ordinate y, M and N are the width and height of the image, and I(x, y) is the gray value of the pixel at position (x, y).
Step 5. Search for the abscissa x_o at which the horizontal gray projection S_h(x) of step 4 attains its minimum and the ordinate y_o at which the vertical gray projection S_v(y) attains its minimum, and regard (x_o, y_o) as the rough center of the pupil.
Step 6. In the row of ordinate y_o, starting from (x_o, y_o), search horizontally leftwards for a pixel whose gray value exceeds T; stop as soon as such a pixel is found and record its coordinates (x_l, y_o) as a pupil boundary point. Then search rightwards in the same way, obtaining another boundary point (x_r, y_o).
Step 7. Take several rows near the point (x_o, y_o) and search each of them for pupil boundary points with the same method used for row y_o in step 6, finally obtaining the coordinates of a series of pupil boundary points.
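The search of steps 6-7 can be sketched as follows: from the rough center, scan each of a few nearby rows outwards in both directions until the gray value first exceeds T. The threshold T and the number of rows are free parameters of the method; the values used here are illustrative.

```python
def pupil_boundary_points(img, x_o, y_o, T, rows=1):
    """Collect pupil boundary points on the rows y_o - rows .. y_o + rows:
    scanning left and right from x_o, the first pixel brighter than T on each
    side is recorded as a boundary point (x, y)."""
    h, w = len(img), len(img[0])
    points = []
    for y in range(max(0, y_o - rows), min(h, y_o + rows + 1)):
        for step in (-1, 1):                      # leftwards, then rightwards
            x = x_o
            while 0 <= x < w and img[y][x] <= T:  # still inside the dark pupil
                x += step
            if 0 <= x < w:                        # first pixel with gray > T
                points.append((x, y))
    return points
```

The resulting point list is what step 8 feeds into the circle fit of Definition 8.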
Step 8. Because the inner edge of the pupil closely resembles a circle, fit a circle to the series of pupil boundary points obtained in step 7. Specifically, the equation of the circle is x² + y² + c·x + d·y + e = 0, where c, d, and e are parameters determining the radius and center and (x, y) is a point on the curve; the best-fitting circle minimizes the sum of squared errors:

ε² = Σ_i (x_i² + y_i² + c·x_i + d·y_i + e)²

where ε² is the sum of squared errors and (x_i, y_i) are the coordinates of the known points. This finally yields the accurate pupil center (x_p, y_p) and radius r_p.
Step 9. Compute the horizontal first-order difference of the row containing the point (x_p, y_p). The computation is:

D(x, y_p) = I(x + 5, y_p) − I(x, y_p) when x_p < x < N − 5;
D(x, y_p) = I(x − 5, y_p) − I(x, y_p) when 5 < x ≤ x_p;

where D(x, y_p) is the horizontal first-order difference value at the coordinate point (x, y_p), I(x, y) is the gray value at the coordinate point (x, y), and N is the image width.
Step 10. In the row of ordinate y_p, over the interval [x_p + r_p + 20, x_p + r_p + 100], compute for each point the sum of the horizontal first-order difference values of that point and the 20 points after it. Specifically, when x_p + r_p + 20 < x < x_p + r_p + 100:

S(x, y_p) = Σ_{i=0}^{19} D(x + i, y_p)

where D(x + i, y_p) is the horizontal first-order difference value of the point (x + i, y_p) obtained in step 9. Within this interval, the point (x, y_p) at which S(x, y_p) attains its maximum is taken as an outer iris boundary point.
Step 11. In the row of ordinate y_p, over the interval [x_p − r_p − 100, x_p − r_p − 20], compute for each point the sum of the horizontal first-order difference values of that point and the 20 points before it. Specifically, when x_p − r_p − 100 < x < x_p − r_p − 20:

S(x, y_p) = Σ_{i=0}^{19} D(x − i, y_p)

where D(x − i, y_p) is the horizontal first-order difference value of the point (x − i, y_p) obtained in step 9. Within this interval, the point (x, y_p) at which S(x, y_p) attains its maximum is taken as an outer iris boundary point.
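The right-hand search of steps 9-10 can be sketched as follows: difference the row with a stride of 5, then slide a 20-point window over the candidate interval and keep the coordinate with the largest windowed sum. The stride, window, and interval offsets follow the patent's text; the step-edge row used in the usage example is a synthetic illustration.

```python
def right_outer_boundary(row, x_p, r_p, stride=5, window=20):
    """Return the x in [x_p + r_p + 20, x_p + r_p + 100] maximizing the sum of
    the horizontal first-order differences of that point and the `window`
    points after it (steps 9-10, right-hand side only)."""
    n = len(row)
    # D(x) = I(x + stride) - I(x) on the right-hand side of the pupil centre
    diff = {x: row[x + stride] - row[x] for x in range(x_p + 1, n - stride)}
    lo, hi = x_p + r_p + 20, x_p + r_p + 100
    best_x, best_s = None, None
    for x in range(lo + 1, min(hi, n - stride - window)):
        s = sum(diff[x + i] for i in range(window))
        if best_s is None or s > best_s:
            best_x, best_s = x, s
    return best_x
```

Note that on an idealized hard step edge the windowed sum plateaus over the windows that fully cover the step, so the maximizing x lies somewhat before the step itself; on real iris images the iris-to-sclera transition is gradual, which is the situation the patent's windowed sum is designed for.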
Step 12. Take several rows near the point (x_p, y_p) and search each of them for outer iris boundary points with the same method used for row y_p in steps 9, 10, and 11, finally obtaining the coordinates of a series of outer iris boundary points.
Step 13. Because the outer edge of the iris also closely resembles a circle, fit a circle, as in step 8, to the series of outer iris boundary points obtained in step 12, obtaining the accurate center (x_i, y_i) and radius r_i of the outer iris boundary.
Step 14. Normalize the localized iris region. The computation is:

x(r, θ) = (1 − r)·x_p(θ) + r·x_i(θ)
y(r, θ) = (1 − r)·y_p(θ) + r·y_i(θ)

where r is distributed over the interval [0, 1], θ is distributed over the interval [0, 2π], and (x_p(θ), y_p(θ)) and (x_i(θ), y_i(θ)) are the inner and outer iris boundary points in the direction θ, respectively.
Step 15. Compute the point sharpness of the M × N normalized iris image obtained in step 14:

f = (1 / (m·n)) · Σ_pixels Σ_{8-neighborhood} |dI| / dx

where dI is the difference between the gray value of a pixel in a point's 8-neighborhood and the gray value of that point, dx is the distance between the adjacent points, and m and n are respectively the height and width of the image.
Step 16. Compare the point sharpness value f from step 15 with the predefined threshold V_f used to judge whether an iris image is clear. If f ≥ V_f, the sharpness of the image is considered to satisfy the requirement of the system; otherwise it does not.
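The point sharpness of Definition 14 / step 15 can be sketched as below. The exact normalization of the patent's (missing) formula is assumed: each 8-neighbor difference is divided by the neighbor distance (1 axial, √2 diagonal) and the total is averaged over all pixels.

```python
import math

def point_sharpness(img):
    """Average over all pixels of |dI| / dx to each in-bounds 8-neighbour,
    where dx is 1 for axial and sqrt(2) for diagonal neighbours."""
    h, w = len(img), len(img[0])
    total = 0.0
    for y in range(h):
        for x in range(w):
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    if (dx or dy) and 0 <= y + dy < h and 0 <= x + dx < w:
                        dist = math.sqrt(2) if dx and dy else 1.0
                        total += abs(img[y + dy][x + dx] - img[y][x]) / dist
    return total / (h * w)
```

A uniform image scores 0, and a high-contrast image scores higher than a mostly flat one, which matches the use of f against the threshold V_f in step 16.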
Step 17. Count the number of effective pixels in the normalized iris image:

K = Σ_{x,y} g(x, y), where g(x, y) = 1 if V_lash ≤ I(x, y) ≤ V_eyelid, and g(x, y) = 0 otherwise,

where V_lash and V_eyelid are the thresholds deciding whether a pixel lies in the eyelash region or the eyelid region, and I(x, y) is the gray value of the image.
Step 18. Compare the number K of effective pixels obtained in step 17 with the predefined threshold V_k used to judge whether the iris image suffers from eyelid and eyelash occlusion. If K ≥ V_k, the visibility of the iris image is considered to satisfy the requirement of the system; otherwise it does not.
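Steps 17-18 reduce to a count and a threshold comparison, sketched below; all threshold values in the usage are illustrative assumptions, since the patent obtains them from device-specific testing.

```python
def count_effective_pixels(img, v_lash, v_eyelid):
    """Number of pixels whose gray value lies in [v_lash, v_eyelid]
    (effective iris pixels per Definition 15)."""
    return sum(1 for row in img for g in row if v_lash <= g <= v_eyelid)

def visibility_ok(img, v_lash, v_eyelid, v_k):
    """Step 18: visibility passes when the effective pixel count reaches v_k."""
    return count_effective_pixels(img, v_lash, v_eyelid) >= v_k
```

For example, with V_lash = 50 and V_eyelid = 200, a 2 × 2 patch [[20, 100], [150, 240]] has two effective pixels, so it passes for V_k = 2 but not V_k = 3.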
Through the above steps, a normalized iris image is extracted from the original image containing the iris, and it is judged whether this image satisfies the requirements of the system.
It should be noted that:
1. The binarization in step 2 uses a fixed threshold V_b, obtained through extensive testing. A fixed threshold is chosen because the gray values of the pupil region and the iris region differ greatly, so the binarization remains effective even for iris images taken under different illumination conditions.
2. The rough pupil center (x_o, y_o) located in step 5 determines the range over which the iris boundary point search is carried out.
3. In step 6, a point whose gray value exceeds T is regarded as a pupil boundary point because the gray value increases markedly at the pupil's edge; once it exceeds a certain value, the pupil edge has been reached.
4. In step 5, because the projection curves obtained from step 4 contain many burrs, which hinder accurate localization, the projection values must be smoothed with a Gaussian function.
5. The size of the normalized iris image mentioned in step 15 is M × N, where M is determined by the spacing of θ used in the normalization operation and N by the spacing of r.
6. The point sharpness value in step 15 mainly characterizes the clarity of the image: the larger f is, the clearer the image; the smaller f is, the blurrier the image.
7. The threshold V_f in step 16 is obtained by testing a large number of iris images from the same capture device; this value can accurately separate clear and blurred iris images.
8. Regarding V_lash and V_eyelid in step 17: pixels with gray values below V_lash are considered eyelash-region pixels, pixels with gray values above V_eyelid are considered eyelid-region pixels, and pixels with gray values between V_lash and V_eyelid are considered iris-region pixels.
9. The larger the effective pixel number K obtained in step 17, the larger the unoccluded iris region; the smaller K, the more serious the eyelid and eyelash occlusion.
The present invention combines boundary-point search with circle fitting. It first achieves coarse localization of the pupil center through binarization, erosion, dilation, and gray projection; it then searches out boundary points through single gray-value comparison and gray-difference statistics and fits curves to them; finally, from the resulting normalized iris image, it assesses image quality in terms of both sharpness and visibility. The proposed combination of boundary-point search and curve fitting effectively improves iris localization precision, and the proposed quality assessment based on point sharpness and effective pixel count improves the generality of traditional quality assessment algorithms.
The innovations of the present invention are:
The method makes full use of the gray-level information of the iris image together with curve fitting: the coordinates of the inner and outer iris boundary points are obtained and fitted, yielding the position of the iris region and thereby separating the iris; and the quality of the iris image is correctly evaluated from the gray-level information of the normalized iris image. The invention first applies the projection method, computing the horizontal and vertical gray projections of the binarized iris image after specular-highlight filling, to obtain the rough pupil center. By scanning the horizontal gray curve, points whose gray value exceeds a certain threshold are found as inner iris boundary points, and a circle is fitted to them to obtain the position of the inner iris edge. Afterwards, by summing the horizontal first-order difference over 20 adjacent points, the position where the sum is maximal is taken as an outer iris boundary point, and a circle is again fitted to these boundary points to obtain the position of the outer iris edge. Localizing the iris region by combining gray-level information with circle fitting is a characteristic of the invention: compared with the common two-step iris localization method, the localization accuracy of the invention is 5 percentage points higher and the speed is 60% faster. For quality evaluation, the invention computes the point sharpness and the effective pixel count of the normalized iris image and compares them with predefined thresholds, thereby evaluating iris image quality correctly and with strong generality.
Description of drawings
Fig. 1 is an original image containing an iris;
wherein 1 denotes the pupil; 2 the iris; 3 the specular highlight in the pupil; 4 the inner edge of the iris; and 5 the outer edge of the iris.
Fig. 2 is a schematic diagram of the 8-neighborhood of a pixel p;
wherein r denotes the horizontal and vertical immediate-neighbor pixels and s the diagonal neighbor pixels.
Fig. 3 shows a localization result of the method of the invention.
Embodiment
Using the method of the invention, the iris preprocessing program is first written in C and assembly language; an original iris image is then captured automatically by a CMOS or CCD camera; the captured image is fed as source data into the iris preprocessing program running on a DSP embedded system; and after iris localization and image quality assessment, a normalized iris image rich in texture information is output for images of acceptable quality. Using 2400 gray-level iris images of different people, taken under different illumination conditions and shooting postures, as source data, the localization accuracy is 97.5%, and localizing one image takes only 100 ms.
In summary, the method of the invention makes full use of the gray-level information of the iris image, combined with circle fitting, to extract the iris region from the original iris image quickly and accurately and to make an accurate quality evaluation.
Claims (2)
1. An iris recognition preprocessing method based on gray-level information, characterized by comprising the steps of:
Step 1. Capturing an image of the iris of a human eye with a camera device, obtaining an original gray-level image containing the iris;
Step 2. Choosing a fixed threshold V_b and binarizing the original iris image;
Step 3. Applying the morphological closing operation to the binary image obtained in step 2 to eliminate small holes in it;
Step 4. Computing the horizontal gray projection S_h(x) and vertical gray projection S_v(y) of the image obtained in step 3; searching for the abscissa x_o at which S_h(x) attains its minimum and the ordinate y_o at which S_v(y) attains its minimum; and regarding (x_o, y_o) as the rough center of the pupil;
Step 5. On the row containing the point (x_o, y_o) and several nearby rows, searching out the coordinates of a series of pupil boundary points by gray-value comparison;
Step 6. Because the inner edge of the pupil closely resembles a circle, fitting a circle to the series of pupil boundary points obtained in step 5, finally obtaining the accurate pupil center (x_p, y_p) and radius r_p;
Step 7. Computing the horizontal first-order difference of the row containing (x_p, y_p); and, on the row of ordinate y_p and nearby rows, searching the interval where the outer iris boundary may occur for the coordinates at which the sum of horizontal first-order difference values attains its maximum, thereby obtaining a series of outer iris boundary points;
Step 8. Because the outer edge of the iris also closely resembles a circle, fitting a circle, as in step 6, to the series of outer iris boundary points obtained in step 7, obtaining the accurate center (x_i, y_i) and radius r_i of the outer iris boundary;
Step 9. Normalizing the localized iris region;
Step 10. Computing the point sharpness f of the normalized iris image obtained in step 9, and comparing it with the predefined threshold V_f used to judge whether an iris image is clear; if f ≥ V_f, the sharpness of the image is considered to satisfy the requirement of the system; otherwise it does not;
Step 11. Count the number K of effective pixels in the normalized iris image, and compare K with the predefined threshold V_k used to judge whether the iris image suffers from eyelid and eyelash occlusion; if K ≥ V_k, the visibility of the iris image is considered to meet the system's requirements; otherwise it is considered not to;
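Counting effective pixels as in step 11 might look like the following sketch (the thresholds, image size, and the fraction used for V_k are invented for illustration; the patent does not specify them):

```python
import numpy as np

def count_effective_pixels(norm_iris, low=30, high=220):
    """Count pixels likely to carry iris texture in a normalized iris image.

    Very dark pixels (eyelashes) and very bright ones (eyelids, specular
    highlights) are treated as occluded; the gray thresholds here are
    illustrative, not taken from the patent.
    """
    mask = (norm_iris > low) & (norm_iris < high)
    return int(np.count_nonzero(mask))

# 64 x 512 normalized strip: mostly usable texture plus an "eyelash" band.
norm = np.full((64, 512), 120, dtype=np.uint8)
norm[:8, :] = 10
K = count_effective_pixels(norm)
V_k = 0.8 * norm.size   # an illustrative visibility threshold
visible = K >= V_k      # step 11's accept/reject decision
```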
Through the above steps, a normalized iris image is extracted from the original image containing the iris, and it is judged whether this image meets the system's requirements.
2. The iris recognition preprocessing method based on gray-level information according to claim 1, characterized in that:
Step 5. In the row with ordinate y_o, starting from (x_o, y_o), search horizontally to the left for a pixel whose gray value is greater than T; as soon as such a pixel is found, stop the search and record its coordinate (x_l, y_o) as a pupil boundary point; then search horizontally to the right in the same way, obtaining another boundary point (x_r, y_o). Take several rows near (x_o, y_o) and perform the same pupil boundary point search on each of them as on row y_o, finally obtaining the coordinates of a series of pupil boundary points;
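The left/right threshold search of this step can be sketched as follows (`pupil_boundary_row` is a hypothetical helper run on a synthetic image, not the patent's own code; it assumes the pupil interior around the start point is darker than T):

```python
import numpy as np

def pupil_boundary_row(img, x_o, y, T):
    """Scan left and right from (x_o, y) along row y and return the first
    pixel on each side whose gray value exceeds T (claim 2, step 5)."""
    row = img[y]
    x_l = x_o
    while x_l > 0 and row[x_l] <= T:   # walk left while still inside the pupil
        x_l -= 1
    x_r = x_o
    while x_r < len(row) - 1 and row[x_r] <= T:  # walk right likewise
        x_r += 1
    return (x_l, y), (x_r, y)

# Synthetic test image: dark pupil disc (gray 20) on a bright background (200).
img = np.full((120, 160), 200, dtype=np.uint8)
yy, xx = np.ogrid[:120, :160]
img[(xx - 80) ** 2 + (yy - 60) ** 2 <= 20 ** 2] = 20
left, right = pupil_boundary_row(img, 80, 60, T=100)  # → (59, 60), (101, 60)
```

Repeating the call on a few neighboring rows yields the point series that step 6 fits with a circle.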
Step 7. (1) Compute the horizontal first-order difference along the row containing (x_p, y_p), as follows: when x_p < x < N − 5, D(x, y_p) = I(x + 5, y_p) − I(x, y_p); when 5 < x ≤ x_p, D(x, y_p) = I(x − 5, y_p) − I(x, y_p); where D(x, y_p) is the horizontal first-order difference value at point (x, y_p), I(x, y) is the gray value at point (x, y), and N is the image width;
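The two-branch difference formula can be written out directly (a sketch; `horizontal_first_difference` and the step-edge test row are illustrative, and positions covered by neither branch are simply left at zero):

```python
import numpy as np

def horizontal_first_difference(img, y_p, x_p, step=5):
    """Horizontal first-order difference along row y_p (claim 2, step 7(1)):
    for x_p < x < N - step:  D(x) = I(x + step) - I(x)
    for step < x <= x_p:     D(x) = I(x - step) - I(x)
    """
    row = img[y_p].astype(int)   # signed, so differences may be negative
    N = len(row)
    D = np.zeros(N, dtype=int)
    for x in range(x_p + 1, N - step):   # right of the pupil center
        D[x] = row[x + step] - row[x]
    for x in range(step + 1, x_p + 1):   # left of the pupil center
        D[x] = row[x - step] - row[x]
    return D

# A single-row image with a step edge at x = 15.
img = np.zeros((1, 30), dtype=np.uint8)
img[0, 15:] = 100
D = horizontal_first_difference(img, y_p=0, x_p=5)
# D peaks where the 5-pixel difference straddles the edge, e.g. D[10] == 100.
```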
(2) In the row with ordinate y_p, over the interval [x_p + r_p + 20, x_p + r_p + 100], compute for each point the sum of the horizontal first-order difference values of that point and the 20 points after it; that is, when x_p + r_p + 20 < x < x_p + r_p + 100, S(x, y_p) = D(x, y_p) + D(x + 1, y_p) + … + D(x + 20, y_p), where D(x + i, y_p) is the horizontal first-order difference value obtained in step (1); within this interval, take the coordinate point at which S(x, y_p) is maximal as an outer iris boundary point. Then, over the interval [x_p − r_p − 100, x_p − r_p − 20], compute for each point the sum of the horizontal first-order difference values of that point and the 20 points before it; that is, when x_p − r_p − 100 < x < x_p − r_p − 20, S(x, y_p) = D(x, y_p) + D(x − 1, y_p) + … + D(x − 20, y_p), where D(x − i, y_p) is the horizontal first-order difference value obtained in step (1); within this interval, take the coordinate point at which S(x, y_p) is maximal as an outer iris boundary point;
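The windowed-sum search on the right-hand interval can be sketched as follows (the left-hand interval is symmetric; the helper name and test data are illustrative):

```python
import numpy as np

def outer_boundary_right(D, x_p, r_p):
    """Locate the right outer-iris boundary point on one row (claim 2,
    step 7(2)): over x_p + r_p + 20 < x < x_p + r_p + 100, sum the
    horizontal first differences D of each point and the 20 points
    after it, and return the x with the largest sum."""
    lo, hi = x_p + r_p + 20, x_p + r_p + 100
    best_x, best_s = None, None
    for x in range(lo + 1, hi):
        s = int(sum(D[x + i] for i in range(21)))  # the point plus 20 following
        if best_s is None or s > best_s:
            best_x, best_s = x, s
    return best_x

# First differences with a 21-point ridge at the true boundary x = 140.
D = np.zeros(300, dtype=int)
D[140:161] = 10
x_edge = outer_boundary_right(D, x_p=50, r_p=30)  # → 140
```

Summing a 21-point window rather than taking a single difference makes the search robust to the gradual pupil-to-sclera transition and to pixel noise.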
(3) Take several rows near the pupil center (x_p, y_p) and perform the same outer iris boundary point search on each of them as on row y_p in step 7, finally obtaining the coordinates of a series of outer iris boundary points;
Step 10. Compute the point sharpness of the M × N normalized iris image obtained in step 9, as f = (the sum, over all pixels and over each pixel's 8-neighborhood, of |dI| / dx) / (m × n), where dI is the difference between the gray value of a pixel in the 8-neighborhood of a point and the gray value of that point, dx is the distance between the adjacent points, and m and n are the height and width of the image;
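Since the patent's formula image is not reproduced in this text, the sketch below uses the commonly cited point-sharpness definition, which matches the stated meanings of dI, dx, m and n (image borders wrap here for brevity):

```python
import numpy as np

def point_sharpness(img):
    """Point sharpness of a grayscale image: for every pixel, accumulate
    |dI| / dx over its 8-neighborhood (dx = 1 for axial neighbors,
    sqrt(2) for diagonal ones) and average over the m x n pixels."""
    I = img.astype(float)
    m, n = I.shape
    total = 0.0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue  # skip the pixel itself
            dist = float(np.hypot(dy, dx))
            shifted = np.roll(np.roll(I, dy, axis=0), dx, axis=1)
            total += np.abs(shifted - I).sum() / dist
    return total / (m * n)

flat = np.full((10, 10), 50, dtype=np.uint8)
checker = (np.indices((10, 10)).sum(axis=0) % 2) * 255
# A flat image has zero sharpness; a high-contrast pattern scores far higher.
```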
Step 11: Compare the point sharpness value f of the normalized iris image obtained in step 9 with the predefined threshold V_f used to judge whether the iris image is clear; if f ≥ V_f, the sharpness of the image is considered to meet the system's requirements; otherwise it is considered not to.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CNA2006101225281A CN101201893A (en) | 2006-09-30 | 2006-09-30 | Iris recognizing preprocessing method based on grey level information |
Publications (1)
Publication Number | Publication Date |
---|---|
CN101201893A true CN101201893A (en) | 2008-06-18 |
Family
ID=39517052
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CNA2006101225281A Pending CN101201893A (en) | 2006-09-30 | 2006-09-30 | Iris recognizing preprocessing method based on grey level information |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN101201893A (en) |
Cited By (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101930543A (en) * | 2010-08-27 | 2010-12-29 | 南京大学 | Method for adjusting eye image in self-photographed video |
CN101576951B (en) * | 2009-05-20 | 2011-11-09 | 电子科技大学 | Iris external boundary positioning method based on shades of gray and classifier |
CN102332098A (en) * | 2011-06-15 | 2012-01-25 | 夏东 | Method for pre-processing iris image |
US20140022371A1 (en) * | 2012-07-20 | 2014-01-23 | Pixart Imaging Inc. | Pupil detection device |
CN103605959A (en) * | 2013-11-15 | 2014-02-26 | 武汉虹识技术有限公司 | A method for removing light spots of iris images and an apparatus |
CN104166848A (en) * | 2014-08-28 | 2014-11-26 | 武汉虹识技术有限公司 | Matching method and system applied to iris recognition |
CN104463159A (en) * | 2014-12-31 | 2015-03-25 | 北京释码大华科技有限公司 | Image processing method and device of iris positioning |
CN105260725A (en) * | 2015-10-23 | 2016-01-20 | 北京无线电计量测试研究所 | Iris recognition system |
CN105389574A (en) * | 2015-12-25 | 2016-03-09 | 成都品果科技有限公司 | Method and system for detecting human eye irises in pictures |
CN105488487A (en) * | 2015-12-09 | 2016-04-13 | 湖北润宏科技有限公司 | Iris positioning method and device |
CN105574865A (en) * | 2015-12-14 | 2016-05-11 | 沈阳工业大学 | Method for extracting eyelashes based on improved ant colony algorithm |
WO2016150239A1 (en) * | 2015-03-24 | 2016-09-29 | 北京天诚盛业科技有限公司 | Method and apparatus for screening iris images |
CN106203358A (en) * | 2016-07-14 | 2016-12-07 | 北京无线电计量测试研究所 | A kind of iris locating method and equipment |
CN106419830A (en) * | 2016-11-10 | 2017-02-22 | 任秋生 | Method for measuring diameters of pupils |
CN106650616A (en) * | 2016-11-09 | 2017-05-10 | 北京巴塔科技有限公司 | Iris location method and visible light iris identification system |
CN106778631A (en) * | 2016-12-22 | 2017-05-31 | 江苏大学 | The quick heterogeneous iris classification device method for designing for filtering false iris in a kind of iris recognition preprocessing process |
WO2017092679A1 (en) * | 2015-12-02 | 2017-06-08 | 中国银联股份有限公司 | Eyeball tracking method and apparatus, and device |
US9854159B2 (en) | 2012-07-20 | 2017-12-26 | Pixart Imaging Inc. | Image system with eye protection |
WO2018108124A1 (en) * | 2016-12-15 | 2018-06-21 | 腾讯科技(深圳)有限公司 | Method and system for positioning pupil |
CN108288248A (en) * | 2018-01-02 | 2018-07-17 | 腾讯数码(天津)有限公司 | A kind of eyes image fusion method and its equipment, storage medium, terminal |
CN109409223A (en) * | 2018-09-21 | 2019-03-01 | 昆明理工大学 | A kind of iris locating method |
CN109559294A (en) * | 2017-09-26 | 2019-04-02 | 凌云光技术集团有限责任公司 | A kind of detection method and device of drop circular hole quality |
CN109738433A (en) * | 2018-11-30 | 2019-05-10 | 西北大学 | Water oil layer mixed liquor moisture content detecting method and device based on image procossing |
CN110026902A (en) * | 2017-12-27 | 2019-07-19 | 株式会社迪思科 | Cutting apparatus |
CN110276229A (en) * | 2018-03-14 | 2019-09-24 | 京东方科技集团股份有限公司 | Target object regional center localization method and device |
CN112489042A (en) * | 2020-12-21 | 2021-03-12 | 大连工业大学 | Metal product printing defect and surface damage detection method based on super-resolution reconstruction |
CN112906431A (en) * | 2019-11-19 | 2021-06-04 | 北京眼神智能科技有限公司 | Iris image segmentation method and device, electronic equipment and storage medium |
CN112906431B (en) * | 2019-11-19 | 2024-05-24 | 北京眼神智能科技有限公司 | Iris image segmentation method and device, electronic equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN100373397C (en) | Pre-processing method for iris image | |
CN101201893A (en) | Iris recognizing preprocessing method based on grey level information | |
CN101359365B (en) | Iris positioning method based on maximum between-class variance and gray scale information | |
CN101246544B (en) | Iris positioning method based on boundary point search and minimum kernel value similarity region edge detection | |
CN101266645B (en) | Iris positioning method based on multi-resolution analysis | |
Cherabit et al. | Circular hough transform for iris localization | |
CN102542281B (en) | Non-contact biometric feature identification method and system | |
CN101339603A (en) | Method for selecting qualified iris image from video frequency stream | |
Puhan et al. | Efficient segmentation technique for noisy frontal view iris images using Fourier spectral density | |
CN107169479A (en) | Intelligent mobile equipment sensitive data means of defence based on fingerprint authentication | |
EP3680794A1 (en) | Device and method for user authentication on basis of iris recognition | |
Li et al. | Robust iris segmentation based on learned boundary detectors | |
CN106599785A (en) | Method and device for building human body 3D feature identity information database | |
Johar et al. | Iris segmentation and normalization using Daugman’s rubber sheet model | |
Chai et al. | Local chan-vese segmentation for non-ideal visible wavelength iris images | |
Ahmed et al. | Retina based biometric authentication using phase congruency | |
CN109753912A (en) | A kind of multi-light spectrum palm print matching process based on tensor | |
Khan et al. | Fast and efficient iris segmentation approach based on morphology and geometry operation | |
Leo et al. | Highly usable and accurate iris segmentation | |
Farouk et al. | Iris recognition system techniques: A literature survey and comparative study | |
Chirchi et al. | Enhanced isocentric segmentor and wavelet rectangular coder to iris segmentation and recognition | |
Kyaw et al. | Performance analysis of features extraction on iris recognition system | |
Mashudi et al. | Dynamic U-Net Using Residual Network for Iris Segmentation | |
Viriri et al. | Improving iris-based personal identification using maximum rectangular region detection | |
Subbarayudu et al. | A novel iris recognition system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C02 | Deemed withdrawal of patent application after publication (patent law 2001) | ||
WD01 | Invention patent application deemed withdrawn after publication |