CN105488494B - A fast and accurate localization method for the iris inner boundary - Google Patents

A fast and accurate localization method for the iris inner boundary

Info

Publication number
CN105488494B
Authority
CN
China
Prior art keywords
image
edge
iris
inner boundary
coefficient
Prior art date
Legal status
Active
Application number
CN201511006255.XA
Other languages
Chinese (zh)
Other versions
CN105488494A (en)
Inventor
王效灵
俞斌德
李宁宁
林云
杨佐丞
Current Assignee
Zhejiang Gongshang University
Original Assignee
Zhejiang Gongshang University
Priority date
Filing date
Publication date
Application filed by Zhejiang Gongshang University filed Critical Zhejiang Gongshang University
Priority to CN201511006255.XA priority Critical patent/CN105488494B/en
Publication of CN105488494A publication Critical patent/CN105488494A/en
Application granted granted Critical
Publication of CN105488494B publication Critical patent/CN105488494B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V40/18 - Eye characteristics, e.g. of the iris
    • G06V40/193 - Preprocessing; feature extraction

Abstract

The present invention relates to a fast and accurate localization method for the iris inner boundary. Existing iris inner-boundary localization methods are limited either in positioning accuracy or in speed, or require many preconditions to be met in advance, and therefore have difficulty meeting the needs of image-based biometric identification. The present invention first extracts edge points from the acquired iris image; it then performs adaptive edge-strength fusion of the two edge images extracted earlier to obtain the final fused edge image; next, redundant spurious edge points and noise points are thoroughly removed from the fused edge map; finally, the edge points are re-determined, stored as data, and displayed on the original iris image. The invention significantly improves both the speed and the accuracy of iris inner-boundary localization and can meet the needs of image-based biometric identification.

Description

A fast and accurate localization method for the iris inner boundary
Technical field
The invention belongs to the field of image-based biometric identification and relates to a fast and accurate localization method for the iris inner boundary.
Background technique
With the rapid development of network technology and the arrival of the networked information age, a major feature of information technology is that identity has become digitized and implicit. Traditional identity authentication is based on things that mark a person's identity and mainly covers two aspects: (1) identity articles; (2) identity knowledge. These traditional identification methods have serious defects and pose great challenges to personal security, whereas iris recognition is currently one of the most convenient and accurate approaches. How to identify a person's identity precisely and protect information security is therefore a key social problem that must be solved in the information age, with great value and broad market prospects.
Accurate and fast localization of the iris inner boundary is a key part of iris localization, and iris localization is in turn a fundamental link in iris recognition, strongly affecting the later feature-extraction and pattern-matching steps. Many iris edge-localization algorithms have been developed, but most are limited either in positioning accuracy or in speed, or require many preconditions to be met in advance; all have shortcomings, and industry has long favored localization methods that are simple, efficient, and free of excessive preconditions. Studying a simple, fast, and accurate inner-edge localization algorithm therefore remains a challenging task.
Summary of the invention
In view of the deficiencies of the prior art, the present invention proposes a fast and accurate localization method for the iris inner boundary.
The technical solution adopted by the present invention to solve the technical problem comprises the following steps:
Step (1): iris image acquisition.
Step (2): extract edge points from the acquired iris gray-level image; here an improved Canny operator is used for edge detection.
Step (3): extract the locally valid edge information of the acquired iris gray-level image using the wavelet transform.
Step (4): perform adaptive edge-strength fusion of the edge images extracted in steps (2) and (3) to obtain the final fused edge image.
Step (5): remove redundant spurious edge points and noise points from the fused edge map; here a four-direction scanning method based on edge information is used to extract the target object.
Step (6): re-determine the edge points, store the data, and display the result on the original iris image.
Beneficial effects of the present invention:
(1) The invention reduces the computational load of the algorithm and avoids unnecessary texture details, making the later image-segmentation result faster and more accurate.
(2) The four-direction region-scanning method is an image segmentation based on edge geometry; every participating edge point can complete the algorithm fully independently, without exchanging information with other edge points, so the method has good parallelism and markedly improves the speed of the algorithm.
(3) The four boundary lines of the whole rectangular image can be selectively removed or added and distinguished by different colors, and the scan-length parameter is controllable, so enclosed regions of different areas can be filtered out; this benefits the accurate detection and localization of the pupil edge.
Detailed description of the invention
Fig. 1 is the main flow chart of the invention;
Fig. 2 is the flow chart of step (2) of the invention;
Fig. 3 is the flow chart of the four-direction region-filling algorithm of step (5) of the invention.
Specific embodiments:
The invention is further described below with reference to the accompanying drawings.
As shown in Fig. 1, the specific technical solution of the invention is as follows:
Step (1): iris image acquisition. Since the distinctiveness of the iris lies mainly in differences of texture detail, the primary task is to obtain a high-quality iris image. It is difficult to obtain a clear iris image under normal illumination with an ordinary CCD camera, and existing iris capture devices are still expensive, so the iris database provided by the Chinese Academy of Sciences is used here; it contains more than 47,000 iris images captured under various conditions and is of great research value.
Step (2): extract edge points from the iris gray-level images provided by the Chinese Academy of Sciences; here an improved Canny operator is used for edge detection. Since the acquisition stage already concentrates on capturing high-quality images, from the viewpoint of information theory the best preprocessing is no preprocessing, and the collected iris image is analyzed and processed directly.
First, based on the analysis of test results on tens of thousands of iris images, the space scale coefficient and the high threshold are set so as to control the image smoothness and screen out secondary edges; the procedure, shown in Fig. 2, is then as follows:
Step 1: smooth the image with a Gaussian filter.
The Gaussian smoothing function is G(x, y) = (1/(2πσ²)) · exp(-(x² + y²)/(2σ²)), where σ is the space scale coefficient, and the original image is convolved with it: H(x, y) = f(x, y) * G(x, y).
Step 2: take the first-order partial derivatives of the Gaussian smoothing function, Gx = ∂G(x, y)/∂x and Gy = ∂G(x, y)/∂y.
Convolve the image with these first-order partials, then compute the gradient magnitude and gradient direction of each pixel: P(x, y) = f(x, y) * Gx, Q(x, y) = f(x, y) * Gy,
M(x, y) = √(P(x, y)² + Q(x, y)²),
θ(x, y) = arctan(Q(x, y) / P(x, y)).
Step 3: apply non-maximum suppression to the gradient magnitude.
Step 4: apply high-threshold edge detection to generate the strong-edge map.
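The following is a minimal Python sketch of this strong-edge extraction, assuming NumPy and SciPy are available; the values of sigma (the space scale coefficient) and high_thresh (the high threshold) are illustrative placeholders rather than values fixed by the patent.

```python
# Minimal sketch of the improved-Canny strong-edge extraction of step (2).
# sigma and high_thresh are assumed placeholder values, not patent-specified.
import numpy as np
from scipy import ndimage

def strong_edge_map(gray, sigma=1.5, high_thresh=60.0):
    gray = gray.astype(np.float64)
    # P = f * Gx, Q = f * Gy: convolution with the first-order partials of the Gaussian
    P = ndimage.gaussian_filter(gray, sigma, order=(0, 1))   # derivative along x (columns)
    Q = ndimage.gaussian_filter(gray, sigma, order=(1, 0))   # derivative along y (rows)
    mag = np.hypot(P, Q)                                     # gradient magnitude M(x, y)
    angle = (np.rad2deg(np.arctan2(Q, P)) + 180.0) % 180.0   # gradient direction, folded to [0, 180)

    # Non-maximum suppression: keep a pixel only if its magnitude is a local
    # maximum along the quantized gradient direction.
    suppressed = np.zeros_like(mag)
    rows, cols = mag.shape
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            a = angle[y, x]
            if a < 22.5 or a >= 157.5:
                n1, n2 = mag[y, x - 1], mag[y, x + 1]
            elif a < 67.5:
                n1, n2 = mag[y - 1, x + 1], mag[y + 1, x - 1]
            elif a < 112.5:
                n1, n2 = mag[y - 1, x], mag[y + 1, x]
            else:
                n1, n2 = mag[y - 1, x - 1], mag[y + 1, x + 1]
            if mag[y, x] >= n1 and mag[y, x] >= n2:
                suppressed[y, x] = mag[y, x]

    # High-threshold edge detection: only strong edges enter the strong-edge map.
    return (suppressed >= high_thresh).astype(np.uint8)
```

Raising high_thresh screens out more of the weaker secondary edges, which is the role the patent assigns to the single high threshold.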
Step (3): the wavelet transform has multiresolution-analysis properties and can characterize the local features of a signal in both the time and frequency domains; its window area is fixed while the window length and width adapt automatically, so the high-frequency and low-frequency parts of a signal can be analyzed separately as needed. The wavelet transform is therefore used here to extract the local features of the iris image edges, and this information is then fused with the strong-edge map of step (2), so that missing pupil edge points are compensated as far as possible and the pupil edge forms a complete enclosed region.
For scale parameter j and translation parameter k, taking the continuous variables j and k of ψ_{j,k}(t) in integer (discrete) form gives
ψ_{j,k}(t) = 2^{j/2} ψ(2^j t - k),
and the resulting discrete wavelet transform is written as
W_f(j, k) = ⟨f(t), ψ_{j,k}(t)⟩.
By adjusting the parameters j and k, the wavelet transform achieves localization in both the frequency and time domains.
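As a sketch of how this wavelet-based local edge extraction could look in Python, the example below uses PyWavelets for one level of 2-D discrete wavelet decomposition and keeps the high-frequency detail sub-bands as the local edge information; the wavelet family ("haar") and the single decomposition level are assumptions, since the patent does not fix them.

```python
# Sketch of step (3): one-level 2-D DWT, keeping the detail (high-frequency)
# sub-bands as the local edge information. Wavelet family and level are assumptions.
import numpy as np
import pywt

def wavelet_edge_map(gray, wavelet="haar"):
    gray = gray.astype(np.float64)
    # cA: approximation; cH, cV, cD: horizontal, vertical, diagonal detail coefficients
    cA, (cH, cV, cD) = pywt.dwt2(gray, wavelet)
    detail = np.sqrt(cH ** 2 + cV ** 2 + cD ** 2)        # combined local edge response
    # Upsample back to the original grid so it can be fused with the strong-edge map.
    edges = np.kron(detail, np.ones((2, 2)))
    return edges[: gray.shape[0], : gray.shape[1]]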
Step (4): since the edge images retain only the edge information of the original image, and in order to make the edge effect of the fused image optimal, an adaptive edge-strength fusion algorithm based on the wavelet transform, computed within a window, is chosen.
Basic idea: select a window, compute the edge strength of each pixel within the window, and use the normalized edge strengths as weights to form a weighted sum of the high-frequency coefficients of the two images.
Let F1(x, y) and F2(x, y) be the two images to be fused and F(x, y) the fused image. D1(x, y) and D2(x, y) are the pixel values of the two image matrices at point (x, y), and D(x, y) is the pixel value of the fused image at point (x, y). D_m^{j,k}(x, y) is the coefficient of the m-th image, at scale coefficient j and direction coefficient k, obtained at point (x, y) by wavelet decomposition; D̄_m^{j,k}(x, y) is the mean of the wavelet coefficients of the m-th image, at scale coefficient j and direction coefficient k, within the window region centered at point (x, y). W is the window area; if the selected region is a square of side a, then W = a × a.
The edge strength of the m-th image at point (x, y), at scale coefficient j and direction coefficient k, is defined over the window W as
E_m^{j,k}(x, y) = Σ_{(p,q)∈W} [ D_m^{j,k}(p, q) - D̄_m^{j,k}(x, y) ]².
For the two images, the weight coefficients are taken as the normalized edge strengths:
w_1(x, y) = E_1^{j,k}(x, y) / (E_1^{j,k}(x, y) + E_2^{j,k}(x, y)),  w_2(x, y) = 1 - w_1(x, y).
After the wavelet edge map is fused with the strong-edge map, the final edge image is denoted F(x, y), and the pixel value D(x, y) of each pixel is expressed as
D(x, y) = w_1(x, y) · D_1(x, y) + w_2(x, y) · D_2(x, y).
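A small Python sketch of this window-based adaptive fusion, applied directly to the two edge maps, is given below; the window side a = 3 and the eps guard against division by zero are assumptions introduced for the sketch, and the windowed squared deviation from the local mean is used as the edge-strength measure.

```python
# Sketch of step (4): window-based adaptive edge-strength fusion.
# a (window side) and eps are assumed values.
import numpy as np
from scipy import ndimage

def adaptive_fuse(D1, D2, a=3, eps=1e-12):
    def edge_strength(D):
        mean = ndimage.uniform_filter(D, size=a)                  # local window mean
        return ndimage.uniform_filter((D - mean) ** 2, size=a)    # deviation energy in W
    E1, E2 = edge_strength(D1), edge_strength(D2)
    w1 = E1 / (E1 + E2 + eps)     # normalized edge-strength weights, w1 + w2 = 1
    w2 = 1.0 - w1
    return w1 * D1 + w2 * D2      # D(x, y) = w1·D1(x, y) + w2·D2(x, y)
```

In flat regions both strengths are near zero and the weights fall back to roughly one half each, while near the pupil boundary the map with the stronger local response dominates the fused value.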
Step (5): thoroughly remove the redundant spurious edge points and noise points from the obtained combined edge map; here a four-direction scanning method based on edge information is used, as shown in Fig. 3.
Scanning and filling with different colors are performed in four directions (top to bottom, left to right, bottom to top, right to left). The scan-length parameter and the color parameter are set first, the first scan-and-fill pass, from top to bottom, is then carried out, and the remaining passes follow in turn according to the flow of Fig. 3. The fill color of each pass serves as the detection color of the next scan, and the fill color of the last direction is the only retained color, i.e., the required extraction object is segmented out.
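One plausible Python realization of the four-direction scan is sketched below: from every image border, pixels are marked along the scan direction until an edge point (or the scan-length limit) is reached, so only regions fully enclosed by edges survive all four passes. The boolean mask in place of fill colors and the max_len default are assumptions made for the sketch.

```python
# Sketch of step (5): four-direction scan filling over a binary edge map.
# max_len is the scan-length parameter; a boolean mask replaces the fill colors.
import numpy as np

def four_way_scan_fill(edge_map, max_len=None):
    H, W = edge_map.shape
    outside = np.zeros((H, W), dtype=bool)          # pixels reachable from the border
    max_len = max_len or max(H, W)

    def sweep(lines, reverse):
        for line in lines:
            idx = line[::-1] if reverse else line
            for step, (y, x) in enumerate(idx):
                if step >= max_len or edge_map[y, x]:
                    break                           # stop at an edge point or at the scan-length limit
                outside[y, x] = True

    cols = [[(y, x) for y in range(H)] for x in range(W)]
    rows = [[(y, x) for x in range(W)] for y in range(H)]
    sweep(cols, reverse=False)   # top to bottom
    sweep(rows, reverse=False)   # left to right
    sweep(cols, reverse=True)    # bottom to top
    sweep(rows, reverse=True)    # right to left

    # Whatever was never reached lies inside a closed edge contour:
    # the candidate pupil (iris inner boundary) region.
    return (~outside) & (~edge_map.astype(bool))
```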
Step (6): re-determine the edge points of the extracted object, store the data, and display the result on the original iris image.
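Finally, a sketch of how the re-determined edge points could be extracted and displayed, using OpenCV 4 contour extraction on the retained region; treating the largest outer contour as the inner boundary and drawing it in red are assumptions of this sketch, not requirements of the patent.

```python
# Sketch of step (6): re-determine the inner-boundary edge points as the outer
# contour of the retained region and overlay them on the original iris image.
# OpenCV 4 is assumed (findContours returns two values).
import cv2
import numpy as np

def redraw_inner_boundary(original_gray, pupil_mask):
    mask = pupil_mask.astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    overlay = cv2.cvtColor(original_gray, cv2.COLOR_GRAY2BGR)
    boundary = np.empty((0, 2), dtype=np.int32)
    if contours:
        largest = max(contours, key=cv2.contourArea)        # largest enclosed region
        boundary = largest.reshape(-1, 2)
        cv2.drawContours(overlay, [largest], -1, (0, 0, 255), 1)
    return boundary, overlay   # edge points to store, image to display
```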

Claims (3)

1. A fast and accurate localization method for the iris inner boundary, characterized in that the method comprises the following steps:
Step (1): iris image acquisition;
Step (2): extract edge points from the acquired iris gray-level image;
Step (3): extract the locally valid edge information of the acquired iris gray-level image using the wavelet transform;
Step (4): perform adaptive edge-strength fusion of the edge images extracted in steps (2) and (3) to obtain the final fused edge image;
Step (5): remove redundant spurious edge points and noise points from the fused edge map;
Step (6): re-determine the edge points, store the data, and display the result on the original iris image;
wherein in step (2) an improved Canny operator is used for edge detection, specifically:
first, based on the analysis of test results on tens of thousands of iris images, the space scale coefficient and the high threshold are set so as to control the image smoothness and screen out secondary edges; then:
Step 1: smooth the image with a Gaussian filter;
the Gaussian smoothing function is G(x, y) = (1/(2πσ²)) · exp(-(x² + y²)/(2σ²)), and the original image is convolved with it: H(x, y) = f(x, y) * G(x, y);
Step 2: take the first-order partial derivatives of the Gaussian smoothing function, Gx = ∂G(x, y)/∂x and Gy = ∂G(x, y)/∂y;
convolve the image with the first-order partials, then compute the gradient magnitude and gradient direction of each pixel: P(x, y) = f(x, y) * Gx, Q(x, y) = f(x, y) * Gy, M(x, y) = √(P(x, y)² + Q(x, y)²), θ(x, y) = arctan(Q(x, y) / P(x, y));
Step 3: apply non-maximum suppression to the gradient magnitude;
Step 4: apply high-threshold edge detection to generate the strong-edge map;
step (4) specifically comprises:
let F1(x, y) and F2(x, y) be the two images to be fused and F(x, y) the fused image; D1(x, y) and D2(x, y) are the pixel values of the two image matrices at point (x, y), and D(x, y) is the pixel value of the fused image at point (x, y); D_m^{j,k}(x, y) is the coefficient of the m-th image, at scale coefficient j and direction coefficient k, obtained at point (x, y) by wavelet decomposition; D̄_m^{j,k}(x, y) is the mean of the wavelet coefficients of the m-th image, at scale coefficient j and direction coefficient k, within the window region centered at point (x, y); W is the window area, and if the selected region is a square of side a, then W = a × a;
the edge strength of the m-th image at point (x, y), at scale coefficient j and direction coefficient k, is defined over the window W as E_m^{j,k}(x, y) = Σ_{(p,q)∈W} [ D_m^{j,k}(p, q) - D̄_m^{j,k}(x, y) ]²;
for the two images, the weight coefficients are taken as the normalized edge strengths, w_1(x, y) = E_1^{j,k}(x, y) / (E_1^{j,k}(x, y) + E_2^{j,k}(x, y)) and w_2(x, y) = 1 - w_1(x, y);
after the wavelet edge map is fused with the strong-edge map, the final edge image is denoted F(x, y), and the pixel value D(x, y) of each pixel is expressed as D(x, y) = w_1(x, y) · D_1(x, y) + w_2(x, y) · D_2(x, y).
2. The fast and accurate localization method for the iris inner boundary according to claim 1, characterized in that in step (3) the wavelet transform is used to extract the locally valid edge information, specifically:
for scale parameter j and translation parameter k, taking the continuous variables j and k of ψ_{j,k}(t) in integer (discrete) form gives
ψ_{j,k}(t) = 2^{j/2} ψ(2^j t - k),
and the discrete wavelet transform is written as
W_f(j, k) = ⟨f(t), ψ_{j,k}(t)⟩;
by adjusting the parameters j and k, the wavelet transform achieves localization in both the frequency and time domains.
3. The fast and accurate localization method for the iris inner boundary according to claim 1, characterized in that in step (5) a four-direction scanning method based on edge information is used to extract the object, specifically:
scanning and filling with different colors are performed in the four directions top to bottom, left to right, bottom to top, and right to left; the scan-length parameter and the color parameter are set first, and the first scan-and-fill pass, from top to bottom, is then carried out; the fill color of each pass serves as the detection color of the next scan, and the fill color of the last direction is the only retained color, i.e., the required extraction object is segmented out.
CN201511006255.XA 2015-12-29 2015-12-29 A fast and accurate localization method for the iris inner boundary Active CN105488494B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201511006255.XA CN105488494B (en) 2015-12-29 2015-12-29 A fast and accurate localization method for the iris inner boundary

Publications (2)

Publication Number Publication Date
CN105488494A CN105488494A (en) 2016-04-13
CN105488494B true CN105488494B (en) 2019-01-08

Family

ID=55675466

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201511006255.XA Active CN105488494B (en) 2015-12-29 2015-12-29 A fast and accurate localization method for the iris inner boundary

Country Status (1)

Country Link
CN (1) CN105488494B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008091278A3 (en) * 2006-09-25 2008-09-25 Retica Systems Inc Iris data extraction
CN101894256A (en) * 2010-07-02 2010-11-24 西安理工大学 Iris identification method based on odd-symmetric 2D Log-Gabor filter
CN103824061A (en) * 2014-03-03 2014-05-28 山东大学 Light-source-reflection-region-based iris positioning method for detecting and improving Hough conversion

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE60307583T2 (en) * 2002-11-20 2007-10-04 Stmicroelectronics S.A. Evaluation of the sharpness of an image of the iris of an eye

Also Published As

Publication number Publication date
CN105488494A (en) 2016-04-13

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant