CN111666847A - Iris segmentation, feature extraction and matching method based on local 0-1 quantization technology - Google Patents

Iris segmentation, feature extraction and matching method based on local 0-1 quantization technology

Info

Publication number
CN111666847A
Authority
CN
China
Prior art keywords
iris
point
pix
boundary
feature extraction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010459392.3A
Other languages
Chinese (zh)
Inventor
张彦龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN202010459392.3A
Publication of CN111666847A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18: Eye characteristics, e.g. of the iris
    • G06V40/193: Preprocessing; Feature extraction
    • G06V40/197: Matching; Classification

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Ophthalmology & Optometry (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Collating Specific Patterns (AREA)
  • Image Analysis (AREA)

Abstract

To reduce the computation time required to locate the iris boundaries in a captured eye image, our method considers only the 3 o'clock and 9 o'clock directions of the iris as search regions for the inner and outer boundaries. The annular iris region is then mapped to a rectangular area of constant size, which compensates for the variations in iris size that arise from changing image capture conditions and keeps the required computation time short. The lower half of the iris ring, an area only slightly disturbed by the eyelids and eyelashes, is then taken as the region of interest for feature extraction. Feature extraction is performed by the proposed local 0-1 quantization technique. Techniques that rely on wavelet analysis (such as Fourier and Gabor analysis) use preset parameters that cannot adapt to the noise and variation introduced into the iris image by changing acquisition conditions; the proposed local 0-1 quantization technique is unaffected by such changes. In our data tests, compared with other algorithms, the scheme achieves a correct segmentation rate of 99.7% and a correct recognition rate of 99.84% with a lower segmentation time.

Description

Iris segmentation, feature extraction and matching method based on local 0-1 quantization technology
Technical Field
The invention relates to an iris segmentation, feature extraction and matching method based on a local 0-1 quantization technique, and in particular to a method for extracting and recognizing iris features from a partial eye image, namely the 3 o'clock and 9 o'clock directions of the iris.
Background
In recent years, a large number of iris extraction algorithms have been proposed. Typical iris recognition methods locate the iris boundaries by applying a Hough transform to the entire eye image and use wavelet analysis for feature extraction. The main drawback of these techniques is the large amount of computation time. Furthermore, the input parameters of the wavelet analysis are selected and fixed in advance, so they may not suit different imaging conditions (i.e., different imaging distances, different light intensities, etc.).
Most existing methods, however, require complicated mathematical computation and a long wait for results. To accelerate recognition while maintaining a high recognition rate, we propose an iris segmentation, feature extraction and matching method based on a local 0-1 quantization technique that is both fast and adaptive. To reduce the computation time required for iris segmentation, we search for the outer boundary in a small but useful region rather than the entire image: the 3 o'clock and 9 o'clock directions of the iris. Second, the segmented iris region is mapped into a matrix of fixed size. Identification features are then extracted by mapping the gray value of each point of the normalized iris to a binary level, 0 or 1, generated by 0-1 quantization of the intensity variation in a particular neighborhood of the mapped point. Compared with traditional feature extraction based on wavelet analysis, this method is simple, computationally light, and adapts to changes in image acquisition conditions. The similarity between different irises is then measured by computing the Hamming distance between iris feature vectors.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides an iris segmentation, feature extraction and matching method based on a local 0-1 quantization technique, and in particular a method that extracts and recognizes iris features from a partial eye image, namely the 3 o'clock and 9 o'clock directions of the iris. It addresses two problems of existing iris recognition methods: the huge computation time, and wavelet-analysis input parameters that are selected and fixed in advance and therefore may not suit different capture conditions (i.e., different capture distances, different light intensities, etc.).
In order to solve the above technical problems, the invention provides the following technical solution:
iris segmentation, feature extraction and matching method based on local 0-1 quantization technology
The method comprises the following specific steps:
Step one: first, iris segmentation is performed on the image:
The iris area is defined by two boundaries: an inner boundary determined by the pupillary region and an outer boundary determined by the sclera. These boundaries are commonly modeled as two concentric circles; in our proposed method, however, the inner boundary is considered most likely to approximate an ellipse.
1.1 Iris inner boundary positioning
The method applies an edge detection technique followed by an elliptical Hough transform to locate the inner boundary of the iris, as sketched below. The subsequent steps build on this result; the 3 o'clock and 9 o'clock directions of the outer boundary of the iris are usually the clearest parts of that boundary (i.e., only slightly disturbed by the eyelashes or eyelids).
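As an illustration only, the following sketch shows how this step could look using scikit-image's Canny edge detector and ellipse Hough transform; the function name and all parameter values (smoothing sigma, accumulator accuracy and threshold, size bounds) are placeholder assumptions, not values taken from the patent.

    import numpy as np
    from skimage.feature import canny
    from skimage.transform import hough_ellipse

    def locate_inner_boundary(eye_gray):
        # Edge detection followed by an ellipse Hough transform; the
        # strongest ellipse candidate is taken as the inner (pupillary)
        # boundary.
        edges = canny(eye_gray, sigma=2.0)
        # Size bounds keep the accumulator near plausible pupil radii
        # (placeholder values).
        cands = hough_ellipse(edges, accuracy=10, threshold=50,
                              min_size=20, max_size=80)
        cands.sort(order='accumulator')  # best candidate last
        _, yc, xc, a, b, orientation = list(cands[-1])
        return xc, yc, a, b, orientation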
1.2 locating the outer boundary:
The method of the present invention exploits the observation that the ratio of the pupil radius to the iris radius lies in the range 0.20 to 0.55. Based on the result of step 1.1, this bounds the area to be searched for the outer boundary. Filtering and localization are then carried out in the 3 o'clock and 9 o'clock regions, from the left and right directions respectively; finally, the filtered area is converted into a binary (0/1) region and a circular Hough transform is applied to find the outer boundary.
Because the inner and outer boundaries of the iris are not always concentric, the Hough transform searches for the center of the outer boundary within ±5 pixels of the center of the inner boundary. Clearly, applying the Hough transform only in the 3 o'clock and 9 o'clock regions reduces the computation time; a sketch of this restricted search follows.
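A minimal sketch of this restricted outer-boundary search, assuming the inner boundary from step 1.1 is available as a circle (pcx, pcy, pr) on a uint8 grayscale image; the strip half-height, Canny thresholds, and the voting helper circle_votes are illustrative assumptions rather than the patented implementation.

    import cv2
    import numpy as np

    def circle_votes(edges, cx, cy, r, n=64):
        # Count edge pixels on the candidate circle, sampling only short
        # arcs around 0 rad (3 o'clock) and pi rad (9 o'clock).
        h, w = edges.shape
        arcs = np.concatenate([np.linspace(-0.3, 0.3, n // 2),
                               np.linspace(np.pi - 0.3, np.pi + 0.3, n // 2)])
        votes = 0
        for t in arcs:
            x, y = int(cx + r * np.cos(t)), int(cy + r * np.sin(t))
            if 0 <= x < w and 0 <= y < h and edges[y, x]:
                votes += 1
        return votes

    def locate_outer_boundary(eye_gray, pcx, pcy, pr, strip=10):
        # Pupil/iris radius ratio assumed to lie in [0.20, 0.55] (see text).
        r_min, r_max = int(pr / 0.55), int(pr / 0.20)
        h, w = eye_gray.shape
        edges = cv2.Canny(eye_gray, 50, 150)
        mask = np.zeros_like(edges)
        mask[max(0, pcy - strip):min(h, pcy + strip), :] = 255
        edges &= mask  # keep only the horizontal band through the pupil centre
        best = (-1, pcx, pcy, r_min)
        # Candidate centre constrained to +/-5 px of the pupil centre, since
        # the two boundaries need not be concentric.
        for r in range(r_min, r_max + 1):
            for dx in range(-5, 6):
                for dy in range(-5, 6):
                    v = circle_votes(edges, pcx + dx, pcy + dy, r)
                    if v > best[0]:
                        best = (v, pcx + dx, pcy + dy, r)
        return best[1:]  # (cx, cy, iris_radius)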
Step two: iris standardization:
Iris image capture conditions (i.e., capture distance, light intensity, etc.) may vary, which causes the size of the captured iris to vary. The method therefore normalizes the iris region by mapping it into a matrix of 25x250 elements. Each point of the iris ring, located at Cartesian coordinates (x, y), is mapped to a point of the normalized matrix, located at polar coordinates (R_pix, θ), as follows:
x(R_pix, θ) = (1 - R_pix) x_pix(θ) + R_pix x_d(θ)
y(R_pix, θ) = (1 - R_pix) y_pix(θ) + R_pix y_d(θ)
where R_pix takes values in the interval [0, 1] and θ takes values in the interval [0, 2π]; (x_pix(θ), y_pix(θ)) and (x_d(θ), y_d(θ)) are the coordinates of the inner and outer boundaries in the θ direction, respectively. In the present method, the lower half of the iris ring is the region of interest for feature extraction; this area is only slightly affected by the eyelids and eyelashes.
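A minimal sketch of this mapping, assuming both boundaries are given as circles (cx, cy, r); nearest-neighbour sampling and the function and parameter names are illustrative assumptions.

    import numpy as np

    def normalize_iris(eye_gray, inner, outer, n_r=25, n_theta=250):
        # Implements x(R,theta) = (1-R)*x_pix(theta) + R*x_d(theta), and
        # likewise for y, filling a fixed 25x250 matrix.
        (pcx, pcy, pr), (icx, icy, ir) = inner, outer
        out = np.zeros((n_r, n_theta), dtype=eye_gray.dtype)
        for j in range(n_theta):
            theta = 2.0 * np.pi * j / n_theta
            xp, yp = pcx + pr * np.cos(theta), pcy + pr * np.sin(theta)  # inner
            xd, yd = icx + ir * np.cos(theta), icy + ir * np.sin(theta)  # outer
            for i in range(n_r):
                R = i / (n_r - 1)  # R_pix in [0, 1]
                x = (1.0 - R) * xp + R * xd
                y = (1.0 - R) * yp + R * yd
                # Clamp to the image so rounding never indexes out of bounds.
                yi = min(max(int(round(y)), 0), eye_gray.shape[0] - 1)
                xi = min(max(int(round(x)), 0), eye_gray.shape[1] - 1)
                out[i, j] = eye_gray[yi, xi]
        return out

Only the columns of this matrix that cover the lower half of the ring would then be kept as the region of interest.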
Step three: feature extraction
The algorithm accounts for intensity variations during iris image acquisition, eliminating the need for dedicated hardware or additional software to adjust the illumination. It proceeds as follows (a sketch is given after these steps):
3.1 Find the minimum and maximum gray values within a (1x20) sliding window centered at each point of the normalized region of interest.
3.2 Compute a binary threshold for each point in the region of interest as the midpoint between the minimum and maximum found in 3.1.
3.3 If the point's value is greater than its binary threshold, set the point to "1"; otherwise set it to "0".
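A minimal sketch of steps 3.1 to 3.3, assuming the region of interest is a 2-D array of gray values; clipping the window at the borders is an illustrative choice.

    import numpy as np

    def local_01_quantize(roi, win=20):
        # Each point is compared against the midpoint of the min/max gray
        # values inside a horizontal 1x20 window centred on it.
        h, w = roi.shape
        half = win // 2
        code = np.zeros((h, w), dtype=np.uint8)
        for i in range(h):
            for j in range(w):
                lo, hi = max(0, j - half), min(w, j + half)
                seg = roi[i, lo:hi]
                thresh = (int(seg.min()) + int(seg.max())) / 2.0
                code[i, j] = 1 if roi[i, j] > thresh else 0
        return code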
Step four: feature matching
The algorithm applies the Hamming distance (HD) as its matching method. The Hamming distance counts the corresponding bits that differ between two biometric templates, i.e., the differences between the templates, and allows us to decide whether two compared irises belong to the same person. The Hamming distance between binary templates A and B is defined as follows:
HD = (1/M) Σ_{j=1}^{M} A_j ⊕ B_j
where M is the number of bits in one comparison template and ⊕ is the exclusive-OR (XOR) operation. A small HD means that the two templates are similar. Because the iris pattern may be rotated during image acquisition, eight cyclically shifted templates are first constructed before comparing an image with the database: the template columns are shifted four times to the right and four times to the left. These shifted templates, together with the original input template, give a total of 9 templates for the comparison, and the minimum Hamming distance obtained is taken as the final matching score of the input, as sketched below.
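A minimal sketch of the matching step, assuming equally sized binary templates; np.roll provides the cyclic column shifts.

    import numpy as np

    def hamming_distance(a, b):
        # Fraction of corresponding bits that differ between two templates.
        return np.count_nonzero(a != b) / a.size

    def match_score(template, stored):
        # Nine candidates: the original plus cyclic column shifts of -4..+4;
        # the minimum HD over all shifts is the final matching score.
        return min(hamming_distance(np.roll(template, s, axis=1), stored)
                   for s in range(-4, 5))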
Compared with the prior art, the invention can achieve the following beneficial effects:
The 0-1 quantization of a partial iris image increases the speed and adaptability of iris localization and reduces the computation time required for iris segmentation; the computation is light and, without losing features, the method adapts to changes in the image acquisition conditions (illumination, distance, etc.).
Drawings
FIG. 1 is a schematic diagram of a simulation of the local 0-1 quantization of the present invention.
Detailed Description
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings; it will be understood that they are presented here for purposes of illustration and explanation, not limitation.
Referring to fig. 1, the present invention provides an iris segmentation, feature extraction and matching method based on local 0-1 quantization technology, which comprises the following specific steps:
iris segmentation, feature extraction and matching method based on local 0-1 quantization technology
The method comprises the following specific steps:
Step one: first, iris segmentation is performed on the image:
The iris area is defined by two boundaries: an inner boundary determined by the pupillary region and an outer boundary determined by the sclera. These boundaries are commonly modeled as two concentric circles; in our proposed method, however, the inner boundary is considered most likely to approximate an ellipse.
1.1 Iris inner boundary positioning
The method applies an edge detection technique followed by an elliptical Hough transform to locate the inner boundary of the iris. The subsequent steps build on this result; the 3 o'clock and 9 o'clock directions of the outer boundary of the iris are usually the clearest parts of that boundary (i.e., only slightly disturbed by the eyelashes or eyelids).
1.2 locating the outer boundary:
The method of the present invention exploits the observation that the ratio of the pupil radius to the iris radius lies in the range 0.20 to 0.55. Based on the result of step 1.1, this bounds the area to be searched for the outer boundary. Filtering and localization are then carried out in the 3 o'clock and 9 o'clock regions, from the left and right directions respectively; finally, the filtered area is converted into a binary (0/1) region and a circular Hough transform is applied to find the outer boundary.
Because the inner and outer boundaries of the iris are not always concentric, the Hough transform searches for the center of the outer boundary within ±5 pixels of the center of the inner boundary. Clearly, applying the Hough transform only in the 3 o'clock and 9 o'clock regions reduces the computation time.
Step two: iris standardization:
Iris image capture conditions (i.e., capture distance, light intensity, etc.) may vary, which causes the size of the captured iris to vary. The method therefore normalizes the iris region by mapping it into a matrix of 25x250 elements. Each point of the iris ring, located at Cartesian coordinates (x, y), is mapped to a point of the normalized matrix, located at polar coordinates (R_pix, θ), as follows:
x(R_pix, θ) = (1 - R_pix) x_pix(θ) + R_pix x_d(θ)
y(R_pix, θ) = (1 - R_pix) y_pix(θ) + R_pix y_d(θ)
where R_pix takes values in the interval [0, 1] and θ takes values in the interval [0, 2π]; (x_pix(θ), y_pix(θ)) and (x_d(θ), y_d(θ)) are the coordinates of the inner and outer boundaries in the θ direction, respectively. In the present method, the lower half of the iris ring is the region of interest for feature extraction; this area is only slightly affected by the eyelids and eyelashes.
Step three: feature extraction
The algorithm accounts for intensity variations during iris image acquisition, eliminating the need for dedicated hardware or additional software to adjust the illumination. It proceeds as follows:
3.1 Find the minimum and maximum gray values within a (1x20) sliding window centered at each point of the normalized region of interest.
3.2 Compute a binary threshold for each point in the region of interest as the midpoint between the minimum and maximum found in 3.1.
3.3 If the point's value is greater than its binary threshold, set the point to "1"; otherwise set it to "0".
Step four: feature matching
The algorithm applies the Hamming distance (HD) as its matching method. The Hamming distance counts the corresponding bits that differ between two biometric templates, i.e., the differences between the templates, and allows us to decide whether two compared irises belong to the same person. The Hamming distance between binary templates A and B is defined as follows:
HD = (1/M) Σ_{j=1}^{M} A_j ⊕ B_j
where M is the number of bits in one comparison template and ⊕ is the exclusive-OR (XOR) operation. A small HD means that the two templates are similar. In the method of the present invention, because the iris pattern may be rotated during image acquisition, eight cyclically shifted templates are first constructed before comparing an image with the database: the template columns are shifted four times to the right and four times to the left. These shifted templates, together with the original input template, give a total of 9 templates for the comparison, and the minimum Hamming distance obtained is taken as the final matching score of the input.
The embodiments of the present invention are not limited to the above. Other embodiments obtained by modifying, replacing or combining the preferred embodiments described above using ordinary technical knowledge and customary means in the field, without departing from the basic technical idea of the invention, also fall within the scope of the invention.

Claims (4)

1. An iris segmentation, feature extraction and matching method based on local 0-1 quantization technology is characterized by comprising the following specific steps:
Step one: first, iris segmentation is performed on the image:
The iris area is defined by two boundaries: an inner boundary determined by the pupillary region and an outer boundary determined by the sclera. These boundaries are commonly modeled as two concentric circles; in our proposed method, however, the inner boundary is considered most likely to approximate an ellipse. The method applies an edge detection technique and then an elliptical Hough transform to localize the inner boundary; the 3 o'clock and 9 o'clock directions of the outer boundary of the iris are usually the clearest parts of that boundary (i.e., only slightly disturbed by eyelashes or eyelids). In addition, using the observation that the ratio of the pupil radius to the iris radius lies in the range 0.25 to 0.60, the method bounds the area searched for the outer boundary; after the 3 o'clock and 9 o'clock areas are filtered from the left and right directions respectively, the filtered area is converted into a binary (0/1) region and a circular Hough transform is applied to find the outer boundary. Because the inner and outer boundaries of the iris are not always concentric, the Hough transform searches for the center of the outer boundary within ±5 pixels of the center of the inner boundary. Applying the Hough transform only in these specific regions, i.e. the 3 o'clock and 9 o'clock regions, thus reduces the computation time.
Step two: iris standardization:
Iris image capture conditions (i.e., capture distance, light intensity, etc.) may vary, which causes the size of the captured iris to vary. The method therefore normalizes the iris region by mapping it into a matrix of 25x250 elements. Each point of the iris ring, located at Cartesian coordinates (x, y), is mapped to a point of the normalized matrix, located at polar coordinates (R_pix, θ), as follows:
x(R_pix, θ) = (1 - R_pix) x_pix(θ) + R_pix x_d(θ)
y(R_pix, θ) = (1 - R_pix) y_pix(θ) + R_pix y_d(θ)
where R_pix takes values in the interval [0, 1] and θ takes values in the interval [0, 2π]; (x_pix(θ), y_pix(θ)) and (x_d(θ), y_d(θ)) are the coordinates of the inner and outer boundaries in the θ direction, respectively. In the present method, the lower half of the iris ring is the region of interest for feature extraction; this area is only slightly affected by the eyelids and eyelashes.
Step three: feature extraction:
The method accounts for the effect of intensity variations on iris image acquisition, eliminating the need for dedicated hardware or additional software to adjust the illumination: the minimum and maximum gray values are found within a (1x15) sliding window centered at each point of the normalized region of interest. A binary threshold is then computed for each point as the midpoint between the minimum and maximum values; if the point's value is greater than its binary threshold, the point is set to "1", otherwise it is set to "0".
Step four: feature matching
The algorithm applies the Hamming distance (HD) as its matching method. The Hamming distance counts the corresponding bits that differ between two biometric templates, i.e., the differences between the templates, and allows us to decide whether two compared irises belong to the same person. The Hamming distance between binary templates A and B is defined as follows:
HD = (1/M) Σ_{j=1}^{M} A_j ⊕ B_j
where M is the number of bits in one comparison template and ⊕ is the exclusive-OR (XOR) operation. A small HD means that the two templates are similar. Because the iris pattern may be rotated during image acquisition, eight cyclically shifted templates are first constructed before comparing an image with the database: the template columns are shifted four times to the right and four times to the left. These shifted templates, together with the original input template, give a total of 9 templates for the comparison, and the minimum Hamming distance obtained is taken as the final matching score of the input.
2. The iris segmentation, feature extraction and matching method based on the local 0-1 quantization technology as claimed in claim 1, wherein the detailed procedure of step one is:
1.1 locating the inner edge of the Iris pupil
An edge detection technique is applied, exploiting the fact that the 3 o'clock and 9 o'clock directions of the outer boundary of the iris are typically the clearest parts of the boundary (i.e., only slightly disturbed by eyelashes or eyelids), and an elliptical Hough transform is then used for inner boundary localization.
1.2 locating the outer edge of the iris pupil
The method uses the observation that the ratio of the pupil radius to the iris radius lies in the range 0.25 to 0.60 to determine the search area for the outer boundary. After the 3 o'clock and 9 o'clock areas are filtered from the left and right directions respectively, the filtered area is converted into a binary (0/1) region and a circular Hough transform is applied to find the outer boundary; because the inner and outer boundaries of the iris are not always concentric, the transform searches for the center of the outer boundary within ±5 pixels of the center of the inner boundary.
3. The iris segmentation, feature extraction and matching method based on the local 0-1 quantization technology as claimed in claim 1, wherein the detailed procedure of step three is:
3.1 Find the minimum and maximum gray values within a (1x20) sliding window centered at each point of the normalized lower half of the iris.
3.2 Compute a binary threshold for each point in the region of interest as the midpoint between the minimum and maximum found in 3.1.
3.3 If the point's value is greater than its binary threshold, set the point to "1"; otherwise set it to "0".
4. The iris segmentation, feature extraction and matching method based on the local 0-1 quantization technology as claimed in claim 1, wherein the detailed feature matching procedure of step four is:
4.1 Because the iris pattern may be rotated during image acquisition, eight cyclically shifted templates are first constructed to compare the image with the database: the template columns are shifted four times to the right and four times to the left. These shifted templates, together with the original input template, give a total of 9 templates for the comparison. The minimum Hamming distance obtained is taken as the final matching score of the input.
Taken together, the iris segmentation, feature extraction and matching method based on the local 0-1 quantization technology achieves accurate and fast segmentation: the scheme uses the 3 o'clock and 9 o'clock directions of the iris to find the outer boundary, since these areas are usually the sharpest parts of the outer boundary (only slightly affected by the eyelids and eyelashes). For feature extraction, the lower half of the iris is selected as the region of interest; this is typically the most clearly visible region of the iris (i.e., only slightly disturbed by the eyelashes and eyelids). Within the region of interest, iris features are extracted with the local 0-1 quantization technique, which eliminates the influence of light-intensity changes on the iris image. The method performs well in both speed and accuracy.
CN202010459392.3A 2020-05-26 2020-05-26 Iris segmentation, feature extraction and matching method based on local 0-1 quantization technology Pending CN111666847A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010459392.3A CN111666847A (en) 2020-05-26 2020-05-26 Iris segmentation, feature extraction and matching method based on local 0-1 quantization technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010459392.3A CN111666847A (en) 2020-05-26 2020-05-26 Iris segmentation, feature extraction and matching method based on local 0-1 quantization technology

Publications (1)

Publication Number Publication Date
CN111666847A (en) 2020-09-15

Family

ID=72384780

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010459392.3A Pending CN111666847A (en) 2020-05-26 2020-05-26 Iris segmentation, feature extraction and matching method based on local 0-1 quantization technology

Country Status (1)

Country Link
CN (1) CN111666847A (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1760887A * 2004-10-11 2006-04-19 中国科学院自动化研究所 Robust feature extraction and recognition method for iris images
CN101093538A * 2006-06-19 2007-12-26 电子科技大学 Iris identification method based on zero-crossing representation of wavelet transforms
CN103164704A * 2013-04-12 2013-06-19 山东师范大学 Iris image segmentation algorithm based on a Gaussian mixture model
CN104268602A * 2014-10-14 2015-01-07 大连理工大学 Occluded workpiece identification method and device based on binary feature matching
CN106250810A * 2015-06-15 2016-12-21 摩福公司 Method for identifying and/or authenticating an individual by iris recognition
CN107871322A * 2016-09-27 2018-04-03 北京眼神科技有限公司 Iris segmentation method and apparatus
CN109840484A * 2019-01-23 2019-06-04 张彦龙 Pupil detection method based on edge filtering, ellipse evaluation and pupil verification

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
曹留洋 (Cao Liuyang): "Research and Implementation of Iris Feature Extraction and Matching Algorithm", China Master's Theses Full-text Database, Information Science and Technology Series *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112150474A (en) * 2020-10-12 2020-12-29 山东省科学院海洋仪器仪表研究所 Underwater bubble image feature segmentation and extraction method
CN112434675A (en) * 2021-01-26 2021-03-02 西南石油大学 Pupil positioning method for global self-adaptive optimization parameters
CN112434675B (en) * 2021-01-26 2021-04-09 西南石油大学 Pupil positioning method for global self-adaptive optimization parameters

Similar Documents

Publication Publication Date Title
EP1820141B1 (en) Multiscale variable domain decomposition method and system for iris identification
CN1885314A (en) Pre-processing method for iris image
CN111666847A (en) Iris segmentation, feature extraction and matching method based on local 0-1 quantization technology
CN108573219B (en) Eyelid key point accurate positioning method based on deep convolutional neural network
Zaim Automatic segmentation of iris images for the purpose of identification
Ng et al. An effective segmentation method for iris recognition system
WO2016004706A1 (en) Method for improving iris recognition performance in non-ideal environment
Lin A novel iris recognition method based on the natural-open eyes
CN106940786B (en) Iris reconstruction method using iris template based on LLE and PSO
Kumar Dewangan et al. Human identification and verification using iris recognition by calculating Hamming distance
Hasan et al. Dual iris matching for biometric identification
Nsaef et al. Enhancement segmentation technique for iris recognition system based on Daugman's Integro-differential operator
Koç et al. A new encoding of iris images employing eight quantization levels
Narote et al. An automated segmentation method for iris recognition
Wang et al. Finger vein roi extraction based on robust edge detection and flexible sliding window
Yiming et al. Research on iris recognition algorithm based on hough transform
Punyani et al. Iris recognition system using morphology and sequential addition based grouping
Sallehuddin et al. A survey of iris recognition system
George et al. A survey on prominent iris recognition systems
Das Recognition of Human Iris Patterns
Pirasteh et al. Iris Recognition Using Localized Zernike's Feature and SVM.
Dey et al. Fast and accurate personal identification based on iris biometric
Wang et al. A novel iris location algorithm
Ihsanto et al. Development and analysis of a zeta method for low-cost, camera-based iris recognition
Sun et al. Research on palm vein recognition algorithm based on improved convolutional neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20200915