CN107844736B - Iris positioning method and device

Info

Publication number
CN107844736B (application number CN201610833231.XA)
Authority
CN (China)
Prior art keywords
iris, pupil, image, boundary, eyelid
Legal status
Active (granted)
Other languages
Chinese (zh)
Other versions
CN107844736A
Inventors
孙婷 (Sun Ting), 王琪 (Wang Qi), 张祥德 (Zhang Xiangde)
Original and current assignee
Beijing Eyecool Technology Co Ltd
Filing and publication
Application filed by Beijing Eyecool Technology Co Ltd, with priority to CN201610833231.XA; published as application CN107844736A, then granted and published as CN107844736B.


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18: Eye characteristics, e.g. of the iris
    • G06V40/19: Sensors therefor
    • G06V40/193: Preprocessing; Feature extraction

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Ophthalmology & Optometry (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an iris positioning method and device. The method comprises the following steps: acquiring image information of a target iris, and coarsely positioning the pupil in the image information; constructing a trimap of a target object according to the coarse positioning result of the pupil, and extracting the target object from the image information through a preset algorithm according to the trimap, wherein the target object comprises the pupil, eyelid, and iris outer circle of the target iris; and determining positioning information of the target iris according to the extracted pupil, eyelid, and iris outer circle, wherein the positioning information comprises at least one of the following: pupil boundary, eyelid boundary, and iris boundary. The invention solves the technical problem in the prior art that the iris positioning result is inaccurate when the iris image is a non-ideal iris image with a non-circular pupil, uneven illumination, overexposure, or underexposure.

Description

Iris positioning method and device
Technical Field
The invention relates to the field of image processing, in particular to an iris positioning method and device.
Background
Iris recognition refers to a technique that achieves identity authentication by using the characteristic information of the iris of the eye. In recent years, iris recognition has been regarded as the most promising biometric recognition technology. The iris is the annular region between the black pupil and the white sclera on the surface of the human eye, and each iris contains unique features such as spots, pits, filaments, folds, and crystalline structures. As a biometric feature, the iris has advantages that other biometric features cannot match: uniqueness, stability, high anti-counterfeiting performance, and non-contact acquisition. Iris recognition is therefore particularly suitable as an identity recognition technology in information security.
In a complete iris recognition system, the region of interest is obviously the iris region, so accurately separating the iris region from the original human-eye image is the precondition of iris recognition; the positioning precision determines the performance of the whole iris recognition system, and iris positioning is therefore a key link of iris recognition. Many classical iris positioning algorithms have been proposed, such as the calculus (integro-differential) operator and boundary detection combined with the Hough transform. FIGS. 1a, 1b, and 1c show the result of positioning the iris boundary with the calculus operator: FIG. 1a is the original iris image, FIG. 1b is the result obtained by positioning with the calculus operator, and FIG. 1c is the actual iris boundary. As these figures show, accurate positioning cannot be achieved when the illumination of the iris image is uneven. FIGS. 2a, 2b, and 2c show the result of positioning the iris boundary with boundary detection combined with the Hough transform: FIG. 2a is the original iris image, FIG. 2b is the boundary-detection result, and FIG. 2c is the positioning result of boundary detection combined with the Hough transform. As these figures show, when the illumination of the iris is severely non-uniform, boundary detection combined with the Hough transform also fails to position the iris boundary accurately.
The algorithms commonly used in the above prior art all rest on the assumption that the inner and outer boundaries of the iris are circular. In fact, the shape of the pupil is often non-circular, and because acquisition is non-invasive, the behavior of the captured subject is not controllable, so many non-ideal situations arise, for example: deflected gaze, eyelids only slightly open, and occlusion by light spots or eyeglass frames. The classical iris positioning algorithms therefore cannot achieve an ideal positioning result, and the localization of non-ideal iris images remains a challenging problem to be solved.
Aiming at the problem in the prior art that the iris positioning result is inaccurate when the iris image is a non-ideal iris image with a non-circular pupil, uneven illumination, overexposure, or underexposure, no effective solution has yet been proposed.
Disclosure of Invention
The embodiments of the invention provide an iris positioning method and device, which at least solve the technical problem in the prior art that the iris positioning result is inaccurate when the iris image is a non-ideal iris image with a non-circular pupil, uneven illumination, overexposure, or underexposure.
According to one aspect of the embodiments of the present invention, an iris positioning method is provided, including: acquiring image information of a target iris, and coarsely positioning the pupil in the image information; constructing a trimap of a target object according to the coarse positioning result of the pupil, and extracting the target object from the image information through a preset algorithm according to the trimap, wherein the target object comprises the pupil, eyelid, and iris outer circle of the target iris; and determining positioning information of the target iris according to the extracted pupil, eyelid, and iris outer circle, wherein the positioning information comprises at least one of the following: pupil boundary, eyelid boundary, and iris boundary.
According to another aspect of the embodiments of the present invention, an iris positioning apparatus is also provided, including: an acquisition module 100 configured to acquire image information of a target iris and coarsely position the pupil in the image information; a first determining module 102 configured to construct a trimap of a target object according to the coarse positioning result of the pupil, and extract the target object from the image information through a preset algorithm according to the trimap, wherein the target object comprises the pupil, eyelid, and iris outer circle of the target iris; and a second determining module 104 configured to determine positioning information of the target iris according to the extracted pupil, eyelid, and iris outer circle, wherein the positioning information comprises at least one of the following: pupil boundary, eyelid boundary, and iris boundary.
In the embodiments of the invention, the image information of the target iris is acquired, the area containing the target object in the image information is determined, a trimap of the target object is constructed, the target object is extracted from the image information through a preset algorithm according to the trimap, and the positioning information of the target iris is determined according to the extracted pupil, eyelid, and iris outer circle, where the positioning information includes at least one of the following: pupil boundary, eyelid boundary, and iris boundary. These steps determine the positioning information of the target iris by constructing the trimap and extracting the pupil, the eyelid boundary, and the iris outer circle from the iris image with the matting algorithm according to the constructed trimap, thereby positioning the target iris. This solves the technical problem in the prior art that the iris positioning result is inaccurate when the iris image is a non-ideal iris image with a non-circular pupil, uneven illumination, overexposure, or underexposure, and achieves accurate positioning of the target iris under non-ideal conditions such as a non-circular pupil, under- or over-exposed image information, and severely closed eyelids.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1a is a non-ideal raw iris image according to the prior art;
FIG. 1b is a schematic diagram of the result of iris localization using a calculus operator for the iris image of FIG. 1a, according to the prior art;
FIG. 1c is a schematic illustration of the ideal positioning result for the iris image in FIG. 1a;
FIG. 2a is a non-ideal raw iris image according to the prior art;
FIG. 2b is a diagram illustrating the result of a boundary detection performed on the iris image of FIG. 2a according to the prior art;
FIG. 2c is a schematic diagram of the result of boundary detection combined with Hough transform localization of the iris image in FIG. 2a according to the prior art;
FIG. 3 is a flow chart of a method of iris localization according to an embodiment of the present invention;
FIG. 4a is a schematic illustration of an image corresponding to a pupil region cut from original image information in accordance with the present invention;
FIG. 4b is a schematic illustration of an alternative pupil trimap according to an embodiment of the present invention;
FIG. 4c is a background image obtained by matting FIG. 4a with the trimap information provided in FIG. 4b, according to an embodiment of the invention;
FIG. 4d is a grayscale histogram of a small region of the pupil taken in FIG. 4a, according to the present invention;
FIG. 4e is a schematic illustration of the spot positions of FIG. 4a obtained by a threshold segmentation method according to the present invention;
FIG. 4f is a schematic illustration of expansion of the spot of FIG. 4e in accordance with the present invention;
FIG. 4g is a schematic diagram of the boundary points of a non-circular pupil according to the present invention;
FIG. 4h is a schematic diagram of a non-circular pupil boundary point after noise removal according to the present invention;
FIG. 4i is a schematic illustration of the result of a non-circular pupil location according to the present invention;
FIG. 5 is a flow chart of an alternative method of locating a pupil boundary in accordance with embodiments of the present invention;
FIG. 6a is a schematic illustration of an alternative eyelid trimap in accordance with an embodiment of the present invention;
FIG. 6b is a schematic illustration of a foreground image obtained by matting from the eyelid trimap of FIG. 6a according to an embodiment of the invention;
fig. 6c is a schematic diagram of a binary image obtained by performing threshold segmentation on a foreground image according to an embodiment of the present invention;
FIG. 6d is a schematic diagram of key points of the upper and lower eyelids obtained from the binary image of FIG. 6c according to an embodiment of the present invention;
FIG. 6e is a schematic illustration of eyelid positioning results for a non-ideal iris image (with frame) according to an embodiment of the present invention;
FIG. 6f is a schematic illustration of eyelid positioning results for another non-ideal iris image (micro-occlusion) in accordance with an embodiment of the present invention;
FIG. 7 is a flow diagram of an alternative method of eyelid localization for a target image in accordance with an embodiment of the present invention;
FIG. 8a is a schematic illustration of an alternative parameter label for constructing a trimap of an outer circle of an iris in accordance with embodiments of the invention;
FIG. 8b is a schematic diagram of an alternative trimap of the outer circle of the iris in accordance with embodiments of the present invention;
FIG. 8c is a schematic diagram of an alternative foreground image of the outer circle of the iris obtained by a matting algorithm according to an embodiment of the invention;
FIG. 8d is a schematic illustration of an alternative plurality of rows having points with the greatest gradient values in accordance with an embodiment of the present invention;
FIG. 8e is a schematic illustration of an alternative iris outer-circle positioning result according to an embodiment of the present invention;
FIG. 9 is a schematic illustration of an alternative acquisition of the outer boundary of an iris in accordance with embodiments of the present invention; and
FIG. 10 is a schematic structural diagram of an iris positioning apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
In accordance with an embodiment of the present invention, an embodiment of an iris localization method is provided. It is noted that the steps illustrated in the flowcharts of the accompanying drawings may be executed in a computer system capable of executing a set of computer-executable instructions, and that, although a logical order is illustrated in the flowcharts, in some cases the steps illustrated or described may be executed in an order different from the one given here.
Fig. 3 is a schematic diagram of an iris positioning method according to an embodiment of the present invention, as shown in fig. 3, the method includes the following steps:
step S302, acquiring image information of the target iris, and roughly positioning the pupil in the image information.
Specifically, the target iris is the iris to be detected, and the image information of the target iris may be a standard iris image or a non-ideal iris image, that is, an iris image with uneven illumination, insufficient or excessive exposure, severely closed eyelids, or a severely non-circular pupil.
In the above step, an RST (Radial Symmetry Transform) may be used to coarsely position the pupil, where the coarse positioning result may include the circle-center position and the radius of the pupil.
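The patent gives no code for the RST itself. As a rough, hypothetical sketch of this step's input/output contract, the snippet below substitutes OpenCV's Hough circle transform for the RST (a plainly different technique that yields the same output, a circle center and radius); the function name and all parameter values are illustrative guesses, not taken from the patent.

import cv2
import numpy as np

def coarse_locate_pupil(gray: np.ndarray):
    """Return an approximate pupil (cx, cy, r) from a uint8 grayscale eye image."""
    blurred = cv2.medianBlur(gray, 5)  # suppress speckle before circle voting
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1,
                               minDist=gray.shape[0],  # expect a single pupil
                               param1=100, param2=20,
                               minRadius=10, maxRadius=60)
    if circles is None:
        return None
    cx, cy, r = circles[0, 0]  # strongest circle hypothesis
    return float(cx), float(cy), float(r)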
Step S304, constructing a trisection map of the target object according to the coarse positioning result of the pupil, and extracting the target object from the image information through a preset algorithm according to the trisection map, wherein the target object comprises: the pupil, eyelid, and outer iris circle of the target iris.
Specifically, in the above step, constructing the trimap of the target object according to the coarse positioning result of the pupil may proceed as follows: the foreground region and the background region of the target object are determined according to the coarse positioning result of the pupil; after they are determined, the foreground and background regions are given different marks and the remaining regions are left unprocessed, so that the trimap of the target object is obtained.
In an alternative embodiment, the preset algorithm may be a matting algorithm. The matting algorithm assumes that the color distributions of the foreground and background pixels within a local window are linear. An objective function is obtained according to the least-squares method, and the image of the target object can then be extracted from the image information of the target iris by the matting algorithm provided in this embodiment.
Step S306, determining the positioning information of the target iris according to the extracted pupil, eyelid and iris excircle, wherein the positioning information comprises at least one of the following: pupil boundary, eyelid boundary, and iris boundary.
It should be noted here that, in the above embodiment of the present application, the target object is extracted from the image information of the target iris through the matting algorithm according to the constructed trimap. When the image information of the target iris exhibits a non-circular pupil, under- or over-exposure, or severely closed eyelids, positioning with the calculus operator or with boundary detection combined with the Hough transform gives an inaccurate result; the construction of the trimap, however, is essentially unaffected by these factors, and the iris is positioned according to the constructed trimap by relying on the prior knowledge the trimap provides. The above embodiment can therefore accurately position the target iris under non-ideal conditions such as a non-circular pupil, under- or over-exposed image information, and severely closed eyelids.
As can be seen from the above, the steps of the present application acquire image information of a target iris, determine the area containing the target object in the image information, construct a trimap of the target object, extract the target object from the image information through a preset algorithm according to the trimap, and determine the positioning information of the target iris according to the extracted pupil, eyelid, and iris outer circle, where the positioning information includes at least one of the following: pupil boundary, eyelid boundary, and iris boundary. By constructing the trimap and extracting the pupil, the eyelid boundary, and the iris outer circle from it, these steps determine the positioning information of the target iris and thereby position it, solving the technical problem in the prior art that the iris positioning result is inaccurate for non-ideal iris images (non-circular pupil, uneven illumination, overexposure, or underexposure) and accurately positioning the target iris under non-ideal conditions such as a non-circular pupil, under- or over-exposed image information, and severely closed eyelids.
Optionally, according to the above embodiment of the present application, after acquiring the image information of the target iris, the method further includes: an area including the pupil is cut out from the image information of the target iris.
Optionally, according to the above embodiment of the present application, when the target object is the pupil, constructing a trimap of the target object according to the coarse positioning result of the pupil and extracting the target object from the image information through a preset algorithm according to the trimap includes:
step S3041, determining a foreground region and a background region of the pupil according to the coarse positioning result, and marking the foreground region and the background region differently, so as to construct the pupil trimap.
In the foregoing step, the coarse positioning result may include the circle-center position and radius of the pupil. To avoid the influence of severely closed eyelids on positioning, a lower semicircle may be constructed with the circle center as its center and a preset distance smaller than the pupil radius as its radius, and used as the foreground region of the pupil trimap; a circle is likewise constructed with the circle center as its center and another preset distance larger than the pupil radius as its radius, and the region beyond this circle is used as the background region. The foreground region is then marked as 1, the background region as 0, and the remaining regions are left unprocessed, so that the pupil trimap is obtained. In an alternative embodiment, the pupil trimap may be as shown in FIG. 4b.
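A minimal sketch of this trimap construction, assuming the coarse result (cx, cy, r) from the previous step; the margins d_in and d_out stand in for the two unspecified "preset distances", and 0.5 is used here merely to represent unprocessed (unknown) pixels.

import numpy as np

def build_pupil_trimap(shape, cx, cy, r, d_in=5, d_out=10):
    """Trimap: 1 = known foreground, 0 = known background, 0.5 = unknown."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(xs - cx, ys - cy)
    trimap = np.full(shape, 0.5)                    # unknown by default
    # lower half-disc strictly inside the coarse pupil: sure foreground,
    # taken below the center so a drooping upper eyelid cannot intrude
    trimap[(dist < r - d_in) & (ys > cy)] = 1.0
    trimap[dist > r + d_out] = 0.0                  # sure background
    return trimap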
Step S3043, a pupil background image is obtained from the pupil trimap by the matting algorithm.
In an alternative embodiment, with the pupil trimap shown in FIG. 4b, the background region obtained by matting the iris image can be as shown in FIG. 4c, and the iris image can be matted by the following method:
According to the constructed trimap, the background area of the trimap is obtained by solving for α in the formula (L + λD)α = λb, where the matrix L is the known matting Laplacian matrix; the matrix D is a diagonal matrix of the same size as L whose diagonal elements are the elements of a matrix T arranged in columns and whose remaining elements are 0, T being the matrix that marks both the background area and the foreground area of the trimap as 1 and the remaining areas as 0; and b is an (M × N) × 1-dimensional vector whose elements are the elements of a matrix K arranged in the same column order (the two vectorizations must agree), K being the matrix that marks the foreground region of the trimap as 1 and the remaining regions as 0.
The formula (L + λD)α = λb above is obtained by differentiating the objective function E(α) = α^T L α + λ(α - β)^T D(α - β) and setting the derivative to zero. An embodiment of obtaining this objective function is described in detail below.
It should be noted in advance that, for an input image I, each pixel can be written as a convex combination of a foreground color F and a background color B: I = αF + (1 - α)B, where α is the opacity of the foreground, also called the mask value; the case α = 1 is called absolute foreground, and α = 0 is called absolute background. The matting algorithm used in the present application estimates the values of the unknown region {F, B, α} from the known absolute-foreground and absolute-background pixels.
Then, for the gray-scale image of the target iris, assuming that the foreground region F and the background region B in a small window are both constants, the α value of the pixel point in the small window w on the gray-scale image I of the target iris can be calculated by the following formula:
Figure BDA0001116677880000071
wherein
Figure BDA0001116677880000072
Obtaining a cost function according to a least square method:
Figure BDA0001116677880000073
To obtain the α that minimizes the cost function J, the coefficients (a_k, b_k) are eliminated from J, which reduces it to:

J(α) = α^T L α;
Here the constructed pupil trimap is introduced: the trimap supplies prior knowledge of foreground and background points, so while the cost function J is minimized, the obtained α must also satisfy the foreground and background information of the trimap, that is, α = 1 for pixel points marked as foreground in the trimap and α = 0 for pixel points marked as background. Collecting these labels in the vector β and enforcing them with weight λ yields the objective function: E(α) = α^T L α + λ(α - β)^T D(α - β).
As can be seen from the above, in the above steps of the present application, the foreground region and the background region of the pupil are determined according to the coarse positioning result, the two regions are marked differently to construct the pupil trimap, and the pupil background image is obtained from the pupil trimap by the matting algorithm. This scheme constructs the pupil trimap and uses the matting algorithm to separate the background image, i.e., the image of the pupil region, from the image information of the original iris, thereby obtaining the pupil image.
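A sketch of solving the linear system (L + λD)α = λb derived above, assuming a precomputed sparse matting Laplacian L (building L itself, e.g., per closed-form matting, is omitted); the value of λ is an illustrative choice, not taken from the patent.

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def solve_matting(L: sp.spmatrix, trimap: np.ndarray, lam: float = 100.0):
    """trimap: 1 = foreground, 0 = background, anything else = unknown.
    L must be built with the same row-major pixel ordering used here."""
    known = (trimap == 1.0) | (trimap == 0.0)
    D = sp.diags(known.ravel().astype(np.float64))   # constraint weights
    b = (trimap == 1.0).ravel().astype(np.float64)   # foreground labels
    alpha = spla.spsolve((L + lam * D).tocsr(), lam * b)
    return np.clip(alpha, 0.0, 1.0).reshape(trimap.shape)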
Optionally, according to the above embodiment of the present application, determining the positioning information of the target iris according to the extracted pupil, eyelid, and iris outer circle includes: determining a pupil boundary of the target iris according to the extracted pupil, wherein the step of determining the pupil boundary of the target iris according to the extracted pupil comprises the following steps:
step S3061, perform binarization on the background image according to a preset threshold to obtain a binary image corresponding to the background image, and perform boundary detection on the binary image to obtain a boundary of the binary image.
In the above step, since the pupil region was marked as the foreground region in the previous step, the pixel values of the pupil region in the background image are low; boundary detection is then performed to obtain the boundary of the binary image. The preset threshold for binarizing the image may be 0.1.
Step S3063, denoising the boundary of the binary image with the spot noise template to obtain the boundary points of the pupil.
Step S3065, fitting the boundary points of the pupil with an ellipse to obtain a pupil boundary.
It should be noted here that, since the pupil is not necessarily circular, or even severely non-circular, the above steps use an ellipse to fit the boundary points.
According to the method, the background image is binarized according to the preset threshold, the boundary of the binarized image is obtained through boundary detection, the boundary is denoised with the spot noise template to obtain the boundary points of the pupil, and the boundary points are fitted with an ellipse to obtain the inner boundary of the target iris. On the basis of the obtained background image, this scheme obtains the pupil boundary through binarization and boundary detection, removes the spot noise with the spot noise template, and fits the pupil boundary points with an ellipse, so that a non-circular pupil is accurately positioned.
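A sketch of the binarization and boundary detection just described, assuming the matted background image bg has been scaled to [0, 1]; OpenCV's contour tracing stands in for the unspecified boundary-detection step, and keeping only the largest blob is an added assumption.

import cv2
import numpy as np

def pupil_boundary_points(bg: np.ndarray) -> np.ndarray:
    binary = (bg < 0.1).astype(np.uint8)             # pupil region is dark
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    if not contours:
        return np.empty((0, 2), dtype=int)
    largest = max(contours, key=cv2.contourArea)     # keep the pupil blob
    return largest.reshape(-1, 2)                    # (x, y) boundary points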
Optionally, according to the above embodiment of the present application, denoising the boundary of the binary image through the spot noise template to obtain the boundary point of the pupil includes:
step S30631, acquiring a light spot noise template of the target iris; and eliminating the spot noise point on the pupil boundary by using the spot noise template, wherein the step of acquiring the spot noise template of the target iris comprises the following steps:
step S30635, determining pixel points in the area including the pupil whose pixel values are greater than the preset pixel threshold as light spots.
Specifically, the preset pixel threshold may be a gray value smaller than that of most light spots and larger than that of most of the pupil and iris regions, so that the light spots can be detected.
Step S30637, binary segmentation is performed on the region including the pupil to obtain the spot position.
And step S30639, expanding the area where the light spot is located according to a preset template to obtain a light spot noise template.
In an alternative embodiment, the original image is cropped to a small region including the pupil according to the coarse pupil localization parameters obtained by RST, as shown in FIG. 4a; FIG. 4d is the gray-level distribution histogram of the cropped region. In this region the light spots are bright and have high gray values, while the gray values of the pupil and iris regions are low, well below 150, so the spots can be detected by threshold segmentation: points with pixel values greater than 150 are regarded as light spots, and binary segmentation sets these pixel points to 1 and all others to 0. FIG. 4e shows the spot positions obtained by threshold segmentation. Finally, the spot areas are dilated with a 5 × 5 circular template, giving the spot noise template shown in FIG. 4f. The spot noise template is mainly used to eliminate the influence of spots lying on the pupil boundary on the selection of pupil-boundary candidate points. FIG. 4g shows that the pupil boundary before spot-noise removal is clearly affected by the spots, and FIG. 4h shows the pupil boundary after spot-noise removal, from which the influence of the spots has been removed.
As can be seen from the above, in the above steps of the present application, a region including the pupil is cut from the image information of the target iris, pixel points in that region whose values exceed the preset pixel threshold are determined to be light spots, binary segmentation of the region yields the spot positions, and the areas where the spots are located are dilated according to a preset template to obtain the spot noise template. Detecting the spots by threshold segmentation and dilating the detected spots eliminates the influence of the spots on the selection of pupil-boundary candidate points.
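A sketch of the spot-noise-template construction described above (threshold at gray value 150, then dilation with a 5 × 5 circular template); the function name is illustrative.

import cv2
import numpy as np

def spot_noise_template(region: np.ndarray) -> np.ndarray:
    """region: uint8 grayscale crop around the pupil; returns a 0/1 mask."""
    spots = (region > 150).astype(np.uint8)          # bright specular spots
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.dilate(spots, kernel)                 # grow spots to cover halos

Boundary candidates that fall inside the returned mask are then discarded as spot noise, e.g., points = [p for p in points if template[p[1], p[0]] == 0].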
Optionally, according to the above embodiment of the present application, fitting the boundary point of the pupil through an ellipse to obtain the pupil boundary includes:
in step S30651, constraint conditions for an ellipse are set, the ellipse being used to represent the inner boundary of the target iris.
In an alternative embodiment, the pupil boundary is fitted using the direct ellipse-fitting method based on the obtained pupil boundary points. The general form of a planar conic can be expressed as:

F(x, y) = a·x² + b·x·y + c·y² + d·x + e·y + f = 0,

where A = (a, b, c, d, e, f)^T is the coefficient vector and X = (x², xy, y², x, y, 1)^T, so that F(x, y) = A^T·X.
For the obtained N boundary points (x_i, y_i), the value F(x_i, y_i) = A^T·X_i is called the algebraic distance from the point (x_i, y_i) to the conic curve. The least-squares method finds the coefficients that minimize the sum of the squared algebraic distances from the N boundary points to the ellipse, i.e., the objective function is:

min_A Σ_{i=1}^{N} F(x_i, y_i)².
Since the pupil is approximately elliptical, the constraint b² - 4ac < 0 is added to ensure that the fitting result is an ellipse. The constrained problem is therefore:

min_A Σ_{i=1}^{N} F(x_i, y_i)², subject to b² - 4ac < 0.
step S30653, solving the constraint condition to obtain a parameter of an ellipse closest to the pupil, and obtaining a boundary of the pupil according to the parameter of the ellipse.
In an alternative embodiment, again taking the above direct ellipse-fitting of the pupil boundary as an example: because the constraint b² - 4ac < 0 is an inequality, a solution does not necessarily exist under it directly, so the equality constraint b² - 4ac = -1 is introduced instead, and solving yields the parameters of the ellipse, A = (a, b, c, d, e, f).
FIG. 4i is an exemplary diagram of locating the inner boundary of the iris according to the above embodiment.
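A sketch of the direct least-squares ellipse fit with the equality constraint b² - 4ac = -1 (equivalently 4ac - b² = 1), written as the classical generalized eigenvalue problem; this textbook formulation is assumed rather than taken verbatim from the patent.

import numpy as np

def fit_ellipse_direct(pts: np.ndarray):
    """pts: (N, 2) pupil boundary points; returns conic (a, b, c, d, e, f)."""
    x = pts[:, 0].astype(float)
    y = pts[:, 1].astype(float)
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    S = D.T @ D                       # 6x6 scatter matrix
    C = np.zeros((6, 6))              # constraint matrix: A^T C A = 4ac - b^2
    C[0, 2] = C[2, 0] = 2.0
    C[1, 1] = -1.0
    # Stationarity of the Lagrangian gives S A = mu C A; the ellipse solution
    # is the eigenvector whose eigenvalue is real and positive.
    eigval, eigvec = np.linalg.eig(np.linalg.solve(S, C))
    good = np.where(np.isclose(eigval.imag, 0) & (eigval.real > 0))[0]
    return np.real(eigvec[:, good[0]]) if good.size else None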
A preferred step of locating the pupil boundary is described below in accordance with the above embodiments:
step S51, after the target iris to be detected is input, RST is performed to roughly locate the pupil boundary.
Step S52, acquiring a spot noise template through spot detection.
Step S53, construct the pupil trimap, and obtain the background image (i.e., the image of the pupil) through the matting algorithm.
And step S54, binarizing the background image and performing boundary detection on the binary image to obtain the boundary points of the pupil, and denoising the boundary points using the spot noise template.
In step S55, a pupil boundary is obtained by fitting an ellipse to the boundary point of the pupil.
Optionally, according to the above embodiment of the present application, when the target object is an eyelid, constructing a trimap of the target object according to the coarse positioning result of the pupil and extracting the target object from the image information through a preset algorithm according to the trimap includes:
step S3045, determining a foreground region and a background region of the pupil according to the coarse positioning result, and marking the foreground region and the background region differently, so as to construct an eyelid trimap.
In the foregoing step, the foreground region and the background region may be determined from the coarse positioning result as follows: the circle-center position and radius are taken from the coarse positioning parameters; an ellipse is formed with the circle center as its center, a preset distance smaller than the radius as its semi-minor axis, and another preset distance smaller than an empirical value of the distance between the inner and outer eye corners as its semi-major axis, and the inside of this ellipse is taken as the foreground region; the region whose distance from the circle center is larger than an empirical value of the distance between the upper and lower eyelids and the circle center is taken as the background region. The foreground region is marked as 1, the background region as 0, and the remaining regions are left unprocessed, yielding the eyelid trimap, which in an alternative embodiment may be as shown in FIG. 6a.
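A minimal sketch of the eyelid trimap just described; the semi-axes and the background margin are illustrative stand-ins for the unspecified "preset distances" and the empirical eye-corner and eyelid distances.

import numpy as np

def build_eyelid_trimap(shape, cx, cy, r_pupil,
                        semi_major=60, bg_margin=70):
    h, w = shape
    semi_minor = max(r_pupil - 3, 1)                # slightly inside the pupil
    ys, xs = np.mgrid[0:h, 0:w]
    trimap = np.full(shape, 0.5)                    # unknown by default
    trimap[np.hypot(xs - cx, ys - cy) > bg_margin] = 0.0   # sure background
    inside = ((xs - cx) / semi_major) ** 2 + ((ys - cy) / semi_minor) ** 2 <= 1
    trimap[inside] = 1.0                            # elliptical sure foreground
    return trimap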
Step S3047, an eyelid foreground image is acquired from the eyelid trimap through a matting algorithm.
In an alternative embodiment, the image of the eyelid obtained by the matting algorithm is shown in FIG. 6 b.
According to the method, the foreground region and the background region are determined according to the coarse positioning parameters and marked differently to construct the eyelid trimap, and the eyelid foreground image is obtained from the eyelid trimap by the matting algorithm. This scheme constructs the eyelid trimap and uses the matting algorithm to separate the foreground image, i.e., the image of the eyelid area, from the original iris image, thereby acquiring the eyelid image.
Optionally, according to the above embodiment of the present application, determining the positioning information of the target iris according to the extracted pupil, eyelid, and iris outer circle includes: determining the boundary of the eyelid according to the extracted eyelid, wherein the determining the boundary of the eyelid according to the extracted eyelid comprises:
step S3067, a threshold segmentation is performed on the image of the foreground region to obtain a binary image of the eyelid.
In an optional embodiment, the binary image obtained by threshold-segmenting the foreground-region image may be the binary image of the eyelid shown in FIG. 6c. The threshold segmentation may set a preset gray value and classify each pixel of the foreground-region image against it: pixel points larger than the preset gray value are marked as 1 and pixel points smaller than it as 0, yielding the binary image of the eyelid.
In step S3069, upper and lower eyelid key points for the eyelids are determined.
In an optional embodiment, within a preset range around the coarsely positioned pupil center, each column is searched upward for the zero-gray pixel with the largest row coordinate, which serves as a key point of the upper eyelid, and downward for the zero-gray pixel with the smallest row coordinate, which serves as a key point of the lower eyelid, as shown in FIG. 6d.
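A sketch of this key-point search, under the reading that the binary image marks the eyelid region as 0 and the eye-opening region as 1; the column-wise search and the half-width are assumptions consistent with the description and FIG. 6d, not specifics given by the patent.

import numpy as np

def eyelid_keypoints(binary: np.ndarray, cx: int, cy: int, half_width: int = 40):
    """cx = pupil column, cy = pupil row; returns (upper, lower) key points."""
    upper, lower = [], []
    h, w = binary.shape
    for col in range(max(cx - half_width, 0), min(cx + half_width, w)):
        rows_above = np.where(binary[:cy, col] == 0)[0]
        if rows_above.size:
            upper.append((col, rows_above.max()))        # lowest zero pixel above
        rows_below = np.where(binary[cy:, col] == 0)[0]
        if rows_below.size:
            lower.append((col, cy + rows_below.min()))   # highest zero pixel below
    return np.array(upper), np.array(lower)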
Step S30610, fitting the upper eyelid key points and the lower eyelid key points of the foreground image by a parabola to obtain an eyelid boundary of the target iris.
Since the shape of an eyelid can be approximated by a parabola, least-squares parabolic fitting is performed separately for the two given sets of key points of the upper and lower eyelids.
According to the method, the image of the foreground region is threshold-segmented in the above steps to obtain the binary image of the eyelid, the upper-eyelid and lower-eyelid key points are determined, and the key points of the foreground image are fitted with parabolas to obtain the eyelid boundaries of the target iris. Obtaining the boundaries of the upper and lower eyelids by parabolic fitting avoids the influence of severely closed eyelids and of insufficient or excessive exposure on eyelid positioning, and achieves accurate eyelid positioning when the iris image is a non-ideal iris image.
Optionally, according to the above embodiment of the present application, fitting an upper eyelid key point and a lower eyelid key point of the foreground image by using a parabola to obtain an eyelid boundary of the target iris includes:
in step S30611, a corresponding quadratic function is set for the upper eyelid key point and the lower eyelid key point.
Specifically, in the step, the quadratic function is a parabolic function.
Step S30613, setting eyelid-boundary constraint conditions, namely that the error of the quadratic function of each eyelid with respect to its own key points is minimized.
In an alternative embodiment, for a given set of data (x_i, y_i), i = 1, 2, …, n, the quadratic function y = a0 + a1·x + a2·x² that minimizes the sum of squared errors is sought, where (x_i, y_i) are upper-eyelid or lower-eyelid key points and a0, a1, a2 are the coefficients to be determined. The error Q of the quadratic function with respect to the key points is:

Q = Σ_{i=1}^{n} [y_i - (a0 + a1·x_i + a2·x_i²)]².
step S30614, a quadratic function is solved according to the eyelid boundary constraint conditions, and the boundaries of the upper eyelid and the lower eyelid are obtained.
In an alternative embodiment, continuing with the quadratic function y = a0 + a1·x + a2·x² for the given data (x_i, y_i), i = 1, 2, …, n, and minimizing the sum of squared errors: the above constraint is solved by setting the partial derivatives of Q with respect to a_j (j = 0, 1, 2) to zero, i.e.,

∂Q/∂a_j = 0, for j = 0, 1, 2.
This gives the extreme points of Q; comparing the function values at the extreme points, the a0, a1, a2 that minimize Q are taken as the coefficients of the quadratic function of the corresponding eyelid. The parabola formed by the obtained coefficients is the boundary of the upper or lower eyelid.
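A sketch of the least-squares parabola fit: instead of solving the normal equations ∂Q/∂a_j = 0 by hand, numpy's polynomial fit (which minimizes the same Q) is used once per eyelid; the function name is illustrative.

import numpy as np

def fit_eyelid_parabola(points: np.ndarray):
    """points: (N, 2) key points as (column x, row y); returns (a0, a1, a2)."""
    x, y = points[:, 0], points[:, 1]
    a2, a1, a0 = np.polyfit(x, y, deg=2)   # highest-degree coefficient first
    return a0, a1, a2

The curve y = a0 + a1·x + a2·x² then traces the fitted eyelid boundary.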
Fig. 7 is a flowchart of an alternative eyelid positioning method for a target image according to an embodiment of the present application, and a preferred embodiment of eyelid positioning in the present application is described below with reference to the example shown in fig. 7:
and step S71, performing RST coarse positioning on the pupil to obtain boundary parameters of the pupil, and constructing an eyelid trisection map.
And step S72, acquiring an eyelid foreground image through matting calculation according to the eyelid trisection.
And step S73, acquiring a binary image of the foreground image by adopting a threshold segmentation method.
In step S74, key points of the upper and lower eyelids are acquired.
Step S75, parabolic fitting is performed on the upper eyelid and the lower eyelid, respectively, to obtain an upper eyelid boundary and a lower eyelid boundary.
Optionally, according to the above embodiment of the present application, when the target object is the outer circle of the iris, determining the area including the target object in the image information, constructing a trimap of the target object, and extracting the target object from the image information through a preset algorithm according to the trimap includes:
step S3049, obtaining the circle center position and the radius of the inner boundary included in the coarse positioning result.
Step S30411, obtaining a first reference distance and a second reference distance based on the circle center position and the radius, and marking a pixel point whose distance from the circle center is smaller than the first reference distance as 1, and marking a pixel point whose distance from the circle center is greater than the second reference distance as 0.
As a preferred embodiment, when the quality of the iris image is poor (for example, the eyelashes of the upper eyelid cause heavy occlusion), only the pixel points located below the circle center whose distance from the circle center is smaller than the first reference distance may be marked as 1, so as to prevent the foreground of the trimap of the iris outer circle from including the image of the eyelashes and thereby affecting the positioning of the iris outer circle.
In these steps, constructing the trimap of the iris outer circle on the one hand allows most pixel points of the image to be treated as known foreground or background pixels, and the smaller the number of unknown pixels, the higher the operating efficiency of the matting algorithm; on the other hand, using the trimap as a supplementary condition also improves the accuracy of the matting.
The inner boundary of the iris can be roughly positioned using the radial symmetry transform algorithm; referring to FIG. 8a, this yields the inner-boundary parameters, the circle center (x_pupil, y_pupil) and the radius r_pupil. A trimap is then constructed using this inner-boundary parameter information. With D(x, y) denoting the distance from the pixel (x, y) to the circle center, the construction can be written as:

tri(x, y) = 1, if D(x, y) < r_pupil + d1 and (x, y) lies below the circle center;
tri(x, y) = 0, if D(x, y) > r_pupil + d2;
tri(x, y) = I(x, y), otherwise;

where tri(x, y) represents the constructed trimap and I is the original image information of the target iris with gray values normalized to the interval [0, 1]. Points with value 1 are foreground points, points with value 0 are background points, and the remaining points are unknown pixel points. FIG. 8a shows the parameter labeling of the trimap construction, and FIG. 8b is a schematic representation of the constructed trimap of the iris outer circle. The foreground region is a semicircle because human eyelashes partially occlude the iris region, so the foreground is chosen to avoid the eyelash portion. In the formula above, d1 and d2 are distance parameters, optionally d1 = 8 and d2 = 13.
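A sketch of this trimap construction with d1 = 8 and d2 = 13; as in the formula above, unknown pixels keep the normalized image gray value.

import numpy as np

def build_iris_trimap(img: np.ndarray, cx, cy, r_pupil, d1=8, d2=13):
    """img: grayscale image normalized to [0, 1]; (cx, cy) = pupil center."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(xs - cx, ys - cy)
    tri = img.copy()                                  # unknown region
    tri[(dist < r_pupil + d1) & (ys >= cy)] = 1.0     # lower semicircle: foreground
    tri[dist > r_pupil + d2] = 0.0                    # background
    return tri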
Step S30413, determining the pixel points marked as 1 as foreground points and the pixel points marked as 0 as background points, and constructing the trimap of the iris outer circle from the foreground points and background points.
The trimap of the outer circle of the iris may be as shown in FIG. 8b.
Step S30415, obtaining the iris foreground image from the trimap of the iris outer circle through the matting algorithm.
In an alternative embodiment, the obtained foreground image may be as shown in FIG. 8c.
As can be seen from the above embodiments of the present application, the inner boundary of the target iris is located using the radial symmetry algorithm and the parameters of the inner boundary are obtained; the first and second reference distances are obtained based on the circle center and radius; pixel points whose distance from the circle center is smaller than the first reference distance are marked as 1 and pixel points whose distance is greater than the second reference distance are marked as 0; the pixel points marked as 1 are determined to be foreground points and those marked as 0 to be background points; the trimap of the iris outer circle is constructed from the foreground and background points; and the iris foreground image is obtained from the trimap of the iris outer circle by the matting algorithm. This scheme constructs the trimap of the iris outer circle and uses the matting algorithm to separate the foreground image, i.e., the region of the iris outer circle, from the original iris image, achieving the purpose of obtaining the iris outer circle; in particular, accurate positioning can be achieved even under conditions such as severely closed eyelids.
Optionally, according to the above embodiment of the present application, determining the positioning information of the target iris according to the extracted pupil, eyelid, and iris outer circle includes: determining the iris outer boundary of the target iris according to the extracted iris outer circle, which comprises the following steps:
step S30611, taking the center of the circle of the inner boundary of the iris as the center, obtaining the horizontal gradient value of each row upwards and downwards in a preset range, and screening two pixel points with the largest gradient value in two preset interval ranges in each row.
In the above steps, because the gray value changes from small to large when passing from the dark background to the foreground iris region, an obvious gray change occurs at the boundary of the iris outer circle, where the gray gradient value is therefore largest, and the radius of the iris outer circle can be obtained from the gradient. Although the inner boundary and the outer circle of the iris are not concentric circles, their centers approximately coincide, so the center positions of the outer circle and the inner boundary can provisionally be assumed to be the same, i.e., (x_c, y_c) = (x_pupil, y_pupil).
In an alternative embodiment, the horizontal gradient is computed for the rows of the foreground image obtained by matting that lie around the circle-center row, namely the n rows above and below it:

x_i ∈ {x_c - n, x_c - (n - 1), …, x_c - 1, x_c, x_c + 1, …, x_c + (n - 1), x_c + n},

where i = 1, 2, …, 2n + 1, so there are 2n + 1 rows in total, and the horizontal gradient values of each row x_i are obtained. Near the circle center, the iris foreground image is generally not occluded by the eyelids, the boundary is clearer, and there is no eyelash interference, so the estimated radius is more accurate.
Because the gray level also changes greatly at the inner boundary of the iris, the points of maximum gradient must be excluded from the inner boundary in order to locate the outer circle. Two interval ranges are therefore set, (1, y_c - r_pupil - 5) and (y_c + r_pupil + 5, 160), the image size in the above embodiment being 120 × 160, so that the inner boundary is excluded from the two interval ranges. Within these two intervals, for each of the 2n + 1 rows x_i, the column positions corresponding to the maxima of the horizontal gradient are found, denoted y_i^l and y_i^r for i = 1, 2, …, 2n + 1; these two intervals no longer include the inner boundary of the iris.
Step S30613, obtaining coordinates corresponding to two pixel points with the largest gradient values in two preset interval ranges in each row, and calculating the distance between the two pixel points and the center of the circle according to the coordinates corresponding to the two pixel points.
Step S30615, obtaining reference radii of the iris outer circle according to the distances between the circle center and the two maximum-gradient pixel points found within the two preset interval ranges of each row.
In an alternative embodiment, the iris outer-circle radius estimated from row x_i can be found from the two maximum-gradient positions of that row by:

r_i = (y_i^r - y_i^l) / 2, i = 1, 2, …, 2n + 1.

FIG. 8d illustrates the points of maximum gradient value found in these rows.
And S30617, determining the iris outer boundary of the target iris according to the average of the reference radii of the iris outer circle and the circle center.
In an alternative embodiment, the average of the reference radii may be obtained by the following formula:

r_c = (1 / (2n + 1)) · Σ_{i=1}^{2n+1} r_i,

that is, the estimated radii r_i obtained for the individual rows are averaged to give the estimated outer-circle radius r_c.
Optionally, according to the above embodiment of the present application, after determining the iris outer boundary of the target iris according to the average of the reference radii and the circle center, the method further includes: searching for the outer-circle parameters within a preset area using the calculus operator, where the preset area comprises: radii whose difference from the estimated outer-boundary radius lies within a first preset range, and circle centers within a preset rectangular area centered on the circle center.
Coarse positioning of the iris outer circle by the matting algorithm yields the outer-circle radius parameter r_c. Because the positioning accuracy of the matting algorithm is already high, the calculus operator only needs to search radii within (r_c - 5, r_c + 5) and circle centers within a small range around the inner-boundary circle center to position the iris image precisely; here the first preset range is (-5, +5).
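A hedged sketch of this refinement step: a brute-force search over radii in (r_c - 5, r_c + 5) and a small grid of centers, scoring each candidate by the radial derivative of the mean gray value on the circle (a simplified, unsmoothed form of the calculus operator). Sampling density and grid sizes are illustrative, not taken from the patent.

import numpy as np

def circle_mean(img, xc, yc, r, samples=64):
    """Mean gray value on the circle of radius r centered at (row xc, col yc)."""
    t = np.linspace(0, 2 * np.pi, samples, endpoint=False)
    rows = np.clip((xc + r * np.sin(t)).astype(int), 0, img.shape[0] - 1)
    cols = np.clip((yc + r * np.cos(t)).astype(int), 0, img.shape[1] - 1)
    return img[rows, cols].mean()

def refine_outer_circle(img, xc, yc, rc, dr=5, dc=3):
    rc = int(round(rc))
    best, best_score = (xc, yc, rc), -np.inf
    for x in range(xc - dc, xc + dc + 1):           # centers near inner boundary
        for y in range(yc - dc, yc + dc + 1):
            means = [circle_mean(img, x, y, r)
                     for r in range(rc - dr, rc + dr + 1)]
            diffs = np.abs(np.diff(means))          # radial derivative magnitude
            k = int(np.argmax(diffs))
            if diffs[k] > best_score:               # sharpest gray-level jump wins
                best_score, best = diffs[k], (x, y, rc - dr + k)
    return best                                     # (row, col, radius)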
Fig. 9 is a schematic diagram of an alternative method for obtaining an outer iris boundary according to an embodiment of the present invention, and a preferred embodiment of the positioning of the outer iris boundary in the present application will be described with reference to the example shown in fig. 9:
step S91, coarse positioning of the inner boundary of the iris is performed.
And step S92, acquiring the foreground image of the iris outer circle from the constructed trimap through the matting algorithm.
And step S93, finding the points of maximum horizontal gradient, within the preset range, in the n rows above and below the circle center of the coarse inner-boundary positioning.
And step S94, calculating the average value of the distances from the point with the maximum gradient value in each row to the center of the circle as the estimated iris outer boundary radius.
And step S95, accurately positioning the center and radius of the outer boundary of the iris within a preset range through a calculus operator.
Example 2
The present application further provides an iris positioning apparatus, which can be used to perform the iris positioning method in embodiment 1, and fig. 10 is a schematic structural diagram of an iris positioning apparatus according to an embodiment of the present application, where the apparatus includes:
the acquiring module 100 is configured to acquire image information of a target iris and perform coarse positioning on a pupil in the image information.
The first determining module 102 is configured to construct a trimap image of a target object according to a coarse positioning result of a pupil, and extract the target object from image information through a preset algorithm according to the trimap image, where the target object includes: the pupil, eyelid, and outer iris circle of the target iris.
A second determining module 104, configured to determine positioning information of the target iris according to the extracted pupil, eyelid, and iris outer circle, where the positioning information includes at least one of the following: pupil boundary, eyelid boundary, and iris boundary.
It should be noted here that, in the above embodiment of the present application, the target object is extracted from the image information of the target iris by constructing a trimap. When the image information shows a non-circular pupil, under-exposure or over-exposure, or a severely closed eyelid, positioning by the calculus operator, or by boundary detection combined with Hough transform, yields inaccurate results; the construction of the trimap, however, is substantially unaffected by these factors. Since the iris is positioned from the constructed trimap, relying on the prior knowledge the trimap encodes, the above embodiment can accurately position the target iris under these non-ideal conditions.
As can be seen from the above, the apparatus of the present application acquires image information of the target iris through the acquiring module; through the first determining module, determines the area containing the target object in the image information, constructs a trimap of the target object, and extracts the target object from the image information through a preset algorithm according to the trimap; and, through the second determining module, determines the positioning information of the target iris according to the extracted pupil, eyelid, and iris outer circle, where the positioning information includes at least one of: a pupil boundary, an eyelid boundary, and an iris boundary. By constructing the trimap and extracting the pupil, eyelid, and iris outer circle from it, these steps determine the positioning information of the target iris and thus realize its positioning, solving the technical problem in the prior art that the iris positioning result is inaccurate for non-ideal iris images (non-circular pupil, uneven illumination, over-exposure or under-exposure), and achieving the technical effect of accurately positioning the target iris under such non-ideal conditions, including under-exposed or over-exposed image information and a severely closed eyelid.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, or a magnetic or optical disk.
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various modifications and refinements without departing from the principle of the present invention, and such modifications and refinements should also be regarded as falling within the protection scope of the present invention.

Claims (12)

1. An iris localization method, comprising:
acquiring image information of a target iris, and roughly positioning a pupil in the image information;
constructing a trimap of a target object according to the coarse positioning result of the pupil, and extracting the target object from the image information through a preset algorithm according to the trimap, wherein the target object comprises: the pupil, the eyelid, and the iris outer circle of the target iris, and the trimap comprises a foreground area, a background area, and other areas of the target object;
determining the positioning information of the target iris according to the extracted pupil, eyelid, and iris outer circle, wherein the positioning information comprises at least one of the following: a pupil boundary, an eyelid boundary, and an iris boundary;
wherein, under the condition that the target object is the iris outer circle, determining an area containing the target object in the image information, constructing a trimap of the target object, and extracting the target object from the image information through a preset algorithm according to the trimap comprises:
acquiring the circle center position and the radius of the inner boundary of the iris contained in the coarse positioning result;
under the condition that the iris image is occluded, acquiring a first reference distance and a second reference distance based on the circle center position and the radius, marking pixel points which are less than the first reference distance from the circle center and are positioned below the circle center as 1, and marking pixel points which are more than the second reference distance from the circle center as 0;
determining the pixel points marked as 1 as foreground points and the pixel points marked as 0 as background points, and constructing a trimap of the iris outer circle according to the foreground points and the background points;
and acquiring an iris foreground image from the trimap of the iris outer circle by using a matting algorithm.
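Below is a minimal sketch of the trimap construction recited at the end of claim 1, assuming a grayscale eye image and first and second reference distances derived from the inner-boundary radius; the distance factors k1 and k2 and the 0.5 label for the undecided region are illustrative assumptions.

```python
import numpy as np

def build_outer_trimap(shape, cx, cy, r, k1=1.2, k2=3.0):
    """Build a trimap for the iris outer circle (a sketch of claim 1).

    Pixels closer to the center than d1 = k1*r and below the center
    (larger y in image coordinates) are marked 1 (foreground); pixels
    farther than d2 = k2*r are marked 0 (background); everything else
    stays 0.5 (unknown, to be resolved by the matting algorithm).
    """
    h, w = shape
    d1, d2 = k1 * r, k2 * r
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(xs - cx, ys - cy)
    trimap = np.full(shape, 0.5)
    trimap[(dist < d1) & (ys > cy)] = 1.0  # foreground, below the center
    trimap[dist > d2] = 0.0                # background
    return trimap
```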
2. The method of claim 1, wherein after acquiring image information of a target iris and coarsely locating a pupil in the image information, the method further comprises: and intercepting a region including the pupil from the image information of the target iris.
3. The method according to claim 2, wherein when the target object is the pupil, constructing a trimap image of the target object according to a coarse positioning result of the pupil, and extracting the target object from the image information according to the trimap image through a preset algorithm, includes:
determining a foreground region and a background region of the pupil according to the coarse positioning result, and marking the foreground region and the background region differently to construct a pupil trimap;
and acquiring a pupil background image from the pupil trimap through a matting algorithm.
4. The method of claim 3, wherein determining the positioning information of the target iris from the extracted pupil, eyelid and iris outer circle comprises: determining a pupil boundary of the target iris according to the extracted pupil, wherein the step of determining the pupil boundary of the target iris according to the extracted pupil comprises the following steps:
carrying out binarization processing on the background image according to a preset threshold value to obtain a binary image corresponding to the background image, and carrying out boundary detection on the binary image to obtain a boundary of the binary image;
denoising the boundary of the binary image through a light spot noise template to obtain the boundary point of the pupil;
and fitting the boundary points of the pupil through an ellipse to obtain the pupil boundary.
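Claims 4 to 6 describe binarizing the matted background image, detecting its boundary, removing spot noise, and fitting an ellipse. A minimal OpenCV-based sketch of that pipeline follows; the threshold value and the use of cv2.findContours and cv2.fitEllipse are illustrative choices, not the patent's prescribed implementation.

```python
import cv2
import numpy as np

def fit_pupil_boundary(bg, spot_mask, thresh=128):
    """Fit the pupil boundary from a matted background image (claims 4-6).

    bg: 8-bit grayscale background image produced by the matting step.
    spot_mask: binary mask of light-spot noise pixels to exclude.
    """
    # Binarize the background image with a preset threshold.
    _, binary = cv2.threshold(bg, thresh, 255, cv2.THRESH_BINARY)
    # Boundary detection on the binary image.
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    pts = max(contours, key=cv2.contourArea).reshape(-1, 2)
    # Denoise: drop boundary points lying on the spot-noise template.
    keep = spot_mask[pts[:, 1], pts[:, 0]] == 0
    pts = pts[keep]
    # Least-squares ellipse fit to the remaining boundary points.
    ellipse = cv2.fitEllipse(pts.astype(np.float32))
    return ellipse  # ((x0, y0), (axis1, axis2), angle_in_degrees)
```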
5. The method according to claim 4, wherein denoising the boundary of the binary image through the light spot noise template to obtain the boundary points of the pupil comprises:
acquiring a light spot noise template of the target iris;
and eliminating the light spot noise points on the pupil boundary by using the light spot noise template;
wherein acquiring the light spot noise template of the target iris comprises:
determining pixel points with pixel values larger than a preset pixel threshold value in the region including the pupil as light spots;
performing binary segmentation on the region including the pupil to obtain the light spot positions;
and expanding the area where the light spots are located according to a preset template to obtain the light spot noise template.
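A minimal sketch of the light spot noise template of claim 5: threshold the bright specular highlights, then dilate them with a preset structuring element. The intensity threshold of 200 and the 5x5 kernel are illustrative assumptions.

```python
import cv2
import numpy as np

def spot_noise_template(region, pixel_thresh=200, kernel_size=5):
    """Build the light-spot noise template (a sketch of claim 5).

    Pixels brighter than pixel_thresh are treated as specular spots;
    the spot regions are then dilated by a preset template so that the
    halo pixels around each spot are also masked out.
    """
    _, spots = cv2.threshold(region, pixel_thresh, 255, cv2.THRESH_BINARY)
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    return cv2.dilate(spots, kernel)  # nonzero where spot noise may lie
```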
6. The method of claim 5, wherein fitting the boundary points of the pupil by an ellipse to obtain the pupil boundary comprises:
setting a constraint condition of an ellipse, wherein the ellipse is used for representing the inner boundary of the target iris;
and solving the constraint condition to obtain the parameter of the ellipse closest to the pupil, and obtaining the boundary of the pupil according to the parameter of the ellipse.
7. The method according to claim 1, wherein, in a case where the target object is an eyelid, constructing a trimap image of the target object according to a result of coarse positioning of the pupil, and extracting the target object from the image information by a preset algorithm according to the trimap image, comprises:
determining a foreground region and a background region of the pupil according to the coarse positioning result, and marking the foreground region and the background region differently to construct an eyelid trimap;
and acquiring an eyelid foreground image from the eyelid trimap through a matting algorithm.
8. The method of claim 7, wherein determining the positioning information of the target iris from the extracted pupil, eyelid and iris outer circle comprises: determining the boundaries of the eyelids according to the extracted eyelids, wherein the determining the boundaries of the eyelids according to the extracted eyelids comprises:
performing threshold segmentation on the image of the foreground region to obtain a binary image of the eyelid;
determining upper and lower eyelid keypoints for the eyelid;
fitting the upper eyelid key points and the lower eyelid key points of the foreground image through a parabola to obtain the eyelid boundary of the target iris.
9. The method of claim 8, wherein fitting the upper eyelid keypoints and the lower eyelid keypoints of the foreground image by a parabola to obtain the eyelid boundary of the target iris comprises:
setting corresponding quadratic functions for the upper eyelid key points and the lower eyelid key points;
setting a constraint condition of the eyelid boundary, wherein the constraint condition is that the error between each quadratic function and the upper or lower eyelid key points to which it corresponds is minimized;
and solving the quadratic function according to the constraint condition of the eyelid boundary to obtain the boundary of the upper eyelid and the lower eyelid.
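Claims 8 and 9 fit each eyelid with a quadratic function whose error against its key points is minimized. A minimal sketch using an ordinary least-squares polynomial fit is below; reading the minimum-error constraint as least squares is an assumption, and the sample key points are made up for illustration.

```python
import numpy as np

def fit_eyelid(keypoints):
    """Fit one eyelid as a parabola y = a*x^2 + b*x + c (claims 8-9).

    keypoints: sequence of (x, y) eyelid key points taken from the
    thresholded foreground image. np.polyfit minimizes the squared
    vertical error, one reading of the minimum-error constraint.
    """
    pts = np.asarray(keypoints, dtype=float)
    a, b, c = np.polyfit(pts[:, 0], pts[:, 1], deg=2)
    return a, b, c

# Example: fit the upper and lower eyelids independently.
upper = fit_eyelid([(10, 40), (30, 25), (50, 20), (70, 24), (90, 38)])
lower = fit_eyelid([(10, 80), (40, 92), (70, 94), (90, 85)])
```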
10. The method of claim 1, wherein determining the positioning information of the target iris from the extracted pupil, eyelid and iris outer circle comprises: determining the iris outer boundary of the target iris according to the extracted iris outer circle, wherein the step of determining the iris outer boundary of the target iris according to the extracted iris outer circle comprises the following steps:
taking the circle center of the inner boundary of the iris as the center, obtaining the horizontal gradient values of each row upwards and downwards within a preset range, and screening, in each row, the two pixel points with the maximum gradient values within two preset interval ranges;
obtaining the coordinates of the two maximum-gradient pixel points in each row, and calculating their distances to the circle center from those coordinates;
obtaining reference radii of a plurality of iris outer circles from the distances between the circle center and the two maximum-gradient pixel points in each row;
and determining the iris outer boundary of the target iris according to the average value of the reference radii of the plurality of iris outer circles and the circle center.
11. The method of claim 10, wherein after determining the iris outer boundary of the target iris according to the average value of the reference radii of the plurality of iris outer circles and the circle center, the method further comprises: searching for the outer-circle parameters in a preset area through a calculus operator, wherein the preset area comprises: radii whose difference from the determined iris outer boundary radius lies within a first preset range, and circle centers within a preset rectangular area centered on the determined circle center.
12. An iris positioning apparatus, comprising:
the acquisition module is used for acquiring image information of a target iris and carrying out coarse positioning on a pupil in the image information;
a first determining module, configured to construct a trimap of the target object according to the coarse positioning result of the pupil, and to extract the target object from the image information through a preset algorithm according to the trimap, wherein the target object comprises: the pupil, the eyelid, and the iris outer circle of the target iris, and the trimap comprises a foreground area, a background area, and other areas of the target object;
a second determining module, configured to determine positioning information of the target iris according to the extracted pupil, eyelid, and iris outer circle, wherein the positioning information comprises at least one of: a pupil boundary, an eyelid boundary, and an iris boundary;
wherein, under the condition that the target object is the iris outer circle, the first determining module is further configured to: acquire the circle center position and the radius of the inner boundary of the iris contained in the coarse positioning result; under the condition that the iris image is occluded, acquire a first reference distance and a second reference distance based on the circle center position and the radius, mark pixel points which are less than the first reference distance from the circle center and are positioned below the circle center as 1, and mark pixel points which are more than the second reference distance from the circle center as 0; determine the pixel points marked as 1 as foreground points and the pixel points marked as 0 as background points, and construct a trimap of the iris outer circle according to the foreground points and the background points; and acquire an iris foreground image from the trimap of the iris outer circle by using a matting algorithm.
CN201610833231.XA 2016-09-19 2016-09-19 Iris positioning method and device Active CN107844736B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610833231.XA CN107844736B (en) 2016-09-19 2016-09-19 Iris positioning method and device

Publications (2)

Publication Number Publication Date
CN107844736A CN107844736A (en) 2018-03-27
CN107844736B true CN107844736B (en) 2021-01-01

Family

ID=61656822

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610833231.XA Active CN107844736B (en) 2016-09-19 2016-09-19 Iris positioning method and device

Country Status (1)

Country Link
CN (1) CN107844736B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108572735B (en) * 2018-04-24 2021-01-26 京东方科技集团股份有限公司 Pupil center positioning device and method and virtual reality equipment
CN108921010B (en) * 2018-05-15 2020-12-22 北京环境特性研究所 Pupil detection method and detection device
CN109086713B (en) * 2018-07-27 2019-11-15 腾讯科技(深圳)有限公司 Eye recognition method, apparatus, terminal and storage medium
CN109389033B (en) * 2018-08-28 2022-02-11 江苏理工学院 Novel pupil rapid positioning method
CN109325455B (en) * 2018-09-28 2021-11-30 北京无线电计量测试研究所 Iris positioning and feature extraction method and system
CN109446935B (en) * 2018-10-12 2021-06-29 北京无线电计量测试研究所 Iris positioning method for iris recognition in long-distance traveling
CN109376649A (en) * 2018-10-20 2019-02-22 张彦龙 A method of likelihood figure, which is reduced, from eye gray level image calculates the upper lower eyelid of identification
CN110516548B (en) * 2019-07-24 2021-08-03 浙江工业大学 Iris center positioning method based on three-dimensional eyeball model and Snakucle
CN112163507B (en) * 2020-09-25 2024-03-05 北方工业大学 Mobile-end-oriented lightweight iris recognition system
CN112464829B (en) * 2020-12-01 2024-04-09 中航航空电子有限公司 Pupil positioning method, pupil positioning equipment, storage medium and sight tracking system
WO2023088069A1 (en) * 2021-11-19 2023-05-25 北京眼神智能科技有限公司 Iris recognition method and apparatus, storage medium, and device
CN114795650A (en) * 2022-04-28 2022-07-29 艾视雅健康科技(苏州)有限公司 Automatic image combination method and device for ophthalmologic medical device
CN115294202B (en) * 2022-10-08 2023-01-31 南昌虚拟现实研究院股份有限公司 Pupil position marking method and system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6614919B1 (en) * 1998-12-25 2003-09-02 Oki Electric Industry Co., Ltd. Method of extracting iris region and individual identification device
CN101866420A (en) * 2010-05-28 2010-10-20 中山大学 Image preprocessing method for optical volume holographic iris recognition
CN105260698A (en) * 2015-09-08 2016-01-20 北京天诚盛业科技有限公司 Method and device for positioning iris image
CN105631407A (en) * 2015-12-18 2016-06-01 电子科技大学 Forest musk deer iris positioning method

Also Published As

Publication number Publication date
CN107844736A (en) 2018-03-27

Similar Documents

Publication Publication Date Title
CN107844736B (en) Iris positioning method and device
Yang et al. Efficient illuminant estimation for color constancy using grey pixels
CN102426649B (en) Simple steel seal digital automatic identification method with high accuracy rate
US6885766B2 (en) Automatic color defect correction
JP4529172B2 (en) Method and apparatus for detecting red eye region in digital image
US8768014B2 (en) System and method for identifying a person with reference to a sclera image
WO2017162069A1 (en) Image text identification method and apparatus
US8682073B2 (en) Method of pupil segmentation
CN112686812B (en) Bank card inclination correction detection method and device, readable storage medium and terminal
CN104318262A (en) Method and system for replacing skin through human face photos
Abate et al. BIRD: Watershed based iris detection for mobile devices
JP2003030667A (en) Method for automatically locating eyes in image
CN104217221A (en) Method for detecting calligraphy and paintings based on textural features
US9633284B2 (en) Image processing apparatus and image processing method of identifying object in image
Thalji et al. Iris Recognition using robust algorithm for eyelid, eyelash and shadow avoiding
WO2009158700A1 (en) Assessing biometric sample quality using wavelets and a boosted classifier
Chakravarty et al. Coupled sparse dictionary for depth-based cup segmentation from single color fundus image
Banerjee et al. Iris segmentation using geodesic active contours and grabcut
WO2016192213A1 (en) Image feature extraction method and device, and storage medium
JP2015094973A (en) Image processor, image processing method, image processing program, and recording medium
Hasan et al. Improving alignment of faces for recognition
CN108230409B (en) Image similarity quantitative analysis method based on multi-factor synthesis of color and content
Goswami et al. Kernel group sparse representation based classifier for multimodal biometrics
Khan et al. Segmentation of single and overlapping leaves by extracting appropriate contours
CN114926635A (en) Method for segmenting target in multi-focus image combined with deep learning method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP02 Change in the address of a patent holder

Address after: 071800 Beijing Tianjin talent home (Xincheng community), West District, Xiongxian Economic Development Zone, Baoding City, Hebei Province

Patentee after: BEIJING EYECOOL TECHNOLOGY Co.,Ltd.

Address before: 100085 20 / F, building 4, yard 1, shangdishi street, Haidian District, Beijing 2013

Patentee before: BEIJING EYECOOL TECHNOLOGY Co.,Ltd.

PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Iris positioning method and device

Effective date of registration: 20220614

Granted publication date: 20210101

Pledgee: China Construction Bank Corporation Xiongxian sub branch

Pledgor: BEIJING EYECOOL TECHNOLOGY Co.,Ltd.

Registration number: Y2022990000332
