Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention. It is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of them. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
In accordance with an embodiment of the present invention, there is provided an embodiment of an iris positioning method. It is noted that the steps illustrated in the flowchart of the accompanying drawings may be executed in a computer system, such as a set of computer-executable instructions, and that, while a logical order is illustrated in the flowchart, in some cases the steps illustrated or described may be executed in an order different from the one described herein.
Fig. 3 is a schematic diagram of an iris positioning method according to an embodiment of the present invention. As shown in fig. 3, the method includes the following steps:
Step S302, acquiring image information of the target iris, and performing coarse positioning on the pupil in the image information.
Specifically, the target iris is the iris to be detected, and the image information of the target iris may be a standard iris image or a non-ideal iris image, that is, an iris image with uneven illumination, insufficient or excessive exposure, severely closed eyelids, or a severely non-circular pupil.
In the above step, a RST (Radial Symmetry Transform) may be used to perform coarse positioning on the pupil, where a coarse positioning result obtained by the coarse positioning may include a center position and a radius of the pupil.
Step S304, constructing a trimap of the target object according to the coarse positioning result of the pupil, and extracting the target object from the image information through a preset algorithm according to the trimap, wherein the target object comprises: the pupil, the eyelids, and the iris outer circle of the target iris.
Specifically, in the above step, constructing a trimap of the target object according to the coarse positioning result of the pupil may proceed as follows: a foreground region and a background region of the target object are determined according to the coarse positioning result of the pupil; after the foreground region and the background region are determined, they are marked differently, and the remaining regions are left unprocessed, so that a trimap of the target object is obtained.
In an alternative embodiment, the preset algorithm may be a matting algorithm. The matting algorithm assumes that, within a local window, the pixel values are a linear combination of the foreground and background colors. An objective function is obtained according to the least-squares method, and the image of the target object can be extracted from the image information of the target iris through the matting algorithm provided in this embodiment.
Step S306, determining the positioning information of the target iris according to the extracted pupil, eyelids and iris outer circle, wherein the positioning information comprises at least one of the following: a pupil boundary, an eyelid boundary, and an iris boundary.
It should be noted here that, in the above embodiment of the present application, the target object is extracted from the image information of the target iris through the matting algorithm according to the constructed trimap. When the image information of the target iris shows a non-circular pupil, under-exposure or over-exposure, or severely closed eyelids, positioning by a calculus operator or by boundary detection combined with the Hough transform yields an inaccurate result. The constructed trimap, however, is substantially unaffected by these factors, and positioning the iris according to the constructed trimap relies on the a priori knowledge the trimap encodes. The above embodiment can therefore accurately position the target iris under such non-ideal conditions as a non-circular pupil, under-exposure or over-exposure of the image information, severe eyelid closure, and the like.
As can be seen from the above, the steps of the present application acquire image information of a target iris, determine a region containing the target object in the image information, construct a trimap of the target object, extract the target object from the image information through a preset algorithm according to the trimap, and determine positioning information of the target iris according to the extracted pupil, eyelids, and iris outer circle, where the positioning information includes at least one of: a pupil boundary, an eyelid boundary, and an iris boundary. These steps determine the positioning information of the target iris by constructing the trimap and extracting the pupil, the eyelid boundary, and the iris outer circle from it, thereby realizing the positioning of the target iris. This solves the technical problem in the prior art that the iris positioning result is inaccurate when the iris image is a non-ideal iris image, for example one with a non-circular pupil, uneven illumination, or over- or under-exposure, and achieves the technical effect of accurately positioning the target iris under such non-ideal conditions as a non-circular pupil, under-exposure or over-exposure of the image information, severe eyelid closure, and the like.
Optionally, according to the above embodiment of the present application, after acquiring the image information of the target iris, the method further includes: cropping an area including the pupil from the image information of the target iris.
Optionally, according to the above embodiment of the present application, when the target object is the pupil, constructing a trimap of the target object according to the coarse positioning result of the pupil, and extracting the target object from the image information through the preset algorithm according to the trimap, includes:
step S3041, determining a foreground region and a background region of the pupil according to the coarse positioning result, and marking the foreground region and the background region differently, so as to construct the pupil trimap.
In the foregoing step, the coarse positioning result may include the circle center position and the radius of the pupil. In order to avoid the influence of severe eyelid closure on the positioning, a lower semicircle may be constructed, centered on the circle center position and with a preset distance smaller than the pupil radius as its radius, and used as the foreground region of the pupil trimap; a circle may then be constructed, still centered on the circle center position, with another preset distance larger than the pupil radius as its radius, and the region outside that circle used as the background region. The foreground region is marked as 1, the background region is marked as 0, and the remaining regions are left unprocessed, so that a pupil trimap is obtained. In an alternative embodiment, the pupil trimap may be as shown in fig. 4b.
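As an illustration only, a minimal sketch of such a trimap construction follows, written in Python with NumPy; the function name build_pupil_trimap and the default distances d_in and d_out are illustrative assumptions rather than values from the original, and NaN is used here to mark the unprocessed (unknown) region.

```python
import numpy as np

def build_pupil_trimap(shape, cx, cy, r, d_in=5, d_out=10):
    """Sketch of the pupil trimap: 1 = foreground, 0 = background, NaN = unknown."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.hypot(xx - cx, yy - cy)
    tri = np.full((h, w), np.nan)
    # Lower semicircle with radius r - d_in as foreground, so that a
    # partially closed upper eyelid cannot intrude into the foreground marks.
    tri[(dist < r - d_in) & (yy >= cy)] = 1.0
    # Everything outside the larger circle of radius r + d_out as background.
    tri[dist > r + d_out] = 0.0
    return tri
```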
Step S3043, a pupil background image is obtained from the pupil trimap through a matting algorithm.
In an alternative embodiment, according to the pupil trimap shown in fig. 4b, the background region obtained by matting the iris image may be as shown in fig. 4c, and the iris image may be matted by the following method:
obtaining the background region of the trimap by solving for α in the formula (L + λD)α = λb according to the constructed trimap, wherein the matrix L is the known matting Laplacian matrix; the matrix D is a diagonal matrix of the same size as the matrix L, whose diagonal elements are the elements of a matrix T arranged column by column and whose remaining elements are 0, the matrix T being a matrix in which the marked background and foreground regions of the trimap are set to 1 and the remaining regions to 0; and b is an (M × N) × 1-dimensional vector whose elements are the elements of a matrix K arranged in rows, the matrix K being a matrix in which the foreground region of the trimap is marked as 1 and the remaining regions as 0.
The above formula (L + λD)α = λb is obtained by taking the derivative of the objective function E(α) = α^T L α + λ(α − β)^T D(α − β) and setting it to zero. An embodiment of obtaining this objective function is described in detail below.
It should be noted in advance that, for an input image I, each pixel of I can be written as a convex combination of a foreground color F and a background color B, namely I = αF + (1 − α)B, where α is the opacity of the foreground, also called the mask value; when α = 1 the pixel is called absolute foreground, and when α = 0 it is called absolute background. The matting algorithm used in the present application estimates the values {F, B, α} of the unknown region from the known absolute-foreground and absolute-background pixels.
Then, for the gray-scale image of the target iris, assuming that the foreground value F and the background value B are both constant within a small window w, the α value of a pixel i in the small window w on the gray-scale image I of the target iris can be calculated by the following formula, which follows directly from I = αF + (1 − α)B:

α_i = a·I_i + b, i ∈ w, where a = 1/(F − B) and b = −B/(F − B).

A cost function over all local windows is then obtained according to the least-squares method (the ε·a_k² term is the standard small regularizer of the closed-form matting cost):

J(α, a, b) = Σ_k ( Σ_{i∈w_k} (α_i − a_k·I_i − b_k)² + ε·a_k² ).
In order to obtain the α that minimizes the cost function J, the linear coefficients a_k and b_k are eliminated by minimizing over them, which reduces the cost function to a quadratic form in α alone:

J(α) = α^T L α;
Here the constructed pupil trimap is introduced: the trimap supplies a priori knowledge of foreground and background points, so that, while minimizing the cost function J, the obtained α must also satisfy the foreground and background information of the trimap, that is, α = 1 for a pixel marked as foreground in the trimap and α = 0 for a pixel marked as background (β being the vector of these marked values). This gives the objective function: E(α) = α^T L α + λ(α − β)^T D(α − β).
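As an illustration only, a minimal sketch of solving the resulting linear system follows; it assumes the matting Laplacian L has already been built by some routine (not shown) as a SciPy sparse matrix, uses NaN to mark unknown trimap pixels, and adopts column-major pixel ordering throughout (the ordering itself is an assumption; the only requirement is that T, K, and L index pixels consistently).

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def solve_alpha(L, trimap, lam=100.0):
    """Solve (L + lam*D) alpha = lam*b for the trimap-constrained alpha."""
    h, w = trimap.shape
    known = ~np.isnan(trimap)                             # marked fg/bg pixels
    T = known.astype(np.float64).ravel(order="F")         # matrix T, stacked
    K = np.nan_to_num(trimap, nan=0.0).ravel(order="F")   # matrix K (fg = 1)
    D = sp.diags(T)                                       # diagonal constraint
    alpha = spla.spsolve((L + lam * D).tocsc(), lam * K)
    return alpha.reshape(h, w, order="F").clip(0.0, 1.0)
```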
As can be seen from the above, in the above steps of the present application, the foreground region and the background region of the pupil are determined according to the coarse positioning result and marked differently to construct the pupil trimap, and the pupil background image is obtained from the pupil trimap through the matting algorithm. This scheme achieves the purpose of obtaining the pupil image by constructing the pupil trimap and separating the background image, that is, the image of the pupil region, from the image information of the original iris by means of the matting algorithm.
Optionally, according to the above embodiment of the present application, determining the positioning information of the target iris according to the extracted pupil, eyelids, and iris outer circle includes: determining the pupil boundary of the target iris according to the extracted pupil, wherein determining the pupil boundary of the target iris according to the extracted pupil comprises the following steps:
step S3061, perform binarization on the background image according to a preset threshold to obtain a binary image corresponding to the background image, and perform boundary detection on the binary image to obtain a boundary of the binary image.
In the above step, since the pupil region was marked as the foreground region in the previous step, the pixel values of the pupil region in the background image are low; boundary detection is then performed to obtain the boundary of the background image. The preset threshold for binarizing the image may be 0.1.
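A minimal sketch of this binarization and boundary detection follows, using OpenCV contour extraction as the boundary detector; the function name and the choice of the largest contour are illustrative assumptions of this sketch.

```python
import cv2
import numpy as np

def pupil_boundary_points(bg, thresh=0.1):
    """Binarize the pupil background image and return its boundary points."""
    binary = (bg < thresh).astype(np.uint8)   # pupil pixels are low in bg
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    largest = max(contours, key=cv2.contourArea)
    return largest.reshape(-1, 2)             # (x, y) boundary points
```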
Step S3063, denoising the boundary of the background image through the spot noise template to obtain the boundary point of the pupil.
Step S3065, fitting the boundary points of the pupil with an ellipse to obtain a pupil boundary.
It should be noted here that, since the pupil is not necessarily circular, or even severely non-circular, the above steps use an ellipse to fit the boundary points.
According to the above, the background image is binarized according to the preset threshold, the boundary of the binarized background image is obtained through boundary detection, the boundary of the background image is denoised through the spot noise template to obtain the boundary points of the pupil, and the pupil boundary points are fitted with an ellipse to obtain the inner boundary of the target iris. In this scheme, on the basis of the obtained background image, the pupil boundary is obtained through binarization and boundary detection, the spot noise is removed with the spot noise template, and the boundary points of the pupil are fitted with an ellipse, so that even a non-circular pupil is accurately positioned.
Optionally, according to the above embodiment of the present application, denoising the boundary of the binary image through the spot noise template to obtain the boundary point of the pupil includes:
Step S30631, acquiring a spot noise template of the target iris, and eliminating the spot noise points on the pupil boundary by using the spot noise template, wherein acquiring the spot noise template of the target iris comprises the following steps:
step S30635, determining pixel points in the area including the pupil whose pixel values are greater than the preset pixel threshold as light spots.
Specifically, the preset pixel threshold may be a gray value smaller than that of most light spots and larger than that of most of the pupil and iris regions, so that the light spots can be detected.
Step S30637, binary segmentation is performed on the region including the pupil to obtain the spot position.
Step S30639, expanding the areas where the light spots are located according to a preset template to obtain the spot noise template.
In an alternative embodiment, the original image is cropped to a small region including the pupil according to the pupil coarse-localization parameters obtained by the RST, as shown in fig. 4a; fig. 4d is the gray-level histogram of the cropped region. In this region, the light spots are bright and their gray values high, while the gray values of the pupil region and the iris region are low, well below 150, so the light spots can be detected by threshold segmentation: points with pixel values greater than 150 are regarded as light spots, binary segmentation is performed by setting pixels with values greater than 150 to 1 and all others to 0, and fig. 4e shows the spot positions obtained by this threshold segmentation. Finally, the spot areas are expanded using a 5 × 5 circular template, yielding the spot noise template shown in fig. 4f. The spot noise template is mainly used to eliminate the influence of light spots lying on the pupil boundary on the selection of pupil boundary candidate points. Fig. 4g shows that the pupil boundary before spot-noise removal is visibly affected by the spots, while fig. 4h shows the pupil boundary after spot-noise removal, from which the influence of the spots has clearly been eliminated.
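A sketch of this spot-detection step follows, under the stated assumptions of an 8-bit crop, a threshold of 150, and dilation with a 5 × 5 elliptical (circular) structuring element:

```python
import cv2
import numpy as np

def spot_noise_template(crop_gray):
    """Threshold-segment the specular spots and dilate them into a mask."""
    spots = (crop_gray > 150).astype(np.uint8)           # bright spot pixels
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.dilate(spots, kernel)                     # expanded spot mask
```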
As can be seen from the above, in the above steps of the present application, a region including the pupil is cropped from the image information of the target iris, the pixel points in that region whose values are greater than the preset pixel threshold are determined to be light spots, binary segmentation is performed on the region to obtain the spot positions, and the areas where the spots are located are expanded according to a preset template to obtain the spot noise template. Detecting the spots by threshold segmentation and expanding the detected spots eliminates the influence of the spots on the selection of pupil boundary candidate points.
Optionally, according to the above embodiment of the present application, fitting the boundary point of the pupil through an ellipse to obtain the pupil boundary includes:
in step S30651, constraint conditions for an ellipse are set, the ellipse being used to represent the inner boundary of the target iris.
In an alternative embodiment, the pupil boundary is fitted using a direct ellipse fitting method based on the obtained pupil boundary points. The general planar conic form of an ellipse can be expressed as:

F(x, y) = ax² + bxy + cy² + dx + ey + f = 0.

For the N obtained boundary points (x_i, y_i), F(x_i, y_i) is called the algebraic distance from the point (x_i, y_i) to the conic. The least-squares fit minimizes the sum of the squared algebraic distances from the N boundary points to the ellipse, i.e. the objective function is:

min Σ_{i=1}^{N} F(x_i, y_i)².

Since the pupil is approximately elliptical, the constraint b² − 4ac < 0 is added to ensure that the fitted conic is an ellipse. The constraint condition is:

s.t. b² − 4ac < 0.
step S30653, solving the constraint condition to obtain a parameter of an ellipse closest to the pupil, and obtaining a boundary of the pupil according to the parameter of the ellipse.
In an alternative embodiment, again taking the above direct ellipse fitting of the pupil boundary as an example: because the constraint b² − 4ac < 0 is an inequality, a solution does not necessarily exist when solving directly, so the equality constraint 4ac − b² = 1 is introduced in its place (any conic satisfying it automatically satisfies b² − 4ac < 0), and the parameters of the ellipse are solved for under this constraint.
Fig. 4h is an exemplary diagram of locating the inner boundary of the iris according to the above embodiment.
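For illustration, OpenCV's fitEllipse performs a least-squares ellipse fit and can stand in for the direct fitting method described above; this substitution, and the function name below, are assumptions of this sketch, not the original implementation.

```python
import cv2
import numpy as np

def fit_pupil_ellipse(points):
    """Fit an ellipse to Nx2 boundary points; returns center, semi-axes, tilt."""
    (cx, cy), (major, minor), angle = cv2.fitEllipse(points.astype(np.float32))
    return (cx, cy), (major / 2.0, minor / 2.0), angle
```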
A preferred step of locating the pupil boundary is described below in accordance with the above embodiments:
step S51, after the target iris to be detected is input, RST is performed to roughly locate the pupil boundary.
Step S52, acquiring a spot noise template through spot detection.
Step S53, constructing a pupil trimap and obtaining a background image (i.e. an image of the pupil) through the matting algorithm.
Step S54, binarizing the background image and performing boundary detection on the binary image to obtain the boundary points of the pupil, and denoising the boundary points with the spot noise template.
In step S55, a pupil boundary is obtained by fitting an ellipse to the boundary point of the pupil.
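Purely as a hypothetical illustration of how steps S51–S55 chain together, the sketches above can be glued as follows; rst_coarse_localize, crop_pupil_region, and matting_laplacian are assumed helper routines that are not defined in this document, eye is an assumed grayscale input image, and all coordinates are assumed to be expressed in the cropped frame.

```python
# eye: full grayscale eye image (e.g. loaded with cv2.imread(path, 0))
cx, cy, r = rst_coarse_localize(eye)                  # S51: RST coarse pupil
crop = crop_pupil_region(eye, cx, cy)                 # crop around the pupil
template = spot_noise_template(crop)                  # S52: spot noise mask
tri = build_pupil_trimap(crop.shape, cx, cy, r)       # S53: pupil trimap
alpha = solve_alpha(matting_laplacian(crop), tri)     # S53: matting
pts = pupil_boundary_points(1.0 - alpha)              # S54: boundary points
pts = pts[template[pts[:, 1], pts[:, 0]] == 0]        # S54: drop spot points
ellipse = fit_pupil_ellipse(pts)                      # S55: fitted boundary
```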
Optionally, according to the above embodiment of the present application, in the case where the target object is an eyelid, constructing a trimap of the target object according to the coarse positioning result of the pupil, and extracting the target object from the image information through the preset algorithm according to the trimap, includes:
step S3045, determining a foreground region and a background region of the pupil according to the coarse positioning result, and marking the foreground region and the background region differently, so as to construct an eyelid trimap.
In the foregoing step, the foreground region and the background region may be determined from the coarse positioning result as follows: the circle center position and the radius are taken from the coarse positioning parameters; an ellipse is formed with the circle center position as its center, a preset distance smaller than the radius as its semi-minor axis, and another preset distance smaller than an empirical value of the distance between the inner and outer eye corners as its semi-major axis, and the inside of this ellipse is used as the foreground region; the region whose distance from the circle center is larger than an empirical value of the distance between the upper and lower eyelids and the circle center is used as the background region. The foreground region is marked as 1, the background region is marked as 0, and the remaining regions are left unprocessed, so as to obtain the eyelid trimap.
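A minimal sketch of this eyelid trimap follows; the parameter names and the NaN convention for the unknown region are illustrative assumptions:

```python
import numpy as np

def build_eyelid_trimap(shape, cx, cy, semi_minor, semi_major, d_bg):
    """Elliptical foreground around the pupil center; distant region = background."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    tri = np.full((h, w), np.nan)                        # unknown region
    inside = ((xx - cx) / semi_major) ** 2 + ((yy - cy) / semi_minor) ** 2 < 1.0
    tri[inside] = 1.0                                    # foreground ellipse
    tri[np.hypot(xx - cx, yy - cy) > d_bg] = 0.0         # empirical background
    return tri
```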
Step S3047, an eyelid foreground image is acquired from the eyelid trimap through a matting algorithm.
In an alternative embodiment, the image of the eyelid obtained by the matting algorithm is shown in FIG. 6 b.
According to the above, the foreground region and the background region are determined according to the coarse positioning result, marked differently to construct the eyelid trimap, and the eyelid foreground image is obtained from the eyelid trimap through the matting algorithm. This scheme constructs the eyelid trimap and separates the foreground image, that is, the image of the eyelid region, from the original iris image by means of the matting algorithm, thereby achieving the purpose of acquiring the eyelid image.
Optionally, according to the above embodiment of the present application, determining the positioning information of the target iris according to the extracted pupil, eyelids, and iris outer circle includes: determining the eyelid boundary according to the extracted eyelid, wherein determining the eyelid boundary according to the extracted eyelid comprises:
step S3067, a threshold segmentation is performed on the image of the foreground region to obtain a binary image of the eyelid.
In an optional embodiment, the binary image obtained by threshold segmentation of the foreground-region image may be the binary eyelid image shown in fig. 6c. The threshold segmentation may be performed by setting a preset gray value and classifying each pixel of the foreground-region image against it: pixels larger than the preset gray value are marked as 1 and pixels smaller than it as 0, thereby obtaining the binary image of the eyelid.
In step S3069, upper and lower eyelid key points for the eyelids are determined.
In an optional embodiment, within a preset range around the coarsely positioned pupil center, each column is searched upward for the farthest pixel with zero gray value, which is taken as a key point of the upper eyelid, and searched downward for the farthest pixel with zero gray value, which is taken as a key point of the lower eyelid, as shown in fig. 6d.
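An illustrative sketch of this key-point search follows, assuming the binary image marks the eyelid skin as 1 and the eye region as 0, and that half_width is the preset horizontal range (an assumed parameter):

```python
import numpy as np

def eyelid_keypoints(binary, cx, cy, half_width=40):
    """Per column near the pupil center, take the outermost zero-gray pixels."""
    upper, lower = [], []
    for x in range(max(cx - half_width, 0),
                   min(cx + half_width, binary.shape[1])):
        rows = np.flatnonzero(binary[:, x] == 0)   # zero-gray (eye) pixels
        above, below = rows[rows <= cy], rows[rows >= cy]
        if above.size:
            upper.append((x, above.min()))         # searched upward
        if below.size:
            lower.append((x, below.max()))         # searched downward
    return np.array(upper), np.array(lower)
```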
Step S30610, fitting the upper eyelid key points and the lower eyelid key points of the foreground image by a parabola to obtain an eyelid boundary of the target iris.
Since the shape of an eyelid can be approximated by a parabola, least-squares parabolic fitting is performed separately for the two given sets of key points of the upper and lower eyelids.
According to the above, in these steps the image of the foreground region is threshold-segmented to obtain the binary image of the eyelid, the upper-eyelid and lower-eyelid key points are determined, and the upper-eyelid and lower-eyelid key points of the foreground image are fitted with parabolas to obtain the eyelid boundaries of the target iris. Obtaining the boundaries of the upper and lower eyelids by parabolic fitting avoids the influence of severe eyelid closure and of under- or over-exposure of the image on eyelid positioning, so that the eyelids are accurately positioned even when the iris image is a non-ideal iris image.
Optionally, according to the above embodiment of the present application, fitting an upper eyelid key point and a lower eyelid key point of the foreground image by using a parabola to obtain an eyelid boundary of the target iris includes:
in step S30611, a corresponding quadratic function is set for the upper eyelid key point and the lower eyelid key point.
Specifically, in the step, the quadratic function is a parabolic function.
Step S30613, setting an eyelid boundary constraint condition, where the constraint condition is that the error between each set of key points (upper or lower eyelid) and its corresponding quadratic function is minimized.
In an alternative embodiment, for a given set of data (x_i, y_i) (i = 1, 2, …, n), a quadratic function y = a₀ + a₁x + a₂x² is sought that minimizes the sum of squared errors, where (x_i, y_i) is an upper-eyelid or lower-eyelid key point, a₀, a₁, a₂ are the coefficients to be determined, and Q is the error between the key points and the corresponding quadratic function:

Q = Σ_{i=1}^{n} (a₀ + a₁x_i + a₂x_i² − y_i)².
step S30614, a quadratic function is solved according to the eyelid boundary constraint conditions, and the boundaries of the upper eyelid and the lower eyelid are obtained.
In an alternative embodiment, still taking the above example of finding, for the given data (x_i, y_i) (i = 1, 2, …, n), the quadratic function y = a₀ + a₁x + a₂x² that minimizes the sum of squared errors: the constraint is solved by taking the partial derivatives of Q with respect to the a_j (j = 0, 1, 2) and setting them to zero, i.e.

∂Q/∂a_j = 2 Σ_{i=1}^{n} (a₀ + a₁x_i + a₂x_i² − y_i) x_i^j = 0, j = 0, 1, 2,

which yields the extreme points of Q; comparing the function value at each extreme point gives the a₀, a₁, a₂ that minimize Q as the coefficients of the quadratic functions of the upper and lower eyelids. The parabola formed by the obtained coefficients is the boundary of the upper or lower eyelid.
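For illustration, numpy.polyfit solves exactly these normal equations; a minimal sketch, assuming the key points are given as an N × 2 array:

```python
import numpy as np

def fit_eyelid_parabola(points):
    """Least-squares fit y = a0 + a1*x + a2*x^2 to eyelid key points."""
    x, y = points[:, 0], points[:, 1]
    a2, a1, a0 = np.polyfit(x, y, deg=2)   # highest-degree coefficient first
    return a0, a1, a2
```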
Fig. 7 is a flowchart of an alternative eyelid positioning method for a target image according to an embodiment of the present application, and a preferred embodiment of eyelid positioning in the present application is described below with reference to the example shown in fig. 7:
and step S71, performing RST coarse positioning on the pupil to obtain boundary parameters of the pupil, and constructing an eyelid trisection map.
And step S72, acquiring an eyelid foreground image through matting calculation according to the eyelid trisection.
And step S73, acquiring a binary image of the foreground image by adopting a threshold segmentation method.
In step S74, key points of the upper and lower eyelids are acquired.
Step S75, parabolic fitting is performed on the upper eyelid and the lower eyelid, respectively, to obtain an upper eyelid boundary and a lower eyelid boundary.
Optionally, according to the above embodiment of the present application, in the case where the target object is the iris outer circle, determining the region including the target object in the image information, constructing a trimap of the target object, and extracting the target object from the image information through the preset algorithm according to the trimap, includes:
step S3049, obtaining the circle center position and the radius of the inner boundary included in the coarse positioning result.
Step S30411, obtaining a first reference distance and a second reference distance based on the circle center position and the radius, marking pixel points whose distance from the circle center is smaller than the first reference distance as 1, and marking pixel points whose distance from the circle center is greater than the second reference distance as 0.
As a preferred embodiment, when the quality of the iris image is poor (for example, when the eyelashes of the upper eyelid cause heavy occlusion), only the pixel points that lie below the circle center and whose distance from the circle center is smaller than the first reference distance may be marked as 1, so as to prevent the foreground of the iris outer-circle trimap from including the image of the eyelashes and thereby affecting the positioning of the iris outer circle.
In these steps, on the one hand, constructing the trimap of the iris outer circle allows most pixel points of the image to be treated as known foreground or background pixels, and the smaller the number of unknown pixels, the higher the operating efficiency of the matting algorithm; on the other hand, using the trimap as a supplementary condition also improves the accuracy of the matting.
The inner boundary of the iris can be coarsely positioned using the radial symmetry transform algorithm to obtain the inner-boundary parameters, namely the circle center (x_pupil, y_pupil) and the radius r_pupil (see fig. 8a), and the trimap is constructed from the inner-boundary parameter information by the formula:

tri(x, y) = 1, if √((x − x_pupil)² + (y − y_pupil)²) < r_pupil + d₁ and x ≥ x_pupil;
tri(x, y) = 0, if √((x − x_pupil)² + (y − y_pupil)²) > r_pupil + d₂;
tri(x, y) = unknown, otherwise;

where tri(x, y) denotes the constructed trimap and I is the original image information of the target iris, with gray values normalized to the interval [0, 1]; points with value 1 are foreground points, points with value 0 are background points, and the remaining points are unknown pixel points. Fig. 8a shows the parameter labeling of the trimap construction, and fig. 8b is a schematic representation of the constructed iris outer-circle trimap. It can be seen that the foreground region is a semicircle: because the eyelashes partially occlude the iris region, the foreground is chosen so as to avoid the eyelash portion. In the above formula, d₁ and d₂ are distance parameters; optionally, d₁ = 8 and d₂ = 13.
Step S30413, determining the pixel points marked as 1 to be foreground points and the pixel points marked as 0 to be background points, and constructing the trimap of the iris outer circle from the foreground points and the background points.
The constructed trimap of the iris outer circle may be as shown in fig. 8b.
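A sketch of this construction per the formula above follows; the function name and the NaN convention for unknown pixels are assumptions, and the coordinate convention (x = row, y = column) follows the text:

```python
import numpy as np

def build_iris_trimap(shape, x_p, y_p, r_p, d1=8, d2=13):
    """Iris outer-circle trimap; x = row, y = column, as in the text above."""
    h, w = shape
    rr, cc = np.mgrid[0:h, 0:w]                    # rr = x, cc = y
    dist = np.hypot(rr - x_p, cc - y_p)
    tri = np.full((h, w), np.nan)                  # unknown pixels
    tri[(dist < r_p + d1) & (rr >= x_p)] = 1.0     # semicircular foreground
    tri[dist > r_p + d2] = 0.0                     # background region
    return tri
```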
Step S30415, obtaining an iris foreground image from the trimap of the iris outer circle through a matting algorithm.
In an alternative embodiment, the obtained foreground image may be as shown in fig. 8 c.
As can be seen from the above, in the above embodiment of the present application, the inner boundary of the target iris is located with the radial symmetry algorithm and its parameters are obtained; the first reference distance and the second reference distance are obtained from the circle center and the radius; pixel points whose distance from the circle center is smaller than the first reference distance are marked as 1 and pixel points whose distance is greater than the second reference distance are marked as 0; the points marked 1 are taken as foreground points and the points marked 0 as background points; the trimap of the iris outer circle is constructed from these foreground and background points; and the iris foreground image is obtained from the trimap of the iris outer circle through the matting algorithm. By constructing the trimap of the iris outer circle and separating the image of the foreground region, that is, the region of the iris outer circle, from the original iris image with the matting algorithm, the scheme achieves the purpose of obtaining the iris outer circle and, in particular, achieves accurate positioning even under conditions such as severe eyelid closure.
Optionally, according to the above embodiment of the present application, determining the positioning information of the target iris according to the extracted pupil, eyelids, and iris outer circle includes: determining the iris outer boundary of the target iris according to the extracted iris outer circle, wherein determining the iris outer boundary of the target iris according to the extracted iris outer circle comprises the following steps:
Step S30611, taking the circle center of the iris inner boundary as the center, computing the horizontal gradient values of the rows above and below it within a preset range, and selecting, in each row, the two pixel points with the largest gradient values within two preset interval ranges.
In the above step, since the gray value changes from small to large in going from the dark background to the foreground iris region, an obvious gray change occurs at the iris outer-circle boundary, so the gray gradient is largest at the outer circle, and the radius of the outer circle can be obtained from the gradient. Although the inner boundary and the outer circle of the iris are not concentric, their centers approximately coincide; therefore the center positions of the outer circle and the inner boundary can provisionally be assumed to be the same, i.e. (x_c, y_c) = (x_pupil, y_pupil).
In an alternative embodiment, for the foreground image obtained after matting, the n rows above and below the circle-center row x_c are considered, namely

x_i ∈ {x_c − n, x_c − (n − 1), …, x_c − 1, x_c, x_c + 1, …, x_c + (n − 1), x_c + n}, i = 1, 2, …, 2n + 1,

giving 2n + 1 rows in all, and the lateral gradient values of each row x_i are obtained. Because the iris foreground image near the circle center is generally not occluded by the eyelids, the boundary there is clearer and free of eyelash interference, so the estimated radius is more accurate.
Since the gray level also changes greatly at the inner boundary of the iris, and in order to locate the outer circle the maximum-gradient point must be prevented from falling on the inner boundary, two interval ranges are set, (1, y_c − r_pupil − 5) and (y_c + r_pupil + 5, 160) (the image size in the above embodiment being 120 × 160), thereby excluding the inner boundary from both intervals. Within these two intervals, for each of the 2n + 1 rows x_i, the coordinate positions y_i⁻ and y_i⁺ (i = 1, 2, …, 2n + 1) corresponding to the maxima of the lateral gradient are found; these two intervals no longer include the inner boundary of the iris.
Step S30613, obtaining coordinates corresponding to two pixel points with the largest gradient values in two preset interval ranges in each row, and calculating the distance between the two pixel points and the center of the circle according to the coordinates corresponding to the two pixel points.
The two search intervals per row are the same as those set above, so the maximum-gradient points found do not lie on the inner boundary of the iris.
Step S30615, a plurality of reference radii of the iris outer circle are obtained according to the distances between the circle center and the two maximum-gradient pixel points found in the two preset interval ranges of each row.
In an alternative embodiment, the iris outer-circle radius estimated from row x_i can be found by the following formula:

r̂_i = ( √((x_i − x_c)² + (y_i⁻ − y_c)²) + √((x_i − x_c)² + (y_i⁺ − y_c)²) ) / 2,

that is, the mean of the distances from the two maximum-gradient points of the row to the circle center; fig. 8d illustrates the maximum-gradient points found in a plurality of rows.
Step S30617, determining the iris outer boundary of the target iris according to the circle center and the average value of the plurality of reference radii of the iris outer circle.
In an alternative embodiment, the average of the plurality of reference radii of the iris outer circle may be obtained by the following formula:

r_c = (1 / (2n + 1)) Σ_{i=1}^{2n+1} r̂_i,

wherein the estimated radii r̂_i obtained for the individual rows are averaged to obtain the estimated outer-circle radius r_c.
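An illustrative sketch of this radius estimation follows; it assumes the matted foreground image fg, the document's row-as-x convention, and an illustrative default for n:

```python
import numpy as np

def estimate_outer_radius(fg, xc, yc, r_pupil, n=5):
    """Estimate r_c from lateral-gradient maxima of 2n+1 rows around (xc, yc)."""
    grad = np.abs(np.gradient(fg.astype(np.float64), axis=1))
    radii = []
    for xi in range(xc - n, xc + n + 1):
        row = grad[xi]
        left = np.argmax(row[1:yc - r_pupil - 5]) + 1                 # interval 1
        right = yc + r_pupil + 5 + np.argmax(row[yc + r_pupil + 5:])  # interval 2
        radii.append((np.hypot(xi - xc, left - yc) +
                      np.hypot(xi - xc, right - yc)) / 2.0)
    return float(np.mean(radii))                                      # r_c
```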
Optionally, according to the above embodiment of the present application, after determining the iris outer boundary of the target iris according to the circle center and the average value of the plurality of reference radii, the method further includes: searching for the outer-circle parameters within a preset area through a calculus operator, wherein the preset area comprises radii whose difference from the estimated outer-boundary radius lies within a first preset range, together with a preset rectangular area centered on the circle center.
The coarse positioning of the iris outer circle by the matting algorithm yields the outer-circle radius parameter r_c. Because the positioning accuracy of the matting algorithm is already high, the calculus operator only needs to search the radius range (r_c − 5, r_c + 5) and a small neighborhood around the inner-boundary circle center to position the iris image accurately; here the first preset range is (−5, +5).
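A compact sketch of such a narrow-range search follows, under the assumption that the "calculus operator" follows Daugman's integro-differential formulation (maximizing the radial change of the mean intensity on candidate circles); the smoothing of the radial derivative is omitted for brevity, and all names and defaults are illustrative:

```python
import numpy as np

def calculus_operator_search(img, xc, yc, rc, win=3, dr=5, n_theta=128):
    """Refine center and radius by maximizing the radial intensity change."""
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    best_score, best = -np.inf, None
    for cx in range(xc - win, xc + win + 1):       # rectangular center search
        for cy in range(yc - win, yc + win + 1):
            radii = np.arange(rc - dr, rc + dr + 1)
            means = np.empty(len(radii))
            for k, r in enumerate(radii):
                rows = np.clip(np.round(cx + r * np.sin(thetas)).astype(int),
                               0, img.shape[0] - 1)
                cols = np.clip(np.round(cy + r * np.cos(thetas)).astype(int),
                               0, img.shape[1] - 1)
                means[k] = img[rows, cols].mean()  # circular line integral
            deriv = np.abs(np.diff(means))         # radial derivative
            k = int(np.argmax(deriv))
            if deriv[k] > best_score:
                best_score, best = deriv[k], (cx, cy, int(radii[k + 1]))
    return best
```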
Fig. 9 is a schematic diagram of an alternative method for obtaining an outer iris boundary according to an embodiment of the present invention, and a preferred embodiment of the positioning of the outer iris boundary in the present application will be described with reference to the example shown in fig. 9:
step S91, coarse positioning of the inner boundary of the iris is performed.
Step S92, acquiring the foreground image of the iris outer circle from the constructed trimap through the matting algorithm.
Step S93, finding, within the preset range, the maximum points of the lateral gradient in the n rows above and below the circle center of the coarse inner-boundary positioning.
Step S94, calculating the average of the distances from each row's maximum-gradient points to the circle center as the estimated iris outer-boundary radius.
Step S95, accurately positioning the center and radius of the iris outer boundary within the preset range through the calculus operator.
Example 2
The present application further provides an iris positioning apparatus, which can be used to perform the iris positioning method in embodiment 1, and fig. 10 is a schematic structural diagram of an iris positioning apparatus according to an embodiment of the present application, where the apparatus includes:
the acquiring module 100 is configured to acquire image information of a target iris and perform coarse positioning on a pupil in the image information.
The first determining module 102 is configured to construct a trimap image of a target object according to a coarse positioning result of a pupil, and extract the target object from image information through a preset algorithm according to the trimap image, where the target object includes: the pupil, eyelid, and outer iris circle of the target iris.
A second determining module 104, configured to determine positioning information of the target iris according to the extracted pupil, eyelids, and iris outer circle, where the positioning information includes at least one of: a pupil boundary, an eyelid boundary, and an iris boundary.
It should be noted here that, in the above embodiment of the present application, the target object is extracted from the image information of the target iris by constructing the trimap. When the image information of the target iris shows a non-circular pupil, under-exposure or over-exposure, or severely closed eyelids, positioning with a calculus operator or with boundary detection combined with the Hough transform yields an inaccurate result; the construction of the trimap, however, is substantially unaffected by these factors, and positioning the iris according to the constructed trimap relies on the a priori knowledge the trimap encodes, so the above embodiment can accurately position the target iris under such non-ideal conditions as a non-circular pupil, under- or over-exposed image information, and severe eyelid closure.
As can be seen from the above, the apparatus of the present application acquires the image information of the target iris through the acquiring module; constructs the trimap of the target object and extracts the target object from the image information through the preset algorithm according to the trimap by means of the first determining module; and determines the positioning information of the target iris according to the extracted pupil, eyelids, and iris outer circle by means of the second determining module, where the positioning information includes at least one of: a pupil boundary, an eyelid boundary, and an iris boundary. By constructing the trimap and extracting the pupil, the eyelid boundary, and the iris outer circle from it, the apparatus determines the positioning information of the target iris, thereby realizing the positioning of the target iris, solving the prior-art technical problem of inaccurate iris positioning results for non-ideal iris images such as those with a non-circular pupil, uneven illumination, or over- or under-exposure, and achieving accurate positioning of the target iris under such non-ideal conditions as a non-circular pupil, under- or over-exposed image information, and severe eyelid closure.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
The foregoing are only preferred embodiments of the present invention, and it should be noted that a person skilled in the art can make various improvements and refinements without departing from the principle of the present invention, and these improvements and refinements shall also fall within the protection scope of the present invention.