CN113902798A - Pupil quick positioning method for color iris recognition

Pupil quick positioning method for color iris recognition

Info

Publication number
CN113902798A
Authority
CN
China
Prior art keywords
image
gradient
component
value
matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111230913.9A
Other languages
Chinese (zh)
Inventor
宋植厅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Jihong Dingyuan Technology Co ltd
Original Assignee
Shenzhen Jihong Dingyuan Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Jihong Dingyuan Technology Co ltd filed Critical Shenzhen Jihong Dingyuan Technology Co ltd
Priority to CN202111230913.9A
Publication of CN113902798A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/20 Image enhancement or restoration using local operators
    • G06T 5/40 Image enhancement or restoration using histogram techniques
    • G06T 5/70 Denoising; Smoothing
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G06T 7/136 Segmentation; Edge detection involving thresholding
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10024 Color image
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30041 Eye; Retina; Ophthalmic
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F 17/10 Complex mathematical operations
    • G06F 17/15 Correlation function computation including computation of convolution operations
    • G06F 17/153 Multidimensional correlation or convolution
    • G06F 17/16 Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Algebra (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a fast pupil positioning method for color iris recognition. Using the color-space characteristics of a color iris image, the method applies a YCbCr color-space transformation to the image to obtain a robust binary image in which the pupil is clearly prominent and other noise signals are largely suppressed. This greatly reduces the amount of computation required for edge detection and for locating the inner circle of the iris, and markedly improves the efficiency of edge detection and inner-circle extraction.

Description

Pupil quick positioning method for color iris recognition
Technical Field
The invention relates to the technical field of identity recognition, and in particular to a method for fast pupil positioning in iris recognition based on color iris images.
Background
The human iris is the annular region that surrounds the black pupil and is itself surrounded by the white sclera. Iris recognition first requires locating the iris in the iris image, i.e., positioning the inner and outer circles of the iris ring, in order to extract the image of the ring region and lay the foundation for subsequent processing. Inner-circle positioning finds the center point and boundary circle of the pupil, which is the inner boundary of the iris ring; outer-circle positioning finds the outer boundary of the iris ring.
Conventional inner-circle positioning first binarizes a grayscale image and then applies an edge-detection operator such as Canny or Sobel to the binarized image to produce edge data, which is then analyzed to find circles satisfying the required conditions. Typically tens or even hundreds of candidate "circles" are obtained, and a relatively large amount of computation is needed to filter out the one that is the true inner circle of the iris ring.
Traditional iris recognition is based on grayscale images acquired by a dedicated infrared camera (paired with a corresponding infrared light source). A color iris image, by contrast, can be captured under normal illumination by the camera of a mass-market mobile terminal such as a mobile phone or tablet computer. Recognition based on color iris images should therefore have a much wider range of application.
Disclosure of Invention
The invention uses the color-space characteristics of the color iris image: a color-space transformation of the image yields a robust binary image in which the pupil is clearly prominent and other noise signals are largely suppressed, greatly reducing the computation required for edge detection and inner-circle positioning and markedly improving the efficiency of inner-circle location and extraction.
The technical scheme adopted by the invention is as follows. The fast pupil positioning method for color iris recognition comprises the following processing steps (a minimal end-to-end sketch follows this list):
P1, converting the color iris image into a YCbCr color-space image;
P2, separating the Cr component from the YCbCr color-space image to obtain a Cr map;
P3, binarizing the Cr map to obtain a binary map Cr_bw with clear edges;
P4, performing edge detection on the Cr_bw map to obtain an edge-detection map Cr_edge;
P5, filtering and extracting all circle-fitting data from the edge-detection map Cr_edge, and from them extracting the inner-circle data of the iris: the coordinates of the pupil's center point and its radius.
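To make the five steps concrete, here is a minimal end-to-end sketch in Python, assuming OpenCV and NumPy. The Canny thresholds and Hough-circle parameters are illustrative assumptions, not values fixed by this disclosure.

```python
# Minimal sketch of steps P1-P5; thresholds and radii are illustrative assumptions.
import cv2
import numpy as np

def locate_pupil(bgr_image: np.ndarray):
    # P1: convert to YCbCr (OpenCV names the space YCrCb, channel order Y, Cr, Cb)
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    # P2: separate the Cr component map
    cr = ycrcb[:, :, 1]
    # P3: Otsu binarization gives the binary map Cr_bw
    _, cr_bw = cv2.threshold(cr, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # P4: edge detection gives Cr_edge
    cr_edge = cv2.Canny(cr_bw, 50, 150)
    # P5: fit circles on the edge map and keep the strongest candidate
    circles = cv2.HoughCircles(cr_edge, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                               param1=150, param2=20, minRadius=10, maxRadius=120)
    if circles is None:
        return None
    x, y, r = circles[0][0]   # pupil centre coordinates and radius
    return float(x), float(y), float(r)
```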
Further, the specific procedure by which P1 converts the color iris image into a YCbCr color-space image is as follows:
P1-1, calculating the spatial dimensions of the RGB matrix of the color iris image: number of rows, number of columns, and number of channels;
P1-2, using the dimensions from P1-1, separating the RGB matrix into three component matrices: the R component, the G component, and the B component;
P1-3, converting the RGB matrix into the luminance component of the YCbCr color space: the R, G, and B components obtained in P1-2 are each multiplied by the corresponding entry of the parameter vector pa_Y, and the Y-component correction value cor_Y is added to every element for correction, giving the luminance-component matrix Y of the YCbCr color space; the processing expression is:
Y = pa_Yr × R + pa_Yg × G + pa_Yb × B + cor_Y;
P1-4, converting the RGB matrix into the blue component of the YCbCr color space: the R, G, and B components obtained in P1-2 are each multiplied by the corresponding entry of the parameter vector pa_Cb, and the Cb-component correction value cor_Cb is added to every element for correction, giving the blue-component matrix Cb of the YCbCr color space; the processing expression is:
Cb = pa_Cbr × R + pa_Cbg × G + pa_Cbb × B + cor_Cb;
P1-5, converting the RGB matrix into the red component of the YCbCr color space: the R, G, and B components obtained in P1-2 are each multiplied by the corresponding entry of the parameter vector pa_Cr, and the Cr-component correction value cor_Cr is added to every element for correction, giving the red-component matrix Cr of the YCbCr color space; the processing expression is:
Cr = pa_Crr × R + pa_Crg × G + pa_Crb × B + cor_Cr;
P1-6, combining the luminance-component matrix Y, the blue-component matrix Cb, and the red-component matrix Cr obtained in P1-3, P1-4, and P1-5 to obtain the complete result of converting the RGB matrix into the YCbCr color space.
Specifically, the parameter vector pa_Y of P1-3 is:
pa_Y = [pa_Yr, pa_Yg, pa_Yb] = [0.299, 0.587, 0.114].
Specifically, the parameter vector pa_Cb of P1-4 is:
pa_Cb = [pa_Cbr, pa_Cbg, pa_Cbb] = [-0.1687, -0.3313, 0.5].
Specifically, the parameter vector pa_Cr of P1-5 is:
pa_Cr = [pa_Crr, pa_Crg, pa_Crb] = [0.5, -0.4187, -0.0813].
Specifically, the Y-component correction value cor_Y of P1-3 is 16.
Specifically, the Cb-component correction value cor_Cb of P1-4 is 128.
Specifically, the Cr-component correction value cor_Cr of P1-5 is 128 (a conversion sketch using these values follows).
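A NumPy sketch of P1-1 through P1-6 using exactly the parameter vectors and correction values stated above; the H × W × 3 RGB channel order of the input array is an assumption.

```python
import numpy as np

def rgb_to_ycbcr(rgb: np.ndarray) -> np.ndarray:
    # P1-1 / P1-2: split the RGB matrix into its component matrices
    R = rgb[..., 0].astype(np.float64)
    G = rgb[..., 1].astype(np.float64)
    B = rgb[..., 2].astype(np.float64)
    # P1-3: luminance component Y = pa_Y . (R, G, B) + cor_Y
    Y = 0.299 * R + 0.587 * G + 0.114 * B + 16
    # P1-4: blue component Cb = pa_Cb . (R, G, B) + cor_Cb
    Cb = -0.1687 * R - 0.3313 * G + 0.5 * B + 128
    # P1-5: red component Cr = pa_Cr . (R, G, B) + cor_Cr
    Cr = 0.5 * R - 0.4187 * G - 0.0813 * B + 128
    # P1-6: recombine the three component matrices
    return np.stack([Y, Cb, Cr], axis=-1)
```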
Further, the specific procedure by which P3 binarizes the Cr map into the binary map Cr_bw is as follows:
P3-1, calculating the optimal segmentation threshold that treats the iris pupil of the Cr map output by P2 as the foreground object, so as to maximize the variance between the foreground (the iris pupil) and the background and thereby make the pupil stand out;
P3-2, using the segmentation threshold obtained in P3-1 to binarize the Cr map: foreground pixels below the threshold are set to the value 0 and background pixels at or above the threshold are set to the value 255;
P3-3, converting the binarized grayscale map obtained in P3-2 to 0/1 values to obtain the binary map Cr_bw.
Further, the optimal segmentation threshold of P3-1, with the iris pupil of the Cr map as the foreground object, is calculated as follows (a sketch of this threshold search follows the list):
P3-1-1, generating the histogram of the Cr map;
P3-1-2, smoothing the histogram of the Cr map;
P3-1-3, computing the maximum and minimum gray values of the smoothed histogram and using them as the bounds for the subsequent calculation;
P3-1-4, computing the mass moment of each gray value, i.e., the gray value multiplied by the number of pixels at that gray value;
P3-1-5, computing the variance of the histogram of the Cr map at each gray level, i.e., the spread at that level;
P3-1-6, selecting the maximum among the variances at every gray level; the gray level corresponding to that maximum variance is taken as the optimal segmentation threshold for the foreground object.
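The search described in P3-1-1 through P3-1-6 (histogram, mass moments, maximizing the between-class variance) corresponds to Otsu's method. A sketch under that reading, with the histogram smoothing of P3-1-2 omitted for brevity:

```python
import numpy as np

def otsu_threshold(cr: np.ndarray) -> int:
    hist = np.bincount(cr.ravel(), minlength=256).astype(np.float64)  # P3-1-1
    total = hist.sum()
    m_total = np.dot(np.arange(256), hist)   # total mass moment (P3-1-4)
    best_t, best_var = 0, -1.0
    w0 = 0.0   # cumulative background pixel count
    m0 = 0.0   # cumulative background mass moment
    for t in range(256):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        m0 += t * hist[t]
        mu0 = m0 / w0
        mu1 = (m_total - m0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance (P3-1-5)
        if var_between > best_var:                 # keep the maximum (P3-1-6)
            best_var, best_t = var_between, t
    return best_t
```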
Further, the edge detection of the binary map Cr_bw in P4 comprises the following processes:
P4-1, filtering the binary map Cr_bw to remove noise signals;
P4-2, computing the gradient magnitude and direction of the denoised binary map Cr_bw;
P4-3, applying non-maximum suppression to the gradient magnitude;
P4-4, detecting and connecting edges with a double-threshold algorithm.
Further, the image filtering of the binary map Cr_bw in P4-1 proceeds as follows (a sketch of the separable filtering follows this list):
P4-1-1, determining a suitable filter template, including its size and standard-deviation coefficient;
P4-1-2, generating a filter mask matrix from the filter template;
P4-1-3, convolving the filter mask matrix with the image matrix of the binary map Cr_bw:
first, keeping the rows fixed and varying the columns, performing the convolution in the horizontal direction;
second, on the result, keeping the columns fixed and varying the rows, performing the convolution in the vertical direction;
P4-1-4, removing abnormal element values exceeding the peak upper limit from the convolved image matrix, yielding a smoother binary map Cr_bw in which single-pixel and isolated-block noise has been filtered out.
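P4-1-3 describes a separable convolution: one pass along the rows, then one along the columns. The sketch below assumes a 1-D kernel whose outer product approximately reproduces the 3 × 3 mask used in the embodiment (Example 6), and reads the removal of abnormal values in P4-1-4 as clipping to the valid range; both are assumptions on our part.

```python
import numpy as np
from scipy.ndimage import convolve1d

def smooth(cr_bw: np.ndarray) -> np.ndarray:
    # outer(k, k) ~ [[0.075, 0.124, 0.075], [0.124, 0.204, 0.124], [0.075, 0.124, 0.075]]
    k = np.array([0.274, 0.452, 0.274])
    out = convolve1d(cr_bw.astype(np.float64), k, axis=1)  # horizontal pass (step 1)
    out = convolve1d(out, k, axis=0)                       # vertical pass (step 2)
    return np.clip(out, 0, 255)                            # clamp outliers (P4-1-4)
```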
Further, P4-2 computes the gradient, gradient magnitude, and gradient direction of each pixel of the Cr_bw map as follows (a NumPy sketch follows this list):
P4-2-1, three zero-valued matrices of the same size as the Cr_bw image matrix are created:
(1) the X-direction gradient matrix Ix(x, y);
(2) the Y-direction gradient matrix Iy(x, y);
(3) the gradient-magnitude matrix M(x, y) of the target image.
P4-2-2, the gradient of each pixel of the Cr_bw map is computed by central differences:
X-direction gradient of element Cr_bw(x, y): Ix(x, y) = I(x+1, y) - I(x-1, y)
Y-direction gradient of element Cr_bw(x, y): Iy(x, y) = I(x, y+1) - I(x, y-1)
P4-2-3, the gradient magnitude M of each pixel of the Cr_bw map is computed as:
M(x, y) = √(Ix(x, y)² + Iy(x, y)²)
P4-2-4, the gradient direction angle θ of each pixel of the Cr_bw map is computed as:
θ(x, y) = arctan2(Iy(x, y), Ix(x, y)).
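A vectorized NumPy rendering of P4-2-1 through P4-2-4; treating x as the row index and y as the column index is an assumption about the coordinate convention.

```python
import numpy as np

def gradients(img: np.ndarray):
    I = img.astype(np.float64)
    Ix = np.zeros_like(I)   # P4-2-1: zero matrices of the same size
    Iy = np.zeros_like(I)
    Ix[1:-1, :] = I[2:, :] - I[:-2, :]   # Ix(x, y) = I(x+1, y) - I(x-1, y)
    Iy[:, 1:-1] = I[:, 2:] - I[:, :-2]   # Iy(x, y) = I(x, y+1) - I(x, y-1)
    M = np.hypot(Ix, Iy)                 # M(x, y) = sqrt(Ix^2 + Iy^2)
    theta = np.arctan2(Iy, Ix)           # gradient direction angle theta(x, y)
    return Ix, Iy, M, theta
```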
Further, P4-3 applies non-maximum suppression to the binary map Cr_bw as follows (a sketch follows this list):
P4-3-1, a zero-valued matrix K(x, y) of the same size as the Cr_bw image matrix is created;
P4-3-2, all pixels of the gradient-magnitude matrix M(x, y) are read in a traversal loop, checking whether the gradient value of the current pixel is 0;
P4-3-3, if the gradient value of the current pixel of M(x, y) is 0, the corresponding pixel of K(x, y) is assigned 0;
P4-3-4, if the gradient value of the current pixel of M(x, y) is not 0, it is compared against the gradient values of the neighboring pixels along the X and Y directions; the maximum among them is retained in M(x, y) and the smaller values are assigned 0;
P4-3-5, the selected maximum pixel value is assigned to the current pixel of K(x, y); when all pixels have been traversed, the non-maximum-suppression result K(x, y) is obtained.
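A sketch of non-maximum suppression in the spirit of P4-3. The classical Canny formulation compares each pixel against its two neighbors along the quantized gradient direction; the disclosure's neighbor comparison is read that way here, which is a simplifying assumption.

```python
import numpy as np

def non_max_suppress(M: np.ndarray, theta: np.ndarray) -> np.ndarray:
    K = np.zeros_like(M)                          # P4-3-1: zero matrix, same size
    angle = (np.rad2deg(theta) + 180.0) % 180.0   # fold direction into [0, 180)
    for x in range(1, M.shape[0] - 1):
        for y in range(1, M.shape[1] - 1):
            if M[x, y] == 0:                      # P4-3-2 / P4-3-3
                continue
            a = angle[x, y]
            if a < 22.5 or a >= 157.5:            # neighbours along one axis
                n1, n2 = M[x, y - 1], M[x, y + 1]
            elif a < 67.5:                        # one diagonal
                n1, n2 = M[x - 1, y + 1], M[x + 1, y - 1]
            elif a < 112.5:                       # the other axis
                n1, n2 = M[x - 1, y], M[x + 1, y]
            else:                                 # the other diagonal
                n1, n2 = M[x - 1, y - 1], M[x + 1, y + 1]
            if M[x, y] >= n1 and M[x, y] >= n2:   # P4-3-4 / P4-3-5: keep maxima
                K[x, y] = M[x, y]
    return K
```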
Further, P4-4 detects and connects edges with a double-threshold algorithm as follows (a sketch follows this list):
1. selecting suitable high and low thresholds for the image;
2. traversing all pixels of the binary map Cr_bw after non-maximum suppression;
3. if the gradient value of the current pixel is above the high threshold, it is kept;
4. if the gradient value of the current pixel is below the low threshold, it is discarded;
5. if the gradient value of the current pixel lies between the high and low thresholds, the gradient values of the neighboring pixels are examined: if one of them is above the high threshold the pixel is kept, otherwise it is discarded.
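A sketch of the double-threshold detection and connection. The threshold values are illustrative assumptions, and the iterative propagation generalizes step 5's one-hop neighbor search to full edge connectivity, as is usual in Canny-style hysteresis.

```python
import numpy as np

def hysteresis(K: np.ndarray, low: float = 30.0, high: float = 90.0) -> np.ndarray:
    strong = K >= high               # step 3: keep pixels above the high threshold
    weak = (K >= low) & ~strong      # step 5 candidates between the two thresholds
    edges = strong.copy()
    changed = True
    while changed:                   # a weak pixel joins if an 8-neighbour is kept
        changed = False
        grown = np.zeros_like(edges)
        grown[1:-1, 1:-1] = (
            edges[:-2, :-2] | edges[:-2, 1:-1] | edges[:-2, 2:] |
            edges[1:-1, :-2] |                   edges[1:-1, 2:] |
            edges[2:, :-2]  | edges[2:, 1:-1]  | edges[2:, 2:]
        )
        new_edges = edges | (weak & grown)
        if new_edges.sum() > edges.sum():
            edges, changed = new_edges, True
    return edges
```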
Further, the processing by which P5 filters and extracts the pupil boundary-circle data is as follows: all circle-fitting data are filtered and extracted from the edge-detection map Cr_edge; an array whose elements are the fitted circles' center coordinates and radii is computed one circle at a time; the iris-pupil positioning result satisfying the conditions is filtered out of this array; and its center-point coordinates and radius are output (a sketch follows).
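P5 does not name a particular circle-fitting method; the Hough circle transform is one common realisation. The sketch below builds the candidate array of (center, radius) triples and filters it by an illustrative plausibility condition; the radius bounds and the requirement that the circle lie fully inside the image are assumptions.

```python
import cv2
import numpy as np

def extract_pupil(cr_edge: np.ndarray, r_min: int = 10, r_max: int = 120):
    found = cv2.HoughCircles(cr_edge, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                             param1=150, param2=20,
                             minRadius=r_min, maxRadius=r_max)
    if found is None:
        return None
    h, w = cr_edge.shape
    for x, y, r in found[0]:          # candidates sorted by accumulator votes
        # positioning condition: the pupil circle must lie inside the image
        if r <= x <= w - r and r <= y <= h - r:
            return float(x), float(y), float(r)
    return None
```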
The invention has the following beneficial effects:
The pupil in the binary map Cr_bw of the invention is highly prominent, and in most cases the image contains only the single circle corresponding to the pupil, so filtering and extracting its data is simple and fast. Compared with the computation and filtering of large numbers of candidate "circles" in conventional iris inner-circle extraction, the processing efficiency is markedly improved.
Drawings
FIG. 1 is a flow chart of a pupil fast positioning method for color iris recognition according to the present invention;
FIG. 2 is a schematic diagram of an example of the original input of a color iris image according to the present invention;
FIG. 3 is a diagram illustrating the result of converting a color iris image to YCbCr color space according to the present invention;
FIG. 4 is a diagram illustrating the result of Cr component extracted from YCbCr color space according to the present invention;
FIG. 5 is a diagram illustrating a binarization result of a Cr component map according to the present invention;
FIG. 6 is a schematic view of the processing flow of the Cr component map edge detection calculation according to the present invention;
FIG. 7 is a schematic diagram of the edge detection result of the Cr component diagram according to the present invention;
FIG. 8 is a schematic view of the processing flow of the image filtering applied to the binary map Cr_bw in color iris recognition according to the present invention;
FIG. 9 is a diagram illustrating a filtering template for image filtering according to the present invention;
FIG. 10 is a diagram illustrating the convolution of the image matrix with the filter template in the image filtering process according to the present invention;
FIG. 11 is a schematic diagram illustrating the output of the image filtering applied to the binary map Cr_bw according to the present invention;
FIG. 12 is a schematic view of the processing flow of gradient magnitude and direction detection for the binary map Cr_bw according to the present invention;
FIG. 13 is a schematic diagram illustrating the result of gradient magnitude and direction detection for the binary map Cr_bw according to the present invention;
FIG. 14 is a schematic view of the processing flow of non-maximum suppression of the binary map Cr_bw according to the present invention;
FIG. 15 is a diagram illustrating the result of non-maximum suppression of the binary map Cr_bw according to the present invention;
FIG. 16 is a diagram illustrating the pupil positioning result according to the present invention.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present invention clearer, the invention is described in further detail below with reference to the embodiments and the accompanying drawings. The exemplary embodiments and their descriptions are provided for illustration only and do not limit the invention as defined by the appended claims; any modifications, equivalents, or improvements made within the spirit and principles of the present invention shall be construed as falling within its scope.
Example 1: FIG. 1 is a processing flow chart of the fast pupil positioning method for color iris recognition according to the present invention. As shown in FIG. 1, the method comprises the following processing steps:
P1, converting the color iris image into a YCbCr color-space image;
P2, separating the Cr component from the YCbCr color-space image to obtain a Cr map;
P3, binarizing the Cr map to obtain a binary map Cr_bw with high edge definition;
P4, performing edge detection on the Cr_bw map to obtain an edge-detection map Cr_edge;
P5, filtering and extracting all circle-fitting data from the edge-detection map Cr_edge, and from them extracting the inner-circle data of the iris: the coordinates of the pupil's center point and its radius.
Example 2: Referring to FIG. 2 and FIG. 3, FIG. 2 is a schematic diagram of an example of an original input color iris image, and FIG. 3 shows the result of converting the color iris image of FIG. 2 into the YCbCr color space. As shown in FIG. 2, the input of the present invention is a color iris image; in this embodiment the image was captured by an ordinary, widely available smartphone, with auxiliary illumination from a suitable visible light source applied during shooting. As shown in FIG. 3, the color iris image of FIG. 2 is converted into a YCbCr color-space result. YCbCr is one of the color spaces commonly used in digital photography systems. An image in the YCbCr color space consists of three components: a Y component, a Cb component, and a Cr component, where Y is the luminance (luma) component, Cb is the blue-difference chroma component, and Cr is the red-difference chroma component.
Example 3: FIG. 4 shows the result of extracting the Cr component, i.e., the red-difference chroma component, from the YCbCr color-space image shown in FIG. 3 according to the present invention.
Example 4: FIG. 5 shows the binary image with clear edges obtained by binarizing the Cr component map of FIG. 4 according to the present invention. As shown in FIG. 5, after binarization the resulting binary image shows a very clear pupil pattern, and noise signals other than the pupil are essentially eliminated.
Example 5: See FIG. 6 and FIG. 7, which show the processing flow and the result of the edge-detection calculation on the binary map Cr_bw of FIG. 5 according to the present invention: FIG. 6 is a schematic flow chart of the calculation, and FIG. 7 shows its result. Because the binary image of FIG. 5 has filtered out most of the noise patterns outside the pupil, the edge-detection result is very simple and clean. The edge detection of the binary map Cr_bw comprises the following processes:
P4-1, filtering the binary map Cr_bw to remove noise signals;
P4-2, computing the gradient magnitude and direction of the filtered binary map Cr_bw;
P4-3, applying non-maximum suppression to the gradient magnitude;
P4-4, detecting and connecting edges with a double-threshold algorithm.
Example 6: See FIG. 8, FIG. 9, FIG. 10, and FIG. 11. FIG. 8 is a schematic view of the processing flow of the image filtering applied to the binary map Cr_bw of FIG. 5; FIG. 9 shows the filter template; FIG. 10 illustrates the convolution of the image matrix with the filter template; FIG. 11 shows the output of the filtering. Image filtering, also called smoothing, serves two purposes: smoothing the image and removing image noise. In this embodiment its main purpose is noise removal. In the binary map Cr_bw of the color iris image, because of the image itself and because of the conversion to the YCbCr color space followed by binarization of the Cr component map, some non-target objects such as small spots or even isolated pixels may remain in and around the pupil. The basic idea of the treatment is: a filter mask template is introduced and convolved with the image matrix, which removes image noise and at the same time sharpens the image edges. P4-1 filters the binary map Cr_bw as follows:
In the first step, a suitable filter mask template is determined, including its size and standard-deviation coefficient. The filter mask template is a matrix; in this embodiment a 3 × 3 matrix is used.
In the second step, the filter template is used as the filter mask matrix. In this embodiment the filter mask matrix, shown in FIG. 9, is the following 3 × 3 matrix:
[(X-1,Y-1) (X-1,Y) (X-1,Y+1);
 (X,Y-1)   (X,Y)   (X,Y+1);
 (X+1,Y-1) (X+1,Y) (X+1,Y+1)]
= [0.075 0.124 0.075;
   0.124 0.204 0.124;
   0.075 0.124 0.075]
In the third step, the filter mask matrix is convolved with the image matrix of the binary map Cr_bw; FIG. 10 is a schematic diagram of the convolution calculation, given by the following expression:
(x,y) = (x-1,y-1)·(X-1,Y-1) + (x-1,y)·(X-1,Y) + (x-1,y+1)·(X-1,Y+1)
      + (x,y-1)·(X,Y-1) + (x,y)·(X,Y) + (x,y+1)·(X,Y+1)
      + (x+1,y-1)·(X+1,Y-1) + (x+1,y)·(X+1,Y) + (x+1,y+1)·(X+1,Y+1)
As shown in FIG. 10, in this embodiment the gray value of pixel Cr_bw(x, y) = Cr_bw(3,3) is 38; after the convolution it becomes 33, per the following expression (a one-line check follows):
Cr_bw(3,3) = 35×0.075 + 36×0.124 + 31×0.075
           + 31×0.124 + 38×0.204 + 28×0.124
           + 31×0.075 + 34×0.124 + 21×0.075
           ≈ 33
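A quick check of this worked example, with the neighborhood gray values and mask taken from the text:

```python
import numpy as np

patch = np.array([[35, 36, 31],
                  [31, 38, 28],
                  [31, 34, 21]], dtype=np.float64)
mask = np.array([[0.075, 0.124, 0.075],
                 [0.124, 0.204, 0.124],
                 [0.075, 0.124, 0.075]])
print(round(float(np.sum(patch * mask))))   # -> 33 (exact sum is 32.598)
```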
The convolution proceeds as follows:
first, keeping the row fixed and varying the columns, the convolution is performed over all matrix elements of the row in the horizontal direction;
second, the row index is advanced and the first step is repeated until all rows have been traversed;
finally, abnormal element values exceeding the peak upper limit are removed from the convolved image matrix of the binary map Cr_bw, yielding a binary map Cr_bw with image noise removed and a sharper iris-pupil edge gradient.
Example 7: See FIG. 12 and FIG. 13. FIG. 12 shows the flow of gradient magnitude and direction detection on the binary map Cr_bw of FIG. 11, and FIG. 13 shows the result. In image recognition, the gradient direction of an image is the direction in which the function f(x, y) changes fastest: where the image has an edge the gradient value is large, and where the image is smooth the gray values change little and the corresponding gradient is small. In this embodiment, the gradient, gradient magnitude, and gradient direction of each pixel of the Cr_bw map are computed as follows:
In the first step, three zero-valued matrices of the same size as the Cr_bw image matrix are created:
(1) the X-direction gradient matrix Ix(x, y);
(2) the Y-direction gradient matrix Iy(x, y);
(3) the gradient-magnitude matrix M(x, y) of the target image.
In the second step, the gradient of each pixel of the Cr_bw map is computed:
X-direction gradient of element Cr_bw(x, y): Ix(x, y) = I(x+1, y) - I(x-1, y)
Y-direction gradient of element Cr_bw(x, y): Iy(x, y) = I(x, y+1) - I(x, y-1)
In the third step, the gradient magnitude M of each pixel of the Cr_bw map is computed:
M(x, y) = √(Ix(x, y)² + Iy(x, y)²)
In the fourth step, the gradient direction angle θ of each pixel of the Cr_bw map is computed:
θ(x, y) = arctan2(Iy(x, y), Ix(x, y))
Example 8: See FIG. 14 and FIG. 15. FIG. 14 shows the flow of non-maximum suppression on the binary map Cr_bw of FIG. 13, and FIG. 15 shows the result. In this embodiment, non-maximum suppression finds the maxima of the gradient matrix produced by the gradient calculation on the image Cr_bw and eliminates all other element values. The basic idea is: taking the current pixel as the reference center, the neighboring pixels along the gradient direction of that point are examined; the point with the maximum value is kept and non-maximum points are rejected. The procedure is as follows:
In the first step, a zero-valued matrix K(x, y) of the same size as the Cr_bw image matrix is created.
In the second step, all pixels of the gradient-magnitude matrix M(x, y) are read in a traversal loop, checking whether the gradient value of the current pixel is 0.
In the third step, if the gradient value of the current pixel of M(x, y) is 0, the corresponding pixel of K(x, y) is assigned 0.
In the fourth step, if the gradient value of the current pixel of M(x, y) is not 0, it is compared against the gradient values of the neighboring pixels along the X and Y directions; the maximum among them is retained and the smaller values are assigned 0.
In the fifth step, the selected maximum pixel value is assigned to the current pixel of K(x, y); when all pixels have been traversed, the non-maximum-suppression result K(x, y) is obtained.
Example 9: FIG. 16 shows the output of the pupil positioning calculation performed on the color iris image of FIG. 2 according to the present invention. As shown in FIG. 16, the pupil position of the color iris image is computed from the edge-detection result data of FIG. 15: the coordinates of the pupil's center point and its radius, thereby locating the pupil of the color iris quickly and accurately.

Claims (16)

1. A fast pupil positioning method for color iris recognition, characterized by comprising the following processing steps:
P1, converting the color iris image into a YCbCr color-space image;
P2, separating the Cr component from the YCbCr color-space image to obtain a Cr map;
P3, binarizing the Cr map to obtain a binary map Cr_bw with clear edges;
P4, performing edge detection on the Cr_bw map to obtain an edge-detection map Cr_edge;
P5, filtering and extracting all circle-fitting data from the edge-detection map Cr_edge, and from them extracting the inner-circle data of the iris: the coordinates of the pupil's center point and its radius.
2. The fast pupil positioning method for color iris recognition according to claim 1, wherein the specific procedure by which P1 converts the color iris image into a YCbCr color-space image is as follows:
P1-1, calculating the spatial dimensions of the RGB matrix of the color iris image: number of rows, number of columns, and number of channels;
P1-2, using the dimensions from P1-1, separating the RGB matrix into three component matrices: the R component, the G component, and the B component;
P1-3, converting the RGB matrix into the luminance component of the YCbCr color space: the R, G, and B components obtained in P1-2 are each multiplied by the corresponding entry of the parameter vector pa_Y, and the Y-component correction value cor_Y is added to every element for correction, giving the luminance-component matrix Y; the processing expression is:
Y = pa_Yr × R + pa_Yg × G + pa_Yb × B + cor_Y;
P1-4, converting the RGB matrix into the blue component of the YCbCr color space: the R, G, and B components obtained in P1-2 are each multiplied by the corresponding entry of the parameter vector pa_Cb, and the Cb-component correction value cor_Cb is added to every element for correction, giving the blue-component matrix Cb; the processing expression is:
Cb = pa_Cbr × R + pa_Cbg × G + pa_Cbb × B + cor_Cb;
P1-5, converting the RGB matrix into the red component of the YCbCr color space: the R, G, and B components obtained in P1-2 are each multiplied by the corresponding entry of the parameter vector pa_Cr, and the Cr-component correction value cor_Cr is added to every element for correction, giving the red-component matrix Cr; the processing expression is:
Cr = pa_Crr × R + pa_Crg × G + pa_Crb × B + cor_Cr;
P1-6, combining the luminance-component matrix Y, the blue-component matrix Cb, and the red-component matrix Cr obtained in P1-3, P1-4, and P1-5 to obtain the complete result of converting the RGB matrix into the YCbCr color space.
3. The fast pupil positioning method for color iris recognition according to claim 2, wherein the parameter vector pa_Y of P1-3 is:
pa_Y = [pa_Yr, pa_Yg, pa_Yb] = [0.299, 0.587, 0.114].
4. The fast pupil positioning method for color iris recognition according to claim 2, wherein the parameter vector pa_Cb of P1-4 is:
pa_Cb = [pa_Cbr, pa_Cbg, pa_Cbb] = [-0.1687, -0.3313, 0.5].
5. The fast pupil positioning method for color iris recognition according to claim 2, wherein the parameter vector pa_Cr of P1-5 is:
pa_Cr = [pa_Crr, pa_Crg, pa_Crb] = [0.5, -0.4187, -0.0813].
6. The fast pupil positioning method for color iris recognition according to claim 2, wherein the Y-component correction value cor_Y of P1-3 is 16.
7. The fast pupil positioning method for color iris recognition according to claim 2, wherein the Cb-component correction value cor_Cb of P1-4 is 128.
8. The fast pupil positioning method for color iris recognition according to claim 2, wherein the Cr-component correction value cor_Cr of P1-5 is 128.
9. The fast pupil positioning method for color iris recognition according to claim 1, wherein the specific procedure by which P3 binarizes the Cr map into the binary map Cr_bw is as follows:
P3-1, calculating the optimal segmentation threshold that treats the iris pupil of the Cr map output by P2 as the foreground object, so as to maximize the variance between the foreground (the iris pupil) and the background and thereby make the pupil stand out;
P3-2, using the segmentation threshold obtained in P3-1 to binarize the Cr map: foreground pixels below the threshold are set to the value 0 and background pixels at or above the threshold are set to the value 255;
P3-3, converting the binarized grayscale map obtained in P3-2 to 0/1 values to obtain the binary map Cr_bw.
10. The fast pupil positioning method for color iris recognition according to claim 9, wherein the optimal segmentation threshold of P3-1, with the iris pupil of the Cr map as the foreground object, is calculated as follows:
P3-1-1, generating the histogram of the Cr map;
P3-1-2, smoothing the histogram of the Cr map;
P3-1-3, computing the maximum and minimum gray values of the smoothed histogram and using them as the bounds for the subsequent calculation;
P3-1-4, computing the mass moment of each gray value, i.e., the gray value multiplied by the number of pixels at that gray value;
P3-1-5, computing the variance of the histogram of the Cr map at each gray level, i.e., the spread at that level;
P3-1-6, selecting the maximum among the variances at every gray level; the gray level corresponding to that maximum variance is taken as the optimal segmentation threshold for the foreground object.
11. The fast pupil positioning method for color iris recognition according to claim 1, wherein the edge detection of the binary map Cr_bw in P4 comprises the following processes:
P4-1, filtering the binary map Cr_bw to remove noise signals;
P4-2, computing the gradient, gradient magnitude, and gradient direction of the filtered binary map Cr_bw;
P4-3, applying non-maximum suppression to the gradient magnitude;
P4-4, detecting and connecting edges with a double-threshold algorithm.
12. The fast pupil positioning method for color iris recognition according to claim 11, wherein P4-1 filters the binary map Cr_bw as follows:
P4-1-1, determining a suitable filter template, including its size and standard-deviation coefficient;
P4-1-2, generating a filter mask matrix from the filter template;
P4-1-3, convolving the filter mask matrix with the image matrix of the binary map Cr_bw:
first, keeping the rows fixed and varying the columns, performing the convolution in the horizontal direction;
second, on the result, keeping the columns fixed and varying the rows, performing the convolution in the vertical direction;
P4-1-4, removing abnormal element values exceeding the peak upper limit from the convolved image matrix, yielding a smoother binary map Cr_bw in which single-pixel and isolated-block noise has been filtered out.
13. The fast pupil positioning method for color iris recognition according to claim 11, wherein P4-2 computes the gradient, gradient magnitude, and gradient direction of the binary map Cr_bw as follows:
P4-2-1, three zero-valued matrices of the same size as the Cr_bw image matrix are created:
(1) the X-direction gradient matrix Ix(x, y);
(2) the Y-direction gradient matrix Iy(x, y);
(3) the gradient-magnitude matrix M(x, y) of the target image.
P4-2-2, the gradient of each pixel of the Cr_bw map is computed:
X-direction gradient of element Cr_bw(x, y): Ix(x, y) = I(x+1, y) - I(x-1, y)
Y-direction gradient of element Cr_bw(x, y): Iy(x, y) = I(x, y+1) - I(x, y-1)
P4-2-3, the gradient magnitude M of each pixel of the Cr_bw map is computed:
M(x, y) = √(Ix(x, y)² + Iy(x, y)²)
P4-2-4, the gradient direction angle θ of each pixel of the Cr_bw map is computed:
θ(x, y) = arctan2(Iy(x, y), Ix(x, y)).
14. The fast pupil positioning method for color iris recognition according to claim 11, wherein P4-3 performs non-maximum suppression on the gradient magnitude of the binary map Cr_bw as follows:
P4-3-1, a zero-valued matrix K(x, y) of the same size as the Cr_bw image matrix is created;
P4-3-2, all pixels of the gradient-magnitude matrix M(x, y) are read in a traversal loop, checking whether the gradient value of the current pixel is 0;
P4-3-3, if the gradient value of the current pixel of M(x, y) is 0, the corresponding pixel of K(x, y) is assigned 0;
P4-3-4, if the gradient value of the current pixel of M(x, y) is not 0, it is compared against the gradient values of the neighboring pixels; the maximum pixel value is retained in M(x, y) and the smaller neighboring values are assigned 0;
P4-3-5, the selected maximum pixel value is assigned to the current pixel of K(x, y); when all pixels have been traversed, the non-maximum-suppression result K(x, y) is obtained.
15. The fast pupil positioning method for color iris recognition according to claim 11, wherein P4-4 detects and connects edges with a double-threshold algorithm as follows:
1. selecting suitable high and low thresholds for the image;
2. traversing all pixels of the binary map Cr_bw after non-maximum suppression;
3. if the gradient value of the current pixel is above the high threshold, it is kept;
4. if the gradient value of the current pixel is below the low threshold, it is discarded;
5. if the gradient value of the current pixel lies between the high and low thresholds, the gradient values of the neighboring pixels are examined: if one of them is above the high threshold the pixel is kept, otherwise it is discarded.
16. The fast pupil positioning method for color iris recognition according to claim 1, wherein the specific process by which P5 filters and extracts the pupil boundary-circle data is: all circle-fitting data are filtered and extracted from the edge-detection map Cr_edge; an array whose elements are the fitted circles' center coordinates and radii is computed one circle at a time; the iris-pupil positioning result satisfying the conditions is filtered out of this array; and its center-point coordinates and radius are output.
CN202111230913.9A, filed 2021-10-22: Pupil quick positioning method for color iris recognition (published as CN113902798A, status Pending)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202111230913.9A | 2021-10-22 | 2021-10-22 | Pupil quick positioning method for color iris recognition

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202111230913.9A | 2021-10-22 | 2021-10-22 | Pupil quick positioning method for color iris recognition

Publications (1)

Publication Number | Publication Date
CN113902798A | 2022-01-07

Family

ID=79025741

Family Applications (1)

Application Number | Status | Priority Date | Filing Date | Title
CN202111230913.9A | Pending | 2021-10-22 | 2021-10-22 | Pupil quick positioning method for color iris recognition

Country Status (1)

Country: CN
Publication: CN113902798A


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060147094A1 (en) * 2003-09-08 2006-07-06 Woong-Tuk Yoo Pupil detection method and shape descriptor extraction method for a iris recognition, iris feature extraction apparatus and method, and iris recognition system and method using its
CN102509093A (en) * 2011-10-18 2012-06-20 谭洪舟 Close-range digital certificate information acquisition system
US20180174309A1 (en) * 2015-06-05 2018-06-21 Kiyoshi Hoshino Eye Motion Detection Method, Program, Program Storage Medium, and Eye Motion Detection Device
CN106778499A (en) * 2016-11-24 2017-05-31 江苏大学 A kind of method of quick positioning people's eye iris during iris capturing
CN106845388A (en) * 2017-01-18 2017-06-13 北京交通大学 The extracting method of the mobile terminal palmmprint area-of-interest based on complex scene
CN111738146A (en) * 2020-06-22 2020-10-02 哈尔滨理工大学 Rapid separation and identification method for overlapped fruits

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
郑沁锋; 李晓宁: "Fast iris recognition algorithm based on Gabor filtering" (基于Gabor滤波的快速虹膜识别算法), 计算机工程与设计 (Computer Engineering and Design), vol. 32, no. 3, 31 December 2011 *

Similar Documents

Publication Publication Date Title
TWI407800B (en) Improved processing of mosaic images
CN107767390B (en) The shadow detection method and its system of monitor video image, shadow removal method
JP4054184B2 (en) Defective pixel correction device
CN105654445B (en) A kind of handset image denoising method based on wavelet transformation edge detection
CN108288264B (en) Wide-angle camera module contamination testing method
CN111784605B (en) Image noise reduction method based on region guidance, computer device and computer readable storage medium
US8913842B2 (en) Image smoothing method based on content-dependent filtering
KR20150116833A (en) Image processor with edge-preserving noise suppression functionality
CN112887693B (en) Image purple border elimination method, equipment and storage medium
CN109544583B (en) Method, device and equipment for extracting interested area of leather image
EP3452980A1 (en) Methods and apparatus for automated noise and texture optimization of digital image sensors
US8737762B2 (en) Method for enhancing image edge
CN104778710B (en) A kind of morphological images edge detection method based on quantum theory
CN110930321A (en) Blue/green screen digital image matting method capable of automatically selecting target area
CN107292897B (en) Image edge extraction method and device for YUV domain and terminal
WO2023016146A1 (en) Image sensor, image collection apparatus, image processing method, and image processor
CN107644437B (en) Color cast detection system and method based on blocks
CN108711160B (en) Target segmentation method based on HSI (high speed input/output) enhanced model
CN115205194B (en) Image processing-based method, system and device for detecting coverage rate of armyworm plate
Kaur et al. QRFODD: Quaternion Riesz fractional order directional derivative for color image edge detection
CN113902798A (en) Pupil quick positioning method for color iris recognition
CN116958058A (en) Lens dirt detection method and device and image detection equipment
CN109003268B (en) Method for detecting appearance color of ultrathin flexible IC substrate
CN113469980B (en) Flange identification method based on image processing
CN115358948A (en) Low-illumination image enhancement method based on improved Retinex algorithm

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination