CN111079676A - Human eye iris detection method and device - Google Patents

Human eye iris detection method and device

Info

Publication number: CN111079676A (application CN201911336374.XA; granted as CN111079676B)
Authority: CN (China)
Prior art keywords: iris, key points, human eye, face
Legal status: Granted; Active (the legal status is an assumption and is not a legal conclusion)
Inventors: 于慧敏 (Yu Huimin), 丁洋凯 (Ding Yangkai)
Original and current assignee: Zhejiang University (ZJU)
Other languages: Chinese (zh)
Application filed by Zhejiang University (ZJU)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V40/18: Eye characteristics, e.g. of the iris
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; face representation
    • G06V40/171: Local features and components; facial parts; occluding parts, e.g. glasses; geometrical relationships

Abstract

The invention discloses a method and a device for detecting the human iris, comprising the following steps: receiving a face data set and training a keypoint localization neural network; coarsely locating the face keypoints with the keypoint localization neural network, obtaining the corresponding face keypoints, and selecting the eye-region keypoints from them; calculating, from the eye-region keypoints, the angle over which the iris is not occluded by the eyelids and an upper bound on the radius of the outer iris boundary; and iteratively optimizing the eye-region keypoints according to that angle and radius bound, thereby achieving precise localization of the keypoints and of the outer iris boundary. The invention can efficiently and precisely locate the eye keypoints and the outer iris boundary, has high accuracy and robustness, and can run at real-time speed.

Description

Human eye iris detection method and device
Technical field
The invention belongs to the field of iris detection, and particularly relates to a human eye iris detection method and device.
Background
In face recognition, face keypoint detection is an important sub-task. Given a face image, face keypoint detection locates the key regions of the face, usually the eyes, nose, mouth and facial contour. It is an essential link in the face recognition pipeline: accurate face keypoint positions are a prerequisite for tasks such as face pose recognition, expression recognition, face beautification and fatigue recognition.
The classical iris detection approach is represented by the Daugman algorithm, proposed by Daugman in 1993. The algorithm uses a coarse-to-fine strategy to locate the iris with single-pixel accuracy and estimates the centers and radii of the pupil and the iris.
Although the Daugman algorithm gives good results, it depends heavily on the iteration starting point: if the starting point is chosen poorly, the iteration is difficult to converge and the output is poor. The traditional Daugman pipeline usually determines the starting point with classical image-processing methods, but these are generally not robust, and accurate results are hard to guarantee on low-quality images such as faces with large pose changes or faces in shadow.
Disclosure of Invention
In view of this, embodiments of the present invention provide a method and an apparatus for detecting the iris of a human eye, so as to solve the prior-art problem of excessive dependence on the iteration starting point.
In one aspect, embodiments of the present invention provide a human-eye iris detection method, including: receiving a face data set and training a keypoint localization neural network; coarsely locating the face keypoints with the keypoint localization neural network, obtaining the corresponding face keypoints, and selecting the eye-region keypoints from them; calculating, from the eye-region keypoints, the angle over which the iris is not occluded by the eyelids and an upper bound on the radius of the outer iris boundary; and iteratively optimizing the eye-region keypoints according to that angle and radius bound, thereby achieving precise localization of the keypoints and of the outer iris boundary.
In another aspect, embodiments of the present invention provide a human iris detection apparatus, including: a neural-network training module, configured to receive a face data set and train a keypoint localization neural network; a keypoint detection module, configured to coarsely locate the face keypoints with the keypoint localization neural network, obtain the corresponding face keypoints, and select the eye-region keypoints from them; a parameter estimation module, configured to calculate, from the eye-region keypoints, the angle over which the iris is not occluded by the eyelids and an upper bound on the radius of the outer iris boundary; and an iris fine-localization module, configured to iteratively optimize the eye-region keypoints according to that angle and radius bound, achieving precise localization of the keypoints and of the outer iris boundary.
By adopting any of the above technical schemes, embodiments of the invention obtain the following beneficial effects:
Using the keypoint localization neural network as the first detection step improves the detection precision of the iris center point.
A drawback of the Daugman algorithm is that it needs a pre-determined, high-confidence circle-center position as the starting point of the initial iteration; the keypoint localization neural network provides a high-precision eye center position, which effectively remedies this drawback.
The invention can efficiently locate the center position and radius of the human iris, has high accuracy and robustness, and can run at real-time speed.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention and not to limit the invention. In the drawings:
FIG. 1 is a flow chart of a method of iris detection in a human eye according to an embodiment of the invention;
FIG. 2 is an example of an optional 68-point keypoint scheme according to the present invention;
FIG. 3 is an optional left-eye keypoint distribution map according to the present invention;
FIG. 4 is an optional face image to be detected according to the present invention;
FIG. 5 is the eye-region detection result produced by the keypoint localization neural network for FIG. 4, in accordance with the present invention;
FIG. 6 is the result of iterating with the modified Daugman algorithm on FIG. 5, according to the present invention;
fig. 7 is a schematic structural diagram of a human iris detection device according to an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
On the contrary, the invention is intended to cover the alternatives, modifications and equivalents that may be included within the spirit and scope of the invention as defined by the appended claims. Furthermore, in the following detailed description, certain specific details are set forth in order to provide a better understanding of the present invention.
Embodiment 1
Fig. 1 is a flowchart of a method for detecting an iris of a human eye according to embodiment 1 of the present invention, as shown in fig. 1, the method including the steps of:
and S101, receiving a face data set and training a key point positioning neural network.
Specifically, in the above step, the network may be trained using quadruple data consisting of a face image, face keypoint coordinates, a face category label and a face pose angle, and the keypoint localization neural network may be a convolutional neural network with Hourglass as the backbone.
Step S102: coarsely locate the face keypoints with the keypoint localization neural network, obtain the corresponding face keypoints, and select the eye-region keypoints from them.
Specifically, before the keypoint localization neural network performs the coarse localization, the input face image is normalized, using the same mean and variance that were used for normalization during training.
Step S103: calculate, from the eye-region keypoints, the angle over which the iris is not occluded by the eyelids and the radius upper bound of the outer iris boundary.
Specifically, the actual iris is covered by the upper and lower eyelids and does not appear as a complete circle, so the angular range of the unoccluded iris is taken into account during detection; the angle over which the iris is not occluded by the eyelids can be computed with inverse trigonometric functions from the upper- and lower-eyelid keypoints and the iris center keypoint of the eye region. In addition, the radius upper bound of the outer iris boundary is computed from the left and right eye-corner keypoints of the eye region; its value is half the distance between the two corners, and it is used in the subsequent iteration.
Step S104: iteratively optimize the eye-region keypoints according to the angle over which the iris is not occluded by the eyelids and the radius upper bound of the outer iris boundary, achieving precise localization of the keypoints and of the outer iris boundary.
Specifically, in this step the face image to be detected is first converted to grayscale, and the eye-region grayscale image is then cropped according to the eye-region keypoints. The initial circle-center parameters of the iris are given by the keypoint localization neural network, and the radius of the outer iris boundary cannot exceed its upper bound; within this candidate region, the modified Daugman algorithm searches for the best iris circle-center position and outer-boundary radius, which are taken as the final result.
The method thus receives a face data set and trains a keypoint localization neural network; coarsely locates the face keypoints with the network, obtains the corresponding face keypoints, and selects the eye-region keypoints from them; calculates, from the eye-region keypoints, the angle over which the iris is not occluded by the eyelids and the radius upper bound of the outer iris boundary; and iteratively optimizes the eye-region keypoints according to that angle and radius bound, achieving precise localization of the keypoints and of the outer iris boundary.
Firstly, a data set centered on keypoints and face images is established, a keypoint localization neural network is built with neural-network techniques, and a loss function centered on a similarity distance metric between the ground-truth and predicted coordinates of the face keypoints is constructed. When the keypoint localization neural network is set up, the backbone weights are initialized by transfer learning from training results on a public data set, the non-backbone weights are defined by random initialization, and training uses the back-propagation algorithm. At inference time, the input face image is normalized, the processed face image is fed into the trained keypoint localization neural network to obtain the face keypoints, the resulting set of keypoint positions is screened, and the eye-region keypoints are selected. The angle over which the iris is not occluded by the eyelids and the radius upper bound of the outer iris boundary are then computed from the eye-region keypoints. Finally, the eye-region grayscale image is cropped according to the eye-region keypoint positions, and the modified Daugman iteration is run on the eye-region keypoints to achieve precise localization of the keypoints and of the outer iris boundary. Using the keypoint localization neural network as the first detection step improves the detection precision of the iris center, removes the Daugman algorithm's dependence on the quality of the initial iteration starting point, and enables effective iris detection in face images.
Optionally, according to the above embodiment of the present application, training the key point localization neural network based on the face data set includes:
step S1011, a data set with the key points and the face image as the core is established.
Specifically, quadruple data consisting of face images, face keypoint coordinates, face category labels and face pose angles are established; such a data set makes network training more effective.
Step S1012, a key point positioning neural network is established based on the neural network technology.
Specifically, as one embodiment, the Hourglass network may be selected as the backbone of the convolutional neural network.
And step S1013, training a face key point detection model by adopting a loss function taking similarity distance measurement between the real coordinates and the predicted coordinates of the face key points as a core.
Specifically, the loss may follow the form of the Focal Loss, so that data with large face pose angles or with rare category labels have a larger influence on the loss function during training.
Optionally, according to the above embodiment of the present application, the establishing a data set with a key point and a face image as a core includes:
and step S10111, acquiring a human face image.
Specifically, in an optional embodiment, color face images with a resolution of 128 × 128 or higher are collected under diverse conditions, including but not limited to different skin tones, different face poses, different facial expressions and different face shapes.
Step S10112: label the face keypoints in the format of horizontal and vertical coordinate values.
Specifically, in an optional embodiment, faces may be labeled according to a 68-point face keypoint scheme. FIG. 2 shows an example of the optional 68-point scheme according to the present invention: the 68 keypoints of the face are labeled, and two additional keypoints are added, namely the iris centers of the left and right eyes. After labeling, the eye-region keypoints are as shown in FIG. 3, an optional left-eye keypoint distribution map according to the present invention (taking the left eye as an example; the right eye is the same): Q1 is the iris center point; Q2 and Q3 are the two eye corners; Q4 and Q5 are two points on either side of the iris describing the upper-eyelid contour; Q6 and Q7 are two points on either side of the iris describing the lower-eyelid contour. The labels of Q2 to Q7 follow the 68-point keypoint annotation, while Q1 is labeled at the center of the iris.
Step S10113: label the face category labels; the labeling format is the name of each face category.
Specifically, in an optional embodiment, the names of the different face categories are labeled, such as different skin colors, different face poses, different facial expressions and different face shapes. After labeling, the proportions of the different label data are counted and each label's data is assigned a training weight in (0, 1): categories with little data receive larger weights and categories with much data receive smaller weights. In an optional embodiment, the weights are assigned according to the proportion of a category's data in the total data.
Step S10114, marking the Euler angles of the human face postures, wherein the marking format is the Euler angles of the human face postures in three directions.
Specifically, in an optional embodiment, each face image is labeled with the face-pose Euler angles (α, β, γ); the labeling may be performed with a pose estimation network, or may be completed during acquisition.
Optionally, according to the above embodiment of the present application, the establishing a key point localization neural network based on a neural network technology includes:
and step S10121, building a face key point positioning neural network.
Specifically, as one embodiment, the Hourglass network may be selected as the backbone of the convolutional neural network; the keypoint positions are determined by outputting a keypoint heat map and taking the maximum-value location as the keypoint position.
Step S10122: with a transfer-learning approach, initialize the backbone network weights from training results on a public data set.
Specifically, as one embodiment, a backbone network trained on the public WFLW data set may be used as the initial weights of the model's backbone network.
In step S10123, the non-backbone network weight is defined by a random initialization method.
Specifically, as an embodiment, the initialization of the non-backbone network weight may be defined by using a gaussian random initialization method.
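As a concrete illustration of this step, the sketch below draws non-backbone weights from a zero-mean Gaussian with NumPy; the standard deviation of 0.01 is an assumed value for illustration and is not specified in the patent.

```python
import numpy as np

def gaussian_init(shape, std=0.01, seed=None):
    """Gaussian random initialization for non-backbone weights (step S10123).
    The standard deviation 0.01 is an assumed value, not from the patent."""
    rng = np.random.default_rng(seed)
    return rng.normal(loc=0.0, scale=std, size=shape)
```

In practice the same helper would be applied to every non-backbone layer while the backbone keeps its transferred weights.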
Optionally, according to the above embodiment of the present application, training the keypoint location neural network by using the loss function with the similarity distance metric between the real coordinates and the predicted coordinates of the face keypoints as the core includes:
step S10131, designing a loss function taking similarity distance measurement between the real coordinates and the predicted coordinates of the key points of the human faces as a core, wherein the loss function can enable the human faces with different labels and postures to have different contributions to network training.
Specifically, as an example, the loss function L may take the following form:

[Formula (1), shown as an image in the original document]

In formula (1), N denotes the total amount of training data, w_Ci denotes the weight corresponding to a face with label Ci, the parameters θj (j = 1, 2, 3) denote the face-pose Euler angles (αi, βi, γi), and ||d||2 denotes the 2-norm of the difference between the ground-truth and predicted coordinates of a face keypoint.
Step S10132: train the keypoint localization neural network with the back-propagation algorithm based on the loss function.
Specifically, as one embodiment, during actual training the batch size (BatchSize) is set to 256, parameters are updated by back-propagation with the Adam optimization method, the weight-decay coefficient is 1e-6, the momentum is set to 0.9, the number of iterations may be chosen as 20k, and the learning rate may be chosen as 1e-5.
Optionally, according to the foregoing embodiment of the present application, the coarse positioning of the face key points is performed by using a key point positioning neural network, so as to obtain corresponding face key points, and the key points in the eye region are selected according to the obtained face key points, including:
step S1021, a normalization process is performed on the input face image.
Specifically, as one embodiment, FIG. 4 is an optional face image to be detected according to the present invention. The face images are normalized during network training, and at test time they are normalized with the same mean and variance. Optional ImageNet-based values are: mean [0.485, 0.456, 0.406] and standard deviation [0.229, 0.224, 0.225].
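A minimal sketch of this normalization, assuming the input is an H × W × 3 uint8 array and that the quoted ImageNet statistics apply to pixel values scaled into [0, 1]:

```python
import numpy as np

# ImageNet statistics quoted in the embodiment (per-channel mean and std for
# images scaled to [0, 1]); the same values must be used at train and test time.
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406])
IMAGENET_STD = np.array([0.229, 0.224, 0.225])

def normalize_face(img_uint8):
    """Scale an H x W x 3 uint8 face image to [0, 1], then subtract the mean
    and divide by the std channel-wise, as done before the keypoint network."""
    x = img_uint8.astype(np.float64) / 255.0
    return (x - IMAGENET_MEAN) / IMAGENET_STD
```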
Step S1022: feed the processed face image into the trained keypoint localization neural network to obtain the face keypoints.
Step S1023: screen the obtained set of keypoint positions and select the eye-region keypoints.
Specifically, as one embodiment, the eye-region keypoints may be screened as follows. Left eye: the left-eye iris center coordinate Q1(x1, y1); the left and right corner coordinates of the left eye Q2(x2, y2), Q3(x3, y3); the upper- and lower-eyelid coordinates of the left eye Q4(x4, y4), Q5(x5, y5), Q6(x6, y6), Q7(x7, y7). Right eye: the right-eye iris center coordinate Q8(x8, y8); the left and right corner coordinates of the right eye Q9(x9, y9), Q10(x10, y10); the upper- and lower-eyelid coordinates of the right eye Q11(x11, y11), Q12(x12, y12), Q13(x13, y13), Q14(x14, y14). As shown in FIG. 3, the optional left-eye keypoint map according to the present invention (the same applies to the right eye), Q1 is the iris center point, Q2 and Q3 are the two eye corners, Q4 and Q5 are two points on either side of the iris describing the upper-eyelid contour, and Q6 and Q7 are two points on either side of the iris describing the lower-eyelid contour. FIG. 5 shows the eye-region keypoints detected by the keypoint localization neural network for the image of FIG. 4.
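The screening of step S1023 can be sketched as below. The index layout is an assumption made for illustration: the network is taken to output a (70, 2) array whose first 68 rows follow the standard 68-point annotation (eye contour points at rows 36 to 41 and 42 to 47) and whose last two rows are the added left and right iris centers; the patent does not specify the output ordering.

```python
import numpy as np

# Assumed index layout for illustration: rows 0-67 follow the 68-point
# annotation, rows 68 and 69 are the added left/right iris centers (Q1, Q8).
LEFT_EYE = slice(36, 42)     # corner and eyelid points of the left eye
RIGHT_EYE = slice(42, 48)    # corner and eyelid points of the right eye
LEFT_IRIS, RIGHT_IRIS = 68, 69

def select_eye_keypoints(kps):
    """Screen the full keypoint set and keep only the eye-region points:
    the iris center plus 6 contour points for each eye."""
    kps = np.asarray(kps, dtype=float)
    left = np.vstack([kps[LEFT_IRIS][None, :], kps[LEFT_EYE]])
    right = np.vstack([kps[RIGHT_IRIS][None, :], kps[RIGHT_EYE]])
    return left, right   # each of shape (7, 2): Q1..Q7 / Q8..Q14
```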
Optionally, according to the above embodiments of the present application, calculating, by the key points in the human eye region, an angle of the iris that is not covered by the eyelid and an upper bound of the radius of the outer boundary of the human iris, includes:
and step S1031, calculating the angle of the iris which is not shielded by the eyelid by the inverse trigonometric function according to the upper and lower eye detection key points of the human eye region and the central key point of the iris.
Specifically, as an example, the opening angles of the left and right eye corners are defined as A1 and A2; they represent the angle over which the iris is not occluded by the eyelids. From the keypoints Q1 to Q7, the opening angles A1 and A2 of the left and right corners can be estimated through simple geometric relations (taking A1 as an example; A2 is obtained the same way), as shown in formula (2):

[Formula (2), shown as an image in the original document]
Step S1032: compute the radius upper bound of the outer iris boundary from the left and right eye-corner keypoints of the eye region; its value is half the distance between the two corners.
Specifically, as one embodiment, an upper bound S on the radius of the outer iris boundary is defined; that is, the iris radius cannot exceed S. From the keypoints Q1 to Q7 and the simple geometric relation that the iris is smaller than the eye-corner width, S can be estimated as shown in formula (3):

S = ||Q2 - Q3||2 / 2    (3)
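The radius bound of formula (3) can be computed directly. Since formula (2) survives only as an image in the original, the opening-angle function below is an illustrative stand-in: it measures, with inverse trigonometric functions, the angle subtended at an eye corner by one upper-eyelid and one lower-eyelid keypoint.

```python
import numpy as np

def radius_upper_bound(q2, q3):
    """Formula (3): S = ||Q2 - Q3||_2 / 2, half the distance between the
    two eye-corner keypoints."""
    return np.linalg.norm(np.asarray(q2, float) - np.asarray(q3, float)) / 2.0

def opening_angle(corner, upper_pt, lower_pt):
    """Illustrative stand-in for formula (2), which appears only as an image
    in the original: the angle subtended at an eye corner by one upper-eyelid
    and one lower-eyelid keypoint, via inverse trigonometric functions."""
    c = np.asarray(corner, float)
    v1 = np.asarray(upper_pt, float) - c
    v2 = np.asarray(lower_pt, float) - c
    a1 = np.arctan2(v1[1], v1[0])
    a2 = np.arctan2(v2[1], v2[0])
    return abs(a1 - a2)
```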
Optionally, according to the above embodiments of the present application, iteratively optimizing the eye-region keypoints according to the angle over which the iris is not occluded by the eyelids and the radius upper bound of the outer iris boundary, so as to achieve fine localization of the keypoints and of the outer iris boundary, includes:
and step S1041, performing graying processing on the face image to be detected.
Specifically, as one embodiment, the R, G and B channels may be combined with weights 0.299, 0.587 and 0.114 to obtain the grayscale image.
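A one-line NumPy sketch of this graying step (the 0.299/0.587/0.114 weights are the ITU-R BT.601 luma coefficients):

```python
import numpy as np

def to_gray(rgb):
    """Convert an H x W x 3 RGB image to grayscale with the 0.299/0.587/0.114
    weights used in step S1041 (the ITU-R BT.601 luma coefficients)."""
    rgb = np.asarray(rgb, dtype=np.float64)
    return rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114
```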
Step S1042: crop the eye-region grayscale image according to the eye-region keypoints.
Step S1043: perform the modified Daugman iteration on the eye-region keypoints.
Step S1044: achieve precise localization of the keypoints and of the outer iris boundary.
Optionally, according to the above embodiment of the present application, the modified Daugman iteration performed on the key points of the human eye region includes:
in step S10431, for example, the left eye (the same applies to the right eye), set Q1(n)The central point of the human iris obtained by the key point detection model is used as the initial iris central point Q of the iteration1(1)(ii) a Setting the nth iteration step length as Tn,T1Is the initial step length; selecting the current iris center point Q1(n)And peripheral and Q1(n)At intervals of T n8 positions per pixel.
Specifically, as an example, the initial step size T1And S' is the initial step size and is determined by rounding the upper bound S of the iris radius.
Step S10432: taking each of the 9 points selected in step S10431 as a candidate circle center, compute the gray-level gradient values of the corresponding pixels at different radii, and record the largest gray-level gradient value together with its corresponding radius and center point.
Specifically, as an example, when computing the gray-level gradient values of the 9 candidate centers at different radii, the following formula (4) may be used:

max_(r, x, y) |g_(θ,r)|    (4)

The angle θ in formula (4) is restricted to the unoccluded angular ranges determined by the corner opening angles A1 and A2 (the two ranges are shown as image formulas in the original document). |g_(θ,r)| denotes the sum of the pixel gradients on the sector-shaped gray-gradient operator with radius r over the angle range θ.
In step S10433, suppose the maximum gray-level gradient value is obtained at a point P. Check whether P is the same point as the current center Q1(n). If it is, set Q1(n) as the initial iris center Q1(n+1) of the next iteration, record the corresponding radius, and go to step S10434; otherwise, replace the current iris circle center Q1(n) with the point P and repeat steps S10431 to S10432.
In step S10434, the step size is shortened as follows: Tn+1 = p · Tn, with 0 < p < 1. When the current step size Tn becomes smaller than 0.5 pixel, the iteration stops, and the currently obtained iris center Q1(n) and the corresponding outer-boundary radius are taken as the final iris circle-center position and outer-boundary radius; otherwise, steps S10431 to S10432 are repeated.
Specifically, as an example, the step-shortening ratio p may be taken as 0.5. FIG. 6 shows the result of iterating with the modified Daugman algorithm on FIG. 5, according to the present invention.
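The whole of steps S10431 to S10434 can be sketched as the search loop below. It is a simplified stand-in, not the patented implementation: the sector gradient operator of formula (4) is approximated by summing radial gray-level differences on sampled arcs, and the unoccluded angle ranges are passed in as explicit (start, end) arc intervals.

```python
import numpy as np

def circle_gradient_score(gray, cx, cy, r, arcs, n_samples=64):
    """Approximate |g_(theta,r)| of formula (4): sum of radial gray-level
    differences sampled on the given arcs of the circle (cx, cy, r)."""
    h, w = gray.shape
    score = 0.0
    for lo, hi in arcs:
        for t in np.linspace(lo, hi, max(4, n_samples // len(arcs))):
            xi = int(round(cx + (r - 1) * np.cos(t)))   # just inside the circle
            yi = int(round(cy + (r - 1) * np.sin(t)))
            xo = int(round(cx + (r + 1) * np.cos(t)))   # just outside
            yo = int(round(cy + (r + 1) * np.sin(t)))
            if 0 <= xi < w and 0 <= yi < h and 0 <= xo < w and 0 <= yo < h:
                score += abs(float(gray[yo, xo]) - float(gray[yi, xi]))
    return score

def refine_iris(gray, center0, s_upper, arcs, p=0.5):
    """Coarse-to-fine search in the spirit of steps S10431-S10434: evaluate the
    current center and its 8 neighbours at step T_n, move to the best point P if
    it differs from Q1(n), otherwise shrink the step by the ratio p; stop once
    the step falls below 0.5 pixel."""
    cx, cy = center0
    t_step = float(max(1, round(s_upper)))          # T1 = S', the rounded bound
    radii = range(3, int(s_upper) + 1)              # radius cannot exceed S
    best_r = int(s_upper)
    for _ in range(100):                            # safety cap on iterations
        if t_step < 0.5:
            break
        candidates = [(cx + dx * t_step, cy + dy * t_step)
                      for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
        _, px, py, best_r = max(
            (circle_gradient_score(gray, x, y, r, arcs), x, y, r)
            for (x, y) in candidates for r in radii)
        if (px, py) == (cx, cy):                    # P equals Q1(n): shorten T
            t_step *= p
        else:                                       # otherwise move center to P
            cx, cy = px, py
    return (cx, cy), best_r
```

On a synthetic dark disk the loop converges to the disk center and radius within a couple of pixels, which matches the patent's claim that a neural-network starting point makes the iteration stable.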
Embodiment 2
The present application further provides a human iris detection apparatus for performing the human iris detection method of Embodiment 1. FIG. 7 is a schematic structural diagram of a human iris detection apparatus according to an embodiment of the present invention. The apparatus includes:
and the training neural network module 10 is used for receiving the face data set and training the key point positioning neural network.
And the key point detection module 11 is configured to perform coarse positioning on the face key points by using the key point positioning neural network, acquire corresponding face key points, and select key points in the eye region according to the acquired face key points.
And the parameter estimation module 12 is used for calculating the angle of the iris which is not shielded by the eyelid and the radius upper bound of the outer boundary of the iris of the human eye according to the key points of the human eye region.
And the iris accurate positioning module 13 is used for performing iterative optimization on key points of the human eye region according to the angle of the iris which is not shielded by the eyelid and the radius upper bound of the outer boundary of the human eye iris, so as to realize accurate positioning of the key points and the accurate positioning of the outer boundary of the iris.
With the above apparatus, a face data set is received and the key point positioning neural network is trained; coarse positioning of face key points is performed by using the key point positioning neural network to obtain the corresponding face key points, and key points of the human eye region are selected from them; the angle of the iris which is not shielded by the eyelid and the upper bound of the radius of the outer boundary of the human eye iris are calculated from the key points of the human eye region; and the key points of the human eye region are iteratively optimized according to these two quantities, so that accurate positioning of the key points and accurate positioning of the outer boundary of the iris are realized.
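The division of labor among modules 10-13 can be sketched as a small class, shown below under loose assumptions: only the geometric part of the parameter estimation module is made concrete (half the canthus distance, as in the description), while the network-dependent and iterative parts are left as stubs. All class, field and method names are invented for illustration.

```python
from dataclasses import dataclass
from typing import Tuple

Point = Tuple[float, float]

@dataclass
class EyeRegion:
    left_corner: Point      # left canthus key point
    right_corner: Point     # right canthus key point
    upper_lid: Point        # upper eyelid key point
    lower_lid: Point        # lower eyelid key point
    iris_center: Point      # coarse iris-center key point

class IrisDetectionDevice:
    """Illustrative composition of modules 10-13 in fig. 7."""
    def __init__(self, keypoint_net=None):
        self.net = keypoint_net                       # trained by module 10

    def detect_eye_keypoints(self, face_image) -> EyeRegion:
        # module 11: coarse face key points -> eye-region key points
        raise NotImplementedError("depends on the trained network")

    def radius_upper_bound(self, eye: EyeRegion) -> float:
        # module 12: half the distance between the two canthi
        (lx, ly), (rx, ry) = eye.left_corner, eye.right_corner
        return 0.5 * ((rx - lx) ** 2 + (ry - ly) ** 2) ** 0.5

    def refine(self, gray_eye, eye: EyeRegion):
        # module 13: modified Daugman iterative optimization
        raise NotImplementedError
```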
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that those skilled in the art can make various modifications and refinements without departing from the principle of the present invention, and these modifications and refinements should also be regarded as falling within the protection scope of the present invention.

Claims (10)

1. A method of iris detection in a human eye, comprising:
receiving a face data set, and training a key point positioning neural network;
performing coarse positioning of face key points by using the key point positioning neural network to obtain corresponding face key points, and selecting key points of the human eye region according to the obtained face key points;
calculating, according to the key points of the human eye region, the angle of the iris which is not shielded by the eyelid and the upper bound of the radius of the outer boundary of the human eye iris; and
performing iterative optimization on the key points of the human eye region according to the angle of the iris which is not shielded by the eyelid and the upper bound of the radius of the outer boundary of the human eye iris, so as to realize precise positioning of the key points and precise positioning of the outer boundary of the iris.
2. The method of claim 1, wherein receiving a face data set and training a key point positioning neural network comprises:
establishing a data set taking key points and face images as a core;
establishing a key point positioning neural network based on a neural network technology;
and training a key point positioning neural network by adopting a loss function taking similarity distance measurement between the real coordinates and the predicted coordinates of the key points of the human face as a core.
3. The method for detecting the iris of a human eye of claim 2, wherein the establishing of the data set with the key points and the face image as the core comprises the following steps:
collecting a face image;
labeling key points of the human face, wherein the labeling format is the horizontal coordinate value and the vertical coordinate value of the key points;
labeling the face type labels, wherein the labeling format is the names of different face types;
and marking the Euler angles of the human face postures, wherein the marking format is the Euler angles of the human face postures in three directions.
4. The human iris detection method of claim 2, wherein the establishing of the key point localization neural network based on the neural network technology comprises:
building a key point positioning neural network;
setting the backbone network weights, by a transfer learning method, to the result of training on a public data set;
and initializing the non-backbone network weights by a random initialization method.
5. The method of claim 2, wherein the training of the key point localization neural network using a loss function with a measure of similarity distance between the real coordinates and the predicted coordinates of the key points of the human face as a core comprises:
designing a loss function taking similarity distance measurement between the real coordinates and the predicted coordinates of the key points of the human face as a core, wherein the loss function can make human faces with different labels and postures contribute to network training differently;
and training the key point positioning neural network by adopting a back propagation algorithm based on the loss function.
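One way to realize a loss in which faces with different labels and poses contribute differently to training, as claimed, is a coordinate-distance loss with per-sample weights. The sketch below is a hypothetical construction: the pose-weighting scheme and the 180-degree normalizer are invented for illustration, not taken from the patent.

```python
import numpy as np

def weighted_keypoint_loss(pred, true, category_weight, euler_angles):
    """Hypothetical weighted distance loss: the per-sample weight grows with
    head-pose magnitude, so faces with large poses (and rarer face types,
    via category_weight) contribute more to network training."""
    # pred, true: (N, K, 2) arrays of predicted / ground-truth coordinates
    # category_weight: (N,) per-face-type factor; euler_angles: (N, 3) degrees
    point_err = np.linalg.norm(pred - true, axis=-1).mean(axis=-1)  # (N,) mean point error
    pose_w = 1.0 + np.abs(euler_angles).sum(axis=-1) / 180.0        # (N,) pose weight
    return float(np.mean(category_weight * pose_w * point_err))
```

A frontal face (all Euler angles zero) with unit category weight contributes exactly its mean key-point distance; larger poses scale that contribution up.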
6. The method of claim 1, wherein the step of using the key point positioning neural network to perform rough positioning of key points of the human face to obtain corresponding key points of the human face, and the step of selecting key points of the human eye region according to the obtained key points of the human face comprises:
normalizing the input face image;
sending the processed face image into a trained key point positioning neural network to obtain face key points;
and screening the obtained group of key point positions, and selecting the key points of the human eye region.
7. The method for detecting the iris of a human eye according to claim 1, wherein calculating the angle of the iris which is not covered by the eyelid and the upper bound of the radius of the outer boundary of the iris of the human eye according to the key points of the human eye region comprises:
calculating, according to the geometric relationship among the upper and lower eyelid key points and the iris center key point in the human eye region, the angle of the iris which is not shielded by the eyelid;
and calculating the upper bound of the radius of the outer boundary of the iris of the human eye from the key points of the left canthus and the right canthus in the human eye region, the value being half of the distance between the left and right canthi.
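To make the calculation in claim 7 concrete, the following sketch computes both quantities. The radius upper bound follows the claim directly; for the un-occluded angle the claim only says "according to the geometric relationship", so the angle subtended at the iris center by the two eyelid key points is used here as one plausible, hypothetical reading.

```python
import math

def iris_parameters(left_corner, right_corner, upper_lid, lower_lid, iris_center):
    """Radius upper bound = half the canthus distance (per claim 7); the
    un-occluded angle is an illustrative geometric interpretation."""
    r_max = 0.5 * math.dist(left_corner, right_corner)
    cx, cy = iris_center
    def direction(p):
        # angle of the ray from the iris center towards key point p
        return math.atan2(p[1] - cy, p[0] - cx)
    span = abs(direction(upper_lid) - direction(lower_lid))
    visible_angle = min(span, 2 * math.pi - span)   # radians, in [0, pi]
    return r_max, visible_angle
```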
8. The method for detecting the iris of the human eye according to claim 1, wherein performing iterative optimization on the key points of the human eye region according to the angle of the iris which is not covered by the eyelid and the upper bound of the radius of the outer boundary of the iris of the human eye, so as to realize the fine positioning of the key points and the fine positioning of the outer boundary of the iris, comprises:
carrying out gray processing on a face image to be detected;
intercepting a human eye area gray level image according to the human eye area key points;
performing improved Daugman iteration on the key points of the human eye region;
and realizing the fine positioning of the key points and the fine positioning of the outer boundary of the iris.
9. The method of claim 8, wherein performing modified Daugman iterations on the key points of the eye region comprises:
(9.1) let Q1(n) be the iris center point of the human eye obtained at the n-th iteration, and take the iris center key point obtained by the key point positioning neural network as the initial iris center point Q1(1) of the iteration; let the step length of the n-th iteration be Tn, with T1 the initial step length; select the current iris center point Q1(n) and the 8 surrounding positions spaced Tn pixels from Q1(n);
(9.2) taking each of the 9 points selected in step (9.1) as a circle center, calculate the gray gradient values of the corresponding pixel values at different radii for each point, and record the maximum gray gradient value, the corresponding outer boundary radius and the corresponding center point;
(9.3) assuming that the maximum gray gradient value is obtained at point P, determine whether point P and the current center point Q1(n) are the same point; if so, set Q1(n) as the initial iris center point Q1(n+1) of the next iteration, record the corresponding radius, and perform step (9.4); otherwise, replace the current iris circle center Q1(n) with point P and repeat steps (9.1)-(9.2);
(9.4) shorten the step length as Tn+1 = p·Tn, where 0 < p < 1; when the current step length Tn becomes smaller than the set threshold, stop the iteration, and take the currently obtained iris center point Q1(n) and the corresponding iris outer boundary radius as the final iris circle center position and iris outer boundary radius; otherwise, repeat steps (9.1)-(9.2).
10. An iris detection apparatus for a human eye, comprising:
the neural network module is used for receiving the face data set and training the key point positioning neural network;
the key point detection module is used for carrying out rough positioning on the key points of the human face by using the key point positioning neural network, acquiring corresponding key points of the human face and selecting key points of the human eye region according to the obtained key points of the human face;
the parameter estimation module is used for calculating, from the key points of the human eye region, the angle of the iris which is not shielded by the eyelid and the upper bound of the radius of the outer boundary of the iris of the human eye;
and the iris accurate positioning module is used for performing iterative optimization on the key points of the human eye region according to the angle of the iris which is not shielded by the eyelid and the upper bound of the radius of the outer boundary of the iris of the human eye, so as to realize the accurate positioning of the key points and the accurate positioning of the outer boundary of the iris.
CN201911336374.XA 2019-12-23 2019-12-23 Human eye iris detection method and device Active CN111079676B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911336374.XA CN111079676B (en) 2019-12-23 2019-12-23 Human eye iris detection method and device

Publications (2)

Publication Number Publication Date
CN111079676A true CN111079676A (en) 2020-04-28
CN111079676B CN111079676B (en) 2022-07-19

Family

ID=70316681

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911336374.XA Active CN111079676B (en) 2019-12-23 2019-12-23 Human eye iris detection method and device

Country Status (1)

Country Link
CN (1) CN111079676B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060165266A1 (en) * 2005-01-26 2006-07-27 Honeywell International Inc. Iris recognition system and method
US20100284576A1 (en) * 2006-09-25 2010-11-11 Yasunari Tosa Iris data extraction
CN103198301A (en) * 2013-04-08 2013-07-10 北京天诚盛业科技有限公司 Iris positioning method and iris positioning device
CN204791051U (en) * 2015-04-21 2015-11-18 国家电网公司 Electric power intelligent security manages device based on iris algorithm
CN107958173A (en) * 2016-10-18 2018-04-24 北京眼神科技有限公司 Iris locating method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
PENG Zhiyong et al.: "An improved Daugman method for iris recognition", Wuhan University Journal of Natural Sciences *
DU Desheng et al.: "A fast iris localization method", Computer Engineering and Applications *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112149641A (en) * 2020-10-23 2020-12-29 北京百度网讯科技有限公司 Method, device, equipment and storage medium for monitoring driving state
CN113420721A (en) * 2021-07-21 2021-09-21 北京百度网讯科技有限公司 Method and device for labeling key points of image
CN113420721B (en) * 2021-07-21 2022-03-29 北京百度网讯科技有限公司 Method and device for labeling key points of image
CN113837117A (en) * 2021-09-28 2021-12-24 上海电力大学 Novel normalization and deep neural network-based iris coding method
CN113837117B (en) * 2021-09-28 2024-05-07 上海电力大学 Iris coding method based on novel normalization and depth neural network
CN116959085A (en) * 2023-07-27 2023-10-27 山东睿芯半导体科技有限公司 Iris positioning identification method, iris positioning identification device, chip and terminal

Also Published As

Publication number Publication date
CN111079676B (en) 2022-07-19

Similar Documents

Publication Publication Date Title
CN111079676B (en) Human eye iris detection method and device
CN107506693B (en) Distort face image correcting method, device, computer equipment and storage medium
US11487995B2 (en) Method and apparatus for determining image quality
CN105989331B (en) Face feature extraction element, facial feature extraction method, image processing equipment and image processing method
Quelhas et al. Cell nuclei and cytoplasm joint segmentation using the sliding band filter
US7512255B2 (en) Multi-modal face recognition
CN103914676B (en) A kind of method and apparatus used in recognition of face
CN110705425B (en) Tongue picture multi-label classification method based on graph convolution network
CN110909618B (en) Method and device for identifying identity of pet
CN107609517A (en) A kind of classroom behavior detecting system based on computer vision
CN104318603A (en) Method and system for generating 3D model by calling picture from mobile phone photo album
CN109087261B (en) Face correction method based on unlimited acquisition scene
WO2022001237A1 (en) Method and system for automatically recognizing image of primary tumor of nasopharyngeal carcinoma
CN111062328B (en) Image processing method and device and intelligent robot
CN107909622A (en) Model generating method, the scanning planing method of medical imaging and medical image system
CN109711268B (en) Face image screening method and device
CN111401219B (en) Palm key point detection method and device
CN111723691A (en) Three-dimensional face recognition method and device, electronic equipment and storage medium
CN109409298A (en) A kind of Eye-controlling focus method based on video processing
CN103778406A (en) Object detection method and device
CN104794441A (en) Human face feature extracting method based on active shape model and POEM (patterns of oriented edge magnituedes) texture model in complicated background
CN112800815A (en) Sight direction estimation method based on deep learning
CN110969101A (en) Face detection and tracking method based on HOG and feature descriptor
CN111723688B (en) Human body action recognition result evaluation method and device and electronic equipment
CN111488811B (en) Face recognition method, device, terminal equipment and computer readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant