CN109086734B - Method and device for positioning pupil image in human eye image

Info

Publication number
CN109086734B
CN109086734B (application CN201810934979.8A)
Authority
CN
China
Prior art keywords: point, determining, sampling, image, points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810934979.8A
Other languages
Chinese (zh)
Other versions
CN109086734A (en)
Inventor
谢波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ennew Digital Technology Co Ltd
Original Assignee
Ennew Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ennew Digital Technology Co Ltd
Priority to CN201810934979.8A
Publication of CN109086734A
Application granted
Publication of CN109086734B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/19 Sensors therefor

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Ophthalmology & Optometry (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Eye Examination Apparatus (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method and a device for positioning a pupil image, wherein the method comprises the following steps: S0: determining an initial pixel point in a human eye image as the origin; S1: determining a plurality of rays according to the origin, calculating the pixel gradient amplitude of each pixel point forming the rays, and determining the pixel points whose pixel gradient amplitudes are greater than a preset threshold value as reference pixel points; S2: determining a plurality of suspicious edge points according to the distance between each reference pixel point and the origin, and determining the geometric center pixel point corresponding to the suspicious edge points; S3: detecting whether the distance between the geometric center pixel point determined this time and the historical geometric center pixel point determined last time is smaller than a set value; if so, executing S5, otherwise executing S4; S4: determining the geometric center pixel point as a new origin and executing S1; S5: fitting a target ellipse model according to the suspicious edge points to mark the position of the pupil image. According to this technical scheme, the pupil image can be positioned more accurately.

Description

Method and device for positioning pupil image in human eye image
Technical Field
The invention relates to the technical field of computers, in particular to a method and a device for positioning pupil images in human eye images.
Background
When face recognition or iris recognition is performed, the pupil image carried in a human eye image needs to be positioned, that is, the position of the pupil image needs to be marked in the human eye image.
At present, when positioning the pupil image carried in a human eye image, vertical and horizontal gray-scale gradient integral projection is generally used to determine a plurality of edge points of the pupil image in the human eye image; the geometric center corresponding to these edge points is determined as the pupil center, and the position of the pupil image in the human eye image is marked according to the pupil center and the edge points.
In the above technical solution, the determined edge points may contain a large number of noise points caused by eyelashes and eyelids, which can cause an excessively large difference between the determined pupil center and the real center of the pupil image; when the position of the pupil image is marked according to such a pupil center and edge points, the pupil image carried in the human eye image cannot be accurately positioned.
Disclosure of Invention
The invention provides a method and a device for positioning pupil images in human eye images, which can more accurately position the pupil images carried in the human eye images.
In a first aspect, the present invention provides a method for positioning a pupil image in a human eye image, including:
S0: determining an initial pixel point in a human eye image to be processed, and taking the initial pixel point as the origin;
S1: determining a plurality of rays in the human eye image to be processed according to the origin, calculating the pixel gradient amplitude of each pixel point forming each ray, and determining the pixel points whose pixel gradient amplitudes are greater than a preset threshold value as reference pixel points;
S2: determining a plurality of suspicious edge points from the reference pixel points according to the distance between each reference pixel point and the origin, and determining the geometric center pixel point corresponding to the suspicious edge points;
S3: detecting whether the distance between the geometric center pixel point determined this time and the historical geometric center pixel point determined last time is smaller than a set value; if so, executing S5, otherwise executing S4;
S4: determining the current geometric center pixel point as the historical geometric center pixel point, taking it as the new origin, and executing S1;
S5: fitting a target ellipse model according to the suspicious edge points, and marking the position of the pupil image in the human eye image through the target ellipse model.
Preferably,
the fitting of the target ellipse model according to the suspicious edge points and the marking of the positions of the pupil images in the human eye images through the target ellipse model comprise the following steps:
A0: fitting an initial elliptical model according to each suspicious edge point;
A1: randomly selecting at least three sampling edge points from the suspicious edge points, determining the sampling tangent line corresponding to each of the at least three sampling edge points on the initial elliptical model, and determining the sampling pupil center according to the sampling edge points and the sampling tangent lines;
A2: for each non-sampling edge point that is not selected as a sampling edge point among the suspicious edge points, determining a calibration pupil center according to the non-sampling edge point, its tangent line on the initial elliptical model, the sampling edge points and the sampling tangent lines;
A3: for each calibration pupil center, when the distance between the calibration pupil center and the sampling pupil center is not greater than a set distance, determining the non-sampling edge point corresponding to the calibration pupil center as a trusted edge point;
A4: determining the ratio between the first total number of trusted edge points and the second total number of suspicious edge points, and detecting whether the ratio is smaller than a set threshold value; if so, executing A1, otherwise executing A5;
A5: fitting a target ellipse model according to each trusted edge point, and marking the position of the pupil image in the human eye image through the target ellipse model.
Preferably,
determining the sampling pupil center according to each sampling edge point and each sampling tangent line, including:
determining the midpoint between every two adjacent sampling edge points according to the initial ellipse model;
for each midpoint, determining the intersection point of the sampling tangent lines corresponding to the two sampling edge points associated with that midpoint, and determining the straight line through the midpoint and the intersection point;
and calculating, by the least squares method, the polar distance point in the human eye image that is closest to all of the straight lines, and determining the polar distance point as the sampling pupil center.
Preferably,
the fitting of the target ellipse model according to each trusted edge point includes:
B0: forming a trusted set from the trusted edge points;
B1: fitting a transition ellipse model according to each trusted edge point included in the trusted set;
B2: calculating the algebraic distance between each trusted edge point in the trusted set and the transition ellipse model;
B3: calculating the average fitting deviation according to the algebraic distances and the total number of trusted edge points in the trusted set, and determining a deviation threshold according to the average fitting deviation;
B4: for each suspicious edge point, detecting the algebraic distance between the suspicious edge point and the transition ellipse model, determining the suspicious edge point as a noise point when this algebraic distance is greater than the deviation threshold, and forming a new trusted set from the suspicious edge points that are not determined to be noise points;
B5: detecting whether the trusted set formed this time is identical to the trusted set formed last time; if so, executing B6, otherwise executing B1;
B6: determining the transition ellipse model corresponding to the trusted set formed this time as the target ellipse model.
Preferably,
the determining a plurality of suspicious edge points from each reference pixel point according to the distance between each reference pixel point and the origin point comprises:
detecting the spacing distance between each reference pixel point and the origin point;
calculating the expected value and the standard deviation of the separation distances;
extracting a plurality of target spacing distances from the spacing distances according to the expected value and the standard deviation;
and determining the reference pixel points corresponding to the target spacing distances as suspicious edge points.
Preferably,
before determining the initial pixel points in the human eye image to be processed, the method further comprises the following steps:
collecting a video image;
respectively filtering each frame of original image of the video image to obtain a preprocessed image corresponding to each frame of the original image;
and respectively extracting the human eye image to be processed carried by the current preprocessed image from each preprocessed image.
Preferably,
before determining the initial pixel points in the human eye image to be processed, the method further comprises the following steps:
sequentially selecting an unselected human eye image to be processed, and determining the corresponding frame number of the selected human eye image to be processed in the video image;
then, the determining an initial pixel point in the to-be-processed human eye image includes:
when the frame number is 1, determining the geometric center of the selected human eye image to be processed as an initial pixel point; or when the frame number is greater than 1, determining an initial pixel point in the to-be-processed eye image according to the pupil center of the pupil image carried by the to-be-processed eye image selected last time.
Preferably,
the S5 further includes determining the geometric center pixel point as a pupil center of a pupil image carried in the to-be-processed human eye image.
In a second aspect, the present invention provides an apparatus for positioning a pupil image in a human eye image, including:
the initial point determining module is used for determining initial pixel points in the human eye image to be processed and taking the initial pixel points as original points;
the edge point detection module is used for determining a plurality of rays in the human eye image to be processed according to the original point, calculating the pixel gradient amplitude of each pixel point forming each ray, and determining the pixel point corresponding to each pixel gradient amplitude which is greater than a preset threshold value as a reference pixel point;
the noise point filtering module is used for determining a plurality of suspicious edge points from each reference pixel point according to the spacing distance between each reference pixel point and the original point and determining a geometric center pixel point corresponding to each suspicious edge point;
the center detection module is used for detecting whether the distance value between the geometric center pixel point determined this time and the historical geometric center pixel point determined last time is smaller than a set numerical value or not, and if so, the model fitting module is triggered; otherwise, triggering a transition processing module;
the transition processing module is used for determining the geometric center pixel point as a historical geometric center pixel point, determining the geometric center pixel point as a new original point and triggering the edge point detection module;
and the model fitting module is used for fitting a target ellipse model according to each suspicious edge point and marking the position of the pupil image in the human eye image through the target ellipse model.
Preferably,
the model fitting module comprises: a preprocessing unit, a sampling processing unit, a calibration processing unit, a trusted processing unit, a detection processing unit and a marking processing unit; wherein:
the preprocessing unit is used for fitting an initial elliptical model according to each suspicious edge point;
the sampling processing unit is used for randomly selecting at least three sampling edge points from the suspicious edge points, determining that the at least three sampling edge points respectively correspond to sampling tangent lines on the initial elliptical model, and determining the sampling pupil center according to the sampling edge points and the sampling tangent lines;
the calibration processing unit is configured to determine, for each non-sampling edge point that is not selected as a sampling edge point in each suspicious edge point, a calibration pupil center according to the non-sampling edge point, a tangent line that the non-sampling edge point corresponds to on the initial elliptical model, each sampling edge point, and each sampling tangent line;
the trusted processing unit is configured to, for each calibration pupil center, determine a non-sampling edge point corresponding to the calibration pupil center as a trusted edge point when an interval distance between the calibration pupil center and the sampling pupil center is not greater than a set distance;
the detection processing unit is configured to determine a ratio between a first total number of trusted edge points and a second total number of suspicious edge points, detect whether the ratio is smaller than a set threshold, and trigger the sampling processing unit if the ratio is smaller than the set threshold; otherwise, trigger the marking processing unit;
and the marking processing unit is used for fitting a target ellipse model according to each trusted edge point and marking the position of the pupil image in the human eye image through the target ellipse model.
Preferably,
the apparatus further comprises: an image acquisition module, a filtering processing module and an image extraction module; wherein:
the image acquisition module is used for acquiring video images;
the filtering processing module is used for respectively filtering each frame of original image of the video image to obtain a preprocessed image corresponding to each frame of the original image;
the image extraction module is used for respectively extracting the human eye image to be processed carried by the current preprocessed image from each preprocessed image.
The invention provides a method and a device for positioning a pupil image in a human eye image. In the first stage, an initial pixel point in the human eye image to be processed is taken as the origin, a plurality of rays are determined in the image according to the origin, and the pixel gradient amplitude of each pixel point forming each ray is calculated. Because the pixel gradient amplitude at the edge points of the pupil image is relatively large compared with other pixel points in the human eye image, the pixel points whose pixel gradient amplitudes exceed a preset threshold value can be determined as reference pixel points; that is, a set of reference pixel points that may be edge points of the pupil image is screened out of the human eye image. Because the distance between a noise point and the origin generally differs markedly from the distance between a true pupil edge point and the origin, a plurality of suspicious edge points can then be determined from the reference pixel points according to their distances from the origin, which removes some of the noise points that are not pupil edge points. After the geometric center pixel point of the remaining suspicious edge points is determined, that geometric center is taken as a new origin and the steps above are repeated. When the distance between the geometric center pixel points determined in two consecutive iterations is smaller than the set value, the pupil centers (geometric center pixel points) corresponding to the two groups of suspicious edge points are very close to, or coincide with, the true pupil center of the pupil image in the human eye image, which also indicates that no obvious noise points remain among the suspicious edge points corresponding to the current geometric center pixel point; the noise points mixed into the suspicious edge points are thus greatly reduced. In the second stage, the target ellipse model is fitted according to the suspicious edge points, and the position of the pupil image in the human eye image is marked through the target ellipse model; this does not depend on the geometric center pixel point corresponding to the suspicious edge points, so the influence of the small number of residual noise points mixed among the suspicious edge points on the positioning result is reduced. In summary, the number of noise points mixed into the suspicious edge points is greatly reduced in the first stage, and the influence of the few remaining noise points on the positioning result is reduced in the second stage, so the pupil image carried in the human eye image can be positioned more accurately.
Drawings
In order to more clearly illustrate the embodiments or the prior art solutions of the present invention, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments described in the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive labor.
Fig. 1 is a schematic flowchart of a method for positioning a pupil image in an image of a human eye according to an embodiment of the present invention;
FIG. 2 is a schematic diagram illustrating the determination of the center of a sampled pupil according to selected sampled edge points and a fitted initial elliptical model according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a device for positioning a pupil image in a human eye image according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of another apparatus for positioning a pupil image in a human eye image according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device provided in an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be described in detail and completely with reference to the following embodiments and accompanying drawings. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1, an embodiment of the present invention provides a method for positioning a pupil image in a human eye image, including:
S0: determining an initial pixel point in a human eye image to be processed, and taking the initial pixel point as the origin;
S1: determining a plurality of rays in the human eye image to be processed according to the origin, calculating the pixel gradient amplitude of each pixel point forming each ray, and determining the pixel points whose pixel gradient amplitudes are greater than a preset threshold value as reference pixel points;
S2: determining a plurality of suspicious edge points from the reference pixel points according to the distance between each reference pixel point and the origin, and determining the geometric center pixel point corresponding to the suspicious edge points;
S3: detecting whether the distance between the geometric center pixel point determined this time and the historical geometric center pixel point determined last time is smaller than a set value; if so, executing S5, otherwise executing S4;
S4: determining the current geometric center pixel point as the historical geometric center pixel point, taking it as the new origin, and executing S1;
S5: fitting a target ellipse model according to the suspicious edge points, and marking the position of the pupil image in the human eye image through the target ellipse model.
As shown in fig. 1, in the first stage, an initial pixel point in the human eye image to be processed is first taken as the origin, a plurality of rays are determined in the image according to the origin, and the pixel gradient amplitude of each pixel point forming each ray is calculated. Because the pixel gradient amplitude at the edge points of the pupil image is relatively large compared with other pixel points in the human eye image, the pixel points whose pixel gradient amplitudes are greater than the preset threshold value can be determined as reference pixel points; that is, a set of reference pixel points that may be edge points of the pupil image is screened out of the human eye image. Subsequently, because the distance between a noise point and the origin generally differs markedly from the distance between a true pupil edge point and the origin, a plurality of suspicious edge points can be determined from the reference pixel points according to the distance between each reference pixel point and the origin, which removes some of the noise points that are not pupil edge points. After the geometric center pixel point of the remaining suspicious edge points is determined, that geometric center is taken as a new origin and the steps above are repeated. When the distance between the geometric center pixel points determined in two consecutive iterations is smaller than the set value, the pupil centers (geometric center pixel points) corresponding to the two groups of suspicious edge points are very close to, or coincide with, the true pupil center of the pupil image in the human eye image, which also shows that no obvious noise points remain among the suspicious edge points corresponding to the current geometric center pixel point; the noise points mixed into the suspicious edge points are thus greatly reduced. In the second stage, the target ellipse model is fitted according to the suspicious edge points, and the position of the pupil image in the human eye image is marked through the target ellipse model; this does not depend on the geometric center pixel point corresponding to the suspicious edge points, so the influence of the small number of residual noise points mixed among the suspicious edge points on the positioning result is reduced. In summary, the number of noise points mixed into the suspicious edge points is greatly reduced in the first stage, and the influence of the few remaining noise points on the positioning result is reduced in the second stage, so the pupil image carried in the human eye image can be positioned more accurately.
In the above embodiment, the pixel gradient amplitude of a pixel point specifically refers to the absolute difference between the pixel value of that pixel point and the pixel value of the previous pixel point along the ray propagation direction. For example, in a 9 × 9 human eye image with origin (5, 5), for the ray propagating horizontally to the right from the origin, the pixel points forming the ray are (5, 5), (5, 6), (5, 7), (5, 8), (5, 9); for the pixel point (5, 6), the pixel gradient amplitude is the absolute difference between the pixel value at (5, 6) and the pixel value at (5, 5) in the human eye image.
Obviously, when the loop is executed from S1 to S4, the historical geometric center pixel point determined in the previous loop is the origin point in step S1 when the current loop determines the geometric center pixel point.
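To make step S1 concrete, the following is a minimal sketch in Python (NumPy assumed) of walking rays outward from the origin and keeping the pixel points whose gradient amplitude along the ray exceeds a threshold. The function name, the number of rays, the maximum ray length and the threshold value are illustrative assumptions, not values taken from the patent.

```python
import numpy as np

def reference_points_on_rays(gray, origin, num_rays=36, max_len=200, threshold=25):
    """Walk rays outward from `origin` and collect pixel points whose gradient
    amplitude along the ray exceeds `threshold` (candidate reference pixel points).

    gray:   2-D grayscale human eye image as a NumPy array
    origin: (row, col) pixel coordinates of the current origin
    """
    h, w = gray.shape
    r0, c0 = origin
    reference_points = []
    for k in range(num_rays):
        theta = 2.0 * np.pi * k / num_rays
        dr, dc = np.sin(theta), np.cos(theta)
        prev_val = float(gray[r0, c0])
        for step in range(1, max_len):
            r = int(round(r0 + dr * step))
            c = int(round(c0 + dc * step))
            if not (0 <= r < h and 0 <= c < w):
                break
            val = float(gray[r, c])
            # pixel gradient amplitude: absolute difference with the previous
            # pixel along the ray propagation direction
            if abs(val - prev_val) > threshold:
                reference_points.append((r, c))
            prev_val = val
    return reference_points
```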
Based on the embodiment shown in fig. 1, in a preferred embodiment of the present invention, the fitting a target ellipse model according to each suspicious edge point, and marking the position of the pupil image in the human eye image through the target ellipse model includes:
A0: fitting an initial elliptical model according to each suspicious edge point;
A1: randomly selecting at least three sampling edge points from the suspicious edge points, determining the sampling tangent line corresponding to each of the at least three sampling edge points on the initial elliptical model, and determining the sampling pupil center according to the sampling edge points and the sampling tangent lines;
A2: for each non-sampling edge point that is not selected as a sampling edge point among the suspicious edge points, determining a calibration pupil center according to the non-sampling edge point, its tangent line on the initial elliptical model, the sampling edge points and the sampling tangent lines;
A3: for each calibration pupil center, when the distance between the calibration pupil center and the sampling pupil center is not greater than a set distance, determining the non-sampling edge point corresponding to the calibration pupil center as a trusted edge point;
A4: determining the ratio between the first total number of trusted edge points and the second total number of suspicious edge points, and detecting whether the ratio is smaller than a set threshold value; if so, executing A1, otherwise executing A5;
A5: fitting a target ellipse model according to each trusted edge point, and marking the position of the pupil image in the human eye image through the target ellipse model.
In this embodiment, an initial elliptical model is first fitted according to the suspicious edge points. At least three sampling edge points are then randomly selected from the suspicious edge points, the sampling tangent line corresponding to each sampling edge point on the initial elliptical model is determined, and the sampling pupil center corresponding to these sampling edge points is determined from the sampling edge points and the sampling tangent lines. Subsequently, for each non-sampling edge point that was not selected as a sampling edge point, a calibration pupil center can be determined from the non-sampling edge point, its tangent line on the initial elliptical model, the sampling edge points and the sampling tangent lines. If the selected sampling edge points are true edge points of the pupil image, then the smaller the distance between a calibration pupil center and the sampling pupil center, the higher the probability that the corresponding non-sampling edge point is also a true edge point of the pupil image in the human eye image, and vice versa. Therefore, for each determined calibration pupil center, when its distance from the sampling pupil center is not greater than the set distance, the corresponding non-sampling edge point can be determined as a trusted edge point; obviously, a non-sampling edge point that is not determined as a trusted edge point is a noise point relative to the selected sampling edge points. Since a large number of the noise points among the suspicious edge points have already been removed in the first stage, the number of noise points remaining among the suspicious edge points should be much smaller than the number of true pupil edge points. Consequently, when the ratio between the first total number of trusted edge points (relative to the current sampling edge points) and the second total number of suspicious edge points is determined, the larger the ratio, the higher the probability that the selected sampling edge points are true edge points of the pupil image, and vice versa. Accordingly, when the determined ratio is smaller than the set threshold, the sampling edge points are reselected and the procedure is repeated to further remove the noise points mixed among the suspicious edge points; otherwise the target ellipse model is fitted according to the remaining trusted edge points to mark the position of the pupil image in the human eye image. In this way the influence of the noise points mixed among the suspicious edge points on the positioning result is reduced, and the pupil image in the human eye image is positioned more accurately.
It should be noted that when at least three sampling edge points are randomly selected from the determined suspicious edge points and the subsequent processing is performed, the larger the number of selected sampling edge points, the larger the amount of calculation involved; therefore, in order to reduce the amount of calculation and position the pupil image carried in the human eye image more quickly and accurately, the number of sampling edge points selected each time may be 3.
It should be noted that the sampling tangent corresponding to a sampling edge point on the initial elliptical model is determined in one of the following two cases:
Case A: the sampling edge point lies on the fitted initial elliptical model. In this case, the sampling tangent corresponding to the sampling edge point is the tangent of the initial elliptical model at that point.
Case B: the sampling edge point does not lie on the fitted initial elliptical model. In this case, the point on the fitted initial elliptical model with the smallest algebraic distance to the sampling edge point is determined first, and the sampling tangent corresponding to the sampling edge point is the tangent of the initial elliptical model at that nearest point.
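As an illustration of how a sampling tangent can be obtained in case A, the sketch below (Python/NumPy is an assumption of this note, not something specified by the patent) takes the initial elliptical model as general conic coefficients (a, b, c, d, e, f) and returns the tangent line at a point lying on the conic via the gradient of the implicit equation; for case B the nearest point on the conic would have to be found first, which is not shown here.

```python
import numpy as np

def conic_tangent_at(point, conic):
    """Tangent line of the conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0
    at a point lying on (or very near) the conic.

    Returns the line as (point, unit_direction). The tangent direction is
    perpendicular to the gradient of the implicit conic at that point.
    """
    x, y = point
    a, b, c, d, e, f = conic
    # gradient (normal direction) of the implicit conic equation at (x, y)
    gx = 2.0 * a * x + b * y + d
    gy = b * x + 2.0 * c * y + e
    direction = np.array([-gy, gx], dtype=float)   # rotate the normal by 90 degrees
    norm = np.linalg.norm(direction)
    if norm == 0.0:
        raise ValueError("degenerate gradient; point may be the conic center")
    return np.array([x, y], dtype=float), direction / norm
```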
Based on the foregoing embodiment, in an embodiment of the present invention, the determining a sampling pupil center according to each of the sampling edge points and each of the sampling tangents includes:
determining the midpoint between every two adjacent sampling edge points according to the initial ellipse model;
for each midpoint, determining the intersection point of the sampling tangent lines corresponding to the two sampling edge points associated with that midpoint, and determining the straight line through the midpoint and the intersection point;
and calculating, by the least squares method, the polar distance point in the human eye image that is closest to all of the straight lines, and determining the polar distance point as the sampling pupil center.
When the sampling pupil center is determined from the sampling edge points and the sampling tangent lines in step A1, if every selected sampling edge point is a true edge point of the pupil image carried in the human eye image, the determined sampling pupil center should be very close to, or coincide with, the true pupil center of the pupil image. This allows noise points mixed among the suspicious edge points to be removed subsequently, so that trusted edge points with a high probability of being true pupil edge points can be extracted. Specifically, when the calibration pupil center corresponding to each non-sampling edge point is later determined by a similar method based on the selected sampling edge points, the closer a calibration pupil center is to the sampling pupil center, the higher the probability that the corresponding non-sampling edge point is a true edge point of the pupil image; conversely, the farther apart they are, the higher the probability that the non-sampling edge point is a noise point.
For example, referring to fig. 2, suppose three sampling edge points M1, M2 and M3 are selected. After the sampling tangents corresponding to M1, M2 and M3 on the initial elliptical model are determined, the intersection point P2 of the tangents corresponding to M1 and M2, the intersection point P1 of the tangents corresponding to M1 and M3, and the intersection point P3 of the tangents corresponding to M2 and M3 can be determined. According to the initial elliptical model, M1 is adjacent to M2, M2 is adjacent to M3, and M1 is adjacent to M3, so the midpoint X between M1 and M2, the midpoint Y between M1 and M3, and the midpoint Z between M2 and M3 can be determined. Accordingly, for the midpoint X, the straight line L2 through X and the intersection point P2 of the tangents at M1 and M2 can be determined; by the same method, the straight line L1 through the midpoint Y and the intersection point P1, and the straight line L3 through the midpoint Z and the intersection point P3, can be determined. Subsequently, the polar distance point closest to the straight lines L1, L2 and L3 in the human eye image can be calculated by the least squares method or another algorithm; this point is the sampling pupil center O corresponding to the sampling edge points M1, M2 and M3.
Note that in fig. 2 the sampling pupil center O happens to be the common intersection of L1, L2 and L3; in an actual service scene there may be no common intersection among L1, L2 and L3, in which case the polar distance point closest to the straight lines L1, L2 and L3 in the human eye image must be calculated by the least squares method or another algorithm.
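A minimal sketch of the least-squares step: given the lines through each midpoint and the matching tangent intersection, the point minimising the sum of squared distances to all lines solves a 2×2 linear system. The function names and the (point, direction) line representation are assumptions made for illustration.

```python
import numpy as np

def closest_point_to_lines(lines):
    """Least-squares point minimising the sum of squared distances to a set of
    2-D lines, each given as (point_on_line, direction_vector)."""
    A = np.zeros((2, 2))
    b = np.zeros(2)
    I = np.eye(2)
    for p, d in lines:
        p = np.asarray(p, dtype=float)
        d = np.asarray(d, dtype=float)
        d = d / np.linalg.norm(d)
        P = I - np.outer(d, d)       # projector onto the line's normal space
        A += P
        b += P @ p
    # A is singular only if all lines are parallel
    return np.linalg.solve(A, b)

def sampling_pupil_center(midpoints, intersections):
    """Center from the lines through each midpoint X/Y/Z and the corresponding
    tangent intersection P2/P1/P3 (see fig. 2)."""
    lines = []
    for m, p in zip(midpoints, intersections):
        m = np.asarray(m, dtype=float)
        p = np.asarray(p, dtype=float)
        lines.append((m, p - m))
    return closest_point_to_lines(lines)
```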
Specifically, based on the foregoing embodiment, in one possible implementation manner, the fitting a target ellipse model according to each trusted edge point includes:
B0: forming a trusted set from the trusted edge points;
B1: fitting a transition ellipse model according to each trusted edge point included in the trusted set;
B2: calculating the algebraic distance between each trusted edge point in the trusted set and the transition ellipse model;
B3: calculating the average fitting deviation according to the algebraic distances and the total number of trusted edge points in the trusted set, and determining a deviation threshold according to the average fitting deviation;
B4: for each suspicious edge point, detecting the algebraic distance between the suspicious edge point and the transition ellipse model, determining the suspicious edge point as a noise point when this algebraic distance is greater than the deviation threshold, and forming a new trusted set from the suspicious edge points that are not determined to be noise points;
B5: detecting whether the trusted set formed this time is identical to the trusted set formed last time; if so, executing B6, otherwise executing B1;
B6: determining the transition ellipse model corresponding to the trusted set formed this time as the target ellipse model.
In this implementation, each time B1 to B5 are executed, the average fitting deviation calculated in that iteration, together with the transition ellipse model formed in that iteration, provides an independent measure of whether each suspicious edge point is a noise point relative to the transition ellipse model. Specifically, a deviation threshold is determined from the average fitting deviation: when the algebraic distance between a suspicious edge point and the transition ellipse model is greater than the deviation threshold, the suspicious edge point is a noise point relative to the transition ellipse model formed in this iteration; otherwise it can be determined as a trusted edge point, i.e. treated as a true edge point of the pupil image in the human eye image. In this way, suspicious edge points that were previously mis-labelled as noise points can be re-admitted as trusted edge points, and points that were wrongly kept as trusted edge points can be identified as noise points, so that a new trusted set is formed from the suspicious edge points not determined to be noise points. The method loops over each newly formed trusted set until two consecutively formed trusted sets are identical, at which point the noise points among the suspicious edge points have been removed as completely and accurately as possible, and the trusted edge points in the final trusted set are the true edge points of the pupil image in the human eye image. Accordingly, the transition ellipse model corresponding to the final trusted set is determined as the target ellipse model; when the pupil image carried in the human eye image is marked through this target ellipse model, the influence of noise points on the positioning result is avoided more effectively, and the pupil image can be positioned more accurately.
In this embodiment, the average fitting deviation specifically refers to an average value of algebraic distances between each trusted edge point in the currently-formed trusted set and the currently-formed transition ellipse model.
In this embodiment, the deviation threshold determined in each iteration is usually 1 to 2 times the average fitting deviation calculated in that iteration; the specific multiple may be adjusted according to the actual service scenario.
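The following sketch illustrates one way the B0-B6 loop could look, assuming a plain algebraic conic fit (constant term fixed to -1) in place of whatever ellipse-fitting routine an actual implementation would use; the 1-2 times multiple mentioned above is exposed as the parameter k. Function names and defaults are assumptions.

```python
import numpy as np

def fit_conic(points):
    """Algebraic least-squares fit of a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1
    (the constant term is fixed to avoid the trivial all-zero solution)."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    D = np.column_stack([x * x, x * y, y * y, x, y])
    coef, *_ = np.linalg.lstsq(D, np.ones(len(pts)), rcond=None)
    return coef                       # (a, b, c, d, e), with f = -1 implied

def algebraic_distance(points, coef):
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    a, b, c, d, e = coef
    return np.abs(a * x * x + b * x * y + c * y * y + d * x + e * y - 1.0)

def refine_trusted_set(suspicious_points, trusted_points, k=1.5, max_iter=50):
    """Iterate B1-B5: refit on the trusted set, rebuild the trusted set from all
    suspicious points within k times the average fitting deviation, and stop
    when the set no longer changes; returns (final trusted set, final conic)."""
    trusted = [tuple(p) for p in trusted_points]
    coef = None
    for _ in range(max_iter):
        coef = fit_conic(trusted)                             # B1
        avg_dev = algebraic_distance(trusted, coef).mean()    # B2-B3
        threshold = k * avg_dev
        dists = algebraic_distance(suspicious_points, coef)   # B4
        new_trusted = [tuple(p) for p, dist in zip(suspicious_points, dists)
                       if dist <= threshold]
        if new_trusted == trusted:                            # B5
            break
        trusted = new_trusted
    return trusted, coef                                      # B6
```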
In a preferred embodiment of the present invention, to prevent the situation in which so many noise points are mixed among the edge points that a group of sampling edge points with a high probability of being true pupil edge points cannot be selected quickly, which would prevent the pupil image carried in the human eye image from being located quickly, the method further includes:
recording the ratio determined each time, and counting the number of consecutive iterations in which the determined ratio is smaller than the set threshold;
when the number of consecutive iterations reaches a set value, selecting the largest ratio among the recorded ratios as the target ratio;
and fitting the target ellipse model according to the trusted edge points corresponding to the target ratio, and marking the position of the pupil image in the human eye image through the target ellipse model.
In this embodiment, the ratio determined in each iteration is recorded, together with the number of consecutive iterations in which the ratio stays below the set threshold. When this count reaches the set value, i.e. a group of sampling edge points with a high probability of being true pupil edge points has repeatedly failed to be selected, the largest recorded ratio can be chosen as the target ratio, because the ratio determined in each iteration directly reflects the probability that the sampling edge points selected in that iteration are true edge points of the pupil image in the human eye image. The target ellipse model is then fitted according to the trusted edge points corresponding to the target ratio, and the position of the pupil image in the human eye image is marked through this model. This avoids the problem that the pupil image carried in the human eye image cannot be located quickly because a suitable group of sampling edge points cannot be selected within a reasonable time.
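A small sketch of this fallback logic, assuming a hypothetical run_iteration() callable that performs one round of A1-A4 and returns the ratio together with the trusted edge points of that round:

```python
def select_trusted_with_fallback(run_iteration, ratio_threshold, max_cycles):
    """Run the sampling loop; if the ratio stays below `ratio_threshold` for
    `max_cycles` consecutive rounds, fall back to the round whose ratio was
    largest.  `run_iteration()` is a hypothetical helper returning
    (ratio, trusted_edge_points) for one random sampling round."""
    history = []
    for _ in range(max_cycles):
        ratio, trusted = run_iteration()
        if ratio >= ratio_threshold:
            return trusted                       # normal exit of the loop
        history.append((ratio, trusted))
    # fallback: the round with the largest recorded ratio wins
    return max(history, key=lambda item: item[0])[1]
```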
Based on the embodiment shown in fig. 1, in a preferred embodiment of the present invention, the determining a plurality of suspicious edge points from each reference pixel point according to the distance between each reference pixel point and the origin point includes:
detecting the spacing distance between each reference pixel point and the origin point;
calculating the expected value and the standard deviation of the separation distances;
extracting a plurality of target spacing distances from the spacing distances according to the expected value and the standard deviation;
and determining the reference pixel points corresponding to the target spacing distances as suspicious edge points.
In this embodiment, in an actual service scene the number of noise points among the detected reference pixel points is far lower than the number of true pupil edge points; the distance from most noise points to the origin differs markedly from the distance from the true edge points to the origin, while the distances from different true pupil edge points to the origin differ only slightly. By detecting the distance between each reference pixel point and the origin and calculating the expected value and standard deviation of these distances, the following holds for the data set formed by the distances: the larger the influence of a given distance on the dispersion of the data set, the higher the probability that the reference pixel point corresponding to that distance is a noise point. Therefore, the noise distances with large dispersion can be identified according to the calculated expected value and standard deviation, the target distances that are not determined to be noise distances are extracted, and the reference pixel points corresponding to the target distances are taken as suspicious edge points, thereby removing a large number of the noise points mixed among the reference pixel points.
For example, suppose that in one execution of S1 to S4 the distances between the reference pixel points A, B, C, D, E and the origin O are 19, 20, 21, 20 and 35 respectively. After the expected value and standard deviation of these distances are calculated, the standard deviation is found to be greater than the corresponding reference value, which indicates that the dispersion of the distances is too high, i.e. there may be noise points among the reference pixel points. According to the difference between the calculated expected value and each distance, the distance with the largest influence on the dispersion of the data set is "35"; that is, the reference pixel point corresponding to the distance "35" is very likely a noise point. The noise distance "35" is removed, the other four target distances are extracted, and the reference pixel points corresponding to these four target distances are determined as suspicious edge points.
Generally, with the expected value u and the standard deviation δ of the separation distances calculated, reference pixel points whose distance from the origin is greater than u + 1.5δ or less than u - 1.5δ may be removed; every reference pixel point that is not removed is a suspicious edge point.
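A minimal sketch of this filtering rule, using the u ± 1.5δ bounds from the example above (the factor 1.5 is only the example's value and could be tuned):

```python
import numpy as np

def suspicious_edge_points(reference_points, origin, factor=1.5):
    """Keep reference pixel points whose distance to the origin lies within
    u ± factor*delta, where u and delta are the mean and standard deviation
    of all distances to the origin."""
    pts = np.asarray(reference_points, dtype=float)
    o = np.asarray(origin, dtype=float)
    dists = np.linalg.norm(pts - o, axis=1)
    u, delta = dists.mean(), dists.std()
    keep = (dists >= u - factor * delta) & (dists <= u + factor * delta)
    return pts[keep]
```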
Based on any of the foregoing embodiments, in a preferred embodiment of the present invention, before determining the initial pixel point in the human eye image to be processed, the method further includes:
collecting a video image;
respectively filtering each frame of original image of the video image to obtain a preprocessed image corresponding to each frame of the original image;
and respectively extracting the human eye image to be processed carried by the current preprocessed image from each preprocessed image.
Through the embodiment, the pupil images respectively carried by each frame of original image in the continuously acquired video image can be positioned.
In this embodiment, sample images for face detection and human eye detection can be collected in advance, and a face detection model and a human eye detection model can be trained using the Haar-like rectangular features of the sample images and the Adaboost algorithm. After the video image is collected, each frame of the original image may contain a certain amount of external noise owing to the quality of the electronic components in the image acquisition equipment (such as a camera) and the illumination intensity of the shooting environment. Each frame of the original image can therefore be preprocessed: specifically, the original image is filtered to remove external noise such as additive noise, salt-and-pepper noise and Gaussian noise, preventing the noise from masking the original information of the image and improving the signal-to-noise ratio of the original image, so that a preprocessed image corresponding to each frame of the original image is obtained. Further, for each preprocessed image, the trained face detection model is first used to locate the face (i.e. the position of the face image is marked on the current preprocessed image), and the trained human eye detection model is then used to locate the eyes on the marked face image, so that the human eye image carried by the current preprocessed image (i.e. the human eye image to be processed) is extracted.
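As a rough illustration of this preprocessing pipeline, the sketch below uses OpenCV's pretrained Haar cascade classifiers in place of the face and eye detection models that the patent trains itself from Haar-like features and Adaboost, and combines median and Gaussian filtering as one possible choice of noise suppression; the cascade file names, filter kernel sizes and detection parameters are assumptions.

```python
import cv2

# Pretrained OpenCV cascades stand in for the patent's own trained detectors.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def extract_eye_images(video_path):
    """Filter every frame of the video and crop the detected eye regions."""
    eye_images = []
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # median filtering suppresses salt-and-pepper noise, Gaussian filtering
        # suppresses Gaussian noise (one possible preprocessing choice)
        pre = cv2.GaussianBlur(cv2.medianBlur(gray, 3), (5, 5), 0)
        for (x, y, w, h) in face_cascade.detectMultiScale(pre, 1.3, 5):
            face = pre[y:y + h, x:x + w]
            for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(face):
                eye_images.append(face[ey:ey + eh, ex:ex + ew])
    cap.release()
    return eye_images
```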
Further, when the human eye image corresponding to each frame of the original image in the video image needs to be processed, in order to extract the suspicious edge points of the pupil images carried in the multiple human eye images and to position those pupil images, then, to save processing time and improve processing efficiency, in an embodiment of the present invention S5 further includes: determining the geometric center pixel point as the pupil center of the pupil image carried in the human eye image to be processed.
For example, when a pupil image carried by a human eye image to be processed needs to be located, if the original image corresponding to that human eye image is the first frame of the acquired video image, then in step S5 the geometric center pixel point is further determined as the pupil center of the pupil image carried in this human eye image; suppose its pixel coordinate in the human eye image is (x, y). When the human eye image corresponding to the second frame of the video image subsequently needs to be processed in order to determine the suspicious edge points of the pupil image it carries, the pixel point (x, y) in that human eye image can be taken as the initial pixel point, because it is close to the pupil center of the pupil image carried by the current human eye image. Taking the pixel point (x, y) as the origin, the suspicious edge points of the pupil image carried by the current human eye image can be determined more quickly and accurately in the subsequent processing.
Correspondingly, in a preferred embodiment of the present invention, before determining the initial pixel point in the human eye image to be processed, the method further includes:
sequentially selecting an unselected human eye image to be processed, and determining the corresponding frame number of the selected human eye image to be processed in the video image;
then, the determining an initial pixel point in the to-be-processed human eye image includes:
when the frame number is 1, determining the geometric center of the selected human eye image to be processed as an initial pixel point; or when the frame number is greater than 1, determining an initial pixel point in the to-be-processed eye image according to the pupil center of the pupil image carried by the to-be-processed eye image selected last time.
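A trivial sketch of this initial-point rule, with the assumption that the previous frame's pupil center is already expressed in the current eye image's pixel coordinates:

```python
def initial_pixel_point(frame_number, eye_image_shape, previous_pupil_center=None):
    """Choose the starting origin for the current eye image: the geometric
    center of the image for the first frame, otherwise the pupil center found
    for the previously processed frame."""
    if frame_number == 1 or previous_pupil_center is None:
        h, w = eye_image_shape[:2]
        return (h // 2, w // 2)
    return previous_pupil_center
```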
It should be noted that, in order to avoid a large number of noise points formed by the user's eyelashes and eyelids (which lie extremely close to the pupil edge points in the human eye image) being erroneously determined as reference pixel points when the frame number is greater than 1 and the initial pixel point is determined in the human eye image to be processed, in a preferred embodiment of the present invention, when the initial pixel point is taken as the origin and the plurality of rays are determined, the deviation angle of each ray drawn from the origin relative to the horizontal direction lies between -30 degrees and 30 degrees, or between 150 degrees and 210 degrees.
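The sketch below generates ray directions under this restriction; note that the upper bound of the first angular range is not explicit in the text, so the symmetric range of -30 to 30 degrees is assumed here, and the number of rays is likewise an assumption.

```python
import numpy as np

def ray_directions(frame_number, num_rays=36):
    """Unit direction vectors for the rays drawn from the origin.  For frames
    after the first, only deviation angles of roughly -30..30 and 150..210
    degrees from the horizontal are used (the -30..30 upper bound is assumed
    by symmetry), so rays avoid the eyelash/eyelid regions above and below."""
    if frame_number == 1:
        angles = np.linspace(0.0, 360.0, num_rays, endpoint=False)
    else:
        half = num_rays // 2
        angles = np.concatenate([
            np.linspace(-30.0, 30.0, half),
            np.linspace(150.0, 210.0, num_rays - half),
        ])
    rad = np.deg2rad(angles)
    return np.column_stack([np.cos(rad), np.sin(rad)])
```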
Based on the same concept as the method embodiment of the present invention, referring to fig. 3, an embodiment of the present invention provides a device for positioning a pupil image in a human eye image, including:
an initial point determining module 301, configured to determine an initial pixel point in a to-be-processed human eye image, and use the initial pixel point as an origin point;
an edge point detection module 302, configured to determine a plurality of rays in the to-be-processed human eye image according to the origin, calculate a pixel gradient amplitude of each pixel point constituting each ray, and determine, as a reference pixel point, a pixel point corresponding to each pixel gradient amplitude greater than a preset threshold;
a noise point filtering module 303, configured to determine, according to a distance between each reference pixel point and the origin, a plurality of suspicious edge points from each reference pixel point, and determine a geometric center pixel point corresponding to each suspicious edge point;
a center detection module 304, configured to detect whether a distance value between the currently determined geometric center pixel point and the previously determined historical geometric center pixel point is smaller than a set value, and if so, trigger the model fitting module 306; otherwise, the transition processing module 305 is triggered;
the transition processing module 305 is configured to determine the geometric center pixel as a historical geometric center pixel, determine the geometric center pixel as a new origin, and trigger the edge point detecting module;
the model fitting module 306 is configured to fit a target ellipse model according to each suspicious edge point, and mark the position of the pupil image in the human eye image through the target ellipse model.
In a preferred embodiment of the present invention, the model fitting module 306 includes: a preprocessing unit, a sampling processing unit, a calibration processing unit, a trusted processing unit, a detection processing unit and a marking processing unit; wherein:
the preprocessing unit is used for fitting an initial elliptical model according to each suspicious edge point;
the sampling processing unit is used for randomly selecting at least three sampling edge points from the suspicious edge points, determining that the at least three sampling edge points respectively correspond to sampling tangent lines on the initial elliptical model, and determining the sampling pupil center according to the sampling edge points and the sampling tangent lines;
the calibration processing unit is configured to determine, for each non-sampling edge point that is not selected as a sampling edge point in each suspicious edge point, a calibration pupil center according to the non-sampling edge point, a tangent line that the non-sampling edge point corresponds to on the initial elliptical model, each sampling edge point, and each sampling tangent line;
the trusted processing unit is configured to, for each calibration pupil center, determine a non-sampling edge point corresponding to the calibration pupil center as a trusted edge point when an interval distance between the calibration pupil center and the sampling pupil center is not greater than a set distance;
the detection processing unit is configured to determine a ratio between a first total amount of the trusted edge points and a second total amount of the suspicious edge points, detect whether the ratio is smaller than a set threshold, and trigger the sampling processing unit if the ratio is smaller than the set threshold; otherwise, trigger the marking processing unit;
and the marking processing unit is used for fitting a target ellipse model according to each trusted edge point and marking the position of the pupil image in the human eye image through the target ellipse model.
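An illustrative sketch of the interplay between the sampling, calibration, trusted, detection and marking processing units follows; sampling_pupil_center, calibrated_pupil_center and fit_ellipse are assumed helper routines, and the concrete thresholds are illustrative only:

```python
import random
import numpy as np

def fit_with_trusted_points(suspicious_points, initial_ellipse,
                            max_center_shift=3.0, min_ratio=0.6, max_rounds=50):
    """Resample until enough suspicious edge points agree with the sampled
    pupil center, then fit the target ellipse from the trusted subset."""
    for _ in range(max_rounds):
        samples = random.sample(suspicious_points, 3)                    # sampling unit
        sample_center = sampling_pupil_center(samples, initial_ellipse)

        trusted = []
        for point in suspicious_points:
            if point in samples:
                continue                                                 # non-sampling points only
            center = calibrated_pupil_center(point, samples, initial_ellipse)  # calibration unit
            if np.hypot(center[0] - sample_center[0],
                        center[1] - sample_center[1]) <= max_center_shift:
                trusted.append(point)                                    # trusted unit

        if len(trusted) / len(suspicious_points) >= min_ratio:           # detection unit
            return fit_ellipse(trusted)                                  # marking unit
    return fit_ellipse(suspicious_points)  # give up after max_rounds and fit everything
```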
Referring to fig. 4, in a preferred embodiment of the present invention, the apparatus further includes: an image acquisition module 401, a filtering processing module 402, and an image extraction module 403; wherein:
the image acquisition module 401 is configured to acquire a video image;
the filtering processing module 402 is configured to perform filtering processing on each frame of original image of the video image to obtain a preprocessed image corresponding to each frame of the original image;
the image extracting module 403 is configured to respectively extract a to-be-processed eye image carried by the current preprocessed image from each preprocessed image.
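A minimal sketch of this acquisition, filtering and extraction pipeline is given below; Gaussian smoothing and the OpenCV Haar eye cascade are illustrative stand-ins, since the embodiment does not prescribe a particular filter or eye detector:

```python
import cv2

def eye_images_from_video(path):
    """Yield one to-be-processed human eye image per frame: filter the original
    frame, then crop the eye region carried by the preprocessed image."""
    eye_detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")    # illustrative detector
    capture = cv2.VideoCapture(path)
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        preprocessed = cv2.GaussianBlur(gray, (5, 5), 0)  # filtering step
        for (x, y, w, h) in eye_detector.detectMultiScale(preprocessed, 1.1, 5):
            yield preprocessed[y:y + h, x:x + w]          # human eye image crop
    capture.release()
```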
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. On the hardware level, the electronic device comprises a processor and, optionally, an internal bus, a network interface, and a memory. The memory may include a volatile memory, such as a random-access memory (RAM), and may further include a non-volatile memory, such as at least one disk memory. Of course, the electronic device may also include hardware required for other services.
The processor, the network interface, and the memory may be connected to each other via an internal bus, which may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one double-headed arrow is shown in FIG. 5, but this does not indicate only one bus or one type of bus.
The memory is used for storing programs. In particular, a program may include program code comprising computer operating instructions. The memory may include both volatile memory and non-volatile storage, and provides instructions and data to the processor.
In a possible implementation manner, the processor reads the corresponding computer program from the non-volatile memory into the memory and then runs it, or obtains the corresponding computer program from another device, so as to form, on a logical level, the apparatus for positioning a pupil image in a human eye image. The processor executes the program stored in the memory, so as to implement, through the executed program, the apparatus for positioning a pupil image in a human eye image provided by any embodiment of the present invention.
The method performed by the apparatus for positioning a pupil image in a human eye image according to the embodiment of the present invention shown in fig. 5 can be applied to, or implemented by, a processor. The processor may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The various methods, steps and logic blocks disclosed in the embodiments of the present invention may be implemented or performed. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The steps of the method disclosed in connection with the embodiments of the present invention may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as a RAM, a flash memory, a ROM, a PROM or an EPROM, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
Embodiments of the present invention also provide a computer-readable storage medium storing one or more programs, the one or more programs including instructions which, when executed by an electronic device including a plurality of application programs, enable the electronic device to perform the method for positioning a pupil image in a human eye image provided by any embodiment of the present invention, and in particular to perform the method shown in fig. 1.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units or modules by function, respectively. Of course, the functionality of the units or modules may be implemented in the same one or more software and/or hardware when implementing the invention.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include both permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The invention may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments of the present invention are described in a progressive manner, and the same and similar parts among the embodiments can be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only an example of the present invention, and is not intended to limit the present invention. Various modifications and alterations to this invention will become apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the scope of the claims of the present invention.

Claims (7)

1. A method for positioning a pupil image in a human eye image, comprising:
S0: determining an initial pixel point in a to-be-processed human eye image, and taking the initial pixel point as an origin;
S1: determining a plurality of rays in the to-be-processed human eye image according to the origin, calculating a pixel gradient amplitude of each pixel point forming each ray, and determining the pixel points whose pixel gradient amplitudes are larger than a preset threshold value as reference pixel points;
S2: determining a plurality of suspicious edge points from the reference pixel points according to the spacing distance between each reference pixel point and the origin, and determining a geometric center pixel point corresponding to the suspicious edge points;
S3: detecting whether the distance value between the geometric center pixel point determined this time and the historical geometric center pixel point determined last time is smaller than a set value; if so, executing S5; otherwise, executing S4;
S4: determining the geometric center pixel point determined this time as the historical geometric center pixel point, determining the geometric center pixel point determined this time as a new origin, and executing S1;
S5: fitting a target ellipse model according to each suspicious edge point, and marking the position of the pupil image in the human eye image through the target ellipse model, which specifically comprises the following steps:
A0: fitting an initial elliptical model according to each suspicious edge point;
A1: randomly selecting at least three sampling edge points from the suspicious edge points, determining the sampling tangent lines respectively corresponding to the at least three sampling edge points on the initial elliptical model, and determining a sampling pupil center according to the sampling edge points and the sampling tangent lines, which specifically comprises:
determining a midpoint between every two adjacent sampling edge points according to the initial ellipse model;
for each midpoint, determining the intersection point of the sampling tangent lines respectively corresponding to the two sampling edge points of the midpoint, and determining the straight line on which the midpoint and the intersection point lie;
calculating, according to a least square method, a polar distance point in the human eye image which is closest to each of the straight lines, and determining the polar distance point as the sampling pupil center;
A2: for each non-sampling edge point which is not selected as a sampling edge point among the suspicious edge points, determining a calibration pupil center according to the non-sampling edge point, the tangent line of the non-sampling edge point on the initial elliptical model, each sampling edge point and each sampling tangent line;
A3: for each calibration pupil center, when the spacing distance between the calibration pupil center and the sampling pupil center is not greater than a set distance, determining the non-sampling edge point corresponding to the calibration pupil center as a trusted edge point;
A4: determining a ratio between a first total amount of the trusted edge points and a second total amount of the suspicious edge points, and detecting whether the ratio is smaller than a set threshold value; if so, executing A1; otherwise, executing A5;
A5: fitting a target ellipse model according to each trusted edge point, and marking the position of the pupil image in the human eye image through the target ellipse model.
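By way of illustration of step A1 of claim 1: for an exact ellipse, the straight line through the midpoint of a chord and the intersection of the tangents at its endpoints passes through the center, so the point closest, in the least-squares sense, to all such lines estimates the sampling pupil center. The following is a minimal sketch under the assumptions that the sampling edge points are ordered along the initial ellipse and that the tangent intersections have already been computed; all names are illustrative:

```python
import numpy as np

def closest_point_to_lines(points_a, points_b):
    """Least-squares point minimizing the squared distances to the straight
    lines through the paired points (a_i, b_i)."""
    normals, offsets = [], []
    for a, b in zip(points_a, points_b):
        d = np.asarray(b, dtype=float) - np.asarray(a, dtype=float)
        n = np.array([-d[1], d[0]]) / np.hypot(d[0], d[1])   # unit normal of the line
        normals.append(n)
        offsets.append(n @ np.asarray(a, dtype=float))
    solution, *_ = np.linalg.lstsq(np.asarray(normals), np.asarray(offsets), rcond=None)
    return tuple(solution)

def sampling_pupil_center(edge_points, tangent_intersections):
    """edge_points: sampling edge points ordered along the initial ellipse;
    tangent_intersections[i]: intersection of the tangents at points i and i+1."""
    midpoints = [((x1 + x2) / 2.0, (y1 + y2) / 2.0)
                 for (x1, y1), (x2, y2) in zip(edge_points, edge_points[1:])]
    return closest_point_to_lines(midpoints, tangent_intersections)
```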
2. The method of claim 1,
the fitting of the target ellipse model according to each trusted edge point includes:
B0: forming a trusted set by using each trusted edge point;
B1: fitting a transition ellipse model according to each trusted edge point included in the trusted set;
B2: calculating an algebraic distance between each trusted edge point in the trusted set and the transition ellipse model;
B3: calculating an average fitting deviation according to the algebraic distances and the total number of trusted edge points in the trusted set, and determining a deviation threshold according to the average fitting deviation;
B4: for each suspicious edge point, detecting the algebraic distance between the suspicious edge point and the transition ellipse model, determining the suspicious edge point as a noise point when that algebraic distance is greater than the deviation threshold, and forming a new trusted set by using the edge points which are not determined as noise points;
B5: detecting whether the trusted set formed this time is completely the same as the trusted set formed last time; if so, executing B6; otherwise, executing B1;
B6: determining the transition ellipse model corresponding to the trusted set formed this time as the target ellipse model.
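An illustrative sketch of the refinement loop of steps B0 to B6, assuming fit_ellipse returns conic coefficients (a, b, c, d, e, f) of ax² + bxy + cy² + dx + ey + f = 0, using the absolute conic residual as the algebraic distance, and taking the deviation threshold as a multiple of the average fitting deviation (the multiple is an illustrative choice):

```python
def algebraic_distance(point, conic):
    """Absolute conic residual |a*x^2 + b*x*y + c*y^2 + d*x + e*y + f|."""
    a, b, c, d, e, f = conic
    x, y = point
    return abs(a * x * x + b * x * y + c * y * y + d * x + e * y + f)

def refine_target_ellipse(suspicious_points, fit_ellipse,
                          deviation_factor=2.0, max_rounds=20):
    """Refit a transition ellipse on a trusted set until the set stops changing."""
    trusted = list(suspicious_points)                                 # B0
    for _ in range(max_rounds):
        conic = fit_ellipse(trusted)                                  # B1
        residuals = [algebraic_distance(p, conic) for p in trusted]   # B2
        threshold = deviation_factor * sum(residuals) / len(trusted)  # B3
        new_trusted = [p for p in suspicious_points
                       if algebraic_distance(p, conic) <= threshold]  # B4: drop noise points
        if new_trusted == trusted:                                    # B5: set unchanged
            return conic                                              # B6: target ellipse model
        trusted = new_trusted
    return conic
```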
3. The method of claim 1,
the determining a plurality of suspicious edge points from each reference pixel point according to the distance between each reference pixel point and the origin point comprises:
detecting the spacing distance between each reference pixel point and the origin;
calculating an expected value and a standard deviation of the spacing distances;
extracting a plurality of target spacing distances from the spacing distances according to the expected value and the standard deviation;
and determining the reference pixel points corresponding to the target spacing distances as the suspicious edge points.
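An illustrative sketch of this statistical filter follows; keeping the spacing distances within k standard deviations of the expected value is one possible way to extract the target spacing distances, which the claim leaves open:

```python
import numpy as np

def filter_suspicious_points(reference_points, origin, k=1.0):
    """Keep the reference pixel points whose spacing distance to the origin
    lies within k standard deviations of the expected (mean) distance."""
    pts = np.asarray(reference_points, dtype=float)
    distances = np.hypot(pts[:, 0] - origin[0], pts[:, 1] - origin[1])
    mean, std = distances.mean(), distances.std()
    keep = np.abs(distances - mean) <= k * std
    return [tuple(p) for p in pts[keep]]
```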
4. The method according to any one of claims 1 to 3,
before determining the initial pixel points in the human eye image to be processed, the method further comprises the following steps:
collecting a video image;
respectively filtering each frame of original image of the video image to obtain a preprocessed image corresponding to each frame of the original image;
and respectively extracting the human eye image to be processed carried by the current preprocessed image from each preprocessed image.
5. The method of claim 4,
before determining the initial pixel points in the human eye image to be processed, the method further comprises the following steps:
sequentially selecting an unselected human eye image to be processed, and determining the corresponding frame number of the selected human eye image to be processed in the video image;
then, the determining an initial pixel point in the to-be-processed human eye image includes:
when the frame number is 1, determining the geometric center of the selected human eye image to be processed as an initial pixel point; or when the frame number is greater than 1, determining an initial pixel point in the to-be-processed eye image according to the pupil center of the pupil image carried by the to-be-processed eye image selected last time;
and/or,
the S5 further includes determining the geometric center pixel point as a pupil center of a pupil image carried in the to-be-processed human eye image.
6. An apparatus for positioning a pupil image in a human eye image, comprising:
the initial point determining module is used for determining an initial pixel point in a to-be-processed human eye image and taking the initial pixel point as an origin;
the edge point detection module is used for determining a plurality of rays in the to-be-processed human eye image according to the origin, calculating a pixel gradient amplitude of each pixel point forming each ray, and determining the pixel points whose pixel gradient amplitudes are greater than a preset threshold value as reference pixel points;
the noise point filtering module is used for determining a plurality of suspicious edge points from the reference pixel points according to the spacing distance between each reference pixel point and the origin, and determining a geometric center pixel point corresponding to the suspicious edge points;
the center detection module is used for detecting whether the distance value between the geometric center pixel point determined this time and the historical geometric center pixel point determined last time is smaller than a set value, and if so, triggering the model fitting module; otherwise, triggering a transition processing module;
the transition processing module is used for determining the geometric center pixel point determined this time as the historical geometric center pixel point, determining the geometric center pixel point determined this time as a new origin, and triggering the edge point detection module;
the model fitting module is used for fitting a target ellipse model according to each suspicious edge point and marking the position of the pupil image in the human eye image through the target ellipse model; the model fitting module specifically comprises: a preprocessing unit, a sampling processing unit, a calibration processing unit, a trusted processing unit, a detection processing unit and a marking processing unit; the preprocessing unit is used for fitting an initial elliptical model according to each suspicious edge point; the sampling processing unit is configured to randomly select at least three sampling edge points from the suspicious edge points, determine the sampling tangent lines respectively corresponding to the at least three sampling edge points on the initial elliptical model, and determine a sampling pupil center according to each sampling edge point and each sampling tangent line, which specifically includes: determining a midpoint between every two adjacent sampling edge points according to the initial ellipse model; for each midpoint, determining the intersection point of the sampling tangent lines respectively corresponding to the two sampling edge points of the midpoint, and determining the straight line on which the midpoint and the intersection point lie; calculating, according to a least square method, a polar distance point in the human eye image which is closest to each of the straight lines, and determining the polar distance point as the sampling pupil center; the calibration processing unit is configured to determine, for each non-sampling edge point that is not selected as a sampling edge point among the suspicious edge points, a calibration pupil center according to the non-sampling edge point, the tangent line of the non-sampling edge point on the initial elliptical model, each sampling edge point, and each sampling tangent line; the trusted processing unit is configured to, for each calibration pupil center, determine the non-sampling edge point corresponding to the calibration pupil center as a trusted edge point when the spacing distance between the calibration pupil center and the sampling pupil center is not greater than a set distance; the detection processing unit is configured to determine a ratio between a first total amount of the trusted edge points and a second total amount of the suspicious edge points, detect whether the ratio is smaller than a set threshold, and trigger the sampling processing unit if the ratio is smaller than the set threshold; otherwise, trigger the marking processing unit; and the marking processing unit is used for fitting a target ellipse model according to each trusted edge point and marking the position of the pupil image in the human eye image through the target ellipse model.
7. The apparatus of claim 6,
further comprising: the device comprises an image acquisition module, a filtering processing module and an image extraction module; wherein the content of the first and second substances,
the image acquisition module is used for acquiring video images;
the filtering processing module is used for respectively filtering each frame of original image of the video image to obtain a preprocessed image corresponding to each frame of the original image;
the image extraction module is used for respectively extracting the human eye image to be processed carried by the current preprocessed image from each preprocessed image.
CN201810934979.8A 2018-08-16 2018-08-16 Method and device for positioning pupil image in human eye image Active CN109086734B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810934979.8A CN109086734B (en) 2018-08-16 2018-08-16 Method and device for positioning pupil image in human eye image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810934979.8A CN109086734B (en) 2018-08-16 2018-08-16 Method and device for positioning pupil image in human eye image

Publications (2)

Publication Number Publication Date
CN109086734A CN109086734A (en) 2018-12-25
CN109086734B true CN109086734B (en) 2021-04-02

Family

ID=64793428

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810934979.8A Active CN109086734B (en) 2018-08-16 2018-08-16 Method and device for positioning pupil image in human eye image

Country Status (1)

Country Link
CN (1) CN109086734B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111832344B (en) * 2019-04-17 2023-10-24 深圳熙卓科技有限公司 Dynamic pupil detection method and device
CN110349203B (en) * 2019-07-15 2023-05-09 深圳市威尔德医疗电子有限公司 Ultrasonic equipment and method for measuring blood vessel diameter in ultrasonic image thereof
CN110348408A (en) * 2019-07-16 2019-10-18 上海博康易联感知信息技术有限公司 Pupil positioning method and device
CN110619628B (en) * 2019-09-09 2023-05-09 博云视觉(北京)科技有限公司 Face image quality assessment method
CN110807427B (en) * 2019-11-05 2024-03-01 中航华东光电(上海)有限公司 Sight tracking method and device, computer equipment and storage medium
CN111339982A (en) * 2020-03-05 2020-06-26 西北工业大学 Multi-stage pupil center positioning technology implementation method based on features
CN113160161B (en) * 2021-04-14 2023-07-11 歌尔股份有限公司 Method and device for detecting defects at edge of target
CN113379744B (en) * 2021-08-12 2021-11-19 山东大拇指喷雾设备有限公司 Nozzle device surface defect detection method and system based on image processing
CN116503387B (en) * 2023-06-25 2024-03-26 聚时科技(深圳)有限公司 Image detection method, device, equipment, system and readable storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106774863A (en) * 2016-12-03 2017-05-31 西安中科创星科技孵化器有限公司 A kind of method that Eye-controlling focus are realized based on pupil feature
CN108197622A (en) * 2017-12-26 2018-06-22 新智数字科技有限公司 A kind of detection method of license plate, device and equipment

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6144754A (en) * 1997-03-28 2000-11-07 Oki Electric Industry Co., Ltd. Method and apparatus for identifying individuals
US7961952B2 (en) * 2007-09-27 2011-06-14 Mitsubishi Electric Research Laboratories, Inc. Method and system for detecting and tracking objects in images
CN107067018A (en) * 2016-12-09 2017-08-18 南京理工大学 A kind of hot line robot bolt recognition methods based on random Hough transformation and SVM

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106774863A (en) * 2016-12-03 2017-05-31 西安中科创星科技孵化器有限公司 A kind of method that Eye-controlling focus are realized based on pupil feature
CN108197622A (en) * 2017-12-26 2018-06-22 新智数字科技有限公司 A kind of detection method of license plate, device and equipment

Also Published As

Publication number Publication date
CN109086734A (en) 2018-12-25

Similar Documents

Publication Publication Date Title
CN109086734B (en) Method and device for positioning pupil image in human eye image
CN109086691B (en) Three-dimensional face living body detection method, face authentication and identification method and device
CN108256404B (en) Pedestrian detection method and device
CN108932456B (en) Face recognition method, device and system and storage medium
CN108985199A (en) Detection method, device and the storage medium of commodity loading or unloading operation
JP2019079553A (en) System and method for detecting line in vision system
US20190156499A1 (en) Detection of humans in images using depth information
EP3182067A1 (en) Method and apparatus for determining spacecraft attitude by tracking stars
CN112016475B (en) Human body detection and identification method and device
CN109840883B (en) Method and device for training object recognition neural network and computing equipment
US11210502B2 (en) Comparison method and apparatus based on a plurality of face image frames and electronic device
CN111104925B (en) Image processing method, image processing apparatus, storage medium, and electronic device
US20180168446A1 (en) Method of detecting boundary between iris and sclera
US11036967B2 (en) Method and device for face selection, recognition and comparison
CN112634201A (en) Target detection method and device and electronic equipment
CN113505682A (en) Living body detection method and device
CN109614858B (en) Pupil center detection method and device
CN108289176B (en) Photographing question searching method, question searching device and terminal equipment
CN109145821B (en) Method and device for positioning pupil image in human eye image
CN113129298A (en) Definition recognition method of text image
CN110782425A (en) Image processing method, image processing device and electronic equipment
CN114998283A (en) Lens blocking object detection method and device
CN110782439B (en) Method and device for auxiliary detection of image annotation quality
CN111476132A (en) Video scene recognition method and device, electronic equipment and storage medium
CN109376585B (en) Face recognition auxiliary method, face recognition method and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant