CN111369496B - Pupil center positioning method based on star ray - Google Patents

Pupil center positioning method based on star ray

Info

Publication number
CN111369496B
CN111369496B (application CN202010098080.4A)
Authority
CN
China
Prior art keywords
pupil
edge points
edge
area
star
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010098080.4A
Other languages
Chinese (zh)
Other versions
CN111369496A (en
Inventor
韩慧妍
马启玮
韩燮
杨婷
李俊伯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
North University of China
Original Assignee
North University of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by North University of China filed Critical North University of China
Priority to CN202010098080.4A priority Critical patent/CN111369496B/en
Publication of CN111369496A publication Critical patent/CN111369496A/en
Application granted granted Critical
Publication of CN111369496B publication Critical patent/CN111369496B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • G06T5/70
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/181Segmentation; Edge detection involving edge growing; involving edge linking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20024Filtering details
    • G06T2207/20032Median filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30041Eye; Retina; Ophthalmic

Abstract

The invention belongs to the technical field of image processing and specifically relates to a pupil center positioning method based on star rays. The method first preprocesses the image, then detects the pupil region with a star-ray algorithm, obtains a binarization threshold within the ROI by an iterative method, optimizes the binarized pupil region, and removes erroneous star-ray edge points. The method accurately detects the pupil center even under occlusion and satisfies real-time and robustness requirements.

Description

Pupil center positioning method based on star ray
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a pupil center positioning method based on a star ray.
Background
Pupil localization is the first and most important step in many computer vision applications: face recognition, facial feature tracking, facial expression analysis, and iris detection and localization all depend on it. The accuracy of pupil center localization directly affects all further processing and analysis. Pupil center localization is likewise the first step in gaze tracking. Eye tracking systems provide a powerful analysis tool for studying real-time cognitive processing and information transmission, with two principal application areas: diagnostic analysis and human-computer interaction. A diagnostic eye tracking system offers an objective, quantitative way to record a reader's gaze, and the information it provides has significant applications in many fields.
Current pupil center positioning methods fall mainly into feature-based methods and statistical-learning-based methods. Feature-based methods are further divided into integral projection and Hough circle detection, while statistical-learning-based methods train a model on large amounts of data to locate the pupil center. The integral projection method only computes image gray values and is computationally cheap, but it is easily affected by illumination, eyelashes, and eyelid occlusion, and therefore has large detection errors. Hough circle detection is computationally expensive and struggles to meet real-time requirements. Statistical-learning-based methods require large amounts of training data, so the workload is heavy, and the learned models are complex and cannot accurately position the pupil center.
Disclosure of Invention
To address these problems, the invention provides a pupil center positioning method based on star rays.
To achieve this purpose, the invention adopts the following technical scheme:
A pupil center positioning method based on star rays: first preprocess the image, then detect the pupil region with a star-ray algorithm, obtain a binarization threshold within the ROI by an iterative method, optimize the binarized pupil region, and remove erroneous star-ray edge points.
Further, the image preprocessing proceeds as follows: data are collected with an infrared camera, the captured video frames are converted to grayscale, and median filtering is applied to the grayscale image. Median filtering reduces the influence of eyelashes and is the basis for effectively extracting the binarized pupil region.
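As an illustration, the grayscale conversion and the 1 x 7 median filtering template mentioned in the embodiment can be sketched in plain NumPy; the channel-averaging grayscale conversion and the edge padding are illustrative assumptions, not specified by the patent:

```python
import numpy as np

def to_gray(frame):
    """Convert an H x W x 3 frame to grayscale by channel averaging
    (a luminance-weighted conversion could be used instead; plain
    averaging is an illustrative assumption)."""
    return frame.mean(axis=2).astype(np.uint8)

def median_filter_1x7(gray):
    """Horizontal 1 x 7 median filter, as in the patent's embodiment,
    to suppress thin vertical structures such as eyelashes."""
    h, w = gray.shape
    padded = np.pad(gray, ((0, 0), (3, 3)), mode='edge')  # replicate borders
    # stack the 7 shifted views, then take the per-pixel median
    windows = np.stack([padded[:, i:i + w] for i in range(7)], axis=0)
    return np.median(windows, axis=0).astype(np.uint8)
```

A narrow horizontal window keeps the pupil contour intact while removing one- or two-pixel-wide eyelash responses.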
Further, detecting the pupil region with the star-ray algorithm proceeds as follows: the center point of the pupil region detected by yolov3 is taken as a point inside the pupil region. yolov3 is a fast, real-time object detection model with a simple architecture that detects small objects well, so the pupil region can be detected in real time.
Further, the binarization threshold within the ROI is obtained by the following iterative algorithm:
Step 1: compute the mean gray level of the pupil region, denote it a, and take a as the initial threshold;
Step 2: compute the mean b of the pixels greater than the threshold a and the mean c of the pixels less than or equal to a;
Step 3: compute the new threshold a = (b + c)/2;
Step 4: if a(k) = a(k+1), the current value is the final threshold; otherwise return to Step 2 and iterate. Because the gray level of the pupil region is lower than that of the surrounding areas, the iterative method adaptively yields a binarization threshold for the pupil region.
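The four thresholding steps above can be sketched as follows; the convergence tolerance and the iteration cap are illustrative assumptions:

```python
import numpy as np

def iterative_threshold(region, max_iter=100):
    """Adaptive binarization threshold for the pupil ROI (Steps 1-4):
    start from the mean gray level, then repeatedly average the means
    of the two classes until the threshold stops changing."""
    a = region.mean()                       # Step 1: initial threshold
    for _ in range(max_iter):
        above = region[region > a]          # Step 2: split at threshold a
        below = region[region <= a]
        b = above.mean() if above.size else a
        c = below.mean() if below.size else a
        new_a = (b + c) / 2.0               # Step 3: new threshold
        if abs(new_a - a) < 1e-6:           # Step 4: converged
            break
        a = new_a
    return a
```

On a bimodal ROI (dark pupil, brighter surroundings) the threshold settles between the two modes in a few iterations.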
Further, the binarized pupil region is optimized as follows: apply median filtering to the binarized pupil region to remove rough edges and make the contour as smooth as possible; then fill the reflective area, i.e., fill holes inside the pupil region with a scan-line algorithm running top-to-bottom and left-to-right. Next, cast rays every 20 degrees from the center point of the pupil region to obtain 18 pupil edge points. Median filtering effectively smooths the edges of the binary image, the star-ray model extracts a more accurate pupil edge, and the scan-line algorithm fills holes inside the pupil region, preventing extracted edge points from landing inside the pupil. The edge points extracted in this step are the coarse pupil edge points.
Further, erroneous star-ray edge points are removed. The erroneous points are of three kinds: eyelid-occlusion edge points, edge points of the reflective area, and outliers of the binary image.
Still further, the eyelid-occlusion edge points are upper-eyelid-occlusion edge points, removed as follows: compute the slopes h1, h2, h3, ... between adjacent initial pupil edge points; compute the slopes of the edge points above the pupil center; and remove the pupil edge points at the occluded positions. The slope at an eyelid-occluded pupil edge point is low, approaching 0, while the slope at a non-occluded edge point is higher and clearly differs from that at the occlusion.
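A sketch of the slope-based removal of upper-eyelid points; the slope threshold and the use of only the next neighbor for the slope estimate are illustrative assumptions:

```python
import numpy as np

def remove_upper_eyelid_points(points, center, slope_thresh=0.3):
    """Drop coarse edge points above the pupil center whose slope to the
    next edge point is close to 0: a near-horizontal run of points is
    characteristic of the upper-eyelid boundary, not the elliptical
    pupil edge. `slope_thresh` is an assumed parameter."""
    cy, _ = center
    n = len(points)
    kept = []
    for i, (y, x) in enumerate(points):
        y2, x2 = points[(i + 1) % n]
        slope = abs((y2 - y) / (x2 - x)) if x2 != x else np.inf
        if y < cy and slope < slope_thresh:  # above center, near-horizontal
            continue                          # likely eyelid occlusion
        kept.append((y, x))
    return kept
```

Restricting the test to points above the center keeps the computation small, matching the observation that occlusion comes mainly from the upper eyelid.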
Still further, outliers of the binary image are eliminated as follows: the pupil is approximately an ellipse, so reasonable pupil edge points follow an elliptical distribution, and a ray emitted from the ellipse center should be perpendicular to the tangent direction at the pupil edge point. Points whose tangent is not perpendicular to the ray emitted from the pupil center point, that is, points with large error, are eliminated. This method effectively removes eyelid-occlusion edge points and binary-image outliers with little computation and high speed.
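The perpendicularity test can be sketched as follows, estimating the local tangent at each edge point from its two neighbors; the cosine tolerance is an assumed parameter:

```python
import numpy as np

def remove_outliers_by_perpendicularity(points, center, cos_thresh=0.5):
    """Keep edge points whose local tangent (estimated as the chord
    between the two neighboring edge points) is roughly perpendicular
    to the ray from the pupil center; for a true elliptical edge point
    the ray is (nearly) normal to the contour."""
    pts = np.asarray(points, dtype=float)
    c = np.asarray(center, dtype=float)
    n = len(pts)
    kept = []
    for i in range(n):
        tangent = pts[(i + 1) % n] - pts[i - 1]   # local tangent estimate
        ray = pts[i] - c                           # ray from the center
        denom = np.linalg.norm(tangent) * np.linalg.norm(ray)
        if denom == 0:
            continue
        cos_a = abs(np.dot(tangent, ray)) / denom
        if cos_a < cos_thresh:                     # near-perpendicular: keep
            kept.append(tuple(points[i]))
    return kept
```

For points lying on a circle the chord between symmetric neighbors is exactly perpendicular to the radius, so well-placed edge points pass the test with a wide margin.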
Still further, edge points of the reflective area are eliminated as follows: first fit an ellipse to the pupil edge points from which the error points above have been removed; because reflective-area points lie closer to the fitted pupil center, the points closest to the pupil center are eliminated, removing the reflective-area edge points. This step brings the fitted ellipse closer to the true pupil and effectively locates the pupil center point.
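A sketch of the two-pass fitting scheme; for brevity it uses a least-squares circle fit (Kasa method) as a simplified stand-in for the patent's ellipse fit, and the fraction of points dropped is an illustrative assumption:

```python
import numpy as np

def fit_circle(points):
    """Least-squares (Kasa) circle fit: solve x^2 + y^2 = D*x + E*y + F
    linearly, then recover center and radius. A simplified stand-in for
    ellipse fitting, sufficient to illustrate the two-pass scheme."""
    pts = np.asarray(points, dtype=float)
    y, x = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = sol[0] / 2, sol[1] / 2
    r = np.sqrt(sol[2] + cx ** 2 + cy ** 2)
    return (cy, cx), r

def remove_glint_points(points, drop_frac=0.05):
    """Two-pass removal: fit once, then discard the points nearest the
    fitted center (glint/reflection edge points lie well inside the
    true pupil contour); the caller refits on what remains.
    `drop_frac` is an assumed parameter."""
    (cy, cx), _ = fit_circle(points)
    pts = np.asarray(points, dtype=float)
    d = np.hypot(pts[:, 0] - cy, pts[:, 1] - cx)
    n_drop = max(1, int(len(points) * drop_frac))
    keep_idx = np.argsort(d)[n_drop:]      # drop the closest points
    return [points[i] for i in sorted(keep_idx)]
```

After the near-center points are dropped, a second fit over the survivors gives the refined pupil center.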
Compared with the prior art, the invention has the following advantages:
By adopting a coarse-to-fine strategy, pupil edge points with large errors are eliminated step by step, refining the pupil center point. The method accurately locates the pupil center even under occlusion.
yolov3 is fast, has a simple model, detects small objects well, and runs in real time; the iterative method adaptively obtains the binarization threshold of the pupil region, so the pupil can be extracted effectively; median filtering smooths the edges of the binary image and the star-ray model extracts a more accurate pupil edge; the scan-line algorithm fills holes inside the pupil region and prevents extracted edge points from landing inside the pupil.
The method has a small computational load, high speed, high pupil-center positioning accuracy, and strong robustness; it meets real-time requirements and adapts to different individuals.
Drawings
FIG. 1 is a flow chart of the present invention;
fig. 2 is a diagram of pupil region detection by yolov3;
FIG. 3 is the image binarized by the iterative method;
FIG. 4 is a diagram of the optimized binarized pupil region;
FIG. 5 is a diagram of the effective filling of holes in the pupillary region;
FIG. 6 is a star ray edge point error map;
FIG. 7 is an image with pupil edge points at the occlusion removed;
FIG. 8 is a culling map of outliers of a binarized image;
FIG. 9 is a graph of the effect of a first fit and the effect of a second fit;
fig. 10 is a diagram of the precise detection of the pupil center point according to the present invention.
Detailed Description
Example 1
The invention is a pupil center positioning method based on star rays; this embodiment adopts a 1 x 7 median filtering template.
1 Image preprocessing
2 Detection of the pupil region
The pupil region is detected by yolov3, with the detection result shown in fig. 2: the center point of the pupil region detected by yolov3 is taken as a point inside the pupil region; rays are cast every 20 degrees from this center point, yielding 18 pupil edge points. yolov3 detects the pupil region accurately and efficiently.
3 Binarization of the pupil region
The binarization threshold within the ROI is obtained by the following iterative algorithm:
1) compute the mean gray level of the region, denote it a, and take a as the initial threshold;
2) compute the mean b of the pixels greater than the threshold a and the mean c of the pixels less than or equal to a;
3) compute the new threshold a = (b + c)/2;
4) if a(k) = a(k+1), the current value is the final threshold; otherwise return to step 2) and iterate.
The binarized image by the iterative method is shown in fig. 3:
3.1 Optimization of the pupil region after binarization
Median filtering is applied to the binarized pupil region to remove edge roughness and smooth the contour as much as possible, as shown in fig. 4; after median filtering the pupil edge becomes smooth, reducing the error of subsequent edge-point extraction. The reflective area is then filled: holes inside the pupil region are filled by a scan-line algorithm (top-to-bottom, left-to-right). The effect is shown in fig. 5, where the interior of the pupil is effectively filled.
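The scan-line hole filling can be sketched with cumulative scans in the four directions: a background pixel enclosed by pupil pixels both horizontally and vertically is filled. This assumes a roughly convex pupil blob, which holds for the binarized pupil:

```python
import numpy as np

def scanline_fill(binary):
    """Fill interior holes (e.g. the corneal glint) in the binarized
    pupil: a background pixel becomes foreground when pupil pixels are
    seen on its left AND right (horizontal scan) AND above AND below
    (vertical scan)."""
    fg = binary.astype(bool)
    # cumulative "seen a pupil pixel" masks in all four scan directions
    left = np.logical_or.accumulate(fg, axis=1)
    right = np.logical_or.accumulate(fg[:, ::-1], axis=1)[:, ::-1]
    top = np.logical_or.accumulate(fg, axis=0)
    bottom = np.logical_or.accumulate(fg[::-1, :], axis=0)[::-1, :]
    filled = fg | (left & right & top & bottom)
    return filled.astype(binary.dtype)
```

Filling the glint hole before casting rays prevents a ray from stopping at the glint boundary and reporting an interior point as a pupil edge point.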
4 Star-ray edge point errors:
eyelid-occlusion edge points; edge points of the reflective region; outliers from the binarized image;
as shown in fig. 6. To improve the precision of the pupil center point, these three kinds of erroneous edge points must be eliminated.
4.1 Removal of upper-eyelid-occlusion edge points
The slopes h1, h2, h3, ... between adjacent initial pupil edge points are computed. As fig. 6 shows, the occlusion is mainly upper-eyelid occlusion, so to reduce computation only the slopes of edge points above the pupil center are calculated. The slope at an eyelid-occluded pupil edge point is low, approaching 0, while the slope at a non-occluded edge point is higher and clearly different. The effect of removing the occluded pupil edge points is shown in fig. 7: the upper-eyelid occlusion points are eliminated well.
4.2 rejection of outliers in binary images
The pupil is approximately an ellipse, so reasonable pupil edge points follow an elliptical distribution, and a ray from the ellipse center should be perpendicular to the tangent direction at each pupil edge point. Points whose tangent is not perpendicular to the ray from the pupil center point, that is, points with large error, are rejected. The effect is shown in fig. 8: such error points are handled well.
4.3 removal of edge points of the reflective region
As fig. 6 shows, the edge points of the reflective region cannot be removed by the slope criterion. An ellipse is therefore fitted to the pupil edge points from which the error points have been removed; because reflective-region points lie closer to the fitted pupil center, the points closest to the pupil center are rejected, eliminating the reflective-region edge points.
The effects of the first and second fits are shown in fig. 9: the positioning accuracy of the pupil center is improved. Before removal the pupil center is [471.203, 148.868]; after removal it is [470.726, 147.615].
After these three elimination steps, the method accurately detects the pupil center point under occlusion, as shown in fig. 10. Testing shows that the method is efficient and accurate and satisfies real-time and robustness requirements.
Matters not described in detail in this specification are well known to those skilled in the art. Although illustrative embodiments of the invention have been described above to aid understanding, the invention is not limited to these embodiments; various changes apparent to those skilled in the art that remain within the spirit and scope of the appended claims are protected.

Claims (6)

1. A pupil center positioning method based on star rays, characterized by comprising the following steps: first preprocessing an image, then detecting the pupil region with a star-ray algorithm, obtaining a binarization threshold within an ROI (region of interest) by an iterative method, optimizing the binarized pupil region, and removing erroneous star-ray edge points;
the pupil region is detected with the star-ray algorithm as follows: the center point of the pupil region detected by yolov3 is taken as a point inside the pupil region; rays are cast every 20 degrees from this center point, yielding 18 pupil edge points;
the binarization threshold within the ROI is obtained by the following iterative algorithm:
step 1, compute the mean gray level of the pupil region, denote it a, and take a as the initial threshold;
step 2, compute the mean b of the pixels greater than the threshold a and the mean c of the pixels less than or equal to a;
step 3, compute the new threshold a = (b + c)/2;
step 4, if a(k) = a(k+1), the current value is the final threshold; otherwise return to step 2 and iterate;
the binarized pupil region is optimized as follows: median filtering is applied to the binarized pupil region to remove edge roughness and smooth the contour as much as possible; the reflective region is filled, i.e., holes inside the pupil region are filled by a scan-line algorithm running top-to-bottom and left-to-right.
2. The star-ray-based pupil center positioning method according to claim 1, characterized in that the image preprocessing comprises: collecting data with an infrared camera, converting the captured video frames to grayscale images, and applying median filtering to the grayscale images.
3. The star-ray-based pupil center positioning method according to claim 1, characterized in that the erroneous star-ray edge points removed are, respectively: eyelid-occlusion edge points, edge points of the reflective area, and binary-image outliers.
4. The star-ray-based pupil center positioning method according to claim 3, characterized in that the eyelid-occlusion edge points are upper-eyelid-occlusion edge points, removed as follows: computing the slopes h1, h2, h3, ... between adjacent initial pupil edge points; computing the slopes of the edge points above the pupil center; and removing the pupil edge points at the occluded positions.
5. The star-ray-based pupil center positioning method according to claim 3, characterized in that binary-image outliers are eliminated as follows: the pupil is approximately an ellipse, so reasonable pupil edge points follow an elliptical distribution, and the ray emitted from the ellipse center should be perpendicular to the tangent direction at the pupil edge point; points whose tangent is not perpendicular to the ray emitted from the pupil center point, that is, points with larger error, are rejected.
6. The star-ray-based pupil center positioning method according to claim 3, characterized in that edge points of the reflective area are eliminated as follows: first fitting an ellipse to the pupil edge points from which the error points have been removed; because reflective-area points lie closer to the fitted pupil center, eliminating the points closest to the pupil center, thereby removing the reflective-area edge points.
CN202010098080.4A 2020-02-18 2020-02-18 Pupil center positioning method based on star ray Active CN111369496B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010098080.4A CN111369496B (en) 2020-02-18 2020-02-18 Pupil center positioning method based on star ray

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010098080.4A CN111369496B (en) 2020-02-18 2020-02-18 Pupil center positioning method based on star ray

Publications (2)

Publication Number Publication Date
CN111369496A CN111369496A (en) 2020-07-03
CN111369496B true CN111369496B (en) 2022-07-01

Family

ID=71210703

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010098080.4A Active CN111369496B (en) 2020-02-18 2020-02-18 Pupil center positioning method based on star ray

Country Status (1)

Country Link
CN (1) CN111369496B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8520310B2 (en) * 2008-09-26 2013-08-27 Konica Minolta Opto, Inc. Image display device, head-mounted display and head-up display
US8600124B2 (en) * 2004-09-16 2013-12-03 Imatx, Inc. System and method of predicting future fractures
CN108509873A (en) * 2018-03-16 2018-09-07 新智认知数据服务有限公司 Pupil image edge point extracting method and device
CN109829403A (en) * 2019-01-22 2019-05-31 淮阴工学院 A kind of vehicle collision avoidance method for early warning and system based on deep learning
CN109993749A (en) * 2017-12-29 2019-07-09 北京京东尚科信息技术有限公司 The method and apparatus for extracting target image
CN110348408A (en) * 2019-07-16 2019-10-18 上海博康易联感知信息技术有限公司 Pupil positioning method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9773154B2 (en) * 2015-10-09 2017-09-26 Universidad Nacional Autónoma de México System for the identification and quantification of helminth eggs in environmental samples


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
"Servo Control Based on Pupil Detection Eye Tracking";Constantin Barabaşa;《2018 IEEE 24th International Symposium for Design and Technology in Electronic Packaging​》;20190103;327-330 *
"基于融合星射线法和椭圆拟合法的瞳孔定位研究";高源;《电子世界》;20170408;145 *
"基于骨架的三维点云模型分割";韩慧妍 等;《计算机工程与设计》;20190515;第40卷(第5期);1418-1423 *
"红外头盔式眼动仪的瞳孔中心定位算法";王军宁 等;《西安电子科技大学学报(自然科学版)》;20110620;第38卷(第3期);7-12 *

Also Published As

Publication number Publication date
CN111369496A (en) 2020-07-03

Similar Documents

Publication Publication Date Title
Haleem et al. A novel adaptive deformable model for automated optic disc and cup segmentation to aid glaucoma diagnosis
Aquino et al. Detecting the optic disc boundary in digital fundus images using morphological, edge detection, and feature extraction techniques
Giancardo et al. Elliptical local vessel density: a fast and robust quality metric for retinal images
CN105243669A (en) Method for automatically identifying and distinguishing eye fundus images
WO2020098038A1 (en) Pupil tracking image processing method
CN113342161B (en) Sight tracking method based on near-to-eye camera
He et al. A new segmentation approach for iris recognition based on hand-held capture device
Giachetti et al. Multiresolution localization and segmentation of the optical disc in fundus images using inpainted background and vessel information
KR102250688B1 (en) Method and device for automatic vessel extraction of fundus photography using registration of fluorescein angiography
Wang et al. Nucleus segmentation of cervical cytology images based on depth information
Choi et al. Accurate eye pupil localization using heterogeneous CNN models
CN111507932A (en) High-specificity diabetic retinopathy characteristic detection method and storage equipment
Biyani et al. A clustering approach for exudates detection in screening of diabetic retinopathy
CN114202795A (en) Method for quickly positioning pupils of old people
CN109446935B (en) Iris positioning method for iris recognition in long-distance traveling
CN114020155A (en) High-precision sight line positioning method based on eye tracker
Zhou et al. Segmentation of optic disc in retinal images using an improved gradient vector flow algorithm
KR102250689B1 (en) Method and device for automatic vessel extraction of fundus photography using registration of fluorescein angiography
Balakrishnan NDC-IVM: An automatic segmentation of optic disc and cup region from medical images for glaucoma detection
Shetty et al. A novel approach for glaucoma detection using fractal analysis
Jana et al. A semi-supervised approach for automatic detection and segmentation of optic disc from retinal fundus image
Malek et al. Automated optic disc detection in retinal images by applying region-based active aontour model in a variational level set formulation
Radman et al. Iris segmentation in visible wavelength images using circular gabor filters and optimization
CN111369496B (en) Pupil center positioning method based on star ray
Karakaya et al. An iris segmentation algorithm based on edge orientation for off-angle iris recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant