CN113190117B - Pupil and light spot positioning method, data calculation method and related device

Info

Publication number: CN113190117B
Authority: CN (China)
Prior art keywords: pupil, image, light spot, determining, iris
Legal status: Active (granted)
Application number: CN202110472872.8A
Other languages: Chinese (zh)
Other versions: CN113190117A
Inventors: 翟进有 (Zhai Jinyou), 沈忱 (Shen Chen)
Assignee: Nanchang Virtual Reality Institute Co Ltd (current and original)
Application filed by Nanchang Virtual Reality Institute Co Ltd, with priority to CN202110472872.8A
Publication of application CN113190117A, followed by grant and publication of CN113190117B

Classifications

    • G06F 3/013: Eye tracking input arrangements (interaction between user and computer, e.g. for immersion in virtual reality)
    • G06F 18/241: Pattern recognition; classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N 3/045: Neural networks; architectures, e.g. combinations of networks
    • G06N 3/08: Neural networks; learning methods
    • G06V 10/28: Image preprocessing; quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections
    • G06V 40/193: Eye characteristics, e.g. of the iris; preprocessing and feature extraction
    • G06V 40/197: Eye characteristics, e.g. of the iris; matching and classification

Abstract

A pupil and light spot positioning method, a data calculation method and a related device are provided. The pupil positioning method comprises the following steps: extracting a feature region from the current near-eye image by using a pre-trained deep convolutional neural network model to obtain a pupil region image of a preset size; binarizing the pupil region image to obtain a preprocessed pupil region image; extracting pupil contour feature points from the preprocessed pupil region image according to an edge extraction algorithm; and performing ellipse fitting on the pupil contour feature points to determine the center position of the pupil. The precise pupil region is located by the deep convolutional neural network, ellipse boundary fitting is performed on the pupil within the located region, and the pupil center is located. In this way, noise regions such as eyelashes and the iris are avoided, and the efficiency and accuracy of pupil positioning are greatly improved.

Description

Pupil and light spot positioning method, data calculation method and related device
Technical Field
The invention relates to the technical field of computer vision, in particular to a pupil and light spot positioning method, a data calculation method and a related device.
Background
Gaze tracking, also called eye tracking, is a technology that uses a camera to observe the movement of the human eyeball in real time and estimates the gaze direction and the coordinates of the gaze point by some method.
Non-intrusive gaze tracking based on image processing is currently the main trend. The pupil center-corneal reflection method is one of the popular gaze tracking algorithms; it estimates the gaze direction by calculating the vector between the pupil center and the corneal reflection. However, the inherent physiological mechanisms of the human eye and the non-linearity, randomness and complexity of eye movement make it difficult to locate the pupil center and the corneal reflection spot. In practical applications, the accuracy and real-time performance with which these parameters are extracted also pose challenges for gaze estimation.
The core data of a gaze tracking algorithm are the positions of the pupil and the light spots. In the prior art, whether the pupil or the light spot is being located, threshold segmentation and edge detection are applied directly to the image of the whole eye, so noise such as eyelashes and highlight regions of the eyeball causes large edge detection errors and degrades the accuracy of pupil and spot positioning.
Disclosure of Invention
In view of the above, it is necessary to provide a pupil and light spot positioning method, a data calculation method and a related device to solve the prior-art problem of poor accuracy in calculating the core data of gaze tracking algorithms.
A pupil location method, comprising:
extracting a characteristic region of the current near-eye image by using a pre-trained deep convolution neural network model to obtain a pupil region image with a preset size;
carrying out binarization processing on the pupil area image to obtain a preprocessed pupil area image;
extracting pupil contour characteristic points in the preprocessed pupil area image according to an edge extraction algorithm;
and carrying out ellipse fitting on the pupil contour characteristic points to determine the central position of the pupil.
Further, in the pupil location method, the step of extracting the pupil contour feature points in the preprocessed pupil region image according to an edge extraction algorithm includes:
clustering analysis is carried out on the preprocessed pupil area image by utilizing a clustering threshold algorithm, so that all pixel points in the preprocessed pupil area image are clustered into pixel points of a pupil area, pixel points of a light spot area and pixel points of an iris area;
and searching, among the pixel points of the pupil region, for pixel points whose neighborhood contains pixel points of the iris region, and determining these pixel points as the pupil contour feature points.
Further, in the pupil positioning method, before the step of extracting the feature region from the current near-eye image by using the pre-trained deep convolutional neural network model, the method further includes:
acquiring a plurality of near-eye images, and performing pupil region annotation on each near-eye image by using a rectangular frame with a preset size to obtain a pupil region image;
acquiring the vertex coordinates, length and width of each pupil region rectangle, and taking the acquired vertex coordinates, lengths and widths as label data;
and constructing a deep convolution neural network model, and training the deep convolution neural network model by using the obtained multiple near-eye images and the label data.
Further, in the above pupil positioning method, the step of fitting an ellipse to the feature points of the pupil profile to determine the center position of the pupil includes:
carrying out ellipse fitting on the pupil contour characteristic points, and determining the pupil contour of the pupil according to the fitted ellipse;
and calculating the central position of the pupil according to the pupil contour.
The invention locates the precise pupil region with a deep convolutional neural network, performs ellipse boundary fitting on the pupil within the located region to fit the pupil contour, and locates the pupil center. In this way, noise regions such as eyelashes and the iris are avoided, and the efficiency and accuracy of pupil positioning are greatly improved.
The invention also discloses a data calculation method for sight tracking, which comprises the following steps:
extracting a characteristic region of a current near-eye image by using a pre-trained deep convolution neural network model to obtain an iris region image with a first preset size and a pupil region image with a second preset size, wherein the iris region image is an image covering the iris region and the pupil region;
respectively carrying out binarization processing on the iris region image and the pupil region image to respectively obtain a pre-processed iris region image and a pre-processed pupil region image;
determining the central position of a pupil in the current near-to-eye image according to the preprocessed pupil area image;
determining the number of light spots in the current near-eye image and the central position of each light spot according to the preprocessed iris area image, and determining the number of missing light spots in the current near-eye image according to the number of the light spots;
when the number of the missing light spots is smaller than a preset value, determining the ID of each light spot according to the central position of each light spot;
and when the number of the missing light spots is larger than or equal to a preset value, determining the ID of each light spot in the current near-to-eye image according to the central position of the pupil.
Further, in the data calculation method, the step of determining the center position of the pupil in the current near-eye image according to the preprocessed pupil region image includes:
extracting pupil contour characteristic points in the preprocessed pupil area image according to an edge extraction algorithm;
and carrying out ellipse fitting on the pupil contour characteristic points to determine the central position of the pupil.
Further, in the data calculation method, the step of extracting the pupil contour feature points in the pre-processed pupil region image according to an edge extraction algorithm includes:
clustering analysis is carried out on the preprocessed pupil area image by utilizing a clustering threshold algorithm, so that all pixel points in the preprocessed pupil area image are clustered into pixel points of a pupil area, pixel points of a light spot area and pixel points of an iris area;
and searching, among the pixel points of the pupil region, for pixel points whose neighborhood contains pixel points of the iris region, and determining these pixel points as the pupil contour feature points.
Further, the above data calculation method, wherein the step of determining the number of light spots in the current near-eye image and the center position of each light spot from the preprocessed iris region image comprises:
performing cluster analysis on the preprocessed iris region image by using a clustering threshold algorithm, so as to cluster all pixel points in the preprocessed iris region image into pixel points of the pupil region, pixel points of the light spot region, pixel points of the iris region, and pixel points of the region outside the iris;
determining the number of light spots in the current near-to-eye image according to the clustering number of the pixel points of the light spot area;
and performing mean calculation on the coordinates of the pixel points of the light spot area in the transverse direction and the longitudinal direction, and taking the coordinates after the mean calculation as the coordinates of the central position of the corresponding light spot.
Further, in the data calculation method, the number of the light spots in the standard near-eye image is 4, and the preset value is 3, wherein,
when the number of missing light spots is 0, the step of determining the ID of each light spot according to the center position of each light spot comprises:
determining the current distance from each light spot to each boundary of the iris area image according to the central position of each light spot and the boundary position information of the iris area image to obtain current distance information;
determining the ID of each light spot according to the deviation between the current distance information and standard distance information, wherein the standard distance information comprises the distance from each standard light spot in a standard near-eye image to each boundary of a standard iris region;
when the number of missing light spots is 1 or 2, the step of determining the ID of each light spot according to the central position of each light spot comprises:
determining the current distance from each light spot to each boundary of the iris area image according to the central position of each light spot and the boundary position information of the iris area image;
determining the ID of each light spot according to the deviation of the current distance between each light spot and each boundary of the iris area;
when the number of the missing light spots is 3, the step of determining the ID of each light spot in the current near-to-eye image according to the center position of the pupil includes:
and determining the ID of the light spot according to the offset direction of the center position of the pupil relative to the center position of a standard pupil in a standard near-eye image.
Further, in the data calculation method, the step of determining the ID of each of the light spots according to the deviation between the current distance information and the standard distance information includes:
establishing a 4-by-4 matrix according to the current distance information, wherein each column of data in the matrix is the current distance from the center of each light spot to the upper boundary, the lower boundary, the left boundary and the right boundary of the iris area image, and establishing a 4-by-4 standard matrix according to the standard distance information, wherein each column of data in the standard matrix is the standard distance from the center of each standard light spot to the upper boundary, the lower boundary, the left boundary and the right boundary of the standard iris area;
performing dot product operation on the matrix and the standard matrix, and searching for the standard light spot with the minimum deviation with each light spot according to the operation result;
and determining the ID of each light spot as the ID of the searched corresponding standard light spot.
The invention locates the precise regions of the iris and the pupil with a deep convolutional neural network and extracts the iris region image and the pupil region image. Ellipse boundary fitting is performed on the pupil within the pupil region image to obtain the pupil contour and locate the pupil center. Dynamic threshold segmentation within the iris region image locates the spot centers, so less computation is needed than when extracting spots from the whole-eye image, and the influence of eyeball highlight regions is reduced. The hand-crafted feature-vector algorithm removes noise bright spots other than the true spots on the iris. By designing the feature vector of the center-offset matrix, the offset relative to direct gaze when the pupil shifts is detected, and the IDs of the spots missing from the iris are predicted. Compared with the traditional method of locating spots by threshold segmentation and contour information, this method can predict spots that do not appear on the iris and can determine accurate spot IDs when spots are missing.
The invention also discloses a light spot positioning method, which comprises the following steps:
extracting a characteristic region of the current near-eye image by using a pre-trained deep convolution neural network model to obtain an iris region image with a first preset size;
carrying out binarization processing on the iris area image to obtain a preprocessed iris area image;
determining the number of light spots in the current near-eye image and the central position of each light spot according to the preprocessed iris area image, and determining the number of missing light spots in the current near-eye image according to the number of the light spots;
when the number of the missing light spots is smaller than a preset value, determining the ID of each light spot according to the central position of each light spot;
when the number of the missing light spots is larger than or equal to a preset value, extracting a characteristic region of the current near-eye image by using a pre-trained deep convolutional neural network model to obtain a pupil region image with a second preset size;
carrying out binarization processing on the pupil area image to obtain a preprocessed pupil area image, and determining the center position of a pupil in the current near-to-eye image according to the preprocessed pupil area image;
and determining the ID of each light spot in the current near-to-eye image according to the center position of the pupil.
The invention also discloses a pupil positioning device, which is characterized by comprising:
the first image extraction module is used for extracting a characteristic region of the current near-eye image by using a pre-trained deep convolutional neural network model so as to obtain a pupil region image with a preset size;
the first processing module is used for carrying out binarization processing on the pupil area image to obtain a preprocessed pupil area image;
the first contour extraction module is used for extracting pupil contour characteristic points in the preprocessed pupil area image according to an edge extraction algorithm;
and the first pupil positioning module is used for carrying out ellipse fitting on the pupil contour characteristic points so as to determine the central position of the pupil.
Further, the pupil positioning device further includes:
the marking module is used for acquiring a plurality of near-eye images and marking the pupil area of each near-eye image by using a rectangular frame with a preset size to obtain a pupil area image;
the data acquisition module is used for acquiring the vertex coordinates, the length and the width of each pupil region figure and taking the acquired vertex coordinates, the length and the width of each figure as label data;
and the pre-training module is used for constructing a deep convolutional neural network model and training the deep convolutional neural network model by using the acquired multiple near-eye images and the label data.
The invention also discloses a data calculation device for sight tracking, comprising:
the second image extraction module is used for extracting a characteristic region of the current near-eye image by using a pre-trained deep convolutional neural network model so as to obtain an iris region image with a first preset size and a pupil region image with a second preset size, wherein the iris region image is an image covering an iris region and a pupil region;
the second processing module is used for respectively carrying out binarization processing on the iris area image and the pupil area image to respectively obtain a preprocessed iris area image and a preprocessed pupil area image;
the second pupil positioning module is used for determining the center position of a pupil in the current near-to-eye image according to the preprocessed pupil area image;
the first determining module is used for determining the number of light spots and the central position of each light spot in the current near-eye image according to the preprocessed iris area image, and determining the number of missing light spots in the current near-eye image according to the number of the light spots;
the first light spot positioning module is used for determining the ID of each light spot according to the central position of each light spot when the number of the missing light spots is smaller than a preset value;
and the second light spot positioning module is used for determining the ID of each light spot in the current near-eye image according to the central position of the pupil when the number of the missing light spots is greater than or equal to a preset value.
The invention also discloses a light spot positioning device, which comprises:
the third image extraction module is used for extracting a characteristic region of the current near-eye image by using a pre-trained deep convolution neural network model so as to obtain an iris region image with a first preset size;
the third processing module is used for carrying out binarization processing on the iris area image to obtain a preprocessed iris area image;
the second determining module is used for determining the number of light spots and the central position of each light spot in the current near-eye image according to the preprocessed iris area image, and determining the number of missing light spots in the current near-eye image according to the number of the light spots;
the third light spot positioning module is used for determining the ID of each light spot according to the central position of each light spot when the number of the missing light spots is smaller than a preset value;
the fourth image extraction module is used for extracting a feature region from the current near-eye image by using the pre-trained deep convolutional neural network model when the number of missing light spots is greater than or equal to the preset value, so as to obtain a pupil region image of a second preset size;
the fourth processing module is used for carrying out binarization processing on the pupil area image to obtain a preprocessed pupil area image, and determining the center position of a pupil in the current near-to-eye image according to the preprocessed pupil area image;
and the fourth light spot positioning module is used for determining the ID of each light spot in the current near-eye image according to the central position of the pupil.
The invention also discloses an electronic device comprising a memory and a processor, wherein the memory stores a program, and the program realizes any one of the methods when being executed by the processor.
The invention also discloses a computer readable storage medium, on which a program is stored, characterized in that the program realizes any of the above methods when executed by a processor.
Drawings
FIG. 1 is a flowchart illustrating a pupil positioning method according to a first embodiment of the present invention;
FIG. 2 is a schematic diagram of a near-eye image acquisition system;
FIG. 3 is a schematic diagram of a pupil region image extracted by a deep convolutional neural network;
FIG. 4 is a flowchart illustrating a pupil positioning method according to a second embodiment of the present invention;
fig. 5 is a flowchart of a data calculation method for gaze tracking according to a third embodiment of the present invention;
FIG. 6 is a schematic diagram of an iris region image and a pupil region image extracted by a deep convolutional neural network;
fig. 7 is a flowchart of a positioning method of a light spot according to a fourth embodiment of the present invention;
fig. 8 is a block diagram of a pupil positioning device in a fifth embodiment of the present invention;
fig. 9 is a block diagram showing a data calculation device for eye tracking according to a sixth embodiment of the present invention;
fig. 10 is a block diagram of a spot positioning apparatus according to a seventh embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
These and other aspects of embodiments of the invention will be apparent with reference to the following description and attached drawings. In the description and drawings, particular embodiments of the invention have been disclosed in detail as being indicative of some of the ways in which the principles of the embodiments of the invention may be employed, but it is understood that the embodiments of the invention are not limited correspondingly in scope. On the contrary, the embodiments of the invention include all changes, modifications and equivalents coming within the spirit and terms of the claims appended hereto.
Determining the pupil position in an eye image is widely used in many fields; in the field of human-computer interaction in particular, determining the pupil position is an important step in realizing gaze tracking. Referring to fig. 1, the pupil positioning method according to the first embodiment of the present invention includes steps S11 to S14.
And S11, extracting a characteristic region of the current near-eye image by using a pre-trained deep convolutional neural network model to obtain a pupil region image with a preset size.
The near-eye image is a captured image of a single human eye. Fig. 2 shows the near-eye image acquisition system, which comprises a control unit, a near-eye display device, infrared fill lights and an infrared camera. The infrared camera of the near-eye image acquisition system photographs the user's eye for the subsequent detection of the iris, pupil and light spots. The system is provided with four infrared fill lights that emit infrared light, illuminate the pupil and produce light spots on it for gaze tracking and pupil modeling. The near-eye display device displays the calibration points and calibration background, and is used to calibrate and verify the feasibility and accuracy of the algorithm. The control unit controls the cooperative work of the camera and the fill lights, displays the calibration image on the display device, and controls the display brightness of the display device.
In this embodiment, a near-eye image captured by the near-eye image acquisition system is obtained in real time, and the current near-eye image is input into the pre-trained deep convolutional neural network model for feature region extraction to obtain a pupil region image of a preset size.
In this embodiment, an 8-layer deep convolutional neural network model is built. The first convolutional layer takes a three-channel 640 × 480 RGB image as input and uses 2 × 2 convolution kernels with a stride of 2 and padding = 0 to extract image features; the output is a feature map of a preset size. The model input is a near-eye image containing the whole-eye information, as shown in fig. 3, and the model outputs a feature map of a predetermined size (shown as area a in fig. 3). Because the constructed convolutional network uses comparatively large convolution kernels, the boundary contour information of the pupil can be extracted quickly. Subsequent computation is performed only within the pupil region image, which greatly reduces the amount of calculation.
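The patent does not give the full network definition; the following PyTorch sketch is consistent with the parameters stated above (three-channel 640 × 480 input, 2 × 2 kernels, stride 2, padding 0, 8 layers). The channel widths and the (x, y, w, h) regression head are assumptions, not disclosed values.

```python
import torch
import torch.nn as nn

class PupilRegionNet(nn.Module):
    """Sketch of the 8-layer deep convolutional network described above."""
    def __init__(self):
        super().__init__()
        # Channel widths are assumptions; kernel, stride and padding follow the text.
        chans = [3, 16, 32, 64, 128, 256, 256, 256, 256]
        layers = []
        for c_in, c_out in zip(chans[:-1], chans[1:]):
            layers += [nn.Conv2d(c_in, c_out, kernel_size=2, stride=2, padding=0),
                       nn.ReLU(inplace=True)]
        self.features = nn.Sequential(*layers)
        # After eight stride-2 convolutions a 480x640 input shrinks to 1x2 spatially,
        # so the flattened feature vector has 256 * 1 * 2 = 512 elements.
        self.head = nn.Linear(256 * 1 * 2, 4)   # regress the pupil box (x, y, w, h)

    def forward(self, x):           # x: (N, 3, 480, 640) RGB near-eye image
        f = self.features(x)        # (N, 256, 1, 2)
        return self.head(f.flatten(1))
```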
It can be understood that, the deep convolutional neural network model is trained in advance, and the pre-training process is as follows:
acquiring a plurality of near-eye images, and performing pupil region annotation on each near-eye image by using a rectangular frame with a preset size to obtain a pupil region image;
acquiring vertex coordinates, length and width of each pupil region graph, and taking the acquired vertex coordinates, length and width of each pupil region graph as label data;
and training a deep convolution neural network model by using the acquired multiple near-eye images and the label data.
In a specific implementation, a large number of near-eye images are obtained and the pupil region in each is annotated manually with a rectangular frame; the label data are the coordinates of the rectangle's upper-left corner together with its length and width, and serve as training and test data. The constructed deep convolutional neural network model is trained with the label data and the near-eye images, so that the trained model can reliably identify the pupil region and extract the pupil region image with a rectangle of preset size.
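A minimal training-step sketch under the labelling scheme just described; the optimizer and loss are assumptions, since the patent only specifies the inputs (near-eye images) and the labels (rectangle top-left corner, length and width). PupilRegionNet refers to the sketch given earlier.

```python
import torch
import torch.nn as nn

model = PupilRegionNet()                      # from the sketch above
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # optimizer is assumed
criterion = nn.MSELoss()                      # box-regression loss is assumed

def train_step(images: torch.Tensor, boxes: torch.Tensor) -> float:
    """images: (N, 3, 480, 640) float tensor; boxes: (N, 4) labels (x, y, w, h)."""
    optimizer.zero_grad()
    loss = criterion(model(images), boxes)
    loss.backward()
    optimizer.step()
    return loss.item()
```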
And S12, carrying out binarization processing on the pupil area image to obtain a preprocessed pupil area image.
And S13, extracting pupil contour characteristic points in the preprocessed pupil area image according to an edge extraction algorithm.
First, the extracted pupil region image is binarized to obtain the preprocessed pupil region image; the purpose is to prepare for extracting the pupil contour and the subsequent pupil center positioning. The pupil contour in the preprocessed pupil region image is then detected with an edge extraction algorithm. The basic principle of binarization is to set the gray value of each pixel in the target image to 0 or 255: all pixels whose gray value is not less than a selected threshold are regarded as the target object and their gray value is set to 255; otherwise the gray value is 0, representing the background or other object regions.
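A minimal binarization sketch matching the rule just stated (pixels whose gray value is not less than the threshold become 255, the rest 0); the threshold value and the file name are illustrative assumptions, since the embodiment later derives its class thresholds by clustering.

```python
import cv2
import numpy as np

# Pixels at or above the selected threshold become 255 (target object),
# the rest 0 (background). Threshold 50 and the file name are illustrative.
gray = cv2.imread("pupil_region.png", cv2.IMREAD_GRAYSCALE)
threshold = 50
preprocessed = np.where(gray >= threshold, 255, 0).astype(np.uint8)
```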
And S14, carrying out ellipse fitting on the pupil contour characteristic points to determine the central position of the pupil.
In a specific implementation, the pupil center can be obtained by least-squares ellipse fitting. The general expression of an ellipse is:

$$Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0$$

When solving for the ellipse coefficients, the equation must be constrained to exclude the trivial zero solution. With the constraint A + C = 1, the constrained expression is:

$$Bxy + C(y^2 - x^2) + Dx + Ey + F = -x^2$$

As the general expression shows, at least 6 points are required to determine the parameter values and obtain the ellipse. The basic principle of least-squares ellipse fitting is to take 6 points from the extracted pupil contour feature points, substitute them into the above formula to solve for the parameter values, then fit the pupil contour and compute the pupil center.
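The following numpy sketch shows this constrained least-squares fit; it uses all extracted contour points rather than exactly 6 (least squares accepts any number of points ≥ 6), and the center formula is the standard one for a general conic. Function and variable names are illustrative.

```python
import numpy as np

def fit_ellipse_center(points: np.ndarray) -> tuple[float, float]:
    """Least-squares fit of Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0 under A + C = 1.

    points: (N, 2) array of pupil contour feature points, N >= 6.
    Solves B*xy + C*(y^2 - x^2) + D*x + E*y + F = -x^2 for (B, C, D, E, F),
    recovers A = 1 - C, and returns the center of the fitted ellipse (the pupil).
    """
    x, y = points[:, 0], points[:, 1]
    M = np.column_stack([x * y, y**2 - x**2, x, y, np.ones_like(x)])
    b, c, d, e, f = np.linalg.lstsq(M, -x**2, rcond=None)[0]
    a = 1.0 - c
    # Center of the conic: solve the gradient equations 2Ax + By + D = 0
    # and Bx + 2Cy + E = 0 for (x0, y0).
    den = b * b - 4.0 * a * c
    x0 = (2.0 * c * d - b * e) / den
    y0 = (2.0 * a * e - b * d) / den
    return float(x0), float(y0)
```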
In this embodiment, the precise pupil region is located by the deep convolutional neural network, ellipse boundary fitting is performed on the pupil within the located region to fit the pupil contour, and the pupil center is located. In this way, noise regions such as eyelashes and the iris are avoided, and the efficiency and accuracy of pupil positioning are greatly improved.
Referring to fig. 4, a pupil positioning method according to a second embodiment of the present invention includes steps S21 to S26.
And S21, extracting a characteristic region of the current near-eye image by using a pre-trained deep convolutional neural network model to obtain a pupil region image with a preset size.
And S22, carrying out binarization processing on the pupil area image to obtain a preprocessed pupil area image.
And S23, performing clustering analysis on the preprocessed pupil area image by using a clustering threshold algorithm so as to cluster all pixel points in the preprocessed pupil area image into pixel points of a pupil area, pixel points of a light spot area and pixel points of an iris area.
The grayscale image obtained by binarizing the extracted pupil region image is referred to as the preprocessed pupil region image. Owing to the characteristics of the pupil region image, light spots may be contained within the pupil region, which affects threshold selection. In this embodiment, a K-means clustering threshold method may be used to cluster the preprocessed pupil region image. Based on the gray-level characteristics of the pupil region, a histogram of the preprocessed pupil region image is computed, and the K-means clustering threshold method classifies every pixel in the region into three classes: pixel points of the pupil region, pixel points of the light spot region, and pixel points of the iris region. The threshold gray values of the pupil, spot and iris classes can be determined from the gray-level characteristics of typical pupil region images, for example 0, 255 and 176 respectively. After clustering, the gray value of pupil-region pixels is set to 0, spot-region pixels to 255, and iris-region pixels to 176. Compared with adaptive-threshold and fixed-threshold segmentation, this method removes the influence of the light spots and improves pupil positioning accuracy; and because three thresholds are obtained by K-means clustering, it adapts better than threshold algorithms that separate pupil and background with a single threshold.
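A minimal sketch of the three-class clustering described above, assuming plain K-means on pixel gray values with the centroids initialized at the gray levels named in this paragraph (0, 176, 255); a production version would operate on the histogram for speed.

```python
import numpy as np

def cluster_pupil_image(gray: np.ndarray, iters: int = 10) -> np.ndarray:
    """K-means-style clustering of a pupil region image into pupil / iris / spot.

    Centroids start at the gray levels named in the description (pupil 0,
    iris 176, spot 255); after clustering, each pixel is relabelled to its
    class's canonical gray value.
    """
    centroids = np.array([0.0, 176.0, 255.0])
    pixels = gray.astype(np.float64).ravel()
    for _ in range(iters):
        # Assign every pixel to the nearest centroid, then update centroids.
        labels = np.argmin(np.abs(pixels[:, None] - centroids[None, :]), axis=1)
        for k in range(3):
            if np.any(labels == k):
                centroids[k] = pixels[labels == k].mean()
    canonical = np.array([0, 176, 255], dtype=np.uint8)
    return canonical[labels].reshape(gray.shape)
```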
And S24, searching pixel points with neighborhoods as iris areas in the pixel points of the pupil areas, and determining the pixel points as pupil contour characteristic points.
In a specific implementation, pixels of the pupil region have gray value 0, and any pupil pixel whose neighboring pixels include a non-zero point is taken as a pupil contour point. Then, among all the pupil contour boundary points found, the points whose neighborhood contains iris pixels are kept as boundary points by checking the neighborhood, and 100-300 of these boundary points are selected as the pupil contour feature points and stored in a corresponding container.
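A minimal sketch of this boundary-point search, assuming the clustered image uses the canonical gray values from the previous step (pupil 0, iris 176); the even subsampling to at most 300 points is one way to realize the 100-300 feature points kept in the description.

```python
import numpy as np

def pupil_contour_points(clustered: np.ndarray, max_points: int = 300) -> np.ndarray:
    """Collect pupil pixels (gray 0) whose 8-neighborhood touches the iris (gray 176)."""
    h, w = clustered.shape
    pts = []
    for yy in range(1, h - 1):
        for xx in range(1, w - 1):
            if clustered[yy, xx] != 0:
                continue
            neigh = clustered[yy - 1:yy + 2, xx - 1:xx + 2]
            if np.any(neigh == 176):          # neighborhood reaches the iris
                pts.append((xx, yy))
    pts = np.asarray(pts, dtype=np.float64)
    if len(pts) > max_points:                 # subsample evenly if too many
        pts = pts[np.linspace(0, len(pts) - 1, max_points).astype(int)]
    return pts
```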
And S25, carrying out ellipse fitting on the pupil contour characteristic points, and determining the pupil contour of the pupil according to the fitted ellipse.
And S26, calculating the center position of the pupil according to the pupil contour.
Ellipse fitting is performed on the pupil contour feature points stored in the container; the resulting fitted ellipse is the pupil contour, and the center of the ellipse is the pupil center.
Referring to fig. 5, a data calculation method for gaze tracking according to a third embodiment of the present invention includes steps S31 to S36. The basis of gaze tracking technology is pupil positioning and spot positioning, so the pupil position and the spot positions are the core data for calculating the gaze direction. In this embodiment, the light spots and the pupil in the captured near-eye image are accurately positioned in real time.
And S31, extracting a characteristic region of the current near-eye image by using a pre-trained deep convolutional neural network model to obtain an iris region image with a first preset size and a pupil region image with a second preset size, wherein the iris region image is an image covering an iris and a pupil.
The deep convolutional neural network model is a pre-trained model, and an iris area image and a pupil area image with smaller sizes can be extracted through the model. In specific implementation, the step of extracting the feature region of the current near-eye image by using the pre-trained deep convolutional neural network model further comprises the following steps:
acquiring a plurality of near-eye images, and respectively labeling an iris area and a pupil area of each near-eye image by using a rectangular frame with a first preset size and a rectangular frame with a second preset size to obtain an iris area image and a pupil area image, wherein the iris area image comprises the pupil area image;
acquiring the vertex coordinates, the length and the width of each iris region figure and the vertex coordinates, the length and the width of each pupil region figure, and taking the acquired vertex coordinates, the length and the width of each figure as label data;
and constructing a deep convolution neural network model, and training the deep convolution neural network model by using the acquired multiple near-eye images and the label data.
A large number of near-eye images containing whole-eye information are acquired, each containing four light spots. In each near-eye image, the rectangular frame marking the iris region is labelled as type 1 and the rectangular frame marking the pupil region as type 2; the label data are the coordinates of each rectangle's upper-left corner together with its length and width.
The near-eye image input to the model is shown in fig. 6; through the deep convolutional neural network, the output is the images of the areas shown by rectangular frame 1 and rectangular frame 2 in fig. 6, where the larger rectangular frame 1 is the iris region and the smaller rectangular frame 2 is the pupil region. The final model output thus converts the original near-eye image captured by the device into a smaller iris image and the pupil image within it. Subsequent computation is performed only within the areas of rectangular frames 1 and 2, which greatly reduces the amount of calculation and eliminates algorithm errors caused by light spots outside the iris region.
And step S32, respectively carrying out binarization processing on the iris region image and the pupil region image to respectively obtain a pre-processed iris region image and a pre-processed pupil region image.
And respectively carrying out binarization processing on the iris area image and the pupil area image to convert the iris area image and the pupil area image into gray level images so as to obtain a preprocessed iris area image and a preprocessed pupil area image.
And S33, determining the center position of the pupil in the current near-to-eye image according to the pre-processed pupil area image.
The determination of the central position of the pupil may be performed in the manner described in embodiment 1 or 2, and is not described herein again.
And S34, determining the number of light spots and the central position of each light spot in the current near-eye image according to the preprocessed iris area image, and determining the number of missing light spots in the current near-eye image according to the number of the light spots.
Specifically, the step of determining the number of light spots and the center position of each light spot in the current near-eye image according to the preprocessed iris area image includes:
performing cluster analysis on the preprocessed iris region image by using a clustering threshold algorithm, so as to cluster all pixel points in the preprocessed iris region image into pixel points of the pupil region, pixel points of the light spot region, pixel points of the iris region, and pixel points of the region outside the iris;
determining the number of light spots in the current near-to-eye image according to the clustering number of the pixel points of the light spot area;
and performing mean calculation on the coordinates of the pixel points of the light spot area in the transverse direction and the longitudinal direction, and taking the coordinates after the mean calculation as the coordinates of the central position of the corresponding light spot.
In this embodiment, a K-means clustering threshold method may be used for the cluster analysis of the preprocessed iris region image. In a specific implementation, the value of K is chosen first; in this embodiment K is 4. Second, k initialized centroids are selected; their gray values can be set empirically, for example the 4 centroids in this embodiment have gray values 0, 176, 200 and 255 respectively. Each pixel in the iris rectangular region is then clustered by computing its difference from the four centroids and assigning it to the centroid class with the smallest difference; for example, a point with gray value 150 differs least from 176 and is therefore assigned to the iris region.
According to the gray-level characteristics of the iris region, the histogram of the whole iris region is divided into four classes: the pupil region, the light spot region, the iris region, and the region outside the iris. The gray value of pupil-region pixels is taken as 0, spot-region pixels as 255, iris-region pixels as 176, and pixels of the region outside the iris as 200. The spot center is obtained by taking the horizontal and vertical means over each neighborhood of spot pixels with value 255.
The number of light spots in the current near-eye image is determined from the number of clusters of spot-region pixels. For example, if three separate clusters of spot pixels exist after clustering, the number of light spots in the current near-eye image is 3.
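A sketch of spot counting and center extraction, assuming the clustered iris region image assigns gray 255 to spot pixels; connected-component labelling is used here as one concrete realization of "clustering the spot-region pixels", and each center is the mean of the component's coordinates, as described above.

```python
import cv2
import numpy as np

def locate_spots(clustered_iris: np.ndarray):
    """Count glints and locate their centers in the clustered iris region image.

    Each connected blob of 255-valued pixels is treated as one spot; its
    center is the mean of the blob's pixel coordinates in x and y.
    """
    mask = (clustered_iris == 255).astype(np.uint8)
    num, labels = cv2.connectedComponents(mask)
    centers = []
    for k in range(1, num):                    # label 0 is the background
        ys, xs = np.nonzero(labels == k)
        centers.append((xs.mean(), ys.mean()))  # mean over x and y coordinates
    return len(centers), centers
```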
And S35, when the number of the missing light spots is smaller than a preset value, determining the ID of each light spot according to the central position of each light spot.
And S36, when the number of the missing light spots is larger than or equal to a preset value, determining the ID of each light spot in the current near-to-eye image according to the center position of the pupil.
The near-eye image acquisition device in this embodiment is provided with four infrared fill lights that emit infrared light, illuminate the pupil and produce light spots on it for gaze tracking and pupil modeling. The number of spots in the standard near-eye image is therefore 4. For near-eye images collected by this four-lamp acquisition system, the 4 spots are distributed at the four vertices of a rectangle, and spot IDs distinguish them: the upper-left spot is number 1, the upper-right spot number 2, the lower-left spot number 3 and the lower-right spot number 4. The positions of the spots in the near-eye image and their IDs are important parameters for calculating the curvature of the eyeball, and the gaze point can be calculated from the computed curvature and the pupil center position.
The number of spots in the current near-eye image acquired in real time is any value from 1 to 4; that is, the number of missing spots in the current near-eye image is 0, 1, 2 or 3. By the design of the near-eye image acquisition system, it is impossible for all the spots to be missing.
The way the spot IDs are determined differs with the number of missing spots. Specifically:
when the number of missing light spots is 0, the step of determining the ID of each light spot according to the center position of each light spot comprises:
determining the current distance from each light spot to each boundary of the iris area image according to the central position of each light spot and the boundary position information of the iris area image to obtain current distance information;
determining the ID of each light spot according to the deviation between the current distance information and standard distance information, wherein the standard distance information comprises the distance from each standard light spot in a standard near-eye image to each boundary of a standard iris region;
when the number of missing light spots is 1 or 2, the step of determining the ID of each light spot according to the center position of each light spot comprises:
determining the current distance from each light spot to each boundary of the iris area image according to the central position of each light spot and the boundary position information of the iris area image;
determining the ID of each light spot according to the deviation of the current distance between each light spot and each boundary of the iris area;
when the number of the missing light spots is 3, the step of determining the ID of each light spot in the current near-to-eye image according to the center position of the pupil includes:
and determining the ID of the light spot according to the offset direction of the center position of the pupil relative to the center position of a standard pupil in a standard near-eye image.
Aiming at the condition that the number of the missing light spots is 0, establishing a 4-by-4 matrix according to the current distance information, wherein each column of data in the matrix is the current distance from the center of each light spot to the upper boundary, the lower boundary, the left boundary and the right boundary of the iris area image, and establishing a 4-by-4 standard matrix according to the standard distance information, wherein each column of data in the standard matrix is the standard distance from the center of each standard light spot to the upper boundary, the lower boundary, the left boundary and the right boundary of the standard iris area;
performing dot multiplication operation on the matrix and the standard matrix, and searching for a standard light spot with the minimum deviation with each light spot according to an operation result;
and determining the ID of each light spot as the ID of the searched corresponding standard light spot.
A near-eye image containing all four light spots is taken as the standard near-eye image, and the rectangular area of the first preset size extracted from it by the deep convolutional neural network model is taken as the standard iris region. The pupil in the standard near-eye image is defined as the standard pupil and its spots as the standard spots. The center position of the standard pupil, the center position of each standard spot, and the boundary positions of the standard iris region are recorded. During initialization, a standard matrix is built from the relative distance between each standard spot and the standard iris region boundary; the recorded data comprise the center coordinates of the standard pupil and a 4-by-4 standard matrix of the distances from each standard spot to the standard iris region boundaries. The deviation between the current matrix and the standard matrix is determined by the dot-product operation on the two matrices, and each spot is assigned the ID of the standard spot with the smallest distance deviation.
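A sketch of the matrix comparison for the zero-missing case. The patent describes a dot-product operation between the 4-by-4 matrix and the standard matrix; since the exact operation is not spelled out, the deviation below is realized as the column-wise sum of absolute differences, which is one plausible reading.

```python
import numpy as np

def assign_spot_ids(dist_matrix: np.ndarray, std_matrix: np.ndarray) -> list[int]:
    """Match detected spots to standard spots by boundary-distance deviation.

    dist_matrix, std_matrix: 4x4 arrays; column j holds the distances from
    spot j's center to the upper, lower, left and right boundaries of the
    (standard) iris region.
    """
    ids = []
    for j in range(dist_matrix.shape[1]):
        # Deviation of detected spot j from every standard spot column.
        dev = np.abs(std_matrix - dist_matrix[:, j:j + 1]).sum(axis=0)
        ids.append(int(np.argmin(dev)) + 1)    # standard spot IDs are 1..4
    return ids
```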
For the case of one missing light spot, the spot IDs can be determined using either the iris boundary information or the pupil center position. The first way is to calculate the offset of the pupil relative to its initial position: if the pupil center shifts to the upper left, the missing spot is the lower-right spot; if the pupil center shifts to the upper right, the missing spot is the lower-left spot; the other positions follow in the same way. The second way is to determine a spot's ID from its recorded distance to the iris boundaries. For example, if the distance from a spot to the left iris boundary is smaller than to the right boundary, and its distance to the upper boundary is smaller than to the lower boundary, it is the upper-left spot; if its distance to the left boundary is larger than to the right boundary and its distance to the upper boundary is smaller than to the lower boundary, it is the upper-right spot; the other positions follow in the same way.
For the case of two missing light spots, the ID of each spot must be determined using the center position of each spot together with the boundary information of the iris region image. First, by the iris imaging principle, the two missing spots cannot be diagonal to each other; they must be on the same side. To determine the IDs, the mean distance from the two detected spots to each boundary of the iris region image is calculated; the side with the smallest mean distance gives their position. For example, if the two spots have the smallest mean distance to the left boundary of the iris region image, they are the upper-left and lower-left spots; if their smallest mean distance is to the lower boundary, they are the lower-left and lower-right spots; the other positions follow in the same way. A sketch of the single-spot boundary rule is given after the next paragraph.
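A minimal sketch of the boundary-distance rule for identifying one spot, using the ID convention above (1 upper-left, 2 upper-right, 3 lower-left, 4 lower-right); names are illustrative.

```python
def spot_id_from_boundaries(d_top: float, d_bottom: float,
                            d_left: float, d_right: float) -> int:
    """ID a spot from its distances to the iris region boundaries."""
    if d_top < d_bottom:                  # closer to the upper boundary
        return 1 if d_left < d_right else 2
    return 3 if d_left < d_right else 4   # closer to the lower boundary
```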
For the case of three missing light spots, the pupil center position information is used to determine the ID of the remaining spot. The center position of the pupil in the current near-eye image is calculated, and the pupil's offset relative to the standard pupil is computed from the center position of the standard pupil in the standard near-eye image. If the pupil in the current near-eye image is offset to the upper left of the standard pupil position, the spot ID is the upper-left spot; if it is offset to the lower right, the spot ID is the lower-right spot; the other directions follow in the same way.
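A matching sketch of the three-missing case, mapping the pupil's offset direction relative to the standard pupil to the ID of the one visible spot; image coordinates are assumed to grow rightward and downward.

```python
def visible_spot_id_from_pupil_offset(pupil_center: tuple[float, float],
                                      std_pupil_center: tuple[float, float]) -> int:
    """ID the single visible spot from the pupil offset versus the standard pupil.

    Pupil shifted up-left => the remaining spot is the upper-left one (ID 1),
    down-right => lower-right (ID 4), and so on, per the rule above.
    """
    dx = pupil_center[0] - std_pupil_center[0]
    dy = pupil_center[1] - std_pupil_center[1]   # image y grows downward
    if dy < 0:
        return 1 if dx < 0 else 2
    return 3 if dx < 0 else 4
```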
The invention locates the precise regions of the iris and the pupil with a deep convolutional neural network and extracts the iris region image and the pupil region image. Ellipse boundary fitting is performed on the pupil within the pupil region image to obtain the pupil contour and locate the pupil center. Dynamic threshold segmentation within the iris region image locates the spot centers, so less computation is needed than when extracting spots from the whole-eye image, and the influence of eyeball highlight regions is reduced. The hand-crafted feature-vector algorithm removes noise bright spots other than the true spots on the iris. By designing the feature vector of the center-offset matrix, the offset relative to direct gaze when the pupil shifts is detected, and the IDs of the spots missing from the iris are predicted. Compared with the traditional method of locating spots by threshold segmentation and contour information, this embodiment can predict spots that do not appear on the iris and can determine accurate spot IDs when spots are missing.
Referring to fig. 7, a method for positioning a light spot according to a fourth embodiment of the present invention includes steps S41 to S47:
step S41, extracting a characteristic region of the current near-eye image by using a pre-trained deep convolution neural network model to obtain an iris region image with a first preset size;
step S42, carrying out binarization processing on the iris area image to obtain a preprocessed iris area image;
step S43, determining the number of light spots in the current near-eye image and the central position of each light spot according to the preprocessed iris area image, and determining the number of missing light spots in the current near-eye image according to the number of the light spots;
s44, when the number of the missing light spots is smaller than a preset value, determining the ID of each light spot according to the central position of each light spot;
step S45, when the number of the missing light spots is larger than or equal to a preset value, extracting a characteristic region of the current near-to-eye image by using a pre-trained deep convolution neural network model to obtain a pupil region image with a second preset size;
step S46, carrying out binarization processing on the pupil area image to obtain a preprocessed pupil area image, and determining the center position of a pupil in the current near-to-eye image according to the preprocessed pupil area image;
and S47, determining the ID of each light spot in the current near-to-eye image according to the central position of the pupil.
Taking a near-eye image acquisition system with four infrared fill lights as an example, the captured near-eye image should contain 4 light spots. When 0, 1 or 2 spots are missing, the ID of each spot is determined solely from the center position of each spot in the current near-eye image; for the specific implementation, refer to the relevant content of the third embodiment. Only when 3 spots are missing is it necessary to extract the pupil region image from the near-eye image, locate the pupil center from the extracted pupil region image, and determine the spot ID in the current near-eye image from the pupil center position. Locating the pupil center from the extracted pupil region image gives high accuracy.
It can be understood that the iris region image includes the pupil region, and therefore, in other embodiments of the present invention, the center position of the pupil may also be computed from the iris region image: pupil contour feature points in the iris region image are extracted with an edge extraction algorithm, and an ellipse is then fitted to these feature points to determine the center position of the pupil. Locating the pupil center directly from the iris region image omits the steps of extracting a pupil region image with the deep convolutional neural network model and binarizing that image, which can greatly improve computational efficiency and reduce computational cost.
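As a compact illustration of this edge-extraction-plus-ellipse-fitting route, the sketch below uses OpenCV contour extraction on an inverse-thresholded crop as a stand-in for the clustering-based contour search of the embodiments; the function name and the dark-pupil threshold are assumed values.

```python
import cv2

def pupil_center(region_crop, pupil_thresh=50):
    # Inverse threshold isolates the pupil, the darkest area of the crop.
    _, binary = cv2.threshold(region_crop, pupil_thresh, 255,
                              cv2.THRESH_BINARY_INV)
    # OpenCV 4.x: findContours returns (contours, hierarchy).
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    if not contours:
        return None
    pupil = max(contours, key=cv2.contourArea)  # largest dark blob
    if len(pupil) < 5:                          # fitEllipse needs >= 5 points
        return None
    (cx, cy), _axes, _angle = cv2.fitEllipse(pupil)
    return (cx, cy)
```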
Referring to fig. 8, a pupil positioning device in a fifth embodiment of the invention includes:
the first image extraction module 51 is configured to perform feature region extraction on a current near-eye image by using a pre-trained deep convolutional neural network model to obtain a pupil region image of a preset size;
the first processing module 52 is configured to perform binarization processing on the pupil region image to obtain a preprocessed pupil region image;
a first contour extraction module 53, configured to extract a pupil contour feature point in the preprocessed pupil region image according to an edge extraction algorithm;
and a first pupil positioning module 54, configured to perform ellipse fitting on the pupil contour feature points to determine a center position of a pupil.
Further, the pupil positioning device further includes:
the marking module is used for acquiring a plurality of near-eye images and marking the pupil region of each near-eye image with a rectangular frame of a preset size to obtain a pupil region graphic;
the data acquisition module is used for acquiring the vertex coordinates, the length and the width of each pupil region graphic and taking them as label data (one possible encoding of such labels is sketched after this list);
and the pre-training module is used for constructing a deep convolutional neural network model and training it with the acquired near-eye images and the label data.
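The disclosure states only that the vertex coordinates, length and width of each rectangle serve as label data; one plausible normalized encoding for regression training is sketched below, with all names and sizes hypothetical.

```python
import numpy as np

def make_label(x, y, w, h, img_w, img_h):
    # Normalize the annotated rectangle (top-left vertex plus width and
    # height) by the image size so the network regresses values in [0, 1].
    return np.array([x / img_w, y / img_h, w / img_w, h / img_h],
                    dtype=np.float32)

# e.g. a 64 x 48 pupil box at (120, 90) in a 400 x 400 near-eye image:
# make_label(120, 90, 64, 48, 400, 400) -> [0.30, 0.225, 0.16, 0.12]
```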
The implementation principle and resulting technical effects of the pupil positioning device provided in this embodiment of the invention are the same as those of the foregoing pupil positioning method embodiment; for brevity, where the device embodiment is silent, reference may be made to the corresponding contents of the method embodiment.
Referring to fig. 9, a data calculating apparatus for gaze tracking according to a sixth embodiment of the present invention includes:
the second image extraction module 61 is configured to perform feature region extraction on the current near-eye image by using a pre-trained deep convolutional neural network model to obtain an iris region image of a first preset size and a pupil region image of a second preset size, where the iris region image is an image covering an iris region and a pupil region;
the second processing module 62 is configured to perform binarization processing on the iris region image and the pupil region image respectively, to obtain a preprocessed iris region image and a preprocessed pupil region image;
the second pupil positioning module 63 is configured to determine the center position of the pupil in the current near-eye image according to the preprocessed pupil region image;
the first determining module 64 is configured to determine the number of light spots in the current near-eye image and the center position of each light spot according to the preprocessed iris region image, and to determine the number of missing light spots in the current near-eye image according to the number of the light spots;
the first light spot positioning module 65 is configured to determine the ID of each light spot according to the center position of each light spot when the number of missing light spots is smaller than a preset value;
and the second light spot positioning module 66 is configured to determine the ID of each light spot in the current near-eye image according to the center position of the pupil when the number of missing light spots is greater than or equal to the preset value.
The data calculation apparatus for gaze tracking according to this embodiment of the invention has the same implementation principle and technical effects as the foregoing data calculation method for gaze tracking; for brevity, where the apparatus embodiment is silent, reference may be made to the corresponding contents of the foregoing method embodiment.
Referring to fig. 10, a spot positioning apparatus according to a seventh embodiment of the present invention includes:
the third image extraction module 71 is configured to perform feature region extraction on the current near-eye image by using a pre-trained deep convolutional neural network model to obtain an iris region image of a first preset size;
the third processing module 72 is configured to perform binarization processing on the iris region image to obtain a preprocessed iris region image;
the second determining module 73 is configured to determine the number of light spots in the current near-eye image and the center position of each light spot according to the preprocessed iris region image, and to determine the number of missing light spots in the current near-eye image according to the number of the light spots;
the third light spot positioning module 74 is configured to determine the ID of each light spot according to the center position of each light spot when the number of missing light spots is smaller than a preset value;
the fourth image extraction module 75 is configured to, when the number of missing light spots is greater than or equal to the preset value, perform feature region extraction on the current near-eye image by using the pre-trained deep convolutional neural network model to obtain a pupil region image of a second preset size;
the fourth processing module 76 is configured to perform binarization processing on the pupil region image to obtain a preprocessed pupil region image, and to determine the center position of the pupil in the current near-eye image according to the preprocessed pupil region image;
and the fourth light spot positioning module 77 is configured to determine the ID of each light spot in the current near-eye image according to the center position of the pupil.
The implementation principle and resulting technical effects of the light spot positioning device provided in this embodiment of the invention are the same as those of the foregoing light spot positioning method embodiment; for brevity, where the device embodiment is silent, reference may be made to the corresponding contents of the foregoing method embodiment.
The invention also discloses an electronic device comprising a memory and a processor, wherein the memory stores a program which, when executed by the processor, implements any one of the methods described above.
The invention also discloses a computer readable storage medium having a program stored thereon, which when executed by a processor implements any of the methods described above.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
In the description of the specification, reference to the description of "one embodiment," "some embodiments," "an example," "a specific example," or "some examples" or the like means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The above-described embodiments express only several implementations of the present invention, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the inventive concept, and all of these fall within the protection scope of the present invention. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (11)

1. A data calculation method for gaze tracking, comprising:
extracting characteristic regions of a current near-eye image by using a pre-trained deep convolutional neural network model to obtain an iris region image with a first preset size and a pupil region image with a second preset size, wherein the iris region image is an image covering the iris region and the pupil region;
respectively carrying out binarization processing on the iris region image and the pupil region image to obtain a preprocessed iris region image and a preprocessed pupil region image;
determining the center position of a pupil in the current near-eye image according to the preprocessed pupil region image;
determining the number of light spots in the current near-eye image and the center position of each light spot according to the preprocessed iris region image, and determining the number of missing light spots in the current near-eye image according to the number of the light spots;
when the number of the missing light spots is smaller than a preset value, determining the ID of each light spot according to the center position of each light spot;
and when the number of the missing light spots is larger than or equal to the preset value, determining the ID of each light spot in the current near-eye image according to the center position of the pupil.
2. The data calculation method of claim 1, wherein the step of determining the center position of the pupil in the current near-eye image according to the preprocessed pupil region image comprises:
extracting pupil contour feature points in the preprocessed pupil region image according to an edge extraction algorithm;
and carrying out ellipse fitting on the pupil contour feature points to determine the center position of the pupil.
3. The data calculation method of claim 2, wherein the step of extracting the pupil contour feature points in the preprocessed pupil region image according to an edge extraction algorithm comprises:
carrying out clustering analysis on the preprocessed pupil region image by using a clustering threshold algorithm so as to cluster all pixel points in the preprocessed pupil region image into pixel points of a pupil area, pixel points of a light spot area and pixel points of an iris area;
and searching, among the pixel points of the pupil area, for pixel points whose neighborhood contains pixel points of the iris area, and determining these pixel points as the pupil contour feature points.
4. The data calculation method of claim 1, wherein the step of determining the number of light spots in the current near-eye image and the center position of each light spot according to the preprocessed iris region image comprises:
carrying out clustering analysis on the preprocessed iris region image by using a clustering threshold algorithm so as to cluster all pixel points in the preprocessed iris region image into pixel points of a pupil area, pixel points of a light spot area and pixel points of an iris area;
determining the number of light spots in the current near-eye image according to the number of clusters of pixel points of the light spot area;
and averaging the transverse and longitudinal coordinates of the pixel points of each light spot area, and taking the averaged coordinates as the coordinates of the center position of the corresponding light spot.
5. The data calculation method of claim 1, wherein the number of the light spots in a standard near-eye image is 4 and the preset value is 3, wherein,
when the number of missing light spots is 0, the step of determining the ID of each light spot according to the center position of each light spot comprises:
determining the current distance from each light spot to each boundary of the iris region image according to the center position of each light spot and the boundary position information of the iris region image to obtain current distance information;
determining the ID of each light spot according to the deviation between the current distance information and standard distance information, wherein the standard distance information comprises the distance from each standard light spot in the standard near-eye image to each boundary of a standard iris region;
when the number of missing light spots is 1 or 2, the step of determining the ID of each light spot according to the center position of each light spot comprises:
determining the current distance from each light spot to each boundary of the iris region image according to the center position of each light spot and the boundary position information of the iris region image;
determining the ID of each light spot according to the deviation of the current distance between each light spot and each boundary of the iris region;
when the number of the missing light spots is 3, the step of determining the ID of each light spot in the current near-eye image according to the center position of the pupil comprises:
and determining the ID of the light spot according to the offset direction of the center position of the pupil relative to the center position of a standard pupil in the standard near-eye image.
6. The data calculation method according to claim 5, wherein the step of determining the ID of each of the light spots based on the deviation between the current distance information and the standard distance information comprises:
establishing a 4-by-4 matrix according to the current distance information, wherein each column of data in the matrix is the current distance from the center of one light spot to the upper, lower, left and right boundaries of the iris region image, and establishing a 4-by-4 standard matrix according to the standard distance information, wherein each column of data in the standard matrix is the standard distance from the center of one standard light spot to the upper, lower, left and right boundaries of the standard iris region;
performing a dot product operation on the matrix and the standard matrix, and searching, according to the operation result, for the standard light spot with the minimum deviation from each light spot;
and determining the ID of each light spot as the ID of the corresponding standard light spot found.
7. A method of spot positioning, comprising the steps of:
extracting a characteristic region of a current near-eye image by using a pre-trained deep convolutional neural network model to obtain an iris region image with a first preset size;
carrying out binarization processing on the iris region image to obtain a preprocessed iris region image;
determining the number of light spots in the current near-eye image and the center position of each light spot according to the preprocessed iris region image, and determining the number of missing light spots in the current near-eye image according to the number of the light spots;
when the number of the missing light spots is smaller than a preset value, determining the ID of each light spot according to the center position of each light spot;
when the number of the missing light spots is larger than or equal to the preset value, extracting a characteristic region of the current near-eye image by using the pre-trained deep convolutional neural network model to obtain a pupil region image with a second preset size;
carrying out binarization processing on the pupil region image to obtain a preprocessed pupil region image, and determining the center position of a pupil in the current near-eye image according to the preprocessed pupil region image;
and determining the ID of each light spot in the current near-eye image according to the center position of the pupil.
8. A data computing device for gaze tracking, comprising:
the second image extraction module is used for extracting a characteristic region of the current near-eye image by using a pre-trained deep convolutional neural network model so as to obtain an iris region image with a first preset size and a pupil region image with a second preset size, wherein the iris region image is an image covering an iris region and a pupil region;
the second processing module is used for respectively carrying out binarization processing on the iris region image and the pupil region image to obtain a preprocessed iris region image and a preprocessed pupil region image;
the second pupil positioning module is used for determining the center position of a pupil in the current near-eye image according to the preprocessed pupil region image;
the first determining module is used for determining the number of light spots in the current near-eye image and the center position of each light spot according to the preprocessed iris region image, and determining the number of missing light spots in the current near-eye image according to the number of the light spots;
the first light spot positioning module is used for determining the ID of each light spot according to the center position of each light spot when the number of the missing light spots is smaller than a preset value;
and the second light spot positioning module is used for determining the ID of each light spot in the current near-eye image according to the center position of the pupil when the number of the missing light spots is greater than or equal to the preset value.
9. A spot positioning apparatus, comprising:
the third image extraction module is used for extracting a characteristic region of the current near-eye image by using a pre-trained deep convolutional neural network model so as to obtain an iris region image with a first preset size;
the third processing module is used for carrying out binarization processing on the iris region image to obtain a preprocessed iris region image;
the second determining module is used for determining the number of light spots in the current near-eye image and the center position of each light spot according to the preprocessed iris region image, and determining the number of missing light spots in the current near-eye image according to the number of the light spots;
the third light spot positioning module is used for determining the ID of each light spot according to the center position of each light spot when the number of the missing light spots is smaller than a preset value;
the fourth image extraction module is used for extracting a characteristic region of the current near-eye image by using the pre-trained deep convolutional neural network model when the number of the missing light spots is larger than or equal to the preset value, so as to obtain a pupil region image with a second preset size;
the fourth processing module is used for carrying out binarization processing on the pupil region image to obtain a preprocessed pupil region image, and determining the center position of a pupil in the current near-eye image according to the preprocessed pupil region image;
and the fourth light spot positioning module is used for determining the ID of each light spot in the current near-eye image according to the central position of the pupil.
10. An electronic device, comprising a memory and a processor, the memory storing a program that, when executed by the processor, performs the method of any of claims 1-7.
11. A computer-readable storage medium, on which a program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1 to 7.
CN202110472872.8A 2021-04-29 2021-04-29 Pupil and light spot positioning method, data calculation method and related device Active CN113190117B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110472872.8A CN113190117B (en) 2021-04-29 2021-04-29 Pupil and light spot positioning method, data calculation method and related device

Publications (2)

Publication Number Publication Date
CN113190117A CN113190117A (en) 2021-07-30
CN113190117B true CN113190117B (en) 2023-02-03

Family

ID=76980507


Country Status (1)

Country Link
CN (1) CN113190117B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024035108A1 (en) * 2022-08-10 2024-02-15 삼성전자 주식회사 Method for determining user's gaze and electronic device therefor
CN115294202B (en) * 2022-10-08 2023-01-31 南昌虚拟现实研究院股份有限公司 Pupil position marking method and system
CN116524581B (en) * 2023-07-05 2023-09-12 南昌虚拟现实研究院股份有限公司 Human eye image facula classification method, system, equipment and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6827444B2 (en) * 2000-10-20 2004-12-07 University Of Rochester Rapid, automatic measurement of the eye's wave aberration
CN1299231C (en) * 2004-06-11 2007-02-07 清华大学 Living body iris patterns collecting method and collector
CN103530618A (en) * 2013-10-23 2014-01-22 哈尔滨工业大学深圳研究生院 Non-contact sight tracking method based on corneal reflex
TWI754806B (en) * 2019-04-09 2022-02-11 栗永徽 System and method for locating iris using deep learning

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7391887B2 (en) * 2001-08-15 2008-06-24 Qinetiq Limited Eye tracking systems
CN101923645A (en) * 2009-06-09 2010-12-22 黑龙江大学 Iris splitting method suitable for low-quality iris image in complex application context
CN104182720A (en) * 2013-05-22 2014-12-03 北京三星通信技术研究有限公司 Pupil detection method and device
CN105590109A (en) * 2016-02-29 2016-05-18 徐鹤菲 Method and device for pre-treating iris identification
CN108073889A (en) * 2016-11-11 2018-05-25 三星电子株式会社 The method and apparatus of iris region extraction
TW201928759A (en) * 2017-12-27 2019-07-16 大陸商北京七鑫易維信息技術有限公司 Method and apparatus for determining position of pupil
CN108509908A (en) * 2018-03-31 2018-09-07 天津大学 A kind of pupil diameter method for real-time measurement based on binocular stereo vision
CN109189216A (en) * 2018-08-16 2019-01-11 北京七鑫易维信息技术有限公司 A kind of methods, devices and systems of line-of-sight detection
CN110472521A (en) * 2019-07-25 2019-11-19 中山市奥珀金属制品有限公司 A kind of Pupil diameter calibration method and system
CN110929570A (en) * 2019-10-17 2020-03-27 珠海虹迈智能科技有限公司 Iris rapid positioning device and positioning method thereof
CN111144413A (en) * 2019-12-30 2020-05-12 福建天晴数码有限公司 Iris positioning method and computer readable storage medium
CN112541433A (en) * 2020-12-11 2021-03-23 中国电子技术标准化研究院 Two-stage human eye pupil accurate positioning method based on attention mechanism

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
New 2D pupil and spot center positioning technology under real-time eye tracking; J. Zou et al.; 2017 12th International Conference on Computer Science and Education (ICCSE); 2017-10-30; 110-115 *
Gaze estimation of human eyes based on dark-pupil images; Zhang Taining et al.; Acta Physica Sinica; 2013-07-08 (No. 13); 262-270 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant