CN115731601A - Eye movement tracking device and method - Google Patents

Eye movement tracking device and method

Info

  • Publication number: CN115731601A
  • Application number: CN202211512569.7A
  • Authority: CN (China)
  • Prior art keywords: eye, algorithm model, display device, point, pupil
  • Legal status: Pending
  • Inventor: 卢增祥 (Lu Zengxiang)
  • Assignee: Yixin Technology Development Co., Ltd.
  • Original language: Chinese (zh)
  • Classification: Eye Examination Apparatus

Abstract

The invention discloses an eye tracking device and method. The eye tracking device includes a dense display device provided with a first display area configured to emit infrared vector structured light to a first eye; a first detection assembly comprising at least two first probes and configured to receive light reflected or scattered by the first eye to acquire first detected image information, the first detected image information comprising a first pupil center point and at least one first glint point; and a control assembly electrically connected with the dense display device and the first detection assembly, configured to drive the dense display device to emit light and to calculate the gaze direction of the first eye from the first detected image information; wherein the control assembly is configured to calculate the gaze direction of the first eye using a corneal curvature algorithm model and/or a pupil center corneal reflection algorithm model. The invention satisfies requirements of an eye tracking device such as a simple structure and high tracking accuracy.

Description

Eye movement tracking device and method
Technical Field
The invention relates to the technical field of eye movement tracking, in particular to an eye movement tracking device and method.
Background
Eye tracking technology is widely applied in modern society and is one of the technical means for studying visual behavior and human behavior in fields such as psychology, neuromarketing, neurocognition, user experience, basic research, and market research. Application terminals for eye tracking fall into five major categories: desktop computers, televisions and display panels, head-worn devices, automotive devices, and handheld devices. On desktop computers, eye gaze is mainly used to input text and to identify users at login; on televisions and display panels, it is used for selecting, navigating, and switching channels; head-worn devices are mainly applied in virtual reality, for collecting data on the relation between user gaze and cognition, psychological analysis, and the like; automotive devices mainly track the driver's eyes, assess driver drowsiness, and interface with driver-assistance systems; and on handheld devices, tracking the user's gaze can activate the system for authentication, interactive display, and the like.
However, existing eye tracking devices based on eye tracking technology cannot simultaneously satisfy requirements such as a simple structure, high tracking accuracy, and freedom from external calibration, which limits their application.
Disclosure of Invention
The invention provides an eye tracking device and method that satisfy requirements of an eye tracking device such as a simple structure and high tracking accuracy.
According to an aspect of the present invention, there is provided an eye tracking device comprising:
a dense display device provided with a first display area configured to emit infrared vector structured light to a first eye;
a first detection assembly comprising at least two first probes, the first detection assembly configured to receive light reflected or scattered by a first eye to acquire first detected image information, the first detected image information comprising a first pupil center point and at least one first glint point;
a control assembly electrically connected with the dense display device and the first detection assembly, configured to drive the dense display device to emit light and to calculate the gaze direction of the first eye according to the first detected image information;
wherein the control component is configured to calculate the gaze direction of the first eye using a corneal curvature algorithm model and/or a pupil center corneal reflection algorithm model.
Optionally, the control component is configured to calculate a first calculation result of the gaze direction of the first eye using a pupil center corneal reflection algorithm model; to learn a corneal curvature algorithm model using a machine learning method and the first calculation result; and to calculate a second calculation result of the gaze direction of the first eye using the learned corneal curvature algorithm model and the first glint-point data.
Optionally, the control component is further configured to calculate a second calculation result of the gazing direction of the first eye by using the corneal curvature algorithm model when no new first calculation result is calculated by using the pupil center corneal reflection algorithm model; after a new first calculation result is calculated by using the pupil center corneal reflection algorithm model, the corneal curvature algorithm model is calibrated by using the new first calculation result.
Optionally, the eye tracking apparatus further comprises at least one first mirror; the dense display device is further provided with at least one second display area configured to emit infrared vector structured light to the first eye by reflection of the corresponding first mirror.
Optionally, the eye tracking device further comprises a transflective (semi-reflecting, semi-transmitting) mirror and a second detection assembly; the dense display device is further provided with a third display area configured to emit infrared vector structured light through the transflective mirror to the second eye;
the second detection assembly comprises at least two second probes, the second detection assembly is configured to receive light reflected or scattered by a second eye to acquire second detection image information, the second detection image information comprises a second pupil center point and at least one second glint point;
the control component is electrically connected with the second detection component, is configured to drive the dense display device to emit light, and calculates the gazing direction of a second eye according to the second detection image information;
wherein the control component is configured to calculate the gaze direction of the second eye using a corneal curvature algorithm model and/or a pupil center corneal reflection algorithm model.
Optionally, the eye tracking apparatus further includes at least one second reflector, and the dense display device is further provided with at least one fourth display area configured to emit infrared vector structured light that passes through the transflective mirror and is reflected by the corresponding second reflector to the second eye.
Optionally, the control assembly further comprises a learning machine configured to learn a corneal curvature algorithm model.
Optionally, the first probe is a fiber-optic probe, and the first detection assembly further includes a first photodetector; the first photodetector is optically connected with the fiber-optic probe and electrically connected with the control component;
or, the first probe is a photoelectric detector and is electrically connected with the control component.
According to another aspect of the present invention, there is provided an eye-tracking method performed by the above eye-tracking apparatus, the eye-tracking method including:
the control component drives the dense display device to emit light and calculates the gazing direction of a first eye according to the first detection image information;
wherein the control component is configured to calculate a gaze direction of the first eye using a corneal curvature algorithm model and/or a pupil center corneal reflection algorithm model.
Optionally, the method comprises calculating a first calculation result of the gaze direction of the first eye using a pupil center corneal reflection algorithm model; learning a corneal curvature algorithm model using a machine learning method and the first calculation result; and calculating a second calculation result of the gaze direction of the first eye using the learned corneal curvature algorithm model and the first glint-point data.
Optionally, the control component is further configured to calculate a second calculation result of the gazing direction of the first eye by using the corneal curvature algorithm model when no new first calculation result is calculated by using the pupil center corneal reflection algorithm model; after a new first calculation result is calculated by using the pupil center corneal reflection algorithm model, the corneal curvature algorithm model is calibrated by using the new first calculation result.
Optionally, before the control component drives the dense display device to emit light and calculates the gazing direction of the first eye according to the first detected image information, the method further comprises:
the control component drives the dense display device to perform global scanning, a first eye detection image formed by the global scanning is obtained, and the pupil position of a first eye and the position of a corresponding first scintillation point are calculated according to the first eye detection image.
Optionally, the eye tracking device further comprises a transflective mirror and a second detection assembly; the dense display device is further provided with a third display area configured to emit infrared vector structured light through the transflective mirror to the second eye; the second detection assembly comprises at least two second probes and is configured to receive light reflected by the second eye to acquire second detected image information, the second detected image information comprising a second pupil center point and at least one second glint point; and the control component is electrically connected with the second detection assembly;
the eye tracking method further comprises: the control component drives the dense display device to emit light and calculates the gazing direction of a second eye according to the second detection image information; wherein the control component is configured to calculate a gaze direction of the second eye using a corneal curvature algorithm model and/or a pupil center corneal reflection algorithm model;
before the control component drives the dense display device to emit light and calculates the gazing direction of the second eye according to the second detection image information, the method further comprises the following steps:
the control component drives the dense display device to perform global scanning, a second eye detection image formed by the global scanning is obtained, and the pupil position of a second eye and the position of a corresponding second flicker point are calculated according to the second eye detection image.
Optionally, the driving the dense display device by the control component to emit light, and calculating the gazing direction of the first eye according to the first detected image information comprises:
the control component determines the displacement range of the pupil region of the next frame according to the pupil region of the current frame and a first preset formula, determines the displacement range of the first flashing point of the next frame according to the first flashing point position of the current frame and a second preset formula, and drives the dense display device to scan only the displacement range of the pupil region and/or only the displacement range of the first flashing point in the next frame.
Optionally, driving the dense display device to scan only the pupil-region displacement range and the first glint-point displacement range includes:
driving the dense display device to emit, and scan with, the first structured light calculated from the pupil area of the current frame within the pupil-region displacement range of the next frame; the first structured light is perfect-circle structured light or structured light whose contour corresponds to the pupil contour of the first eye in the current frame;
driving the dense display device to scan only the first glint-point displacement range includes:
driving the dense display device to emit line-structured light to scan the displacement range of the first glint point; the cross-section of the line-structured light consists of two mutually perpendicular line segments.
Optionally, the eye tracking device further comprises a first mirror; the dense display device is also provided with a second display area which is configured to emit infrared vector structured light to the first eye through reflection of the first reflector;
the control component is further configured to drive the first display area and the second display area to emit light in a time-sharing mode.
According to the technical scheme of the embodiments of the invention, with a dense display device a plurality of first glint points can generally be generated on the first eye, simultaneously or at different times; among these first glint points there are points falling on the corneal region and points falling on the non-corneal region. The coordinates of the first pupil center point can be obtained from the first detected image information, and the gaze direction of the first eye can then be calculated using at least one of the corneal curvature algorithm model and the pupil center corneal reflection algorithm model. Because the gaze direction of the first eye can be calculated with both algorithm models, the calculation accuracy of the gaze direction can be greatly improved and the compatibility is stronger. Note that during calculation it is not necessary to determine whether a glint point falls on the corneal region: the control component is configured with machine learning capability, so that, as the amount of learning increases, glint points falling on the corneal region and glint points falling on the non-corneal region can be identified in subsequent recalculation. The eye tracking device of this embodiment has a simple structure and is easy to integrate, and since at least one of the corneal curvature algorithm model and the pupil center corneal reflection algorithm model can be used to calculate the gaze direction, it also offers high calculation accuracy. That is, the eye tracking device of this embodiment reconciles the requirements of a simple structure and high tracking accuracy.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present invention, nor do they necessarily limit the scope of the invention. Other features of the present invention will become apparent from the following description.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic structural diagram of an eye tracking apparatus according to an embodiment of the present invention;
fig. 2 is a schematic diagram of first detected image information according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a corneal curvature algorithm model according to an embodiment of the present invention;
FIG. 4 is a schematic view of the variation curve of the eyeball gaze direction versus the beam elevation angle;
fig. 5 is a flowchart of an eye tracking method according to an embodiment of the invention;
FIG. 6 is a flowchart of another eye tracking method according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of the determination of pupil region displacement range;
fig. 8 is a schematic diagram of the determination of the first glint-point displacement range.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Fig. 1 is a schematic structural diagram of an eye tracking device according to an embodiment of the present invention. Referring to fig. 1, the eye tracking device includes: a dense display device 11 provided with a first display area configured to emit infrared vector structured light to a first eye; a first detection assembly 12 including at least two first probes 121, the first detection assembly 12 being configured to receive light reflected or scattered by the first eye to acquire first detected image information, the first detected image information including a first pupil center point and at least one first glint point; and a control component 13 electrically connected with the dense display device 11 and the first detection assembly 12, configured to drive the dense display device 11 to emit light and to calculate the gaze direction of the first eye according to the first detected image information; wherein the control component 13 is configured to calculate the gaze direction of the first eye using a corneal curvature algorithm model and/or a pupil center corneal reflection algorithm model.
Specifically, as shown in fig. 1, the eye tracking device may be mounted on a pair of glasses; after mounting, the position of the device may first be corrected and then fixed. The positions of the lenses, the dense display device, and the first detection assembly are calculated and designed according to the specifications of the glasses, and position calibration is completed before the glasses leave the factory; that is, the relative positions of the lenses, the dense display device, and the first detection assembly are known. The dense display device 11 may be, for example, an LED (Light Emitting Diode) chip that typically includes four thousand pixels; its scanning rate is 200K when all four thousand pixels are lit simultaneously, and single-point scanning may reach 800M frames/sec. Illustratively, the LED chip is a high-speed projection chip emitting infrared vector structured light, and the aperture of its lens is small enough (about 0.2 mm in diameter) that the depth of field is large, the emitted light is fine, and the resolution is high. The projection chip can be driven to project structured light in a time-sharing manner to scan the human eye, can be dynamically driven to project any one or more light paths to a specified position, and can also be driven to alternate between single-point and structured-light scanning. The dense display device 11 includes a plurality of display regions, each of which includes a plurality of pixels for emitting infrared vector structured light in different directions. For example, a first display region on the dense display device 11 may emit infrared vector structured light to a first eye. The first eye is, for example, the left or right eye of a living body (e.g., a human), which is not particularly limited in this embodiment. Preferably, the position of the eye tracking device may be calibrated (e.g., by adjusting the position of the glasses) so that the infrared vector structured light emitted from the first display region illuminates, as far as possible, the region of the first eye where the eyeball is located. The first detection assembly 12 receives light reflected or scattered by the first eye, so that the eye tracking apparatus can derive the first detected image information from the light received by the first detection assembly 12. Different first probes 121 are responsible for detecting different regions of the first eye, and the information detected by the plurality of first probes 121 is integrated to obtain the first detected image information; naturally, the greater the number of first probes 121, the finer the detection and the higher its accuracy. Fig. 2 is a schematic diagram of the first detected image information according to an embodiment of the present invention; note that the first detected image information of this embodiment may also be only a part of the complete image in fig. 2 (described later). The first detected image information includes at least the first pupil center point P and at least one first glint point A (two first glint points A1 and A2 are shown in the figure).
Here, the first pupil center point P is the pupil center of the first eye, and the first glint point is a Purkinje image generated by the first display region of the dense display device illuminating the cornea; the glint point (CR) is produced by reflection, on the outer corneal surface, of light entering the pupil. Because the eyeball is treated as approximately spherical when the human eye is tracked using the pupil center corneal reflection (PCCR) principle, the position of a glint point on the eyeball remains essentially unchanged as the eyeball rotates. In addition, since the first display region includes a plurality of pixels (that is, the first display region is equivalently provided with a plurality of light sources for illuminating the first eye), at least one first glint point is generated in the first detected image information. The control component 13 may be integrated in the frame of the glasses and configured to send drive signals to the dense display device 11 to drive it to emit light. The control component 13 may also calculate the gaze direction of the first eye from the first detected image information, specifically using at least one of the corneal curvature algorithm model and the pupil center corneal reflection algorithm model. The principles of the two models are described first.
Fig. 3 is a schematic diagram illustrating the principle of the corneal curvature algorithm model according to an embodiment of the present invention. Referring to fig. 3, the eyeball 51 is assumed to be an approximately spherical structure; the curvature of the cornea 52 differs from that of the eyeball 51, the cornea 52 bulging outward to a certain degree, so the eyeball 51 and the cornea 52 are spherical surfaces of two different curvatures. Let O be the center of the eyeball 51 and o the center of the cornea in normal (straight-ahead) viewing; the eyeball center remains fixed when the gaze direction changes. The cornea 52 after the gaze direction has changed is drawn with a solid line, and the line connecting the eyeball center O and the new cornea center o' defines the vector Oo'. The angle θ between Oo' and the normal viewing direction Oo determines the deflection angle of the gaze direction of the human eye. Suppose S_i (i > 0) are the emitters, i.e., the dense display device, and P_j (j > 0) are the means for receiving the signal reflected or scattered by the first eye (i.e., the first probes); φ_ij(α, β) is the elevation angle of the beam emitted by emitter S_i and received by probe P_j, where α is the light-emission angle in the horizontal direction and β the light-emission angle in the vertical direction; L_ij is the distance from emitter S_i to P_j; and G_ij(x, y, z) is the coordinate position, in the frontal view of the eyeball, at which the received beam meets the eyeball surface. Since S_i(x, y, z) and P_j(x, y, z) are fixed devices whose positions are known, the distance L_ij is determinable, and the beam received by P_j, i.e., φ_ij(α, β), is determinable as well. Therefore there must exist a formula f_1 that can calculate the coordinate position of the reflection point G_ij(x, y, z):

G_ij(x, y, z) = f_1(S_i(x, y, z), P_j(x, y, z), L_ij, φ_ij(α, β)),

and there is likewise a formula f that can find the gaze direction Oo' from it:

Oo' = f(G_ij(x, y, z)).

Therefore, by acquiring a plurality of glint-point coordinates and applying machine learning, the gaze direction Oo' can be learned.
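As an illustration of how the formula f could be learned from glint-derived features and externally supplied "standard" gaze labels (which, as described later, the pupil center corneal reflection model can provide), the following is a minimal regression sketch in Python; the ridge least-squares choice, the feature layout, and all function names are assumptions for illustration, not the patent's actual method.

```python
# Sketch: learn gaze = f(glint features) from PCCR-labelled samples.
# Assumed feature rows: [x_G, y_G, z_G, alpha, beta, L_ij] per glint/probe pair;
# labels are PCCR gaze angles (theta_h, theta_v).
import numpy as np

def fit_gaze_model(glint_features: np.ndarray, pccr_gaze: np.ndarray,
                   lam: float = 1e-3) -> np.ndarray:
    """Ridge least-squares fit; returns W with gaze ~= [features, 1] @ W."""
    X = np.hstack([glint_features, np.ones((len(glint_features), 1))])  # add bias
    # Regularised normal equations: (X^T X + lam*I) W = X^T y
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ pccr_gaze)

def predict_gaze(W: np.ndarray, glint_features: np.ndarray) -> np.ndarray:
    X = np.hstack([glint_features, np.ones((len(glint_features), 1))])
    return X @ W
```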
Fig. 4 is a schematic diagram of the variation curve of the eyeball gaze direction versus the beam elevation angle. Suppose φ_0 is the beam elevation angle in normal viewing and the eyeball rotates horizontally through 180 degrees; then, through the emitter S_i and the means P_j for receiving the reflected infrared light, the variation curve of the beam elevation angle φ_ij can be tracked. Owing to the characteristics of the eyeball structure, regardless of whether the eyeball turns left or right, or upward or downward, the tracked beam elevation angle φ_ij changes at some moment and remains basically unchanged once the eyeball has rotated past a certain angle; φ_ij can therefore reflect a change in the gaze direction of the human eye. Of course, fig. 4 is only a schematic diagram: every eyeball is somewhat different, and the actual curves corresponding to individual eyeballs are not completely identical. With our eye tracking device, even without knowing f_1 and how to calculate f, the standard signal can be input to the learning machine so that the machine learns the calculation formula f, and the gaze direction Oo' of the eyeball can then be output by tracking the positions of the glint points. Applied in this way, the corneal curvature algorithm has higher output efficiency.
The specific principle of the pupil center corneal reflection algorithm model is as follows:
the eye is illuminated with a light source to produce significant reflections and a camera is used to capture images of the eye with these reflections. The images collected by the camera are then used to identify the reflections of the light source on the pupil and the cornea, i.e. the glint points, so that the vector of the eye movement can be calculated by calculating the angle between the pupil center and the glint points, and then the direction of the line of sight can be calculated by combining the direction of the vector with the geometric features of other reflections. In other words, the pupil center corneal reflection algorithm model needs to acquire the coordinates of the glint point on the cornea and the pupil center point coordinates.
In this embodiment, since the dense display device is used as the projection device, it can generate at least one first glint point on the first eye. In the prior art, when the corneal curvature algorithm model is used for calculation, the calculation result for a first glint point is reliable only when that glint point falls on the corneal region; when the pupil center corneal reflection algorithm model is used, the calculation result is reliable only when the first glint point falls on the non-corneal region. Because the gaze direction of the first eye can be calculated with both algorithm models, the calculation accuracy of the gaze direction can be greatly improved and the compatibility is stronger. Note that it is not necessary to determine during calculation whether a glint point falls on the corneal region: the control component 13 is configured with machine learning capability, so that, as the amount of learning increases, glint points falling on the corneal region and glint points falling on the non-corneal region can be identified in subsequent recalculation.
The eye tracking device of this embodiment has a simple structure and is easy to integrate. Since at least one of the corneal curvature algorithm model and the pupil center corneal reflection algorithm model can be used to calculate the gaze direction, it also offers high calculation accuracy. That is, the eye tracking device of this embodiment reconciles the requirements of a simple structure and high tracking accuracy.
Optionally, with continued reference to fig. 1, the eye-tracking apparatus comprises at least one first mirror 14, and the dense display device 11 is further provided with at least one second display region configured to emit the infrared vector structured light to the first eye by reflection of the corresponding first mirror.
Specifically, in this embodiment, by providing at least one first mirror, a plurality of first virtual images 111 of the dense display device can be created, which is equivalent to using a plurality of light sources to emit infrared vector structured light to the first eye from multiple angles. With a single light source, the traditional approach that uses the pupil center corneal reflection algorithm model alone must rely on external calibration so that the gaze direction has a unique solution. In this embodiment, the first virtual image 111 created by the first mirror 14 allows the first eye to be scanned from multiple angles, so that multiple glint points exist on the first eye. The probability that all of these glint points fall on the cornea is relatively low, and as long as one glint point does not fall on the cornea, that point can serve as a reference for correction, so that the gaze-direction calculation has a unique solution and further correction is avoided. In the above embodiment, whether a glint point falls outside the cornea can be judged from the distance between the glint point and the pupil center point: for example, if this distance is greater than a preset value, the glint point lies outside the cornea, the preset value being determined according to actual conditions; alternatively, the point farthest from the pupil center point may directly be taken as a point not falling on the cornea. As shown in fig. 2, the first detected image information then further includes a pupil center point P' and glint points A' (in this embodiment, A1' and A2') corresponding to the first virtual image 111. Furthermore, by time-division multiplexing, the dense display device and the first virtual image 111 do not conflict with each other when locating the pupil center point and the glint points (described later).
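The distance rule just described fits in a few lines; this is a sketch under the stated assumptions (a preset radius threshold, with the farthest glint as a fallback reference), and all names are hypothetical.

```python
# Sketch: pick glints judged to lie outside the cornea.
import math

def off_cornea_glints(pupil_center, glints, preset_radius):
    dist = lambda g: math.hypot(g[0] - pupil_center[0], g[1] - pupil_center[1])
    outside = [g for g in glints if dist(g) > preset_radius]  # distance rule
    # Fallback: the glint farthest from the pupil center is taken as off-cornea.
    return outside or [max(glints, key=dist)]
```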
Alternatively, the gaze direction may be calculated using the corneal curvature algorithm model alone. The dense display device 11 can project a plurality of structured lights to the first eye, and multi-angle scanning of the first eye can be achieved using the virtual first virtual image; this scanning approach yields enough glint-point data. For example, for the first eye, the two first probes detect light and return signals to the control module, and the control module records the positions of the returned signals of the scanned region to generate a glint-point angular distribution map. The glint-point angular distribution map records all known information, such as the area of the dense display device emitting the structured light, the position of the first probe receiving the reflected light, and the elevation angle of the outgoing light; since the outgoing beam is a vector, the vector information has four dimensions: up, down, left, and right. Because the corneal curvature algorithm model can resolve the gaze direction only for glint points that fall on the cornea, projecting multi-point structured light from multiple angles ensures that enough glint points fall on the cornea, so that enough data can be substituted into the corneal curvature algorithm model to output the gaze direction of the first eye. It will be appreciated that, because the eye curvature differs between living bodies, the same corneal curvature algorithm model is not applicable to all of them; therefore, when tracking the eye gaze direction using the corneal curvature algorithm model alone, external calibration is preferably performed before use.
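As a data-structure illustration, one way the entries of such an angular distribution map might be organized is sketched below; the patent only lists the recorded quantities, so the field names are assumptions.

```python
# Sketch: one record of the glint-point angular distribution map.
from dataclasses import dataclass
from typing import Tuple

@dataclass
class GlintRecord:
    emitter_region: Tuple[float, float, float]  # S_i position on the dense display device
    probe_position: Tuple[float, float, float]  # P_j position that received the reflection
    alpha: float                                # horizontal light-emission angle
    beta: float                                 # vertical light-emission angle
    elevation: float                            # beam elevation angle phi_ij
```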
In the above embodiment, since the light beams of the dense display device have a certain divergence angle, a large number of beams would cover areas other than the eyeball when the eye is scanned, wasting resources. Preferably, when the dense display device is installed, its angle may be adjusted so that both the outgoing light of the first display area and the light of the second display area reflected by the first mirror cover the first eye, thereby scanning the first eye from different angles. A corresponding first probe is arranged in the area covered by each beam, so that the probe of the corresponding area is only responsible for receiving the reflected or scattered light of the scanning beam in that area.
Alternatively, the control component in this embodiment may be configured to calculate a first calculation result of the gaze direction of the first eye using the pupil center corneal reflection algorithm model; to learn the corneal curvature algorithm model using a machine learning method and the first calculation result; and to calculate a second calculation result of the gaze direction of the first eye using the learned corneal curvature algorithm model and the first glint-point data.
Specifically, besides using the corneal curvature algorithm model or the pupil center corneal reflection algorithm model separately, the control component can calculate the gaze direction of the first eye by combining the two. A traditional corneal curvature algorithm model attains a degree of robustness only when trained on massive manually labeled data; in this embodiment, combining the two algorithms allows self-learning and self-calibration to be completed without data labeling. Since no manual labeling is performed, a specific formula for the corneal curvature algorithm f cannot be obtained at the beginning. At that stage, however, a first calculation result of the gaze direction can be calculated with the pupil center corneal reflection algorithm model and output, as standard information, to a learning machine (located in the control component or in the cloud). The learning machine receives the standard information and extracts the glint-point angular distribution map collected on the corresponding frame for machine learning, thereby learning the corneal curvature algorithm f and establishing the relation between the coordinate changes of the corneal glint points and the gaze direction. Subsequent tracking can then directly substitute glint-point angular distribution data into the corneal curvature algorithm f to rapidly calculate the gaze direction of the eye. Because the two algorithm models supplement each other, this embodiment can greatly reduce the amount of information to be computed and improve the refresh rate.
Further, in the above embodiment, the control component is further configured to calculate a second calculation result of the gazing direction of the first eye by using the corneal curvature algorithm model when no new first calculation result is calculated by using the pupil center corneal reflection algorithm model; and after a new first calculation result is calculated by using the pupil center corneal reflection algorithm model, calibrating the corneal curvature algorithm model by using the new first calculation result.
Specifically, the pupil center corneal reflection algorithm model needs both the glint-point coordinates and the pupil-center coordinates, whereas the corneal curvature algorithm model needs only the glint-point coordinates. Glint-point coordinates are acquired quickly, while pupil-center coordinates are acquired slowly (glint-point coordinates are acquired roughly ten times as fast). Therefore, in this embodiment, as soon as new glint-point coordinates are available, a second calculation result of the gaze direction is calculated with the corneal curvature algorithm model and output as the gaze direction. Once new pupil-center coordinates have been calculated, a first calculation result of the gaze direction is calculated with the pupil center corneal reflection algorithm model and output as the gaze direction of the first eye; this first calculation result is also input to the learning machine as standard information, and the learning machine extracts the glint-point angular distribution data at the frame position corresponding to the standard information for learning, thereby updating the corneal curvature algorithm model, which thus becomes more accurate the more it is used. In this way, real-time calibration and real-time learning of the corneal curvature algorithm model can be achieved without external calibration, making the calculated gaze direction more accurate.
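The interplay described above can be summarized as a small control loop; this is a sketch assuming glint-only measurements arrive far more often than measurements with a fresh pupil center, and every type and callback name is hypothetical.

```python
# Sketch: fast glint-only gaze via the curvature model, recalibrated by slower PCCR results.
from dataclasses import dataclass
from typing import Callable, Iterable, Iterator, Optional, Sequence, Tuple

Gaze = Tuple[float, float]

@dataclass
class Frame:
    glints: Sequence[Tuple[float, float]]
    pupil_center: Optional[Tuple[float, float]] = None  # present only on slow frames

def tracking_loop(frames: Iterable[Frame],
                  curvature_gaze: Callable[[Sequence], Gaze],
                  pccr_gaze: Callable[[Tuple, Sequence], Gaze],
                  recalibrate: Callable[[Sequence, Gaze], None]) -> Iterator[Gaze]:
    for frame in frames:
        if frame.pupil_center is not None:
            # Slow path: a new PCCR result exists; output it and feed it back
            # as the standard information that recalibrates the curvature model.
            gaze = pccr_gaze(frame.pupil_center, frame.glints)
            recalibrate(frame.glints, gaze)
        else:
            # Fast path: glint coordinates alone drive the curvature model.
            gaze = curvature_gaze(frame.glints)
        yield gaze
```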
In the above embodiments, the first eye is taken as an example, and it is understood that a living body generally includes both eyes, i.e., the first eye and the second eye, and therefore:
further, with continued reference to fig. 1, the eye tracking device further comprises: a half-reflecting and half-transmitting mirror 15 and a second detection component 16; the dense display device 11 is further provided with a third display area, and the third display area is configured to transmit the infrared vector structured light to the second eye part through the half-reflecting and half-transmitting mirror; the second detection assembly 16 comprises at least two second probes, the second detection assembly being configured to receive light reflected or scattered by the second eye to acquire second detected image information, the second detected image information comprising a second pupil center point and at least one second glint point; the control component 13 is electrically connected with the second detection component 16, and is configured to drive the dense display device to emit light, and calculate the gazing direction of the second eye according to the second detection image information; wherein the control component 13 is configured to calculate the gaze direction of the second eye using a corneal curvature algorithm model and/or a pupil center corneal reflection algorithm model.
Specifically, the second detection assembly 16 is similar to the first detection assembly, the second detected image information is similar to the first detected image information, and the control component 13 calculates the gaze direction of the second eye in the same way as that of the first eye, which is not repeated here. Notably, in this embodiment no additional light source is provided for the second eye; instead, a second virtual dense display device 112 is split off from the dense display device by a transflective mirror, so that the outgoing light of different display areas of the dense display device travels to different eyes, the outgoing light of the third display area reaching the second eye through the transflective mirror. This reduces the number of light sources and redirects light that would otherwise illuminate the non-eyeball part of the first eye toward the second eye, improving the utilization of the dense display device.
Further, with continued reference to fig. 1, the eye tracking apparatus further comprises at least one second mirror 17, and the dense display device is further provided with at least one fourth display area configured to emit infrared vector structured light that passes through the transflective mirror 15 and is reflected by the corresponding second mirror 17 to the second eye.
Specifically, similarly to the effect of the first mirror, providing the second mirror 17 creates a further second virtual image 113 of the second virtual dense display device 112, so that the second eye can be scanned from multiple angles, further improving the accuracy of the gaze-direction calculation for the second eye.
In the above embodiment, the transflective mirror 15 is disposed at the middle position of the glasses between the two eyes, and is placed as close to the eyes, and as far from the frame, as possible without affecting the display.
The dense display device, the fiber-optic probes, the transflective mirror, and the mirrors may follow these principles when installed: approach the eye as closely as possible without affecting the display; disperse the components as much as possible to achieve multi-angle detection; and ensure that a detection map can be generated whenever the dense display device illuminates the eye, i.e., that the pupils, the glint points, and their corresponding virtual images appear on the detection map.
Optionally, with continued reference to fig. 1, in the above embodiment the first probe 121 and the second probe 161 may be fiber-optic probes; in that case the first detection assembly may further include a first photodetector 122 and the second detection assembly a second photodetector 162, the first photodetector 122 being optically connected to the fiber-optic probes and electrically connected to the control assembly, and the second photodetector 162 being optically connected to the second probe and electrically connected to the control component 13. One end of each fiber-optic probe (including the first and second probes) is fixed and distributed on the inner side of the lens frame and serves as a sensor for receiving light reflected or scattered from the eyes. The tip of the fiber-optic probe may carry a device that assists spatial light coupling to increase the signal light intensity, such as a coupling lens; color filters can of course also be added to reduce interference. When arranging the fiber-optic probes, they need to be as close to the eyes as possible without affecting the display, and as dispersed as possible to achieve multi-angle detection; each fiber-optic probe has a corresponding detection area, and distance factors can be considered in the arrangement so that the received projection signals do not overlap. The coupling between the fiber-optic probes and the photodetectors can take various forms (for example, N-to-1 coupling, N-to-2 coupling, or several fibers feeding a combiner whose output enters the photodetector). Combining the optical signals lets multiple fibers be glued and mounted more conveniently, saves photodetectors, reduces the photodetector configuration, and optimizes cost; it also simplifies the circuit design, making installation easier. The photodetector converts the combined optical signal into one electrical signal, which is amplified and transmitted to the control assembly. Of course, the first and second probes can also be photodetectors directly, but then the number of photodetectors in the eye tracking device is large and the cost is high.
Preferably, the control component 13 comprises a learning machine configured to learn a corneal curvature algorithm model. Specifically, the corneal curvature algorithm model needs to be obtained by learning, and the learning machine in the control component 13 may be a remote learning machine or a local learning machine, which is not particularly limited in the present invention.
Fig. 5 is a flowchart of an eye tracking method according to an embodiment of the present invention, and referring to fig. 5, the eye tracking method includes:
step S301, the control assembly drives the dense display device to emit light and calculates the gazing direction of the first eye according to the first detection image information; wherein the control component is configured to calculate the gaze direction of the first eye using a corneal curvature algorithm model and/or a pupil center corneal reflection algorithm model.
Specifically, the control component drives the dense display device to emit light so that the first display area emits infrared vector structured light to the first eye, and the light reflected or scattered by the first eye is detected by the first detection assembly; the control component generates the first detected image information from the light detected by the first detection assembly, the first detected image information comprising a first pupil center point and at least one first glint point. The dense display device may produce at least one first glint point on the first eye. In this embodiment, with a dense display device a plurality of first glint points can generally be generated on the first eye, among which there are inevitably glint points falling on the corneal region and glint points falling on the non-corneal region; the coordinates of the first pupil center point can be obtained from the first detected image information, so the gaze direction of the first eye can be calculated using at least one of the corneal curvature algorithm model and the pupil center corneal reflection algorithm model. Because the gaze direction of the first eye can be calculated with both algorithm models, the calculation accuracy and efficiency of the gaze direction can be greatly improved and the compatibility is stronger. Note that it is not necessary to determine during calculation whether a glint point falls on the corneal region: the control component 13 is configured with machine learning capability, so that, as the amount of learning increases, glint points falling on the corneal region and glint points falling on the non-corneal region can be identified in subsequent recalculation.
Optionally, fig. 6 is a flowchart of another eye tracking method according to an embodiment of the present invention, and referring to fig. 6, the eye tracking method may specifically include:
step S401, the control module drives the dense display device to perform global scanning, obtains the global scanning to form a first eye detection diagram, and calculates a pupil position of a first eye and a corresponding first scintillation point position according to the first eye detection diagram.
Specifically, when the eye tracking device further comprises a transflective mirror and a second detection assembly, this step may further comprise: the control component drives the dense display device to perform a global scan, obtains a second eye detection map formed by the global scan, and calculates the pupil position of the second eye and the position of the corresponding second glint point according to the second eye detection map. The first detected image information and the second detected image information may be acquired simultaneously. After the eye tracking device is powered on, blind scanning may be started first, that is, the first eye and the second eye are scanned globally in a single-point scanning mode, for example point by point, recording the brightness information fed back at each point. Taking the first eye as an example, the first eye detection map formed by point-by-point scanning is also shown in fig. 2: fig. 2 shows bright areas as well as dark areas AA. According to the features of the eye, the darkest area AA in the detection map is taken to be the pupil area to be tracked, and the brightest points are taken to be glint points. Note that when the light beam scans the area outside the pupil, it is directly reflected and the detection assembly receives stronger brightness information, forming a bright area in the detection map; when the beam scans the pupil area, the brightness returned after multiple reflections inside the pupil is weaker or cannot be received by the detection assembly, forming a dark area. The major axis D_0 of the dark area AA, the ratio β_0 of the major axis to the minor axis, and the angle θ_0 between the first eye and the dense display device can be calculated by image processing techniques. The coordinates P(x_0, y_0) of the dark-area center (the pupil center) and the coordinates of the glint points A1(x_A1, y_A1) and A2(x_A2, y_A2) can also be determined in the detection map. Meanwhile, the coordinates P'(x'_0, y'_0) of the pupil center mirrored by the reflector, and the coordinates of the virtual glint points A1'(x'_A1, y'_A1) and A2'(x'_A2, y'_A2), can be seen in the detection map.
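A minimal sketch of this dark-area / bright-point reading of the global-scan map follows; the 5% darkness percentile and the function name are assumptions, and a fuller implementation would also fit an ellipse to the dark area to recover D_0, β_0, and θ_0.

```python
# Sketch: locate pupil (darkest region) and glints (brightest pixels) in a brightness map.
import numpy as np

def locate_pupil_and_glints(brightness: np.ndarray, n_glints: int = 2):
    dark = brightness <= np.percentile(brightness, 5)    # dark area AA
    ys, xs = np.nonzero(dark)
    pupil_center = (xs.mean(), ys.mean())                # centroid of the dark area
    flat = np.argsort(brightness, axis=None)[-n_glints:] # brightest pixels
    w = brightness.shape[1]
    glints = [(int(i % w), int(i // w)) for i in flat]
    return pupil_center, glints
```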
Step S402, the control component determines the displacement range of the pupil region for the next frame according to the pupil region of the current frame and a first preset formula, determines the displacement range of the first glint point for the next frame according to the first glint-point position of the current frame and a second preset formula, and drives the dense display device, in the next frame, to scan only the pupil-region displacement range and/or only the first glint-point displacement range.
Specifically, a detection map is obtained through global scanning, and once the coordinates of the pupil center and of the corneal glint points have been located, the structured-light tracking mode is entered. The advantage of structured-light scanning is that only the useful information area is scanned, i.e., only the positions within the pupil movement range and the corneal glint-point movement range are tracked, and a data-structure change relation between the structured light and the pupil center and glint points is established. This mode converts image processing into digital signal processing, greatly reducing the computational load and the number of scans and improving the processing rate; the signal-to-noise ratio can also be improved by adding a certain number of scans. After each global scan, a detection image of the eye is obtained, and a group of data structures M_0, together with a data structure M'_0 for the virtual image in the mirror, can be obtained through image processing techniques. We define i as the previous frame, i+1 as the current frame, and quantities without a subscript as the current measurement, which can be calculated by image analysis. The following data structures M_0 and M'_0 are obtained by image processing after the global scan:

M_0 = {PC: P(x_0, y_0); CR: A1(x_A1, y_A1), A2(x_A2, y_A2); D_0, β_0, θ_0}

M'_0 = {PC: P'(x'_0, y'_0); CR: A1'(x'_A1, y'_A1), A2'(x'_A2, y'_A2); D'_0, β'_0, θ'_0}

where PC is the pupil center and CR is the glint point.
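For illustration, the per-frame structure M could be held as follows; the field names are assumptions matching the quantities listed above.

```python
# Sketch: the per-frame data structure M_i (and M'_i for the mirror image).
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class EyeFrameData:
    pc: Tuple[float, float]        # pupil center P(x, y)
    cr: List[Tuple[float, float]]  # glint points A1, A2, ...
    D: float                       # pupil major axis
    beta: float                    # major-to-minor axis ratio
    theta: float                   # angle between the eye and the dense display device
```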
Subsequently, the pupil-region displacement range and the glint-point displacement range of frame i+1 can be determined from the frame-i data. FIG. 7 is a schematic diagram of determining the pupil-region displacement range. As shown in FIG. 7, based on the viewing angle and coordinates of the known dense display device, let Δt be the interval between scan frames, ε_max the maximum number of pixels the pupil center can move within Δt, and ω_max the fastest rotation speed of the eyeball. Then ε_max = α_1 × Δt × ω_max, where α_1 is a structural coefficient related to the relative positions of the sensing device, the dense display device and the eye, and can be obtained by static calibration. To ensure that the pupil center remains within the scanning range of the dense display device throughout the interval Δt, the scanning range for first-structured-light pupil tracking in frame i+1 is a rectangle aligned with the frame-i pupil long axis, centered on the pupil center, with long side D_i + 2ε_max and short side D_i/β_i + 2ε_max.
The first preset formula comprises a formula corresponding to the long axis and a formula corresponding to the short axis.
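As a sketch of how the first preset formula might be evaluated (pixel units and the α_1 value are assumptions; in practice α_1 would come from static calibration):

    def pupil_scan_rect(pc, D_i, beta_i, dt, omega_max, alpha1):
        """Displacement range of the pupil region for frame i+1: a rectangle
        centered on the frame-i pupil center, enlarged by the maximum
        pupil-center motion eps_max within dt (first preset formula)."""
        eps_max = alpha1 * dt * omega_max        # max pupil-center motion, pixels
        long_side = D_i + 2 * eps_max            # side along the pupil long axis
        short_side = D_i / beta_i + 2 * eps_max  # side along the short axis
        return pc, long_side, short_side

    # e.g. D = 40 px, beta = 2, dt = 1/130 s, omega_max = 720 deg/s,
    # alpha1 ~ 1.8 px per degree (assumed): eps_max is about 10 px
    print(pupil_scan_rect((192, 200), 40, 2, 1 / 130, 720, 1.8))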
After the pupil-region displacement range is determined, tracking scanning within it may begin using the first structured light 50. The first structured light 50 may be perfect-circle structured light with diameter d; when d = 0 the method degenerates to the single-point (pixel) scanning mode, and the dark area AA in the resulting detection map is the pupil area. When d > 0, to ensure that the center of the first structured light always stays within the pupil, the diameter d of the perfect-circle structured light is assumed to be much smaller than the pupil; taking the short axis of the pupil area in the frame-i detection image as the reference, d may be set to half the pupil short axis, d = D_i/(2β_i).
The control component drives the corresponding pixels in the dense display device to emit light, producing perfect-circle structured light of diameter d. Of course, in other embodiments other forms of structured light, such as elliptical structured light, or other preferred structured light may be used for scanning. While the brightness information returned by each pixel of the perfect-circle structured light is recorded, the coordinates of its center point are recorded at the same time. In the detection image scanned with perfect-circle structured light of diameter d > 0, the boundary of the dark area AA2 is not the real pupil boundary; the real pupil boundary is concentric and coaxial with the dark area AA2 but larger (dotted line 70 in fig. 7). Letting ΔD be the length of the dark region along the long axis, ΔD and θ_{i+1} can be obtained by image processing; combining this with the relationship between the perfect-circle structured light and the pupil, the pupil long axis of the current frame is D_{i+1} = ΔD + d, and β_{i+1} is obtained analogously from the dark-region length along the short axis. The pupil-center coordinates P_{i+1} are updated to (x_{i+1}, y_{i+1}), with x_{i+1} = Δx and y_{i+1} = Δy. By tracking and scanning the pupil in this way, the pupil-center coordinates of frame i+1 are updated in real time. The conventional single-point method would need one emission per pixel of the displacement rectangle, i.e. (D_i + 2ε_max) × (D_i/β_i + 2ε_max) scans; in this embodiment the number of scans is already reduced, relative to traditional full-frame scanning, in proportion to the scanned area, and scanning with perfect-circle structured light reduces the number of emissions again, since each step lights about πd pixels simultaneously. Thus, as long as d > 0, the number of scans is greatly reduced.
The scanning range of the first glint points in frame i+1 is determined from the positions of the frame-i first glint points. The principle for determining the glint-point scanning range is as follows: because of the structural characteristics of the eye (the curvature of the cornea differs from that of the eyeball), the glint point on the cornea moves when the eye rotates or when the relative position of the dense display device and the eye changes. Unlike pupil motion, the pixel displacement of the corneal glint points is very small when the relative position of the dense display device and the eye is unchanged. Let N_max be the maximum pixel displacement of the corneal glint point within Δt: N_max = α_2 × Δt × ω_max, which is the second preset formula; α_2 is a structural coefficient, related to the relative positions of the sensing device, the dense display device and the eye, and can be obtained by static calibration. Taking glint point A(x_A, y_A) as an example, with the known coordinates of corneal glint point A as the center, the scanning range for tracking glint point A is a 2N_max × 2N_max rectangle centered on A, i.e. (x ± N_max, y ± N_max). FIG. 8 is a schematic diagram of determining the first glint-point displacement range. After the first glint-point displacement range is determined, the dense display device can be driven to emit line-structured light to scan within that range; the cross section of the line-structured light consists of two mutually perpendicular line segments. It should be noted that the scanning range of the line-structured light cannot exceed the detection area of the original fiber probe (or photodetector).
The line-structured-light tracking scan of the glint points may specifically be: within the determined frame-i+1 glint scanning range, short-line structured light (the second structured light) of length L = 2N_max, concentric with the glint point, scans glint point A along the 0° (x-axis) and 90° (y-axis) directions; the brightness information and position returned by each pixel of the short-line structured light are recorded and a brightness-information curve is generated; the positions of the highest-brightness pixels in the x-axis and y-axis directions give the coordinate position of glint point A_{i+1}. The brightness information is detected by the fiber probe covering the region and passed back to the photodetector, where it is converted into an electrical signal processed by the control component. The corneal glint coordinates are thus tracked as the highest-brightness coordinate values recorded along the horizontal and vertical directions within each glint scanning range, and the frame-i glint coordinates serve as the center of the frame-i+1 glint scanning range. The data structure in the virtual image is updated in the same way.
If the single-point scanning mode were used within the determined glint scanning range, (2N_max)² scans would be required; short-line structured-light scanning reduces this to 4N_max scans. Of course, short-line structured light is not necessarily optimal; this embodiment is only an example, and better structured light and algorithms may further improve scanning efficiency and reduce the number of scans.
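A sketch of the short-line tracking step follows; the sweep order and the summed brightness readout are assumptions standing in for the fiber-probe/photodetector chain:

    import numpy as np

    def track_glint(brightness, glint_i, N_max):
        """Track one glint inside its 2N_max x 2N_max range with short-line
        structured light of length L = 2*N_max: a vertical line stepped along
        x, then a horizontal line stepped along y, i.e. 4*N_max emissions
        instead of (2*N_max)**2 single-point scans."""
        x0, y0 = glint_i
        xs = np.arange(x0 - N_max, x0 + N_max)
        ys = np.arange(y0 - N_max, y0 + N_max)
        # brightness returned by the whole line at each step position
        col = [brightness[y0 - N_max:y0 + N_max, x].sum() for x in xs]
        row = [brightness[y, x0 - N_max:x0 + N_max].sum() for y in ys]
        return int(xs[np.argmax(col)]), int(ys[np.argmax(row)])

    b = np.zeros((384, 384)); b[201, 206] = 1.0   # glint moved to (x=206, y=201)
    print(track_glint(b, (205, 200), N_max=5))    # -> (206, 201)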
In this embodiment, since only the pupil displacement range or the first glint-point displacement range is scanned, the amount of computation is greatly reduced, lowering the computational load and improving output efficiency.
In addition, in this embodiment, when the dense display device emits the first structured light and the second structured light, the pixels corresponding to the structured light in one frame emit light simultaneously rather than point by point, so scanning is much faster than in the point-by-point mode. Multi-point simultaneous emission also yields a higher signal-to-noise ratio than single-point scanning, so the received signal is closer to the true analog signal and boundaries are clearer. Moreover, excessively high single-point luminous power can easily damage the human eye; under the same total power, multi-point structured-light scanning disperses the single-point power, and under the same single-point power a larger area can be scanned, raising the overall power.
In addition, because the pupil center and the glint points are tracked in real time, either may occasionally be lost; once lost, the process must return to step S401 for a new global scan, which ensures tracking accuracy.
In addition, in the embodiment, when the pupil center corneal reflection algorithm model is adopted for calculation, the method can be suitable for a small-gain dense display device; and when the corneal curvature algorithm model mode is adopted, the method can be suitable for a large-gain dense display device.
Preferably, because structured light of the corresponding shape is used for scanning and the dense display device comprises a plurality of pixels, the control component can drive the dense display device so that the corresponding structured light is switched to the defined displacement range at any time, making tracking more efficient.
Step S302, calculating a first calculation result of the gazing direction of the first eye by using a pupil center corneal reflection algorithm model; and learning a corneal curvature algorithm model by using a machine learning method and the first calculation result, and calculating a second calculation result of the gazing direction of the first eye by using the learned corneal curvature algorithm model and the first glint point data.
In particular, the control component uses a combination of the two algorithms to calculate the gaze direction of the first eye. A traditional corneal curvature algorithm model needs massive manually labeled data as training material to reach a given robustness; combining the two algorithms as in this embodiment allows self-learning and self-calibration without any data labeling. When a specific formula for the corneal curvature algorithm f has not yet been obtained, the first calculation result of the gaze direction computed by the pupil center corneal reflection algorithm model can be output to a learning machine as standard information; the learning machine receives this standard information and extracts the glint-point distribution map collected on the corresponding frame for machine learning, thereby learning the corneal curvature algorithm f and establishing the relation between corneal glint coordinate changes and the gaze direction. During subsequent tracking, the glint-point distribution map can be substituted into the corneal curvature algorithm f to quickly calculate the gaze direction of the eye. The two algorithm models thus complement each other, greatly reducing the amount of computation and improving the refresh rate.
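One possible realization of this self-calibration, sketched under the assumption that the corneal curvature model f is a simple quadratic regression from glint coordinates to gaze angles (the patent does not fix the form of f):

    import numpy as np

    class CornealCurvatureModel:
        """Learned map f: glint coordinates -> gaze direction. PCCR results,
        when available, act as the training labels (no manual annotation)."""
        def __init__(self):
            self.w = None

        def _features(self, glints):           # glints: (N, 4) = x, y of A1, A2
            g = np.asarray(glints, float)
            return np.hstack([g, g ** 2, np.ones((len(g), 1))])  # quadratic + bias

        def fit(self, glints, gaze_from_pccr):  # gaze_from_pccr: (N, 2) angles
            X = self._features(glints)
            self.w, *_ = np.linalg.lstsq(X, np.asarray(gaze_from_pccr, float),
                                         rcond=None)

        def predict(self, glints):
            return self._features(glints) @ self.w

    # Usage: accumulate (glint, PCCR-gaze) pairs, fit, predict from glints only
    model = CornealCurvatureModel()
    rng = np.random.default_rng(1)
    glints = rng.uniform(150, 250, (200, 4))
    gaze = glints[:, :2] * 0.01 + rng.normal(0, 0.01, (200, 2))  # synthetic labels
    model.fit(glints, gaze)
    print(model.predict(glints[:3]))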
Step S303, the control component determines whether a new first calculation result has been calculated; if yes, step S304 is executed, otherwise step S305 is executed, and after step S304 the flow returns to step S303. Step S304: calibrate the corneal curvature algorithm model using the new first calculation result. Step S305: calculate a second calculation result of the gaze direction of the first eye using the corneal curvature algorithm model.
Specifically, the pupil center corneal reflection algorithm model requires both the glint coordinates and the pupil-center coordinates, whereas the corneal curvature algorithm model requires only the glint coordinates. Since glint coordinates are acquired quickly and pupil-center coordinates slowly, in this embodiment a second calculation result of the gaze direction is computed with the corneal curvature algorithm model whenever a new glint coordinate is available, and output as the gaze direction. Once the pupil-center coordinates have been computed, a first calculation result of the gaze direction is computed with the pupil center corneal reflection algorithm model, output as the gaze direction, and fed into the learning machine, where the corneal curvature algorithm model is learned and updated to become more accurate; the second calculation result of the gaze direction of the first eye is then computed with the new corneal curvature algorithm model. In this way the corneal curvature algorithm model is calibrated in real time, making the calculated gaze direction more accurate.
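The resulting fast-path/slow-path scheduling might look like the sketch below; all four callables are hypothetical stand-ins for the patent's components:

    def tracking_loop(get_glints, get_pupil_center_if_ready, pccr, model):
        """Fast path: every new glint sample yields a gaze via the learned
        corneal curvature model. Slow path: whenever a pupil center is also
        available, compute the PCCR gaze, output it, and use it to
        recalibrate the model (in practice one would refit less often)."""
        samples = []                            # (glints, pccr_gaze) pairs
        while True:
            glints = get_glints()               # fast: glint coordinates only
            pc = get_pupil_center_if_ready()    # slow: may return None
            if pc is not None:
                gaze = pccr(pc, glints)         # first calculation result
                samples.append((glints, gaze))
                model.fit(*zip(*samples))       # real-time calibration of f
            else:
                gaze = model.predict([glints])[0]  # second calculation result
            yield gaze

    # Usage sketch with application-supplied stubs:
    # gaze_stream = tracking_loop(read_glints, read_pc, pccr_solve, model)
    # next(gaze_stream)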
Preferably, in the above embodiment, the control component is configured to drive the first display area and the second display area in a time-sharing manner. Since both display areas illuminate the first eye, if they were lit simultaneously the control component could not tell which display area produced a given glint point and pupil center point. The time-sharing drive therefore gives the first display area and the second display area different illumination times for the first eye, so that they do not interfere with each other. It will of course be appreciated that the control component likewise time-shares the corresponding regions of the dense display device that illuminate the second eye, so that the direct illumination and the corresponding virtual image do not interfere with each other.
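A minimal sketch of such a time-sharing schedule (the area names are placeholders):

    from itertools import cycle

    # Alternate which display area illuminates the first eye each frame so
    # that its glints and pupil image are unambiguously attributable.
    schedule = cycle(["first display area", "second display area"])
    for frame, area in zip(range(4), schedule):
        print(f"frame {frame}: drive {area}")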
Illustratively, assume the dense display device has a resolution of 384 × 384 and a field of view of 50 mm × 50 mm, the fastest eyeball rotation speed ω_max is 720°/s, and the eyeball rotation radius is 13.5 mm. Assuming D = 40 and β = 2, then d = D/(2β) = 10, and one ring of the perfect-circle structured-light spot lights up πd points (about 31 points). From the scan-count calculation, completing the tracking of one pupil center (PC) requires about 1250 scans per frame, and 2500 scans for the PCs of both the left and right eyes. The PC' image formed by the first mirror is smaller, and its maximum moving pixel count is smaller; assuming the virtual image in the mirror is one third of the real image, D' = 13, ε'_max = 4 and d' = 2, and about 400 scans per frame complete PC' tracking in both virtual images. Thus, when the dense display device outputs points at a rate of 400K per second to track all PCs and PC's, the refresh rate works out to about 130 Hz.
Similarly, when tracking a glint point (CR), assume N_max = 5; from the scan-count formula 4N_max, one CR is tracked in 20 scans per frame, and tracking all CRs takes 2 × 2 × 4 × 20 = 320 scans. When the dense display device outputs points at a rate of 300K per second to track all CR and CR', the refresh rate can reach 1000 Hz.
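These throughput figures can be checked with a few lines (the rounded values from the text appear in the comments):

    # Pupil-center (PC) budget at 400K emissions per second:
    pc_scans = 2 * 1250 + 400            # both real PCs plus both virtual PC's
    print(400_000 / pc_scans)            # ~138 Hz, i.e. "about 130 Hz"

    # Glint (CR) budget at 300K emissions per second:
    cr_scans = 2 * 2 * 4 * 20            # = 320 scans for all CR and CR'
    print(300_000 / cr_scans)            # ~937 Hz, i.e. "about 1000 Hz"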
Further, when the dense display device performs a global scan (384 × 384) of the human eye at a rate of 300K points per second, scanning both eyes needs 2 × 384 × 384 ≈ 300K points per frame, so about two single-eye global scans (a refresh rate of about 2 Hz) can be completed per second to obtain the eye images, from which the gaze direction is calculated directly by the pupil center corneal reflection algorithm model. That result can also be output as the gaze direction when tracking is lost or the eye is occluded by the eyelid, or input to the learning machine as standard and calibration information.
Of course, it should be noted that the rate allocation in the software time-division multiplexing above is only illustrative, and better allocation schemes may exist. The dense display device can also be improved through chip upgrades and a higher sensing-device transmission rate, further raising the refresh rate. The dense display device supports switching among three driving modes (serial, narrow and wide) toward higher refresh rates; this example evaluates only the driving mode with the lowest refresh rate (serial).
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present invention may be executed in parallel, sequentially, or in different orders, and are not limited herein as long as the desired results of the technical solution of the present invention can be achieved.
The above-described embodiments should not be construed as limiting the scope of the invention. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (16)

1. An eye tracking device, comprising:
a dense display device provided with a first display area configured to emit infrared vector structured light to a first eye;
a first detection assembly comprising at least two first probes, the first detection assembly configured to receive light reflected or scattered by a first eye to acquire first detected image information, the first detected image information comprising a first pupil center point and at least one first glint point;
the control assembly is electrically connected with the dense display device and the first detection assembly, is configured to drive the dense display device to emit light, and calculates the gazing direction of a first eye according to the first detection image information;
wherein the control component is configured to calculate the gaze direction of the first eye using a corneal curvature algorithm model and/or a pupil center corneal reflection algorithm model.
2. The eye tracking device of claim 1,
the control component is configured to calculate a first calculation of a gaze direction of a first eye using a pupil center corneal reflection algorithm model; and learning a corneal curvature algorithm model by using a machine learning method and the first calculation result, and calculating a second calculation result of the gazing direction of the first eye by using the learned corneal curvature algorithm model and the first glint point data.
3. The eye tracking device of claim 2,
the control component is further configured to calculate a second calculation of the gaze direction of the first eye using the corneal curvature algorithm model if no new first calculation is calculated using the pupil center corneal reflection algorithm model; after a new first calculation result is calculated by using the pupil center corneal reflection algorithm model, the corneal curvature algorithm model is calibrated by using the new first calculation result.
4. The eye tracking device of claim 1 further comprising at least one first mirror; the dense display device is further provided with at least one second display area configured to emit infrared vector structured light to the first eye by reflection of the corresponding first mirror.
5. The eye tracking device of claim 1, further comprising a transflective mirror and a second detection assembly; wherein the dense display device is further provided with a third display area configured to transmit the infrared vector structured light through the transflective mirror to the second eye;
the second detection assembly comprises at least two second probes configured to receive light reflected or scattered by a second eye to acquire second detection image information comprising a second pupil center point and at least one second glint point;
the control component is electrically connected with the second detection component, is configured to drive the dense display device to emit light, and calculates the gazing direction of a second eye according to the second detection image information;
wherein the control component is configured to calculate the gaze direction of the second eye using a corneal curvature algorithm model and/or a pupil center corneal reflection algorithm model.
6. The eye tracking apparatus of claim 5 further comprising at least one second mirror, wherein the dense display device is further provided with at least one fourth display region configured to transmit the transflective mirror and emit the infrared vector structured light toward the second eye by reflection from the corresponding second mirror.
7. The eye tracking device of claim 1 wherein the control assembly further comprises a learning machine configured to learn a corneal curvature algorithm model.
8. The eye tracking device of claim 1, wherein the first probe is a fiber optic probe and the first detection assembly further comprises a first photodetector; the first photodetector is optically connected with the fiber optic probe and electrically connected with the control component;
or the first probe is a photoelectric detector and is electrically connected with the control assembly.
9. An eye tracking method performed by the eye tracking device of any one of claims 1-8, the eye tracking method comprising:
the control component drives the dense display device to emit light and calculates the gazing direction of a first eye according to the first detection image information;
wherein the control component is configured to calculate the gaze direction of the first eye using a corneal curvature algorithm model and/or a pupil center corneal reflection algorithm model.
10. The eye tracking method of claim 9,
calculating a first calculation result of the gazing direction of the first eye by using a pupil center corneal reflection algorithm model; and learning a corneal curvature algorithm model by using a machine learning method and the first calculation result, and calculating a second calculation result of the gazing direction of the first eye by using the learned corneal curvature algorithm model and the first glint point data.
11. The eye tracking method of claim 10,
the control component is further configured to calculate a second calculation of the gaze direction of the first eye using the corneal curvature algorithm model if no new first calculation is calculated using the pupil center corneal reflection algorithm model; after a new first calculation result is calculated by using the pupil center corneal reflection algorithm model, the corneal curvature algorithm model is calibrated by using the new first calculation result.
12. The eye-tracking method according to claim 9, further comprising, before the controlling component drives the dense display device to emit light and calculates the gaze direction of the first eye from the first detected image information:
the control assembly drives the dense display device to perform a global scan, obtains a first eye detection image formed by the global scan, and calculates the pupil position of the first eye and the position of the corresponding first glint point from the first eye detection image.
13. The eye tracking method of claim 12, wherein the eye tracking device further comprises a transflective mirror and a second detection assembly; the dense display device is further provided with a third display area configured to transmit the infrared vector structured light through the transflective mirror to the second eye; the second detection assembly comprises at least two second probes configured to receive light reflected by a second eye to acquire second detected image information comprising a second pupil center point and at least one second glint point; and the control component is electrically connected with the second detection assembly;
the eye tracking method further comprises: the control component drives the dense display device to emit light and calculates the gazing direction of a second eye according to the second detection image information; wherein the control component is configured to calculate a gaze direction of the second eye using a corneal curvature algorithm model and/or a pupil center corneal reflection algorithm model;
before the control component drives the dense display device to emit light and calculates the gazing direction of the second eye according to the second detection image information, the method further comprises the following steps:
the control component drives the dense display device to perform a global scan, obtains a second eye detection image formed by the global scan, and calculates the pupil position of the second eye and the position of the corresponding second glint point from the second eye detection image.
14. The eye-tracking method according to claim 12, wherein the controlling component drives the dense display device to emit light, and calculating the gaze direction of the first eye from the first detected image information comprises:
the control component determines the displacement range of the pupil region of the next frame according to the pupil region of the current frame and a first preset formula, determines the displacement range of the first glint point of the next frame according to the first glint point position of the current frame and a second preset formula, and drives the dense display device to scan only the displacement range of the pupil region and/or only the displacement range of the first glint point in the next frame.
15. The eye-tracking method according to claim 14, wherein driving the dense display device to scan only the pupil region displacement range and the first glint point displacement range comprises:
driving the dense display device to emit and scan the first structured light calculated according to the pupil area of the current frame in the displacement range of the pupil area of the next frame; the first structured light is a perfect circle structured light or a structured light with a contour corresponding to the pupil contour of the first eye of the current frame;
driving the dense display device to scan only the first glint point displacement range comprises:
driving the dense display device to emit line-structured light to scan the first glint point displacement range; the cross section of the line-structured light is two line segments perpendicular to each other.
16. The eye tracking method of claim 9, wherein the eye tracking device further comprises a first mirror; the dense display device is further provided with a second display area configured to emit infrared vector structured light to the first eye by reflection of the first mirror;
the control component is further configured to drive the first display area and the second display area to emit light in a time-sharing mode.
CN202211512569.7A 2022-11-28 2022-11-28 Eye movement tracking device and method Pending CN115731601A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211512569.7A CN115731601A (en) 2022-11-28 2022-11-28 Eye movement tracking device and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211512569.7A CN115731601A (en) 2022-11-28 2022-11-28 Eye movement tracking device and method

Publications (1)

Publication Number Publication Date
CN115731601A true CN115731601A (en) 2023-03-03

Family

ID=85299496

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211512569.7A Pending CN115731601A (en) 2022-11-28 2022-11-28 Eye movement tracking device and method

Country Status (1)

Country Link
CN (1) CN115731601A (en)

Similar Documents

Publication Publication Date Title
US9924866B2 (en) Compact remote eye tracking system including depth sensing capacity
US10257507B1 (en) Time-of-flight depth sensing for eye tracking
EP3660575B1 (en) Eye tracking system and eye tracking method
US20180160079A1 (en) Pupil detection device
CN100421614C (en) Method and installation for detecting and following an eye and the gaze direction thereof
JP2988178B2 (en) Gaze direction measuring device
US20160131902A1 (en) System for automatic eye tracking calibration of head mounted display device
US20150238087A1 (en) Biological information measurement device and input device utilizing same
CN108700932A (en) It can carry out the wearable device of eye tracks
US11435577B2 (en) Foveated projection system to produce ocular resolution near-eye displays
EP3075304B1 (en) Line-of-sight detection assistance device and line-of-sight detection assistance method
US20130003028A1 (en) Floating virtual real image display apparatus
KR20130107981A (en) Device and method for tracking sight line
US11675429B2 (en) Calibration, customization, and improved user experience for bionic lenses
EP2731049A1 (en) Eye-tracker
US20150185321A1 (en) Image Display Device
US20230098675A1 (en) Eye-gaze detecting device, eye-gaze detecting method, and computer-readable storage medium
CN115731601A (en) Eye movement tracking device and method
WO2024114258A1 (en) Eye movement tracking apparatus and method
US9746915B1 (en) Methods and systems for calibrating a device
US20210290133A1 (en) Evaluation device, evaluation method, and non-transitory storage medium
US11890057B2 (en) Gaze detection apparatus, gaze detection method, and gaze detection program
WO2022038814A1 (en) Corneal curvature radius calculation device, line-of-sight detection device, corneal curvature radius calculation method, and corneal curvature radius calculation program
WO2022038815A1 (en) Distance calculation device, distance calculation method, and distance calculation program
JP2018181025A (en) Sight line detecting device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40081581

Country of ref document: HK