CN112099615B - Gaze information determination method, gaze information determination device, eyeball tracking device, and storage medium - Google Patents


Info

Publication number
CN112099615B
CN112099615B
Authority
CN
China
Prior art keywords: light source, position information, light, image, pupil image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910522956.0A
Other languages
Chinese (zh)
Other versions
CN112099615A (en)
Inventor
Yao Tao (姚涛)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing 7Invensun Technology Co Ltd
Original Assignee
Beijing 7Invensun Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing 7Invensun Technology Co Ltd filed Critical Beijing 7Invensun Technology Co Ltd
Priority to CN201910522956.0A priority Critical patent/CN112099615B/en
Publication of CN112099615A publication Critical patent/CN112099615A/en
Application granted granted Critical
Publication of CN112099615B publication Critical patent/CN112099615B/en


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 — Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013 — Eye tracking input arrangements
    • G06F 3/012 — Head tracking input arrangements

Abstract

The invention discloses a gaze information determination method and device, an eye tracking device, and a storage medium. The method is applied to an eye tracking device that comprises a combined light source, an image acquisition device, and a processor. The combined light source comprises a first light source located on the same optical path as the image acquisition device, and further comprises a second light source and a third light source located on different sides of the first light source and on different optical paths from the image acquisition device. The method, performed by the processor, comprises: controlling the working states of the first light source, the second light source, the third light source, and the image acquisition device to acquire a bright pupil image and a dark pupil image; determining first model estimation parameters according to the bright pupil image, the dark pupil image, and the spatial position relationship of the light sources; and inputting the first model estimation parameters into an estimation model to obtain the user's gaze information. This method improves the accuracy of gaze information estimation over a large field-angle range.

Description

Gaze information determination method, gaze information determination device, eyeball tracking device, and storage medium
Technical Field
Embodiments of the present invention relate to the technical field of eye tracking, and in particular to a gaze information determination method and device, an eye tracking device, and a storage medium.
Background
With the rapid development of computer vision, artificial intelligence, and digitization technology, eye tracking has become a hot research field with wide application in human-computer interaction. Eye tracking, also known as gaze tracking, is a technique that estimates gaze information, including the line of sight and/or the gaze point, by measuring eye movement.
When gaze information must be estimated over a large field-angle range, an excessively large head pose or eye movement amplitude can prevent the infrared light reflected by the cornea from being imaged on the camera's imaging surface, or can severely degrade the imaging quality. Existing gaze estimation methods therefore produce inaccurate results in such cases, degrading the experience of users who estimate gaze information with an eye tracking device.
Disclosure of Invention
Embodiments of the present invention provide a gaze information determination method and device, an eye tracking device, and a storage medium, so as to improve the accuracy of gaze information estimation over a large field-angle range.
In a first aspect, an embodiment of the present invention provides a gaze information determination method applied to an eye tracking device, where the eye tracking device includes a combined light source, an image acquisition device, and a processor. The combined light source includes a first light source located on the same optical path as the image acquisition device, and further includes a second light source and a third light source located on different sides of the first light source and on different optical paths from the image acquisition device. The method, performed by the processor, comprises:
controlling working states of the first light source, the second light source, the third light source and the image acquisition device to acquire a bright pupil image and a dark pupil image;
determining a first model estimation parameter according to the spatial position relation of the bright pupil image, the dark pupil image and the light source;
and inputting the first model estimation parameters into an estimation model to acquire the gaze information of the user.
Optionally, controlling the working states of the first light source, the second light source, the third light source and the image acquisition device, to obtain a bright pupil image and a dark pupil image includes:
Controlling the first light source to start, and controlling the image acquisition device to acquire a bright pupil image;
after the first light source is turned off, the second light source and the third light source are controlled to be started, and the image acquisition device is controlled to acquire dark pupil images.
Optionally, the first model estimation parameter includes: pupil position information, first light spot position information of light spots in the bright pupil image, first spatial position information of a light source corresponding to the first light spot position information, second light spot position information of light spots in the dark pupil image and second spatial position information of a light source corresponding to the second light spot position information;
correspondingly, determining a first model estimation parameter according to the bright pupil image and the dark pupil image comprises:
determining pupil position information according to the bright pupil image and the dark pupil image;
respectively determining first light spot position information of light spots included in the bright pupil image and second light spot position information of light spots included in the dark pupil image;
taking the position information of the first light source as first spatial position information;
and determining second spatial position information of the light source corresponding to the second light spot position information based on the first light spot position information, the second light spot position information and a predetermined spatial position relation of the light source.
Optionally, determining the second spot position information of the spot included in the dark pupil image includes:
if the dark pupil image comprises two light spots, taking the position information of the light spots meeting the quality condition as second light spot position information, wherein the quality condition is determined according to the light spot characteristics;
and if the dark pupil image comprises one light spot, taking the position information of the light spot included in the dark pupil image as second light spot position information.
Optionally, the operation of determining the estimation model includes:
displaying the calibration points;
controlling working states of the first light source, the second light source, the third light source and the image acquisition device, and acquiring a first calibration image and a second calibration image when a user looks at the calibration point;
determining a second model estimation parameter according to the first calibration image and the second calibration image;
inputting the second model estimation parameters and the position information of the calibration points into an estimation model to be solved, and determining the numerical value of the calibration parameters;
substituting the determined numerical value of the calibration parameter into an estimation model to be solved to obtain the solved estimation model.
Optionally, controlling the working states of the first light source, the second light source, the third light source and the image acquisition device, and acquiring a first calibration image and a second calibration image when the user gazes at the calibration point includes:
Controlling the first light source to start, and controlling the image acquisition device to acquire a first calibration image;
after the first light source is turned off, the second light source and the third light source are controlled to be started, and the image acquisition device is controlled to acquire a second calibration image.
Optionally, the second model estimation parameter includes: calibrating pupil position information, third light spot position information of light spots in the first calibration image, third spatial position information of light sources corresponding to the third light spot position information, fourth light spot position information of light spots in the second calibration image and fourth spatial position information of light sources corresponding to the fourth light spot position information;
correspondingly, determining a second model estimation parameter according to the first calibration image and the second calibration image comprises:
determining calibration pupil position information according to the first calibration image and the second calibration image;
respectively determining third light spot position information of light spots included in the first calibration image and fourth light spot position information of light spots included in the second calibration image;
taking the position information of the first light source as third spatial position information;
fourth spatial position information of the light source corresponding to the fourth light spot position information is determined based on the third light spot position information, the fourth light spot position information and a predetermined spatial position relationship of the light source.
In a second aspect, an embodiment of the present invention further provides a gaze information determining apparatus, including:
the image acquisition module is used for controlling the working states of the first light source, the second light source, the third light source and the image acquisition device to acquire a bright pupil image and a dark pupil image;
the determining module is used for determining a first model estimation parameter according to the spatial position relation of the bright pupil image, the dark pupil image and the light source;
and the information acquisition module is used for inputting the first model estimation parameters into an estimation model and acquiring the gaze information of the user.
In a third aspect, an embodiment of the present invention further provides an eye tracking device, including: a combined light source and an image acquisition device, the combined light source comprising a first light source located on the same optical path as the image acquisition device, and further comprising a second light source and a third light source located on different sides of the first light source and on different optical paths from the image acquisition device;
one or more processors;
a storage means for storing one or more programs;
the one or more programs are executed by the one or more processors, so that the one or more processors implement the gaze information determination method provided by the embodiment of the present invention.
In a fourth aspect, the embodiment of the present invention further provides a computer readable storage medium, on which a computer program is stored, which when executed by a processor, implements the gaze information determining method provided by the embodiment of the present invention.
Embodiments of the present invention provide a gaze information determination method and device, an eye tracking device, and a storage medium. With this technical scheme, a bright pupil image and a dark pupil image are obtained by controlling the working states of the first light source, the second light source, the third light source, and the image acquisition device; the bright pupil image and the dark pupil image are then analyzed to obtain first model estimation parameters; and the first model estimation parameters are input into an estimation model to obtain the user's gaze information. The first model estimation parameters are determined effectively from bright and dark pupil images generated by light sources at different positions, so that the user's gaze information can be determined conveniently. The method is low in cost. Because the light sources are spaced a certain distance apart, the range of light spots captured by the image acquisition device is larger, accommodating large head poses or eye movement amplitudes; and because gaze information is computed dynamically from the bright pupil image, the dark pupil image, and the spatial position relationship of the light sources, the line-of-sight direction and the gaze point can be obtained accurately in a large field-angle environment.
Drawings
Fig. 1a is a flowchart of a gaze information determining method according to a first embodiment of the present invention;
FIG. 1b is a schematic diagram of a pupil-brightening principle according to an embodiment of the present invention;
FIG. 1c is a schematic diagram of a bright pupil image according to an embodiment of the present invention;
FIG. 1d is a schematic diagram of a dark pupil principle according to an embodiment of the present invention;
FIG. 1e is a schematic diagram of a dark pupil image according to an embodiment of the present invention;
fig. 1f is a schematic diagram of a bright pupil image acquired by an image acquisition device according to an embodiment of the present invention;
fig. 1g is a schematic diagram of a dark pupil image acquired by an image acquisition device according to an embodiment of the present invention;
FIG. 1h is a schematic diagram of a bright pupil image of a user with his head rotated to the left by a certain angle according to an embodiment of the present invention;
FIG. 1i is a schematic diagram of a dark pupil image of a user with his head rotated to the left by a certain angle according to an embodiment of the present invention;
fig. 2a is a flow chart of a gaze information determining method according to a second embodiment of the present invention;
fig. 2b is a schematic structural diagram of an eye tracking apparatus according to the present invention;
fig. 3 is a schematic structural diagram of a gaze information determining apparatus according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of an eye tracking apparatus according to a fourth embodiment of the present invention.
Detailed Description
The invention is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting thereof. It should be further noted that, for convenience of description, only some, but not all of the structures related to the present invention are shown in the drawings.
Before discussing exemplary embodiments in more detail, it should be mentioned that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart depicts operations (or steps) as a sequential process, many of the operations can be performed in parallel, concurrently, or at the same time. Furthermore, the order of the operations may be rearranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figures. The processes may correspond to methods, functions, procedures, subroutines, and the like. Furthermore, embodiments of the invention and features of the embodiments may be combined with each other without conflict.
Example 1
Fig. 1a is a flowchart of a gaze information determination method according to a first embodiment of the present invention. The method is applicable to determining gaze information, and in particular to determining gaze information over a large field angle. Gaze information is information about where a user is looking, including but not limited to the gaze point and/or the line of sight. The method may be performed by a gaze information determination device, which may be implemented in software and/or hardware and is typically integrated in an eye tracking device. In this embodiment the eye tracking device includes, but is not limited to, an eye tracker. The eye tracking device may be used alone or integrated into a head-mounted display device, including virtual reality (VR) devices and augmented reality (AR) devices.
Current commercial eye tracking devices perform gaze estimation mainly on the pupil-cornea reflection principle. Such a device generally comprises several infrared lamps and a camera: the infrared lamps act as active infrared light sources that reflect off the cornea surface of the eyeball, so that the pupil and the bright spots (i.e., light spots) formed by the corneal reflections of the infrared lamps can be located on the camera's imaging surface. Once the pupil center and the light spot centers are obtained, the visual-axis direction and the spatial position of the corneal curvature center are solved either by fitting a function from the pupil-to-spot vector to the gaze point, or by parameter estimation based on a three-dimensional eyeball model.
With the popularization of large screens, the field angle that users need keeps growing, and in some scenarios the eye tracking device must support a large field angle. For example, with a large curved-screen display the required field angle is large: the horizontal field of view of the user's head during use can exceed 100°. Likewise, in an interactive environment formed by a vehicle's front windshield, the horizontal field-angle range is also large, i.e., a large field angle is required.
In order to meet the user's need for a large angle of view, one possible solution is to use multiple eye tracking devices to meet the user's need for a large angle of view. However, this solution has some drawbacks. First, the simultaneous operation of multiple eye tracking devices can cause infrared crosstalk, requiring a complex set of mechanisms to coordinate the multiple eye tracking devices. Second, calibration of multiple eye tracking devices is also an additional burden to the user.
To meet the user's need for a large field angle, the pupil and bright spot centers are extracted effectively using a bright-and-dark pupil technique. Fig. 1b is a schematic diagram of the bright pupil principle according to an embodiment of the present invention; Fig. 1c shows a bright pupil image; Fig. 1d shows the dark pupil principle; Fig. 1e shows a dark pupil image. Referring to Figs. 1b-1e, pupil-cornea reflection tracking may use two different light source configurations. In bright pupil tracking, the first light source 11 and the first imaging device 12 are on the same optical path, causing the pupil to appear bright (the same mechanism as the red-eye effect in photographs); the bright pupil image 13 acquired by the first imaging device 12 is shown in Fig. 1c. In dark pupil tracking, the second light source 14 is placed away from the second imaging device 15 (not on the same optical path), making the pupil appear darker than the iris (a distinct contrast); the dark pupil image 16 acquired by the second imaging device 15 is shown in Fig. 1e.
As shown in Fig. 1a, the gaze information determination method provided by the first embodiment of the present invention is applied to an eye tracking device that includes a combined light source, an image acquisition device, and a processor. The combined light source includes at least a first light source, a second light source, and a third light source; the first light source and the image acquisition device are located on the same optical path, while the second light source and the third light source are located on different sides of the first light source and on different optical paths from the image acquisition device. The method is performed by the processor. The second light source and the third light source each include at least one light source; if each includes at least two light sources, the spatial position relationship of the light sources can be refined further to determine the correspondence between each light source and the light spots in the dark pupil image. The combined light source may additionally include a fourth light source, a fifth light source, ..., up to an nth light source (n a positive integer), located on different sides of the first light source and used to form the dark pupil image. The embodiment of the invention is described with the second light source and the third light source each comprising one light source. The method comprises the following steps:
S101, controlling working states of the first light source, the second light source, the third light source and the image acquisition device, and acquiring a bright pupil image and a dark pupil image.
In this embodiment, each light source, i.e., the first light source, the second light source, and the third light source, may be an infrared light source, since infrared light does not affect the eye's vision. The image acquisition device includes, but is not limited to: an infrared imaging device, an infrared image sensor, a camera, or a video camera.
The first light source and the image acquisition device are located on the same optical path, so the working states of the first light source and the image acquisition device can be controlled to acquire a bright pupil image. The working states may include on and off. In this step, the first light source may be controlled to turn on, and the image acquisition device is then controlled to capture an image, thereby acquiring a bright pupil image. A bright pupil image is an image of the user's eye in which the brightness of the pupil is greater than a set value, causing the pupil to appear bright. The set value may be chosen according to actual conditions and is not limited here.
The second light source and the third light source are located on different optical paths from the image acquisition device, ensuring that the images acquired under the second and third light sources are dark pupil images. A dark pupil image is an image of the user's eye in which the brightness of the pupil is lower than that of the iris.
In this step, the dark pupil image may be acquired as follows: turn on the second light source and the third light source, then control the image acquisition device to capture an image, obtaining one dark pupil image. Alternatively, turn on the second light source and capture one dark pupil image; then turn off the second light source, turn on the third light source, and capture another dark pupil image.
The present step does not limit the order in which the bright pupil image and the dark pupil image are acquired, as long as the bright pupil image and the dark pupil image can be acquired separately at different timings.
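For illustration only (not part of the patent text), the following Python sketch shows one way the timed control in this step could look. `LightSource` and `Camera` are hypothetical stand-in driver classes, and the settle delay is an assumption:

```python
# Hypothetical sketch of the S101 capture sequence; LightSource and Camera
# are stand-in driver classes, not APIs named in the patent.
import time

class LightSource:
    def __init__(self, name):
        self.name = name
    def on(self):
        print(f"{self.name} on")    # replace with the real driver call
    def off(self):
        print(f"{self.name} off")   # replace with the real driver call

class Camera:
    def capture(self):
        return None                 # replace with a real frame grab

def acquire_bright_and_dark(first, second, third, camera, settle_s=0.005):
    """Acquire one bright pupil frame (on-axis source) and one dark pupil
    frame (off-axis sources); the text allows either order."""
    first.on(); time.sleep(settle_s)               # on-axis -> bright pupil
    bright = camera.capture()
    first.off()
    second.on(); third.on(); time.sleep(settle_s)  # off-axis -> dark pupil
    dark = camera.capture()
    second.off(); third.off()
    return bright, dark
```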
Because the second light source and the third light source are located on different sides of the first light source, gaze information can be determined effectively from eye images acquired under a large field angle. The eye images include the bright pupil image and the dark pupil image.
Fig. 1f is a schematic diagram of a bright pupil image acquired by an image acquisition device according to an embodiment of the present invention, and Fig. 1g is a schematic diagram of a dark pupil image so acquired. Referring to Figs. 1f and 1g, taking one eye as an example, the bright pupil image contains one bright spot generated by the first light source, i.e., the first bright spot 17. The dark pupil image contains two bright spots, the second bright spot 181 and the third bright spot 182, which lie on either side of the position of the first bright spot 17 in the bright pupil image and can be generated by the second and third light sources respectively.
It should be noted that in a large-field-angle use scene, the head rotation angle and the eyeball rotation angle have a large dynamic range, so the number of bright spots formed on the bright and dark pupil images varies with the user's head pose. When the user gazes at the right edge of a large-field-angle scene, the head or eyeball rotates to the right by a relatively large angle, and the light reflected off the cornea surface from the light source on the left side is very likely not imaged in the image acquisition device; the dark pupil image then contains only one bright spot, corresponding to the light source on the right side.
Conversely, when the user gazes at the left edge, the head or eyeball rotates to the left by a relatively large angle, the corneal reflection of the right-side light source is very likely not imaged, and the dark pupil image contains only one bright spot, corresponding to the left-side light source.
Fig. 1h is a schematic diagram of a bright pupil image after the user's head rotates to the left by a certain angle, and Fig. 1i is the corresponding dark pupil image. Referring to Figs. 1h and 1i, the rotated bright pupil image still contains a bright spot generated by the first light source, i.e., the fourth bright spot 19, while at the cornea position in the rotated dark pupil image only the bright spot corresponding to the light source on the left side of the eye tracking device, i.e., the fifth bright spot 20, exists. Arranging the second light source and the third light source on different sides of the first light source therefore ensures that a bright spot is always present in the dark pupil image, improving the accuracy of gaze information determination.
S102, determining a first model estimation parameter according to the spatial position relation of the bright pupil image, the dark pupil image and the light source.
The spatial positional relationship of the light sources can be understood as the relative positions of the light sources in the eye tracking device. The light sources are used to form the light spots, so that the light source generating the light spots can be determined based on the relative positions between the light sources and the relative positions between the light spots.
The first model estimation parameters are the independent variables of the estimation model, from which the corresponding gaze information can be determined. Their specific content is not limited here and may depend on the specific form of the estimation model: for example, they may include pupil position information, light spot position information, and the spatial position information of the corresponding light sources, or instead iris position information, light spot position information, and the spatial position information of the corresponding light sources.
When the first model estimation parameters are determined according to the bright pupil image, the dark pupil image, and the spatial position relationship of the light sources, the specific means of determination can follow from the parameters' specific content, for example by extracting the corresponding ocular features from the bright and dark pupil images. Ocular features include, but are not limited to: pupil position, pupil shape, iris position, iris shape, eyelid position, eye corner position, light spot (also known as Purkinje spot) position, and the like.
For example, when the first model estimation parameters include pupil position information, this step may determine the pupil position from the gray values of the bright and dark pupil images, e.g., by a differential operation on the two images' gray values. When the first model estimation parameters include the second light spot position information of the spot in the dark pupil image, the first and second light spot positions are first extracted from the bright and dark pupil images respectively, and the second spatial position information of the corresponding light source is then determined from the first light spot position information, the second light spot position information, and the spatial position relationship of the light sources. That is, the second spatial position information is determined based on the relative positional relationship of the second and first light spot positions together with the spatial position relationship of the light sources.
When the second light source and the third light source are turned on simultaneously, the dark pupil image may contain two light spots. If it contains two light spots, this step may take either spot and use the spatial position information of its corresponding light source as the second spatial position information, or select the spot that satisfies the quality condition and use the spatial position information of its light source as the second spatial position information. If the dark pupil image contains only one light spot, the spatial position information of the corresponding light source is used directly as the second spatial position information.
When the second light source and the third light source are turned on alternately, each acquired dark pupil image may contain one light spot. This step may analyze the light spots and the light source spatial positions in the bright pupil image and each dark pupil image to determine the second spatial position information. If both dark pupil images contain a light spot, the spot in one of them is selected (or the spot satisfying the quality condition is selected), and the spatial position information of its corresponding light source is used as the second spatial position information. If only one of the two dark pupil images contains a light spot, the spatial position information of the light source corresponding to that spot is directly taken as the second spatial position information.
S103, inputting the first model estimation parameters into an estimation model to acquire the gaze information of the user.
The estimation model is a model for gaze information estimation; it may take the first model estimation parameters as independent variables and the gaze information as the dependent variable. This step inputs the first model estimation parameters into the estimation model and obtains the corresponding gaze information. Once the user's gaze information is determined, eye tracking is achieved by analyzing it.
With the gaze information determination method provided by this embodiment, a bright pupil image and a dark pupil image are obtained by controlling the working states of the first light source, the second light source, the third light source, and the image acquisition device; the images are then analyzed to obtain the first model estimation parameters, which are input into the estimation model to obtain the user's gaze information. The first model estimation parameters are determined effectively from bright and dark pupil images generated by light sources at different positions, so the user's gaze information can be determined conveniently. The method is low in cost. Because the light sources are spaced apart, the range of light spots captured by the image acquisition device is larger, accommodating large head poses or eye movement amplitudes; and because gaze information is computed dynamically from the bright pupil image, the dark pupil image, and the spatial position relationship of the light sources, the line-of-sight direction and the gaze point can be obtained accurately in a large field-angle environment.
Example 2
Fig. 2a is a flowchart of a gaze information determination method according to a second embodiment of the present invention; this embodiment is optimized on the basis of the first embodiment. In this embodiment, controlling the working states of the first light source, the second light source, the third light source, and the image acquisition device to obtain a bright pupil image and a dark pupil image is further specified as: controlling the first light source to turn on, and controlling the image acquisition device to acquire a bright pupil image;
and after the first light source is turned off, controlling the second light source and the third light source to turn on, and controlling the image acquisition device to acquire a dark pupil image.
Further, the present embodiment further optimizes the first model estimation parameter as: pupil position information, first spot position information of a spot in the bright pupil image, first spatial position information of a light source corresponding to the first spot position information, second spot position information of a spot in the dark pupil image, and second spatial position information of a light source corresponding to the second spot position information. Correspondingly, according to the spatial position relation of the bright pupil image, the dark pupil image and the light source, determining a first model estimation parameter, and optimizing comprises:
determining pupil position information according to the bright pupil image and the dark pupil image;
respectively determining first light spot position information of light spots included in the bright pupil image and second light spot position information of light spots included in the dark pupil image;
taking the position information of the first light source as first spatial position information;
and determining second spatial position information of the light source corresponding to the second light spot position information based on the first light spot position information, the second light spot position information and a predetermined spatial position relation of the light source.
Based on the optimization, the gaze information determination method optimization includes the operations of determining an estimation model:
displaying the calibration points;
controlling working states of the first light source, the second light source, the third light source and the image acquisition device, and acquiring a first calibration image and a second calibration image when a user looks at the calibration point;
determining a second model estimation parameter according to the first calibration image and the second calibration image;
inputting the second model estimation parameters and the position information of the calibration points into an estimation model to be solved, and determining the numerical value of the calibration parameters;
substituting the determined numerical value of the calibration parameter into an estimation model to be solved to obtain the solved estimation model. For details not yet described in detail in this embodiment, refer to embodiment one.
As shown in fig. 2a, a gaze information determining method provided in a second embodiment of the present invention includes the following steps:
S201, displaying the calibration points.
Because of individual physiological differences between eyes, this embodiment may first perform user calibration on the estimation model before using it. User calibration is often performed during line-of-sight/gaze-point estimation in order to determine certain to-be-determined parameters in the estimation model (also called calibration parameters, which generally correspond to intrinsic parameters of the user's eye, such as the eyeball radius). The calibration procedure is: the user is asked to gaze at one or more target points; since the target points are preset, the corresponding line of sight is known, and the calibration parameters can be solved inversely. The target point positions are the calibration points.
The display positions, number, and shape of the calibration points are not limited; a person skilled in the art can set them according to the actual situation. For example, the calibration points may be distributed uniformly on the display, placed at its center and boundary, or displayed in a nine-square grid. The calibration points may be circular. The calibration parameters are determined from images of the user gazing at the calibration points together with the position information of the calibration points.
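As a hypothetical illustration of the nine-square-grid layout mentioned above (the patent does not fix specific positions), the following sketch computes nine calibration point centers for a display of given pixel size; the margin value is an assumption:

```python
def nine_grid_points(width, height, margin=0.1):
    """Centers of a 3x3 grid of calibration points, in screen pixels.
    Covers the display center and points near its boundary."""
    fractions = [margin, 0.5, 1.0 - margin]
    return [(int(x * width), int(y * height))
            for y in fractions for x in fractions]

# Example: nine_grid_points(1920, 1080) yields 9 (x, y) calibration targets.
```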
When the calibration parameters are solved inversely, gaze information is determined from the user's ocular features, which may also include the corneal position or the foveal position.
S202, controlling working states of the first light source, the second light source, the third light source and the image acquisition device, and acquiring a first calibration image and a second calibration image when a user looks at the calibration point.
The means for controlling the working states of the first light source, the second light source, the third light source and the image acquisition device in this step can refer to the technical means for controlling the working states when the bright pupil image and the dark pupil image are acquired, and will not be described here. The first calibration image may be a bright pupil image acquired in the calibration phase. The second calibration image may be a dark pupil image acquired during the calibration phase.
The first calibration image and the second calibration image are acquired when the user looks at the calibration point, and the acquisition time is not limited, for example, the first calibration image and the second calibration image can be directly acquired after the set time of the calibration point is displayed; or can be obtained after receiving the determining instruction of the user. The user's determination instruction may be obtained in the form of voice or may be obtained in the form of a key, which is not limited herein.
S203, determining a second model estimation parameter according to the first calibration image, the second calibration image and the light source spatial position relation.
The second model estimation parameters may comprise the same content as the first model estimation parameters; they differ in that the first model estimation parameters are used in the application stage of the estimation model, while the second model estimation parameters are used in its calibration stage.
The means for determining the second model estimation parameter according to the first calibration image, the second calibration image and the spatial position information of the light source can refer to the means for determining the first model estimation parameter, which is not described herein.
S204, inputting the second model estimation parameters and the position information of the calibration points into an estimation model to be solved, and determining the numerical value of the calibration parameters.
In the calibration stage of the estimation model, the calibration parameters need to be determined; this step therefore inputs the second model estimation parameters and the position information of the calibration points into the estimation model to be solved, so as to inversely solve the values of the calibration parameters.
The calibration points are preset points to be watched by the user, so the position information of the calibration points is a known parameter.
S205, substituting the determined numerical value of the calibration parameter into an estimation model to be solved to obtain a solved estimation model.
After the calibration parameters are determined, this step can substitute their values directly into the estimation model to be solved, obtaining the solved estimation model, so that gaze information can then be determined based on it.
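The patent does not specify the form of the estimation model. Purely as a sketch, assuming a linear-in-parameters model (e.g., a polynomial regression on pupil-to-spot vectors, a common choice in pupil-cornea reflection methods), the calibration parameters of S204-S205 can be solved by least squares and then reused at the application stage (S103/S212):

```python
import numpy as np

def fit_estimation_model(features, targets):
    """Inversely solve the calibration parameters (theta) by least squares.
    features: (n, k) rows built from the second model estimation parameters
    for n calibration points; targets: (n, 2) calibration point coordinates."""
    X = np.hstack([features, np.ones((features.shape[0], 1))])  # add a bias term
    theta, *_ = np.linalg.lstsq(X, targets, rcond=None)
    return theta  # parameters of the solved estimation model

def predict_gaze(theta, feature_row):
    """Apply the solved model to the first model estimation parameters."""
    x = np.append(np.asarray(feature_row, dtype=float), 1.0)
    return x @ theta  # estimated (x, y) gaze point
```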
S206, controlling the first light source to start and controlling the image acquisition device to acquire a bright pupil image.
In the step, when the bright pupil image and the dark pupil image are acquired, the bright pupil image can be acquired first, and then the dark pupil image can be acquired.
Specifically, this step may first turn on the first light source so that it illuminates the eye, and then control the image acquisition device to photograph the user's eye, capturing the reflection point (i.e., the light spot) of the first light source on the cornea, thereby obtaining the bright pupil image.
S207, after the first light source is turned off, the second light source and the third light source are controlled to be started, and the image acquisition device is controlled to acquire dark pupil images.
After the bright pupil image is acquired, the first light source may be turned off first to prevent it from affecting the acquisition of the dark pupil image. The second light source and the third light source are then turned on simultaneously to illuminate the eye, and the image acquisition device photographs the user's eye, capturing the reflection points (i.e., light spots) of the second and third light sources on the cornea, thereby obtaining the dark pupil image.
S208, determining pupil position information according to the bright pupil image and the dark pupil image.
The first model estimation parameters in this embodiment include: pupil position information, first light spot position information of light spots in the bright pupil image, first spatial position information of a light source corresponding to the first light spot position information, second light spot position information of light spots in the dark pupil image and second spatial position information of a light source corresponding to the second light spot position information;
pupil position information may be a parameter indicative of pupil position, such as pupil center. The first spot location information may be a location of a spot in the pupil image, e.g. the first spot location information is characterized by a spot center of the spot in the pupil image. The first spatial position information may be understood as spatial position information of the light source forming the spot in the bright pupil image. The second spot location information may be understood as the location of the spot in the dark pupil image, e.g. by characterizing the second spot location information by the spot center of the spot in the dark pupil image. The second spatial position information may be understood as spatial position information of the light source forming the spot in the dark pupil image. The spatial position information may be determined by the position of each light source in the eye tracking device.
In this step, both the bright pupil image and the dark pupil image may be analyzed when determining the pupil position information. Because the eye has different gray values in the bright and dark pupil images, this step may perform a differential operation on the gray values of the two images to obtain the pupil position information.
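A minimal OpenCV sketch of this differential operation, assuming registered grayscale bright and dark pupil frames; the threshold value is an illustrative assumption, not specified by the patent:

```python
import cv2

def pupil_center_from_difference(bright, dark, thresh=40):
    """Difference the bright and dark pupil images and take the centroid of
    the largest bright blob as the pupil center."""
    diff = cv2.subtract(bright, dark)  # pupil is bright only under the on-axis source
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    # OpenCV 4 return signature: (contours, hierarchy)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    blob = max(contours, key=cv2.contourArea)      # assume largest blob is the pupil
    m = cv2.moments(blob)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])  # pupil center (x, y)
```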
S209, respectively determining first spot position information of spots included in the bright pupil image and second spot position information of spots included in the dark pupil image.
The step may analyze the bright pupil image and the dark pupil image, respectively, to determine the first spot location information and the second spot location information, where the means for determining the first spot location information and the second spot location information are not limited.
Taking the determination of the first light spot position information as an example: thresholding is performed on the bright pupil image, the contour of the light spot is extracted, and the contour is fitted, for example with a circle. The center of the fitted shape is then computed and used as the first light spot position information.
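A hedged OpenCV sketch of this threshold-contour-fit pipeline; the threshold assumes near-saturated glints, and the minimum enclosing circle is one possible choice of fit, both assumptions for illustration:

```python
import cv2

def spot_center(gray_image, thresh=200):
    """Spot center via thresholding, contour extraction, and circle fitting."""
    _, mask = cv2.threshold(gray_image, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    contour = max(contours, key=cv2.contourArea)
    (cx, cy), _radius = cv2.minEnclosingCircle(contour)  # fit a circle to the contour
    return (cx, cy)  # fitted center serves as the spot position information
```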
The step can select one light spot from the dark pupil image to determine the second light spot position information.
S210, taking the position information of the first light source as the first spatial position information.
It can be understood that the light source corresponding to the first light spot position information is the first light source, so that the position information of the first light source is directly used as the first spatial position information in this step.
It should be noted that the order of determining the pupil position information, the first spot position information, the second spot position information, and the first spatial position information is not limited.
S211, determining second spatial position information of a light source corresponding to the second light spot position information based on the first light spot position information, the second light spot position information and a predetermined spatial position relation of the light source.
When the second spatial position information is determined, a light source corresponding to the second light spot position information is selected based on the relative position relation between the first light spot position information and the second light spot position information and the predetermined light source spatial position relation, and then the spatial position information of the selected light source is determined to be the second spatial position information.
For example, if the second spot position information is located at the left side of the first spot position information, a light source located at the left side of the first light source is selected from the spatial position relations of the light sources, and the spatial position information of the light source is determined as the second spatial position information. If the second light source is positioned at the left side of the first light source, the spatial position information of the second light source is determined as second spatial position information.
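In code, this left/right matching reduces to comparing image x-coordinates. The sketch below assumes the image is not mirrored, so "left in the image" corresponds to "left of the first light source"; with a mirrored camera the mapping would flip. All names are illustrative:

```python
def match_dark_spot_to_source(first_spot, second_spot, source_positions):
    """Pick the off-axis light source producing the dark pupil spot from the
    spot's position relative to the bright pupil spot (S211)."""
    side = "left" if second_spot[0] < first_spot[0] else "right"
    return source_positions[side]  # the second spatial position information

# Example: source_positions = {"left": second_source_xyz, "right": third_source_xyz}
```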
S212, inputting the first model estimation parameters into an estimation model to acquire the gaze information of the user.
The gaze information determination method provided by the second embodiment concretizes the operations of acquiring the bright pupil image and the dark pupil image, the first model estimation parameters, and their determination, and adds the operation of determining the estimation model. With this method, the estimation model can be determined from the second model estimation parameters and the position information of the calibration points. A bright pupil image and a dark pupil image are then acquired in sequence, and the corresponding first and second light spot position information is determined from them. The second spatial position information is determined based on the first light spot position information, the second light spot position information, and the spatial position relationship of the light sources, yielding the first model estimation parameters and, from them, the user's gaze information. By locating the correspondence between light spots and light sources, gaze information is computed dynamically, the accuracy of eye tracking is improved, and the user experience of the eye tracking device is improved.
The present embodiment further provides an optional embodiment, in which determining second spot position information of a spot included in the dark pupil image includes:
If the dark pupil image comprises two light spots, taking the position information of the light spots meeting the quality condition as second light spot position information, wherein the quality condition is determined according to the light spot characteristics;
and if the dark pupil image comprises one light spot, taking the position information of the light spot included in the dark pupil image as second light spot position information.
The gaze information determination method of this embodiment is applied to a large-field-angle scene; if the second light source and the third light source are turned on simultaneously, at most two light spots appear in the dark pupil image. When there are two, the spot satisfying the quality condition is selected to determine the second light spot position information, improving the accuracy of gaze information determination.
The quality of the spot may be characterized by spot characteristics including at least one of: the pixel area of the spot and edge information of the spot, which may include edge position information and edge brightness information. Accordingly, the quality condition may be determined from the spot characteristics, i.e. the quality condition comprises at least one of: the pixel area is larger than the set area threshold and the light spot edge information meets the edge condition.
The set area threshold is not limited; a person skilled in the art may choose it according to the actual scene, such as the size of the light sources, their distance from the user, and/or their placement. The edge condition may be determined according to the specific content of the edge information and is not limited here. For example, the edge condition may be that edge completeness exceeds a set completeness threshold, where edge completeness can be determined from the edge position information.
If only one light spot is included in the dark pupil image, second light spot position information is determined directly based on the light spot.
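A sketch of one possible quality check combining the two criteria above. Approximating edge completeness by contour circularity is an assumption for illustration, as are both threshold values:

```python
import math
import cv2

def spot_meets_quality(contour, min_area=9.0, min_circularity=0.8):
    """Quality condition: pixel area above a set threshold, and edge
    information acceptable (here: contour close to a full circle)."""
    area = cv2.contourArea(contour)
    if area < min_area:
        return False
    perimeter = cv2.arcLength(contour, closed=True)
    if perimeter == 0:
        return False
    circularity = 4.0 * math.pi * area / (perimeter ** 2)  # 1.0 = perfect circle
    return circularity >= min_circularity
```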
The present embodiment further provides an optional embodiment, in which the controlling the working states of the first light source, the second light source, the third light source, and the image acquisition device, acquiring a first calibration image and a second calibration image when the user gazes at the calibration point, includes:
controlling the first light source to start, and controlling the image acquisition device to acquire a first calibration image;
after the first light source is turned off, the second light source and the third light source are controlled to be started, and the image acquisition device is controlled to acquire a second calibration image.
When the first calibration image and the second calibration image are acquired, the first calibration image can be acquired first, then the second calibration image is acquired, and correspondingly, the first light source is started first, and then the second light source and the third light source are started. The means for acquiring the first calibration image and the second calibration image may refer to the means for acquiring the dark pupil image and the bright pupil image, which are not described herein.
The present embodiment further provides an optional embodiment, in which the second model estimation parameter includes: calibrating pupil position information, third light spot position information of light spots in the first calibration image, third spatial position information of light sources corresponding to the third light spot position information, fourth light spot position information of light spots in the second calibration image and fourth spatial position information of light sources corresponding to the fourth light spot position information;
Correspondingly, determining a second model estimation parameter according to the first calibration image and the second calibration image comprises:
determining calibration pupil position information according to the first calibration image and the second calibration image;
respectively determining third light spot position information of light spots included in the first calibration image and fourth light spot position information of light spots included in the second calibration image;
taking the position information of the first light source as third spatial position information;
fourth spatial position information of the light source corresponding to the fourth light spot position information is determined based on the third light spot position information, the fourth light spot position information and a predetermined spatial position relationship of the light source.
The second model estimation parameters have the same composition as the first model estimation parameters; the difference is the time at which they are acquired. That is, the calibration pupil position information is the pupil position information obtained in the calibration stage. The third light spot position information is the position information of the spot in the first calibration image (the bright pupil image of the calibration stage), and the third spatial position information is the spatial position information of the corresponding light source. Similarly, the fourth light spot position information is the position information of the spot in the second calibration image (the dark pupil image of the calibration stage), and the fourth spatial position information is the spatial position information of the corresponding light source.
Specific means for determining the second model estimation parameters may refer to means for determining the first model estimation parameters, which are not described herein.
The embodiment is described below by way of example. Fig. 2b is a schematic structural diagram of an eye tracking apparatus according to the present invention. Referring to fig. 2b, the eye tracking apparatus comprises an image acquisition device 21, such as a camera. The eye tracking device further comprises a first light source 22, a second light source 23 and a third light source 24; each light source may be an infrared lamp. The first light source 22 is coaxial with the image acquisition device 21. The eye tracking apparatus further comprises a processor for controlling the operation of the light sources and the image acquisition device 21 to determine gaze information of the user.
One function of the processor is to switch the light sources on and off with the required timing and to store the bright and dark pupil images. Specifically, the processor activates the first light source 22 and, while it is on, controls the image acquisition device 21 to take a picture and saves the resulting bright pupil image. The first light source 22 is then turned off, the second light source 23 and the third light source 24 are turned on simultaneously, and while they are on the image acquisition device 21 is controlled to take a picture and the dark pupil image is saved.
It should be noted that when the processor controls the working states of the light sources and the image acquisition device 21, a short delay may be inserted after each device is switched on, to ensure that accurate bright and dark pupil images are acquired.
Illustratively, after the first light source 22 is activated, the processor waits for a first time period, then activates the image acquisition device 21 to take a picture and stores the bright pupil image. After waiting a second time period, the second light source 23 and the third light source 24 are started; after a further third time period, the image acquisition device 21 is activated to acquire the dark pupil image. The values of the first, second and third time periods are not limited.
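The timing described above can be sketched as follows in Python; the on()/off() light-source interface, camera.grab(), and the delay values are assumptions made for illustration, not an interface defined by the patent.

```python
import time

def capture_bright_and_dark(first, second, third, camera,
                            t1=0.005, t2=0.005, t3=0.005):
    """One bright/dark pupil acquisition cycle.

    first/second/third are assumed to expose on() and off();
    camera.grab() is assumed to return one frame. t1-t3 stand in for
    the first, second and third time periods.
    """
    first.on()
    time.sleep(t1)          # first time period: let the on-axis source settle
    bright = camera.grab()  # bright pupil image

    first.off()
    time.sleep(t2)          # second time period before switching sources
    second.on()
    third.on()
    time.sleep(t3)          # third time period before capturing
    dark = camera.grab()    # dark pupil image
    second.off()
    third.off()
    return bright, dark
```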
In the process of tracking the user's gaze information, this cycle of acquiring bright and dark pupil images is repeated continuously.
The eye tracking device determines gaze information based on the correspondence between the light spots and the light sources: the positions of the spots in the dark pupil image reflect the spatial arrangement of the light sources in the eye tracking device. If the dark pupil image contains two spots, they lie on either side of the spot in the bright pupil image; the spot generated by the second light source 23 lies to its left, and the spot generated by the third light source 24 lies to its right.
Because two adjacent frames are acquired within a very short interval (on the order of milliseconds), the head pose and pupil pose remain essentially unchanged, or change only very slightly, between them. On that premise, the pupil region can be computed from the marked feature difference between the bright pupil image and the dark pupil image, yielding the coordinates of the pupil center.
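A minimal sketch of this bright/dark differencing follows, assuming grayscale frames of equal size; the threshold value is illustrative and would be tuned per device.

```python
import cv2

def pupil_center(bright_img, dark_img, thresh=40):
    """Estimate the pupil center from a bright/dark pupil image pair.

    Since the two frames are only milliseconds apart, the pupil region
    dominates their difference image.
    """
    diff = cv2.absdiff(bright_img, dark_img)            # pupil stands out
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:
        return None                                     # no pupil found
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])   # centroid
```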
When determining the light source spatial position relationship, the positions of the light spots in the bright and dark pupil images are located first; the correspondence between spots and light sources is then determined from the relative positions of the spots in the two images together with the structural information of the eye tracking device, and this correspondence is taken as the light source spatial position relationship.
Taking fig. 1f and fig. 1g as an example: the bright pupil image is processed to determine the first light spot position information, and the dark pupil image is processed to determine the second light spot position information, which comprises a first position for the second bright spot 181 and a second position for the third bright spot 182; a missing spot is recorded as empty. The spatial relationship of the first and second positions relative to the first light spot position information is then judged. If the first position lies to the left of the first light spot position information and is not empty, the second bright spot 181 at the first position was generated by the second light source 23; otherwise it is marked as empty. If the second position lies to the right of the first light spot position information and is not empty, the third bright spot 182 at the second position was generated by the third light source 24; otherwise it is marked as empty.
Note that if the bright pupil image contains no bright spot, the bright pupil image and the dark pupil image are re-acquired.
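The left/right assignment above can be expressed compactly. This is a sketch under the assumption that the spot centers have already been extracted; the function and variable names are hypothetical.

```python
def match_spots_to_sources(bright_spot_x, dark_spots):
    """Assign dark pupil spots to the second and third light sources.

    bright_spot_x: x-coordinate of the first light source's spot in the
    bright pupil image. dark_spots: list of (x, y) spot centers from the
    dark pupil image. Returns (second_source_spot, third_source_spot);
    an entry is None ("empty") when the corresponding spot is missing.
    """
    second_spot = third_spot = None
    for (x, y) in dark_spots:
        if x < bright_spot_x:        # left of the on-axis spot
            second_spot = (x, y)     # produced by the second light source
        elif x > bright_spot_x:      # right of the on-axis spot
            third_spot = (x, y)      # produced by the third light source
    return second_spot, third_spot
```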
In determining the gaze information, the pupil position information is used together with the light spots of the first light source 22 and the second light source 23, or of the first light source 22 and the third light source 24. When the light spots of the second light source 23 and the third light source 24 are both present, they are ranked by detected spot quality, and the spot of higher quality is selected for the gaze computation. One possible ranking criterion is the effective pixel area of the spot: the larger the effective pixel area, the more accurately the spot center can be extracted.
For calibration, this embodiment first acquires the second model estimation parameters and then determines the calibration parameters from them and the position information of the calibration points. Once the calibration parameters are determined, gaze information can be obtained directly from the estimation model: the first model estimation parameters are acquired and input into the estimation model, which outputs the gaze information.
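The patent does not specify the form of the estimation model. As a hedged illustration only, the sketch below solves the calibration parameters by least squares for a model assumed to be linear in its parameters; how the feature vector is built from pupil and spot positions is likewise an assumption.

```python
import numpy as np

def fit_estimation_model(features, targets):
    """Solve calibration parameters by least squares.

    features: N x d array of second model estimation parameters gathered
    while the user gazes at the calibration points (feature construction
    is an assumption, not given by the patent).
    targets: N x 2 array of calibration point positions.
    """
    X = np.hstack([features, np.ones((features.shape[0], 1))])  # bias term
    coeffs, *_ = np.linalg.lstsq(X, targets, rcond=None)
    return coeffs

def estimate_gaze(coeffs, feature):
    """Apply the solved model to first model estimation parameters."""
    x = np.append(np.asarray(feature, dtype=float), 1.0)
    return x @ coeffs
```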
The gaze information determining method of this embodiment solves the problem of eye tracking at a large field angle at low cost, and accurately determines gaze information through the correspondence between light spots and light sources.
Example III
Fig. 3 is a schematic structural diagram of a gaze information determining apparatus according to a third embodiment of the present invention. The apparatus may be used to determine gaze information, in particular over a large field angle. The apparatus may be implemented in software and/or hardware and is typically integrated on an eye tracking device.
As shown in fig. 3, the apparatus includes: an image acquisition module 31, a determination module 32, and an information acquisition module 33;
the image acquisition module 31 is configured to control the working states of the first light source, the second light source, the third light source and the image acquisition device, and acquire a bright pupil image and a dark pupil image;
a determining module 32, configured to determine a first model estimation parameter according to the spatial position relationship of the bright pupil image, the dark pupil image and the light source;
the information obtaining module 33 is configured to input the first model estimation parameter into an estimation model, and obtain gaze information of the user.
In this embodiment, the device first controls the working states of the first light source, the second light source, the third light source and the image acquisition device through the image acquisition module 31 to acquire a bright pupil image and a dark pupil image; it then determines, through the determining module 32, the first model estimation parameters according to the bright pupil image, the dark pupil image and the light source spatial position relationship; finally, the information acquisition module 33 inputs the first model estimation parameters into an estimation model to acquire the user's gaze information.
This embodiment provides a gaze information determining device that acquires a bright pupil image and a dark pupil image by controlling the working states of the first light source, the second light source, the third light source and the image acquisition device; analyzes the two images to obtain the first model estimation parameters; and inputs those parameters into an estimation model to obtain the user's gaze information. The bright and dark pupil images generated by light sources at different positions allow the first model estimation parameters, and hence the user's gaze information, to be determined effectively. The method is low in cost, and because the light sources are spaced apart, the image acquisition device captures light spots over a larger range, so large head poses or large eye movements can be accommodated; through the spatial position relationship of the bright pupil image, the dark pupil image and the light sources, gaze information is computed dynamically and the gaze direction and gaze point are acquired accurately in large-field-angle environments.
Further, the image acquisition module 31 is specifically configured to: controlling the first light source to start, and controlling the image acquisition device to acquire a bright pupil image;
and after the first light source is turned off, controlling the second light source and the third light source to be started, and controlling the image acquisition device to acquire a dark pupil image.
On the basis of the above scheme, the first model estimation parameters include: pupil position information, first light spot position information of light spots in the bright pupil image, first spatial position information of the light source corresponding to the first light spot position information, second light spot position information of light spots in the dark pupil image, and second spatial position information of the light source corresponding to the second light spot position information. The determining module 32 is specifically configured to: determine pupil position information according to the bright pupil image and the dark pupil image;
respectively determining first light spot position information of light spots included in the bright pupil image and second light spot position information of light spots included in the dark pupil image;
taking the position information of the first light source as first space position information;
and determining second spatial position information of the light source corresponding to the second light spot position information based on the first light spot position information, the second light spot position information and a predetermined spatial position relation of the light source.
Based on the above technical solution, the determining module 32 is specifically configured to:
if the dark pupil image comprises two light spots, taking the position information of the light spots meeting the quality condition as second light spot position information, wherein the quality condition is determined according to the light spot characteristics;
And if the dark pupil image comprises one light spot, taking the position information of the light spot included in the dark pupil image as second light spot position information.
Further, the apparatus also includes a model determination module for: displaying the calibration points;
controlling working states of the first light source, the second light source, the third light source and the image acquisition device, and acquiring a first calibration image and a second calibration image when a user looks at the calibration point;
determining a second model estimation parameter according to the first calibration image, the second calibration image and the light source spatial position relation;
inputting the second model estimation parameters and the position information of the calibration points into an estimation model to be solved, and determining the numerical value of the calibration parameters;
substituting the determined numerical value of the calibration parameter into an estimation model to be solved to obtain the solved estimation model.
Further, when controlling the working states of the first light source, the second light source, the third light source and the image acquisition device, and acquiring a first calibration image and a second calibration image while the user gazes at the calibration point, the model determining module is specifically configured for:
controlling the first light source to start, and controlling the image acquisition device to acquire a first calibration image;
After the first light source is turned off, the second light source and the third light source are controlled to be started, and the image acquisition device is controlled to acquire a second calibration image.
Further, the second model estimation parameters include: calibrating pupil position information, third light spot position information of light spots in the first calibration image, third spatial position information of light sources corresponding to the third light spot position information, fourth light spot position information of light spots in the second calibration image and fourth spatial position information of light sources corresponding to the fourth light spot position information;
correspondingly, the model determining module is specifically configured to, when determining the second model estimation parameter according to the first calibration image and the second calibration image:
determining calibration pupil position information according to the first calibration image and the second calibration image;
respectively determining third light spot position information of light spots included in the first calibration image and fourth light spot position information of light spots included in the second calibration image;
taking the position information of the first light source as third spatial position information;
fourth spatial position information of the light source corresponding to the fourth light spot position information is determined based on the third light spot position information, the fourth light spot position information and a predetermined spatial position relationship of the light source.
The gaze information determining device can execute the gaze information determining method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the executing method.
Example IV
Fig. 4 is a schematic structural diagram of an eye tracking apparatus according to a fourth embodiment of the present invention. As shown in fig. 4, an eye tracking apparatus according to a fourth embodiment of the present invention includes: one or more processors 41 and a storage device 42; the number of processors 41 in the device may be one or more, one processor 41 being taken as an example in fig. 4; the storage device 42 is used for storing one or more programs; the one or more programs are executed by the one or more processors 41, such that the one or more processors 41 implement a gaze information determination method as in any of the embodiments of the present invention.
The eye tracking apparatus may further include: an input device 43 and an output device 44.
The processor 41, the storage device 42, the input device 43 and the output device 44 in the eye tracking apparatus may be connected by a bus or in other ways; connection by a bus is taken as the example in fig. 4.
The storage device 42 in the eye tracking apparatus is used as a computer readable storage medium, and may be used to store one or more programs, such as a software program, a computer executable program, and a module, such as program instructions/modules corresponding to the gaze information determining method provided in the first or second embodiments of the present invention (for example, the modules in the gaze information determining device shown in fig. 3, including the image acquisition module 31, the determining module 32, and the information acquisition module 33). The processor 41 executes various functional applications of the eye tracking apparatus and data processing by executing software programs, instructions, and modules stored in the storage device 42, that is, implements the gaze information determination method in the above-described method embodiment.
The storage device 42 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, at least one application program required for a function; the storage data area may store data created from the use of the eye tracking apparatus, and the like. In addition, the storage 42 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some examples, the storage 42 may further include memory remotely located with respect to the processor 41, which may be connected to the eye tracking device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input means 43 may be used to receive entered numerical or character information and to generate key signal inputs related to user settings and function control of the eye tracking apparatus. The output device 44 may include a display device such as a display screen.
The eyeball tracking device further comprises a combined light source 45 and an image acquisition device 46; the combined light source 45 comprises a first light source, which is located on the same optical path as the image acquisition device 46; the combined light source 45 further includes a second light source and a third light source located on different sides of the first light source and on different optical paths from the image capture device 46.
And, when one or more programs included in the above-described eye tracking apparatus are executed by the one or more processors 41, the programs perform the following operations:
controlling working states of the first light source, the second light source, the third light source and the image acquisition device to acquire a bright pupil image and a dark pupil image;
determining a first model estimation parameter according to the spatial position relation of the bright pupil image, the dark pupil image and the light source;
and inputting the first model estimation parameters into an estimation model to acquire the gazing information of the user.
Example five
A fifth embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program performs a gaze information determining method comprising: controlling the working states of the first light source, the second light source, the third light source and the image acquisition device to acquire a bright pupil image and a dark pupil image;
determining a first model estimation parameter according to the spatial position relation of the bright pupil image, the dark pupil image and the light source;
and inputting the first model estimation parameters into an estimation model to acquire the gazing information of the user.
Optionally, the program may be further configured to perform the gaze information determination method provided by any embodiment of the present invention when executed by a processor.
The computer storage media of embodiments of the invention may take the form of any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a flash memory, an optical fiber, a portable CD-ROM, an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. A computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to: electromagnetic signals, optical signals, or any suitable combination of the preceding. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, radio Frequency (RF), and the like, or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including object oriented programming languages such as Java, Smalltalk and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
Note that the above is only a preferred embodiment of the present invention and the technical principle applied. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, while the invention has been described in connection with the above embodiments, the invention is not limited to the embodiments, but may be embodied in many other equivalent forms without departing from the spirit or scope of the invention, which is set forth in the following claims.

Claims (8)

1. The gazing information determining method is characterized by being applied to eyeball tracking equipment, wherein the eyeball tracking equipment comprises a combined light source, an image acquisition device and a processor, the combined light source comprises a first light source, and the first light source and the image acquisition device are positioned on the same optical path; the combined light source further includes a second light source and a third light source located on different sides of the first light source and on different optical paths than the image capture device, the method performed by the processor, the method comprising:
Controlling working states of the first light source, the second light source, the third light source and the image acquisition device to acquire a bright pupil image and a dark pupil image;
determining a first model estimation parameter according to the bright pupil image, the dark pupil image and a light source spatial position relation, wherein the light source spatial position relation is the relative position of each light source in the eyeball tracking equipment;
inputting the first model estimation parameters into an estimation model to acquire the gazing information of a user;
wherein the first model estimation parameters include: pupil position information, first light spot position information of light spots in the bright pupil image, first spatial position information of a light source corresponding to the first light spot position information, second light spot position information of light spots in the dark pupil image and second spatial position information of a light source corresponding to the second light spot position information;
correspondingly, determining a first model estimation parameter according to the spatial position relation of the bright pupil image, the dark pupil image and the light source comprises the following steps:
determining pupil position information according to the bright pupil image and the dark pupil image;
respectively determining first light spot position information of light spots included in the bright pupil image and second light spot position information of light spots included in the dark pupil image;
Taking the position information of the first light source as first spatial position information;
determining second spatial position information of a light source corresponding to the second light spot position information based on the first light spot position information, the second light spot position information and a predetermined spatial position relationship of the light source;
the determining the second spot position information of the spot included in the dark pupil image includes:
and if the dark pupil image comprises two light spots, taking the position information of the light spots meeting the quality condition as second light spot position information, wherein the quality condition is determined according to the light spot characteristics, and the quality condition comprises at least one of the following: the pixel area is larger than a set area threshold value, and the light spot edge information meets the edge condition;
and if the dark pupil image comprises one light spot, taking the position information of the light spot included in the dark pupil image as second light spot position information.
2. The method of claim 1, wherein controlling the operation states of the first light source, the second light source, the third light source, and the image capturing device to obtain a bright pupil image and a dark pupil image includes:
controlling the first light source to start, and controlling the image acquisition device to acquire a bright pupil image;
After the first light source is turned off, the second light source and the third light source are controlled to be started, and the image acquisition device is controlled to acquire dark pupil images.
3. The method of claim 1, wherein determining an estimation model comprises:
displaying the calibration points;
controlling working states of the first light source, the second light source, the third light source and the image acquisition device, and acquiring a first calibration image and a second calibration image when a user looks at the calibration point;
determining a second model estimation parameter according to the first calibration image, the second calibration image and the light source spatial position relation;
inputting the second model estimation parameters and the position information of the calibration points into an estimation model to be solved, and determining the numerical value of the calibration parameters;
substituting the determined numerical value of the calibration parameter into an estimation model to be solved to obtain the solved estimation model.
4. A method according to claim 3, wherein controlling the operation states of the first light source, the second light source, the third light source, and the image acquisition device, acquiring a first calibration image and a second calibration image when the user gazes at the calibration point, comprises:
Controlling the first light source to start, and controlling the image acquisition device to acquire a first calibration image;
after the first light source is turned off, the second light source and the third light source are controlled to be started, and the image acquisition device is controlled to acquire a second calibration image.
5. A method according to claim 3, wherein the second model estimation parameters comprise: calibrating pupil position information, third light spot position information of light spots in the first calibration image, third spatial position information of light sources corresponding to the third light spot position information, fourth light spot position information of light spots in the second calibration image and fourth spatial position information of light sources corresponding to the fourth light spot position information;
correspondingly, determining a second model estimation parameter according to the first calibration image and the second calibration image comprises:
determining calibration pupil position information according to the first calibration image and the second calibration image;
respectively determining third light spot position information of light spots included in the first calibration image and fourth light spot position information of light spots included in the second calibration image;
taking the position information of the first light source as third spatial position information;
Fourth spatial position information of the light source corresponding to the fourth light spot position information is determined based on the third light spot position information, the fourth light spot position information and a predetermined spatial position relationship of the light source.
6. The gazing information determining device is characterized by being arranged on eyeball tracking equipment, wherein the eyeball tracking equipment comprises a combined light source, an image acquisition device and a processor, the combined light source comprises a first light source, and the first light source and the image acquisition device are positioned on the same optical path; the combined light source further comprises a second light source and a third light source, the second light source and the third light source being located on different sides of the first light source and on different optical paths than the image acquisition device, the device comprising:
the image acquisition module is used for controlling the working states of the first light source, the second light source, the third light source and the image acquisition device to acquire a bright pupil image and a dark pupil image;
the determining module is configured to determine a first model estimation parameter according to the bright pupil image, the dark pupil image and a spatial position relationship of light sources, where the spatial position relationship of light sources is a relative position of each light source in the eye tracking device, and the first model estimation parameter includes: pupil position information, first light spot position information of light spots in the bright pupil image, first spatial position information of a light source corresponding to the first light spot position information, second light spot position information of light spots in the dark pupil image and second spatial position information of a light source corresponding to the second light spot position information;
The information acquisition module is used for inputting the first model estimation parameters into an estimation model to acquire the gazing information of a user;
the determining module is specifically configured to:
determining pupil position information according to the bright pupil image and the dark pupil image;
respectively determining first light spot position information of light spots included in the bright pupil image and second light spot position information of light spots included in the dark pupil image;
taking the position information of the first light source as first space position information;
determining second spatial position information of a light source corresponding to the second light spot position information based on the first light spot position information, the second light spot position information and a predetermined spatial position relationship of the light source;
the determining the second spot position information of the spot included in the dark pupil image includes:
and if the dark pupil image comprises two light spots, taking the position information of the light spots meeting the quality condition as second light spot position information, wherein the quality condition is determined according to the light spot characteristics, and the quality condition comprises at least one of the following: the pixel area is larger than a set area threshold value, and the light spot edge information meets the edge condition;
And if the dark pupil image comprises one light spot, taking the position information of the light spot included in the dark pupil image as second light spot position information.
7. An eye tracking apparatus, comprising: combining the light source and the image acquisition device; the combined light source comprises a first light source, and the first light source and the image acquisition device are positioned on the same optical path; the combined light source further comprises a second light source and a third light source, wherein the second light source and the third light source are positioned on different sides of the first light source and positioned on different optical paths with the image acquisition device;
one or more processors;
a storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the gaze information determination method of any of claims 1-5.
8. A computer-readable storage medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements a gaze information determination method according to any of claims 1-5.
CN201910522956.0A 2019-06-17 2019-06-17 Gaze information determination method, gaze information determination device, eyeball tracking device, and storage medium Active CN112099615B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910522956.0A CN112099615B (en) 2019-06-17 2019-06-17 Gaze information determination method, gaze information determination device, eyeball tracking device, and storage medium

Publications (2)

Publication Number Publication Date
CN112099615A CN112099615A (en) 2020-12-18
CN112099615B true CN112099615B (en) 2024-02-09

Family

ID=73748675

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910522956.0A Active CN112099615B (en) 2019-06-17 2019-06-17 Gaze information determination method, gaze information determination device, eyeball tracking device, and storage medium

Country Status (1)

Country Link
CN (1) CN112099615B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112926523B (en) 2021-03-30 2022-07-26 青岛小鸟看看科技有限公司 Eyeball tracking method and system based on virtual reality
CN116612162B (en) * 2023-07-21 2023-11-17 中色创新研究院(天津)有限公司 Hole site calibration analysis method, system and storage medium based on image comparison model

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN201477518U (en) * 2009-08-31 2010-05-19 北京科技大学 Sight line tracking unit based on pupilla-cornea reflection method
CN101788848A (en) * 2009-09-29 2010-07-28 北京科技大学 Eye characteristic parameter detecting method for sight line tracking system
CN103106397A (en) * 2013-01-19 2013-05-15 华南理工大学 Human face living body detection method based on bright pupil effect
CN103677270A (en) * 2013-12-13 2014-03-26 电子科技大学 Human-computer interaction method based on eye movement tracking
CN104113680A (en) * 2013-04-19 2014-10-22 北京三星通信技术研究有限公司 Sight line tracking system and method
CN104657648A (en) * 2013-11-18 2015-05-27 广达电脑股份有限公司 Head-mounted display device and login method thereof
CN104657973A (en) * 2013-11-25 2015-05-27 联想(北京)有限公司 Image processing method, electronic equipment and control unit

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6056323B2 (en) * 2012-09-24 2017-01-11 富士通株式会社 Gaze detection device, computer program for gaze detection
US10417782B2 (en) * 2014-08-22 2019-09-17 National University Corporation Shizuoka University Corneal reflection position estimation system, corneal reflection position estimation method, corneal reflection position estimation program, pupil detection system, pupil detection method, pupil detection program, gaze detection system, gaze detection method, gaze detection program, face orientation detection system, face orientation detection method, and face orientation detection program

Also Published As

Publication number Publication date
CN112099615A (en) 2020-12-18

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant