CN117130468A - Eye movement tracking method, device, equipment and storage medium


Info

Publication number: CN117130468A
Application number: CN202210557432.7A
Authority: CN (China)
Prior art keywords: point, coordinates, target, coordinate, detecting
Legal status: Pending
Other languages: Chinese (zh)
Inventor: 孟祥熙
Current assignee: Beijing Xiaomi Mobile Software Co Ltd
Original assignee: Beijing Xiaomi Mobile Software Co Ltd
Filing date: 2022-05-20 (application filed by Beijing Xiaomi Mobile Software Co Ltd)
Publication date: 2023-11-28

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013: Eye tracking input arrangements
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/20: Analysis of motion
    • G06T 7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/18: Eye characteristics, e.g. of the iris
    • G06V 40/193: Preprocessing; Feature extraction
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/18: Eye characteristics, e.g. of the iris
    • G06V 40/197: Matching; Classification
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10016: Video; Image sequence
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30196: Human being; Person
    • G06T 2207/30201: Face

Abstract

The present disclosure relates to an eye movement tracking method, device, equipment and storage medium. The method comprises: acquiring a preliminary predicted fixation point of a tested person based on a preset eye tracking mode; inputting the coordinates of the predicted fixation point into a pre-fitted thin-plate spline interpolation function to obtain the coordinates of a target interpolation point; and determining the actual predicted fixation point of the tested person based on the coordinates of the target interpolation point. The method brings the actual predicted gaze point closer to the real gaze point of the tested person and therefore improves the accuracy of eye movement tracking. It is applicable to various preset eye movement tracking modes without modifying their algorithm programs, and thus offers high flexibility.

Description

Eye movement tracking method, device, equipment and storage medium
Technical Field
The disclosure relates to the technical field of eye movement tracking, and in particular to an eye movement tracking method, device, equipment and storage medium.
Background
Eye tracking technology is an important means of studying and exploiting eyeball motion; it mainly concerns the acquisition, modeling, and simulation of eye movement information. With this technology, the focus of human visual attention can be determined, which in turn allows human behavior and awareness to be analyzed. At present, eye tracking technology is widely applied in human-computer interaction, sports, shopping scenarios, and other fields.
In the related art, after calibration, an eye tracking device can track the user's eye movement: for example, it collects images of the scene the user is gazing at together with the user's eye movement data, and predicts the user's gaze point with the corneal reflection method. However, during use, once the position at which the user wears the device changes, or the scene in which the device is used changes, eye tracking drift easily occurs; that is, the gaze point predicted by the device deviates from the user's real gaze point (ground truth) and can no longer meet the user's accuracy requirements.
Disclosure of Invention
To overcome the problems in the related art, embodiments of the present disclosure provide an eye tracking method, apparatus, device, and storage medium that address these drawbacks.
According to a first aspect of embodiments of the present disclosure, there is provided an eye movement tracking method, the method comprising:
acquiring a preliminary predicted fixation point of a tested person based on a preset eye tracking mode;
inputting the coordinates of the predicted fixation point into a pre-fitted thin plate spline interpolation function to obtain the coordinates of a target interpolation point;
and determining the actual predicted fixation point of the tested person based on the coordinates of the target interpolation point.
In some embodiments, the method further comprises fitting the thin-plate spline interpolation function based on:
detecting target feature points in a currently acquired foreground image;
in response to detecting the target feature point, matching a first coordinate of the target feature point with a second coordinate of a current predicted gaze point acquired based on a preset eye tracking mode;
and in response to successful matching of the first coordinate and the second coordinate, respectively taking the first coordinate and the second coordinate as the coordinates of a control point and the coordinates of a corresponding interpolation point, and fitting the thin-plate spline interpolation function.
In some embodiments, the detecting the target feature point in the currently acquired foreground image includes:
detecting characteristic points of a preset object in a currently acquired foreground image;
in response to detecting the feature points of the preset object, determining the feature points of the preset object as target feature points.
In some embodiments, the detecting the target feature point in the currently acquired foreground image includes:
detecting a mark point stuck on a preset object in a currently acquired foreground image;
in response to detecting the marker point, the marker point is determined to be a target feature point.
In some embodiments, the detecting the target feature point in the currently acquired foreground image includes:
detecting a mark point displayed on a screen of preset equipment in a currently acquired foreground image;
in response to detecting a marker point displayed on the preset device screen, the marker point is determined as a target feature point.
In some embodiments, the detecting the target feature point in the currently acquired foreground image includes:
and detecting target feature points in the currently acquired foreground image based on the preset frequency.
According to a second aspect of embodiments of the present disclosure, there is provided an eye tracking device, the device comprising:
the preliminary gaze point acquisition module is used for acquiring a preliminary predicted gaze point of the tested person based on a preset eye movement tracking mode;
the target interpolation point acquisition module is used for inputting the coordinates of the predicted fixation point into a pre-fitted thin plate spline interpolation function to obtain the coordinates of the target interpolation point;
and the actual fixation point determining module is used for determining the actual predicted fixation point of the tested person based on the coordinates of the target interpolation point.
In some embodiments, the apparatus further comprises a function fitting module;
the function fitting module comprises:
the detection unit is used for detecting target feature points in the currently acquired foreground image;
the matching unit is used for responding to the detection of the target characteristic point and matching the first coordinate of the target characteristic point with the second coordinate of the current predicted gaze point acquired based on a preset eye movement tracking mode;
and the fitting unit is used for fitting the thin plate spline interpolation function by taking the first coordinate and the second coordinate as the coordinates of the control point and the coordinates of the corresponding interpolation point respectively in response to successful matching of the first coordinate and the second coordinate.
In some embodiments, the detection unit is further configured to:
detecting characteristic points of a preset object in a currently acquired foreground image;
in response to detecting the feature points of the preset object, determining the feature points of the preset object as target feature points.
In some embodiments, the detection unit is further configured to:
detecting a mark point stuck on a preset object in a currently acquired foreground image;
in response to detecting the marker point, the marker point is determined to be a target feature point.
In some embodiments, the detection unit is further configured to:
detecting a mark point displayed on a screen of preset equipment in a currently acquired foreground image;
in response to detecting a marker point displayed on the preset device screen, the marker point is determined as a target feature point.
In some embodiments, the detection unit is further configured to:
and detecting target feature points in the currently acquired foreground image based on the preset frequency.
According to a third aspect of embodiments of the present disclosure, there is provided an eye-tracking device, the device comprising:
a processor and a memory for storing a computer program;
wherein the processor is configured to implement, when executing the computer program:
acquiring a preliminary predicted fixation point of a tested person based on a preset eye tracking mode;
inputting the coordinates of the predicted fixation point into a pre-fitted thin plate spline interpolation function to obtain the coordinates of a target interpolation point;
and determining the actual predicted fixation point of the tested person based on the coordinates of the target interpolation point.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements:
acquiring a preliminary predicted fixation point of a tested person based on a preset eye tracking mode;
inputting the coordinates of the predicted fixation point into a pre-fitted thin plate spline interpolation function to obtain the coordinates of a target interpolation point;
and determining the actual predicted fixation point of the tested person based on the coordinates of the target interpolation point.
The technical scheme provided by the embodiment of the disclosure can comprise the following beneficial effects:
according to the method, the preliminary predicted fixation point of the tested person is obtained based on the preset eye movement tracking mode, the coordinates of the predicted fixation point are input into the pre-fitted thin plate spline interpolation function, the coordinates of the target interpolation point are obtained, the actual predicted fixation point of the tested person is further determined based on the coordinates of the target interpolation point, and due to the characteristics of the thin plate spline interpolation function, namely the change of one control point adopted in the fitting function, the surrounding non-control points can be driven to generate non-rigid changes of different degrees, so that the coordinates of the actual predicted fixation point can be displaced to different degrees along with the change of the preliminary predicted fixation point, the actual predicted fixation point can be enabled to be closer to the actual fixation point (ground truth) of the tested person, and accordingly the accuracy of eye movement tracking can be improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flowchart illustrating an eye tracking method according to an exemplary embodiment of the present disclosure;
FIG. 2 is a flowchart illustrating how to fit the thin-plate spline interpolation function according to an exemplary embodiment of the present disclosure;
FIG. 3 is a flowchart illustrating how to detect target feature points in a currently acquired foreground image according to an exemplary embodiment of the present disclosure;
FIG. 4 is a flowchart illustrating how target feature points are detected in a currently acquired foreground image according to yet another exemplary embodiment of the present disclosure;
FIG. 5 is a flowchart illustrating how to detect target feature points in a currently acquired foreground image according to another exemplary embodiment of the present disclosure;
FIG. 6 is a block diagram of an eye tracking device according to an exemplary embodiment of the present disclosure;
FIG. 7 is a block diagram of an eye tracking device according to yet another exemplary embodiment of the present disclosure;
FIG. 8 is a block diagram of an eye tracking device according to an exemplary embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations set forth in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure, as detailed in the appended claims.
FIG. 1 is a flowchart illustrating a method of eye tracking according to an exemplary embodiment; the method of the present embodiment may be applied to eye tracking devices (e.g., wearable electronic devices such as smart glasses, smart helmets, etc.).
As shown in FIG. 1, the method includes the following steps S101-S103:
in step S101, a preliminary predicted gaze point of the subject is obtained based on a preset eye tracking manner.
In this embodiment, after the user wears the eye tracking device and turns on the eye tracking function, the device may obtain the preliminary predicted gaze point of the subject based on the preset eye tracking mode.
The preset eye tracking mode may be set based on actual service requirements, for example an eye tracking mode based on the corneal reflection method in the related art; this embodiment does not limit it.
For example, acquiring the preliminary predicted gaze point of the subject based on the preset eye tracking mode may include acquiring a foreground image of the scene in front of the subject's eyes with a foreground camera, collecting the subject's eye movement data (e.g., eye images) with an eye movement sensor (such as a camera or other image acquisition module), and then analyzing the foreground image and the eye movement data with the corneal reflection method to obtain the preliminary predicted gaze point. For the specific analysis process, refer to the description of the corneal reflection method in the related art; this embodiment does not limit it.
It should be noted that, in addition to the above eye tracking mode based on the corneal reflection method, other eye tracking modes may be adopted in the actual implementation of this embodiment; the preliminary predicted gaze point obtained in that way is equally applicable to the subsequent steps of this embodiment.
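The patent does not prescribe how the corneal reflection method computes the preliminary gaze point. As a minimal illustrative sketch of the pupil-center/corneal-reflection (PCCR) idea in Python (the function name, the 2x6 coefficient matrix, and the second-order polynomial mapping are all assumptions, not the patented method):

    import numpy as np

    def preliminary_gaze_point(pupil_center, glint_center, mapping):
        """Illustrative PCCR sketch: map the pupil-center-to-glint vector
        into foreground-image coordinates via a polynomial fitted at
        calibration time. `mapping` is an assumed 2x6 coefficient matrix.
        """
        dx, dy = np.asarray(pupil_center, float) - np.asarray(glint_center, float)
        # Second-order polynomial features, a common PCCR choice.
        features = np.array([1.0, dx, dy, dx * dy, dx ** 2, dy ** 2])
        return mapping @ features  # (x, y): preliminary predicted gaze point

Here `pupil_center` and `glint_center` are assumed to have been extracted from the eye image beforehand.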
In step S102, the coordinates of the predicted gaze point are input to a pre-fitted thin-plate spline interpolation function, so as to obtain the coordinates of the target interpolation point.
In this embodiment, after obtaining the preliminary predicted gaze point of the subject based on the preset eye tracking mode, the coordinates of the predicted gaze point may be input to a pre-fitted thin-plate spline interpolation function to obtain the coordinates of the target interpolation point. The coordinates of the predicted gaze point can be used as an independent variable of the thin-plate spline interpolation function, and the coordinates of the target interpolation point are corresponding dependent variables of the thin-plate spline interpolation function.
It is worth noting that the thin-plate spline interpolation function has the following property: a change in any control point used to fit the function drives the surrounding non-control points to deform non-rigidly to different degrees. Consequently, the coordinates of the actual predicted gaze point shift by different amounts as the preliminary predicted gaze point changes, which brings the actual predicted gaze point closer to the real gaze point (ground truth) of the tested person and thereby improves the accuracy of eye tracking.
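For reference (the patent does not spell out the formula), the standard two-dimensional thin-plate spline maps a point p = (x, y) to f(p) = a0 + a1*x + a2*y + sum_i w_i * phi(||p - c_i||), where the c_i are the control points, phi(r) = r^2 * log(r) is the radial basis kernel, and the affine coefficients a0, a1, a2 and the weights w_i are solved from the control-point/interpolation-point pairs under the side conditions sum_i w_i = 0 and sum_i w_i * c_i = 0. Because phi minimizes the bending energy of a thin metal plate, moving one control point deforms its neighborhood smoothly and non-rigidly, which is exactly the behavior described above.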
In some embodiments, the thin-plate spline interpolation function may be fitted based on the target feature points detected by the eye tracking device in the currently acquired foreground image, and the specific fitting process may refer to a thin-plate spline interpolation function fitting manner in the related art, which is not limited in this embodiment.
In other embodiments, the fitting process of the thin-plate spline interpolation function may be described in the embodiment shown in fig. 2, which is not described in detail herein.
In step S103, an actual predicted gaze point of the subject is determined based on the coordinates of the target interpolation point.
In this embodiment, when the coordinates of the predicted gaze point are input to a pre-fitted thin-plate spline interpolation function, the coordinates of the target interpolation point may be obtained, and then the actual predicted gaze point of the subject may be determined based on the coordinates of the target interpolation point. For example, the coordinates of the target interpolation point may be determined as the coordinates of the actual predicted gaze point of the subject, that is, the target interpolation point may be determined as the eye tracking result of the subject.
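Concretely, steps S102 and S103 amount to evaluating a pre-fitted thin-plate spline at the preliminary coordinates. A minimal sketch using SciPy's RBFInterpolator (the library choice is an assumption; the patent names no implementation, and the `tps` object is assumed to have been fitted as described with FIG. 2):

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    def actual_predicted_gaze_point(tps: RBFInterpolator, preliminary_xy):
        # Step S102: evaluate the pre-fitted thin-plate spline at the
        # preliminary predicted gaze point to obtain the target
        # interpolation point; step S103: take that point as the
        # actual predicted gaze point.
        xy = np.asarray(preliminary_xy, dtype=float).reshape(1, 2)
        return tuple(tps(xy)[0])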
As can be seen from the above description, the method of this embodiment obtains the preliminary predicted gaze point of the tested person based on the preset eye tracking mode, inputs the coordinates of the predicted gaze point into the pre-fitted thin-plate spline interpolation function to obtain the coordinates of the target interpolation point, and then determines the actual predicted gaze point of the tested person based on the coordinates of the target interpolation point.
FIG. 2 is a flowchart illustrating how to fit the thin-plate spline interpolation function according to an exemplary embodiment of the present disclosure. Building on the embodiment above, this embodiment illustrates how the thin-plate spline interpolation function is fitted. As shown in FIG. 2, the eye tracking method of the present embodiment further includes fitting the thin-plate spline interpolation function based on the following steps S201-S203:
in step S201, a target feature point is detected in the foreground image currently acquired.
In this embodiment, when the thin-plate spline interpolation function needs to be fitted, the eye tracking device may detect the target feature point in the currently acquired foreground image.
For example, the eye tracking device may acquire a foreground image of the front of the eye of the subject based on the foreground camera, and may detect the target feature point in the currently acquired foreground image.
The target feature points may be marker points that are presented in the environment in which the eye-tracking device is located. The marking point may be a marking point preset in an environment where the eye tracking device is located, or may be a point (for example, a corner point of a frame of a display) on a preset object in the environment, which is not limited in this embodiment.
In some embodiments, the target feature points may be detected in the currently acquired foreground image based on a preset frequency. It will be appreciated that by setting the frequency at which the target feature points are detected, it is possible to avoid the detection operation being too frequent, i.e. to avoid frequent re-fitting of the thin-plate spline interpolation function.
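One way to realize the preset frequency (purely illustrative; the patent only requires that detection run at a preset frequency) is a time-based throttle around whatever detector is used:

    import time

    class ThrottledDetector:
        """Run a (costly) feature-point detector at most `frequency_hz`
        times per second over the stream of foreground frames."""

        def __init__(self, detect_fn, frequency_hz=1.0):
            self.detect_fn = detect_fn
            self.min_interval = 1.0 / frequency_hz
            self._last_run = float("-inf")

        def __call__(self, frame):
            now = time.monotonic()
            if now - self._last_run < self.min_interval:
                return None  # skip this frame; too soon since the last run
            self._last_run = now
            return self.detect_fn(frame)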
In other embodiments, the above-mentioned manner of detecting the target feature point in the currently acquired foreground image may be referred to the embodiments shown in fig. 3 to 5 below, which will not be described in detail herein.
In step S202, in response to detecting the target feature point, the first coordinate of the target feature point is matched with the second coordinate of the current predicted gaze point acquired based on the preset eye tracking manner.
In this embodiment, after the target feature point is detected in the currently acquired foreground image, the first coordinate of the target feature point may be matched with the second coordinate of the current predicted gaze point acquired based on the preset eye tracking mode. The first coordinate and the second coordinate may be coordinates in an eye tracking coordinate system of the eye tracking device.
For example, target feature points may be detected in the currently acquired foreground image at a preset frequency. After a target feature point is detected, its coordinate (the first coordinate) is matched against the coordinate of the currently predicted gaze point of the tested person (the second coordinate); that is, it is judged whether the tested person is gazing at the target feature point. If the match succeeds (the first coordinate coincides with the second coordinate), the tested person can be considered to be gazing at the target feature point; otherwise, the tested person is not gazing at it.
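A sketch of the matching test follows; the pixel tolerance is an assumption (the patent describes a successful match as the two coordinates being the same, and a small tolerance makes that test robust to sensor noise):

    import numpy as np

    def coordinates_match(first_xy, second_xy, tolerance_px=15.0):
        """Decide whether the tested person is gazing at the target feature
        point: the feature point's coordinate (first) must coincide with
        the currently predicted gaze coordinate (second), up to an assumed
        pixel tolerance."""
        return float(np.linalg.norm(np.subtract(first_xy, second_xy))) <= tolerance_px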
In step S203, in response to the first coordinate and the second coordinate being successfully matched, the first coordinate and the second coordinate are respectively used as the coordinates of the control point and the coordinates of the corresponding interpolation point, and the thin-plate spline interpolation function is fitted.
In this embodiment, after the first coordinate of the target feature point is matched with the second coordinate of the current predicted gaze point acquired based on the preset eye tracking mode, if the first coordinate and the second coordinate are successfully matched, the first coordinate and the second coordinate may be respectively used as the coordinate of the control point and the coordinate of the corresponding interpolation point, and the thin-plate spline interpolation function may be fitted.
For the specific way of fitting the thin-plate spline interpolation function from the coordinates of the control points and the coordinates of the corresponding interpolation points, refer to the related art: the values of the function's parameters are computed from the control-point and interpolation-point coordinates and substituted into the function, which completes the fitting operation.
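With the matched pairs in hand, the related-art fitting step can be sketched with SciPy's thin-plate-spline kernel (an assumed implementation, not the patent's own code):

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    def fit_thin_plate_spline(first_coords, second_coords):
        """Fit the thin-plate spline from successfully matched pairs.

        first_coords  -- (n, 2) target-feature-point coordinates
                         (the control points, in the patent's terms)
        second_coords -- (n, 2) predicted-gaze coordinates
                         (the corresponding interpolation points)

        The fitted function maps a predicted gaze point toward the
        matching feature point. SciPy's default degree-1 polynomial
        tail needs at least three non-collinear pairs.
        """
        src = np.asarray(second_coords, dtype=float)  # inputs: predicted gaze
        dst = np.asarray(first_coords, dtype=float)   # outputs: feature points
        return RBFInterpolator(src, dst, kernel="thin_plate_spline")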
It should be noted that the number of the target feature points detected in the currently acquired foreground image may be one, two or more, which is not limited in this embodiment.
As can be seen from the foregoing description, this embodiment detects a target feature point in the currently acquired foreground image; in response to detecting the target feature point, it matches the first coordinate of the target feature point with the second coordinate of the current predicted gaze point acquired based on the preset eye tracking mode; and in response to a successful match, it uses the first coordinate and the second coordinate as the coordinates of a control point and of the corresponding interpolation point, respectively, to fit the thin-plate spline interpolation function. This enables accurate fitting of the thin-plate spline interpolation function, so that the coordinates of a predicted gaze point can subsequently be input into the pre-fitted function to obtain the coordinates of the target interpolation point.
FIG. 3 is a flowchart illustrating how to detect target feature points in a currently acquired foreground image according to an exemplary embodiment of the present disclosure. Building on the embodiments above, this embodiment illustrates one way of detecting a target feature point in the currently acquired foreground image. As shown in FIG. 3, detecting the target feature point in the currently acquired foreground image in step S201 may include the following steps S301-S302:
in step S301, feature points of a preset object are detected in a foreground image collected currently;
in step S302, in response to detecting the feature points of the preset object, the feature points of the preset object are determined as target feature points.
In this embodiment, after the eye tracking device acquires the foreground image in front of the tested person's eyes with the foreground camera, it may detect a preset object in the foreground image. When the preset object is detected, a feature point is detected on it; when such a feature point is found, it is determined to be a target feature point.
The preset object may be set, as needed, to a common object in the eye tracking device's daily application scene, such as a display, a keyboard, or a mouse. On this basis, the feature point may be a corner point or the center point of the corresponding object's frame, or the like; for example, if the preset object is a display, the feature point may be the lower-left vertex of the display's frame.
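A hedged sketch of such a detector (contour-based quadrilateral detection with OpenCV is an assumption; the patent does not specify the vision algorithm):

    import cv2
    import numpy as np

    def detect_display_corner(foreground_bgr):
        """Find the lower-left vertex of the largest quadrilateral contour,
        used here as a stand-in for the display frame's corner. Returns
        (x, y) in image coordinates, or None if no quadrilateral is found."""
        gray = cv2.cvtColor(foreground_bgr, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 50, 150)
        contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for contour in sorted(contours, key=cv2.contourArea, reverse=True):
            approx = cv2.approxPolyDP(
                contour, 0.02 * cv2.arcLength(contour, True), True)
            if len(approx) == 4:
                quad = approx.reshape(4, 2)
                # Lower-left in image coordinates: small x, large y.
                x, y = quad[np.argmax(quad[:, 1] - quad[:, 0])]
                return int(x), int(y)
        return None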
In this embodiment, because target feature points are determined from feature points of a preset object detected in the currently acquired foreground image, their detection is triggered automatically during everyday use of the eye tracking device: whenever the user gazes at such a feature point, intentionally or not, the operation of fitting the thin-plate spline interpolation function based on the target feature point is triggered. The user does not need to trigger the fitting manually, which improves the fitting efficiency and the level of automation.
FIG. 4 is a flowchart illustrating how target feature points are detected in a currently acquired foreground image according to yet another exemplary embodiment of the present disclosure. Building on the embodiments above, this embodiment illustrates another way of detecting a target feature point in the currently acquired foreground image. As shown in FIG. 4, detecting the target feature point in the currently acquired foreground image in step S201 may include the following steps S401-S402:
in step S401, a mark point stuck on a preset object is detected in a foreground image collected at present;
in step S402, in response to detecting the marker point, the marker point is determined as a target feature point.
In this embodiment, after the eye tracking device acquires the foreground image in front of the tested person's eyes with the foreground camera, it may detect a preset object in the foreground image. When the preset object is detected, a pasted marker point is looked for on it; when such a marker point is found, it is determined to be a target feature point.
The preset object may be set, as needed, to a common object in the eye tracking device's daily application scene, such as a display, a keyboard, or a mouse. On this basis, the feature points are marker points stuck on the corresponding object; for example, if the preset object is a display, a marker point stuck at any position on the display's frame can serve as the feature point.
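Printed fiducials such as ArUco markers are one plausible realization of a pasted marker point (an assumption; the patent only requires a detectable mark). With OpenCV's aruco module (API of OpenCV 4.7+, contrib build):

    import cv2

    def detect_pasted_marker(foreground_bgr):
        """Detect an ArUco marker pasted on the preset object and return
        its center (x, y), or None if no marker is visible."""
        aruco = cv2.aruco
        detector = aruco.ArucoDetector(
            aruco.getPredefinedDictionary(aruco.DICT_4X4_50))
        corners, ids, _ = detector.detectMarkers(foreground_bgr)
        if ids is None or len(corners) == 0:
            return None
        c = corners[0].reshape(4, 2)  # the marker's four corner points
        return float(c[:, 0].mean()), float(c[:, 1].mean())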
In this embodiment, because target feature points are determined from marker points stuck on a preset object and detected in the currently acquired foreground image, their detection is triggered automatically during everyday use of the eye tracking device: whenever the user gazes at such a marker point, intentionally or not, the operation of fitting the thin-plate spline interpolation function based on the target feature point is triggered, without the user having to trigger the fitting manually. This improves the fitting efficiency and the level of automation.
FIG. 5 is a flowchart illustrating how to detect target feature points in a currently acquired foreground image according to another exemplary embodiment of the present disclosure. Building on the embodiments above, this embodiment illustrates a further way of detecting a target feature point in the currently acquired foreground image. As shown in FIG. 5, detecting the target feature point in the currently acquired foreground image in step S201 may include the following steps S501-S502:
in step S501, a marker point displayed on a screen of a preset device is detected in a foreground image currently acquired;
in step S502, in response to detecting a marker point displayed on the preset device screen, the marker point is determined as a target feature point.
In this embodiment, after the eye tracking device acquires the foreground image in front of the tested person's eyes with the foreground camera, it may detect a preset device screen in the foreground image. When the preset device screen is detected, a marker point is looked for on it; when such a marker point is found, it is determined to be a target feature point.
The preset device screen may be set, as needed, to a common screen in the eye tracking device's daily application scene, such as a computer display, a smartphone screen, or a wearable device's screen. On this basis, the feature points are marker points displayed on the corresponding screen; for example, if the preset device is a computer, the marker point may be one displayed periodically on its display.
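A minimal sketch for spotting such an on-screen marker (Otsu thresholding plus the centroid of the largest bright blob; the marker's appearance and this detector are assumptions):

    import cv2

    def detect_screen_marker(foreground_bgr):
        """Locate a high-contrast dot shown on the device screen by
        thresholding the frame and taking the centroid of the largest
        bright blob. Returns (x, y) or None."""
        gray = cv2.cvtColor(foreground_bgr, cv2.COLOR_BGR2GRAY)
        _, mask = cv2.threshold(gray, 0, 255,
                                cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        m = cv2.moments(max(contours, key=cv2.contourArea))
        if m["m00"] == 0:
            return None
        return m["m10"] / m["m00"], m["m01"] / m["m00"]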
In this embodiment, because target feature points are determined from marker points displayed on a preset device screen and detected in the currently acquired foreground image, their detection is triggered automatically during everyday use of the eye tracking device, and the subsequent fitting of the thin-plate spline interpolation function can be carried out from the detected target feature points. The user does not need to trigger the fitting manually, which improves the fitting efficiency and the level of automation.
FIG. 6 is a block diagram of an eye tracking apparatus according to an exemplary embodiment of the present disclosure. The apparatus of this embodiment can be applied to eye tracking devices (e.g., wearable electronic devices such as smart glasses or smart helmets). As shown in FIG. 6, the apparatus includes: a preliminary gaze point acquisition module 110, a target interpolation point acquisition module 120, and an actual gaze point determination module 130, wherein:
a preliminary gaze point acquisition module 110, configured to acquire a preliminary predicted gaze point of a subject based on a preset eye tracking manner;
the target interpolation point obtaining module 120 is configured to input the coordinates of the predicted gaze point to a pre-fitted thin-plate spline interpolation function to obtain coordinates of a target interpolation point;
an actual gaze point determination module 130 for determining an actual predicted gaze point of the subject based on coordinates of the target interpolation point.
As can be seen from the above description, the apparatus of this embodiment obtains the preliminary predicted gaze point of the tested person based on the preset eye tracking mode, inputs the coordinates of the predicted gaze point into the pre-fitted thin-plate spline interpolation function to obtain the coordinates of the target interpolation point, and then determines the actual predicted gaze point of the tested person based on the coordinates of the target interpolation point.
FIG. 7 is a block diagram of an eye tracking apparatus according to yet another exemplary embodiment of the present disclosure. The apparatus of this embodiment can be applied to eye tracking devices (e.g., wearable electronic devices such as smart glasses or smart helmets). The functions of the preliminary gaze point acquisition module 210, the target interpolation point acquisition module 220, and the actual gaze point determination module 230 are the same as those of the preliminary gaze point acquisition module 110, the target interpolation point acquisition module 120, and the actual gaze point determination module 130 in the embodiment shown in FIG. 6, and are not repeated here. As shown in FIG. 7, the apparatus may further include a function fitting module 240;
the function fitting module 240 includes:
a detection unit 241, configured to detect a target feature point in a currently acquired foreground image;
a matching unit 242, configured to match, in response to detecting the target feature point, a first coordinate of the target feature point with a second coordinate of a current predicted gaze point acquired based on a preset eye tracking manner;
and a fitting unit 243, configured to fit the thin-plate spline interpolation function with the first coordinate and the second coordinate as the coordinate of the control point and the coordinate of the corresponding interpolation point, respectively, in response to successful matching of the first coordinate and the second coordinate.
In some embodiments, the detection unit 241 may also be configured to:
detecting characteristic points of a preset object in a currently acquired foreground image;
in response to detecting the feature points of the preset object, determining the feature points of the preset object as target feature points.
In some embodiments, the detection unit 241 may also be configured to:
detecting a mark point stuck on a preset object in a currently acquired foreground image;
in response to detecting the marker point, the marker point is determined to be a target feature point.
In some embodiments, the detection unit 241 may also be configured to:
detecting a mark point displayed on a screen of preset equipment in a currently acquired foreground image;
in response to detecting a marker point displayed on the preset device screen, the marker point is determined as a target feature point.
In some embodiments, the detection unit 241 may also be configured to:
and detecting target feature points in the currently acquired foreground image based on the preset frequency.
The specific manner in which the various modules perform their operations in the apparatus of the above embodiments has been described in detail in the embodiments of the method and will not be elaborated here.
FIG. 8 is a block diagram illustrating an eye tracking device 900 according to an exemplary embodiment. For example, the device 900 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, exercise equipment, a personal digital assistant, or the like.
Referring to fig. 8, device 900 may include one or more of the following components: a processing component 902, a memory 904, a power component 906, a multimedia component 908, an audio component 910, an input/output (I/O) interface 912, a sensor component 914, and a communication component 916.
The processing component 902 generally controls overall operation of the device 900, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 902 may include one or more processors 920 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 902 can include one or more modules that facilitate interaction between the processing component 902 and other components. For example, the processing component 902 can include a multimedia module to facilitate interaction between the multimedia component 908 and the processing component 902.
The memory 904 is configured to store various types of data to support operations at the device 900. Examples of such data include instructions for any application or method operating on device 900, contact data, phonebook data, messages, pictures, videos, and the like. The memory 904 may be implemented by any type of volatile or nonvolatile memory device or combination thereof, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power supply component 906 provides power to the various components of the device 900. Power supply components 906 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for device 900.
The multimedia component 908 includes a screen that provides an output interface between the device 900 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may sense not only the boundary of a touch or swipe action, but also the duration and pressure associated with it. In some embodiments, the multimedia component 908 includes a front-facing camera and/or a rear-facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the device 900 is in an operational mode, such as a shooting mode or a video mode. Each front-facing and rear-facing camera may be a fixed optical lens system or have focal length and optical zoom capabilities.
The audio component 910 is configured to output and/or input audio signals. For example, the audio component 910 includes a Microphone (MIC) configured to receive external audio signals when the device 900 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 904 or transmitted via the communication component 916. In some embodiments, the audio component 910 further includes a speaker for outputting audio signals.
The I/O interface 912 provides an interface between the processing component 902 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 914 includes one or more sensors for providing status assessments of various aspects of the device 900. For example, the sensor assembly 914 may detect the on/off state of the device 900 and the relative positioning of components, such as the display and keypad of the device 900. The sensor assembly 914 may also detect a change in position of the device 900 or of one of its components, the presence or absence of user contact with the device 900, the orientation or acceleration/deceleration of the device 900, and changes in the temperature of the device 900. The sensor assembly 914 may also include a proximity sensor configured to detect the presence of nearby objects without any physical contact, and a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 914 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 916 is configured to facilitate wired or wireless communication between the device 900 and other devices. The device 900 may access a wireless network based on a communication standard, such as WiFi, 2G, 3G, 4G, or 5G, or a combination thereof. In one exemplary embodiment, the communication component 916 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 916 further includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the device 900 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for executing the methods described above.
In an exemplary embodiment, a non-transitory computer-readable storage medium is also provided, such as the memory 904 including instructions executable by the processor 920 of the device 900 to perform the above-described method. For example, the non-transitory computer-readable storage medium may be a ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, or the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following the general principles thereof and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. An eye tracking method, the method comprising:
acquiring a preliminary predicted fixation point of a tested person based on a preset eye tracking mode;
inputting the coordinates of the predicted fixation point into a pre-fitted thin plate spline interpolation function to obtain the coordinates of a target interpolation point;
and determining the actual predicted fixation point of the tested person based on the coordinates of the target interpolation point.
2. The method of claim 1, further comprising fitting the thin-plate spline interpolation function based on:
detecting target feature points in a currently acquired foreground image;
in response to detecting the target feature point, matching a first coordinate of the target feature point with a second coordinate of a current predicted gaze point acquired based on a preset eye tracking mode;
and in response to successful matching of the first coordinate and the second coordinate, respectively taking the first coordinate and the second coordinate as the coordinates of a control point and the coordinates of a corresponding interpolation point, and fitting the thin-plate spline interpolation function.
3. The method of claim 2, wherein detecting the target feature point in the currently acquired foreground image comprises:
detecting characteristic points of a preset object in a currently acquired foreground image;
in response to detecting the feature points of the preset object, determining the feature points of the preset object as target feature points.
4. The method of claim 2, wherein detecting the target feature point in the currently acquired foreground image comprises:
detecting a mark point stuck on a preset object in a currently acquired foreground image;
in response to detecting the marker point, the marker point is determined to be a target feature point.
5. The method of claim 2, wherein detecting the target feature point in the currently acquired foreground image comprises:
detecting a mark point displayed on a screen of preset equipment in a currently acquired foreground image;
in response to detecting a marker point displayed on the preset device screen, the marker point is determined as a target feature point.
6. The method of claim 2, wherein detecting the target feature point in the currently acquired foreground image comprises:
and detecting target feature points in the currently acquired foreground image based on the preset frequency.
7. An eye tracking device, the device comprising:
the preliminary gaze point acquisition module is used for acquiring a preliminary predicted gaze point of the tested person based on a preset eye movement tracking mode;
the target interpolation point acquisition module is used for inputting the coordinates of the predicted fixation point into a pre-fitted thin plate spline interpolation function to obtain the coordinates of the target interpolation point;
and the actual fixation point determining module is used for determining the actual predicted fixation point of the tested person based on the coordinates of the target interpolation point.
8. The apparatus of claim 7, further comprising a function fitting module;
the function fitting module comprises:
the detection unit is used for detecting target feature points in the currently acquired foreground image;
the matching unit is used for responding to the detection of the target characteristic point and matching the first coordinate of the target characteristic point with the second coordinate of the current predicted gaze point acquired based on a preset eye movement tracking mode;
and the fitting unit is used for fitting the thin plate spline interpolation function by taking the first coordinate and the second coordinate as the coordinates of the control point and the coordinates of the corresponding interpolation point respectively in response to successful matching of the first coordinate and the second coordinate.
9. An eye-tracking device, the device comprising:
a processor and a memory for storing a computer program;
wherein the processor is configured to implement, when executing the computer program:
acquiring a preliminary predicted fixation point of a tested person based on a preset eye tracking mode;
inputting the coordinates of the predicted fixation point into a pre-fitted thin plate spline interpolation function to obtain the coordinates of a target interpolation point;
and determining the actual predicted fixation point of the tested person based on the coordinates of the target interpolation point.
10. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements:
acquiring a preliminary predicted fixation point of a tested person based on a preset eye tracking mode;
inputting the coordinates of the predicted fixation point into a pre-fitted thin plate spline interpolation function to obtain the coordinates of a target interpolation point;
and determining the actual predicted fixation point of the tested person based on the coordinates of the target interpolation point.
CN202210557432.7A (filed 2022-05-20): Eye movement tracking method, device, equipment and storage medium. Published as CN117130468A; status: pending.

Priority Applications (1)

CN202210557432.7A (priority date 2022-05-20, filing date 2022-05-20): Eye movement tracking method, device, equipment and storage medium

Applications Claiming Priority (1)

CN202210557432.7A (priority date 2022-05-20, filing date 2022-05-20): Eye movement tracking method, device, equipment and storage medium

Publications (1)

CN117130468A, published 2023-11-28

Family

ID=88861569

Family Applications (1)

CN202210557432.7A (priority date 2022-05-20, filing date 2022-05-20): Eye movement tracking method, device, equipment and storage medium (pending)

Country Status (1)

CN: CN117130468A (en)


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination