CN110619303A - Method, device and terminal for tracking point of regard and computer readable storage medium - Google Patents

Method, device and terminal for tracking point of regard and computer readable storage medium

Info

Publication number
CN110619303A
Authority
CN
China
Prior art keywords
image
pupil
coordinate system
target face
face image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910874625.3A
Other languages
Chinese (zh)
Inventor
Chen Yan (陈岩)
Fang Pan (方攀)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201910874625.3A
Publication of CN110619303A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • G06V40/19Sensors therefor

Abstract

The present application belongs to the technical field of user interaction, and in particular relates to a gaze point tracking method, apparatus, terminal, and computer-readable storage medium. The gaze point tracking method includes: acquiring a target face image; establishing a real-time coordinate system with the facial feature points of the target face image as a reference, and determining the coordinates of the pupil of the target face image in the real-time coordinate system; and determining the target gaze point coordinates corresponding to the pupil coordinates in the real-time coordinate system according to a pre-established mapping relationship between pupil coordinates and gaze point coordinates. This avoids the problem that, when the user wears glasses, lens reflections prevent the bright spots formed by a light source reflecting off the cornea and pupil from being extracted accurately, so that the gaze point of the human eye cannot be accurately identified; the accuracy of gaze point tracking is thereby improved.

Description

Method, device and terminal for tracking point of regard and computer readable storage medium
Technical Field
The present application belongs to the field of user interaction technologies, and in particular, to a method, an apparatus, a terminal, and a computer-readable storage medium for tracking a gaze point.
Background
In human-computer interaction, human vision serves not only as the channel through which a person receives visual information, but also as an important channel for acquiring and expressing information about brain activity. For example, the gaze point and gaze trajectory of the human eye reveal which object a person is attending to, what the person desires, and other visual information; robust, non-invasive gaze point tracking therefore plays a very important role in human-computer interaction, virtual reality, and emotion understanding.
Currently, a commonly used gaze point tracking technique is Pupil Center Corneal Reflection (PCCR), which captures an image of the human eye and locates the bright spots formed by a light source reflecting off the cornea and pupil in order to determine the gaze direction of the human eye. However, this technique is severely limited in some application scenarios and easily fails to accurately identify the gaze point of the human eye.
Disclosure of Invention
The embodiment of the application provides a method, a device, a terminal and a computer readable storage medium for tracking a gaze point, which can solve the technical problem that the gaze point of human eyes cannot be accurately identified.
A first aspect of the embodiments of the present application provides a gaze point tracking method, where the gaze point tracking method includes:
acquiring a target face image;
establishing a real-time coordinate system by taking the facial feature points of the target face image as a reference object, and determining the coordinates of the pupil of the target face image under the real-time coordinate system;
and determining target fixation point coordinates corresponding to the coordinates of the pupil of the target face image under the real-time coordinate system according to a mapping relation between the pre-established pupil coordinates and the fixation point coordinates.
A second aspect of the embodiments of the present application provides a gaze point tracking apparatus, including:
an acquisition unit for acquiring a target face image;
the first determining unit is used for establishing a real-time coordinate system by taking the facial feature points of the target face image as a reference object and determining the coordinates of the pupil of the target face image under the real-time coordinate system;
and the second determining unit is used for determining a target fixation point coordinate corresponding to the coordinate of the pupil of the target face image in the real-time coordinate system according to the pre-established mapping relation between the pupil coordinate and the fixation point coordinate.
A third aspect of the embodiments of the present application provides a terminal, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the method when executing the computer program.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the steps of the above method.
In the embodiment of the application, a target face image is obtained, the coordinates of the pupil of the target face image in a real-time coordinate system are determined, and the target gaze point coordinates corresponding to those pupil coordinates are then determined according to the pre-established mapping relationship between pupil coordinates and gaze point coordinates. Identifying the gaze point of the human eye in this way does not require obtaining the bright spots formed by a light source reflecting off the cornea and pupil. This avoids the problem that, when the user wears glasses, lens reflections prevent those bright spots from being extracted accurately and the gaze point therefore cannot be accurately identified; the accuracy of gaze point tracking is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered as limiting the scope; those skilled in the art can obtain other related drawings from these drawings without inventive effort.
FIG. 1a is a schematic diagram of the bright spot formed in a human eye image by a light source reflecting off the cornea and pupil when no glasses are worn, according to an embodiment of the present application;
FIG. 1b is a schematic diagram of the bright spots formed in a human eye image by a light source reflecting off the cornea and pupil when glasses are worn, according to an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of a first implementation of a method for tracking a gaze point according to an embodiment of the present application;
fig. 3 is a flowchart illustrating a first specific implementation of step 203 of a gaze point tracking method according to an embodiment of the present application;
FIG. 4 is a schematic diagram illustrating an acquisition process of a first face sample picture provided by an embodiment of the present application;
FIG. 5 is a first schematic diagram of establishing a sample coordinate system according to an embodiment of the present application;
fig. 6 is a flowchart illustrating a first specific implementation of step 202 of a gaze point tracking method according to an embodiment of the present application;
fig. 7 is a flowchart illustrating a second specific implementation of step 203 of a gaze point tracking method according to an embodiment of the present application;
FIG. 8 is a second schematic diagram of establishing a sample coordinate system according to an embodiment of the present application;
fig. 9 is a flowchart illustrating a second specific implementation of step 202 of a gaze point tracking method according to an embodiment of the present application;
fig. 10 is a flowchart illustrating a third specific implementation of step 203 of a gaze point tracking method according to an embodiment of the present application;
FIG. 11 is a schematic diagram of a lens gridding process for glasses according to an embodiment of the present disclosure;
fig. 12 is a schematic flowchart of a second implementation of a method for tracking a gaze point according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of a gaze point tracking apparatus according to an embodiment of the present application;
fig. 14 is a schematic structural diagram of a terminal according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
In human-computer interaction, human vision serves not only as the channel through which a person receives visual information, but also as an important channel for acquiring and expressing information about brain activity. For example, the gaze point and gaze trajectory of the human eye reveal which object a person is attending to, what the person desires, and other visual information; robust, non-invasive gaze point tracking therefore plays a very important role in human-computer interaction, virtual reality, and emotion understanding.
At present, the commonly used gaze point tracking technique PCCR captures an image of the human eye and locates the bright spot formed by a light source reflecting off the cornea and pupil in order to determine the gaze direction of the human eye.
However, in some application scenarios this method easily fails to accurately identify the gaze point of the human eye.
For example, as shown in fig. 1a, when the eye image does not contain glasses, the bright spot 101 formed by the light source reflecting off the cornea and pupil can be extracted accurately. However, as shown in fig. 1b, when the eye image contains glasses, lens reflections form an additional bright spot 102 in the eye image, so the terminal cannot accurately extract the bright spot formed by the light source reflecting off the cornea and pupil. In some cases the spot 102 formed by the lens reflection may even overlap the spot formed by the corneal and pupillary reflection, making it impossible to acquire that spot accurately, and the gaze point of the eye therefore cannot be accurately identified.
Based on this, embodiments of the present application provide a method, an apparatus, a terminal and a computer-readable storage medium for tracking a gaze point, which can solve the problem that a gaze point of a human eye cannot be accurately identified.
In order to explain the technical means of the present application, the following description will be given by way of specific examples.
Fig. 2 is a schematic diagram illustrating the implementation flow of a gaze point tracking method provided by an embodiment of the present application. The method is applied to a terminal, can be executed by a gaze point tracking device configured on the terminal, and is suitable for situations where the gaze point of a human eye needs to be accurately identified. The terminal may be an intelligent terminal such as a mobile phone, a tablet computer, or a wearable device. For convenience of description, the following takes a mobile phone as an example of the terminal.
In some embodiments of the present application, the above-mentioned gaze point tracking method may include steps 201 to 203.
Step 201, obtaining a target face image.
In this embodiment of the application, the target face image may be a face image of a target person acquired by a camera.
For example, when a user faces a mobile phone and uses its various functions, the front camera of the phone can capture the user's face image, and the user's gaze point can be recognized from that image. The gaze point may be located on the phone screen.
Step 202, establishing a real-time coordinate system by taking the facial feature points of the target face image as a reference, and determining the coordinates of the pupil of the target face image in the real-time coordinate system.
In the embodiment of the present application, the facial feature points of the target face image are feature points whose positions do not change relative to the head in the face image.
Examples include eyebrow feature points, eye feature points, nose tip feature points, mouth corner feature points, and, for a wearer of glasses, feature points of the glasses.
Because the change in the position of the pupil relative to the head is correlated with the change in the gaze point, a real-time coordinate system can be established with the facial feature points of the target face image as a reference, and the coordinates of the pupil of the target face image in the real-time coordinate system can be determined to capture the pupil's position change relative to the head. Step 203 is then executed to obtain the target gaze point coordinates corresponding to the pupil coordinates in the real-time coordinate system.
Step 203, determining target gaze point coordinates corresponding to the coordinates of the pupil of the target face image according to the pre-established mapping relationship between pupil coordinates and gaze point coordinates.
In the embodiment of the application, a target face image is obtained, the coordinates of the pupil of the target face image in a real-time coordinate system are determined, and the target gaze point coordinates corresponding to those pupil coordinates are then determined according to the pre-established mapping relationship between pupil coordinates and gaze point coordinates. Identifying the gaze point in this way does not require obtaining the bright spots formed by a light source reflecting off the cornea and pupil, which avoids the problem that, when the user wears glasses, those spots cannot be extracted accurately and the gaze point therefore cannot be accurately identified; the accuracy of gaze point tracking is improved.
Specifically, as shown in fig. 3, in the step 203, when the mapping relationship is the first mapping relationship, the establishing of the first mapping relationship may include: step 301 to step 304.
Step 301, acquiring a first face sample image and a first sample fixation point coordinate corresponding to the first face sample image; and the first human face sample image and the target human face image are human face images of the same target person.
Because different users' face images differ and the positional relationships of the feature points within them differ, the first face sample image used when establishing the mapping relationship between pupil coordinates and gaze point coordinates should be a face image of the same target person as the target face image, so that the established mapping relationship can be applied to the target face image without error.
For example, as shown in fig. 4, acquisition of the first face sample image may include: displaying a plurality of mark points 41 on the display screen of the terminal, recording the coordinates of each mark point, guiding the user's eyes to fixate on one mark point at a time, and capturing the user's face image at that moment, thereby obtaining a first face sample image and the first sample gaze point coordinates corresponding to it.
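A minimal sketch of such a calibration loop is given below; it assumes OpenCV for camera capture, and both the `show_marker` helper and the nine-point marker layout are hypothetical, introduced only for illustration.

```python
import time
import cv2

# Hypothetical 3x3 layout of marker points in screen pixel coordinates (illustrative only).
MARKER_POINTS = [(x, y) for y in (100, 540, 980) for x in (160, 960, 1760)]

def collect_calibration_samples(show_marker, camera_index=0, fixation_delay=1.5):
    """Display each marker, capture the user's face while they fixate on it, and
    return (face_image, gaze_point) pairs forming the first face sample set."""
    cap = cv2.VideoCapture(camera_index)
    samples = []
    for point in MARKER_POINTS:
        show_marker(point)               # draw mark point 41 at this screen position
        time.sleep(fixation_delay)       # give the user time to fixate on the marker
        ok, frame = cap.read()           # face image captured while the user fixates
        if ok:
            samples.append((frame, point))
    cap.release()
    return samples
```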
Step 302, performing feature recognition on a first face sample image, and determining pupils of the first face sample image and face feature points of the first face sample image.
In the embodiment of the application, when performing feature recognition on the first face sample image, the face image may be processed with tools such as Face++ or the dlib library to obtain the facial feature points of the face image.
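As one possible sketch, the facial feature points could be obtained with dlib's 68-point landmark predictor; the model file path below is an assumption, and Face++ or another detector could be used instead.

```python
import dlib

detector = dlib.get_frontal_face_detector()
# Pre-trained 68-point landmark model; the file path here is an assumption for this sketch.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def facial_feature_points(gray_image):
    """Return the 68 facial landmark (x, y) points of the first detected face, or None."""
    faces = detector(gray_image, 1)
    if not faces:
        return None
    shape = predictor(gray_image, faces[0])
    return [(p.x, p.y) for p in shape.parts()]
```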
Step 303, establishing a sample coordinate system using the facial feature points of the first facial sample image as a reference according to the position relationship between the facial feature points of the first facial sample image, and determining the coordinates of the pupil of the first facial sample image in the sample coordinate system.
In some embodiments of the present application, establishing the sample coordinate system may include: taking one of the facial feature points of the first face sample image as the coordinate origin and taking the direction of the vector connecting any two feature points as the positive direction of a coordinate axis, thereby establishing a sample coordinate system that uses the facial feature points of the first face sample image as a reference.
For example, as shown in fig. 5, feature recognition on the face sample image can determine feature points such as the left outer canthus feature point a and the right outer canthus feature point b. The sample coordinate system can then be established as a two-dimensional coordinate system with the left outer canthus feature point a as the origin and the vector direction from a to the right outer canthus feature point b as the positive x-axis direction. After the sample coordinate system is established, the coordinates of the pupil of the first face sample image in the sample coordinate system can be determined.
The coordinates of the pupil in the sample coordinate system may be the coordinates of the pupil center, the leftmost feature point of the pupil, the rightmost feature point of the pupil, or another pupil feature point; this application does not limit which point is used, as long as it indicates the position of the pupil relative to the facial feature points.
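A minimal sketch of this construction, assuming the left outer canthus a, right outer canthus b, and the chosen pupil point p have already been located in image coordinates (whether to additionally normalize by the inter-canthus distance is a design choice left open here):

```python
import numpy as np

def pupil_in_sample_coords(a, b, p):
    """Express pupil point p in the coordinate system whose origin is a and whose
    positive x-axis points from a toward b; the y-axis is perpendicular to it."""
    a, b, p = (np.asarray(v, dtype=float) for v in (a, b, p))
    x_axis = (b - a) / np.linalg.norm(b - a)      # unit vector along a -> b
    y_axis = np.array([-x_axis[1], x_axis[0]])    # x-axis rotated by 90 degrees
    d = p - a
    return float(d @ x_axis), float(d @ y_axis)   # pupil coordinates in the sample system
```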
Step 304, fitting the coordinates of the pupil of the first face sample image in the sample coordinate system with the first sample gaze point coordinates corresponding to the first face sample image to obtain a first mapping relationship between pupil coordinates and gaze point coordinates in the sample coordinate system.
Specifically, fitting the coordinates of the pupil of the first face sample image in the sample coordinate system with the first sample gaze point coordinates corresponding to the first face sample image to obtain the first mapping relationship may include: acquiring multiple groups of pupil coordinates in the sample coordinate system and the corresponding first sample gaze point coordinates, and inputting them into mathematical software such as MATLAB for fitting to obtain a fitting function f(x), i.e., the first mapping relationship.
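The fitting itself can be done in MATLAB as described; for illustration, an equivalent sketch in Python using a second-order polynomial regression is shown below (the polynomial degree and the use of scikit-learn are assumptions, not requirements of this application).

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

def fit_first_mapping(pupil_coords, gaze_coords, degree=2):
    """Fit f: pupil (x, y) in the sample coordinate system -> gaze point (X, Y) on the
    screen, from lists of pupil coordinates and the corresponding first sample gaze
    point coordinates."""
    X = np.asarray(pupil_coords, dtype=float)
    y = np.asarray(gaze_coords, dtype=float)
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X, y)
    return model   # model.predict([[px, py]]) estimates the gaze point for a new pupil position
```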
Based on the mapping relationship establishing method shown in fig. 3, in some embodiments of the present application, as shown in fig. 6, the establishing a real-time coordinate system by using the facial feature point of the target face image as a reference object, and determining the coordinates of the pupil of the target face image in the real-time coordinate system may include steps 601 to 602.
Step 601, performing feature recognition on the target face image, and determining pupils of the target face image and face feature points of the target face image.
Step 602, according to the position relationship among the facial feature points of the target face image, establishing the real-time coordinate system using the facial feature points of the target face image as a reference, and determining the coordinates of the pupil of the target face image in the real-time coordinate system.
It should be noted that the real-time coordinate system in the embodiment of the present application may be a coordinate system with the same origin and coordinate axis directions as the sample coordinate system, or a coordinate system whose origin or coordinate axis directions differ from those of the sample coordinate system.
When the real-time coordinate system differs from the sample coordinate system in origin or coordinate axis direction, the coordinates of the pupil of the target face image in the real-time coordinate system must first be mapped into the sample coordinate system; the target gaze point coordinates corresponding to the mapped pupil coordinates are then determined according to the first mapping relationship.
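A minimal sketch of such a mapping, assuming the two coordinate systems differ only by a translation and rotation (in practice a scale factor, e.g. the ratio of inter-feature distances, may also be needed) and that the real-time origin and x-axis have already been expressed in the sample coordinate system:

```python
import numpy as np

def realtime_to_sample(p_rt, rt_origin_in_sample, rt_xaxis_in_sample):
    """Map point p_rt, expressed in the real-time coordinate system, into the sample
    coordinate system, given the real-time origin and x-axis direction expressed in
    the sample coordinate system."""
    x_axis = np.asarray(rt_xaxis_in_sample, dtype=float)
    x_axis = x_axis / np.linalg.norm(x_axis)
    y_axis = np.array([-x_axis[1], x_axis[0]])     # perpendicular axis
    return (np.asarray(rt_origin_in_sample, dtype=float)
            + p_rt[0] * x_axis + p_rt[1] * y_axis)
```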
In the embodiment of the application, a target face image is obtained, the coordinates of the pupil of the target face image in a real-time coordinate system are determined, and the target gaze point coordinates corresponding to those pupil coordinates are then determined according to the pre-established first mapping relationship. Identifying the gaze point in this way does not require obtaining the bright spots formed by a light source reflecting off the cornea and pupil, which avoids the problem that, when the user wears glasses, lens reflections prevent those spots from being extracted accurately and the gaze point therefore cannot be accurately identified; the accuracy of gaze point tracking is improved.
In another embodiment of the present application, as shown in fig. 7, when the target person wears glasses, that is, when the first face sample image is a first face sample image containing glasses, the glasses can be recognized more quickly and accurately than facial features because the edges of the glasses are regular. Therefore, in order to reduce the amount of computation required for feature recognition on the first face sample image and to improve the speed and accuracy of gaze point recognition, the establishment of the first mapping relationship may also include steps 701 to 705.
Step 701, acquiring a first face sample image and a first sample fixation point coordinate corresponding to the first face sample image; the first face sample image and the target face image are face images of the same target person.
Step 702, performing feature recognition on the first face sample image, and judging whether the first face sample image is a first face sample image containing glasses.
Step 703, if the first face sample image is a first face sample image including glasses, identifying feature points of the glasses, and using the feature points of the glasses as face feature points of the first face sample image.
Step 704, establishing the sample coordinate system using the feature points of the glasses as a reference according to the position relationship between the feature points of the glasses, and determining the coordinates of the pupil of the first face sample image in the sample coordinate system.
Step 705, fitting the coordinates of the pupil of the first human face sample image in the sample coordinate system and the first sample fixation point coordinates corresponding to the first human face sample image to obtain a first mapping relation between the pupil coordinates and the fixation point coordinates in the sample coordinate system.
Specifically, establishing the sample coordinate system may include taking one feature point of the glasses as the coordinate origin and taking the direction of the vector connecting any two feature points of the glasses as the positive direction of a coordinate axis.
For example, as shown in fig. 8, after a face image containing glasses is acquired, feature points of the glasses can be recognized, such as the frame midpoint feature point a and the right temple feature point b. A sample coordinate system can then be established with the frame midpoint feature point a as the origin, the direction from a to the right temple feature point b as the positive x-axis direction, and the direction perpendicular to the x-axis (counterclockwise) as the positive y-axis direction, and the coordinates of the pupil c in the sample coordinate system can be determined.
Based on the method for establishing the mapping relationship shown in figs. 7 and 8, in some embodiments of the present application, when the obtained target face image is a face image containing glasses, as shown in fig. 9, establishing a real-time coordinate system with the facial feature points of the target face image as a reference and determining the coordinates of the pupil of the target face image in the real-time coordinate system may include steps 901 to 902.
Step 901, performing feature recognition on the target face image, and determining pupils of the target face image and feature points of glasses of the target face image;
step 902, establishing the real-time coordinate system using the feature points of the glasses as a reference according to the position relationship among the feature points of the glasses of the target face image, and determining the coordinates of the pupil of the target face image in the real-time coordinate system.
It should be noted that the real-time coordinate system in this embodiment may be a coordinate system with the same origin and coordinate axis directions as the sample coordinate system, or a coordinate system whose origin or coordinate axis directions differ from those of the sample coordinate system.
When the real-time coordinate system differs from the sample coordinate system in origin or coordinate axis direction, and the target gaze point coordinates corresponding to the pupil coordinates of the target face image are to be determined according to the first mapping relationship shown in fig. 7, the pupil coordinates in the real-time coordinate system must first be mapped into the sample coordinate system; the target gaze point coordinates corresponding to the mapped coordinates are then determined according to the first mapping relationship.
In practical applications, the user may not wear glasses all the time, or may wear them only occasionally. In that case the first face sample image used to establish the mapping relationship may contain glasses while the target face image does not, so the coordinate system used when establishing the mapping relationship and the coordinate system established during gaze point identification have different origins or differently oriented coordinate axes. It is therefore necessary to map the coordinate system established during gaze point identification onto the coordinate system used when establishing the mapping relationship before determining the target gaze point coordinates corresponding to the pupil coordinates.
In other embodiments of the present application, when the target person is a target person wearing glasses, in order to more accurately determine the gaze point of the eyes, as shown in fig. 10, the mapping relationship may be a second mapping relationship, and the establishment of the second mapping relationship may include steps 1001 to 1003.
Step 1001, acquiring a second face sample image and second sample gaze point coordinates corresponding to the second face sample image; the second face sample image and the target face image are face images of the same target person, and the second face sample image contains glasses.
Step 1002, performing feature recognition on the second face sample image, and determining a lens of glasses of the second face sample image and a pupil of the second face sample image.
Step 1003, performing meshing processing on the lens of the second face sample image, determining a grid where a pupil of the second face sample image is located, and obtaining a second mapping relation between each grid of the lens and a fixation point coordinate according to the grid where the pupil of the second face sample image is located and the fixation point coordinate of the second sample corresponding to the second face sample image.
For example, as shown in fig. 11, after the feature points of the glasses are obtained, the lens m of the glasses and the pupil c can be determined. The lens m can then be divided into a grid, yielding the grid cell w in which the pupil c is located, and the second mapping relationship g(x) between each grid cell of the lens and the gaze point coordinates is obtained from the second sample gaze point coordinates corresponding to the second face sample images.
In some embodiments of the present application, the gridding process may divide the lens into any number of grid cells; the larger the number of cells, the higher the accuracy of gaze point identification.
It should be noted that the grid shown in fig. 11 consists of rectangular cells of equal size, but this is merely an example; in practical applications, the lens may be divided into cells of different shapes and sizes according to the shape and curvature of the lens.
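A minimal sketch of the equal-size rectangular gridding and the lookup of the pupil's grid cell, assuming the lens region is approximated by its axis-aligned bounding box and that the 8 x 8 cell count is an arbitrary illustrative choice:

```python
def grid_cell_of_pupil(lens_bbox, pupil, rows=8, cols=8):
    """Divide the lens bounding box (x0, y0, x1, y1) into rows x cols rectangular cells
    and return the (row, col) cell containing the pupil center, or None if outside."""
    x0, y0, x1, y1 = lens_bbox
    px, py = pupil
    if not (x0 <= px < x1 and y0 <= py < y1):
        return None
    col = int((px - x0) / (x1 - x0) * cols)
    row = int((py - y0) / (y1 - y0) * rows)
    return row, col

# The second mapping relationship g(x) can then be stored as a simple lookup table,
# e.g. {(row, col): (gaze_x, gaze_y)}, built from the second face sample images.
```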
Based on the manner of establishing the mapping relationship between the pupil coordinates and the gaze point coordinates shown in fig. 10 to 11, as shown in fig. 12, in some embodiments of the present application, the gaze point tracking method may include: step 1201 to step 1204.
Step 1201, acquiring a target face image.
Step 1202, performing feature recognition on the target face image, and determining a lens of glasses of the target face image and a pupil of the target face image.
Step 1203, performing meshing processing on the lens of the target face image, and determining a mesh where a pupil of the target face image is located.
Step 1204, determining a target fixation point coordinate corresponding to a mesh where a pupil of the target face image is located according to the second mapping relationship.
The gridding of the lens of the target face image may use exactly the same method as the gridding of the lens of the second face sample image, so that the resulting number and size of grid cells are the same.
However, in some embodiments of the present application, the number and shape of the grid cells obtained by gridding the lens of the target face image may differ from those obtained when gridding the lens of the second face sample image. In that case the grid cell in which the pupil of the target face image is located must first be mapped onto the lens grid used when the second mapping relationship was established, after which the target gaze point coordinates corresponding to that grid cell can be determined according to the second mapping relationship. This is analogous to mapping the pupil coordinates from the real-time coordinate system into the sample coordinate system before determining the target gaze point coordinates using the first mapping relationship.
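One simple way to perform this grid-to-grid mapping is sketched below; the cell-center convention used here is an assumption for illustration.

```python
def map_grid_index(cell, src_shape, dst_shape):
    """Map a (row, col) cell index from a src_rows x src_cols gridding to the
    corresponding cell of a dst_rows x dst_cols gridding, used when the target
    image's lens was gridded differently from the sample images' lenses."""
    row, col = cell
    src_rows, src_cols = src_shape
    dst_rows, dst_cols = dst_shape
    dst_row = int((row + 0.5) / src_rows * dst_rows)
    dst_col = int((col + 0.5) / src_cols * dst_cols)
    return dst_row, dst_col
```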
In this embodiment, the position of the pupil is determined within the grid, and the target gaze point coordinates are obtained from the grid cell in which the pupil is located. Once the second mapping relationship has been established, the target gaze point coordinates can be obtained simply by determining the grid cell in which the pupil of the target face image is located. Identifying the gaze point in this way does not require obtaining the bright spots formed by a light source reflecting off the cornea and pupil, which avoids the problem that, when the user wears glasses, lens reflections prevent those spots from being extracted accurately and the gaze point therefore cannot be accurately identified; the accuracy of gaze point tracking is improved.
It should be noted that for simplicity of description, the aforementioned method embodiments are all presented as a series of combinations of acts, but those skilled in the art will appreciate that the present invention is not limited by the order of acts described, as some steps may occur in other orders in accordance with the present invention.
Fig. 13 is a schematic structural diagram of a gaze point tracking apparatus 13 according to an embodiment of the present application, including an obtaining unit 131, a first determining unit 132, and a second determining unit 133.
An acquisition unit 131 configured to acquire a target face image;
a first determining unit 132, configured to establish a real-time coordinate system by using the facial feature point of the target face image as a reference, and determine coordinates of a pupil of the target face image in the real-time coordinate system;
a second determining unit 133, configured to determine, according to a mapping relationship between pre-established pupil coordinates and fixation point coordinates, target fixation point coordinates corresponding to coordinates of a pupil of the target face image in the real-time coordinate system.
In some embodiments of the present application, the mapping relationship is a first mapping relationship, and the second determining unit is further configured to: acquiring a first face sample image and a first sample fixation point coordinate corresponding to the first face sample image; the first face sample image and the target face image are face images of the same target person; performing feature recognition on the first face sample image, and determining pupils of the first face sample image and face feature points of the first face sample image; establishing a sample coordinate system which takes the facial feature points of the first facial sample image as a reference object according to the position relation among the facial feature points of the first facial sample image, and determining the coordinates of the pupil of the first facial sample image under the sample coordinate system; fitting the coordinates of the pupils of the first human face sample image under the sample coordinate system with the first sample fixation point coordinates corresponding to the first human face sample image to obtain a first mapping relation between the pupil coordinates and the fixation point coordinates under the sample coordinate system.
In some embodiments of the present application, the first determining unit is further configured to: carrying out feature recognition on the target face image, and determining pupils of the target face image and face feature points of the target face image; and according to the position relation among all the facial feature points of the target face image, establishing the real-time coordinate system with the facial feature points of the target face image as a reference object, and determining the coordinates of the pupil of the target face image under the real-time coordinate system.
In some embodiments of the present application, the second determining unit is further configured to: performing feature recognition on the first face sample image, and judging whether the first face sample image is a first face sample image containing glasses; if the first face sample image is a first face sample image containing glasses, identifying feature points of the glasses, and taking the feature points of the glasses as face feature points of the first face sample image; and establishing the sample coordinate system taking the characteristic points of the glasses as a reference according to the position relation among the characteristic points of the glasses.
In some embodiments of the present application, the first determining unit is further configured to: when the acquired target face image is a face image containing glasses, performing feature recognition on the target face image, and determining pupils of the target face image and feature points of the glasses of the target face image; and according to the position relation among the characteristic points of the glasses of the target face image, establishing the real-time coordinate system with the characteristic points of the glasses as a reference object, and determining the coordinates of the pupil of the target face image under the real-time coordinate system.
In some embodiments of the present application, the mapping relationship is a second mapping relationship, and the second determining unit is further configured to: acquiring a second face sample image and a second sample fixation point coordinate corresponding to the second face sample image; the second face sample image and the target face image are face images of the same target person, and the second face sample image comprises glasses; performing feature recognition on the second face sample image, and determining a lens of glasses of the second face sample image and a pupil of the second face sample image; and performing meshing processing on the lens of the second face sample image, determining a grid where a pupil of the second face sample image is located, and obtaining a second mapping relation between each grid of the lens and a fixation point coordinate according to the grid where the pupil of the second face sample image is located and the fixation point coordinate of the second sample corresponding to the second face sample image.
In some embodiments of the present application, the first determining unit is further configured to: when the target face image is a target face image containing glasses, performing feature recognition on the target face image, and determining lenses of the glasses of the target face image and pupils of the target face image; carrying out gridding processing on the lens of the target face image, and determining a grid where a pupil of the target face image is located; correspondingly, the second determining unit is further configured to determine, according to the second mapping relationship, a target gaze point coordinate corresponding to a mesh where a pupil of the target face image is located.
It should be noted that, for convenience and simplicity of description, the specific working process of the above-described gazing point tracking apparatus 13 may refer to the corresponding process of the method described in fig. 1 to fig. 12, and is not described herein again.
As shown in fig. 14, the present application provides a terminal for implementing the above-mentioned gaze point tracking method, and the terminal may include: a processor 141, a memory 142, one or more input devices 143 (only one shown in fig. 14), and one or more output devices 144 (only one shown in fig. 14). Processor 141, memory 142, input device 143, and output device 144 are connected by bus 145.
It should be understood that in the embodiment of the present application, the processor 141 may be a Central Processing Unit (CPU), or another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, and the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The input device 143 may include a virtual keyboard, a touch pad, a fingerprint sensor (for collecting fingerprint information of a user and direction information of the fingerprint), a microphone, etc., and the output device 144 may include a display, a speaker, etc.
Memory 142 may include both read-only memory and random-access memory, and provides instructions and data to processor 141. Some or all of memory 142 may also include non-volatile random access memory. For example, memory 142 may also store device type information.
The memory 142 stores a computer program that can be executed by the processor 141, and the computer program is, for example, a program of a gazing point tracking method. The processor 141 implements the steps of the above-mentioned gazing point tracking method embodiment when executing the above-mentioned computer program, for example, the steps 201 to 203 shown in fig. 2. Alternatively, the processor 141, when executing the computer program, implements the functions of the units in the device embodiment, for example, the functions of the units 131 to 133 shown in fig. 13.
The computer program may be divided into one or more modules/units, which are stored in the memory 142 and executed by the processor 141 to complete the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution process of the computer program in the first terminal for performing gaze point tracking. For example, the computer program may be divided into a first acquisition unit, a first determination unit, and a second determination unit, and each unit may specifically function as follows:
the first acquisition unit is used for acquiring a target face image;
the first determining unit is used for establishing a real-time coordinate system by taking the facial feature points of the target face image as a reference object and determining the coordinates of the pupil of the target face image under the real-time coordinate system;
and the second determining unit is used for determining a target fixation point coordinate corresponding to the coordinate of the pupil of the target face image in the real-time coordinate system according to the pre-established mapping relation between the pupil coordinate and the fixation point coordinate.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned functions may be distributed as different functional units and modules according to needs, that is, the internal structure of the apparatus may be divided into different functional units or modules to implement all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The present application further provides a computer program product, which when running on a terminal, causes the terminal to perform the steps in the above-mentioned gazing point tracking method embodiment, for example, steps 201 to 203 shown in fig. 2.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal are merely illustrative, and for example, the division of the above-described modules or units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units described above, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the processes in the methods of the embodiments described above may be implemented by hardware related to instructions of a computer program, which may be stored in a computer readable storage medium, and when the computer program is executed by a processor, the steps of the methods described above may be implemented. The computer program includes computer program code, and the computer program code may be in a source code form, an object code form, an executable file or some intermediate form. The computer readable medium may include: any entity or device capable of carrying the above-described computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier signal, telecommunications signal, software distribution medium, and the like. It should be noted that the computer readable medium described above may include content that is subject to appropriate increase or decrease as required by legislation and patent practice in jurisdictions, for example, in some jurisdictions, computer readable media that does not include electrical carrier signals and telecommunications signals in accordance with legislation and patent practice.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A method for tracking a gaze point, comprising:
acquiring a target face image;
establishing a real-time coordinate system by taking the facial feature points of the target face image as a reference object, and determining the coordinates of the pupil of the target face image under the real-time coordinate system;
and determining target fixation point coordinates corresponding to the coordinates of the pupil of the target face image under the real-time coordinate system according to a mapping relation between the pre-established pupil coordinates and the fixation point coordinates.
2. The method of tracking a gaze point of claim 1, wherein the mapping relationship is a first mapping relationship, the establishing of the first mapping relationship comprising:
acquiring a first face sample image and a first sample fixation point coordinate corresponding to the first face sample image; the first face sample image and the target face image are face images of the same target person;
performing feature recognition on the first face sample image, and determining pupils of the first face sample image and face feature points of the first face sample image;
establishing a sample coordinate system which takes the facial feature points of the first facial sample image as a reference object according to the position relation among the facial feature points of the first facial sample image, and determining the coordinates of the pupil of the first facial sample image under the sample coordinate system;
fitting the coordinates of the pupils of the first human face sample image under the sample coordinate system with the first sample fixation point coordinates corresponding to the first human face sample image to obtain a first mapping relation between the pupil coordinates and the fixation point coordinates under the sample coordinate system.
3. The method for tracking a fixation point according to claim 1 or 2, wherein the establishing a real-time coordinate system by using the facial feature point of the target face image as a reference object, and determining the coordinates of the pupil of the target face image under the real-time coordinate system comprises:
carrying out feature recognition on the target face image, and determining pupils of the target face image and face feature points of the target face image;
and according to the position relation among all the facial feature points of the target face image, establishing the real-time coordinate system with the facial feature points of the target face image as a reference object, and determining the coordinates of the pupil of the target face image under the real-time coordinate system.
4. The gaze point tracking method according to claim 2, wherein the performing feature recognition on the first face sample image and determining facial feature points of the first face sample image comprises:
performing feature recognition on the first face sample image, and determining whether the first face sample image contains glasses;
if the first face sample image contains glasses, identifying feature points of the glasses, and taking the feature points of the glasses as the facial feature points of the first face sample image;
and the establishing, according to the positional relationship among the facial feature points of the first face sample image, of the sample coordinate system with the facial feature points of the first face sample image as a reference comprises:
establishing, according to the positional relationship among the feature points of the glasses, the sample coordinate system with the feature points of the glasses as a reference.
5. The gaze point tracking method according to claim 1 or 4, wherein, when the acquired target face image is a face image containing glasses, the establishing a real-time coordinate system with the facial feature points of the target face image as a reference and determining the coordinates of the pupil of the target face image in the real-time coordinate system comprises:
performing feature recognition on the target face image, and determining a pupil of the target face image and feature points of the glasses in the target face image;
and establishing, according to the positional relationship among the feature points of the glasses in the target face image, the real-time coordinate system with the feature points of the glasses as a reference, and determining the coordinates of the pupil of the target face image in the real-time coordinate system.
6. The gaze point tracking method according to claim 1, wherein the mapping relationship is a second mapping relationship, and the establishing of the second mapping relationship comprises:
acquiring a second face sample image and second sample gaze point coordinates corresponding to the second face sample image, wherein the second face sample image and the target face image are face images of the same target person, and the second face sample image contains glasses;
performing feature recognition on the second face sample image, and determining a lens of the glasses in the second face sample image and a pupil of the second face sample image;
and dividing the lens of the second face sample image into grid cells, determining the grid cell in which the pupil of the second face sample image is located, and obtaining a second mapping relationship between each grid cell of the lens and gaze point coordinates according to the grid cell in which the pupil of the second face sample image is located and the second sample gaze point coordinates corresponding to the second face sample image.
7. The gaze point tracking method according to claim 6, wherein the target face image is a target face image containing glasses, and the establishing a real-time coordinate system with the facial feature points of the target face image as a reference and determining the coordinates of the pupil of the target face image in the real-time coordinate system comprises:
performing feature recognition on the target face image, and determining a lens of the glasses in the target face image and a pupil of the target face image;
and dividing the lens of the target face image into grid cells, and determining the grid cell in which the pupil of the target face image is located;
and the determining, according to the pre-established mapping relationship between pupil coordinates and gaze point coordinates, of the target gaze point coordinates corresponding to the coordinates of the pupil of the target face image in the real-time coordinate system comprises:
determining, according to the second mapping relationship, the target gaze point coordinates corresponding to the grid cell in which the pupil of the target face image is located.
8. A gaze point tracking apparatus, comprising:
an acquisition unit for acquiring a target face image;
a first determining unit for establishing a real-time coordinate system with the facial feature points of the target face image as a reference and determining the coordinates of the pupil of the target face image in the real-time coordinate system;
and a second determining unit for determining, according to a pre-established mapping relationship between pupil coordinates and gaze point coordinates, target gaze point coordinates corresponding to the coordinates of the pupil of the target face image in the real-time coordinate system.
9. A terminal comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
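The sketches below are editorial illustrations only; they are not part of the claims and do not reproduce the applicant's implementation. This first sketch (Python, assuming 2D landmarks from any off-the-shelf face detector) shows one plausible shape of the per-frame step in claim 1: the detected pupil is re-expressed in a coordinate system anchored to facial feature points, then converted to a gaze point with a previously fitted mapping. The landmark layout, the normalisation by the eye-corner baseline, and all function names are assumptions.

```python
import numpy as np

def pupil_in_face_frame(landmarks: np.ndarray, pupil_px: np.ndarray) -> np.ndarray:
    """Express the pupil position in a coordinate system anchored to facial
    feature points so that in-plane head motion largely cancels out.

    landmarks: (N, 2) facial feature points in pixel coordinates; rows 0 and 1
               are assumed here to be the two eye corners used as the baseline.
    pupil_px:  (2,) pupil centre in pixel coordinates.
    """
    origin = landmarks[0]                       # reference point of the face frame
    baseline = landmarks[1] - landmarks[0]      # x-axis of the face frame
    scale = np.linalg.norm(baseline)
    x_unit = baseline / scale
    y_unit = np.array([-x_unit[1], x_unit[0]])  # perpendicular y-axis
    rel = pupil_px - origin
    # Normalising by the baseline length keeps the coordinates comparable
    # between calibration images and live frames taken at different distances.
    return np.array([rel @ x_unit, rel @ y_unit]) / scale

def track_gaze_point(landmarks, pupil_px, mapping):
    """Map the pupil's face-frame coordinates to screen coordinates using a
    previously fitted mapping (see the calibration sketch below)."""
    return mapping(pupil_in_face_frame(landmarks, pupil_px))
```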
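Claim 2 fits calibration pairs of pupil coordinates and known gaze point coordinates into the first mapping relationship. The claim does not name a fitting model; a least-squares fit of a second-order polynomial, sketched below, is one common choice and is offered purely as an assumption.

```python
import numpy as np

def fit_gaze_mapping(pupil_coords: np.ndarray, gaze_points: np.ndarray):
    """pupil_coords: (M, 2) pupil positions in the sample coordinate system,
    one per calibration image; gaze_points: (M, 2) gaze point coordinates the
    person was known to be looking at. Returns a callable pupil -> gaze mapping."""
    px, py = pupil_coords[:, 0], pupil_coords[:, 1]
    # Design matrix of second-order polynomial terms in the pupil coordinates.
    A = np.column_stack([np.ones_like(px), px, py, px * py, px ** 2, py ** 2])
    coeffs, *_ = np.linalg.lstsq(A, gaze_points, rcond=None)   # shape (6, 2)

    def mapping(p):
        x, y = p
        phi = np.array([1.0, x, y, x * y, x ** 2, y ** 2])
        return phi @ coeffs   # (2,) estimated gaze point

    return mapping
```

Under these assumptions such a fit would be computed once per user from a handful of calibration targets (at least six for this polynomial), and the returned mapping reused for every subsequent frame.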
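Claims 6 and 7 replace the coordinate fit with a lookup over grid cells of the spectacle lens: during calibration each image assigns its known gaze point to the cell that contains the pupil, and at run time the gaze point is read back from the cell the pupil currently falls in. The grid resolution, the per-cell averaging, and the bounding-box representation of the lens below are illustrative assumptions.

```python
import numpy as np

def pupil_cell(lens_bbox, pupil_px, rows=4, cols=4):
    """Return the (row, col) cell of the gridded lens region that contains the
    pupil. lens_bbox is (x_min, y_min, x_max, y_max) in pixel coordinates."""
    x_min, y_min, x_max, y_max = lens_bbox
    col = int((pupil_px[0] - x_min) / (x_max - x_min) * cols)
    row = int((pupil_px[1] - y_min) / (y_max - y_min) * rows)
    # Clamp so pupils detected on the lens border still fall in a valid cell.
    return min(max(row, 0), rows - 1), min(max(col, 0), cols - 1)

def build_grid_mapping(samples, rows=4, cols=4):
    """samples: iterable of (lens_bbox, pupil_px, gaze_xy) calibration tuples.
    Returns a dict mapping each observed grid cell to the mean gaze point."""
    buckets = {}
    for lens_bbox, pupil_px, gaze_xy in samples:
        cell = pupil_cell(lens_bbox, pupil_px, rows, cols)
        buckets.setdefault(cell, []).append(np.asarray(gaze_xy, dtype=float))
    return {cell: np.mean(pts, axis=0) for cell, pts in buckets.items()}

def grid_gaze_point(mapping, lens_bbox, pupil_px, rows=4, cols=4):
    """Run-time lookup; returns None for cells never seen during calibration."""
    return mapping.get(pupil_cell(lens_bbox, pupil_px, rows, cols))
```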
CN201910874625.3A 2019-09-16 2019-09-16 Method, device and terminal for tracking point of regard and computer readable storage medium Pending CN110619303A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910874625.3A CN110619303A (en) 2019-09-16 2019-09-16 Method, device and terminal for tracking point of regard and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN110619303A true CN110619303A (en) 2019-12-27

Family

ID=68923164

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910874625.3A Pending CN110619303A (en) 2019-09-16 2019-09-16 Method, device and terminal for tracking point of regard and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN110619303A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102498430A (en) * 2009-04-17 2012-06-13 依视路国际集团(光学总公司) Method of determining an ophthalmic lens
CN107850939A (en) * 2015-03-10 2018-03-27 艾弗里协助通信有限公司 For feeding back the system and method for realizing communication by eyes
US20180149720A1 (en) * 2015-06-08 2018-05-31 Koninklijke Philips N.V. Mri with variable density sampling
CN109697688A (en) * 2017-10-20 2019-04-30 虹软科技股份有限公司 A kind of method and apparatus for image procossing
CN108985172A (en) * 2018-06-15 2018-12-11 北京七鑫易维信息技术有限公司 A kind of Eye-controlling focus method, apparatus, equipment and storage medium based on structure light
CN109086726A (en) * 2018-08-10 2018-12-25 陈涛 A kind of topography's recognition methods and system based on AR intelligent glasses

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111857329A (en) * 2020-05-26 2020-10-30 北京航空航天大学 Method, device and equipment for calculating fixation point
US11748906B2 (en) 2020-05-26 2023-09-05 Beihang University Gaze point calculation method, apparatus and device
CN113815623B (en) * 2020-06-11 2023-08-08 广州汽车集团股份有限公司 Method for visually tracking eye point of gaze of human eye, vehicle early warning method and device
WO2021249300A1 (en) * 2020-06-11 2021-12-16 广州汽车集团股份有限公司 Method for visually tracking gaze point of human eyes, and vehicle early-warning method and apparatus
CN113815623A (en) * 2020-06-11 2021-12-21 广州汽车集团股份有限公司 Method for visually tracking human eye fixation point, vehicle early warning method and device
US11697428B2 (en) 2020-06-29 2023-07-11 Apollo Intelligent Driving Technology (Beijing) Co., Ltd. Method and apparatus for 3D modeling
CN111767844B (en) * 2020-06-29 2023-12-29 阿波罗智能技术(北京)有限公司 Method and apparatus for three-dimensional modeling
CN111767844A (en) * 2020-06-29 2020-10-13 北京百度网讯科技有限公司 Method and apparatus for three-dimensional modeling
CN112163519A (en) * 2020-09-28 2021-01-01 浙江大华技术股份有限公司 Image mapping processing method, device, storage medium and electronic device
WO2022062422A1 (en) * 2020-09-28 2022-03-31 京东方科技集团股份有限公司 Gaze point calculation apparatus and driving method therefor, and electronic device
CN112149598A (en) * 2020-09-29 2020-12-29 江苏提米智能科技有限公司 Side face evaluation method and device, electronic equipment and storage medium
CN112308932A (en) * 2020-11-04 2021-02-02 中国科学院上海微系统与信息技术研究所 Gaze detection method, device, equipment and storage medium
CN112308932B (en) * 2020-11-04 2023-12-08 中国科学院上海微系统与信息技术研究所 Gaze detection method, device, equipment and storage medium
WO2022226747A1 (en) * 2021-04-26 2022-11-03 华为技术有限公司 Eyeball tracking method and apparatus and storage medium
CN113366491A (en) * 2021-04-26 2021-09-07 华为技术有限公司 Eyeball tracking method, device and storage medium

Similar Documents

Publication Publication Date Title
CN110619303A (en) Method, device and terminal for tracking point of regard and computer readable storage medium
CN109086726B (en) Local image identification method and system based on AR intelligent glasses
EP3063602B1 (en) Gaze-assisted touchscreen inputs
US9291834B2 (en) System for the measurement of the interpupillary distance using a device equipped with a display and a camera
EP3339943A1 (en) Method and system for obtaining optometric parameters for fitting eyeglasses
CN108513668B (en) Picture processing method and device
CN104036169B (en) Biological authentication method and biological authentication apparatus
JP6307805B2 (en) Image processing apparatus, electronic device, spectacle characteristic determination method, and spectacle characteristic determination program
CN106526857B (en) Focus adjustment method and device
CN111839455A (en) Eye sign identification method and equipment for thyroid-associated ophthalmopathy
WO2022129591A1 (en) System for determining one or more characteristics of a user based on an image of their eye using an ar/vr headset
JP2022099130A (en) Determination method, determination apparatus, and determination program
WO2019095117A1 (en) Facial image detection method and terminal device
WO2022272230A1 (en) Computationally efficient and robust ear saddle point detection
US10108259B2 (en) Interaction method, interaction apparatus and user equipment
WO2022032911A1 (en) Gaze tracking method and apparatus
Yang et al. Wearable eye-tracking system for synchronized multimodal data acquisition
US20230020160A1 (en) Method for determining a value of at least one geometrico-morphological parameter of a subject wearing an eyewear
WO2021049059A1 (en) Image processing method, image processing device, and image processing program
CN113744411A (en) Image processing method and device, equipment and storage medium
CN113132642A (en) Image display method and device and electronic equipment
JP2015123262A (en) Sight line measurement method using corneal surface reflection image, and device for the same
JP6757949B1 (en) Image processing method, image processing device, and image processing program
JP6721169B1 (en) Image processing method, image processing apparatus, and image processing program
WO2022185596A1 (en) Estimation system, estimation method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20191227