CN109491508B - Method and device for determining gazing object - Google Patents


Info

Publication number
CN109491508B
Authority
CN
China
Prior art keywords
user
gazing
determining
expansion
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811427047.0A
Other languages
Chinese (zh)
Other versions
CN109491508A (en)
Inventor
秦林婵 (Qin Linchan)
黄通兵 (Huang Tongbing)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing 7Invensun Technology Co Ltd
Original Assignee
Beijing 7Invensun Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing 7Invensun Technology Co Ltd filed Critical Beijing 7Invensun Technology Co Ltd
Priority to CN201811427047.0A priority Critical patent/CN109491508B/en
Publication of CN109491508A publication Critical patent/CN109491508A/en
Application granted granted Critical
Publication of CN109491508B publication Critical patent/CN109491508B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements

Abstract

The embodiment of the application discloses a method and a device for determining a gazing object. The virtual object intersecting the user's line of sight and the virtual objects around it are all taken as objects to be selected: rather than taking only the intersected virtual object as the gazing object, the surrounding virtual objects are also considered, which improves the accuracy of determining the gazing object of the user. Specifically, a visual field image of the user may be determined according to the gazing information of the user, the visual field image including a plurality of objects to be selected; an expansion area of the initial gazing point is determined in the visual field image; and the gazing object of the user is selected from the plurality of objects to be selected according to the position relationship between the expansion area and each object to be selected.

Description

Method and device for determining gazing object
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a method and an apparatus for determining a gazing object.
Background
With the increasing demand for interactive experiences, Virtual Reality (VR) applications have developed rapidly, such as VR games, VR medical devices, and the like. In a virtual reality system or application, virtual objects can be displayed to the user; a virtual object may be built with a three-dimensional modeling tool and imported, as a three-dimensional model, into a physics engine.
The user may see different virtual objects by moving his or her eyes. A virtual object may be configured to interact with the user when the user's line of sight falls on it; for example, a virtual character may speak to the user, or the virtual object may display detailed information to the user. Meanwhile, according to the gazing object corresponding to the user's line of sight, it can also be determined which objects the user pays more attention to, such as artworks in a virtual museum or products on a virtual supermarket shelf. Interaction between a virtual object and the user requires acquiring the user's line-of-sight direction and then determining the virtual object the user is gazing at, namely the gazing object, so that interaction between the gazing object and the user can be realized.
In the prior art, the user's gazing information can be acquired from images of the user's eyes, and the virtual object on which the user's line of sight falls is then determined according to the data of each virtual object in the three-dimensional model. However, gazing information obtained from the user's eyes may carry a certain deviation, which easily causes the user's gazing point to fall on the wrong virtual object; the gazing object of the user is therefore determined inaccurately.
Disclosure of Invention
In order to solve the technical problem, embodiments of the present application provide a method and an apparatus for determining a gazing object, so that accuracy of determining a gazing object of a user is improved.
The embodiment of the application provides a method for determining a gazing object, and the method comprises the following steps:
determining a visual field image of a user according to the gazing information of the user, wherein the visual field image comprises a plurality of objects to be selected;
determining an expansion area of the user's initial fixation point in the visual field image;
and selecting the gazing object of the user from the plurality of objects to be selected according to the position relation between the expansion area and the objects to be selected.
Optionally, the selecting, according to the position relationship between the extended region and the object to be selected, the gazing object of the user from the multiple objects to be selected includes:
determining the overlapping area of the preset area where each object to be selected is located and the expansion area;
and determining the object to be selected with the overlapping area larger than or equal to a preset area as the gazing object of the user.
Optionally, the expansion area of the initial gaze point is a plurality of expansion points around the initial gaze point, the expansion points and the initial gaze point have preset scores, and the preset scores of the expansion points are negatively correlated with the distance between the expansion points and the initial gaze point;
the selecting the gazing object of the user from the plurality of objects to be selected according to the position relation between the extension area and the object to be selected includes:
obtaining a target score of each object to be selected according to the preset scores of the expansion points and/or the initial fixation point located within the preset area where that object to be selected is located;
and determining the object to be selected with the target score being greater than or equal to a preset score as the gazing object of the user.
Optionally, the expansion area of the initial gaze point is a plurality of expansion lines around the initial gaze point;
the selecting the gazing object of the user from the plurality of objects to be selected according to the position relation between the extension area and the object to be selected includes:
obtaining a target distance corresponding to each object to be selected according to the length of the portion of the expansion lines overlapping the preset area where that object to be selected is located;
and determining the object to be selected with the target distance greater than or equal to a preset distance as the gazing object of the user.
Optionally, the determining an extended region of the initial point of regard of the user in the sight field image includes:
taking the intersection point of the sight of the user and the object to be selected as an initial fixation point;
and determining an expansion area of the initial fixation point in the visual field image according to the position of the initial fixation point in the visual field image.
Optionally, the gaze information comprises at least one of: eye angle information, corneal light spot position, pupil position, pupil shape, iris position, iris shape, eyelid position, and canthus position.
Optionally, the method further includes:
performing an operation on the gazing object and/or recording the gazing object.
An embodiment of the present application further provides an image processing apparatus, including:
the visual field image determining unit is used for determining a visual field image of a user according to the gazing information of the user, and the visual field image comprises a plurality of objects to be selected;
an extended region determination unit configured to determine an extended region of the user's initial gaze point in the sight field image;
and the gazing object determining unit is used for selecting the gazing object of the user from the plurality of objects to be selected according to the position relation between the expansion area and the objects to be selected.
Optionally, the gazing object determining unit includes:
the area determining unit is used for determining the overlapping area of the preset area where each object to be selected is located and the expansion area;
and the first gazing object determining subunit is used for determining the object to be selected whose overlapping area is greater than or equal to a preset area as the gazing object of the user.
Optionally, the expansion area of the initial gaze point is a plurality of expansion points around the initial gaze point, the expansion points and the initial gaze point have preset scores, and the preset scores of the expansion points are negatively correlated with the distance between the expansion points and the initial gaze point;
the gaze object determination unit includes:
the score calculating unit is used for obtaining a target score of each object to be selected according to the preset scores of the expansion points and/or the initial fixation point located within the preset area where that object to be selected is located;
and the second gazing object determining subunit is used for determining the object to be selected, of which the target score is greater than or equal to a preset score, as the gazing object of the user.
Optionally, the expansion area of the initial gaze point is a plurality of expansion lines around the initial gaze point;
the gaze object determination unit includes:
the length determining unit is used for obtaining a target distance corresponding to each object to be selected according to the length of the portion of the expansion lines overlapping the preset area where that object to be selected is located;
and the third gazing object determining subunit is used for determining the object to be selected, of which the target distance is greater than or equal to the preset distance, as the gazing object of the user.
Optionally, the extended area determining unit includes:
an initial fixation point determining unit, configured to use an intersection point of the user's sight line and the object to be selected as an initial fixation point;
an extended region determining subunit, configured to determine an extended region of the initial gaze point in the view field image according to a position of the initial gaze point in the view field image.
Optionally, the gaze information comprises at least one of: eye angle information, corneal light spot position, pupil position, pupil shape, iris position, iris shape, eyelid position, and canthus position.
Optionally, the apparatus further comprises:
and the operation unit is used for operating and/or recording the gazing object.
The embodiment of the application provides a method and a device for determining a gazing object. The virtual object intersecting the user's line of sight and the virtual objects around it are all taken as objects to be selected: rather than taking only the intersected virtual object as the gazing object, the surrounding virtual objects are also considered, which improves the accuracy of determining the gazing object of the user. Specifically, a visual field image of the user may be determined according to the gazing information of the user, the visual field image including a plurality of objects to be selected; an expansion area of the initial gazing point is determined in the visual field image; and the gazing object of the user is selected from the plurality of objects to be selected according to the position relationship between the expansion area and each object to be selected.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments described in the present application, and other drawings can be obtained by those skilled in the art from these drawings.
Fig. 1 is a schematic view of a user's visual field image according to an embodiment of the present application;
fig. 2 is a flowchart of a method for determining a gazing object according to an embodiment of the present application;
fig. 3 is a schematic view of an expansion area provided in an embodiment of the present application;
fig. 4 is a schematic view of another expansion area provided in an embodiment of the present application;
fig. 5 is a schematic view of another expansion area provided in an embodiment of the present application;
fig. 6 is a block diagram illustrating a structure of an apparatus for determining a gazing object according to an embodiment of the present application.
Detailed Description
The inventor has found that, in the prior art, the user's gazing information is acquired from an image of the user's eyes, and the virtual object on which the user's line of sight falls is then determined according to the data of each virtual object in the three-dimensional model. However, the gazing information obtained from the user's eyes may carry a certain deviation, which easily causes the user's gazing point to fall on the wrong virtual object, so that the gazing object of the user is determined inaccurately.
For example, referring to fig. 1, a schematic visual field image of a user provided in an embodiment of the present application is shown. According to the user's gazing information, the position of the gazing point in the visual field image (the white dot) is determined to lie between the two vases on the right. In this case, the conventional technology considers the user's gazing object to be the wall behind the vases. However, the wall behind the vases is poorly lit, and the user is unlikely to be gazing at the wall between the vases; that is, the gazing point has fallen on an incorrect virtual object, resulting in inaccurate determination of the gazing object of the user.
Based on this, the embodiment of the application provides a method and a device for determining a gazing object. The virtual object intersecting the user's line of sight and the virtual objects around it are all taken as objects to be selected: rather than taking only the intersected virtual object as the gazing object, the surrounding virtual objects are also considered, which improves the accuracy of determining the gazing object of the user. Specifically, a visual field image of the user may be determined according to the gazing information of the user, the visual field image including a plurality of objects to be selected; an expansion area of the initial gazing point is determined in the visual field image; and the gazing object of the user is selected from the plurality of objects to be selected according to the position relationship between the expansion area and each object to be selected.
The following describes in detail a specific implementation manner of a method and an apparatus for determining a gazing object according to an embodiment of the present application, with reference to the accompanying drawings.
Referring to fig. 2, a flowchart of a method for determining a gazing object according to an embodiment of the present application may include the following steps.
S101, determining a visual field image of the user according to the gazing information of the user.
The user's gazing information may include a gazing direction and/or a gazing depth; the gazing direction represents the angle of the object the user is gazing at relative to the user's eyes, and the gazing depth represents the distance of that object from the user's eyes. The gazing information may be obtained from an image of the user's eyes. For example, an eye-tracking module may acquire related information of the user, such as eye angle, pupil position, pupil shape, iris position, iris shape, eyelid position, and eye corner position, and determine the user's gazing information from this information with a tracking algorithm; alternatively, the user's eye may be illuminated to form a light spot on the cornea, and the gazing information determined from the position of the light spot.
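The tracking algorithm itself is left open here. As a hedged illustration, one widely used scheme (pupil-center corneal-reflection) maps the offset between the pupil center and the corneal light spot to a gaze angle through a per-user calibrated polynomial; the function name, coefficient layout, and numbers below are illustrative assumptions, not the patent's method:

```python
# Hedged sketch of one common tracking approach (pupil-center corneal-reflection):
# the offset between the pupil center and the corneal light spot is mapped to a
# gaze angle through a per-user calibrated polynomial. All names and numbers
# are illustrative; the patent does not prescribe this algorithm.

def gaze_direction(pupil_xy, glint_xy, coeffs):
    """Map the pupil-glint offset to (yaw, pitch) gaze angles in degrees.

    coeffs holds two (a0, ax, ay) tuples from a calibration step,
    one per output angle.
    """
    dx = pupil_xy[0] - glint_xy[0]
    dy = pupil_xy[1] - glint_xy[1]
    (a0, ax, ay), (b0, bx, by) = coeffs
    yaw = a0 + ax * dx + ay * dy    # horizontal gaze angle
    pitch = b0 + bx * dx + by * dy  # vertical gaze angle
    return yaw, pitch

# Toy calibration that passes the offsets through unchanged:
yaw, pitch = gaze_direction((12.0, 8.0), (10.0, 5.0),
                            ((0.0, 1.0, 0.0), (0.0, 0.0, 1.0)))
```

In practice the calibration coefficients would come from asking the user to fixate a few known targets and fitting the polynomial to the observed offsets.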
The visual field image of the user is an image formed by the objects visible in the user's line-of-sight direction, that is, an image of what the user can see. It will be appreciated that different lines of sight yield different visual field images. The visual field image may include a plurality of objects to be selected; in the visual field image shown in fig. 1, there are 5 vases, 2 bouquets, 1 table, and 1 wall, and these objects may all be objects to be selected.
The sight field image may be a two-dimensional image captured in the direction of the user's line of sight or may be a three-dimensional image having depth information.
When the gazing information of the user includes gazing depth and the sight field image is a three-dimensional image including depth information, an object to be selected can be determined from a plurality of objects in the sight field image according to the gazing depth of the user, wherein the distance between the object to be selected and the eyes of the user is approximately the same as the gazing depth of the user.
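The depth-based narrowing described above can be sketched as a simple filter; the object names, distances, and tolerance below are illustrative assumptions:

```python
# Hypothetical sketch: when the visual field image carries depth information,
# the objects to be selected can be narrowed to those whose distance from the
# eyes roughly matches the gazing depth. Names, distances, and the tolerance
# are illustrative assumptions.

def filter_by_depth(objects, gaze_depth, tolerance=0.5):
    """objects: list of (name, distance_in_meters); returns matching names."""
    return [name for name, dist in objects if abs(dist - gaze_depth) <= tolerance]

scene = [("vase", 2.1), ("table", 2.4), ("far wall", 6.0)]
candidates = filter_by_depth(scene, gaze_depth=2.0)  # -> ["vase", "table"]
```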
S102, an expansion area of the initial fixation point of the user is determined in the visual field image.
The initial gaze point is determined according to gaze information of the user, and since the sight field image of the user is an image formed by objects visible in the sight line direction of the user, an intersection point of the sight line of the user and an object to be selected in the sight field image can be used as the initial gaze point of the user.
The initial fixation point may be a point in a two-dimensional visual field image, in which case its position in the image is represented by a two-dimensional coordinate value; it may also be a point in a three-dimensional visual field image, in which case its position is represented by a three-dimensional coordinate value.
In the prior art, if the user's initial gaze point falls on an object to be selected, that object is taken as the gazing object; once the user's gaze direction deviates, the gaze point easily falls on an incorrect object, resulting in inaccurate determination of the gazing object.
In the embodiment of the present application, after the initial gaze point of the user is determined, the extended area of the initial gaze point may also be determined. The extended region may be determined according to a coordinate value of the initial gazing point in the image coordinates of the sight field image.
Specifically, the extended region may be a region formed of a circle, a regular polygon, an irregular polygon, or the like covering the initial gaze point, and the center point of the extended region may be the initial gaze point in order to improve accuracy. Wherein the side length and direction of the regular polygon or the irregular polygon are predetermined.
Alternatively, the expansion area may be a lattice region formed by a plurality of expansion points surrounding the initial gaze point. Specifically, the lattice region may be a polygon, a circle, or another shape; the expansion points may be uniformly distributed, or may radiate outward from the initial fixation point at the center; and the longest distance between an expansion point in the lattice region and the initial gaze point may be predetermined.
The initial fixation point and each extension point may have a preset score, the preset score of the extension point may be determined in advance according to a position relationship between the extension point and the initial fixation point, and specifically, the preset score of the extension point is negatively related to a distance between the extension point and the initial fixation point. For example, the preset score of the initial point of regard may be a maximum value, and the farther away from the initial point of regard, the smaller the preset score may be.
The expansion area may also be a region formed by a plurality of expansion lines surrounding the initial point of regard. The expansion lines may be straight lines, or wavy lines, sawtooth lines, and the like; they may or may not be parallel to one another; and the length, spacing, and number of the expansion lines may be predetermined.
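As a hedged sketch of the lattice variant described above, the snippet below generates a square grid of expansion points around the initial fixation point and assigns each a preset score that decreases with distance; the grid layout and the particular linear score function are assumptions, since the patent only requires the negative correlation:

```python
import math

# Hedged sketch of the lattice variant: a square grid of expansion points is
# generated around the initial fixation point, and each point carries a preset
# score that decreases with its distance from that point. The grid layout and
# the linear score function are illustrative assumptions.

def expansion_points(gaze_xy, radius=2, spacing=1.0, center_score=22):
    """Return ((x, y), score) pairs, including the initial fixation point."""
    gx, gy = gaze_xy
    points = []
    for i in range(-radius, radius + 1):
        for j in range(-radius, radius + 1):
            d = math.hypot(i, j) * spacing        # distance to the center
            score = max(center_score - 5 * d, 1)  # negatively correlated with d
            points.append(((gx + i * spacing, gy + j * spacing), score))
    return points

pts = expansion_points((0.0, 0.0), radius=1)
center = dict(pts)[(0.0, 0.0)]  # the initial fixation point keeps the maximum score
```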
S103, selecting the gazing object of the user from the multiple objects to be selected according to the position relation between the expansion area and the objects to be selected.
The position relationship between the extension area and the object to be selected may be the position relationship between the extension area and the object to be selected in the view field image, for example, the extension area and the area where the object to be selected is located have a certain overlapping area, or the extension area and the area where the object to be selected is located do not overlap.
As one possible implementation, when the expansion area is a region formed by a circle, a regular polygon, or an irregular polygon covering the initial gaze point, the overlapping area between the preset area where each object to be selected is located and the expansion area may be determined according to the position relationship between them, and the object to be selected whose overlapping area is greater than or equal to a preset area is determined as the gazing object of the user. The preset area of an object to be selected may be the region the object occupies in the visual field image, or a region of preset size and shape determined according to the object's position in the visual field image. When the number of objects to be selected whose overlapping area is greater than or equal to the preset area is more than one or zero, the object to be selected with the largest overlapping area may be determined as the gazing object of the user.
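A minimal sketch of this overlap-area rule, assuming a circular expansion area, axis-aligned rectangular preset areas, and a grid-sampling approximation of the overlap area (none of these specifics are prescribed by the patent):

```python
# Minimal sketch of the overlap-area rule, assuming a circular expansion area,
# axis-aligned rectangular preset areas, and a grid-sampling approximation of
# the overlap area. Coordinates are illustrative.

def overlap_area(circle, rect, step=0.05):
    """Approximate the area of circle-rectangle overlap by sampling.

    circle = (cx, cy, r); rect = (x0, y0, x1, y1).
    """
    cx, cy, r = circle
    x0, y0, x1, y1 = rect
    area = 0.0
    x = cx - r
    while x <= cx + r:
        y = cy - r
        while y <= cy + r:
            in_circle = (x - cx) ** 2 + (y - cy) ** 2 <= r * r
            in_rect = x0 <= x <= x1 and y0 <= y <= y1
            if in_circle and in_rect:
                area += step * step
            y += step
        x += step
    return area

def pick_gaze_object(circle, regions):
    """regions: dict name -> rect; return the name with the largest overlap."""
    return max(regions, key=lambda name: overlap_area(circle, regions[name]))

regions = {"left vase": (-3.0, -1.0, -0.8, 1.0),
           "right vase": (0.2, -1.0, 3.0, 1.0)}
gaze_object = pick_gaze_object((0.5, 0.0, 1.0), regions)  # -> "right vase"
```

A production implementation would use exact polygon intersection (e.g. a geometry library) rather than sampling; the grid approximation just keeps the sketch self-contained.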
Referring to fig. 3, a schematic diagram of an expansion area provided in an embodiment of the present application is shown, where the expansion area is a circular region centered on the initial point of regard, with a predetermined radius. This region has overlapping areas with the two vases, the wall, and the table; the object to be selected with the largest overlapping area is the second vase on the right, so that vase may be taken as the gazing object of the user.
As another possible implementation, when the expansion area is a plurality of expansion points around the initial gaze point, the expansion points and/or initial gaze point falling within the preset area where each object to be selected is located are determined; the target score of each object to be selected is then obtained from the preset scores of those points, and the object to be selected whose target score is greater than or equal to a preset score is determined as the gazing object of the user. When the number of objects to be selected whose target score is greater than or equal to the preset score is more than one or zero, the object to be selected with the highest target score may be determined as the gazing object of the user.
Referring to fig. 4, another schematic diagram of an expansion area provided in an embodiment of the present application is shown, where the expansion area is a plurality of expansion points surrounding the initial gaze point, and the score of each expansion point may be determined according to its position relative to the initial gaze point. The expansion points in the figure fall into two categories: gray dots at a first distance from the initial gaze point, and black dots at a second distance. As the figure shows, the first distance is smaller than the second; accordingly, the white initial point of regard may be assigned a preset score of 22, the gray expansion points a score of 12, and the black expansion points a score of 6.
Only one black expansion point falls on the table, so the table's score is 6. The points falling on the wall are one gray expansion point, one black expansion point, and the initial fixation point, so the wall's score is 40. The expansion points on the first vase on the right are 3 gray and 3 black, giving it a score of 54; those on the second vase on the right are 4 gray and 3 black, giving it a score of 66. The second vase on the right therefore has the highest score and may be taken as the gazing object of the user.
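The worked example above can be reproduced directly; the per-object point counts are taken from the fig. 4 description, and the dictionary-based bookkeeping is an illustrative choice:

```python
# Reproduction of the fig. 4 scoring example: the initial fixation point scores
# 22, the nearer (gray) expansion points 12, and the farther (black) points 6;
# each object's target score sums the scores of the points inside its preset
# area. The per-object point counts are taken from the example above.

SCORES = {"initial": 22, "gray": 12, "black": 6}

def target_score(point_counts):
    """point_counts: dict kind -> number of such points on the object."""
    return sum(SCORES[kind] * n for kind, n in point_counts.items())

objects = {
    "table":       {"black": 1},
    "wall":        {"initial": 1, "gray": 1, "black": 1},
    "first vase":  {"gray": 3, "black": 3},
    "second vase": {"gray": 4, "black": 3},
}
scores = {name: target_score(counts) for name, counts in objects.items()}
gaze_object = max(scores, key=scores.get)  # -> "second vase" (score 66)
```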
As another possible implementation, when the expansion area is a plurality of expansion lines around the initial gaze point, the target distance corresponding to each object to be selected may be obtained from the length of the portion of the expansion lines that overlaps the preset area where that object is located, and the object to be selected whose target distance is greater than or equal to a preset distance is determined as the gazing object of the user. The length of the overlapping portion may be taken as the distance between its two end points, or as the actual length of the portion; when the expansion lines are straight lines, the two are the same. When the number of objects to be selected whose target distance is greater than or equal to the preset distance is more than one or zero, the object to be selected with the largest target distance may be determined as the gazing object of the user.
Referring to fig. 5, another schematic diagram of an expansion area provided in an embodiment of the present application is shown, in which a plurality of expansion lines surround the initial gaze point and overlap several objects to be selected. According to the length of the overlapping line corresponding to each object to be selected, the gazing object of the user may be determined among them. As can be seen from the figure, the second vase on the right has the longest overlapping line, so it may be taken as the gazing object of the user.
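A hedged sketch of the expansion-line rule, assuming horizontal straight lines and rectangular preset areas (all coordinates are illustrative):

```python
# Sketch of the expansion-line rule: several horizontal straight lines around
# the initial fixation point; each object's target distance is the total
# length of line segments falling inside its preset rectangle. The geometry
# is an illustrative assumption.

def overlap_length(line, rect):
    """line = (x0, x1, y); rect = (rx0, ry0, rx1, ry1)."""
    x0, x1, y = line
    rx0, ry0, rx1, ry1 = rect
    if not (ry0 <= y <= ry1):
        return 0.0
    return max(0.0, min(x1, rx1) - max(x0, rx0))

def target_distance(lines, rect):
    return sum(overlap_length(line, rect) for line in lines)

lines = [(-2.0, 2.0, -0.5), (-2.0, 2.0, 0.0), (-2.0, 2.0, 0.5)]
regions = {"left vase": (-3.0, -1.0, -0.8, 1.0),
           "right vase": (0.2, -1.0, 3.0, 1.0)}
dists = {name: target_distance(lines, r) for name, r in regions.items()}
gaze_object = max(dists, key=dists.get)  # -> "right vase"
```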
After the gazing object of the user is determined, the gazing object may be operated on, for example by having it speak to the user or by displaying its detailed information. The gazing object may also be recorded, so that information about the objects the user is interested in is collected and the user's interests can be analyzed.
The embodiment of the application provides a method for determining a gazing object: a visual field image of the user is determined according to the user's gazing information, the visual field image including a plurality of objects to be selected; an expansion area of the initial gazing point is determined in the visual field image; and the gazing object of the user is selected from the plurality of objects to be selected according to the position relationship between the expansion area and each object to be selected. In the embodiment of the application, the virtual object intersecting the user's line of sight and the virtual objects around it are all taken as objects to be selected: rather than taking only the intersected virtual object as the gazing object, the surrounding virtual objects are also considered, which improves the accuracy of determining the gazing object of the user.
Based on the above method for determining a gazing object, an embodiment of the present application further provides an apparatus for determining a gazing object, and referring to fig. 6, the apparatus for determining a gazing object according to an embodiment of the present application is shown in a block diagram, and the apparatus includes:
a visual field image determining unit 110, configured to determine a visual field image of a user according to gaze information of the user, where the visual field image includes a plurality of objects to be selected;
an extended region determining unit 120 for determining an extended region of the user's initial gaze point in the sight field image;
a gazing object determining unit 130, configured to select a gazing object of the user from the multiple objects to be selected according to a position relationship between the extended area and the object to be selected.
Optionally, the gazing object determining unit includes:
the area determining unit is used for determining the overlapping area of the preset area where each object to be selected is located and the expansion area;
and the first gazing object determining subunit is used for determining the object to be selected, of which the overlapping area is greater than or equal to a preset area, as the gazing object of the user.
Optionally, the expansion region of the initial gaze point is a plurality of expansion points around the initial gaze point, the expansion points and the initial gaze point have preset scores, and the preset scores of the expansion points are negatively correlated with the distance between the expansion points and the initial gaze point;
the gaze object determination unit includes:
the score calculating unit is used for obtaining a target score of each object to be selected according to the preset scores of the expansion points and/or the initial fixation point that fall within the preset area where that object to be selected is located;
and the second gazing object determining subunit is used for determining an object to be selected whose target score is greater than or equal to a preset score as the gazing object of the user.
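The scoring variant can likewise be sketched. The patent only requires that each expansion point carry a preset score negatively correlated with its distance from the initial gaze point; everything else below is an assumption — the points are placed on concentric rings, the score is `1 / (1 + distance)`, preset areas are axis-aligned rectangles, and all function names are hypothetical.

```python
import math

def expansion_points(gaze_xy, radius=20.0, rings=2, per_ring=8):
    """Generate expansion points on concentric rings around the initial
    gaze point. Each point carries a preset score that decreases as its
    distance from the gaze point grows (negative correlation)."""
    gx, gy = gaze_xy
    points = [((gx, gy), 1.0)]  # the initial gaze point itself, highest score
    for ring in range(1, rings + 1):
        r = radius * ring / rings
        score = 1.0 / (1.0 + r)  # farther ring -> lower preset score
        for k in range(per_ring):
            theta = 2 * math.pi * k / per_ring
            points.append(((gx + r * math.cos(theta),
                            gy + r * math.sin(theta)), score))
    return points

def score_candidates(points, candidates):
    """Target score of a candidate = sum of the preset scores of the
    expansion points (and gaze point) inside its preset rectangle.
    candidates maps a name to (x_min, y_min, x_max, y_max)."""
    totals = {name: 0.0 for name in candidates}
    for (px, py), s in points:
        for name, (x0, y0, x1, y1) in candidates.items():
            if x0 <= px <= x1 and y0 <= py <= y1:
                totals[name] += s
    return totals
```

A candidate whose total meets the preset score is then taken as the gazing object, exactly as the second gazing object determining subunit describes.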
Optionally, the expansion area of the initial gaze point is a plurality of expansion lines around the initial gaze point;
the gaze object determination unit includes:
the length determining unit is used for obtaining a target distance corresponding to each object to be selected according to the length of the portions of the expansion lines that overlap the preset area where that object to be selected is located;
and the third gazing object determining subunit is used for determining an object to be selected whose target distance is greater than or equal to a preset distance as the gazing object of the user.
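The expansion-line variant measures how much of each line falls inside a candidate's preset area. A rough sketch, assuming the expansion lines are equally spaced rays from the initial fixation point and approximating the overlap length by sampling (an exact line-rectangle clipping would serve equally well); all names and constants are hypothetical.

```python
import math

def ray_overlap_length(gaze_xy, angle, length, rect, step=0.5):
    """Approximate how much of one expansion line (a ray of the given
    length from the gaze point) lies inside the rectangle
    (x0, y0, x1, y1), by sampling the ray at `step` intervals."""
    gx, gy = gaze_xy
    x0, y0, x1, y1 = rect
    inside = 0
    for i in range(int(length / step)):
        t = i * step
        px, py = gx + t * math.cos(angle), gy + t * math.sin(angle)
        if x0 <= px <= x1 and y0 <= py <= y1:
            inside += 1
    return inside * step

def target_distances(gaze_xy, candidates, n_lines=8, length=30.0):
    """Target distance of each candidate = total length of the portions
    of all expansion lines that overlap its preset rectangle."""
    out = {}
    for name, rect in candidates.items():
        out[name] = sum(
            ray_overlap_length(gaze_xy, 2 * math.pi * k / n_lines, length, rect)
            for k in range(n_lines))
    return out
```

Candidates whose target distance reaches the preset distance are then taken as gazing objects, mirroring the third gazing object determining subunit.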
Optionally, the extended area determining unit includes:
an initial fixation point determining unit, configured to use an intersection point of the user's sight line and the object to be selected as an initial fixation point;
an extended region determining subunit, configured to determine an extended region of the initial gaze point in the view field image according to a position of the initial gaze point in the view field image.
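Taking the intersection of the user's sight line with an object to be selected as the initial fixation point is, in the simplest planar case, a ray-box intersection. A sketch using the standard slab method, assuming each object is represented by an axis-aligned bounding box in the visual field image; the function names are hypothetical.

```python
def ray_aabb_hit(origin, direction, box):
    """First intersection parameter t of a 2D ray with an axis-aligned
    box (x0, y0, x1, y1), via the slab method; None if the ray misses."""
    (ox, oy), (dx, dy) = origin, direction
    x0, y0, x1, y1 = box
    tmin, tmax = 0.0, float("inf")
    for o, d, lo, hi in ((ox, dx, x0, x1), (oy, dy, y0, y1)):
        if abs(d) < 1e-12:
            if not (lo <= o <= hi):  # ray parallel to this slab and outside it
                return None
        else:
            t1, t2 = (lo - o) / d, (hi - o) / d
            tmin = max(tmin, min(t1, t2))
            tmax = min(tmax, max(t1, t2))
            if tmin > tmax:
                return None
    return tmin

def initial_gaze_point(origin, direction, box):
    """Initial gaze point = point where the sight line first meets the
    candidate object's bounding box (None if it never does)."""
    t = ray_aabb_hit(origin, direction, box)
    if t is None:
        return None
    (ox, oy), (dx, dy) = origin, direction
    return (ox + t * dx, oy + t * dy)
```

The expansion region (square, ring of points, or fan of lines, per the variants above) is then constructed around the returned point.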
Optionally, the gaze information comprises at least one of: eye angle information, corneal glint position, pupil position, pupil shape, iris position, iris shape, eyelid position, and canthus position.
Optionally, the apparatus further comprises:
and the operation unit is used for operating and/or recording the gazing object.
The embodiment of the application provides a device for determining a gazing object. The device takes both the virtual object intersecting the user's line of sight and the virtual objects around it as objects to be selected: the virtual object intersecting the line of sight is not directly treated as the gazing object, and the other virtual objects around it are also considered, which improves the accuracy of determining the user's gazing object. Specifically, a visual field image of the user may be determined according to the gazing information of the user, the visual field image including a plurality of objects to be selected; an expansion region of the initial gazing point is determined in the visual field image; and the gazing object of the user is selected from the plurality of objects to be selected according to the positional relationship between the expansion region and each object to be selected.
In the embodiments of the present application, the terms "first", "second", etc. in names such as "first gazing object determining subunit" are used merely as identifiers and do not indicate an order.
As can be seen from the above description of the embodiments, those skilled in the art can clearly understand that all or part of the steps in the above embodiment methods can be implemented by software plus a general hardware platform. Based on such understanding, the technical solution of the present application may be embodied in the form of a software product, which may be stored in a storage medium, such as a read-only memory (ROM)/RAM, a magnetic disk, an optical disk, etc., and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network communication device such as a router, etc.) to execute the method according to the embodiments or some parts of the embodiments of the present application.
The embodiments in the present specification are described in a progressive manner; for the same or similar parts among the embodiments, reference may be made to one another, and each embodiment focuses on its differences from the others. In particular, since the method and apparatus embodiments are substantially similar to the system embodiments, they are described relatively simply, and for related points reference may be made to the description of the system embodiments. The apparatus and system embodiments described above are merely illustrative: modules described as separate parts may or may not be physically separate, and parts shown as modules may or may not be physical modules; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment, which a person of ordinary skill in the art can understand and implement without inventive effort.
The above description is only a preferred embodiment of the present application and is not intended to limit the scope of the present application. It should be noted that, for a person skilled in the art, several improvements and modifications can be made without departing from the scope of the present application, and these improvements and modifications should also be considered as the protection scope of the present application.

Claims (8)

1. A method of determining a gaze object, comprising:
determining a visual field image of a user according to gazing information of the user, wherein the visual field image comprises a plurality of objects to be selected, and the objects to be selected comprise virtual objects intersected with the sight line of the user and virtual objects around the virtual objects;
determining an expansion area of the user initial fixation point in the visual field image;
selecting the gazing object of the user from the plurality of objects to be selected according to the position relation between the expansion area and the objects to be selected;
the selecting the gazing object of the user from the plurality of objects to be selected according to the position relation between the extension area and the object to be selected includes: determining the overlapping area of the preset area where each object to be selected is located and the expansion area; determining the object to be selected with the overlapping area larger than or equal to a preset area as the gazing object of the user;
or, the expansion area of the initial gaze point is a plurality of expansion points around the initial gaze point, the expansion points and the initial gaze point have preset scores, and the preset scores of the expansion points are negatively correlated with the distance between the expansion points and the initial gaze point; the selecting the gazing object of the user from the plurality of objects to be selected according to the position relation between the extension area and the object to be selected includes: obtaining a target score of each object to be selected according to an expansion point and/or a preset score of an initial fixation point in a preset area where each object to be selected is located; determining the object to be selected with the target score larger than or equal to a preset score as the gazing object of the user;
or the expansion area of the initial fixation point is a plurality of expansion lines around the initial fixation point; the selecting the gazing object of the user from the plurality of objects to be selected according to the position relation between the expansion area and the object to be selected includes: obtaining a target distance corresponding to each object to be selected according to the length of the portions of the expansion lines that overlap the preset area where each object to be selected is located; and determining the object to be selected with the target distance greater than or equal to a preset distance as the gazing object of the user.
2. The method of claim 1, wherein determining an extended region of the user's initial gaze point in the field of view image comprises:
taking the intersection point of the sight of the user and the object to be selected as an initial fixation point;
and determining an expansion area of the initial fixation point in the visual field image according to the position of the initial fixation point in the visual field image.
3. The method according to any of claims 1-2, wherein the gaze information comprises at least one of: eye angle information, corneal glint position, pupil position, pupil shape, iris position, iris shape, eyelid position, and canthus position.
4. The method according to any one of claims 1-2, further comprising:
operating and/or recording the gazing object.
5. An apparatus for determining a gaze object, comprising:
the visual field image determining unit is used for determining a visual field image of a user according to the gazing information of the user, and the visual field image comprises a plurality of objects to be selected;
an extended region determination unit configured to determine an extended region of the user's initial gaze point in the visual field image;
a gazing object determining unit, configured to select a gazing object of the user from the multiple objects to be selected according to a position relationship between the extended area and the object to be selected;
the gaze object determination unit includes: the area determining unit is used for determining the overlapping area of the preset area where each object to be selected is located and the expansion area; the first gazing object determining subunit is used for determining the object to be selected, of which the overlapping area is larger than or equal to a preset area, as the gazing object of the user;
or, the expansion area of the initial gaze point is a plurality of expansion points around the initial gaze point, the expansion points and the initial gaze point have preset scores, and the preset scores of the expansion points are negatively correlated with the distance between the expansion points and the initial gaze point; the gaze object determination unit includes: the score calculating unit is used for obtaining a target score of each object to be selected according to a preset score of an expansion point and/or an initial fixation point in a preset area where each object to be selected is located; the second gazing object determining subunit is used for determining the object to be selected, of which the target score is greater than or equal to a preset score, as the gazing object of the user;
or the expansion area of the initial fixation point is a plurality of expansion lines around the initial fixation point; the gaze object determination unit includes: the length determining unit is used for obtaining a target distance corresponding to each object to be selected according to the distance of a part of lines, which are overlapped with the preset area where each object to be selected is located, in the expansion lines; and the third gazing object determining subunit is used for determining the object to be selected, of which the target distance is greater than or equal to the preset distance, as the gazing object of the user.
6. The apparatus of claim 5, wherein the extended area determining unit comprises:
an initial fixation point determining unit, configured to use an intersection of the user's sight line and the object to be selected as an initial fixation point;
an extended region determining subunit, configured to determine an extended region of the initial gaze point in the view field image according to a position of the initial gaze point in the view field image.
7. The apparatus of any of claims 5-6, wherein the gaze information comprises at least one of: eye angle information, corneal glint position, pupil position, pupil shape, iris position, iris shape, eyelid position, and canthus position.
8. The apparatus of any one of claims 5-6, further comprising:
and the operation unit is used for operating and/or recording the gazing object.
CN201811427047.0A 2018-11-27 2018-11-27 Method and device for determining gazing object Active CN109491508B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811427047.0A CN109491508B (en) 2018-11-27 2018-11-27 Method and device for determining gazing object

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811427047.0A CN109491508B (en) 2018-11-27 2018-11-27 Method and device for determining gazing object

Publications (2)

Publication Number Publication Date
CN109491508A CN109491508A (en) 2019-03-19
CN109491508B true CN109491508B (en) 2022-08-26

Family

ID=65697806

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811427047.0A Active CN109491508B (en) 2018-11-27 2018-11-27 Method and device for determining gazing object

Country Status (1)

Country Link
CN (1) CN109491508B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111751987B (en) 2019-03-29 2023-04-14 托比股份公司 Holographic eye imaging apparatus
CN111813214B (en) * 2019-04-11 2023-05-16 广东虚拟现实科技有限公司 Virtual content processing method and device, terminal equipment and storage medium
CN110161721A (en) * 2019-04-24 2019-08-23 苏州佳世达光电有限公司 Eyeglass focus adjusting method and liquid Zoom glasses equipment
SE543229C2 (en) * 2019-06-19 2020-10-27 Tobii Ab Method and system for determining a refined gaze point of a user
CN114830011A (en) 2019-12-06 2022-07-29 奇跃公司 Virtual, augmented and mixed reality systems and methods
CN113536909B (en) * 2021-06-08 2022-08-26 吉林大学 Pre-aiming distance calculation method, system and equipment based on eye movement data

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012063547A (en) * 2010-09-15 2012-03-29 Canon Inc Image processing apparatus and image processing method
US8379981B1 (en) * 2011-08-26 2013-02-19 Toyota Motor Engineering & Manufacturing North America, Inc. Segmenting spatiotemporal data based on user gaze data
CN107229898A (en) * 2016-03-24 2017-10-03 国立民用航空学院 Boolean during 3D is shown is managed
CN107396086A (en) * 2017-07-28 2017-11-24 歌尔科技有限公司 The method and VR helmets of video are played based on VR helmets

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050047629A1 (en) * 2003-08-25 2005-03-03 International Business Machines Corporation System and method for selectively expanding or contracting a portion of a display using eye-gaze tracking
US9323325B2 (en) * 2011-08-30 2016-04-26 Microsoft Technology Licensing, Llc Enhancing an object of interest in a see-through, mixed reality display device
EP2709060B1 (en) * 2012-09-17 2020-02-26 Apple Inc. Method and an apparatus for determining a gaze point on a three-dimensional object
CN103475893B (en) * 2013-09-13 2016-03-23 北京智谷睿拓技术服务有限公司 The pick-up method of object in the pick device of object and three-dimensional display in three-dimensional display
EP3120221A1 (en) * 2014-03-17 2017-01-25 ITU Business Development A/S Computer-implemented gaze interaction method and apparatus
CN104317392B (en) * 2014-09-25 2018-02-27 联想(北京)有限公司 A kind of information control method and electronic equipment
CN108874148A (en) * 2018-07-16 2018-11-23 北京七鑫易维信息技术有限公司 A kind of image processing method and device


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Research on the implementation mechanism of gaze-tracking-based intelligent interfaces; Hu Wenting et al.; Computer Applications and Software; 2016-01-15 (No. 01); full text *
The "positivity effect" in attention and memory: perspectives from the "paradox of aging" and socioemotional selectivity theory; Wu Lin et al.; Advances in Psychological Science; 2009-03-15 (No. 02); full text *

Also Published As

Publication number Publication date
CN109491508A (en) 2019-03-19

Similar Documents

Publication Publication Date Title
CN109491508B (en) Method and device for determining gazing object
CN106959759B (en) Data processing method and device
US20240111164A1 (en) Display systems and methods for determining registration between a display and eyes of user
US20230037046A1 (en) Depth plane selection for multi-depth plane display systems by user categorization
Zhao et al. CueSee: exploring visual cues for people with low vision to facilitate a visual search task
JP7233927B2 (en) A head-mounted display system configured to exchange biometric information
US9313481B2 (en) Stereoscopic display responsive to focal-point shift
US9135508B2 (en) Enhanced user eye gaze estimation
Rogers et al. Motion parallax as an independent cue for depth perception
EP3339943A1 (en) Method and system for obtaining optometric parameters for fitting eyeglasses
Fuhl et al. Non-intrusive practitioner pupil detection for unmodified microscope oculars
AU2022202543A1 (en) Eye image collection, selection, and combination
US6329989B1 (en) Ocular optical system simulating method and simulating apparatus
EP1691670B1 (en) Method and apparatus for calibration-free eye tracking
CN109766011A (en) A kind of image rendering method and device
JP6625976B2 (en) Method for determining at least one optical design parameter of a progressive ophthalmic lens
CN108139600A (en) For determining the method for the optical system of progressive lens
CN108089332B (en) VR head-mounted display equipment and display method
US11822718B2 (en) Display systems and methods for determining vertical alignment between left and right displays and a user's eyes
US11789262B2 (en) Systems and methods for operating a head-mounted display system based on user identity
US10488949B2 (en) Visual-field information collection method and system for executing the visual-field information collection method
KR20180034278A (en) Visual perception training device, method and program for visual perception training using head mounted device
CN108135465A (en) The method of the equipment for testing the visual behaviour of individual and at least one optical design parameters that ophthalmic lens are determined using the equipment
CN106708249B (en) Interaction method, interaction device and user equipment
CN110269586A (en) For capturing the device and method in the visual field of the people with dim spot

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant