KR20160026565A - method for 3-D eye-gage tracking - Google Patents

method for 3-D eye-gage tracking

Info

Publication number
KR20160026565A
KR20160026565A KR1020140115680A
Authority
KR
South Korea
Prior art keywords
user
gaze
camera
information
eyes
Prior art date
Application number
KR1020140115680A
Other languages
Korean (ko)
Inventor
박강령
장제웅
허환
황민철
Original Assignee
상명대학교서울산학협력단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 상명대학교서울산학협력단 filed Critical 상명대학교서울산학협력단
Priority to KR1020140115680A priority Critical patent/KR20160026565A/en
Publication of KR20160026565A publication Critical patent/KR20160026565A/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion

Abstract

The present invention relates to a three-dimensional gaze tracking method using binocular gaze information, and a system using the same. The tracking method of the present invention comprises the following steps: obtaining conversion information from which a gaze position of the user can be obtained from the user's gaze information, for a plurality of objects located at different distances from the user; obtaining binocular gaze information of the user gazing at one object; individually obtaining the binocular gaze difference with respect to each of the objects by using the binocular gaze information; and comparing the gaze differences for the objects to determine the object with the minimum value as the object gazed at by the user.

Description

Method for 3-D Eye-Gaze Tracking Using Binocular Gaze Information

The present invention relates to a three-dimensional (3D) line-of-sight tracking method, and more particularly, to a three-dimensional line-of-sight tracking method using binocular gaze information.

A pointing method that tracks a user's gaze in real space and selects an object within the user's viewing range is mainly applied to interfaces between a user and electronic devices, for example for severely disabled persons.

The prior art of Patent Document 1 relates to a goggle-type wearable line-of-sight tracking apparatus and method. This prior art uses two cameras that can be worn in the form of goggles: one photographs one eye to track the gaze, and the other photographs the monitor screen that the user sees. In this prior art, a monitor area is set by template matching based on the edge positions of the monitor in the forward image acquired through the front camera, and the gaze is traced.

Such a gaze tracking device only tracks the user's gaze on a monitor, so its field of application is very limited. It can readily be used as an interface device for severely disabled persons and other users with special needs, but in order to expand its field of application it is necessary to pursue gaze tracking in the actual space and the development of applications that use this gaze tracking result.

Patent Document 1 proposes a method of determining the two-dimensional (horizontal and vertical) line-of-sight position. However, since the distance from the user in three-dimensional space is not considered, accurate gaze tracking is difficult.

KR 1013438750000 B

Cheol Woo Cho, Ji Woo Lee, Eui Chul Lee, Kang Ryoung Park, "A Robust Gaze Tracking Method Using Frontal Viewing and Eye Tracking Cameras", Optical Engineering, Vol. 48, No. 12, 127202, Dec. 2009.
R. C. Gonzalez and R. E. Woods, Digital Image Processing, 2nd ed., Prentice-Hall, Englewood Cliffs, NJ, 2002.
W. Doyle, "Operations useful for similarity-invariant pattern recognition", J. Assoc. Comput. Mach., 1962, 9: 259-267.
Y. J. Ko, E. C. Lee, and K. R. Park, "A robust gaze detection method by compensating for facial movements based on corneal specularities," Pattern Recogn. Lett. 29, 10, 1474-1485, 2008.
Hwan Heo, Ji Woo Lee, Won Oh Lee, Eui Chul Lee, Kang Ryoung Park, "A Study on the New Emotional Interface for the Disabled", Journal of the Ergonomics Society of Korea, Vol. 30, No. 1, pp. 229-23, Feb. 2011.

The present invention provides a method capable of three-dimensional line-of-sight tracking and a system for applying the method.

Accordingly, the present invention provides a gaze tracking method with improved accuracy and a system for applying the same.

A three-dimensional line-of-sight tracking method according to the present invention:

Obtaining conversion information capable of acquiring a user's gazing position from the user's gaze information for a plurality of objects having different distances from the user;

Obtaining gaze information of both eyes of a user gazing at an object;

Obtaining the gaze difference of both eyes for each of the objects using the binocular gaze information;

And comparing the gaze differences for the objects to determine the object with the minimum value as the object at which the user is gazing.

A three-dimensional line-of-sight tracking system according to the present invention:

A line-of-sight tracking system for performing the above-described method, comprising:

A first camera for photographing both eyes of the user;

A second camera for photographing an object to be looked at by the user;

And an analysis system for determining the direction of the user's gaze from the image information from the first camera and the second camera.

According to an embodiment of the present invention, the object may be an electronic product located in a space in which the user resides.

According to another embodiment of the present invention, the conversion information may include information on a single object arranged at a different distance from the user.

According to another embodiment of the present invention, the transformation information may be a calibration matrix or a transformation matrix obtained by a geometric transform.

According to another embodiment of the present invention, the gaze information of the user is acquired from a camera that photographs the user's eyes, and the pupil center coordinates can be obtained by subjecting the eye images acquired from the camera to histogram-analysis-based binarization and component labeling.

According to the present invention, objects arranged at different distances in three-dimensional space can be recognized even though only two-dimensional image data are used. According to the present invention, a plurality of transformation matrices for gaze-distance measurement can be obtained through calibration at a plurality of distances, and the gaze in three-dimensional space can be accurately determined.

FIG. 1 is a schematic block diagram of a three-dimensional line-of-sight tracking system according to the present invention.
FIG. 2 is a flowchart of a gaze tracking algorithm applied to a three-dimensional line-of-sight tracking method according to the present invention.
FIG. 3 is a conceptual diagram of a geometric transformation showing the mapping relationship between the pupil position and the forward image.
FIG. 4 illustrates the user calibration process of a three-dimensional line-of-sight tracking method according to the present invention and the user's forward line of sight at this time.
FIG. 5 shows a method of predicting the target object using the difference between the left and right eye gazes in a three-dimensional line-of-sight tracking method according to the present invention.
FIGS. 6(a), 6(b) and 6(c) show the binocular gaze difference for each object when a transformation matrix is applied to each of the objects located at different distances from the user.
FIG. 7 schematically shows the entire flow of a three-dimensional line-of-sight tracking method according to the present invention.
FIG. 8 is a photograph of an experimental procedure of a three-dimensional line-of-sight tracking method according to the present invention.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

Hereinafter, preferred embodiments of a gaze tracking method according to the present invention and of a system to which it is applied will be described with reference to the accompanying drawings.

FIG. 1 shows the arrangement, in the line-of-sight tracking system, of a user 10, an object 30 such as a TV or monitor that the user 10 watches, and a camera device 20 between the user 10 and the object 30. The user 10 is positioned to view the object 30 directly, and the camera device 20 between them includes a first camera 21 for photographing the face of the user and a second camera 22 for photographing the object 30 viewed by the user 10. The first camera 21 is the camera always used to track the line of sight of the user 10, and the second camera 22 is a camera used for line-of-sight calibration. The camera device 20 is connected to a computer-based system 40, as in a conventional system.

The first camera 21 may include an illumination lamp, preferably an infrared illumination lamp, for example a near-infrared LED lamp, for illuminating the eye region of the user's face, in particular both eyes rather than a single eye. Meanwhile, position indicators 30a, 30b, 30c and 30d are provided at the four corners of the object 30 so that the second camera 22 can recognize the object, and the second camera 22 uses infrared light to perform the calibration of the gaze position.

According to another embodiment of the present invention, the illumination lamp of the first camera 21 may be a near-infrared LED lamp in the 700 nm to 900 nm range, specifically in the 880 nm wavelength band, to prevent glare. The first camera 21 is an infrared camera equipped with an infrared transmission filter in order to obtain an image with a sharp pupil boundary and constant brightness that is not affected by external light. That is, whereas an ordinary camera has an infrared cut-off filter in front of its sensor, in the present invention the eye camera is provided with an infrared transmission filter and acquires the eye image from the light of the illumination lamp reflected by the eye, without the influence of external light.

In addition, according to the present invention, the system 40 may include an input device such as a mouse and a keyboard for inputting information and commands, and the camera device 20 may be connected to the system 40 via USB (Universal Serial Bus).

The gaze tracking method according to the present invention, and the system to which it is applied, detect the pupil position and the line of sight by applying conventional techniques such as circular edge detection, local binarization, component labeling and region filling, and by performing a geometric transform.

First, non-patent document 5 can be referred to for understanding the basic gaze tracking method, which will be briefly described below.

FIG. 2 shows a flowchart of the gaze tracking algorithm applied to the present invention. After acquiring the eye image (21), the center of the pupil is extracted using the algorithm described later (22). After extraction of the pupil center, if the initial calibration has not yet been performed and it is determined that calibration is necessary (23), the calibration described below is performed (25). If it is determined that calibration has already been performed (23), the position in the front image region corresponding to the detected pupil center is calculated, giving the gaze position in the actual space (24).

Specifically, the whole process is as follows. Local binarization, component labeling and region filling are performed to find the center of the pupil in the eye image acquired by the first camera 21.

The circular detection algorithm determines the initial pupil region through circular template matching (refer to non-patent document [1]). Since the detected pupil may appear elliptical rather than circular depending on the gaze position and the camera angle, the position determined by circular template matching alone is not accurate. Therefore, a local binarization process is performed around the determined position. Since the rectangular region is classified into two classes, a pupil region (foreground) and a non-pupil region (background), the binarization threshold is determined using a histogram-based binarization method (refer to non-patent documents [2] and [3]).

After local binarization, noise may remain due to eyebrows or shadows, and reflected light inside the pupil region may appear as a hole. To solve this problem, component labeling is applied to group adjacent pixels of the binarized region, the labeled region with the largest area is kept, and the other labeled regions are removed as noise. Finally, after a morphological closing operation is performed to fill the perforated region, the center of gravity of the filled region is determined to be the final pupil center.
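As a rough illustration of the pupil-detection pipeline just described, the sketch below chains a histogram-based binarization, component labeling, removal of all but the largest region, a morphological closing, and a centroid computation using OpenCV. It is a minimal sketch under assumed parameters: the Otsu threshold, the kernel size, the function name find_pupil_center and the assumption that the input region has already been localized by the circular template matching step are illustrative choices, not taken from the patent.

```python
import cv2
import numpy as np

def find_pupil_center(eye_roi_gray):
    """Estimate the pupil center in a grayscale eye region.

    Minimal sketch of the described steps: histogram-based binarization,
    component labeling, keeping the largest dark blob, morphological
    closing to fill specular holes, then the center of gravity.
    """
    # Histogram-based threshold; Otsu's method is used here as a stand-in
    # for the histogram-analysis method referenced in the text.
    _, binary = cv2.threshold(eye_roi_gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    # Component labeling: keep only the largest connected region
    # (assumed to be the pupil); the remaining regions are noise.
    num_labels, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    if num_labels < 2:
        return None
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    pupil_mask = np.uint8(labels == largest) * 255

    # Morphological closing fills holes left by corneal reflections.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    pupil_mask = cv2.morphologyEx(pupil_mask, cv2.MORPH_CLOSE, kernel)

    # Center of gravity of the filled region is taken as the pupil center.
    m = cv2.moments(pupil_mask, True)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])
```

In practice the input would be the rectangular region returned by the circular template matching, so that the largest dark component inside it can reasonably be assumed to be the pupil.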

In the calibration, as shown in FIG. 2, the user gazes in turn at four vertices defined in the front image, that is, the upper left (Px1, Py1), the upper right (Px2, Py2), the lower right (Px3, Py3) and the lower left (Px4, Py4). Since the user 10 gazes at the real world (the object), not the monitor screen, the difference between the forward image region and the real-world region of the monitor screen must be minimized; therefore, the front image area of the monitor is manually set so as to coincide with the real-world area. The four pupil center coordinates obtained by this user calibration and the four vertices of the monitor in the front image region, (Mx1, My1), (Mx2, My2), (Mx3, My3) and (Mx4, My4), are related by a geometric transformation, which is used to calculate the gaze position (Mxc, Myc) in the forward image from the pupil position (Pxc, Pyc) according to the following equations; non-patent document [4] can be referred to for specific details.

(Equations (1) to (3), shown as images in the original publication: the geometric-transform relation M = T × C between the pupil-coordinate matrix C, the forward-image coordinate matrix M, and the transformation matrix T with elements a to h.)

In the above equations, M = T × C, which can be rewritten as T = M × C⁻¹. According to this, the transformation matrix T, whose unknown elements are the constants a to h, can be obtained from the matrix M and the inverse of the matrix C. That is, the elements of the matrix C are the four pupil positions observed in the image from the first camera when the user views the four vertices defined in the forward image, namely the upper left (Px1, Py1), the upper right (Px2, Py2), the lower right (Px3, Py3) and the lower left (Px4, Py4); and the elements of the matrix M are the four vertices of the monitor in the forward image, (Mx1, My1), (Mx2, My2), (Mx3, My3) and (Mx4, My4).
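Because the equation images are not reproduced here, the following block gives, as an assumption, the bilinear geometric-transform form commonly used in the gaze-tracking papers cited above (non-patent documents [1] and [4]); the exact matrix layout should be checked against the original equation images.

```latex
% Assumed form of Equations (1)-(3); not reproduced from the patent images.
M_x = a P_x + b P_y + c P_x P_y + d, \qquad
M_y = e P_x + f P_y + g P_x P_y + h

% Stacking the four calibration vertices column-wise gives M = T C, so T = M C^{-1}:
T = \begin{pmatrix} a & b & c & d \\ e & f & g & h \end{pmatrix}, \qquad
C = \begin{pmatrix}
P_{x1}       & P_{x2}       & P_{x3}       & P_{x4} \\
P_{y1}       & P_{y2}       & P_{y3}       & P_{y4} \\
P_{x1}P_{y1} & P_{x2}P_{y2} & P_{x3}P_{y3} & P_{x4}P_{y4} \\
1            & 1            & 1            & 1
\end{pmatrix}, \qquad
M = \begin{pmatrix} M_{x1} & M_{x2} & M_{x3} & M_{x4} \\ M_{y1} & M_{y2} & M_{y3} & M_{y4} \end{pmatrix}
```

Under this assumed form, the equation (4) described next corresponds to multiplying T by the column vector (Pxc, Pyc, Pxc·Pyc, 1) to obtain (Mxc, Myc).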

The gaze coordinates (Mxc, Myc) on the object can be obtained from the user's pupil coordinates (Pxc, Pyc) by the following equation (4), using the transformation matrix T obtained above.

(Equation (4), shown as an image in the original publication: the gaze position is obtained by multiplying the transformation matrix T by the pupil-coordinate vector.)

An example of obtaining the transformation matrix according to the above equation is as follows.

For example, if the pupil coordinates corresponding to the four corners of the object, obtained from the first camera, are {11, 28}, {-15, 28}, {-16, 19} and {14, 18}, then Px1 = 11, Px2 = -15, Px3 = -16, Px4 = 14 and Py1 = 28, Py2 = 28, Py3 = 19, Py4 = 18, and applying these values to the above equations gives the matrix C as follows.

(Matrix C, shown as an image in the original publication.)

Then, the inverse matrix C⁻¹ of the above matrix C is calculated as follows.

(Inverse matrix C⁻¹, shown as an image in the original publication.)

Assuming that the object in which the user is interested is a monitor with a resolution of 1920x1080, the matrix M is constructed from the four corners of the monitor as follows.

(Matrix M, shown as an image in the original publication.)

Thus, the transformation matrix T (calibration matrix, CM) is obtained by multiplying the matrix M by the inverse of C, and the result is as follows.

(Transformation matrix T, shown as an image in the original publication.)

Such a transformation matrix may be obtained for every object located at a specific distance from the user, and the gaze position on an object is obtained by multiplying the transformation matrix by the matrix of pupil coordinates captured while the user looks at that object.

For example, if the pupil coordinates of the user gazing at the object are {5, 27}, the matrix E consisting of the pupil coordinates is as follows.

(Matrix E, shown as an image in the original publication.)

Therefore, the gaze coordinate matrix P for the object is obtained from the product of the matrices T and E as follows:

(Gaze coordinate matrix P, shown as an image in the original publication.)

According to the above result, it can be judged that the user is gazing at the pixel coordinates {456, 111} on the monitor with 1920x1080 resolution.
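To make the worked example concrete, a short script can compute T and apply it. The sketch below assumes the bilinear matrix form given earlier and illustrative monitor-corner coordinates (the corner ordering and pixel origin are assumptions), so it is not guaranteed to reproduce the {456, 111} value of the original; it only shows how T is obtained and applied.

```python
import numpy as np

def basis(px, py):
    """Column vector (Px, Py, Px*Py, 1) used by the assumed bilinear transform."""
    return np.array([px, py, px * py, 1.0])

def calibration_matrix(pupil_corners, screen_corners):
    """Compute T = M @ C^-1 from four pupil/screen corner correspondences."""
    C = np.column_stack([basis(px, py) for px, py in pupil_corners])  # 4x4
    M = np.array(screen_corners, dtype=float).T                       # 2x4
    return M @ np.linalg.inv(C)                                       # 2x4

# Pupil coordinates measured while the user looked at the four corners
# (taken from the example in the text).
pupil_corners = [(11, 28), (-15, 28), (-16, 19), (14, 18)]

# Monitor corners in pixels for a 1920x1080 screen; the corner ordering and
# origin are assumptions made for this illustration.
screen_corners = [(0, 0), (1920, 0), (1920, 1080), (0, 1080)]

T = calibration_matrix(pupil_corners, screen_corners)

# Map a new pupil position, e.g. the {5, 27} example, to screen coordinates.
gaze_xy = T @ basis(5, 27)
print("estimated gaze position:", gaze_xy)
```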

These gaze coordinates are computed for each eye, and the two results may coincide or be separated by some distance depending on the distance between the user and the object and on the condition of the user's eyes. This separation is referred to as the gaze difference in the present invention, and by comparing the magnitudes of the gaze differences with respect to various objects located at different distances, the object gazed at by the user and the gaze coordinates on that object can be obtained.

Gaze tracking by the above method can accurately track the direction of the user's gaze as long as the distance between the user and the object remains the same as it was at the time of calibration. However, when the distance between the user and the object changes, the accuracy of the gaze tracking is degraded and the tracking may fail. In the present invention, transformation matrices for a plurality of distances are obtained in consideration of changes in the distance between the user and the object, and the gaze distance (Z) is obtained as well as the X-Y coordinates of the gaze.

In order to prevent the failure of gaze tracking due to user movement, two or more calibration results (transformation matrices) are obtained while varying the distance between the user and the object, and an accurate gaze is tracked or calculated using these calibration results.

Unlike the prior art, the present invention obtains a plurality of transformation matrices for different distances, and these transformation matrices are calculated for both eyes.

The present invention tracks the user's gaze direction and distance using the difference between the gaze positions of the user's left and right eyes. A calibration as described above is performed for an object located at a certain distance to obtain a transformation matrix T, and this calibration is performed for both eyes. The calibration is repeated for a plurality of different user-to-object distances, and the plurality of calibration matrices obtained for each distance, that is, the plurality of transformation matrices, are used to track the gaze direction and distance. By multiplying the plurality of transformation matrices obtained by the calibration by the pupil coordinates of both eyes, gaze coordinates on each object, for example a specific portion of the monitor, are obtained, and the differences between the gaze coordinates thus obtained are computed. This gaze-coordinate difference includes a difference in the X-X' (left-right) direction and a difference in the Y-Y' (up-down) direction.

(Table 1, shown as an image in the original publication: per-eye gaze coordinates and binocular gaze differences obtained by applying the transformation matrices for several Z distances.)

Table 1 above illustrates the gaze (X-Y) coordinates of a user staring at an arbitrary object, obtained from the transformation matrices computed at several Z distances, and the binocular gaze difference converted from those coordinates. Here, the transformation matrices obtained at the plurality of Z distances are applied to each of nine gaze points observed at one Z distance, and the Z distance whose matrix yields the smallest average of the absolute gaze differences is determined as the Z distance at which the user is gazing.
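The procedure behind Table 1, choosing the Z distance whose calibration matrices yield the smallest mean absolute binocular gaze difference, can be sketched as follows. The function names, the dictionary layout and the assumption of one 2x4 matrix per eye and per distance are illustrative, not taken from the patent.

```python
import numpy as np

def basis(px, py):
    return np.array([px, py, px * py, 1.0])

def gaze_difference(cm_left, cm_right, left_pupil, right_pupil):
    """Absolute X/Y difference between left- and right-eye gaze estimates
    obtained with one pair of per-eye calibration matrices (2x4 each)."""
    g_left = cm_left @ basis(*left_pupil)
    g_right = cm_right @ basis(*right_pupil)
    return np.abs(g_left - g_right)

def estimate_gaze_distance(cms_by_distance, gaze_samples):
    """Pick the Z distance whose matrices give the smallest mean difference.

    cms_by_distance: {z_cm: (cm_left, cm_right)} per calibrated distance.
    gaze_samples: list of (left_pupil, right_pupil) tuples, e.g. the nine
    gaze points mentioned in the text.
    """
    mean_diff = {
        z: np.mean([gaze_difference(cm_l, cm_r, lp, rp).sum()
                    for lp, rp in gaze_samples])
        for z, (cm_l, cm_r) in cms_by_distance.items()
    }
    return min(mean_diff, key=mean_diff.get)
```

Summing the absolute X and Y differences is one possible choice of magnitude; a Euclidean norm would serve equally well for the comparison, since only the ordering of the candidate distances matters.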

FIG. 3 illustrates a calibration method for multiple objects.

While the user gazes in turn at the four corners of each of the objects 31, 32 and 33, images of both eyes 11a and 11b and a forward image directed toward each object are captured.

Based on the acquired eye images, the pupil coordinates for the four reference points (corners) of the individual objects 31, 32 and 33 are extracted. The objects may be, for example, household electric appliances such as a TV, a computer monitor, a refrigerator and an air conditioner, and are represented symbolically in the form of boxes.

For each object, the four corner coordinates extracted from the forward image and the corresponding pupil (reference point) coordinates extracted from the eye images are used to obtain a transformation matrix by the geometric transformation of the above-described equations. Here, one transformation matrix is obtained per object, and thus three transformation matrices CM1, CM2 and CM3 are obtained as shown in the figure.

With the transformation matrices (CM1, CM2, CM3) prepared, a binocular image is acquired when the actual user looks at an object, the pupil coordinates (Pxc, Pyc) and (Pxc', Pyc') of the two eyes are obtained, and the matrices of these coordinates are multiplied by each of the three transformation matrices to obtain three pairs of gaze coordinates (Mxc, Myc) and (Mxc', Myc').

FIG. 5 shows the gaze difference for object-1 and object-2 when the eyes 11a and 11b of the user 10 gaze at object-1 31.

Since the user 10 is watching object-1 31, the gazes of the eyes 11a and 11b converge at a point P1 on object-1 31, while on object-2 32 behind it the gaze difference d2 is increased.

Likewise, when the user 10 gazes at object-2 32, the gazes of the eyes 11a and 11b are spread apart by d1 on object-1 31 and converge at a point P2 on object-2 32. Therefore, in FIG. 5, when the user 10 gazes at object-1 31, the gaze difference on object-1 31 is smaller than that on object-2 32.

Based on this principle, the first transformation matrix CM1 is applied for object-1 31 and the second transformation matrix CM2 is applied for object-2 32 to obtain the binocular gaze difference for each object. The object with the smallest gaze difference is then the object that the user is looking at.

The sign of the calculated binocular gaze difference has no significance, since only a length or distance is being determined; therefore, the absolute value of the binocular gaze difference is used.

In FIG. 6, three objects 31, 32 and 33 are arranged at predetermined distances in the Z direction according to the method of the present invention based on the above principle, and the transformation matrix obtained for each of the objects 31, 32 and 33 is then applied to determine which object the user is actually gazing at.

FIG. 6(a) shows the gazes converging when the user gazes at object-1 31, which has the first transformation matrix CM1, and spreading apart beyond it. That is, on the first object 31 the gaze difference of both eyes is zero or very small. Therefore, when CM1 is applied to obtain the left and right gaze coordinates and their difference is calculated, a very small gaze difference d1 is obtained for the first object, compared with the gradually increasing gaze differences d2 and d3 obtained for the second and third objects 32 and 33.

FIG. 6(b) shows the gazes of both eyes spreading apart on the front and rear objects 31 and 33 when the user gazes at object-2 32, which has the second transformation matrix CM2. That is, when gazing at object-2 32, a gaze difference appears on object-1 and object-3 (31, 33), which have the first and third transformation matrices. Therefore, when the difference between the left and right gaze coordinates is calculated by applying CM2, gaze differences d1 and d3 arise on the first and third objects 31 and 33, while the gazes converge on the second object 32 with difference d2. Accordingly, it can be determined from this result that the user is looking at the second object 32, which has the minimum gaze difference.

FIG. 6(c) shows the gazes converging when the user gazes at object-3 33, which has the third transformation matrix CM3. That is, on the third object 33 the gaze difference of both eyes is zero or small, and gradually larger gaze differences d2 and d1 are obtained toward the second object 32 and the first object 31.

The following table illustrates the binocular gaze difference for the three objects (31, 32, 33) located at distances of Z = 80 cm, 210 cm and 340 cm in the Z direction.

(Table, shown as an image in the original publication: binocular gaze differences for each object under the calibration matrices CM1, CM2 and CM3.)

In the table above, the smallest gaze difference for object 31 is obtained under its own transformation matrix, that is, the calibration matrix CM1; the smallest gaze difference for object 32 is obtained under its matrix, CM2; and the smallest gaze difference for object 33 is obtained under CM3. In other words, once the transformation matrices for the objects located at the three Z distances are obtained as shown in the table, the user's pupil coordinates are applied to each of the three transformation matrices, and the Z distance whose matrix yields the smallest gaze difference is determined as the Z distance at which the user is gazing.

The present invention, which tracks the Z position of the object on which the line of sight is focused by the above method, in particular its distance from the user, can be summarized as shown in FIG. 7.

In the process of FIG. 7, transformation matrices (CM1, CM2, CM3) for a plurality of distances are prepared. The number of distance-specific transformation matrices can be increased according to the required accuracy, allowing the measurement to be refined.

S71: The gaze calculation for both eyes is started once the transformation matrices for several Z distances are ready.

S72: When the user watches any one of several objects arranged at arbitrary Z distances, a binocular image is acquired at this time using the first camera.

S73: The forward image is acquired, in correspondence with the binocular image acquisition, using the second camera facing the objects arranged in the Z direction.

S74: The coordinates of both pupils and the reference coordinates of the forward image are obtained through the procedure described above, and the gaze of both eyes is calculated from them.

S75: The differences between the gazes of both eyes calculated using the plurality of transformation matrices are computed, and their absolute values are taken.

S76: The object, at whatever distance, for which the absolute value of the binocular gaze difference is smallest is judged to be the object at which the user is currently gazing.
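A compact, per-frame version of steps S74 to S76 might look like the sketch below. It assumes that the pupil centers of both eyes have already been extracted from the first-camera image (steps S71 to S73) and that one pair of per-eye calibration matrices is available per object; all names and the 2x4 matrix shape are illustrative assumptions rather than the patent's own notation.

```python
import numpy as np

def basis(px, py):
    return np.array([px, py, px * py, 1.0])

def select_gazed_object(calibrations, left_pupil, right_pupil):
    """Return the object whose calibration gives the smallest binocular
    gaze difference.

    calibrations: {object_id: (cm_left, cm_right)}, one 2x4 matrix pair per
    object located at its own Z distance.
    """
    best_id, best_diff = None, np.inf
    for obj_id, (cm_left, cm_right) in calibrations.items():
        g_left = cm_left @ basis(*left_pupil)     # S74: gaze of the left eye
        g_right = cm_right @ basis(*right_pupil)  # S74: gaze of the right eye
        diff = np.abs(g_left - g_right).sum()     # S75: absolute gaze difference
        if diff < best_diff:                      # S76: keep the minimum
            best_id, best_diff = obj_id, diff
    return best_id, best_diff
```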

FIG. 8 is a photograph of a scene in which the method of the present invention is being tested. In front of the user there is a camera for photographing the user's face, and beyond it a camera for photographing the screen of the monitor that serves as the object. The white points on the monitor are used as reference points to show the gaze coordinates and the binocular gaze difference as the user's gaze changes.

In the embodiment of the present invention described above, the individual objects existing in the space in which the user resides must each be calibrated, that is, a transformation matrix must be calculated for each object, and the transformation matrices obtained in this way are used for gaze tracking.

According to the present invention, since the distance from the user to each object can be estimated, precise pointing can be performed at objects installed at different distances in the space where the user resides; in particular, the position and distance of the gazed point can be tracked, so that the gazed object can be pointed at accurately without crosstalk.

Also, by preparing a plurality of transformation matrices for different distances to a single object, the user can point precisely at that object from various positions. If a plurality of transformation matrices are provided for each object in this way, the range of positions from which the user can operate widens, and the object can be pointed at more conveniently.

In the foregoing, exemplary embodiments have been described and shown in the accompanying drawings to facilitate understanding of the present invention. It should be understood, however, that such embodiments are merely illustrative of the present invention and do not limit it, and that the invention is not limited to the details shown and described, since various other modifications may occur to those of ordinary skill in the art.

10: User
11a and 11b: the eyes of the user
20: Camera device
30, 31, 32, 33: object
40: Analysis system

Claims (10)

Obtaining conversion information capable of acquiring a user's gazing position from the user's gaze information for a plurality of objects having different distances from the user;
Obtaining gaze information of both eyes of a user gazing at an object;
Obtaining the gaze difference of both eyes for each of the objects using the binocular gaze information;
And comparing the gaze differences for the objects to determine the object with the minimum value as the object at which the user is gazing.
The method according to claim 1,
Wherein the object is an electronic product located in a space in which the user resides.
The method according to claim 1,
Wherein the conversion information includes information on a single object arranged at a different distance from the user.
4. The method according to any one of claims 1 to 3,
Wherein the transformation information is a calibration matrix or a transformation matrix obtained by a geometric transform.
5. The method of claim 4,
The gaze information of the user is acquired from a camera that photographs the user's eyes,
Wherein pupil center coordinates are obtained by binarization based on histogram analysis and component labeling of the eye images acquired from the camera.
6. A line-of-sight tracking system for performing the method according to claim 1, comprising:
A first camera for photographing both eyes of the user;
A second camera for photographing an object to be looked at by the user;
And an analysis system for determining the direction of the user's gaze from the image information from the first camera and the second camera.
The system according to claim 6,
Wherein the object is an electronic product located in a space in which the user resides.
The system according to claim 6,
Wherein the conversion information includes information on a single object arranged at a different distance from the user.
9. The system according to any one of claims 6 to 8,
Wherein the transformation information is obtained by the analysis system and is a calibration matrix or transformation matrix obtained by a geometric transform.
10. The system of claim 9,
The gaze information of the user is acquired from a camera that photographs the user's eyes,
Wherein the analysis system obtains pupil center coordinates by binarization based on histogram analysis and component labeling of eye images acquired from a camera.
KR1020140115680A 2014-09-01 2014-09-01 method for 3-D eye-gage tracking KR20160026565A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020140115680A KR20160026565A (en) 2014-09-01 2014-09-01 method for 3-D eye-gage tracking

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020140115680A KR20160026565A (en) 2014-09-01 2014-09-01 method for 3-D eye-gage tracking

Publications (1)

Publication Number Publication Date
KR20160026565A true KR20160026565A (en) 2016-03-09

Family

ID=55536928

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020140115680A KR20160026565A (en) 2014-09-01 2014-09-01 method for 3-D eye-gage tracking

Country Status (1)

Country Link
KR (1) KR20160026565A (en)


Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101343875B1 (en) 2011-12-06 2013-12-23 경북대학교 산학협력단 Analysis device of user cognition and method for analysis of user cognition

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Cheol Woo Cho, Ji Woo Lee, Eui Chul Lee, Kang Ryoung Park, "A Robust Gaze Tracking Method by Using Frontal Viewing and Eye Tracking Cameras", Optical Engineering, Vol. 48, No. 12, 127202, Dec. 2009.
R. C. Gonzalez and R. E. Woods, Digital Image Processing, 2nd ed., Prentice-Hall, Englewood Cliffs, NJ, 2002.
W. Doyle, "Operations useful for similarity-invariant pattern recognition", J. Assoc. Comput. Mach., 1962, 9: 259-267.
Y. J. Ko, E. C. Lee, and K. R. Park, "A robust gaze detection method by compensating for facial movements based on corneal specularities," Pattern Recogn. Lett. 29, 10, 1474-1485, 2008.
Hwan Heo, Ji Woo Lee, Won Oh Lee, Eui Chul Lee, Kang Ryoung Park, "A Study on the New Emotional Interface for the Disabled", Journal of the Ergonomics Society of Korea, Vol. 30, No. 1, pp. 229-23, Feb. 2011.

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11216974B2 (en) 2017-12-14 2022-01-04 Samsung Electronics Co., Ltd. Staring distance determination method and device
US11249305B2 (en) 2019-04-11 2022-02-15 Samsung Electronics Co., Ltd. Head-mounted display device and operating method of the same for determining a measurement parameter
US11526004B2 (en) 2019-04-11 2022-12-13 Samsung Electronics Co., Ltd. Head-mounted display device and operating method of the same
US11809623B2 (en) 2019-04-11 2023-11-07 Samsung Electronics Co., Ltd. Head-mounted display device and operating method of the same


Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E601 Decision to refuse application