US20180106999A1 - Gaze detection system, gaze point detection method, and gaze point detection program - Google Patents

Gaze detection system, gaze point detection method, and gaze point detection program

Info

Publication number
US20180106999A1
US20180106999A1 (Application No. US15/842,120)
Authority
US
United States
Prior art keywords
gaze
user
right eye
left eye
vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US15/842,120
Inventor
Lochlainn Wilson
Keiichi Seko
Yuka Kojima
Yamato Kaneko
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fove Inc Japan
Original Assignee
Fove Inc Japan
Fove Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fove Inc. Japan and Fove Inc.
Priority to US15/842,120
Assigned to FOVE, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: KANEKO, Yamato; KOJIMA, Yuka; SEKO, Keiichi; WILSON, Lochlainn
Publication of US20180106999A1


Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/0093Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 with means for monitoring data relating to the user, e.g. head-tracking, eye-tracking
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/113Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for determining or recording eye movement
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/14Arrangements specially adapted for eye photography
    • A61B3/145Arrangements specially adapted for eye photography by video means
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/0012Optical design, e.g. procedures, algorithms, optimisation routines
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/0075Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 with means for altering, e.g. increasing, the depth of field or depth of focus
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/0304Detection arrangements using opto-electronic means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/0132Head-up displays characterised by optical features comprising binocular systems
    • G02B2027/0134Head-up displays characterised by optical features comprising binocular systems of stereoscopic type
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/014Head-up displays characterised by optical features comprising information/image processing systems
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0179Display position adjusting means not related to the information to be displayed
    • G02B2027/0187Display position adjusting means not related to the information to be displayed slaved to motion of at least a part of the body of the user, e.g. head, eye

Definitions

  • This disclosure relates to a head mounted display.
  • A known eye-gaze detection technique illuminates the user's eye and analyzes captured images of the light reflected from the eye to determine the gaze direction.
  • The detected eye-gaze information can be used, for example, to monitor what the user is looking at on a PC (Personal Computer) or game screen, and can be applied as a pointing device.
  • The head mounted display is worn by the user to display 3D images, but it normally covers the user's field of view, so the user cannot see the outside environment while wearing the HMD. If the user needs to find an input device such as a controller while wearing the HMD as a display device for movies or games, the user has difficulty locating the controller.
  • a gaze detection system comprising: a head mounted display, mounted on a user's head for use, comprising, a first infrared light source unit illuminating the user's right eye with infrared light; a second infrared light source unit illuminating the user's left eye; a first image capturing device imaging the right eye illuminated by the infrared light source; a second image capturing device imaging the left eye illuminated by the infrared light source; and a display unit displaying a three-dimensional image, and a gaze detection device, detecting the gaze of the user, comprising, a first detection unit detecting the gaze direction of the right eye, based on the image captured by the first image capturing device; a second detection unit detecting the gaze direction of the left eye, based on the image captured by the second image capturing device; and a tracking unit determining the gaze point of the user in the three-dimensional image on the basis of the gaze direction of the right eye and the gaze direction of the left eye.
  • the first detection unit may calculate the right eye gaze vector indicating the gaze direction of the right eye
  • the second detection unit may calculate the left eye gaze vector indicating the gaze direction of the left eye
  • the display unit may display a three-dimensional image consisting of a plurality of layers in the depth direction, and the tracking unit may identify the layer that the user is gazing at by selecting the layer having the shortest distance between the intersection points of the right eye gaze vector and the left eye gaze vector with each layer.
  • the tracking unit may identify the user's gaze location based on the intersection point between the right eye gaze vector and the left eye gaze vector.
  • the tracking unit may identify the user's gaze location based on the intersection region of a cylinder with a given radius, centered on the right eye gaze vector, and a cylinder with a given radius, centered on the left eye gaze vector.
  • the tracking unit may identify the gaze location of the user based on the intersection points of the first plurality of parallel vectors parallel to the right eye gaze vector and the second plurality of parallel vectors parallel to the left eye gaze vector.
  • a gaze detection system including a head mounted display, mounted on a user's head for use, including a first image capturing device imaging the right eye; a second image capturing device imaging the left eye; and a display unit displaying a three-dimensional image, and a gaze detection device detecting a gaze of the user including, a first detection unit detecting a gaze direction of a user's right eye based on an image captured by the first image capturing device; a second detection unit detecting a gaze direction of a user's left eye based on an image captured by the second image capturing device; and a tracking unit that determines a gaze point of the user in the three-dimensional image on the basis of the gaze direction of the right eye and the gaze direction of the left eye, in which the first detection unit calculates a right eye gaze vector indicating the gaze direction of the right eye, and the second detection unit calculates a left eye gaze vector indicating the gaze direction of the left eye, wherein the display unit displays a three-dimensional image consisting of a plurality of layers in a depth direction, and the tracking unit identifies the layer that the user is gazing at by selecting the layer having a shortest distance between intersection points of the right eye gaze vector and the left eye gaze vector with each layer.
  • a gaze detection system including a head mounted display worn on the head of the user and a gaze detection device determining the gaze point of the user, including: displaying a three-dimensional image on the head mounted display; acquiring images of a user's right eye and left eye; transferring an acquired image data to the gaze detection device; determining in the gaze detection device a user's right eye gaze direction vector showing the gaze direction of the user's right eye based on an acquired image of the right eye; determining in the gaze detection device a user's left eye gaze direction vector showing the gaze direction of the user's left eye based on the acquired image of the left eye; and determining a location that the user is gazing at in the three-dimensional image based on the right eye gaze direction and the left eye gaze direction, in which the head mounted display displays a three-dimensional image consisting of a plurality of layers in the depth direction, wherein the detection device identifies a layer the user is gazing at by selecting the layer having a shortest distance between the intersection points of the right eye gaze vector and the left eye gaze vector with each layer.
  • a gaze detection program executing on a computer that determines a gaze point of a user wearing a head mounted display that displays a three-dimensional image
  • the computer includes: a function of obtaining the image data of a right eye and a left eye of the user wearing the head mounted display; a function of detecting a right eye gaze direction of the right eye based on the image data of the right eye; a function of detecting a left eye gaze direction of the left eye based on the image data of the left eye; a function of determining a gaze point of the user in the three-dimensional image based on the right eye gaze direction and the left eye gaze direction; a function of calculating a right eye gaze vector indicating the gaze direction of the right eye; a function of calculating a left eye gaze vector indicating the gaze direction of the left eye; and a function of displaying a three-dimensional image consisting of a plurality of layers in a depth direction, wherein the function of detecting the gaze direction identifies the layer that the user is gazing at by selecting the layer having a shortest distance between intersection points of the right eye gaze vector and the left eye gaze vector with each layer.
  • any combination of the aforementioned components, and the implementation in the form of methods, devices, systems, computer programs, data structures, recording mediums, and the like may be effective.
  • FIG. 1 is an external view illustrating the head mounted display worn by a user.
  • FIG. 2 is a perspective overview of the image display system of the head mounted display.
  • FIG. 3 is a diagram schematically illustrating the optical configuration of the image display system of the head mounted display.
  • FIG. 4 is a block diagram illustrating the components of the head mounted display.
  • FIG. 5 is a diagram illustrating the pattern used to calibrate the gaze direction detection system.
  • FIG. 6 is a schematic diagram illustrating the coordinate system of the user's cornea position.
  • FIG. 7 is a schematic diagram of how the gaze point of a user is identified according to a first example.
  • FIG. 8 is a flowchart describing operation of the head mounted display system according to the first example.
  • FIG. 9 is a schematic illustration of how the gaze point of a user is identified according to a second example.
  • FIG. 10 is a flowchart describing operation of the head mounted display system according to the second example.
  • FIG. 11 is a schematic illustration of how the gaze point of a user is identified according to a third example.
  • FIG. 12 is a flowchart describing operation of the head mounted display system according to the third example.
  • 1 : gaze detection system, 100 : head mounted display, 103 a: infrared light source (second infrared light illumination unit), 103 b: infrared light source (first infrared light illumination unit), 105 : bright spot, 108 : image display element, 112 : hot mirror, 114 , 114 a, 114 b: convex lens, 116 : camera (first imaging unit, second imaging unit), 118 : first communication unit, 121 : display unit, 122 : infrared illumination unit, 123 : image processing unit, 124 : image capturing unit, 130 : image display system, 150 : housing, 152 a, 152 b: lens holder, 160 : fitting harness, 170 : headphone, 200 : gaze detection device, 220 : second communication unit, 221 : first gaze detection unit, 222 : second gaze detection unit, 223 : tracking unit, 224 : video output unit, 225 : storage unit
  • FIG. 1 shows a typical view of the gaze detection system 1 as described in the examples.
  • the gaze detection system 1 comprises a head mounted display 100 and a gaze detection device 200 .
  • the head mounted display 100 is mounted on the head of the user 300 for use.
  • the gaze detection device 200 measures the gaze direction and the focal point of the right and left eyes of the user wearing the head mounted display 100 , determining the gaze point of the user in the three-dimensional image displayed by the head mounted display.
  • the gaze detection device 200 also functions as a video generator device to create the video images displayed on the head mounted display 100 .
  • the gaze detection device 200 may be, for example, a device capable of reproducing video images, such as a stationary game machine, a portable game machine, a PC, a tablet, a smartphone, a phablet, a video player, a TV, or the like.
  • the gaze detection device 200 establishes a wireless or a wired connection with the head mounted display 100 . In the example shown in FIG. 1 , the gaze detection device 200 wirelessly connects to the head mounted display 100 .
  • the wireless connection between the gaze detection device 200 and the head mounted display 100 may be established using existing wireless communication techniques such as Wi-Fi (registered trademark) or Bluetooth (registered trademark).
  • video image transfer between the head mounted display 100 and the gaze detection device 200 may be implemented using, for example, Miracast (registered trademark), WiGig (registered trademark), WHDI (registered trademark) or other communication standards.
  • FIG. 1 illustrates that the head mounted display 100 and the gaze detection device 200 are separate units. However, the gaze detection device 200 may also be incorporated into the head mounted display 100 .
  • the head mounted display 100 includes a housing 150 , a fitting harness 160 , and headphones 170 .
  • the housing 150 contains an image display system for displaying video images to the user 300 , and, not shown in the figure, a communications module using Wi-Fi (registered trademark), Bluetooth (registered trademark), or other wireless technology.
  • the head mounted display 100 is secured to the head of the user 300 with a fitting harness 160 .
  • the fitting harness 160 may be implemented with the help of, for example, belts or elastic bands.
  • the fitting harness 160 holds the housing 150 in a position where the eyes of the user 300 are covered.
  • the field of view of the user 300 is covered by the housing 150 .
  • the headphones 170 output the audio of the video reproduced by the gaze detection device 200 .
  • the headphones 170 do not need to be fixed to the head mounted display 100 . Even if the head mounted display 100 is equipped with a fitting harness 160 , the user 300 may freely attach or detach the headphones 170 .
  • FIG. 2 is a perspective diagram showing the overview of the image display system 130 of the head mounted display 100 described in the examples. Specifically, FIG. 2 shows the part of the housing 150 facing the corneas 302 when the user 300 is wearing the head mounted display 100 .
  • a convex lens 114 a is positioned to face the cornea 302 a of the left eye of the user 300 when the user 300 wears the head mounted display 100 .
  • a convex lens 114 b is positioned to face the cornea 302 b of the right eye of the user 300 when the user 300 wears the head mounted display 100 .
  • the convex lens 114 a for the left eye and the convex lens 114 b for the right eye are held by a lens holder 152 a for the left eye and a lens holder 152 b for the right eye, respectively.
  • the convex lens 114 a for the left eye and the convex lens 114 b for the right eye are referred to as a “convex lens 114 ” unless the two lenses need to be specifically distinguished.
  • the cornea 302 a of the left eye of the user 300 and the cornea 302 b of the right eye of the user 300 are simply referred to as a “cornea 302 ” unless the corneas need to be specifically distinguished.
  • the lens holder 152 a for the left eye and the lens holder 152 b for the right eye are referred to as a “lens holder 152 ” unless the holders need to be specifically distinguished.
  • infrared light sources 103 are attached to the lens holders 152 .
  • the infrared light sources illuminating the cornea 302 a of the left eye of the user 300 with infrared light are collectively referred to as infrared light sources 103 a
  • the infrared light sources illuminating the cornea 302 b of the right eye of the user 300 with infrared light are collectively referred to as infrared light sources 103 b.
  • the infrared light sources 103 a and the infrared light sources 103 b are collectively referred to as “infrared light sources 103 ” except when a distinction between the two is necessary.
  • six infrared light sources 103 a are attached to the lens holder 152 a for the left eye.
  • six infrared light sources 103 b are attached to the lens holder 152 b for the right eye.
  • the infrared light sources 103 are not attached directly to the convex lenses 114 , but are mounted on the lens holders 152 that grip the convex lenses 114 , making the attachment of the infrared light sources 103 easier.
  • since the lens holders 152 are typically made of a resin or the like, the machining necessary to attach the infrared light sources 103 is easier than it would be for the convex lenses 114, which are made of glass or the like.
  • the lens holder 152 is a component that holds the convex lens 114. Therefore, the infrared light sources 103 mounted on the lens holders 152 are positioned along the circumference of the convex lenses 114. Although six infrared light sources 103 illuminating each eye with infrared light are shown here, the number of the infrared light sources 103 is not limited to this number: there should be at least one light source 103 for each eye, and two or more light sources 103 are desirable.
  • FIG. 3 is a schematic diagram of the optical configuration of the image display system 130 contained in the housing 150 according to the example, with the diagram showing a view of the housing 150 from the direction of the left eye as shown in FIG. 2 .
  • the image display system 130 includes infrared light sources 103 , an image display element 108 , a hot mirror 112 , the convex lenses 114 , a camera 116 , and a first communication unit 118 .
  • the infrared light sources 103 are light sources capable of emitting near-infrared light (in the 700 nm to 2500 nm wavelength range). Near-infrared light is non-visible light in a wavelength region that cannot be detected by the naked eye of the user 300.
  • the image display element 108 displays an image to be presented to the user 300 .
  • the image displayed by the image display element 108 is generated by a video output unit 224 in the gaze detection device 200 .
  • the video output unit 224 will be described later.
  • the image display element 108 may be implemented using, for example, an existing liquid crystal display (LCD) or an organic electroluminescence (EL) display.
  • the hot mirror 112 is positioned between the image display element 108 and the cornea 302 of the user 300 when the user 300 wears the head mounted display 100 .
  • the hot mirror 112 has a property of transmitting visible light created by the image display element 108 and reflecting near-infrared light.
  • the convex lenses 114 are positioned on the opposite side of the hot mirror 112 from the image display element 108 . In other words, the convex lenses 114 are positioned between the hot mirror 112 and the cornea 302 of the user 300 when the user 300 wears the head mounted display 100 . That is, the convex lenses 114 are positioned to face the corneas 302 of the user 300 when the user 300 wears the head mounted display 100 .
  • the convex lenses 114 condense the image light from the image display element 108 that is transmitted through the hot mirror 112.
  • the convex lenses 114 function as image magnifiers that enlarge an image created by the image display element 108 and presented to the user 300 .
  • a lens group may be used that combines various kinds of lenses, including plano-convex lenses and biconvex lenses.
  • a plurality of infrared light sources 103 are arranged along the circumference of the convex lens 114 .
  • the infrared light sources 103 emit infrared light toward the cornea 302 of the user 300 .
  • the image display system 130 of the head mounted display 100 contains two image display elements 108, so that the image presented to the right eye of the user 300 and the image presented to the left eye can be generated independently. Accordingly, the head mounted display 100 according to the example may present a parallax image for the right eye and a parallax image for the left eye to the right and left eyes of the user 300, respectively. Thereby, the head mounted display 100 according to the example can present a stereoscopic scene that creates a feeling of depth for the user 300.
  • the hot mirror 112 transmits visible light but reflects the near-infrared light.
  • the image light emitted by the image display element 108 is transmitted through the hot mirror 112 , and reaches the cornea 302 of the user 300 .
  • the infrared light emitted from the infrared light sources 103 and internally reflected in the reflective area of the convex lens 114 reaches the cornea 302 of the user 300 .
  • the infrared light reaching the cornea 302 of the user 300 is reflected on the cornea 302 of the user 300 and directed again towards the convex lens 114 .
  • the infrared light reflected by the cornea of the user is transmitted through the convex lens 114 and reflected by the hot mirror 112 .
  • the camera 116 includes a filter that blocks the visible light, and the near-infrared light reflected from the hot mirror 112 is used for imaging. That is, the camera 116 is a near-infrared camera that images the near-infrared light emitted from the infrared light sources 103 and reflected on the cornea of the eye of the user 300 .
  • the image display system 130 of the head mounted display 100 includes two cameras 116 , that is, a first image capturing unit that captures an image containing the infrared light reflected from the right eye and a second image capturing unit that captures an image containing the infrared light reflected from the left eye.
  • the first communication unit 118 outputs the image captured by the camera 116 to the gaze detection device 200 that determines the gaze direction of the user 300 . Specifically, the first communication unit 118 transmits the image captured by the camera 116 to the gaze detection device 200 .
  • the gaze direction is determined with the help of a gaze detection program executed by the CPU (Central Processing Unit) of the gaze detection device 200.
  • the CPU of the head mounted display 100 may execute the program to determine the gaze direction.
  • the image captured by the camera 116 contains bright spots caused by the near-infrared light reflected from the cornea 302 of the user 300 and an image of the eye including the cornea 302 of the user 300 observed in the near-infrared wavelength region.
  • FIG. 4 is a block diagram of the head mounted display 100 and the gaze detection device 200 contained in the gaze detection system 1 .
  • the gaze detection system 1 contains the head mounted display 100 and the gaze detection device 200 , both of which communicate with each other.
  • the head mounted display 100 contains the first communication unit 118 , the display unit 121 , the infrared illumination unit 122 , the image processing unit 123 , and the image capturing unit 124 .
  • the first communication unit 118 is a communication interface having the capability to communicate with the second communications unit 220 contained inside the gaze detection device 200 . As mentioned above, the first communication unit 118 communicates with the second communication unit 220 over a wired or wireless communication link. Examples of possible communication standards were already given above.
  • the first communication unit 118 transmits image data to be used for gaze detection, obtained from either the image capturing unit 124 or the image processing unit 123 , to the second communication unit 220 . Further, the first communication unit 118 delivers to the display unit 121 the three-dimensional image data sent from the gaze detection device 200 .
  • the display unit 121 displays, on the image display element 108 , the three-dimensional image based on the three-dimensional image data sent from the first communication unit 118 .
  • the three-dimensional image data represents a parallax image pair containing a parallax image for the right eye and a parallax image for the left eye.
  • the infrared illumination unit 122 controls the infrared light sources 103 and illuminates the user's right eye and left eye with infrared light.
  • the image processing unit 123 performs image processing of the image data acquired by the image capturing unit 124 as necessary, and passes the processed data to the first communication unit 118 .
  • the image capturing unit 124 captures images of the near-infrared light reflected from each eye using the cameras 116 for the right eye and the left eye.
  • the image capturing unit 124 transfers the captured images to the first communication unit 118 or to the image processing unit 123 .
  • the gaze detection device 200 contains the second communication unit 220 , the first gaze detection unit 221 , the second gaze detection unit 222 , the tracking unit 223 , the video output unit 224 , and the storage unit 225 .
  • the second communication unit 220 is a communication interface having the function of communicating with the first communication unit 118 of the head mounted display 100 . As mentioned above, the second communication unit 220 communicates with the first communication unit 118 using either wired or wireless communication.
  • the first gaze detection unit 221 receives from the second communication unit 220 the image data for gaze detection of the right eye of the user, and determines the gaze direction of the right eye of the user. Using a technique described later, the first eye-gaze detection unit 221 calculates the gaze direction vector representing the gaze direction of the right eye.
  • the second gaze detection unit 222 receives from the second communication unit 220 the image data for gaze detection of the left eye of the user, and determines the gaze direction of the left eye of the user. Using a technique described later, the second gaze detection unit 222 calculates the gaze direction vector representing the gaze direction of the left eye.
  • the tracking unit 223 uses the right eye gaze direction vector sent from the first gaze detection unit 221 and the left eye gaze direction vector sent from the second gaze detection unit 222 to determine the point (coordinates) at which the user is looking in the three-dimensional image displayed by the image display element 108 of the head mounted display 100 .
  • the video output unit 224 generates three-dimensional video data to be displayed by the display unit 121 in the head mounted display 100 , and transfers the data to the second communication unit 220 .
  • the video output unit 224 also generates and transfers to the second communication unit 220 the data for the marker image used for gaze detection calibration.
  • the video output unit 224 stores the location of the displayed objects in the three-dimensional coordinate system as well as in the specific coordinate system of the three-dimensional output image.
  • the storage unit 225 is a recording medium that stores various kinds of programs and data required for the operation of the gaze detection device 200 .
  • FIG. 5 is a schematic diagram illustrating calibration of the gaze direction detection according to the example.
  • the gaze direction of the user 300 is determined by analyzing, with the first gaze detection unit 221 and the second gaze detection unit 222 in the gaze detection device 200 , the image captured by the camera 116 and transferred to the gaze detection device 200 by the first communication unit 118 . Only operation of the first gaze detection unit 221 is described here, but operation of the second gaze detection unit 222 is identical.
  • the video output unit 224 outputs nine points, Q 1 to Q 9 , (marker image) that are displayed by the image display element 108 of the head mounted display 100 .
  • the gaze detection device 200 instructs the user 300 to look at the points Q1 up to Q9 sequentially. At this time, the user 300 is requested to look at each of the points without moving the neck and, to the extent possible, only moving the eyeballs.
  • the camera 116 captures images containing the cornea 302 of the user 300 while the gaze of the user 300 is pointing at each of the nine points from Q 1 to Q 9 .
  • FIG. 6 is a schematic diagram illustrating the position coordinates of the cornea 302 of the user 300 .
  • the first gaze detection unit 221 contained in the gaze detection device 200 analyzes the images captured by the camera 116 , and detects bright spots 105 of the infrared light. While the user 300 looks at the points by turning the eyeballs only, the positions of the bright spots 105 are considered to be stationary while the user's gaze is directed at any single point. Thus, on the basis of the detected bright spots 105 , the first gaze detection unit 221 sets a two-dimensional coordinate system 306 in the image captured by the camera 116 .
  • the first gaze detection unit 221 detects the center P of the cornea 302 of the user 300 by analyzing the image captured by the camera 116 . This is achieved by using already known image processing techniques such as the Hough transform or edge extraction. Thereby, the gaze detection unit 221 obtains the coordinates of the center P of the cornea 302 of the user 300 in the previously-set two-dimensional coordinate system 306 .
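  • As an illustrative sketch only (the disclosure names the Hough transform or edge extraction but gives no implementation), the cornea center P could be located with OpenCV roughly as follows; the function name, the blur step, and the radius and threshold parameters are assumptions for an 8-bit grayscale near-infrared eye image.

```python
# Hypothetical sketch: locate the cornea/pupil center P in a near-infrared eye
# image using OpenCV's Hough circle transform (one of the techniques named above).
import cv2
import numpy as np

def detect_cornea_center(gray_eye_image: np.ndarray):
    """Return the (x, y) center of the strongest circular feature, or None."""
    blurred = cv2.medianBlur(gray_eye_image, 5)              # suppress sensor noise
    circles = cv2.HoughCircles(
        blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
        param1=100, param2=30, minRadius=10, maxRadius=80)   # radii are guesses
    if circles is None:
        return None
    x, y, _r = circles[0][0]                                  # strongest candidate
    return float(x), float(y)
```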
  • the coordinates of the displayed points Q 1 to Q 9 in the two-dimensional coordinate system set for the display screen of the image display element 108 are Q 1 (x 1 , y 1 ) T , Q 2 (x 2 , y 2 ) T , . . . , Q 9 (x 9 , y 9 ) T .
  • the coordinates of, for example, the center of each point are given by pixel numbers.
  • the center points P of the cornea 302 of the user 300 measured while the user 300 gazes at each of the points Q 1 to Q 9 are labeled P 1 to P 9 .
  • the coordinates of the points P 1 to P 9 in the two-dimensional coordinate system 306 are at this time P 1 (X 1 , Y 1 ) T , P 2 (X 2 , Y 2 ) T , . . . , P 9 (X 9 , Y 9 ) T , wherein T represents a transposed vector or a matrix.
  • the matrix M, defined in equation (1) as

    M = \begin{pmatrix} m_{11} & m_{12} \\ m_{21} & m_{22} \end{pmatrix},

    becomes a matrix for projecting the gaze direction of the user 300 onto the image plane of the image display element 108, so that equation (2), Q_N = M P_N (N = 1, \ldots, 9), relates each measured cornea center P_N to the corresponding marker point Q_N.
  • writing equation (2) out for all nine calibration points, equation (3) is obtained:

    x_1 = m_{11} X_1 + m_{12} Y_1, \quad y_1 = m_{21} X_1 + m_{22} Y_1, \quad \ldots, \quad x_9 = m_{11} X_9 + m_{12} Y_9, \quad y_9 = m_{21} X_9 + m_{22} Y_9

  • Equation (4), y = A x, is obtained by rearranging equation (3), where

    y = (x_1, x_2, \ldots, x_9, y_1, y_2, \ldots, y_9)^T,
    A = \begin{pmatrix} X_1 & Y_1 & 0 & 0 \\ X_2 & Y_2 & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots \\ X_9 & Y_9 & 0 & 0 \\ 0 & 0 & X_1 & Y_1 \\ 0 & 0 & X_2 & Y_2 \\ \vdots & \vdots & \vdots & \vdots \\ 0 & 0 & X_9 & Y_9 \end{pmatrix},
    x = (m_{11}, m_{12}, m_{21}, m_{22})^T.
  • the elements of the vector y are already known because these are the coordinates of the points Q1 to Q9 displayed by the first gaze detection unit 221 on the image display element 108 .
  • the elements of the matrix A are the coordinates of the vertex P of the cornea 302 of the user 300 and can be measured.
  • the first gaze detection unit 221 can determine the vector y and the matrix A.
  • the vector x that is the vector in which the elements of the conversion matrix M are arranged is still unknown. Since the vector y and matrix A are known, the problem of estimating the value of matrix M becomes a problem of calculating the as yet unknown vector x.
  • Since the measured coordinates contain error, equation (4) holds only approximately; introducing an error vector e gives equation (5), y = A x + e. Equation (5) is an overdetermined problem if the number of equations (that is, the number of points Q presented to the user 300 during calibration by the first gaze detection unit 221 ) is larger than the number of unknowns (that is, the four elements of the vector x). Since nine points are presented in the example shown in equation (5), it is an overdetermined problem.
  • An optimal vector x_opt, in the sense of minimizing the sum of squares of the elements of the error vector e, can be calculated from equation (6): x_{opt} = (A^T A)^{-1} A^T y.
  • the first gaze detection unit 221 uses the elements of the calculated vector x opt to compose the matrix M of equation (1). Accordingly, using the coordinates of the vertex P of the cornea 302 of the user 300 and the matrix M, the first gaze detection unit 221 estimates, using equation (2), the point at which the right eye of the user 300 gazes within the two-dimensional range of the video image displayed on the image display element 108 . Thereby, it becomes possible for the first gaze detection unit 221 to calculate the gaze vector linking the gaze point of the right eye on the image display element 108 and the vertex of the cornea of the right eye of the user. Similarly, the second gaze detection unit 222 can calculate the gaze vector for the left eye, connecting the gaze point of the left eye on the image display element 108 and the vertex of the cornea of the left eye of the user.
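  • By way of illustration only (not part of the disclosure), the calibration solve of equations (4) to (6) can be sketched in Python/NumPy as follows; the 3×3 marker grid, the matrix M_true, and the noise level are placeholder assumptions used to make the example self-contained.

```python
# Hypothetical sketch (not from the patent): the calibration solve of equations
# (4)-(6) in NumPy. The marker grid, M_true, and the noise level are placeholders.
import numpy as np

rng = np.random.default_rng(0)

# Nine marker points Q1..Q9 on the display (pixel coordinates), as a 3x3 grid.
Q = np.array([[x, y] for y in (100, 300, 500) for x in (100, 300, 500)], dtype=float)

# Simulated cornea-center measurements P1..P9 consistent with Q = M_true @ P.
M_true = np.array([[2.0, 0.1], [0.05, 1.8]])
P = (np.linalg.inv(M_true) @ Q.T).T + rng.normal(scale=0.5, size=Q.shape)

# Build the vector y (18,) and the matrix A (18 x 4) of equation (4).
y_vec = np.concatenate([Q[:, 0], Q[:, 1]])   # (x1..x9, y1..y9)^T
A = np.zeros((18, 4))
A[:9, 0:2] = P                               # rows  x_i = m11*X_i + m12*Y_i
A[9:, 2:4] = P                               # rows  y_i = m21*X_i + m22*Y_i

# x_opt = (A^T A)^{-1} A^T y (equation (6)); lstsq computes the same estimate.
x_opt, *_ = np.linalg.lstsq(A, y_vec, rcond=None)
M = x_opt.reshape(2, 2)

# Equation (2): project a newly measured cornea center onto the display plane.
print(M)                   # close to M_true
print(M @ P[0])            # estimated gaze point for the first measurement
```

  • Running the sketch recovers a matrix M close to M_true, which is then used as in equation (2) to map a measured cornea center to a gaze point on the image display element 108.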
  • FIG. 7 is a schematic diagram illustrating how the gaze location is determined according to example 1.
  • FIG. 7 is a schematic illustration of how the left eye 302 a and the right eye 302 b of the user 300 perceive a three-dimensional picture in the three-dimensional image displayed by the head mounted display 100 .
  • the corresponding video image is generated by the video output unit 224 , sent by the second communication unit 220 to the head mounted display 100 , and displayed on the image display element 108 by the display unit 121 .
  • the three-dimensional image shown by the head mounted display 100 contains a boy, a dog, a woman, a passenger car, and the driver in the car.
  • the three axes, the x-axis, the y-axis, and the z-axis are shown in FIG. 7 , although these axes are not necessarily shown in the image.
  • FIG. 7 shows that the left eye gaze vector 701 a, determined on the basis of the gaze direction of the left eye 302 a of the user 300 , and the right eye gaze vector 701 b, determined on the basis of the gaze direction of the right eye 302 b of the user 300 , cross at the intersection point 702 .
  • the intersection point 702 is therefore the user's focus point and will be referred to as the point at which the user gazes in the three-dimensional image.
  • because the intersection point is obtained from the calculated gaze vectors of both eyes, it is possible to determine that the focus point is on the car driver in the background, even if another object (the boy) is displayed in the foreground from the user's point of view.
  • FIG. 8 is a flowchart explaining the operation of the gaze detection system 1 .
  • the cameras 116 of the head mounted display 100 take images of the right eye including the infrared light reflected by the right eye, and the left eye including the infrared light reflected by the left eye (Step S 801 ).
  • the image capturing unit 124 transfers the right eye image and the left eye image acquired by the cameras 116 to the image processing unit 123 .
  • the image processing unit 123 performs predefined processing on the transferred image data, and delivers the processed data to the first communication unit 118 .
  • the first communication unit 118 then transmits the processed image data received from the image processing unit 123 to the gaze detection device 200 .
  • the second communication unit 220 of the gaze detection device 200 receives the image data and transfers the right eye image to the first detection unit 221 and the left eye image to the second detection unit 222 .
  • the first gaze detection unit 221 analyzes the transferred right eye image and, using the above formulas, determines the gaze point of the right eye on the image display element 108 .
  • the gaze vector of the right eye is then calculated by connecting the gaze point coordinates with the vertex P of the cornea of the right eye (Step S 802 ).
  • the first gaze detection unit 221 transfers the calculated right eye gaze vector to the tracking unit 223 .
  • the second gaze detection unit 222 analyzes the transferred left eye image and, using the above formulas, determines the gaze point of the left eye on the image display element 108 .
  • the gaze vector of the left eye is then calculated by connecting the gaze point coordinates with the vertex P of the cornea of the left eye (Step S 802 ).
  • the second gaze detection unit 222 transfers the calculated left eye gaze vector to the tracking unit 223 .
  • the tracking unit 223 computes the intersection point between the transferred right eye gaze vector and the transmitted left eye gaze vector (Step S 803 ).
  • the tracking unit 223 transforms the computed intersection point to the coordinate system of the three-dimensional space of the three-dimensional image generated by the video output unit 224 (Step S 804 ).
  • the tracking unit 223 determines, from the transformed intersection point coordinates, the location where the user is looking in the three-dimensional image space (Step S 805 ).
  • the processing illustrated in FIG. 8 is performed serially, with the gaze detection system 1 identifying the user's gaze location as necessary.
  • the gaze detection system 1 can obtain both the user's right eye gaze direction and left eye gaze direction. Since the intersection point can be determined in the depth direction of the three-dimensional image as well, it is possible to identify the object that the user is gazing at even if there are various superimposed objects in the three-dimensional image.
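  • As an illustration of Step S 803 under stated assumptions (not part of the disclosure): because two measured gaze rays rarely meet exactly, a common stand-in for the intersection point 702 is the midpoint of the shortest segment between the two rays, which coincides with the true intersection whenever the rays do intersect. A minimal NumPy sketch, with the ray origins and gaze vectors as assumed inputs:

```python
# Hypothetical sketch: estimate the gaze point (intersection point 702) from the
# right-eye and left-eye gaze rays by taking the midpoint of their closest approach.
import numpy as np

def gaze_point_from_rays(o_r, d_r, o_l, d_l):
    """o_r, o_l: 3D ray origins (e.g., cornea vertices); d_r, d_l: gaze direction vectors."""
    o_r, d_r = np.asarray(o_r, float), np.asarray(d_r, float)
    o_l, d_l = np.asarray(o_l, float), np.asarray(d_l, float)
    w0 = o_r - o_l
    a, b, c = d_r @ d_r, d_r @ d_l, d_l @ d_l
    d, e = d_r @ w0, d_l @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-9:
        return None                                # gaze rays are (nearly) parallel
    t = (b * e - c * d) / denom                    # parameter on the right-eye ray
    s = (a * e - b * d) / denom                    # parameter on the left-eye ray
    p_r, p_l = o_r + t * d_r, o_l + s * d_l        # closest points on each ray
    return (p_r + p_l) / 2.0                       # midpoint = gaze point estimate

# Example: eyes ~64 mm apart, both looking at a point roughly 500 mm ahead.
print(gaze_point_from_rays([-32, 0, 0], [32, 0, 500], [32, 0, 0], [-32, 0, 500]))
```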
  • the location where the user 300 is gazing in the head mounted display 100 was determined from the intersection point of the right eye gaze vector, corresponding to the gaze direction of the right eye, and the left eye gaze vector, corresponding to the gaze direction of the left eye, of the user 300 .
  • however, the right eye gaze vector computed by the first gaze detection unit 221 and the left eye gaze vector computed by the second gaze detection unit 222 do not necessarily intersect.
  • the user's gaze location (layer) is identified by calculating the distance between the intersection point of the user's right eye gaze vector with a layer and the intersection point of the left eye gaze vector with the same layer and finding the layer with the shortest distance between the intersection points.
  • the intersection points of the right eye gaze vector with layers L 1 , L 2 , L 3 , . . . are Pr 1 , Pr 2 , Pr 3 , . . . , and the distances between these points and the corresponding intersection points of the left eye gaze vector with the same layers are D 1 , D 2 , D 3 , . . .
  • the tracking unit 223 finds the shortest distance among calculated intersection point distances D 1 , D 2 , D 3 , . . . .
  • the user's eye gaze location (layer) is thus determined by selecting a layer for which the distance between the intersection points is the smallest.
  • FIG. 9 illustrates schematically the method of determining the gaze location according to the second example.
  • an image similar to the picture in FIG. 9 is displayed in the head mounted display 100 .
  • the difference from example 1 is that several kinds of information are shown in a menu image 910 or a menu image 920 displayed in the three-dimensional image.
  • These menu images 910 , 920 are displayed in the three-dimensional image that is structured in a plurality of layers.
  • This image is generated by the video output unit 224 , transferred to the second communication unit 220 , and sent to the head mounted display 100 , where the image is displayed by the display unit 121 on the image display element 108 .
  • identifying the menu image (layer) that the user is gazing at is based on the intersection points where the left eye gaze vector 701 a and the right eye gaze vector 701 b of the user 300 intersect with the menu images 910 , 920 .
  • the intersection point 911 a is defined by the intersection of the left eye gaze vector 701 a of the user 300 and the menu image 910 .
  • the intersection point 911 b is defined by the intersection of the right eye gaze vector 701 b of the user 300 and the menu image 910 .
  • the distance between the intersection point 911 a and the intersection point 911 b is D 1 .
  • the intersection point 921 a is defined by the intersection of the left eye gaze vector 701 a of the user 300 and the menu image 920 .
  • the intersection point 921 b is defined by the intersection of the right eye gaze vector 701 b of the user 300 and the menu image 920 .
  • the distance between the intersection point 921 a and the intersection point 921 b is D 2 .
  • the shorter distance defines the location (layer) at which the gaze of the user 300 is pointing.
  • FIG. 10 shows a flowchart describing operation of the gaze detection system 1 according to the second example. This flowchart is identical to the first example up to the step S 802 and explanation of the operation up to that step is omitted.
  • the user's right eye gaze vector and the left eye gaze vector sent to the tracking unit 223 are used to calculate the intersection points with each layer among the layers displayed in the three-dimensional image (Step S 1003 ). That is, in the example in FIG. 9 , the tracking unit 223 calculates the intersection points for the menu image 910 and for the menu image 920 .
  • the tracking unit 223 calculates the intersection point distances between the right eye gaze vector intersection point and the left eye gaze vector intersection point for each layer (Step S 1004 ).
  • the tracking unit 223 finds the shortest distance among the distances between the calculated intersection points.
  • the user's gaze location (layer) is then determined by selecting the layer for which the distance between the intersection points is the shortest. (Step S 1005 ).
  • the three-dimensional image layers may be a plurality of layers stacked along the x-axis direction (depth direction).
  • the above method can be used to identify the layer that the user 300 is gazing at as the virtual layer having the shortest distance between the intersection points.
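  • A sketch of Steps S 1003 to S 1005 under stated assumptions (not part of the disclosure): if each layer is modeled as a plane at a constant depth along one axis of the three-dimensional image, the gazed layer can be found by intersecting both gaze rays with every layer and keeping the layer with the smallest distance between the two intersection points; the choice of depth axis and the input format are assumptions.

```python
# Hypothetical sketch: pick the layer with the shortest distance D_i between the
# right-eye and left-eye gaze-ray intersection points (Steps S1003-S1005).
import numpy as np

def ray_plane_point(origin, direction, depth, axis):
    """Intersect a gaze ray with the plane {coordinate[axis] == depth}."""
    origin, direction = np.asarray(origin, float), np.asarray(direction, float)
    t = (depth - origin[axis]) / direction[axis]   # assumes a nonzero depth component
    return origin + t * direction

def select_gazed_layer(o_r, d_r, o_l, d_l, layer_depths, axis=2):
    """Return (layer_index, distance) for the layer the user is gazing at."""
    best = None
    for i, depth in enumerate(layer_depths):
        p_r = ray_plane_point(o_r, d_r, depth, axis)
        p_l = ray_plane_point(o_l, d_l, depth, axis)
        dist = np.linalg.norm(p_r - p_l)           # distance D_i for this layer
        if best is None or dist < best[1]:
            best = (i, dist)
    return best

# Example: two layers (e.g., menu images 910 and 920) at depths 400 and 800.
print(select_gazed_layer([-32, 0, 0], [32, 0, 500], [32, 0, 0], [-32, 0, 500], [400, 800]))
```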
  • the method used in the gaze detection system 1 to determine the place where a user is gazing in a three-dimensional image is effective when the calculated right eye gaze vector and the left eye gaze vector do not have an intersection point. Also, when a three-dimensional image is considered to consist of a plurality of layers, it becomes easy to determine the user's gaze location by calculating the intersection point distances.
  • the three-dimensional image was made up of a plurality of layers and the intersection points of the gaze vector of the user 300 with each layer were used to find the layer for which the distance between the intersection points was the shortest, allowing the layer that the user was gazing at to be determined.
  • a method of determining the user's gaze point is disclosed for when the three-dimensional image does not have a layered structure and the right eye gaze vector and the left eye gaze vector do not intersect.
  • the detection method in the tracking unit 223 is different, while the other functions are the same as in the first and second examples, so detailed explanation is omitted except for the tracking unit 223 .
  • the difference in the operation of the tracking unit 223 from examples 1 and 2 is that a cylinder with a given radius is centered on the left eye gaze vector 701 a, and another cylinder with a given radius is centered on the right eye gaze vector 701 b, and an object in the three-dimensional image that is closest to the intersection region of the cylinders is identified as the point that the user is gazing at.
  • FIG. 11 is a schematic illustration of how the gaze location of the user 300 is determined in example 3.
  • the relevant image is generated by the video output unit 224 , sent to the head mounted display 100 by the second communication unit 220 , and displayed on the image display element 108 by the display unit 121 .
  • a cylinder 1101 a with a given radius is assumed to be centered on the user's left eye gaze vector 701 a.
  • a cylinder 1101 b with a given radius is assumed to be centered on the user's right eye gaze vector.
  • the region 1102 where the cylinder 1101 a and the cylinder 1101 b intersect, is identified as the place where the user 300 is gazing. Thus, the user 300 is gazing at the boy in the three-dimensional image.
  • FIG. 12 is a flowchart describing the operation of the gaze detection system 1 according to the third example.
  • the flowchart up to the step S 801 is common with the first example, and explanation up to that point is omitted.
  • the left eye gaze vector and the right eye gaze vector are transferred to the tracking unit 223 , which uses formula F to calculate the cylinder 1101 a centered on the left eye gaze vector. Also, the tracking unit 223 uses formula G to calculate the cylinder 1101 b, centered on the right eye gaze vector (step S 1203 ).
  • the tracking unit 223 determines the area 1102 where the cylinder 1101 a and the cylinder 1101 b intersect (step S 1203 ).
  • the tracking unit 223 finds a layer or an object that is closest to the determined area and identifies that object as the place where the user is gazing.
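  • A sketch of this third example under stated assumptions (the disclosure's formulas F and G are not reproduced here): the overlap region 1102 can be approximated by the closest approach of the two cylinder axes, and the gaze target taken as the scene object nearest that region; the least-squares formulation and the illustrative scene data are assumptions.

```python
# Hypothetical sketch: approximate the overlap region of the two gaze cylinders
# and pick the nearest object in the three-dimensional scene.
import numpy as np

def cylinder_overlap_point(o_r, d_r, o_l, d_l, radius_r, radius_l):
    """Representative point of the overlap of the two gaze cylinders, or None."""
    o_r, d_r = np.asarray(o_r, float), np.asarray(d_r, float)
    o_l, d_l = np.asarray(o_l, float), np.asarray(d_l, float)
    # Closest points between the two cylinder axes, found by least squares:
    # solve [d_r | -d_l] @ (t, s)^T ~= o_l - o_r.
    (t, s), *_ = np.linalg.lstsq(np.column_stack([d_r, -d_l]), o_l - o_r, rcond=None)
    p_r, p_l = o_r + t * d_r, o_l + s * d_l
    if np.linalg.norm(p_r - p_l) > radius_r + radius_l:
        return None                                   # the cylinders do not overlap
    return (p_r + p_l) / 2.0

def nearest_object(point, objects):
    """objects: mapping of object name to 3D position (illustrative scene data)."""
    return min(objects, key=lambda name: np.linalg.norm(np.asarray(objects[name], float) - point))

scene = {"boy": [0, 0, 500], "dog": [150, -50, 520], "driver": [20, 10, 1200]}
region = cylinder_overlap_point([-32, 0, 0], [32, 0, 500], [32, 0, 0], [-32, 0, 500], 10.0, 10.0)
if region is not None:
    print(nearest_object(region, scene))              # -> "boy" in this illustration
```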
  • the tracking unit 223 calculates a plurality of right eye parallel vectors that are parallel to the right eye gaze vector and centered on the right eye gaze vector. In the same way, the tracking unit 223 calculates a plurality of left eye parallel vectors that are parallel to the left eye gaze vector and centered on the left eye gaze vector. Intersection points are then calculated for each right eye parallel vector with each left eye parallel vector. The tracking unit 223 calculates the center point of the plurality of obtained intersection points. The tracking unit 223 defines the three-dimensional image coordinates corresponding to the calculated center point as the gaze location of the user 300 .
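  • A sketch of this parallel-vector variant under stated assumptions (not part of the disclosure): small bundles of vectors parallel to each gaze vector are formed by offsetting the ray origins, the near-intersection of every right/left pair is computed, and the center of those points is taken as the gaze location; the offsets and the least-squares pairing are assumptions.

```python
# Hypothetical sketch: average the pairwise near-intersections of two bundles of
# parallel vectors around the right-eye and left-eye gaze vectors.
import numpy as np
from itertools import product

def bundle_gaze_point(o_r, d_r, o_l, d_l, offsets):
    o_r, d_r = np.asarray(o_r, float), np.asarray(d_r, float)
    o_l, d_l = np.asarray(o_l, float), np.asarray(d_l, float)
    points = []
    for off_r, off_l in product(offsets, offsets):
        a, b = o_r + off_r, o_l + off_l              # origins of one parallel pair
        # Closest-approach parameters t, s of a + t*d_r and b + s*d_l (least squares).
        (t, s), *_ = np.linalg.lstsq(np.column_stack([d_r, -d_l]), b - a, rcond=None)
        points.append(((a + t * d_r) + (b + s * d_l)) / 2.0)
    return np.mean(points, axis=0)                   # center of the intersection points

# Illustrative bundle: the gaze vector itself plus two slightly shifted copies.
offsets = [np.zeros(3), np.array([1.0, 0, 0]), np.array([0, 1.0, 0])]
print(bundle_gaze_point([-32, 0, 0], [32, 0, 500], [32, 0, 0], [-32, 0, 500], offsets))
```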
  • the mirror can be omitted and the eye of the user 300 may be imaged directly.
  • the processor of the gaze detection device 200 executes a gaze detection or other programs and determines the point at which the user gazes.
  • the gaze detection device 200 may be implemented as an IC (integrated circuit), LSI (Large-Scale integration), or other dedicated logic circuit.
  • the implementation may consist of one or more circuits, and the functions of a plurality of functional units shown in the aforementioned example may be implemented in a single integrated circuit.
  • the LSI may be referred to as VLSI, super LSI, ultra LSI or the like.
  • the gaze detection program may be recorded on a recording medium that is readable by a processor.
  • a “non-transitory tangible medium” such as a tape, a disc, a card, a semiconductor memory, a programmable logic circuit or the like is used as the recording medium.
  • the aforementioned gaze detection program may be provided to the aforementioned processor through any transmission medium (such as a communication network, a wireless link, or the like) that can transmit the program.
  • the abovementioned gaze detection program may be implemented in the form of a data signal embedded in a carrier wave.
  • the aforementioned gaze detection program may be implemented by using, for example, a script language such as ActionScript or JavaScript (registered trademark), an object-oriented language such as Objective-C or Java (registered trademark), or a markup language such as HTML5.

Abstract

A gaze detection system includes a head mounted display including a gaze detection device detecting a gaze of a user; and a tracking unit that determines a gaze point of the user in the three-dimensional image on the basis of the gaze directions of the right and left eyes, in which the first detection unit calculates a right eye gaze vector indicating the gaze direction of the right eye, and the second detection unit calculates a left eye gaze vector indicating the gaze direction of the left eye, wherein a display unit displays a three-dimensional image consisting of a plurality of layers in a depth direction, and a tracking unit identifies the layer that the user is gazing at by selecting the layer having a shortest distance between intersection points of the right eye gaze vector and the left eye gaze vector with each layer.

Description

    TECHNICAL FIELD
  • This disclosure relates to a head mounted display.
  • BACKGROUND
  • Technology exists for illuminating a user's eye with infrared or other non-visible light and detecting the gaze direction by analyzing images of the light reflected from the user's eye, as in Japanese Patent Application No. H02-264632. This eye-gaze detection technique illuminates the user's eye and analyzes the captured images of the reflected light. The detected eye-gaze information can be used, for example, to monitor what the user is looking at on a PC (Personal Computer) or game screen, and can be applied as a pointing device.
  • The head mounted display is worn by the user to display 3D images, but it normally covers the user's field of view, so the user cannot see the outside environment while wearing the HMD. If the user needs to find an input device such as a controller while wearing the HMD as a display device for movies or games, the user has difficulty locating the controller.
  • It could therefore be helpful to detect the user's eye-gaze direction for use as a substitute for a pointing device while the user wears the HMD. Thus, it could be helpful to provide a technique that detects the gaze direction of a user wearing a head mounted display.
  • SUMMARY
  • We provide a gaze detection system comprising: a head mounted display, mounted on a user's head for use, comprising, a first infrared light source unit illuminating the user's right eye with infrared light; a second infrared light source unit illuminating the user's left eye; a first image capturing device imaging the right eye illuminated by the infrared light source; a second image capturing device imaging the left eye illuminated by the infrared light source; and a display unit displaying a three-dimensional image, and a gaze detection device, detecting the gaze of the user, comprising, a first detection unit detecting the gaze direction of the right eye, based on the image captured by the first image capturing device; a second detection unit detecting the gaze direction of the left eye, based on the image captured by the second image capturing device; and a tracking unit determining the gaze point of the user in the three-dimensional image on the basis of the gaze direction of the right eye and the gaze direction of the left eye.
  • Also, the first detection unit may calculate the right eye gaze vector indicating the gaze direction of the right eye, and the second detection unit may calculate the left eye gaze vector indicating the gaze direction of the left eye.
  • The display unit may display a three-dimensional image consisting of a plurality of layers in the depth direction, and the tracking unit may identify the layer that the user is gazing at by selecting the layer having the shortest distance between the intersection points of the right eye gaze vector and the left eye gaze vector with each layer.
  • The tracking unit may identify the user's gaze location based on the intersection point between the right eye gaze vector and the left eye gaze vector.
  • The tracking unit may identify the user's gaze location based on the intersection region of a cylinder with a given radius, centered on the right eye gaze vector, and a cylinder with a given radius, centered on the left eye gaze vector.
  • The tracking unit may identify the gaze location of the user based on the intersection points of the first plurality of parallel vectors parallel to the right eye gaze vector and the second plurality of parallel vectors parallel to the left eye gaze vector.
  • We provide a gaze detection system including a head mounted display, mounted on a user's head for use, including a first image capturing device imaging the right eye; a second image capturing device imaging the left eye; and a display unit displaying a three-dimensional image, and a gaze detection device detecting a gaze of the user including, a first detection unit detecting a gaze direction of a user's right eye based on an image captured by the first image capturing device; a second detection unit detecting a gaze direction of a user's left eye based on an image captured by the second image capturing device; and a tracking unit that determines a gaze point of the user in the three-dimensional image on the basis of the gaze direction of the right eye and the gaze direction of the left eye, in which the first detection unit calculates a right eye gaze vector indicating the gaze direction of the right eye, and the second detection unit calculates a left eye gaze vector indicating the gaze direction of the left eye, wherein the display unit displays a three-dimensional image consisting of a plurality of layers in a depth direction, and the tracking unit identifies the layer that the user is gazing at by selecting the layer having a shortest distance between intersection points of the right eye gaze vector and the left eye gaze vector with each layer.
  • We also provide a method of determining a gaze point of a user in a gaze detection system including a head mounted display worn on the head of the user and a gaze detection device determining the gaze point of the user, including: displaying a three-dimensional image on the head mounted display; acquiring images of a user's right eye and left eye; transferring the acquired image data to the gaze detection device; determining in the gaze detection device a user's right eye gaze direction vector showing the gaze direction of the user's right eye based on an acquired image of the right eye; determining in the gaze detection device a user's left eye gaze direction vector showing the gaze direction of the user's left eye based on the acquired image of the left eye; and determining a location that the user is gazing at in the three-dimensional image based on the right eye gaze direction and the left eye gaze direction, in which the head mounted display displays a three-dimensional image consisting of a plurality of layers in the depth direction, wherein the detection device identifies a layer the user is gazing at by selecting the layer having a shortest distance between the intersection points of the right eye gaze vector and the left eye gaze vector with each layer.
  • We further provide a gaze detection program executing on a computer that determines a gaze point of a user wearing a head mounted display that displays a three-dimensional image, wherein the computer includes: a function of obtaining the image data of a right eye and a left eye of the user wearing the head mounted display; a function of detecting a right eye gaze direction of the right eye based on the image data of the right eye; a function of detecting a left eye gaze direction of the left eye based on the image data of the left eye; a function of determining a gaze point of the user in the three-dimensional image based on the right eye gaze direction and the left eye gaze direction; a function of calculating a right eye gaze vector indicating the gaze direction of the right eye; a function of calculating a left eye gaze vector indicating the gaze direction of the left eye; and a function of displaying a three-dimensional image consisting of a plurality of layers in a depth direction, wherein the function of detecting the gaze direction identifies the layer that the user is gazing at by selecting the layer having a shortest distance between intersection points of the right eye gaze vector and the left eye gaze vector with each layer.
  • Additionally, any combination of the aforementioned components, and the implementation in the form of methods, devices, systems, computer programs, data structures, recording mediums, and the like may be effective.
  • We thus disclose a technique to detect the gaze direction of a user wearing a head mounted display.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is an external view illustrating the head mounted display worn by a user.
  • FIG. 2 is a perspective overview of the image display system of the head mounted display.
  • FIG. 3 is a diagram schematically illustrating the optical configuration of the image display system of the head mounted display.
  • FIG. 4 is a block diagram illustrating the components of the head mounted display.
  • FIG. 5 is a diagram illustrating the pattern used to calibrate the gaze direction detection system.
  • FIG. 6 is a schematic diagram illustrating the coordinate system of the user's cornea position.
  • FIG. 7 is a schematic diagram of how the gaze point of a user is identified according to a first example.
  • FIG. 8 is a flowchart describing operation of the head mounted display system according to the first example.
  • FIG. 9 is a schematic illustration of how the gaze point of a user is identified according to a second example.
  • FIG. 10 is a flowchart describing operation of the head mounted display system according to the second example.
  • FIG. 11 is a schematic illustration of how the gaze point of a user is identified according to a third example.
  • FIG. 12 is a flowchart describing operation of the head mounted display system according to the third example.
  • REFERENCE LIST
  • 1: gaze detection system, 100: head mounted display, 103 a: infrared light source (second infrared light illumination unit), 103 b: infrared light source (first infrared light illumination unit), 105: bright spot, 108: image display element, 112: hot mirror, 114, 114 a, 114 b: convex lens, 116: camera (first imaging unit, second imaging unit), 118: first communication unit, 121: display unit, 122: infrared illumination unit, 123: image processing unit, 124: image capturing unit, 130: image display system, 150: housing, 152 a, 152 b: lens holder, 160: fitting harness, 170: headphone, 200: gaze detection device, 220: second communication unit, 221: first gaze detection unit, 222: second gaze detection unit, 223: tracking unit, 224: video output unit, 225: storage unit
  • DETAILED DESCRIPTION
  • FIG. 1 shows a typical view of the gaze detection system 1 as described in the examples. The gaze detection system 1 comprises a head mounted display 100 and a gaze detection device 200. As shown in FIG. 1, the head mounted display 100 is mounted on the head of the user 300 for use.
  • The gaze detection device 200 measures the gaze direction and the focal point of the right and left eyes of the user wearing the head mounted display 100, determining the gaze point of the user in the three-dimensional image displayed by the head mounted display. The gaze detection device 200 also functions as a video generator device to create the video images displayed on the head mounted display 100. For example, the gaze detection device 200 can be used to reproduce video images on stationary game machines, portable game machines, PCs, tablets, smartphones, phablets, video players, TVs and the like. The gaze detection device 200 establishes a wireless or a wired connection with the head mounted display 100. In the example shown in FIG. 1, the gaze detection device 200 wirelessly connects to the head mounted display 100. The wireless connection between the gaze detection device 200 and the head mounted display 100 may be established using existing wireless communication techniques such as Wi-Fi (registered trademark) or Bluetooth (registered trademark). Without being limited to these two methods, video image transfer between the head mounted display 100 and the gaze detection device 200 may be implemented using, for example, Miracast (registered trademark), WiGig (registered trademark), WHDI (registered trademark) or other communication standards.
  • It should be noted that the example in FIG. 1 illustrates that the head mounted display 100 and the gaze detection device 200 are separate units. However, the gaze detection device 200 may also be incorporated into the head mounted display 100.
  • The head mounted display 100 includes a housing 150, a fitting harness 160, and headphones 170. The housing 150 contains an image display system for displaying video images to the user 300, and, not shown in the figure, a communications module using Wi-Fi (registered trademark), Bluetooth (registered trademark), or other wireless technology. The head mounted display 100 is secured to the head of the user 300 with a fitting harness 160. The fitting harness 160 may be implemented with the help of, for example, belts or elastic bands. When the user 300 wears the head mounted display 100, the fitting harness 160 holds the housing 150 in a position where the eyes of the user 300 are covered. Thus, once the user 300 wears the head mounted display 100, the field of view of the user 300 is covered by the housing 150.
  • The headphones 170 output the audio of the video reproduced by the gaze detection device 200. The headphones 170 do not need to be fixed to the head mounted display 100. Even if the head mounted display 100 is equipped with a fitting harness 160, the user 300 may freely attach or detach the headphones 170.
  • FIG. 2 is a perspective diagram showing the overview of the image display system 130 of the head mounted display 100 described in the examples. Specifically, FIG. 2 shows the part of the housing 150 facing the corneas 302 when the user 300 is wearing the head mounted display 100.
  • As shown in FIG. 2, a convex lens 114 a is positioned to face the cornea 302 a of the left eye of the user 300 when the user 300 wears the head mounted display 100. Similarly, a convex lens 114 b is positioned to face the cornea 302 b of the right eye of the user 300 when the user 300 wears the head mounted display 100. The convex lens 114 a for the left eye and the convex lens 114 b for the right eye are held by a lens holder 152 a for the left eye and a lens holder 152 b for the right eye, respectively.
  • Hereinafter, the convex lens 114 a for the left eye and the convex lens 114 b for the right eye are referred to as a “convex lens 114” unless the two lenses need to be specifically distinguished. Likewise, the cornea 302 a of the left eye of the user 300 and the cornea 302 b of the right eye of the user 300 are simply referred to as a “cornea 302” unless the corneas need to be specifically distinguished. The lens holder 152 a for the left eye and the lens holder 152 b for the right eye are referred to as a “lens holder 152” unless the holders need to be specifically distinguished.
  • Multiple infrared light sources 103 are attached to the lens holders 152. For the purpose of brevity, in FIG. 2, the infrared light sources illuminating cornea 302 a of the left eye of the user 300 with infrared light are collectively referred to as infrared light sources 103 a, and the infrared light sources illuminating the cornea 302 b of the right eye of the user 300 with infrared light are collectively referred to as infrared light sources 103 b. Hereinafter, the infrared light sources 103 a and the infrared light sources 103 b are collectively referred to as “infrared light sources 103” except when a distinction between the two is necessary. In the example shown in FIG. 2, six infrared light sources 103 a are attached to the lens holder 152 a for the left eye. In the same manner, six infrared light sources 103 b are attached to the lens holder 152 b for the right eye. In this way, the infrared light sources 103 are not attached directly to the convex lenses 114, but are mounted on the lens holders 152 that grip the convex lenses 114, making the attachment of the infrared light sources 103 easier. Since the lens holders 152 are typically made of a resin or the like, machining that is necessary to attach the infrared light sources 103 is easier than for the convex lenses 114 that are made of glass or the like.
  • As mentioned above, the lens holder 152 is a component that holds a convex lens 114. Therefore, the infrared light sources 103 mounted on the lens holders 152 are positioned along the circumference of the convex lenses 114. Although six infrared light sources 103 illuminate each eye with infrared light here, the number of infrared light sources 103 is not limited to this: there should be at least one light source 103 for each eye, and two or more light sources 103 are desirable.
  • FIG. 3 is a schematic diagram of the optical configuration of the image display system 130 contained in the housing 150 according to the example, with the diagram showing a view of the housing 150 from the direction of the left eye as shown in FIG. 2. The image display system 130 includes infrared light sources 103, an image display element 108, a hot mirror 112, the convex lenses 114, a camera 116, and a first communication unit 118.
  • The infrared light sources 103 are light sources capable of emitting near-infrared light (in the 700 nm to 2500 nm range). Near-infrared light is non-visible light in a wavelength region that cannot be detected by the naked eye of the user 300.
  • The image display element 108 displays an image to be presented to the user 300. The image displayed by the image display element 108 is generated by a video output unit 224 in the gaze detection device 200. The video output unit 224 will be described later. The image display element 108 may be implemented using an existing liquid crystal display (LCD) or an organic electroluminescence (EL) display.
  • The hot mirror 112 is positioned between the image display element 108 and the cornea 302 of the user 300 when the user 300 wears the head mounted display 100. The hot mirror 112 has a property of transmitting visible light created by the image display element 108 and reflecting near-infrared light.
  • The convex lenses 114 are positioned on the opposite side of the hot mirror 112 from the image display element 108. In other words, the convex lenses 114 are positioned between the hot mirror 112 and the cornea 302 of the user 300 when the user 300 wears the head mounted display 100. That is, the convex lenses 114 are positioned to face the corneas 302 of the user 300 when the user 300 wears the head mounted display 100.
  • The convex lenses 114 condense the light from the image display element 108 transmitted through the hot mirror 112. Thus, the convex lenses 114 function as image magnifiers that enlarge the image created by the image display element 108 and present it to the user 300. Although only one convex lens 114 is shown in FIG. 2 for convenience, a lens group combining various kinds of lenses, including plano-convex lenses and biconvex lenses, may be used instead of a single convex lens 114.
  • A plurality of infrared light sources 103 are arranged along the circumference of the convex lens 114. The infrared light sources 103 emit infrared light toward the cornea 302 of the user 300.
  • Although not shown in the figure, the image display system 130 of the head mounted display 100 according to the example contains two image display elements 108, so that the image presented to the right eye of the user 300 and the image presented to the left eye of the user 300 can be generated independently. Accordingly, the head mounted display 100 according to the example may present a parallax image for the right eye and a parallax image for the left eye to the right and left eyes of the user 300, respectively. Thereby, the head mounted display 100 according to the example can present a stereoscopic scene that creates a feeling of depth for the user 300.
  • As mentioned above, the hot mirror 112 transmits visible light but reflects the near-infrared light. Thus, the image light emitted by the image display element 108 is transmitted through the hot mirror 112, and reaches the cornea 302 of the user 300. The infrared light emitted from the infrared light sources 103 and internally reflected in the reflective area of the convex lens 114 reaches the cornea 302 of the user 300.
  • The infrared light reaching the cornea 302 of the user 300 is reflected on the cornea 302 of the user 300 and directed again towards the convex lens 114. The infrared light reflected by the cornea of the user is transmitted through the convex lens 114 and reflected by the hot mirror 112. The camera 116 includes a filter that blocks the visible light, and the near-infrared light reflected from the hot mirror 112 is used for imaging. That is, the camera 116 is a near-infrared camera that images the near-infrared light emitted from the infrared light sources 103 and reflected on the cornea of the eye of the user 300.
  • Although not shown in the figure, the image display system 130 of the head mounted display 100, according to the example, includes two cameras 116, that is, a first image capturing unit that captures an image containing the infrared light reflected from the right eye and a second image capturing unit that captures an image containing the infrared light reflected from the left eye. Thereby, images used to detect gaze directions of both the right eye and the left eye of the user 300 can be acquired.
  • The first communication unit 118 outputs the image captured by the camera 116 to the gaze detection device 200 that determines the gaze direction of the user 300. Specifically, the first communication unit 118 transmits the image captured by the camera 116 to the gaze detection device 200. Although a detailed description of the first detection unit 221 and the second detection unit 222, which function as gaze direction detection units, will be given later, the gaze direction is determined with the help of a gaze detection program executed by the CPU (Central Processing Unit) of the gaze detection device 200. When the head mounted display 100 has the necessary computational resources such as a CPU, memory or the like, the CPU of the head mounted display 100 may execute the program to determine the gaze direction.
  • Although a detailed description will be given later, the image captured by the camera 116 contains bright spots caused by the near-infrared light reflected from the cornea 302 of the user 300 and an image of the eye including the cornea 302 of the user 300 observed in the near-infrared wavelength region.
  • Although the aforementioned description has been given mainly for the configuration of presenting an image to the left eye of the user 300 in the image display system 130, according to the example, the configuration that presents an image to the right eye of the user 300 is the same.
  • FIG. 4 is a block diagram of the head mounted display 100 and the gaze detection device 200 contained in the gaze detection system 1. As shown in FIG. 4, and as explained above, the gaze detection system 1 contains the head mounted display 100 and the gaze detection device 200, both of which communicate with each other.
  • As shown in FIG. 4, the head mounted display 100 contains the first communication unit 118, the display unit 121, the infrared illumination unit 122, the image processing unit 123, and the image capturing unit 124.
  • The first communication unit 118 is a communication interface having the capability to communicate with the second communication unit 220 contained inside the gaze detection device 200. As mentioned above, the first communication unit 118 communicates with the second communication unit 220 over a wired or wireless communication link. Examples of possible communication standards were already given above. The first communication unit 118 transmits image data to be used for gaze detection, obtained from either the image capturing unit 124 or the image processing unit 123, to the second communication unit 220. Further, the first communication unit 118 delivers to the display unit 121 the three-dimensional image data sent from the gaze detection device 200.
  • The display unit 121 displays, on the image display element 108, the three-dimensional image based on the three-dimensional image data sent from the first communication unit 118. The three-dimensional image data represents a parallax image pair containing a parallax image for the right eye and a parallax image for the left eye.
  • The infrared illumination unit 122 controls the infrared light sources 103 and illuminates the user's right eye or left eye with infrared light.
  • The image processing unit 123 performs image processing of the image data acquired by the image capturing unit 124 as necessary, and passes the processed data to the first communication unit 118.
  • The image capturing unit 124 captures images of the near-infrared light reflected from each eye using the cameras 116 for the right eye and the left eye. The image capturing unit 124 transfers the captured images to the first communication unit 118 or to the image processing unit 123.
  • As shown in FIG. 4, the gaze detection device 200 contains the second communication unit 220, the first gaze detection unit 221, the second gaze detection unit 222, the tracking unit 223, the video output unit 224, and the storage unit 225.
  • The second communication unit 220 is a communication interface having the function of communicating with the first communication unit 118 of the head mounted display 100. As mentioned above, the second communication unit 220 communicates with the first communication unit 118 using either wired or wireless communication.
  • The first gaze detection unit 221 receives from the second communication unit 220 the image data for gaze detection of the right eye of the user, and determines the gaze direction of the right eye of the user. Using a technique described later, the first gaze detection unit 221 calculates the gaze direction vector representing the gaze direction of the right eye.
  • The second gaze detection unit 222 receives from the second communication unit 220 the image data for gaze detection of the left eye of the user, and determines the gaze direction of the left eye of the user. Using a technique described later, the second gaze detection unit 222 calculates the gaze direction vector representing the gaze direction of the left eye.
  • The tracking unit 223, using the right eye gaze direction vector sent from the first gaze detection unit 221 and the left eye gaze direction vector sent from the second gaze detection unit 222, determines the point (coordinate) at which the user is looking in the three-dimensional image displayed on the image display element 108 of the head mounted display 100.
  • The video output unit 224 generates three-dimensional video data to be displayed by the display unit 121 in the head mounted display 100, and transfers the data to the second communication unit 220. The video output unit 224 also generates and transfers to the second communication unit 220 the data for the marker image used for gaze detection calibration. The video output unit 224 stores the location of the displayed objects in the three-dimensional coordinate system as well as in the specific coordinate system of the three-dimensional output image.
  • The storage unit 225 is a recording medium that stores various kinds of programs and data required for the operation of the gaze detection device 200.
  • Next, a description of the gaze direction detection is given as an example.
  • FIG. 5 is a schematic diagram illustrating calibration of the gaze direction detection according to the example. The gaze direction of the user 300 is determined by analyzing, with the first gaze detection unit 221 and the second gaze detection unit 222 in the gaze detection device 200, the image captured by the camera 116 and transferred to the gaze detection device 200 by the first communication unit 118. Only operation of the first gaze detection unit 221 is described here, but operation of the second gaze detection unit 222 is identical.
  • As shown in FIG. 5, the video output unit 224 outputs nine points, Q1 to Q9, (marker image) that are displayed by the image display element 108 of the head mounted display 100. The gaze detection device 200 instructs the user 300 to look at the points Q1 up to Q9 sequentially. At this time, the user 300 is requested to look at each of the points without moving the neck and, to the extent possible, only moving the eyeballs. The camera 116 captures images containing the cornea 302 of the user 300 while the gaze of the user 300 is pointing at each of the nine points from Q1 to Q9.
  • FIG. 6 is a schematic diagram illustrating the position coordinates of the cornea 302 of the user 300. The first gaze detection unit 221 contained in the gaze detection device 200 analyzes the images captured by the camera 116, and detects bright spots 105 of the infrared light. While the user 300 looks at the points by turning the eyeballs only, the positions of the bright spots 105 are considered to be stationary whichever single point the user's gaze is directed at. Thus, on the basis of the detected bright spots 105, the first gaze detection unit 221 sets a two-dimensional coordinate system 306 in the image captured by the camera 116.
  • Further, the first gaze detection unit 221 detects the center P of the cornea 302 of the user 300 by analyzing the image captured by the camera 116. This is achieved by using already known image processing techniques such as the Hough transform or edge extraction. Thereby, the gaze detection unit 221 obtains the coordinates of the center P of the cornea 302 of the user 300 in the previously-set two-dimensional coordinate system 306.
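  • The disclosure does not prescribe a particular implementation of this step, but as an illustrative sketch only, the cornea (pupil) center of a near-infrared eye image could be estimated with the circular Hough transform using the OpenCV library as follows; the function name estimate_cornea_center and all radius and threshold values are hypothetical choices made for this example.

```python
import cv2

def estimate_cornea_center(eye_image_gray):
    """Illustrative sketch: estimate the center P of the cornea/pupil in an
    8-bit grayscale near-infrared eye image using the circular Hough transform."""
    # Smooth the image to suppress sensor noise before circle detection.
    blurred = cv2.medianBlur(eye_image_gray, 5)

    # Detect circular contours (pupil / corneal limbus); the parameter values
    # below are assumptions for illustration, not values from the disclosure.
    circles = cv2.HoughCircles(
        blurred,
        cv2.HOUGH_GRADIENT,
        dp=1,
        minDist=50,
        param1=100,   # upper Canny edge threshold
        param2=30,    # accumulator threshold; lower values find more circles
        minRadius=10,
        maxRadius=80,
    )
    if circles is None:
        return None

    # Use the strongest detected circle; its center serves as the coordinates of P.
    x, y, _radius = circles[0][0]
    return float(x), float(y)
```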
  • As shown in FIG. 5, the coordinates of the displayed points Q1 to Q9 in the two-dimensional coordinate system set for the display screen of the image display element 108 are $Q_1(x_1, y_1)^T$, $Q_2(x_2, y_2)^T$, \ldots, $Q_9(x_9, y_9)^T$. The coordinates, for example of the center of each point, are given by pixel numbers. The center points P of the cornea 302 of the user 300, measured while the user 300 gazes at each of the points Q1 to Q9, are labeled P1 to P9. The coordinates of the points P1 to P9 in the two-dimensional coordinate system 306 are $P_1(X_1, Y_1)^T$, $P_2(X_2, Y_2)^T$, \ldots, $P_9(X_9, Y_9)^T$, wherein T represents a transposed vector or matrix.
  • A matrix M with the size of 2×2 is now defined by equation (1):
  • $$M = \begin{pmatrix} m_{11} & m_{12} \\ m_{21} & m_{22} \end{pmatrix} \qquad (1)$$
  • If the matrix M then satisfies equation (2), the matrix M becomes a matrix for projecting the gaze direction of the user 300 to the image plane of the image display element 108:

  • $$Q_N = M P_N \quad (N = 1, \ldots, 9) \qquad (2)$$
  • If the aforementioned equation (2) is expanded, equation (3) is obtained:
  • $$\begin{pmatrix} x_1 & x_2 & \cdots & x_9 \\ y_1 & y_2 & \cdots & y_9 \end{pmatrix} = \begin{pmatrix} m_{11} & m_{12} \\ m_{21} & m_{22} \end{pmatrix} \begin{pmatrix} X_1 & X_2 & \cdots & X_9 \\ Y_1 & Y_2 & \cdots & Y_9 \end{pmatrix} \qquad (3)$$
  • Equation (4) is obtained by rearranging equation (3):
  • $$\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_9 \\ y_1 \\ y_2 \\ \vdots \\ y_9 \end{pmatrix} = \begin{pmatrix} X_1 & Y_1 & 0 & 0 \\ X_2 & Y_2 & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots \\ X_9 & Y_9 & 0 & 0 \\ 0 & 0 & X_1 & Y_1 \\ 0 & 0 & X_2 & Y_2 \\ \vdots & \vdots & \vdots & \vdots \\ 0 & 0 & X_9 & Y_9 \end{pmatrix} \begin{pmatrix} m_{11} \\ m_{12} \\ m_{21} \\ m_{22} \end{pmatrix} \qquad (4)$$
  • If y, A, and x are defined as
  • $$y = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_9 \\ y_1 \\ y_2 \\ \vdots \\ y_9 \end{pmatrix}, \quad A = \begin{pmatrix} X_1 & Y_1 & 0 & 0 \\ X_2 & Y_2 & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots \\ X_9 & Y_9 & 0 & 0 \\ 0 & 0 & X_1 & Y_1 \\ 0 & 0 & X_2 & Y_2 \\ \vdots & \vdots & \vdots & \vdots \\ 0 & 0 & X_9 & Y_9 \end{pmatrix}, \quad x = \begin{pmatrix} m_{11} \\ m_{12} \\ m_{21} \\ m_{22} \end{pmatrix}$$
  • equation (5) is obtained:

  • $$y = Ax \qquad (5)$$
  • In equation (5), the elements of the vector y are already known because these are the coordinates of the points Q1 to Q9 displayed by the first gaze detection unit 221 on the image display element 108. Further, the elements of the matrix A are the coordinates of the vertex P of the cornea 302 of the user 300 and can be measured. Thus, the first gaze detection unit 221 can determine the vector y and the matrix A. The vector x, in which the elements of the conversion matrix M are arranged, is still unknown. Since the vector y and the matrix A are known, the problem of estimating the matrix M becomes a problem of calculating the as yet unknown vector x.
  • Equation (5) is an overdetermined problem if the number of equations (that is, the number of points Q presented to the user 300 during calibration by the first gaze detection unit 221) is larger than the number of the unknowns (that is, the four elements of the vector x). Since the number of equations is nine in the example shown in equation (5), it is an overdetermined problem.
  • An error vector between the vector y and the vector Ax is defined as the vector e, that is, $e = y - Ax$. An optimal vector $x_{\mathrm{opt}}$, in the sense of minimizing the sum of squares of the elements of the vector e, can be calculated from equation (6):
  • $$x_{\mathrm{opt}} = (A^T A)^{-1} A^T y \qquad (6)$$
  • “−1” represents matrix inversion.
  • The first gaze detection unit 221 uses the elements of the calculated vector $x_{\mathrm{opt}}$ to compose the matrix M of equation (1). Accordingly, using the coordinates of the vertex P of the cornea 302 of the user 300 and the matrix M, the first gaze detection unit 221 estimates, using equation (2), the point at which the right eye of the user 300 gazes within the two-dimensional range of the video image displayed on the image display element 108. Thereby, it becomes possible for the first gaze detection unit 221 to calculate the gaze vector linking the gaze point of the right eye on the image display element 108 and the vertex of the cornea of the right eye of the user. Similarly, the second gaze detection unit 222 can calculate the gaze vector for the left eye, connecting the gaze point of the left eye on the image display element 108 and the vertex of the cornea of the left eye of the user.
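  • Expressed as code, the calibration described by equations (1) to (6) is an ordinary least-squares fit. The following minimal sketch assumes that the nine displayed points Q1 to Q9 and the corresponding measured cornea centers P1 to P9 are available as 9-by-2 arrays; the function names are illustrative only.

```python
import numpy as np

def fit_projection_matrix(Q, P):
    """Estimate the 2x2 matrix M of equation (1) such that Q_N ≈ M P_N (equation (2)).

    Q, P: arrays of shape (9, 2) holding the screen points Q1..Q9 and the
    measured cornea centers P1..P9.
    """
    n = Q.shape[0]
    # Build A and y as in equations (4) and (5).
    A = np.zeros((2 * n, 4))
    A[:n, 0] = P[:, 0]   # X_N
    A[:n, 1] = P[:, 1]   # Y_N
    A[n:, 2] = P[:, 0]
    A[n:, 3] = P[:, 1]
    y = np.concatenate([Q[:, 0], Q[:, 1]])

    # Least-squares solution of y = A x, equivalent to
    # x_opt = (A^T A)^{-1} A^T y in equation (6).
    x_opt, *_ = np.linalg.lstsq(A, y, rcond=None)
    return x_opt.reshape(2, 2)   # (m11, m12, m21, m22) -> M

def project_to_display(M, cornea_center):
    """Map a newly measured cornea center P to the gaze point on the display, per equation (2)."""
    return M @ np.asarray(cornea_center)
```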
  • FIG. 7 is a schematic diagram illustrating how the gaze location is determined according to example 1; it schematically shows how the left eye 302 a and the right eye 302 b of the user 300 view the three-dimensional image displayed by the head mounted display 100. The corresponding video image is generated by the video output unit 224, sent by the second communication unit 220 to the head mounted display 100, and displayed on the image display element 108 by the display unit 121.
  • As illustrated in FIG. 7, in the field of view of the user 300, the three-dimensional image shown by the head mounted display 100 contains a boy, a dog, a woman, a passenger car, and the driver in the car. For descriptive purposes, to show clearly that this is a three-dimensional image, the three axes, the x-axis, the y-axis, and the z-axis, are shown in FIG. 7, although these axes are not necessarily shown in the image.
  • FIG. 7 shows that the left eye gaze vector 701 a, determined on the basis of the gaze direction of the left eye 302 a of the user 300, and the right eye gaze vector 701 b, determined on the basis of the gaze direction of the right eye 302 b of the user 300, cross at the intersection point 702. The intersection point 702 is therefore the user's focus point and is referred to as the point at which the user gazes in the three-dimensional image. When the intersection point is obtained from the calculated gaze vectors of both eyes, it is possible to determine that the focus point is on the car driver in the background, even if another object (the boy) is displayed in the foreground from the user's point of view.
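  • As a minimal sketch, and assuming each gaze vector is represented as a three-dimensional ray with its origin at the corresponding eye, the intersection point 702 could be computed as follows; because measured rays rarely meet exactly, the midpoint of the shortest segment between them is used here, which is an assumption of this example rather than a requirement of the disclosure.

```python
import numpy as np

def gaze_intersection(origin_l, dir_l, origin_r, dir_r, eps=1e-9):
    """Return the point where the left-eye and right-eye gaze rays meet.

    If the rays are skew (the usual case with measurement noise), the midpoint
    of the shortest segment connecting them is returned instead.
    """
    d_l = dir_l / np.linalg.norm(dir_l)
    d_r = dir_r / np.linalg.norm(dir_r)
    w = origin_l - origin_r

    b = np.dot(d_l, d_r)
    d = np.dot(d_l, w)
    e = np.dot(d_r, w)
    denom = 1.0 - b * b          # approaches 0 when the rays are parallel
    if abs(denom) < eps:
        return None              # parallel gaze rays: no meaningful intersection

    t_l = (b * e - d) / denom    # parameter of the closest point on the left ray
    t_r = (e - b * d) / denom    # parameter of the closest point on the right ray
    p_l = origin_l + t_l * d_l
    p_r = origin_r + t_r * d_r
    return (p_l + p_r) / 2.0
```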
  • Operation of the gaze detection system 1 is explained with the help of a flowchart in FIG. 8. FIG. 8 is a flowchart explaining the operation of the gaze detection system 1.
  • The cameras 116 of the head mounted display 100 take images of the right eye, including the infrared light reflected by the right eye, and of the left eye, including the infrared light reflected by the left eye (Step S801). The image capturing unit 124 transfers the right eye image and the left eye image acquired by the cameras 116 to the image processing unit 123. The image processing unit 123 performs predefined processing on the transferred image data and delivers the processed data to the first communication unit 118. The first communication unit 118 then transmits the processed image data received from the image processing unit 123 to the gaze detection device 200.
  • The second communication unit 220 of the gaze detection device 200 receives the image data and transfers the right eye image to the first gaze detection unit 221 and the left eye image to the second gaze detection unit 222.
  • The first gaze detection unit 221 uses the transferred right eye image and, applying the above formulas, determines the gaze point of the right eye on the image display element 108. The gaze vector of the right eye is then calculated by connecting the gaze point coordinates with the vertex P of the cornea of the right eye (Step S802). The first gaze detection unit 221 transfers the calculated right eye gaze vector to the tracking unit 223.
  • The second gaze detection unit 222 uses the transferred left eye image and, applying the above formulas, determines the gaze point of the left eye on the image display element 108. The gaze vector of the left eye is then calculated by connecting the gaze point coordinates with the vertex P of the cornea of the left eye (Step S802). The second gaze detection unit 222 transfers the calculated left eye gaze vector to the tracking unit 223.
  • The tracking unit 223 computes the intersection point between the transferred right eye gaze vector and the transmitted left eye gaze vector (Step S803).
  • The tracking unit 223 transforms the computed intersection point to the coordinate system of the three-dimensional space of the three-dimensional image generated by the video output unit 224 (Step S804).
  • The tracking unit 223 determines from the transformed intersection point coordinates the location where the user is looking in the three-dimensional image space. (Step S805).
  • The processing illustrated in FIG. 8 is performed serially, with the gaze detection system 1 identifying the user's gaze location as necessary.
  • According to the example, the gaze detection system 1 can obtain both the user's right eye gaze direction and the left eye gaze direction. Since the intersection point can also be determined in the depth direction of the three-dimensional image, it is possible to identify the object that the user is gazing at even if various objects are superimposed in the three-dimensional image.
  • SECOND EXAMPLE
  • In the first example described above, the location where the user 300 is gazing in the head mounted display 100 was determined from the intersection point of the right eye gaze vector, corresponding to the gaze direction of the right eye, and the left eye gaze vector, corresponding to the gaze direction of the left eye, of the user 300.
  • However, the right eye gaze vector computed by the first gaze detection unit 221 and the left eye gaze vector computed by the second gaze detection unit 222 do not necessarily intersect.
  • Thus, in this example, a method is described to identify the approximate location of the user's gaze point even when the right eye gaze vector and the left eye gaze vector have no intersection point.
  • In the second example, only the identification method used by the tracking unit 223 is different; the other parts are common with the first example and, therefore, detailed explanations are omitted except for the tracking unit 223.
  • In the second example, a three-dimensional image is displayed in multiple layers, and the tracking unit 223 identifies the user's gaze location (layer) by calculating, for each layer, the distance between the intersection point of the user's right eye gaze vector with that layer and the intersection point of the left eye gaze vector with the same layer, and then finding the layer with the shortest distance between the intersection points. In more detail, the intersection points of the right eye gaze vector with layers L1, L2, L3, . . . are Pr1, Pr2, Pr3, . . . , and the intersection points of the left eye gaze vector with the layers L1, L2, L3, . . . are Pl1, Pl2, Pl3, . . . . The distance between the intersection points Pr1 and Pl1 is D1, the distance between Pr2 and Pl2 is D2, the distance between Pr3 and Pl3 is D3, and so on. The tracking unit 223 finds the shortest of the calculated intersection point distances D1, D2, D3, . . . . The user's gaze location (layer) is thus determined by selecting the layer for which the distance between the intersection points is the smallest.
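  • A minimal sketch of this layer selection follows, under the assumption that each layer can be described as a plane given by a point on it and its normal vector; the helper names and the layer representation are illustrative assumptions, not part of the disclosure.

```python
import numpy as np

def ray_plane_intersection(origin, direction, plane_point, plane_normal, eps=1e-9):
    """Intersect a gaze ray with a planar layer; returns None if they are parallel."""
    denom = np.dot(direction, plane_normal)
    if abs(denom) < eps:
        return None
    t = np.dot(plane_point - origin, plane_normal) / denom
    return origin + t * direction

def select_gazed_layer(layers, origin_r, gaze_r, origin_l, gaze_l):
    """Select the layer whose right-eye and left-eye intersection points
    (Pr_i and Pl_i) lie closest together, i.e. the layer with the smallest D_i."""
    best_layer, best_distance = None, np.inf
    for layer in layers:                       # each layer = (plane_point, plane_normal)
        pr = ray_plane_intersection(origin_r, gaze_r, *layer)
        pl = ray_plane_intersection(origin_l, gaze_l, *layer)
        if pr is None or pl is None:
            continue
        distance = np.linalg.norm(pr - pl)     # D_i for this layer
        if distance < best_distance:
            best_layer, best_distance = layer, distance
    return best_layer
```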
  • FIG. 9 illustrates schematically the method of determining the gaze location according to the second example. In this example, it is assumed that an image similar to the picture in FIG. 9 is displayed in the head mounted display 100. The difference from example 1 is that several kinds of information are shown in a menu image 910 and a menu image 920 displayed within the three-dimensional image. These menu images 910, 920 are displayed in a three-dimensional image that is structured in a plurality of layers. This image is generated by the video output unit 224, transferred to the second communication unit 220, and sent to the head mounted display 100, where the image is displayed by the display unit 121 on the image display element 108.
  • In the second example, an example is given in which the menu image (layer) that the user is gazing at is identified based on the intersection points where the left eye gaze vector 701 a and the right eye gaze vector 701 b of the user 300 intersect with the menu images 910, 920.
  • As shown more precisely in FIG. 9, the intersection point 911 a is defined by the intersection of the left eye gaze vector 701 a of the user 300 with the menu image 910, and the intersection point 911 b is defined by the intersection of the right eye gaze vector 701 b of the user 300 with the menu image 910. The distance between the intersection point 911 a and the intersection point 911 b is D1.
  • On the other hand, as shown in FIG. 9, the intersection point 921 a is defined by the intersection of the left eye gaze vector 701 a of the user 300 with the menu image 920, and the intersection point 921 b is defined by the intersection of the right eye gaze vector 701 b of the user 300 with the menu image 920. The distance between the intersection point 921 a and the intersection point 921 b is D2.
  • Of the distances D1 and D2 calculated in this way, the shorter one defines the location (layer) at which the gaze of the user 300 is pointing.
  • FIG. 10 shows a flowchart describing operation of the gaze detection system 1 according to the second example. This flowchart is identical to the first example up to the step S802 and explanation of the operation up to that step is omitted.
  • The tracking unit 223 uses the transferred right eye gaze vector and left eye gaze vector of the user to calculate their intersection points with each of the layers displayed in the three-dimensional image (Step S1003). That is, in the example in FIG. 9, the tracking unit 223 calculates the intersection points for the menu image 910 and for the menu image 920.
  • The tracking unit 223 calculates the intersection point distances between the right eye gaze vector intersection point and the left eye gaze vector intersection point for each layer (Step S1004).
  • The tracking unit 223 finds the shortest distance among the distances between the calculated intersection points. The user's gaze location (layer) is then determined by selecting the layer for which the distance between the intersection points is the shortest. (Step S1005).
  • Although in the second example determining the gaze point was described for the case where a menu or other image data is displayed in a layered structure as shown in FIG. 9, the three-dimensional image layers may also be a plurality of layers stacked along the x-axis direction (depth direction).
  • For example, when virtual planar layers parallel to the y-z plane are present at x-axis coordinates x1, x2, and x3, the above method can be used to identify the layer that the user 300 is gazing at as the virtual layer having the shortest distance between the intersection points.
  • According to the second example, the method used in the gaze detection system 1 to determine the place where a user is gazing in a three-dimensional image is effective when the calculated right eye gaze vector and the left eye gaze vector do not have an intersection point. Also, when a three-dimensional image is considered to consist of a plurality of layers, it becomes easy to determine the user's gaze location by calculating the intersection point distances.
  • THIRD EXAMPLE
  • In the second example, the three-dimensional image was made up of a plurality of layers and the intersection points of the gaze vectors of the user 300 with each layer were used to find the layer for which the distance between the intersection points was the shortest, allowing the layer that the user was gazing at to be determined.
  • In the third example, a method of determining the user's gaze point is disclosed for the case where the three-dimensional image does not have a layered structure and the right eye gaze vector and the left eye gaze vector do not intersect.
  • In the third example, only the detection method used by the tracking unit 223 is different, while the other functions are the same as in the first and second examples, and detailed explanation is omitted except for the tracking unit 223.
  • The difference in the operation of the tracking unit 223 from examples 1 and 2 is that a cylinder with a given radius is centered on the left eye gaze vector 701 a, and another cylinder with a given radius is centered on the right eye gaze vector 701 b, and an object in the three-dimensional image that is closest to the intersection region of the cylinders is identified as the point that the user is gazing at.
  • FIG. 11 is a schematic illustration of how the gaze location of the user 300 is determined in example 3. The relevant image is generated by the video output unit 224, sent to the head mounted display 100 by the second communication unit 220, and displayed on the image display element 108 by the display unit 121. As shown in FIG. 11, a cylinder 1101 a with a given radius is assumed to be centered on the user's left eye gaze vector 701 a. In the same way, a cylinder 1101 b with a given radius is assumed to be centered on the user's right eye gaze vector 701 b. The region 1102, where the cylinder 1101 a and the cylinder 1101 b intersect, is identified as the place where the user 300 is gazing. In the illustrated case, the user 300 is gazing at the boy in the three-dimensional image.
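  • As an illustration only, one way such an intersection region could be approximated is sketched below: the gaze vectors are treated as the cylinder axes, the region is taken to exist where the shortest distance between the axes does not exceed the sum of the radii, and its center is placed at the midpoint of the closest-approach segment. This simplification, and every name in the sketch, are assumptions made for this example rather than the calculation prescribed here.

```python
import numpy as np

def cylinder_intersection_center(origin_l, dir_l, radius_l,
                                 origin_r, dir_r, radius_r, eps=1e-9):
    """Approximate the center of the region where a cylinder around the left-eye
    gaze vector and a cylinder around the right-eye gaze vector overlap."""
    d_l = dir_l / np.linalg.norm(dir_l)
    d_r = dir_r / np.linalg.norm(dir_r)
    w = origin_l - origin_r

    b = np.dot(d_l, d_r)
    d = np.dot(d_l, w)
    e = np.dot(d_r, w)
    denom = 1.0 - b * b
    if abs(denom) < eps:
        return None                        # parallel axes

    t_l = (b * e - d) / denom
    t_r = (e - b * d) / denom
    p_l = origin_l + t_l * d_l             # closest point on the left-eye axis
    p_r = origin_r + t_r * d_r             # closest point on the right-eye axis

    if np.linalg.norm(p_l - p_r) > radius_l + radius_r:
        return None                        # axes too far apart: the cylinders miss
    return (p_l + p_r) / 2.0               # rough center of the overlap region
```

  • The object in the three-dimensional image closest to the returned point would then be treated as the gaze target, in line with the identification step described with FIG. 12 below.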
  • FIG. 12 is a flowchart describing the operation of the gaze detection system 1 according to the third example. The flowchart up to the step S801 is common with the first example, and explanation up to that point is omitted.
  • The left eye gaze vector and the right eye gaze vector are transferred to the tracking unit 223, which uses formula F to calculate the cylinder 1101 a centered on the left eye gaze vector. Also, the tracking unit 223 uses formula G to calculate the cylinder 1101 b, centered on the right eye gaze vector (step S1203).
  • Based on formula F and formula G, the tracking unit 223 determines the area 1102 where the cylinder 1101 a and the cylinder 1101 b intersect (step S1203).
  • The tracking unit 223 then finds a layer or an object that is closest to the determined area and identifies that object as the place where the user is gazing.
  • In example 3, the functions of the two abovementioned cylinders need to be evaluated, which increases the computational complexity of the intersection region calculation and causes the processing load of the gaze detection device 200 to become large. Therefore, to reduce the number of required operations, the following method may be used.
  • The tracking unit 223 calculates a plurality of right eye parallel vectors that are parallel to the right eye gaze vector and centered on it. In the same way, the tracking unit 223 calculates a plurality of left eye parallel vectors that are parallel to the left eye gaze vector and centered on it. Intersection points are then calculated for each right eye parallel vector with each left eye parallel vector. The tracking unit 223 calculates the center point of the plurality of obtained intersection points and defines the three-dimensional image coordinates corresponding to the calculated center point as the gaze location of the user 300.
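  • A minimal sketch of this reduced-cost variant is shown below, reusing the gaze_intersection helper from the sketch given after FIG. 7. The way the parallel vectors are generated (a small ring of offsets perpendicular to each gaze vector) and the numeric parameters are assumptions for illustration, and closest-approach midpoints are used in place of exact intersection points, since offset rays generally do not meet exactly.

```python
import numpy as np

def offset_rays(origin, direction, radius=0.005, count=8):
    """Generate rays parallel to a gaze vector, offset on a ring around it."""
    d = direction / np.linalg.norm(direction)
    # Two unit vectors perpendicular to the gaze direction span the offset ring.
    helper = np.array([1.0, 0.0, 0.0]) if abs(d[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(d, helper)
    u /= np.linalg.norm(u)
    v = np.cross(d, u)
    angles = np.linspace(0.0, 2.0 * np.pi, count, endpoint=False)
    return [(origin + radius * (np.cos(a) * u + np.sin(a) * v), d) for a in angles]

def gaze_center_from_parallel_vectors(origin_l, dir_l, origin_r, dir_r):
    """Average the pairwise near-intersection points of the two ray bundles;
    the mean is taken as the gaze location of the user."""
    points = []
    for o_l, d_l in offset_rays(origin_l, dir_l):
        for o_r, d_r in offset_rays(origin_r, dir_r):
            p = gaze_intersection(o_l, d_l, o_r, d_r)   # helper from the earlier sketch
            if p is not None:
                points.append(p)
    return np.mean(points, axis=0) if points else None
```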
  • According to the third example, it is possible to determine the gaze location of the user 300 even if the user's left eye gaze vector 701 a and the right eye gaze vector 701 b have no intersection point and the three-dimensional image does not have a layered structure.
  • Our gaze detection system is not limited to examples 1 to 3; it is clear that other approaches realizing the spirit of our systems and methods can be used.
  • For example, although the third example described how to detect the intersection region of two cylinders, shapes other than cylinders, for example rectangular columns, can be used for the calculation as well. The use of a square column instead of a cylinder simplifies the calculation.
  • In the above examples, although a technique to detect the gaze of the user 300 by acquiring an image of an eye reflected from the hot mirror 112 was described, the mirror can be omitted and the eye of the user 300 may be imaged directly.
  • In the aforementioned examples, the processor of the gaze detection device 200 executes a gaze detection or other programs and determines the point at which the user gazes. The gaze detection device 200 may be implemented as an IC (integrated circuit), LSI (Large-Scale integration), or other dedicated logic circuit. The implementation may consist of one or more circuits, and the functions of a plurality of functional units shown in the aforementioned example may be implemented in a single integrated circuit. Depending on the degree of integration, the LSI may be referred to as VLSI, super LSI, ultra LSI or the like.
  • The gaze detection program may be recorded on a recording medium that is readable by a processor. A “non-transitory tangible medium” such as a tape, a disc, a card, a semiconductor memory, a programmable logic circuit or the like is used as the recording medium.
  • The aforementioned gaze detection program may be provided to the aforementioned processor through any transmission medium (such as a communication network, a wireless link, or the like) that can transmit the program. The gaze detection program may also be implemented in the form of a data signal embedded in a carrier wave.
  • The aforementioned gaze detection program may be implemented by using, for example, a script language such as ActionScript or JavaScript (registered trademark), an object-oriented language such as Objective-C or Java (registered trademark), or a markup language such as HTML5.

Claims (6)

1. A gaze detection system comprising:
a head mounted display, mounted on a user's head for use, comprising,
a first image capturing device imaging the right eye;
a second image capturing device imaging the left eye; and
a display unit displaying a three-dimensional image, and
a gaze detection device detecting a gaze of the user comprising,
a first detection unit detecting a gaze direction of a user's right eye based on an image captured by the first image capturing device;
a second detection unit detecting a gaze direction of a user's left eye based on an image captured by the second image capturing device; and
a tracking unit that determines a gaze point of the user in the three-dimensional image on the basis of the gaze direction of the right eye and the gaze direction of the left eye,
in which the first detection unit calculates a right eye gaze vector indicating the gaze direction of the right eye, and the second detection unit calculates a left eye gaze vector indicating the gaze direction of the left eye,
wherein the display unit displays a three-dimensional image consisting of a plurality of layers in a depth direction, and the tracking unit identifies the layer that the user is gazing at by selecting the layer having a shortest distance between intersection points of the right eye gaze vector and the left eye gaze vector with each layer.
2. The gaze detection system of claim 1, wherein the tracking unit identifies the user's gaze location based on the intersection point between the right eye gaze vector and the left eye gaze vector.
3. The gaze detection system of claim 1, wherein the tracking unit identifies the user's gaze location based on the intersection region of a cylinder with a given radius, centered on the right eye gaze vector, and a cylinder with a given radius, centered on the left eye gaze vector.
4. The gaze detection system according to claim 1, wherein the tracking unit identifies the gaze location of the user based on the intersection points of the first plurality of parallel vectors parallel to the right eye gaze vector and the second plurality of parallel vectors parallel to the left eye gaze vector.
5. A method of determining a gaze point of a user in a gaze detection system comprising a head mounted display worn on the head of the user and a gaze detection device determining the gaze point of the user, comprising:
displaying a three-dimensional image on the head mounted display;
acquiring images of a user's right eye and left eye;
transferring the acquired image data to the gaze detection device;
determining in the gaze detection device a user's right eye gaze direction vector showing the gaze direction of the user's right eye based on an acquired image of the right eye;
determining in the gaze detection device a user's left eye gaze direction vector showing the gaze direction of the user's left eye based on the acquired image of the left eye; and
determining a location that the user is gazing at in the three-dimensional image based on the right eye gaze direction and the left eye gaze direction,
in which the head mounted display displays a three-dimensional image consisting of a plurality of layers in the depth direction, wherein
the detection device identifies a layer the user is gazing at by selecting the layer having a shortest distance between the intersection points of the right eye gaze vector and the left eye gaze vector with each layer.
6. A gaze detection program executing on a computer that determines a gaze point of a user wearing a head mounted display that displays a three-dimensional image, wherein the computer comprises:
a function of obtaining the image data of a right eye and a left eye of the user wearing the head mounted display;
a function of detecting a right eye gaze direction of the right eye based on the image data of the right eye;
a function of detecting a left eye gaze direction of the left eye based on the image data of the left eye;
a function of determining a gaze point of the user in the three-dimensional image based on the right eye gaze direction and the left eye gaze direction;
a function of calculating a right eye gaze vector indicating the gaze direction of the right eye;
a function of calculating a left eye gaze vector indicating the gaze direction of the left eye; and
a function of displaying a three-dimensional image consisting of a plurality of layers in a depth direction, wherein
the function of detecting the gaze direction identifies the layer that the user is gazing at by selecting the layer having a shortest distance between intersection points of the right eye gaze vector and the left eye gaze vector with each layer.
US15/842,120 2015-11-27 2017-12-14 Gaze detection system, gaze point detection method, and gaze point detection program Abandoned US20180106999A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/842,120 US20180106999A1 (en) 2015-11-27 2017-12-14 Gaze detection system, gaze point detection method, and gaze point detection program

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
PCT/JP2015/083484 WO2017090203A1 (en) 2015-11-27 2015-11-27 Line-of-sight detection system, gaze point identification method, and gaze point identification program
US15/318,709 US9880384B2 (en) 2015-11-27 2015-11-27 Gaze detection system, gaze point detection method, and gaze point detection program
US15/842,120 US20180106999A1 (en) 2015-11-27 2017-12-14 Gaze detection system, gaze point detection method, and gaze point detection program

Related Parent Applications (2)

Application Number Title Priority Date Filing Date
US15/318,709 Continuation US9880384B2 (en) 2015-11-27 2015-11-27 Gaze detection system, gaze point detection method, and gaze point detection program
PCT/JP2015/083484 Continuation WO2017090203A1 (en) 2015-11-27 2015-11-27 Line-of-sight detection system, gaze point identification method, and gaze point identification program

Publications (1)

Publication Number Publication Date
US20180106999A1 true US20180106999A1 (en) 2018-04-19

Family

ID=58763231

Family Applications (2)

Application Number Title Priority Date Filing Date
US15/318,709 Active US9880384B2 (en) 2015-11-27 2015-11-27 Gaze detection system, gaze point detection method, and gaze point detection program
US15/842,120 Abandoned US20180106999A1 (en) 2015-11-27 2017-12-14 Gaze detection system, gaze point detection method, and gaze point detection program

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US15/318,709 Active US9880384B2 (en) 2015-11-27 2015-11-27 Gaze detection system, gaze point detection method, and gaze point detection program

Country Status (5)

Country Link
US (2) US9880384B2 (en)
KR (1) KR101862499B1 (en)
CN (1) CN107111381A (en)
TW (1) TW201717840A (en)
WO (1) WO2017090203A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI695997B (en) * 2019-08-09 2020-06-11 大陸商業成科技(成都)有限公司 Eye tracking architecture design
US20210041945A1 (en) * 2019-06-14 2021-02-11 Tobii Ab Machine learning based gaze estimation with confidence
US20230102371A1 (en) * 2021-09-28 2023-03-30 Ramot At Tel-Aviv University Ltd. Automatic preference quantification of displayed objects based on eye tracker data

Families Citing this family (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107111381A (en) * 2015-11-27 2017-08-29 Fove股份有限公司 Line-of-sight detection systems, fixation point confirmation method and fixation point confirm program
CN205594581U (en) * 2016-04-06 2016-09-21 北京七鑫易维信息技术有限公司 Module is tracked to eyeball of video glasses
CN106125930A (en) * 2016-06-27 2016-11-16 上海乐相科技有限公司 A kind of virtual reality device and the method for main perspective picture calibration
CN106932905A (en) * 2017-02-27 2017-07-07 阿里巴巴集团控股有限公司 Virtual reality helmet
CN106908951A (en) * 2017-02-27 2017-06-30 阿里巴巴集团控股有限公司 Virtual reality helmet
DE102017129795A1 (en) 2017-06-30 2019-01-03 Lg Display Co., Ltd. DISPLAY DEVICE AND GATE-DRIVER CONTROL CIRCUIT THEREOF, CONTROL METHOD AND VIRTUAL-REALITY DEVICE
KR102495234B1 (en) * 2017-09-06 2023-03-07 삼성전자주식회사 Electronic apparatus, method for controlling thereof and the computer readable recording medium
US10254832B1 (en) * 2017-09-28 2019-04-09 Microsoft Technology Licensing, Llc Multi-item selection using eye gaze
JP6897467B2 (en) * 2017-10-02 2021-06-30 富士通株式会社 Line-of-sight detection device, line-of-sight detection program, and line-of-sight detection method
US11237628B1 (en) 2017-10-16 2022-02-01 Facebook Technologies, Llc Efficient eye illumination using reflection of structured light pattern for eye tracking
US11073903B1 (en) 2017-10-16 2021-07-27 Facebook Technologies, Llc Immersed hot mirrors for imaging in eye tracking
CN110134222A (en) * 2018-02-02 2019-08-16 上海集鹰科技有限公司 VR display positioning and aiming system and gaze positioning method thereof
US10521013B2 (en) * 2018-03-01 2019-12-31 Samsung Electronics Co., Ltd. High-speed staggered binocular eye tracking systems
CN108592865A (en) * 2018-04-28 2018-09-28 京东方科技集团股份有限公司 Geometric measurement method and device based on an AR device, and AR device
US11163166B1 (en) 2018-05-23 2021-11-02 Facebook Technologies, Llc Removable frames for head-mounted display systems
US10552986B1 (en) * 2018-07-20 2020-02-04 Banuba Limited Computer systems and computer-implemented methods configured to track multiple eye-gaze and heartrate related parameters during users' interaction with electronic computing devices
US10838132B1 (en) 2018-08-21 2020-11-17 Facebook Technologies, Llc Diffractive gratings for eye-tracking illumination through a light-guide
US10996748B2 (en) 2018-09-10 2021-05-04 Apple Inc. Gaze-dependent display encryption
US10852817B1 (en) 2018-11-02 2020-12-01 Facebook Technologies, Llc Eye tracking combiner having multiple perspectives
US10725302B1 (en) 2018-11-02 2020-07-28 Facebook Technologies, Llc Stereo imaging with Fresnel facets and Fresnel reflections
CN109756723B (en) 2018-12-14 2021-06-11 深圳前海达闼云端智能科技有限公司 Method and apparatus for acquiring image, storage medium and electronic device
US10771774B1 (en) * 2019-03-22 2020-09-08 Varjo Technologies Oy Display apparatus and method of producing images having spatially-variable angular resolutions
KR20200136297A (en) * 2019-05-27 2020-12-07 삼성전자주식회사 Augmented reality device for adjusting a focus region according to a direction of a user's view and method for operating the same
US11467370B2 (en) 2019-05-27 2022-10-11 Samsung Electronics Co., Ltd. Augmented reality device for adjusting focus region according to direction of user's view and operating method of the same
FR3100704B1 (en) * 2019-09-13 2021-08-27 E Swin Dev VERSATILE OPHTHALMOLOGICAL MEASUREMENT DEVICE
US11809622B2 (en) 2020-12-21 2023-11-07 Samsung Electronics Co., Ltd. Electronic device and method for eye-tracking of user and providing augmented reality service thereof
KR20220111575A (en) * 2021-02-02 2022-08-09 삼성전자주식회사 Electronic device and method for eye-tracking of user and providing augmented reality service thereof
DE102022203677A1 (en) * 2022-04-12 2023-10-12 Robert Bosch Gesellschaft mit beschränkter Haftung Method for determining a current viewing direction of a user of data glasses with a virtual retinal display and data glasses
CN116027910B (en) * 2023-03-29 2023-07-04 广州视景医疗软件有限公司 Eye bitmap generation method and system based on VR eye movement tracking technology

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2939988B2 (en) 1989-04-05 1999-08-25 キヤノン株式会社 Eye gaze detection device
JPH06337756A (en) 1993-05-28 1994-12-06 Daikin Ind Ltd Three-dimensional position specifying method and virtual space stereoscopic device
JP3361980B2 (en) * 1997-12-12 2003-01-07 株式会社東芝 Eye gaze detecting apparatus and method
US6806898B1 (en) * 2000-03-20 2004-10-19 Microsoft Corp. System and method for automatically adjusting gaze and head orientation for video conferencing
CN102830793B (en) * 2011-06-16 2017-04-05 北京三星通信技术研究有限公司 Gaze tracking method and device
US9342610B2 (en) * 2011-08-25 2016-05-17 Microsoft Technology Licensing, Llc Portals: registered objects as virtualized, personalized displays
US9380287B2 (en) 2012-09-03 2016-06-28 SensoMotoric Instruments Gesellschaft für Innovative Sensorik mbH Head mounted system and method to compute and render a stream of digital images using a head mounted display
CN103020983B (en) * 2012-09-12 2017-04-05 深圳先进技术研究院 Human-computer interaction device and method for target tracking
US20150003819A1 (en) * 2013-06-28 2015-01-01 Nathan Ackerman Camera auto-focus based on eye gaze
US9922253B2 (en) * 2013-10-11 2018-03-20 Interdigital Patent Holdings, Inc. Gaze-driven augmented reality
JP6608137B2 (en) 2014-01-03 2019-11-20 Harman International Industries, Incorporated Detection of eye vergence on a display
US10067561B2 (en) * 2014-09-22 2018-09-04 Facebook, Inc. Display visibility based on eye convergence
CN104834381B (en) * 2015-05-15 2017-01-04 中国科学院深圳先进技术研究院 Wearable device for gaze focus localization and gaze focus localization method

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080158507A1 (en) * 2005-05-19 2008-07-03 Rodenstock Gmbh Series of Spectacle Lenses and Associated Production Method
US9329683B2 (en) * 2010-12-08 2016-05-03 National University Corporation Shizuoka University Method for detecting point of gaze and device for detecting point of gaze
US20180160079A1 (en) * 2012-07-20 2018-06-07 Pixart Imaging Inc. Pupil detection device
US20140184475A1 (en) * 2012-12-27 2014-07-03 Andras Tantos Display update time reduction for a near-eye display
US20160048204A1 (en) * 2014-01-23 2016-02-18 Jason Scott Gaze swipe selection
US20170098330A1 (en) * 2015-07-14 2017-04-06 Colopl, Inc. Method for controlling head mounted display, and program for controlling head mounted display
US9880384B2 (en) * 2015-11-27 2018-01-30 Fove, Inc. Gaze detection system, gaze point detection method, and gaze point detection program

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210041945A1 (en) * 2019-06-14 2021-02-11 Tobii Ab Machine learning based gaze estimation with confidence
TWI695997B (en) * 2019-08-09 2020-06-11 大陸商業成科技(成都)有限公司 Eye tracking architecture design
US20230102371A1 (en) * 2021-09-28 2023-03-30 Ramot At Tel-Aviv University Ltd. Automatic preference quantification of displayed objects based on eye tracker data
US11880504B2 (en) * 2021-09-28 2024-01-23 Ramot At Tel-Aviv University Ltd. Automatic preference quantification of displayed objects based on eye tracker data

Also Published As

Publication number Publication date
KR101862499B1 (en) 2018-05-29
TW201717840A (en) 2017-06-01
WO2017090203A1 (en) 2017-06-01
US20170285337A1 (en) 2017-10-05
US9880384B2 (en) 2018-01-30
KR20170082457A (en) 2017-07-14
CN107111381A (en) 2017-08-29

Similar Documents

Publication Publication Date Title
US9880384B2 (en) Gaze detection system, gaze point detection method, and gaze point detection program
US10409368B2 (en) Eye-gaze detection system, displacement detection method, and displacement detection program
US10521026B2 (en) Passive optical and inertial tracking in slim form-factor
US10146335B2 (en) Modular extension of inertial controller for six DOF mixed reality input
US11747915B2 (en) Smart ring for manipulating virtual objects displayed by a wearable device
US20180007258A1 (en) External imaging system, external imaging method, external imaging program
KR101883090B1 (en) Head mounted display
EP3422153A1 (en) System and method for selective scanning on a binocular augmented reality device
CN114761909A (en) Content stabilization for head-mounted displays
US20170344112A1 (en) Gaze detection device
US20220382064A1 (en) Metalens for use in an eye-tracking system of a mixed-reality display device
JP2018026120A (en) Eye-gaze detection system, displacement detection method, and displacement detection program
US20200213467A1 (en) Image display system, image display method, and image display program
US20180182124A1 (en) Estimation system, estimation method, and estimation program
US20200379555A1 (en) Information processing system, operation method, and operation program
US11900058B2 (en) Ring motion capture and message composition system
US20190086677A1 (en) Head mounted display device and control method for head mounted display device
US20240061798A1 (en) Debug access of eyewear having multiple socs
US20240069688A1 (en) Head-Mounted Electronic Device with Magnification Tool
JP2017045068A (en) Head-mounted display

Legal Events

Date Code Title Description
AS Assignment

Owner name: FOVE, INC., JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:WILSON, LOCHLAINN;SEKO, KEIICHI;KOJIMA, YUKA;AND OTHERS;REEL/FRAME:044399/0599

Effective date: 20161214

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION