US20230393653A1 - Calibration for gaze detection - Google Patents

Calibration for gaze detection

Info

Publication number
US20230393653A1
Authority
US
United States
Prior art keywords
image
eye
user
gaze
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
US18/248,832
Other languages
English (en)
Inventor
Iakov CHERNYAK
Grigory CHERNYAK
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fove Inc
Original Assignee
Fove Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fove Inc filed Critical Fove Inc
Assigned to FOVE, INC. reassignment FOVE, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: CHERNYAK, Grigory, CHERNYAK, Iakov
Publication of US20230393653A1 publication Critical patent/US20230393653A1/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013 - Eye tracking input arrangements
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012 - Head tracking input arrangements
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/13 - Edge detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/50 - Depth or shape recovery
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/60 - Analysis of geometric attributes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00 - Image coding
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/10 - Image acquisition
    • G06V10/12 - Details of acquisition arrangements; Constructional details thereof
    • G06V10/14 - Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/141 - Control of illumination
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 - Eye characteristics, e.g. of the iris
    • G06V40/193 - Preprocessing; Feature extraction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20212 - Image combination
    • G06T2207/20221 - Image fusion; Image merging

Definitions

  • The present invention relates to a video system, a video generating method, a video distribution method, a video generating program, and a video distribution program, particularly in the context of a video system comprising a head-mounted display and a gaze detection device.
  • the calibration refers to causing a user to gaze at a specific indicator and specifying a position relationship between a position at which the specific indicator is displayed and a corneal center of the user gazing at the specific indicator.
  • A gaze detection system that performs calibration before gaze detection can specify the point at which a user is looking.
  • the present invention has been made in consideration of the above problems, and an object thereof is to provide a technology capable of accurately executing calibration for realizing gaze detection of a user wearing a head mounted display.
  • An aspect of the present invention is a method comprising: measuring a head rotation speed in a direction; measuring an eye rotation speed in the direction; and performing calibration of a gaze detection unit when the sum of the head rotation speed and the eye rotation speed is lower than a threshold.
  • According to the present invention, it is possible to provide a technology for detecting a gaze direction of a user wearing a head mounted display.
  • FIG. 1 shows a schematic overview of the video system 1 according to the first embodiment.
  • FIG. 2 is a block diagram illustrating the configuration of the video system 1 according to the embodiments.
  • FIG. 3 shows a diagram showing the location of each part.
  • FIG. 4 shows a flowchart of an eye-tracking method.
  • FIG. 5 shows a physical location of a virtual camera and a lens.
  • FIG. 6 shows camera images for lens shape.
  • FIG. 7 shows a flow chart of the process of 3D model based pupil prediction.
  • FIG. 8 shows an example of a scene image for calibration.
  • FIG. 9 shows a flowchart of the process of hidden calibration.
  • FIG. 10 shows a schematic overview of the video system.
  • FIG. 11 shows a flowchart of the process related to communication between the head-mounted display and the cloud server.
  • FIG. 12 shows a functional configuration diagram of the video system.
  • FIG. 13 shows another example of a functional configuration diagram of the video system.
  • FIG. 14 shows a graph showing the rotation speeds of head and eye.
  • FIG. 15 shows a physical structure of the eyeball.
  • FIG. 16 shows an example of a method of calibration of the ACD.
  • FIG. 17 shows the refraction model of the single point calibration.
  • FIG. 18 shows branches of the implicit calibration.
  • FIG. 19 shows the overview of the implicit calibration.
  • FIG. 20 shows a flow chart of the implicit calibration.
  • FIG. 1 shows a schematic overview of the video system 1 according to the first embodiment.
  • video system 1 comprises a head-mounted display 100 and a gaze detection device 200 .
  • the head-mounted display 100 is used while secured to the head of the user 300 .
  • a gaze detection device 200 detects a gaze direction of at least one of a right eye and a left eye of the user wearing the head mounted display 100 and specifies the user's focal point, that is, a point gazed at by the user in a three-dimensional image displayed on the head mounted display.
  • the gaze detection device 200 also functions as a video generation device that generates a video to be displayed by the head mounted display 100 .
  • the gaze detection device 200 is a device capable of reproducing videos of stationary game machines, portable game machines, PCs, tablets, smartphones, phablets, video players, TVs, or the like, but the present invention is not limited thereto.
  • the gaze detection device 200 is wirelessly or wiredly connected to the head mounted display 100. In the example illustrated in FIG. 1, the gaze detection device 200 is wirelessly connected to the head mounted display 100.
  • the wireless connection between the gaze detection device 200 and the head mounted display 100 can be realized using a known wireless communication technique such as Wi-Fi (registered trademark) or Bluetooth (registered trademark).
  • transfer of videos between the head mounted display 100 and the gaze detection device 200 is executed according to a standard such as Miracast (registered trademark), WiGig (registered trademark), or WHDI (registered trademark).
  • Other communication techniques may be used and, for example, acoustic communication techniques or optical transmission techniques may be used.
  • the head-mounted display 100 comprises a housing 150 , a fitting harness 160 , and headphones 170 .
  • the housing 150 encloses an image display system, such as an image display element for presenting video images to the user 300, and, not shown in the figure, a Wi-Fi (registered trademark) module, a Bluetooth (registered trademark) module, or another type of wireless communication module.
  • the head-mounted display 100 is secured to the head of the user 300 with a fitting harness 160 .
  • the fitting harness 160 may be implemented with the help of, for example, belts or elastic bands.
  • the headphones 170 output the audio of the video reproduced by the video generating device 200 .
  • the headphones 170 do not need to be fixed to the head-mounted display 100 . Even when the head-mounted display 100 is secured with the fitting harness 160 , the user 300 may freely put on or remove the headphones 170 .
  • FIG. 2 is a block diagram illustrating the configuration of the video system 1 according to the embodiments.
  • the head-mounted display 100 comprises a video presentation unit 110 , an imaging unit 120 , and a communication unit 130 .
  • the video presentation unit 110 presents a video to the user 300 .
  • the video presentation unit 110 may, for example, be implemented as a liquid crystal monitor or an organic EL (electroluminescence) display.
  • the imaging unit 120 captures images of the user's eye.
  • the imaging unit 120 may, for example, be implemented as a CCD (charge-coupled device), CMOS (complementary metal oxide semiconductor) or other image sensor disposed in the housing 150 .
  • the communication unit 130 provides a wireless or wired connection to the video generating device 200 for information transfer between the head-mounted display 100 and the video generating device 200 . Specifically, the communication unit 130 transfers images captured by the imaging unit 120 to the video generating device 200 , and receives video from the video generating device 200 for presentation by the video presentation unit 110 .
  • the communication unit 130 may be implemented as, for example, a Wi-Fi module, a Bluetooth (registered trademark) module or another wireless communication module.
  • The gaze detection device 200 shown in FIG. 2 will now be described.
  • the gaze detection device 200 comprises a communication unit 210 , a gaze detection unit 220 , a calibration unit 230 , and a storage unit 240 .
  • the communication unit 210 provides a wireless or wired connection to the head-mounted display 100 .
  • the communication unit 210 receives from the head-mounted display 100 images captured by the imaging unit 120 , and transmits video to the head-mounted display 100 .
  • the gaze detection unit 220 detects a gaze of the user viewing an image displayed on the head-mounted display 100, and generates gaze data.
  • the calibration unit 230 performs the calibration of the gaze detection.
  • the storage unit 240 stores data for gaze detection and calibration.
  • the eye-tracking with lens compensation may be a method comprising:
  • the method may further comprise:
  • FIG. 3 shows a schematic view of the eye-tracking with lens compensation.
  • FIG. 3 shows human eye, lens, virtual camera and the head mounted display screen.
  • The ray from the camera goes through the standard or Fresnel lens and reaches the human eye.
  • the gaze detection unit 220 uses the ray in order to compute eye-tracking.
  • a standard or Fresnel lens is provided between the camera and the human eye.
  • the gaze detection unit 220 detects glints and the pupil on the image of the human eye using rays from the camera to each of the glints and the pupil. In the eye-tracking with lens compensation, these rays pass through the lens, so the gaze detection unit 220 must compute this transfer.
  • the gaze detection unit 220 may compute a ray (the ray before the lens) from the camera to a glint position detected from the image, using the intrinsic and extrinsic matrices, which give the 3D ray for any 2D point (glint) on the camera image.
  • the gaze detection unit 220 may apply Snell's law ray tracing or use a precalculated transfer matrix in order to calculate the ray after the lens.
  • the gaze detection unit 220 uses this ray after lens in order to compute eye-tracking (gaze direction).
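  • As an illustration of the Snell's-law option mentioned above, the following is a minimal sketch (not the disclosed implementation) of vector-form refraction of a ray at a single lens surface; the function name, surface normal, and refractive indices are assumptions chosen for the example:

      import numpy as np

      def refract(direction, normal, n1, n2):
          """Refract a ray `direction` at a surface with normal `normal` using the
          vector form of Snell's law; returns None on total internal reflection."""
          d = direction / np.linalg.norm(direction)
          n = normal / np.linalg.norm(normal)
          cos_i = -np.dot(n, d)
          if cos_i < 0:                      # make the normal face the incoming ray
              n, cos_i = -n, -cos_i
          eta = n1 / n2
          k = 1.0 - eta ** 2 * (1.0 - cos_i ** 2)
          if k < 0.0:
              return None                    # total internal reflection
          return eta * d + (eta * cos_i - np.sqrt(k)) * n

      # Ray from the camera hits the front lens surface; refract it into the lens
      # (air -> acrylic; the vectors and indices are example values).
      ray_before_lens = np.array([0.0, 0.1, 1.0])
      surface_normal = np.array([0.0, 0.0, -1.0])
      ray_in_lens = refract(ray_before_lens, surface_normal, n1=1.0, n2=1.49)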
  • Lens compensation may also be done with polynomial fitting. Assume (x, y) represents a pixel on the camera image, (xp, yp) represents the x-y position on the lens, and (xd, yd, zd) represents the x-y-z direction of the ray from the lens. Then, for any pixel on the camera image, the gaze detection unit 220 can find the ray after it passes through the lens:
  • (x, y) can be anything that can be directly derived from the pixel coordinates, such as an angle in spherical coordinates. Further, (xd, yd, zd) can also have an alternative representation (e.g. spherical coordinates).
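  • A minimal sketch of such a polynomial fit is shown below, assuming a quadratic basis in (x, y) and training samples (for example produced offline by exact ray tracing) that pair each pixel with its after-lens values (xp, yp, xd, yd, zd); the function names and the basis degree are assumptions made for illustration:

      import numpy as np

      def basis(x, y):
          """Quadratic polynomial basis in the camera pixel coordinates (x, y)."""
          return np.stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2], axis=1)

      def fit_lens_model(px, py, targets):
          """Fit one polynomial per output channel (xp, yp, xd, yd, zd) from
          sampled pixel -> after-lens-ray correspondences (least squares)."""
          coeffs, *_ = np.linalg.lstsq(basis(px, py), targets, rcond=None)
          return coeffs                               # shape (6, 5)

      def ray_after_lens(coeffs, x, y):
          """Evaluate the fitted model for any pixel (x, y)."""
          out = basis(np.atleast_1d(float(x)), np.atleast_1d(float(y))) @ coeffs
          xp, yp, xd, yd, zd = out[0]
          direction = np.array([xd, yd, zd])
          return np.array([xp, yp]), direction / np.linalg.norm(direction)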
  • FIG. 4 shows a flowchart of an eye-tracking method.
  • the left shows a conventional flow, and the right shows the eye-tracking with lens compensation according to the present embodiment.
  • the gaze detection unit 220 obtains eye images from a camera. Then, the gaze detection unit 220 finds glints and a pupil by conducting image processing. The gaze detection unit 220 uses intrinsic and extrinsic matrices to get rays from the camera to every glint.
  • the gaze detection unit 220 transfers the rays through the lens.
  • the transfer is computed using the transfer matrices or the polynomial fitting described above.
  • the gaze detection unit 220 solves the inverse problem to find cornea center/radius.
  • the gaze detection unit 220 uses intrinsic and extrinsic matrices to get a ray from camera to the pupil.
  • the gaze detection unit 220 transfers this ray through the lens.
  • the gaze detection unit 220 intersects this ray with the cornea sphere.
  • the resulting optical axis is the vector from the cornea center to the 3D pupil position.
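  • The last two steps (intersecting the after-lens pupil ray with the cornea sphere and forming the optical axis) can be sketched as follows; the numeric values are assumed examples, not parameters from the disclosure:

      import numpy as np

      def intersect_sphere(origin, direction, center, radius):
          """Nearest intersection of a ray with a sphere, or None if it misses."""
          d = direction / np.linalg.norm(direction)
          oc = origin - center
          b = np.dot(oc, d)
          disc = b * b - (np.dot(oc, oc) - radius ** 2)
          if disc < 0:
              return None
          t = -b - np.sqrt(disc)
          if t < 0:
              t = -b + np.sqrt(disc)
          return origin + t * d if t >= 0 else None

      # Assumed example values: cornea sphere already recovered from the glint
      # rays, and the after-lens pupil ray expressed as origin + direction.
      cornea_center, cornea_radius = np.array([0.0, 0.0, 30.0]), 7.8
      pupil_origin, pupil_dir = np.zeros(3), np.array([0.01, 0.02, 1.0])

      pupil_3d = intersect_sphere(pupil_origin, pupil_dir, cornea_center, cornea_radius)
      optical_axis = (pupil_3d - cornea_center) / np.linalg.norm(pupil_3d - cornea_center)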
  • the camera optimization through lens fitting may be a method comprising:
  • FIG. 5 shows a physical location of a virtual camera and a lens.
  • The accuracy of the assumed camera position and orientation matters greatly when computing the gaze direction, since the ray from the camera to the user's eye passes through the lens.
  • FIG. 6 shows camera images for lens shape.
  • the left side picture shows an expected camera image when the orientation of the camera is correct.
  • the right side picture shows the camera image when the orientation of the camera is wrong.
  • the lens shape (white circle) is not located in the center of the image.
  • the calibration unit 230 runs a numerical optimization to correct the camera position and orientation. As the optimization cost function, the calibration unit 230 uses the error between the observed lens shape and the expected lens shape.
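  • A minimal sketch of such an optimization is shown below, assuming a pinhole projection of the known lens rim and a crude centre-plus-radius comparison as the cost; the intrinsics, rim geometry, and the synthetic "observed" contour are example assumptions rather than the disclosed cost function:

      import numpy as np
      from scipy.optimize import minimize
      from scipy.spatial.transform import Rotation

      def project(points_3d, rvec, tvec, fx=400.0, fy=400.0, cx=320.0, cy=240.0):
          # Pinhole projection under a candidate camera pose (example intrinsics).
          pc = Rotation.from_rotvec(rvec).apply(points_3d) + tvec
          return np.stack([fx * pc[:, 0] / pc[:, 2] + cx,
                           fy * pc[:, 1] / pc[:, 2] + cy], axis=1)

      def contour_stats(pts):
          # Crude summary of a 2D contour: centre and mean radius.
          c = pts.mean(axis=0)
          return c, np.mean(np.linalg.norm(pts - c, axis=1))

      # Known lens rim: a circle of assumed radius 20 mm at z = 35 mm in headset coordinates.
      theta = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
      lens_rim_3d = np.stack([20.0 * np.cos(theta), 20.0 * np.sin(theta),
                              np.full_like(theta, 35.0)], axis=1)

      # Synthetic "observed" lens contour, as if the camera were slightly misaligned.
      true_pose = np.array([0.02, -0.03, 0.0, 1.0, -0.5, 2.0])
      observed_rim_2d = project(lens_rim_3d, true_pose[:3], true_pose[3:])

      def cost(pose):
          # Fit the projected (expected) lens shape to the observed lens shape.
          c_pred, r_pred = contour_stats(project(lens_rim_3d, pose[:3], pose[3:]))
          c_obs, r_obs = contour_stats(observed_rim_2d)
          return float(np.sum((c_pred - c_obs) ** 2) + (r_pred - r_obs) ** 2)

      nominal_pose = np.zeros(6)             # camera pose from the mechanical design
      result = minimize(cost, nominal_pose, method="Nelder-Mead")
      corrected_rvec, corrected_tvec = result.x[:3], result.x[3:]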
  • 3D model based prediction may be a method comprising:
  • FIG. 7 shows a flow chart of the process of 3D model based pupil prediction.
  • The eye-tracking system obtains an eye image from the camera. Based on the predicted pupil and iris eccentricity, the predicted glint positions, and the eye image from the camera, the system conducts image processing. It then estimates the eyeball model parameters, such as the position, orientation, and radii of the eyeball, pupil, and iris, and outputs a 3D gaze point estimate. It also creates a 3D eyeball model from the previous image frames and estimates the pupil and iris eccentricity and the glint positions from the 3D model. These predicted values are then used for the next cycle of image processing.
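  • A rough sketch of this prediction loop is shown below; the EyeballModel interface and the helper detect_features are hypothetical names introduced only to illustrate the data flow of FIG. 7:

      def tracking_loop(camera, eyeball_model):
          """Sketch of the FIG. 7 loop; `camera`, `eyeball_model`, and
          `detect_features` are assumed interfaces, not the disclosed ones."""
          prediction = None                  # no prior for the first frame
          while True:
              frame = camera.read()

              # Image processing guided by the predicted pupil/iris eccentricity
              # and glint positions from the 3D model (when available).
              pupil, iris, glints = detect_features(frame, hint=prediction)

              # Update the eyeball model parameters (position, orientation, radii
              # of eyeball, pupil and iris) and output the 3D gaze point estimate.
              eyeball_model.update(pupil, iris, glints)
              yield eyeball_model.gaze_point_3d()

              # Predict the features expected in the next frame from the 3D model;
              # they are fed back into the next image-processing cycle.
              prediction = eyeball_model.predict_features()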
  • The calibration process burdens the user with additional effort.
  • In the hidden calibration, the calibration is performed while the user is viewing content.
  • the hidden calibration may be a method comprising:
  • the calibration may be performed every time a scene changes.
  • FIG. 8 shows an example of a scene image for calibration.
  • The left side figure of FIG. 8 shows a screen image of the conventional calibration.
  • In the conventional calibration, a moving dot is displayed on the screen before the content starts and the user watches the dot. If recalibration is needed, the content must be stopped to display the moving dot again. However, stopping content for calibration stresses users. To address this problem, it is better to conduct calibration without stopping the content.
  • the video content has a scene during a specific period of time which shows only a moving object on the screen, such as a logo, a firefly, or another bright object.
  • the user watches the moving object and the calibration unit can conduct the calibration process.
  • the right side figure of FIG. 8 shows an example of a scene displayed with a firefly.
  • FIG. 9 shows a flowchart of the process of hidden calibration.
  • An application, such as a video player, sends 3D location information (3D coordinates) of the object to the eye tracking unit.
  • the eye tracking unit uses the location information to calibrate in real time. When the eye tracking unit conducts the calibration, the application also sends timestamp information together with the 3D location information.
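  • The application-to-eye-tracker interface described here could look like the following sketch; the data fields and method names are assumptions, not the disclosed API:

      import time
      from dataclasses import dataclass

      @dataclass
      class CalibrationTarget:
          """3D coordinates of the on-screen moving object (e.g. the firefly)
          plus the timestamp at which it is displayed."""
          x: float
          y: float
          z: float
          timestamp: float

      def on_scene_object_moved(eye_tracker, position_3d):
          """Called by the application (e.g. a video player) whenever the bright
          moving object changes position, so calibration can run in real time."""
          target = CalibrationTarget(*position_3d, timestamp=time.time())
          eye_tracker.calibrate_with_target(target)    # assumed eye-tracker method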
  • the foveated camera streaming may be a method comprising:
  • a resolution of the region of interest is higher than a resolution of the outer region.
  • the image may be a video; in the step of compressing the region of interest, the region of interest is encoded into a first video, and in the step of compressing the outer region, the outer region is encoded into a second video, wherein the frame rate of the first video is higher than the frame rate of the second video.
  • the foveated camera streaming also may be a method comprising:
  • FIG. 10 shows a schematic overview of the video system.
  • the video system comprises the head-mounted display 100 , a gaze detection device 200 and a cloud server.
  • the head-mounted display 100 further comprises an external camera.
  • the external camera is fixed to the housing 150 and arranged to record video images of the direction the user's head is facing.
  • the external camera records video images at full resolution over its entire field of view.
  • the video system has two image streams including high resolution images for the gazing area of the user and low resolution images for the other area.
  • the images including the high resolution images and the low resolution images are sent to the cloud server by a public communication network directly from the head-mounted display 100 or via the gaze detection device 200 .
  • the video system 1 can reduce the bandwidth needed for sending images because it sends full-resolution images only for the limited area (the gazing area) the user is looking at, and low-resolution images for the other area, instead of sending full-resolution images of the entire field of view the external camera can record.
  • Based on the received two types of image information, the cloud server creates contextual information which is used for AR (Augmented Reality) or MR (Mixed Reality) display.
  • the cloud server aggregates information (e.g. object identification, facial recognition, video image, etc.) to create contextual information and sends the contextual information to the head-mounted display 100 .
  • FIG. 11 shows a flowchart of the process related to communication between the head-mounted display and the cloud server.
  • the external camera facing outward on the head mount display takes images of the world (S 1101 ).
  • the control unit splits the video images into two streams based on the eye tracking coordinates (S 1102 ).
  • the control unit detects gazing point coordinates of the user based on the eye tracking coordinates, and splits the video images into a region of interest area and the other area.
  • the region of interest can be obtained from the video images by splitting a certain sized area including the gazing point.
  • the two video image streams are sent to the cloud server by a communication network (e.g. 5G network) (S 1103 ).
  • images of the region of interest area are sent to the server as high resolution images
  • images of the other area are sent to the server as low resolution images.
  • the cloud server processes the images and adds contextual information (S 1104 ).
  • the images and contextual information are sent back to the head mount display to display AR or MR images to the user (S 1105 ).
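  • A minimal sketch of the split in step S1102 and the two streams of step S1103 is shown below, assuming OpenCV/JPEG encoding and example values for the ROI size, downscale factor, and quality settings (none of which are taken from the disclosure):

      import cv2
      import numpy as np

      def split_foveated_streams(frame, gaze_xy, roi_size=256, downscale=4):
          """Split one external-camera frame into a full-resolution region of
          interest around the gaze point and a downscaled image of the rest.
          Assumes the frame is larger than the ROI."""
          h, w = frame.shape[:2]
          gx, gy = gaze_xy
          x0 = int(np.clip(gx - roi_size // 2, 0, w - roi_size))
          y0 = int(np.clip(gy - roi_size // 2, 0, h - roi_size))
          roi = frame[y0:y0 + roi_size, x0:x0 + roi_size]

          # High quality for the gazing area, low resolution/quality elsewhere (S1102).
          ok1, roi_jpeg = cv2.imencode(".jpg", roi, [cv2.IMWRITE_JPEG_QUALITY, 95])
          periphery = cv2.resize(frame, (w // downscale, h // downscale))
          ok2, periphery_jpeg = cv2.imencode(".jpg", periphery, [cv2.IMWRITE_JPEG_QUALITY, 60])

          # Both streams (plus the ROI position) go to the cloud server (S1103).
          return (x0, y0), roi_jpeg, periphery_jpeg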
  • FIG. 12 shows a functional configuration diagram of the video system.
  • the head-mounted display and the gaze detection device comprise the external camera, the control unit, the eye tracking unit, a sensing unit, a communication unit, and a display unit.
  • the cloud server comprises a general awareness processing unit, a high-detail processing unit, and an information aggregation unit.
  • the external camera obtains video images and the obtained raw video image with high resolution is inputted to the control unit.
  • the eye tracking unit detects the point the user is looking at (gaze coordinates) based on eye tracking and inputs the gaze coordinate information to the control unit.
  • the control unit determines the region of interest in each image based on the gaze coordinates. For example, the region of interest can be obtained from the video images by splitting a certain sized area including the gazing point.
  • the image data of the region of interest is compressed at lower compression ratio and inputted to the communication unit.
  • the communication unit also receives sensing data such as headset angle and other metadata which is obtained by the sensing unit.
  • the sensing unit can be configured by GPS or geomagnetic sensor.
  • the image data of the region of interest is sent to the cloud server with a higher resolution image.
  • the image data outside of the region of interest is compressed at a higher compression ratio and inputted to the communication unit.
  • the image data outside of the region of interest is sent to the cloud server with a lower resolution image.
  • the general awareness processing unit in the cloud server receives the image data outside of “the region of interest” which is lower resolution (and the headset angle and the metadata), and performs image processing in order to identify objects in the image, such as object type and number.
  • the high-detail processing unit in the cloud server receives the image data of the region of interest, which is higher resolution (and the headset angle and the metadata), and performs image processing in order to recognize details, such as facial recognition and text recognition.
  • the information aggregation unit receives the identification result of the general awareness processing unit and the recognition result of the High-detail processing unit.
  • the information aggregation unit aggregates the received results in order to create a display image, and sends the display image to the head-mounted display via the communication network.
  • FIG. 13 shows another example of a functional configuration diagram of the video system.
  • the image data of the region of interest (higher resolution) and the image data outside of the region of interest (lower resolution) are sent to the cloud server separately in the configuration described above. However, these image data can also be sent as one video stream, as shown in FIG. 13.
  • After obtaining the region of interest, the control unit stretches the image in order to reduce the data outside of the region of interest. Then, it sends the stretched image and the sensing data from the sensing unit to an unstretch unit in the cloud server. The unstretch unit un-stretches the received image data and sends the un-stretched image data to the general awareness processing unit and the high-detail processing unit.
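  • One possible (assumed) packing scheme for such a stretch/unstretch pipeline is sketched below: the full-resolution ROI and a downscaled copy of the whole frame are packed side by side into one image so a single video stream can carry both, and the server rebuilds an approximate full frame. The layout and parameter values are illustrative assumptions, not the disclosed warping:

      import cv2
      import numpy as np

      def stretch(frame, roi_xy, roi_size=256, downscale=4):
          """Pack the full-resolution ROI and a downscaled copy of the whole
          (assumed 3-channel) frame into a single image for one video stream."""
          x0, y0 = roi_xy
          roi = frame[y0:y0 + roi_size, x0:x0 + roi_size]
          small = cv2.resize(frame, (frame.shape[1] // downscale, frame.shape[0] // downscale))
          canvas_h = max(roi_size, small.shape[0])
          canvas = np.zeros((canvas_h, roi_size + small.shape[1], 3), dtype=frame.dtype)
          canvas[:roi_size, :roi_size] = roi
          canvas[:small.shape[0], roi_size:] = small
          return canvas

      def unstretch(canvas, roi_xy, full_size, roi_size=256, downscale=4):
          """Server side: rebuild an approximation of the full frame, full
          resolution inside the ROI and upscaled low resolution elsewhere."""
          w, h = full_size
          small = canvas[:h // downscale, roi_size:roi_size + w // downscale]
          frame = cv2.resize(small, (w, h))
          x0, y0 = roi_xy
          frame[y0:y0 + roi_size, x0:x0 + roi_size] = canvas[:roi_size, :roi_size]
          return frame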
  • the eye tracking calibration can be made using optokinetic response, i.e. the calibration method may comprise:
  • Optokinetic response is an eye movement that occurs in response to the image movement on the retina.
  • the sum of the head rotation speed and the eye rotation speed is zero (0) during the head rotations.
  • the calibration unit 230 can perform the calibration of the gaze detection device 200 when the user gazes at a stable point which can be detected by detecting that the sum of the head rotation speed and the eye rotation speed is zero. That is, when the user rotates his/her head to the right, the user should rotate his/her eyes to the left to gaze at a point.
  • FIG. 14 shows a graph showing the rotation speeds of head and eye.
  • the dotted line shows the eye rotation speed in a direction.
  • the solid line shows the inverted head rotation speed (the head rotation speed multiplied by -1) in the direction. As shown in FIG. 14, the inverted head rotation speed almost aligns with the eye rotation speed.
  • the head-mounted display 100 comprises an IMU.
  • the IMU can measure the head rotation speed of the user 300 .
  • the gaze detection unit can measure the eye rotation speed of the user.
  • the eye rotation speed can be represented by the speed of the movement of the gaze point.
  • the calibration unit 230 may calculate the head rotation speeds in up-down direction and left-right direction from the measured value by the IMU.
  • the calibration unit 230 may also calculate the eye rotation speed in up-down direction and left-right direction from the history of the gaze point.
  • the calibration unit 230 displays a marker in a virtual space rendered on the display. The marker may be moving or stationary.
  • the calibration unit 230 calculates the head rotation speeds in left-right direction and up-down direction and the eye rotation speeds in left-right direction and up-down direction.
  • the calibration unit 230 may perform calibration when the sum of the head rotation speeds and the eye rotation speeds is lower than a pre-determined threshold.
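  • The calibration trigger described above can be sketched as a simple check that the head and eye rotation speeds cancel in both directions; the threshold value and function name are assumed for illustration:

      import numpy as np

      def can_calibrate(head_speed_lr, head_speed_ud, eye_speed_lr, eye_speed_ud,
                        threshold_deg_per_s=5.0):
          """True when the user appears to be gazing at a stable point: the sum of
          the head and eye rotation speeds is small in both the left-right and
          up-down directions (the threshold is an example value)."""
          residual = np.hypot(head_speed_lr + eye_speed_lr,
                              head_speed_ud + eye_speed_ud)
          return residual < threshold_deg_per_s

      # Example: head speeds come from the IMU, eye speeds from the gaze-point
      # history; calibration is performed only while this returns True.
      if can_calibrate(head_speed_lr=30.0, head_speed_ud=2.0,
                       eye_speed_lr=-29.0, eye_speed_ud=-1.5):
          pass  # run the calibration step with the current marker position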
  • the single point calibration may be a method comprising:
  • the calibration method determines a direction from the cornea center to the position of the pupil as the gaze direction.
  • In the calibration method, it may also be possible to determine a direction from the eyeball center to the position of the pupil as the gaze direction.
  • the calibration method may further comprise: compensating a position of the pupil to angle against a direction from the camera to the pupil image.
  • FIG. 15 shows a physical structure of the eyeball.
  • the eyeball is configured with some parts including a pupil, a cornea and an anterior chamber.
  • the position of the pupil may be recognized from the camera image.
  • it is required to compensate the position of the pupil taking the anterior chamber depth (ACD) into consideration in order to improve the accuracy of gaze estimation.
  • the compensated pupil position is used to estimate the gaze direction.
  • FIG. 16 shows an example of a method of calibration.
  • the eye is looking at the calibration point which is known by the system, and the pupil is observed by a camera.
  • P0 shows an intersection of a ray (observed pupil by a camera) with cornea sphere.
  • P0 is the observed pupil on the camera.
  • P0 is used in general gaze estimation.
  • the pupil is positioned at P1 inside the cornea sphere by ACD.
  • the direction from the center of the eyeball (or the center of the cornea sphere) to the center of the pupil is considered to be the gaze direction of the eye.
  • the calibration may be made using the gaze direction and the known calibration point.
  • the calibration unit 230 may calibrate the parameter of the gaze estimation.
  • the calibration unit 230 may calibrate the ACD.
  • the calibration unit 230 may calibrate the center of the cornea sphere.
  • the calibration unit 230 may calibrate the angle of the camera.
  • the compensation will be adjusted assuming the cornea refracts the ray. That is, the pupil position is compensated taking the anterior chamber depth (ACD) and the horizontal optical-to-visual axis offset into consideration.
  • FIG. 17 shows the refraction model of the single point calibration.
  • the intersection of ray (observed pupil by camera) with the cornea sphere gives P0.
  • the ACD is a certain distance between the pupil P1 and the cornea sphere surface (P0).
  • The direction from the cornea center (or eyeball center) to P1 should point toward the known calibration point. If it does not, the calibration unit 230 optimizes the ACD. In this way, the calibration unit 230 calibrates the ACD.
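  • A minimal sketch of this ACD optimization is shown below; it assumes P1 lies a distance ACD behind P0 along the camera ray, ignores the refraction model of FIG. 17, and uses example bounds and function names rather than the disclosed procedure:

      import numpy as np
      from scipy.optimize import minimize_scalar

      def gaze_dir(cornea_center, p0, ray_dir, acd):
          """Compensated pupil P1 assumed to lie a distance `acd` behind P0 along
          the camera ray; the gaze direction is the vector from the cornea centre
          (or eyeball centre) to P1 (simplified, no refraction)."""
          p1 = p0 + acd * ray_dir / np.linalg.norm(ray_dir)
          v = p1 - cornea_center
          return v / np.linalg.norm(v)

      def calibrate_acd(cornea_center, p0, ray_dir, calibration_point, bounds=(1.0, 6.0)):
          """Optimise the ACD (mm) so that the gaze direction points at the known
          calibration point; the bounds are assumed plausible values."""
          target = calibration_point - cornea_center
          target = target / np.linalg.norm(target)

          def angular_error(acd):
              return 1.0 - np.dot(gaze_dir(cornea_center, p0, ray_dir, acd), target)

          res = minimize_scalar(angular_error, bounds=bounds, method="bounded")
          return res.x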
  • the implicit calibration may be a method for calibrating gaze detection comprising:
  • the point to which the gaze point is adjusted may be the point with the highest probability of an edge occurring.
  • the implicit calibration may further comprise:
  • the screen image is referred to as a visual field.
  • the visual field may cover both screen image in VR and outside camera image in AR. It is also important to note that we do not require the user to look at any specific targets.
  • the scene content can be arbitrary. Since we have two types of images: eye images and screen images, it would be better to specify explicitly which image is referred to.
  • the correlation of gaze points and images of the visual field can be defined as an assumption about human behavior: in circumstances A, people are likely to look at B, where A and B are something that we can automatically extract from the visual field image. Currently we use the assumption that, in all circumstances, people are likely to look at edges.
  • FIG. 18 shows branches of the implicit calibration.
  • the “bias” means that the difference between the estimated gaze point and the actual gaze point is constant. People tend to look at points with high contrast, i.e. object edges. Therefore, the implicit calibration using edge accumulation includes:
  • FIG. 19 shows the overview of the implicit calibration.
  • the circles are the gaze points from the eye tracker (gaze detection unit 220 ).
  • the stars are the actual gaze points.
  • the rectangles are the accumulated regions of the visual field. We use the point with the highest amount of edges as a ground truth for calibration. Therefore, there is no need to provide any calibration point in the display field.
  • FIG. 20 shows a flow chart of the implicit calibration.
  • the eye tracker (gaze detection unit 220) provides the approximate gaze direction.
  • the head-mounted display provides the image of the full visual field that the user sees.
  • Using the approximate gaze direction and the image of the full visual field, the calibration unit 230 computes statistics over time, estimates the eye-tracking parameters, and feeds the parameters back to the eye tracker. Thus, the gaze direction is gradually calibrated.
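  • A minimal sketch of the edge-accumulation statistic is shown below, assuming a constant bias, a Canny edge detector, and an example window size; it illustrates the idea rather than the disclosed estimator:

      import cv2
      import numpy as np

      def estimate_gaze_bias(visual_field_images, estimated_gaze_pts, window=80):
          """Accumulate edge maps in windows around the estimated gaze points over
          time; the peak of the accumulated edge density is taken as the likely
          true gaze location, and its offset from the window centre is the
          constant bias (window size is an example value)."""
          acc = np.zeros((2 * window, 2 * window), dtype=np.float64)
          for img, (gx, gy) in zip(visual_field_images, estimated_gaze_pts):
              gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
              edges = cv2.Canny(gray, 50, 150).astype(np.float64) / 255.0
              x0, y0 = int(gx) - window, int(gy) - window
              if x0 < 0 or y0 < 0:
                  continue                      # skip windows clipped by the border
              patch = edges[y0:y0 + 2 * window, x0:x0 + 2 * window]
              if patch.shape == acc.shape:
                  acc += patch
          peak_y, peak_x = np.unravel_index(np.argmax(acc), acc.shape)
          return peak_x - window, peak_y - window   # (dx, dy) correction to the estimate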

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Ophthalmology & Optometry (AREA)
  • Geometry (AREA)
  • Eye Examination Apparatus (AREA)
  • Position Input By Displaying (AREA)
  • User Interface Of Digital Computer (AREA)
  • Image Analysis (AREA)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2020172238 2020-10-12
JP2020-172238 2020-10-12
PCT/IB2021/059329 WO2022079585A1 (en) 2020-10-12 2021-10-12 Calibration for gaze detection

Publications (1)

Publication Number Publication Date
US20230393653A1 true US20230393653A1 (en) 2023-12-07

Family

ID=81207753

Family Applications (3)

Application Number Title Priority Date Filing Date
US18/248,832 Pending US20230393653A1 (en) 2020-10-12 2021-10-12 Calibration for gaze detection
US18/248,847 Pending US20240192771A1 (en) 2020-10-12 2021-10-12 Visual line detection device and visual line detection program
US18/248,838 Pending US20240134448A1 (en) 2020-10-12 2021-10-12 Viewpoint detection device, calibration method, and program

Family Applications After (2)

Application Number Title Priority Date Filing Date
US18/248,847 Pending US20240192771A1 (en) 2020-10-12 2021-10-12 Visual line detection device and visual line detection program
US18/248,838 Pending US20240134448A1 (en) 2020-10-12 2021-10-12 Viewpoint detection device, calibration method, and program

Country Status (3)

Country Link
US (3) US20230393653A1 (ja)
JP (3) JPWO2022079587A1 (ja)
WO (3) WO2022079584A1 (ja)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20240020960A (ko) * 2022-08-09 2024-02-16 삼성전자주식회사 시선 방향을 식별하는 전자 장치 및 그 작동 방법

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0764709A (ja) * 1993-08-26 1995-03-10 Olympus Optical Co Ltd 指示処理装置
JP2013252301A (ja) * 2012-06-07 2013-12-19 Toyota Central R&D Labs Inc 眼球中心推定装置及びプログラム
EP3329316B1 (en) * 2016-03-11 2023-09-20 Facebook Technologies, LLC Corneal sphere tracking for generating an eye model
US10976813B2 (en) * 2016-06-13 2021-04-13 Apple Inc. Interactive motion-based eye tracking calibration
CN109690553A (zh) * 2016-06-29 2019-04-26 醒眸行有限公司 执行眼睛注视跟踪的系统和方法
US10820796B2 (en) * 2017-09-08 2020-11-03 Tobii Ab Pupil radius compensation
CN108038884B (zh) * 2017-11-01 2020-12-11 北京七鑫易维信息技术有限公司 校准方法、装置、存储介质和处理器

Also Published As

Publication number Publication date
WO2022079585A1 (en) 2022-04-21
US20240134448A1 (en) 2024-04-25
WO2022079587A1 (ja) 2022-04-21
JPWO2022079584A1 (ja) 2022-04-21
WO2022079584A1 (ja) 2022-04-21
US20240192771A1 (en) 2024-06-13
JPWO2022079587A1 (ja) 2022-04-21
JP2024514380A (ja) 2024-04-02

Similar Documents

Publication Publication Date Title
US11703947B2 (en) Apparatus and method for dynamic graphics rendering based on saccade detection
US20190235624A1 (en) Systems and methods for predictive visual rendering
US10182720B2 (en) System and method for interacting with and analyzing media on a display using eye gaze tracking
US20150097772A1 (en) Gaze Signal Based on Physical Characteristics of the Eye
CN109791605A (zh) 基于眼睛跟踪信息的图像区域中的自适应参数
US20120200667A1 (en) Systems and methods to facilitate interactions with virtual content
US20200120322A1 (en) Image generating device, image display system, and image generating method
US11507184B2 (en) Gaze tracking apparatus and systems
JP2018196730A (ja) 眼の位置を監視するための方法およびシステム
US11983310B2 (en) Gaze tracking apparatus and systems
US20210382316A1 (en) Gaze tracking apparatus and systems
US20230393653A1 (en) Calibration for gaze detection
CN112926523B (zh) 基于虚拟现实的眼球追踪方法、系统
US20220035449A1 (en) Gaze tracking system and method
US20240031551A1 (en) Image capturing apparatus for capturing a plurality of eyeball images, image capturing method for image capturing apparatus, and storage medium
US20210271075A1 (en) Information processing apparatus, information processing method, and program
US20220014729A1 (en) Information processing apparatus and image display method
US10915169B2 (en) Correcting method and device for eye-tracking
US10962779B2 (en) Display control device, method for controlling display control device, and storage medium
US11579690B2 (en) Gaze tracking apparatus and systems
US11222394B2 (en) Devices and headsets
US11023041B1 (en) System and method for producing images based on gaze direction and field of view
US20230308753A1 (en) Camera system for focusing on and tracking objects
Blignaut The effect of real-time headbox adjustments on data quality

Legal Events

Date Code Title Description
STPP Information on status: patent application and granting procedure in general

Free format text: APPLICATION UNDERGOING PREEXAM PROCESSING

AS Assignment

Owner name: FOVE, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CHERNYAK, IAKOV;CHERNYAK, GRIGORY;REEL/FRAME:064596/0517

Effective date: 20230606