CN112698725B - Method for realizing penetrating screen system based on eye tracker tracking - Google Patents


Info

Publication number
CN112698725B
CN112698725B (application CN202011622545.8A)
Authority
CN
China
Prior art keywords
coordinate system
screen
eye
matrix
coordinate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011622545.8A
Other languages
Chinese (zh)
Other versions
CN112698725A (en)
Inventor
秦学英
卢世逸
宋修强
马嘉遥
王彬
宋婧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong University
Original Assignee
Shandong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong University filed Critical Shandong University
Priority to CN202011622545.8A priority Critical patent/CN112698725B/en
Publication of CN112698725A publication Critical patent/CN112698725A/en
Application granted granted Critical
Publication of CN112698725B publication Critical patent/CN112698725B/en

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention relates to a method for realizing a penetrating screen system based on eye tracker tracking, belonging to the technical field of augmented reality. The method comprises the following steps: establishing a standard coordinate system; acquiring a monocular position with a Tobii eye tracker; converting the Tobii coordinate system into the standard coordinate system and then into the Unity coordinate system; and mapping the real screen and eyes into the Unity coordinate system. After a series of transformations and calibration, the eye is aligned with the camera in the virtual scene, so that a user can interact with the system using only the eyes.

Description

Method for realizing penetrating screen system based on eye tracker tracking
Technical Field
The invention relates to a method for realizing a penetrating screen system based on eye tracker tracking, and belongs to the technical field of augmented reality.
Background
The history of the eye tracker dates back to the 18th century. Since Wells first used afterimages to describe eye movements in 1792, research has not stopped; many studies of human behavior have been carried out with the various eye trackers invented since, allowing computers to identify the state of the human eyes and thereby deepening our understanding of cognitive processes [Hari Singh, Jaswinder Singh. Human Eye Tracking and Related Issues: A Review], [Wade, N., and Tatler, B.W. The Moving Tablet of the Eye: The Origins of Modern Eye Movement Research. Oxford University Press, 2005].
The Tobii eye tracker is a novel low-cost binocular eye tracker. It is designed for natural gaze interaction, does not require continuous recalibration, and tolerates modest head movements. Tobii also provides an SDK (Software Development Kit: a set of development tools with which software engineers can build applications for a specific software package, framework, hardware platform, or operating system) to promote the development of eye tracker applications; the kit offers various APIs through which users can obtain the states and data of the left and right eyes, and it provides the development platform for the invention described herein. [Gibaldi, A., Vanegas, M., et al. Evaluation of the Tobii EyeX eye tracking controller and Matlab toolkit for research [J]. Behavior Research Methods, 2016, 49(3):923-]
At present there is much research on augmented reality, but most of it requires a corresponding carrier. For example, applications developed with ARCore or ARKit are apps on a mobile phone: the phone serves as the camera of the whole system, acquiring its own pose, computing the positional relation between the phone and a virtual object, and presenting a scene of fused virtual and real objects on the phone screen. Research on naked-eye augmented reality, however, faces many challenges: on the one hand, the virtual-real fused images are generated for a carrier such as a phone or camera rather than for direct naked-eye observation; on the other hand, robustness is insufficient, and jitter of the virtual scene during use damages the user's sense of realism.
However, thanks to the powerful functions of the Tobii eye tracker, the eye state can be recognized conveniently, which makes naked-eye augmented reality possible in the present application.
Disclosure of Invention
Aiming at the defects of the prior art, the invention addresses the problem of bringing the Tobii coordinate system into correspondence with the coordinate systems of the screen, the human eyes and the virtual scene, so that the positions of the human eyes and the screen can be expressed in the coordinate system of the virtual scene. A further problem is that the computer screen should present a virtual scene that can trick the human eye into believing that the screen is a "window" onto that scene. The invention provides a method for realizing a penetrating screen system based on eye tracker tracking.
The technical scheme of the invention is as follows:
the key of the method is to convert the Tobii coordinate system into the standard coordinate system and then into the Unity coordinate system. This step aligns the real with the virtual so that the final imaging is more vivid, and it is the most basic and most critical step in realizing the effect.
A method for realizing a penetration screen system based on eye tracker tracking comprises the following steps:
the system is a monocular system and the eye positions mentioned below are all based on the left eye.
1) Establishing a standard coordinate system. As shown in fig. 1, the standard coordinate system takes a default monocular eye position as the origin. The line connecting the origin and the midpoint of the screen is the z-axis, so the z-axis from the origin intersects the screen plane at the screen midpoint; the x-axis is a straight line in the horizontal direction, the y-axis is a straight line in the vertical direction, and the plane formed by the x-axis and the y-axis is parallel to the screen of the notebook computer;
2) placing the Tobii eye tracker in front of a screen, acquiring a monocular position by using the Tobii eye tracker, and converting an eye position coordinate observed by the Tobii eye tracker into a standard coordinate system;
The Tobii eye tracker coordinates are given in its UCS coordinate system; the coordinate system shown in fig. 2 is the UCS of the Tobii eye tracker, in which the eyeball coordinates (a1, b1, c1) can be obtained through its SDK. These coordinates must be converted into the standard coordinate system before further calculation; the conversion formula is as follows:
Cs=A[R(θ)Ct-R(θ)D] (1)
wherein the Tobii coordinate system is a right-handed coordinate system and the standard coordinate system is a left-handed coordinate system, so A is the diagonal axis-flip matrix that converts one handedness into the other (the matrix itself appears only as an image in the original publication);
Cs is the coordinate of the eye under a standard coordinate system, and Ct is the coordinate of the eye under a Tobii coordinate system;
on the other hand, as shown in FIG. 3, the eye tracker is oriented at an angle θ to the horizontal, so R(θ) represents the rotation matrix of θ degrees about the x-axis:
R(θ) = [ 1, 0, 0; 0, cos θ, −sin θ; 0, sin θ, cos θ ]
D represents the coordinates of the origin of the standard coordinate system in the Tobii coordinate system; after the conversion, the origin of the standard coordinate system becomes the origin of the virtual coordinate system;
3) converting the standard coordinate system into a virtual coordinate system, wherein the virtual scene is built on unity, and the unity coordinate system and the standard coordinate system are both left-handed coordinate systems, and the conversion formula is as follows:
Cu=0.001Cs (2)
Cu is the coordinate of the eye in the unity coordinate system;
the unit of the Unity coordinate system is m while that of the standard coordinate system is mm, so only the units need to be unified (hence the factor 0.001); this also shows an advantage of the standard coordinate system's design: it is convenient to convert into the unity coordinate system.
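Conversions (1) and (2) can be sketched in a few lines of numpy. This is not code from the patent: the axis-flip matrix A = diag(1, 1, −1) and the default tilt angle THETA_DEG are assumptions (the published matrices appear only as images), so both should be replaced by the actual calibrated values.

```python
import numpy as np

# Assumed handedness flip: negate z to turn the right-handed Tobii UCS
# into the left-handed standard frame (replace with the actual matrix A).
A = np.diag([1.0, 1.0, -1.0])
THETA_DEG = 20.0  # assumed eye-tracker tilt angle, degrees

def rot_x(theta_deg: float) -> np.ndarray:
    """Rotation matrix of theta degrees about the x-axis."""
    t = np.radians(theta_deg)
    return np.array([[1, 0, 0],
                     [0, np.cos(t), -np.sin(t)],
                     [0, np.sin(t),  np.cos(t)]])

def tobii_to_standard(ct: np.ndarray, d: np.ndarray,
                      theta_deg: float = THETA_DEG) -> np.ndarray:
    """Equation (1): Cs = A[R(θ)Ct − R(θ)D].

    ct is the eye position in Tobii UCS (mm); d is the standard-frame
    origin expressed in Tobii coordinates (mm)."""
    r = rot_x(theta_deg)
    return A @ (r @ ct - r @ d)

def standard_to_unity(cs: np.ndarray) -> np.ndarray:
    """Equation (2): standard frame is in mm, unity frame in m."""
    return 0.001 * cs
```

The origin maps to the zero vector by construction, which is the property the text relies on when it says the standard-frame origin becomes the virtual-frame origin.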
4) Solving the imaging problem of the camera. The invention requires the screen to provide a window-like visual effect. The image in a window is essentially light traveling in straight lines: the range outside the window that a person can see from a given position is the range from which light can reach that position, projecting the associated image onto the window. The view cone of a camera expresses the range it can see and presents the image of that range on its near plane, which is the starting point here. First, the real screen and eyes are mapped into the unity coordinate system: the eye position is mapped through the conversion formulas of step 3); then the real perpendicular distance from the position of origin D to the screen is measured, together with the width and height of the screen, determining the position and size of the screen in the standard coordinate system, which are converted into the unity coordinate system using formula (2), completing the mapping operation;
then, the view cone of the eye (i.e. of the camera, in the camera imaging model) is determined at each position. The view cone determines the projection area the camera finally images; by fixing the camera's view cone we define the virtual scene area the camera should see from its position, and thereby control the camera's picture. As shown in fig. 4, the plane quadrilateral ABCD is the virtual screen and point O is the eye. When the eye reaches a new position, the program finds the longest of the four segments from O to the corner points and, taking that length as the standard, re-determines the coordinates of the other three corner points in the virtual coordinate system; the coordinates of point T are then determined by formula (3). Taking the example in fig. 4 where OA is the longest, OB, OC, OD are extended to OB', OC', OD' such that |OA| = |OB'| = |OC'| = |OD'|; the following relationship is then used:
T=(A+B'+C'+D')/4 (3)
wherein A represents the corner coordinate with the longest distance, and B', C', D' are the re-determined corner coordinates;
the calculated point T is the coordinate, in the virtual coordinate system, of a point on the center line of the view cone of the camera (eye); the viewing pose can be determined from this point;
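The corner-extension computation of formula (3) can be sketched as follows. This is an illustrative reading of the description, not the patent's code; the corner ordering and function name are assumptions.

```python
import numpy as np

def view_center(o, corners):
    """Given the eye position o and the virtual-screen corners
    [A, B, C, D] (unity coordinates), extend the shorter rays from o
    so that all four have the length of the longest, then average the
    extended corners: T = (A' + B' + C' + D') / 4 (formula (3))."""
    o = np.asarray(o, float)
    corners = [np.asarray(c, float) for c in corners]
    lengths = [np.linalg.norm(c - o) for c in corners]
    l_max = max(lengths)
    # The longest ray is extended by factor 1, i.e. kept unchanged.
    extended = [o + (c - o) * (l_max / l) for c, l in zip(corners, lengths)]
    return sum(extended) / 4.0  # point T on the view-cone center line
```

When the eye sits on the perpendicular through the screen center, all four rays are equal and T is simply the screen center, as expected.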
the image then seen in the virtual screen is the visual image we need; a perspective matrix is then used to perform the perspective transformation, described in detail below.
Preferably, in step 4), the step of performing perspective transformation using the perspective matrix includes:
by calling a unity method, the two-dimensional coordinates, on the imaged picture, of the four corner points of the virtual screen mapped into the virtual scene can be obtained. They form four point pairs with (0,0), (0, height), (width, height), (width, 0) (assuming the finally generated image has width × height pixels), from which a perspective matrix H, a 3 × 3 matrix, is calculated as follows: for a point pair a(x, y, 1), b(x1, y1, 1), the following transformation equation is used:
b=HaT (4);
wherein aT is the transpose of a, a is a corner of the image to be perspective-transformed, and b is the corresponding corner of the imaged picture; one point pair can be constructed into the following matrix:
[ x  y  1  0  0  0  −x·x1  −y·x1  −x1 ]
[ 0  0  0  x  y  1  −x·y1  −y·y1  −y1 ]
four point pairs construct four such matrices, which are stacked to obtain a matrix U of 8 rows and 9 columns. The matrix UᵀU is then constructed; the eigenvector corresponding to the smallest eigenvalue of UᵀU gives the 9 parameters of the perspective matrix H. After the perspective matrix is obtained, it is substituted into formula (4) to perform the perspective transformation and obtain the new, perspective-corrected image.
Assuming the pixel coordinate of a pixel in the original image is a and its pixel coordinate in the newly generated image is b, then b = HaT.
The perspective transformation is performed by the above formula; in addition, this computation is carried out on the GPU to improve performance.
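The eigenvector construction above can be sketched in numpy. This is an illustrative CPU implementation of the stated procedure (two rows per point pair, eigenvector of UᵀU with the smallest eigenvalue), not the patent's GPU code; the exact row layout of U is assumed to be the standard homography direct-linear-transform form, which matches the "8 rows, 9 columns" description.

```python
import numpy as np

def perspective_matrix(src_pts, dst_pts):
    """Estimate the 3x3 perspective matrix H with b = H a^T, where
    a = (x, y, 1) comes from src_pts and b ~ (x1, y1, 1) from dst_pts.

    Each point pair contributes two rows to U; with four pairs U is
    8 x 9, and the eigenvector of U^T U with the smallest eigenvalue
    gives the 9 entries of H."""
    rows = []
    for (x, y), (x1, y1) in zip(src_pts, dst_pts):
        rows.append([x, y, 1, 0, 0, 0, -x * x1, -y * x1, -x1])
        rows.append([0, 0, 0, x, y, 1, -x * y1, -y * y1, -y1])
    u = np.array(rows, float)
    w, v = np.linalg.eigh(u.T @ u)   # eigenvalues in ascending order
    h = v[:, 0].reshape(3, 3)        # eigenvector of smallest eigenvalue
    return h / h[2, 2]               # normalize so H[2,2] = 1

def apply_perspective(h, pt):
    """Equation (4) for one pixel: homogeneous multiply, then divide."""
    a = np.array([pt[0], pt[1], 1.0])
    b = h @ a
    return b[:2] / b[2]
```

For example, mapping the unit square to a square twice its size recovers H = diag(2, 2, 1) and sends the center (0.5, 0.5) to (1, 1).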
The first three steps solve the mapping problem: converting the Tobii coordinate system into the standard coordinate system and then into the unity coordinate system. On the other hand, since the state of the eye does not change during observation, the parameters of the view cone cannot be changed directly, so the problem reduces to that of the camera pose at each position.
The invention has the beneficial effects that:
1. the invention utilizes Tobii to identify the position of the eyes, and the position is aligned with the camera in the virtual scene after a series of transformation and calibration, so that the user can achieve the interaction effect with the system only by using the eyes.
2. The imaging of the camera in the virtual scene is controlled, basically achieving the effect of deceiving the human eye and making the user feel that the virtual scene in the screen really exists.
3. The invention also gives full play to the positioning function of the Tobii eye tracker, so that its range of application is no longer limited to the two-dimensional position of the gaze on the screen but is extended to the actual three-dimensional space of the Tobii eye tracker, the human eyes, and the computer screen.
Drawings
FIG. 1 depicts a diagram of an overall real-world system and a system-defined standard coordinate system;
FIG. 2 is a schematic representation of the coordinate system of a Tobii eye tracker, taken from the official Tobii SDK website
(http://developer.tobiipro.com/commonconcepts/coordinatesystems.html);
FIG. 3 shows a schematic diagram of an eye tracker oriented at an angle to the horizontal;
FIG. 4 is a schematic diagram of an algorithm for determining a perspective pose;
FIG. 5 is a schematic diagram of the relationship between the camera near clipping plane and the virtual screen;
FIG. 6 is a schematic view of camera imaging;
fig. 7 is a schematic diagram of the final result after perspective transformation.
Detailed Description
The present invention will be further described by way of examples, but not limited thereto, with reference to the accompanying drawings.
Example 1:
a method for realizing a penetration screen system based on eye tracker tracking comprises the following steps:
the system is a monocular system and the eye positions mentioned below are all based on the left eye.
1) Establishing a standard coordinate system. As shown in fig. 1, the standard coordinate system takes a default monocular eye position as the origin. The line connecting the origin and the midpoint of the screen is the z-axis, so the z-axis from the origin intersects the screen plane at the screen midpoint; the x-axis is a straight line in the horizontal direction, the y-axis is a straight line in the vertical direction, and the plane formed by the x-axis and the y-axis is parallel to the screen of the notebook computer;
2) placing the Tobii eye tracker in front of a screen, acquiring a monocular position by using the Tobii eye tracker, and converting an eye position coordinate observed by the Tobii eye tracker into a standard coordinate system;
The Tobii eye tracker coordinates are given in its UCS coordinate system; the coordinate system shown in fig. 2 is the UCS of the Tobii eye tracker, in which the eyeball coordinates (a1, b1, c1) can be obtained through its SDK. These coordinates must be converted into the standard coordinate system before further calculation; the conversion formula is as follows:
Cs=A[R(θ)Ct-R(θ)D] (1)
wherein the Tobii coordinate system is a right-handed coordinate system and the standard coordinate system is a left-handed coordinate system, so A is the diagonal axis-flip matrix that converts one handedness into the other (the matrix itself appears only as an image in the original publication);
Cs is the coordinate of the eye under a standard coordinate system, and Ct is the coordinate of the eye under a Tobii coordinate system;
on the other hand, as shown in FIG. 3, the eye tracker is oriented at an angle θ to the horizontal, so R(θ) represents the rotation matrix of θ degrees about the x-axis:
R(θ) = [ 1, 0, 0; 0, cos θ, −sin θ; 0, sin θ, cos θ ]
D represents the coordinates of the origin of the standard coordinate system in the Tobii coordinate system; after the conversion, the origin of the standard coordinate system becomes the origin of the virtual coordinate system;
3) converting the standard coordinate system into a virtual coordinate system, wherein the virtual scene is built on unity, and the unity coordinate system and the standard coordinate system are both left-handed coordinate systems, and the conversion formula is as follows:
Cu=0.001Cs (2)
Cu is the coordinate of the eye in the unity coordinate system;
the unit of the Unity coordinate system is m while that of the standard coordinate system is mm, so only the units need to be unified (hence the factor 0.001); this also shows an advantage of the standard coordinate system's design: it is convenient to convert into the unity coordinate system.
4) Solving the imaging problem of the camera. The invention requires the screen to provide a window-like visual effect. The image in a window is essentially light traveling in straight lines: the range outside the window that a person can see from a given position is the range from which light can reach that position, projecting the associated image onto the window. The view cone of a camera expresses the range it can see and presents the image of that range on its near plane, which is the starting point here. First, the real screen and eyes are mapped into the unity coordinate system: the eye position is mapped through the conversion formulas of step 3); then the real perpendicular distance from the position of origin D to the screen is measured as 540 mm, the width of the screen as 345 mm and its height as 195 mm, determining the position and size of the screen in the standard coordinate system, which are converted into the unity coordinate system using formula (2), completing the mapping operation;
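With the example measurements above (540 mm to the screen, a 345 mm × 195 mm screen), the screen-mapping step can be sketched as follows. The placement of the screen midpoint on the z-axis follows from the definition of the standard coordinate system in step 1); the function name and corner ordering are assumptions for illustration.

```python
import numpy as np

def screen_corners_unity(dist_mm=540.0, width_mm=345.0, height_mm=195.0):
    """Corners of the real screen expressed in the unity frame.

    The standard-frame z-axis passes through the screen midpoint, so
    the screen center sits at (0, 0, dist_mm) in mm; formula (2)
    converts mm to m."""
    hw, hh = width_mm / 2.0, height_mm / 2.0
    corners_mm = np.array([[-hw, -hh, dist_mm],
                           [ hw, -hh, dist_mm],
                           [ hw,  hh, dist_mm],
                           [-hw,  hh, dist_mm]])
    return 0.001 * corners_mm
```

These unity-frame corners are the quadrilateral ABCD used in the view-cone step that follows.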
then, the view cone of the eye (i.e. of the camera, in the camera imaging model) is determined at each position. The view cone determines the projection area the camera finally images; by fixing the camera's view cone we define the virtual scene area the camera should see from its position, and thereby control the camera's picture. As shown in fig. 4, the plane quadrilateral ABCD is the virtual screen and point O is the eye. When the eye reaches a new position, the program finds the longest of the four segments from O to the corner points and, taking that length as the standard, re-determines the coordinates of the other three corner points in the virtual coordinate system; the coordinates of point T are then determined by formula (3). Taking the example in the figure where OA is the longest, OB, OC, OD are extended to OB', OC', OD' such that |OA| = |OB'| = |OC'| = |OD'|; the following relationship is then used:
T=(A+B'+C'+D')/4 (3)
wherein A represents the corner coordinate with the longest distance, and B', C', D' are the re-determined corner coordinates;
the calculated point T is the coordinate, in the virtual coordinate system, of a point on the center line of the view cone of the camera (eye); the viewing pose (i.e. where the camera is looking, and hence the image of fig. 6) can be determined from this point;
the image then seen in the virtual screen is the visual image we need; a perspective matrix is then used to perform the perspective transformation, described in detail below.
The steps of performing perspective transformation using the perspective matrix are as follows:
by calling unity, the two-dimensional coordinates, on the imaged picture, of the four corner points of the virtual screen mapped into the virtual scene can be obtained. They form four point pairs with (0,0), (0, height), (width, height), (width, 0) (i.e. the four corners of fig. 6; assuming the finally generated image has width × height pixels), from which a perspective matrix H, a 3 × 3 matrix, is calculated as follows: for a point pair a(x, y, 1), b(x1, y1, 1), the following transformation equation is used:
b=HaT (4);
where aT is the transpose of a, a is a corner of the image to be perspective-transformed (i.e. of the black frame in fig. 6), and b is the corresponding corner of the imaged picture (i.e. fig. 6); one point pair can be constructed into the following matrix:
[ x  y  1  0  0  0  −x·x1  −y·x1  −x1 ]
[ 0  0  0  x  y  1  −x·y1  −y·y1  −y1 ]
four point pairs construct four such matrices, which are stacked to obtain a matrix U of 8 rows and 9 columns. The matrix UᵀU is then constructed; the eigenvector corresponding to the smallest eigenvalue of UᵀU gives the 9 parameters of the perspective matrix H. After the perspective matrix is obtained, it is substituted into formula (4) to perform the perspective transformation and obtain the new, perspective-corrected image (see fig. 7).
Fig. 5 shows the relationship between the camera near clipping plane (white frame) and the virtual screen (black frame) calculated by the above algorithm. The camera image at this time is as shown in fig. 6, and the image in the black frame is the image we want to obtain. A perspective transformation is therefore necessary: the perspective matrix is calculated by corresponding the four points of the black frame in the camera image to the four corners of the screen, the perspective transformation is performed, and the displayed result is as shown in fig. 7.

Claims (2)

1. A method for realizing a penetration screen system based on eye tracker tracking is characterized by comprising the following steps:
1) establishing a standard coordinate system, setting a position of a monocular as an origin, setting a connecting line of the origin and a midpoint of a screen as a z-axis, setting an x-axis as a straight line in a horizontal direction, setting a y-axis as a straight line in a vertical direction, and setting a plane formed by the x-axis and the y-axis to be parallel to the screen;
2) placing the Tobii eye tracker in front of a screen, acquiring a monocular position by using the Tobii eye tracker, and converting an eye position coordinate observed by the Tobii eye tracker into a standard coordinate system;
the Tobii eye tracker coordinate is UCS coordinate system, wherein the eyeball coordinate under the coordinate system is obtained through the SDK; the coordinates need to be converted to a standard coordinate system, and the conversion formula is as follows:
Cs=A[R(θ)Ct-R(θ)D] (1)
wherein A is the diagonal axis-flip matrix converting the right-handed Tobii coordinate system into the left-handed standard coordinate system (shown as a formula image in the original);
Cs is the coordinate of the eye in the standard coordinate system, and Ct is the coordinate of the eye in the Tobii coordinate system;
the eye tracker is at an angle theta to the horizontal, so that R (theta) represents a rotation matrix of theta around the x-axis,
R(θ) = [ 1, 0, 0; 0, cos θ, −sin θ; 0, sin θ, cos θ ]
D represents the coordinate of the origin of the standard coordinate system in the Tobii coordinate system;
3) converting the standard coordinate system into a virtual coordinate system, wherein the virtual scene is built on unity, and the unity coordinate system and the standard coordinate system are both left-handed coordinate systems, and the conversion formula is as follows:
Cu=0.001Cs (2)
Cu is the coordinate of the eye in the unity coordinate system;
the unit of the Unity coordinate system is m and that of the standard coordinate system is mm, so only the units need to be unified;
4) firstly mapping the real screen and eyes into the unity coordinate system: the eye position is mapped into the unity coordinate system through the conversion formula of step 3); then the real perpendicular distance from the position of origin D to the screen is measured, together with the width and height of the screen, determining the position and size of the screen in the standard coordinate system, which are converted into the unity coordinate system using formula (2), thereby completing the mapping operation;
then determining the view cone of the eye at each position in the virtual scene, wherein the plane quadrilateral ABCD is the virtual screen and the position of point O is the eye; when the eye reaches a new position, the program finds the longest of the four segments from point O to the corner points and, taking that length as the standard, re-determines the coordinates of the other three corner points in the virtual coordinate system; the coordinates of point T are determined using formula (3),
T=(A+B'+C'+D')/4 (3)
where A stands for the corner coordinate with the longest distance, i.e. OA is the longest; OB, OC, OD are then extended to OB', OC', OD' such that |OA| = |OB'| = |OC'| = |OD'|, and B', C', D' are the re-determined corner coordinates,
the calculated point T is the coordinate, in the virtual coordinate system, of a point on the center line of the view cone of the camera (eye), from which the viewing pose can be determined;
the image in the virtual screen is the required visual image, and the perspective matrix is then used to perform the perspective transformation.
2. The method for implementing the transmission screen system based on the eye tracker tracking according to claim 1, wherein in step 4), the step of performing perspective transformation by using the perspective matrix comprises:
by calling the unity method, the two-dimensional coordinates, on the imaged picture, of the four corner points of the virtual screen mapped into the virtual scene can be obtained, forming four point pairs with (0,0), (0, height), (width, height), (width, 0), from which a perspective matrix H, a 3 × 3 matrix, is calculated as follows: for a point pair a(x, y, 1), b(x1, y1, 1), the following transformation equation is used:
b=HaT (4);
wherein aT is the transpose of a, a is a corner of the image to be perspective-transformed, and b is the corresponding corner of the imaged picture; one point pair can be constructed into the following matrix:
[ x  y  1  0  0  0  −x·x1  −y·x1  −x1 ]
[ 0  0  0  x  y  1  −x·y1  −y·y1  −y1 ]
four point pairs construct four such matrices, which are stacked to obtain a matrix U of 8 rows and 9 columns; the matrix UᵀU is then constructed, and the eigenvector corresponding to the smallest eigenvalue of UᵀU gives the 9 parameters of the perspective matrix H; after the perspective matrix is obtained, it is substituted into formula (4) to perform the perspective transformation and obtain the new, perspective-corrected image.
CN202011622545.8A 2020-12-30 2020-12-30 Method for realizing penetrating screen system based on eye tracker tracking Active CN112698725B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011622545.8A CN112698725B (en) 2020-12-30 2020-12-30 Method for realizing penetrating screen system based on eye tracker tracking


Publications (2)

Publication Number Publication Date
CN112698725A CN112698725A (en) 2021-04-23
CN112698725B (en) 2022-02-11

Family

ID=75511189

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011622545.8A Active CN112698725B (en) 2020-12-30 2020-12-30 Method for realizing penetrating screen system based on eye tracker tracking

Country Status (1)

Country Link
CN (1) CN112698725B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010003410A1 (en) * 2008-07-08 2010-01-14 It-University Of Copenhagen Eye gaze tracking
CN103838378A (en) * 2014-03-13 2014-06-04 广东石油化工学院 Head wearing type eye control system based on pupil recognition positioning
CN107003721A (en) * 2014-11-06 2017-08-01 英特尔公司 Improvement for eyes tracking system is calibrated
CN111880654A (en) * 2020-07-27 2020-11-03 歌尔光学科技有限公司 Image display method and device, wearable device and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090196460A1 (en) * 2008-01-17 2009-08-06 Thomas Jakobs Eye tracking system and method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Multi-feature 3D Object Tracking with Adaptively-Weighted Local Bundles; Jiachen Li; IEEE; 2020-12-16; pp. 229-230 *

Also Published As

Publication number Publication date
CN112698725A (en) 2021-04-23

Similar Documents

Publication Publication Date Title
US11741629B2 (en) Controlling display of model derived from captured image
US9541997B2 (en) Three-dimensional user interface apparatus and three-dimensional operation method
TWI521469B (en) Two - dimensional Roles Representation of Three - dimensional Action System and Method
CN108525298B (en) Image processing method, image processing device, storage medium and electronic equipment
WO2018214697A1 (en) Graphics processing method, processor, and virtual reality system
Tomioka et al. Approximated user-perspective rendering in tablet-based augmented reality
KR102461232B1 (en) Image processing method and apparatus, electronic device, and storage medium
US20100188355A1 (en) Apparatus and method for detecting an object pointed by a user
CN106843507A (en) A kind of method and system of virtual reality multi-person interactive
US11960086B2 (en) Image generation device, head-mounted display, and image generation method
WO2019076348A1 (en) Virtual reality (vr) interface generation method and apparatus
CN111275801A (en) Three-dimensional picture rendering method and device
CN115359093A (en) Monocular-based gaze estimation and tracking method
Solari et al. Natural perception in dynamic stereoscopic augmented reality environments
US10901213B2 (en) Image display apparatus and image display method
CN110060349B (en) Method for expanding field angle of augmented reality head-mounted display equipment
CN112698725B (en) Method for realizing penetrating screen system based on eye tracker tracking
CN112698724B (en) Implementation method of penetrating screen system based on camera eye movement tracking
CN115908755A (en) AR projection method, system and AR projector
JP2019185475A (en) Specification program, specification method, and information processing device
EP3682196A1 (en) Systems and methods for calibrating imaging and spatial orientation sensors
Chessa et al. A stereoscopic augmented reality system for the veridical perception of the 3D scene layout
WO2024111783A1 (en) Mesh transformation with efficient depth reconstruction and filtering in passthrough augmented reality (ar) systems
Zhang et al. [Poster] an accurate calibration method for optical see-through head-mounted displays based on actual eye-observation model
Diller et al. Automatic Viewpoint Selection for Interactive Motor Feedback Using Principle Component Analysis.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant