CN111427150B - Eye movement signal processing method used under virtual reality head-mounted display and wearable device - Google Patents

Eye movement signal processing method used under virtual reality head-mounted display and wearable device

Info

Publication number
CN111427150B
CN111427150B (application CN202010168958.7A)
Authority
CN
China
Prior art keywords
eye movement
head
dimensional
fixation point
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010168958.7A
Other languages
Chinese (zh)
Other versions
CN111427150A (en
Inventor
唐伟
舒琳
徐向民
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN202010168958.7A priority Critical patent/CN111427150B/en
Publication of CN111427150A publication Critical patent/CN111427150A/en
Application granted granted Critical
Publication of CN111427150B publication Critical patent/CN111427150B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/0093Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 with means for monitoring data relating to the user, e.g. head-tracking, eye-tracking
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012Head tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements

Abstract

The invention discloses an eye movement signal processing method and a wearable device for virtual reality head-mounted display, belonging to the fields of virtual reality and eye movement data analysis. The method comprises the following steps: extracting the head rotation angle and the two-dimensional fixation point coordinates of the eyeball in the visual window from the head movement data and eye movement data of the subject; mapping the two-dimensional fixation point coordinates in the visual window to a three-dimensional sphere space coordinate system according to the head rotation angle to obtain three-dimensional fixation point coordinates; processing the three-dimensional fixation point coordinates to obtain longitude and latitude fixation point coordinates in a geographic longitude and latitude coordinate system, and parsing the fixation data with a speed and time threshold-based method; and extracting eye movement features from the fixation data. The method can accurately acquire eye movement signals in a virtual reality head-mounted display environment and extract common eye movement features, and its calculation process is simple and easy to implement.

Description

Eye movement signal processing method used under virtual reality head-mounted display and wearable device
Technical Field
The invention relates to the fields of virtual reality and eye movement data analysis, and in particular to an eye movement signal processing method and a wearable device for virtual reality head-mounted display.
Background
With the rapid development of information technology and virtual reality (VR) in recent years, more and more panoramic content and applications have begun to appear. Virtual reality technology provides users with an immersive media experience: 360-degree panoramic content, as a brand-new media form, can be presented in a head-mounted display (head display) for the user to browse, and the user can select and view a certain part of the spherical content by moving the head, making the viewing process freer and more interactive.
Visual attention modeling has developed rapidly in multimedia processing and computer vision in recent years, and plays a major role in the transmission, compression and processing of pictures and videos. However, current 2D vision algorithms cannot be used directly for virtual reality and panoramic content, because the way a user views content in a head-mounted display differs from the typical 2D case: in the 2D case the user looks forward with a fixed head position, whereas in the 3D case different scenes are presented as the head rotates and translates, and only a certain part of the whole content can be viewed at a time. In the 3-degree-of-freedom (3-DOF) case, head rotation is the only influencing factor, but in the 6-degree-of-freedom (6-DOF) case, the translational and rotational movements of the head together determine the content presented on the display. Acquiring accurate eye movement data and understanding the user's exploration pattern are therefore very important for developing effective techniques for multimedia content optimization, encoding, transmission, rendering and quality-of-experience evaluation.
Eye movement is the window through which we interact with the outside world, and it comprises spatiotemporal measures of cognitive and visual processes. Some eye diseases, such as glaucoma and visual deterioration, affect the movement of the eyeball; in addition, eye movement patterns are also influenced by the brain, so psychological and neurological conditions such as autism, dementia, Parkinson's disease, hyperactivity and attention deficit also affect eye movement. Because eye movement data are currently inexpensive to acquire, processing the eye movement data and accurately extracting eye movement feature information can provide intermediate detection results that assist clinical medical examination, with strong interpretability.
Disclosure of Invention
To solve the problems existing in the prior art, the invention provides an eye movement signal processing method and a wearable device for virtual reality head-mounted display. The method takes into account the display characteristics of the virtual reality head-mounted display and the position-mapping problem of the panoramic picture: it first extracts the eye movement data, then obtains the relevant eye movement features through processing and transformation, and thereby solves the viewport mapping problem of panoramic content displayed on a head-mounted display.
The eye movement signal processing method for the virtual reality head-mounted display comprises the following steps:
S1, acquiring head movement data and eye movement data of the subject, and extracting the head rotation angle and the two-dimensional fixation point coordinates of the eyeball in the visual window from the acquired data;
S2, mapping the two-dimensional fixation point coordinates in the visual window to a three-dimensional sphere space coordinate system according to the head rotation angle to obtain three-dimensional fixation point coordinates;
S3, processing the three-dimensional fixation point coordinates to obtain longitude and latitude fixation point coordinates in a geographic longitude and latitude coordinate system, and parsing the fixation data by a speed and time threshold-based method;
S4, extracting eye movement features from the gaze data.
In a preferred embodiment, step S2 includes the steps of:
acquiring a pitch angle, an azimuth angle and a gradient angle in the head rotation angle to obtain a rotation matrix of the head;
calculating an intrinsic matrix of the camera in the head-mounted display according to the two-dimensional gazing point coordinate, the virtual focal length of the camera in the head-mounted display, the size of a single horizontal pixel, the size of a single vertical pixel and the width and height of a visual window;
and multiplying the rotation matrix and the intrinsic matrix to obtain the three-dimensional fixation point coordinates.
In a preferred embodiment, the process of parsing the gaze data of step S3 includes:
denoting the three-dimensional fixation point coordinate as (x, y, z), the longitude and latitude fixation point coordinates λ and φ are calculated from (x, y, z) (the conversion formulas are given as formula images in the original publication);
the distance Δσ between two temporally adjacent fixation points is then obtained from the longitude and latitude fixation point coordinates:
Δσ = 2·arcsin( √( sin²(Δφ/2) + cos φ_1 · cos φ_2 · sin²(Δλ/2) ) )
where φ_1 and φ_2 respectively represent the latitude coordinates of the two fixation points, and Δλ and Δφ respectively represent the longitude difference and the latitude difference between the two fixation points;
and the fixation point coordinates of the eye movement are parsed by a speed and time threshold-based method.
The wearable device comprises a data acquisition device for acquiring head movement data and eye movement data of a subject, a processor, a memory and a computer program stored on the memory and capable of running on the processor, wherein when the computer program is executed by the processor, the eye movement signal processing method is realized.
Compared with the prior art, the invention has the following beneficial effects:
1. With the eye movement signal processing method and the wearable device of the invention, eye movement data are extracted by sensors, the eye movement signal can be measured accurately, and the eye movement features can be extracted accurately. The calculation is simple, the 2D coordinates are conveniently mapped into 3D space, common eye-movement-related features can be parsed and extracted, and the viewport mapping problem of panoramic content under a head-mounted display is solved.
2. The method obtains accurate eye movement data of the user in a virtual reality scene, which is important for exploring user behavior and for processing and transmitting multimedia content. In addition, because eye movement has good application prospects in emotion analysis, cognitive processes, clinical medical diagnosis and the like, the extracted common eye-movement-related features can also serve as intermediate detection results, providing data support and reference for the medical diagnosis of eye diseases and psychological and cognitive disorders.
Drawings
Fig. 1 is a flow chart of the eye movement signal processing procedure of the present invention.
Fig. 2 is a diagram of the distance between two fixation points in the spherical coordinates of the present invention.
Fig. 3 is a representation of the saccade features extracted by the present invention.
Fig. 4 is a representation of the absolute saccade angle feature extracted by the present invention.
Fig. 5 is a representation of the relative saccade angle features extracted by the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, but the embodiments of the present invention are not limited thereto.
Examples
The eye movement signal here is acquired by an eye tracker that optically tracks and measures the position of the eyeball; this is distinct from the electro-oculographic signal acquired by sensors that measure potential differences. As shown in Fig. 1, the eye movement signal processing method for virtual reality head-mounted display according to the present invention uses panoramic content scenes created in a virtual reality environment as the stimulus material, and specifically includes the following steps:
step 1, preparing needed panoramic content and data acquisition equipment, and building an experimental environment.
In this embodiment, an HTC VIVE VR head-mounted display presents the panoramic material, and an SMI eye tracker collects the eye movement data. The experiment requires two computers in total: one is operated by the experimenter and connected to the eye tracker to retrieve and process the gaze data; the other is used to display the test scene to the subject. So that subjects can safely explore the entire field of view, they sit on a swivel chair. The environment is kept quiet throughout the experiment.
Step 2: In the virtual reality head-mounted display environment, the subject freely browses the virtual reality scene content; the interaction modes are head movement and eye movement.
Subjects with normal vision and normal color vision were selected for the experiment. Calibration is required before the experiment: the subject fixates the red cross at the center of the screen, and, considering the precision of the eye tracker, the formal test begins once the Euclidean distance between the central cross and the collected fixation point is less than 2.5. After calibration the material starts to play, with successive segments separated by 5-second intervals.
Step 3: Acquire the head movement data and eye movement data of the subject, and extract the head rotation angle and the two-dimensional fixation point coordinates of the eyeball in the visual window (viewport for short) from the acquired data.
This embodiment uses sensors to acquire the subject's head movement data and eye movement data, where the head movement data include the head rotation angle and the eye movement data include the 2D position coordinates of the eyeball in the viewport. Specifically, the head rotation angle, consisting of the pitch angle, azimuth angle and gradient angle, is collected through the HTC head-mounted display. The two-dimensional eye movement data of the viewport, i.e. the 2D position coordinates of the eyeball in the viewport (also referred to as 2D gaze data), are acquired by the eye tracker. The sampling frequencies of the head movement data and the eye movement data are identical and set to 250 Hz.
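For illustration only (not part of the original disclosure), a minimal Python sketch of one synchronized head-and-eye sample as it might be stored at 250 Hz; the field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class HeadEyeSample:
    """One synchronized head + eye sample (assumed 250 Hz); field names are illustrative."""
    t: float        # timestamp in seconds
    pitch: float    # head pitch angle alpha
    azimuth: float  # head azimuth angle beta
    tilt: float     # head gradient (tilt) angle gamma
    px: float       # 2D gaze x-coordinate in the viewport (pixels)
    py: float       # 2D gaze y-coordinate in the viewport (pixels)
```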
Step 4: Map the two-dimensional fixation point coordinates in the visual window to a three-dimensional sphere space coordinate system according to the head rotation angle, obtaining the three-dimensional fixation point coordinates (x, y, z).
In this step, the eye movement data are projected onto a unit sphere, i.e. the 2D position coordinates of the fixation point are mapped into 3D space. A rotation matrix R is first created from the head rotation angle, and the 2D gaze data are converted into 3D vectors by means of this matrix.
The eye tracker provides the 2D position of the gaze in the viewport, and the viewport itself is known once the head motion is known. Because it is inconvenient to analyze the eye movement data directly in viewport space, they are mapped into the 3D space of a sphere. During mapping, the rotation matrix R of the head is first obtained from the pitch angle α, the azimuth angle β and the gradient angle γ of the head rotation (the explicit 3×3 rotation matrix is given as a formula image in the original publication).
the 2D gaze data, i.e. the two-dimensional gaze point coordinates, is expressed as two-dimensional eye movement coordinates (P)x,Py) (ii) a Using two-dimensional eye movement coordinates (P)x,Py) The virtual focal length f of the camera in the head-mounted display, the size H and the size v of a single horizontal pixel, and the width W and the height H of the visual window, the intrinsic matrix X of the camera in the head-mounted display is calculated:
Figure BDA0002408476640000042
The rotation matrix is then multiplied with the intrinsic matrix to obtain the fixation point coordinate on the three-dimensional sphere, i.e. the three-dimensional fixation point coordinate: G = RX.
This step may also normalize the obtained three-dimensional fixation point coordinates; note, however, that normalization is not required.
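For illustration only (not part of the original disclosure), a minimal Python sketch of the viewport-to-sphere mapping described in this step. The rotation composition order, the axis conventions and the exact back-projection X are assumptions, since the patent gives the matrices only as formula images.

```python
import numpy as np

def rotation_matrix(alpha, beta, gamma):
    """Head rotation from pitch alpha, azimuth beta, tilt gamma (radians).
    The composition order R = Rz(gamma) @ Ry(beta) @ Rx(alpha) is an assumption."""
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(alpha), -np.sin(alpha)],
                   [0, np.sin(alpha),  np.cos(alpha)]])
    Ry = np.array([[ np.cos(beta), 0, np.sin(beta)],
                   [0, 1, 0],
                   [-np.sin(beta), 0, np.cos(beta)]])
    Rz = np.array([[np.cos(gamma), -np.sin(gamma), 0],
                   [np.sin(gamma),  np.cos(gamma), 0],
                   [0, 0, 1]])
    return Rz @ Ry @ Rx

def gaze_to_sphere(px, py, alpha, beta, gamma, f, h, v, W, H):
    """Map a 2D viewport gaze point to a 3D direction on the unit sphere.
    X is a hypothetical pinhole back-projection of the gaze pixel; the patent's
    intrinsic matrix may differ."""
    X = np.array([(px - W / 2.0) * h,   # horizontal offset from the viewport centre
                  (py - H / 2.0) * v,   # vertical offset from the viewport centre
                  f])                   # virtual focal length along the view axis
    G = rotation_matrix(alpha, beta, gamma) @ X   # G = R X, as in the patent
    return G / np.linalg.norm(G)                  # optional normalization
```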
Step 5: Process the three-dimensional fixation point coordinates (x, y, z) to obtain the longitude and latitude fixation point coordinates in a geographic longitude and latitude coordinate system, and parse the fixation data by a speed and time threshold-based method.
When parsing the gaze data, this embodiment uses a method based on velocity and time thresholds: the velocity of the sample sequence is obtained by calculating the distance between two temporally adjacent gaze sample positions and dividing it by the time elapsed between them. If the sample velocity is below the threshold V_th and the duration is greater than the minimum duration threshold T_th, the samples are marked as a fixation. Note that the Euclidean distance cannot be used directly when computing distances in a spherical coordinate system. Instead, as shown in Fig. 2, the great-circle distance must be used: the sphere coordinates are first converted to longitude and latitude fixation point coordinates (λ, φ) (the conversion formulas are given as formula images in the original publication), and the surface distance between the two sphere coordinates, i.e. the distance Δσ between the two fixation points, is calculated:
Δσ = 2·arcsin( √( sin²(Δφ/2) + cos φ_1 · cos φ_2 · sin²(Δλ/2) ) )
where φ_1 and φ_2 are the latitudes of the two fixation points, and Δλ and Δφ are their longitude and latitude differences.
The moving speed of the eyeball between two temporally adjacent fixation points is calculated from the distance between them:
v = Δσ / Δt
where Δt is the eyeball movement time. This speed is compared with the speed threshold V_th: when v < V_th and Δt > T_th, the sample point is marked as the corresponding fixation point.
After the corresponding fixation points on the sphere are obtained, the following are output: the fixation point index n, the fixation point coordinate P(x_f, y_f), the fixation start time t_start, the fixation duration τ_f, the three-dimensional coordinate u on the three-dimensional sphere, and the spherical distance Δσ between two fixation points.
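For illustration only (not part of the original disclosure), a minimal Python sketch of the speed- and time-threshold fixation parsing on the sphere. The haversine form of the great-circle distance and the threshold values are assumptions.

```python
import numpy as np

def great_circle_distance(lon1, lat1, lon2, lat2):
    """Great-circle (surface) distance between two gaze samples on the unit sphere,
    in radians; haversine form, assumed from the patent's variable definitions."""
    dlon, dlat = lon2 - lon1, lat2 - lat1
    a = np.sin(dlat / 2) ** 2 + np.cos(lat1) * np.cos(lat2) * np.sin(dlon / 2) ** 2
    return 2 * np.arcsin(np.sqrt(a))

def detect_fixations(lon, lat, t, v_th=0.5, t_th=0.1):
    """Mark fixations: runs of samples whose angular velocity stays below v_th (rad/s)
    for at least t_th seconds. Threshold values here are illustrative only."""
    fixations, start = [], 0
    for i in range(1, len(t)):
        d = great_circle_distance(lon[i - 1], lat[i - 1], lon[i], lat[i])
        v = d / (t[i] - t[i - 1])
        if v >= v_th:                        # saccade sample: close any open fixation
            if t[i - 1] - t[start] > t_th:
                fixations.append((start, i - 1))
            start = i
    if t[-1] - t[start] > t_th:              # close the trailing fixation, if any
        fixations.append((start, len(t) - 1))
    return fixations                         # list of (start_index, end_index) pairs
```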
Step 6: Extract eye-movement-related features from the gaze data.
As shown in Figs. 3 to 5, the relevant features include:
Gaze-based features: the fixation point coordinate P(x_f, y_f), the fixation duration τ_f, and the number of fixation points n_f.
Saccade-based features: the saccade duration τ_s, the saccade amplitude a_s, the saccade velocity v_s, the absolute saccade angle Ab_s, and the relative saccade angle Re_s (a computation sketch follows this feature list). The saccade duration is the time from the end of one fixation to the start of the next, τ_s = |t_start2 − t_start1 − τ_f|; the saccade amplitude is the great-circle distance between the two fixation points, a_s = Δσ; the saccade velocity is the saccade amplitude divided by the saccade duration, v_s = a_s / τ_s; the absolute saccade angle is the angle between the saccade vector and the horizontal, Ab_s = arctan((y_f2 − y_f1) / (x_f2 − x_f1)); and the relative saccade angle is the angle between two adjacent saccade vectors, Re_s = arccos( S_1·S_2 / (|S_1|·|S_2|) ), where S_1 = [x_f2 − x_f1, y_f2 − y_f1] and S_2 = [x_f3 − x_f2, y_f3 − y_f2].
Saliency map: the saliency map is determined by Gaussian convolution over the gaze points, with σ set to 3. It is a heat map describing the probability that an observer gazes at different locations, and globally reflects the subject's gaze behavior. The saliency map is provided as an equirectangular projection: for each fixation point, a Gaussian kernel is applied at its location in the viewport and projected into sphere space as before, and the result is then normalized as a histogram.
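For illustration only (not part of the original disclosure), a minimal Python sketch computing the saccade features listed above from consecutive fixations; the data layout, key names and angle conventions are assumptions.

```python
import numpy as np

def saccade_features(f1, f2, f3=None):
    """Saccade features between fixations f1 and f2 (f3 is needed only for the relative angle).
    Each fixation is a dict with keys 'x', 'y' (viewport coords), 'lon', 'lat' (radians),
    't_start' and 'dur' (seconds); the key names are illustrative."""
    # Saccade duration: gap between the end of f1 and the start of f2.
    tau_s = abs(f2["t_start"] - f1["t_start"] - f1["dur"])
    # Saccade amplitude: great-circle distance between the two fixation points (haversine, assumed).
    dlon, dlat = f2["lon"] - f1["lon"], f2["lat"] - f1["lat"]
    a = np.sin(dlat / 2) ** 2 + np.cos(f1["lat"]) * np.cos(f2["lat"]) * np.sin(dlon / 2) ** 2
    a_s = 2 * np.arcsin(np.sqrt(a))
    v_s = a_s / tau_s if tau_s > 0 else np.nan           # saccade velocity
    s1 = np.array([f2["x"] - f1["x"], f2["y"] - f1["y"]])
    ab_s = np.degrees(np.arctan2(s1[1], s1[0]))           # absolute angle vs. the horizontal
    re_s = None
    if f3 is not None:                                     # relative angle needs a third fixation
        s2 = np.array([f3["x"] - f2["x"], f3["y"] - f2["y"]])
        cosang = np.dot(s1, s2) / (np.linalg.norm(s1) * np.linalg.norm(s2))
        re_s = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return {"tau_s": tau_s, "a_s": a_s, "v_s": v_s, "Ab_s": ab_s, "Re_s": re_s}
```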
Step 7: Use the extracted eye movement features for further analysis. For example, analysis of variance may be used to compare feature differences for a specific problem, or statistics of the features may be used to construct new features for pattern recognition.
Based on the same inventive concept, the present embodiment also proposes a wearable device, which includes a data acquisition device for acquiring head movement data and eye movement data of a subject, a processor, a memory, and a computer program stored on the memory and executable on the processor; when executed by a processor, the computer program implements the eye movement signal processing method in the present embodiment.
As described above, the method can accurately acquire eye movement signals in a virtual reality head-mounted display environment and extract common eye movement features, and the calculation process is simple and easy to implement.
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited to the above embodiments, and any other changes, modifications, substitutions, combinations, and simplifications which do not depart from the spirit and principle of the present invention should be construed as equivalents thereof, and all such changes, modifications, substitutions, combinations, and simplifications are intended to be included in the scope of the present invention.

Claims (8)

1. An eye movement signal processing method for virtual reality head-mounted display, comprising the steps of:
S1, acquiring head movement data and eye movement data of the subject, and extracting the head rotation angle and the two-dimensional fixation point coordinates of the eyeball in the visual window from the acquired data;
S2, mapping the two-dimensional fixation point coordinates in the visual window to a three-dimensional sphere space coordinate system according to the head rotation angle to obtain three-dimensional fixation point coordinates;
S3, processing the three-dimensional fixation point coordinates to obtain longitude and latitude fixation point coordinates in a geographic longitude and latitude coordinate system, and parsing the fixation data by a speed and time threshold-based method;
S4, extracting eye movement features from the gaze data;
step S2 includes the following steps:
acquiring a pitch angle, an azimuth angle and a gradient angle in the head rotation angle to obtain a rotation matrix of the head;
calculating an intrinsic matrix of the camera in the head-mounted display according to the two-dimensional gazing point coordinate, the virtual focal length of the camera in the head-mounted display, the size of a single horizontal pixel, the size of a single vertical pixel and the width and height of a visual window;
multiplying the rotation matrix and the intrinsic matrix to obtain the three-dimensional fixation point coordinates;
the process of analyzing the gaze data at step S3 includes:
calculating the longitude and latitude fixation point coordinates λ and φ from the three-dimensional fixation point coordinates (x, y, z) (the conversion formulas are given as formula images in the original publication);
calculating the distance Δσ between two temporally adjacent fixation points from the longitude and latitude fixation point coordinates:
Δσ = 2·arcsin( √( sin²(Δφ/2) + cos φ_1 · cos φ_2 · sin²(Δλ/2) ) )
wherein φ_1 and φ_2 respectively represent the latitude coordinates of the two fixation points, and Δλ and Δφ respectively represent the longitude difference and the latitude difference between the two fixation points;
and parsing the fixation point coordinates of the eye movement by a speed and time threshold-based method.
2. The eye movement signal processing method according to claim 1, wherein the rotation matrix R is composed from the pitch angle α, the azimuth angle β and the gradient angle γ (the explicit 3×3 matrix is given as a formula image in the original publication).
3. The eye movement signal processing method according to claim 1, wherein the intrinsic matrix X is calculated from the two-dimensional eye movement coordinates (P_x, P_y), the virtual focal length f of the camera in the head-mounted display, the single horizontal pixel size h, the single vertical pixel size v, and the width W and height H of the visual window (the explicit form of X is given as a formula image in the original publication).
4. The eye movement signal processing method according to claim 1, wherein, when the fixation point coordinates are parsed by the speed and time threshold-based method, the speed threshold is set to V_th and the time threshold to T_th; the moving speed of the eyeball between two temporally adjacent fixation points is calculated from the distance between them as v = Δσ/Δt, wherein Δt is the eyeball movement time; and the speed is compared with the speed threshold V_th: when v < V_th and Δt > T_th, the sample point is marked as the corresponding fixation point.
5. The eye movement signal processing method according to claim 1, wherein the eye movement features extracted at step S4 include gaze-based features, saccade-based features, and saliency maps.
6. The eye movement signal processing method according to claim 5, wherein the gaze-based features comprise the fixation point coordinate P(x_f, y_f), the fixation duration τ_f, and the number of fixation points n_f.
7. The eye movement signal processing method of claim 5, wherein the saccade-based features include the saccade duration τ_s, the saccade amplitude a_s, the saccade velocity v_s, the absolute saccade angle Ab_s, and the relative saccade angle Re_s.
8. Wearable device comprising a data acquisition device for acquiring head movement data and eye movement data of a subject, a processor, a memory and a computer program stored on the memory and executable on the processor, characterized in that the computer program, when executed by the processor, implements the eye movement signal processing method of any one of claims 1-7.
CN202010168958.7A 2020-03-12 2020-03-12 Eye movement signal processing method used under virtual reality head-mounted display and wearable device Active CN111427150B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010168958.7A CN111427150B (en) 2020-03-12 2020-03-12 Eye movement signal processing method used under virtual reality head-mounted display and wearable device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010168958.7A CN111427150B (en) 2020-03-12 2020-03-12 Eye movement signal processing method used under virtual reality head-mounted display and wearable device

Publications (2)

Publication Number Publication Date
CN111427150A CN111427150A (en) 2020-07-17
CN111427150B true CN111427150B (en) 2021-03-30

Family

ID=71547561

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010168958.7A Active CN111427150B (en) 2020-03-12 2020-03-12 Eye movement signal processing method used under virtual reality head-mounted display and wearable device

Country Status (1)

Country Link
CN (1) CN111427150B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112241971A (en) * 2020-09-30 2021-01-19 天津大学 Method for measuring motion prediction capability by using entropy and eye movement data
CN112381875B (en) * 2020-11-16 2024-01-30 吉林大学 Method for unifying gaze point pixel coordinate systems of head-mounted eye tracker
CN112381735B (en) * 2020-11-16 2022-04-05 吉林大学 Method for unifying AOI boundary point pixel coordinate systems of head-mounted eye tracker
CN113253846B (en) * 2021-06-02 2024-04-12 樊天放 HID interaction system and method based on gaze deflection trend
CN115509345B (en) * 2022-07-22 2023-08-18 北京微视威信息科技有限公司 Virtual reality scene display processing method and virtual reality device
CN115061576B (en) * 2022-08-10 2023-04-07 北京微视威信息科技有限公司 Method for predicting fixation position of virtual reality scene and virtual reality equipment
CN116977600B (en) * 2023-07-03 2024-04-09 玩出梦想(上海)科技有限公司 XR equipment and XR equipment height acquisition method

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109613982A (en) * 2018-12-13 2019-04-12 叶成环 Wear-type AR shows the display exchange method of equipment

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101901485B (en) * 2010-08-11 2014-12-03 华中科技大学 3D free head moving type gaze tracking system
GB2571300B (en) * 2018-02-23 2020-05-27 Sony Interactive Entertainment Inc Eye tracking method and apparatus
CN109847168B (en) * 2019-01-22 2022-03-08 中国科学院苏州生物医学工程技术研究所 Wearable fatigue detection and intervention system
CN110764613B (en) * 2019-10-15 2023-07-18 北京航空航天大学青岛研究院 Eye movement tracking and calibrating method based on head-mounted eye movement module

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109613982A (en) * 2018-12-13 2019-04-12 叶成环 Wear-type AR shows the display exchange method of equipment

Also Published As

Publication number Publication date
CN111427150A (en) 2020-07-17

Similar Documents

Publication Publication Date Title
CN111427150B (en) Eye movement signal processing method used under virtual reality head-mounted display and wearable device
US8343067B2 (en) System and method for quantifying and mapping visual salience
Rai et al. Which saliency weighting for omni directional image quality assessment?
US7538744B1 (en) Method and apparatus for computer-aided determination of viewer's gaze direction
CN106959759B (en) Data processing method and device
US6381339B1 (en) Image system evaluation method and apparatus using eye motion tracking
Coutinho et al. Improving head movement tolerance of cross-ratio based eye trackers
Larsson et al. Head movement compensation and multi-modal event detection in eye-tracking data for unconstrained head movements
US10936059B2 (en) Systems and methods for gaze tracking
CN103501688A (en) Method and apparatus for gaze point mapping
Weidenbacher et al. A comprehensive head pose and gaze database
Diaz et al. Real-time recording and classification of eye movements in an immersive virtual environment
Cristina et al. Unobtrusive and pervasive video-based eye-gaze tracking
JP2013244212A (en) Video analysis device, video analysis method, and point-of-gaze display system
Phillips et al. Method for tracking eye gaze during interpretation of endoluminal 3D CT colonography: technical description and proposed metrics for analysis
JP5306940B2 (en) Moving image content evaluation apparatus and computer program
Chaudhary et al. Motion tracking of iris features to detect small eye movements
Perrin et al. EyeTrackUAV2: A large-scale binocular eye-tracking dataset for UAV videos
Banitalebi-Dehkordi et al. Benchmark three-dimensional eye-tracking dataset for visual saliency prediction on stereoscopic three-dimensional video
Deligianni et al. Patient-specific bronchoscope simulation with pq-space-based 2D/3D registration
Munn et al. FixTag: An algorithm for identifying and tagging fixations to simplify the analysis of data collected by portable eye trackers
CN114027782A (en) Non-commonality strabismus diagnosis method based on virtual reality and eye movement tracking technology
Laco et al. Depth in the visual attention modelling from the egocentric perspective of view
Soon et al. Understanding head-mounted display fov in maritime search and rescue object detection
Jones et al. Volume visualisation via region enhancement around an observer's fixation point

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant