CN115299945A - Attention and fatigue degree evaluation method and wearable device - Google Patents

Attention and fatigue degree evaluation method and wearable device

Info

Publication number
CN115299945A
Authority
CN
China
Prior art keywords
eye movement
data
interest
movement data
gaze
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210981467.3A
Other languages
Chinese (zh)
Inventor
王菲
张锡哲
尹舒络
姚菲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Brain Hospital
Original Assignee
Nanjing Brain Hospital
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Brain Hospital filed Critical Nanjing Brain Hospital
Priority to CN202210981467.3A
Publication of CN115299945A
Legal status: Pending

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B 5/16: Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B 5/163: Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state by tracking eye movement, gaze, or pupil change
    • A61B 5/168: Evaluating attention deficit, hyperactivity
    • A61B 5/72: Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B 5/7235: Details of waveform analysis

Abstract

The invention discloses an attention and fatigue degree evaluation method and a wearable device, belonging to the technical field of virtual reality and eye movement data analysis. The method comprises the following steps: acquiring eye movement data sample information; creating a virtual space, tracking eye movement, and analyzing gaze data; calculating an interest value within the interest area according to the gaze data; preprocessing the eye movement data sample information; constructing an evaluation analysis model and training it with the preprocessed eye movement data sample information and the interest values as fitting training data; and obtaining the eye movement data to be evaluated, inputting them into the trained evaluation analysis model, and outputting the evaluation analysis results for attention and fatigue degree. The invention can quickly and accurately obtain evaluation and analysis results and provides effective data support and reference for the medical diagnosis of psychological, cognitive, and other disorders.

Description

Attention and fatigue degree evaluation method and wearable device
Technical Field
The invention relates to the technical field of virtual reality and eye movement data analysis, in particular to an attention and fatigue degree evaluation method and wearable equipment.
Background
Modeling of visual attention and related topics has developed rapidly in recent years in multimedia processing and computer vision, and plays an important role in the transmission, compression, and processing of images and videos. With the rapid growth of information technology and virtual reality (VR) in recent years, more and more panoramic content and applications have appeared. Virtual reality technology provides an immersive media experience for the user: 360-degree panoramic content, a brand-new media form, can be presented in a head-mounted display for browsing, and the user can choose which part of the spherical content to view by moving the head, making the viewing process freer and more interactive.
However, many applications in conventional immersive virtual reality systems provide content based on a task paradigm, such as the attention assessment method and system disclosed in application No. 202111429477.8, and they do not consider whether the content in the virtual scene is actually of interest to the user. Because of individual differences, different users may have quite different subjective responses to the same presented content. When subject video content is presented to the user in a virtual scene, evaluating only the behavioral data generated by the user's visual attention in the virtual scene therefore leads to slow assessment and low accuracy of the assessment result, and an inaccurate attention assessment cannot assist the physician in making a determination.
Therefore, how to provide a method for evaluating attention and fatigue degree and a wearable device is a problem that needs to be solved by those skilled in the art.
Disclosure of Invention
In view of this, the invention provides an attention and fatigue degree evaluation method and a wearable device. Features are extracted from the three-dimensional gaze data generated by eye tracking in a virtual scene, and a machine learning model is built on the indices obtained by quantifying the eye movement data to detect, evaluate, and analyze whether the user's attention is concentrated and how fatigued the user is. The environment provided by eye tracking and virtual reality helps to obtain objective eye-movement behavior data, and these data more truly reflect the user's likely fatigue level or attention state at a specific time point during viewing.
In order to achieve the above purpose, the invention provides the following technical scheme:
a method for evaluating attention and fatigue degree and a wearable device comprise the following steps:
acquiring eye movement data sample information;
creating a virtual space, tracking eye movement, and analyzing gaze data;
calculating an interest value in the interest area according to the gaze data;
preprocessing the eye movement data sample information;
constructing an evaluation analysis model, and training the evaluation analysis model according to the preprocessed eye movement data sample information and the interest value as fitting training data;
and obtaining the eye movement data to be evaluated, inputting the eye movement data to be evaluated into the trained evaluation analysis model, and outputting the evaluation analysis results of the attention and the fatigue degree.
Preferably, the creating a virtual space, performing eye tracking according to the eye movement data sample information, and analyzing the gaze data includes:
creating a controllable component user interface, and setting a video playing button sub-control and an interface display sub-control;
controlling the video playing button sub-control and the interface display sub-control through the gaze behavior interface;
and tracking the eye movement in the control process, and analyzing the gaze data in the interest area.
Preferably, the gaze data comprises: the number of gaze points within the region of interest and the gaze vector within the region of interest.
Preferably, the calculating an interest value in the interest region according to the gaze data includes:
setting a three-dimensional vector in a controllable component user interface as v = (x, y, z), wherein x, y and z are respectively returned as floating point values;
calculate the overall gaze vector:
[Equation image not reproduced in the original text: definition of the overall gaze vector OFV]
where t_i is the gaze duration from the start of gaze up to time i; Z_b is set equal to 0 so that (x_b, y_b, 0) is a point on the base plane of the virtual space; v_i is the three-dimensional vector of the i-th fixation point, v_i = (x_i - x_b, y_i - y_b, z_i - z_b); and n is the total number of fixation points;
a sphere with a physics collider is arranged in the controllable component user interface; the angle between a ray from the sphere-center origin to the sphere wall and the horizontal plane is θ, and the angle between the gaze ray to the sphere wall and the horizontal plane is α; the sphere radius r and the gaze distance d are calculated as:
r = ||v_i - 0||
d = ||v_i - v_o||
where v_i = (x_i, y_i, z_i) and v_o = (x_o, y_o, z_o) are the fixation-point vector coordinates and the gaze-origin coordinates, respectively;
and calculating to obtain:
[Equation image not reproduced in the original text: relation between θ, α, r, d, and |z_i|]
s.t. 0° < θ < 90°, 0° < α < 90°
where |z_i| is the absolute value of the z coordinate of each fixation point;
the tolerance T is obtained as follows:
T = f(v_i, v_{i-1}, α, θ)
calculating interest values in the interest areas:
[Equation image not reproduced in the original text: definition of the interest value within the interest area]
where m is the number of fixation points falling within the region of interest and v̄ is the mean of the gaze vectors within the region of interest.
Preferably, the preprocessing the eye movement data sample information includes:
carrying out mean processing on the obtained eye movement data sample information based on the continuous points to obtain smooth data;
removing invalid areas and abnormal eye movement data information in the smooth data;
and obtaining the preprocessed eye movement data sample information.
Preferably, the constructing an evaluation analysis model and training the evaluation analysis model according to the preprocessed eye movement data sample information and the interest value as fitting training data includes:
carrying out statistical index analysis on the preprocessed eye movement data sample information to obtain the correlation among the eye movement partial characteristics;
obtaining interest degree data, taking the interest degree data and the correlation between the eye movement partial characteristics as labels of a training model, and establishing an evaluation analysis model based on the eye movement data and the interest degree data;
fitting the preprocessed eye movement data and the interest values to be used as training data to train the evaluation analysis model.
Preferably, the fitting the preprocessed eye movement data to the interest value as training data further includes: the training data were cross-validated separately.
Preferably, the method further comprises the following steps: and selecting the root mean square error and the absolute average error as evaluation indexes of the evaluation analysis model, and carrying out accuracy evaluation on the evaluation analysis model.
In another aspect, a wearable device is provided, which includes a data acquisition device for acquiring eye movement data sample information and eye movement data to be evaluated, a processor, a memory, and a computer program stored in the memory and executable on the processor, and is characterized in that when the computer program is executed by the processor, the above-mentioned method for evaluating the attention and fatigue level is implemented.
Compared with the prior art, the attention and fatigue degree evaluation method and wearable device of the invention benefit from the strong, realistic immersion provided by eye tracking and virtual reality; compared with approaches that cannot detect whether the user's attention is focused or whether viewing fatigue has set in, virtual reality markedly improves the long-term effectiveness of the experience. The user's visual attention during the experience can be recorded in real time, together with interest values that directly quantify the attention the user devotes to videos in the virtual scene. A machine learning model built from the preprocessed user eye movement data and the interest data can quickly and accurately judge whether the user was attentive, and how fatigued, throughout the experience, and these indices give further insight into the user's attention distribution. In addition, because the three-dimensional eye-movement data acquisition method takes the depth coordinate into account, it is more tolerant of head movement and better able to acquire valid, accurately quantified eye movement data. This provides effective data support and reference for the medical diagnosis of psychological, cognitive, and other disorders and for the evaluation of a patient's state.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only embodiments of the present invention; for those skilled in the art, other drawings can be obtained from the provided drawings without creative effort.
Fig. 1 is a schematic flow chart of the method for evaluating attention and fatigue degree according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example 1
Referring to fig. 1, in one aspect, embodiment 1 of the present invention discloses a method for evaluating attention and fatigue, including the following steps:
acquiring eye movement data sample information;
creating a virtual space, tracking eye movement, and analyzing gaze data;
calculating an interest value in the interest area according to the gaze data;
preprocessing the eye movement data sample information;
constructing an evaluation analysis model, and training the evaluation analysis model according to the preprocessed eye movement data sample information and the interest value as fitting training data;
and acquiring the eye movement data to be evaluated, inputting the eye movement data to be evaluated into the trained evaluation analysis model, and outputting the evaluation analysis results of attention and fatigue degree.
In one embodiment, in a first step, eye movement data sample information is obtained;
specifically, the selection of software and hardware is developed and the experimental setup of the acquisition equipment is simplified. The head-mounted virtual equipment is used for data acquisition, and the high-precision eye movement tracker is integrated in the virtual reality helmet, so that the requirement for watching video eye movement data acquisition in a virtual scene can be met.
More specifically, the Unity3D engine is used, which helps build virtual reality applications quickly. The indices of eye-movement behavior data used in the recognition are determined, and some of these indices are obtained by calling the Application Programming Interface (API) provided by HTC and the API in the Tobii XR SDK (Software Development Kit).
More specifically, the following indices are mainly used: the three-dimensional (3D) gaze point in space, pupil position, pupil diameter, degree of eye openness, gaze direction, gaze-origin position, data confidence, and so on. The eye tracker integrated in the device samples at 120 Hz, but in the Unity application the refresh rate is 60 Hz, so each acquisition of eye movement data through the Tick function (a function executed once per frame) yields 60 eye movement data points per second.
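As an illustration of the data layout just described, the following Python sketch shows one way to buffer per-frame eye movement samples at the application's 60 Hz refresh rate. The field names and the poll_eye_tracker() helper are hypothetical placeholders, not the Tobii XR or HTC API; this is a minimal sketch of the record structure, not the actual acquisition code.

```python
from dataclasses import dataclass
from typing import Tuple, List

Vec3 = Tuple[float, float, float]

@dataclass
class EyeSample:
    """One per-frame eye movement record (hypothetical field names)."""
    gaze_point_3d: Vec3      # 3D fixation point in world space
    gaze_origin: Vec3        # gaze-origin position (approx. helmet position)
    gaze_direction: Vec3     # normalized gaze direction
    pupil_position: Vec3     # pupil position vector
    pupil_diameter: float    # pupil diameter
    eye_openness: float      # degree of eye opening, 0..1
    confidence_value: float  # data reliability, 0..1
    timestamp_ms: float      # time of the sample

def collect_samples(poll_eye_tracker, duration_s: float, fps: int = 60) -> List[EyeSample]:
    """Accumulate roughly fps samples per second for duration_s seconds.

    poll_eye_tracker is a stand-in for whatever callback the SDK invokes
    once per frame (the Tick function mentioned in the description).
    """
    samples: List[EyeSample] = []
    for frame in range(int(duration_s * fps)):
        samples.append(poll_eye_tracker(frame / fps * 1000.0))
    return samples
```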
In one embodiment, the second step is to create a virtual space and perform eye tracking to analyze the gaze data;
specifically, a controllable component user interface is created, controls can be controlled through the gaze behavior interface, and a video playing button sub-control and an interface display sub-control are set, and are all associated with the gaze behavior interface in the encoding process. A sphere video player is placed in a world scene, a sphere Prefab with physical collision is arranged in a Unity world view (Prefab is an abstraction of all game objects in Unity, and special game objects are also based on the sphere Prefab) and placed.
More specifically, after the head-mounted device is worn, video content is selected and played; during viewing, the number of fixation points and the fixation-point data within the interest area are collected.
In a specific embodiment, the third step, calculating interest values in the interest areas according to the gaze data;
specifically, the point where the eye SDK interface returns and intersects with the collision of the sphere Prefab is a fixation point (fixionpoint), the coordinates of the eye SDK interface in the world coordinate system are three-dimensional vectors v = (x, y, z), x, y, and z are respectively returned as floating point values, and features are calculated based on each time window with a video segment, so that an overall fixation vector (overallfixionvector) is given:
[Equation image not reproduced in the original text: definition of the overall fixation vector OFV]
where t_i is the gaze duration from the start of gaze up to time i; Z_b is set equal to 0 so that (x_b, y_b, 0) is a point on the horizontal base plane of the virtual space; v_i is the three-dimensional vector of the i-th fixation point, v_i = (x_i - x_b, y_i - y_b, z_i - z_b); and n is the total number of fixation points;
in particular, wherein v i =(x i -x b ,y i -y b ,z i -z b ),(x b ,y b 0) is a point on the spatial base plane (just the hemispherical section of the stereoscopic video presentation), and to facilitate the calculation of the vector in the world, z is set b Constant at zero, resulting in a behavior in world space that the baseline vector point is always at the world level. The gaze origin coordinates are already contained in the eye movement index data, so when the gaze point vector is detected, the distance from the gaze point to the gaze origin can be calculated manually, but generally since the virtual reality device is helmet-worn on a 6DOF (six degrees of freedom, which means that the device allows all-around movement in the environment, with higher tolerance) basis and allows head movement during viewing, the gaze origin position is almost impossible at a fixed point. Thus, the included angle between the ray from the origin of the sphere center to the sphere wall and the horizontal plane is theta, the included angle between the ray from the watching ray to the sphere wall and the horizontal plane is alpha, the radius r of the sphere and the watching distance d are included,
r = ||v_i - 0||
d = ||v_i - v_o||
where v_i = (x_i, y_i, z_i) and v_o = (x_o, y_o, z_o) are the fixation-point vector coordinates and the gaze-origin coordinates (approximately the three-dimensional coordinates of the helmet in space), respectively, so that:
[Equation image not reproduced in the original text: relation between θ, α, r, d, and |z_i|]
s.t. 0° < θ < 90°, 0° < α < 90°
where |z_i| is the absolute value of the z coordinate of each fixation point. The fixation duration t_i used in the overall fixation vector (OFV) is not returned by the interface and must therefore be computed manually; the tolerance T is then set as:
T = f(v_i, v_{i-1}, α, θ)
within tolerance (in the virtual scene world of the Unity application, a distance measurement unit is called a Unity unit, and one Unity unit is equal to 1cm of the real world), time recording and data return are allowed, so that accurate calculation and effective data acquisition of the fixation time are guaranteed. Besides the gazing index, there are pupil position vector pupil _ position and eye opening degree eye _ openness, and such indices can assist in obtaining accurate positioning and analysis of the direction seen by the subject. In addition, there is still an important confidence _ value of the index eye movement data, which ranges from 0 to 1 (floating point value), and the eye movement data can be screened according to the index in the subsequent preprocessing task. Implicitly setting (invisible to the user) some interest areas AOI (Area interests) on the sphere Prefab, whose distribution and shape can freely choose to define or set the size and shape of the collider on the gazing object, which also determines the size and shape of the AOI, extracting the name of the object attached to the collider and the length of the ray (distance from the player's eyes to the gazing object) up to the hit point whenever the gazing ray intersects the collider around the AOI, so calculating the interest values in the interest areas is:
[Equation image not reproduced in the original text: definition of the interest value within the interest area]
where m is the number of gaze points falling within the region of interest and v̄ is the mean of the gaze vectors within the region of interest.
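The quantities the description defines explicitly, the base-plane-relative fixation vector v_i, the sphere radius r = ||v_i||, the gaze distance d = ||v_i - v_o||, the count m of fixation points inside an AOI, and the mean gaze vector within the AOI, can be computed as in the following Python sketch. The interest-value formula itself appears only as an equation image in the original and is not reproduced here; in_aoi is a hypothetical membership test standing in for the collider hit reported by Unity.

```python
import numpy as np

def fixation_vector(gaze_point, base_point_xy):
    """v_i = (x_i - x_b, y_i - y_b, z_i - z_b) with z_b fixed at 0."""
    x_b, y_b = base_point_xy
    x_i, y_i, z_i = gaze_point
    return np.array([x_i - x_b, y_i - y_b, z_i - 0.0])

def radius_and_distance(v_i, v_o):
    """r = ||v_i - 0|| (sphere radius), d = ||v_i - v_o|| (gaze distance)."""
    r = np.linalg.norm(v_i)
    d = np.linalg.norm(np.asarray(v_i) - np.asarray(v_o))
    return r, d

def aoi_statistics(fixation_vectors, in_aoi):
    """Count fixation points inside an AOI and average their gaze vectors.

    in_aoi(v) is a hypothetical predicate standing in for the collider hit
    test; it returns True when the gaze ray hits the AOI's collider.
    """
    hits = [np.asarray(v) for v in fixation_vectors if in_aoi(v)]
    m = len(hits)
    mean_vec = np.mean(hits, axis=0) if m else np.zeros(3)
    return m, mean_vec
```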
In a specific embodiment, the fourth step is to preprocess the eye movement data sample information;
specifically, the acquired eye movement data is subjected to mean processing based on the continuous points to obtain smooth data, and abrupt noise interference is reduced. And taking the video area and the credibility as rejection criteria, and rejecting eye movement data information in the invalid area and the abnormal eye movement data information.
In a specific embodiment, the fifth step is to construct an evaluation analysis model, and train the evaluation analysis model according to the preprocessed eye movement data sample information and the interest values as fitting training data;
specifically, standard statistical characteristics of the processed data are generated. And performing statistical index analysis on the preprocessed eye movement data to obtain a mean value, a variance, a skewness and a kurtosis, performing correlation analysis to find out the correlation among partial features of the eye movement, and establishing a machine learning classification model based on the eye movement data and questionnaire data by using the answers of the interest degree scale questionnaire as labels of a training model. In order to better solve the high-dimensional feature classification, a Support Vector Machine (SVM) algorithm is used for model construction (see the content of specific examples for detailed hyper-parameters of the model). Similarly, under the equal hyper-parameter selection, the eye movement data and the interest value are fitted as training data by using a Support Vector Regression (SVR) algorithm.
In an embodiment, in the sixth step, the eye movement data to be evaluated are obtained and input into the trained evaluation analysis model, and the evaluation analysis results of attention and fatigue degree are output.
Specifically, the best-performing model is selected according to the model performance indices to accept the eye movement data input, and the classification and regression results are obtained. The results reflect whether the user's attention was relatively concentrated and whether the user was fatigued during the viewing experiment; the interest values calculated in preprocessing are saved in a database for subsequent research and analysis.
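At inference time this step amounts to feeding a new subject's preprocessed feature vector into the selected classifier and regressor. A minimal sketch, assuming the clf and reg models from the training step and a hypothetical extract_features helper that returns a NumPy feature vector:

```python
def assess(clf, reg, raw_eye_data, extract_features):
    """Return (attention/fatigue class, predicted interest value) for new eye data."""
    x = extract_features(raw_eye_data).reshape(1, -1)  # one feature row
    label = clf.predict(x)[0]            # e.g. attentive vs. inattentive/fatigued
    interest = float(reg.predict(x)[0])  # regressed interest value
    return label, interest
```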
In another aspect, embodiment 1 of the present invention discloses a wearable device, which includes a data acquisition device for acquiring head movement data and eye movement data of a subject, a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein when the computer program is executed by the processor, the method for evaluating attention and fatigue is implemented.
Example 2
In order to increase the feasibility of implementing the invention, the present disclosure is further illustrated with reference to an example. We chose to carry out this scheme with subjects with depression; it consists essentially of the following steps:
In the first step, the wearable device used in this Embodiment 2 is based on an HTC Vive head-mounted virtual reality device as the acquisition platform, with a high-performance graphics-rendering host supporting the stability of the virtual environment. The subject wears the VR device, which is then secured, so that the helmet does not compromise the validity of the acquired eye movement data through displacement caused by the subject's head movement during the treatment and acquisition process; no intervention is required from the user throughout participation.
In the second step, after the head-mounted device is worn, video content is selected and played; during viewing, 7000 fixation points are collected in total, of which 4000 fall within the set interest area.
In the third step, the collected eye movement data are averaged over consecutive points to obtain smoothed data and reduce abrupt noise interference. The video region and the confidence value are used as rejection criteria; eye movement data in invalid regions and abnormal eye movement data are rejected, leaving about 6000 data points after preprocessing.
In the fourth step, taking one fixation-point datum as an example, the gaze-origin coordinates are v_o = (1.2981, -5.7628, 1.6333) and the fixation-point coordinates are v_i = (13.5216, 23.7231, 15.3221). The angle α is calculated to be slightly less than θ; since the point falls within the tolerance T threshold, the single-point fixation time is computed and the point's gaze-related data are returned. The OFV is calculated as OFV = (224.5123, 443.6423, 315.2361); the number of fixation points within the interest area is 103; the total fixation time within the interest area is 7980 milliseconds; and the mean fixation vector is
[Numeric value of the mean fixation vector not reproduced; it appears as an equation image in the original.]
The interest value was calculated to be 75.3. Similarly, the interest value of the interest area at other moments can be calculated.
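As a check on the explicitly defined quantities r and d for the numeric example above (the θ/α comparison itself depends on the equation image that is not reproduced), a short sketch:

```python
import numpy as np

v_o = np.array([1.2981, -5.7628, 1.6333])    # gaze-origin coordinates from the example
v_i = np.array([13.5216, 23.7231, 15.3221])  # fixation-point coordinates from the example

r = np.linalg.norm(v_i)        # sphere radius r = ||v_i - 0||   -> about 31.3
d = np.linalg.norm(v_i - v_o)  # gaze distance d = ||v_i - v_o|| -> about 34.7
print(r, d)
```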
In the fifth step, the answers to a simple interest questionnaire are used as labels of the training model, and an SVM algorithm is used to construct a binary classification model. Two hyper-parameters are mainly tuned: the kernel is the nonlinear radial basis function (RBF), the regularization coefficient c is the default 1.0, and the gamma coefficient is the default 0.01; to prevent the model from over-fitting the training data, c and gamma are continuously adjusted to appropriate values so that the decision boundary sits in a suitable position. The training data were cross-validated with 5, 10, and 15 folds, and the AUC (area under the curve, usually used as an indicator of model performance; a floating-point value between 0 and 1, generally the larger the better) on the receiver operating characteristic (ROC) curve averaged up to 0.76 or more. In the selected support vector regression model, the hyper-parameters are tuned in almost the same way as for the classification model; the preprocessed eye movement data and the interest values are used as fitting training data, and the root mean square error (RMSE) and mean absolute error (MAE) serve as the model evaluation indices.
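The evaluation procedure described here, k-fold cross-validation of the classifier with ROC AUC averaged over folds and RMSE/MAE for the regressor, could be scripted roughly as follows. This sketch assumes that clf, reg, X, y_cls, and y_interest are the models and data from the previous steps; it is not the authors' evaluation code.

```python
import numpy as np
from sklearn.model_selection import cross_val_score, cross_val_predict
from sklearn.metrics import mean_squared_error, mean_absolute_error

def evaluate(clf, reg, X, y_cls, y_interest):
    # Mean ROC AUC of the SVM classifier under 5-, 10-, and 15-fold cross-validation
    aucs = {k: cross_val_score(clf, X, y_cls, cv=k, scoring="roc_auc").mean()
            for k in (5, 10, 15)}

    # RMSE and MAE of the SVR interest-value regressor (cross-validated predictions)
    pred = cross_val_predict(reg, X, y_interest, cv=5)
    rmse = float(np.sqrt(mean_squared_error(y_interest, pred)))
    mae = float(mean_absolute_error(y_interest, pred))
    return aucs, rmse, mae
```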
In the sixth step, the model is used to receive the eye movement data input of a depression subject newly participating in video relaxation, yielding classification and regression results. From these results it can be determined whether the participating user was attentive, and how fatigued, during this viewing experience; the interest values calculated after preprocessing are stored in the user's database for subsequent analysis and use.
Compared with the prior art, the attention and fatigue degree evaluation method and wearable device benefit from the strong, realistic immersion provided by eye tracking and virtual reality; compared with approaches that cannot detect whether the user's attention is focused or whether viewing fatigue has set in, virtual reality markedly improves the long-term effectiveness of the experience. The user's visual attention during the experience can be recorded in real time, together with interest values that directly quantify the attention the user devotes to videos in the virtual scene. A machine learning model built from the preprocessed user eye movement data and the interest data can quickly and accurately judge whether the user was attentive, and how fatigued, throughout the experience, and these indices give further insight into the user's attention distribution. In addition, because the three-dimensional eye-movement data acquisition method takes the depth coordinate into account, it is more tolerant of head movement and better able to acquire valid, accurately quantified eye movement data. This provides effective data support and reference for the medical diagnosis of eye diseases and of psychological and cognitive disorders.
In the present specification, the embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed in the embodiment corresponds to the method disclosed in the embodiment, so that the description is simple, and the relevant points can be referred to the description of the method part.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (9)

1. A method for evaluating attention and fatigue degree is characterized by comprising the following steps:
acquiring eye movement data sample information;
creating a virtual space, tracking eye movement, and analyzing gaze data;
calculating an interest value in the interest area according to the gaze data;
preprocessing the eye movement data sample information;
constructing an evaluation analysis model, and training the evaluation analysis model according to the preprocessed eye movement data sample information and the interest value as fitting training data;
and obtaining the eye movement data to be evaluated, inputting the eye movement data to be evaluated into the trained evaluation analysis model, and outputting the evaluation analysis results of the attention and the fatigue degree.
2. The method for assessing attention and fatigue according to claim 1, wherein the creating of the virtual space, the eye tracking, and the analyzing of the gaze data comprise:
creating a controllable component user interface, and setting a video playing button sub-control, an interface display sub-control and a gaze behavior interface;
controlling the video playing button sub-control and the interface display sub-control through the gaze behavior interface;
and tracking the eye movement in the control process, and analyzing the gaze data in the interest area.
3. The method of claim 2, wherein the gaze data comprises: the number of gaze points within the region of interest and the gaze vector within the region of interest.
4. The method of claim 3, wherein the calculating interest values in the interest area according to the gaze data comprises:
setting a three-dimensional vector in a controllable component user interface as v = (x, y, z), wherein x, y and z are respectively returned as floating point values;
calculate the overall gaze vector:
[Equation image not reproduced in the original text: definition of the overall gaze vector OFV]
wherein t_i is the gaze duration from the start of gaze up to time i; Z_b is set equal to 0 so that (x_b, y_b, 0) is a point on the base plane of the virtual space; v_i is the three-dimensional vector of the i-th fixation point, v_i = (x_i - x_b, y_i - y_b, z_i - z_b); and n is the total number of fixation points;
arranging a sphere with a physics collider in the controllable component user interface, letting the angle between the ray from the sphere-center origin to the sphere wall and the horizontal plane be θ and the angle between the gaze ray to the sphere wall and the horizontal plane be α, and calculating the sphere radius r and the gaze distance d:
r = ||v_i - 0||
d = ||v_i - v_o||
wherein v_i = (x_i, y_i, z_i) and v_o = (x_o, y_o, z_o) are the fixation-point vector coordinates and the gaze-origin coordinates, respectively;
and calculating to obtain:
[Equation image not reproduced in the original text: relation between θ, α, r, d, and |z_i|]
s.t. 0° < θ < 90°, 0° < α < 90°
wherein |z_i| is the absolute value of the z coordinate of each fixation point;
the tolerance T is obtained as follows:
T = f(v_i, v_{i-1}, α, θ)
calculating interest values in the interest areas:
[Equation image not reproduced in the original text: definition of the interest value within the interest area]
wherein m is the number of fixation points falling within the region of interest and v̄ is the mean of the gaze vectors within the region of interest.
5. The method of claim 1, wherein the preprocessing the eye movement data sample information comprises:
carrying out mean value processing on the obtained eye movement data sample information based on the continuous points to obtain smooth data;
eliminating invalid regions and abnormal eye movement data information in the smooth data;
and obtaining the preprocessed eye movement data sample information.
6. The method for assessing the degree of attention and fatigue according to claim 5, wherein the constructing an assessment analysis model and training the assessment analysis model according to the preprocessed eye movement data sample information and the interest values as fitting training data comprises:
carrying out statistical index analysis on the preprocessed eye movement data sample information to obtain the correlation among the eye movement partial characteristics;
obtaining interest degree data, taking the interest degree data and the correlation between the eye movement partial characteristics as labels of a training model, and establishing an evaluation analysis model based on the eye movement data and the interest degree data;
and fitting the preprocessed eye movement data and the interest values to be used as training data to train the evaluation analysis model.
7. The method of claim 6, wherein the fitting the pre-processed eye movement data to the interest value as training data further comprises: the training data was cross-validated.
8. The method of claim 1, further comprising: and selecting the root mean square error and the absolute average error as evaluation indexes of the evaluation analysis model, and carrying out accuracy evaluation on the evaluation analysis model.
9. A wearable device comprising a data acquisition device for acquiring eye movement data sample information and eye movement data to be evaluated, a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the method for assessing attention and fatigue of any one of claims 1-8.
CN202210981467.3A (filed 2022-08-15, priority 2022-08-15): Attention and fatigue degree evaluation method and wearable device. Status: Pending. Published as CN115299945A.

Priority Applications (1)

Application Number: CN202210981467.3A; Priority Date: 2022-08-15; Filing Date: 2022-08-15; Title: Attention and fatigue degree evaluation method and wearable device


Publications (1)

Publication Number: CN115299945A; Publication Date: 2022-11-08

Family

ID=83862713

Family Applications (1)

Application Number: CN202210981467.3A; Title: Attention and fatigue degree evaluation method and wearable device; Status: Pending

Country Status (1)

Country: CN; Publication: CN115299945A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination