CN114882477A - Method for predicting automatic driving takeover time by using eye movement information

Method for predicting automatic driving takeover time by using eye movement information

Info

Publication number
CN114882477A
CN114882477A
Authority
CN
China
Prior art keywords
driver
eye movement
time
automatic driving
driving
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210206978.8A
Other languages
Chinese (zh)
Inventor
胡宏宇
梁耘翰
张慧珺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jilin University
Original Assignee
Jilin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jilin University
Priority to CN202210206978.8A
Publication of CN114882477A
Legal status: Pending

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W60/00: Drive control systems specially adapted for autonomous road vehicles
    • B60W60/005: Handover processes
    • B60W60/0057: Estimation of the time available or required for the handover
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00: Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10: Complex mathematical operations
    • G06F17/18: Complex mathematical operations for evaluating statistical data, e.g. average values, frequency distributions, probability functions, regression analysis
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/243: Classification techniques relating to the number of classes
    • G06F18/2431: Multiple classes

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Mechanical Engineering (AREA)
  • Evolutionary Biology (AREA)
  • Mathematical Physics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Transportation (AREA)
  • Mathematical Optimization (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Operations Research (AREA)
  • Probability & Statistics with Applications (AREA)
  • Artificial Intelligence (AREA)
  • Algebra (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a method for predicting automatic driving takeover time from eye movement information, comprising the following steps: step one, collecting the driver's eye movement behavior data during driving while synchronously measuring the driver's reaction times for steering and braking operations; step two, classifying the driver's eye movement data according to saccade angle; step three, constructing a mapping model between the eye movement behavior data and the driving operations, and using this model to obtain the predicted driver reaction time during an automatic driving takeover. The method collects the driver's eye movement data during driving, expresses it in the driver's view image coordinate system, and substitutes it into a regression equation to obtain the predicted automatic driving takeover time. The invention helps achieve a smooth and safe transfer of control between the vehicle and the driver, and overcomes the shortcoming of the prior art, which relies on theoretical derivation and lacks supporting real-vehicle test data.

Description

Method for predicting automatic driving takeover time by using eye movement information
Technical Field
The invention belongs to the field of automatic driving, and particularly relates to a method for predicting automatic driving takeover time by using eye movement information.
Background
During automatic driving, the driver can hand part of the driving task over to the vehicle and engage in non-driving tasks such as chatting, reading, or making phone calls. However, limited by the current state of the technology, automatic driving systems cannot yet perform the driving task completely in all situations and still require the driver to take over. The time the driver needs to take over is influenced by these non-driving tasks and is an important index for evaluating the safety of automatic driving.
Driving a vehicle is a dynamic control task during which the driver must obtain road information in real time to ensure driving safety. The eyes are the driver's main source of the information used to make decisions and execute the corresponding control responses. By measuring the driver's eye movements, the driver's ability to take over from automatic driving can be predicted.
Eye movement comprises two main event types: fixations and saccades. A fixation is the act of holding the point of gaze steady for longer than a threshold duration (typically 100 ms). Measures such as fixation duration and the percentage of gaze time spent on the road or the mirrors are used to estimate the driver's attentional state. A saccade is the rapid movement of the gaze between two fixations. The number of saccades has been used to measure how frequently a driver visually scans the scene during natural manual driving to obtain driving-related information, and thus reflects the driver's visual perception. Furthermore, saccade duration can indicate different perceptual activities: long saccades mainly reflect the acquisition of peripheral information, while short saccades reflect the acquisition of central object information. An eye tracker records the gaze trajectory of a person processing visual information; by recording and analyzing the driver's gaze trajectory, eye movement characteristics such as saccade amplitude, saccade speed, and fixation area can be clearly identified.
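As an illustration of how fixation and saccade events can be segmented from raw gaze samples, the following is a minimal sketch using a simple velocity-threshold scheme; the threshold values, function name, and data layout are assumptions for illustration, not part of the patent:

```python
import numpy as np

def detect_events(t, gx, gy, vel_thresh=30.0, min_fix_ms=100.0):
    """Label gaze samples as fixations or saccades with a velocity threshold.

    t      : sample timestamps (s); gx, gy: gaze angles (deg).
    Returns a list of (kind, start_index, end_index) events.
    """
    # Angular gaze velocity between consecutive samples (deg/s)
    vel = np.hypot(np.diff(gx), np.diff(gy)) / np.diff(t)
    if len(vel) == 0:
        return []
    is_sac = vel > vel_thresh

    events, start = [], 0
    for i in range(1, len(is_sac)):
        if is_sac[i] != is_sac[i - 1]:          # event boundary
            events.append(("saccade" if is_sac[start] else "fixation", start, i))
            start = i
    events.append(("saccade" if is_sac[start] else "fixation", start, len(is_sac)))

    # Keep only fixations lasting at least min_fix_ms (typically 100 ms)
    keep = lambda k, s, e: k == "saccade" or (t[e] - t[s]) * 1000.0 >= min_fix_ms
    return [ev for ev in events if keep(*ev)]
```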
At present, most research on drivers' visual characteristics is based on driving simulators, so the effectiveness of eye movement measures as predictors of automatic driving takeover time in a real-vehicle environment has not yet been established.
Disclosure of Invention
The invention designs and develops a method for predicting automatic driving takeover time from the driver's eye movement information. During driving, the driver's eye movement behavior data and the reaction times for steering and braking operations are collected synchronously; a mapping model between the eye movement data and the driving operations is fitted from this information; and the driver's eye movement data recorded during automatic driving are then substituted into the fitted model to obtain the predicted driver reaction time during an automatic driving takeover. The aim of the invention is to obtain the predicted automatic driving takeover time by measuring the driver's eye movement data.
The technical scheme provided by the invention is as follows:
a method for predicting automatic driving takeover time by using eye movement information is characterized by comprising the following steps:
step one, collecting the eye movement behavior data of a driver in the driving process, and synchronously measuring the reaction time of the driver during steering and braking operation;
step two, classifying the driver's eye movement behavior data according to saccade angle;
and step three, constructing a mapping relation model of the eye movement behavior data and the driving operation, and obtaining the driver operation reaction prediction time in the automatic driving takeover process by utilizing the model.
Preferably, in step one, the reaction time for an intended steering operation is defined as the total steering time during which the vehicle heading changes by 70°, measured after a stabilization period of at least 5 s following the automatic driving system's prompt to take over control, with the total steering duration kept below 10 s. The reaction time for intended braking is defined as the time the driver takes, after the takeover prompt, to reduce the vehicle speed by braking essentially to a standstill.
Preferably, in step one, a DMS camera acquires the eye movement behavior data in the driver's view coordinate system, and the driver's fixation behavior is divided into two types by region: fixations within the front windshield area and fixations outside it. The percentage of the driver's fixations that fall within the front windshield area, relative to all fixation behavior, is defined as the forward fixation percentage.
Preferably, in step two, a driver saccade of 5-12 degrees is defined as a small saccade, a saccade of 12-19 degrees as a medium saccade, and a saccade of 19-26 degrees as a large saccade.
Preferably, in step three, obtaining the regression equation includes the following steps:

Step 1, select p input variables and assume a multiple linear regression equation based on them:

$$Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \cdots + \beta_p X_p + \varepsilon$$

where the model parameters $\beta_0, \beta_1, \beta_2, \ldots, \beta_p$ are unknown and must be estimated from sample data; $\beta_0$ is the regression constant and $\beta_1, \beta_2, \ldots, \beta_p$ are the regression coefficients; X are the screened variables, p is the number of screened variables, with p ≤ m; Y is the automatic driving takeover time predicted from eye movement information; and ε is the error term, assumed to follow a normal distribution, i.e. $\varepsilon \sim N(0, \sigma^2)$;

Step 2, estimate $\hat{\beta}_0, \hat{\beta}_1, \ldots, \hat{\beta}_p$ by the least squares method, so that the sum of squared errors between the observed values $y_1, y_2, \ldots, y_n$ of the dependent variable and the predicted values $\hat{y}_1, \hat{y}_2, \ldots, \hat{y}_n$,

$$Q = \sum_{i=1}^{n} (y_i - \hat{y}_i)^2,$$

is minimized.
The invention has the beneficial effects that:
the method for predicting the automatic driving takeover time by using the eye movement information acquires the eye movement behavior data of the driver in the driving process, acquires the driving eye movement data of the driver in a visual angle image coordinate system, and substitutes the driving eye movement data into a regression equation to obtain the predicted automatic driving takeover time; the invention is beneficial to realizing the stable and safe control transition between the vehicle and the driver, and overcomes the defects that the solution is carried out by theoretical derivation and the data support of real vehicle test is lacked in the prior art.
Detailed Description
The present invention is described in further detail below so that those skilled in the art can practice it with reference to this description. The specific embodiments described herein are merely illustrative of the invention and are not intended to be limiting.
The invention provides a method for predicting automatic driving takeover time using driver eye movement information; the main implementation process is as follows:
step one, acquiring the eye movement behavior data of a driver in the driving process, and synchronously measuring the reaction time of the driver during steering and braking operation.
The test apparatus includes a camera-based DMS (driver monitoring system), which can assess the driver's real-time physical and mental state from facial image processing, including eyelid closure, blinking, gaze direction, yawning, and head movement. The focal region of the driver's fixations and the eye movements of the driver's saccades can be extracted from the video using the DMS's native algorithms. Before the experiment, the participants were asked to adjust the driver's seat position, after which a calibration procedure was performed for the DMS camera. The driver's reaction times for the intended steering and braking operations are obtained by automatic sampling of the autonomous vehicle's driving data. In step one, the reaction time for an intended steering operation is defined as the total steering time during which the vehicle heading changes by 70°, measured after a stabilization period of at least 5 s following the automatic driving system's prompt to take over control, with the total steering duration kept below 10 s. The reaction time for intended braking is defined as the time the driver takes, after the takeover prompt, to reduce the vehicle speed by braking essentially to a standstill. These thresholds were selected based on the resolution of the autonomous vehicle's driving data samples.
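By way of illustration, one possible reading of these reaction-time definitions, expressed over logged time series, is sketched below; the signal names, sampling assumptions, and the stop-speed threshold are hypothetical, not specified by the patent:

```python
import numpy as np

def steering_reaction_time(t, heading_deg, t_prompt, stab_s=5.0,
                           turn_deg=70.0, max_s=10.0):
    """Total steering time after the takeover prompt: following at least
    5 s of stabilization, the heading must change by 70 degrees within a
    steering episode shorter than 10 s. Returns None if the criterion
    is never met."""
    start = np.searchsorted(t, t_prompt + stab_s)
    for i in range(start, len(t)):
        if abs(heading_deg[i] - heading_deg[start]) >= turn_deg:
            duration = t[i] - t[start]
            return duration if duration < max_s else None
    return None

def braking_reaction_time(t, speed_mps, t_prompt, stop_thresh=0.5):
    """Time from the takeover prompt until braking brings the vehicle
    essentially to a stop (speed below stop_thresh m/s)."""
    for i in range(np.searchsorted(t, t_prompt), len(t)):
        if speed_mps[i] < stop_thresh:
            return t[i] - t_prompt
    return None
```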
Step two: classify the driver's eye movement data according to saccade angle.
The saccade angle is the range covered by the line of sight during a saccade, i.e., the visual angle swept from the end of one fixation to the beginning of the next, and is generally expressed as the rotation angle of the driver's gaze.
The camera-based driver monitoring system (DMS) can detect the start and end of pupil movements, and hence the amplitude and speed of saccadic eye movements. Owing to its limited sampling rate and resolution, the DMS cannot reliably detect saccades smaller than 5 degrees. Since saccades in the 5-26 degree range account for approximately 99% of all saccades, only saccades in this range are included in the analysis. To predict the automatic driving takeover time from the counts of saccades of different sizes, the 5-26 degree range is divided into three bins: small saccades of 5-12 degrees, medium saccades of 12-19 degrees, and large saccades of 19-26 degrees.
In total, five measures are used: forward fixation percentage, number of small saccades, number of medium saccades, number of large saccades, and average saccade speed; a sketch of how these might be computed from per-trial data follows.
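The following minimal sketch computes the five measures from per-trial fixation counts and saccade lists; the input layout and helper name are assumptions for illustration:

```python
import numpy as np

# Saccade-amplitude bins from the description (degrees)
BINS = {"small": (5, 12), "medium": (12, 19), "large": (19, 26)}

def eye_movement_features(n_fix_windshield, n_fix_total,
                          saccade_amp_deg, saccade_speed):
    """Return the five measures: forward fixation percentage, counts of
    small/medium/large saccades, and average saccade speed."""
    amp = np.asarray(saccade_amp_deg, dtype=float)
    spd = np.asarray(saccade_speed, dtype=float)
    keep = (amp >= 5) & (amp < 26)   # DMS cannot resolve < 5 deg; > 26 deg is rare
    amp, spd = amp[keep], spd[keep]
    feats = {
        "forward_fixation_pct": n_fix_windshield / n_fix_total,
        "avg_saccade_speed": float(spd.mean()) if len(spd) else 0.0,
    }
    for name, (lo, hi) in BINS.items():
        feats[f"n_{name}_saccades"] = int(np.sum((amp >= lo) & (amp < hi)))
    return feats
```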
Step three: construct the mapping model between the eye movement behavior data and the driving operations, and use the model to obtain the predicted driver reaction time during an automatic driving takeover.
Step 1: select the variables.
when a mapping relation model is built according to a plurality of independent variables, if all the variables are used to introduce a regression equation, the built model cannot be reasonably explained. Therefore, variables must be screened before the model is built, unnecessary variables are removed, and the step of building the regression equation is simplified and is easier to explain.
Step a): for each of the m candidate independent variables $x_1, x_2, \ldots, x_m$, fit a univariate regression equation against the dependent variable y, yielding m models; select the model with the largest F statistic and introduce its independent variable $x_i$ into the model first.

Step b): with $x_i$ in the model, fit the m − 1 two-variable mapping models obtained by pairing each variable outside the model with $x_i$, i.e. the variable sets $\{x_1, x_i\}, \{x_2, x_i\}, \ldots, \{x_{i-1}, x_i\}, \{x_{i+1}, x_i\}, \ldots, \{x_m, x_i\}$. Among these m − 1 linear models, select the one with the largest F statistic and introduce its new independent variable $x_j$ into the model. Repeat step b) until no variable outside the model is statistically significant; a sketch of this procedure follows.
Step 2: fit the regression equation.
Step a): with the p input variables selected in Step 1, assume the multiple linear regression equation of the mapping model to be solved:

$$Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \cdots + \beta_p X_p + \varepsilon$$

where the model parameters $\beta_0, \beta_1, \beta_2, \ldots, \beta_p$ are unknown and must be estimated from the sample data; $\beta_0$ is the regression constant and $\beta_1, \beta_2, \ldots, \beta_p$ are the regression coefficients; X are the screened variables, p is the number of screened variables, with p ≤ m; Y is the automatic driving takeover time predicted from eye movement information; ε is the error term, assumed to follow a normal distribution, i.e. $\varepsilon \sim N(0, \sigma^2)$, where $\sigma^2$ is the variance and N denotes the normal distribution.

Step b): estimate $\hat{\beta}_0, \hat{\beta}_1, \ldots, \hat{\beta}_p$ by the least squares method, so that the sum of squared errors between the observed values $y_1, y_2, \ldots, y_n$ of the dependent variable and the predicted values $\hat{y}_1, \hat{y}_2, \ldots, \hat{y}_n$,

$$Q = \sum_{i=1}^{n} (y_i - \hat{y}_i)^2,$$

is minimized. Treating the observed sample data as known and substituting them into the sample regression equation, the estimates are obtained by setting the partial derivatives of Q with respect to each parameter to zero. The numerical values can be computed with Excel or SPSS statistical software.
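An equivalent least-squares fit can be sketched in a few lines of Python, offered as an illustration alongside the Excel/SPSS route; the function names are our own:

```python
import numpy as np

def fit_takeover_model(X, y):
    """Least-squares estimate of (b0, b1, ..., bp) in
    Y = b0 + b1*X1 + ... + bp*Xp, minimizing the sum of squared errors Q.
    X: (n, p) eye movement features; y: (n,) measured takeover times."""
    A = np.column_stack([np.ones(len(X)), X])   # prepend intercept column
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

def predict_takeover_time(beta, x_new):
    """Substitute a new feature vector into the fitted regression equation."""
    return beta[0] + float(np.dot(beta[1:], x_new))
```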
and step 3: carrying out regression equation correlation test;
from multiple linear regression models
Y i =β 01 X 1i2 X 2i +…+β p X pi
Constructing t-test statistic:
Figure BDA0003531571090000071
in the formula
Figure BDA0003531571090000072
Is an estimated value of partial regression coefficient, and SE is an estimated value of sample partial regression coefficient
Figure BDA0003531571090000073
Standard error of, t i And (3) according to t distribution with the degree of freedom of n-p-1, an observed value of t statistic and a corresponding probability p value can be automatically calculated by using the SPSS, and 0.001 is taken as a value of a significance level alpha. If the probability p value is less than the significance level α, then identifyThe model is a multiple regression model with significance.
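The same t tests can be reproduced directly from the least-squares fit; the sketch below is a standard OLS computation, not code from the patent:

```python
import numpy as np
from scipy import stats

def coefficient_t_tests(X, y, beta):
    """t statistics and two-sided p values for each fitted coefficient,
    using n - p - 1 degrees of freedom as described above."""
    n, p = X.shape
    A = np.column_stack([np.ones(n), X])
    resid = y - A @ beta
    dof = n - p - 1
    sigma2 = (resid @ resid) / dof                 # residual variance estimate
    cov = sigma2 * np.linalg.inv(A.T @ A)          # covariance of the estimates
    t_vals = beta / np.sqrt(np.diag(cov))
    p_vals = 2 * stats.t.sf(np.abs(t_vals), dof)
    return t_vals, p_vals
```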
Step 4: obtain the predicted automatic driving takeover time.
During driving, the focal region of the driver's fixations and the driver's saccadic eye movements are extracted in real time by the DMS and substituted as independent variables into the multiple regression equation, yielding the predicted automatic driving takeover time.
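Putting the sketches above together, a hypothetical end-to-end use might look as follows (the training arrays, feature ordering, and numeric values are purely illustrative):

```python
# Offline: fit on one portion of the collected test data (X_train, y_train),
# keeping the rest for validating the prediction model.
beta = fit_takeover_model(X_train, y_train)
t_vals, p_vals = coefficient_t_tests(X_train, y_train, beta)
assert (p_vals[1:] < 0.001).all()   # significance level alpha = 0.001

# Online: features extracted by the DMS during automatic driving:
# forward fixation %, small/medium/large saccade counts, average saccade speed.
x_now = [0.82, 14, 6, 2, 95.0]
print(f"predicted takeover time: {predict_takeover_time(beta, x_now):.2f} s")
```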
The model is constructed in Step 2 from one portion of the collected test data, establishing the mapping between the eye movement information and the automatic driving takeover time; the remaining portion of the test data is used to validate the prediction model.
The independent variables in Step 4 are derived from the information extracted during driving.
The method can predict the automatic driving takeover time, allowing the automatic driving system to adjust its behavior according to the driver's state. From an artificial intelligence perspective, given that eye movements reflect characteristics of the driver's mental state, it is realistic and feasible to predict the expected automatic driving takeover time from eye movements before the driver resumes control.
While embodiments of the invention have been described, the invention is not limited to the details shown and described herein; it is to be accorded the widest scope consistent with the principles and novel features disclosed herein, to the extent that modifications are possible without departing from the general concept defined by the appended claims and their equivalents.
The above description is intended only to illustrate embodiments of the present invention, and the scope of the invention is not limited thereto; any modifications, equivalents, and improvements made by those skilled in the art within the technical scope disclosed by the invention shall be covered by its scope. Matters not described in detail in this specification are well within the skill of those in the art.

Claims (5)

1. A method for predicting automatic driving takeover time by using eye movement information is characterized by comprising the following steps:
step one, collecting the eye movement behavior data of a driver in the driving process, and synchronously measuring the reaction time of the driver during steering and braking operation;
step two, classifying the driver's eye movement behavior data according to saccade angle;
and step three, constructing a mapping relation model of the eye movement behavior data and the driving operation, and obtaining the driver operation reaction prediction time in the automatic driving takeover process by using the model.
2. The method for predicting automatic driving takeover time using eye movement information according to claim 1, characterized in that: in step one, the reaction time for an intended steering operation is defined as the total steering time during which the vehicle heading changes by 70°, measured after a stabilization period of at least 5 s following the automatic driving system's prompt to take over control, with the total steering duration kept below 10 s; the reaction time for intended braking is defined as the time the driver takes, after the takeover prompt, to reduce the vehicle speed by braking essentially to a standstill.
3. The method for predicting automatic driving takeover time using eye movement information according to claim 1, characterized in that: in step one, the driver's eye movement behavior data in the view coordinate system are obtained through a DMS camera, and the driver's fixation behavior is divided into two types by region: fixations within the front windshield area and fixations outside it; the percentage of the driver's fixations that fall within the front windshield area, relative to all fixation behavior, is defined as the forward fixation percentage.
4. The method for predicting automatic driving takeover time using eye movement information according to claim 1, characterized in that: in step two, a driver saccade of 5-12 degrees is defined as a small saccade, a saccade of 12-19 degrees as a medium saccade, and a saccade of 19-26 degrees as a large saccade.
5. The method for predicting automatic driving takeover time using eye movement information according to claim 1, characterized in that: in the third step, obtaining the mapping relationship model includes the following steps:
step 1, selecting p input variables and, based on them, assuming the multiple linear regression equation of the mapping model to be solved:

$$Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \cdots + \beta_p X_p + \varepsilon$$

wherein $\beta_0$ is the regression constant and $\beta_1, \beta_2, \ldots, \beta_p$ are the regression coefficients; X are the screened variables, p is the number of screened variables, with p ≤ m; Y is the automatic driving takeover time predicted from eye movement information; and ε is the error term, assumed to follow a normal distribution, i.e. $\varepsilon \sim N(0, \sigma^2)$;

step 2, estimating $\hat{\beta}_0, \hat{\beta}_1, \ldots, \hat{\beta}_p$ by the least squares method, so that the sum of squared errors between the observed values $y_1, y_2, \ldots, y_n$ of the dependent variable and the predicted values $\hat{y}_1, \hat{y}_2, \ldots, \hat{y}_n$,

$$Q = \sum_{i=1}^{n} (y_i - \hat{y}_i)^2,$$

is minimized.
CN202210206978.8A 2022-03-04 2022-03-04 Method for predicting automatic driving takeover time by using eye movement information Pending CN114882477A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210206978.8A CN114882477A (en) 2022-03-04 2022-03-04 Method for predicting automatic driving takeover time by using eye movement information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210206978.8A CN114882477A (en) 2022-03-04 2022-03-04 Method for predicting automatic driving takeover time by using eye movement information

Publications (1)

Publication Number Publication Date
CN114882477A (en) 2022-08-09

Family

ID=82668083

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210206978.8A Pending CN114882477A (en) 2022-03-04 2022-03-04 Method for predicting automatic driving takeover time by using eye movement information

Country Status (1)

Country Link
CN (1) CN114882477A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005092285A (en) * 2003-09-12 2005-04-07 Toyota Central Res & Dev Lab Inc Vehicle driving status estimating device and driver's vehicle driving characteristic estimating device
CN111656423A (en) * 2018-02-05 2020-09-11 索尼公司 Information processing device, mobile device, method, and program
CN112009397A (en) * 2020-07-31 2020-12-01 武汉光庭信息技术股份有限公司 Automatic driving drive test data analysis method and device
CN112435466A (en) * 2020-10-23 2021-03-02 江苏大学 Method and system for predicting take-over time of CACC vehicle changing into traditional vehicle under mixed traffic flow environment
US20210078609A1 (en) * 2019-09-17 2021-03-18 Aptiv Technologies Limited Method and Device for Determining an Estimate of the Capability of a Vehicle Driver to take over Control of a Vehicle
CN113222295A (en) * 2021-06-07 2021-08-06 吉林大学 Method for predicting takeover time in control right switching state of L3-level automatic driving automobile
CN113415285A (en) * 2021-07-07 2021-09-21 西南交通大学 Driver alertness assessment method and system
CN113423627A (en) * 2018-12-14 2021-09-21 伟摩有限责任公司 Operating an automated vehicle according to road user reaction modeling under occlusion
CN113734193A (en) * 2020-05-28 2021-12-03 哲内提 System and method for estimating take over time
CN113753059A (en) * 2021-09-23 2021-12-07 哈尔滨工业大学 Method for predicting takeover capacity of driver under automatic driving system



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination