CN115429275A - Driving state monitoring method based on eye movement technology - Google Patents
- Publication number
- CN115429275A (application CN202211217794.8A)
- Authority
- CN
- China
- Prior art keywords
- driving
- tester
- stage
- eye movement
- task
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/16—Devices for psychotechnics; testing reaction times; devices for evaluating the psychological state
- A61B5/18—Devices for psychotechnics; testing reaction times; devices for evaluating the psychological state for vehicle drivers or machine operators
- A61B5/163—Devices for psychotechnics; testing reaction times; devices for evaluating the psychological state by tracking eye movement, gaze, or pupil change
Abstract
The invention discloses a driving state monitoring method based on eye movement technology, comprising a trial driving stage, a calibration stage, a testing stage, and a feedback stage. The purpose of the trial driving stage is to familiarize the tester with the driving simulator and the scenario, with no task restrictions. The calibration stage calibrates the tester's eye movement data before testing to ensure that high-quality data are obtained. The testing stage comprises the composition of the experimental scenes and the task settings; the two task types are "stimulus-driven" and "goal-driven". The feedback stage covers, first, feedback on the tester's task completion, to ensure that the tester carries out the experiment as required, and second, feedback on the immersion and realism of the experiment, to ensure the validity of the data. The invention records the tester's raw eye movement, head movement, and related behavioral data under the different task scenarios, together with driving speed, trajectory, and other vehicle data, in preparation for subsequent analysis.
Description
Technical Field
The invention belongs to the fields of virtual reality and human-computer interaction, draws on information theory and computer graphics, and in particular relates to a driving state monitoring method based on eye movement technology.
Background
Vehicle driving is a common perceptual-cognitive task that has long attracted interest. With the rapid rise of autonomous driving, analyzing and researching the driver's cognitive state has become very important: alongside data from various sensors, it can help the machine better learn the principles of driving, and an important step is to quantify and standardize the driver's cognitive state during automated driving. Data quality must be guaranteed throughout this process, so driving is simulated in virtual reality, testers are recruited to participate in experiments, and both behavioral data (eye movement, head movement, pupil size, etc.) and vehicle data (driving trajectory, speed, steering angle, etc.) are collected. Virtual reality gives the tester better interactivity and immersion, and scenes and task elements can be customized as required, guaranteeing the feasibility, controllability, and safety of any experiment. In virtual reality, three-dimensional scenes and objects can be constructed from multiple angles, and scene depth and material properties can be controlled, so that the driver's reactions during driving come closer to those of a real human. In addition, the eye tracker's coordinate calculations are more accurate in a three-dimensional scene. With these means, driving behavior and the complex perception process can be reasonably abstracted, normalized, quantified, and modeled, which is an open topic of wide interest; moreover, common perceptual-motor tasks such as driving are expected to offer general insights for other tasks, such as physical exercise, reading, and health monitoring.
From the driver's perspective, the cognitive state can be divided into two categories: "top-down" and "bottom-up". In the "top-down" state, the driver exercises full subjective initiative and can allocate attention to each stage of the driving operation; conversely, in the "bottom-up" state, the driver must constantly attend to changes in external stimuli and is passively guided by the surrounding environment. Against the background of driving state monitoring, abnormal-state early warning, and autonomous driving, distinguishing the two states is a highly necessary topic, especially during the takeover of automated driving, where the driver's cognitive state directly determines whether a safe takeover is possible. Basic eye movement indices such as fixation, saccade amplitude, and pupil change can serve as representations of the state, and, combined with the brain's modes of information processing at different stages, changes in visual information, with the human eye as the bridge, can be closely related to behavioral patterns.
Eye movement data make it possible to monitor the driver's state in real time during driving, addressing the problem that, in current automated driving, the vehicle cannot quickly identify the driver's cognitive state. By collecting the driver's time-series eye movement data and applying information-theoretic tools, perceptual-cognitive indices of the driver, such as perception capability, cognitive load, and perception balance, can be obtained. Using the significant differences these indices exhibit between the two virtual reality task scenarios as features, a classification model can be built to distinguish the "bottom-up" and "top-down" driving cognitive states, which shows the potential of eye movement data to support rich state-monitoring research. At present, one of the most valuable approaches to driving state monitoring is to fully mine and process the complex, multi-modal eye movement indices collected during driving.
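To make the preceding paragraph concrete, the following is a minimal Python sketch of how basic eye movement indices (fixation share, saccade amplitude, pupil statistics) could be derived from a gaze time series, using a simple velocity threshold to separate fixations from saccades. The threshold value, units, and function name are illustrative assumptions and do not come from the patent.

```python
import numpy as np

def basic_eye_indices(t, x, y, pupil, velocity_threshold=30.0):
    """Derive simple eye movement indices from a gaze time series.

    t      : sample timestamps in seconds
    x, y   : fixation point coordinates in degrees of visual angle
    pupil  : horizontal pupil diameter in mm
    velocity_threshold : deg/s boundary separating fixations from
        saccades (an assumed I-VT value, not specified in the patent)
    """
    dt = np.diff(t)
    step = np.hypot(np.diff(x), np.diff(y))  # angular displacement per sample
    velocity = step / dt                      # deg/s
    is_saccade = velocity > velocity_threshold

    saccade_amplitudes = step[is_saccade]
    return {
        "fixation_share": float(np.mean(~is_saccade)),
        "mean_saccade_amplitude": float(saccade_amplitudes.mean())
                                  if saccade_amplitudes.size else 0.0,
        "mean_pupil_mm": float(np.mean(pupil)),   # common cognitive-load proxy
        "pupil_variability": float(np.std(pupil)),
    }
```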
Disclosure of Invention
The invention aims to overcome the above shortcomings of the prior art and provides a driving state monitoring method based on eye movement technology. The invention advances research on monitoring the driving state during driving, lays a foundation for the smooth takeover of automated driving, establishes quantitative, standardized indices of driver behavior through information-theoretic tools, and can visualize the driving state so that results can be analyzed explicitly.
The purpose of the invention is realized by the following technical scheme:
A driving state monitoring method based on eye movement technology comprises a trial driving stage, a calibration stage, a testing stage, and a feedback stage;
the trial driving stage lets the tester learn and become familiar with the correct operating procedures and the sensitivity of the driving simulator, and the tester is under no task restrictions in the trial driving scene;
the calibration stage calibrates the fixation point coordinates of the eyes to ensure unbiased eye movement measurements; the tester's fixation point coordinates and pupil size are captured at each moment by an eye tracker, with pupil size based on the horizontal diameter of the pupil;
in the testing stage, the tester performs driving experiments in two different task scenarios: one is an urban road scene containing abundant visual stimulus content, and the other is set on a rural road, where the tester drives along a prescribed route guided by voice prompts; the testing stage requires the tester to obey real traffic rules;
the feedback stage provides feedback on the tester's task completion.
Further, the scene arrangement of the trial driving stage differs from that of the testing stage, ensuring the validity and accuracy of the data.
Further, the tester remains in the trial driving stage until normal driving operations, including driving straight and turning, can be completed smoothly in the scene.
Further, the calibration stage uses a nine-point calibration method: a central point appears at the center of the VR headset screen to determine the reference position, nine red calibration dots are generated around it, and the tester fixates each calibration dot in turn, clockwise; the tester's fixation point coordinates and pupil size are captured at each moment by the eye tracker, and calibration consists in matching the fixation point coordinates to the coordinates of the calibration dots.
Further, the tester must be informed of the experimental rules and tasks before the testing stage, including: driving in the simulated environment must obey traffic rules, follow the prescribed route, and keep speed and steering smooth.
Further, the feedback stage ensures that the tester has completed the driving experiment as required, gathers feedback on the immersion and realism of the driving experiment, and ensures that the collected eye movement data truly reflect the cognitive state.
Further, the tester's average steering angle and average acceleration in the driving experiment serve as the behavioral performance indices: a larger average steering angle or faster acceleration indicates poorer driving performance, while a smaller average steering angle or slower acceleration indicates better driving performance.
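A minimal sketch, under assumed units and sampling, of how these two behavioral performance indices could be computed from logged simulator data; the function and argument names are illustrative, not from the patent.

```python
import numpy as np

def driving_performance(t, steering_angle_deg, speed_mps):
    """Return (direction smoothness, speed smoothness) as defined above.

    t                  : timestamps in seconds
    steering_angle_deg : steering wheel angle per sample (deg)
    speed_mps          : vehicle speed per sample (m/s)
    """
    mean_steering = float(np.mean(np.abs(steering_angle_deg)))
    acceleration = np.diff(speed_mps) / np.diff(t)
    mean_acceleration = float(np.mean(np.abs(acceleration)))
    # Smaller values indicate smoother direction/speed, i.e. better driving.
    return mean_steering, mean_acceleration
```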
Compared with the prior art, the technical scheme of the invention has the following beneficial effects:
1. The method involves three levels of interaction. First, interaction between the human eyes and the environment: task-relevant information in the environment is gathered through the eyes, allowing the tester to adjust his or her own state to complete the task better. Second, interaction between the tester and the driving simulator: the tester operates the simulator to drive the car. Finally, interaction among the tester's senses, such as head-eye coordination over the time series, supports more effective visual scanning, and efficient visual scanning lets the driver acquire more task-relevant information. This information is instructive for automated driving, whether for evaluating the driver's state at takeover or for helping the machine imitate the driver's state.
2. To prevent the tester from memorizing the task scene, which would compromise the data, the scene arrangement of the trial driving stage differs markedly from that of the testing stage, ensuring the validity and accuracy of the data.
3. In the testing stage of the invention, testers carry out driving experiments in two different task scenarios. One is an urban road scene with rich visual stimulus content, including buildings, pedestrians, sidewalks, vehicles, traffic flow, trees, route signs, and public seating, used to study the influence of "bottom-up" control on driver behavior. The other is a rural road scene without task-irrelevant stimulus content, in which the tester drives along a prescribed route guided by voice prompts, similar to real-world voice navigation; this is the "goal-driven" task, used to study the influence of "top-down" control on driver behavior. In the two task scenarios, information-theoretic tools are used to test the direct and indirect influences of head-eye coordination, perception capability, cognitive load, and perception balance on direction and speed smoothness. The proposed indices can help predict driving performance.
Drawings
FIG. 1 is a diagram of the hardware devices used in the present invention and their operation;
FIG. 2 is a regression analysis plot of the head-eye coordination index against driving performance obtained in the embodiment;
FIG. 3 is a decomposition diagram of the perception balance index obtained in a specific experiment in the embodiment;
FIG. 4 is a diagram of the test scenes used in the testing stage in the embodiment;
FIG. 5 is a diagram of part of the stored data obtained in the embodiment.
Detailed Description
The invention is described in further detail below with reference to the figures and specific examples. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The hardware involved in this embodiment mainly comprises a simulator and a virtual reality headset. The simulator, specifically the steering wheel, accelerator, and brake of a Logitech G29 kit, is used by the tester to drive the simulated car. As shown in FIG. 1, the virtual reality headset displays the virtual reality scene; an eye tracker and an inertial measurement unit are integrated into it, the eye tracker collecting fixation point and pupil size data and the inertial measurement unit collecting head movement data.
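The patent does not specify a storage format; as an illustration only, one logged frame combining the three hardware channels above might look like the following sketch, with all field names and units assumed (FIG. 5 shows only part of the stored data).

```python
from dataclasses import dataclass

@dataclass
class FrameSample:
    """One logged frame; every field name and unit here is an assumption."""
    timestamp: float       # seconds since experiment start
    gaze_x: float          # fixation point coordinates in the scene
    gaze_y: float
    pupil_mm: float        # horizontal pupil diameter (mm)
    head_yaw_deg: float    # head orientation from the inertial unit
    head_pitch_deg: float
    speed_mps: float       # vehicle speed from the simulator
    steering_deg: float    # steering wheel angle
    throttle: float        # pedal positions in [0, 1]
    brake: float
```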
Referring to FIG. 4, urban and rural roads are simulated in the Unity engine, and "stimulus-driven" and "goal-driven" task scenes are designed. Each scene is a two-way four-lane road formed by connecting four straight segments and four curves, and also contains zebra crossings, ordinary street lamps, trees along both sides of the road, and so on. In the "stimulus-driven" scene, i.e., the urban road (upper part of FIG. 4), the stimulus objects are richer, including buildings, pedestrians, signs, traffic flow, sidewalks, route signs, public seating, etc., in order to study the effect of "bottom-up" control on driver behavior. In the "goal-driven" scene, i.e., the rural road (lower part of FIG. 4), there is no task-irrelevant stimulus content; the tester drives along a prescribed route guided by voice prompts, similar to real-world voice navigation, in order to study the effect of "top-down" control on driver behavior.
The virtual vehicle is equipped with an accelerator, brake, instrument panel, steering wheel, and left and right rearview mirrors; its structure and power system are fully simulated.
The specific operation process of the driving state monitoring method based on the eye movement technology is as follows:
1. Normal (or corrected-to-normal) vision is ensured by adjusting the diopter of the headset lenses, and the position and orientation of the driving seat are adjusted to personal habit. After these adjustments, the tester enters the virtual reality scene and begins the trial driving stage, learning and becoming familiar with the correct operating procedures and the sensitivity of the driving simulator; the tester is under no task restrictions in the trial driving scene. Once fully adapted, when the tester can smoothly complete operations such as driving straight and turning, he or she is informed that the next stage, testing, begins.
2. The correct trajectory of the eyeball movement is captured using a nine-point calibration method. First, a central point appears at the center of the screen to determine the reference position; then nine red calibration dots are generated around it, and the tester is asked to fixate each dot in turn, clockwise. Specifically, the eye tracker captures the tester's fixation point position and pupil size (in mm) at each moment, with (x_t, y_t) denoting the tester's fixation point coordinates; calibration consists in matching the fixation point coordinates to the coordinates of the calibration dots. The simplest way to ensure an unbiased pupil size measurement is to measure the horizontal diameter of the pupil directly, since the vertical diameter is very sensitive to eyelid closure. The wearing position of the virtual reality headset must also be corrected to ensure an unobstructed line of sight, and the tester's sitting posture is adjusted so that the driving operation is standard and reasonable.
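A minimal sketch of the matching check, assuming gaze samples grouped per calibration dot; the acceptance threshold is an assumed value, as the patent does not state one.

```python
import numpy as np

def calibration_error(target_points, gaze_samples, max_error=1.0):
    """Compare mean captured fixation coordinates against the known dot
    positions of a nine-point calibration.

    target_points : (9, 2) array of calibration dot coordinates
    gaze_samples  : list of 9 arrays of (x_t, y_t) gaze coordinates, one
                    per dot, recorded in the clockwise fixation order
    max_error     : acceptance threshold in the same units (assumed)
    """
    errors = np.array([
        np.linalg.norm(np.mean(samples, axis=0) - target)
        for target, samples in zip(target_points, gaze_samples)
    ])
    return errors, bool(np.all(errors < max_error))
```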
3. Task setup: the tester is informed of the experimental rules and tasks before the testing stage: (1) keep the speed as constant as possible at 40 km/h and keep it stable on level road; (2) follow the prescribed driving route, with a voice prompt given 50 m before each turn; turning the steering wheel freely anywhere other than the turning points is prohibited.
4. The tester formally enters the test scene and the test begins. The experimenter monitors the tester's driving standard on a screen throughout and keeps the test environment relatively quiet and free of external interference. If the experimenter notices an emergency, for example the tester becoming dizzy or operating incorrectly, the test is stopped promptly to ensure personal safety and the validity and accuracy of the data. Otherwise, after the data are stored (a data storage chart is shown in FIG. 5), each tester is interviewed after the testing stage: the tester's ratings of satisfaction with and presence in the test scene are recorded, and finally the tester is asked for suggestions to improve the overall experimental procedure. This feeds back the immersion and realism of the driving experiment and ensures that the collected eye movement data truly reflect the cognitive state.
In this embodiment, the average steering angle and the average acceleration serve as the two behavioral performance indices: a larger average steering angle or faster acceleration indicates poorer driving performance, while a smaller average steering angle or slower acceleration indicates better driving performance. The average steering angle index may be called direction smoothness and the acceleration index speed smoothness. Each driving test uses the same start and end points so that trials are comparable. During driving, the tester brakes, accelerates, and decelerates using the external Logitech G29 driving simulator and steers the virtual car with its steering wheel; data are collected with a 7invensun (七鑫易维) eye tracker; and the test is completed in a separate, quiet space.
Analysis of the actual test results shows that, in the "stimulus-driven" task scenario, the head-eye coordination index correlates with direction smoothness, while in the "goal-driven" task scenario, the perception capability and perception balance indices directly influence speed smoothness. The effectiveness of the invention was verified in virtual driving experiments with forty-eight testers, divided into two groups of twenty-four, who took part in the "stimulus-driven" and "goal-driven" task experiments respectively. The experimental results show that, from the linear regression analysis of FIG. 2, the head-eye coordination index has a significant correlation with direction smoothness, and, from FIG. 3, perception capability and perception balance have significant correlations with speed smoothness. The invention uses three correlation coefficients: the Pearson linear correlation coefficient (PLCC) for linear correlation, the Spearman rank-order correlation coefficient (SROCC) for nonlinear correlation analysis, and the Kendall rank-order correlation coefficient (KROCC) for rank ordering; the analysis results are shown in Tables 1 and 2. Finally, features are selected according to the significant differences the indices exhibit between the two task scenarios, improving classification performance.
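The three coefficients are standard and available in scipy.stats; the sketch below mirrors the analysis behind Tables 1 and 2 in spirit, assuming the input arrays hold one index value and one performance value per tester.

```python
from scipy import stats

def correlation_report(index_values, performance_values):
    """PLCC, SROCC and KROCC with p-values for one index/performance pair."""
    plcc, p_plcc = stats.pearsonr(index_values, performance_values)      # linear
    srocc, p_srocc = stats.spearmanr(index_values, performance_values)   # monotonic
    krocc, p_krocc = stats.kendalltau(index_values, performance_values)  # rank order
    return {
        "PLCC": (plcc, p_plcc),
        "SROCC": (srocc, p_srocc),
        "KROCC": (krocc, p_krocc),
    }
```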
The head-eye coordination index is defined by formula (1), and the perception capability and perception balance indices by formulas (2) and (3) respectively; formula (3) is

$$\mathrm{PB}_{Y \to X}(t) = \mathrm{TE}_{Y \to X}(t) + \mathrm{AIS}_{X}(t) \tag{3}$$

where Y and X denote the head coordinate and the fixation point coordinate at each moment respectively, $\mathrm{TE}_{Y \to X}(t)$ denotes the head-to-eye transfer entropy, $\mathrm{AIS}_{X}(t)$ denotes the active information storage of the eye, $p(\cdot)$, $p(\cdot,\cdot)$ and $p(\cdot \mid \cdot)$ denote probabilities, joint probabilities and conditional probabilities, and $I(\cdot\,;\cdot)$ denotes mutual information.
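Formulas (1) and (2) appear only as images in the published document and are not reproduced above. For orientation, the following are the standard information-theoretic definitions of transfer entropy and active information storage consistent with the notation of formula (3); the history length k and the exact forms are assumptions rather than the patent's own equations.

```latex
% Standard definitions (assumed). X_{t-1}^{(k)} denotes the k-sample
% history (x_{t-1}, ..., x_{t-k}); sums run over all realizations.
\mathrm{TE}_{Y\to X}(t)
  = I\bigl(X_t ;\, Y_{t-1}^{(k)} \bigm| X_{t-1}^{(k)}\bigr)
  = \sum p\bigl(x_t, x_{t-1}^{(k)}, y_{t-1}^{(k)}\bigr)
    \log \frac{p\bigl(x_t \mid x_{t-1}^{(k)}, y_{t-1}^{(k)}\bigr)}
              {p\bigl(x_t \mid x_{t-1}^{(k)}\bigr)}

\mathrm{AIS}_{X}(t)
  = I\bigl(X_t ;\, X_{t-1}^{(k)}\bigr)
  = \sum p\bigl(x_t, x_{t-1}^{(k)}\bigr)
    \log \frac{p\bigl(x_t \mid x_{t-1}^{(k)}\bigr)}{p(x_t)}
```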
TABLE 1 Correlation coefficients and p-values between head-eye coordination and direction smoothness
TABLE 2 Correlation coefficients and p-values between perception capability, perception balance, and speed smoothness
TABLE 3 ANOVA results under the "stimulus-driven" and "goal-driven" tasks
TABLE 4 Classification accuracy results
The differences in each index between the "goal-driven" and "stimulus-driven" tasks were examined by analysis of variance, as shown in Table 3. The driving states are distinguished by the indices that show significant differences. The optimal model is selected by cross-validation, and classification models are built with decision trees, logistic regression, SVM, KNN, and similar methods, with a best accuracy of 95.8%; the results are shown in Table 4.
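A minimal scikit-learn sketch of the model comparison described above; the fold count and feature scaling are assumptions, since the patent reports only the best accuracy of 95.8%.

```python
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def compare_classifiers(X, y, folds=5):
    """Cross-validated accuracy for the four model families named above.

    X : per-tester rows of the indices showing significant differences
    y : 0 = 'stimulus-driven' task, 1 = 'goal-driven' task
    """
    models = {
        "decision_tree": DecisionTreeClassifier(),
        "logistic_regression": make_pipeline(StandardScaler(), LogisticRegression()),
        "svm": make_pipeline(StandardScaler(), SVC()),
        "knn": make_pipeline(StandardScaler(), KNeighborsClassifier()),
    }
    return {name: float(cross_val_score(model, X, y, cv=folds).mean())
            for name, model in models.items()}
```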
Finally, it should be pointed out that the above examples merely illustrate the computational process of the present invention and do not limit it. Although the present invention has been described in detail with reference to the foregoing examples, those skilled in the art will appreciate that the computational processes described therein can be modified, or some of their parameters replaced by equivalents, without departing from the spirit and scope of the computational method.
The present invention is not limited to the embodiments described above. The foregoing description of the specific embodiments is intended to describe and illustrate the technical solutions of the present invention, and the above specific embodiments are merely illustrative and not restrictive. Those skilled in the art can make various changes in form and details without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (7)
1. A driving state monitoring method based on eye movement technology, characterized by comprising a trial driving stage, a calibration stage, a testing stage, and a feedback stage;
wherein the trial driving stage lets the tester learn and become familiar with the correct operating procedures and the sensitivity of the driving simulator, the tester being under no task restrictions in the trial driving scene;
the calibration stage calibrates the fixation point coordinates of the eyes to ensure unbiased eye movement measurements, the tester's fixation point coordinates and pupil size being captured at each moment by an eye tracker, with pupil size based on the horizontal diameter of the pupil;
in the testing stage, the tester performs driving experiments in two different task scenarios, one being an urban road scene containing abundant visual stimulus content and the other being set on a rural road, where the tester drives along a prescribed route guided by voice prompts, the testing stage requiring the tester to obey real traffic rules;
and the feedback stage provides feedback on the tester's task completion.
2. The driving state monitoring method based on eye movement technology according to claim 1, characterized in that the scene arrangement of the trial driving stage differs from that of the testing stage, ensuring the validity and accuracy of the data.
3. The driving state monitoring method based on eye movement technology according to claim 1 or 2, characterized in that the tester remains in the trial driving stage until normal driving operations, including driving straight and turning, can be completed smoothly in the scene.
4. The driving state monitoring method based on eye movement technology according to claim 1, characterized in that the calibration stage uses a nine-point calibration method: a central point appears at the center of the screen to determine the reference position, nine red calibration dots are generated around the central point, and the tester fixates each calibration dot in turn, clockwise; the tester's fixation point coordinates and pupil size are captured at each moment by the eye tracker, and calibration consists in matching the fixation point coordinates to the coordinates of the calibration dots.
5. The driving state monitoring method based on eye movement technology according to claim 1, characterized in that the tester must be informed of the experimental rules and tasks before the testing stage, including: driving in the simulated environment must obey traffic rules, follow the prescribed route, and keep speed and steering smooth.
6. The driving state monitoring method based on eye movement technology according to claim 1, characterized in that the feedback stage ensures that the tester has completed the driving experiment as required, gathers feedback on the immersion and realism of the driving experiment, and ensures that the collected eye movement data truly reflect the cognitive state.
7. The driving state monitoring method based on eye movement technology according to claim 1, characterized in that the tester's average steering angle and average acceleration in the driving experiment serve as the behavioral performance indices, a larger average steering angle or faster acceleration indicating poorer driving performance, and a smaller average steering angle or slower acceleration indicating better driving performance.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211217794.8A | 2022-09-30 | 2022-09-30 | Driving state monitoring method based on eye movement technology |
Publications (1)

Publication Number | Publication Date |
---|---|
CN115429275A | 2022-12-06 |
Family ID: 84250492
Family Applications (1)

Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211217794.8A (pending) | Driving state monitoring method based on eye movement technology | 2022-09-30 | 2022-09-30 |
Country Status (1)

Country | Link |
---|---|
CN | CN115429275A |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20220073079A1 (en) * | 2018-12-12 | 2022-03-10 | Gazelock AB | Alcolock device and system |
CN110236575A (en) * | 2019-06-13 | 2019-09-17 | 淮阴工学院 | The time of driver's reaction calculation method that eye tracker is combined with driving simulator |
WO2021124140A1 (en) * | 2019-12-17 | 2021-06-24 | Indian Institute Of Science | System and method for monitoring cognitive load of a driver of a vehicle |
CN112233502A (en) * | 2020-10-15 | 2021-01-15 | 天津大学 | Driving emergency test system and method based on virtual reality |
JP2022138812A (en) * | 2021-03-11 | 2022-09-26 | ダイハツ工業株式会社 | Vehicle driver state determination device |
CN113946212A (en) * | 2021-10-16 | 2022-01-18 | 天津大学 | Steady driving test system based on virtual reality |
CN113992907A (en) * | 2021-10-29 | 2022-01-28 | 南昌虚拟现实研究院股份有限公司 | Eyeball parameter checking method, system, computer and readable storage medium |
Non-Patent Citations (1)
- Gou Chao, Zhuo Ying, Wang Kang, et al., "眼动跟踪研究进展与展望" [Advances and prospects in eye tracking research], Acta Automatica Sinica (自动化学报), 26 October 2021, pp. 1173-1192 *
Legal Events
Code | Title |
---|---|
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |